\section{\textit{Introduction}}
High-precision cosmological observations of Type Ia supernovae (SN
Ia), WMAP, etc.
\cite{1,1a,2,2a,3,3a,3b,3c,3d,3e,3f,4,4a,4b,4c,4d,4e,5,6} suggest
that the universe may again be accelerating at late times, after the
early phase. Many theories have been formulated to explain this late
time acceleration. These theories can be divided into two main
categories fulfilling the criteria of a homogeneous and isotropic
universe. The first kind of theory (better known as the `standard
model' or $\Lambda$CDM model) assumes a fluid of negative pressure
named `dark energy' (DE). The name arises from the fact that the
exact origin of this energy is still unexplained in any theoretical
setup. Observations indicate that nearly 70$\%$ of the universe may
be occupied by this kind of energy.
Dust matter (cold dark matter (CDM) and baryonic matter) comprises
the remaining 30$\%$, and there is negligible radiation. Cosmologists
are inclined to suspect dark energy as the primary cause of the
late-time acceleration of the universe. The theory of dark energy has
remained one of the foremost areas of research in cosmology since the
discovery of the acceleration of the universe at late times
\cite{7,7a,7b,7c,8,8a}. One can clearly see from the second field
equation that the expansion is accelerated if the equation of state
(EoS) parameter satisfies $ p/{\rho} {\equiv} {\omega} < -1/3$.
Accordingly, an a priori choice for dark energy is a time-independent
positive `cosmological constant', which corresponds to the EoS
$\omega=-1$. This gives a universe which expands forever at an
exponential rate.
Although the cosmological constant has some severe shortcomings, such
as the fine-tuning problem (see \cite{7} for a review), some recent
data \cite{m8a-A1n,m8a-A1nn} in some sense agree with this choice.
However, observations which constrain $\omega$ close to the value for
the cosmological constant do not indicate whether $\omega$ changes
with time or not. So, theoretically, one could consider $\omega$ as a
function of cosmic time, as in inflationary cosmology (see
\cite{A1,B2,C3,D4,O15} for reviews). Scalar fields arise in particle
physics quite naturally. To date, a large variety of scalar field
inflationary models have been discussed, and this remains an active
area of research (see \cite{7}). A scalar field which interacts with
gravity only minimally is called `quintessence'. Quintessence fields
are a natural first choice because they can lessen the fine-tuning
problem of the cosmological constant to some extent. Needless to say,
quintessence also has some common drawbacks. Observations indicate
that at the current epoch the energy density of the scalar field and
the matter energy density are comparable, even though they evolve
from different initial conditions. This discrepancy (known as the
`coincidence problem') arises for any scalar field dark energy, and
quintessence suffers from it too \cite{mO-Wn1}. Of course, there is a
resolution of this problem, namely the `tracking solutions'
\cite{mO-Wn2}. In the tracking regime, the field value should be of
the order of the Planck mass. A general setback, however, is that one
always needs to search for suitable potentials (see \cite{mO-Wn3} for
a related discussion). The EoS parameter $\omega$ of quintessence
satisfies $-1{\leq}\omega{\leq}1$. Some current data indicate that
$\omega$ lies in a small neighbourhood of $\omega=-1$. Hence it is
technically feasible to relax $\omega$ to cross the line $\omega<-1$
\cite{mO-Wn4}. There exists another scalar field, with a negative
kinetic energy term, which can describe late acceleration. It is
named the phantom field and has EoS $\omega<-1$ (see \cite{7,mO-Wn5}
for details). The phantom field energy density increases with time.
As a result, the Hubble factor and the curvature diverge in finite
time, causing a `Big Rip' singularity (see
\cite{mO-Wn6,mO-Wn7,mO-Wn8}), although some specific choices of
potential can avoid this flaw. Present data perhaps favour a dark
energy model whose EoS evolves from $\omega>-1$ in the recent past to
$\omega<-1$ at the present time \cite{mO-Wn9}. The line $\omega=-1$
is known as the `phantom divide'. Evidently, neither quintessence nor
the phantom field alone can cross the phantom divide. A first-hand
choice in this direction is to combine quintessence and the phantom
field, known in the literature as the `quintom' (i.e., a hybrid of
quintessence and phantom) \cite{mO-Wn9}. This can serve the purpose,
but it still has some shortcomings. A single canonical complex field
is quite natural and useful (as in the `spintessence' model
\cite{mO-Wn10,mO-Wn11}). However, canonical complex scalar fields
suffer a serious setback, namely the formation of `Q-balls' (a kind
of stable non-topological soliton) \cite{mO-Wn10,mO-Wn11}.\\
To overcome the various difficulties of the above-mentioned models,
Wei et al. \cite{W23,W23n} introduced a non-canonical complex scalar
field which plays the role of a quintom \cite{W24,W24n,W24nn}. They
named this unique model `hessence'. Unlike other canonical complex
scalar fields, hessence does not suffer from the formation of
Q-balls. The second kind of theory modifies classical general
relativity (GR), either by higher-degree curvature terms (namely,
$f(R)$ theory) \cite{1n,2n,3n} or by replacing the symmetric
Levi-Civita connection of GR by the antisymmetric Weitzenb\"{o}ck
connection. In other words, torsion is taken to mediate the
gravitational interaction instead of curvature. The resulting theory
\cite{4n,5n,6n} (called `teleparallel' gravity) was considered
initially by Einstein to unify gravity with electromagnetism on a
non-Riemannian Weitzenb\"{o}ck manifold. Later it was further
modified to obtain $f(T)$ gravity, in the same vein as the $f(R)$
gravity theory \cite{7n}. Although the EoS of the `cosmological
constant' ($\Lambda$CDM model) is well within the bounds of various
datasets, no observation has so far directly detected DE or DM, and
the search for possible alternatives is under way \cite{mCh7n}. In
this regard alternative gravity theories (like $f(T)$) are really
worth discussing. The work \cite{mC7n} gives a nice account of the
matter stability of $f(T)$ theory in the weak-field limit, in
contrast to $f(R)$ theory, and shows that any choice of $f(T)$ can be
used. Other theoretical advantages of our choice of $f(T)$ are
discussed in the next section.\\
In this work we study hessence in $f(T)$ gravity. Since the system is
complicated, we have preferred a dynamical systems analysis. As
mentioned above, the hessence field and $f(T)$ theory are both
promising candidates for explaining the present accelerated phase, so
we merge them to see whether together they can describe the present
acceleration more accurately in light of current datasets. A mixed
dynamical system with tachyon, quintessence and phantom fields in
$f(T)$ theory was considered in \cite{mC8}. Dynamical systems with a
quintom also exist in the literature (see \cite{mC8n,mC8nn} for
reviews). A dynamical systems analysis of an ordinary scalar field
model in $f(T)$ gravity has been discussed in ref. \cite{Ch}. But, to
the best of our knowledge, hessence in $f(T)$ gravity has not been
considered before.\\
We arrange the paper in the following manner. A short sketch of
$f(T)$ theory is presented in Section 2. The hessence field in $f(T)$
gravity is introduced and the corresponding dynamical system is
formed in Section 3. Section 4 is devoted to the analysis of the
dynamical system and the stability of its fixed points for the
hessence dark energy model. The significance of our results is
discussed in Section 5 in light of recent data. We conclude the paper
with relevant remarks in Section 6. We use normalized units
$8\pi G={\hbar}=c=1$ throughout the paper.\\
\section{\textit{A Brief Outline of $f(T)$ gravity: Some Basic Equations}}
In teleparallelism \cite{7n,8n,9n}, $e^\mu_A$ denote the orthonormal
tetrad components $(A=0,1,2,3)$. At each point $x^\mu$ the index $A$
labels vectors of the tangent space of the manifold, so each
$e^\mu_A$ represents a tangent vector to the manifold (i.e., the
so-called vierbein). The inverse of the vierbein is obtained from the
relation ${e^\mu_A}{e^A_\nu}=\delta^\mu_\nu$. The metric tensor is
given by $g_{\mu\nu}=\eta_{AB}{e^A_\mu}{e^B_\nu}$
($\mu,\nu=0,1,2,3$ being coordinate indices on the manifold), where
$\eta_{AB}=\mathrm{diag}(1,-1,-1,-1)$. Recently, to explain the
acceleration, the teleparallel torsion $T$ in the Lagrangian density
has been generalized from linear torsion to some differentiable
function of $T$ \cite{10n,11n} (i.e., $f(T)$), in the same vein as
the $f(R)$ theory mentioned earlier. In this new setup of gravity the
field equations are of second order, unlike those of $f(R)$ gravity
(which are of fourth order). In $f(T)$ theory of gravitation, the
corresponding action reads
\begin{equation}\label{2.1}
\mathcal{A}=\frac{1}{2{\kappa^2}}\int{d^4x}[\sqrt{-g}(T+f(T))+\mathcal{L}_m]
\end{equation}
where $T$ is the torsion scalar, $f(T)$ is some differentiable
function of the torsion $T$, $\mathcal{L}_m$ is the matter
Lagrangian, $\sqrt{-g}=\det(e^A_\mu)$ and $\kappa^2=8\pi G$. The
torsion scalar $T$ mentioned above is defined as
\begin{equation}\label{2.2}
T={S_\rho}^{\mu\nu}{T^\rho}_{\mu\nu}
\end{equation}
where the components of the torsion tensor ${T^\rho}_{\mu\nu}$ in
(\ref{2.2}) are given by
\begin{equation}\label{2.3}
{T^\rho}_{\mu\nu}={{\Gamma^W}^\rho}_{\nu\mu}-{{\Gamma^W}^\rho}_{\mu\nu}
={e^\rho_A}({\partial_\mu}{e^A_\nu}-{\partial_\nu}{e^A_\mu})
\end{equation}
where,
${{\Gamma^W}^\lambda}_{\nu\mu}={e^\lambda_A}{\partial_\mu}{e^A_\nu}$
is the Weitzenb\"{o}ck connection. The superpotential
${S_\rho}^{\mu\nu}$ in (\ref{2.2}) is defined as
\begin{equation}\label{2.4}
{S_\rho}^{\mu\nu}=\frac{1}{2}({K^{\mu\nu}}_\rho+{\delta^{\mu}_{\rho}}
{T^{\theta\nu}}_ \theta-{\delta^{\nu}_ {\rho}}{T^{\theta\mu}}_\theta)
\end{equation}
\begin{equation}\label{2.5}
{K^{\mu\nu}}_\rho=-\frac{1}{2}({T^{\mu\nu}}_\rho-{T^{\nu\mu}}_\rho-{T_\rho}^{\mu\nu})
\end{equation}
where ${K^{\mu\nu}}_\rho$ is the contortion tensor, which measures
the difference between the symmetric Levi-Civita connection and the
antisymmetric Weitzenb\"{o}ck connection. It is easy to check that
the equations of motion reduce to those of Einstein gravity if
$f(T)=0$; this is the correspondence between the teleparallel and
Einsteinian theories \cite{6n}. It is noticed that $f(T)$ theory can
address the early acceleration and the late evolution of the universe
depending on the choice of $f(T)$. For example, power-law or
exponential forms cannot cross the phantom divide \cite{12n}, but
some other choices of $f(T)$ \cite{13n} can. The reconstruction of
$f(T)$ models \cite{14n,15n} and various cosmological
\cite{16n,17n} and thermodynamical \cite{18n} analyses have been
reported. It is interesting to note that the linear $f(T)$ model
(i.e., when $\frac{df}{dT}=$ constant) behaves as a cosmological
constant. A preferable choice of $f(T)$ is one that reduces to
general relativity (GR) at large redshift, in tune with primordial
nucleosynthesis and cosmic microwave background data at early times
(i.e., $f/T{\rightarrow}0$ for $a\ll 1$), and that gives a de
Sitter-like state in the future. One such choice is given in power
form as in \cite{19n}, namely
\begin{equation}\label{2.6}
f(T)=\beta {(-T)^n}
\end{equation}
where $\beta$ is a constant. In particular, $n=1/2$ gives the same
expanding model as the theory referred to in \cite{19n,20n}. Current
data require the bound $n\ll 1$ for $f(T)$ to be permitted as an
alternative gravity theory. The effective DE equation of state then
varies from $\omega=-1+n$ in the past to $\omega=-1$ in the future.\\
Throughout this work we assume the flat, homogeneous, isotropic
Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) metric,
\begin{equation*}
ds^2=dt^2-{a^2}(t){\sum}_{i=1}^3{(dx^i)^2}
\end{equation*}
which arises from the vierbein
$e^A_\mu=\mathrm{diag}(1,a(t),a(t),a(t))$. Here $a(t)$ is the scale
factor as a function of cosmic time $t$. Using (\ref{2.3}),
(\ref{2.4}) and (\ref{2.5}) one gets,
\begin{equation*}
T=S^{\rho\mu\nu}T_{\rho\mu\nu}=-6{H^2}
\end{equation*}
where $H={\dot a(t)}/a(t)$ is the Hubble factor (from here on, an
`overdot' denotes the derivative operator $\frac{d}{dt}$).
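Incidentally, the contraction $T=-6H^2$ can be checked symbolically
from the definitions (\ref{2.2})-(\ref{2.5}). The following is a
minimal sketch of our own (assuming SymPy is available; it is an
illustration, not part of the original derivation):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a', positive=True)(t)

e = sp.diag(1, a, a, a)        # vierbein e^A_mu; rows A, columns mu
einv = e.inv()                 # e^mu_A with e^mu_A e^A_nu = delta
g = sp.diag(1, -a**2, -a**2, -a**2)
ginv = g.inv()

d = lambda f, mu: sp.diff(f, t) if mu == 0 else 0  # only d/dt survives
R = range(4)

# Torsion tensor T^rho_{mu nu}, Eq. (2.3)
Tor = [[[sum(einv[r, A]*(d(e[A, n], m) - d(e[A, m], n)) for A in R)
         for n in R] for m in R] for r in R]
Tlow = [[[sum(g[l, r]*Tor[r][m][n] for r in R)
          for n in R] for m in R] for l in R]         # T_{lambda mu nu}
Tuu = [[[sum(ginv[m, p]*ginv[n, q]*Tlow[p][q][r] for p in R for q in R)
         for r in R] for n in R] for m in R]          # T^{mu nu}_rho
Tdu = [[[sum(ginv[m, p]*ginv[n, q]*Tlow[r][p][q] for p in R for q in R)
         for n in R] for m in R] for r in R]          # T_rho^{mu nu}
V = [sum(Tuu[th][n][th] for th in R) for n in R]      # T^{theta nu}_theta

Tscal = 0
for r in R:
    for m in R:
        for n in R:
            K = -(Tuu[m][n][r] - Tuu[n][m][r] - Tdu[r][m][n])/2  # Eq. (2.5)
            S = (K + (m == r)*V[n] - (n == r)*V[m])/2            # Eq. (2.4)
            Tscal += S*Tor[r][m][n]                              # Eq. (2.2)

H = sp.diff(a, t)/a
print(sp.simplify(Tscal + 6*H**2))   # prints 0, i.e. T = -6 H^2
\end{verbatim}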
\section{\textit{Hessence Dark Energy in $f(T)$ Gravity Theory: Formation of Dynamical Equations }}
Here, we consider a non-canonical complex scalar field
\begin{equation}\label{3.1}
\Phi={\phi_1}+ \textit{i}{\phi_2}
\end{equation}
where $\textit{i}=\sqrt{-1}$, with the Lagrangian density
\begin{equation}\label{3.2}
\mathcal{L}_{DE}=\frac{1}{4}[({\partial_\mu}\Phi)^2+({\partial_\mu}\Phi^\ast)^2]
-V(\Phi,\Phi^\ast)
\end{equation}
Clearly this Lagrangian density is identical to the Lagrangian of two
real scalar fields, which looks like
\begin{equation}\label{3.3}
\mathcal{L}_{DE}=\frac{1}{2}({\partial_\mu}\phi_1)^2-\frac{1}{2}({\partial_\mu}\phi_2)^2
-V(\phi_1,\phi_2)
\end{equation}
where $\phi_1$ and $\phi_2$ are quintessence and phantom fields
respectively. It is noteworthy that the Lagrangian in (\ref{3.2})
involves a single field, instead of two independent fields as in
(\ref{3.3}) of reference \cite{mO-Wn9}. It also differs from a
canonical complex scalar field (like `spintessence' in
\cite{mO-Wn10,mO-Wn11}), which has the Lagrangian
\begin{equation}\label{3.4}
\mathcal{L}_{DE}=\frac{1}{2}({\partial_\mu}\Psi^*)({\partial_\mu}\Psi)-V(|\Psi|),
\end{equation}
where $|\Psi|$ denotes the absolute value of $\Psi$, i.e.,
$|\Psi|^2={\Psi^*}{\Psi}$. However, unlike canonical complex scalar
fields, hessence does not suffer from the formation of `Q-balls' (a
kind of stable non-topological soliton). Following Wei et al. as in
\cite{W23,W23n}, the energy density $\rho_h$ and pressure $p_h$ of
the hessence field can be written as
\begin{equation}\label{3.5}
\rho_h=\frac{1}{2}({\dot{\phi}^2}-\frac{Q^2}{{a^6}{\phi^2}})+V(\phi)
\end{equation}
\begin{equation}\label{3.6}
p_h=\frac{1}{2}({\dot{\phi}^2}-\frac{Q^2}{{a^6}{\phi^2}})-V(\phi)
\end{equation}
where $Q$ is a constant and denotes the total induced charge within
the physical volume (see \cite{W23,W23n}). In this paper we will
consider the interaction of the hessence field with matter. The
matter is a perfect fluid with the barotropic equation of state
\begin{equation}\label{3.7}
p_m={w_m}{\rho_m}{\equiv}(\gamma-1){\rho_m}
\end{equation}
where $\gamma$ is the barotropic index satisfying $0<\gamma{\leq}2$,
and $p_m$ and $\rho_m$ respectively denote the pressure and energy
density of matter. In particular, $\gamma=1$ and $\gamma=4/3$
indicate dust matter and radiation respectively. We suppose that
hessence and the background fluid interact through a term $C$, which
describes the energy transfer between dark energy and dark matter. A
positive $C$ is needed to address the coincidence problem, since a
positive $C$ indicates energy transfer from dark energy to dark
matter; the second law of thermodynamics is also respected with this
choice. An interesting work addressing this problem is reviewed in
\cite{m20n21}, where a rigorous dynamical analysis is carried out; a
similar approach exists for the quintom model too. Various choices of
the interaction term $C$ have been used in the literature. Here, in
view of the dimensional requirements of the energy conservation
equation and to keep the dynamical system simple, we have taken
$C=\delta \dot{\phi} \rho_m$, where $\delta$ is a real constant of
small magnitude, which may be chosen positive or negative at will
such that $C$ remains positive ($\dot{\phi}$ may be positive or
negative according to the evolution of the hessence field $\phi$). So
we have
\begin{equation}\label{3.8}
\dot{\rho}_h+3H(\rho_h+p_h)=-C,
\end{equation}
\begin{equation}\label{3.9}
\dot{\rho}_m+3H(\rho_m+p_m)=C
\end{equation}
preserving the total energy conservation equation
\begin{equation*}
\dot{\rho}_{total}+3H(\rho_{total}+p_{total})=0
\end{equation*}
The modified field equations in $f(T)$ gravity are,
\begin{equation}\label{3.10}
H^2=\frac{1}{(2f_T+1)}\left[\frac{1}{3}(\rho_h+\rho_m)-\frac{f}{6}\right],
\end{equation}
\begin{equation}\label{3.11}
\dot{H}=-\frac{1}{2}\left[\frac{\rho_h+p_h+\rho_m}{1+{f_T}+2T{f_{TT}}}\right]
\end{equation}
In view of equations (\ref{3.5}) and (\ref{3.8}) we have,
\begin{equation}\label{3.12}
\ddot{\phi}+3H\dot{\phi}+\frac{Q^2}{{a^6}{\phi^3}}+V^{\prime}=-{\delta}{\rho_m}
\end{equation}
Here, a prime (`$\prime$') denotes `$\frac{d}{d\phi}$'. Similarly,
equations (\ref{3.7}) and (\ref{3.9}) give,
\begin{equation}\label{3.13}
\dot{\rho}_m+3H{\gamma}{\rho_m}={\delta}{\dot{\phi}}{\rho_m}
\end{equation}
Now, we introduce five auxiliary variables,
\begin{equation}\label{3.14}
x=\frac{\dot{\phi}}{\sqrt{6}H},\quad y=\frac{\sqrt{V}}{\sqrt{3}H},
\quad u=\frac{\sqrt{6}}{\phi},\quad
v=\frac{Q}{\sqrt{6}H{a^3}{\phi}},\quad
\Omega_m=\frac{\rho_m}{3 {H^2}}
\end{equation}
We form the following autonomous system after some manipulation,
\begin{equation}\label{3.15}
\frac{dx}{dN}=-3x-u{v^2}-{\lambda}{\sqrt{\frac{3}{2}}}{y^2}-{\delta}{\sqrt{\frac{3}{2}}}
{\Omega_m}+\frac{3x}{2}({2x^2}-{2v^2}+\Omega_m)
\end{equation}
\begin{equation}\label{3.16}
\frac{dy}{dN}={\lambda}{\sqrt{\frac{3}{2}}}xy+\frac{3}{2}y({2x^2}-{2v^2}+\Omega_m)
\end{equation}
\begin{equation}\label{3.17}
\frac{du}{dN}=-x{u^2}
\end{equation}
\begin{equation}\label{3.18}
\frac{dv}{dN}=-xuv-3v+\frac{3}{2}v({2x^2}-{2v^2}+\Omega_m)
\end{equation}
\begin{equation}\label{3.19}
\frac{d\Omega_m}{dN}=\Omega_m(-3\gamma-\delta\sqrt{6}x+3({2x^2}-{2v^2}+\Omega_m))
\end{equation}
In the above calculations, $N={\int}\frac{\dot{a}}{a}dt=\ln a$
denotes the `e-folding' number, which we have chosen as the
independent variable. We have taken $f(T)=\beta \sqrt{-T}$ in the
above derivation of the autonomous system. Also, we have chosen an
exponential form of the potential, i.e., $\frac{V^\prime}{V}=\lambda$
(where $\lambda$ is a real constant), for simplicity of the
autonomous system. This kind of choice is standard in the literature
for a coupled real scalar field \cite{m20n21-21} and for a complex
field (like hessence in loop quantum cosmology) \cite{mC8nn}. The
work \cite{Ch}, dealing with quintessence and matter in $f(T)$
theory, also uses an exponential potential. But, to our knowledge,
hessence with matter in $f(T)$ theory has not been considered before.
In view of (\ref{3.14}), the Friedmann equation (\ref{3.10}) reduces
to
\begin{equation}\label{3.20}
{x^2}+{y^2}-{v^2}+{\Omega_m}=1
\end{equation}
The Raychaudhuri equation becomes,
\begin{equation}\label{3.21}
-\frac{\dot{H}}{H^2}=\frac{3}{2}(2{x^2}-2{v^2}+{\Omega_m})
\end{equation}
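Incidentally, the GR-like form of (\ref{3.21}) can be traced to a
special property of the choice $f(T)=\beta\sqrt{-T}$; a short check
(our own, by direct differentiation) gives
\begin{equation*}
f_T=-\frac{\beta}{2\sqrt{-T}},\qquad f_{TT}=-\frac{\beta}{4(-T)^{3/2}},\qquad
1+f_T+2Tf_{TT}=1-\frac{\beta}{2\sqrt{-T}}+\frac{\beta}{2\sqrt{-T}}=1,
\end{equation*}
so the denominator of (\ref{3.11}) reduces to unity and the
Raychaudhuri equation takes its GR form.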
The density parameters of hessence dark energy ($\Omega_h$) and
background matter ($\Omega_m$) are obtained in the following forms:
\begin{equation}\label{3.22}
\Omega_h=\frac{\rho_h}{3{H^2}}={x^2}+{y^2}-{v^2}, {\quad} \Omega_m=\frac{\rho_m}{3{H^2}}=1-({x^2}+{y^2}-{v^2})
\end{equation}
The EoS of hessence dark energy, $\omega_h$, and the total EoS of the
system, $\omega_{total}$, are calculated in the forms:
\begin{equation}\label{3.23}
\omega_h=\frac{p_h}{\rho_h}=\frac{{x^2}-{y^2}-{v^2}}{{x^2}+{y^2}-{v^2}}, {\quad}
\omega_{total}=\frac{p_h+p_m}{\rho_h+\rho_m}={x^2}-{y^2}-{v^2}+(\gamma-1){\Omega_m}
\end{equation}
Also, the deceleration parameter $q$ can be expressed as
\begin{equation}\label{3.24}
q=-1-\frac{\dot{H}}{H^2}=-1+\frac{3}{2}(2{x^2}-2{v^2}+{\Omega_m})
\end{equation}
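For concreteness, the right-hand side of the autonomous system
(\ref{3.15})-(\ref{3.19}) and the derived quantities
(\ref{3.22})-(\ref{3.24}) can be encoded directly. The following
Python sketch is our own illustration (function names and conventions
are illustrative and not part of the published analysis):
\begin{verbatim}
import numpy as np

def rhs(N, s, gamma, delta, lam):
    """Right-hand side of Eqs. (3.15)-(3.19); s = (x, y, u, v, Om)."""
    x, y, u, v, Om = s
    E = 2*x**2 - 2*v**2 + Om   # equals -2*Hdot/(3H^2) by Eq. (3.21)
    dx = (-3*x - u*v**2 - lam*np.sqrt(1.5)*y**2
          - delta*np.sqrt(1.5)*Om + 1.5*x*E)
    dy = lam*np.sqrt(1.5)*x*y + 1.5*y*E
    du = -x*u**2
    dv = -x*u*v - 3*v + 1.5*v*E
    dOm = Om*(-3*gamma - delta*np.sqrt(6)*x + 3*E)
    return np.array([dx, dy, du, dv, dOm])

def observables(s, gamma):
    """Density parameters, EoS and deceleration, Eqs. (3.22)-(3.24)."""
    x, y, u, v, Om = s
    Oh = x**2 + y**2 - v**2
    wh = (x**2 - y**2 - v**2)/Oh
    wtot = x**2 - y**2 - v**2 + (gamma - 1)*Om
    q = -1 + 1.5*(2*x**2 - 2*v**2 + Om)
    return Oh, wh, wtot, q
\end{verbatim}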
\section{\textit{Fixed Points and Stability Analysis of the Autonomous System}}
\subsection{\textit{Fixed Points with Exponential Potential}}
As stated above, we choose an exponential form of the potential,
i.e., $\frac{V^\prime}{V}=\lambda$ (where $\lambda$ is a real
constant). The fixed points $P_i$, with coordinates
(${x_c}$,${y_c}$,${u_c}$,${v_c}$,${{\Omega_m}_c}$), are given in
Table $\bf{1}$ together with the relevant physical parameters and
existence condition(s).
\begin{table}[h]
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
$P_i$ & ${x_c}$ , ${y_c}$ , ${u_c}$ , ${v_c}$ , ${{\Omega_m}_c}$ & ${\Omega_m}$ & ${\omega_h}$ & ${\omega_{total}}$ & ${\Omega_h}$ & q & Existence \\
\hline
$P_1$ & 1 , 0 , 0 , 0 , 0 & 0 & 1 & 1 & 1 & 2 & always \\
\hline
$P_2$ & -1 , 0 , 0 , 0 , 0 & 0 & 1 & 1 & 1 & 2 & always \\
\hline
$P_{3\pm}$ & ${\pm}1$ or $\frac{6-3\gamma}{\delta\sqrt{6}}$ , 0 , 0 , 0 , 0 & 0 & 1 & 1 & 1 & 2 & $\frac{6-3\gamma}{\delta\sqrt{6}}={\pm}1$\\
\hline
$P_4$ & $-\sqrt{\frac{2}{3}}\delta$ , 0 , 0 , 0 , $\frac{6-3\gamma+2\delta^2}{3}$ & $\frac{6-3\gamma+2\delta^2}{3}$ & 1 & $\gamma(1-\frac{2{\delta^2}}{3})$ & $\frac{2{\delta^2}}{3}$ & $\frac{1}{2}+{\delta^2}$ & $\frac{2\delta^2}{3}+\frac{6-3\gamma+2\delta^2}{3}=1$\\
& & & & $+ \frac{4{\delta^2}}{3}-1$ & & & \\
\hline
$P_5$ & $x=\frac{6-3\gamma}{\delta\sqrt{6}}$ , 0 , 0 , $\sqrt{{x^2}-1}$ , 0 & 0 & 1 & 1 & 1& 2 & $6{\delta^2}{\leq}{(6-3\gamma)^2}$\\
\hline
$P_6 $& $x=\frac{6-3\gamma}{\delta\sqrt{6}}$ , 0 , 0 , $-\sqrt{{x^2}-1}$ , 0 & 0 & 1 & 1 & 1& 2 & $6{\delta^2}{\leq}{(6-3\gamma)^2}$\\
\hline
$P_7$ & 0 , 1 , any value , 0 , 0 & 0 & -1 & -1 & 1 & -1 & $\gamma=0$\\
\hline
$P_8$ & 0 , 1 , any value , 0 , 0 & 0 & -1 & -1 & 1 & -1 & $\gamma=0$\\
\hline
$P_9$ & $-\frac{\lambda}{\sqrt{6}}$ , $\sqrt{1-\frac{\lambda^2}{6}}$ , 0 , 0 , 0 & 0 & $-1 + \frac{\lambda^2}{3}$ & $-1 + \frac{\lambda^2}{3}$ & 1 & $-1 + \frac{\lambda^2}{2}$ &${\lambda^2}{\leq}6$\\
\hline
$P_{10}$ & $-\frac{\lambda}{\sqrt{6}}$ , $-\sqrt{1-\frac{\lambda^2}{6}}$ , 0 , 0 , 0 & 0 & $-1 + \frac{\lambda^2}{3}$ & $-1 + \frac{\lambda^2}{3}$ & 1 & $-1 + \frac{\lambda^2}{2}$ &${\lambda^2}{\leq}6$\\
\hline
$P_{11}$ & A , $\sqrt{1-{A^2}-{B^2}}$ , 0 , 0 , B & B & $\frac{-1+2{A^2}+{B^2}}{1-B}$ & $-1+{A^2}+{B^2}$ & 1-B & $-1+\frac{3}{2}B$ &$\delta+\lambda{\neq}0 $\\
& & & & $+(\gamma-2)B$ & & $+3{B^2}$ &\\
\hline
$P_{12}$ & A , $-\sqrt{1-{A^2}-{B^2}}$ , 0 , 0 , B & B & $\frac{-1+2{A^2}+{B^2}}{1-B}$ & $-1+{A^2}+{B^2}$ & 1-B & $-1+\frac{3}{2}B$ & $\delta+\lambda{\neq}0 $\\
& & & & $+(\gamma-2)B$ & & $+3{B^2}$ & \\
\hline
$P_{13}$ & $ -\frac{\sqrt{6}}{\lambda}$ , 0 , 0 , $\sqrt{\frac{6}{\lambda^2}-1}$ , 0 & 0 & 1 & 1 & 1 & 2 & ${\lambda^2}{\leq}6$\\
\hline
$P_{14}$ & $ -\frac{\sqrt{6}}{\lambda}$ , 0 , 0 , $-\sqrt{\frac{6}{\lambda^2}-1}$ , 0 & 0 & 1 & 1 & 1 & 2 & ${\lambda^2}{\leq}6$\\
\hline
$P_{15}$ & $x=-\frac{6}{\lambda}=\frac{6-3\gamma}{\delta \sqrt{6}}$ , 0 , 0 , $\sqrt{x^2-1}$ , 0 & 0 & 1 & 1 & 1 & 2 &$-\frac{6}{\lambda}=\frac{6-3\gamma}{\delta\sqrt{6}}$ \\
\hline
$P_{16}$ & $x=-\frac{6}{\lambda}=\frac{6-3\gamma}{\delta \sqrt{6}}$ , 0 , 0 , $-\sqrt{x^2-1}$ , 0 & 0 & 1 & 1 & 1 & 2 & $-\frac{6}{\lambda}=\frac{6-3\gamma}{\delta\sqrt{6}}$ \\
\hline
\end{tabular}
\caption{Fixed points of the autonomous system of equations
(\ref{3.15})-(\ref{3.19}) and various physical parameters with
existence conditions. Here
$A=-\sqrt{\frac{3}{2}}~\frac{\gamma}{\delta+\lambda}$ and
$B=\frac{6+\lambda \sqrt{6}A-6{A^2}}{9}$.}
\end{table}
From Table $\bf{1}$ we note the following.\\
${\bullet}$ Case $\bf{1}$: Fixed points $P_1,P_2=({\pm1} , 0 , 0 , 0 , 0)$ always exist, with physical parameters ${\Omega_m}=0$, ${\omega_h}=1$, ${\omega_{total}}=1$, ${\Omega_h}=1$, $q=2$.\\
${\bullet}$ Case $\bf{2}$: Fixed points $P_{3\pm}=({\pm1}$ or $\frac{6-3\gamma}{\delta\sqrt{6}}$ , 0 , 0 , 0 , 0 ) exist under the condition $\frac{6-3\gamma}{\delta\sqrt{6}}={\pm1}$, with physical parameters ${\Omega_m}=0$, ${\omega_h}=1$, ${\omega_{total}}=1$, ${\Omega_h}=1$, $q=2$, i.e., the same as $P_1$ and $P_2$.\\
${\bullet}$ Case $\bf{3}$: Fixed point $P_4=(-\sqrt{\frac{2}{3}}\delta$ , 0 , 0 , 0 , $\frac{6-3\gamma+2\delta^2}{3}$) exists under the condition $\frac{2\delta^2}{3}+\frac{6-3\gamma+2\delta^2}{3}=1$, with physical parameters ${\Omega_m}=\frac{6-3\gamma+2\delta^2}{3}$, ${\omega_h}=1$, ${\omega_{total}}=-1 + \gamma(1-\frac{2{\delta^2}}{3})+ \frac{4{\delta^2}}{3}$, ${\Omega_h}=\frac{2{\delta^2}}{3}$, $q=\frac{1}{2}+{\delta^2}$.\\
${\bullet}$ Case $\bf{4}$: Fixed points $P_5,P_6=(x=\frac{6-3\gamma}{\delta\sqrt{6}}$ , 0 , 0 , ${\pm}\sqrt{{x^2}-1}$ , 0) exist under the condition $6{\delta^2}{\leq}{(6-3\gamma)^2}$, with physical parameters ${\Omega_m}=0$, ${\omega_h}=1$, ${\omega_{total}}=1$, ${\Omega_h}=1$, $q=2$.\\
${\bullet}$ Case $\bf{5}$: Fixed points $P_7,P_8=(0 , 1 , $any value$ , 0 , 0)$ exist under the condition $\gamma=0$, with physical parameters ${\Omega_m}=0$, ${\omega_h}=-1$, ${\omega_{total}}=-1$, ${\Omega_h}=1$, $q=-1$.\\
${\bullet}$ Case $\bf{6}$: Fixed points $P_9,P_{10}=(-\frac{\lambda}{\sqrt{6}}$ , ${\pm}\sqrt{1-\frac{\lambda^2}{6}}$ , 0 , 0 , 0) exist under the condition ${\lambda^2}{\leq}6$, with physical parameters ${\Omega_m}=0$, ${\omega_h}=-1+\frac{\lambda^2}{3}$, ${\omega_{total}}=-1+\frac{\lambda^2}{3}$, ${\Omega_h}=1$, $q=-1+\frac{\lambda^2}{2}$.\\
${\bullet}$ Case $\bf{7}$: Fixed points $P_{11},P_{12}=(A , {\pm}\sqrt{1-{A^2}-{B^2}}$ , 0 , 0 , B) exist under the condition $\delta+\lambda{\neq}0$, with physical parameters ${\Omega_m}=B$, ${\omega_h}=\frac{-1+2{A^2}+{B^2}}{1-B}$, ${\omega_{total}}=-1+{A^2}+{B^2}+(\gamma-2)B$, ${\Omega_h}=1-B$, $q=-1+\frac{3}{2}B+3{B^2}$.\\
${\bullet}$ Case $\bf{8}$: Fixed points $P_{13},P_{14}=(-\frac{\sqrt{6}}{\lambda}$ , 0 , 0 ,${\pm}\sqrt{\frac{6}{\lambda^2}-1}$ , 0 ) exist under the condition ${\lambda^2}{\leq}6$, with physical parameters ${\Omega_m}=0$, ${\omega_h}=1$, ${\omega_{total}}=1$, ${\Omega_h}=1$, $q=2$.\\
${\bullet}$ Case $\bf{9}$: Fixed points $P_{15},P_{16}=(x=-\frac{6}{\lambda}=\frac{6-3\gamma}{\delta \sqrt{6}}$ , 0 , 0 , ${\pm}\sqrt{x^2-1}$ , 0) exist under the conditions $-\frac{6}{\lambda}=\frac{6-3\gamma}{\delta\sqrt{6}}$ and $\lambda{\neq}2\delta$, with physical parameters ${\Omega_m}=0$, ${\omega_h}=1$, ${\omega_{total}}=1$, ${\Omega_h}=1$, $q=2$.\\
\subsection{\textit{Stability of the Fixed Points}}
Dynamical systems analysis is a powerful technique for studying
cosmological evolution when exact solutions cannot be found because
the system is too complicated; moreover, it can be carried out
without any information about specific initial conditions. The
dynamical systems mostly encountered in cosmology are nonlinear
systems of differential equations (DEs), and the present system is
also nonlinear. Relatively few works in the literature are devoted to
the analysis of nonlinear dynamical systems; we use the methods
developed so far \cite{21}, supplemented by some techniques of our
own (such as plotting the dynamical evolution and using normally
hyperbolic fixed points). We now analyse the stability of the fixed
points. In this regard, we find the eigenvalues of the linear
perturbation matrix of the dynamical system
(\ref{3.15})-(\ref{3.19}). Due to the Friedmann constraint
(\ref{3.20}) we have four independent perturbation equations. The
eigenvalues of the $4{\times}4$ linear perturbation matrix
corresponding to each fixed point $P_i$ are given in Table $\bf{2}$.
Before further discussion we state some basics of nonlinear systems
of differential equations (DEs) \cite{21}. If the real part of each
eigenvalue is non-zero, then the fixed point is called a hyperbolic
fixed point (otherwise, it is called non-hyperbolic). Let us write a
nonlinear system of DEs in $R^n$ ($n$-dimensional Euclidean space)
as
\begin{equation}\label{4.1}
\mathbf{\dot{x}=f(x)}
\end{equation}
where $f:E{\rightarrow}{R^n}$ is differentiable and $E$ is an open
set in $R^n$. For a nonlinear system the DEs cannot be written in
matrix form as is done for a linear system. Near a hyperbolic fixed
point, however, a nonlinear dynamical system can be linearised, and
the stability of the fixed point then follows from the
Hartman-Grobman theorem.
To see this, let ${x_c}$ be a fixed point and $\zeta(t)$ the
perturbation from ${x_c}$, i.e., $\zeta(t)=x-{x_c}$, so that
$x={x_c}+\zeta(t)$. We find the time evolution of $\zeta(t)$ under
(\ref{4.1}) as
\begin{equation}\label{4.2}
\mathbf{\dot{\zeta}=\frac{d}{dt}(x-{x_c})=\dot{x}=f(x)=f({x_c}+\zeta)}
\end{equation}
Since $f$ is assumed differentiable, we use the Taylor expansion of
$f$ to get,
\begin{equation}\label{4.3}
\mathbf{f({x_c}+\zeta)=f(x_c)+Df(x_c)\,\zeta+...}
\end{equation}
where $Df(x)=\left[\frac{\partial{f_i}}{\partial{x_j}}\right]$,
$i,j=1,2,...,n$, is the Jacobian matrix; since $\zeta$ is very small,
higher-order terms are neglected above. As $f(x_c)=0$, (\ref{4.2})
reduces to
\begin{equation}\label{4.4}
\mathbf{\dot{\zeta}=Df(x_c)\,\zeta}
\end{equation}
This is called the linearization of the DE near a fixed point. The
stability of the fixed point $x_c$ is inferred from the signs of the
eigenvalues of the Jacobian matrix $Df(x_c)$. If the fixed point is
hyperbolic, then stability is concluded from the Hartman-Grobman
theorem, which states:\\
Theorem (Hartman-Grobman): Consider the nonlinear DE (\ref{4.1}) in
$R^n$, where $f$ is differentiable with flow $\phi_t$. If $x_c$ is a
hyperbolic fixed point, then there exists a neighbourhood $N$ of
$x_c$ on which $\phi_t$ is homeomorphic to the flow of the
linearization of the DE near $x_c$.\\ For a non-hyperbolic fixed
point this cannot be done, and the study of stability becomes hard
due to the lack of a theoretical setup. If at least one eigenvalue
corresponding to the fixed point is zero, then the point is termed
non-hyperbolic. In this case we cannot determine the stability near
the fixed point from the linearization. Consequently, we have to
resort to other techniques, such as solving the system numerically
near the fixed point and studying the asymptotic behaviour with the
help of plots of the solution, as is done in this work (details are
described later). However, we can find the dimension of the stable
manifold (if it exists) with the help of the centre manifold theorem.
There is a separate class of important non-hyperbolic fixed points
known as normally hyperbolic fixed points, which are rarely
considered in the literature (see \cite{22}). As some fixed points
encountered in our work are of this kind, we state the basics here.
We are interested in non-isolated normally hyperbolic fixed points of
a given DE (for example a curve of fixed points; such a set is called
an equilibrium set). If an equilibrium set has only one zero
eigenvalue at each point and all other eigenvalues have non-zero real
parts, then the equilibrium set is called normally hyperbolic. The
stability of a normally hyperbolic fixed point is deduced from the
invariant manifold theorem, which states:\\
Theorem (Invariant manifold): Let $x=x_c$ be a fixed point of the DE
$\dot{x}=f(x)$ on $R^n$ and let $E^s$, $E^u$ and $E^c$ denote the
stable, unstable and centre subspaces of the linearization of the DE
at $x_c$. Then there exist\\
a stable manifold $W^s$ tangent to $E^s$,\\
an unstable manifold $W^u$ tangent to $E^u$, and\\
a centre manifold $W^c$ tangent to $E^c$ at $x_c$.\\
For a normally hyperbolic equilibrium set, the stability thus depends
on the signs of the remaining (non-zero) eigenvalues: if they are all
negative, the fixed point is stable; otherwise it is unstable. Table
$\mathbf{2}$ shows the eigenvalues corresponding to the fixed points
given in Table $\mathbf{1}$, indicates whether each point is
hyperbolic, non-hyperbolic or normally hyperbolic, and gives the
nature of its stability (if any).
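As a cross-check of Table $\bf{2}$ below, the linear perturbation
matrix can also be evaluated numerically. The following Python sketch
is our own illustration: it eliminates $\Omega_m$ through the
constraint (\ref{3.20}), builds the $4\times 4$ Jacobian by central
differences, and prints its eigenvalues at $P_9$ (parameter values
are illustrative):
\begin{verbatim}
import numpy as np

def rhs4(s, gamma, delta, lam):
    # Reduced system with Omega_m eliminated via Eq. (3.20).
    # (gamma is kept in the signature for uniformity; it only enters
    # the eliminated Omega_m equation.)
    x, y, u, v = s
    Om = 1 - x**2 - y**2 + v**2
    E = 2*x**2 - 2*v**2 + Om
    return np.array([
        -3*x - u*v**2 - lam*np.sqrt(1.5)*y**2
            - delta*np.sqrt(1.5)*Om + 1.5*x*E,
        lam*np.sqrt(1.5)*x*y + 1.5*y*E,
        -x*u**2,
        -x*u*v - 3*v + 1.5*v*E])

def jacobian_eigs(s, gamma, delta, lam, h=1e-6):
    # 4x4 linear perturbation matrix by central differences.
    n = len(s)
    J = np.zeros((n, n))
    for j in range(n):
        dp = np.zeros(n); dp[j] = h
        J[:, j] = (rhs4(s + dp, gamma, delta, lam)
                   - rhs4(s - dp, gamma, delta, lam))/(2*h)
    return np.linalg.eigvals(J)

lam = -0.5
P9 = np.array([-lam/np.sqrt(6), np.sqrt(1 - lam**2/6), 0.0, 0.0])
print(np.sort(jacobian_eigs(P9, 1.0, 1.0, lam).real))
# Expected: 0, -3+lam^2/2 (twice), -3-delta*lam+lam^2, as in Table 2.
\end{verbatim}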
\begin{table}
\begin{tabular}{|l|l|l|}
\hline
$P_i$&Eigenvalues&{Nature of Stability (if exists \dag)}\\
\hline
$P_1$ & 0,0,$3+\delta\sqrt{6},3+\sqrt{\frac{3}{2}}\lambda$ & 2D stable manifold \\
$P_2$ & 0,0,$3-\delta\sqrt{6},3-\sqrt{\frac{3}{2}}\lambda$ & 2D stable manifold \\
$P_{3\pm}$ & 0,0,$3{\pm}\delta\sqrt{6},3{\pm}\sqrt{\frac{3}{2}}\lambda$ & 2D stable manifold\\
$P_4$ & $0,-\frac{3}{2}+{\delta^2},-\frac{3}{2}+{\delta^2},\frac{3}{2}+{\delta^2}
-{\delta}{\lambda}$ & 3D stable manifold\\
$P_5$ & 0,0,$3+\sqrt{6}x\delta,3+\sqrt{\frac{3}{2}}x\lambda $ & 2D stable manifold\\
$P_6$ & 0,0,$3+\sqrt{6}x\delta,3+\sqrt{\frac{3}{2}}x\lambda $ & 2D stable manifold\\
$P_7$ & -3,0,$-3-\sqrt{3(\delta\lambda-{\lambda^2})},-3+\sqrt{3(\delta\lambda
-{\lambda^2})}$ & stable\\
$P_8$ & -3,0,$-3-\sqrt{3(\delta\lambda-{\lambda^2})}$,$-3+\sqrt{3(\delta\lambda
-{\lambda^2})}$ & stable\\
$P_9$ & 0,$-3+\frac{\lambda^2}{2},-3+\frac{\lambda^2}{2},-3-\delta\lambda+{\lambda^2}$ & 3D stable manifold\\
$P_{10}$ & 0,$-3+\frac{\lambda^2}{2},-3+\frac{\lambda^2}{2},-3-\delta\lambda +{\lambda^2}$ & 3D stable manifold\\
$P_{11}$ & 0,$-3+3{a^2}+3\frac{b^2}{2},\frac{1}{4}(D-\sqrt{\Delta})$
,$\frac{1}{4}(D+\sqrt{\Delta})$ & 3D stable manifold\\
$P_{12}$ & 0,$-3+3{a^2}+3\frac{b^2}{2},\frac{1}{4}(D-\sqrt{\Delta})$
,$\frac{1}{4}(D+\sqrt{\Delta})$ & 3D stable manifold\\
$P_{13}$ & 0,0,0,$-3\frac{(2\delta\lambda-{\lambda^2})}{\lambda^2}$ & 1D stable manifold\\
$P_{14}$ & 0,0,0,$-3\frac{(2 \delta\lambda-{\lambda^2})}{\lambda^2}$ & 1D stable manifold\\
$P_{15}$ & 0,0,$3 + \sqrt{6}x\delta,3+\sqrt{\frac{3}{2}}x\lambda$ & 2D stable manifold\\
$P_{16}$ & 0,0,$3 + \sqrt{6}x\delta,3+\sqrt{\frac{3}{2}}x\lambda$ & 2D stable manifold\\
\hline
\end{tabular}
\caption{Eigenvalues of the fixed points of the autonomous system
of equations (\ref{3.15})-(\ref{3.19}) and the nature of stability
(if any), where $D=-12 + 24{a^2} +
12{b^2}+2\sqrt{6}a\delta+\sqrt{6}a\lambda$ and
$\Delta=-144{a^2}+144{a^4}+144{a^2}{b^2}+36{b^4}+48\sqrt{6}a\delta-48\sqrt{6}{a^3}
\delta-72\sqrt{6}a{b^2}\delta+24{a^2}{\delta^2}-72\sqrt{6}a\lambda+72\sqrt{6}{a^3}\lambda+
84\sqrt{6}a{b^2}\lambda+48\delta\lambda-72{a^2}\delta\lambda-48{b^2}\delta\lambda-
48{\lambda^2} +54{a^2}{\lambda^2}+48{b^2}{\lambda^2}$.\\
{\dag}The nature of stability is discussed in detail in the text.}
\end{table}
We see from Table $\bf{2}$ that each fixed point $P_i$ is
non-hyperbolic, except $P_7$ and $P_8$ \textbf{(which are normally
hyperbolic)}, so we cannot use linear stability analysis alone. Hence
we have utilised the following scheme to infer the stability of the
non-hyperbolic fixed points. We find numerical solutions of the
system of differential equations (\ref{3.15})-(\ref{3.19}). Then we
investigate the variation of the dynamical variables
$x,y,u,v,\Omega_m$ against the e-folding number N (which in turn
gives the variation against time $t$) through graphs in the
neighbourhood of each fixed point, and observe whether the dynamical
variables asymptotically converge to the fixed point. If they do, we
say the fixed point is stable (otherwise, unstable). This method is
commonly used nowadays in the absence of a complete mathematical
theory of nonlinear dynamical systems. But we must remember that the
method is not foolproof: we have to consider as large a neighbourhood
of N as possible (ideally $|N|{\rightarrow}{\infty}$), because a
small perturbation can lead to instability. The graphs corresponding
to each fixed point are given and analysed below; a sketch of the
numerical scheme follows this paragraph. We consider the fixed points
one by one.
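A minimal sketch of this scheme (our own illustration; the
perturbation size, integration range and parameter values are
illustrative) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(N, s, gamma, delta, lam):
    # Full system (3.15)-(3.19); s = (x, y, u, v, Om).
    x, y, u, v, Om = s
    E = 2*x**2 - 2*v**2 + Om
    return [-3*x - u*v**2 - lam*np.sqrt(1.5)*y**2
                - delta*np.sqrt(1.5)*Om + 1.5*x*E,
            lam*np.sqrt(1.5)*x*y + 1.5*y*E,
            -x*u**2,
            -x*u*v - 3*v + 1.5*v*E,
            Om*(-3*gamma - delta*np.sqrt(6)*x + 3*E)]

gamma, delta, lam = 1.0, 0.5, -0.5
P1 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # fixed point under study
rng = np.random.default_rng(0)
s0 = P1 + 1e-3*rng.standard_normal(5)      # small perturbation

for label, span in [("future", (0.0, 5.0)), ("past", (0.0, -5.0))]:
    sol = solve_ivp(rhs, span, s0, args=(gamma, delta, lam), rtol=1e-8)
    drift = np.linalg.norm(sol.y[:, -1] - P1)
    print(label, "N =", sol.t[-1], "|s - P1| =", drift)
# If trajectories return to the fixed point as |N| grows we call it
# stable; if they drift away (as for P_1 here) we call it unstable.
\end{verbatim}
(For strongly unstable points the integrator may stop before the end
of the range as the solution blows up.)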
We note from figure $\mathbf{1}$ that $P_1$ is not a stable fixed
point. The same holds for $P_2$, as is evident from figure
$\mathbf{2}$. We note that if $\lambda{\leq}-\sqrt{6}$ and
$\delta{\leq}-\sqrt{\frac{3}{2}}$ (or $\lambda{\geq}\sqrt{6}$ and
$\delta{\geq}\sqrt{\frac{3}{2}}$), with equality in one of them, then
$P_1$ (or $P_2$) may admit a 2-dimensional stable manifold
corresponding to the two negative eigenvalues, with the EoS of
hessence and the total EoS both equal to 1, so that the universe
decelerates.\\
We note that $P_{3\pm}$ bear the same features as $P_1$ and $P_2$.
So none of $P_1$, $P_2$ and $P_{3\pm}$ describes the current phase of
the universe, and these points bear no physical significance.\\
\begin{figure}\centering
\epsfig{file=D1.eps,width=.50\linewidth}
\caption{Plot of (1) variations of x (blue),y (green),u
(orange), v (red), $\Omega_m$ (yellow) versus N near $P_1$, for $\gamma=1$, $\delta=0.5$ and $\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
\begin{figure}\centering
\epsfig{file=D2.eps,width=.50\linewidth}
\caption{Plot of (2) variations of x (blue), y (green), u (orange), v (red), $\Omega_m$
(yellow) versus N near $P_2$, for $\gamma=1$ , $\delta=0.5$ and
$\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
\begin{figure}\centering
\epsfig{file=D4.eps,width=.50\linewidth}
\caption{Plot of (3) variations of x (blue), y (green), u (orange), v (red), $\Omega_m$ (yellow) versus N near $P_4$, for $\gamma=4/3$, $\delta=0.5$ and $\lambda=-0.5$. The position corresponding to N=0 is the fixed point under consideration.}
\end{figure}
If ${\delta^2}{\leq}\frac{3}{2}$ and ${\delta^2}-\delta\lambda{\leq}-\frac{3}{2}$ (with equality in one of them), $P_4$ may admit a 2-dimensional stable manifold corresponding to the two negative eigenvalues, with the EoS of hessence equal to 1, the total EoS equal to $-1 + \gamma(1-\frac{2{\delta^2}}{3})+ \frac{4{\delta^2}}{3}$, and a decelerating universe. Here the plot in figure $\mathbf{3}$ indicates that with a small increase of N the solution moves away from $P_4$; this is an unstable fixed point.\\
\begin{figure}\centering
\epsfig{file=D5.eps,width=.50\linewidth}
\caption{Plot of (4) variations of x (blue),y (green),u
(orange),v (red),$\Omega_m$ (yellow) versus N near $P_5$,for $\gamma=1$ , $\delta=1$ and
$\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
\begin{figure}\centering
\epsfig{file=D6.eps,width=.50\linewidth}
\caption{Plot of (5) variations of x (blue),y (green),u (orange),v (red),$\Omega_m$
(yellow) versus N near $P_6$, for $\gamma=1$ , $\delta=1$ and
$\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
We note that for $P_5$ and $P_6$, if $x\delta{\leq}-\sqrt{\frac{3}{2}}$ and
$x\lambda {\leq}-\sqrt{6}$ (with equality in one of them), then $P_5$
and $P_6$ may each admit a 2-dimensional stable manifold
corresponding to the two negative eigenvalues, with the EoS of
hessence and the total EoS both equal to 1, so that the universe
decelerates. Figure $\mathbf{4}$ indicates that three of the
variables (namely $x$, $v$, $\Omega_m$) move away from $P_5$ and then
enter a neighbourhood of N=10. This may reflect the stable manifold
corresponding to the negative eigenvalues. However, this point gives
a decelerated phase of the universe. Similar behaviour can be noted
in figure $\mathbf{5}$.\\
\begin{figure}\centering
\epsfig{file=D7.eps,width=.50\linewidth}
\caption{Plot of (6) variations of x (blue),y (green),u
(orange),v (red),$\Omega_m$ (yellow) versus N near $P_7$,for $\gamma=0$ , $\delta=1$ and
$\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
\begin{figure}\centering
\epsfig{file=D8.eps,width=.50\linewidth} \caption{Plot of (7)
variations of x (blue),y (green),u (orange),v (red),$\Omega_m$
(yellow) versus N near $P_8$,for $\gamma=0$ , $\delta=1$ and
$\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
We note that if $\delta\lambda-{\lambda^2}{\leq}3$, both $P_7$ and
$P_8$ are normally hyperbolic sets of fixed points, and as the
remaining three non-zero eigenvalues are negative, they are stable.
This set of fixed points has hessence EoS $-1$ and total EoS $-1$,
and the universe accelerates as for a `cosmological constant'. We
note clearly from figures $\mathbf{6}$ and $\mathbf{7}$ that all
trajectories from negative and positive values of N (i.e., from past
and future) converge towards N=0 (i.e., towards the set of fixed points).\\
\begin{figure}\centering
\epsfig{file=D9.eps,width=.50\linewidth}
\caption{Plot of (8) variations of x (blue),y (green),u
(orange),v (red),$\Omega_m$ (yellow) versus N near $P_9$,for $\gamma=1$ , $\delta=1$ and
$\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
\begin{figure}\centering
\epsfig{file=D10.eps,width=.50\linewidth}
\caption{Plot of (9) variations of x (blue),y (green),u (orange),v (red),$\Omega_m$
(yellow) versus N near $P_{10}$,for $\gamma=1$ , $\delta=1$ and
$\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
We note that if ${\lambda^2}{\leq}6$ and
${\lambda^2}-\delta\lambda{\leq}3$ (with equality in one of them),
$P_9$ and $P_{10}$ may admit a 3-dimensional stable manifold
corresponding to the negative eigenvalues, with the EoS of hessence
equal to $-1 + \frac{\lambda^2}{3}$ and the total EoS also $-1 +
\frac{\lambda^2}{3}$ (i.e., both EoS are `quintessence-like' if
${\lambda^2}<3$ or `dust-like' if ${\lambda^2}=3$). The graphs in
figures $\mathbf{8}$ and $\mathbf{9}$ also support the presence of
the corresponding stable manifolds. For our choice of $\lambda=-0.5$,
the EoS of hessence and the total EoS both behave like `quintessence'.\\
\begin{figure}\centering
\epsfig{file=D11.eps,width=.50\linewidth} \caption{Plot of (10)
variations of x (blue),y (green),u (orange),v (red),$\Omega_m$
(yellow) versus N near $P_{11}$,for $\gamma=1$ , $\delta=1$ and
$\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
\begin{figure}\centering
\epsfig{file=D12.eps,width=.50\linewidth}
\caption{Plot of (11) variations of x (blue),y (green),u (orange),v (red),$\Omega_m$
(yellow) versus N near $P_{12}$,for $\gamma=1$ , $\delta=1$ and
$\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
We note that if ${a^2}+\frac{b^2}{2}{\leq}1$ and $D{\leq}-\sqrt{\Delta}$
(with equality in one of them), $P_{11}$ and $P_{12}$ may admit a
3-dimensional stable manifold corresponding to the negative
eigenvalues, with the EoS of hessence $\frac{-1+2{A^2}+{B^2}}{1-B}$
and the total EoS $-1+{A^2}+{B^2}+(\gamma-2)B$. We see from figure
$\mathbf{10}$ that the system moves away from the fixed point
$P_{11}$; the same happens for the fixed point $P_{12}$, as seen from
figure $\mathbf{11}$.\\
\begin{figure}\centering
\epsfig{file=D13.eps,width=.50\linewidth}
\caption{Plot of (12) variations of x (blue),y (green),u
(orange), v (red), $\Omega_m$ (yellow) versus N near $P_{13}$, for $\gamma=1$, $\delta=1$ and $\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
\begin{figure}\centering
\epsfig{file=D14.eps,width=.50\linewidth}
\caption{Plot of (13) variations of x (blue),y (green),u (orange),v (red),$\Omega_m$
(yellow) versus N near $P_{14}$,for $\gamma=1$ , $\delta=1$ and
$\lambda=-0.5$. The position corresponding to N=0 is the fixed point
under consideration.}
\end{figure}
We note that if $2\delta<\lambda<0$ or $0<\lambda<2\delta$, then
$P_{13}$ and $P_{14}$ may admit a 1-dimensional stable manifold
corresponding to the negative eigenvalue, with the EoS of hessence
and the total EoS both equal to 1, so that the universe decelerates.
The graphs in figures $\mathbf{12}$ and $\mathbf{13}$ show that the
system diverges from the fixed points $P_{13}$ and $P_{14}$, so both
points are unstable in nature.\\
\begin{figure}\centering
\epsfig{file=D15.eps,width=.50\linewidth}
\caption {Plot of (14) variations of x (blue),y (green),u
(orange),v (red),$\Omega_m$ (yellow) versus N near $P_{15}$,for $\gamma=1$ ,
$\delta=1/(4\sqrt{6})$ and $\lambda=-0.5$. The position
corresponding to N=0 is the fixed point under consideration.}
\end{figure}
\begin{figure}\centering
\epsfig{file=D16.eps,width=.50\linewidth}
\caption {Plot of (15) variations of x (blue),y (green),u (orange),v (red),$\Omega_m$
(yellow) versus N near $P_{16}$,for $\gamma=1$ ,
$\delta=1/(4\sqrt{6})$ and $\lambda=-0.5$. The position
corresponding to N=0 is the fixed point under consideration.}
\end{figure}
We note that if $x\delta{\leq}-\sqrt{\frac{3}{2}}$ and $x\lambda{\leq}-\sqrt{6}$,
then $P_{15}$ and $P_{16}$ may admit a 2-dimensional stable manifold
corresponding to the two negative eigenvalues, with the EoS of
hessence and the total EoS both equal to 1, so that the universe
decelerates. Here we note that the solution of the dynamical system
moves rapidly away from the fixed points $P_{15}$ and $P_{16}$, as is
clear from figures $\mathbf{14}$ and $\mathbf{15}$. These fixed
points are unstable.\\
\section{\textit{Cosmological Significance of the Fixed Points}}
In this section we discuss the possible singularities that any dark
energy model could have and compare the fixed points against the
recent Planck 2015 dataset \cite{m8a-A1nn}. If the EoS satisfies
$\omega{<}-1$ (i.e., the null energy condition $p+\rho{\geq}0$ is
violated), a Big Rip singularity occurs within a finite time
\cite{7}. This singularity occurs when, at a finite time
$t{\rightarrow}t_s$,
$a{\rightarrow}{\infty}$, $\rho{\rightarrow}{\infty}$ and $|p|{\rightarrow}{\infty}$.\\
We now analyse the stable fixed points to see whether they avoid (or
suffer) the Big Rip singularity. For the stable fixed points $P_7$
and $P_8$ we have $\dot{H}/{H^2}=0$, which gives $H=k$ (a constant of
integration), so that $a{\varpropto}{e^{kt}}$. Also, in these cases
${\omega_{total}}=-1$, which together with the energy conservation
equation gives $\rho=$ constant. Hence the universe suffers no Big
Rip here. The fixed points $P_7$ and $P_8$ exist with physical
parameters ${\Omega_m}=0$, ${\omega_h}=-1$, ${\omega_{total}}=-1$.
These values are well within the best fit of the Planck 2015 data,
i.e., ${\Omega_m}=0.3089{\pm}0.0062$ from TT, TE, EE+low
P+lensing+ext data, and the dark energy EoS
$\omega={-1.019}^{+0.075}_{-0.080}$.\\
Now we consider the unstable fixed points. An unstable fixed point
may describe the initial phase of the universe, whereas a stable
fixed point may be the end phase of the universe. The fixed points
$P_1$, $P_2$ and $P_{3\pm}$ exist with physical parameters
${\Omega_m}=0$, ${\omega_h}=1$, ${\omega_{total}}=1$; clearly, no Big
Rip occurs here. The parameter ${\Omega_m}$ lies within the best fit
of the Planck 2015 data, i.e., ${\Omega_m}=0.3089{\pm}0.0062$ from
TT, TE, EE+low P+lensing+ext data, but ${\omega_h}$ and
${\omega_{total}}$ defy the dark energy EoS
$\omega={-1.019}^{+0.075}_{-0.080}$.\\
Fixed point $P_4$ has physical parameters
${\Omega_m}=\frac{6-3\gamma+2\delta^2}{3}$, ${\omega_h}=1$,
${\omega_{total}}=-1 + \gamma(1-\frac{2{\delta^2}}{3})+ \frac{4{\delta^2}}{3}$.
Here ${\omega_h}$ and ${\omega_{total}}$ are both greater than $-1$,
so no Big Rip occurs here either. A wide range of choices of $\gamma$
and $\delta$ can fit $\Omega_m$ and ${\omega_{total}}$ within the
Planck 2015 data, i.e., ${\Omega_m}=0.3089{\pm}0.0062$, but
${\omega_h}$ disobeys the dark energy EoS
$\omega={-1.019}^{+0.075}_{-0.080}$.\\
Fixed points $P_5$ and $P_6$ exist with physical parameters
${\Omega_m}=0$, ${\omega_h}=1$, ${\omega_{total}}=1$. We observe that
these solutions are devoid of the Big Rip. Here ${\Omega_m}$ lies
within the best fit of the Planck 2015 data, but ${\omega_h}$ and
${\omega_{total}}$ defy the dark energy EoS $\omega={-1.019}^{+0.075}_{-0.080}$.\\
Fixed points $P_9,P_{10}$ admit physical parameters
${\Omega_m}=0$, ${\omega_h}=-1+\frac{\lambda^2}{3}$,
${\omega_{total}}=-1+\frac{\lambda^2}{3}$ and so avoid the Big Rip.
$\Omega_m$ is within the Planck 2015 data, and a suitable choice of
$\lambda$ fits ${\omega_h}$ and ${\omega_{total}}$ within the dataset.\\
Fixed points $P_{11}$ and $P_{12}$ have physical parameters
${\Omega_m}=B$, ${\omega_h}=\frac{-1+2{A^2}+{B^2}}{1-B}$,
${\omega_{total}}=-1+{A^2}+{B^2}+(\gamma-2)B$, where
$A=-\sqrt{\frac{3}{2}}~\frac{\gamma}{\delta+\lambda}$ and
$B=\frac{6+\lambda \sqrt{6}A-6{A^2}}{9}$. Here we can adjust $A$
and $B$ to make ${\omega_h}$ and ${\omega_{total}}{\geq}-1$, thereby
avoiding the Big Rip. Since only $0<\gamma{\leq}2$ is required, while
$\delta$ can take arbitrarily small values and $\lambda$ can have any
real value, $A$ and hence $B$ can be adjusted to lie well within the
Planck 2015 values ${\Omega_m}=0.3089{\pm}0.0062$ from TT, TE,
EE+low P+lensing+ext data and the dark energy EoS
$\omega={-1.019}^{+0.075}_{-0.080}$.\\
Fixed points $P_{13}$, $P_{14}$, $P_{15}$ and $P_{16}$ avoid the Big
Rip, as they bear physical parameters ${\Omega_m}=0$,
${\omega_h}=1$, ${\omega_{total}}=1$. Here the parameter
${\Omega_m}$ lies within the best fit of the Planck 2015 data, i.e.,
${\Omega_m}=0.3089{\pm}0.0062$ from TT, TE, EE+low P+lensing+ext
data, but ${\omega_h}$ and ${\omega_{total}}$ totally defy
the dark energy EoS $\omega={-1.019}^{+0.075}_{-0.080}$.\\
\section{\textit{Concluding Remarks}}
In this paper we have performed a dynamical systems study of a
unique scalar field, hessence, coupled with dark matter in an
alternative theory of gravity, namely $f(T)$ gravity. The system is
unconventional and complex, but quite interesting. The model is
chosen to explore one of the various possibilities for the fate of
the universe; the sole purpose is to explain the current acceleration
of the universe. An unstable fixed point may describe the initial
phase of the universe, whereas a stable fixed point may be its end
phase. We have chosen an exponential potential of the form
$V={V_0}{e^{\lambda\phi}}$ (where $V_0$ and $\lambda$ are real
constants and $\phi$ is the hessence field) for simplicity. The
interaction term $C$ is chosen to address the so-called `coincidence
problem' in tune with the second law of thermodynamics and is
otherwise quite arbitrary (only $C$ should remain positive):
$C=\delta \dot{\phi} \rho_m$, where $\delta$ is a real constant of
small magnitude, which may be chosen positive or negative such that
$C$ remains positive, since $\dot{\phi}$ may be positive or negative
according to the hessence field $\phi$. The resulting nonlinear
dynamical system gives sixteen possible fixed points. Among them,
$P_7$ and $P_8$ form a stable set of normally hyperbolic fixed points
which resembles a `cosmological constant', so it can explain the
current phase of acceleration of the universe; interestingly,
however, it does not show a `hessence-like' nature. From the other
fixed points the initial phases of the evolution may begin. However,
the complexity of the system is the main obstacle to a more precise
explanation. In future work we may try some other possible
alternatives.\\\\
{\bf Conflict of Interest:} The authors declare that there is no
conflict of interest regarding the publication of this paper.\\\\
{\bf Acknowledgement:} One of the authors (UD) is thankful to
IUCAA, Pune, India for warm hospitality where part of the work was
carried out.
\section{Introduction}
Whether or not quantum many-body systems out of equilibrium can be understood in terms of well-defined phases of matter is a central question in condensed matter physics. The lack of universal principles, such as those governing equilibrium systems \cite{Chandler:1987,Goldenfeld:2018}, makes the problem exceptionally hard. Still, the concepts of criticality and far-from-equilibrium dynamics have recently been elegantly unified through the discovery of dynamical phase transitions in which a time-evolving quantum many-body system displays sudden changes of its macroscopic properties~\cite{Heyl:2013,Karrasch2013,Andraschko2014,Vajna2014,Vajna2015,Budich2016,Zvyagin2016,Halimeh2017,Heyl:2018}. In equilibrium physics, phase transitions are reflected by singularities in the free energy, and dynamical phase transitions are similarly given by critical \emph{times}, where a non-equilibrium analogue of the free energy becomes non-analytic. Specifically, the role of the partition function is played by the return, or Loschmidt, amplitude of the many-body system after a quench, and its logarithm yields the corresponding free energy.
A typical setup for observing dynamical quantum phase transitions is depicted in Fig.~\ref{fig:schematic}\textbf{a}: a one-dimensional chain of interacting quantum spins is initialized in a ground state characterized by one type of order (or the lack of it) and subsequently made to evolve according to a Hamiltonian whose ground state possesses a different order. Experimentally, dynamical phase transitions have been observed in strings of trapped ions~\cite{Jurcevic2017,Zhang2017}, optical lattices~\cite{Flaeschner2018}, and several other systems that offer a high degree of control~\cite{Guo2019,Tian2019,Wang2019,Tian2020,Xu2020}. The Loschmidt amplitude is the overlap between the initial state of the system and the state of the system at a later time. Moreover, similarly to equilibrium systems, dynamical phase transitions may occur at critical times, where the Loschmidt amplitude vanishes, and the dynamical free energy becomes non-analytic in the thermodynamic limit. As illustrated in Fig.~\ref{fig:schematic}\textbf{b}, these non-analytic signatures may appear as cusps in the dynamical free energy, however, strictly speaking, they only occur for infinitely large systems. For finite-size systems, they are typically smeared out, and often for spin chains at least $L\simeq 50-100$ spins are required in order to identify and determine the critical times of a dynamical phase transition. Since the Hilbert space dimension grows exponentially with the chain length, the outstanding bottleneck for theoretical investigations of dynamical phase transitions is the need to predict the far-from-equilibrium dynamics of large quantum systems. Numerically, the task is computationally costly, or even intractable, and generally it requires advanced system-specific techniques that do not easily generalize to other systems or spatial dimensions~\cite{Pozsgay2013,Karrasch2013,Andraschko2014,Kriel2014,Sharma2015,Halimeh2017,ZaunerStauber2017,Homrighausen2017,Heyl:2018a,Kennes2018,Hagymasi:2019,Lacki2019,Huang2019,Halimeh2020,zunkovic2018,weidinger2017,Feldmeier2019}. For this reason, little is still known about dynamical phase transitions and the general applicability of concepts like universality and scaling. In fact, our current understanding comes to a large extent from a few exactly solvable models~\cite{Heyl:2013,Karrasch2013,Andraschko2014,Vajna2014,Heyl2015,Vajna2015,Schmitt2015,Budich2016,Zvyagin2016,Halimeh2017,Fogarty2017,bhattacharya2017,Heyl:2018,Kennes2018,Sedlmayr2018,Jafari2019,Defenu2019,Najafi2019,Gulacsi2020,zamani2020}. Important questions concern the relationship between critical times and dynamical changes in local observables or the entanglement spectrum (or other dynamical measures), which often exhibit similar but not strictly related time scales. However, case-by-case investigations have revealed that dynamical phase transitions are often accompanied by
interesting dynamics with comparable time scales, and one could view them as indicators of nontrivial dynamics in other many-body properties.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{figure_1_test2.pdf}
\caption{ \textbf{Dynamical phase transitions. a,} A sudden quench of the system parameters causes a dynamical phase transition in a quantum spin chain with $L$ sites. \textbf{b,} In the thermodynamic limit, such phase transitions give rise to singularities in the rate function at the critical times, $t_{c,1}, t_{c,2}\ldots $, see Eqs.~\eqref{eq:Loschmidt_amplitude} and \eqref{eq:ratefunc} for definitions, however, in finite-size systems they are smeared out. \textbf{c,} The singularities in the rate function are associated with the zeros (circles) of the Loschmidt amplitude in the complex-time plane. In the thermodynamic limit, they form continuous lines, and the real critical times are given by the crossing points with the imaginary axis. We determine the zeros of the Loschmidt amplitude from the Loschmidt cumulants evaluated at the basepoint $\tau$.}
\label{fig:schematic}
\end{figure}
Here we pave the way for systematic investigations of dynamical phase transitions in correlated systems using Loschmidt cumulants, which allow us to accurately predict the critical times of a quantum many-body system using remarkably small system sizes, on the order of $L\simeq10-20$. Using modest computational power, we determine the critical times of the interacting Kitaev chain and the spin-1 Heisenberg chain after a quench and find for instance that a dynamical phase transition in the Kitaev chain gets suppressed with increasing interaction strength. The Loschmidt cumulants allow us to determine the complex zeros of the Loschmidt amplitude as illustrated in Fig.~\ref{fig:schematic}\textbf{c}. We can thereby map out the thermodynamic lines of zeros and identify the crossing points with the imaginary axis, corresponding to the real critical times, where a dynamical phase transition occurs. This approach makes it possible to predict the critical dynamics of a wide range of strongly interacting quantum many-body systems and is applicable also in higher dimensions. In two dimensions, the zeros can make up lines or surfaces in the complex plane, and our method can be used to determine all of these zeros as well as their density. Moreover, as we will show, our method provides exciting perspectives for future experiments on dynamical phase transitions. Specifically, our method makes it possible to predict the first critical time of a quantum many-body system after a quench by measuring the fluctuations of the energy in the initial state. In addition, due to the favorable scaling properties of our method, it is conceivable that it can be implemented on a near-term quantum computer with a limited number of qubits.
We now proceed as follows. In Sec.~\ref{sec:LC}, we develop our method for determining the zeros of the Loschmidt echo and their crossing points with the imaginary axis in the thermodynamic limit, which yield the critical times of a quantum many-body system after a quench. In Sec.~\ref{sec:Kitaev}, we consider dynamical phase transitions in the Kitaev chain after a quench, and we show how we can determine the critical times from remarkably small chain lengths even with strong interactions. Section~\ref{sec:heisenberg} is devoted to the spin-1 Heisenberg chain and includes several quenches, for instance, from the Haldane phase to the N\'eel phase. In Sec.~\ref{sec:exp}, we discuss the experimental perspectives for future realizations of our method. Finally, in Sec.~\ref{sec:concl}, we state our conclusions and provide an outlook on possible avenues for further developments.
\section{From Loschmidt cumulants to Loschmidt zeros}
\label{sec:LC}
The fundamental object that describes dynamical phase transitions is the Loschmidt amplitude,
\begin{equation}
\label{eq:Loschmidt_amplitude}
\mathcal{Z}(it) = \bra{\Psi_0}e^{-it\mathcal{\hat H}}\ket{\Psi_0},
\end{equation}
where $\ket{\Psi_0}$ is the initial state of the many-body system at time $t=0$, the post-quench Hamiltonian $\mathcal{\hat H}$ governs the time evolution for times $t>0$, and we set $\hbar=1$ from now on. The Loschmidt amplitude resembles the partition function of a thermal system with Hamiltonian $\mathcal{\hat H}$; however, the inverse temperature is replaced by the imaginary time $\tau = it$, and the average is taken with respect to the initial state $\ket{\Psi_0}$. In equilibrium settings, a thermal phase transition occurs if a system is cooled below its critical temperature and abruptly changes from a disordered to an ordered phase. Similarly, {\it dynamical} phase transitions occur at critical {\it times}, when a quenched system suddenly changes from one phase to another with fundamentally different properties. Such transitions are manifested in the rate function
\begin{equation}
\lambda(t)=-\frac{1}{L}\ln |\mathcal{Z}(it)|^2,
\label{eq:ratefunc}
\end{equation}
which is the non-equilibrium analogue of the free energy density. In some cases, dynamical phase transitions occur if a system is quenched across an underlying equilibrium phase transition; generally, however, there is no simple relation between dynamical and equilibrium phase transitions. In Fig.~\ref{fig:schematic}\textbf{a}, the system size, denoted by $L$, is the total number of spins along the chain. In the thermodynamic limit of infinitely large systems, dynamical phase transitions give rise to singularities in the rate function, for example a cusp as shown in Fig.~\ref{fig:schematic}\textbf{b}. However, this non-analytic behavior typically becomes apparent only for very large systems, and it is hard to pinpoint for smaller ones. For this reason, dynamical phase transitions are difficult to capture in computations and simulations, where the numerical costs grow rapidly with system size.
Here we build on recent progress in Lee-Yang theories of thermal phase transitions~\cite{Deger:2018,Deger:2019,Deger:2020,Deger:2020b} and use Loschmidt cumulants to predict dynamical phase transitions in strongly-correlated many-body systems using remarkably small system sizes. The Lee-Yang formalism of classical equilibrium phase transitions considers the zeros of the partition function in the complex plane of the external control parameters~\cite{Yang:1952,Lee:1952,Blythe:2003,Bena:2005}. In a similar spirit, we treat the Loschmidt amplitude as a function of the complex-valued variable $\tau$. The Loschmidt amplitude of a finite system is an entire function, which can be factorized as~\cite{Arfken2012}
\begin{equation}
\label{eq:factorized_Z}
\mathcal{Z}(\tau) = e^{\alpha\tau}\prod_{k}\left(1-\tau/\tau_k\right),
\end{equation}
where $\alpha$ is a constant, and $\tau_k$ are the complex zeros of the Loschmidt amplitude. For a thermal system, the values of the inverse temperature for which the partition function vanishes are known as Fisher zeros \cite{Fisher:1965}. We refer to the zeros of the Loschmidt amplitude as Loschmidt zeros. For a finite system, the zeros are isolated points in the complex plane. However, they grow denser as the system size is increased, and in the thermodynamic limit, they coalesce to form continuous lines and regions. Their intersections with the imaginary $\tau$ axis determine the real critical times, at which the rate function becomes non-analytic, and dynamical phase transitions occur~\cite{Heyl:2018}. As such, this phenomenology resembles the classical Lee-Yang theory of thermal phase transitions~\cite{Yang:1952,Lee:1952,Blythe:2003,Bena:2005}.
The central task is thus to determine the Loschmidt zeros. To this end, we define the Loschmidt moments and cumulants of the Hamiltonian $\mathcal{\hat H}$ of order $n$ as
\begin{equation}
\label{eq:moment}
\langle \mathcal{\hat H}^n \rangle_\tau = (-1)^n\frac{\partial_{\tau}^n\mathcal{Z}(\tau)}{\mathcal{Z}(\tau)}
\end{equation}
and
\begin{equation}
\label{eq:cumulant}
\llangle \mathcal{\hat H}^n \rrangle_\tau = (-1)^n\partial_{\tau}^n\ln \mathcal{Z}(\tau),
\end{equation}
where $\tau$ is the basepoint, at which the moments and cumulants are evaluated. For $\tau=0$, the Loschmidt moments reduce to the moments of the Hamiltonian with respect to the initial state as $\langle \mathcal{\hat H}^n \rangle_0 = \bra{\Psi_0}\mathcal{\hat H}^n \ket{\Psi_0}$. At finite times, the Loschmidt moments are $\langle \mathcal{\hat H}^n \rangle_\tau = \bra{\Psi_0}\mathcal{\hat H}^n \ket{\Psi(\tau)}/\langle\Psi_0 |\Psi(\tau)\rangle$, where $\ket{\Psi(\tau)} =e^{-\tau\mathcal{\hat H}}\ket{\Psi_0}$ is the time-evolved state. The cumulants can be obtained from the moments using the standard recursive formula
\begin{equation}
\llangle \mathcal{\hat H}^n \rrangle_\tau = \langle \mathcal{\hat H}^n \rangle_\tau - \sum_{m = 1}^{n-1}\binom{n-1}{m-1}\llangle \mathcal{\hat H}^m \rrangle_\tau \langle \mathcal{\hat H}^{n-m} \rangle_\tau.
\end{equation}
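For concreteness, this recursion can be implemented in a few lines. The following Python sketch (function and variable names are ours and purely illustrative) converts a sequence of Loschmidt moments into the corresponding cumulants; complex-valued moments are allowed, as required for a complex basepoint $\tau$.
\begin{verbatim}
import numpy as np
from scipy.special import comb

def moments_to_cumulants(moments):
    """Convert moments <H^n>_tau (n = 1..N) to cumulants
    <<H^n>>_tau via the standard recursion; complex input
    is allowed for a complex basepoint tau."""
    n_max = len(moments)
    cumulants = np.zeros(n_max, dtype=complex)
    for n in range(1, n_max + 1):
        c = moments[n - 1]
        for m in range(1, n):
            c -= comb(n - 1, m - 1) * cumulants[m - 1] \
                 * moments[n - m - 1]
        cumulants[n - 1] = c
    return cumulants
\end{verbatim}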
For our purposes, it is now convenient to define the normalized Loschmidt cumulants $\kappa_n(\tau)$ as
\begin{equation}
\label{eq:cumulant_series}
\kappa_n(\tau) = \frac{(-1)^{n-1}}{(n-1)!}\llangle \mathcal{\hat H}^n \rrangle_\tau = \sum_{k}\frac{1}{(\tau_k-\tau)^n},\quad n>1,
\end{equation}
having used Eq.~\eqref{eq:factorized_Z} to express them in terms of the zeros. This expression shows that the Loschmidt cumulants are dominated by the zeros that are closest to the (complex) basepoint $\tau$, while the contributions from other zeros rapidly fall off with their inverse distance from the basepoint to the power of the cumulant order $n$. The main idea is now to extract the $m$ closest zeros from $2m$ high Loschmidt cumulants, which we can calculate. For $m=2$, this can be done by adapting the method from Refs.~\cite{Deger:2018,Deger:2019,Deger:2020,Deger:2020b}. However, for arbitrary $m$, we use the general approach presented in App.~\ref{app:determination_zeros}-\ref{app:errors}. For the systems that we consider in the following, we extract the $m = 7$ zeros closest to the movable basepoint using Loschmidt cumulants of order $n=9$ to $n=22$.
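The extraction step itself can be illustrated with a Prony-type construction: since the normalized cumulants in Eq.~\eqref{eq:cumulant_series} are power sums of the variables $x_k = 1/(\tau_k-\tau)$, consecutive cumulant orders obey a linear recurrence whose characteristic roots are the $x_k$. The Python sketch below is our illustration of this idea and is not necessarily identical to the algorithm of App.~\ref{app:determination_zeros}; it assumes that at least $2m$ consecutive cumulant orders are supplied.
\begin{verbatim}
import numpy as np

def zeros_from_cumulants(kappa, tau, m):
    """Estimate the m Loschmidt zeros closest to the basepoint
    tau from >= 2m consecutive normalized cumulants kappa[i],
    which are power sums of x_k = 1/(tau_k - tau)."""
    kappa = np.asarray(kappa, dtype=complex)
    # consecutive power sums obey p_n = sum_j c_j p_{n-j}
    A = np.array([[kappa[n - j] for j in range(1, m + 1)]
                  for n in range(m, 2 * m)])
    b = kappa[m:2 * m]
    c = np.linalg.solve(A, b)
    # the x_k are the roots of x^m - c_1 x^{m-1} - ... - c_m
    x = np.roots(np.concatenate(([1.0], -c)))
    return tau + 1.0 / x
\end{verbatim}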
It should be emphasized that our approach hardly makes any assumptions about the quantum many-body system at hand or the method used for obtaining the cumulants. As outlined in App.~\ref{app:Krylov_method}, we use a Krylov subspace method~\cite{Park1986,Paeckel2019} to perform the complex time evolution and evaluate the Loschmidt moments and cumulants, which we then use for extracting the Loschmidt zeros. All results presented below have been obtained on a standard laptop, and the method can readily be adapted to a variety of interacting quantum many-body systems, also in higher dimensions.
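As a minimal illustration of this step, the Python sketch below evaluates the Loschmidt moments of Eq.~\eqref{eq:moment} for a sparse Hamiltonian, using SciPy's \texttt{expm\_multiply} as a black-box action of the matrix exponential in place of the Lanczos scheme of App.~\ref{app:Krylov_method}; the two serve the same purpose for this illustration.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import expm_multiply

def loschmidt_moments(H, psi0, tau, n_max):
    """Moments <H^n>_tau = <psi0|H^n|Psi(tau)>/<psi0|Psi(tau)>
    with |Psi(tau)> = exp(-tau*H)|psi0>, for a sparse H and a
    complex basepoint tau (a sketch)."""
    psi_tau = expm_multiply(-tau * H, psi0)
    Z = np.vdot(psi0, psi_tau)           # Loschmidt amplitude
    moments, v = [], psi_tau
    for n in range(1, n_max + 1):
        v = H @ v                        # builds H^n |Psi(tau)>
        moments.append(np.vdot(psi0, v) / Z)
    return np.array(moments)
\end{verbatim}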
\begin{figure*}[ht]
\centering
\includegraphics[]{figure_2_blues.pdf}
\caption{{\bf The interacting Kitaev chain.} We quench the chemical potential from the trivial phase $\mu=-1.4$ to the topological phase $\mu=-0.2$ for $t>0$ with fixed $\Delta=0.3$ (all parameters and the inverse time $\tau^{-1}$ are expressed in units of $J=1$). \textbf{a-c,} Complex zeros for different system sizes ($L=7-20$) and boundary conditions ($\Phi = 0,\pi$) in the noninteracting case, compared with the exact thermodynamic lines of zeros. Only the zeros within a finite range from the basepoint can be accurately obtained from the Loschmidt cumulants. This fact is illustrated by moving the basepoint $\tau$ along two different paths (paths A and B in panels \textbf{a} and \textbf{b}). Panel \textbf{c} combines the results from panels \textbf{a} and \textbf{b}. \textbf{d-f,} Loschmidt zeros and critical times obtained with repulsive interactions ($V >0$) along the two paths. The lines of zeros for $V = 0$ (dashed line) are shown for comparison. The critical time $t_c$, shown in each panel as a red cross, is obtained as the intersection between the imaginary axis and the line drawn from the zero $\tau_-$ with smallest negative real part (in absolute value) to the zero $\tau_+$ with the smallest positive real part. The error on the critical time is estimated as $\Delta t_c = \max(|t_c-\mathrm{Im}\,\tau_-|,|t_c-\mathrm{Im}\,\tau_+|)$. \textbf{g-l,} Evolution of zeros and critical times with increasing attractive interactions ($V <0$). For very strong interactions ($V = -1$, panel \textbf{l}), the zeros do not cross the imaginary axis, signalling the absence of a dynamical quantum phase transition. As discussed in App.~\ref{app:errors}, the numerical error in the zeros is of the order of $10^{-3}$.}
\label{fig:kitaev}
\end{figure*}
\section{Interacting Kitaev chain}
\label{sec:Kitaev}
We first consider the spin-1/2 XYZ chain or, equivalently, the interacting Kitaev chain. The non-interacting limit maps to the XY model, which was solved exactly in the pioneering work of Ref.~\cite{Heyl:2013}. Here, we use Loschmidt cumulants to predict a dynamical quantum phase transition in the {\it strongly interacting} regime. The Hamiltonian of the spin-1/2 XYZ chain with a Zeeman field reads
\begin{equation} \label{eq:spinhalf}
\mathcal{\hat H} =\sum_{\alpha,j = 1}^{L} J_\alpha\hat S_j^\alpha\hat S_{j+1}^\alpha
- h\sum_{j = 1}^L\hat{S}_j^z\,,
\end{equation}
where $\hat{S}_j^\alpha$ are the spin-1/2 operators for the $\alpha=x,y,z$ component of the spin on site $j$ of the chain of length $L$, the exchange couplings are denoted by $J_\alpha$, and $h$ is the Zeeman field. We use twisted boundary conditions,
\begin{equation}
\label{eq:bound_cond}
\begin{split}
\hat{S}_{L+1}^x&=\cos(\Phi)\hat{S}_1^x+\sin(\Phi)\hat{S}^y_1,\\
\hat{S}_{L+1}^y&=-\sin(\Phi)\hat{S}_1^x+\cos(\Phi)\hat{S}^y_1,
\end{split}
\end{equation}
and $\hat{S}_{L+1}^z =\hat{S}_{1}^z$, where $\Phi$ is the twist angle. In the fermionic representation, obtained by a Jordan-Wigner transformation \cite{Franchini_book}, the model maps to the interacting Kitaev chain of spinless fermions with operators $\hat{c}_j$~and~$\hat{c}_{j}^\dagger$,
\begin{widetext}
\begin{equation}
\label{eq:kitaev}
\mathcal{\hat{H}} = -\frac{1}{2}\sum_{j = 1}^{L-1}\left(J\,\hat {c}_{j}^\dagger\hat{c}_{j+1} + \Delta\, \hat {c}_{j}^\dagger\hat{c}_{j+1}^\dagger + \text{H.c.}\right)
+V\sum_{j = 1}^{L-1} \left(\hat {n}_{j}-\frac{1}{2}\right)\left(\hat {n}_{j+1}-\frac{1}{2}\right)-\mu\sum_{j = 1}^{L} \hat {c}_{j}^\dagger\hat{c}_{j}
+\frac{s_\Phi \hat{P}}{2} \left(J\,\hat {c}_{L}^\dagger\hat{c}_{1} + \Delta\, \hat {c}_{L}^\dagger\hat{c}_{1}^\dagger + \text{H.c.}\right),
\end{equation}
\end{widetext}
where the twist angle now enters in the last line through the parameter $s_\Phi$, which equals $+1$ for a twist angle of $\Phi = 0$ and $-1$ for $\Phi = \pi$; these are the only two values of the twist angle used here. The parameters of the two Hamiltonians are related as $J=-(J_x+J_y)/2$, $\Delta=(J_y-J_x)/2$, $\mu=-h$, and $V=J_z$. Moreover, the number operator on site $j$ is $\hat{n}_j = \hat{c}_j^\dagger\hat{c}_j$, while $\hat{P} = \exp(i\pi\sum_{j}\hat{n}_j)$ is the parity operator.
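For the small chains used here, the Hamiltonian can be constructed explicitly. The Python sketch below builds the dense spin representation of Eq.~\eqref{eq:spinhalf} by Kronecker products; it is a simplified illustration with open boundaries (the twisted boundary term of Eq.~\eqref{eq:bound_cond} is omitted), and production runs would use sparse matrices instead.
\begin{verbatim}
import numpy as np
from functools import reduce

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2)

def site_op(op, j, L):
    """Embed a single-site operator at site j of an
    L-site chain."""
    ops = [id2] * L
    ops[j] = op
    return reduce(np.kron, ops)

def xyz_hamiltonian(L, Jx, Jy, Jz, h):
    """Dense XYZ-chain Hamiltonian, open boundaries."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for j in range(L - 1):
        for J, s in ((Jx, sx), (Jy, sy), (Jz, sz)):
            H += J * site_op(s, j, L) @ site_op(s, j + 1, L)
    for j in range(L):
        H -= h * site_op(sz, j, L)
    return H
\end{verbatim}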
The Kitaev chain describes a one-dimensional superconductor with a $p$-wave pairing term that is proportional to $\Delta$, supporting two distinct topological phases. The two values of the twist angle, $\Phi = 0, \pi$, physically correspond to a magnetic flux equal to zero or half flux quantum threaded through the ring-shaped chain. These are the only distinct flux values that are consistent with superconducting flux quantization. It is useful to vary the boundary conditions, since in the non-interacting case ($V = 0$), which corresponds to the exactly solvable spin-1/2 XY model, the Loschmidt zeros can be labeled by the quasi-momentum $k_m = (2\pi m - \Phi)/L$ with $m = 0, \dots,L-1$ \cite{Heyl:2013}. Thus, by using the two different values of $\Phi$, we can sample the thermodynamic line of zeros twice as densely for a given system size. It turns out that even in the interacting case ($V\neq 0$) it is useful to vary the boundary conditions for the same reason.
We are now ready to investigate dynamical quantum phase transitions in the interacting Kitaev chain. To this end, we take for the initial state $\ket{\Psi_0}$ the ground state of the Hamiltonian~\eqref{eq:kitaev} with $|\mu /J| > 1$, which corresponds to the topologically trivial phase, and perform a quench into the topological regime with $|\mu/J|<1$ for later times, $t>0$. As shown in Fig.~\ref{fig:kitaev}, from the Loschmidt cumulants we can find the complex zeros of the Loschmidt amplitude even with attractive ($V < 0$) or repulsive ($V >0$) interactions, for which an analytic solution is not available. In the left column, we first consider the non-interacting case, where the thermodynamic lines of zeros can be determined analytically~\cite{Heyl:2013}. In panels \textbf{a} and \textbf{b}, we show the zeros found from the Loschmidt cumulants as the basepoint $\tau$ is moved along the paths denoted by A and B, respectively, while panel \textbf{c} shows the combined results. Remarkably, the Loschmidt cumulants allow us to map out the thermodynamic lines of zeros using chains of rather short lengths, $L=7-20$, and thereby identify the crossing points with the imaginary axis, corresponding to the real critical times, where a dynamical phase transition occurs. The comparison between the exact and the approximate zeros obtained from the Loschmidt cumulants provides an important estimate of the accuracy. In the worst cases, the accuracy is an order of magnitude better than the size of the markers in Fig.~\ref{fig:kitaev} (see App.~\ref{app:errors}). We note that our choice of the paths in Fig.~\ref{fig:kitaev} was guided by our knowledge of the zeros in the non-interacting case. However, more generally, without any specific knowledge of a system, one may choose paths that scan the complex plane, in particular along the imaginary
time axis and its immediate vicinity (since those zeros determine whether and when the system exhibits a dynamical phase transition).
Having benchmarked our approach in the non-interacting case, we move on to the strongly interacting regime. In the second column of Fig.~\ref{fig:kitaev}, we show the Loschmidt zeros for repulsive interactions, which tend to shift the critical crossing point with the imaginary axis to earlier times. A more dramatic effect is observed in the third and fourth columns, where we gradually increase the attractive interactions. In this case, the dynamical phase transition happens at later times, and eventually, for sufficiently strong interactions, the thermodynamic line of zeros no longer crosses the imaginary axis, implying the absence of a dynamical phase transition.
While in the noninteracting limit the small systems reproduce the thermodynamic lines essentially exactly, interactions give rise to finite-size effects when two different lines come close. Despite that, sufficiently isolated lines and segments, such as the ones that determine the dynamical phase transitions in Fig.~\ref{fig:kitaev}, remain scale-invariant. We stress that these results are obtained for very small chains of lengths from $L=10$ to $L=20$, a fact that, while remarkable, is in line with similar observations for the Lee-Yang zeros in classical equilibrium systems \cite{Deger:2018,Deger:2019,Deger:2020,Deger:2020b}. In particular, for strongly interacting systems, the use of such system sizes makes the approach very attractive from a computational point of view, since direct calculations of the Loschmidt amplitude typically require system sizes that are an order of magnitude larger, in generic cases with an exponential increase in the computational cost.
\begin{figure*}
\centering
\includegraphics[]{figure_3_blues.pdf}
\caption{{\bf The Heisenberg chain.} We quench the system from the Haldane phase to the large-$D$ phase. The initial state is the ground state of the model \eqref{eq:spinone} with $J=J_z=3B$ and $D=0$ (the AKLT state \cite{Affleck1987}). For the post-quench Hamiltonian, we set $B = 0$, while $D$ can take the values 2, 3, or 4. Here $J=J_z=1$ is the unit of energy and inverse time. \textbf{a}, Loschmidt zeros for $D=2$. Panel \textbf{a1} is a magnified view of the area within the black rectangle in panel \textbf{a}. From panels \textbf{a} and \textbf{a1} one can clearly see how the position of a Loschmidt zero for fixed $L$ depends on the twist angle $\Phi$, which is a finite-size effect. It is also useful to consider a fixed twist angle and vary the system size as in the case of the zeros connected by the dash-dotted line in panel \textbf{a1} ($\Phi = 0$, $L = 13$, $14$, $15$, $16$). The finite-size dependence is suppressed for the zeros corresponding to the twist angle $\Phi=\pi/2$, defining the effective thermodynamic line of zeros (solid line, see App.~\ref{app:boundary_conditions}). The critical time, determined by the crossing of the effective line with the imaginary axis, is in excellent agreement with the result of Ref.~\cite{Hagymasi:2019} (red bar) obtained using matrix product states (MPS). \textbf{b}, \textbf{c} Same as in panel \textbf{a} but with $D=3,4$. Finite-size effects are suppressed with increasing $D$. In panels \textbf{b1}, \textbf{b2}, \textbf{c1}, \textbf{c2} the crossings of the effective thermodynamic lines of zeros with the imaginary axis are shown and compared again with the critical times obtained in Ref.~\cite{Hagymasi:2019}.}
\label{fig:spin1_1}
\end{figure*}
\section{Spin-1 Heisenberg chain}
\label{sec:heisenberg}
The Kitaev chain from above possesses an exactly solvable limit, which provides an important benchmark for the use of the Loschmidt cumulants. However, exact solutions are generically not available, which makes the Loschmidt cumulants all the more useful. For this reason, we now consider the spin-1 Heisenberg chain, which harbors a rich phase diagram with both symmetry-broken phases and a topological phase, the Haldane phase \cite{Chen:2003}. The spin-1 Heisenberg chain is defined by the Hamiltonian
\begin{equation} \label{eq:spinone}
\begin{split}
\mathcal{\hat H} = &\sum_{j = 1}^{L}\left[J\big(\hat S_j^x\hat S_{j+1}^x+\hat S_j^y\hat S_{j+1}^y\big) +J_z\hat S_j^z\hat S_{j+1}^z\right]\\ &+ D\sum_{j = 1}^L(\hat{S}_j^z)^2+B\sum_{j = 1}^{L}(\hat{ \mathbf{S}}_j\cdot\hat{ \mathbf{S}}_{j+1})^2\,,
\end{split}
\end{equation}
where $\hat S_j^\alpha$ are the spin-1 operators, the exchange couplings between neighboring spins are denoted by $J$ and $J_z$, while $D$ and $B$ characterize the single-spin uniaxial anisotropy and the biquadratic exchange coupling, respectively. The first line defines the spin-1 XXZ model, while for $J = J_z = 3B$ and $D = 0$, one obtains the Affleck-Kennedy-Lieb-Tasaki (AKLT) model, whose ground state is known explicitly \cite{Affleck1987}, despite the fact that the Hamiltonian~\eqref{eq:spinone} is not exactly solvable. Again, we employ twisted boundary conditions as defined in Eq.~\eqref{eq:bound_cond} for five different values of the phase $\Phi = 0,\, \pi/4,\, \pi/2,\, 3\pi/4,\, \pi$.
We consider two kinds of quenches in which different parameters in the Hamiltonian~\eqref{eq:spinone} are abruptly changed at $t=0$.
\begin{figure*}
\centering
\includegraphics[width=1.99\columnwidth]{figure_4_blues.pdf}
\caption{\textbf{Quench from the Haldane phase to the N\'eel phase.} The initial state is the ground state of the model \eqref{eq:spinone} with $J_z/J=1/2$ and $D=B=0$. The quench is performed by changing the parameter $J_{z}$ to the values $J_{z} = 1,\,2,\,3,\,4$ (in units of $J$). \textbf{a,} Loschmidt zeros for the quench to $J_{z}=1$, which is not sufficiently large to reach the N\'eel phase. In this case, the zeros do not cross the imaginary axis, and no dynamical phase transition occurs. \textbf{b}, \textbf{c}, \textbf{d} Similar to panel \textbf{a}, but with $J_z=2,3,4$. In this case, several dynamical phase transitions occur as shown for example in panels \textbf{e1-3} and \textbf{f}-\textbf{g}. The critical times are shown as red crosses and are estimated as done in Fig.~\ref{fig:spin1_1} using the zeros for $\Phi = \pi/2$ only (see App.~\ref{app:boundary_conditions} for details).}
\label{fig:spin1_2}
\end{figure*}
In the first quench, we initialize the system in the AKLT ground state, which is a representative of the topological Haldane phase, and evolve it with the Hamiltonian~\eqref{eq:spinone} using the parameters $B = 0$, $J = J_z >0$ and $D/J = 2,\,3,\,4$. The ground states of the post-quench Hamiltonians are within the topologically trivial large-$D$ phase. The same quenches have been explored in Ref.~\cite{Hagymasi:2019} for system sizes up to $L=120$ using matrix product states, providing us with an important benchmark.
Figure~\ref{fig:spin1_1} shows the Loschmidt zeros for finite system sizes extracted from the Loschmidt cumulants for the first quench. We use twisted boundary conditions to gauge finite-size effects as the position of the Loschmidt zeros is expected to become insensitive to the phase $\Phi$ for very large systems. By contrast, for the relatively small system sizes used in Fig.~\ref{fig:spin1_1}, finite-size effects are pronounced in particular in panel \textbf{a}, which shows the zeros for the $D/J = 2$ quench. This value is the closest to the critical one $D_c/J \simeq 1$ (with $J = J_z$ and $B = 0$) separating the Haldane phase from the large-$D$ phase \cite{Chen:2003}, providing a plausible reason for the enhanced finite-size effects. Importantly, as discussed in App.~\ref{app:boundary_conditions}, the oscillatory pattern of zeros for different system sizes and twist angles is highly regular, which enables us to filter out the finite-size effects. In this prescription, a thermodynamic line of zeros is approximated by the smooth line of zeros emerging at the twist angle $\Phi=\pi/2$.
The critical times of the transition, obtained from the crossings of the thermodynamic lines of zeros with the imaginary axis (see panels \textbf{a1}, \textbf{b1-2} and \textbf{c1-2} in Fig.~\ref{fig:spin1_1}), are in excellent agreement with the critical times obtained directly from the Loschmidt amplitude that was calculated using state-of-the-art computations in Ref.~\cite{Hagymasi:2019}. However, in contrast to Ref.~\cite{Hagymasi:2019}, which considers nearly an order of magnitude larger systems, our results are obtained from chain lengths up to $L=16$. This comparison provides an illustration of the power of our method in treating strongly correlated many-body systems.
While the exact correspondence between dynamical phase transitions and the equilibrium phase transitions of the respective model remains unknown \cite{Heyl:2018}, dynamical phase transitions are often observed when the ground states of the initial and final Hamiltonians belong to different equilibrium phases. To explore this general scenario in the case of transitions between a topological phase and a symmetry-broken phase in a strongly correlated system, we study, for the first time, quenches between the topological Haldane phase and the symmetry-broken N\'eel phase~\cite{Chen:2003}. In Fig.~\ref{fig:spin1_2}, we depict the Loschmidt zeros for the initial state with $D=B=0$ and quenching $J_z$ from $J_{z}/J = 1/2$ to the final values $J_{z}/J = 1,\,2,\,3,\,4$. The equilibrium quantum phase transition occurs at the critical value $J_{z,c} \simeq 1.2J$~\cite{Chen:2003}. Indeed, our results confirm that no dynamical phase transition is observed when $J_z/J = 1$, since all the Loschmidt zeros have negative real parts as shown in panel \textbf{a} of Fig.~\ref{fig:spin1_2}. By contrast, for the other final values of $J_{z}$, which would put the equilibrium system in the antiferromagnetic N\'eel phase, dynamical phase transitions are observed. As in the first quench, finite-size effects are suppressed for quenches where the final state resides deeper in the gapped phase.
As we see in panels \textbf{e1-3} of Fig.~\ref{fig:spin1_2}, the Loschmidt zeros for different system sizes and boundary conditions have a structure similar to the one observed in the Haldane-to-large-$D$ quench. The same prescription as above irons out the finite-size oscillations and results in a smooth approximation of the thermodynamic lines of zeros. The critical times can then be accurately read off from the data obtained for chain lengths of $L \leq 16$, as in the case of the first quench.
\section{Experimental perspectives}
\label{sec:exp}
In the previous sections, we focused on using the Loschmidt cumulants for predicting dynamical phase transitions based on numerical calculations. However, as we will now discuss, our approach also provides perspectives for future experiments. We will show that it is possible to predict the first critical time of a quantum many-body system by measuring the fluctuations of the energy in the initial state. We will also discuss the prospects of implementing our method on a near-term quantum computer with a small number of qubits.
\begin{figure}
\centering
\includegraphics[width=0.96\columnwidth]{fig5.pdf}
\caption{\textbf{Determination of the critical time from the initial energy fluctuations.} Loschmidt zeros for the Heisenberg chain~\eqref{eq:spinone} obtained from the energy fluctuations in the ground state of the model for $J=J_z=3B$ and $D=0$ at the initial time $\tau=0$, while the energy is determined by the post-quench Hamiltonian with $B = 0$ and $D=4$. Here $J=J_z=1$ is the unit of energy and inverse time. The zeros correspond to chains of lengths $L=5,\ldots,9$, and in the upper (lower) panel we have extracted the zeros using energy cumulants of orders $n=4,\ldots,14$ ($n=8,\ldots,19$). Importantly, the zeros converge to their exact positions with increasing cumulant orders as it can be seen by comparing the panels. The grey line corresponds to the zeros in panel \textbf{c1} of Fig.~\ref{fig:spin1_1}, and the estimate of the critical time is indicated with a red cross. }
\label{fig:exp}
\end{figure}
The Loschmidt moments are generally complex-valued, and it is not obvious how they can be measured. However, at the initial time, $\tau=0$, the Loschmidt moments simplify to the moments of the post-quench Hamiltonian with respect to the initial state as $\langle \mathcal{\hat H}^n \rangle_0 = \bra{\Psi_0}\mathcal{\hat H}^n \ket{\Psi_0}$. Thus, by repeatedly preparing the system in the state $\ket{\Psi_0}$ and measuring the energy given by the post-quench Hamiltonian $\mathcal{\hat H}$, one can construct the distribution of the energy and extract the corresponding moments and cumulants. From the cumulants, it is then possible to extract the closest Loschmidt zeros as demonstrated in Fig.~\ref{fig:exp} following a quench in a Heisenberg chain of lengths $L=5,\ldots,9$. From these results, we predict the critical time to be around $t_c\simeq 0.42$ as indicated by a red cross. This perspective is fascinating: by measuring the initial energy fluctuations, it is possible to predict the \emph{later} time at which a dynamical phase transition will occur.
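In such an experiment, the cumulants at $\tau=0$ follow directly from the histogram of measured energies. A minimal, self-contained Python sketch (ours; it assumes an array of repeated projective energy measurements) reads:
\begin{verbatim}
import numpy as np
from scipy.special import comb

def cumulants_from_samples(energies, n_max):
    """Sample cumulants of the post-quench energy in the
    initial state, estimated from repeated measurements."""
    e = np.asarray(energies, dtype=float)
    mom = [np.mean(e**n) for n in range(1, n_max + 1)]
    cum = []
    for n in range(1, n_max + 1):
        c = mom[n - 1] - sum(comb(n - 1, m - 1) * cum[m - 1]
                             * mom[n - m - 1]
                             for m in range(1, n))
        cum.append(c)
    return np.array(cum)
\end{verbatim}
The resulting cumulants can then be passed to the zero-extraction step of Sec.~\ref{sec:LC} with basepoint $\tau=0$.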
The idea behind such an experiment does not depend in detail on the actual physical implementation, and from a practical point of view different platforms may provide certain advantages. We expect, for example, that an experiment could be realized with atoms in optical lattices~\cite{Eckardt_2017} or with spin chains on surfaces \cite{Choi_2019}, systems that both offer a high degree of control and flexibility. As illustrated in Fig.~\ref{fig:exp}, it will be necessary to measure the high cumulants of the energy fluctuations. For large systems, accurate measurements of high cumulants are challenging, since the central-limit theorem dictates that distributions tend to be Gaussian with nearly vanishing high cumulants. However, for the small systems that we consider, the situation is different, and several quantum transport experiments have measured cumulants of up to order 20 \cite{Flindt_2009,Fricke_2010} and used them for determining the zeros of generating functions \cite{Flindt_2013,Brandner_2017}, which are similar to the Loschmidt amplitude. Thus, an experimental determination of Loschmidt zeros for small interacting quantum systems appears feasible with current technology.
Our method may also be implemented on small near-term quantum computers, which are now becoming available. Such quantum computers allow for the specific tailoring of any desired Hamiltonian and for time-evolving an initial state both in real and imaginary time~\cite{Sun_2021,Lin_2021}. Thus, it will be possible to evaluate a time-evolved state of the form $\ket{\Psi(\tau)} =e^{-\tau\mathcal{\hat H}}\ket{\Psi_0}$ and subsequently calculate the Loschmidt moments $\langle \mathcal{\hat H}^n \rangle_\tau = \bra{\Psi_0}\mathcal{\hat H}^n \ket{\Psi(\tau)}/\langle\Psi_0 |\Psi(\tau)\rangle$ and the corresponding cumulants from which the Loschmidt zeros are obtained. Again, the favorable scaling properties of our method become important, as they make it possible to predict the critical times of a quantum many-body system with only 10 to 20 constituents. Such sizes can soon be simulated on quantum computers with a limited number of qubits.
\section{Conclusions}
\label{sec:concl}
We have demonstrated that Loschmidt cumulants are a powerful tool to unravel dynamical phase transitions in strongly interacting quantum many-body systems after a quench, making it possible to accurately predict the critical times of a quantum many-body system using remarkably small system sizes. Using modest computational power, we have explored dynamical phase transitions in the Kitaev chain and the spin-1 Heisenberg chain with a specific focus on the role of strong interactions. As we have shown, our approach circumvents the existing bottleneck of computing the full non-equilibrium dynamics of large quantum many-body systems, and instead we track the zeros of the Loschmidt amplitude in the complex plane of time in a similar spirit to the classical Lee-Yang theory of equilibrium phase transitions. As such, our approach paves the way for systematic investigations of the far-from-equilibrium properties of interacting quantum many-body systems, and we foresee many exciting perspectives ahead. In particular, our method can immediately be applied to dynamical phase transitions in dimensions higher than one, and the ease of implementing it may be critical for comprehensive investigations of the finite-size scaling close to a dynamical phase transition. We have also shown that our approach paves the way for exciting experimental developments by making it possible to predict the first critical time of a quantum many-body system in the thermodynamic limit by measuring the initial energy fluctuations in a much smaller system. In addition, due to the favorable scaling of our method, it seems feasible that it can be implemented on a near-term quantum computer with a limited number of qubits. In a broader perspective, the advances presented here may not only be useful for understanding the dynamical non-equilibrium properties of large quantum systems. They may also be helpful in designing novel quantum materials with specific, desired properties.
\acknowledgements
We thank the authors of Ref.~\cite{Hagymasi:2019} for providing us with their results for the spin-1 Heisenberg chain, which we used to extract the critical times indicated in Fig.~\ref{fig:spin1_1}. The work was supported by the Academy of Finland through the Finnish Centre of Excellence in Quantum Technology (project numbers 312057 and 312299) as well as projects number 308515, 330384, 331094, and 331737. F.B.~acknowledges support from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement number 892956. T.O.~acknowledges project funding from Helsinki Institute of Physics.
\section{Introduction}
\label{intro}
Exoplanet discoveries thus far have been dominated by indirect techniques, mostly due to the success of the radial velocity (RV) and transit techniques. Prior to the discoveries of the Kepler mission \citep{borucki2010a,borucki2016}, the majority of exoplanets were discovered using the RV method \citep{butler2006, schneider2011}, with a growing number of ground-based transit discoveries \citep{konacki2003, alonso2004, bakos2007, kane2008a}. Indirect detection techniques rely on a detailed characterization of the host star, since the properties of the host star determine the extracted planetary parameters \citep{seager2003,vanbelle2009a}. Of particular importance is the effect of stellar activity, since this can severely limit the detection of exoplanets around active stars \citep{desort2007,aigrain2012,zellem2017}, and can even result in false-positive detections, whereby stellar activity cycles can masquerade as exoplanet signatures \citep{nava2020}. Indeed, there have been numerous instances of exoplanet claims using the RV method that were later determined to be the result of stellar activity \citep{henry2002, robertson2014, robertson2015, kane2016a}. This potential confusion may be mitigated in certain cases by utilizing precision photometry for known exoplanet hosts \citep{kane2009c}, such as data acquired by transit surveys. A transit detection of an RV planet can provide confirmation of the planet, as well as provide an additional means to disentangle stellar variability and planetary signatures \citep{boisse2011, diaz2018b}. The Transiting Exoplanet Survey Satellite (TESS) \citep{ricker2015} provides an invaluable photometric data source for known exoplanet hosts \citep{kane2021b} since it is monitoring most of the sky, and is especially well-suited for observing the bright host stars typical of RV exoplanet searches \citep{fischer2016}.
Stellar activity has long been known to affect and sometimes limit RV exoplanet searches \citep{saar1997b}, and can particularly impact detection of planets within the Habitable Zone \citep{vanderburg2016b}. Photometric monitoring of known host stars has been used in numerous cases to determine the effects of their variability on planetary signatures, such as for HD~63454 \citep{kane2011e} and HD~192263 \citep{dragomir2012a}. Another example of a host star exhibiting significant stellar variability is the case of BD-06~1339, which was discovered to host planets by \citet{locurto2013} using data from the High Accuracy Radial velocity Planet Searcher (HARPS) spectrograph \citep{pepe2000}. These observations revealed two planetary signatures with orbital periods of 3.87 days and 125.94 days, with minimum planetary masses of 0.027 and 0.17 $M_J$, respectively. However, photometry of sufficient precision, cadence, and duration was not available in order to confirm a transit signature.
Here, we present an investigation into the BD-06~1339b planetary signature by analyzing the associated TESS photometry and re-analyzing the existing HARPS RV data. In Section~\ref{system}, we discuss the properties of the system, including the stellar parameters, and the possible planets within the system. Section~\ref{data} describes the data analysis for the system, where the data sources are comprised of HARPS RV data and the precision photometry from TESS. Section~\ref{false} combines these results to present an argument that the RV variations originally detected could alternatively be consistent with the intrinsic variability of the host star. We provide concluding remarks in Section~\ref{conclusions}, and outline how the photometric capabilities from TESS not only serve to discover new planets, but also have considerable utility in testing known exoplanet hypotheses.
\section{System Properties}
\label{system}
BD-06~1339 (HIP~27803, GJ~221, TIC~66914642) is a relatively bright high proper-motion star located at a distance of 20.27~pc \citep{brown2018,brown2021}. According to \citet{locurto2013}, BD-06~1339 is a late-type dwarf star, with a spectral classification of K7V/M0V and an age similar to that of the Sun. The star has an effective temperature of 4324~K, a V magnitude of 9.70, and a stellar mass of 0.7~$M_\odot$. The star was included among previously reported variable stars in initial spectroscopic analyses performed in 1996 for the Palomar/MSU Nearby Star Spectroscopic Survey \citep{hawley1996}. A further survey of chromospheric activity among cool stars by \citet{borosaikia2018} found that BD-06~1339 is moderately active, with an activity index of $\log R'_{HK} = -4.71$. Such magnetic activity is prevalent in later stellar spectral types \citep{mcquillan2012}, motivating the interest in this star's activity for the present study.
The host star is currently reported to have two companions, BD-06~1339b and BD-06~1339c, both of planetary mass and discovered via the RV technique \citep{locurto2013}. Though discovered simultaneously, their properties differ greatly; BD-06~1339b has a minimum mass of 8.5~$M_\oplus$ and orbits its host star in 3.873 days at a semi-major axis of 0.0428~AU. Its sibling, BD-06~1339c, has a minimum mass of 53~$M_\oplus$ and an orbital period of 125.94 days at a semi-major axis of 0.435~AU. The \citet{locurto2013} analysis of the RV data for BD-06~1339b adopts a fixed circular orbit ($e = 0$) for the b planet, and derives an eccentricity of 0.31 for the c planet. \citet{tuomi2014} conducted a statistical reanalysis of the RVs for BD-06~1339, which we further investigate in Section~\ref{rv}.
\section{Data Analysis}
\label{data}
The motivation for re-analyzing BD-06~1339b stems from a broad stellar variability analysis of stars observed during the TESS primary mission at 2-min cadence (Fetherolf et al. in prep.), and a further investigation into the stellar variability of known exoplanet host stars (Simpson et al. in prep.). The broad stellar variability analysis by Fetherolf et al. (in prep.) searches for periodic photometric modulations on timescales up to the duration of a single orbit of the TESS spacecraft (0.01--13~days), during which TESS obtained continuous observations. The $\sim$700 exoplanet host stars that were selected for the follow-up variability analysis by Simpson et al. (in prep.) includes planets with orbital periods shorter than 13 days that were discovered by either their RV or transit signatures. Since Kepler exoplanet host stars are typically faint and not ideal for RV follow-up observations, they are not included in the stellar variability analysis of known exoplanet host stars. In addition to possible transit events or variations due to stellar activity, some of these planets may also exhibit interactions with their host stars, such as phase variations or star-surface irregularities.
The full TESS light curve, Lomb-Scargle (L-S) periodogram, and light curve that was phase-folded on the most significant photometric variability signature were each visually inspected for the $\sim$700 known exoplanet host stars. A target was deemed significantly variable if both the normalized and phase-folded light curves displayed sinusoidal behavior that did not align with known spacecraft systematics (i.e., momentum dumps), and if the periodogram maximum exhibited an isolated peak with at least 0.001 normalized power that also exceeded the 0.01 false alarm probability level. For each known exoplanet, the extracted photometric variability period was compared to the orbital period, as reported by either the TESS Objects of Interest (TOI) catalog \citep{guerrero2021} or by cross-referencing the target in the NASA Exoplanet Archive \citep{nea}. Close-period matches between the photometric variability and the planetary orbital period were defined as being within 5--10\% of each other. Out of the $\sim$700 targets subjected to the visual analysis, approximately 180 systems displayed prominent photometric variable behavior, close-period matches, or both.
BD-06~1339b was among the set of targets that matched these criteria, and the resulting TESS light curve, periodogram, and phase-folded light curve are shown in Figure~\ref{fig:variability} (see also Section~\ref{phot}). In this paper we revisit the analysis of the BD-06~1339 system by including the TESS photometry that was unavailable at the time of previous studies. In Section~\ref{rv} we summarize our re-analysis of the RVs using the data provided by the updated HARPS reduction pipeline \citep{trifonov2020}. We then discuss our in-depth analysis of the TESS photometry in Section~\ref{phot}, where we search for the presence of planetary transits, atmospheric variations, and stellar activity.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{BD-06_1339_variability_PDC_paperplot.eps}
\caption{The TESS light curve (left), periodogram (center), and phase-folded light curve (right) of BD-06~1339. The red curve represents a sinusoidal model fit representing the light curve variability. The blue triangles indicate timings of spacecraft thruster firings (i.e., momentum dumps). This system stood out in our visual analysis because of its pronounced sinusoidal behavior and the strong single peak in the periodogram, which indicates a strong variability signal. The detected variability periodicity (3.859\,days) differs by only 0.014\,days ($\sim$0.4\%) from the reported orbital period for BD-06~1339b (3.873\,days).}
\label{fig:variability}
\end{figure*}
\subsection{Spectroscopic Analysis}
\label{rv}
\citet{locurto2013} verified the planets orbiting BD-06~1339 by requiring the normalized Fourier power of the L-S periodogram of the RV time series to have a false alarm probability of $<{10^{-4}}$. The $\log R'_{HK}$ activity index was considered poor quality, and therefore was not utilized in the overall analysis. At this stage of the analysis, BD-06~1339b was barely discernible within the RV signal of BD-06~1339c, appearing initially only as additional variations. To verify the planetary nature of this signal, the discovery team cut the datasets in half to exclude long-term trends. These variations then increased in strength throughout the observation period as the Ca II H re-emission decreased. The discovery team was unable to determine any other longer-term trends due to their limited window of 102 observations over 8 years. They deemed BD-06~1339b an instructive example of how planets can hide within the activity of variable stars.
To further analyze the BD-06~1339 system, \citet{tuomi2014} implemented a more meticulous probability check involving an independent statistical method of subsequent samplings and the utilization of log-Bayesian evidence ratios. The Bayesian analysis used by \citet{tuomi2014} evaluated the RV time series as if they were observed in real time. At each iteration, a ``new'' RV measurement was added to the dataset from which the best-fit system parameters were determined. In this case, they utilized HARPS and Planet Finder Spectrograph (PFS; \citealp{crane2010}) velocities in their credibility tests. The HARPS velocities recovered the two originally reported planetary signals, consistent with each other, whereas PFS could not discern any of the signals previously found by \citet{locurto2013}. The system itself was not explicitly observed for this publication, which instead relied on the data available at the time. Their results focused on the statistical detection of a third planet, d, with a $\sim$400\,day orbital period, and they considered BD-06~1339b a confirmed planet.
The observations of BD-06~1339 were acquired by the HARPS team and originally published by \citet{locurto2013}. The data have since been re-reduced and include corrections for several systematics within the observations \citep{trifonov2020}. With the improved precision and availability of the data, a re-analysis can derive the orbital parameters of the known companions in the system with better precision and potentially reveal smaller signals that went undetected before the re-reduction of the RVs.
We performed a re-analysis of the RVs for BD-06~1339 using the re-reduced data published by \citet{trifonov2020}. We first ran an RV Keplerian periodogram on the dataset to search for significant signals using \texttt{RVSearch} \citep{rosenthal2021}. The \texttt{RVSearch} algorithm iteratively searches for periodic signals present in the dataset and calculates the change in the Bayesian Information Criterion ($\Delta$BIC) between the model at the current grid point and the best-fit model to assess the goodness of fit. Such a search can yield signals of planetary origin as well as signals due to stellar activity. We adopted signals returned by \texttt{RVSearch} if they peaked above the 0.1\% false alarm probability level. The search returned two significant signals, one at 125 days and another at 3.9 days, consistent with the results from \citet{locurto2013}.
We then used the RV modeling toolkit \texttt{RadVel} \citep{fulton2018a} to fully explore the orbital parameters of these two signals and to assess their associated uncertainties. We provided the orbital parameter initial guesses for the two signals using the values returned by \texttt{RVSearch} and allowed all parameters to vary, including an RV vertical offset, RV jitter, and a linear trend. We fit the data with maximum a posteriori estimation and explored the posteriors of the parameters through Markov Chain Monte Carlo (MCMC). The MCMC exploration successfully converged, and we show the results in Table~\ref{tab:param}.
The orbital parameters of the two signals are mostly consistent with those reported by \citet{locurto2013}, except that the orbit of the c planet is preferred to be nearly circular ($e_c\sim0.09$) instead of mildly eccentric ($e\sim0.31$), as proposed by \citet{locurto2013}. In addition, there appears to be a significant linear trend ($\sim7\sigma$) present in the data that could be indicative of an additional massive companion on a long-period orbit in the outer regime of this system. Both the linear trend and the two nearly circular orbits are supported by Bayesian model comparisons. The RV signature of BD-06~1339b is shown in the left panel of Figure~\ref{fig:RvvsFlux}, where the contribution from the c planet has been removed. The results of this latest RV re-analysis are consistent within the uncertainties of the original analysis performed by \citet{locurto2013}.
\begin{deluxetable}{lcc}[tbp]
\tablecaption{Updated RV System Parameters of BD-06~1339.
\label{tab:param}}
\tablehead{
\colhead{Parameters} &
\colhead{b} &
\colhead{c}}
\startdata
$P$ (days) & $3.87302^{+0.00036}_{-0.00033}$ & $125.49\pm0.13$ \\
$Tc$ (BJD) & $2455000.91^{+0.21}_{-0.17}$ & $2455279.6^{+2.0}_{-1.8}$ \\
$Tp$ (BJD) & $2455001.65^{+0.39}_{-0.62}$ & $2455285^{+16}_{-14}$ \\
$e$ & $0.22^{+0.16}_{-0.13}$ & $0.089^{+0.054}_{-0.052}$\\
$\omega$ (deg) & $181.23^{+38.39}_{-55.00}$ & $110.58^{+49.27}_{-38.39}$ \\
$K$ (m s$^{-1}$) & $3.47^{+0.52}_{-0.49}$ & $8.32^{+0.46}_{-0.47}$ \\
$M_p$ ($M_{\rm E}$) & $6.45^{+1.0}_{-0.98} $ & $50.9^{+4.5}_{-4.4}$ \\
$a$ (au) & $0.0429^{+0.0014}_{-0.0015}$ & $0.436^{+0.014}_{-0.015}$ \\
\enddata
\tablecomments{$\omega$ values are those of the star, not of the planet. The RV fit includes a linear trend of $\dot{\gamma}$ = $-0.00239^{+0.00032}_{-0.00033}$ m s$^{-1}$ d$^{-1}$.}
\end{deluxetable}
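For reference, the best-fit model of Table~\ref{tab:param} can be evaluated with a short script. The following Python sketch (our illustration; the actual fitting was performed with \texttt{RadVel}) solves Kepler's equation by Newton iteration and sums the contributions of the two planets and the linear trend; the reference epoch of the trend is an arbitrary choice here.
\begin{verbatim}
import numpy as np

def kepler_E(M, e, tol=1e-10):
    """Solve Kepler's equation E - e*sin(E) = M by Newton
    iteration (adequate for these low eccentricities)."""
    E = np.array(M, dtype=float)
    for _ in range(100):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_one_planet(t, P, Tp, e, omega_deg, K):
    """Stellar RV from one Keplerian orbit (omega of the star)."""
    w = np.radians(omega_deg)
    M = 2.0 * np.pi * (((t - Tp) / P) % 1.0)
    E = kepler_E(M, e)
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    return K * (np.cos(nu + w) + e * np.cos(w))

def rv_model(t, t_ref=2455000.0):
    """Two-planet model plus linear trend (values from Table 1;
    t in BJD, RV in m/s; t_ref is an arbitrary reference epoch)."""
    return (rv_one_planet(t, 3.87302, 2455001.65, 0.22,
                          181.23, 3.47)
            + rv_one_planet(t, 125.49, 2455285.0, 0.089,
                            110.58, 8.32)
            - 0.00239 * (t - t_ref))
\end{verbatim}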
\subsection{Photometric Analysis}
\label{phot}
\citet{gillon2017c} used the Warm mode of the Spitzer mission to search for transits of 24 low-mass planets (all in single-planet systems) discovered through the RV method, including BD-06~1339b. The Spitzer photometry revealed no reliable transits for 19 of the 24 planets, including BD-06~1339b. Specifically, BD-06~1339b did not display a transit within the observation window, although the photometry did not cover approximately 20\% of the possible transit window. Since then, TESS observed BD-06~1339 at 2-min cadence nearly continuously during the observations of Sector 6. In this section, we use the TESS photometry to search for transits by either the b or c planets and for atmospheric phase variations caused by the b planet.
\begin{figure*}
\plottwo{RV_new.png}{Flux_new.png}
\caption{Left: The RV signature of BD-06 1339b (c planet's signal removed). Right: The TESS photometry phase-folded on the orbit of BD-06~1339b. The black curves represent sinusoidal fits to the data.}
\label{fig:RvvsFlux}
\end{figure*}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Trendmatchnew.png}
\caption{The sinusoidal fits to the RV (red) and flux (blue) curves from Figure~\ref{fig:RvvsFlux}, but with the amplitudes normalized to unity. A strong correlation in the phase offset can be seen around 0.7 phase.}
\label{fig:Correlate}
\end{figure}
BD-06~1339 was observed during TESS Sector 6 (2018~Dec~11--2019~Jan~07) at 2-min cadence and TESS Sector 33 (2020~Dec~17--2021~Jan~13) at 30-min cadence. The TESS light curves and full-frame images are publicly available through the Mikulski Archive for Space Telescopes\footnote{\url{https://archive.stsci.edu/}} (MAST). Since the anticipated transit of BD-06~1339b is on the order of $\sim$2\,hr, we elect to only use the 2-min cadence light curve from the Sector 6 observations. We use the original data release of the pre-search data conditioning (PDC) light curve that was processed by the Science Processing Operations Center (SPOC) pipeline \citep{Jenkins2016} and additionally remove any observations denoted with poor quality flags or that are 5$\sigma$ outliers. The L-S periodogram \citep{Lomb1976, Scargle1982} is then computed on the BD-06~1339 light curve using an even frequency spacing of 1.35\,min$^{-1}$, where we find a maximum normalized power of $\sim$0.0038 at $3.859\pm0.325$\,days. The 0.01 false alarm probability level for the periodogram of the BD-06~1339 light curve corresponds to 0.0012 normalized power, with the peak of the periodogram having a $\ll$10$^{-4}$ false alarm probability.
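The periodogram analysis described above can be reproduced with public tools. The following Python sketch (ours; it assumes \texttt{lightkurve}~2.x and that the Sector~6 SPOC light curve is available on MAST) downloads the 2-min PDC light curve, applies the quality and outlier cuts, and locates the dominant periodicity.
\begin{verbatim}
import lightkurve as lk
from astropy.timeseries import LombScargle

# Sector 6, 2-min PDC light curve of BD-06 1339 (TIC 66914642)
lc = lk.search_lightcurve("TIC 66914642", mission="TESS",
                          sector=6, author="SPOC",
                          exptime=120).download()
lc = lc.remove_nans().remove_outliers(sigma=5).normalize()

ls = LombScargle(lc.time.value, lc.flux.value)
freq, power = ls.autopower()
best_period = 1.0 / freq[power.argmax()]   # expect ~3.86 d
fap = ls.false_alarm_probability(power.max())
\end{verbatim}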
Our L-S periodogram analysis of the TESS light curve reveals a sinusoidal periodicity that is consistent with the orbital period of the b planet ($3.8728\pm0.0004$~days) within their uncertainties (see Figure~\ref{fig:variability}). A planet's orbital period may be extracted from a periodogram analysis if transit events are not properly removed from the observed light curve. However, we do not observe transit events by either the b or c planets in the TESS photometry (see Figure~\ref{fig:RvvsFlux}), which is consistent with the findings of \citet{gillon2017c}. A significant sinusoidal amplitude could also indicate the presence of a planet-induced photometric phase curve caused by its day-side reflection or excess thermal emission. If the phase curve is caused by the day-side reflection of the planet, then the maximum brightness of the phase-folded light curve is expected to peak at 0.5 phase when we see the greatest area of the planet illuminated from our point of view. Alternatively, atmospheric winds that redistribute heat from the day to night-side could cause the hottest region of the atmosphere to be shifted eastwards from the sub-stellar point, such that the phase-folded light curve peaks prior to 0.5 phase \citep[e.g.,][]{Showman2013, Heng2015}.
We use the measured time of conjunction (i.e., expected transit time) from the RV analysis to assess both the shape and phase of maximum amplitude of the TESS phase-folded light curve for BD-06~1339b. The full phase curve is fit using a double harmonic sinusoidal function, which allows for modulations caused by Doppler boosting and ellipsoidal variations in addition to the reflection caused by the day-side of the planet \citep[see][]{Shporer2017a}. The first cosine harmonic component represents the modulations caused by day-side reflection or thermal emission, such that the maximum brightness occurs at 0.5 phase. The phase-folded light curve of BD-06~1339b exhibits a significant sinusoidal modulation of $\sim$40 ppm in the TESS photometry (see right panel of Figure~\ref{fig:RvvsFlux}) with a maximum brightness at the third quadrature of the b planet's orbital phase (0.73 phase). In addition to the maximum brightness being at a phase that is inconsistent with day-side reflection or thermal emission, the amplitude of the phase curve is $\sim$10 times greater than expected for such a small planet.\footnote{The day-side reflection modulations of an 8.5\,$M_\oplus$ planet with an albedo of 0.3 are expected to be on the order of $\sim$2\,ppm.}
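A minimal version of this fit, assuming the cleaned light curve \texttt{lc} from the previous sketch and the ephemeris of Table~\ref{tab:param}, is given below; the TESS time stamps (BTJD) are shifted by 2457000 days to place them on the BJD scale of the RV ephemeris.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def phase_curve(phi, a0, a1c, a1s, a2c, a2s):
    """Double-harmonic model: the fundamental captures day-side
    reflection/thermal emission and Doppler boosting, the first
    harmonic ellipsoidal variations; a negative a1c places the
    maximum brightness at phase 0.5."""
    return (a0
            + a1c * np.cos(2 * np.pi * phi)
            + a1s * np.sin(2 * np.pi * phi)
            + a2c * np.cos(4 * np.pi * phi)
            + a2s * np.sin(4 * np.pi * phi))

P, Tc = 3.87302, 2455000.91            # days, BJD (Table 1)
t_bjd = lc.time.value + 2457000.0      # TESS BTJD -> BJD
phase = ((t_bjd - Tc) / P) % 1.0
popt, pcov = curve_fit(phase_curve, phase, lc.flux.value)
\end{verbatim}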
\section{False-Positive Planetary Signature?}
\label{false}
The results described above cast doubt on the planetary origin of the signal ascribed to BD-06~1339b. This target may instead be a case of intrinsic stellar variability masquerading as a planetary signal, making BD-06~1339b a false positive. While it is not impossible for a system to exist in which a planet orbits at the same period as its host star's variability, a coincidence to within $\sim$0.01 days between the two is highly unlikely. A visual comparison of the RVs and the stellar flux of the host star is enough to raise questions, but we must quantify our results. We further investigate the nature of BD-06~1339b by comparing the phase signature in the RVs and photometry, searching for correlations in the spectral activity indicators, and considering the likelihood of BD-06~1339 exhibiting periodic stellar activity at $\sim$3.9\,days.
Figure~\ref{fig:RvvsFlux} shows the RV signature and the photometric variations in phase with the anticipated orbit of BD-06~1339b. We fit a simple sinusoidal function to each phase curve and find that the maximum of the RVs occurs at 0.69 phase with an amplitude\footnote{This amplitude is estimated assuming a simple sinusoidal function, and thus a zero eccentricity.} of 3.3\,m\,s$^{-1}$, and the maximum of the photometric flux occurs at 0.73 phase with an amplitude of 40\,ppm. Interestingly, the RVs and the photometric variations peak at approximately the same phase. The correlation between these sinusoidal functions is further emphasized in Figure~\ref{fig:Correlate}, where the two functions are normalized by having their amplitudes set to unity.
Clearly there is a very strong correlation between the RVs and the photometry, but they should instead be offset from each other in phase. If the photometric variations were caused by atmospheric reflection or thermal emission of BD-06 1339b, the photometric variations should peak at 0.5 phase or earlier due to winds \citep[e.g.,][]{Showman2013, Heng2015}. However, the observed phase offset is subject to uncertainties from the time of conjunction determined from the RVs (0.2\,days) and the time between the RV and TESS observations ($\sim$3500\,days). Propagating the time of conjunction, and thus phase offset, out to the time of the TESS observations results in an uncertainty of 0.5\,days (13\% of the orbital period), which could render the correlation in phase between the RVs and photometry as a coincidence.
In addition to the photometry, we performed an analysis on all of the available RV activity spectral indicators provided by the HARPS RV database \citep{trifonov2020} to investigate whether any significant activity signals are consistent with the reported period for BD-06~1339b. We used a Generalized L-S periodogram (GLS; \citealp{zechmeister2009}) to search for periodicity in H$_{\alpha}$, chromatic index (CRX), differential line width (dLW) \citep{zechmeister2018}, as well as full-width-at-half-maximum (FWHM) and contrast of the cross correlation function (CCF). None of the aforementioned indicators returned significant signals above the 0.1\% false alarm probability level, except for dLW where a $\sim$ 270-day signal was detected just above the false alarm probability threshold and is possibly of stellar activity origin.
We also investigated if there exists any correlation between the b planet's RV signal (after the removal of RV contributions from the c planet and the linear trend) and each one of the activity indicators using the Pearson correlation coefficient. Once again, only dLW returns a weak correlation of $\sim$0.25, while there is no correlation observed in any of the other activity indicators. While there is no peak in the dLW periodogram near $\sim$3--4 days, the correlation between the b planet's RV signature and dLW could be related to the 270-day signal. Overall, despite the strong indication from the photometry that the previously reported b signal could be attributed to stellar activity, no significant correlations were found between the b planet's RVs and any of the spectral activity indicators, and no activity periods were detected near the b planet's orbital period.
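Both checks amount to standard periodogram and correlation analyses, as illustrated by the following Python sketch (ours; it assumes arrays of epochs \texttt{t}, residual RVs \texttt{rv\_b} with the c planet and trend removed, and one activity indicator \texttt{act} with uncertainties \texttt{act\_err}, e.g.\ the dLW from the HARPS-RVBank products).
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle
from scipy.stats import pearsonr

def gls_peak(t, y, dy):
    """Floating-mean (GLS-like) periodogram peak and its FAP."""
    ls = LombScargle(t, y, dy, fit_mean=True)
    freq, power = ls.autopower()
    return (1.0 / freq[power.argmax()],
            ls.false_alarm_probability(power.max()))

period, fap = gls_peak(t, act, act_err)  # significant if fap < 1e-3
r, p_value = pearsonr(rv_b, act)         # correlation with the RVs
\end{verbatim}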
This raises the question of how stellar variability can be selectively manifesting in the photometry, but not in the spectral lines of the host star. We investigate whether the signal observed in the BD-06~1339 light curve is typical for stars of similar spectral types. From the all-sky variability analysis, we searched for stars with effective temperatures between 4000--4500\,K, photometric variability periods of 3.5--4.0\,days, and stellar luminosities lower than 10\,$L_\odot$. We find $\sim$30 stars within this subgroup and, upon visual investigation, find that their light curves are similar in shape and amplitude to the variations observed for BD-06~1339 (see Figure~\ref{fig:variability}). Therefore, stellar activity is a potential explanation for the observed photometric variations.
We cannot pinpoint the physical mechanism behind the photometric variability, although our general understanding of stellar astrophysics suggests that it is related to magnetic activity in the star that produces spots and plages. The false alarm probability \citep{locurto2013} used to detect BD-06~1339b was based on a simple F-test, but recent work has shown that other statistical methods, such as the extreme value statistical distribution, may be more appropriate for periodogram analyses \citep{suveges2014, vio2016, sulis2017, vio2019, delisle2020}. The close match in both period and phase between the photometric variability and the RV variations suggests that both signals are produced by the same cause. We therefore believe that the most likely explanation is that BD-06~1339b is a false positive and that the RV variations are not produced by a planetary companion of the star.
\section{Conclusions}
\label{conclusions}
We conducted a photometric analysis of targets with periodic modulations from the TESS primary mission (Fetherolf et al. in prep.; Simpson et al. in prep.) and identified BD-06~1339b as a prime subject for further scrutiny. The similarity between the photometric variability period of the TESS photometry for BD-06~1339 (3.859\,days) and the orbital period of the b planet (3.874\,days) prompted a rigorous re-examination of the spectroscopy and photometry for this target. We performed a re-analysis of the RVs obtained by HARPS and found an orbital solution that was consistent with the RV analysis performed by \citet{locurto2013}. An in-depth investigation of the photometric variations revealed that they are inconsistent, in both phase and amplitude, with atmospheric phase variations due to the planet, but could plausibly be attributed to stellar activity. Comparing the RV analysis with the phase-folded photometric fluxes (see Figure~\ref{fig:RvvsFlux}) revealed a strong correlation between the two datasets (see Figure~\ref{fig:Correlate}).
With these results in mind, we addressed what this means for the interpretation of the RV modulation observed near 3.9\,days, previously attributed to a planetary signal. Stellar activity is a possible culprit, but the spectral activity indicators of this star do not correlate well with its photometric modulations. There is therefore ample opportunity for further analysis of this target to determine the source of the discrepancy between the photometric and spectroscopic behavior.
These results indicate that BD-06~1339b is, in fact, a likely false positive whose signature was induced by the activity of the star. Follow-up observations could help to resolve the discrepancy between the photometric and spectroscopic data. In particular, understanding the nature of the discrepancy would benefit from additional precision photometry of the star to improve the characterization of the stellar variability, alongside simultaneous spectroscopic activity indicators \citep{diaz2018b} and an extended RV baseline. Overall, the reanalysis of this system emphasizes the importance of further verifying the nature of confirmed RV planets as new data become available---especially for those that are low in mass and of high interest to demographic and atmospheric studies.
\section*{Acknowledgements}
The authors acknowledge support from NASA grant 80NSSC18K0544, funded through the Exoplanet Research Program (XRP). T.F. acknowledges support from the University of California President's Postdoctoral Fellowship Program. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. All of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts. This research made use of Lightkurve, a Python package for Kepler and TESS data analysis \citep{Lightkurve2018}.
\facilities{TESS, HARPS, NASA Exoplanet Archive}
\software{Astropy \citep{Astropy_Collaboration13, Astropy_Collaboration18},
Astroquery \citep{Ginsburg19},
\texttt{GLS} \citep{zechmeister2009},
Lightkurve \citep{Lightkurve2018},
\texttt{RadVel} \citep{fulton2018a},
\texttt{RVSearch} \citep{rosenthal2021},
Matplotlib \citep{Hunter07},
NumPy \citep{Harris20},
SciPy \citep{Virtanen20}
}
\section{Introduction}
Following the discovery of high--temperature superconductivity
in the ceramic copper oxides, novel purely electronic pairing
mechanisms due to the strong Coulomb correlations within the $\rm
CuO_2$ planes have been investigated in detail.
Recently, however, it has become clear that the lattice degrees of
freedom are essential in understanding the puzzling
normal--state properties of the cuprates~\cite{BEMB92,Ri94,GC94}.
Even if it should turn out that the electron--phonon (EP) interaction
is not the relevant pairing interaction in those materials,
its effects need to be reconsidered for the case of
strong electron--electron interactions and low effective dimensionality as
realized in the high--$T_c$ superconductors.
In particular, polaronic effects are suggested to play a
non--negligible role in the copper--based materials
$\rm La_{2-x}Sr_xCuO_{4+y}$~\cite{Ra91,AK92,Emi92b,BE93,AM94,SAL95} and
even more in the isostructural nickel--based charge--transfer
oxides $\rm La_{2-x}Sr_xNiO_{4+y}$~\cite{BE93,CCC93}.
Experimentally, photo-induced absorption experiments~\cite{MFVH90},
infrared spectroscopy~\cite{Caea94}
as well as infrared reflectivity measurements~\cite{FLKB93}
unambiguously indicate the formation of `self--localized'
polaronic states (small polarons) in the insulating parent
compounds $\rm La_2CuO_{4+y}$ and $\rm Nd_2CuO_{4-y}$ of the
hole-- and electron--doped superconductors $\rm La_{2-x}Sr_xCuO_{4+y}$ and
$\rm Nd_{2-x}Ce_xCuO_{4-y}$, respectively.
Therefore a growing theoretical interest in the study of
strongly correlated EP models can be found in the recent
literature~\cite{RT92,Muea92,ZS92,Ra93,AKR94,SYZ95,Feea94,LD94,RFS94,ZL94,FRWM95,DGKR95,Ma95}.
Probably the simplest microscopic models
including both the electron and phonon degrees of freedom are the
Holstein Hubbard model
\begin{eqnarray}
\label{hhm}
{\cal{H}}_{H-H}=
&&-t \sum_{\langle i j \rangle \sigma}
\Big(c_{i\sigma}^\dagger c_{j\sigma}^{} + {\rm H.c.}\Big)+
U \sum_i n_{i\uparrow}^{} n_{i\downarrow}^{}
\nonumber\\
&&\qquad- \sqrt{\varepsilon_p\hbar\omega} \sum_i \big(b_i^{\dagger} + b_i^{} \big)\,n_i^{}
\;+\;\hbar\omega \sum_i \big(b_i^{\dagger}b_i^{} + \frac{1}{2}\big)
\end{eqnarray}
and the Holstein t--J model
\begin{eqnarray}
\label{htjm}
{\cal{H}}_{H-t-J}=
&&-t \sum_{\langle i j \rangle \sigma}
\Big(\tilde{c}_{i\sigma}^\dagger
\tilde{c}_{j\sigma}^{} + {\rm H.c.}\Big)+ J \sum_{\langle i j\rangle}
\Big(\vec{S}_i^{}\vec{S}_j^{} - \frac{1}{4}\tilde{n}_i^{}\tilde{n}_j^{}\Big)
\nonumber\\
&&\qquad- \sqrt{\varepsilon_p\hbar\omega} \sum_i \big(b_i^{\dagger} + b_i^{} \big)\,\tilde{h}_i^{}
\;+\;\hbar\omega \sum_i \big(b_i^{\dagger}b_i^{} + \frac{1}{2}\big)\,,
\end{eqnarray}
where $c^{\left(\dagger \right)}_{i\sigma}$
annihilates (creates) an electron
at Wannier site $i$ with spin projection $\sigma$, $n_i= n_{i\uparrow}+
n_{i\downarrow}$, and $t$ denotes the transfer amplitude between
nearest--neighbour (NN) pairs $\langle i j \rangle$.
${\cal{H}}_{H-t-J}$ acts in a projected Hilbert space without double
occupancy, i.e., $\tilde{c}^{\left(\dagger \right)}_{i\sigma}=c^{\left(\dagger
\right)}_{i\sigma}(1- n^{ }_{i\bar{\sigma}})$, and
$\vec{S}^{ }_i=\frac{1}{2} \sum_{\sigma\sigma '}\tilde{c}^\dagger_{i\sigma}
\vec{\tau}_{\sigma\sigma '}^{} \tilde{c}^{ }_{i\sigma '}\;$.
The first two terms in (\ref{hhm}) and (\ref{htjm}) represent the
standard Hubbard model and t--J model, respectively,
where $U$ is the on--site Coulomb repulsion and $J$ measures the NN
antiferromagnetic exchange interaction strength.
The third and fourth terms take into account the EP interaction and the
phonon energy in a harmonic approximation.
Here, the on--site electron (hole) occupation number
$n_i$ ($\tilde{h}_i= 1- \tilde{n}_{i}$) is locally coupled
to a dispersionless optical phonon mode, where $\varepsilon_p$ is the EP coupling
constant, $\omega$ denotes the bare phonon frequency,
and $b_i^{(\dagger)}$ are the
phonon annihilation (creation) operators.
In the context of an effective single--band description of
the copper/nickel oxides, the collective
Holstein--coordinates $q_i=\sqrt{\hbar/2 M \omega} \, (b_i^\dagger +
b_i^{})$ may be thought of as representing an internal optical degree
of freedom of the lattice site $i$, i.e., in this case the dominant source of
EP
coupling is assumed to result from the interaction of dopant--induced
charge carriers with the apical out--of plane or the
bond--parallel in--plane breathing--type displacements of oxygen atoms.
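To make the structure of the coupled Hamiltonians above concrete, the following self-contained Python sketch assembles the single-electron part of Eq.~(\ref{hhm}) for a two-site toy system with a per-site phonon cutoff (all parameter values, and the cutoff scheme itself, are illustrative; $\hbar=1$ and the zero-point energy is omitted).
\begin{verbatim}
# Dense Holstein Hamiltonian for ONE electron on two sites (U plays no
# role); per-site phonon cutoff n_max instead of the total-number cutoff
# used in the paper. Units: t = energy scale, hbar = 1.
import numpy as np

t, eps_p, omega, n_max = 1.0, 2.0, 0.4, 8
g = np.sqrt(eps_p * omega)          # EP coupling sqrt(eps_p * hbar*omega)

d = n_max + 1
b = np.diag(np.sqrt(np.arange(1, d)), 1)   # phonon annihilation operator
x, nb, I = b + b.T, b.T @ b, np.eye(d)     # b + b^dag, b^dag b, identity

hop = np.array([[0.0, -t], [-t, 0.0]])     # hopping |1><2| + h.c.
n1, n2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

H = (np.kron(hop, np.kron(I, I))
     - g * np.kron(n1, np.kron(x, I))      # -g * n_1 (b_1^dag + b_1)
     - g * np.kron(n2, np.kron(I, x))      # -g * n_2 (b_2^dag + b_2)
     + omega * np.kron(np.eye(2), np.kron(nb, I) + np.kron(I, nb)))

print(np.linalg.eigvalsh(H)[0])            # ground-state energy
\end{verbatim}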
Unfortunately, for strongly coupled EP systems exact
results exist only in a few special cases and
limits~\cite{Loe88,GL91,CPF95,FL95}.
Whereas, in an approximative treatment, the weak--coupling regime
$(\varepsilon_p/t\ll 1)$ is well understood and dealt with by perturbation theory,
the standard strong--coupling Migdal--Eliashberg theory~\cite{Mi58,El60}
based on the adiabatic Migdal theorem might break down for strong
enough EP interactions $(\varepsilon_p/t\gg 1)$ due to the familiar polaronic band
collapse~\cite{AK92}. Note that in the presence of strong Coulomb
correlations, a rather moderate EP coupling can cause a substantial reduction
of the coherent band motion making the particles susceptible to
`self--trapping'~\cite{ZS92,FRWM95}.
The (single) polaron problem has been tackled in the strong--coupling
adiabatic $(\hbar\omega/t\ll 1)$
and antiadiabatic $(\hbar\omega/t\gg 1)$ limits using the
Holstein~\cite{Ho59a} and Lang--Firsov~\cite{LF62} approximations,
respectively. Both approaches yield a narrow polaronic band
with an exponentially reduced half--bandwidth~\cite{AKR94}.
Whether these small polarons (or bipolarons)
can exist as itinerant band states is still a heavily debated
issue~\cite{BEMB92}. Apart from variational
calculations~\cite{ZFA89,FCP90,TFDB94,Feea94}
little is known for intermediate values of EP
coupling and phonon frequency $\varepsilon_p\sim\hbar\omega\sim t$ and, in
particular, for the many--polaron problem. In principle,
exact diagonalization (ED)~\cite{RT92,Ma93,AKR94,Ma95} and
(quantum) Monte Carlo~\cite{RL82,RL83,HF82,NS93,NGSF93}
methods including the full quantum nature
of phonons can close this gap. However, by using direct ED techniques
it is necessary to truncate the phononic Hilbert space,
and hence the accessible parameter space is limited
by the size of the matrix one can diagonalize.
Therefore ED studies up to now were limited to either small
values of $\varepsilon_p$, to the so--called frozen phonon
approximation~\cite{SZF91,SZ92,FRMB93,RFB93}, or to very small
systems~\cite{RT92,Muea92,AKR94,Ma95}. In a previous work~\cite{FRWM95},
the authors have proposed a variational Lanczos diagonalization technique
on the basis of an inhomogeneous modified variational Lang--Firsov
transformation (IMVLF) that allows for the description of
static displacement field, polaron and squeezing effects in terms of the
Holstein t--J and Holstein Hubbard models on fairly large clusters.
Although the adiabatic and antiadiabatic as well as the weak-- and
strong--coupling limiting cases are well reproduced in this approach,
the situation becomes less favourable at intermediate EP couplings and
phonon frequencies and, in particular, in the crossover region from
large--size nearly free polarons (FP) to small--size `quasi--localized'
polarons (i.e., in the vicinity of the so--called `self--trapping'
transition). Obviously, this regime requires a more accurate treatment
of the phonons as quantum mechanical objects.
Encouraged by this situation it is the aim of the present paper to perform a
direct Lanczos diagonalization of the Holstein Hubbard and Holstein
t--J models, preserving the full dynamics of quantum phonons.
In particular, we investigate for the first time
the low--lying excitations (spectral functions) on large enough
lattices, in order to identify the
dispersion relation of the (bi)polaronic quasiparticles.
\section{Computational Procedure}
A general state of the model Hamiltonian ${\cal{H}}_{H-H}$ [${\cal{H}}_{H-t-J}$]
describing $N_{el}=N_\uparrow+N_\downarrow$ electrons
on a finite $D$--dimensional hypercubic lattice
with $N$ sites can be written as the direct product
\begin{equation}
\label{gsta}
|{\mit\Psi}\rangle= \sum_{l,k} c_l^k \;
|l\rangle_{el} \otimes |k\rangle_{ph}\,,
\end{equation}
where $ l$ and $k$ label the basic states of the
electronic and phononic Hilbert space with dimensions
$D_{el}={\scriptsize N\choose N_\uparrow}
{\scriptsize N\choose N_\downarrow}$
$\left[D_{el}={\scriptsize N\choose N_\uparrow }
{\scriptsize N-N_\uparrow\choose N_\downarrow}\right]$
and $D_{ph}=\infty$, respectively.
Since the bosonic part of the Hilbert space is infinite
dimensional we use a truncation procedure~\cite{II90}
restricting ourselves to phononic states with at most $M$ phonons:
\begin{equation}
\label{phonsta}
|k\rangle_{ph}=\prod_{i=1}^N
\frac{1}{\sqrt{n_i^k!}}\left(b_i^\dagger\right)^{n_{i}^{k}}\,|0\rangle_{ph}
\end{equation}
with
\begin{equation}
\label{mabschnei}
\sum_{i=1}^N n_i^k \le M\,,
\end{equation}
and
$1\le k \le D_{ph}^{(M)}=(M+N)!/M!N!\,$.
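As a quick illustration of this truncation, the following Python sketch enumerates the occupation-number tuples admitted by the constraint~(\ref{mabschnei}) and checks the resulting dimension against $D_{ph}^{(M)}=(M+N)!/(M!\,N!)$; the site number and cutoff are arbitrary illustrative values.
\begin{verbatim}
# Enumerate the truncated phonon basis {(n_1,...,n_N) : sum_i n_i <= M}
# and verify |basis| = (M+N)! / (M! N!) = C(M+N, N).
from itertools import product
from math import comb

def phonon_basis(N, M):
    return [n for n in product(range(M + 1), repeat=N) if sum(n) <= M]

N, M = 4, 5
basis = phonon_basis(N, M)
assert len(basis) == comb(M + N, N)
print(len(basis))   # 126 states for N = 4 sites and at most M = 5 phonons
\end{verbatim}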
To further reduce the dimension of the Hilbert space, in the case of
${\cal{H}}_{H-H}$ we separate out the center of mass motion
by transforming to new phonon
operators $B_i^{(\dagger)}$, which can be taken into account
analytically as displaced harmonic oscillators.
For the Holstein t--J model it is more effective
to exploit the point group symmetries of the original basis~(\ref{gsta}).
Then the resulting Hamiltonian matrix is diagonalized
using a standard Lanczos method. As the convergence of the
Lanczos procedure depends on the (relative) difference
of neighbouring eigenvalues, $|E_{i+1}-E_i|/|E_i|$,
one needs to be very careful in
resolving eigenvalues within the extremely narrow small--polaron band.
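A bare-bones version of such a Lanczos iteration is sketched below for a generic real symmetric matrix; for brevity it omits the full reorthogonalization that careful work on nearly degenerate polaron-band eigenvalues would require.
\begin{verbatim}
# Minimal Lanczos iteration for the lowest eigenvalue of a symmetric H.
import numpy as np

def lanczos_ground_state(H, m=80, seed=0):
    # Plain three-term recurrence; full reorthogonalization omitted.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=H.shape[0])
    v /= np.linalg.norm(v)
    v_prev = np.zeros_like(v)
    alpha, beta = [], []
    for _ in range(m):
        w = H @ v - (beta[-1] if beta else 0.0) * v_prev
        a = v @ w
        w -= a * v
        alpha.append(a)
        b = np.linalg.norm(w)
        if b < 1e-12:                      # invariant subspace reached
            break
        beta.append(b)
        v_prev, v = v, w / b
    k = len(alpha)
    T = (np.diag(alpha) + np.diag(beta[:k - 1], 1)
         + np.diag(beta[:k - 1], -1))
    return np.linalg.eigvalsh(T)[0]

rng = np.random.default_rng(1)
B = rng.normal(size=(400, 400))
Htest = (B + B.T) / 2                      # toy symmetric "Hamiltonian"
print(lanczos_ground_state(Htest), np.linalg.eigvalsh(Htest)[0])
\end{verbatim}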
To monitor the convergence of our truncation procedure as a function of $M$
we calculate the weight of the $m$--phonon states in the ground state
$|{\mit \Psi}_0\rangle$ of ${\cal{H}}$:
\begin{equation}
\label{cmab}
|c^m|^2=\sum_{l,k}
|c_{l}^{k}|^2\,,\qquad\mbox{where}\;\;m=\sum_{i=1}^N n_i^{k}\,.
\end{equation}
At fixed $M$, the curve $|c^m|^2 (m)$ is bell-shaped
and its maximum corresponds to the most probable number of
phonon quanta in the ground state. To illustrate the
$M$--dependences of the ground--state energy $E_0$ and the
coefficients $|c^m|^2$, we have shown both quantities
for the single--electron Holstein model in Fig.~\ref{F1}.
In the numerical work convergence is achieved if the relative error of
the ground--state energy is less than $10^{-7}$. In addition, we check
that $E_0$ is smaller than the estimate obtained from the
IMVLF--Lanczos treatment of the phonon subsystem~\cite{FRWM95}.
We have written the program in Fortran90 and run it on a 64--node
CM5. We were able to diagonalize Hamiltonian matrices up to a total
dimension ($D_{tot}$) of about 82 million. Since a matrix--vector
multiplication for this matrix size takes less than 150 seconds,
the limiting factor of our numerical algorithm is the available
storage.
\section{Numerical results}
\subsection{Holstein Hubbard model}
\subsubsection{One--electron case}
In the first place, we investigate the polaron properties of the
Holstein model with a single electron on finite lattices
with up to ten sites using periodic boundary conditions.
In the light of the literature over at least the last two
decades~\cite{EH76,RL82,Ma95,GL91,CPF95,FL95} we expect a gradual
transition from a (nearly free) large--polaron solution to a
small--polaron--like ground state upon increasing the EP coupling.
Since, in particular in the adiabatic regime,
the formation of a polaronic state is accompanied by
a strong reduction of the coherent electron motion,
this effect should be observable in the
expectation value of the kinetic energy
$E_{p,kin}/t=-\sum_{<ij>\sigma}\langle{\mit\Psi}_0|(c_{i\sigma}^\dagger
c_{j\sigma}^{}+\mbox{H.c.})|{\mit\Psi}_0\rangle$, where
$|{\mit\Psi}_0\rangle$ is the ground--state wave-function.
We therefore define an effective polaronic transfer amplitude~\cite{FRWM95},
\begin{equation}
t_{p,eff}=E_{p,kin}(\varepsilon_p,U)/E_{p,kin}(0,U)\,,
\label{teffpo}
\end{equation}
in order to characterize the increase in the quasiparticle mass~\cite{FRWM95}.
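As a toy illustration of this diagnostic, the sketch below evaluates Eq.~(\ref{teffpo}) for a two-site, one-electron Holstein model with a per-site phonon cutoff; all parameter values are illustrative, and the construction parallels the sketch given in the introduction.
\begin{verbatim}
# Toy evaluation of the effective polaronic transfer amplitude
# t_eff = E_kin(eps_p) / E_kin(0) on a two-site, one-electron
# Holstein model with per-site phonon cutoff n_max.
import numpy as np

def holstein(eps_p, t=1.0, omega=0.4, n_max=10):
    g = np.sqrt(eps_p * omega)
    d = n_max + 1
    b = np.diag(np.sqrt(np.arange(1, d)), 1)
    x, nb, I = b + b.T, b.T @ b, np.eye(d)
    K = np.kron(np.array([[0.0, -t], [-t, 0.0]]), np.kron(I, I))
    H = (K - g * np.kron(np.diag([1.0, 0.0]), np.kron(x, I))
           - g * np.kron(np.diag([0.0, 1.0]), np.kron(I, x))
           + omega * np.kron(np.eye(2), np.kron(nb, I) + np.kron(I, nb)))
    return H, K

def e_kin(eps_p):
    H, K = holstein(eps_p)
    psi0 = np.linalg.eigh(H)[1][:, 0]      # ground-state vector
    return psi0 @ K @ psi0

e0 = e_kin(0.0)
for ep in (0.5, 1.0, 2.0, 3.0):
    print(ep, e_kin(ep) / e0)              # decreases as eps_p grows
\end{verbatim}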
Note that $t_{p,eff}$ substantially differs
from the (exponential) polaron band renormalization factor
$(\rho)$ obtained analytically in the non--adiabatic
Lang--Firsov and adiabatic
Holstein cases~\cite{AKR94}.
We illustrate the dependence of this effective hopping amplitude
on the EP interaction strength in Fig.~\ref{F2},
where we have plotted $t_{p,eff}$ as a function of $\varepsilon_p$ at different phonon
frequencies. First it is important to realize that there are two complementary
(adiabatic and non--adiabatic) regimes for the polaronic motion.
In the non--adiabatic regime, where the lattice fluctuations
are fast and the phonons are able to follow immediately the electronic
motion forming a non--adiabatic Lang--Firsov polaron (NLFP),
one observes a very gradual decrease of $t_{p,eff}$ as
$\varepsilon_p$ increases. At the same time the
`phonon distribution function', $|c^m|^2$, gets wider but the maximum
is still located at the zero--phonon state.
In the adiabatic regime, one
notices a crossover from a large--size polaron (LP) in
1D or nearly free polaron (FP) in 2D,
described by a $t_{p,eff}$ that is only weakly reduced from its
noninteracting value, to a less mobile (small--size)
adiabatic Holstein polaron (AHP) for large $\varepsilon_p$.
We point out that the nature of `delocalized' polaronic states,
occurring in the weak--coupling region, is different in
1D and 2D~\cite{FRWM95}. In the 1D case, the FP state becomes unstable
at any finite EP coupling.
As expected the transition to the AHP state occurs
if the EP coupling approximately
exceeds half the bare electronic bandwidth and,
in accordance with Monte Carlo results~\cite{RL82,RL83}, is much
sharper in two dimensions~\cite{We94}
(in the remainder of this section we focus on the 1D case).
Nonetheless, all physical quantities are smooth functions
of $\varepsilon_p$, in particular there are no ground--state level
crossings, i.e., the transition from LP/FP to AHP is {\it continuous} and
not accompanied by any non--analyticities. While in the
weak--coupling case we have $m_{max}=0$ and the inclusion of
higher phonon states $(m\gapro 5)$ does not improve the ground--state
energy at all, in the adiabatic strong--coupling case ($\varepsilon_p=4$,
$\hbar\omega=0.4$), the maximum in $|c^m|^2$ is shifted to
multi--phonon states ($m_{max}\simeq 8$) and we need about 16 phonons
to reach a sufficient accuracy within our truncation procedure.
Note that a similar behaviour can be observed in
the {\it non--adiabatic} regime
$(\hbar\omega>t)$ provided that $\varepsilon_p\gg\hbar\omega$, e.g.,
for $\hbar\omega=3$ and $\varepsilon_p=8$ ($\varepsilon_p=10$) we find $m_{max}\simeq 2$
in 1D (2D). These results confirm previous findings
for the Holstein Hubbard model on very small size clusters (with
two or three sites), where, as $\varepsilon_p$ increases
in the adiabatic regime, a strong increase
of the average number of phonons, $\langle N_{ph}\rangle$,
contained in the ground state, was observed (cf. Tab.~I in
Ref.~\cite{RT92} and Tab.~I in Ref.~\cite{Muea92}).
In the center of mass system, the phonon expectation value
in the polaronic ground state may be derived from the
phonon distribution function $|c^m|^2$ by
$\langle N_{ph}\rangle=\sum_{m=0}^M m\,|c^m|^2 +\frac{\varepsilon_p}{\hbar\omega}\frac{N_{el}^2}{N}$.
To elucidate the difference between the `extended' LP and
`quasi--localized' AHP states in more
detail, we have calculated the electron--phonon density correlation
function
\begin{equation}
C_{el-ph}^{}(|i-j|)=\langle {\mit\Psi}_0
|n_i^{} b_j^\dagger b_j^{}|{\mit\Psi}_0\rangle\,,
\label{celph}
\end{equation}
which measures the correlation between
the electron occupying site $i$ and the density of
phonons on site $j$~\cite{Ma95b}.
Results for $C_{el-ph}(|i-j|)$,
plotted in Fig.~\ref{F3} at $\hbar\omega=0.4$
for all distances $i-j:=\vec{R}_i-\vec{R}_j$,
show that for small $\varepsilon_p$ the correlation between the electron and the
phonons is pretty weak and exhibits little structure, i.e., the few
phonons contained in the ground state are nearly uniformly distributed over the
whole lattice. In contrast, in the case of large EP coupling ($\varepsilon_p=3$),
the phonons are strongly correlated with the position of the electron,
thus implying a very small radius of the polaron.
Note, however, that
the translational invariance of the ground state is not broken. Since a
polaron's mass is inversely proportional to its size,
the AHP formed at large $\varepsilon_p$ is an extremely heavy quasiparticle.
As can be seen from the inset of Fig.~\ref{F3},
the on--site electron--phonon correlation increases dramatically
around the same value of $\varepsilon_p$ at which $t_{p,eff}$ becomes depressed
(cf. Fig.~\ref{F2}).
This means, in the adiabatic regime a strong short--range EP
interaction can lower the energy of the system due to a
deformation--potential--like contribution sufficiently
to overcompensate the loss of kinetic energy.
Nonetheless, the `quasi--localized' (self--trapped) polaronic
state has band--like character, i.e., the AHP can move itinerantly.
In order to discuss the formation of a small--polaron band one has to
calculate the low--lying excited states. As a first step, in
Fig.~\ref{F4} we classify the lowest eigenvalues of the Holstein model
according to the allowed wave--vectors of the eight--site lattice for
various phonon frequencies at $\varepsilon_p=3$.
Here the `band dispersion' $E_K-E_0$ is scaled with respect to the
so--called coherent bandwidth ${\mit \Delta}E=\sup_K E_K -\inf_K E_K$.
${\mit \Delta}E$ strongly depends on
both ratios $\varepsilon_p/\hbar\omega$ and
$\varepsilon_p/t$, for example, we found ${\mit
\Delta}E(\varepsilon_p=3,\hbar\omega)=0.0157$, 0.1957, 2.9165, and 4.0 for
$\hbar\omega=0.4$, 0.8, 10.0 and $\infty$, respectively. Of course,
the simple Lang--Firsov formula,
${\mit \Delta}E_{LF}= 4\mbox{D} \exp[-\varepsilon_p/\hbar\omega]$,
gives a good estimate of the polaronic bandwidth
only in the non--adiabatic regime:
${\mit \Delta}E_{LF}(\varepsilon_p=3,\hbar\omega)=0.0022$
($\hbar\omega=0.4$), 0.0941 (0.8), 2.9633 (10.0), 4 ($\infty$).
Besides the strong renormalization of the bandwidth in the
low--frequency strong--coupling regime
it is interesting to note that the deviation of the polaron band dispersion
from a (rescaled) `cosine--dispersion'
of noninteracting electrons is most pronounced
at {\it intermediate} phonon frequencies
$\hbar\omega\sim t$, i.e., in between the extreme adiabatic (AHP) and
antiadiabatic (NLFP) limits.
This deviation may be due to a residual polaron--phonon interaction,
with the phonons sitting on sites other than the polaron.
To demonstrate that the low--lying
eigenvalues do indeed form a well--separated quasiparticle band
in the adiabatic strong--coupling regime ($\varepsilon_p=3$, $\hbar\omega=0.4$),
in the inset of Fig.~\ref{F4} we have displayed
the lowest few eigenvalues in dependence on $\varepsilon_p$.
In the very weak--coupling regime ($\varepsilon_p=0.5$) the eigenvalues are
barely changed from their $\varepsilon_p=0$ values, where additional
eigenvalues, separated from the ground--state energy $E_0$ by multiples of
$\hbar\omega$ (e.g., $E_2$, $E_3$, and $E_4$), enter the spectrum.
As $\varepsilon_p$ increases a band of states separates from the rest of the
spectrum. These states become very close in energy and a narrow
well--separated energy band evolves in the
strong--coupling case ($\varepsilon_p=3$). Obviously, the gap
to the next higher band of eigenvalues is of the order
of the bare phonon frequency $\hbar\omega$.
Neglecting degeneracies one may tentatively identify those five
states as the states of the small--polaron band on the eight--site
lattice.
Keeping this identification in mind, in Fig.~\ref{F5} we have
plotted the lowest eigenvalues as a function of the
(1D) $K$--vectors belonging to various system sizes ($N=6$, 8, 10).
One notices that the dispersion $E_K$ is rather size independent, i.e.,
the $E_K$ values obtained for larger systems just fill the gaps.
Undoubtedly, the smooth shape of $E_K$ already provides good reasons
for a quasiparticle band description of the AHP in the
strong--coupling regime. To further
substantiate this quasiparticle interpretation, we also have
calculated the one--particle spectral functions
\begin{equation}
A_{K}^{}(E) = \sum_n |\langle {\mit\Psi}_n^{(N_{el})}
|c_{K}^{\dagger}|{\mit\Psi}_0^{(N_{el}-1)}\rangle|^2
\,\delta ( E-E_n^{(N_{el})}+E_0^{(N_{el}-1)})
\label{specfun}
\end{equation}
with $N_{el}=1$ for the non--equivalent $K$--values of the
six--site system using a polynomial moment method~\cite{SR94}.
The idea is to see a direct verification of the
coherent band dispersion $E_K$ in terms of $A_K(E)$.
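For orientation, the object defined in Eq.~(\ref{specfun}) can be emulated for a small toy Hamiltonian by brute-force resolvent inversion, $A(E)=-\frac{1}{\pi}\,\mathrm{Im}\,\langle\phi|(E+\mathrm{i}\eta-H)^{-1}|\phi\rangle$; this is not the polynomial moment method of Ref.~\cite{SR94}, merely an illustrative stand-in.
\begin{verbatim}
# Brute-force spectral function A(E) = -Im<phi|(E+i*eta-H)^(-1)|phi>/pi
# for a toy symmetric H; phi stands in for c_K^dag |Psi_0>, and eta
# broadens the delta peaks of the spectral function into Lorentzians.
import numpy as np

def spectral_function(H, phi, energies, eta=0.05):
    phi = phi / np.linalg.norm(phi)
    n = H.shape[0]
    A = np.empty(len(energies))
    for i, E in enumerate(energies):
        r = np.linalg.solve((E + 1j * eta) * np.eye(n) - H, phi)
        A[i] = -np.imag(phi @ r) / np.pi
    return A

rng = np.random.default_rng(0)
B = rng.normal(size=(40, 40))
Htoy = (B + B.T) / 2
phi = np.zeros(40)
phi[0] = 1.0
E_grid = np.linspace(-15.0, 15.0, 600)
A = spectral_function(Htoy, phi, E_grid)
print(A.sum() * (E_grid[1] - E_grid[0]))  # ~1: sum rule, normalized phi
\end{verbatim}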
The electronic spectral functions $A_K(E)$ are shown in
the four insets of Fig.~\ref{F5}.
The important point, we would like to emphasize, is that the position
of the first peak in each spectral function $A_K(E)$ exactly coincides
with the corresponding $E_K$--value and the other peaks are at higher
energies than any of the coherent band--energy values. This means,
our exact results for the low--energy excitation spectrum of a single
electron corroborate the existence of heavily dressed polaronic
quasiparticles, where the electronic and phononic degrees of freedom
are strongly mixed. Of course,
in the very high--energy regime the results for $A_K(E)$ cannot be
trusted just due to the errors induced by the necessary truncation of
the phononic Hilbert space.
\subsubsection{Two--electron case}
Next, we wish to discuss the two--electron problem. Here it is of
special interest to understand in detail the conditions under
which the two electrons form a bipolaron. Whether or not a
transition to a bipolaronic state will occur depends sensitively on
the competition between the short--ranged
phonon--mediated, i.e., retarded $(\hbar\omega
<\infty)$, attraction $(\propto\varepsilon_p)$
and the instantaneous on--site Hubbard repulsion $(\propto U)$.
We start again with a discussion of the
mobility of the particles. Fig.~\ref{F6} (a) shows the strong
(gradual) reduction of the effective {\it polaronic} transfer amplitude
$t_{p,eff}$ as $\varepsilon_p$ increases in the adiabatic (non--adiabatic)
regime. Now let us mainly focus on the physically
more interesting regime of `small' phonon frequencies, $\hbar\omega=0.4$.
In the case of vanishing Coulomb interaction $U=0$
any finite EP interaction causes an effective on--site
attraction between the electrons forming a bipolaronic bound state
(remember, e.g., that $U_{eff}=U-2\varepsilon_p$ follows from the
simple Lang--Firsov approach). This means, in the pure
Holstein model the state with two nearly free (large)
polarons does not exist, at least in one spatial dimension~\cite{RT92}.
In the weak--coupling limit, the two--polaron state can, however,
be stabilized by taking into account the on--site Coulomb repulsion.
In this case, a crossover from a state of two mobile large polarons to an
extended bipolaronic state occurs. The `transition' will be
shifted to larger EP couplings as
$U$ increases (see Fig.~\ref{F6} (a)).
For example, at $U=6$ and $\hbar\omega=0.4$ (3.0), we find that the
binding energy of two electrons,
$E_B^2=E_0(2)-2E_0(1)$, becomes negative at about $\varepsilon_p=1.7$ (2.8).
Further justification for this interpretation can be found from the
behaviour of the effective bipolaronic transfer amplitude~\cite{IF95},
\begin{equation}
t_{b,eff}=E_{b,kin}(\varepsilon_p,U)/E_{b,kin}(0,U)
\label{tbipoeff}
\end{equation}
with
\begin{equation}
E_{b,kin}(\varepsilon_p,U)/t=-\sum_{<ij>}\langle{\mit\Psi}_0(\varepsilon_p,U)
|(c_{i\uparrow}^\dagger c_{i\downarrow}^\dagger
c_{j\uparrow}^{} c_{j\downarrow}^{}+\mbox{H.c.})
|{\mit\Psi}_0(\varepsilon_p,U)\rangle\,,
\label{etbipo}
\end{equation}
shown in Fig.~\ref{F6}~(b). $t_{b,eff}$ describes the coherent
hopping of an on--site bipolaron from site $i$ to site $j$.
Contrary to $t_{p,eff}$, at low EP coupling strengths,
the bipolaronic hopping amplitude $t_{b,eff}$ {\it grows}
with increasing $\varepsilon_p$ showing the increasing importance of the
correlated motion of two electrons (but, quite clearly, we have
$|E_{b,kin}| < |E_{p,kin}|$). At large EP couplings (e.g., for $\varepsilon_p\gapro 1$
at $U=0$ and $\hbar\omega=0.4$), the on--site
bipolaron becomes more and more localized and accordingly we observe a
drop in $t_{b,eff}$ which corresponds to the drop in $t_{p,eff}$ in
the case of one electron at the parameter values where the AHP becomes
stable. Hence we will call this quasiparticle an adiabatic Holstein
bipolaron (AHBP).
To better illustrate the effect of pair formation in the 1D Holstein
(Hubbard) model, we present in Fig.~\ref{F7}
the electron--electron density correlation function
\begin{equation}
C_{el-el}^{}(|i-j|)=\langle {\mit\Psi}_0(\varepsilon_p,U)
|n_i^{} n_j^{}|{\mit\Psi}_0(\varepsilon_p,U)\rangle-
\langle {\mit\Psi}_0(0,U)|n_i^{} n_j^{}|{\mit\Psi}_0(0,U)\rangle
\label{celel}
\end{equation}
in the adiabatic regime with (b) and without (a) Hubbard repulsion.
In each case we have displayed the results for $C_{el-el}^{}(|i-j|)$
as a function of $\varepsilon_p$ in comparison to the electron--phonon
correlation function $C_{el-ph}^{}(|i-j|)$
given by~(\ref{celph}). As Fig.~\ref{F7}~(a) shows,
in the limit of vanishing Coulomb interaction
the on--site electron--electron correlation $C_{el-el}(0)$ dominates
the inter--site correlations $C_{el-el}(|i-j|)$ with $|i-j|\ge 1$, in
particular for $\varepsilon_p\gapro 0.9$, i.e., in the AHBP regime where both electrons
are mainly confined to the same site sharing a common lattice
distortion. Therefore the transition from a mobile large
bipolaron to a `quasi--self--trapped' on--site AHBP is
manifest in a strongly enhanced $C_{el-ph}(0)$ as well (see inset).
Moreover, the transition should be associated with a significant
reduction of the local magnetic moment, $m_{loc}(\varepsilon_p,U)\propto
\langle{\mit\Psi}_0|(n_{i\uparrow}-n_{i\downarrow})^2|{\mit\Psi}_0\rangle$,
indicating the local pairing of spin up and down electrons. Indeed we
found $m_{loc}(\varepsilon_p=1)/m_{loc}(\varepsilon_p=0.9)|_{U=0,\hbar\omega=0.4}^{}=0.66$.
As can be seen from Fig.~\ref{F7}~(b), a somewhat
different scenario emerges in the presence
of a finite Coulomb interaction. Here, the Hubbard repulsion
prevents the formation of an on--site bipolaronic bound state
in the weak EP coupling regime. On the other hand, as recently pointed
out by Marsiglio~\cite{Ma95}, the retardation effect of the EP
interaction may favour the formation of more extended pairs.
That is, due to the time--delay
the second electron can take advantage of the
lattice distortion left by the first one while still avoiding
the direct Coulomb repulsion.
In fact, increasing the EP interaction, we find that both the
nearest--neighbour electron--electron and electron--phonon
density correlations start to rise, while the on--site correlations
remain small (cf. Fig.~\ref{F7}~(b)).
Consequently, we may label this state an adiabatic inter--site bipolaron.
We expect that at larger values of $\varepsilon_p$ the short--range
EP interaction overcomes the Hubbard repulsion and as a result the
two electrons coalesce on a single site forming a `self--trapped'
bipolaron. Unfortunately we are unable to increase
the dimension of the Hilbert space to contain
a large enough number of phonons in the adiabatic
very strong--coupling regime.
As already mentioned for the one--electron case,
the description of the self--trapping phenomenon requires the
inclusion of multi--phonon states. This is clearly displayed in
Fig.~\ref{F8}, where we have shown the weight of the $m$--phonon
state in the ground state for various EP coupling strengths.
One sees immediately that the maximum of $|c^m|^2$ is rapidly
shifted to larger values of $m$ as $\varepsilon_p$ increases.
Increasing the phonon frequency at fixed $\varepsilon_p$,
this tendency is reversed (see inset).
In the extreme antiadiabatic limit ($\hbar\omega\to\infty$)
we have $m_{max}=0$ and the binding disappears for $U>2\varepsilon_p$.
As in the case of one electron it is interesting to look at the
low--lying excitations of the inter--site bipolaron. Although we do
not have a clear definition as to the momentum of this compound
particle, it turns out that we indeed find a well--separated energy
band if we again classify the lowest
energy eigenvalues with respect to the allowed $K$--states of our
finite system (see Fig.~\ref{F9}). The formation of the (inter--site)
bipolaron band can be attributed to pronounced retardation effects
[cf. the maxima in the nearest--neighbour correlation functions
$C_{el-ph}(1)$ and $C_{el-el}(1)$ (Fig.~\ref{F7}) as well as the
large bipolaronic hopping amplitude $t_{b,eff}$ (Fig.~6) at $\varepsilon_p=3$].
Surprisingly the dispersion of this `quasiparticle' band becomes exactly like
that of a free particle (with a strongly renormalized bandwidth)
at $\varepsilon_p=U/2$, where in the standard Lang--Firsov polaron theory
the effective Coulomb interaction vanishes. As the EP coupling
exceeds $U/2$, a deviation from the cosine--dispersion occurs and we
expect that for $\varepsilon_p\gg U/2$ an extremely narrow AHBP--band will be
formed.
\subsection{Holstein t--J model}
Now let us turn to the case, where a few dopant--induced charge
carriers (holes) coupled to lattice phonons move in an antiferromagnetic
correlated spin background. In 2D, this situation, frequently described
by the Holstein t--J model~(\ref{htjm})~\cite{DKR90,FRWM95,DGKR95},
is particularly interesting as it represents the basic electronic and
phononic degrees of freedom in the $\rm CuO_2$ planes of the
high--$T_c$ cuprates. As yet, very little is known theoretically about the
interplay between EP coupling and antiferromagnetic exchange interaction
in such systems. Of course, the exact diagonalization technique, as
applied in the preceding section to the Holstein Hubbard model,
provides reliable results for the ground--state properties of the
Holstein t--J model as well. Here, however, one usually works
near half--filling, i.e., the electronic basis is very
large from the outset imposing severe restrictions on the dimension
of the phononic Hilbert space. Therefore we are unable to reach the
extreme strong EP coupling regime especially in the adiabatic limit.
In the following numerical analysis of the Holstein t--J model, the exchange
interaction strength is fixed to $J/t=0.4$ (which seems to be a
realistic value for the high--$T_c$ systems).
First, let us discuss the behaviour of the effective transfer
amplitude, $t_{p,eff}=E_{p,kin}(\varepsilon_p,J)/ E_{p,kin}(\varepsilon_p,0)$, shown in
Fig.~\ref{F10}. Increasing the EP coupling at fixed phonon frequency
$\hbar\omega=0.8$, the mobility of the hole is strongly
reduced and an Holstein--type hole--polaron (AHP) is formed at about
$\varepsilon_p^c\simeq 2.0$. The continuous crossover
from a nearly free hole--polaron (FP) to
the AHP state is similar to that observed in the 2D single--electron
Holstein model, i.e., at $\varepsilon_p \simeq \varepsilon_p^c$ a second maximum in the phonon
distribution function $(|c^m|^2)$ evolves,
which, for $\varepsilon_p \gg \varepsilon_p^c$, becomes more pronounced and
is shifted to higher phonon states. For example, we get
$m_{max}\simeq 4$ at $\varepsilon_p=4$ and $\hbar\omega=0.8$.
The increasing importance of multi--phonon states in obtaining the
`true' ground--state energy at large $\varepsilon_p$
becomes clearly visible in Fig.~\ref{F10} by comparing the
results for various phonon numbers $M$.
There is, however, an important difference between the one--hole and
one--electron cases which should not be underemphasized: In the
single--hole Holstein t--J model
antiferromagnetic spin correlations and EP interactions
reinforce each other to the effect of {\it lowering} the threshold for
polaronic `self--localization'.
This fact is in agreement with IMVLF--Lanczos
results obtained recently by the authors~\cite{FRWM95}.
As Fig.~\ref{F10} illustrates, the IMVLF--Lanczos technique, which
variationally takes into account inhomogeneous {\it frozen--in}
displacement--field configurations as well as {\it dynamic}
polaron and squeezing phenomena, describes the qualitative features of
the transition from FP to AHP states and gives a reliable
estimate of the renormalization of the effective transfer
matrix element $t_{p,eff}$.
Moreover, the IMVLF--Lanczos method yields an excellent variational upper
bound for the true ground--state energy $E_0$, and therefore it
provides an additional educated check for the minimal number of
phonons one has to take into account within the Hilbert space
truncation technique.
By analogy to Eq.~(\ref{celph}), we have calculated
the corresponding hole--phonon density correlation function,
$C_{ho-ph}(|i-j|)=\langle {\mit\Psi}_0^{}|\tilde{h}_i^{} b^\dagger_j
b^{}_j|{\mit\Psi}_0^{}\rangle$, for the 2D Holstein t--J model.
Figure~\ref{F11} shows $C_{ho-ph}(|i-j|)$ as a function of the
short--range EP interaction strength $\varepsilon_p$ at various phonon frequencies.
The transition to the AHP state is signaled by a strong increase in
the on--site hole--phonon correlations which are about one
order in magnitude larger than the nearest--neighbour ones.
This indicates that the AHP quasiparticle comprising a
`quasi--localized' hole and the phonon cloud is mainly confined to a
single lattice site. Increasing the phonon frequency the hole--phonon
correlations are smeared out and the crossover to the small
hole--polaron is shifted to larger values of the EP coupling.
Now, let us consider the two--hole case. In Fig.~\ref{F12} we show
the effective polaronic transfer amplitudes $t_{p,eff}(N_h)$
{\it vs} EP coupling strength in the adiabatic ($\hbar\omega=0.1$),
intermediate ($\hbar\omega=0.8$), and non--adiabatic ($\hbar\omega=3.0$)
regimes. In each case we compare the one-- and two--hole results to
get a feel for hole--binding effects.
Remarkably we find that $t_{p,eff}(2)$ is larger than $t_{p,eff}(1)$
for $\varepsilon_p\lapro 1$ and $\hbar\omega=0.1$,
indicating a {\it dynamical} type of hole binding in the low--frequency
weak--coupling regime where retardation effects become important.
Indeed, the two--hole binding energy, defined as usual by
$E_B^2(J,\varepsilon_p,\hbar\omega)=E_0(2)+E_0(0)-2E_0(1)$ with respect to the
Heisenberg energy $E_0(0)$, slightly decreases,
i.e., hole binding is enhanced [$E_B^2(0.4,0,0)<0$], as the EP interaction
increases at low EP coupling strengths.
In contrast, at large phonon frequencies, with increasing $\varepsilon_p$ we find
that $E_B^2$ increases, which seems to be an indication
that retardation no longer plays a role~\cite{Ma95}.
On the other hand, in the adiabatic strong--coupling
regime, where the two holes become `self--trapped' on NN sites forming
a nearly immobile hole--bipolaron, we expect
an even stronger reduction of $t_{p,eff}(2)$ compared with
$t_{p,eff}(1)$ (cf. the IMVLF--Lanczos results
presented in Ref.~\cite{FRWM95}). Here, a rather {\it static} type of hole
binding is realized.
To substantiate this interpretation we have calculated
the hole--hole density correlation function
\begin{equation}
C_{ho-ho}^{}(|i-j|)=\langle {\mit\Psi}_0(\varepsilon_p,J)
|\tilde{h}_i^{} \tilde{h}_j^{}|{\mit\Psi}_0(\varepsilon_p,J)\rangle
\label{choho}
\end{equation}
in the 2D Holstein t--J model. Note that
$C_{ho-ho}(|i-j|)$ provides an even more reliable test for the
occurrence of hole binding than the binding energy
$E_B^2$~\cite{BPS89b}. Indeed, when calculating $E_B^2$, we are
comparing states with different quantum numbers, specifically with
different $S$ and $S^z$. In Fig.~\ref{F13} we present results for
the non--equivalent hole--hole pair correlation functions
in the ground state of the Holstein t--J model with two holes.
In the weak--coupling region the hole--density correlation function
is maximal at the largest distance available on the ten--site lattice,
while in the intermediate EP coupling regime next--nearest--neighbour (NNN) pairs are preferred.
As expected, increasing further the EP interaction strength,
the maximum in $C_{ho-ho}(|i-j|)$ is shifted to the
shortest possible distance (remember that double occupancy is strictly
forbidden), indicating hole--hole attraction. The behaviour of
$C_{ho-ho}(|i-j|)$ is found to be qualitatively
similar for higher (lower) phonon frequencies (see inset), except that
the crossings of different hole--hole correlation functions occur at
larger (smaller) values of $\varepsilon_p$. In essence, our results clearly
indicate that hole--bipolarons could be formed in the Holstein t--J
model at large EP coupling.
\section{Conclusions}
To summarize, in this paper we have studied the problem of (hole--)
bi--/polaron formation in the Holstein Hubbard/t--J model by means of
direct Lanczos diagonalization using a truncation method of the
phononic Hilbert space. Compared with previous treatments of the
Holstein (Hubbard) model on very small clusters,
we are able to analyze large enough systems in order to discuss
polaron and bipolaron band formation, which has been a subject of
recent controversy~\cite{BEMB92,SAL95}. Our main results are the following.
\begin{itemize}
\item[(i)]
In the case of a single electron coupled to Einstein phonons
(Holstein model), we confirm that
the rather `sharp' transition from a `delocalized'
nearly free polaron (FP) [or a large polaron (LP) in 1D]
to a `quasi--localized' Holstein polaron (AHP) in the adiabatic regime
and the very smooth transition to a Lang--Firsov--type polaron (NLFP)
in the non--adiabatic regime are both {\it continuous}.
In agreement with recent exact results~\cite{Loe88,CPF95,FL95},
we observe no ground--state level crossings or any
non--analyticities as the EP coupling increases.
We point out that in the one--dimensional
weak--coupling case a large--size polaron is formed at any finite EP
coupling. In the strong--coupling regime, the AHP state is characterized
by pronounced on--site electron--phonon correlations
making the quasiparticle susceptible to `self--trapping'. Most
notably, the formation of an adiabatic Holstein polaron
is accompanied by a shift of the maximum in the
phonon distribution function to higher phonon states,
which seems to be an intrinsic feature of the
`self--trapping' transition. By contrast, the non--adiabatic
NLFP ground state is basically a zero--phonon state.
\item[(ii)]
By calculating the spectral properties of a single electron, we have
found convincing evidence for the formation of a well separated
narrow polaron band in both the adiabatic and non--adiabatic
strong--coupling regimes. In addition to the expected band--narrowing we
also found a deviation from the `cosine'-dispersion
away from the adiabatic and antiadiabatic limits. Although
the `coherent' bandwidth, deduced from our finite--lattice ED data,
becomes extremely small in the adiabatic
strong--coupling case (polaronic band collapse),
we believe that the AHP does not lose its phase coherence and
can move itinerantly.
\item[(iii)]
Investigating the two--particle problem in terms of the 1D Holstein model,
we could clearly identify the transition
from an extended (large) bipolaron to a `quasi--localized' (on--site)
bipolaron (AHBP) as the EP interaction strength increases.
Stabilizing a two--polaron state in the weak EP coupling regime by
taking into account the on-site Coulomb repulsion (Holstein Hubbard
model), we found a transition to an inter--site bipolaron
at about $\varepsilon_p\simeq U/2$. It is worth emphasizing that
this inter--site bipolaron appears to have a
dispersion that resembles very closely the cosine--dispersion of a
noninteracting particle with a renormalized bandwidth.
If the EP coupling is further enhanced $(\varepsilon_p\gg U/2,\,\hbar\omega,\,t)$,
a second transition to a `self--trapped' on--site AHBP
will occur~\cite{We94}.
\item[(iv)]
Analyzing the hole--polaron formation in the framework of the
2D Holstein t--J model, we found that the critical EP coupling for the
polaron transition is substantially reduced due to `prelocalization'
of the doped charge carriers in the antiferromagnetic spin background.
Therefore we suggest that polaronic effects are of special importance
in (low--dimensional) strongly correlated narrow--band
systems like the nickelates and high--$T_c$ cuprates.
\item[(v)]
Regarding ground--state properties of the Holstein t--J model in the
two--hole sector, a detailed study of the hole--hole correlation
functions and the two--hole binding energy was carried out, yielding
strong evidence for an enhanced hole attraction and the formation of
hole--bipolarons as a dynamical effect of the EP interaction.
\end{itemize}
Of course, the exact results presented in this paper hold for
the Holstein Hubbard (Holstein t--J) model with one and two
electrons (holes) on {\it finite} 1D (2D) systems, i.e.,
we are not prepared to prove any {\it rigorous} statements
about the thermodynamic limit here. However, we believe that our main
conclusions (i)--(v), in particular the existence of
well--separated polaronic and bipolaronic quasiparticle bands
even in the adiabatic strong--coupling regime,
will survive in the infinite system.
\section*{Acknowledgements}
The computations were performed on a CM5 of the GMD (St.~Augustin). We thank
D. Ihle and E. Salje for interesting and helpful discussions and J.
Stolze for a critical reading of the manuscript.
\section{Introduction}
Centraliser clones are collections of homomorphisms of finite powers of
algebras into themselves. That is, if $\alg{A}$ is an algebra and $F$ is
the set of fundamental operations of~$\alg{A}$, then the
centraliser~$\cent{F}$ of~$F$ is the set
$\bigcup_{n<\omega} \Hom\apply{\alg{A}^n,\alg{A}}$. From a categorical
perspective, this is a very natural construction that makes sense in every
category~$\mathscr{C}$ with arbitrary finite powers. If~$A$ is an object in
such a category~$\mathscr{C}$ we call
$\bigcup_{n<\omega} \Hom_{\mathscr{C}}\apply{A^n,A}$ the clone over the
object~$A$. With this understanding centraliser clones are simply the
clones over algebras in the category of algebras of a certain type.
If we change the signature of the structures to allow relation symbols
(that is, we change the category to relational structures of a certain
signature), we obtain clones over some relational structure~$\mathbb{A}$
with set of fundamental relations~$Q$: $\bigcup_{n<\omega}
\Hom\apply{\mathbb{A}^n,\mathbb{A}}$. This clone is called the clone
$\Pol{Q}$ of polymorphisms of~$Q$ (or just the polymorphism clone
of the structure~$\mathbb{A}$), and it is well known by results of
Bodnar\v{c}uk, Kalu\v{z}nin, Kotov,
Romov~\cite{BodnarcukKaluzninKotovRomovGaloisTheoryForPostAlgebras} and
Geiger~\cite{GeigerClosedSystemsOfFunctionsAndPredicates}
on the classical $\PolOp$-$\InvOp$ Galois correspondence
that every clone on a finite carrier set~$A$ arises as a polymorphism
clone of some relational structure~$\mathbb{A}$.
\par
As every algebraic structure~$\alg{A}$ can also be understood as a
relational one (by taking the graphs of the fundamental operations as
the fundamental relations), it is clear that the centraliser clones on
a given set~$A$ form a subcollection of the polymorphism clones on that
set. This fact is very closely related to restricting the
$\PolOp\text{-}\InvOp$ Galois correspondence on the relational side in
such a way that the only relations taken into consideration are those
which are graphs of a function.
This restriction of the preservation relation (underlying
$\PolOp\text{-}\InvOp$) between functions and relations to functions
and function graphs leads to the notion of commutation of functions,
which is exactly the homomorphism property between finite powers of
algebras that
was used above to introduce the concept of centraliser clone. As the
Galois correspondence is restricted on one side only (the relational
one), there is a connection between the associated Galois closures: the
$\PolOp\text{-}\InvOp$ closure~$\Pol{\Inv{F}}$ of a set of
operations~$F$ (which for finite~$A$ agrees with the generated
clone~$\genClone{F}$) is weaker than the bicentrical
closure~$\bicent{F}$, that is, the double centraliser of~$F$, or,
equivalently, all functions commuting with all those functions that commute with the
functions in~$F$. The strength of the bicentrical closure in comparison
to $\Pol{\Inv{}}$ manifests itself in the following way: while
$\Pol{\Inv{F}}$ closes~$F$ against all compositions of
\nbdd{F}functions with themselves and projections (i.e.\ one iteratively
substitutes functions and variables until nothing new appears),
$\bicent{F}$ computes all functions that are primitive positively
definable from the function graphs of~$F$ (i.e.\ one interprets all
existentially quantified finite conjunctions of predicates of the form
$f(\bfa{v}) = x$ and equality predicates $y=z$ (where $f\in F$,
$\bfa{v}$ is a tuple of variables and $x,y,z$ are variables) and among
these interpretations selects those relations that are function
graphs). Functions whose graphs are constructible via such primitive
positive formul\ae{} from~$F$ have been called \emph{parametrically
expressible} through~$F$~\cite[p.~26]{KuznecovCentralisers1979}
(in contrast to functions in the clone~$\genClone{F}$ that are
\emph{explicitly expressible} via~$F$),
and also the connection of this construction with the preservation of
function graphs and the commutation of operations has first been noted
in~\cite[p.~27 et seq.]{KuznecovCentralisers1979}. For this reason
centraliser clones have also been studied under the name
\emph{parametrically closed classes}
(see e.g.~\cite{Danilcenko1978ParametricallyClosedClasses3ValuedLogic})
or~\emph{primitive positive clones}
(e.g.~\cite{BurrisWillardFinitelyManyPPClones}).
\par
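To make the commutation condition underlying these closures concrete, the following Python sketch computes, by brute force, the unary and binary part of the centraliser of the meet operation on the two-element set; the encoding of operations as value tables is ad hoc and purely illustrative.
\begin{verbatim}
# Brute-force commutation check on A = {0, 1}: an n-ary f and an m-ary g
# commute iff for every n x m matrix over A, applying g to the rows and
# then f agrees with applying f to the columns and then g.
from itertools import product

A = (0, 1)

def commute(f, n, g, m):
    for rows in product(product(A, repeat=m), repeat=n):
        cols = tuple(zip(*rows))
        if f(*(g(*r) for r in rows)) != g(*(f(*c) for c in cols)):
            return False
    return True

def all_ops(arity):
    args = list(product(A, repeat=arity))
    for table in product(A, repeat=len(args)):
        lookup = dict(zip(args, table))
        yield lambda *x, lu=lookup: lu[x]

F = [(lambda x, y: x and y, 2)]   # the meet operation on {0, 1}
centraliser = [(g, m) for m in (1, 2) for g in all_ops(m)
               if all(commute(f, n, g, m) for f, n in F)]
print(len(centraliser))   # 8: three unary and five binary operations
\end{verbatim}
The same brute-force scheme extends to larger arities and carrier sets, but its cost grows so quickly that it is useful only for toy examples.
\par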
It may not seem so at first glance, but the parametrical
(primitive positive, bicentrical) closure is notably
much stronger than closure under substitution. Namely, it has the
remarkable consequence that on every finite set~$A$ there are only
finitely many centraliser
clones~\cite[Corollary~4, p.~429]{BurrisWillardFinitelyManyPPClones},
which is in sharp contrast to the situation for polymorphism clones, of
which there is a continuum whenever
$\abs{A}\geq 3$~\cite{JanovMucnik1959}.
If $F$ is a centraliser clone (i.e. $\bicent{F} = F$), then
$\bicentn[1]{F}\subs\bicentn[2]{F}\subs \dotsm \subs
\bicentn{F} \subs F$ holds for all $n<\omega$ and
$\bigcup_{n<\omega} \bicentn{F} =\bicent{F} = F$. Since there are only
finitely many centraliser clones on a given finite set there must be some
$n<\omega$ such that for arities larger than~$n$ none of the inclusions is strict any more,
that is, $\bicentn{F} = \bicentn[m]{F}$ for all $n\leq
m<\omega$. Hence, $F = \bigcup_{j\leq n}\bicentn[j]{F} =
\bicentn{F}$; so there is some arity $n$ such that $F$ is
bicentrically generated by its \nbdd{n}ary part. Take this $n_F$ to be
minimal and then take the maximum over all (finitely many) $n_F$:
\[\cdeg(k)\defeq \max\lset{n_F}{F=\bicent{F} \text{ on } A,\abs{A}=k}.\]
We shall refer to this number as the \emph{uniform centraliser degree}
for a \nbdd{k}element set, since every centraliser clone~$F$ on a
carrier set of size~$k$ satisfies $F = \bicentn[\cdeg(k)]{F}$.
\par
With the help of Post's lattice, one can show that $\cdeg(2)=3$. Burris
and Willard explain in~\cite[p.~429]{BurrisWillardFinitelyManyPPClones}
that $\cdeg(k)\leq 4+k^{k^4-k^3+k^2}$ and they claim
that `[b]y slightly different methods [one] can show that any primitive
positive clone on a \nbdd{k}element set is [bicentrically] generated by
its members of arity at most~$k^k$', which implies $\cdeg(k)\leq k^k$.
No written account of the details of this argument has appeared in the
literature so far. However, at the end of the sentence cited above
Burris and Willard conjecture that $\cdeg(k)\leq k$ for every $k\geq 3$.
Besides intuition the only support for this conjecture is a series of
works by A.\,F.\ Dani\v{l}\v{c}enko on the case
$k=3$~(\cite{Danilcenko1974ParametricallyClosedClasses3ValuedLogic,
Danilcenko1976ParametricallyIndecomposables,
Danilcenko1977ParametricExpressibility3ValuedLogic,
Danilcenko1978ParametricallyClosedClasses3ValuedLogic,
Danilcenko1979-thesis},
all of these are in Russian, \cite{Danilcenko1977ParametricExpressibility3ValuedLogic}
has been translated
in~\cite{Danilcenko1977ParametricExpressibility3ValuedLogicTranslated};
\cite{Danilcenko1979ParametricalExpressibilitykValuedLogic}
is written in English). As a side note we remark that a \nbdd{k}ary
example function, stated
in~\cite[p.~269]{Danilcenko1977ParametricExpressibility3ValuedLogicTranslated}
for a different proof, can be used to show that $\cdeg(k)\geq k$ for
$k\geq 3$; so if the Burris-Willard conjecture is true, then it
certainly is sharp.
\par
In her thesis~\cite[Section~6, p.~125 et seqq.]{Danilcenko1979-thesis} Dani\v{l}\v{c}enko gives a complete description of
all $2\,986$ centraliser clones on the three\dash{}element domain. A central
step in this process is to identify a set~$\Gamma$ of 197 parametrically
indecomposable functions~\cite[Theorem~4,
p.~103]{Danilcenko1979-thesis} such that every centraliser clone~$F$ is
the centraliser of a subset of~$\Gamma$~\cite[Theorem~5,
p.~105]{Danilcenko1979-thesis}. The maximum arity of functions
in~$\Gamma$ is three, so Dani\v{l}\v{c}enko's theorems imply that
$F = \bicentn[3]{F}$ for every centraliser clone on three
elements (cf.\
Proposition~\ref{prop:char-cdeg}\eqref{item:bicent-n},\eqref{item:cent-leq-n}), that is, $\cdeg(3)\leq3$. The
results of Theorems~4 and~5 of~\cite{Danilcenko1979-thesis}, which make
the Burris-Willard conjecture true for $k=3$, are also mentioned
in~\cite[p.~155 et
seq.]{Danilcenko1979ParametricalExpressibilitykValuedLogic}
and~\cite[Section~5,
p.~414 et seq.]{Danilcenko1977ParametricExpressibility3ValuedLogic} (\cite[Section~5,
p.~279]{Danilcenko1977ParametricExpressibility3ValuedLogicTranslated},
respectively), but no proofs are given there.
\par
Drastically cut down versions of this work have been published
in~\cite{Danilcenko1974ParametricallyClosedClasses3ValuedLogic,
Danilcenko1976ParametricallyIndecomposables,
Danilcenko1977ParametricExpressibility3ValuedLogic,
Danilcenko1978ParametricallyClosedClasses3ValuedLogic,
Danilcenko1979ParametricalExpressibilitykValuedLogic},
of which
only~\cite{Danilcenko1977ParametricExpressibility3ValuedLogicTranslated,Danilcenko1979ParametricalExpressibilitykValuedLogic}
are accessible without difficulties. Given that the whole thesis
comprises~141 pages, these excerpts are rough sketches of the
classification at best (sometimes containing mistakes, many but not all
of which have been corrected in~\cite{Danilcenko1979-thesis}), and
leading experts in the field agree that it is very hard if not
impossible to reconstruct the proof of the description of all
centralisers on three\dash{}element sets from the readily available
resources. For example, Theorem~4 of~\cite{Danilcenko1979-thesis} has
appeared as part of~\cite[Proposition~2.2,
p.~16]{Danilcenko1978ParametricallyClosedClasses3ValuedLogic} with a
proof sketch of less than two pages, while the proof
from~\cite{Danilcenko1979-thesis} goes through technical calculations
and case distinctions for several pages
(however from Propositions~2.2, 2.3 and~2.4
of~\cite{Danilcenko1978ParametricallyClosedClasses3ValuedLogic}, a proof
of Theorem~5 of~\cite{Danilcenko1979-thesis} \emph{can} be obtained).
The chances of understanding might be better using the thesis as a primary source, but
for unknown reasons Moldovan librarians seem to be rather reluctant to
grant full access to it. In the light of this discussion,
Dani\v{l}\v{c}enko's classification is a result that one may believe in,
but that should not be trusted unconditionally to build further theory
on as it remains not easily verifiable at the moment. This of course
also casts some doubts on the basis of the Burris-Willard conjecture.
\par
Another possible challenge to the conjecture (and likewise to the
correctness of Dani\v{l}\v{c}enko's list of parametrically
indecomposable functions) is presented by much
later results of Snow~\cite{SnowGeneratingPrimitivePositiveClones}. In
this article the minimum arity needed to generate the bicentraliser
clone of a finite algebra from its term operations is investigated, and,
under certain assumptions on the algebra, quite satisfactory upper
bounds for that number are produced. These sometimes match (or almost
match) the number~$k$ predicted by the Burris-Willard conjecture, and
sometimes even fall below~$k$. This is possible (and supports the
conjecture) since the bounds given by Snow do not apply to \emph{all}
algebras on a \nbdd{k}element set, but only to some specific subclass.
Hence, they are not in contradiction with the \nbdd{k}ary function
from~\cite[p.~269]{Danilcenko1977ParametricExpressibility3ValuedLogicTranslated}.
Even more interestingly, in Section~3
of~\cite{SnowGeneratingPrimitivePositiveClones}
a class of examples of algebras on \nbdd{k}element carrier sets is given,
for which Snow proves $(k-1)^2$ to be a lower bound for the minimum
arity of term functions from which the bicentraliser can be generated.
This number is larger than~$k$ whenever $k\geq 3$. Explicitly,
when $k=3$, the lower bound is equal to~$4$, which means that arity
three or less does not suffice to generate the bicentraliser clone of
that specific algebra.
\par
In more detail, Snow defines for an algebra \m{\alg{A}} with set~$F$ of
fundamental operations the number
$\ppc(\alg{A})= \min\lset{n\in\N}{ \bicent{\genClone{F}} = \bicent{{\genClone[n]{F}}}}$.
This number certainly only depends on the clone \m{\genClone{F}} of term
operations of the algebra, hence no generality is lost in simply
considering the number
\[
\mu_F \defeq
\min\lset{n\in\N}{ \bicent{\genClone{F}} = \bicent{{\genClone[n]{F}}}}
=\min\lset{n\in\N}{ \bicent{F} =\bicentn{F}}
\]
associated with clones~$F$ on a \nbdd{k}element set~$A$.
If \m{F} happens to be a centraliser clone, the definition clearly
simplifies to
\m{\mu_F = \min\lset{n\in\N}{F = \bicentn{F}} = n_F}, which is
bounded above by \m{\cdeg(k)}.
However, if now~$F$ is the clone of term operations of the example constructed
by~Snow, then the lower bound on~$\mu_F$ from~\cite[Theorem~3.1,
p.~171]{SnowGeneratingPrimitivePositiveClones} implies the
following contradiction
\[k<(k-1)^2\leq \mu_F \stackrel{?_2}{=} n_F\leq \cdeg(k)
\stackrel{?_1}{\leq} k.\]
This leaves two possible conclusions: either $?_1$ does not hold, which means
that the Burris-Willard conjecture and, in particular, the
Dani\v{l}\v{c}enko classification on three\dash{}element domains fail,
or $?_2$ is false, which simply means that~$F$ is not a centraliser
clone.
If we are to believe in Dani\v{l}\v{c}enko's theorems, then (for $k=3$)
the latter is the only possible consequence. However, for the reasons
mentioned above, it would be desirable to obtain such a conclusion
independently of Dani\v{l}\v{c}enko's \oe{}uvre.
\par
Such is the aim of the present article. We are going to give a proof
that the clone~$F$ of term operations of the algebra given
in~\cite[Theorem~3.1, p.~171]{SnowGeneratingPrimitivePositiveClones} is
not bicentrically closed and hence poses no threat to the Burris-Willard
conjecture. To do this, for every $k\geq 3$ we exhibit a
\nbdd{(k-1)}ary
function in~$\bicent{F}$ that cannot be obtained by composition of the
fundamental operation(s) of Snow's algebra. In doing so we use the case
$k=3$ as a guideline, where we show, for example, that~$F$
and~$\bicent{F}$ cannot be separated by unary functions, and that the
mentioned operation is the only separating binary function.
\section{Notation and preliminaries}
Throughout we use \m{\N = \set{0,1,2,\dotsc}} to denote the set of
natural numbers, and we write \m{\Np} for~$\N\setminus\set{0}$. It will
be convenient for us to understand the elements $n\in\N$ as
\nbdd{n}element sets $n= \set{0,1,\dotsc,n-1}$ as originally suggested
by John von Neumann in his model of natural numbers as finite ordinals.
\par
Among the central concepts of this paper are functions, such as
$f\colon A\to B$ and $g\colon B\to C$, and we use the usual
right\dash{}to\dash{}left notation for composition. That is, \m{g\circ
f\colon A\to C} sends any $x\in A$ to $g(f(x))$. The set of all
functions from~$A$ to~$B$ is written as~$B^A$.
Moreover, if \m{f\in B^A} and \m{U\subs A} and \m{V\subs B} we denote
by \m{f\fapply{U} = \set{f(x)\mid x\in U}} the \emph{image of~$U$
under~$f$} and by \m{f^{-1}\fapply{V} = \set{x\in A \mid f(x)\in V}} the
\emph{preimage of~$V$ under~$f$}. We also use the symbol~$\im f$ to
denote the full \emph{image} $f\fapply{A}$ of~$f$. All these
notational conventions will apply in particular to tuples \m{\bfa{x}\in
A^n}, \m{n\in\N}, that we formally understand as maps
\m{\bfa{x}\colon \set{0,\dotsc,n-1}\to A}. This does, of course, not
preclude us from using a different indexing for the entries
of \m{\bfa{x}=(x_1,\dotsc,x_n)}, if that seems more handy. So, e.g.,
we have \m{\im \bfa{x} = \set{x_1,\dotsc,x_n}} and \m{f\circ
\bfa{x} = (f(x_1),\dotsc, f(x_n))\in B^n}.
Notably, we are
interested in functions of the form $f\colon A^n\to A$ that we call
\emph{\nbdd{n}ary operations} on~$A$. All such operations form the set
$A^{A^n}$, and if we let the parameter~$n$ vary in~$\Np$, then we obtain
the set \m{\Op{A}= \bigcup_{0<n<\omega} A^{A^n}} of all \emph{finitary
(non\dash{}nullary) operations} over~$A$. If \m{F\subs\Op{A}} is any set of finitary
operations, we denote by \m{\Fn{F}\defeq A^{A^n}\cap F} its
\emph{\nbdd{n}ary part}. In particular, $\Op[n]{A} = A^{A^n}$.
Some specific \nbdd{n}ary operations will be needed: for $a\in A$ we
denote the constant \nbdd{n}ary function with value~$a$ by
$\cna[n]{a}\colon A^n\to A$. Moreover, if \m{n\in\N} and
\m{1\leq i\leq n} we call \m{\eni[n]{i}\colon A^n\to A}, given by
\m{\eni[n]{i}(x_1,\dotsc,x_n)\defeq x_i} for all
\m{(x_1,\dotsc,x_n)\in A^n}, the \nbdd{i}th \nbdd{n}variable
\emph{projection} on~$A$. Collecting all projections on~$A$ in one set,
we obtain
\m{\J{A} = \lset{\eni[n]{i}}{1\leq i\leq n, n\in\N}}.
\par
We call a set $F\subs\Op{A}$ a \emph{(concrete) clone} on~$A$ if
$\J{A}\subs F$ and if~$F$ is closed under composition, i.e.,
whenever \m{m,n\in\N} and \m{f\in\Fn{F}},
\m{g_1,\dotsc,g_n\in\Fn[m]{F}}, then
also the composition \m{f\circ(g_1,\dotsc,g_n)}, given by
\m{(f\circ(g_1,\dotsc,g_n))(\bfa{x}) \defeq
f(g_1(\bfa{x}),\dotsc,g_n(\bfa{x}))} for any \m{\bfa{x}\in A^m},
belongs to the set~$F$. All sets of operations that were named `clone'
in the introduction are indeed clones in this sense (except for the
fact that they were allowed to contain nullary operations, which we
want to exclude to avoid unnecessary technicalities). Clones are closed
under intersections, and hence for any set $G\subs\Op{A}$ there is a
least clone~$F$ under inclusion with the property \m{G\subs F}. This
clone~$F$ is called the \emph{clone generated by~$G$} and is denoted
as~$\genClone{G}$. It is computed by adding all projections to~$G$ and
then closing under composition, that is, by forming all term operations
(of any positive arity) over the algebra~$\algwops{A}{G}$.
\par
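The composition rule just described is easy to express in executable
form. The following Python sketch (our own illustration, not part of
the ancillary files; all names in it are ad hoc) implements the
operator \m{f\circ(g_1,\dotsc,g_n)} and checks one small instance over
a two-element set:
\begin{verbatim}
from itertools import product

def compose(f, gs):
    # clone composition f o (g_1, ..., g_n); all g_i share one arity m
    return lambda *x: f(*(g(*x) for g in gs))

# demo on A = {0, 1}: substituting the swapped projections into AND
AND = lambda x, y: x and y
e1 = lambda x, y: x                  # first binary projection
e2 = lambda x, y: y                  # second binary projection
h = compose(AND, (e2, e1))           # h(x, y) = AND(y, x)
assert all(h(x, y) == AND(x, y) for x, y in product((0, 1), repeat=2))
\end{verbatim}
Iterating such compositions, starting from the projections, computes
the \nbdd{m}ary part of \m{\genClone{G}} for small finite examples; we
shall exploit this idea later for the clone generated by Snow's
operation.
\par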
A function \m{f\in\Op[n]{A}} \emph{preserves} a relation
\m{\rho\subs A^m} (with \m{m,n\in\N}) if for every
\m{\bfa{r}=(r_1,\dotsc,r_n)\in\rho^n} the
tuple \m{f\circ\bfa{r}\defeq (f(r_1(i),\dotsc,r_n(i)))_{1\leq i\leq m}}
belongs to~$\rho$. For a set~$Q$ of finitary relations, the
set \m{\Pol{Q}} of polymorphisms of~$Q$ consists of all functions
preserving all relations belonging to~$Q$. Every polymorphism set is a
clone. Dually, for a set~$F\subs\Op{A}$, the set \m{\Inv{F}} contains
all \emph{invariant} relations of~$F$, that is, all relations being
preserved by all functions in~$F$.
\par
For the convenience of the reader we now give a perhaps more accessible
characterisation of the (non\dash{}nullary part of the) centraliser \m{\cent{F}} of some set of
operations~$F\subs\Op{A}$, which was already defined at the beginning
of the introduction. A function \m{g\colon A^m\to A} belongs to the
centraliser~$\cent{F}$ (\emph{commutes} with all functions from~$F$) if for every function \m{f\in F} the following
holds (where~$n$ is the arity of~$f$): for every matrix \m{X\in
A^{m\times n}}, applying~$g$ to the \nbdd{m}tuple obtained from
applying~$f$ to the rows of~$X$ gives the same result as evaluating~$f$
on the \nbdd{n}tuple obtained from applying~$g$ to the columns of the
matrix. In symbols:
\m{g((f((x_{ij})_{j\in n}))_{i\in m})
= f((g((x_{ij})_{i\in m}))_{j\in n})}
has to hold for all
\m{(x_{ij})_{(i,j)\in m\times n}\in A^{m\times n}}
(and all $f\in F$).
A brief moment of reflection shows that this condition is the same as
saying that \m{g\colon \algwops{A}{F}^m\to \algwops{A}{F}} is a
homomorphism. A yet different way of saying this is that~$g$ is a
polymorphism of~$\mathbb{A}=\algwops{A}{\graph{F}}$, that is,
$g\in\Pol{\graph{F}}$ preserves all graphs
\m{\graph{f}=\lset{(\bfa{x},f(\bfa{x}))}{\bfa{x}\in A^n}\subs A^{n+1}}
of all functions \m{f\in F} of any arity \m{n\in \N}. From this, it is
again clear that \m{\cent{F}} always must be a clone. On the other
hand, it is obvious from the matrix formulation that centralisation is
a symmetric condition:
for all \m{F,G\subs\Op{A}} we have \m{G\subs \cent{F}} if and only if
\m{F\subs \cent{G}}. Hence, we see that
\begin{align*}
\cent{F} &= \lset{g\in\Op{A}}{g\in\cent{F}}
=\lset{g\in\Op{A}}{F\subs\cent{\set{g}}}\\
&=\lset{g\in\Op{A}}{\genClone{F}\subs\cent{\set{g}}}
=\lset{g\in\Op{A}}{g\in\cent{\genClone{F}}} = \cent{\genClone{F}}
\end{align*}
for every \m{F\subs\Op{A}}, so the centraliser of a whole clone
coincides with the centraliser of any of its generating sets. Since the clone
constructed in Snow's paper is given in terms of a single generator
function, we can thus study its centraliser as the set of all operations
commuting with this one generating function.
\par
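The matrix condition translates directly into a finite brute-force
test. The following Python sketch (our own illustration with ad hoc
names, not taken from the ancillary files) checks commutation of two
given operations; as a sanity check it confirms that on the chain
\m{\set{0,1,2}} the binary maximum commutes with itself, while the
cyclic successor does not commute with it.
\begin{verbatim}
from itertools import product

def commutes(g, m, f, n, A):
    # for every (m x n) matrix X over A: g applied to the f-values of
    # the rows must equal f applied to the g-values of the columns
    for X in product(A, repeat=m * n):
        rows = [X[i * n:(i + 1) * n] for i in range(m)]
        cols = [X[j::n] for j in range(n)]
        if g(*(f(*r) for r in rows)) != f(*(g(*c) for c in cols)):
            return False
    return True

s = lambda x: (x + 1) % 3                    # cyclic successor on {0,1,2}
assert commutes(max, 2, max, 2, range(3))    # max commutes with itself
assert not commutes(s, 1, max, 2, range(3))  # the successor does not
\end{verbatim}
\par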
In the introduction the uniform centraliser degree was defined as the
least arity~$n$ such that every centraliser clone~$F$ on a given finite
set can be bicentrically generated as \m{F=\bicentn{F}}. The
following result shows that the search for this number is likewise a
search for an arity~$n$ such that every centraliser clone is a
centraliser of a set of functions of arity at most~$n$.
\begin{proposition}\label{prop:char-cdeg}
For any carrier set~$A$ and an integer~$n\in\N$ the following facts are
equivalent:
\begin{enumerate}[(a)]
\item\label{item:bicent-n}
For every centraliser clone~$F$ we have \m{F=\bicentn{F}}.
\item\label{item:n-cent}
For every centraliser clone~$F$ we have
\m{\centn{F}=\cent{F}}.
\item\label{item:n-cent-n-cent}
For every centraliser clone~$F$ we have
$F^{(n)*(n)*}=F$.
\item\label{item:cent-leq-n}
For every centraliser clone~$F$ there is some
\m{G\subs \bigcup_{\ell\leq n} \Op[\ell]{A}} such that
\m{F=\cent{G}}.
\item\label{item:cent-n}
For every centraliser clone~$F$ there is some
\m{G\subs \Op[n]{A}} such that
\m{F=\cent{G}}.
\item\label{item:cent-n-cent-bicent}
For every set~$F\subs\Op{A}$ we have
$F^{*(n)*}=\bicent{F}$.
\item\label{item:cent-n-cent}
For every centraliser clone~$F$ we have
$F^{*(n)*}=F$.
\end{enumerate}
\end{proposition}
\begin{proof}
If~\eqref{item:bicent-n} holds and~$F$ is a centraliser clone, then
\m{\cent{F}=F^{(n)***}=\centn{F}}, so~\eqref{item:n-cent} is
true. If~\eqref{item:n-cent} holds, then
\m{F=\bicent{F}=\bicentn{F}} for any centraliser clone~$F$, so
\m{\eqref{item:bicent-n}\Leftrightarrow\eqref{item:n-cent}}.
\par
Suppose now that~\eqref{item:bicent-n}, and thus~\eqref{item:n-cent},
hold. Letting \m{G\defeq\centn{F}} for a centraliser clone~$F$,
we have \m{F=\bicentn{F}=\cent{G}} from~\eqref{item:bicent-n}.
Applying now~\eqref{item:n-cent} to the centraliser~$G$ gives
\m{F=\cent{G}=\centn{G}=F^{(n)*(n)*}},
so~\eqref{item:bicent-n} implies~\eqref{item:n-cent-n-cent}.
\par
From~\eqref{item:n-cent-n-cent} we get~\eqref{item:cent-n} by letting
\m{G=F^{(n)*(n)}\subs\Op[n]{A}}, and~\eqref{item:cent-n}
directly gives~\eqref{item:cent-leq-n}.
\par
Now, suppose that~\eqref{item:cent-leq-n} holds for~$F$ with
functions~$G$ of arity at most~$n$.
Since we have excluded nullary operations, this implies that
\m{G\subs \genClone{\genClone[n]{G}}}, so we obtain
\m{\cent{G}\sups\cent{\genClone{\genClone[n]{G}}} =
\cent{{\genClone[n]{G}}}\sups\cent{\genClone{G}}=\cent{G}},
which means that \m{F=\cent{G} = \cent{H}} where
\m{H\defeq\genClone[n]{G}\subs\Op[n]{A}}. Thus
\m{\eqref{item:cent-n}\Leftrightarrow\eqref{item:cent-leq-n}}.
\par
From~\eqref{item:cent-n}, for every \m{F\subs\Op{A}}, we can express
the bicentraliser \m{\bicent{F}=\cent{G}} with some \m{G\subs\Op[n]{A}}.
Clearly, \m{G\subs\bicent{G}=\cent{F}}, so
\m{G\subs\ncent{F}\subs\cent{F}}.
Therefore, we obtain
\m{\bicent{F}=\cent{G}\sups F^{*(n)*}\sups\bicent{F}},
i.e.~\eqref{item:cent-n-cent-bicent}. The latter
entails~\eqref{item:cent-n-cent} as a special case, for every
centraliser clone~$F$ satisfies \m{\bicent{F}=F}. Moreover,
\eqref{item:cent-n-cent} directly gives~\eqref{item:cent-n} by letting
\m{G\defeq \ncent{F}\subs\Op[n]{A}}.
\par
It remains to show that~\eqref{item:cent-n-cent}
implies~\eqref{item:bicent-n}. Namely, for a centraliser clone~$F$,
applying~\eqref{item:cent-n-cent} to \m{G=\cent{F}}, we get
\m{G=G^{*(n)*}=F^{**(n)*} = \centn{F}}, so
\m{\bicentn{F}=\cent{G}=F}.
\end{proof}
\begin{remark}\label{rem:equivalences-for-one-F}
A closer inspection of the proof of Proposition~\ref{prop:char-cdeg}
reveals that for an individual centraliser clone~$F$ the conditions in
statements~\eqref{item:bicent-n} and~\eqref{item:n-cent} are equivalent
without the universal quantifier. The same holds for the
facts~\eqref{item:cent-leq-n}, \eqref{item:cent-n}
and~\eqref{item:cent-n-cent}.
\end{remark}
Let us now assume that~$F$ denotes the clone constructed by Snow
in~\cite{SnowGeneratingPrimitivePositiveClones}. It is our aim to show
that there is a separating function \m{f\in\bicent{F}\setminus F}.
Since the clone~$F$ is given
in~\cite{SnowGeneratingPrimitivePositiveClones} as
\m{F=\genClone{\set{T}}} by means of a generating function~$T$, once
we have selected an \nbdd{n}ary candidate function~$f$, it is not too
hard to show that \m{f\notin F}. One simply has to describe the
\nbdd{n}ary term operations of~$T$ and to show that~$f$ is not among
them. The harder part is to choose a suitable function
\m{f\in\bicent{F}}: by the definition of the bicentraliser one first
has to understand the whole set~$\cent{F}$ in order to
calculate~$\bicent{F}$. As~$\cent{F}$ contains functions of all arities
this task may require infinitely many steps. Admittedly, there is an
upper bound on the arities that have to be considered, but this bound
is connected to~$\cdeg(\abs{A})$ (see
\m{\eqref{item:bicent-n}\Leftrightarrow\eqref{item:cent-n-cent-bicent}}
in Proposition~\ref{prop:char-cdeg}) and hence under current knowledge
the number of steps is at least exponentially big.
\par
As a way out of this dilemma, we can however consider upper
approximations of~$\bicent{F}$. Namely, if we cut down the centraliser
at some arity~$\ell$, then \m{F^{*(\ell)*}\sups\bicent{F}}.
The smaller~$\ell$ the coarser these approximations are,
but also the easier it becomes to describe \m{\ncent[\ell]{F}}.
In the subsequent section we shall employ a strategy, where we
always start with the least interesting arity~$\ell=1$; it turns
out that this already produces good results by ruling out many
functions that cannot belong to~$\bicent{F}$.
\par
To obtain more information about \m{\ncent[\ell]{F}} for
some fixed~$\ell$, it will be important to derive as many necessary
conditions as possible to help to narrow down the possible candidate
functions in the centraliser. This is done by observing that any
\m{g\in \cent{F}=\Pol{\graph{F}}} belongs to
\m{\Pol{\Inv{\Pol{\graph{F}}}}} and thus has to
preserve all relations in the relational clone
\m{\Inv{\Pol{\graph{F}}}} generated by the graphs of the functions
from~$F$. This set contains all relations that can be defined via
primitive positive formul\ae{} from \m{\graph{F}=\lset{\graph{f}}{f\in
F}}, and among these there are a few well\dash{}known candidates: the image,
the set of fixed points and the kernel of any function \m{f\in\Fn{F}}:
\begin{align*}
\im(f)&= \lset{z\in A}{\exists x_1,\dotsc,x_n\in A\colon z =
f(x_1,\dotsc,x_n)},\\
\fix(f) &= \lset{z\in A}{f(z,\dotsc,z)=z},\\
\ker(f) &= \lset{(x_1,\dotsc,x_{2n})\in
A^{2n}}{\exists z\in A\colon f(x_1,\dotsc,x_n)=z
=f(x_{n+1},\dotsc,x_{2n})}.
\end{align*}
\par
To make this more concrete, we now give the generating
function~\m{T\in\Op[n^2]{A}} for the clone \m{F=\genClone{\set{T}}}
where \m{\mu_F\geq n^2} on \m{A=\set{0,\dotsc,n}}, \m{n\geq 2}
(see p.~172 of~\cite{SnowGeneratingPrimitivePositiveClones}):
\m{T\apply{x_{11},\dotsc,x_{1n},x_{21},\dotsc,x_{2n},\dotsc,x_{n1},\dotsc,x_{nn}}=1}
if \m{x_{ij}=i} for all \m{i,j\in\set{1,\dotsc,n}} or
\m{x_{ij}=j} for all \m{i,j\in\set{1,\dotsc,n}}, and
it is zero for all other arguments.
Hence, \m{\im(T) = \set{0,1}}, \m{\fix(T) = \set{0}} and
\m{\ker(T)} identifies
\m{(1,2,\dotsc,n,\dotsc,1,2,\dotsc,n)} with
\m{(1,1,\dotsc,1,\dotsc,n,n,\dotsc,n)} in one block, and all other
\nbdd{n^2}tuples in a second block.
\par
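For experiments it is helpful to have Snow's generating function
available in executable form. The Python sketch below (our own
illustration; the name \texttt{snow\_T} is an ad hoc choice) implements
$T$ for arbitrary \m{k\geq 3} and replays the above observations about
\m{\im(T)}, \m{\fix(T)} and \m{T^{-1}\fapply{\set{1}}} in the smallest
case \m{k=3}:
\begin{verbatim}
from itertools import product

def snow_T(k):
    # Snow's (k-1)^2-ary operation on A = {0,...,k-1}: value 1 on the
    # two squares x_ij = i and x_ij = j (read row-wise), 0 otherwise
    n = k - 1
    rows = tuple(i for i in range(1, n + 1) for _ in range(n))  # x_ij = i
    cols = tuple(range(1, n + 1)) * n                           # x_ij = j
    return lambda *x: 1 if x in (rows, cols) else 0

T, A = snow_T(3), range(3)            # k = 3: T is 4-ary on {0, 1, 2}
ones = {x for x in product(A, repeat=4) if T(*x) == 1}
assert ones == {(1, 1, 2, 2), (1, 2, 1, 2)}
assert {T(*x) for x in product(A, repeat=4)} == {0, 1}  # im(T)
assert [z for z in A if T(z, z, z, z) == z] == [0]      # fix(T)
\end{verbatim}
\par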
Eventually, after we have found a suitable candidate function
\m{f\notin F=\genClone{\set{T}}}, upper approximations
\m{F^{*(\ell)*}} will not any more be enough to prove
that \m{f\in\bicent{F}} (unless we use an exponentially high value
for~$\ell$, cf.\
Proposition~\ref{prop:char-cdeg}\eqref{item:bicent-n},\eqref{item:cent-n-cent-bicent}). Instead, we can apply a Galois theoretic trick. Namely,
\m{f\in\bicent{F}} if and only if
\m{\cent{F}\subs\cent{\set{f}}=\Pol{\set{\graph f}}}, which is
equivalent to \m{\graph f\in\Inv{\cent{F}} = \Inv{\Pol{\graph{F}}}}.
As the carrier set is finite, this means that the graph of~$f$ must
belong to the relational clone generated from the graphs of functions
in~$F$, i.e., that it is primitive positively definable from those
graphs. Finding a primitive positive formula, which does the job,
requires some creativity, and we will try our best to give some
intuition how it can be found in the case where $\abs{A}=3$. For the
general case $\abs{A}=k\geq 3$ we shall only state the generalisation
of the respective formula and verify that it suffices to define the
graph of a \nbdd{(k-1)}ary function that does not belong to~$F$.
\section{Separating a clone from its bicentraliser}
For the remainder of the paper we let \m{A=\set{0,1,\dotsc,k-1}} where
\m{k\geq 3}, and we consider the clone \m{F=\genClone{\set{T}}}
constructed by Snow
in~\cite[Section~3]{SnowGeneratingPrimitivePositiveClones}. For the
definition of the \nbdd{(k-1)^2}ary generating function~$T$, see the
end of the preceding section.
\par
It is our task to identify some arity~$n$ and some \nbdd{n}ary
operation \m{f\in\Op[n]{A}} such that
\m{f\in\bicent{F} = \bicent{\set{T}}}, but
\m{f\notin F=\genClone{\set{T}}}. In order to avoid a combinatorial
explosion of the structure of the involved clones, it is of course
desirable to keep the arity~$n$ as low as possible. Hence, we shall
start with a description of~\m{\genClone[n]{\set{T}}} for \m{n<k-1}.
Then, using the method of upper approximations, we shall show that it
is impossible to find a separating \m{f\in\bicent{\set{T}}} of such a
low arity. So the next step will be to consider $n=k-1$. Here, we will
first study the case $k=3$, where we can show that there is a unique
function of arity \m{n=k-1=2}, for which we can prove
\m{f\in\bicent{\set{T}}}, but \m{f\notin \genClone[2]{\set{T}}}.
Subsequently, we shall demonstrate that the construction of this
particular~$f$ (and the proof of \m{f\in\bicent{\set{T}}}) can be
generalised to any $k\geq 3$.
\begin{lemma}\label{lem:Tclone-small}
For any $k\geq 3$ we have
\m{\genClone[n]{\set{T}} = \J[n]{A} \cup\set{\cna[n]{0}}} for all $1\leq n<k-1$
where $\cna[n]{0}$ denotes the \nbdd{n}ary constant zero function.
\end{lemma}
\begin{proof}
We have $\cna[n]{0}= \composition{T}{\eni{1},\dotsc,\eni{1}}$ since~$T$
maps every constant tuple to~$0$. Thus the mentioned functions belong to
the \nbdd{n}ary part of~$\genClone{\set{T}}$.
Moreover, the given set is a subalgebra of \m{\algwops{A}{T}^{A^n}}: namely
every composition of~$T$ with functions at least one of which is
$\cna[n]{0}$ is $\cna[n]{0}$. This is so since~$T$ maps every tuple
containing a zero entry to zero. Furthermore, every composition of~$T$
involving only (some of) the~$n$ projections is also~$\cna[n]{0}$
as~$T$ maps every tuple with at most~$n<k-1$ distinct entries to zero.
\end{proof}
To describe \m{\nbicent{\set{T}}} for \m{0<n<k-1}, we shall study
lower approximations of~\m{\cent{\set{T}}}. We begin by cutting the
arity at the level \m{\ell=1}.
\begin{lemma}\label{lem:Tstar1}
For~$A=\set{0,\dotsc,k-1}$ of size $k\geq 3$ we have\footnote{%
For $k=3$ the correctness of this lemma can be checked with the
Z3-solver~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3} using
the ancillary file \texttt{unaryfunccommutingT.z3}.
It can also be seen from the file \texttt{Tcent1.txt} produced by the
function \texttt{findallunaries()} from the ancillary file
\texttt{commutationTs.cpp}.}
\[\ncent[1]{\set{T}}
= \set{\id_A} \cup \lset{f\in\Op[1]{A}}{f(0)=f(1)=0}.\]
\end{lemma}
\begin{proof}
Let us fix \m{f\in\Op[1]{A}}, commuting with \m{T}.
Since \m{f\in \Pol{\set{\fix(T)}}}, we have \m{f(0) = 0}. Moreover, since $f$
preserves the image of~$T$, we must have \m{f(1)\in\set{0,1}}. If
$f(1)=0$, we are done. Otherwise, if $f(1)=1$, we shall show that
$f=\id_A$. Namely, since $f$ and~$T$ commute, we have
\begin{align*}
1= f(1) &= f(T(1,\dotsc,1,2,\dotsc,2,\dotsc,k-1,\dotsc,k-1))\\
&=T(f(1),\dotsc,f(1),f(2),\dotsc,f(2),\dotsc,f(k-1),\dotsc,f(k-1)),
\end{align*}
which implies that
\begin{align*}
(f(1),\dotsc,f(1),&f(2),\dotsc,f(2),\dotsc,f(k-1),\dotsc,f(k-1))\\
&\in
T^{-1}[\set{1}]\setminus\set{(1,2,\dotsc,k-1,1,2,\dotsc,k-1,\dotsc,1,2,\dotsc,k-1)}\\
&=\set{(1,\dotsc,1,2,\dotsc,2,\dotsc,k-1,\dotsc,k-1)},
\end{align*}
where the subtracted tuple is ruled out because the first $k-1$ entries
of the left\dash{}hand side all equal $f(1)=1$, while its first $k-1$
entries are pairwise distinct; whence clearly $f(x) = x$ for all
$0<x<k$, i.e.\ \m{f=\id_A}.
\par
Conversely, we prove that every $f\in\Op[1]{A}$ with $f(0)=f(1)=0$
commutes with~$T$. Assume, for a contradiction, that for some
\m{\bfa{x}\in A^{(k-1)^2}} we had $T(f\circ\bfa{x})=1$; then
\m{\set{1,\dotsc,k-1} = \im (f\circ\bfa{x}) \subs \im f}, which together
with \m{f(0)=0} would make~$f$
surjective, and, by finiteness of~$A$, bijective. This would contradict
$f(0)=f(1)=0$, so for every \m{\bfa{x}\in A^{(k-1)^2}} we have
$T(f\circ \bfa{x}) = 0 = f(0) = f(1) = f(T(\bfa{x}))$,
since \m{T(\bfa{x})\in \set{0,1}}. Thus~$f\in\cent{\set{T}}$.
\end{proof}
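For \m{k=3} this lemma can also be replayed by exhaustive search over
all \m{3^3=27} unary value tables. The following self-contained Python
sketch (our own illustration, independent of the ancillary files) does
precisely that:
\begin{verbatim}
from itertools import product

A = (0, 1, 2)
def T(a, b, c, d):
    return 1 if (a, b, c, d) in ((1, 1, 2, 2), (1, 2, 1, 2)) else 0

def commutes_with_T(u):   # unary case: u(T(x)) = T(u o x) for all x
    return all(u(T(*x)) == T(*(u(xi) for xi in x))
               for x in product(A, repeat=4))

cent1 = {t for t in product(A, repeat=3)      # t encodes u via u(z) = t[z]
         if commutes_with_T(lambda z, t=t: t[z])}
expected = ({(0, 1, 2)}                       # the identity ...
            | {t for t in product(A, repeat=3) if t[0] == t[1] == 0})
assert cent1 == expected and len(cent1) == 4  # ... plus f(0) = f(1) = 0
\end{verbatim}
\par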
\begin{corollary}\label{cor:Tstar1}
For $A=\set{0,\dotsc,k-1}$ of cardinality~$k\geq 3$, we have the inclusion
\[\ncent[1]{\set{T}}
\sups \lset{u_{j,a}}{a\in A\land j\in A\setminus\set{0,1}},\]
where \m{u_{j,a}} is given by the rule
\[u_{j,a}(x) = \begin{cases} a&\text{if }x=j,\\
0&\text{otherwise.}
\end{cases}\]
\end{corollary}
The binary part of the centraliser already becomes rather obscure
in the general case. So we only give a description for
the case $k=3$ (which can certainly also be verified by a brute-force
enumeration using a computer).
\begin{lemma}\label{lem:Tstar2}
For \m{A=\set{0,1,2}} the set \m{\ncent[2]{\set{T}}} consists precisely of the following 65 functions
\begin{align*}
\ncent[2]{\set{T}}= \set{\eni[2]{1},\eni[2]{2}}
&{}\disjointunion \lset{z_a}{a\in \set{0,1,2}}\\
&{}\disjointunion \bigcup_{c\in\set{1,2}}
\lset{f_{a,\bfa{x}}}{a\in\set{0,c}\land \bfa{x}\in\set{0,c}^4\setminus\set{(0,0,0,0)}}
\end{align*}
given by the following tables\footnote{%
The correctness of this lemma (and its proof) can be checked with the
Z3-solver~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3} using the
ancillary file \texttt{binaryfunccommutingT.z3}. The completeness of
the list of 65 operations can also be verified with the function
\texttt{findallbinaries()} from the ancillary file
\texttt{commutationTs.cpp}, resulting in the file \texttt{Tcent2.txt}.}:
\begin{align*}
&\begin{array}{r|*{3}{c}}
z_a(x\backslash y)& 0&1&2\\\hline
0& 0& 0& 0\\
1& 0& 0& 0\\
2& 0& 0& a
\end{array}&
&\begin{array}{r|*{3}{c}}
f_{a,(b,c,d,e)}(x\backslash y)& 0&1&2\\\hline
0& 0& 0& b\\
1& 0& 0& c\\
2& d& e& a
\end{array}
\end{align*}
\end{lemma}
\begin{proof}
Using a case distinction, one can verify that every function
\m{f\in \ncent[2]{\set{T}}} must be among the ones mentioned in
the lemma.
\begin{enumerate}[1.]
\item Assume \m{f(1,1) = 1}. We can show that \m{f(2,2)=2} and
\m{\set{f(1,2),f(2,1)} = \set{1,2}}. Namely, \m{f\in\cent{\set{T}}}
implies
\begin{multline*}
1=f(1,1) = f(T(1,1,2,2),T(1,2,1,2))\\
=T(f(1,1),f(1,2),f(2,1),f(2,2)) = T(1,f(1,2),f(2,1),f(2,2)),
\end{multline*}
which is only possible if \m{f(2,2)=2} and
\m{(f(1,2),f(2,1))\in\set{(1,2),(2,1)}}.
\begin{enumerate}[{1.}1.]
\item Assume \m{f(1,2) =1} and \m{f(2,1) =2}. It follows that
\m{f=\eni[2]{1}}.
In fact, our assumption \m{f\in\cent{\set{T}}} implies
\begin{multline*}
f(1,0) = f(T(1,1,2,2),T(1,1,1,2))\\
=T(f(1,1),f(1,1),f(2,1),f(2,2)) = T(1,1,2,2)=1,
\end{multline*}
moreover
\begin{multline*}
f(0,1) = f(T(1,1,1,2),T(1,1,2,2))\\
=T(f(1,1),f(1,1),f(1,2),f(2,2)) = T(1,1,1,2)=0,
\end{multline*}
and
\begin{multline*}
1 = f(1,0) = f(T(1,2,1,2),T(1,0,1,2))\\
=T(f(1,1),f(2,0),f(1,1),f(2,2)) = T(1,f(2,0),1,2),
\end{multline*}
which is only possible if \m{f(2,0) = 2}.
Finally, we have
\begin{multline*}
0 = f(0,1) = f(T(1,0,1,2),T(1,2,1,2))\\
=T(f(1,1),f(0,2),f(1,1),f(2,2)) = T(1,f(0,2),1,2),
\end{multline*}
which means \m{f(0,2)\neq 2}, and
\begin{multline*}
0 = f(0,0) = f(T(0,2,1,2),T(2,2,1,2))\\
=T(f(0,2),f(2,2),f(1,1),f(2,2)) = T(f(0,2),2,1,2),
\end{multline*}
which gives \m{f(0,2)\neq 1}.
Thus \m{f(0,2)\in A\setminus\set{1,2}=\set{0}}.
\item Assume \m{f(1,2) =2} and \m{f(2,1) =1}. It follows that
\m{f=\eni[2]{2}} by a dual argument.
In fact, \m{f\in\cent{\set{T}}} implies
\begin{multline*}
f(1,0) = f(T(1,1,2,2),T(1,1,1,2))\\
=T(f(1,1),f(1,1),f(2,1),f(2,2)) = T(1,1,1,2)=0,
\end{multline*}
moreover
\begin{multline*}
f(0,1) = f(T(1,1,1,2),T(1,1,2,2))\\
=T(f(1,1),f(1,1),f(1,2),f(2,2)) = T(1,1,2,2)=1,
\end{multline*}
and
\begin{multline*}
1 = f(0,1) = f(T(1,0,1,2),T(1,2,1,2))\\
=T(f(1,1),f(0,2),f(1,1),f(2,2)) = T(1,f(0,2),1,2),
\end{multline*}
which is only possible if \m{f(0,2) = 2}.
Finally, we have
\begin{multline*}
0 = f(1,0) = f(T(1,2,1,2),T(1,0,1,2))\\
=T(f(1,1),f(2,0),f(1,1),f(2,2)) = T(1,f(2,0),1,2),
\end{multline*}
which means \m{f(2,0)\neq 2}, and
\begin{multline*}
0 = f(0,0) = f(T(2,2,1,2),T(0,2,1,2))\\
=T(f(2,0),f(2,2),f(1,1),f(2,2)) = T(f(2,0),2,1,2),
\end{multline*}
which gives \m{f(2,0)\neq 1}.
Thus \m{f(2,0)\in A\setminus\set{1,2}=\set{0}}.
\end{enumerate}
\item Now assume that \m{f(1,1)\neq 1}. Since \m{f\in\Pol{\im(T)}}, we
must have \m{f(1,1)=0}.
We can show that \m{f(0,1) = 0= f(1,0)}.
In point of fact, we have
\begin{multline*}
f(0,1) = f(T(1,1,0,0),T(1,1,2,2))\\
=T(f(1,1),f(1,1),f(0,2),f(0,2)) = T(0,0,f(0,2),f(0,2)) = 0,
\end{multline*}
and for \m{f(1,0)=0} we argue by swapping the arguments of~\m{f}.
\par
Moreover, if \m{\set{1,2}\subs\set{f(0,2),f(2,0),f(1,2),f(2,1)}},
then \m{f\notin\cent{\set{T}}}.
Indeed, if there are \m{x,y\in \set{0,1}} such that
\begin{enumerate}[(a)]
\item \m{f(2,x) = 1}, \m{f(2,y)=2}, then
\begin{multline*}
f(T(2,2,2,2),T(x,x,y,y))= f(0,z)=0 \\
\neq 1 =T(1,1,2,2) =T(f(2,x),f(2,x),f(2,y),f(2,y)),
\end{multline*}
where \m{z\in \set{0,1}}.
\item \m{f(x,2) = 1}, \m{f(y,2)=2}, then we argue with swapped
arguments for \m{f}.
\item \m{f(2,x) = 1}, \m{f(y,2)=2}, then
\begin{multline*}
f(T(2,2,y,y),T(x,x,2,2)) = f(0,z) = 0\\
\neq 1 = T(1,1,2,2) = T(f(2,x),f(2,x),f(y,2),f(y,2)),
\end{multline*}
where \m{z\in \set{0,1}}.
\item \m{f(x,2) = 1}, \m{f(2,y)=2}, then we argue with swapped
arguments for~\m{f}.
\end{enumerate}
Hence, we know that
\m{\set{1,2}\not\subs\set{f(0,2),f(2,0),f(1,2),f(2,1)}} for
\m{f\in\cent{\set{T}}}.
\begin{enumerate}[{2.}1.]
\item Suppose that \m{f(2,2)=0}. There is nothing left to prove: we
already have
\m{f\in\set{z_0}\cup\bigcup_{c\in\set{1,2}}
\lset{f_{0,\bfa{x}}}{\bfa{x}\in\set{0,c}^4\setminus\set{\bfa{0}}}}.
\item Suppose that \m{f(2,2)=c\in\set{1,2}} and let \m{d} be such that
\m{\set{c,d}=\set{1,2}}.
We prove that \m{d\notin \set{f(0,2),f(2,0),f(1,2),f(2,1)}},
as otherwise \m{f\notin\cent{\set{T}}}. This demonstrates that
\m{\set{f(0,2),f(2,0),f(1,2),f(2,1)}\subs\set{0,c}}, so we have
\m{f\in\set{z_c}\cup
\lset{f_{c,\bfa{x}}}{\bfa{x}\in\set{0,c}^4\setminus\set{\bfa{0}}}}.
\par
For a contradiction suppose that there is some argument \m{x\in \set{0,1}}
such that \m{f(x,2)=d}. Then for some \m{z\in\set{0,1}} we have
\begin{multline*}
f(T(x,x,2,2),T(2,2,2,2)) = f(z,0) = 0\\
\neq 1 = T(d,d,c,c) =T(f(x,2),f(x,2),f(2,2),f(2,2)),
\end{multline*}
when \m{(c,d)=(2,1)}, and
\begin{multline*}
f(T(2,2,x,x),T(2,2,2,2)) = f(z,0) = 0\\
\neq 1 = T(c,c,d,d) =T(f(2,2),f(2,2),f(x,2),f(x,2)),
\end{multline*}
when \m{(c,d)=(1,2)}.
In the case where \m{f(2,x)=d} for some \m{x\in\set{0,1}} we argue
similarly, by swapping the arguments of~\m{f}.
\end{enumerate}
\end{enumerate}
\par
For the converse inclusion, we have to check that all mentioned
functions commute with~$T$. So let $g=z_a$ for some $a\in A$ or
$g=f_{a,(b,c,d,e)}$ and consider
$x_1,\dotsc,x_4,y_1,\dotsc,y_4\in A$ to verify that~$g$ commutes
with~$T$. Put $u\defeq T(x_1,\dotsc,x_4)$ and
$v\defeq T(y_1,\dotsc,y_4)$.
Since \m{(u,v)\in \im(T)^2 = \set{0,1}^2}, we have $g(u,v) = 0$. On
the other hand, the values $w_i\defeq g(x_i,y_i)$ for $1\leq i\leq 4$
belong to $\im(g)\subs \set{0,a,b,c,d,e}$. If at least one of them
equals~$0$, then $T(w_1,\dotsc,w_4)=0$ as needed.
Otherwise, all of them belong to \m{\set{a,b,c,d,e}\setminus\set{0}}.
If $g=z_a$, then they are all equal to~$a$ and we thus have
$T(w_1,\dotsc,w_4)=0$, too. In the case that $g=f_{a,(b,c,d,e)}$, we
know from the definition of~$g$ that \m{\set{a,b,c,d,e}\subs\set{0,j}}
for some \m{j\in \set{1,2}}. Thus, $w_1=\dots =w_4=j$, and again
$T(w_1,\dotsc,w_4)=0$. In any case, we have shown
\m{g\in\cent{\set{T}}}.
\end{proof}
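As announced in the footnote to the lemma, the completeness of this
list is well suited to mechanical confirmation. The self-contained
Python sketch below (our own illustration; it plays the role of the
function \texttt{findallbinaries()} from the ancillary material, but
does not depend on it) counts the binary operations commuting with~$T$:
\begin{verbatim}
from itertools import product

A = (0, 1, 2)
def T(a, b, c, d):
    return 1 if (a, b, c, d) in ((1, 1, 2, 2), (1, 2, 1, 2)) else 0

def commutes_with_T(g):   # binary g against 4-ary T: 2 x 4 matrices X
    return all(g(T(*X[:4]), T(*X[4:]))
               == T(*(g(X[j], X[4 + j]) for j in range(4)))
               for X in product(A, repeat=8))

count = sum(commutes_with_T(lambda x, y, t=t: t[3 * x + y])
            for t in product(A, repeat=9))    # all 3^9 value tables
assert count == 65
\end{verbatim}
The early termination in the \texttt{all} test discards most of the
$19\,683$ candidate tables after very few matrices, so the enumeration
finishes quickly.
\par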
Next, with the help of the coarse approximations from
Lemma~\ref{lem:Tstar1}, we observe that the bicentraliser of~$T$ only
contains functions that are close to being conservative and have many
congruences.
\begin{lemma}\label{lem:almost-conservative}
For $A=\set{0,\dotsc,k-1}$ of size \m{k\geq 3} we have
\begin{multline*}
\genClone{\set{T}}\subs \bicent{\set{T}} \subs \set{T}^{*(1)*}\\
{}\subs \Pol{\lset{U\subs A}{0\in U}}
\cap\Pol{\lset{\theta\in\Eq(A)}{(0,1)\in\theta}},
\end{multline*}
where $\Eq(A)$ denotes the set of all equivalence relations on~$A$.
\end{lemma}
\begin{proof}
It is clear that \m{\set{T}^{*(1)*}
\subs\Pol{\lset{\im(f)}{f\in \ncent[1]{\set{T}}}}}
since the image of a function is primitive positively definable from its
graph. If \m{0\in U\subsetneq A}, then~$U$ contains at most \m{k-2}
elements distinct from~$0$. According to the description of the functions
in~\m{\ncent[1]{\set{T}}} given in Lemma~\ref{lem:Tstar1}, the values on
the \m{k-2} arguments in \m{A\setminus\set{0,1}} can be chosen freely,
so there is some
\m{f\in \ncent[1]{\set{T}}} whose image is~\m{U}.
\par
Likewise we have \m{\set{T}^{*(1)*}
\subs\Pol{\lset{\ker(f)}{f\in \ncent[1]{\set{T}}}}}
since the kernel of a function is primitive positively definable from its
graph. Any partition of~$A$ having a class containing the set \m{\set{0,1}}
can again be realised as the kernel of a function \m{f\in
\ncent[1]{\set{T}}}
since the value $f(x)$ can be chosen arbitrarily for every
\m{x\in A\setminus\set{0,1}}.
\end{proof}
Based on this lemma we can show that the \nbdd{n}ary part of
the bicentraliser of~$T$ is not bigger than \m{\genClone[n]{\set{T}}}
when $n<k-1$.
\begin{lemma}\label{lem:T*1*-small}
For~$k=\abs{A}\geq 3$ we have
\m{\nbicent{\set{T}} = \set{T}^{*(1)*(n)}= \J[n]{A} \cup\set{\cna[n]{0}}} for all $1\leq n<k-1$
where $\cna[n]{0}$ denotes the \nbdd{n}ary constant zero function.
\end{lemma}
\begin{proof}
We shall prove that
\m{\set{T}^{*(1)*(n)}\subs \J[n]{A} \cup\set{\cna[n]{0}} = \genClone[n]{\set{T}}}
(cf.\ Lemma~\ref{lem:Tclone-small}). From this it will follow
that \m{\genClone[n]{\set{T}}\subs \nbicent{\set{T}} \subs
\set{T}^{*(1)*(n)}\subs\genClone[n]{\set{T}}}
since \m{\ncent[1]{\set{T}} \subs\cent{\set{T}}} is always true.
\par
Given that~\m{A} has size \m{k\geq 3}, the set \m{A\setminus\set{0,1}}
has \m{k-2\geq n} distinct values.
Let \m{f\in \set{T}^{*(1)*(n)}}.
By Lemma~\ref{lem:almost-conservative} we know that
\m{f\in\Pol{\set{0,2,\dotsc,n+1}}}, so we obtain
\m{b\defeq f(2,\dotsc,n+1)\in\set{0,2,\dotsc,n+1}}.
Now for any \m{(a_1,\dotsc,a_n)\in A^{n}} we consider the unary map~\m{u} sending
\m{j \mapsto a_{j-1}} for \m{2\leq j\leq n+1} and \m{j\mapsto 0} otherwise.
Since \m{u\in\ncent[1]{\set{T}}} by Lemma~\ref{lem:Tstar1}, we have
\m{f\in \cent{\set{u}}} and thus
\[f(a_1,\dotsc,a_{n}) = f(u(2),\dotsc,u(n+1)) = u(f(2,\dotsc,n+1))=u(b).\]
If \m{b=0}, then \m{f(a_1,\dotsc,a_n)=u(b)=0}, so \m{f=\cna[n]{0}}.
If \m{b\neq 0}, then it follows that \m{2\leq b\leq n+1}.
Thus, we have $f(a_1,\dotsc,a_{n})=u(b)=a_{b-1}$
for all \m{(a_1,\dotsc,a_{n})\in A^n}, which shows that \m{f=\eni{b-1}}.
\end{proof}
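In the smallest instance \m{k=3}, \m{n=1}, the preceding lemma can
again be double-checked by brute force: hard-coding the four unary
functions of \m{\ncent[1]{\set{T}}} from Lemma~\ref{lem:Tstar1}, the
only unary maps commuting with all of them are the identity and the
constant zero function. (A Python sketch of ours; note that unary
operations commute if and only if they commute pointwise.)
\begin{verbatim}
from itertools import product

A = (0, 1, 2)
# value tables of the four unary functions commuting with T (k = 3)
cent1 = [(0, 1, 2), (0, 0, 0), (0, 0, 1), (0, 0, 2)]

def commute1(s, t):       # unary s, t as tables: s(t(x)) = t(s(x))
    return all(s[t[x]] == t[s[x]] for x in A)

part1 = {h for h in product(A, repeat=3)
         if all(commute1(h, u) for u in cent1)}
assert part1 == {(0, 1, 2), (0, 0, 0)}   # identity and constant zero
\end{verbatim}
\par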
According to Lemmata~\ref{lem:Tclone-small} and~\ref{lem:T*1*-small},
it is impossible to find
\m{f\in\nbicent{\set{T}}\setminus\genClone[n]{\set{T}}} for
\m{n<k-1} where \m{k=\abs{A}}. Next, we thus turn our attention to
\m{n=k-1}, where we will first describe \m{\genClone[k-1]{\set{T}}}:
besides projections, the \nbdd{(k-1)}ary part
of~\m{\genClone{\set{T}}} contains only functions that are zero
everywhere except possibly at a single argument tuple, which may be
sent to one.
After that we shall focus for a while on the case \m{k=3} to develop
the right ideas in connection with
\m{\nbicent[k-1]{\set{T}}=\nbicent[2]{\set{T}}},
which can eventually be generalised to any \m{k\geq 3}.
\begin{lemma}\label{lem:Tclone2-general}
Given a set~$A$ of cardinality~$k\geq 3$, put $n=k-1$.
We then have
$\genClone[n]{\set{T}}=\J[n]{A}\cup\set{\cna[n]{0}}\cup F$
where $F\subs\Op[n]{A}$ is the set of \nbdd{n}ary functions in
$\genClone{\set{T}}$ which map exactly one \nbdd{n}tuple to~$1$ and everything
else to~$0$.
\end{lemma}
\begin{proof}
We have $\cna[n]{0}=
\composition{T}{\eni{1},\dotsc,\eni{1}}\in\genClone[n]{\set{T}}$ as in
Lemma~\ref{lem:Tclone-small}, so the inclusion
$G\defeq \J[n]{A}\cup \set{\cna[n]{0}}\cup F\subs\genClone[n]{\set{T}}$ is
clear. For the opposite inclusion, we prove that~$G$ is a subuniverse of
\m{\algwops{A}{T}^{A^n}}. The first step is to check that any variable
identification of~$T$ with at most~$n$ variables ends up in~$F\cup\set{\cna[n]{0}}$.
\par
Let $i\colon\set{1,\dotsc,n^2}\to\set{1,\dotsc,n}$, $j\mapsto i_j$ be a
map describing an \nbdd{n}variable identification
\m{f = \composition{T}{\eni{i_1},\dotsc,\eni{i_{n^2}}}} of~$T$.
Clearly, $\im(f)\subs\im(T)=\set{0,1}$, so every tuple that is not
mapped to one by~$f$ will be sent to zero. To obtain a contradiction,
let us assume that $\abs{f^{-1}\fapply{\set{1}}}\geq 2$. So there are
tuples $\bfa{x}\neq \bfa{y}\in A^n$ such that
\m{T\apply{x_{i_1},\dotsc,x_{i_{n^2}}} = 1 =
T\apply{y_{i_1},\dotsc,y_{i_{n^2}}}}. The preimage
$T^{-1}\fapply{\set{1}}$ contains only two tuples, and these mention~$n$
distinct elements. To obtain one of them in the form
$\apply{x_{i_1},\dotsc,x_{i_{n^2}}}$ or
$\apply{y_{i_1},\dotsc,y_{i_{n^2}}}$ one has to use at least~$n$
distinct variable indices, so the map~$i$ has to be surjective. It is
therefore impossible that the distinct tuples~$\bfa{x}$ and~$\bfa{y}$
produce the same tuple
\m{\apply{x_{i_1},\dotsc,x_{i_{n^2}}} =
\apply{y_{i_1},\dotsc,y_{i_{n^2}}} \in T^{-1}\fapply{\set{1}}}.
This means, one of them, say~$\bfa{x}$, gives
\m{\apply{x_{i_1},\dotsc,x_{i_{n^2}}}
=\apply{1,\dotsc,n,1,\dotsc,n,\dotsc,1,\dotsc,n}}, from which it
follows that $i_1,\dotsc,i_n$ are all distinct (so
\m{\set{i_1,\dotsc,i_n} = \set{1,\dotsc,n}}); the other one however
produces
\m{\apply{y_{i_1},\dotsc,y_{i_{n^2}}}
=\apply{1,\dotsc,1,2,\dotsc,2,\dotsc,n,\dotsc,n}}.
This implies that $\set{y_1,\dotsc,y_n}
=\set{y_{i_1},\dotsc,y_{i_n}}=\set{1}$, so
$\apply{y_{i_1},\dotsc,y_{i_{n^2}}} = (1,\dotsc,1)$, which is a
contradiction for $n\geq 2$.
\par
To prove that~$G$ is closed under application of~$T$, we take functions
$f_1,\dotsc,f_{n^2}$ from~$G$ and show that
$f=\composition{T}{f_1,\dotsc,f_{n^2}}\in G$.
If $f_1,\dotsc,f_{n^2}\in\J{A}$, then the composition is a variable
identification of~$T$ that belongs to~$F\cup\set{\cna[n]{0}}\subs G$.
Otherwise, suppose that (for some $1\leq j\leq n^2$) $f_j$ is a
non\dash{}projection in $F\cup\set{\cna[n]{0}}$. For every $\bfa{x}\in
A^n$ with possibly one exception we have $f_j(\bfa{x})=0$. So for all
those arguments $\bfa{x}\in A^n$, the \nbdd{j}th component of
\m{\apply{f_1(\bfa{x}),\dotsc,f_{n^2}(\bfa{x})}} is zero, whence
this tuple is mapped to zero by~$T$. Consequently, $f(\bfa{x})=0$ for
all but possibly one $\bfa{x}\in A^n$, so
\m{f\in F\cup\set{\cna[n]{0}}\subs G}.
\end{proof}
For three\dash{}element domains we obtain a more specific result.
\begin{lemma}\label{lem:Tclone2}
For $A=\set{0,1,2}$ we have \m{\genClone[2]{\set{T}} = \set{\eni[2]{1},\eni[2]{2}, \cna[2]{0}, \delta_{(1,2)}, \delta_{(2,1)}}},
where \m{\cna[2]{0}} is the constant zero function and \m{\delta_a(x) = 1}
if \m{x=a} and \m{\delta_a(x)=0} otherwise.
\end{lemma}
\begin{proof}
It is easy to see that the listed binary functions belong to the clone, namely
\begin{align*}
\cna[2]{0} &= \composition{T}{\eni[2]{1},\eni[2]{1},\eni[2]{1},\eni[2]{1}},\\
\delta_{(1,2)} &= \composition{T}{\eni[2]{1},\eni[2]{1},\eni[2]{2},\eni[2]{2}}
= \composition{T}{\eni[2]{1},\eni[2]{2},\eni[2]{1},\eni[2]{2}},\\
\delta_{(2,1)} &= \composition{T}{\eni[2]{2},\eni[2]{1},\eni[2]{2},\eni[2]{1}}
= \composition{T}{\eni[2]{2},\eni[2]{2},\eni[2]{1},\eni[2]{1}}.
\end{align*}
It is not hard to verify that the given subset is a subuniverse of
\m{\algwops{A}{T}^{A^2}}. Any \nbdd{T}composition involving only
projections except for the ones shown to yield~\m{\delta_{(1,2)}}
or~\m{\delta_{(2,1)}} produces~\m{\cna[2]{0}}. Any composition
involving~\m{\cna[2]{0}}, or just the \m{\delta_a}~functions yields again
the constant map~\m{\cna[2]{0}}. Therefore, only compositions involving
the \m{\delta_a}~functions \emph{and} projections have to be checked. If
all four of them are substituted into~\m{T} (in any order), the result
is~\m{\cna[2]{0}}. If only one projection (and possibly some
non\dash{}projections) is substituted, then in most cases, the result
is~\m{\cna[2]{0}}, and for a few substitutions it is one of the
\m{\delta_a}~functions. If both projections and only one of the
\m{\delta_a}~functions are substituted, the result is either the
substituted function~\m{\delta_a} or~\m{\cna[2]{0}}.
\end{proof}
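Since every binary term over \m{\algwops{A}{T}} is either a variable
or of the form \m{\composition{T}{t_1,t_2,t_3,t_4}} with binary
terms~$t_i$, the binary part of \m{\genClone{\set{T}}} is the closure
of the two projections under substitution into~$T$. The following
self-contained Python sketch (our own illustration, with binary
operations represented by value tables over the nine argument pairs)
recomputes the statement of the lemma as a fixed point iteration:
\begin{verbatim}
from itertools import product

A = (0, 1, 2)
def T(a, b, c, d):
    return 1 if (a, b, c, d) in ((1, 1, 2, 2), (1, 2, 1, 2)) else 0

pairs = list(product(A, repeat=2))       # fixed argument order (x, y)
e1 = tuple(x for x, y in pairs)          # binary projections as tables
e2 = tuple(y for x, y in pairs)
part = {e1, e2}
while True:                              # close under T(g1, g2, g3, g4)
    new = {tuple(T(g1[i], g2[i], g3[i], g4[i]) for i in range(9))
           for g1, g2, g3, g4 in product(part, repeat=4)} - part
    if not new:
        break
    part |= new

c0 = tuple(0 for _ in pairs)
d12 = tuple(1 if (x, y) == (1, 2) else 0 for x, y in pairs)
d21 = tuple(1 if (x, y) == (2, 1) else 0 for x, y in pairs)
assert part == {e1, e2, c0, d12, d21}    # exactly the five functions
\end{verbatim}
\par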
With the aim of finding separating binary functions in
\m{\bicent{\set{T}}} for \m{\abs{A}=3}, we collect some properties of
binary operations in upper approximations of~\m{\bicent{\set{T}}}.
\begin{lemma}\label{lem:observations-T*1*}
Let \m{A=\set{0,1,2}} and
\m{g\in \set{T}^{*(1)*(2)}},
then the following implications hold:
\begin{enumerate}[(a)]
\item \m{g(1,2)=2 \implies \forall a\in A\colon g(0,a)=a}.
\item \m{g(2,1)=2 \implies \forall a\in A\colon g(a,0)=a}.
\item \m{g(1,2)\in\set{0,1} \implies \forall a\in A\colon g(0,a)=0}.
\item \m{g(2,1)\in\set{0,1} \implies \forall a\in A\colon g(a,0)=0}.
\end{enumerate}
\end{lemma}
\begin{proof}
By Corollary~\ref{cor:Tstar1} we have \m{g\in\cent{\lset{u_{2,a}}{a\in
A}}}. This implies for all \m{a\in A} that
\m{a=u_{2,a}(2)=u_{2,a}(g(1,2)) = g(u_{2,a}(1),u_{2,a}(2))=g(0,a)}
provided~\m{g(1,2)=2}. A symmetric argument works for \m{g(2,1)=2}.
Similarly, if \m{g(1,2)\in\set{0,1}}, then
\m{0=u_{2,a}(g(1,2)) = g(u_{2,a}(1),u_{2,a}(2))=g(0,a)}
for all \m{a\in A}, and symmetrically, if \m{g(2,1)\in\set{0,1}}.
\end{proof}
Not very surprisingly, \m{\ncent[1]{\set{T}}} does not encode
enough information about \m{\cent{\set{T}}} to determine functions in
\m{\bicent{\set{T}}} sufficiently well. However, using the description
of \m{\ncent[2]{\set{T}}} available for \m{\abs{A}=3} from
Lemma~\ref{lem:Tstar2}, we are able to derive a more promising result:
for \m{\abs{A}=3} there is a unique binary function in
\m{\set{T}^{*(2)*(2)}\setminus\genClone{\set{T}}}.
This function might---and although we do not know it yet at this point,
it actually will---serve to distinguish \m{\bicent{\set{T}}} and
\m{\genClone{\set{T}}}.
\begin{lemma}\label{lem:unique-bin-func}
For \m{A=\set{0,1,2}} we have\footnote{%
The correctness of this lemma can be checked with the
Z3-solver~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3} using the
ancillary file \texttt{func\_Tc2c2.z3}.}
\m{\set{T}^{*(2)*(2)}=\genClone[2]{\set{T}}\mathbin{\dot{\cup}}\set{f}}
where for all \m{x,y\in A}
\[f(x,y) = \begin{cases}
1 &\text{if } \set{x,y} = \set{1,2},\\
0 &\text{else.}
\end{cases}\]
\end{lemma}
\begin{proof}
The proof is by a systematic case distinction.
Let \m{g\in\set{T}^{*(2)*(2)}}, which implies that
\m{g\in\set{T}^{*(2)*}=\cent{\genClone{\ncent[2]{\set{T}}}}
\subs \set{T}^{*(1)*}} since
\m{\ncent[1]{\set{T}}\subs\genClone{\ncent[2]{\set{T}}}}.
Hence, we can apply the implications from Lemma~\ref{lem:observations-T*1*} to~$g$.
\par
Assume \m{g(1,2) = 2}.
It follows by Lemma~\ref{lem:observations-T*1*} that
\m{g(0,a)=a} for all \m{a\in A}.
Our goal is to show that \m{g=\eni[2]{2}}.
For a contradiction, suppose that \m{g(2,1)=2}. Since
\m{g\in\cent{\set{z_1}}} by Lemma~\ref{lem:Tstar2}, we obtain
\[
1 = z_1(2,2) = z_1(g(1,2),g(2,1)) = g(z_1(1,2),z_1(2,1))
= g(0,0),
\]
in contradiction to $g(0,0)=0$ derived above.
Hence \m{g(2,1)\in\set{0,1}}. Using again
Lemma~\ref{lem:observations-T*1*}, this implies \m{g(a,0)=0}
for all \m{a\in A}.
\par
Again, for a contradiction, we suppose that \m{g(2,1)=0}. Since
\m{g\in\cent{\set{f_{0,(1,1,1,0)}}}} by Lemma~\ref{lem:Tstar2}, we get
\begin{align*}
1&=f_{0,(1,1,1,0)}(2,0)=f_{0,(1,1,1,0)}(g(1,2),g(2,1))\\
&= g(f_{0,(1,1,1,0)}(1,2),f_{0,(1,1,1,0)}(2,1)) = g(1,0),
\end{align*}
which contradicts \m{g(1,0)=0}.
\par
Hence \m{g(2,1)=1}. Then, since
\m{g\in\cent{\set{f_{0,(c,c,c,c)}}}} for \m{c\in\set{1,2}} by
Lemma~\ref{lem:Tstar2}, we get
\begin{align*}
c&=f_{0,(c,c,c,c)}(2,1)=f_{0,(c,c,c,c)}(g(1,2),g(2,1))\\
&= g(f_{0,(c,c,c,c)}(1,2),f_{0,(c,c,c,c)}(2,1)) = g(c,c),
\end{align*}
which shows that \m{g=\eni[2]{2}}.
Note that a symmetric argument shows that the assumption
\m{g(2,1)=2} implies \m{g=\eni[2]{1}}.
\par
Assume \m{\set{g(1,2),g(2,1)}\subs\set{0,1}}. By
Lemma~\ref{lem:observations-T*1*} we get that
\m{g(0,a)=g(a,0)=0} for all \m{a\in A}. Clearly,
\m{h=\composition{g}{\id_A,\id_A}\in
\set{T}^{*(2)*(1)}\subs\set{T}^{*(1)*(1)}= \set{\id_A,\cna[1]{0}}},
see Lemma~\ref{lem:T*1*-small}.
For a contradiction, suppose that \m{h=\id_A},
whence \m{g(2,2)=2}. As \m{g\in\cent{\set{f_{0,(2,2,2,2)}}}} by
Lemma~\ref{lem:Tstar2}, we get
\begin{align*}
2&=f_{0,(2,2,2,2)}(2,g(1,2))=f_{0,(2,2,2,2)}(g(2,2),g(1,2))\\
&= g(f_{0,(2,2,2,2)}(2,1),f_{0,(2,2,2,2)}(2,2)) = g(2,0),
\end{align*}
which contradicts \m{g(2,0)=0} from before. Therefore,
\m{h=\cna[1]{0}}, which shows that
\m{g\in\set{\cna[2]{0},\delta_{(1,2)},\delta_{(2,1)},f}}.
\par
Hence, according to Lemma~\ref{lem:Tclone2}, \m{g\in\genClone[2]{\set{T}}\cup\set{f}}. For the converse inclusion,
one uses that the containment
\m{\genClone{\set{T}}\subs\bicent{\set{T}}\subs\set{T}^{*(2)*}}
is trivially true and one verifies that, indeed,
\m{f\in\set{T}^{*(2)*}}.
We postpone the latter until Lemma~\ref{lem:pp-formula}, where we shall
show more generally that even
\m{f\in\bicent{\set{T}}\subs\set{T}^{*(2)*}}.
Alternatively, one may ask a computer to check that~$f$ commutes with
all the $65$~functions given in Lemma~\ref{lem:Tstar2},
immediately giving a positive answer.\footnote{%
This can, for example, be done with the
Z3-solver~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3} using the
ancillary file \texttt{func\_Tc2c2.z3}.}
\end{proof}
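The computer check mentioned at the end of the proof is short enough
to be reproduced here. The following self-contained Python sketch (our
own illustration, independent of the ancillary Z3 script) regenerates
the 65 binary functions of \m{\ncent[2]{\set{T}}} and verifies that~$f$
commutes with each of them, i.e.\ that \m{f\in\set{T}^{*(2)*}}:
\begin{verbatim}
from itertools import product

A = (0, 1, 2)
def T(a, b, c, d):
    return 1 if (a, b, c, d) in ((1, 1, 2, 2), (1, 2, 1, 2)) else 0
def f(x, y):
    return 1 if {x, y} == {1, 2} else 0
def as_fun(t):                           # binary value table -> function
    return lambda x, y: t[3 * x + y]

def commutes_with_T(g):                  # 2 x 4 matrix condition
    return all(g(T(*X[:4]), T(*X[4:]))
               == T(*(g(X[j], X[4 + j]) for j in range(4)))
               for X in product(A, repeat=8))

def commute2(g, h):                      # 2 x 2 matrix condition
    return all(g(h(a, b), h(c, d)) == h(g(a, c), g(b, d))
               for a, b, c, d in product(A, repeat=4))

cent2 = [as_fun(t) for t in product(A, repeat=9)
         if commutes_with_T(as_fun(t))]
assert len(cent2) == 65
assert all(commute2(f, g) for g in cent2)
\end{verbatim}
\par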
So far, for the binary operation~$f$ exhibited in
Lemma~\ref{lem:unique-bin-func} we do not know whether it actually
belongs to~\m{\bicent{\set{T}}} as we have only worked with upper
approximations of this bicentraliser, not with~\m{\bicent{\set{T}}}
itself.
\begin{remark}\label{rem:f-in-cent-3-cent-T}
Without much more ingenuity but some additional computational effort,
it is possible to show that the unique binary operation~$f$ from
Lemma~\ref{lem:unique-bin-func} belongs to
\m{\set{T}^{*(3)*}}, which is even closer to \m{\bicent{\set{T}}}.
\par
To do this one needs to enumerate~\m{\ncent[3]{\set{T}}}. Since
\m{\cent{\set{T}}} is a clone, for every ternary $g\in\cent{\set{T}}$
each of its identification minors
$\composition{g}{\eni[2]{1},\eni[2]{1},\eni[2]{2}}$,
$\composition{g}{\eni[2]{1},\eni[2]{2},\eni[2]{1}}$ and
\m{\composition{g}{\eni[2]{2},\eni[2]{1},\eni[2]{1}}}
must also belong to the same clone, i.e.\ to \m{\ncent[2]{\set{T}}}.
However, the latter set has been completely described in
Lemma~\ref{lem:Tstar2} above, it contains precisely~65 functions.
Thus, the behaviour of~$g$ on tuples of the form~\m{(x,x,y)} has to
coincide with one of these 65 functions, likewise, the results on tuples
of the form~\m{(x,y,x)} and of the form~\m{(y,x,x)} are determined by
one of these functions, respectively. Moreover, on the three tuples of
the form \m{(x,x,x)}, the three binary operations from
\m{\ncent[2]{\set{T}}} have to prescribe non\dash{}contradictory values.
Therefore, except for the six
tuples that are permutations of\/ \m{(0,1,2)}, the values of \m{g} are
determined by one of at most \m{65^3} choices. Altogether no more than
\m{65^3\cdot 3^6 = 200\,201\,625} ternary functions have to be considered.
\par
This can be done by a computer, resulting in a list\footnote{%
This list can be computed using the function
\texttt{findallternaries\_optimised()} from the ancillary file
\texttt{commutationTs.cpp}, and it is given in the file
\texttt{Tcent3\_sorted.txt}.}
of exactly $1\,048\,578$~functions belonging to~\m{\ncent[3]{\set{T}}}.
Again for each of these ternary operations it is readily verified by a
computer that they commute\footnote{%
This verification can be carried out using the function
\texttt{readTcent3("Tcent3\_sorted.txt")} from the ancillary file
\texttt{commutationTs.cpp} and confirms once more the concluding
sentence in the proof of Lemma~\ref{lem:unique-bin-func}.}
with the binary operation~\m{f} given in
Lemma~\ref{lem:unique-bin-func}. Consequently, by a complete case
distinction, we have indeed that
\m{f\in\set{T}^{*(3)*}}.
Together with Lemma~\ref{lem:Tclone2}, this proves
\m{f\in\set{T}^{*(3)*(2)}\setminus\genClone[2]{\set{T}}}
for \m{A=\set{0,1,2}}.
\end{remark}
It is not a suitable strategy to continue indefinitely with
individual verifications that the unique binary operation~$f$ from
Lemma~\ref{lem:unique-bin-func} belongs to more and more accurate upper
approximations \m{\set{T}^{*(\ell)*}},
\m{\ell\rightarrow\infty}, of \m{\bicent{\set{T}}}. Instead we need a
more creative Galois theoretic argument to be sure that
\m{f\in\bicent{\set{T}}}. This confirmation is given in the following
lemma in the form of a primitive positive definition. As it turns out,
the argument used there for \m{k=\abs{A}=3} and the definition of~$f$
from Lemma~\ref{lem:unique-bin-func} can then be generalised to any
\m{k\geq 3}, see Theorem~\ref{thm:pp-defining-separating-function}.
However, we think it is instructive to first show where the idea for
the theorem originates from.
\begin{lemma}\label{lem:pp-formula}
The binary function \m{f \in\set{T}^{*(2)*(2)}} defined
in Lemma~\ref{lem:unique-bin-func} indeed belongs to~\m{\bicent{\set{T}}} for
its graph is definable by a primitive positive formula%
\footnote{The correctness of this formula has been checked with the
Z3-solver~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3}, see
the script \texttt{checkformulaforbinfunc.z3} available as an ancillary file.}
over \m{A=\set{0,1,2}} involving only the graph of\/~$T$:
\begin{align*}
\bigl\{(&x_2,x_3,x_5)\in A^3 \mathrel{\big\vert} f(x_2,x_3)=x_5\bigr\}\\
&=\lset{(x_2,x_3,x_5)\in A^3}{\exists x_1,x_4\in A\colon
\begin{aligned}[c]
T(x_1,x_2,x_3,x_4) &= x_5 \land{}\\
(x_2,x_3,x_2,x_3,x_1,x_2,x_4,x_3) &\in \ker(T) \land{}\\
(x_3,x_2,x_3,x_2,x_1,x_3,x_4,x_2) &\in \ker(T)
\end{aligned}}\\
&=\lset{(x_2,x_3,x_5)\in A^3}{\exists x_1,x_4,u,v\in A\colon
\begin{aligned}[c]
T(x_1,x_2,x_3,x_4) &= x_5 \land{}\\
T(x_2,x_3,x_2,x_3) &= u \land{}\\
T(x_1,x_2,x_4,x_3) &= u \land{}\\
T(x_3,x_2,x_3,x_2) &= v \land{}\\
T(x_1,x_3,x_4,x_2) &= v
\end{aligned}}
\end{align*}
\end{lemma}
\begin{proof}
The idea how to construct the graph of~$f$ is by considering the full graph
of~$T$, that is, the relation
\[\lset{(x_1,x_2,x_3,x_4,x_5)\in A^5}{T(x_1,x_2,x_3,x_4) = x_5},\]
and to project it to the second, third and fifth coordinate. This is
motivated by the fact that~$T$ sends only two arguments, $(1,1,2,2)$ and
$(1,2,1,2)$, to one and every other quadruple to zero, and the middle two
components of the two mentioned quadruples coincide with those pairs that
are mapped to one by~$f$. Of course, such a projection does not result in a
function graph, but it almost does. The pairs $(1,2)$ and $(2,1)$ will be
assigned two values each: the value one (as desired for~$f$) and an
erroneous value zero caused by some other quadruples $(x_1,x_2,x_3,x_4)$
with the same middle component $(1,2)$ or $(2,1)$. Hence, the goal is to
remove those quadruples from the relation before projecting. There are 16
disturbing argument tuples in the graph of~$T$ altogether:
\[ \lset{(u,a,b,v)}{\set{a,b}=\set{1,2}, u,v\in A}\setminus\set{(1,1,2,2),(1,2,1,2)}.\]
They need to be removed by imposing additional conditions that have to be
satisfied by the quadruples $(1,2,1,2)$ and $(1,1,2,2)$ since we have to
ensure that these are kept in the relation.
It turns out that this is
possible by imposing just two additional requirements involving the
kernel of~$T$. The kernel is an equivalence relation on quadruples that
we interpret as an octonary relation on~$A$, and it partitions $A^4$ into
two classes: $\set{(1,2,1,2),(1,1,2,2)}$ and the complement~$B$ of this
set in~$A^4$. In particular~$B$ includes all tuples containing a zero or
three ones or three twos or a two in the first position or a one in the
last position.
Using this observation, it is easy to verify that the following two sets
jointly (i.e.\ by taking their intersection) exclude all 16 undesired quadruples. So
these two sets represent the restrictions that we are going to
apply to the graph of~$T$:
\begin{align*}
\lset{(x_1,\dots,x_4)\in A^4}{T(x_2,x_3,x_2,x_3) = T(x_1,x_2,x_4,x_3)}
&=A^4\setminus\set{
\begin{array}{@{}*{8}{c@{\,}}c@{}}
0&0&0&1&1&1&2&2&2\\
1&1&1&1&1&2&1&1&1\\
2&2&2&2&2&2&2&2&2\\
0&1&2&0&1&1&0&1&2
\end{array}}\\
\lset{(x_1,\dots,x_4)\in A^4}{T(x_3,x_2,x_3,x_2) = T(x_1,x_3,x_4,x_2)}
&=A^4\setminus\set{
\begin{array}{@{}*{8}{c@{\,}}c@{}}
0&0&0&1&1&1&2&2&2\\
2&2&2&2&2&2&2&2&2\\
1&1&1&1&1&2&1&1&1\\
0&1&2&0&1&1&0&1&2
\end{array}}
\end{align*}
Both sets also exclude the tuple $(1,2,2,1)$, but this is not harmful, as there
are sufficiently many other quadruples left having $(2,2)$ as their middle
component, for example $(0,2,2,0)$.
\end{proof}
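Since all quantifiers in the primitive positive formula range over the
three-element set, its correctness can also be verified exhaustively.
The following self-contained Python sketch (our own illustration,
mirroring the purpose of the ancillary script
\texttt{checkformulaforbinfunc.z3} without depending on it) evaluates
the formula for all triples \m{(x_2,x_3,x_5)} and compares the result
with the graph of~$f$:
\begin{verbatim}
from itertools import product

A = (0, 1, 2)
def T(a, b, c, d):
    return 1 if (a, b, c, d) in ((1, 1, 2, 2), (1, 2, 1, 2)) else 0
def f(x, y):
    return 1 if {x, y} == {1, 2} else 0

defined = {(x2, x3, x5) for x2, x3, x5 in product(A, repeat=3)
           if any(T(x1, x2, x3, x4) == x5
                  and T(x2, x3, x2, x3) == T(x1, x2, x4, x3)
                  and T(x3, x2, x3, x2) == T(x1, x3, x4, x2)
                  for x1, x4 in product(A, repeat=2))}
graph_f = {(x, y, f(x, y)) for x, y in product(A, repeat=2)}
assert defined == graph_f
\end{verbatim}
\par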
As the arity of~$T$ is \m{(k-1)^2} where \m{k=\abs{A}}, it is perhaps
helpful to arrange the arguments of~$T$ in a
\nbdd{((k-1)\times(k-1))}square. Expressing the primitive positive
formula from Lemma~\ref{lem:pp-formula} using such
\nbdd{(2\times2)}squares then yields
\[
\exists\,x_1,x_4\in \set{0,1,2}\colon
T\apply{\begin{smallmatrix}x_1&x_2\\x_3&x_4\end{smallmatrix}} = x_5
\land
T\apply{\begin{smallmatrix}x_2&x_3\\x_2&x_3\end{smallmatrix}} =
T\apply{\begin{smallmatrix}x_1&x_2\\x_4&x_3\end{smallmatrix}}
\land
T\apply{\begin{smallmatrix}x_3&x_2\\x_3&x_2\end{smallmatrix}} =
T\apply{\begin{smallmatrix}x_1&x_3\\x_4&x_2\end{smallmatrix}}.
\]
This kind of interpretation is key for the understanding of the
following main result.
\begin{theorem}\label{thm:pp-defining-separating-function}
Let $A=\set{0,\dotsc,k-1}$ where $k\geq 3$ and put $n=k-1$.
Let the function $f\colon A^n\to A$ be defined by
\[f(\bfa{x}) = \begin{cases}
1 &\text{if } \bfa{x}\in\set{\mathord{\Uparrow},\mathord{\Downarrow}},\\
0 &\text{else},
\end{cases}\]
where $\mathord{\Uparrow}=(1,\dotsc,n)$ and $\mathord{\Downarrow}=(n,\dotsc,1)$.
The graph of~$f$ can be defined by a primitive positive formula using the
graph of\/~$T$ as follows:
\begin{align*}
\bigl\{(&\mathord{\swarrow},y)\in A^k \mathrel{\big\vert}
f(\mathord{\swarrow})=y\bigr\}\\
&=\lset{(\mathord{\swarrow},y)\in A^k}{\apply{\exists
x_{ij}\in A}_{\substack{1\leq i,j\leq n\\i+j\neq k}}\colon
\begin{aligned}
T(\rightarrow_1,\rightarrow_2,\dotsc,\rightarrow_n) &= y\\
T(\mathord{\swarrow},\mathord{\swarrow},\dotsc,\mathord{\swarrow})
&=T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)\\
T(\mathord{\nearrow},\mathord{\nearrow},\dotsc,\mathord{\nearrow})
&=T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)
\end{aligned}}\\
&=\lset{(\mathord{\swarrow},y)\in A^k}{
\apply{\exists x_{ij}\in A}_{\substack{1\leq i,j\leq n\\i+j\neq k}}\,
\exists u,v\in A\colon
\begin{aligned}[c]
T(\rightarrow_1,\rightarrow_2,\dotsc,\rightarrow_n) &= y \land{}\\
T(\mathord{\swarrow},\mathord{\swarrow},\dotsc,\mathord{\swarrow}) &= u \land{}\\
T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n) &= u \land{}\\
T(\mathord{\nearrow},\mathord{\nearrow},\dotsc,\mathord{\nearrow}) &= v \land{}\\
T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n) &= v
\end{aligned}},
\end{align*}
where the arrows represent the following sequences of variables for
$1\leq i\leq n$:
\begin{align*}
\mathord{\swarrow} &= x_{1,n},x_{2,n-1},\dotsc,x_{n-1,2},x_{n,1}\\
\mathord{\nearrow} &= x_{n,1},x_{n-1,2},\dotsc,x_{2,n-1},x_{1,n}\\
\rightarrow_i &= x_{i,1},\dotsc,x_{i,n}\\
\leftarrow_i &= x_{i,n},\dotsc,x_{i,1}\\
\downarrow_i &= x_{1,i},\dotsc,x_{n,i}\\
\uparrow_i &= x_{n,i},\dotsc,x_{1,i}
\end{align*}
\end{theorem}
\begin{proof}
We imagine the $n^2$ variables of~$T$ arranged in a square as follows
\[\mathord{\boxdot} = \begin{matrix}x_{1,1},\dotsc,x_{1,n}\\
\vdots\\
x_{n,1},\dotsc,x_{n,n}\end{matrix},
\]
which we feed row-wise into~$T$, that is, as a notational convention we
identify $\mathord{\boxdot}$ with $\rightarrow_1,\dotsc,\rightarrow_n$ and thus
stipulate
$T(\mathord{\boxdot})\defeq
T(\rightarrow_1,\dotsc,\rightarrow_n) = T(x_{1,1},\dotsc,x_{n,n})$.
Reversing this line of thought, we can as well start with some
square~$\mathord{\boxdot}$ of
variables, feed its elements into~$f$ in some order (indicated, for
instance, by certain arrows) and then interpret this sequence of
variables as rows of a new square. For example, given~$\mathord{\boxdot}$, the value
$T(\downarrow_1,\dotsc,\downarrow_n)$ is the result of~$T$ applied to a
square whose rows are the columns of~$\mathord{\boxdot}$; so we apply~$T$ to the
transposed~$\mathord{\boxdot}$. Subsequently, we shall often consider sequences as
squares where the rows are connected to the ordering of the given
sequence and the meaning of columns, diagonals etc.\ is tied to this
particular square interpretation.
\par
Two squares play a special role for~$T$, namely those where~$T$
outputs~$1$. First, we have $T(p_1) = 1$ where $p_1$ is given by
${\rightarrow_i} = (i,\dotsc,i)$ for all $1\leq i\leq n$ (that is,
${\downarrow_i}=\mathord{\Uparrow}$ for all $1\leq i\leq n$ and also $\mathord{\swarrow}=\mathord{\Uparrow}$).
Second, we have $T(\mathord{\Uparrow},\dotsc,\mathord{\Uparrow}) = 1$, and we denote the square all
of whose rows $\rightarrow_i$ are $\mathord{\Uparrow}$ by~$p_2$ (this means
${\downarrow_i} = (i,\dotsc,i)$ for all $1\leq i\leq n$ and
$\mathord{\swarrow}=\mathord{\Downarrow}$).
\par
With the square interpretation in mind we form the set
\[\theta = \lset{(\mathord{\boxdot},y)\in A^{n^2+1}}{
\begin{aligned}
T(\mathord{\boxdot}) &= y\\
T(\mathord{\swarrow},\mathord{\swarrow},\dotsc,\mathord{\swarrow})
&=T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)\\
T(\mathord{\nearrow},\mathord{\nearrow},\dotsc,\mathord{\nearrow})
&=T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)
\end{aligned}}\]
and then project it to the diagonal $\mathord{\swarrow}$ and the last
coordinate~$y$,
representing the image value of~$T$. To show that this projection
coincides with the graph of~$f$, we shall prove the following
statements:
\begin{enumerate}[(i)]
\item\label{item:p1}
For every $(\mathord{\boxdot},y)\in \theta$ where $\mathord{\swarrow} = \mathord{\Uparrow}$, it
follows $y=1$. This means that $\mathord{\swarrow}=\mathord{\Uparrow}$ implies
$\mathord{\boxdot}=p_1$.
\item\label{item:p2}
For every $(\mathord{\boxdot},y)\in \theta$ where $\mathord{\swarrow} = \mathord{\Downarrow}$, it
follows $y=1$. This means that $\mathord{\swarrow}=\mathord{\Downarrow}$ implies that
$\mathord{\boxdot}=p_2$.
\item\label{item:whole-graph-present}
For every $\bfa{x}\in A^n\setminus\set{\mathord{\Uparrow},\mathord{\Downarrow}}$ there is
some~$\mathord{\boxdot}$ such that $(\mathord{\boxdot},0)\in\theta$ and
$\mathord{\swarrow} = \bfa{x}$. Moreover, $(p_1,1),(p_2,1)\in\theta$.
\end{enumerate}
Now, if $(\mathord{\boxdot},y)\in\theta$ then $y\in\im(T) = \set{0,1}$. If $y=1$,
then $\mathord{\boxdot}=p_1$ or $\mathord{\boxdot}=p_2$, whence $\mathord{\swarrow}=\mathord{\Uparrow}$ or
$\mathord{\swarrow}=\mathord{\Downarrow}$ and both $(\mathord{\Uparrow},1),(\mathord{\Downarrow},1)\in\graph{f}$. If $y=0$,
then $\mathord{\boxdot} \neq p_1$, so statement~\eqref{item:p1} yields
$\mathord{\swarrow}\neq \mathord{\Uparrow}$; similarly, $\mathord{\boxdot}\neq p_2$ and so
$\mathord{\swarrow}\neq\mathord{\Downarrow}$ by statement~\eqref{item:p2}. Hence in each case we
have $(\mathord{\swarrow},y)\in\graph{f}$ which shows that the projection
of~$\theta$ is a subset of the graph of~$f$. Conversely,
statement~\eqref{item:whole-graph-present} shows that the full graph
of~$f$ is obtainable as a projection of~$\theta$.
\par
We proceed with the proof of the three statements.
\begin{enumerate}[(i)]
\item If $(\mathord{\boxdot},y)\in\theta$ and $\mathord{\swarrow}=\mathord{\Uparrow}$, then
$1= T(\mathord{\Uparrow},\mathord{\Uparrow},\dotsc,\mathord{\Uparrow}) =
T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)$.
This means $(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)
\in\set{p_1,p_2}$. Because $\mathord{\swarrow}=\mathord{\Uparrow}$, we have $x_{1,n}=1$ and
$x_{n,1}=n$, so the \nbdd{n}th column of
$(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)$ is not constant
and hence the latter cannot be equal to~$p_2$. Thus it is~$p_1$ and
therefore also $\mathord{\boxdot} = p_1$.
\item If $(\mathord{\boxdot},y)\in\theta$ and $\mathord{\swarrow}=\mathord{\Downarrow}$, then reading
backwards we have $\mathord{\nearrow}=\mathord{\Uparrow}$, and therefore
\m{1= T(\mathord{\Uparrow},\mathord{\Uparrow},\dotsc,\mathord{\Uparrow}) =
T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)}, whence
\m{(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)\in\set{p_1,p_2}}.
As $\mathord{\swarrow}=\mathord{\Downarrow}$, we have $x_{1,n}=n$ and $x_{n,1}=1$, so
the \nbdd{n}th column of
\m{(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)} is not constant
(recall that, by our convention, these tuples are fed as rows
into~$T$). This means that
\m{(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)} must have constant
rows (be equal to~$p_1$), so \m{{\downarrow_1} = (1,\dotsc,1)}, and
\m{{\downarrow_i} = (i,\dotsc,i)} for $2\leq i\leq n$. This means
that~$\mathord{\boxdot}$ has constant columns with values $1,\dotsc,n$, which
means that $\mathord{\boxdot}=p_2$.
\item First we check that $(p_1,1)\in\theta$. Clearly, $T(p_1)=1$. For
$p_1$ we have $\mathord{\swarrow}=\mathord{\Uparrow}$ and $\mathord{\nearrow}=\mathord{\Downarrow}$, so
$T(\mathord{\Uparrow},\dotsc,\mathord{\Uparrow}) = 1 = T(p_1) =
T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)$ holds as $p_1$
has constant rows, and
$T(\mathord{\Downarrow},\dotsc,\mathord{\Downarrow}) = 0=
T(\mathord{\Uparrow},\mathord{\Downarrow},\dotsc,\mathord{\Downarrow})=
T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)$ is true, as
well.
\par
Next we verify that $(p_2,1)\in\theta$. Again, $T(p_2)=1$.
This time we have $\mathord{\swarrow}=\mathord{\Downarrow}$ and $\mathord{\nearrow}=\mathord{\Uparrow}$, so
\m{T(\mathord{\Downarrow},\dotsc,\mathord{\Downarrow})=0 = T(\mathord{\Uparrow},\mathord{\Downarrow},\dotsc,\mathord{\Downarrow})
=T(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)}. Furthermore,
\m{T(\mathord{\Uparrow},\dotsc,\mathord{\Uparrow})=1 =T(p_1)
=T(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)} because the columns
of~$p_2$ have constant values $1,\dotsc,n$.
\par
Finally, consider some $\bfa{x}\in A^n\setminus\set{\mathord{\Uparrow},\mathord{\Downarrow}}$
and $\mathord{\boxdot}$ with $\mathord{\swarrow}=\bfa{x}$ and having zeros everywhere
else.
All rows of
$(\mathord{\swarrow},\dotsc,\mathord{\swarrow})$ and of $(\mathord{\nearrow},\dotsc,\mathord{\nearrow})$
are identical, so none of these two squares is~$p_1$. If one of
these were~$p_2$, then $\bfa{x}=\mathord{\swarrow}=\mathord{\Uparrow}$ or $\mathord{\nearrow}=\mathord{\Uparrow}$,
which would mean $\bfa{x}=\mathord{\swarrow}=\mathord{\Downarrow}$. Both options are
excluded by the choice of~$\bfa{x}$. Since neither of these two
squares is $p_1$ or~$p_2$, we have
$T(\mathord{\swarrow},\dotsc,\mathord{\swarrow})=0=T(\mathord{\nearrow},\dotsc,\mathord{\nearrow})$.
As $\mathord{\boxdot}$ has zeros outside the \nbdd{\mathord{\swarrow}}diagonal, it
follows that also
$(\rightarrow_1,\leftarrow_2,\dotsc,\leftarrow_n)$ and
$(\downarrow_1,\uparrow_2,\dotsc,\uparrow_n)$ have zeros somewhere
and are hence mapped to zero by~$T$. Thus~$\mathord{\boxdot}$ satisfies the
two conditions regarding the kernel of~$T$. As~$\mathord{\boxdot}$ contains
zeros, we also have $T(\mathord{\boxdot}) = 0 = y$, concluding the argument.
\qedhere
\end{enumerate}
\end{proof}
As a corollary we obtain that the example algebras $\algwops{A}{T}$
constructed by Snow in~\cite{SnowGeneratingPrimitivePositiveClones} do
not generate centraliser clones as term operations and are thus no
counterexample to the Burris-Willard conjecture or to
Dani\v{l}\v{c}enko's results.
\begin{corollary}\label{cor:f-separating-clones}
For every carrier~$A$ of cardinality $k\geq 3$ the \nbdd{(k-1)}ary
function~$f$ defined in
Theorem~\ref{thm:pp-defining-separating-function} satisfies
$f\in \bicent{\set{T}}\setminus\genClone{\set{T}}$.
\end{corollary}
\begin{proof}
By Theorem~\ref{thm:pp-defining-separating-function} we have
$f\in\bicent{\set{T}}$; since~$f$ is \nbdd{(k-1)}ary, it cannot belong to the
clone generated by~$T$ as it maps two distinct tuples to one value and is
not a projection (cf.\ Lemma~\ref{lem:Tclone2-general}).
\end{proof}
\section{Some computational remarks}\label{sect:computations}
We conclude with a few comments on computational aspects related to
verifying that for \m{A=\set{0,1,2}}, the simplest case in question,
the binary function
\m{f\in\set{T}^{*(2)*(2)}\setminus\genClone[2]{\set{T}}}
found in Lemma~\ref{lem:unique-bin-func} actually belongs to
\m{\bicent{\set{T}}}.
\par
The first possibility is based on trusting the classification results
shown by Da\-ni\v{l}\-\v{c}en\-ko in~\cite[Theorems~4, 5,
pp.~103, 105]{Danilcenko1979-thesis}. Using the equivalence of
statements~\eqref{item:cent-leq-n} and~\eqref{item:cent-n-cent-bicent}
in Proposition~\ref{prop:char-cdeg}, these theorems imply that
\m{\bicent{\set{T}}=\set{T}^{*(3)*}}, which
contains~$f$ by the calculations described in
Remark~\ref{rem:f-in-cent-3-cent-T}. Believing in Dani\v{l}\v{c}enko's
thesis obviously does not render
Theorem~\ref{thm:pp-defining-separating-function} obsolete, as the
latter also covers the cases where \m{\abs{A}>3}.
\par
The second option we would like to discuss is whether it is feasible to
compute a primitive positive formula over~\m{\graph{T}} that allows one to
define~\m{\graph{f}}. The formula shown in
Lemma~\ref{lem:pp-formula} uses five \nbdd{\graph{T}}atoms and
four existentially quantified variables. Of course, these bounds are not
known beforehand, and even if they were, simply trying to produce all
formul\ae{} with $\ell=1,2,3,\dots$ \nbdd{\graph{T}}atoms and trying to
find a \nbdd{3}variable projection that gives~$\graph{f}$ becomes
unwieldy very quickly. Indeed, before even dealing with projections,
there are \m{\kappa^\kappa} possible variable substitutions, where
\m{\kappa\defeq\ell\cdot\arity(\graph{T})} for a primitive positive
formula with $\ell$~atoms of type~$\graph{T}$ and at most~$\kappa$
variables. More concretely, to get the formula from
Lemma~\ref{lem:pp-formula}, we would have \m{\ell=5} and
\m{\arity(\graph{T})=5}, so \m{\kappa=25}, and
\m{25^{25}\approx 10^{35}} substitutions are currently too many to
check in a reasonable amount of time.
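For a rough idea of scale, this count can be evaluated directly; the
following Python fragment (an illustration only, not part of the
ancillary material) confirms the order of magnitude:
\begin{verbatim}
from math import log10

ell, arity = 5, 5        # five graph(T)-atoms, graph(T) is 5-ary
kappa = ell * arity      # at most kappa = 25 variables
print(kappa ** kappa)    # 25^25
print(kappa * log10(kappa))   # about 34.9, i.e. roughly 10^35
\end{verbatim}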
\par
However, if \m{f\in\bicent{\set{T}}}, then
\m{\graph{f}\in\Inv{\Pol{\set{\graph{T}}}}}, and there is a more
systematic method to compute a primitive positive formula for a
relation \m{\rho_0\in\Inv{F}} on a finite set~$A$ where
\m{F=\Pol{\set{\rho_1,\dotsc,\rho_t}}}, \m{t\in\N}.
It comes from an algorithm to compute~$\Fn{F}$ interpreted as
a relation~\m{\Gamma_F(\chi_n)} of arity~$\abs{A}^n$, which is given
in~\cite[4.2.5., p.~100 et seq.]{PoeKal}, combined with the proof of
the second part of the main theorem on the $\PolOp\text{-}\InvOp$
Galois connection, showing that any
\m{\rho_0\in\Inv{\Pol{Q}}} belongs to the relational clone generated
by~$Q$, as it is primitive positively definable from
\m{\Gamma_{F}(\chi_n)} where \m{F=\Pol{Q}} and \m{n=\abs{\rho_0}}
(cf.~\cite[1.2.2.~Lemma, p.~53 et seq.]{PoeKal}).
\par
The following is slightly more general than what is described
in~\cite[4.2.5]{PoeKal} for we can deal with finitely many describing
relations \m{\rho_1,\dotsc,\rho_t} for the polymorphism clone~$F$,
while only one is used in~\cite{PoeKal}. Taking
\m{Q=\set{\rho_1\times\dotsm\times\rho_t}} as a singleton
in~\cite{PoeKal} is inefficient from a computational point of view, so
we give a proof of this not very original modification. Additionally,
we allow for a generating system~$\gamma_0$ of the relation~$\rho_0$
for which a formula is sought (although this is somehow implicit
in~\cite[4.2.5]{PoeKal} as \m{\Gamma_{F}(\chi_n)} is generated by the
\nbdd{n}element subrelation~$\chi_n$).
\begin{proposition}\label{prop:gen-rel-clone}
Assume \m{Q\defeq\lset{\rho_\ell}{1\leq \ell\leq t}}, \m{F\defeq \Pol{Q}},
\m{\rho_0\in\Inv{F}} where \m{\rho_\ell\subs A^{m_\ell}} for
\m{0\leq \ell\leq t}. Let \m{\gamma_0\subs\rho_0} with \m{n\defeq
\abs{\gamma_0}} be a generating system of~$\rho_0$, that is,
\m{\rho_0=\gapply{\gamma_0}_{\algwops{A}{F}^{m_0}}}.
There is \m{m'\leq m_0} and \m{\gamma\subs\rho\in\Inv[m']{F}} and
\m{\alpha\colon m_0\to m'} such that
\m{\gamma_0=\lset{x\circ\alpha}{x\in\gamma}} where~$\gamma$ does not have
any duplicate coordinates. If we imagine the tuples in~\m{\gamma_0}
written as columns of an \nbdd{(m_0\times n)}matrix, then the distinct
rows of this matrix are precisely the rows of the matrix whose columns
form the tuples of~$\gamma$. Some of these $m'$~rows will be found as rows of a
relation \m{\mu\subs A^L} with \m{\abs{\mu}=n} defined below. For
notational simplicity we choose~$\alpha$ such that the rows with
indices \m{1,\dotsc,m} have this property and put \m{p\defeq
m'-m\geq0}.
\par
The matrix representation of the relation~$\mu$ has~$n$ columns
(tuples) and~$L$ rows \m{\apply{\bfa{z}_i}_{0\leq i<L}} where \m{L=\sum_{\ell=1}^t s_\ell^n\cdot m_\ell}
with \m{s_\ell=\abs{\rho_\ell}}. Let the columns of~$\mu$ arise by
stacking on top of each other all possible submatrices of~$\rho_1$
with~$n$ columns, followed by all possible submatrices of~$\rho_2$,
and so forth, finishing with all submatrices obtained by choosing~$n$
of the~$s_t$ columns of~$\rho_t$. Thus
\m{\mu\subs\pi\defeq\rho_1^{s_1^n}\times\dotsm\times\rho_t^{s_t^n}}.
Define the kernel relation \m{\epsilon\defeq \lset{(i,j)\in
L^2}{\bfa{z}_i = \bfa{z}_j}} and identify variables in~\m{\pi}
accordingly with \m{\delta_\epsilon = \lset{x\in A^L}{\forall\,
(i,j)\in\epsilon\colon x_i=x_j}}. This gives
\m{\sigma\defeq\pi\cap\delta_{\epsilon}} having the same row kernel as~$\mu$.
By finding the first~$m$ rows of~$\gamma$ among the rows of~$\mu$, we
find a projection~$\pr$ to an \nbdd{m}element set of indices such that
\m{\gamma\subs\pr(\mu)\times A^p}. It follows that
\m{\rho=\pr(\sigma)\times A^p
=\pr\apply{\apply{\rho_1^{s_1^n}\times
\dotsm\times \rho_t^{s_t^n}}\cap\delta_\epsilon}\times A^p}.
\end{proposition}
\begin{proof}
To show that \m{\rho\subs\pr(\sigma)\times A^p} we note that
\m{\pr(\sigma)\times A^p\in \Inv{F}} since for every \m{1\leq\ell\leq t}
we have \m{\rho_\ell\in Q\subs\Inv{F}}. Moreover, as~$\rho$ is a
projection of~$\rho_0$ in the same way as~$\gamma$ is a projection
of~$\gamma_0$, and since
\m{\rho_0=\gapply{\gamma_0}_{\algwops{A}{F}^{m_0}}}, we have
\m{\rho=\gapply{\gamma}_{\algwops{A}{F}^{m'}}}.
Due to \m{\gamma\subs\pr(\mu)\times A^p \subs \pr(\sigma)\times A^p},
the generating set~$\gamma$ is a subset of the invariant
\m{\pr(\sigma)\times A^p}, and so is the generated invariant~$\rho$.
\par
For the converse inclusion we take a tuple $x\in\sigma$ and some
\m{a\in A^p} and denote by~$y$ the tuple obtained from~$x$ by
projection to the~$m$ indices that relate the first~$m$ rows
\m{\bfa{v}_1,\dotsc,\bfa{v}_m} of~$\gamma$ to a certain section of~$x$.
Since \m{x\in\sigma\subs\delta_\epsilon}, this tuple defines a function
\m{f_x\colon B\to A} where
\m{B\defeq\lset{\bfa{z}_i}{0\leq i<L}\sups
C\defeq\lset{\bfa{v}_i}{1\leq i\leq m}} by finding for any
\m{\bfa{z}\in B} some \m{0\leq i<L} such that
\m{\bfa{z}=\bfa{z}_i} and letting \m{f_x(\bfa{z}) \defeq x_i} (the
choice of \m{i<L} is inconsequential as \m{x\in\delta_\epsilon}).
The remaining rows \m{\bfa{v}_i} with \m{m<i\leq m+p} do not belong
to~$B$ by the choice of~$m$ and~$p$. This means, $(y,a)$ defines a
function \m{f_{y,a}\colon\lset{\bfa{v}_i}{1\leq i\leq m'}\to A}, where
\m{f_x\restriction_C=f_{y,a}\restriction_C}. Moreover, it is possible
to extend~$f_{y,a}$ to a globally defined function \m{f\colon A^n\to A}
such that \m{f\restriction_{\set{\bfa{v}_i\mid 1\leq i\leq m'}}=f_{y,a}}
and \m{f\restriction_B=f_x} without contradictory value assignments.
We pick one particular such~$f$, no matter which one, and we show below
that~$f\in F=\Pol{Q}$. By the hypothesis of the proposition, $\rho$
belongs to~$\Inv{F}$, so~$f$ preserves~$\rho$. Thus, applying~$f$ to
the tuples in~$\gamma$, gives
\m{(y,a)=\apply{f_{y,a}(\bfa{v}_i)}_{1\leq i\leq m'}
=\apply{f(\bfa{v}_i)}_{1\leq i\leq m'}\in\rho} as needed.
\par
It remains to argue that~$f\in\Pol{Q}$. Hence, take any
\m{1\leq \ell\leq t} and any matrix of $n$~columns taken
from~$\rho_\ell$. By the construction of~$\mu$ there are
$m_\ell$~consecutive indices \m{0\leq i,i+1,\dotsc,i+m_\ell-1<L} such
that the rows of this matrix are
\m{\bfa{z}_i,\dotsc,\bfa{z}_{i+m_\ell-1}}.
Now \m{\apply{f(\bfa{z}_{i+\nu})}_{0\leq \nu<m_\ell}=
\apply{f_x(\bfa{z}_{i+\nu})}_{0\leq \nu<m_\ell}}, and this tuple is
in~$\rho_\ell$ because~$f_x$ is defined via
\m{x\in\sigma=\pi\cap\delta_{\epsilon}}.
\end{proof}
The expression \m{\rho=\pr\apply{\apply{\rho_1^{s_1^n}\times
\dotsm\times \rho_t^{s_t^n}}\cap\delta_\epsilon}\times A^p}
in Proposition~\ref{prop:gen-rel-clone} gives a primitive positive
definition of~$\rho$ in terms of~$\rho_1,\dotsc,\rho_t$. Duplicating
variables as indicated by~$\alpha$, one can then give a primitive
positive formula for the original relation~$\rho_0$.
The inclusion \m{\rho\subs\pr\apply{\apply{\rho_1^{s_1^n}\times
\dotsm\times \rho_t^{s_t^n}}\cap\delta_\epsilon}\times A^p}
holds in any case, regardless of the assumption that
\m{\rho\in\Inv{F}}. It can be seen from the proof of
Proposition~\ref{prop:gen-rel-clone} that the latter condition is only
needed for the opposite inclusion.
\par
That is, the formula computed by the following algorithm will always be
satisfied by all tuples from~$\rho$ (or~$\rho_0$), but if the
containment
\m{\rho\in\Inv{F}} is only suspected but not known in advance, then one
needs to check afterwards that the tuples satisfying the
generated primitive positive formula really belong to~$\rho$ (or
to~$\rho_0$, respectively).
\begin{algo}\label{alg:gen-rel-clone}
Compute a primitive positive definition\footnote{%
An implementation is available in the file \texttt{ppdefinitions.cpp},
which can be compiled using \texttt{compile.sh}, resulting in an
executable \texttt{getppformula}. This executable expects a file
\texttt{input.txt}, the formatting of which is explained in
\texttt{input\_template.txt}, which can also be used as
\texttt{input.txt}. After a successful run the programme will
produce files \texttt{ppoutput.out}, an ascii text file containing the
computed primitive positive formula, and \texttt{checkppoutput.z3}, a
script to verify the correctness of the formula using the Z3 theorem
prover~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3}.
\par
There are two caveats with the implementation added as ancillary file
to this submission: first, the initial preprocessing step
turning~$\gamma_0$ into~$\gamma$ has not been implemented. Hence,
\texttt{ppdefinitions.cpp} expects a goal relation~$\gamma$
(relation~\texttt{S} in \texttt{input.txt}) without duplicate
coordinates. If $\gamma_0$ has duplicate rows, the initial massaging
and the final adjustment of the formula by duplicating the respective
variables has to be done by hand. Second, it is possible to use a
proper generating set $\gamma_0\subs\rho_0$ in the input (provided it
does not contain duplicate rows), but then in the output file
\texttt{checkppoutput.z3} the goal relation~\texttt{S} has to be
completed manually with all tuples from~$\rho_0$, since the closure
\m{\gapply{\gamma_0}_{\algwops{A}{F}^{m_0}}} is not computed.}
\newline
(Pseudocode is given on page~\pageref{code:compute-ppdefinition},
line numbers in the description refer to this code.)
\begin{description}
\item[Input]
finitary relations \m{\rho_1\subs A^{m_1},\dotsc,\rho_t\subs A^{m_t}}
defining \m{F\defeq \Pol{Q}} where
\m{Q=\set{\rho_\ell\mid 1\leq \ell\leq t}}
\par
a generating system \m{\gamma_0\subs A^{m_0}} for a relation
\m{\rho_0=\gapply{\gamma_0}_{\algwops{A}{F}^{m_0}}\in \Inv{F}}
\item[Output]
a primitive positive formula describing \m{\rho_0} in terms of
\m{\rho_1,\dotsc,\rho_t}
\item[Description]
We assume that \m{\gamma_0=\set{r_1,\dotsc,r_n}}, the tuples of which
we represent as a matrix with columns \m{r_1,\dotsc,r_n} and rows
\m{v_1,\dotsc,v_{m_0}}.
We first define a map
\m{\alpha\colon\set{1,\dotsc,m_0}\to\set{1,\dotsc,m'}} to a
transversal of the equivalence relation
\m{\lset{(i,j)\in\set{1,\dotsc,m_0}^2}{v_i=v_j}} (lines~1--9).
For this we iterate over all rows, and, if~$v_j$ has been seen
previously among \m{v_1,\dotsc,v_{j-1}}, we assign to \m{\alpha(j)}
the same index \m{\iota(v_j)} as previously, and if~$v_j$ is a fresh
row, we assign to~\m{\alpha(j)} the least index \m{i\eqdef\iota(v_j)}
not used before (lines~4--9). When this is finished,
\m{\gamma_0=\lset{(x_{\alpha(1)},\dotsc,x_{\alpha(m_0)})}{(x_1,\dotsc,x_{m'})\in\gamma}}
where \m{\gamma\subs A^{m'}} is the projection of~$\gamma_0$ to its
distinct rows, and~$m'$ is the last used value of~$i$.
\par
Next we iterate over all \m{1\leq \ell\leq t} and for each
relation~$\rho_\ell$ we iteratively extend the set~$\mathcal{L}_\ell$
of \nbdd{\rho_\ell}atoms for the final formula, starting from
\m{\mathcal{L}_\ell=\emptyset} (lines~10--13).
We iterate over the rows \m{z_1,\dotsc,z_{m_\ell}} of all possible
matrices with~$n$ columns chosen from~$\rho_\ell$ (lines~14--16). For any of these
matrices we construct an \nbdd{m_\ell}tuple~$a$ of variable symbols (lines~17--24),
which will represent a \nbdd{\rho_\ell}atom and will be added
to~$\mathcal{L}_\ell$ if it is not already present in the list of
atoms (lines~25--26). The atoms have to be constructed in such a way that any two
identical rows occurring within all possible matrices get the same
variable symbol. This ensures that the variable identification
represented in Proposition~\ref{prop:gen-rel-clone} by intersection
with~$\delta_\epsilon$ takes place. Moreover, if a row in the
matrices occurs as a row of~$\gamma$ (or equivalently of~$\gamma_0$),
then the corresponding variable is not going to be existentially
quantified, while all others are. This takes care of the projection
in the formula for~$\rho$ from Proposition~\ref{prop:gen-rel-clone}.
\par
In more detail, if a row~$z_j$ with \m{1\leq j\leq m_\ell} has not occurred
previously (line~17), we have to define its variable
symbol~$u(z_j)$. If \m{z_j\in\set{v_1,\dotsc,v_{m_0}}}, that is,
$z_j$ is among the rows of~$\gamma_0$, we use the
variable \m{u(z_j)\defeq x_{\iota(z_j)}} (lines~19--20).
Otherwise, the fresh row~$z_j$ needs to be projected away by
existential quantification, and we use a different symbol
\m{u(z_j)\defeq y_k} where~$k>0$ is the least previously unused index
for existentially quantified variables (lines~21--23). Regardless of whether~$z_j$
is fresh or not, we define the \nbdd{j}th entry of the current
atom~$a$ as \m{a(j)\defeq u(z_j)} (line~24). Only if the resulting string
$a=(a(1),\dotsc,a(m_\ell))\notin\mathcal{L}_\ell$, that is, if~$a$ is
a new atom, it will be added to~$\mathcal{L}_\ell$ (lines~25--26).
\par
After all iterations, we state that all variables \m{x_1,\dotsc,x_i}
occurring in \m{\set{x_{\alpha(1)},\dotsc,x_{\alpha(m_0)}}} come from
the base set~$A$, we existentially quantify all variables
\m{y_1,\dotsc,y_k} and write out (line~27) a long conjunction over all
relations \m{\rho_1,\dotsc,\rho_t} and over all \nbdd{\rho_\ell}atoms
\m{a\in \mathcal{L}_\ell}
(cf.\ the direct product in the formula for~$\rho$ in
Proposition~\ref{prop:gen-rel-clone}).
\end{description}
\begin{algorithm}[htp]
\SetAlgoVlined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{%
finitary relations \m{\rho_1\subs A^{m_1},\dotsc,\rho_t\subs A^{m_t}}\\
\texttt{// defining \m{F\defeq \Pol{Q}} where
\m{Q=\set{\rho_\ell\mid 1\leq \ell\leq t}}}\\
generating system $\gamma_0\subs A^{m_0}$ for a relation
\rlap{$\rho_0=\gapply{\gamma_0}_{\algwops{A}{F}^{m_0}}\in\Inv{F}$}\\
\tcp{where \m{\gamma_0=\set{r_1,\dotsc,r_n}}, i.e.,
\m{\abs{\gamma_0}\leq n}
\newline
written as a matrix
$\apply{r_1,\dotsc,r_n} = \apply{\begin{smallmatrix}
v_1\\\vdots\\v_{m_0}
\end{smallmatrix}}$
with rows $v_j\in A^n$}
}
\Output{a primitive positive presentation of~$\rho_0$ in terms of
\m{\rho_1,\dotsc,\rho_t}}
\Begin{%
$i\gets 0$
\tcp*[r]{initialise index for distinct rows of~$\gamma_0$}
$D_0\gets\emptyset$
\tcp*[r]{initialise domain of distinct rows of~$\gamma_0$}
\tcp{Define $\iota\colon D_0\to \set{1,\dotsc,\abs{D_0}}$,
$\alpha\colon \set{1,\dotsc,m_0}\to\set{1,\dotsc,\abs{D_0}}$}
\ForAll{$1\leq j\leq m_0$}{%
\If{$v_j\notin D_0$}{
$D_0\gets D_0\cup\set{v_j}$\;
$i\gets i+1$\;
$\iota(v_j)\gets i$\;
}
$\alpha(j)\gets\iota(v_j)$
}
\tcp{Now \m{D_0=\set{v_1,\dotsc,v_{m_0}}}}
$k\gets 0$
\tcp*[r]{initialise index for $\exists$-quantified variables}
$D\gets \emptyset$
\tcp*[r]{\mbox{initialise domain of distinct rows from submatrices}
of~$\rho_1,\dotsc,\rho_t$ to define
$u\colon D\to \set{x_{1},\dotsc,
x_{\abs{D_0}}}\cup\set{y_1,\dotsc,y_{k}}$}
\ForAll{$1\leq \ell\leq t$}{
\m{\mathcal{L}_\ell\gets \emptyset}
\tcp*[r]{initialise list of atoms pertaining to~$\rho_\ell$}
\ForAll{$c\colon n\to\rho_\ell$}{%
Form a matrix
\m{\apply{c_0,\dotsc,c_{n-1}}=\apply{\begin{smallmatrix}z_1\\\vdots\\z_{m_\ell}\end{smallmatrix}}}
with rows \m{z_j\in A^n}\;%
\tcp{Iterate over its rows and form a possibly new atom~$a$}
\ForAll{$1\leq j\leq m_\ell$}{%
\If(\tcp*[f]{A previously unseen row $z_j$ appears.}){$z_j\notin D$}{
$D\gets D\cup\set{z_j}$\;
\eIf(\tcp*[f]{It is a row of~$\gamma_0$.}){$z_j\in D_0$}{%
$u(z_j)\gets x_{\iota(z_j)}$}
{
$k\gets k+1$\;
$u(z_j)\gets y_{k}$}
}
$a(j)\gets u(z_j)$
\tcp*[r]{extend current atom with the appropriate variable symbol}
}
\If(\tcp*[f]{If it is really new\dots}){$a=(a(1),\dotsc,a(m_\ell))\notin \mathcal{L}_\ell$}{
$\mathcal{L}_\ell \gets \mathcal{L}_\ell\cup\set{a}$
\tcp*[r]{\dots add current atom~$a$ to the list.}
}
}}
\Return{String
\m{\rho_0=\Bigl\{\apply{x_{\alpha(1)},\dotsc,x_{\alpha(m_0)}}
\ \Big\vert\ x_1,\dotsc,x_i\in A\land
\exists y_1\dotsm \exists y_k\colon
\bigwedge\limits_{1\leq \ell\leq t}\bigwedge\limits_{a\in\mathcal{L}_\ell}\rho_\ell(a)\Bigr\}}}
}
\NoCaptionOfAlgo
\caption{Compute a primitive positive definition}
\label{code:compute-ppdefinition}
\end{algorithm}
\end{algo}
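To complement the pseudocode, the following Python sketch
(illustrative only; it is not the ancillary \texttt{ppdefinitions.cpp},
and all identifiers are our own) implements the same procedure.
Relations are passed as lists of equal-arity tuples over the base set,
and, as in the implementation, the generating system is assumed to be
free of duplicate tuples.
\begin{verbatim}
from itertools import product

def pp_definition(relations, gamma0):
    # relations: rho_1, ..., rho_t, each a list of equal-arity tuples
    # gamma0:    generating system, a list of n tuples of arity m0
    n, m0 = len(gamma0), len(gamma0[0])
    # rows v_1, ..., v_{m0} of the matrix whose columns are gamma0
    rows = [tuple(r[j] for r in gamma0) for j in range(m0)]
    iota = {}                        # distinct row -> index of x-variable
    for v in rows:
        iota.setdefault(v, len(iota) + 1)
    alpha = [iota[v] for v in rows]  # coordinate duplication map
    u = {v: "x%d" % i for v, i in iota.items()}   # row -> symbol
    k = 0                            # counter for quantified variables
    atom_lists = []
    for rho in relations:
        m, atoms = len(rho[0]), []
        for cols in product(rho, repeat=n):   # every n-column submatrix
            atom = []
            for j in range(m):                # its rows z_1, ..., z_m
                z = tuple(c[j] for c in cols)
                if z not in u:                # fresh row: quantify away
                    k += 1
                    u[z] = "y%d" % k
                atom.append(u[z])
            if tuple(atom) not in atoms:      # keep each atom only once
                atoms.append(tuple(atom))
        atom_lists.append(atoms)
    return alpha, k, atom_lists
\end{verbatim}
The returned triple encodes the formula: \texttt{alpha} tells how to
duplicate coordinates in order to recover~$\rho_0$ from~$\rho$,
\texttt{k} is the number of existentially quantified variables, and the
\nbdd{\ell}th entry of \texttt{atom\_lists} collects the
\nbdd{\rho_\ell}atoms of the final conjunction.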
\begin{example}\label{ex:computing-ternary-f.-from-T.}
In the case discussed in this section, we have \m{A=\set{0,1,2}},
\m{t=1}, \m{Q=\set{\graph{T}}}, \m{\rho_0=\graph{f}}, \m{m_0=3} and
\m{m_1=5}. Moreover, \m{F=\Pol{Q}=\cent{\set{T}}}.
As the size~$s_1$ of~$\graph{T}$ is \m{\abs{A}^4=81}, it is
crucial for the applicability of Algorithm~\ref{alg:gen-rel-clone} to
find a small generating system~$\gamma_0$ of~$\graph{f}$ with respect to
\m{\alg{A}^3} where \m{\alg{A}\defeq\alg[\cent{\set{T}}]{A}}. Given
\m{\abs{\gamma_0}=n}, the algorithm has to iterate over \m{s_1^n=81^n}
matrices and thus over \m{m_1\cdot s_1^n = 5\cdot 81^n} rows.
Experiments show that if we blindly took \m{\gamma_0=\graph{f}}, i.e.,
\m{n=\abs{A}^2=9}, the algorithm would need more than eighteen thousand
years to finish, perhaps less by a factor of ten if run on a computer
much faster than the author's.
Fortunately, the number~$n$ can be reduced significantly to a value far
below~$9$.
\par
Indeed, listing the tuples of~$\graph{f}$ as columns, we have
\begin{align*}
\graph{f}&=\set{
\apply{\begin{smallmatrix}0\\0\\0\end{smallmatrix}},
\apply{\begin{smallmatrix}0\\1\\0\end{smallmatrix}},
\apply{\begin{smallmatrix}0\\2\\0\end{smallmatrix}},
\apply{\begin{smallmatrix}1\\0\\0\end{smallmatrix}},
\apply{\begin{smallmatrix}1\\1\\0\end{smallmatrix}},
\apply{\begin{smallmatrix}1\\2\\1\end{smallmatrix}},
\apply{\begin{smallmatrix}2\\0\\0\end{smallmatrix}},
\apply{\begin{smallmatrix}2\\1\\1\end{smallmatrix}},
\apply{\begin{smallmatrix}2\\2\\0\end{smallmatrix}}}\\
&=\gapply{\set{
\apply{\begin{smallmatrix}1\\2\\1\end{smallmatrix}},
\apply{\begin{smallmatrix}2\\1\\1\end{smallmatrix}}
}}_{\alg{A}^3}.
\end{align*}
To see this, we can take advantage of the unary operations
\m{u_{2,a}\in \ncent[1]{\set{T}}} with \m{a\in A}, described in
Corollary~\ref{cor:Tstar1}, and
\m{f_{0,(2,2,2,2)}\in\ncent[2]{\set{T}}} from
Lemma~\ref{lem:Tstar2}. Namely, for \m{a\in\set{0,1,2}}, we have
\begin{align*}
u_{2,a}(1)&=0& u_{2,a}(2)&=a& f_{0,(2,2,2,2)}(1,2)&=2& u_{2,1}(2)&=1\\
u_{2,a}(2)&=a& u_{2,a}(1)&=0& f_{0,(2,2,2,2)}(2,1)&=2& u_{2,1}(2)&=1\\
u_{2,a}(1)&=0,& u_{2,a}(1)&=0,& f_{0,(2,2,2,2)}(1,1)&=0,& u_{2,1}(0)&=0.
\end{align*}
\par
Hence, we can use the \nbdd{2}element generating set
\m{\gamma_0=\set{(1,2,1),(2,1,1)}\subs\graph{f}}
and thus we only have to enumerate \m{5\cdot 81^2 = 32\,805} rows. This
can be done in a fraction of a second\footnote{%
After compilation the programme \texttt{ppdefinitions.cpp} may be run
on \texttt{input\_2generated.txt} copied to \texttt{input.txt}. The
mentioned files can be found in the ancillary directory of this submission.}
and results in a primitive positive formula\footnote{%
Running \texttt{ppdefinitions.cpp} on \texttt{input\_2generated.txt}
(see the ancillary directory) produces the content of \texttt{ppoutput\_2generated.out}
and \texttt{checkppoutput\_2generated.z3}, which both contain the resulting
primitive positive formula (as plain text and in
\texttt{SMT-LIB2.0}-syntax).} with $6$~existentially quantified variables and
$6\,561$~\nbdd{\graph{T}}atoms, the correctness of which can be
verified by a sat\dash{}solver in a few minutes\footnote{%
This can, for example, be done with the Z3 theorem
prover~\cite{deMouraBjoernerZ3EfficientSMTsolver,Z3} using the
ancillary file \texttt{checkppoutput\_2generated.z3}. This file also
contains the computed primitive positive formula for~$\graph{f}$
expressed in the \texttt{SMT-LIB2.0}-format.}.
\end{example}
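The counts just quoted are immediate to re-derive; as a quick sanity
check (illustrative only):
\begin{verbatim}
s1, n, m1 = 81, 2, 5     # |graph(T)| = 3^4, |gamma_0| = 2, arity 5
print(m1 * s1 ** n)      # 32805 rows to enumerate
print(s1 ** n)           # at most 6561 distinct graph(T)-atoms
\end{verbatim}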
We conclude that it is possible to computationally find a proof that
the graph of~$f$ is primitive positively definable from~$\graph{T}$ for
\m{A=\set{0,1,2}}. However, the resulting formula is not suitable for a
generalisation to larger carrier sets as the one from
Lemma~\ref{lem:pp-formula} was.
\section*{Acknowledgements}\label{sect:acknowledgements}
The author would like to thank Zarathustra Brady for telling him about
the possibility to include ancillary files with an arXiv preprint.
Moreover, he is grateful to Dmitriy Zhuk for mentioning the usefulness
of the Z3 theorem prover in connection with clones.
\input{referencesBW.tex}%
\end{document}
\section{Introduction}
All graphs in this paper are undirected, finite and simple. We refer
to the book \cite{bondy} for graph theoretic notation and
terminology not described here. For any graph $G$ of order $n$, the
\emph{spanning tree packing number} or \emph{$STP$ number}, denoted
by $\sigma=\sigma(G)$, is the maximum number of edge-disjoint
spanning trees contained in $G$. The problem of determining the $STP$
number of a graph is called the \emph{Spanning Tree Packing Problem},
and Palmer \cite{Palmer} published a survey paper on this subject.
Later, Ozeki and Yamashita gave a survey paper on the spanning tree
problem; for more details, we refer to \cite{OY}.
With graphs considered as natural models for many network design
problems, (edge-)connectivity and maximum number of edge-disjoint
spanning trees of a graph have been used as measures for reliability
and strength in communication networks modeled as a graph (see
\cite{Cunningham, Matula}).
Graph products are important methods to construct bigger graphs, and
play key roles in the design and analysis of networks. In
\cite{Peng2}, Peng and Tay obtained the spanning tree numbers of
Cartesian products of various combinations of complete graphs,
cycles, and complete multipartite graphs. Note that $Q_{n}\cong
P_{2}\Box P_{2}\Box\cdots\Box P_{2}$, where $Q_n$ is the
$n$-hypercube. Let $K_{n(m)}$ denote a complete multipartite graph
with $n$ parts each of which contains exact $m$ vertices.
\begin{proposition}\cite{Palmer, Peng2}\label{pro1}
$(1)$ $\sigma(K_n\Box C_m)=\lfloor \frac{n+1}{2}\rfloor$;
$(2)$ For $2\leq n\leq m$, $\sigma(K_n\Box K_m)=\lfloor
\frac{n+m-2}{2}\rfloor$;
$(3)$ For $n$-hypercube $Q_n\cong P_{2}\Box P_{2}\Box\cdots\Box
P_{2}$, $\sigma(Q_n)=\lfloor \frac{n}{2}\rfloor$;
$(4)$ $\sigma(K_{n(m)}\Box K_r)=\lfloor \frac{nm-m+r-1}{2}\rfloor$;
$(5)$ For a cycle $C_r$ with $r$ vertices, $\sigma(K_{n(m)}\Box
C_r)=\lfloor \frac{nm-m+2}{2}\rfloor$;
$(6)$ $\sigma(K_{n(m)}\Box K_{r(t)})=\lfloor
\frac{m(n-1)+(r-1)t}{2}\rfloor$;
$(7)$ $\sigma(K_{n(m)})=\lfloor \frac{m(n-1)}{2}\rfloor$.
\end{proposition}
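Since all formulas in Proposition~\ref{pro1} are closed-form floor
expressions, they are easy to tabulate. The following Python helpers
(the function names are our own and are given only for convenience)
mirror items $(1)$--$(7)$ via integer floor division:
\begin{verbatim}
def stp_Kn_Cm(n, m):          # (1); independent of m
    return (n + 1) // 2

def stp_Kn_Km(n, m):          # (2); assumes 2 <= n <= m
    return (n + m - 2) // 2

def stp_Qn(n):                # (3)
    return n // 2

def stp_Knm_Kr(n, m, r):      # (4)
    return (n * m - m + r - 1) // 2

def stp_Knm_Cr(n, m, r):      # (5); independent of r
    return (n * m - m + 2) // 2

def stp_Knm_Krt(n, m, r, t):  # (6)
    return (m * (n - 1) + (r - 1) * t) // 2

def stp_Knm(n, m):            # (7)
    return (m * (n - 1)) // 2
\end{verbatim}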
In this paper, we focus on general graphs and give some lower bounds
for the $STP$ numbers of Cartesian product graphs and lexicographic
product graphs. Moreover, these lower bounds are sharp.
\section{For Cartesian product}
Recall that the \emph{Cartesian product} (also called the {\em
square product}) of two graphs $G$ and $H$, written as $G\Box H$, is
the graph with vertex set $V(G)\times V(H)$, in which two vertices
$(u,v)$ and $(u',v')$ are adjacent if and only if $u=u'$ and
$(v,v')\in E(H)$, or $v=v'$ and $(u,u')\in E(G)$. Clearly, the
Cartesian product is commutative, that is, $G\Box H\cong H\Box G$.
Let $G$ and $H$ be two connected graphs with
$V(G)=\{u_1,u_2,\ldots,u_{n_1}\}$ and
$V(H)=\{v_1,v_2,\ldots,v_{n_2}\}$, respectively. We use $G(u_j,v_i)$
to denote the subgraph of $G\Box H$ induced by the set
$\{(u_j,v_i)\,|\,1\leq j\leq n_1\}$. Similarly, we use $H(u_j,v_i)$
to denote the subgraph of $G\Box H$ induced by the set
$\{(u_j,v_i)\,|\,1\leq i\leq n_2\}$. It is easy to see
$G(u_{j_1},v_i)=G(u_{j_2},v_i)$ for different $u_{j_1}$ and
$u_{j_2}$ of $G$. Thus, we can replace $G(u_{j},v_i)$ by $G(v_i)$
for simplicity. Similarly, we can replace $H(u_{j},v_i)$ by
$H(u_j)$. For any $u,u'\in V(G)$ and $v,v'\in V(H)$, $(u,v),\
(u,v')\in V(H(u))$, $(u',v),\ (u',v')\in V(H(u'))$, $(u,v),\
(u',v)\in V(G(v))$, and $(u,v'),\ (u',v')\in V(G(v'))$. We refer to
$(u,v)$ as the vertex corresponding to $u$ in $G(v)$. Clearly,
$|E(G\Box H)|=|E(H)||V(G)|+|E(G)||V(H)|$.
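These definitions are easy to experiment with; for instance, using the
\texttt{networkx} package (whose \texttt{cartesian\_product} follows
the same definition), the edge count above can be confirmed on a small
example:
\begin{verbatim}
import networkx as nx

G, H = nx.complete_graph(4), nx.cycle_graph(5)
GH = nx.cartesian_product(G, H)     # vertices are pairs (u, v)
n1, n2 = G.number_of_nodes(), H.number_of_nodes()
assert GH.number_of_edges() == \
    H.number_of_edges() * n1 + G.number_of_edges() * n2  # 5*4+6*5
\end{verbatim}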
\begin{figure}[h,t,b,p]
\begin{center}
\scalebox{0.7}[0.7]{\includegraphics{1.eps}}\\
Figure 1: The parallel subgraph $\mathscr{F}_i$ in $G\Box H$
corresponding to the tree $T_i$ in $G$.
\end{center}
\end{figure}
Throughout this paper, let $\sigma(G)=k$, $\sigma(H)=\ell$, and
$T_1,T_2,\cdots,T_k$ be $k$ spanning trees in $G$ and
$T_1',T_2',\cdots,T_{\ell}'$ be $\ell$ spanning trees in $H$. For
the spanning tree $T_i \ (1\leq i\leq k)$ in $G$, we define a
spanning subgraph (see Figure $1$ for an example) of $G\Box H$ as
follows: $\mathscr{F}_i=\bigcup_{v_j\in V(H)}T_i(v_j)$, where
$T_i(v_j)$ is the corresponding tree of $T_i$ in $G(v_j)$. We call
each of $\mathscr{F}_i \ (1\leq i\leq k)$ a \emph{parallel subgraph
of $G\Box H$ corresponding to the tree $T_i$ in $G$}. For a spanning
tree $T_j'$ in $H$, we define a spanning subgraph of $G\Box H$ as
follows: $\mathscr{F}_j'=\bigcup_{u_i\in V(G)}T_j'(u_i)$, where
$T'_j(u_i)$ is the corresponding tree of $T_j'$
in $H(u_i)$. We also call each of $\mathscr{F}_j' \ (1\leq j\leq
\ell)$ a \emph{parallel subgraph of $G\Box H$ corresponding to the
tree $T_j'$ in $H$}.
The following observation is helpful for understanding our main
result.
\begin{observation}\label{obs1}
Let $\mathscr{T}=\{T_1,T_2,\cdots,T_k\}$ be the set of spanning
trees of $G$, and $\mathscr{T}'=\{T_1',T_2',\cdots,T_{\ell}'\}$ be
the set of spanning trees of $H$. Then
$(1)$ $\underset{T\in \mathscr{T},T'\in \mathscr{T}'}{\bigcup} T\Box
T'\subseteq G\Box H$;
$(2)$ $E(T_i\Box T')\cap E(T_j\Box T')=\bigcup_{u\in V(G)}E(T'(u))$
for $T'\in \mathscr{T}'$ and $T_i,T_j\in \mathscr{T} \ (i\neq j)$;
$(3)$ if $G=\underset{T\in \mathscr{T}}{\bigcup}T$ and
$H=\underset{T'\in \mathscr{T}'}{\bigcup}T'$, then $\underset{T\in
\mathscr{T},T'\in \mathscr{T}'}{\bigcup} T\Box T'=G\Box H$.
\end{observation}
Let us now give our first result.
\begin{theorem}\label{th1}
For two connected graphs $G$ and $H$, $\sigma(G \Box H)\geq
\sigma(G)+\sigma(H)-1$. Moreover, the lower bound is sharp.
\end{theorem}
\begin{proof}
Let $|V(G)|=n_1$, $|V(H)|=n_2$, $\sigma(G)=k$ and $\sigma(H)=\ell$.
Since $\sigma(G)=k$, there exist $k$ spanning trees in $G$, say
$T_1,T_2,\cdots,T_{k}$. Clearly, $k\leq
\lfloor\frac{n_1}{2}\rfloor$. Since $\sigma(H)=\ell$, there exist
$\ell$ spanning trees in $H$, say $T_1',T_2',\cdots,T_{\ell}'$.
Clearly, $\ell\leq \lfloor\frac{n_2}{2}\rfloor$.
Pick up two spanning trees $T_k$ and $T_{\ell}'$ of $G$ and $H$,
respectively. Consider the graph $T_k\Box T_{\ell}'$. We will find
an our desired spanning tree of $G\Box H$ from $T_k\Box T_{\ell}'$
by a few steps.
First, we focus on the spanning tree $T_{\ell}'$ of $H$. We
successively delete some leaves of $T_{\ell}'$ to obtain a subtree
$T_{a}'$ of order $\lceil\frac{n_2}{2}\rceil$ in $T_{\ell}'$, and
the induced subgraph of all the deleted edges in $T_{\ell}'$ is a
forest, say $F_{b}'$. For example, we consider the tree $T_{\ell}'$
shown in Figure 2 $(a)$. Clearly, $|V(T_{\ell}')|=7$. We
successively delete the pendant edges $v_4v_1,v_6v_2,v_6v_3$, and obtain
the tree $T_a'=v_4v_5\cup v_4v_6\cup v_4v_7$ (see Figure 2 $(b)$)
and the forest $F_b'=v_1v_4\cup v_6v_2\cup v_6v_3$ (see Figure 2
$(c)$).
\begin{figure}[h,t,b,p]
\begin{center}
\scalebox{0.8}[0.8]{\includegraphics{2.eps}}\\
Figure 2: An example for deleting some leaves from a spanning tree
of $H$.
\end{center}
\end{figure}
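This deletion step is easy to carry out mechanically. The following
Python sketch (using \texttt{networkx}; the function name and the
particular leaf order are our own choices, so the output may differ
from Figure 2 by the order of deletions) returns a subtree $T_a'$ of
prescribed order together with the forest $F_b'$ of deleted pendant
edges:
\begin{verbatim}
import networkx as nx

def split_tree(T, target):
    # T: a networkx tree; target: desired order of the kept subtree
    Ta, deleted = T.copy(), []
    while Ta.number_of_nodes() > target:
        leaf = next(v for v in Ta.nodes if Ta.degree(v) == 1)
        nbr = next(iter(Ta.neighbors(leaf)))
        deleted.append((leaf, nbr))   # record the pendant edge
        Ta.remove_node(leaf)
    return Ta, nx.Graph(deleted)      # the pair (T_a', F_b')
\end{verbatim}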
Let $V(T_k)=V(G)=\{u_1,u_2,\cdots,u_{n_1}\}$. It is clear that there
are $n_1$ copies of the spanning tree $T_{\ell}'$ of $H$ in $T_k\Box
T_{\ell}'$, say
$T_{\ell}'(u_1),T_{\ell}'(u_2),\cdots,T_{\ell}'(u_{n_1})$. From the
above argument, for each $T_{\ell}'(u_i) \ (1\leq i\leq n_1)$, we
can obtain a subtree $T'_{a}(u_i) \ (1\leq i\leq n_1)$ of order
$\lceil\frac{n_2}{2}\rceil$ and a forest $F'_{b}(u_i) \ (1\leq i\leq
n_1)$ in $T_{\ell}'(u_i)$. Without loss of generality, let $u_1$ be
a root of $T_k$. Pick up $T_{\ell}'(u_1)$. Then we pick up
$\lfloor\frac{n_1-1}{2}\rfloor$ copies of $T_{a}'$ from
$T'_{\ell}(u_2),T'_{\ell}(u_3),\cdots,T'_{\ell}(u_{n_1})$, say
$T'_{a}(u_2),T'_{a}(u_3),\cdots,T'_{a}(u_{\lfloor\frac{n_1-1}{2}\rfloor+1})$,
and continue to pick up $\lceil\frac{n_1-1}{2}\rceil$ copies of
$F_{b}'$ from $T'_{\ell}(u_{\lfloor\frac{n_1-1}{2}\rfloor+2}),
T'_{\ell}(u_{\lfloor\frac{n_1-1}{2}\rfloor+3}),
\cdots,T'_{\ell}(u_{n_1})$, say
$F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+2}),
F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+3}),
\cdots,F'_{b}(u_{n_1})$.
Next, we combine
$T'_{a}(u_2),T'_{a}(u_3),\cdots,T'_{a}(u_{\lfloor\frac{n_1-1}{2}\rfloor+1})$,
$F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+2}),
F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+3}), \cdots,\\
F'_{b}(u_{n_1})$ with $T_{\ell}'(u_1)$ by adding some edges to form
a spanning tree of $G\Box H$ in the following way: For two trees
$T'_{\ell}(u_i)$ and $T'_{\ell}(u_j)$ such that $u_iu_j\in E(T_k)$
and $d_{T_k}(u_i,u_1)<d_{T_k}(u_j,u_1)$ (namely, $u_i$ is closer
than $u_j$ to the root $u_1$), we add some edges between
$V(T'_{\ell}(u_i))$ and $V(T'_{\ell}(u_j))$. Note that we can obtain
a subtree $T'_{a}(u_j)$ and a forest $F'_{b}(u_j)$ from
$T'_{\ell}(u_j)$. Let
$V(T_{\ell}'(u_j))=\{(u_j,v_{1}),(u_j,v_{2}),\cdots,(u_j,v_{\lceil\frac{n_2}{2}\rceil}),
(u_j,v_{\lceil\frac{n_2}{2}\rceil+1}),(u_j,v_{\lceil\frac{n_2}{2}\rceil+2}),\cdots,(u_j,v_{n_2})\}$
and $V(T'_{a}(u_j))=\{(u_j,v_{\lfloor\frac{n_2}{2}\rfloor+1}),
(u_j,v_{\lfloor\frac{n_2}{2}\rfloor+2}),\cdots,(u_j,v_{n_2})\}$ and
$V(T_{\ell}'(u_j))\setminus
V(T'_{a}(u_j))=\{(u_j,v_{1}),\\(u_j,v_{2}),\cdots,(u_j,v_{\lfloor\frac{n_2}{2}\rfloor})\}$.
If we have chosen the forest $F'_{b}(u_j)$ from $T_{\ell}'(u_j)$,
then
$E_1(u_i,u_j)=\{(u_i,v_{k})(u_j,v_{k})|\lfloor\frac{n_2}{2}\rfloor+1\leq
k\leq n_2\}$ is our desired edge set, which implies that we will add
the edges in $E_1(u_i,u_j)$ between $V(T_{\ell}'(u_i))$ and
$V(T_{\ell}'(u_j))$. If we have chosen the tree $T'_{a}(u_j)$ from
$T'_{\ell}(u_j)$, then $E_2(u_i,u_j)=\{(u_i,v_{k})(u_j,v_{k})|1\leq
k\leq \lfloor\frac{n_2}{2}\rfloor+1\}$ is our desired edge set,
which implies that we will add the edges in $E_2(u_i,u_j)$ between
$V(T_{\ell}'(u_i))$ and $V(T_{\ell}'(u_j))$. For the above example,
if we have chosen the forest $F'_{b}(u_j)=(u_j,v_{1})(u_j,v_{4})\cup
(u_j,v_{6})(u_j,v_{2})\cup (u_j,v_{6})(u_j,v_{3})$ from
$T'_{\ell}(u_j)$, then the edge set
$E_1(u_i,u_j)=\{(u_i,v_{4})(u_j,v_{4}),\\
(u_i,v_{5})(u_j,v_{5}),
(u_i,v_{6})(u_j,v_{6}),(u_i,v_{7})(u_j,v_{7})\}$ is our desired one
(see Figure 3 $(a)$). If we have chosen the tree
$T'_{a}(u_j)=(u_j,v_{4})(u_j,v_{5})\cup (u_j,v_{4})(u_j,v_{6})\cup
(u_j,v_{4})(u_j,v_{7})$ from $T'_{\ell}(u_j)$, then the edge set
$E_2(u_i,u_j)=\{(u_i,v_{1})(u_j,v_{1}),(u_i,v_{2})(u_j,v_{2}),
(u_i,v_{3})(u_j,v_{3}),(u_i,v_{4})(u_j,v_{4})\}$ is our desired one;
see Figure 3 $(b)$.
\begin{figure}[h,t,b,p]
\begin{center}
\scalebox{0.7}[0.7]{\includegraphics{3.eps}}\\
Figure 3: An example for the procedure of adding edges.
\end{center}
\end{figure}
We continue to complete the above adding edges procedure. In the
end, we obtain a spanning tree of $G\Box H$ in $T_k\Box T'_{\ell}$,
say $\widehat{T}$. An example is given in Figure $4$. Let us focus
on the graph $T_k\Box T'_{\ell}\setminus E(\widehat{T})$. In order
to form the tree $\widehat{T}$, we have used the tree
$T'_{\ell}(u_1)$, the subtrees
$T'_{a}(u_2),T'_{a}(u_3),\cdots,T'_{a}(u_{\lfloor\frac{n_1-1}{2}\rfloor+1})$
and the forests
$F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+2}),F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+3})$,
$\cdots,F'_{b}(u_{n_1})$ among
$T_{\ell}'(u_2),T_{\ell}'(u_3),\cdots,T_{\ell}'(u_{n_1})$. So there
are $\lceil\frac{n_1-1}{2}\rceil$ copies of $T'_{a}$, namely,
$T'_{a}(u_{\lfloor\frac{n_1-1}{2}\rfloor+2}),
T'_{a}(u_{\lfloor\frac{n_1-1}{2}\rfloor+3}),\cdots,T'_{a}(u_{n_1})$
in $T_k\Box T'_{\ell}\setminus E(\widehat{T})$, and there are also
$\lfloor\frac{n_1-1}{2}\rfloor$ copies of $F'_{b}$, namely,
$F'_{b}(u_{2}),
F'_{b}(u_{3}),\cdots,F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+1})$ in
$T_k\Box T'_{\ell}\setminus E(\widehat{T})$. Note also that, for two
trees $T'_{\ell}(u_i)$ and $T'_{\ell}(u_j)$ such
that $u_iu_j\in E(T_k)$ and $d_{T_k}(u_i,u_1)<d_{T_k}(u_j,u_1)$
(namely, $u_i$ is closer than $u_j$ to the root $u_1$), we have used
$\lceil\frac{n_2}{2}\rceil$ edges belonging to $E_1(u_i,u_j)$ or
$E_2(u_i,u_j)$ between $V(T'_{\ell}(u_i))$ and $V(T'_{\ell}(u_j))$
and hence there are
$n_2-\lceil\frac{n_2}{2}\rceil=\lfloor\frac{n_2}{2}\rfloor$
remaining edges belonging to $\overline{E}_1(u_i,u_j)$ or
$\overline{E}_2(u_i,u_j)$ between $V(T'_{\ell}(u_i))$ and
$V(T'_{\ell}(u_j))$ in $T_k\Box T'_{\ell}\setminus E(\widehat{T})$,
where
$\overline{E}_1(u_i,u_j)=E[V(T'_{\ell}(u_i)),V(T'_{\ell}(u_j))]\setminus
E_1(u_i,u_j)$ and
$\overline{E}_2(u_i,u_j)=E[V(T'_{\ell}(u_i)),V(T'_{\ell}(u_j))]\setminus
E_2(u_i,u_j)$. Later, we will use all the above remaining edges to
form some new spanning trees of $G\Box H$.
\begin{figure}[h,t,b,p]
\begin{center}
\scalebox{0.7}[0.7]{\includegraphics{4.eps}}\\
Figure 4: A spanning tree of $G\Box H$ from $T_i\Box T_j$.
\end{center}
\end{figure}
Let $\mathscr{F}_i \ (1\leq i\leq k-1)$ be the parallel subgraph of
$G\Box H$ corresponding to $T_i \ (1\leq i\leq k-1)$ in $G$. Note
that one of $\mathscr{F}_1,\mathscr{F}_2,\cdots,\mathscr{F}_{k-1}$,
one of $T'_{a}(u_{\lfloor\frac{n_1-1}{2}\rfloor+2}),
T'_{a}(u_{\lfloor\frac{n_1-1}{2}\rfloor+3}),$
$\cdots,T'_{a}(u_{n_1})$ and one of $F'_{b}(u_{2}),
F'_{b}(u_{3}),\cdots,F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+1})$
can form a spanning tree of $G\Box H$. So we can obtain $k-1$
spanning trees of $G\Box H$ since $k-1\leq
\lfloor\frac{n_1}{2}\rfloor-1$.
Let $\mathscr{F}'_j \ (1\leq j\leq \ell-1)$ be the parallel subgraph
of $G\Box H$ corresponding to $T'_j \ (1\leq j\leq \ell-1)$ in $H$,
where $\mathscr{F}'_j=\bigcup_{u_i\in V(G)}T_j'(u_i)$. Note that one
of $\mathscr{F}'_1,\mathscr{F}'_2,\cdots,\mathscr{F}'_{\ell-1}$ and
one edge of $\overline{E}_1(u_i,u_j)$ or $\overline{E}_2(u_i,u_j)$
for each $u_iu_j\in E(T_k)$ can form a
spanning tree of $G\Box H$. Since $\ell-1\leq
\lfloor\frac{n_2}{2}\rfloor -1$ and $|\overline{E}_r(u_i,u_j)|=
\lfloor\frac{n_2}{2}\rfloor \ (r=1,2)$, we can obtain $\ell-1$
spanning trees of $G\Box H$.
In the following, we summarize all the edge-disjoint spanning trees
obtained by us.
$\bullet$ $k-1$ spanning trees of $G \Box H$ obtained from the
parallel subgraphs
$\mathscr{F}_1,\mathscr{F}_2,\cdots,\mathscr{F}_{k-1}$, the subtrees
$T'_{a}(u_{\lfloor\frac{n_1-1}{2}\rfloor+2}),
T'_{a}(u_{\lfloor\frac{n_1-1}{2}\rfloor+3}),
\cdots,T'_{a}(u_{n_1})$, and the forests $F'_{b}(u_{2}),
F'_{b}(u_{3}),\cdots,\\
F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+1})$;
$\bullet$ $\ell-1$ spanning trees of $G\Box H$ obtained from the
parallel subgraphs
$\mathscr{F}_1',\mathscr{F}_2',\cdots,\mathscr{F}_{\ell-1}'$ and the
$\ell-1$ edges in $\overline{E}_1(u_i,u_j)$ or
$\overline{E}_2(u_i,u_j)$ for each $u_iu_j\in E(T_k) \ (1\leq i\neq
j\leq n_1)$;
$\bullet$ one spanning tree $\widehat{T}$ of $G \Box H$ obtained
from the tree $T'_{\ell}(u_{1})$, the subtrees $T'_{a}(u_{2}),
T'_{a}(u_{3}),\\
\cdots,T'_{a}(u_{\lfloor\frac{n_1-1}{2}\rfloor+1})$, and the forests
$F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+2}),
F'_{b}(u_{\lfloor\frac{n_1-1}{2}\rfloor+3}),\cdots, F'_{b}(u_{n_1})$
and the edges of $E_1(u_i,u_j)$ or $E_2(u_i,u_j)$ for each
$u_iu_j\in E(T_k)$.
From the above arguments, we know that there exist $k+\ell-1$
spanning trees of $G \Box H$, that is, $\sigma(G \Box H)\geq
k+\ell-1=\sigma(G)+\sigma(H)-1$.
\end{proof}
To show the sharpness of the above bound, we consider the following
examples.
\noindent \textbf{Example 1}. $(1)$ Let $G$ and $H$ be two paths of
order $n \ (n\geq 2)$. Clearly, $\sigma(G)=\sigma(H)=1$,
$|V(G)|=|V(H)|=n$. On the one hand, from the above theorem, we have
$\sigma(G \Box H)\geq \sigma(G)+\sigma(H)-1=1$. On the other hand,
since $|E(G\Box H)|=|E(H)||V(G)|+|E(G)||V(H)|=2n(n-1)$, it follows
that $\sigma(G \Box H)\leq \lfloor\frac{2n(n-1)}{n^2-1}\rfloor=1$.
So $\sigma(G \Box H)=\sigma(G)+\sigma(H)-1$;
$(2)$ Let $G=K_{2n}$ and $H=C_{m}$. Clearly, $\sigma(G)=n$,
$\sigma(H)=1$. From $(1)$ of Proposition \ref{pro1}, $\sigma(G \Box
H)=n=\sigma(G)+\sigma(H)-1$;
$(3)$ Let $G=K_{2n}$ and $H=K_{2m}$. Clearly, $\sigma(G)=n$,
$\sigma(H)=m$. From $(2)$ of Proposition \ref{pro1}, $\sigma(G \Box
H)=n+m-1=\sigma(G)+\sigma(H)-1$;
$(4)$ Let $n$ be an odd integer, and $G=Q_{n-1}$ and $H=P_{2}$.
Clearly, $\sigma(G)=\lfloor\frac{n-1}{2}\rfloor$, $\sigma(H)=1$.
From $(3)$ of Proposition \ref{pro1},
$\lfloor\frac{n}{2}\rfloor=\sigma(Q_{n})=\sigma(G \Box
H)=\lfloor\frac{n-1}{2}\rfloor+1-1=\sigma(G)+\sigma(H)-1$.
$(5)$ Let $m,n,r$ be three positive integers such that $m,r$ are even, or
$n$ is odd and $r$ is even, and $G=K_{n(m)}$ and $H=K_{r}$. Clearly,
$\sigma(G)=\lfloor\frac{m(n-1)}{2}\rfloor$,
$\sigma(H)=\lfloor\frac{r}{2}\rfloor$. From $(4)$ of Proposition
\ref{pro1}, $\sigma(G \Box
H)=\lfloor\frac{m(n-1)+r-1}{2}\rfloor=\frac{m(n-1)+r-2}{2}$ and
hence $\sigma(G)+\sigma(H)-1=\frac{m(n-1)}{2}+\frac{r}{2}-1
=\frac{m(n-1)+r-2}{2}=\sigma(G \Box H)$.
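The arithmetic behind these examples can be checked mechanically; for
instance (illustration only):
\begin{verbatim}
n = 6                           # item (1) with paths of order n >= 2
E = 2 * n * (n - 1)             # |E(P_n box P_n)|
assert E // (n * n - 1) == 1    # so sigma(P_n box P_n) = 1

n_, m_ = 3, 4                   # item (3): K_{2n} box K_{2m}
assert (2 * n_ + 2 * m_ - 2) // 2 == n_ + m_ - 1
\end{verbatim}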
\section{For Lexicographic product}
Recall that the \emph{Lexicographic product} of two graphs $G$ and
$H$, written as $G\circ H$, is defined as follows: $V(G\circ
H)=V(G)\times V(H)$. Two distinct vertices $(u,v)$ and $(u',v')$ of
$G\circ H$ are adjacent if and only if either $(u,u')\in E(G)$ or
$u=u'$ and $(v,v')\in E(H)$. Note that unlike the Cartesian Product,
the Lexicographic product is a non-commutative product. Thus $G\circ
H$ need not be isomorphic to $H\circ G$. Clearly, $|E(G\circ
H)|=|E(H)||V(G)|+|E(G)||V(H)|^2$.
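As with the Cartesian product, the construction and the edge count can
be checked with \texttt{networkx}, whose \texttt{lexicographic\_product}
follows the same definition; the instance below is the pair used later
in Example 3:
\begin{verbatim}
import networkx as nx

G, H = nx.path_graph(3), nx.complete_graph(4)   # P_3 and K_4
GH = nx.lexicographic_product(G, H)
n1, n2 = G.number_of_nodes(), H.number_of_nodes()
assert GH.number_of_edges() == \
    H.number_of_edges() * n1 + G.number_of_edges() * n2 ** 2  # = 50
\end{verbatim}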
The following observation is helpful for understanding our main
result.
\begin{observation}\label{obs2}
Let $\mathscr{T}=\{T_1,T_2,\cdots,T_k\}$ be the set of spanning
trees of $G$, and $\mathscr{T}'=\{T_1',T_2',\cdots,T_{\ell}'\}$ be
the set of spanning trees of $H$. Then
$(1)$ $\underset{T\in \mathscr{T},T'\in \mathscr{T}'}{\bigcup}
T\circ T'\subseteq G\circ H$;
$(2)$ $E(T_i\circ T')\cap E(T_j\circ T')=\bigcup_{u\in
V(G)}E(T'(u))$ for $T'\in \mathscr{T}'$ and $T_i,T_j\in \mathscr{T}
\ (i\neq j)$;
$(3)$ if $G=\underset{T\in \mathscr{T}}{\bigcup}T$ and
$H=\underset{T'\in \mathscr{T}'}{\bigcup}T'$, then $\underset{T\in
\mathscr{T},T'\in \mathscr{T}'}{\bigcup} T\circ T'=G\circ H$.
\end{observation}
From the definition, the Lexicographic product graph $G\circ H$ is a
graph obtained by replacing each vertex of $G$ by a copy of $H$ and
replacing each edge of $G$ by a complete bipartite graph
$K_{n_2,n_2}$. For an edge $u_iu_j\in E(G) \ (1\leq i,j\leq n_1)$,
the induced subgraph obtained from the edges between the vertex set
$V(H(u_i))=\{(u_i,v_1),(u_i,v_2),\cdots,(u_i,v_{n_2})\}$ and the
vertex set $V(H(u_j))=\{(u_j,v_1),(u_j,v_2),\cdots,(u_j,v_{n_2})\}$
in $G\circ H$ is a complete equipartition bipartite graph of order
$2n_2$, denoted by $K_{H(u_i),H(u_j)}$.
Laskar and Auerbach \cite{LA} obtained the following result.
\begin{proposition}\cite{LA} \label{pro2}
For all even $r\geq 2$, $K_{r,r}$ is the union of $\frac{1}{2}r$ of
its Hamiltonian cycles.
\end{proposition}
From their result, $K_{H(u_i),H(u_j)}$ can be decomposed into
$\frac{1}{2} n_2$ Hamiltonian cycles for $n_2$ even, or $\frac{1}{2}
(n_2-1)$ Hamiltonian cycles and one perfect matching for $n_2$ odd.
Therefore, $K_{H(u_i),H(u_j)}$ can be decomposed into $n_2$ perfect
matchings $M_1,M_2,\cdots,M_{n_2}$ of $K_{H(u_i),H(u_j)}$ such that
$C_i=M_{2i-1} \cup M_{2i} \ (1\leq i\leq
\lfloor\frac{n_2}{2}\rfloor)$ is a Hamiltonian cycle of
$K_{H(u_i),H(u_j)}$. We call each $C_i$ a \emph{perfect cycle}.
Furthermore, $K_{H(u_i),H(u_j)}$ can be decomposed into $x$ perfect
cycles and $n_2-2x$ perfect matchings.
Since $\sigma(G)=k$, there exist $k$ spanning trees in $G$, say
$T_1,T_2,\cdots,T_{k}$. For each $T_i \ (1\leq i\leq k)$, there is a
spanning subgraph $\mathscr{T}_i$ in $G\circ H$
corresponding to the spanning tree $T_i$ in $G$; see Figure $5$. As
we know, $K_{n_2,n_2}$ can be decomposed into $n_2$ perfect
matchings. So each such spanning subgraph $\mathscr{T}_i \ (1\leq
i\leq k)$ can be decomposed into $n_2$ parallel subgraphs
corresponding to the spanning tree $T_i$ in $G$, say
$\mathscr{F}_{i,1},\mathscr{F}_{i,2},\cdots,\mathscr{F}_{i,n_2}$.
Furthermore, we can decompose each $\mathscr{T}_i \ (1\leq i\leq k)$
into $n_2$ parallel subgraphs
$\mathscr{F}_{i,1},\mathscr{F}_{i,2},\cdots,\mathscr{F}_{i,n_2}$
such that $\mathscr{F}_{i,2j-1}\cup \mathscr{F}_{i,2j} \ (1\leq
j\leq \lfloor\frac{n_2}{2}\rfloor)$ contains $n_1-1$ perfect cycles.
\begin{figure}[h,t,b,p]
\begin{center}
\scalebox{0.7}[0.7]{\includegraphics{5.eps}}\\
Figure 5: The spanning subgraph $\mathscr{T}_i$ in $G\circ H$
corresponding to the tree $T_i$ in $G$.
\end{center}
\end{figure}
To make this more precise, we give the following observation.
\begin{observation}\label{obs3}
Let $T$ be a tree of order $n_1$ and let $H$ be a connected graph of
order $n_2$. Then all the edges of $T\circ H$ corresponding to the
edges of $T$ can be decomposed into $n_2$ parallel subgraphs of
$T\circ H$ corresponding to the tree $T$, say
$\mathscr{F}_{1},\mathscr{F}_{2},\cdots,\mathscr{F}_{n_2}$, such that
there exist $2x$ parallel subgraphs
$\mathscr{F}_{1},\mathscr{F}_{2},\cdots,\mathscr{F}_{2x}$ for which
$\mathscr{F}_{2j-1}\cup \mathscr{F}_{2j} \ (1\leq j\leq x\leq
\lfloor\frac{n_2}{2}\rfloor)$ contains exactly $n_1-1$ perfect cycles.
\end{observation}
After the above preparations, we now give our result.
\begin{theorem}\label{th2}
Let $G$ and $H$ be two connected graphs. $\sigma(G)=k$,
$\sigma(H)=\ell$, $|V(G)|=n_1$, and $|V(H)|=n_2$. Then
$(1)$ if $k n_2=\ell n_1$, then $\sigma(G \circ H)\geq k n_2(=\ell
n_1)$;
$(2)$ if $\ell n_1>k n_2$, then $\sigma(G \circ H)\geq
kn_2-\lceil\frac{k n_2-1}{n_1}\rceil+\ell-1$;
$(3)$ if $\ell n_1<k n_2$, then $\sigma(G \circ H)\geq
kn_2-2\lceil\frac{kn_2-1}{n_1+1}\rceil+\ell-1$.
Moreover, the lower bounds are sharp.
\end{theorem}
\begin{proof}
$(1)$ Since $\sigma(G)=k$, there exist $k$ spanning trees in $G$, say
$T_1,T_2,\cdots,T_k$. Then there exist parallel subgraphs
$\mathscr{F}_{i,j} \ (1\leq i\leq k, 1\leq j\leq n_2)$ in $G \circ
H$ corresponding to the spanning tree $T_i \ (1\leq i\leq k)$ in
$G$. Since $\sigma(H)=\ell$, there exist $\ell$ spanning trees of
$H$, say $T'_1,T'_2,\cdots,T'_{\ell}$. Then, for a spanning tree
$T_j' \ (1\leq j\leq \ell)$ in $H$, there is parallel subgraph
$\mathscr{F}_j'=\bigcup_{u_i\in V(G)}T_j'(u_i)$ in $G \circ H$
corresponding to the spanning tree $T'_j$ of $H$, where $T'_j(u_i)$
is the corresponding tree of $T_j'$ in $H(u_i)$. So there are $\ell
n_1$ such trees $T_j'(u_i) \ (1\leq i\leq n_1, 1\leq j\leq \ell)$ in
$G \circ H$. Because one tree of $\{T_j'(u_i)|1\leq i\leq n_1, 1\leq
j\leq \ell\}$ and one of $\{\mathscr{F}_{i,j}|1\leq i\leq k, 1\leq
j\leq n_2\}$ can form a spanning tree of $G \circ H$, we can get $k
n_2=\ell n_1$ spanning trees in $G\circ H$, namely, $\sigma(G \circ
H)\geq k n_2(=\ell n_1)$.
$(2)$ Since $\sigma(G)=k$, there exist $k$ spanning trees in $G$, say
$T_1,T_2,\cdots,T_k$. Then there exist parallel subgraphs
$\mathscr{F}_{i,j} \ (1\leq i\leq k, 1\leq j\leq n_2)$ in $G \circ
H$ corresponding to the spanning tree $T_i \ (1\leq i\leq k)$ in
$G$. We pick up $\mathscr{F}_{k,n_2}$. Note that
$\mathscr{F}_{k,n_2}=\bigcup_{v_i\in V(H)}T_{k}(v_i)$, where
$T_{k}(v_i)$ is the corresponding tree of $T_{k}$ in $G(v_i) \ (1\leq
i\leq n_2)$. Thus we can obtain $n_2$ trees isomorphic to the
spanning tree $T_{k}$ of $G$ from $\mathscr{F}_{k,n_2}$, say
$T_{k}(v_1),T_{k}(v_2),\cdots,T_{k}(v_{n_2})$. Since $\sigma(H)=\ell$,
there exist $\ell$ spanning trees in $H$, say
$T'_1,T'_2,\cdots,T'_{\ell}$. Then there exist parallel subgraphs
$\mathscr{F}_j'=\bigcup_{u_i\in V(G)}T_j'(u_i) \ (1\leq j\leq \ell)$
in $G \circ H$ corresponding to the spanning tree $T_j'$ in $H$,
where $T'_j(u_i)$ is the corresponding tree of $T_j'$ in $H(u_i)$.
Pick up $x$ parallel subgraphs, without loss of generality, let them
be $\mathscr{F}_1',\mathscr{F}_2',\cdots,\mathscr{F}_{x}'$. We can
obtain $xn_1$ trees $T_j'(u_i) \ (1\leq j\leq x, 1\leq i\leq n_1)$
isomorphic to the tree $T_j'$ in $H$. Note that each of
$\{\mathscr{F}_{i,j}|1\leq i\leq k, 1\leq j\leq n_2\}\setminus
\mathscr{F}_{k,n_2}$ and each of the trees $T_j'(u_i) \ (1\leq j\leq
x, 1\leq i\leq n_1)$ can form a spanning tree of $G \circ H$. If
$xn_1\geq kn_2-1$, then we can obtain $kn_2-1$ spanning trees of $G
\circ H$. Consider the remaining $\ell-x$ parallel subgraphs
$\mathscr{F}_{x+1}',\mathscr{F}_{x+2}',\cdots,\mathscr{F}_{\ell}'$.
Note that each of them and each of the trees
$T_{k}(v_1),T_{k}(v_2),\cdots,T_{k}(v_{n_2})$ can form a spanning
tree of $G \circ H$. Since $\ell-x\leq \ell\leq
\lfloor\frac{n_2}{2}\rfloor\leq n_2$, we can obtain $\ell-x$
spanning trees of $G \circ H$ and hence the total number of the
spanning trees is $(kn_2-1)+(\ell-x)$, namely, $\sigma(G \circ
H)\geq kn_2-1+\ell-x$. Since we need $xn_1\geq kn_2-1$, we take
$x=\lceil\frac{k n_2-1}{n_1}\rceil$ and hence $\sigma(G \circ H)\geq
kn_2-1+\ell-\lceil\frac{k n_2-1}{n_1}\rceil$.
$(3)$ Since $\sigma(G)=k$, there exist $k$ spanning trees in $G$, say
$T_1,T_2,\cdots,T_k$. Then there exist parallel subgraphs
$\mathscr{F}_{i,j} \ (1\leq i\leq k, 1\leq j\leq n_2)$ in $G \circ
H$ corresponding to the spanning tree $T_i \ (1\leq i\leq k)$ in
$G$. We pick up $\mathscr{F}_{k,n_2}$. Note that
$\mathscr{F}_{k,n_2}=\bigcup_{v_i\in V(H)}T_{k}(v_i)$, where
$T_{k}(v_i)$ is the corresponding tree of $T_{k}$ in $G(v_i) \ (1\leq
i\leq n_2)$. Thus we can obtain $n_2$ trees isomorphic to the
spanning tree $T_{k}$ of $G$ from $\mathscr{F}_{k,n_2}$, say
$T_{k}(v_1),T_{k}(v_2),\cdots,T_{k}(v_{n_2})$. Since $\sigma(H)=\ell$,
there exist $\ell$ spanning trees in $H$, say
$T'_1,T'_2,\cdots,T'_{\ell}$. Then there exist parallel subgraphs
$\mathscr{F}_j'=\bigcup_{u_i\in V(G)}T_j'(u_i) \ (1\leq j\leq \ell)$
in $G \circ H$ corresponding to the spanning tree $T_j'$ in $H$,
where $T'_j(u_i)$ is the corresponding tree of $T_j'$ in $H(u_i)$.
Note that one of
$\mathscr{F}_1',\mathscr{F}_2',\cdots,\mathscr{F}_{\ell}'$ and one
of $T_{k}(v_1),T_{k}(v_2),\cdots,T_{k}(v_{n_2})$ can form a spanning
tree of $G\circ H$. Since $\ell\leq \lfloor\frac{n_2}{2}\rfloor$, we
can obtain $\ell$ spanning trees of $G\circ H$. Note that we also
have $kn_2-1$ parallel subgraphs $\{\mathscr{F}_{i,j}|1\leq i\leq k,
1\leq j\leq n_2\}\setminus \mathscr{F}_{k,n_2}$. Pick up $2x$
parallel subgraphs from $\{\mathscr{F}_{i,j}|1\leq i\leq k, 1\leq
j\leq n_2\}\setminus \mathscr{F}_{k,n_2}$, say
$\mathscr{F}_{a_1,b_1},\mathscr{F}_{a_2,b_2},\cdots,\mathscr{F}_{a_{2x},b_{2x}}
\ (a_{1},a_2,\cdots,a_{2x}\in \{1,2,\cdots,k\})$, such that
$\mathscr{F}_{a_{2r-1},b_{2r-1}}\cup \mathscr{F}_{a_{2r},b_{2r}} \
(1\leq r\leq x)$ contains $(n_1-1)$ perfect cycles. So we can obtain
$x(n_1-1)$ perfect cycles from the above $2x$ parallel subgraphs.
Now we still have $kn_2-1-2x$ parallel subgraphs. Note that one
parallel subgraph and one perfect cycle can form a spanning subgraph
of $G\circ H$ containing a spanning tree of $G\circ H$. If
$x(n_1-1)\geq kn_2-1-2x$, then we can obtain $kn_2-1-2x$ spanning
trees of $G\circ H$. So the total number of the spanning trees of
$G\circ H$ is $(kn_2-1-2x)+\ell$. Since $x(n_1-1)\geq kn_2-1-2x$, it
follows that $x\geq \frac{kn_2-1}{n_1+1}$. We hope that $x$ is as
small as possible, so we take $x=\lceil\frac{kn_2-1}{n_1+1}\rceil$.
Hence $\sigma(G\circ H)\geq
kn_2-1-2\lceil\frac{kn_2-1}{n_1+1}\rceil+\ell
=kn_2+\ell-1-2\lceil\frac{kn_2-1}{n_1+1}\rceil$.
\end{proof}
To show the sharpness of the above lower bounds, we consider the
following three examples.
\noindent\textbf{Example 2}. Let $G$ and $H$ be two connected graphs
which can be decomposed into exact $k$ and $\ell$ spanning trees of
$G$ and $H$, respectively. From $(1)$ of the above theorem,
$\sigma(G\circ H)\geq k n_2(=\ell n_1)$. Since $|E(G\circ
H)|=|E(H)|n_1+|E(G)|n_2^2=\ell(n_2-1)n_1+k(n_1-1)n_2^2
=kn_2(n_2-1)+k(n_1-1)n_2^2=kn_2(n_1n_2-1)$, we have $\sigma(G \circ
H)\leq \frac{|E(G \circ H)|}{n_1n_2-1}=kn_2$. Then $\sigma(G \circ
H)=kn_2(=\ell n_1)$. So the upper bound of $(1)$ is sharp.
\noindent\textbf{Example 3}. Consider the graphs $G=P_3$ and
$H=K_4$. Clearly, $\sigma(G)=k=1$, $\sigma(H)=\ell=2$, $n_1=3$,
$n_2=4$, $|E(G)|=2$, $|E(H)|=6$ and $6=\ell n_1>k n_2=4$. On one
hand, we have $\ell n_1-k n_2=2$ and $\sigma(G \circ H)\geq
kn_2-\lceil\frac{k
n_2-1}{n_1}\rceil+\ell-1=4-1+2-\lceil\frac{4-1}{3}\rceil=4$. On the
other hand, $|E(G \circ H)|=50$. Then $\sigma(G \circ H)\leq
\frac{|E(G \circ H)|}{n_1n_2-1}=\lfloor\frac{50}{11}\rfloor=4$. So
$\sigma(G \circ H)=4$. So the upper bound of $(2)$ is sharp.
\noindent\textbf{Example 4}. Let $G=K_{4}^-$ be a graph obtained
from $K_4$ by deleting one edge, and $H=P_3$. Clearly,
$\sigma(G)=k=2$, $\sigma (H)=\ell=1$, $n_1=4$, $n_2=3$, $|E(G)|=5$,
$|E(H)|=2$ and $4=\ell n_1<k n_2=6$. On one hand, $\sigma(G \circ
H)\geq kn_2+\ell-1-2\lceil\frac{kn_2-1}{n_1+1}\rceil=4$. On the
other hand, $|E(G \circ H)|=|E(H)|n_1+|E(G)|n_2^2=53$. Then
$\sigma(G \circ H)\leq \frac{|E(G \circ
H)|}{n_1n_2-1}=\lfloor\frac{53}{11}\rfloor=4$. So $\sigma(G \circ
H)=4$ and the lower bound of $(3)$ is sharp.
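The bound arithmetic in Examples 2--4 can be checked mechanically. The following Python sketch (an informal aid, not part of the proofs; the function names are ours) evaluates the lower bounds of $(2)$ and $(3)$ together with the edge-counting upper bound $\lfloor |E(G\circ H)|/(n_1n_2-1)\rfloor$:
\begin{verbatim}
from math import ceil

def lb_case2(k, l, n1, n2):      # lower bound when ell*n1 > k*n2
    return k*n2 - ceil((k*n2 - 1)/n1) + l - 1

def lb_case3(k, l, n1, n2):      # lower bound when ell*n1 < k*n2
    return k*n2 + l - 1 - 2*ceil((k*n2 - 1)/(n1 + 1))

def ub_edges(E, n1, n2):         # floor(|E(G o H)|/(n1*n2 - 1))
    return E // (n1*n2 - 1)

print(lb_case2(1, 2, 3, 4), ub_edges(50, 3, 4))  # Example 3: 4 4
print(lb_case3(2, 1, 4, 3), ub_edges(53, 4, 3))  # Example 4: 4 4
\end{verbatim}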
\section{Introduction}
To provide ubiquitous connectivity among tens of billions of devices, the internet-of-things (IoT) is envisaged as one of the key technology trends for the fifth generation (5G) system~\cite{8030485}. Under the IoT paradigm, low-cost devices can automatically communicate with each other without human intervention. Nonetheless, as IoT technology develops, many research challenges need to be addressed, one of them being the energy issue~\cite{Dawy-2017,8030504}. For devices where battery replacement can be very costly, energy harvesting becomes a desirable approach to maintain the functionality of the devices over a long period. It is worth noting that energy harvesting is well suited to most IoT devices, because these devices only consume a small amount of energy~\cite{Dawy-2017,Jayakody-2017}.
One of the promising energy harvesting techniques is the backscatter communication (BackCom)~\cite{Lium-2017}. A BackCom system generally has two main components, a reader and a backscatter node (BN). The BN does not have any active radio frequency (RF) component; instead, it reflects and modulates the incident single-tone sinusoidal continuous wave (CW) from the reader for the uplink communication. The reflection is achieved by intentionally mismatching the antenna's input impedance, and the signal encoding is achieved by varying the antenna impedance~\cite{Boyer-2014}. The BN can also harvest energy from the CW signal. These energy-saving features make the BackCom system a promising candidate for IoT.
The backscatter technique is commonly used in radio frequency identification (RFID) systems, which usually accommodate short-range communication (i.e., several meters)~\cite{Vannucci-2008,Boyer-2014}. Recently, the BackCom system has been proposed for providing longer range communications, e.g., by installing battery units and supporting low-bit-rate communications~\cite{Vannucci-2008,Bletsas-2009}, or by exploiting bistatic architectures~\cite{6742719}. Such extended-range BackCom systems have been considered for point-to-point communication~\cite{6836141,Liu-2017,Vincent-2013,Yang-2017,Han-2017,Mudasar-2017} and one-to-many communication~\cite{Vannucci-2008,Bletsas-2009,Yang-2015,Psomas-2017,Zhu-2017}. For the \textit{point-to-point communication}, a physical layer security mechanism was developed in~\cite{6836141}, where the reader interferes with the eavesdropper by injecting a randomly generated noise signal which is added to the CW sent to the tag. In~\cite{Liu-2017}, for a BackCom system consisting of multiple reader-tag pairs, a multiple access scheme, named time-hopping full-duplex BackCom, was proposed to avoid interference and enable full-duplex communication. Other works have considered BackCom systems with BNs powered by the ambient RF signal~\cite{Vincent-2013,Yang-2017} or by power beacons~\cite{Han-2017,Mudasar-2017}. For the \textit{one-to-many communication}, a set of signal and data extraction techniques for the backscatter sensors' information was proposed in~\cite{Vannucci-2008}, where the sensors operate on different subcarrier frequencies. In~\cite{Bletsas-2009}, the authors used beamforming and frequency-shift keying modulation to minimize collisions in a backscatter sensor network and studied the sensor collision (interference) performance. In~\cite{Yang-2015}, an energy beamforming scheme was proposed based on the backscatter-channel state information, and the optimal resource allocation schemes were also obtained to maximize the total utility of harvested energy. In~\cite{Psomas-2017}, the decoding probability for a certain sensor was derived using stochastic geometry, where three collision resolution techniques (i.e., directional antennas, ultra-narrow band transmissions and successive interference cancellation (SIC)) were incorporated. For an ALOHA-type random access, by applying machine learning to implement intelligent sensing, the work in~\cite{Zhu-2017} presented a framework of backscatter sensing with random encoding at the BNs and statistical inference at the reader.
In this work, we focus on the uplink communication in a one-to-many BackCom system. To handle the multiple access, non-orthogonal multiple access (NOMA) is employed. By allowing multiple users to be served in the same resource block, NOMA can greatly improve the spectrum efficiency, and it is envisaged as an essential technology for 5G systems~\cite{Ding-2017}. In general, the NOMA technique can be divided into power-domain NOMA and code-domain NOMA. Code-domain NOMA utilizes user-specific spreading sequences for concurrently using the same resource, while power-domain NOMA exploits the difference in the channel gain among users for multiplexing. Power-domain NOMA has the advantages of low latency and high spectral efficiency~\cite{Shin-2017} and is the variant considered in our work. For the conventional communication system, the implementation of power-domain NOMA in the uplink communication has been well investigated in the literature, e.g.,~\cite{Imari-2014,Ding-2014,Diamantoulakis-2016,Mohammad-2017}. Very recently, the authors in~\cite{Lyu-2017} investigated NOMA in the context of a power station-powered BackCom system, where the time spent on energy harvesting is different for each BN in order to implement NOMA, and the optimal time allocation policy was obtained.
\textit{Paper contributions:} In this paper, we consider a single BackCom system, where one reader serves multiple randomly deployed BNs. We adopt a hybrid of power-domain NOMA and time division multiple access (TDMA) to enhance the BackCom system performance. Specifically, we multiplex the BNs in different spatial regions (namely, the region division approach) or with different backscattered power levels (namely, the power division approach) to implement NOMA. Different from conventional wireless devices that can actively adjust the transmit power, we set the reflection coefficients for the multiplexed BNs to be different in order to better exploit power-domain NOMA. We make the following major contributions in this paper:
\begin{itemize}
\item We propose a NOMA-enhanced BackCom system, where the reflection coefficients for the multiplexed BNs from different groups are set to different values to utilize the power-domain NOMA. Based on the considered system model, we develop criteria for choosing the reflection coefficients for the different groups. To the best of our knowledge, such guidelines have not yet been proposed in the literature.
\item We adopt a metric, named the average number of successfully decoded bits (i.e., the average number of bits that can be successfully decoded by the reader in one time slot), to evaluate the system performance. For the most practical case of two-node pairing, we derive exact analytical closed-form results for the fading-free scenario and semi-closed-form results for the fading scenario (cf. Table~\ref{tb:1}). For analytical tractability, under the fading-free and general multiple-node multiplexing case, we analyze a metric, the average number of successful BNs given $N$ multiplexing BNs, which exhibits a similar performance trend to the average number of successfully decoded bits. The derived expressions allow us to verify the proposed selection criteria and investigate the impact of the system parameters.
\item Our numerical results show that NOMA generally achieves a much larger performance gain in the BackCom system than in the conventional system. This highlights the importance of incorporating NOMA into the BackCom system.
\end{itemize}
The remainder of the paper is organized as follows. Section~\ref{sec:system} presents the detailed system model, including the developed NOMA scheme. The proposed reflection coefficient selection criterion is presented in Section~\ref{sec:designtuition}. The definition and the analysis of the considered performance metrics for the fading-free and fading scenarios are given in Sections~\ref{sec:ana1} and~\ref{sec:fading}, respectively. Section~\ref{sec:result} presents the numerical and simulation results to study the NOMA-enhanced BackCom system. Finally, conclusions are presented in Section~\ref{sec:summary}.
\section{System Model}\label{sec:system}
\subsection{Spatial Model}\label{sec:spatialmodel}
We consider a BackCom system consisting of a single reader and $M$ BNs (sensors), as illustrated in Fig.~\ref{fig_systemmodel1}. The coverage zone $\mathcal{S}$ for the reader is assumed to be an annular region specified by the inner and outer radii $R_{1}$ and $R$, where the reader is located at the origin~\cite{Bletsas-2009,Psomas-2017}. The $M$ BNs are independently and uniformly distributed inside $\mathcal{S}$, i.e., the locations of the BNs are modelled as a binomial point process. Consequently, the distribution of the random distance between a BN and the reader, $r$, is $f_r(r)=\frac{2r}{R^2-R_1^2}$~\cite{Zubair-2013}.
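For intuition, the distance law $f_r(r)$ can be sampled by the inverse-CDF transformation. The snippet below is a minimal Python sketch (the radii are illustrative values of our own choosing) that also cross-checks the empirical mean against the analytical mean $\frac{2(R^3-R_1^3)}{3(R^2-R_1^2)}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_bn_distances(M, R1, R):
    # F(r) = (r^2 - R1^2)/(R^2 - R1^2)  =>  r = sqrt(R1^2 + u*(R^2 - R1^2))
    u = rng.uniform(size=M)
    return np.sqrt(R1**2 + u*(R**2 - R1**2))

r = sample_bn_distances(10**6, R1=1.0, R=10.0)
print(r.mean(), 2*(10.0**3 - 1.0**3)/(3*(10.0**2 - 1.0**2)))  # ~6.73 both
\end{verbatim}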
\ifCLASSOPTIONpeerreview
\begin{figure}
\centering
\subfigure[Spatial model (${\color{red}\blacktriangle}=$ reader, ${\color{blue}\bullet}=$ BNs).]{\label{fig_systemmodel1}\includegraphics[width=0.3 \textwidth]{systemmodel}}
\mbox{\hspace{3cm}}
\subfigure[Time slot structure (${\color{green}\blacksquare}=$ mini-slot on NOMA, ${\color{black}\square}=$ mini-slot on single access).]{\label{fig_systemmodel3}\includegraphics[width=0.3\textwidth]{systemmodel3}}
\caption{Illustration of the system model for two-node pairing case.}
\end{figure}
\else
\begin{figure}
\centering
\subfigure[Spatial model (${\color{red}\blacktriangle}=$ reader, ${\color{blue}\bullet}=$ BNs).]{\label{fig_systemmodel1}\includegraphics[width=0.25 \textwidth]{systemmodel}}\\
\vspace{+0.1in}
\subfigure[Time slot structure (${\color{green}\blacksquare}=$ mini-slot on NOMA, ${\color{black}\square}=$ mini-slot on single access).]{\label{fig_systemmodel3}\includegraphics[width=0.3\textwidth]{systemmodel3}}
\caption{Illustration of the system model for two-node pairing case.}
\end{figure}
\fi
\subsection{Channel Model}\label{sec:channelmodel}
In this work, we first consider the fading-free channel model, i.e., we use the path-loss to model the wireless communication channel. Thus, for a receiver, the received power from a transmitter is given by $p_t r^{-\alpha}$, where $p_t$ is the transmitter's transmit power, $\alpha$ is the path-loss exponent, and $r$ is the distance between the transmitter and receiver. This fading-free channel model is a reasonable assumption for a BackCom system with strong line-of-sight (LOS) links~\cite{Psomas-2017}. This can be justified as follows. The coverage zone for a reader is generally relatively small, especially compared to a cell's coverage region, and the BNs are close to the reader; hence, the communication link is very likely to experience strong LOS fading. In Section~\ref{sec:fading}, we will extend the system model to include fading. Under the fading case, we assume that the fading on the communication link is independently and identically distributed (i.i.d.) Nakagami-$m$ fading. We will also show that the design intuition gained from the fading-free scenario provides a good guideline for the LOS fading scenario. The additive white Gaussian noise (AWGN) with noise power $\mathcal{N}$ is also included in the system.
\subsection{Backscatter Communication Model}
In general, the BNs do not actively transmit any radio signal. Instead, the communication from a BN to the reader is achieved by reflecting the incident CW signal from the reader. In this work, the reader is assumed to transmit a CW signal for most of the time, while each BN has two states, namely the backscattering state and the waiting state. Fig.~\ref{fig_systemmodel2} depicts the structure of the considered BN; it is mainly composed of the transmitter, receiver, energy harvester, information decoder, micro-controller and variable impedance.
In the \emph{backscattering state}, the BN's transmitter is active and is backscattering the modulated signal via a variable impedance. We consider binary phase shift keying modulation in this work. To modulate the signal, the in-built micro-controller switches between the two impedance states. These two impedances are assumed to generate two reflection coefficients with the same magnitude (denoted as $\xi$) but with different phase shifts (i.e., zero and 180 degrees). Combined with our channel model, given that the transmit power of the reader is $P_T$, the backscattered power at a BN is $\xi P_T r^{-\alpha}$.
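Hence, after the return trip to the reader, the backscattered signal experiences the path-loss twice, which is why the received-power expressions later in the paper carry the exponent $2\alpha$. A one-line sketch of this bookkeeping (our own illustration):
\begin{verbatim}
def power_at_reader(P_T, xi, r, alpha):
    # forward link: P_T * r**(-alpha) reaches the BN; a fraction xi is
    # reflected; the return link contributes another factor r**(-alpha)
    return P_T * xi * r**(-2*alpha)
\end{verbatim}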
In the \emph{waiting state}, the BN stops backscattering and only harvests the energy from the CW signal. The harvested energy is used to power the circuit and sensing functions. We assume that each BN has a relatively large energy storage. The storage battery allows the accumulation of energy with random arrivals and the stored energy can be used to maintain the normal operation of BNs in the long run.
\ifCLASSOPTIONpeerreview
\begin{figure}
\centering
\includegraphics[width=0.9 \textwidth]{systemmodel2}
\caption{Illustration of the BackCom with NOMA for $N=2$ scenario. $P_r^{(1)}$ and $P_r^{(2)}$ denote the stronger signal power and the weaker signal power at the reader, respectively. $s^{(\cdot)}$ denotes the corresponding normalized information signal.}
\label{fig_systemmodel2}
\end{figure}
\else
\begin{figure*}[!t]
\centering
\includegraphics[width=0.8 \textwidth]{systemmodel2}
\caption{Illustration of the BackCom with NOMA for $N=2$ scenario. $P_r^{(1)}$ and $P_r^{(2)}$ denote the stronger signal power and the weaker signal power at the reader, respectively. $s^{(\cdot)}$ denotes the corresponding normalized information signal.}
\label{fig_systemmodel2}
\end{figure*}
\fi
\subsection{Proposed NOMA Scheme}\label{sec:NOMAmodel}
In this section, we describe the proposed NOMA scheme for the BackCom system, which is a contribution of this work. We focus on the uplink communication and employ a hybrid of power-domain NOMA and TDMA. Each time slot lasts $\mathcal{L}$ seconds and the data rate for each BN is $\mathcal{R}$ bits/secs. Each time slot $\mathcal{L}$ is further divided into multiple mini-slots depending on the multiplexing situation, which will be explained later in Section~\ref{sec:NOMA:frame}.
\subsubsection{Region division for multiplexing}It is widely known that the fundamental principle of implementing power-domain NOMA is to multiplex (group) users with the relatively large channel gain difference on the same spectrum resource~\cite{Shin-2017}. Hence, we utilize the BNs residing in separate regions to implement power-domain NOMA, which is named as the region division approach. Specifically, the reader ``virtually'' divides the coverage zone into $N$ subregions\footnote{In this work, we mainly focus on the $N=2$ case (i.e., NOMA with two-node pairing case), which is widely considered in the literature. The analysis for the general $N$ case will be presented in Section~\ref{sec:generalcase}.} and the $i$-th subregion is an annular region specified by the radii $R_{i}$ and $R_{i+1}$, where $i\in [1,N]$, $R_{i}<R_{i+1}$ and $R_{N+1}=R$. The reader randomly picks one BN from each subregion to implement NOMA. Since the BNs are randomly deployed in $\mathcal{S}$, it is possible that the number of BNs in each subregion is not equal. For this unequal number of BNs scenario, the reader will first multiplex $N$ BNs. If the reader cannot further multiplex $N$ BNs, it will then multiplex $N-1$ BNs, $N-2$ BNs and so on and so forth.
\subsubsection{Time slot structure}\label{sec:NOMA:frame} Each time slot $\mathcal{L}$ is divided into multiple mini-slots. For the mini-slot used to multiplex $n$ BNs, the time allocated to this mini-slot is assumed to be $n\frac{\mathcal{L}}{M}$. Let us consider $N=2$ as an example and assume that there are $t$ BNs residing in the first subregion (namely, the near subregion) and $M-t$ BNs in the second subregion (namely, the far subregion), where $t\leq M/2$ is considered. In the first $t$ mini-slots, where each mini-slot lasts $\frac{2\mathcal{L}}{M}$ seconds, the reader will randomly select one BN from the near subregion and another BN from the far subregion to implement NOMA for each mini-slot. As for the remaining $M-2t$ BNs in the far subregion, since there are no available BNs in the near subregion to pair them with, they can only communicate with the reader in a TDMA fashion in the following $M-2t$ mini-slots, i.e., each BN is allocated $\frac{\mathcal{L}}{M}$ seconds to backscatter the signal alone. Note that the BNs which are not selected by the reader to backscatter the signal in a certain mini-slot are in the waiting state. The time slot structure for the two-node pairing case is illustrated in Fig.~\ref{fig_systemmodel3}, and a short scheduling sketch is given below.
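The following Python sketch (illustrative only; the BN identifiers are hypothetical) captures this pairing logic:
\begin{verbatim}
def schedule_minislots(near_ids, far_ids):
    # one near BN + one far BN per NOMA mini-slot (2L/M seconds each);
    # unpaired BNs get single-access mini-slots (L/M seconds each)
    t = min(len(near_ids), len(far_ids))
    noma_pairs = list(zip(near_ids[:t], far_ids[:t]))
    single = near_ids[t:] + far_ids[t:]
    return noma_pairs, single

pairs, singles = schedule_minislots(['n1', 'n2'], ['f1', 'f2', 'f3'])
print(pairs, singles)   # [('n1', 'f1'), ('n2', 'f2')] ['f3']
\end{verbatim}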
\subsubsection{Reflection coefficient differentiation and its implementation} To make the difference in channel gains between the multiplexed nodes more significant, we let the reflection coefficient differ across subregions. Let $\xi_i$ denote the reflection coefficient for the BN in the $i$-th subregion; we set $1\geq \xi_{1}\geq\cdots\geq\xi_{i-1}\geq\xi_{i}\geq\cdots\geq \xi_{N}>0$. The reflection coefficient $\xi$ is of importance for the BackCom system with NOMA. In Section~\ref{sec:designtuition}, we will provide design guidelines on how to choose the reflection coefficient for each subregion to improve the system performance.
In order to know which BNs belong to which subregions, the following approach is adopted in this work. We assume that each BN has a unique ID, which is known by the reader~\cite{Bletsas-2009}. The reader broadcasts the training signal to all BNs and each node then backscatters this signal in its corresponding assigned slot~\cite{Yang-2015}. By receiving the backscattered signal, the reader can categorize the BNs into different subregions based on the different power levels. At the same time, each BN can decide which subregion it belongs to according to the received training signal power from the reader, and then switches its impedance pair to the corresponding subregion's impedance pair for the NOMA implementation. Note that, we assume that each BN has $N$ impedance pairs corresponding to the $N$ reflection coefficients for each subregion, from which the micro-controller can select\footnote{Note that $N$ is a pre-defined system parameter. Once $N$ is chosen, the hardware (e.g., the impedance pairs) is fixed.}. Additionally, during the training period, all BNs switch to the first impedance pair (e.g., the reflection coefficient is $\xi_1$).
\subsubsection{SIC mechanism} NOMA is carried out via the SIC technique at the reader. We assume that the decoding order is always from the strongest signal to the weakest signal\footnote{Under the fading-free scenario, the decoding order is from the nearest BN to the farthest BN. Under the fading case, the signal here implies the instantaneous backscattered signal received at the reader and the strongest signal may not come from the nearest BN.} and error propagation is also included. For example, the reader firstly detects and decodes the strongest signal, and treats the weaker signal as the interference. If the signal-to-interference-plus-noise ratio (SINR) at the reader is greater than a threshold $\gamma$, the strongest signal can be successfully decoded and extracted from the received signal. The reader then decodes the second strongest signal and so on and so forth. If the SINR is below the threshold, the strongest signal cannot be decoded and the reader will not continue to decode the weaker signals, which implies that the remaining weaker signals fail to be decoded as well~\cite{6954404}. Fig.~\ref{fig_systemmodel2} illustrates the basic structure for the SIC technique.
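The SIC procedure with error propagation can be summarized by the following sketch (a minimal model of the decoding rule, not an implementation of the reader hardware):
\begin{verbatim}
import numpy as np

def sic_decode(received_powers, noise, gamma):
    # decode from the strongest to the weakest; stop at the first failure
    p = np.sort(np.asarray(received_powers, dtype=float))[::-1]
    decoded = 0
    for i in range(len(p)):
        interference = p[i+1:].sum()   # weaker, not-yet-cancelled signals
        if p[i] / (interference + noise) >= gamma:
            decoded += 1               # success: cancel and continue
        else:
            break                      # failure: abandon the weaker signals
    return decoded

print(sic_decode([8.0, 1.0], noise=0.1, gamma=3.0))   # 2
\end{verbatim}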
\section{Design Guideline for the Reflection Coefficients}\label{sec:designtuition}
For the conventional communication system implementing power-domain NOMA, the multiplexed devices transmit with different powers in order to gain the benefits of NOMA. Unfortunately, actively adjusting the transmit power is impossible for BNs, since they are passive devices. Instead, the reflection coefficient is an adjustable system parameter for BNs to enhance the system performance. It is intuitive to set the reflection coefficients for the near subregions as large as possible and the reflection coefficients for the far subregions as small as possible. The question is then how large (or small) the reflection coefficients should be for the near (or far) subregions. In this section, we provide a simple design guideline for choosing the reflection coefficients for the subregions, which is presented in the following proposition.
\begin{proposition}
Based on our system model considered in Section~\ref{sec:system}, to achieve the best system performance, the reflection coefficient for each subregion should satisfy the following conditions
\ifCLASSOPTIONpeerreview
\begin{align}
&\xi_N\geq \gamma\frac{\mathcal{N}R^{2\alpha}}{P_T},\label{eq:designguide1}\\
&\xi_i\geq \max\left\{\xi_{i+1},\gamma\left (\sum_{j=i+1}^{N}\xi_j\frac{R_{i+1}^{2\alpha}}{R_j^{2\alpha}}+\frac{\mathcal{N}R_{i+1}^{2\alpha}}{P_T} \right)\right\}, \quad {i\leq N-1.}\label{eq:designguide}
\end{align}
\else
\begin{align}
&\xi_N\geq \gamma\frac{\mathcal{N}R^{2\alpha}}{P_T},\label{eq:designguide1}\\
&\xi_i\geq \max\left\{\xi_{i+1},\gamma\left (\sum_{j=i+1}^{N}\xi_j\frac{R_{i+1}^{2\alpha}}{R_j^{2\alpha}}+\frac{\mathcal{N}R_{i+1}^{2\alpha}}{P_T} \right)\right\}, \nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad {i\leq N-1.}\label{eq:designguide}
\end{align}
\fi
For the simplest case where $N=2$, we have $\xi_2\geq \gamma\frac{\mathcal{N}R^{2\alpha}}{P_T}$ and $\xi_1\geq \max\left\{\xi_{2},\gamma\left (\xi_2+\frac{\mathcal{N}R_{2}^{2\alpha}}{P_T} \right)\right\}$.
\end{proposition}
\begin{proof}
We consider the case of $N$ multiplexing nodes; the design guideline obtained for this scenario also holds for the case of $n$ multiplexing nodes, where $n<N$, since the decoded signal receives the most severe interference in the $N$ multiplexing nodes case. The best performance that can be achieved by the BackCom system is that the signals from all the multiplexed BNs are successfully decoded. In other words, the SINR for the $i$-th strongest signal, denoted as $\textsf{SINR}_i$, is greater than the channel threshold $\gamma$, where $i\in [1,N-1]$, and the signal-to-noise ratio (SNR) for the weakest signal, denoted as $\textsf{SNR}_N$, is also higher than $\gamma$.
Let us start from the strongest signal and its SINR is given by $\textsf{SINR}_1=\frac{P_T\xi_1r_1^{-2\alpha}}{\sum_{j=2}^{N}P_T\xi_jr_j^{-2\alpha}+\mathcal{N}}$, where $r_j$ represents the random distance between the reader and the BN from the $j$-th subregion and its conditional probability density function (PDF) is $f_{r_j}(r_j)=\frac{2r_j}{R_{j+1}^2-R_{j}^2}$ with $r_j\in[R_{j},R_{j+1}]$. In order to ensure that the strongest signal will always be successfully decoded, the worst case of $\textsf{SINR}_1$ should always be greater than $\gamma$. The worst case for $\textsf{SINR}_1$ is that $r_1=R_2$ and $r_j=R_j$; hence, we can write the condition that the strongest signal is always successfully decoded as $\frac{P_T\xi_1R_2^{-2\alpha}}{\sum_{j=2}^{N}P_T\xi_jR_j^{-2\alpha}+\mathcal{N}}\geq \gamma$. After rearranging the inequality, we obtain $\xi_1\geq \gamma\left( \sum_{j=2}^{N}\xi_j\frac{R_2^{2\alpha}}{R_j^{2\alpha}}+\frac{\mathcal{N}R_2^{2\alpha}}{P_T}\right)$. Adopting the same procedure, we can find the value of $\xi_i$ for the other signals.
\end{proof}
\begin{remark}
Under the proposed selection criterion, every BN can be successfully decoded for the fading-free scenario. Clearly, when more BNs can be multiplexed (i.e., $N$ is relatively large), the network performance can be greatly improved. From~\eqref{eq:designguide}, we can see that $\xi_i$ involves a summation of the $\xi_j$ with $j>i$, and also depends on $\gamma$. When $\gamma$ is large, the resulting $\xi_i$ can be greater than one, which is impractical. In order to meet the condition in~\eqref{eq:designguide}, we have to set $\xi_N$ as small as possible. Correspondingly, when $N$ is large, the transmit power of the reader $P_T$ should be increased in order to satisfy the condition in~\eqref{eq:designguide1}. Hence, there is a tradeoff between the BackCom system performance and the reader's transmit power together with the SIC implementation complexity.
\end{remark}
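The conditions \eqref{eq:designguide1} and \eqref{eq:designguide} can be evaluated recursively from the outermost subregion inwards. The sketch below computes the smallest coefficients meeting Proposition 1 with equality (the numerical parameter values are illustrative assumptions only):
\begin{verbatim}
def min_reflection_coeffs(R, gamma, noise, P_T, alpha):
    # R[i] = R_i for i = 1..N+1 (R[0] is unused), with R[N+1] = R
    N = len(R) - 2
    xi = [0.0]*(N + 2)
    xi[N] = gamma*noise*R[N+1]**(2*alpha)/P_T      # condition on xi_N
    for i in range(N - 1, 0, -1):                  # condition on xi_i
        s = sum(xi[j]*(R[i+1]/R[j])**(2*alpha) for j in range(i+1, N+1))
        xi[i] = max(xi[i+1], gamma*(s + noise*R[i+1]**(2*alpha)/P_T))
    return xi[1:N+1]

print(min_reflection_coeffs([None, 1.0, 5.0, 10.0],   # N = 2
                            gamma=3.0, noise=1e-8, P_T=1.0, alpha=2.0))
\end{verbatim}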
\section{Analysis of the Proposed BackCom System with NOMA}\label{sec:ana1}
In this section, we present the analysis of the performance metrics for our considered BackCom system with NOMA, under the fading-free scenario.
\subsection{Performance Metrics}
The \emph{average number of successfully decoded bits}, $\bar{\mathcal{C}}_{\textrm{suc}}$, is the main metric considered in this work. It is defined as the average number of bits that can be successfully decoded at the reader in one time slot. For the system where the coverage region is divided into $N$ subregions, this metric depends on: (i) the average number of successful BNs given that $n$ (where $n\in[1,N]$) BNs are multiplexed, denoted as $\bar{\mathcal{M}}_n$; and (ii) all possible multiplexing scenarios (i.e., the number of BNs in each separate subregion).
For $N=2$, we investigate the average number of successfully decoded bits, $\bar{\mathcal{C}}_{\textrm{suc}}$. When $N\geq 3$, there is no general expression for $\bar{\mathcal{C}}_{\textrm{suc}}$, because the second condition corresponds to the classical balls-into-bins problem, for which a general form enumerating all possible allocation cases is currently not available~\cite{Raab-1998}. In this work, for the $N\geq 3$ scenario, we consider the metric $\bar{\mathcal{M}}_N$, i.e., the average number of successful BNs given $N$ multiplexing nodes. As will be shown in Section~\ref{sec:result}, $\bar{\mathcal{M}}_N$ exhibits similar trends to $\bar{\mathcal{C}}_{\textrm{suc}}$ for the general $N$ case.
\subsection{Two-Node Pairing Case ($N=2$)}\label{sec:nofading}
We first consider the two-node pairing case, which is widely adopted and considered in the NOMA literature due to its feasibility in practical implementation. The definition and the essential expression of $\bar{\mathcal{C}}_{\textrm{suc}}$ are given below, where the factors used to calculate this metric for different scenarios are summarized in Table~\ref{tb:1} (cf. Section~\ref{sec:fading:summary}).
\begin{define}
Based on our NOMA-enhanced BackCom system in Section~\ref{sec:NOMAmodel}, the average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$ is
\ifCLASSOPTIONpeerreview
\begin{align}\label{eq:general:C}
\bar{\mathcal{C}}_{\textrm{suc}}=&\sum_{t=0}^{M/2}\binom{M}{t}p_{\textrm{near}}^t(1-p_{\textrm{near}})^{M-t}\left(t\frac{2\mathcal{L}\mathcal{R}}{M}\bar{\mathcal{M}}_2+(M-2t)\frac{\mathcal{L}\mathcal{R}}{M}\bar{\mathcal{M}}_{1\textrm{far}} \right)\nonumber\\
&+\sum_{t=M/2+1}^{M}\binom{M}{t}p_{\textrm{near}}^t(1-p_{\textrm{near}})^{M-t}\left((M-t)\frac{2\mathcal{L}\mathcal{R}}{M}\bar{\mathcal{M}}_2+(2t-M)\frac{\mathcal{L}\mathcal{R}}{M}\bar{\mathcal{M}}_{1\textrm{near}}\right),
\end{align}
\else
\begin{align}\label{eq:general:C}
\bar{\mathcal{C}}_{\textrm{suc}}=&\sum_{t=0}^{M/2}\binom{M}{t}p_{\textrm{near}}^t(1-p_{\textrm{near}})^{M-t}\nonumber\\
&\times\left(t\frac{2\mathcal{L}\mathcal{R}}{M}\bar{\mathcal{M}}_2+(M-2t)\frac{\mathcal{L}\mathcal{R}}{M}\bar{\mathcal{M}}_{1\textrm{far}} \right)\nonumber\\
&+\sum_{t=M/2+1}^{M}\binom{M}{t}p_{\textrm{near}}^t(1-p_{\textrm{near}})^{M-t}\nonumber\\
&\times\left((M-t)\frac{2\mathcal{L}\mathcal{R}}{M}\bar{\mathcal{M}}_2+(2t-M)\frac{\mathcal{L}\mathcal{R}}{M}\bar{\mathcal{M}}_{1\textrm{near}}\right),
\end{align}
\fi
\noindent where $p_{\textrm{near}}$ is the average probability that a BN resides in the near subregion (i.e., the first subregion), which equals $p_{\textrm{near}}=\frac{R_2^2-R_1^2}{R^2-R_1^2}$. $\bar{\mathcal{M}}_{1\textrm{near}}$ ($\bar{\mathcal{M}}_{1\textrm{far}}$) denotes the average number of successful BNs coming from the near (far) subregion, given that it accesses the reader alone. $\bar{\mathcal{M}}_2$ is the average number of successful BNs when two BNs are paired, and it can be expressed as $\bar{\mathcal{M}}_2=p_1+2p_2$, where $p_2$ is the average probability that the signals from both paired BNs are successfully decoded and $p_1$ is the probability that only the stronger signal is successfully decoded.
\end{define}
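Given $\bar{\mathcal{M}}_2$, $\bar{\mathcal{M}}_{1\textrm{near}}$ and $\bar{\mathcal{M}}_{1\textrm{far}}$, the expectation in \eqref{eq:general:C} is a direct binomial average. The sketch below evaluates it numerically (an informal cross-check of the definition, assuming $M$ even and unit $\mathcal{L}$ and $\mathcal{R}$):
\begin{verbatim}
from math import comb

def avg_decoded_bits(M, p_near, M2, M1near, M1far, L=1.0, Rb=1.0):
    total = 0.0
    for t in range(M + 1):        # t = number of BNs in the near subregion
        w = comb(M, t) * p_near**t * (1 - p_near)**(M - t)
        if t <= M // 2:           # leftover BNs are in the far subregion
            total += w*(t*2*L*Rb/M*M2 + (M - 2*t)*L*Rb/M*M1far)
        else:                     # leftover BNs are in the near subregion
            total += w*((M - t)*2*L*Rb/M*M2 + (2*t - M)*L*Rb/M*M1near)
    return total

# under Proposition 1 we have M2 = 2 and M1near = M1far = 1:
print(avg_decoded_bits(10, 0.3, 2.0, 1.0, 1.0))  # ~1.58, close to 1+2*0.3
\end{verbatim}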
The key elements that determine $\bar{\mathcal{C}}_{\textrm{suc}}$ are presented in the following lemmas.
\ifCLASSOPTIONpeerreview
\begin{lemma}
Based on our system model in Section~\ref{sec:system}, given that two BNs are paired, the probability that the signals from the two BNs are successfully decoded and the probability that the signal from only one BN is successfully decoded are given by
\begin{align}\label{eq:nofading:p2}
p_2=\left\{ \begin{array}{ll}
0, \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\,\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad{\gamma\geq \frac{P_T\xi_2R_2^{-2\alpha}}{\mathcal{N}};}\\
\frac{\left(\textrm{max}\left\{\frac{1}{R^{2\alpha}},\frac{\mathcal{N}\gamma}{\xi_2P_T} \right\}
\right)^{-\frac{1}{\alpha}}-R_2^2}{R^2-R_2^2}, \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad{\left(\gamma< \frac{P_T\xi_2R_2^{-2\alpha}}{\mathcal{N}}\right)\&\&\left(\gamma\leq\frac{R_2^{-2\alpha}}{\kappa R_2^{-2\alpha}+\frac{\mathcal{N}}{\xi_1P_T}} \right);}\\
0,
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\quad\quad{\left(\gamma< \frac{P_T\xi_2R_2^{-2\alpha}}{\mathcal{N}}\right)\&\&\left(R_1^{-2\alpha}\leq\frac{\mathcal{N}\gamma}{\xi_1P_T}+\textrm{max}\left\{ \frac{1}{R^{2\alpha}},\frac{\mathcal{N}\gamma}{\xi_2P_T} \right\}\gamma\kappa \right);}\\
\Omega\left(\textrm{max}\left\{\frac{1}{R^{2\alpha}},\frac{R_2^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{\xi_2P_T},\frac{\mathcal{N}\gamma}{\xi_2P_T}\right\}
, \textrm{min}\left\{\frac{1}{R_2^{2\alpha}},\frac{R_1^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{\xi_2P_T}\right\} ,\textrm{max}\left\{ \frac{1}{R^{2\alpha}},\frac{\mathcal{N}\gamma}{\xi_2P_T} \right\}
\right), \quad{\textrm{otherwise};}\\
\end{array} \right.
\end{align}
\begin{align}\label{eq:nofading:p1}
p_1=\left\{ \begin{array}{ll}
0, \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\,\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad{\gamma\leq \frac{P_T\xi_2R^{-2\alpha}}{\mathcal{N}};}\\
\frac{R^2-\left(\textrm{min}\left\{ \frac{1}{R_2^{2\alpha}},\frac{\mathcal{N}\gamma}{\xi_2P_T} \right\}\right)^{-\frac{1}{\alpha}} }{R^2-R_2^2}, \quad\quad\quad\quad{\left(\gamma> \frac{P_T\xi_2R^{-2\alpha}}{\mathcal{N}}\right)\&\&\left(R_2^{-2\alpha}\geq\frac{\mathcal{N}\gamma}{\xi_1P_T}+\textrm{min}\left\{ \frac{1}{R_2^{2\alpha}},\frac{\mathcal{N}\gamma}{\xi_2P_T} \right\}
\gamma\kappa \right);}\\
0,
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\quad\quad\quad\quad{\left(\gamma> \frac{P_T\xi_2R^{-2\alpha}}{\mathcal{N}}\right)\&\&\left(\gamma\leq\frac{R_1^{-2\alpha}}{\kappa R^{-2\alpha}+\frac{\mathcal{N}}{\xi_1P_T}} \right);}\\
\Omega\left(\textrm{max}\left\{\frac{1}{R^{2\alpha}},\frac{R_2^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{\xi_2P_T}\right\}
,
\textrm{min}\left\{\frac{1}{R_2^{2\alpha}},\frac{R_1^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{\xi_2P_T},\frac{\mathcal{N}\gamma}{\xi_2P_T}\right\}
,R^{-2\alpha}\right), \quad\quad\quad\quad\quad\,\,\,{\textrm{otherwise};}\\
\end{array} \right.
\end{align}
\noindent respectively, where $\Omega\left(p,q,w\right)\triangleq \frac{q^{-\frac{1}{\alpha}}R_1^2-\left(\frac{P_T\xi_1}{\mathcal{N}\gamma q}\right)^{\frac{1}{\alpha}}\,_2F_1\left[-\frac{1}{\alpha},\frac{1}{\alpha},\frac{\alpha-1}{\alpha},-\frac{P_T\xi_2}{\mathcal{N}}q\right] -
p^{-\frac{1}{\alpha}}R_1^2+\left(\frac{P_T\xi_1}{\mathcal{N}\gamma p}\right)^{\frac{1}{\alpha}}\,_2F_1\left[-\frac{1}{\alpha},\frac{1}{\alpha},\frac{\alpha-1}{\alpha},-\frac{P_T\xi_2}{\mathcal{N}}p\right]}{(R_2^2-R_1^2)(R^2-R_2^2)}-\frac{w^{-\frac{1}{\alpha}}-p^{-\frac{1}{\alpha}}}{R^2-R_2^2}$ and $\kappa\triangleq\frac{\xi_2}{\xi_1}$.
\end{lemma}
\else
\begin{lemma}
Based on our system model in Section~\ref{sec:system}, given that two BNs are paired, the probability that the signals from the two BNs are successfully decoded and the probability that the signal from only one BN is successfully decoded are given by~\eqref{eq:nofading:p2} and~\eqref{eq:nofading:p1}, respectively, as shown at the top of next page, where $\Omega\left(p,q,w\right)\triangleq \frac{q^{-\frac{1}{\alpha}}R_1^2-\left(\frac{P_T\xi_1}{\mathcal{N}\gamma q}\right)^{\frac{1}{\alpha}}\,_2F_1\left[-\frac{1}{\alpha},\frac{1}{\alpha},\frac{\alpha-1}{\alpha},-\frac{P_T\xi_2}{\mathcal{N}}q\right] }{(R_2^2-R_1^2)(R^2-R_2^2)}
-\frac{p^{-\frac{1}{\alpha}}R_1^2-\left(\frac{P_T\xi_1}{\mathcal{N}\gamma p}\right)^{\frac{1}{\alpha}}\,_2F_1\left[-\frac{1}{\alpha},\frac{1}{\alpha},\frac{\alpha-1}{\alpha},-\frac{P_T\xi_2}{\mathcal{N}}p\right]}{(R_2^2-R_1^2)(R^2-R_2^2)}-\frac{w^{-\frac{1}{\alpha}}-p^{-\frac{1}{\alpha}}}{R^2-R_2^2}$ and $\kappa\triangleq\frac{\xi_2}{\xi_1}$.
\begin{figure*}[!t]
\normalsize
\begin{align}\label{eq:nofading:p2}
p_2=\left\{ \begin{array}{ll}
0, \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\,\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad{\gamma\geq \frac{P_T\xi_2R_2^{-2\alpha}}{\mathcal{N}};}\\
\frac{\left(\textrm{max}\left\{\frac{1}{R^{2\alpha}},\frac{\mathcal{N}\gamma}{\xi_2P_T} \right\}
\right)^{-\frac{1}{\alpha}}-R_2^2}{R^2-R_2^2}, \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad{\left(\gamma< \frac{P_T\xi_2R_2^{-2\alpha}}{\mathcal{N}}\right)\&\&\left(\gamma\leq\frac{R_2^{-2\alpha}}{\kappa R_2^{-2\alpha}+\frac{\mathcal{N}}{\xi_1P_T}} \right);}\\
0,
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\quad\quad{\left(\gamma< \frac{P_T\xi_2R_2^{-2\alpha}}{\mathcal{N}}\right)\&\&\left(R_1^{-2\alpha}\leq\frac{\mathcal{N}\gamma}{\xi_1P_T}+\textrm{max}\left\{ \frac{1}{R^{2\alpha}},\frac{\mathcal{N}\gamma}{\xi_2P_T} \right\}\gamma\kappa \right);}\\
\Omega\left(\textrm{max}\left\{\frac{1}{R^{2\alpha}},\frac{R_2^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{\xi_2P_T},\frac{\mathcal{N}\gamma}{\xi_2P_T}\right\}
, \textrm{min}\left\{\frac{1}{R_2^{2\alpha}},\frac{R_1^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{\xi_2P_T}\right\} ,\textrm{max}\left\{ \frac{1}{R^{2\alpha}},\frac{\mathcal{N}\gamma}{\xi_2P_T} \right\}
\right), \quad{\textrm{otherwise};}\\
\end{array} \right.
\end{align}
\begin{align}\label{eq:nofading:p1}
p_1=\left\{ \begin{array}{ll}
0, \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\,\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad{\gamma\leq \frac{P_T\xi_2R^{-2\alpha}}{\mathcal{N}};}\\
\frac{R^2-\left(\textrm{min}\left\{ \frac{1}{R_2^{2\alpha}},\frac{\mathcal{N}\gamma}{\xi_2P_T} \right\}\right)^{-\frac{1}{\alpha}} }{R^2-R_2^2}, \quad\quad\quad\quad{\left(\gamma> \frac{P_T\xi_2R^{-2\alpha}}{\mathcal{N}}\right)\&\&\left(R_2^{-2\alpha}\geq\frac{\mathcal{N}\gamma}{\xi_1P_T}+\textrm{min}\left\{ \frac{1}{R_2^{2\alpha}},\frac{\mathcal{N}\gamma}{\xi_2P_T} \right\}
\gamma\kappa \right);}\\
0,
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\quad\quad\quad\quad{\left(\gamma> \frac{P_T\xi_2R^{-2\alpha}}{\mathcal{N}}\right)\&\&\left(\gamma\leq\frac{R_1^{-2\alpha}}{\kappa R^{-2\alpha}+\frac{\mathcal{N}}{\xi_1P_T}} \right);}\\
\Omega\left(\textrm{max}\left\{\frac{1}{R^{2\alpha}},\frac{R_2^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{\xi_2P_T}\right\}
,
\textrm{min}\left\{\frac{1}{R_2^{2\alpha}},\frac{R_1^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{\xi_2P_T},\frac{\mathcal{N}\gamma}{\xi_2P_T}\right\}
,R^{-2\alpha}\right), \quad\quad\quad\quad\quad\,\,\,{\textrm{otherwise}.}\\
\end{array} \right.
\end{align}
\hrulefill
\vspace*{4pt}
\vspace{-0.05 in}
\end{figure*}
\end{lemma}
\fi
\textit{Proof:} See Appendix~A.
\begin{lemma}
Based on our system model in Section~\ref{sec:system}, given that only one BN from the near subregion accesses the reader, the average number of successful BNs is
\begin{align}\label{eq:N1case}
\bar{\mathcal{M}}_{1\textrm{near}}=\frac{\min\left\{R_2^2,\left(\frac{P_T\xi_1}{\gamma\mathcal{N}} \right)^{\frac{1}{\alpha}}\right\}-R_1^2}{R_2^2-R_1^2}\textbf{1}\left(\frac{P_T\xi_1R_1^{-2\alpha}}{\mathcal{N}}\geq\gamma \right),
\end{align}
and the average number of successful BNs when only one BN from the far subregion accesses the reader is
\begin{align}\label{eq:N1case1}
\bar{\mathcal{M}}_{1\textrm{far}}=\frac{\min\left\{R^2,\left(\frac{P_T\xi_2}{\gamma\mathcal{N}} \right)^{\frac{1}{\alpha}}\right\}-R_2^2}{R^2-R_2^2}\textbf{1}\left(\frac{P_T\xi_2R_2^{-2\alpha}}{\mathcal{N}}\geq\gamma \right).
\end{align}
\end{lemma}
\begin{proof}
According to the definition of $\bar{\mathcal{M}}_{1\textrm{near}}$, it can be expressed as $\bar{\mathcal{M}}_{1\textrm{near}}=\mathbb{E}_{r_1}\left[\Pr\left(\frac{P_T\xi_1r_1^{-2\alpha}}{\mathcal{N}}\geq\gamma\right)\right]$. After rearranging and evaluating this expression, we arrive at the result in~\eqref{eq:N1case}.
\end{proof}
\ifCLASSOPTIONpeerreview
\begin{remark}
Under the selection criterion of the reflection coefficient proposed in Proposition 1, it is clear that $\bar{\mathcal{M}}_2=2$ and $\bar{\mathcal{M}}_{1\textrm{near}}=\bar{\mathcal{M}}_{1\textrm{far}}=1$. Consequently, $\bar{\mathcal{C}}_{\textrm{suc}}$ can be simplified into $\bar{\mathcal{C}}_{\textrm{suc}}=\mathcal{L}\mathcal{R}(1+2p_{\textrm{near}})+\frac{4\mathcal{L}\mathcal{R}p_{\textrm{near}}^{\frac{M+2}{2}}}{M(1-p_{\textrm{near}})^{\frac{4-M}{2}}}+\left((p_{\textrm{near}}-1)\binom{M}{\frac{M+2}{2}}
\,_2F_1\left[1,\frac{2-M}{2},\frac{4+M}{2},\frac{p_{\textrm{near}}}{p_{\textrm{near}}-1}\right]-p_{\textrm{near}}\binom{M}{\frac{M+4}{2}}
\,_2F_1\left[2,\frac{4-M}{2},\frac{6+M}{2},\frac{p_{\textrm{near}}}{p_{\textrm{near}}-1}\right] \right)$, which is the same as the total number of bits transmitted by BNs. Note that this quantity strongly relies on the radius $R_2$. The impact of $R_2$ will be presented in Section~\ref{sec:result::r2}.
\end{remark}
\else
\begin{remark}
Under the selection criterion of the reflection coefficient proposed in Proposition 1, it is clear that $\bar{\mathcal{M}}_2=2$ and $\bar{\mathcal{M}}_{1\textrm{near}}=\bar{\mathcal{M}}_{1\textrm{far}}=1$. Consequently, $\bar{\mathcal{C}}_{\textrm{suc}}$ can be simplified into $\bar{\mathcal{C}}_{\textrm{suc}}=\mathcal{L}\mathcal{R}(1+2p_{\textrm{near}})+\frac{4\mathcal{L}\mathcal{R}p_{\textrm{near}}^{\frac{M+2}{2}}}{M(1-p_{\textrm{near}})^{\frac{4-M}{2}}}$ $+\left((p_{\textrm{near}}-1)\binom{M}{\frac{M+2}{2}}
\,_2F_1\left[1,\frac{2-M}{2},\frac{4+M}{2},\frac{p_{\textrm{near}}}{p_{\textrm{near}}-1}\right]\right.$
$\left.-p_{\textrm{near}}\binom{M}{\frac{M+4}{2}}
\,_2F_1\left[2,\frac{4-M}{2},\frac{6+M}{2},\frac{p_{\textrm{near}}}{p_{\textrm{near}}-1}\right] \right)$, which is the same as the total number of bits transmitted by BNs. Note that this quantity strongly relies on the radius $R_2$. The impact of $R_2$ will be presented in Section~\ref{sec:result::r2}.
\end{remark}
\fi
Note that, when we set $\xi_2\geq\gamma\frac{\mathcal{N}R^{2\alpha}}{P_T}$, the average number of successful BNs is $\bar{\mathcal{M}}_{2}=2p_2$, which is directly proportional to $p_2$. The closed-form expression shown in~\eqref{eq:nofading:p2} involves the hypergeometric function coming from the noise term in the SINR, which makes it generally difficult to obtain any design intuition. By assuming that the noise is negligible, we obtain the following simplified asymptotic result for $\bar{\mathcal{M}}_{2}$:
\ifCLASSOPTIONpeerreview
\begin{align}\label{eq:nofadingnonoise}
\lim_{\mathcal{N}\rightarrow 0}\bar{\mathcal{M}}_{2}=\left\{ \begin{array}{ll}
2, &{\gamma\kappa\leq 1 ;}\\
\frac{2R_2^2R^2+2R_1^2R_2^2-2R_1^2R^2-R_2^4\left((\gamma\kappa)^{-\frac{1}{\alpha}}+(\gamma\kappa)^{\frac{1}{\alpha}}\right)}{(R_2^2-R_1^2)(R^2-R_2^2)}, &{1<\gamma\kappa\leq \frac{R_2^{-2\alpha}}{R^{-2\alpha}} ;}\\
\frac{2R_1^2R_2^2-2R_1^2R^2+(R^4-R_2^4)(\gamma\kappa)^{-\frac{1}{\alpha}}}{(R_2^2-R_1^2)(R^2-R_2^2)}, &{\frac{R_2^{-2\alpha}}{R^{-2\alpha}}<\gamma\kappa\leq \frac{R_1^{-2\alpha}}{R_2^{-2\alpha}} ;}\\
\frac{-2R_1^2R^2+R^4(\gamma\kappa)^{-\frac{1}{\alpha}}+R_1^4(\gamma\kappa)^{\frac{1}{\alpha}}}{(R_2^2-R_1^2)(R^2-R_2^2)}, &{\frac{R_1^{-2\alpha}}{R_2^{-2\alpha}}<\gamma\kappa\leq \frac{R_1^{-2\alpha}}{R^{-2\alpha}} ;}\\
0, &{\gamma\kappa> \frac{R_1^{-2\alpha}}{R^{-2\alpha}}.}\\
\end{array} \right.
\end{align}
\else
\begin{align}\label{eq:nofadingnonoise}
\lim_{\mathcal{N}\rightarrow 0}\bar{\mathcal{M}}_{2}=\left\{ \begin{array}{ll}
2, \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\,\,{\gamma\kappa\leq 1 ;}\\
\frac{2R_2^2R^2+2R_1^2R_2^2-2R_1^2R^2-R_2^4\left((\gamma\kappa)^{-\frac{1}{\alpha}}+(\gamma\kappa)^{\frac{1}{\alpha}}\right)}{(R_2^2-R_1^2)(R^2-R_2^2)},\\
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,{1<\gamma\kappa\leq \frac{R_2^{-2\alpha}}{R^{-2\alpha}} ;}\\
\frac{2R_1^2R_2^2-2R_1^2R^2+(R^4-R_2^4)(\gamma\kappa)^{-\frac{1}{\alpha}}}{(R_2^2-R_1^2)(R^2-R_2^2)}, \\
\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\,{\frac{R_2^{-2\alpha}}{R^{-2\alpha}}<\gamma\kappa\leq \frac{R_1^{-2\alpha}}{R_2^{-2\alpha}} ;}\\
\frac{-2R_1^2R^2+R^4(\gamma\kappa)^{-\frac{1}{\alpha}}+R_1^4(\gamma\kappa)^{\frac{1}{\alpha}}}{(R_2^2-R_1^2)(R^2-R_2^2)},\\
\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\,\,{\frac{R_1^{-2\alpha}}{R_2^{-2\alpha}}<\gamma\kappa\leq \frac{R_1^{-2\alpha}}{R^{-2\alpha}} ;}\\
0, \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,\,{\gamma\kappa> \frac{R_1^{-2\alpha}}{R^{-2\alpha}}.}\\
\end{array} \right.
\end{align}
\fi
\begin{remark}
According to~\eqref{eq:nofadingnonoise}, for the given spatial and channel model, $\bar{\mathcal{M}}_{2}$ is fully determined by the ratio of the reflection coefficients $\kappa$ and the threshold $\gamma$. It can be easily shown that the asymptotic result of $\bar{\mathcal{M}}_{2}$ is a monotonically decreasing function of $\gamma$ and $\kappa$. Thus, when $\gamma\kappa\leq1$, $\bar{\mathcal{M}}_{2}$ is maximized. In other words, for the given channel threshold $\gamma$, $\kappa=\xi_2/\xi_1$ should be lower than $1/\gamma$ to optimize the network performance, which is consistent with Proposition 1.
\end{remark}
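A direct transcription of \eqref{eq:nofadingnonoise} makes the continuity and monotonicity properties easy to verify numerically; the sketch below is a sanity check with illustrative radii:
\begin{verbatim}
def M2_asymptotic(z, R1, R2, R, alpha):
    # noise-free average number of successful BNs; z = gamma*kappa
    D = (R2**2 - R1**2)*(R**2 - R2**2)
    if z <= 1:
        return 2.0
    if z <= (R/R2)**(2*alpha):
        num = (2*R2**2*R**2 + 2*R1**2*R2**2 - 2*R1**2*R**2
               - R2**4*(z**(-1/alpha) + z**(1/alpha)))
    elif z <= (R2/R1)**(2*alpha):
        num = 2*R1**2*R2**2 - 2*R1**2*R**2 + (R**4 - R2**4)*z**(-1/alpha)
    elif z <= (R/R1)**(2*alpha):
        num = -2*R1**2*R**2 + R**4*z**(-1/alpha) + R1**4*z**(1/alpha)
    else:
        return 0.0
    return num/D

# e.g., (R1, R2, R, alpha) = (1, 2, 4, 1): the value at z = 2 equals 16/9,
# which matches Pr(both decoded) = 8/9 obtained by direct integration
print(M2_asymptotic(2.0, 1.0, 2.0, 4.0, 1.0))   # 1.777...
\end{verbatim}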
\subsection{Multiple-Node Multiplexing Case ($N\geq$ 3)}\label{sec:generalcase}
Under this scenario, we analyze the average number of successful BNs given $N$ multiplexing BNs.
\begin{define}
Based on our NOMA-enhanced BackCom system in Section~\ref{sec:NOMAmodel}, the average number of successful BNs given $N$ multiplexing nodes, $\bar{\mathcal{M}}_N$, is given by
\begin{align}\label{eq:Ngeneral}
\bar{\mathcal{M}}_N=\sum_{k=0}^{N}kp_k,
\end{align}
\noindent where $p_k$ is the probability that only the signals from the first $k$ BNs are successfully decoded.
\end{define}
The derivation of the probability $p_k$ is very challenging. This is because the event that the signal from the BN in the $i$-th subregion is successfully decoded is correlated with the event that the signal from the BN in the $(i+1)$-th subregion is unsuccessfully decoded. Thus, for analytical tractability, similar to most existing works~\cite{6954404,Psomas-2017}, we assume that each decoding step in the SIC is independent. We will show in Section~\ref{sec:result} that the independence assumption does not adversely affect the accuracy of the analysis. Based on this independence assumption, we can approximately express $p_k$ as
\begin{align}\label{eq:pkgeneral}
p_k\approx p_{\textrm{out}}^{(k+1)}\prod_{i=1}^{k}\left(1-p_{\textrm{out}}^{(i)}\right),
\end{align}
\noindent where $p_{\textrm{out}}^{(i)}$ denotes the probability that the SINR of the $i$-th strongest signal (e.g., the signal from the BN in the $i$-th subregion) falls below $\gamma$, given that the $(i-1)$-th strongest signal is successfully decoded. Note that, except for $p_{\textrm{out}}^{(1)}$, each $p_{\textrm{out}}^{(i)}$ is a conditional outage probability.
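Under the independence assumption, $\bar{\mathcal{M}}_N$ follows from the per-stage outage probabilities by combining \eqref{eq:Ngeneral} and \eqref{eq:pkgeneral}; the computation of $p_{\textrm{out}}^{(i)}$ itself is addressed below. A minimal sketch (the outage values used here are placeholders):
\begin{verbatim}
def avg_successful_bns(p_out):
    # p_out[i-1] = outage probability of the i-th strongest signal
    N = len(p_out)
    total, survive = 0.0, 1.0
    for k in range(1, N + 1):
        survive *= 1.0 - p_out[k-1]         # the first k stages succeed
        fail_next = p_out[k] if k < N else 1.0
        total += k * survive * fail_next    # k * p_k
    return total

print(avg_successful_bns([0.1, 0.3, 0.5]))  # 1.845
\end{verbatim}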
Different from the previous two-node pairing scenario, there is no direct way to compute $p_{\textrm{out}}^{(i)}$ when $N\geq 3$. Instead, we adopt the moment generating function (MGF)-based approach in~\cite{Guo-2013} to work out $p_{\textrm{out}}^{(i)}$, which is presented in the following lemma.
\begin{lemma}
Based on our system model considered in Section~\ref{sec:system}, the probability that the signal from the $i$-th BN fails to be decoded, given that the $(i-1)$-th strongest signal is successfully decoded, is
\ifCLASSOPTIONpeerreview
\begin{align}\label{eq:general:mgf}
p_{\textrm{out}}^{(i)}
&=1-\frac{2^{-\mathcal{B}}\exp(\frac{\mathcal{A}}{2})}{\gamma^{-1}}\sum_{b=0}^{\mathcal{B}}\binom{\mathcal{B}}{b} \sum_{c=0}^{\mathcal{C}+b}\frac{(-1)^c}{\mathcal{D}_c}\mathrm{Re} \left\{\frac{\mathcal{M}_{w_{i}}\left(s\right)}{s} \right\},
\end{align}
\noindent where $\mathcal{M}_{w_{i}}\left(s\right)\!=\!\!\!\mathlarger{\int}_{R_{i}}^{R_{i+1}}\exp\left(\frac{-sr_i^{2\alpha}\mathcal{N}}{P_T\xi_i}\right)\prod_{j=i+1}^{N}\frac{\left(sr_i^{2\alpha}\frac{\xi_{j}}{\xi_i}\right)^{\frac{1}{\alpha}}
\Gamma\left[-\frac{1}{\alpha},sr_i^{2\alpha}\frac{\xi_{j}}{\xi_iR_{j+1}^{2\alpha}},sr_i^{2\alpha}\frac{\xi_{j}}{\xi_iR_{j}^{2\alpha}}\right]}{\alpha(R_{j+1}^2-R_{j}^2)}\frac{2r_i}{R_{i+1}^2-R_{i}^2}\textup{d}r_i$. $\mathcal{D}_c= 2$ (if $c=0$) and $\mathcal{D}_c=1$ (if $c=1,2,\hdots$), $s=(\mathcal{A}+\mathbf{i}2\pi c)/(2\gamma^{-1})$. $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ are three parameters employed to control the error estimation and following~\cite{Guo-2013}, we set $\mathcal{A} = 8 \ln 10$, $\mathcal{B} = 11$, $\mathcal{C} = 14$ in this work.
\else
\begin{align}\label{eq:general:mgf}
p_{\textrm{out}}^{(i)}
&=1\!-\!\frac{2^{-\mathcal{B}}\exp(\frac{\mathcal{A}}{2})}{\gamma^{-1}}\sum_{b=0}^{\mathcal{B}}\binom{\mathcal{B}}{b} \sum_{c=0}^{\mathcal{C}+b}\frac{(-1)^c}{\mathcal{D}_c}\mathrm{Re} \left\{\!\frac{\mathcal{M}_{w_{i}}\left(s\right)}{s}\! \right\},
\end{align}
\noindent where $\mathcal{M}_{w_{i}}\left(s\right)\!=\mathlarger{\int}_{R_{i}}^{R_{i+1}}\exp\left(\frac{-sr_i^{2\alpha}\mathcal{N}}{P_T\xi_i}\right)$
$\times\prod_{j=i+1}^{N}\frac{\left(sr_i^{2\alpha}\frac{\xi_{j}}{\xi_i}\right)^{\frac{1}{\alpha}}
\Gamma\left[-\frac{1}{\alpha},sr_i^{2\alpha}\frac{\xi_{j}}{\xi_iR_{j+1}^{2\alpha}},sr_i^{2\alpha}\frac{\xi_{j}}{\xi_iR_{j}^{2\alpha}}\right]}{\alpha(R_{j+1}^2-R_{j}^2)}\frac{2r_i}{R_{i+1}^2-R_{i}^2}\textup{d}r_i$. $\mathcal{D}_c= 2$ (if $c=0$) and $\mathcal{D}_c=1$ (if $c=1,2,\hdots$), $s=(\mathcal{A}+\mathbf{i}2\pi c)/(2\gamma^{-1})$. $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ are three parameters employed to control the error estimation and following~\cite{Guo-2013}, we set $\mathcal{A} = 8 \ln 10$, $\mathcal{B} = 11$, $\mathcal{C} = 14$ in this work.
\fi
\end{lemma}
\begin{proof}
Based on the MGF-approach, we can express $p_{\textrm{out}}^{(i)}$ as
\ifCLASSOPTIONpeerreview
\begin{align}
p_{\textrm{out}}^{(i)}&=\Pr\left(\textsf{SINR}_i<\gamma\right)=\Pr\left(\frac{P_T\xi_i r_i^{-2\alpha}}{\sum_{j=i+1}^{N}P_T\xi_j r_j^{-2\alpha}+\mathcal{N}}<\gamma\right)\nonumber\\
&=1-\frac{2^{-\mathcal{B}}\exp(\frac{\mathcal{A}}{2})}{\gamma^{-1}}\sum_{b=0}^{\mathcal{B}}\binom{\mathcal{B}}{b} \sum_{c=0}^{\mathcal{C}+b}\frac{(-1)^c}{\mathcal{D}_c}\mathrm{Re} \left\{\frac{\mathcal{M}_{w_{i}}\left(s\right)}{s} \right\},
\end{align}
\else
\begin{align}
p_{\textrm{out}}^{(i)}&=\Pr\!\left(\textsf{SINR}_i<\gamma\right)=\Pr\!\left(\frac{P_T\xi_i r_i^{-2\alpha}}{\sum_{j=i+1}^{N}P_T\xi_j r_j^{-2\alpha}\!+\!\mathcal{N}}<\!\gamma\!\right)\nonumber\\
&=1\!-\!\frac{2^{-\mathcal{B}}\exp(\frac{\mathcal{A}}{2})}{\gamma^{-1}}\sum_{b=0}^{\mathcal{B}}\binom{\mathcal{B}}{b} \sum_{c=0}^{\mathcal{C}+b}\frac{(-1)^c}{\mathcal{D}_c}\mathrm{Re} \left\{\!\frac{\mathcal{M}_{w_{i}}\left(s\right)}{s} \!\right\},
\end{align}
\fi
\noindent where $w_{i}$ is the inverse $\textsf{SINR}_{i}$ and $\mathcal{M}_{w_{i}}\left(s\right)$ is its distribution's MGF.
\ifCLASSOPTIONpeerreview
Following the definition of MGF, we then can express the MGF of the distribution of $w_i$ as
\begin{align}\label{eq:mgfderi}
&\mathcal{M}_{w_i}\left(s\right)=\mathbb{E}_{r_i,r_{i+1},...,r_N}\left[\exp\left(-\frac{sr_i^{2\alpha}\mathcal{N}}{P_T\xi_i}-\sum_{j=i+1}^{N}sr_i^{2\alpha}r_{j}^{-\alpha}\frac{\xi_j}{\xi_i}\right)\right] \nonumber\\
&=\mathbb{E}_{r_i}\left[\exp\left(-\frac{sr_i^{2\alpha}\mathcal{N}}{P_T\xi_i}\right)\prod_{j=i+1}^{N}\int_{R_{j}}^{R_{j+1}}
\exp\left(-sr_i^{2\alpha}r_{j}^{-\alpha}\frac{\xi_j}{\xi_i}\right)\frac{2r_j}{R_{j+1}^2-R_j^2}\textup{d}r_j \right] \nonumber\\
&=\mathbb{E}_{r_i}\left[\exp\left(-\frac{sr_i^{2\alpha}\mathcal{N}}{P_T\xi_i}\right)\prod_{j=i+1}^{N}\frac{\left(sr_i^{2\alpha}\frac{\xi_{j}}{\xi_i}\right)^{\frac{1}{\alpha}}
\Gamma\left[-\frac{1}{\alpha},sr_i^{2\alpha}\frac{\xi_{j}}{\xi_iR_{j+1}^{2\alpha}},sr_i^{2\alpha}\frac{\xi_{j}}{\xi_iR_{j}^{2\alpha}}\right]}{\alpha(R_{j+1}^2-R_{j}^2)} \right].
\end{align}
\else
Following the definition of MGF, we then can express the MGF of the distribution of $w_i$ as in~\eqref{eq:mgfderi}, as shown at the top of next page.
\begin{figure*}[!t]
\normalsize
\begin{align}\label{eq:mgfderi}
&\mathcal{M}_{w_i}\left(s\right)=\mathbb{E}_{r_i,r_{i+1},...,r_N}\!\left[\exp\!\!\left(-\frac{sr_i^{2\alpha}\mathcal{N}}{P_T\xi_i}\!-\!\sum_{j=i+1}^{N}sr_i^{2\alpha}\frac{\xi_j}{\xi_ir_{j}^{\alpha}}\!\right)\!\right] \!=\!\mathbb{E}_{r_i}\!\left[\exp\!\left(-\frac{sr_i^{2\alpha}\mathcal{N}}{P_T\xi_i}\right)\!\!\prod_{j=i+1}^{N}\!\int_{R_{j}}^{R_{j+1}}
\!\frac{\exp\!\left(-sr_i^{2\alpha}\frac{\xi_j}{\xi_ir_{j}^{\alpha}}\right)2r_j}{R_{j+1}^2-R_j^2}\textup{d}r_j \right] \nonumber\\
&=\mathbb{E}_{r_i}\left[\exp\left(-\frac{sr_i^{2\alpha}\mathcal{N}}{P_T\xi_i}\right)\prod_{j=i+1}^{N}\frac{\left(sr_i^{2\alpha}\frac{\xi_{j}}{\xi_i}\right)^{\frac{1}{\alpha}}
\Gamma\left[-\frac{1}{\alpha},sr_i^{2\alpha}\frac{\xi_{j}}{\xi_iR_{j+1}^{2\alpha}},sr_i^{2\alpha}\frac{\xi_{j}}{\xi_iR_{j}^{2\alpha}}\right]}{\alpha(R_{j+1}^2-R_{j}^2)} \right].
\end{align}
\hrulefill
\vspace*{4pt}
\vspace{-0.05 in}
\end{figure*}
\fi
\end{proof}
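For concreteness, the Euler-summation structure of \eqref{eq:general:mgf} is sketched below. To keep the example self-contained, the MGF of $w_i$ is estimated empirically from Monte Carlo samples (reusing the same samples at every $s$) instead of through the closed form of Lemma 3, and the routine is validated on an exponential toy distribution; this is an illustrative numerical recipe, not the exact procedure of~\cite{Guo-2013}:
\begin{verbatim}
import numpy as np
from math import comb, exp, log, pi

def p_out_mgf(w_samples, gamma, A=8*log(10), B=11, C=14):
    T = 1.0/gamma                       # evaluate the CDF of w at 1/gamma
    mgf = lambda s: np.mean(np.exp(-s*w_samples))
    acc = 0.0
    for b in range(B + 1):
        for c in range(C + b + 1):
            s = (A + 1j*2*pi*c)/(2*T)
            D = 2.0 if c == 0 else 1.0
            acc += comb(B, b)*(-1)**c/D*(mgf(s)/s).real
    # Pr(w > 1/gamma) = Pr(SINR < gamma)
    return 1.0 - 2.0**(-B)*exp(A/2)/T*acc

rng = np.random.default_rng(1)
w = rng.exponential(size=200_000)       # toy w with Pr(w > x) = exp(-x)
print(p_out_mgf(w, gamma=2.0), exp(-0.5))   # both ~0.6065
\end{verbatim}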
\section{Two-Node Pairing Case with Fading}\label{sec:fading}
In this section, we consider the analysis for two-node pairing taking the fading channel model into account. Block fading is considered, which indicates that the fading coefficient is unchanged within one time slot, but may vary independently from one time slot to another. The fading on the communication link is assumed to be i.i.d. Nakagami-$m$ fading, and we let $g$ denote the fading power gain on the communication link, which follows the gamma distribution. Moreover, we assume that the downlink and uplink channels are reciprocal (i.e., the fading coefficient of the downlink channel is the transpose conjugate of that of the uplink channel and their fading amplitudes are the same)~\cite{Liu-2017}.
When fading is included, another source of randomness is added to the received signal. In this section, we consider two pairing approaches for power-domain NOMA, named the region division and the power division approach, respectively. Under the region division approach, the situation is similar to Section~\ref{sec:nofading}, where the reader pairs the BNs from the near subregion and the far subregion. In the fading context, this approach requires long-term training to recognize whether a BN is in the near subregion or in the far subregion. In terms of the power division approach, the reader pairs the BNs with the higher and the lower instantaneous backscattered power, which requires instantaneous training to classify the BNs. Its explicit implementation will be explained in Section~\ref{sec:powerdivision}. Note that these two approaches coincide in the fading-free scenario.
The average number of successfully decoded bits, $\bar{\mathcal{C}}_{\textrm{suc}}$, is the metric investigated under the fading case. Its general expression is the same as~\eqref{eq:general:C} in Definition 1 for both approaches, while the key factors, such as $p_{\textrm{near}}$, $p_1$, $p_2$, $\bar{\mathcal{M}}_{1\textrm{near}}$ and $\bar{\mathcal{M}}_{1\textrm{far}}$, change. The analysis of these factors is presented as follows, and a summary is given in Table~\ref{tb:1} (cf. Section~\ref{sec:fading:summary}).
\subsection{Region Division Approach}\label{sec:regiondevision}
For the region division approach, $p_{\textrm{near}}$ is the probability that the BN is located in the near subregion, which is the same as Section~\ref{sec:nofading} and is equal to $p_{\textrm{near}}=\frac{R_2^2-R_1^2}{R^2-R_1^2}$. The analysis of $p_k$ (i.e., the probability that the signals from $k$ BNs are successfully decoded) becomes complicated due to the consideration of fading.
Note that our considered SIC scheme is based on the instantaneous received power at the reader, i.e., $P_T\xi g^{2}r^{-2\alpha}$. Under the region division approach, the stronger signal may not come from the BN in the near subregion. Before deriving $p_k$, we first present the following lemma which shows the composite distribution of the random distance and fading.
\begin{lemma}\label{lemma:composite}
Let $r$ denote a random variable following the distribution of $f_r(r)=\frac{2r}{R_{u}^2-R_{l}^2}$, where $r\in[R_{l},R_{u}]$, and $g$ is a random variable following the gamma distribution, i.e., $f_g(g)=\frac{m^m g^{m-1}\exp(-m g)}{\Gamma[m]}$. The cumulative distribution function (CDF) and PDF for the composite random variable $x\triangleq g^{2}r^{-2\alpha}$ are given by
\ifCLASSOPTIONpeerreview
\begin{align}
\Phi(x,R_l,R_u)&=1-\frac{R_u^2\Gamma\!\left[m,mR_u^{\alpha}\sqrt{x} \right]-R_l^2\Gamma\!\left[m,mR_l^{\alpha}\sqrt{x}\right]+\left(m\sqrt{x}\right)^{-\frac{2}{\alpha}}\Gamma\!\left[m+\frac{2}{\alpha},mR_l^{\alpha}\sqrt{x},mR_u^{\alpha}\sqrt{x}\right]}{\left(R_{u}^2-R_{l}^2\right)\Gamma[m]},\label{eq:cdf1} \\
\phi(x,R_l,R_u)&=\frac{\Gamma\!\left[m+\frac{2}{\alpha},mR_l^{\alpha}\sqrt{x},mR_u^{\alpha}\sqrt{x}\right]}{m^{\frac{2}{\alpha}}x^{\frac{1}{\alpha}+1}\alpha\left(R_{u}^2-R_{l}^2\right)\Gamma[m]},
\end{align}
\else
\begin{align}
&\Phi(x,R_l,R_u)=1-\frac{R_u^2\Gamma\!\left[m,mR_u^{\alpha}\sqrt{x} \right]\!-\!R_l^2\Gamma\!\left[m,mR_l^{\alpha}\sqrt{x}\right]}{\left(R_{u}^2-R_{l}^2\right)\Gamma[m]}\nonumber\\
&\quad\quad\quad-\frac{\left(m\sqrt{x}\right)^{-\frac{2}{\alpha}}\Gamma\!\left[m+\frac{2}{\alpha},mR_l^{\alpha}\sqrt{x},mR_u^{\alpha}\sqrt{x}\right]}{\left(R_{u}^2-R_{l}^2\right)\Gamma[m]}, \label{eq:cdf1} \\
&\phi(x,R_l,R_u)=\frac{\Gamma\!\left[m+\frac{2}{\alpha},mR_l^{\alpha}\sqrt{x},mR_u^{\alpha}\sqrt{x}\right]}{m^{\frac{2}{\alpha}}x^{\frac{1}{\alpha}+1}\alpha\left(R_{u}^2-R_{l}^2\right)\Gamma[m]},
\end{align}
\fi
%
\noindent respectively.
\end{lemma}
\begin{proof}
The CDF of $x$ can be written as
\ifCLASSOPTIONpeerreview
\begin{align}
\Phi(x,R_l,R_u)&=\Pr\left(g^{2}r^{-2\alpha}< x\right) =\mathbb{E}_{r}\left\{\Pr\left(g< r^{\alpha}\sqrt{x}\right)\right\}\nonumber\\
&=\int_{R_l}^{R_u}\frac{\Gamma\left[m,0,m\sqrt{x}r^{\alpha}\right]}{\Gamma[m]}\frac{2r}{R_{u}^2-R_{l}^2}\, \textup{d}r\nonumber\\
&=1-\frac{R_u^2\Gamma\!\left[m,mR_u^{\alpha}\sqrt{x} \right]-R_l^2\Gamma\!\left[m,mR_l^{\alpha}\sqrt{x}\right]+\left(m\sqrt{x}\right)^{-\frac{2}{\alpha}}\Gamma\!\left[m+\frac{2}{\alpha},mR_l^{\alpha}\sqrt{x},mR_u^{\alpha}\sqrt{x}\right]}{\left(R_{u}^2-R_{l}^2\right)\Gamma[m]}.
\end{align}
\else
\begin{align}
&\Phi(x,R_l,R_u)=\Pr\left(g^{2}r^{-2\alpha}< x\right) =\mathbb{E}_{r}\left\{\Pr\left(g< r^{\alpha}\sqrt{x}\right)\right\}\nonumber\\
&\quad\quad=\int_{R_l}^{R_u}\frac{\Gamma\left[m,0,m\sqrt{x}r^{\alpha}\right]}{\Gamma[m]}\frac{2r}{R_{u}^2-R_{l}^2}\, \textup{d}r\nonumber\\
&\quad\quad=1-\frac{R_u^2\Gamma\!\left[m,mR_u^{\alpha}\sqrt{x} \right]-R_l^2\Gamma\!\left[m,mR_l^{\alpha}\sqrt{x}\right]}{\left(R_{u}^2-R_{l}^2\right)\Gamma[m]}\nonumber\\
&\quad\quad\quad-\frac{\left(m\sqrt{x}\right)^{-\frac{2}{\alpha}}\Gamma\!\left[m+\frac{2}{\alpha},mR_l^{\alpha}\sqrt{x},mR_u^{\alpha}\sqrt{x}\right]}{\left(R_{u}^2-R_{l}^2\right)\Gamma[m]}.
\end{align}
\fi
Taking the derivative of $\Phi(x,R_l,R_u)$ with respect to $x$, we obtain its PDF.
\end{proof}
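To make Lemma~\ref{lemma:composite} concrete, the following Python sketch compares the closed-form CDF against a Monte Carlo estimate of $x\triangleq g^{2}r^{-2\alpha}$. All parameter values are illustrative only, and $\Gamma[a,z_0,z_1]=\Gamma[a,z_0]-\Gamma[a,z_1]$ denotes the generalized incomplete gamma function.

\begin{verbatim}
# Hedged Monte Carlo check of Lemma 1 (illustrative parameters only).
import numpy as np
from scipy.special import gammaincc, gamma as G

m, al = 4, 2.5                  # Nakagami parameter, path-loss exponent
Rl, Ru = 1.0, 65.0              # annulus radii

def uinc(a, z):                 # upper incomplete gamma Gamma[a, z]
    return gammaincc(a, z)*G(a)

def Phi(x):                     # CDF of x = g^2 r^(-2*alpha), Eq. (cdf1)
    s = np.sqrt(x)
    gen = uinc(m + 2/al, m*Rl**al*s) - uinc(m + 2/al, m*Ru**al*s)
    num = (Ru**2*uinc(m, m*Ru**al*s) - Rl**2*uinc(m, m*Rl**al*s)
           + (m*s)**(-2/al)*gen)
    return 1 - num/((Ru**2 - Rl**2)*G(m))

rng = np.random.default_rng(0)
n = 10**6
r = np.sqrt(Rl**2 + rng.random(n)*(Ru**2 - Rl**2))  # f_r(r) = 2r/(Ru^2-Rl^2)
g = rng.gamma(shape=m, scale=1.0/m, size=n)         # unit-mean gamma fading
x = g**2 * r**(-2*al)
for x0 in (1e-9, 1e-8, 1e-7):
    print(x0, (x < x0).mean(), Phi(x0))             # empirical CDF vs. Phi
\end{verbatim}

The empirical and analytical CDFs agree to within Monte Carlo error, which also checks the sign conventions of the incomplete gamma terms.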
According to Lemma~\ref{lemma:composite} and probability theory, the key elements for the region division approach with fading are given in the following lemmas.
\ifCLASSOPTIONpeerreview
\begin{lemma}
Based on our system model considered in Sections~\ref{sec:system} and~\ref{sec:fading}, under the fading scenario with region division approach, the probability that the signals from two BNs are successfully decoded and the probability that the signal from only one BN is successfully decoded are given by
\begin{align}\label{eq:general:p2}
p_2=\left\{ \begin{array}{ll}
\mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_2(1-\gamma)}}^{\infty}\!\left(1-\Phi^A\!\left(\kappa x_2\right)\right)\phi^B(x_2)\textup{d}x_2+ \mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_2}}^{\frac{\mathcal{N}\gamma}{P_T\xi_2(1-\gamma)}}\!\left(1-\Phi^A\!\left(\gamma\kappa x_2\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_1}\right)\!\right)\phi^B(x_2)\textup{d}x_2 \\
+\mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_1(1-\gamma)}}^{\infty}\!\left(1-\Phi^B\!\left(\frac{x_1}{\kappa}\right)\right)\phi^A(x_1)\textup{d}x_1+ \mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_1}}^{\frac{\mathcal{N}\gamma}{P_T\xi_1(1-\gamma)}}\!\left(1-\Phi^B\!\left(\frac{\gamma x_1}{\kappa}\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_2}\right)\!\right)\phi^A(x_1)\textup{d}x_1, &{\gamma<1 ;}\\
\mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_2}}^{\infty}\!\left(1-\Phi^A\!\left(\gamma\kappa x_2\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_1}\right)\!\right)\phi^B(x_2)\textup{d}x_2 + \mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_1}}^{\infty}\!\left(1-\Phi^B\!\left(\frac{\gamma x_1}{\kappa}\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_2}\right)\!\right)\phi^A(x_1)\textup{d}x_1, &{\gamma \geq 1 ;}\\
\end{array} \right.
\end{align}
\begin{align}\label{eq:general:p1}
p_1&= \mathlarger{\int}_0^{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_2}}\!\left(1-\Phi^A\!\left(\gamma\kappa x_2\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_1}\right)\!\right)\phi^B(x_2)\textup{d}x_2 + \mathlarger{\int}_0^{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_1}}\!\left(1-\Phi^B\!\left(\frac{\gamma x_1}{\kappa}\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_2}\right)\!\right)\phi^A(x_1)\textup{d}x_1,
\end{align}
\noindent respectively, where $\kappa=\xi_2/\xi_1$, $\Phi^A(x_1)\triangleq\Phi(x_1,R_1,R_2)$, $\phi^A(x_1)\triangleq\phi(x_1,R_1,R_2)$, $\Phi^B(x_2)\triangleq\Phi(x_2,R_2,R)$ and $\phi^B(x_2)\triangleq\phi(x_2,R_2,R)$. $\Phi(\cdot,\cdot,\cdot)$ and $\phi(\cdot,\cdot,\cdot)$ are defined in Lemma~\ref{lemma:composite}.
\end{lemma}
\else
\begin{lemma}
\begin{figure*}[!t]
\normalsize
\begin{align}\label{eq:general:p2}
p_2=\left\{ \begin{array}{ll}
\mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_2(1-\gamma)}}^{\infty}\!\left(1-\Phi^A\!\left(\kappa x_2\right)\right)\phi^B(x_2)\textup{d}x_2+ \mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_2}}^{\frac{\mathcal{N}\gamma}{P_T\xi_2(1-\gamma)}}\!\left(1-\Phi^A\!\left(\gamma\kappa x_2\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_1}\right)\!\right)\phi^B(x_2)\textup{d}x_2 \\
+\mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_1(1-\gamma)}}^{\infty}\!\left(1-\Phi^B\!\left(\frac{x_1}{\kappa}\right)\right)\phi^A(x_1)\textup{d}x_1+ \mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_1}}^{\frac{\mathcal{N}\gamma}{P_T\xi_1(1-\gamma)}}\!\left(1-\Phi^B\!\left(\frac{\gamma x_1}{\kappa}\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_2}\right)\!\right)\phi^A(x_1)\textup{d}x_1, &{\gamma<1 ;}\\
\mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_2}}^{\infty}\!\left(1-\Phi^A\!\left(\gamma\kappa x_2\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_1}\right)\!\right)\phi^B(x_2)\textup{d}x_2 + \mathop{\mathlarger{\int}}_{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_1}}^{\infty}\!\left(1-\Phi^B\!\left(\frac{\gamma x_1}{\kappa}\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_2}\right)\!\right)\phi^A(x_1)\textup{d}x_1, &{\gamma \geq 1 ;}\\
\end{array} \right.
\end{align}
\begin{align}\label{eq:general:p1}
p_1&= \mathlarger{\int}_0^{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_2}}\!\left(1-\Phi^A\!\left(\gamma\kappa x_2\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_1}\right)\!\right)\phi^B(x_2)\textup{d}x_2 + \mathlarger{\int}_0^{\!\!\frac{\mathcal{N}\gamma}{P_T\xi_1}}\!\left(1-\Phi^B\!\left(\frac{\gamma x_1}{\kappa}\!+\!\frac{\mathcal{N}\gamma}{P_T\xi_2}\right)\!\right)\phi^A(x_1)\textup{d}x_1.
\end{align}
\hrulefill
\vspace*{4pt}
\vspace{-0.05 in}
\end{figure*}
Based on our system model considered in Sections~\ref{sec:system} and~\ref{sec:fading}, under the fading scenario with region division approach, the probability that the signals from two BNs are successfully decoded and the probability that the signal from only one BN is successfully decoded are given by~\eqref{eq:general:p2} and~\eqref{eq:general:p1}, respectively, at the top of this page, where $\kappa=\xi_2/\xi_1$, $\Phi^A(x_1)\triangleq\Phi(x_1,R_1,R_2)$, $\phi^A(x_1)\triangleq\phi(x_1,R_1,R_2)$, $\Phi^B(x_2)\triangleq\Phi(x_2,R_2,R)$ and $\phi^B(x_2)\triangleq\phi(x_2,R_2,R)$. $\Phi(\cdot,\cdot,\cdot)$ and $\phi(\cdot,\cdot,\cdot)$ are defined in Lemma~\ref{lemma:composite}.
\end{lemma}
\fi
\begin{proof}
\ifCLASSOPTIONpeerreview
In order to ensure that the signals from both paired BNs are successfully decoded, both the SINR of the stronger signal and the SNR of the weaker signal must be greater than the channel threshold. Based on the decoding order, $p_{2}$ can be decomposed into
\begin{align}\label{eq:deliever1}
p_{2}=&\underbrace{\Pr\!\left(\frac{P_T\xi_1g_1^{2}r_1^{-2\alpha}}{P_T\xi_2g_2^{2}r_2^{-2\alpha}+\mathcal{N}}\geq\gamma \,\,\&\& \,\,\frac{P_T\xi_2g_2^{2}r_2^{-2\alpha}}{\mathcal{N}}\geq\gamma\,\, \&\&\,\, \frac{\xi_1g_1^{2}}{r_1^{2\alpha}}\geq \frac{\xi_2g_2^{2}}{r_2^{2\alpha}}\right)}_{p_2^A} \nonumber \\
&+\underbrace{\Pr\!\left(\frac{P_T\xi_2g_2^{2}r_2^{-2\alpha}}{P_T\xi_1g_1^{2}r_1^{-2\alpha}+\mathcal{N}}\geq\gamma \,\,\&\& \,\,\frac{P_T\xi_1g_1^{2}r_1^{-2\alpha}}{\mathcal{N}}\geq\gamma\,\, \&\& \,\, \frac{\xi_1g_1^{2}}{r_1^{2\alpha}}<\frac{\xi_2g_2^{2}}{r_2^{2\alpha}}\right)}_{p_2^B},
\end{align}
\noindent where $x_1\triangleq g_1^{2}r_1^{-2\alpha}$, $x_2\triangleq g_2^{2}r_2^{-2\alpha}$, and $g_1$ and $g_2$ represent the fading power gains for the BNs from the near subregion and far subregion, respectively.
\else
\begin{figure*}[!t]
\normalsize
\begin{align}\label{eq:deliever1}
p_{2}=&\underbrace{\Pr\!\left(\frac{P_T\xi_1g_1^{2}r_1^{-2\alpha}}{P_T\xi_2g_2^{2}r_2^{-2\alpha}+\mathcal{N}}\geq\gamma \,\,\&\& \,\,\frac{P_T\xi_2g_2^{2}r_2^{-2\alpha}}{\mathcal{N}}\geq\gamma\,\, \&\&\,\, \frac{\xi_1g_1^{2}}{r_1^{2\alpha}}\geq \frac{\xi_2g_2^{2}}{r_2^{2\alpha}}\right)}_{p_2^A} \nonumber \\
&+\underbrace{\Pr\!\left(\frac{P_T\xi_2g_2^{2}r_2^{-2\alpha}}{P_T\xi_1g_1^{2}r_1^{-2\alpha}+\mathcal{N}}\geq\gamma \,\,\&\& \,\,\frac{P_T\xi_1g_1^{2}r_1^{-2\alpha}}{\mathcal{N}}\geq\gamma\,\, \&\& \,\, \frac{\xi_1g_1^{2}}{r_1^{2\alpha}}<\frac{\xi_2g_2^{2}}{r_2^{2\alpha}}\right)}_{p_2^B}.
\end{align}
\hrulefill
\vspace*{4pt}
\vspace{-0.05 in}
\end{figure*}
In order to ensure that the signals from both paired BNs are successfully decoded, both the SINR of the stronger signal and the SNR of the weaker signal must be greater than the channel threshold. Based on the decoding order, $p_{2}$ can be decomposed into~\eqref{eq:deliever1}, as shown at the top of next page, where $x_1\triangleq g_1^{2}r_1^{-2\alpha}$, $x_2\triangleq g_2^{2}r_2^{-2\alpha}$, and $g_1$ and $g_2$ represent the fading power gains for the BNs from the near subregion and far subregion, respectively.
\fi
Let us first consider $p_2^A$, which is the probability that both BNs are successfully decoded when the signal from the near BN is decoded first. The condition for the signal from the near BN to be decoded first is $\xi_1 x_1\geq\xi_2 x_2$ (equivalently, $x_1\geq \kappa x_2$). Additionally, the condition for the signal from the far BN to be successfully decoded is that $x_2$ must be greater than $\frac{\mathcal{N}\gamma}{P_T\xi_2}$. Then, we can express $p_2^A$ as
\ifCLASSOPTIONpeerreview
\begin{align}\label{eq:p2_derive}
p_2^A&=\Pr\left(\frac{x_1}{\kappa x_2+\frac{\mathcal{N}}{P_T\xi_1}}\geq\gamma\right)=\mathbb{E}_{x_2}\left\{\Pr\left(x_1\geq \gamma\kappa x_2+\frac{\mathcal{N}\gamma}{P_T\xi_1} \right)\right\},
\end{align}
\else
\begin{align}\label{eq:p2_derive}
p_2^A&=\Pr\left(\frac{x_1}{\kappa x_2+\frac{\mathcal{N}}{P_T\xi_1}}\geq\gamma\right)\nonumber\\
&=\mathbb{E}_{x_2}\left\{\Pr\left(x_1\geq \gamma\kappa x_2+\frac{\mathcal{N}\gamma}{P_T\xi_1} \right)\right\},
\end{align}
\fi
\noindent where $x_1\in\left(\kappa x_2,\infty\right)$ and $x_2\in\left(\frac{\mathcal{N}\gamma}{P_T\xi_2},\infty\right)$.
Then, following a procedure similar to that presented in Appendix A, we obtain the expression for $p_2^A$. $p_2^B$ can be derived using the same procedure. Combining these two results, we arrive at the final result in~\eqref{eq:general:p2}. The derivation of $p_1$ is similar.
\end{proof}
Due to the complexity of the functions $\Phi$ and $\phi$, closed-form results cannot be obtained. However, the single-fold integrals can easily be evaluated numerically using standard mathematical packages such as Mathematica or Matlab.
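To illustrate this, the following hedged Python sketch evaluates the $\gamma\geq 1$ branch of~\eqref{eq:general:p2}; the parameter values are illustrative rather than the exact setup used later, and the integration is carried out in $\log x$ for numerical robustness.

\begin{verbatim}
# Hedged sketch: numerical evaluation of the gamma >= 1 branch of p_2
# in Lemma 2 (illustrative parameter values only).
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaincc, gamma as G

m, al = 4, 2.5
R1, R2, R = 1.0, 46.0, 65.0          # subregion radii (m)
PT, Nn, gam = 10**3.5, 1e-10, 10.0   # 35 dBm, -100 dBm (mW), 10 dB
xi1, xi2 = 0.7, 0.05
kap = xi2/xi1

def uinc(a, z):
    return gammaincc(a, z)*G(a)      # upper incomplete gamma

def Phi(x, Rl, Ru):                  # CDF from Lemma 1
    s = np.sqrt(x)
    gen = uinc(m+2/al, m*Rl**al*s) - uinc(m+2/al, m*Ru**al*s)
    num = (Ru**2*uinc(m, m*Ru**al*s) - Rl**2*uinc(m, m*Rl**al*s)
           + (m*s)**(-2/al)*gen)
    return 1 - num/((Ru**2-Rl**2)*G(m))

def phi(x, Rl, Ru):                  # PDF from Lemma 1
    s = np.sqrt(x)
    gen = uinc(m+2/al, m*Rl**al*s) - uinc(m+2/al, m*Ru**al*s)
    return gen/(m**(2/al)*x**(1/al+1)*al*(Ru**2-Rl**2)*G(m))

def logquad(f, lo, hi):              # integrate in log-x for robustness
    return quad(lambda u: f(np.exp(u))*np.exp(u),
                np.log(lo), np.log(hi), limit=200)[0]

hi = 1e2                             # phi is negligible beyond this
IA = logquad(lambda x2: (1 - Phi(gam*kap*x2 + Nn*gam/(PT*xi1), R1, R2))
             * phi(x2, R2, R), Nn*gam/(PT*xi2), hi)
IB = logquad(lambda x1: (1 - Phi(gam*x1/kap + Nn*gam/(PT*xi2), R2, R))
             * phi(x1, R1, R2), Nn*gam/(PT*xi1), hi)
print("p2 =", IA + IB)
\end{verbatim}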
\begin{lemma}
Based on our system model considered in Sections~\ref{sec:system} and~\ref{sec:fading}, under the fading scenario with region division approach, the average number of successful BNs given that only one BN from the near subregion accesses the reader is
\begin{align}\label{eq:m1nr}
\bar{\mathcal{M}}_{1\textrm{near}}&=1-\Phi^A\left(\frac{\mathcal{N}\gamma}{P_T\xi_1}\right),
\end{align}
\noindent and the average number of successful BNs given that only one BN from the far subregion accesses the reader is
\begin{align}\label{eq:m1fr}
\bar{\mathcal{M}}_{1\textrm{far}}&=1-\Phi^B\left(\frac{\mathcal{N}\gamma}{P_T\xi_2}\right).
\end{align}
\end{lemma}
Since the derivation is similar to the proof of Lemma 2, we skip it here for the sake of brevity.
\subsection{Power Division Approach}\label{sec:powerdivision}
\ifCLASSOPTIONpeerreview
\else
\begin{figure*}[!t]
\normalsize
\begin{align}\label{eq:p2newnew}
p_2\!=\left\{ \begin{array}{ll}
0,&{\tilde{\beta}\leq\frac{\mathcal{N}\gamma}{P_T\xi_2} ;}\\
\mathlarger{\int}_{\textrm{min}\!\left\{\tilde{\beta},\textrm{max}\!\left\{\frac{\mathcal{N}\gamma}{P_T\xi_2},\frac{\tilde{\beta}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2}\right\}\right\}}^{\tilde{\beta}}\left(1-\frac{\Phi\left(\gamma\kappa x_2'+\frac{\mathcal{N}\gamma}{P_T\xi_1},R_1,R\right)-\Phi\left(\tilde{\beta},R_1,R\right)}{1-\Phi\left(\tilde{\beta},R_1,R\right)}\right)\frac{\phi\left(x_2',R_1,R\right)}{\Phi\left(\tilde{\beta},R_1,R\right)}\textup{d}x_2'\\
+\frac{\Phi\left(\textrm{min}\!\left\{\tilde{\beta},\textrm{max}\!\left\{\frac{\mathcal{N}\gamma}{P_T\xi_2},\frac{\tilde{\beta}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2}\right\}\right\},R_1,R\right)
-\Phi\left(\frac{\mathcal{N}\gamma}{P_T\xi_2},R_1,R\right)}{\Phi\left(\tilde{\beta},R_1,R\right)},&{\tilde{\beta}>\frac{\mathcal{N}\gamma}{P_T\xi_2} ;}\\
\end{array} \right.
\end{align}
\begin{align}\label{eq:p1newnew}
p_1=&\mathlarger{\int}_{\textrm{min}\!\left\{\tilde{\beta},\frac{\mathcal{N}\gamma}{P_T\xi_2},\textrm{max}\!\left\{0,\frac{\tilde{\beta}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2}\right\}\right\}}^{\textrm{min}\left\{\tilde{\beta}, \frac{\mathcal{N}\gamma}{P_T\xi_2}\right\}}\left(1-\frac{\Phi\left(\gamma\kappa x_2'+\frac{\mathcal{N}\gamma}{P_T\xi_1},R_1,R\right)-\Phi\left(\tilde{\beta},R_1,R\right)}{1-\Phi\left(\tilde{\beta},R_1,R\right)}\right)\frac{\phi\left(x_2',R_1,R\right)}{\Phi\left(\tilde{\beta},R_1,R\right)}\textup{d}x_2'\nonumber\\
&+\frac{\Phi\left(\textrm{min}\!\left\{\tilde{\beta},\frac{\mathcal{N}\gamma}{P_T\xi_2},\textrm{max}\!\left\{0,\frac{\tilde{\beta}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2}\right\}\right\},R_1,R\right)}{\Phi\left(\tilde{\beta},R_1,R\right)}.
\end{align}
\hrulefill
\vspace*{4pt}
\vspace{-0.05 in}
\end{figure*}
\fi
Under the power division approach, rather than pairing the BNs from different subregions, the reader pairs the BNs with different power levels. Specifically, for the reader, there is a pre-defined threshold $\beta$ and a training period at the start of each time slot. By comparing the threshold $\beta$ with the instantaneous backscattered signal power from each node, the reader categorizes the BNs into a high power level group and a low power level group. Correspondingly, each BN can select its reflection coefficient by comparing its received power with the threshold $(1-\xi_1)\sqrt{P_T\beta/\xi_1}$\footnote{In the training period, all the BNs' reflection coefficients are assumed to be $\xi_1$.}. If the received power is greater than the threshold, this BN belongs to the high power level group and its reflection coefficient will be set to $\xi_1$. Otherwise, it belongs to the low power level group and the reflection coefficient is set to $\xi_2$.
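A minimal sketch of the BN-side decision rule just described is given below; the helper name \texttt{choose\_reflection} and all numeric values are hypothetical, and only the threshold rule quoted above is assumed.

\begin{verbatim}
# Hedged sketch of the per-BN reflection coefficient selection under
# the power division approach (hypothetical helper and values).
import math

def choose_reflection(P_rx, P_T, beta, xi1, xi2):
    """Return xi1 (high power group) if the received power exceeds
    (1-xi1)*sqrt(P_T*beta/xi1), else xi2 (low power group)."""
    threshold = (1 - xi1)*math.sqrt(P_T*beta/xi1)
    return xi1 if P_rx > threshold else xi2

print(choose_reflection(P_rx=2e-6, P_T=10**3.5, beta=1e-9,
                        xi1=0.7, xi2=0.05))
\end{verbatim}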
According to the principle of the power division approach, $p_{\textrm{near}}$ can be interpreted as the probability that the backscattered signal power for the node is greater than the threshold $\beta$. Thus, $p_{\textrm{near}}$ can be written as $p_{\textrm{near}}=1-\Phi\left(\tilde{\beta},R_1,R\right)$, where $\tilde{\beta}\triangleq \beta/(P_T \xi_1)$ is the normalized threshold. The key results for $p_2$, $p_1$, $\bar{\mathcal{M}}_{1\textrm{near}}$ and $\bar{\mathcal{M}}_{1\textrm{far}}$ are given in the following lemmas.
\ifCLASSOPTIONpeerreview
\begin{lemma}
Based on our system model considered in Sections~\ref{sec:system} and~\ref{sec:fading}, under the fading scenario with power division approach, the probability that the signals from two BNs are successfully decoded is
\begin{align}\label{eq:p2newnew}
p_2\!=\left\{ \begin{array}{ll}
0,&{\tilde{\beta}\leq\frac{\mathcal{N}\gamma}{P_T\xi_2} ;}\\
\mathlarger{\int}_{\textrm{min}\!\left\{\tilde{\beta},\textrm{max}\!\left\{\frac{\mathcal{N}\gamma}{P_T\xi_2},\frac{\tilde{\beta}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2}\right\}\right\}}^{\tilde{\beta}}\left(1-\frac{\Phi\left(\gamma\kappa x_2'+\frac{\mathcal{N}\gamma}{P_T\xi_1},R_1,R\right)-\Phi\left(\tilde{\beta},R_1,R\right)}{1-\Phi\left(\tilde{\beta},R_1,R\right)}\right)\frac{\phi\left(x_2',R_1,R\right)}{\Phi\left(\tilde{\beta},R_1,R\right)}\textup{d}x_2'\\
+\frac{\Phi\left(\textrm{min}\!\left\{\tilde{\beta},\textrm{max}\!\left\{\frac{\mathcal{N}\gamma}{P_T\xi_2},\frac{\tilde{\beta}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2}\right\}\right\},R_1,R\right)
-\Phi\left(\frac{\mathcal{N}\gamma}{P_T\xi_2},R_1,R\right)}{\Phi\left(\tilde{\beta},R_1,R\right)},&{\tilde{\beta}>\frac{\mathcal{N}\gamma}{P_T\xi_2} ;}\\
\end{array} \right.
\end{align}
\noindent and the probability that the signal from only one BN is successfully decoded is given by
\begin{align}\label{eq:p1newnew}
p_1=&\mathlarger{\int}_{\textrm{min}\!\left\{\tilde{\beta},\frac{\mathcal{N}\gamma}{P_T\xi_2},\textrm{max}\!\left\{0,\frac{\tilde{\beta}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2}\right\}\right\}}^{\textrm{min}\left\{\tilde{\beta}, \frac{\mathcal{N}\gamma}{P_T\xi_2}\right\}}\left(1-\frac{\Phi\left(\gamma\kappa x_2'+\frac{\mathcal{N}\gamma}{P_T\xi_1},R_1,R\right)-\Phi\left(\tilde{\beta},R_1,R\right)}{1-\Phi\left(\tilde{\beta},R_1,R\right)}\right)\frac{\phi\left(x_2',R_1,R\right)}{\Phi\left(\tilde{\beta},R_1,R\right)}\textup{d}x_2'\nonumber\\
&+\frac{\Phi\left(\textrm{min}\!\left\{\tilde{\beta},\frac{\mathcal{N}\gamma}{P_T\xi_2},\textrm{max}\!\left\{0,\frac{\tilde{\beta}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2}\right\}\right\},R_1,R\right)}{\Phi\left(\tilde{\beta},R_1,R\right)}.
\end{align}
\end{lemma}
\else
\begin{lemma}
Based on our system model considered in Sections~\ref{sec:system} and~\ref{sec:fading}, under the fading scenario with power division approach, the probability that the signals from two BNs are successfully decoded is~\eqref{eq:p2newnew} and the probability that the signal from only one BN is successfully decoded is given by~\eqref{eq:p1newnew}, as shown at the top of this page.
\end{lemma}
\fi
\begin{proof}
Let $x_1'$ represent the normalized instantaneous received power from a BN belonging to the high power level group, normalized by $P_T$ and $\xi_1$; its CDF can be expressed as $F_{x_1'}(x_1')=\frac{\Phi\left(x_1',R_1,R\right)-\Phi\left(\tilde{\beta},R_1,R\right)}{1-\Phi\left(\tilde{\beta},R_1,R\right)}$, where $x_1'\in[\tilde{\beta},\infty)$. Similarly, let
$x_2'$ denote the normalized instantaneous received power from a BN belonging to the low power level group; its CDF is $F_{x_2'}(x_2')=\frac{\Phi\left(x_2',R_1,R\right)}{\Phi\left(\tilde{\beta},R_1,R\right)}$, where $x_2'\in(0,\tilde{\beta})$.
Clearly, under the power division approach, the decoding order is always from the high power level group to the low power level group. $p_2$ and $p_1$ are then written as $p_2=\mathbb{E}_{x_1',x_2'}\left[\Pr\left(\frac{P_T\xi_1x_1'}{P_T\xi_2x_2'+\mathcal{N}}\geq\gamma \&\&\frac{P_T\xi_2x_2'}{\mathcal{N}}\geq\gamma\right)\right]$ and $p_1=\mathbb{E}_{x_1',x_2'}\left[\Pr\left(\frac{P_T\xi_1x_1'}{P_T\xi_2x_2'+\mathcal{N}}\geq\gamma \&\&\frac{P_T\xi_2x_2'}{\mathcal{N}}<\gamma\right)\right]$, respectively. Following a derivation approach similar to that in Appendix A, we arrive at the results in~\eqref{eq:p2newnew} and~\eqref{eq:p1newnew}.
\end{proof}
\begin{lemma}
Based on our system model considered in Sections~\ref{sec:system} and~\ref{sec:fading}, under the fading scenario with power division approach, the average number of successful BNs given that only one BN from the near subregion accesses the reader is
\begin{align}
\bar{\mathcal{M}}_{1\textrm{near}} &= 1-\Phi\left(\frac{\mathcal{N}\gamma}{\xi_1P_T},R_1,R\right)\textbf{1}\left(\frac{\mathcal{N}\gamma}{\xi_1P_T}>\tilde{\beta} \right),\label{eq:m1nearnewnew}
\end{align}
\noindent and the average number of successful BNs given that only one BN from the far subregion accesses the reader is
\begin{align}\label{eq:m1farnewnew}
\bar{\mathcal{M}}_{1\textrm{far}} &= \left(1-\Phi\left(\frac{\mathcal{N}\gamma}{\xi_2P_T},R_1,R\right)\right)\textbf{1}\left(\frac{\mathcal{N}\gamma}{\xi_2P_T}<\tilde{\beta} \right).
\end{align}
\end{lemma}
The derivation is similar to the proof of Lemma 2; hence, we skip it here for the sake of brevity.
\begin{remark}
The reflection coefficient selection criterion for the region division approach depends on the subregion radius and the channel threshold, while the selection criterion for the power division approach strongly relies on the threshold $\beta$ and the channel threshold. Following the same derivation procedure as for Proposition 1, to achieve better system performance, we can compute the relationship between $\xi_1$ and $\xi_2$ as
\begin{align}\label{eq:design2}
\xi_1\geq \max\left\{\xi_{2},\gamma\left (\xi_2+\frac{\mathcal{N}}{P_T\tilde{\beta}} \right)\right\}.
\end{align}
\end{remark}
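For instance, a direct evaluation of~\eqref{eq:design2} with illustrative values reads:

\begin{verbatim}
# Hedged numeric illustration of the selection rule (design2);
# all values are hypothetical.
PT, Nn = 10**3.5, 1e-10      # 35 dBm transmit power, -100 dBm noise (mW)
gam = 10**0.5                # 5 dB channel threshold (linear scale)
xi2, beta_t = 0.05, 1e-9     # beta_t is the normalized threshold
xi1_min = max(xi2, gam*(xi2 + Nn/(PT*beta_t)))
print("xi1 must satisfy xi1 >=", xi1_min)   # ~0.26 here
\end{verbatim}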
\subsection{Summary}\label{sec:fading:summary}
Table~\ref{tb:1} summarizes the key factors used to calculate the average number of successfully decoded bits, $\bar{\mathcal{C}}_{\textrm{suc}}$, under the fading-free and fading scenarios with different pairing approaches. The general expression of $\bar{\mathcal{C}}_{\textrm{suc}}$ for two-node pairing is given in~\eqref{eq:general:C}, where $\bar{\mathcal{M}}_2=p_1+2p_2$.
\begin{table*}[t]
\centering
\caption{Key factors determining $\bar{\mathcal{C}}_{\textrm{suc}}$ for two-node pairing case.}\label{tb:1}
\begin{tabular}{|c||c|c|c|c|c|} \hline
Scenario & $p_{\textrm{near}}$ & $p_2$ & $p_1$ & $\bar{\mathcal{M}}_{1\textrm{near}}$ & $\bar{\mathcal{M}}_{1\textrm{far}}$\\ \hline
\multirow{2}{*}{\makecell[c]{Fading-free\\Fading: region division}} & \multirow{2}{*}{$\frac{R_2^2-R_1^2}{R^2-R_1^2}$} & \multirow{2}{*}{\makecell[c]{\eqref{eq:nofading:p2}\\\eqref{eq:general:p2}} } &\multirow{2}{*}{\makecell[c]{\eqref{eq:nofading:p1}\\\eqref{eq:general:p1}}} &\multirow{2}{*}{\makecell[c]{\eqref{eq:N1case}\\\eqref{eq:m1nr}}}&\multirow{2}{*}{\makecell[c]{\eqref{eq:N1case1}\\ \eqref{eq:m1fr}}}
\\ \cline{1-1}\cline{3-6}
&&&& &\\ \cline{1-6}
Fading: power division&$1-\Phi\left(\tilde{\beta},R_1,R\right)$&\eqref{eq:p2newnew} & \eqref{eq:p1newnew}& \eqref{eq:m1nearnewnew}& \eqref{eq:m1farnewnew} \\ \hline
\end{tabular}
\end{table*}
\ifCLASSOPTIONpeerreview
\else
\begin{figure*}[!t]
\centering
\subfigure[Two-node pairing.]{\label{fig1b}\includegraphics[width=0.32\textwidth]{fig1b}}
\subfigure[Multiple-node multiplexing.]{ \label{fig1a}\includegraphics[width=0.32\textwidth]{fig1a}}
\subfigure[Multiple-node multiplexing (simulation only).]{ \label{fig1c}\includegraphics[width=0.32\textwidth]{fig1c}}
\caption{Channel threshold $\gamma$ versus (a) the normalized average number of successfully decoded bits under two-node pairing; (b) the average number of successful BNs $\bar{\mathcal{M}}_{N}$ given $N$ multiplexing nodes and (c) the average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$ for general multiplexing case.}\label{fig_valid}
\vspace*{4pt}
\vspace{-0.05 in}
\end{figure*}
\fi
\section{Numerical Results}\label{sec:result}
In this section, we present numerical results to investigate the performance of the NOMA-enhanced BackCom system. In order to validate the numerical results, we also present simulation results which are generated using Matlab and are averaged over $10^6$ simulation runs. Unless specified otherwise, the following values of the main system parameters are adopted: the outer radius of the coverage zone $R=65$ m, the inner radius of the coverage zone $R_1=1$ m, the number of BNs $M=60$, the path-loss exponent $\alpha=2.5$ for the Nakagami-$m$ ($m=4$) fading scenario and the fading-free scenario while $\alpha=4$ for the Rayleigh fading case, the reader's transmit power $P_T=35$ dBm, the noise power $\mathcal{N}=-100$ dBm, and the product of the time slot and the data rate $\mathcal{L}\mathcal{R}=60$ bits. In addition, for the region division approach, we set $R_{i}=\sqrt{\frac{(i-1)R^2+(N+1-i)R_1^2}{N}}$ for $i={2,...,N}$. As for the power division approach, we find a $\tilde{\beta}$ value that makes $p_{\textrm{near}}=0.5$. Note that such a value ensures that the average number of BNs in each group is the same. The impact of $R_2$ and $\tilde{\beta}$ is analyzed in Section~\ref{sec:result::r2}.
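The following hedged sketch reproduces these two setup choices, i.e., the subregion radii $R_i$ for the region division approach and a normalized threshold $\tilde{\beta}$ calibrated such that $p_{\textrm{near}}=0.5$ for the power division approach; $\Phi$ is the CDF of Lemma 1 and all values follow the setup above.

\begin{verbatim}
# Hedged sketch of the simulation setup choices (illustrative).
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaincc, gamma as G

m, al, R1, R = 4, 2.5, 1.0, 65.0

def uinc(a, z):
    return gammaincc(a, z)*G(a)

def Phi(x, Rl, Ru):                 # CDF from Lemma 1
    s = np.sqrt(x)
    gen = uinc(m+2/al, m*Rl**al*s) - uinc(m+2/al, m*Ru**al*s)
    num = (Ru**2*uinc(m, m*Ru**al*s) - Rl**2*uinc(m, m*Rl**al*s)
           + (m*s)**(-2/al)*gen)
    return 1 - num/((Ru**2-Rl**2)*G(m))

Nsub = 2                            # number of groups (two-node pairing)
Ri = [np.sqrt(((i-1)*R**2 + (Nsub+1-i)*R1**2)/Nsub)
      for i in range(2, Nsub+1)]
print("region division radii:", Ri)  # R2 = sqrt((R1^2+R^2)/2)

# power division: solve 1 - Phi(beta_t, R1, R) = 0.5 for beta_t
beta_t = brentq(lambda b: Phi(b, R1, R) - 0.5, 1e-14, 1e2)
print("normalized threshold beta_t =", beta_t)
\end{verbatim}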
\subsection{Analysis Validation}
\ifCLASSOPTIONpeerreview
\begin{figure}[t]
\centering
\subfigure[Two-node pairing.]{\label{fig1b}\includegraphics[width=0.45\textwidth]{fig1b}}\\
\subfigure[Multiple-node multiplexing.]{ \label{fig1a}\includegraphics[width=0.45\textwidth]{fig1a}}
\subfigure[Multiple-node multiplexing (simulation only).]{ \label{fig1c}\includegraphics[width=0.45\textwidth]{fig1c}}
\caption{Channel threshold $\gamma$ versus (a) the normalized average number of successfully decoded bits under two-node pairing; (b) the average number of successful BNs $\bar{\mathcal{M}}_{N}$ given $N$ multiplexing nodes and (c) the average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$ for general multiplexing case.}\label{fig_valid}
\end{figure}
\fi
Fig.~\ref{fig_valid} plots the channel threshold $\gamma$ versus the (normalized) average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$ and the average number of successful BNs $\bar{\mathcal{M}}_{N}$ given $N$ multiplexing nodes for different fading and multiplexing scenarios. Note that the \textit{normalized average number of successfully decoded bits} is defined as the average number of successfully decoded bits divided by the total number of bits transmitted by BNs, where the latter is a constant for the given system setup and is given by the formulation in Remark 2. We set $\xi_1=0.7$, $\xi_2=0.5$, $\xi_3=0.3$, $\xi_4=0.1$ and $\xi_5=0.05$. The curves in Fig.~\ref{fig1c} are generated using simulations only. From Fig.~\ref{fig1b} and the $N=2$ curve in Fig.~\ref{fig1a}, we can see that the simulation results match the analytical results perfectly, as expected. As for the $N\geq3$ curves in Fig.~\ref{fig1a}, we find that, when the channel threshold is large, the analytical results deviate slightly from the simulation results. This is due to the independence assumption we made when calculating $p_k$ for the multiple-node multiplexing scenario. The (close) match of the simulation and analytical results demonstrates the accuracy of our derivations. Furthermore, comparing Fig.~\ref{fig1a} with Fig.~\ref{fig1c}, we find that the trends for $\bar{\mathcal{M}}_{N}$ and $\bar{\mathcal{C}}_{\textrm{suc}}$ are the same. This indicates that $\bar{\mathcal{M}}_{N}$ is a reasonable metric for investigating the performance.
As shown in Fig.~\ref{fig_valid}, when the considered reflection coefficient sets satisfy the criteria proposed in Proposition 1 and Remark 4 for a certain value of $\gamma$, the curves for the fading-free and Nakagami fading (i.e., $\alpha=2.5$, $m=4$) scenarios are constant. This is the best achievable system performance, where all the transmitted bits are successfully delivered and the average number of successfully decoded bits is the same as the total number of bits transmitted by BNs. Note that the normalized $\bar{\mathcal{C}}_{\textrm{suc}}$ under Rayleigh fading can only achieve about half of the best performance; hence, in the following subsections, we focus on the fading-free and Nakagami fading (i.e., $\alpha=2.5$, $m=4$) scenarios.
\subsection{Effect of the Reflection Coefficient for Two-Node Pairing}\label{sec:result:4}
\ifCLASSOPTIONpeerreview
\begin{figure}[t]
\centering
\includegraphics[width=0.5 \textwidth]{fig2}
\caption{The reflection coefficient of the far backscatter group $\xi_2$ versus the normalized average number of successfully decoded bits for two-node pairing.}
\label{fig_sec2}
\end{figure}
\fi
In this subsection, we investigate the effect of the reflection coefficient for the two-node pairing case and examine the proposed reflection coefficient selection criteria. Fig.~\ref{fig_sec2} plots the reflection coefficient of the far backscatter group $\xi_2$ versus the normalized average number of successfully decoded bits. We set $\xi_1=0.7$.
From Fig.~\ref{fig_sec2}, we can see that the general trend is that the normalized $\bar{\mathcal{C}}_{\textrm{suc}}$ decreases as $\xi_2$ increases. This shows that a smaller value of $\xi_2$ can benefit the system. This is because, by reducing $\xi_2$, the interference from the weaker signal is reduced; the stronger signal thus has a higher chance of being decoded successfully. However, $\xi_2$ cannot be set too small, as the curves begin to decrease when $\xi_2$ approaches an extremely small value (e.g., $10^{-4}$). When $\xi_2$ is extremely small, the weaker signal is less likely to be decoded successfully (i.e., its SNR is very small most of the time), which leads to the reduction of $p_2$, $\bar{\mathcal{M}}_{2}$ and, correspondingly, $\bar{\mathcal{C}}_{\textrm{suc}}$.
We also mark the maximum $\xi_2$ and the corresponding normalized $\bar{\mathcal{C}}_{\textrm{suc}}$ which satisfies~\eqref{eq:designguide} or~\eqref{eq:design2} in Fig.~\ref{fig_sec2}. We find that, under the fading-free scenario or fading case with the power division approach, the marked normalized $\bar{\mathcal{C}}_{\textrm{suc}}$ is equal to 1, which implies that the signals from all the paired BNs are successfully decoded and the system performance is consequently optimized. It also validates our proposed selection criteria. For the fading case with the region division approach, the proposed selection criterion still provides a good performance (i.e., the marked normalized $\bar{\mathcal{C}}_{\textrm{suc}}$ is 0.9265 for $\gamma=10$ dB and 0.9306 for $\gamma=5$ dB).
\ifCLASSOPTIONpeerreview
\else
\begin{figure}[t]
\centering
\includegraphics[width=0.45 \textwidth]{fig2}
\caption{The reflection coefficient of the far backscatter group $\xi_2$ versus the normalized average number of successfully decoded bits for two-node pairing.}
\label{fig_sec2}
\end{figure}
\fi
\subsection{Effect of the Reflection Coefficient for Multiple-Node Multiplexing}
\ifCLASSOPTIONpeerreview
\begin{figure}[t]
\centering
\subfigure[$N=3$.]{\label{fig3a}\includegraphics[width=0.45\textwidth]{fig3a}}
\mbox{\hspace{0.5cm}}
\subfigure[$N=5$.]{ \label{fig3b}\includegraphics[width=0.45\textwidth]{fig3b}}
\caption{The reflection coefficient of the first backscatter group $\xi_1$ versus the average number of successful BNs $\bar{\mathcal{M}}_{N}$ given $N$ multiplexing BNs.}\label{fig_sec3}
\end{figure}
\else
\begin{figure}[t]
\centering
\subfigure[$N=3$.]{\label{fig3a}\includegraphics[width=0.45\textwidth]{fig3a}}\\
\subfigure[$N=5$.]{ \label{fig3b}\includegraphics[width=0.45\textwidth]{fig3b}}
\caption{The reflection coefficient of the first backscatter group $\xi_1$ versus the average number of successful BNs $\bar{\mathcal{M}}_{N}$ given $N$ multiplexing BNs.}\label{fig_sec3}
\end{figure}
\fi
Fig.~\ref{fig_sec3} plots the reflection coefficient of the first backscatter group $\xi_1$ versus the average number of successful BNs $\bar{\mathcal{M}}_{N}$ given $N$ multiplexing BNs, under the fading-free scenario, for $N=3$ and $N=5$, respectively. In Fig.~\ref{fig3a}, we set $\xi_3=0.007$ and $\xi_2$ to the minimum value satisfying~\eqref{eq:designguide}. In Fig.~\ref{fig3b}, we set $\xi_5$ to the minimum value satisfying~\eqref{eq:designguide1} and $\xi_i$ (where $i \in [2,4]$) to the minimum value satisfying~\eqref{eq:designguide}. We also mark the minimum $\xi_1$ satisfying~\eqref{eq:designguide}. As expected, these curves increase as $\xi_1$ increases and then become constant (i.e., $\bar{\mathcal{M}}_{N}=N$) after the marked points. For the purpose of comparison, we also plot the curves obtained when the reflection coefficients for all the subregions are the same. It is clear that, by properly selecting the reflection coefficients, the performance of the BackCom system with NOMA can be greatly enhanced.
In addition, as shown in Fig.~\ref{fig3b}, when $N=5$ and $P_T=35$ dBm, the system can achieve the optimal performance when the channel threshold $\gamma$ is less than $8.5$ dB. If $\gamma$ increases further, the minimum $\xi_1$ satisfying~\eqref{eq:designguide} will be greater than one, which is infeasible. Thus, we have to increase the transmit power of the reader in order to set a smaller $\xi_5$. In Fig.~\ref{fig3b}, we plot the curve for $P_T=41.5$ dBm, which allows the system to achieve the best performance when $\gamma$ is less than 10 dB.
\subsection{Effect of $R_2$ and $\tilde{\beta}$}\label{sec:result::r2}
\ifCLASSOPTIONpeerreview
\begin{figure}[t]
\centering
\subfigure[Fading-free scenario.]{\label{fig4a}\includegraphics[width=0.45\textwidth]{fig4a}}\\
\subfigure[Fading: region division.]{ \label{fig4b}\includegraphics[width=0.45\textwidth]{fig4b}}
\subfigure[Fading: power division.]{ \label{fig4c}\includegraphics[width=0.45\textwidth]{fig4c}}
\caption{The average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$ versus (a) the radius $R_2$ under fading-free scenario; (b) the radius $R_2$ with fading and (c) the probability $p_{\textrm{near}}$ with power division approach.}\label{fig_sec4}
\end{figure}
\else
\begin{figure*}[!t]
\centering
\subfigure[Fading-free scenario.]{\label{fig4a}\includegraphics[width=0.32\textwidth]{fig4a}}
\subfigure[Fading: region division.]{ \label{fig4b}\includegraphics[width=0.32\textwidth]{fig4b}}
\subfigure[Fading: power division.]{ \label{fig4c}\includegraphics[width=0.32\textwidth]{fig4c}}
\caption{The average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$ versus (a) the radius $R_2$ under fading-free scenario; (b) the radius $R_2$ with fading and (c) the probability $p_{\textrm{near}}$ with power division approach.}\label{fig_sec4}
\vspace*{4pt}
\vspace{-0.05 in}
\end{figure*}
\fi
In this subsection, we investigate the impact of the radius $R_2$ for the region division approach and the threshold $\beta$ for the power division approach. To analyze the impact of system parameters, we focus on the metric $\bar{\mathcal{C}}_{\textrm{suc}}$ rather than the normalized $\bar{\mathcal{C}}_{\textrm{suc}}$, since the total number of bits transmitted by BNs varies for different system setups. Figs.~\ref{fig4a} and~\ref{fig4b} plot the radius $R_2$ versus the average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$ for the fading-free and fading scenarios, respectively. Fig.~\ref{fig4c} plots the probability $p_{\textrm{near}}$ versus $\bar{\mathcal{C}}_{\textrm{suc}}$ for the fading case with the power division approach. We set the channel threshold $\gamma=5$ dB and $\xi_2=0.05$. We also mark the maximum $\bar{\mathcal{C}}_{\textrm{suc}}$ reached in each case. From these figures, we can see that, when $p_{\textrm{near}}$ varies from 0 to 1 (equivalently, $R_2$ varies from $R_1$ to $R$ for the region division approach), $\bar{\mathcal{C}}_{\textrm{suc}}$ first increases and then decreases. When $\xi_1$ follows the selection criterion in Proposition 1 and Remark 4, the maximum $\bar{\mathcal{C}}_{\textrm{suc}}$ is achieved for $p_{\textrm{near}}=0.5$ (equivalently, $R_2=\sqrt{\frac{R_1^2+R^2}{2}}$ for the region division approach). This is due to the fact that, when the reflection coefficients follow the selection criterion in Proposition 1 or Remark 4, $\bar{\mathcal{M}}_{2}$ is always equal to 2, which is the best performance gain achievable by pairing BNs. Hence, the overall system benefits more when more BNs are paired. $p_{\textrm{near}}=0.5$ results in the highest probability that all BNs are paired, thereby maximizing the average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$.
For other scenarios, $p_{\textrm{near}}=0.5$ may not lead to the maximum $\bar{\mathcal{C}}_{\textrm{suc}}$. This is because, when $\bar{\mathcal{M}}_{2}$ is no longer equal to 2, both $\bar{\mathcal{M}}_{2}$ and the probability of different pairing cases (equivalently, $p_{\textrm{near}}$) are determined by $R_2$ or $\tilde{\beta}$. Varying $R_2$ or $\tilde{\beta}$ results in different values of $\bar{\mathcal{M}}_{2}$ and $p_{\textrm{near}}$, and the interplay of these two factors results in the different maximum $\bar{\mathcal{C}}_{\textrm{suc}}$ that can be achieved by the system. In addition, we find that the maximum $\bar{\mathcal{C}}_{\textrm{suc}}$ achieved by the system where $\bar{\mathcal{M}}_{2}<2$ is always less than the maximum $\bar{\mathcal{C}}_{\textrm{suc}}$ achieved by the system where $\bar{\mathcal{M}}_{2}=2$. This shows the importance of carefully selecting system parameters in order to achieve the best performance.
\subsection{Performance Gain Achieved by Applying NOMA to the BackCom System}
\ifCLASSOPTIONpeerreview
\begin{figure}[t]
\centering
\subfigure[The average number of successfully decoded bits.]{\label{fig5a}\includegraphics[width=0.45\textwidth]{fig5a}}
\mbox{\hspace{0.5cm}}
\subfigure[The ratio of $\bar{\mathcal{C}}_{\textrm{suc}}$ for BackCom system with/without NOMA.]{ \label{fig5c}\includegraphics[width=0.45\textwidth]{fig5c}}
\caption{Channel threshold $\gamma$ versus (a) the average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$ under $\alpha=2.5$, $m=4$ and (b) the ratio of $\bar{\mathcal{C}}_{\textrm{suc}}$ for BackCom system with/without NOMA.}\label{fig_sec5}
\end{figure}
\else
\begin{figure}[t]
\centering
\subfigure[The average number of successfully decoded bits.]{\label{fig5a}\includegraphics[width=0.45\textwidth]{fig5a}}\\
\subfigure[The ratio of $\bar{\mathcal{C}}_{\textrm{suc}}$ for BackCom system with/without NOMA.]{ \label{fig5c}\includegraphics[width=0.45\textwidth]{fig5c}}
\caption{Channel threshold $\gamma$ versus (a) the average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$ under $\alpha=2.5$, $m=4$ and (b) the ratio of $\bar{\mathcal{C}}_{\textrm{suc}}$ for BackCom system with/without NOMA.}\label{fig_sec5}
\end{figure}
\fi
\ifCLASSOPTIONpeerreview
\else
\begin{figure*}[!t]
\centering
\includegraphics[width=0.7 \textwidth]{appendix2}
\vspace{-5mm}
\caption{Illustration of expressions of $p_{2|y_2}$ and the valid range of $y_2$ when $\frac{R_1^{-2\alpha}}{\gamma\kappa}-\frac{R_2^{-2\alpha}}{\gamma\kappa}\geq R_2^{-2\alpha}-R^{-2\alpha}$.}
\label{fig_1ppendix2}
\vspace*{4pt}
\vspace{-0.05 in}
\end{figure*}
\fi
In this subsection, we evaluate the performance gain achieved by adopting NOMA in the BackCom system. For the purpose of comparison, we present the numerical results for the benchmark systems (i.e., the conventional system with/without NOMA and the BackCom system without NOMA). For the \textit{conventional communication system with NOMA}, the transmitting nodes are active devices and they use powered transceivers for the uplink communication. For the \textit{system without NOMA}, the BNs or conventional nodes access the reader in the pure TDMA fashion, i.e., only one BN/conventional node is scheduled to transmit its signal to the reader per mini-slot lasting $\frac{\mathcal{L}}{M}$ seconds. The analytical results for these benchmark systems can be derived using our analysis in this work. For the sake of brevity, we omit them here. Additionally, for a fair comparison among different communication systems, we assume that $\xi=0.7$ for all BNs and that the transmit power for all conventional nodes is set to the same value, i.e., $20$ dBm.
Fig.~\ref{fig5a} plots the channel threshold $\gamma$ versus the average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$. We first \textit{compare the BackCom system with the conventional system under the NOMA scenario}. As shown in this figure, under good channel conditions (i.e., when the channel tends to be LOS), the BackCom system has a larger average number of successfully decoded bits $\bar{\mathcal{C}}_{\textrm{suc}}$ than the conventional system. This is mainly caused by the double attenuation of the received power at the reader for the BackCom system. This double attenuation effect can boost the performance of the BackCom system with NOMA under good channel conditions. When the channel condition is good, the BNs are very likely to be successfully decoded alone. The double attenuation effect makes the channel gains of the stronger and weaker signals more distinguishable; hence, introducing NOMA (i.e., bringing in the interference from the weaker signal) has only a small impact on the system.
We then \textit{compare the BackCom system with and without NOMA}, and we also plot the ratio of $\bar{\mathcal{C}}_{\textrm{suc}}$ for these two systems in Fig.~\ref{fig5c}. From this figure, we can see that the BackCom system with NOMA generally leads to a better performance than the BackCom system without NOMA regardless of channel conditions. Under the case of the same reflection coefficient, the system with NOMA allows two BNs to access the reader at the same time, which makes the reader experience interference from the weaker signal when decoding the stronger signal. Hence, it is possible that fewer BNs can be successfully decoded when BNs are paired. However, in terms of the average number of successfully decoded bits, since the time of each mini-slot under NOMA is doubled, the BackCom system with NOMA can achieve a larger $\bar{\mathcal{C}}_{\textrm{suc}}$ than the system without NOMA. This illustrates why it is beneficial to apply NOMA to the BackCom system. In particular, by setting proper reflection coefficients for the BackCom system with NOMA, the performance gain can be further improved.
\section{Conclusions}\label{sec:summary}
In this work, we have proposed a BackCom system enhanced by power-domain NOMA, i.e., multiplexing the BNs located in different spatial regions or with different reflected power levels. In particular, the reflection coefficients for the BNs from different groups are set to different values such that NOMA is fully utilized (i.e., the channel gain difference among multiplexing BNs is increased). In order to optimize the system performance, we provided the criteria for choosing the reflection coefficients for different groups of BNs. We also derived the analytical results for the average number of successfully decoded bits for the two-node pairing case and the average number of successful BNs for the general multiplexing case. These derived results validated our proposed selection criteria. Our numerical results illustrated that NOMA generally yields a much larger performance gain in the BackCom system than in the conventional system. This demonstrates the significance of adopting NOMA in the BackCom system. Future work can consider the multiple-reader scenario and BNs powered by power beacons or ambient RF signals.
\section*{Appendix A}
\begin{proof}
\ifCLASSOPTIONpeerreview
\begin{figure}
\centering
\includegraphics[width=0.7 \textwidth]{appendix2}
\vspace{-5mm}
\caption{Illustration of expressions of $p_{2|y_2}$ and the valid range of $y_2$ when $\frac{R_1^{-2\alpha}}{\gamma\kappa}-\frac{R_2^{-2\alpha}}{\gamma\kappa}\geq R_2^{-2\alpha}-R^{-2\alpha}$.}
\label{fig_1ppendix2}
\end{figure}
\fi
Since we consider the $\xi_1\geq\xi_2$ scenario, the decoding order is always from the near BN to the far BN. The probability that both BNs are successfully decoded is given by
\ifCLASSOPTIONpeerreview
\begin{align}
p_2&=\Pr\left(\frac{P_T\xi_1 r_1^{-2\alpha}}{P_T\xi_2 r_2^{-2\alpha}+\mathcal{N}}\geq\gamma\,\,\&\& \,\,\frac{P_T\xi_2 r_2^{-2\alpha}}{\mathcal{N}}\geq\gamma\right)\nonumber\\
&=\Pr\left(y_1\geq\gamma\kappa y_2+\frac{\mathcal{N}\gamma}{P_T\xi_1}\,\,\&\&\,\,y_2\geq\frac{\mathcal{N}\gamma}{P_T\xi_2}\right)\nonumber\\
&=\left\{ \begin{array}{ll}
0,\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,\,\,{\frac{\mathcal{N}\gamma}{P_T\xi_2}\geq R_2^{-2\alpha};} \\
\mathlarger{\int}_{\textrm{min}\left\{\frac{\mathcal{N}\gamma}{P_T\xi_2},R^{-2\alpha} \right\}}^{R_2^{-2\alpha}}p_{2|y_2}f_{y_2}(y_2)\textup{d}y_2,\quad\quad\quad\quad{\frac{\mathcal{N}\gamma}{P_T\xi_2}<R_2^{-2\alpha};}\\
\end{array} \right.
\end{align}
\else
\begin{align}
p_2&=\Pr\left(\frac{P_T\xi_1 r_1^{-2\alpha}}{P_T\xi_2 r_2^{-2\alpha}+\mathcal{N}}\geq\gamma\,\,\&\& \,\,\frac{P_T\xi_2 r_2^{-2\alpha}}{\mathcal{N}}\geq\gamma\right)\nonumber\\
&=\Pr\left(y_1\geq\gamma\kappa y_2+\frac{\mathcal{N}\gamma}{P_T\xi_1}\,\,\&\&\,\,y_2\geq\frac{\mathcal{N}\gamma}{P_T\xi_2}\right)\nonumber\\
&=\left\{ \begin{array}{ll}
0,\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\,{\frac{\mathcal{N}\gamma}{P_T\xi_2}\geq R_2^{-2\alpha};} \\
\mathlarger{\int}_{\textrm{min}\left\{\frac{\mathcal{N}\gamma}{P_T\xi_2},R^{-2\alpha} \right\}}^{R_2^{-2\alpha}}p_{2|y_2}f_{y_2}(y_2)\textup{d}y_2,\quad\,\,{\frac{\mathcal{N}\gamma}{P_T\xi_2}<R_2^{-2\alpha};}\\
\end{array} \right.
\end{align}
\fi
\noindent where $y_1\triangleq r_1^{-2\alpha}$ with PDF $f_{y_1}(y_1)=\frac{y_1^{-\frac{1}{\alpha}-1}}{\alpha(R_2^2-R_1^2)}$ and $y_1\in\left[R_2^{-2\alpha},R_1^{-2\alpha}\right]$, $y_2\triangleq r_2^{-2\alpha}$ with PDF $f_{y_2}(y_2)=\frac{y_2^{-\frac{1}{\alpha}-1}}{\alpha(R^2-R_2^2)}$ and $y_2\in\left[R^{-2\alpha},R_2^{-2\alpha}\right]$, and $p_{2|y_2}$ is the conditional probability of $p_2$.
We first consider the case of $\frac{\mathcal{N}\gamma}{P_T\xi_2}<R^{-2\alpha}$, which implies that the weaker signal can always be successfully decoded given that the stronger signal is successfully decoded. Note that when $\gamma\kappa y_2+\frac{\mathcal{N}\gamma}{P_T\xi_1}\leq \left(y_1\right)_{\textrm{min}}=R_2^{-2\alpha}$ (i.e., $y_2\leq \frac{R_2^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2}$), the conditional probability $p_{2|y_2}$ is always equal to one. When $\gamma\kappa y_2+\frac{\mathcal{N}\gamma}{P_T\xi_1}\geq \left(y_1\right)_{\max}=R_1^{-2\alpha}$ (i.e., $y_2\geq \frac{R_1^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2}$), $p_{2|y_2}$ is always equal to zero. For the remaining range of $y_2$, $p_{2|y_2}=\int_{\gamma\kappa y_2+\frac{\mathcal{N}\gamma}{P_T\xi_1}}^{R_1^{-2\alpha}}\frac{y_1^{-\frac{1}{\alpha}-1}}{\alpha(R_2^2-R_1^2)}\textup{d}y_1=\frac{\left(\gamma\kappa y_2+\frac{\mathcal{N}\gamma}{P_T\xi_1}\right)^{-\frac{1}{\alpha}}-R_1^2}{R_2^2-R_1^2}$.
Based on the expressions of $p_{2|y_2}$ and $y_2$'s valid range, when $\frac{R_1^{-2\alpha}}{\gamma\kappa}-\frac{R_2^{-2\alpha}}{\gamma\kappa}\geq R_2^{-2\alpha}-R^{-2\alpha}$, we can plot a diagram in Fig.~\ref{fig_1ppendix2} to help finding the integration limits. From Fig.~\ref{fig_1ppendix2}, we obtain the final expression of $p_2$ as
\begin{itemize}
\item $\gamma\leq\frac{R_2^{-2\alpha}}{\kappa R_2^{-2\alpha}+\frac{\mathcal{N}}{P_T\xi_1}}$: $p_2=1$;
\item $\gamma\geq \frac{R_1^{-2\alpha}}{\kappa R^{-2\alpha}+\frac{\mathcal{N}}{P_T\xi_1}}$: $p_2=0$;
\item Other range:
$p_2=\mathlarger{\int}_{R^{-2\alpha}}^{\max\left\{\frac{R_2^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2},R^{-2\alpha} \right\}}f_{y_2}(y_2)\textup{d}y_2+\mathlarger{\int}_{\max\left\{\frac{R_2^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2},R^{-2\alpha} \right\}}^{\min\left\{\frac{R_1^{-2\alpha}}{\gamma\kappa}-\frac{\mathcal{N}}{P_T\xi_2},R_2^{-2\alpha} \right\}}\frac{\left(\gamma\kappa y_2+\frac{\mathcal{N}\gamma}{P_T\xi_1}\right)^{-\frac{1}{\alpha}}-R_1^2}{R_2^2-R_1^2} f_{y_2}(y_2)\textup{d}y_2$.
\end{itemize}
We note that the above expressions of $p_2$ also hold for $\frac{R_1^{-2\alpha}}{\gamma\kappa}-\frac{R_2^{-2\alpha}}{\gamma\kappa}< R_2^{-2\alpha}-R^{-2\alpha}$. For the other cases, similar steps can be adopted to work out $p_2$. After further computation and simplification, we arrive at the result in~\eqref{eq:nofading:p2}; a numerical sketch evaluating this piecewise expression is given after the proof.
\end{proof}
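As a numerical cross-check of the piecewise expression derived above, the following hedged sketch evaluates the fading-free $p_2$ for one illustrative parameter set (not the exact setup used in Section~\ref{sec:result}).

\begin{verbatim}
# Hedged numerical sketch of the piecewise fading-free p_2
# (illustrative parameter values only).
import numpy as np
from scipy.integrate import quad

al, R1, R2, R = 2.5, 1.0, 46.0, 65.0
PT, Nn, gam = 10**3.5, 1e-10, 1e2     # 35 dBm, -100 dBm (mW), 20 dB
xi1, xi2 = 0.7, 0.05
kap = xi2/xi1

g1 = R2**(-2*al)/(kap*R2**(-2*al) + Nn/(PT*xi1))
g2 = R1**(-2*al)/(kap*R**(-2*al) + Nn/(PT*xi1))
if gam <= g1:
    p2 = 1.0
elif gam >= g2:
    p2 = 0.0
else:
    f2 = lambda y2: y2**(-1/al-1)/(al*(R**2 - R2**2))   # PDF of y2
    pc = lambda y2: ((gam*kap*y2 + Nn*gam/(PT*xi1))**(-1/al) - R1**2) \
                    / (R2**2 - R1**2)                   # p_{2|y2}
    lo = max(R2**(-2*al)/(gam*kap) - Nn/(PT*xi2), R**(-2*al))
    up = min(R1**(-2*al)/(gam*kap) - Nn/(PT*xi2), R2**(-2*al))
    p2 = quad(f2, R**(-2*al), lo)[0] \
         + quad(lambda y: pc(y)*f2(y), lo, up)[0]
print("p2 =", p2)
\end{verbatim}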
\bibliographystyle{IEEEtran}
\section{Introduction}
The role played by helicity in turbulent flows is not completely
understood. Helicity is relevant in many atmospheric processes, such
as rotating convective (supercell) thunderstorms, the predictability
of which may be enhanced because of its presence \cite{heli}.
However, helicity, which is a conserved quantity of the
three-dimensional Euler equation, plays no role in Kolmogorov's
original theory of turbulence. Later studies of absolute equilibrium
ensembles for truncated helical Euler flows by Kraichnan
\cite{KRA73} supported a scenario in which, in helical turbulent
flows, both the energy and the helicity cascade towards small scales
\cite{Helcas-BFLLM}, a phenomenon recently verified in numerical
simulations \cite{BorueOrszag97,Eyink03,MininniPouquet06}. The
thermalization dynamics of the non-helical spectrally truncated
Euler flows were studied in \cite{CBDB-echel}. However, Kraichnan
helical equilibrium solutions were never directly observed in
simulations. Note that the Galerkin truncated non-helical Euler
dynamics was recently found to emerge as the asymptotic limit of
high order hyperviscous hydrodynamics and that bottlenecks observed
in viscous turbulence may be interpreted as an incomplete
thermalization \cite{frisch-2008}.
In this letter we study truncated helical Euler flows, and consider
the transient turbulent behavior as well as the late time
equilibrium of the system. Here is a short summary of our main
results. The relaxation toward a Kraichnan helical absolute
equilibrium \cite{KRA73} is observed for the first time. Transient
mixed energy and helicity cascades are found to take place while
more and more modes gather into the Kraichnan time-dependent
statistical equilibrium. It was shown in \cite{CBDB-echel} that, due
to the effect of thermalized small scales, the spectrally truncated
Euler equation has long-lasting transients behaving similarly to the
dissipative Navier-Stokes equation. These results, obtained for
non-helical flows, are extended to the helical case. The concept of
eddy viscosity, as previously developed in \cite{CBDB-echel} and
\cite{GKMEB2fluid}, is used to qualitatively explain differences
observed between truncated Euler and high-Reynolds number (fixed
viscosity) Navier-Stokes. Finally, the truncated Euler large scale
modes are shown to quantitatively follow an effective Navier-Stokes
dynamics based on a (time and wavenumber dependent) eddy viscosity
that does not depend explicitly on the helicity content in the flow.
Performing spherical Galerkin truncation at wave-number $k_{\rm
max}$ on the incompressible ($ \nabla \cdot {\bf u}=0$) and
spatially periodic Euler equation $ {\partial_t {\bf u}} + ({\bf u}
\cdot \nabla) {\bf u} =- \nabla p$ yields the following finite
system of ordinary differential equations for the Fourier transform
of the velocity ${\bf \hat u}({\bf k})$ (${\bf k}$ is a 3D vector
of signed integers satisfying $ |{\bf k}| \leq k_{\rm max}$):
\begin{equation}
{\partial_t { \hat u}_\alpha({\bf k},t)} = -\frac{i} {2}
{\mathcal P}_{\alpha \beta \gamma}({\bf k}) \sum_{\bf p} {\hat
u}_\beta({\bf p},t) {\hat u}_\gamma({\bf k-p},t),
\label{eq_discrt}
\end{equation}
where ${\mathcal P}_{\alpha \beta \gamma}=k_\beta P_{\alpha
\gamma}+k_\gamma P_{\alpha \beta}$ with $P_{\alpha
\beta}=\delta_{\alpha \beta}-k_\alpha k_\beta/k^2$.
This time-reversible system exactly conserves the energy
$E=\sum_{k}E(k,t)$ and helicity $H=\sum_{k}H(k,t)$, where the energy
and helicity spectra $E(k,t)$ and $H(k,t)$ are defined by averaging
respectively ${\frac1 2}|{\bf \hat u}({\bf k'},t)|^2 \,$ and ${\bf
\hat u}({\bf k'},t)\cdot{\bf \hat \omega}({\bf -k'},t)$ (${\bf
\omega=\nabla\times {\bf u}}$ is the vorticity) on spherical shells
of width $\Delta k = 1$. It is trivial to show from the definition
of vorticity that $|H(k,t)|\leq 2 k E(k,t)$.
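Both invariants can be checked numerically. The following hedged Python sketch implements the truncated right-hand side of (\ref{eq_discrt}) on a small illustrative grid (the runs below use $512^3$ modes) and verifies that $dE/dt$ and $dH/dt$ vanish to round-off for a random truncated solenoidal field; the mask $|{\bf k}|\leq N_g/3$ plays the role of the spherical Galerkin truncation.

\begin{verbatim}
# Hedged sketch of the spherically truncated Euler right-hand side
# and of its energy/helicity conservation (illustrative grid size).
import numpy as np

Ng = 32                                    # 32^3 grid (paper: 512^3)
k1 = np.fft.fftfreq(Ng, 1.0/Ng)            # signed integer wavenumbers
K = np.stack(np.meshgrid(k1, k1, k1, indexing="ij"))
K2 = (K**2).sum(0); K2[0, 0, 0] = 1.0      # avoid division by zero
mask = np.sqrt((K**2).sum(0)) <= Ng//3     # spherical Galerkin truncation

def rhs(uh):
    """Leray-projected -ik.FFT(uu), restricted to |k| <= kmax."""
    u = np.real(np.fft.ifftn(uh, axes=(1, 2, 3)))
    N = np.zeros_like(uh)
    for a in range(3):
        for b in range(3):
            N[a] -= 1j*K[b]*np.fft.fftn(u[a]*u[b])
    N -= K*((K*N).sum(0))/K2               # remove compressible part
    return N*mask

rng = np.random.default_rng(1)             # random truncated solenoidal field
uh = np.fft.fftn(rng.standard_normal((3, Ng, Ng, Ng)), axes=(1, 2, 3))
uh = (uh - K*((K*uh).sum(0))/K2)*mask
duh = rhs(uh)
wh = 1j*np.cross(K, uh, axis=0)            # Fourier-space vorticity
print("dE/dt ~", np.vdot(uh, duh).real/Ng**6)   # ~ 0 (round-off)
print("dH/dt ~", 2*np.vdot(wh, duh).real/Ng**6) # ~ 0 (round-off)
\end{verbatim}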
We will use as initial condition ${\bf u}_0$ the sum of two ABC
(Arnold, Beltrami and Childress) flows in the modes $k=3$ and $k=4$,
\begin{equation}
{\bf u}_0(x,y,z)={\bf u}_{\rm ABC}^{(3)}(x,y,z)+{\bf u}_{\rm
ABC}^{(4)}(x,y,z)\label{eq:condini}
\end{equation}
where the basic ABC flow is a maximal helicity stationary solution
of Euler equations in which the vorticity is parallel to the
velocity, explicitly given by
\begin{eqnarray}
{\bf u}_{\rm ABC}^{(k)}(x,y,z) &=& \frac{u_0}{k^2} \left\{ \left[B
\cos(k y) +
C \sin(k z) \right] \hat{x} + \right. {} \nonumber \\
&& {} + \left[A \sin(k x) + C \cos(k z) \right] \hat{y} +
{} \nonumber \\
&& {} + \left. \left[A \cos(k x) + B \sin(k y) \right]
\hat{z} \right\}.
\label{eq:ABC}
\end{eqnarray}
The parameters will be set to $A=0.9$, $B=1$, $C=1.1$ and
$u_0=(A^2+B^2+C^2)^{-1/2}(1/3^4+1/4^4)^{-1/2}$. With this choice
of normalization the initial energy is $E=0.5$ and helicity
$H=3\times4\times(3^3+4^3)/(3^4+4^4)=3.24$.
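A direct numerical check of this normalization, on an illustrative $64^3$ grid, is sketched below; it builds the field (\ref{eq:condini}) and evaluates $E$ and $H$ (the vorticity is computed spectrally, which is exact for this band-limited field).

\begin{verbatim}
# Hedged check of the initial condition normalization (E=0.5, H=3.24).
import numpy as np

Ng = 64
x = 2*np.pi*np.arange(Ng)/Ng
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
A, B, C = 0.9, 1.0, 1.1
u0 = (A**2 + B**2 + C**2)**-0.5 * (3.0**-4 + 4.0**-4)**-0.5

def abc(k):                                # ABC flow at wavenumber k
    return (u0/k**2)*np.stack([B*np.cos(k*Y) + C*np.sin(k*Z),
                               A*np.sin(k*X) + C*np.cos(k*Z),
                               A*np.cos(k*X) + B*np.sin(k*Y)])

u = abc(3) + abc(4)
k1 = np.fft.fftfreq(Ng, 1.0/Ng)
K = np.stack(np.meshgrid(k1, k1, k1, indexing="ij"))
uh = np.fft.fftn(u, axes=(1, 2, 3))
w = np.real(np.fft.ifftn(1j*np.cross(K, uh, axis=0), axes=(1, 2, 3)))
print("E =", 0.5*(u**2).sum(0).mean())     # -> 0.5
print("H =", (u*w).sum(0).mean())          # -> 3.2404...
\end{verbatim}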
Numerical solutions of equation (\ref{eq_discrt}) are efficiently
produced using a pseudo-spectral general-periodic code
\cite{PabloCode1} with $512^3$ Fourier modes that is dealiased using
the $2/3$ rule \cite{Got-Ors} by spherical Galerkin truncation at
$k_{\rm max}=170$. The equations are evolved in time using a
second-order Runge-Kutta method, and the code is fully parallelized with
the message passing interface (MPI) library. The numerical method
used is non-dispersive and conserves energy and helicity with high
accuracy.
Fig.~\ref{Fig:speccomp4} shows the time evolution of the energy and
helicity spectra, compensated by $k^{5/3}$, evolving from the initial
condition (\ref{eq:condini}).
\begin{figure}[h!]
\begin{center} \includegraphics[height=8.0cm]{1}
\caption{Compensated energy ({\tiny $\bullet\bullet\bullet$}) and
helicity spectra ({\tiny $\times\times\times$}) with the predictions
(\ref{HelSpec}) in solid lines and (\ref{eq:speciner}) in dotted
lines. a) $t=4.8$. b) $t=7$. c) $t=10$. d) $t=19.8$.
\label{Fig:speccomp4}} \end{center}
\end{figure} The plots clearly display a progressive
thermalization similar to that obtained in Cichowlas et
al.~\cite{CBDB-echel}, but with the non-zero helicity cascading to
the right, i.e., towards small scales.
The truncated Euler equation dynamics is expected to reach at large
times an absolute equilibrium that is a statistically stationary
gaussian exact solution of the associated Liouville equation
\cite{OrszagAnalytTheo}. When the flow has a non vanishing helicity,
the absolute equilibria of the kinetic energy and helicity predicted
by Kraichnan \cite{KRA73} are
\begin{equation}
E(k)=\frac{k^2}{\alpha }\frac{ 4 \pi}{1-\beta^2 k^2 /
\alpha^2}\,;\hspace{0.1cm}
H(k)= \frac{k^4\beta}{\alpha^2}\frac{ 8 \pi }{1-\beta^2 k^2 / \alpha^2}\label{HelSpec}
\end{equation}
where $\alpha>0$ and $\beta k_{\rm max}<\alpha$ to ensure
integrability. The values of $\alpha$ and $\beta$ are uniquely
determined by the total amounts of energy and helicity (which satisfy
$|H|\leq 2 k_{\rm max} E$) contained in the wavenumber range
$[1,k_{\rm max}]$ \cite{KRA73}.
The final values of $\alpha$ and $\beta$ (when total thermalization
is obtained) corresponding to the initial energy and helicity are
$\alpha=4.12\times 10^7$ and $\beta=7695$. Therefore the
dimensionless number $\beta^2 k^2 / \alpha^2$ is at most of the
order $ 10^{-4}$ and equations (\ref{HelSpec}) thus lead to almost
pure power laws for the energy and helicity spectra, as is manifest
in Fig.~\ref{Fig:speccomp4}d. Fig.~\ref{Fig:speccomp4} thus shows for
the first time a time-evolving helical quasi-equilibrium.
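For reference, the quoted values of $\alpha$ and $\beta$ can be recovered by numerically inverting (\ref{HelSpec}) summed over $1\leq k\leq k_{\rm max}$; a hedged sketch reads:

\begin{verbatim}
# Hedged inversion of Kraichnan's equilibrium relations for alpha, beta.
import numpy as np
from scipy.optimize import fsolve

kmax, E_tot, H_tot = 170, 0.5, 3.24
k = np.arange(1, kmax + 1)

def residual(p):
    a, b = p
    d = 1.0 - b**2*k**2/a**2
    return [np.sum(4*np.pi*k**2/(a*d)) - E_tot,
            np.sum(8*np.pi*k**4*b/(a**2*d)) - H_tot]

a0 = 4*np.pi*kmax**3/(3*E_tot)             # leading-order guesses
b0 = 5*H_tot*a0**2/(8*np.pi*kmax**5)
alpha, beta = fsolve(residual, [a0, b0])
print(alpha, beta)                         # ~4.12e7 and ~7695
\end{verbatim}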
In order to analyze the run in the spirit of Cichowlas et al.
\cite{CBDB-echel} we define $k_{\rm th}(t)$ as the wavenumber where
the thermalized power-law zone starts. We define the thermalized
energy and helicity as
\begin{equation}
{E}_{\rm th}(t)=\sum_{k_{\rm th}(t)}^{ k_{\rm max}}
E(k,t)\,;\hspace{0.3cm}
{H}_{\rm th}(t) = \sum_{k_{\rm th}(t)}^{
k_{\rm max}} H(k,t) \label{Th_energy}
\end{equation}
where $E(k,t)$ and $H(k,t)$ are the energy and helicity spectra.
The temporal evolutions of
$E_{\rm th},H_{\rm th}$ and $k_{\rm th}(t)$ are shown in Fig.
\ref{Fig1}.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=7cm]{2}
\caption{a) Temporal evolution of $E_{\rm th}$ ($-$) ,$H_{\rm th}$ ({\tiny $\cdot-\cdot$}) and $k_{\rm
th}(t)$ ($\cdots$) normalized by their respective initial values.
$E_{\rm tot}=0.5$, $H_{\rm tot}=3.24$ and $k_{\rm max}=170$. b)
Left vertical axis: temporal evolution of $\epsilon_{\rm
th}=\frac{dE_{\rm th}}{dt}$ ({\tiny $\bigstar\bigstar\bigstar$})
and Navier-Stokes energy dissipation
$\epsilon=2\nu_0\sum_{k=1}^{k_{\rm max}} k^2E(k)$ ({\tiny
$\bullet\bullet\bullet$}). Right vertical axis: $\eta_{\rm
th}=\frac{dH_{\rm th}}{dt}$ ($***$) and NS helicity dissipation
$\eta=\nu_0\sum_{k=1}^{k_{\rm max}} k^2H(k)$ ({\small
$\circ\circ\circ$}). }\label{Fig1} \end{center} \end{figure}
The values of $\alpha(t)$ and $\beta(t)$ during thermalization can
then be obtained from $E_{\rm th}(t),H_{\rm th}(t)$ and $k_{\rm
th}(t)$ by inverting the system of equations (\ref{Th_energy}) using
$\frac{\beta^2}{\alpha^2} k_{\rm max}^2\ll 1$.
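A minimal sketch of this leading-order inversion is given below; the input values of $E_{\rm th}$, $H_{\rm th}$ and $k_{\rm th}$ are purely illustrative.

\begin{verbatim}
# Hedged leading-order inversion of Eq. (Th_energy) for alpha, beta,
# using beta^2 k^2/alpha^2 << 1 (illustrative inputs).
import numpy as np

def alpha_beta(E_th, H_th, k_th, kmax=170):
    k = np.arange(k_th, kmax + 1)
    a = 4*np.pi*np.sum(k**2)/E_th
    b = H_th*a**2/(8*np.pi*np.sum(k**4))
    return a, b

print(alpha_beta(E_th=0.05, H_th=0.3, k_th=100))
\end{verbatim}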
The Kraichnan predictions (\ref{HelSpec}) for the high-$k$ part of
the spectra are shown (in solid lines) in Fig.~\ref{Fig:speccomp4}.
The plots show an excellent agreement with the data.
The low-$k$ part of the compensated spectrum presents a flat zone
that amounts to $k^{-5/3}$ scaling for both the energy and helicity
spectra. This $k^{-5/3}$ behavior was predicted by Brissaud et al.
\cite{Helcas-BFLLM} in viscous fluids when there are simultaneous
energy and helicity cascades. The energy and helicity fluxes,
$\epsilon$ and $\eta$ respectively, determine the prefactors in the
inertial range of the spectra:
\begin{equation}
E(k) \sim \epsilon^{2/3}k^{-5/3},\hspace{0.5cm} H(k) \sim
\eta\epsilon^{-1/3}k^{-5/3}.\label{eq:speciner}
\end{equation}
Helical flows have also been studied in high-Reynolds number
numerical simulations of the Navier-Stokes (NS) equation.
Simultaneous energy and helicity cascades leading to the scaling
(\ref{eq:speciner}) have been confirmed when the system is forced at
large scales \cite{BorueOrszag97,Eyink03,MininniPouquet06}.
The energy and helicity fluxes $\epsilon$ and $\eta$ at intermediate
scales in our truncated Euler simulation can be estimated using the
time derivative of the thermalized energy and helicity:
$\epsilon_{\rm th}=\frac{dE_{\rm th}}{dt}$ and $\eta_{\rm
th}=\frac{dH_{\rm th}}{dt}$, whose temporal evolutions are shown in
Fig. \ref{Fig1}. The predictions (\ref{eq:speciner}) for the
low-$k$ part of the spectra are shown (in dotted lines) in Fig.
\ref{Fig:speccomp4}. The plot shows a good agreement with the data.
Note that Fig.~\ref{Fig:speccomp4}a corresponds to $t=4.8$, i.e.,
just after the time when both the maximum energy and helicity fluxes
(to be interpreted below as ``dissipation'' rates of the
non-thermalized components of the energy and the helicity) are
achieved, see Fig. \ref{Fig1}. In this way $E_{\rm th} $ and $H_{\rm
th}$ determine the thermalized part of the spectra while their time
derivative determines an inertial range.
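A minimal sketch of this flux estimate, using finite differences in
time and omitting the order-unity constants in (\ref{eq:speciner}),
could read (valid once the fluxes are positive):
\begin{verbatim}
import numpy as np

def flux_predictions(t, E_th, H_th, k):
    # Fluxes estimated from the growth of the thermalized
    # quantities: epsilon_th = dE_th/dt, eta_th = dH_th/dt.
    eps = np.gradient(E_th, t)
    eta = np.gradient(H_th, t)
    # Inertial-range predictions, order-unity constants omitted:
    E_pred = eps[:, None]**(2.0 / 3.0) * k**(-5.0 / 3.0)
    H_pred = (eta[:, None] * eps[:, None]**(-1.0 / 3.0)
              * k**(-5.0 / 3.0))
    return E_pred, H_pred
\end{verbatim}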
We now compare the dynamics of the truncated Euler equation with
that of the unforced high-Reynolds number NS equation (i.e.
Eq.(\ref{eq_discrt}) with $-\nu_0 k^2{ \hat u}_\alpha({\bf k},t)$
added in the r.h.s.) using the initial condition (\ref{eq:condini}).
The viscosity is set to $\nu_0=5\times10^{-4}$, the smallest value
compatible with accurate computations using $k_{\rm max}=170$. A
behavior qualitatively similar to that of the truncated Euler
equation is obtained (see Fig. \ref{Fig1}b). However, the maxima of
the energy and helicity fluxes (or dissipation rates) occur later,
and with smaller values.
We referred above to ``dissipation'' in the context of the ideal
(time-reversible) flow. A proper definition of dissipation in the
truncated Euler flow is now in order. Thermalized modes in truncated
Euler are known to provide an eddy viscosity $\nu_{\rm{eddy}}$ to
the modes with wavenumbers below the transition wavenumber
\cite{CBDB-echel}. It was shown in \cite{GKMEB2fluid} that
Monte-Carlo determinations of $\nu_{\rm eddy}$ are given with good
accuracy by the Eddy Damped Quasi-Normal Markovian (EDQNM) two-point
closure, previously known to reproduce well direct numerical
simulation results \cite{BosBertoglioEDQNM}. For helical flows, the
EDQNM theory provides coupled equations for the energy and helicity
spectra \cite{EDQNM-Andre-Lesieur}, in which using (\ref{HelSpec})
in an analogous way to \cite{GKMEB2fluid} we find a very small
correction of $\nu_{\rm{eddy}}$ that depends on the total amount of
helicity and is of order
$\Delta\nu_{\rm{eddy}}/\nu_{\rm{eddy}}\sim\beta k_{\rm max} /
\alpha\sim 10^{-2}$. Thus the presence of helicity does not affect
significantly the dissipation at large scales and can be safely
neglected in the eddy viscosity expressions. This eddy viscosity
depends strongly on $k$ and can also be obtained, in the limit
$k/k_{\rm max}\rightarrow 0$, from the EDQNM eddy viscosity of
Lesieur and Schertzer \cite{LesieruSchertezerEDQNMExpa} using here
an energy spectrum $E(k)\sim k^2$. The result reads
\begin{equation}\label{eq_nuEDQNM}
\nu_{\rm{eddy}}=\frac{\sqrt{E_{\rm{th}}}}{k_{\rm{max}}}\frac{7}{\sqrt{15}\lambda},
\end{equation}
with $\lambda=0.36$. The eddy viscosity $\nu_{\rm{eddy}}$ is thus an
increasing function of time, see $E_{\rm{th}}(t)$ in Fig.
\ref{Fig1}.
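For reference, Eq. (\ref{eq_nuEDQNM}) translates into the following
one-line Python sketch:
\begin{verbatim}
import numpy as np

def nu_eddy(E_th, k_max, lam=0.36):
    # Eq. (eq_nuEDQNM): eddy viscosity provided by the
    # thermalized modes; grows in time through E_th(t).
    return (np.sqrt(E_th) / k_max) * 7.0 / (np.sqrt(15.0) * lam)
\end{verbatim}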
The time-evolution of truncated Euler and Navier-Stokes spectra
are compared in Fig. \ref{Fig:speccompNS}. At early times the
value of $E_{\rm{th}}$ is very small and therefore the NS
viscosity $\nu_0$ is larger than $\nu_{\rm{eddy}}$, as manifested
by the NS dissipative zone in Fig. \ref{Fig:speccompNS}.a.
As $E_{\rm{th}}(t)$ increases, both viscosities become equal
($t=2.7$). Later, at $t=3.8$, the Navier-Stokes spectrum crosses
the truncated Euler one (Fig. \ref{Fig:speccompNS}b).
The eddy viscosity $\nu_{\rm{eddy}}$ is then much larger than
$\nu_0$ and the truncated Euler dissipative zone lies below the NS
one, see Fig. \ref{Fig:speccompNS}c.
This behavior is also conspicuous when the spectra are compared at
maximum energy-dissipation time ($t=4.4$ for truncated Euler and
$t=5.6$ for NS), see Fig. \ref{Fig:speccompNS}d.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=8cm]{3}
\caption{Compensated energy spectra of truncated Euler ({\small
$\cdots$}) and Navier-Stokes ({\tiny $\times\times\times$}). a)
$t=1.8$, b) $t=3.8$, c) $t=5.8$. d) Maximum energy-dissipation
time ($t=4.4$ for truncated Euler and $t=5.6$ for NS).
\label{Fig:speccompNS}}
\end{center}
\end{figure}
The variation in time of $\nu_{\rm{eddy}}$ thus explains
qualitatively the different behavior of the truncated Euler and
Navier-Stokes spectra in Fig. \ref{Fig:speccompNS}. We now proceed
to check more quantitatively the validity of an effective
dissipation description of thermalization in truncated Euler. To
wit, we introduce an effective Navier-Stokes equation for which the
dissipation is produced by an effective viscosity that depends on
time and wavenumber.
We will use the effective viscosity obtained in \cite{GKMEB2fluid}
which is consistent with both direct Monte-Carlo calculations and
EDQNM closure and is explicitly given by $$
\nu_{\rm{eff}}(k)=\nu_{\rm{eddy}}e^{-3.97k/k_{\rm{max}}},$$ with
$\nu_{\rm{eddy}}$ given in Eq. (\ref{eq_nuEDQNM}).
We thus integrate Eq. (\ref{eq_discrt}) with the viscous term
$-\nu_{\rm{eff}}(k) k^2{ \hat u}_\alpha({\bf k},t)$ added in the
right hand side. The parameter $E_{\rm{th}}$ that fixes the eddy
viscosity in Eq. (\ref{eq_nuEDQNM}) is evolved using the effective
NS dissipation by
\begin{equation}
\frac{d E_{\rm{th}}}{dt}=\sum_{k=1}^{k_{\rm max}}
2\nu_{\rm{eff}}(k)k^2E(k).
\end{equation}
This ensures consistency between the effective NS dissipated
energy and the truncated Euler thermalized energy that drives
$\nu_{\rm{eddy}}$.
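A forward-Euler sketch of this coupled update (the names are
placeholders; any stable time-stepping scheme would do) is:
\begin{verbatim}
import numpy as np

def step_thermalized_energy(E_th, E_k, k, k_max, dt, lam=0.36):
    # One step of dE_th/dt = sum_k 2 nu_eff(k) k^2 E(k), with
    # nu_eff(k) = nu_eddy * exp(-3.97 k / k_max) and
    # nu_eddy = sqrt(E_th)/k_max * 7/(sqrt(15)*lam).
    nu_e = (np.sqrt(E_th) / k_max) * 7.0 / (np.sqrt(15.0) * lam)
    nu_eff = nu_e * np.exp(-3.97 * k / k_max)
    return E_th + dt * np.sum(2.0 * nu_eff * k**2 * E_k)
\end{verbatim}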
To initialize the effective NS equation we integrate the truncated
Euler equation (\ref{eq_discrt}) with the initial condition
(\ref{eq:condini}) until the $k^2$-thermalized zone is clearly
present ($t=4.77$). The value of $E_{\rm{th}}$ is then computed
using equations (\ref{Th_energy}). The low-passed velocity ${\bf
u}^<$, defined by $${\bf u}^<(\textbf{r}) = \sum_{\textbf{k}}
\frac{1}{2}\left(1+\tanh{\left[2(k_{\rm th}-|\textbf{k}|)\right]}\right)
\hat{{\bf u}}_\textbf{k} e^{i\textbf{k}\cdot\textbf{r}}$$ is used as initial data
for the effective Navier-Stokes dynamics.
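A minimal sketch of this smooth spectral filter, assuming
\texttt{u\_hat} holds the spectral velocity and \texttt{k\_mag} the
modulus of the wavevector on the same grid, is:
\begin{verbatim}
import numpy as np

def low_pass(u_hat, k_mag, k_th):
    # Smooth spectral low-pass: weight ~ 1 for |k| << k_th,
    # ~ 0 for |k| >> k_th, with a tanh transition of width ~ 1.
    w = 0.5 * (1.0 + np.tanh(2.0 * (k_th - k_mag)))
    return u_hat * w
\end{verbatim}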
Results of a truncated Euler and effective NS with $k_{\rm
max}=85$ are shown in Fig. \ref{Fig5}. In Fig. \ref{Fig5}.a the
energy and helicity dissipated in effective NS [$E_{\rm tot}-E(t)$
and $H_{\rm tot}-H(t)$ respectively] are compared to $E_{\rm th}$
and $H_{\rm th}$ showing a good agreement. Next, the temporal
evolution of both energy spectra from the initial time $t=5.3$
(Fig. \ref{Fig5}.b) to $t=20$ (Fig. \ref{Fig5}.e) is compared,
demonstrating that the low-$k$ dynamics of truncated Euler is well
reproduced by the effective Navier-Stokes equations.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=9.2cm]{4}
\caption{Effective NS run with $k_{\rm max}=85$. a) Temporal evolution of $E_{\rm th}$ ($-$), $H_{\rm th}$ ({\tiny $\cdot-\cdot$}) from truncated
Euler, energy ({\tiny $\bullet\bullet\bullet$}) and helicity ({\small $\circ\circ\circ$}) from effective NS.
b-e) Temporal evolution of compensated energy spectra of truncated Euler ({\small
$\cdots$}) and effective Navier-Stokes ({\tiny
$\times\times\times$}).}\label{Fig5}
\end{center}
\end{figure}
In summary, we observed the relaxation of the truncated Euler
dynamics toward a Kraichnan helical absolute equilibrium. Transient
mixed energy and helicity cascades were found to take place. Eddy
viscosity was found to qualitatively explain the different behaviors
of truncated Euler and (constant viscosity) Navier-Stokes. The large
scales of Galerkin truncated Euler were shown to quantitatively
follow an effective Navier-Stokes dynamics based on a variable,
helicity-independent eddy viscosity. In conclusion, with its
built-in eddy viscosity, the Galerkin truncated Euler equation
appears as a minimal model of turbulence.
\textbf{Acknowledgments:} We acknowledge discussions with U. Frisch
and J.~Z. Zhu. P.D.M. is a member of the Carrera del Investigador
Cient\'{\i}fico of CONICET. The computations were carried out at
NCAR and IDRIS (CNRS).
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:introduction}
The new generation of highly sensitive X-ray observatories such as
Chandra and XMM-Newton is generating large volumes of X-ray data,
which through public archives are made available for all
researchers. Even though all observations are targeting a particular
object, the large field of view (FOV) of XMM-Newton allows many other
sources to be detected in deep exposures. These sources are the main
product of the XMM-Newton Serendipitous Sky Survey
\citep{2001A&A...365L..51W}, which annually identifies about 50\,000
new X-ray sources. To fully understand the nature of these
serendipitously detected sources, follow-up observations at other
wavelengths are needed.
\begin{table*}
\centering
\caption{Central positions in right ascension, declination and
galactic longitude and latitude of the 12 XMM-Newton fields
observed as part of the XMM-Newton follow-up survey.}
\begin{tabular}{llrrrr}
\hline\hline
Field & Target & $\alpha$ (J2000.0) & $\delta$ (J2000.0) &
\multicolumn{1}{c}{$l$} & \multicolumn{1}{c}{$b$} \\\hline
XMM-01 & RX J0925.7$-$4758 & 09:25:46.0 & $-$47:58:17 & 271:21:18 &
+01:53:03\\
XMM-02 & RX J0720.4$-$3125 & 07:20:25.1 & $-$31:25:49 & 244:09:28 &
$-$08:09:50 \\
XMM-03 & HE 1104$-$1805 & 11:06:33.0 & $-$18:21:24 & 270:49:55 &
+37:53:29 \\
XMM-04 & MS 1054.4$-$0321 & 10:56:60.0 & $-$03:37:27 & 256:34:30 &
+48:40:18 \\
XMM-05 & BPM 16274 & 00:50:03.2 & $-$52:08:17 & 303:26:03 &
$-$64:59:19 \\
XMM-06 & RX J0505.3$-$2849 & 05:05:20.0 & $-$28:49:05 & 230:39:29 &
$-$34:36:50 \\
XMM-07 & LBQS 2212$-$1759 & 22:15:31.7 & $-$17:44:05 & 39:16:07 &
$-$ 52:55:44\\
XMM-08 & NGC 4666 & 12:45:08.9 & $-$00:27:38 & 299:25:55 &
+63:17:22 \\
XMM-09 & QSO B1246$-$057 & 12:49:13.9 & $-$05:59:19 & 301:55:40 &
+56:52:43 \\
XMM-10 & PB 5062 & 22:05:09.8 & $-$01:55:18 & 58:03:55 &
$-$42:54:13 \\
XMM-11 & Sgr A & 17:45:40.0 & $-$29:00:28 & 359:56:39 &
$-$00:02:45 \\
XMM-12 & WR 46 & 12:05:19.0& $-$62:03:07 & 297:33:23 & +00:20:14
\\
\hline
\end{tabular}
\label{tab:field-coord}
\end{table*}
Based on a Call for Ideas for public surveys to the ESO community, the
XMM-Newton Survey Science Center (SSC) proposed optical follow-up
observations of XMM-Newton fields for its X-ray Identification (XID)
program
\citep{2001A&A...365L..51W,2002A&A...382..522B,2004A&A...428..383D}.
This proposal was evaluated and accepted by ESO's Survey Working Group
(SWG) and turned into a proposal for an ESO large program submitted to
the ESO OPC.\footnote{The full text of the large program proposal is
available at
\url{http://www.eso.org/science/eis/documents/EIS.2002-09-04T12:42:31.890.ps.gz}}
The XMM-Newton optical follow-up survey aims at obtaining optical
observations of XMM-Newton Serendipitous Sky Survey fields, publicly
available in the XMM-Newton archive, using the wide-field imager (WFI) at the
ESO/MPG 2.2m telescope at the La Silla Observatory. WFI has a FOV
which is an excellent match to that of the X-ray detectors on-board
the XMM-Newton satellite, making this instrument an obvious choice for
this survey in the South. A complementary multiband optical imaging
program (to median $5\sigma$ limiting magnitudes reaching
$i^\prime=23.1$) for over 150 XMM-Newton fields is nearing completion in the
North using the similarly well matched Wide Field Camera on the 2.5~m
Isaac Newton Telescope
\citep{2003AN....324..178Y,2003AN....324...89W}. In order to provide
data for minimum spectral discrimination and photometric redshift
estimates of the optical counterparts of previously detected X-ray
sources, the survey has been carried out in the $B$-, $V$-, $R$-, and
$I$-passbands. The survey has been administered and carried out by the
ESO Imaging Survey (EIS) team.
This paper describes observations, reduction, and science verification
of data publicly released as part of this follow-up survey.
Section~\ref{sec:targets} briefly describes the X-ray observations while
Sect.~\ref{sec:observations} focuses on the optical imaging. In
Sect.~\ref{sec:reduction} the reduction and calibration of optical
data are presented and the results discussed. Final survey products
such as stacked images and science-grade catalogs extracted from them
are presented in Sect.~\ref{sec:products}. The quality of these
products is evaluated in Sect.~\ref{sec:discussion} by comparing
statistical measures obtained from these data to those of other
authors as well as from a direct comparison with the results of an
independent reduction of the same dataset. In this section the results
of a preliminary assessment of X-ray/optical cross-correlation are
also discussed. Finally, in Sect.~\ref{sec:summary} a brief summary of
the paper is presented.
\section{X-ray observations}
\label{sec:targets}
The original proposal by the SWG to the ESO OPC was to cover a total
area of approximately 10 square degrees (40 fields) to a limiting
magnitude of 25 (AB, $5\sigma$, 2\arcsec\ aperture). The OPC approved
enough time to observe 12 fields, later extending the time allocation
to include 3 more fields. This paper presents results for the original
12 fields for which the optical data were originally publicly released
in the fall of 2004, with corrections to the weight maps
released in July 2005. Table~\ref{tab:field-coord} gives the
location of the 12 fields listing: in Col.~1 the field name; in
Col.~2 the original XMM-Newton target name; in Cols.~3 and 4 the right
ascension and declination in J2000; and in Cols.~5 and 6 the galactic
coordinates, $l$ and $b$.
The 12 fields listed in Table~\ref{tab:field-coord} were selected and
prioritized by a collaboration of interested parties from the SSC, a
group at the Institut f\"ur Astrophysik und Extraterrestrische
Forschung (IAEF) of the University of Bonn, and an appointed committee
of the SWG. These fields were selected following, as much as possible,
the criteria given in the proposal, namely that: (1) the fields had to
have a large effective exposure time in X-ray (ideally $t_\mathrm{exp} >
30$~ks) with no enhanced background; (2) the X-ray data of the
selected fields had to be public by the time the raw WFI frames were
to become public; (3) the original targets should not be too bright
and/or extended, thus allowing a number of other X-ray sources to be
detected away from the primary target; and (4) $\sim 70$\% of the
fields had to be located at high-galactic latitude. Comments on the
individual fields can be found in Appendix~\ref{sec:field_desc}.
Combined EPIC X-ray images for the fields listed in
Table~\ref{tab:field-coord} were created from exposures taken with the
three cameras (PN, MOS1, MOS2) on-board XMM-Newton. The
sensitive area of these cameras is a circle with a diameter of
approximately 30\arcmin. The contributing
X-ray observations are summarized in Table~\ref{tab:xray_obs} which
gives for each field: in Col.~1 the field identification; in Col.~2
the XMM-Newton observation id; in Col.~3 the nominal exposure time; in
Cols.~4--6 the settings for each of the cameras. Here (E)FF indicates
(extended) full frame readout, LW large window mode and SW2 small
window mode. These cameras and their settings are described in detail
in \citet{2004.xmm.guide.E}. For some fields additional observations
were available but these were discarded mainly due to unsuitable
camera settings.
\begin{table*}
\centering
\caption{Information about X-ray imaging used to create composite
X-ray images.}
\begin{tabular}{llrrrr}
\hline\hline
Field & Obs. ID & $T_\mathrm{exp}$ (s)& \multicolumn{3}{c}{Camera
settings}\\\hline
XMM-01 & 0111150201 & 62\,067 & EPN LW & MOS1 FF & MOS2 SW2\\
& 0111150101 & 61\,467 & EPN LW & MOS1 FF & MOS2 SW2\\
\hline
XMM-02 & 0164560501 & 50\,059 & EPN FF & MOS1 FF & MOS2 FF\\
& 0156960201 & 30\,243 & EPN FF & MOS1 FF & MOS2 FF\\
& 0156960401 & 32\,039 & EPN FF & MOS1 FF & MOS2 FF\\
\hline
XMM-03 & 0112630101 & 36\,428 & EPN FF & MOS1 FF & MOS2 FF\\
\hline
XMM-04 & 0094800101 & 41\,021 & EPN FF & MOS1 FF & MOS2 FF\\
\hline
XMM-05 & 0125320701 & 45\,951 & EPN FF & MOS1 FF & MOS2 FF\\
& 0125320401 & 33\,728 & EPN FF & MOS1 FF & MOS2 FF\\
& 0125320501 & 7845 & EPN FF & MOS1 FF & MOS2 FF\\
& 0153950101 & 5156 & EPN FF & MOS1 FF & MOS2 FF\\
& 0133120301 & 12\,022 & EPN FF & MOS1 FF & MOS2 FF\\
& 0133120401 & 13\,707 & EPN FF & MOS1 FF & MOS2 FF\\
\hline
XMM-06 & 0111160201 & 49\,616 & EPN EFF &MOS1 FF & MOS2 FF\\
\hline
XMM-07 & 0106660501 & 11\,568 & EPN FF & MOS1 FF & MOS2 FF\\
& 0106660401 & 35\,114 & --- & MOS1 FF & MOS2 FF\\
& 0106660101 & 60\,508 & EPN FF & MOS1 FF & MOS2 FF\\
& 0106660201 & 53\,769 & EPN FF & MOS1 FF & MOS2 FF\\
& 0106660601 & 110\,168 & EPN FF & MOS1 FF & MOS2 FF\\
\hline
XMM-08 & 0110980201 & 58\,237 & EPN EFF & MOS1 FF & MOS2 FF\\
\hline
XMM-09 & 0060370201 & 41\,273 & EPN FF & MOS1 FF & MOS2 FF\\
\hline
XMM-10 & 0012440301 & 35\,366 & EPN FF & MOS1 FF & MOS2 FF\\
\hline
XMM-11 & 0112970601 & 27\,871 & EPN FF & --- & --- \\
& 0112971601 & 28\,292 & --- & MOS1 FF & MOS2 FF\\
& 0112972101 & 26\,870 & EPN FF & MOS1 FF & MOS2 FF\\
& 0111350301 & 17\,252 & EPN FF & MOS1 FF & MOS2 FF\\
& 0111350101 & 52\,823 & EPN FF & MOS1 FF & MOS2 FF\\
\hline
XMM-12 & 0109110101 & 76\,625 & EPN EFF & MOS1 FF & MOS2 FF\\
\hline
\end{tabular}
\label{tab:xray_obs}
\end{table*}
\begin{figure*}
\centering
\resizebox{\hsize}{!}{\includegraphics[width=17cm]{3785fi01.eps}}
\caption{Color composite X-ray images for the 12 fields considered in
this paper (XMM-01 to XMM-12 from top left to bottom right). The
color images are composites within the so-called XID-band
(0.5--4.5~keV). Red, green and blue channels comprise the energy
ranges 0.5--1.0~keV, 1.0--2.0~keV, and 2.0--4.5~keV, respectively.
Weighting of the sub-images was done in a manner that a typical
extragalactic source with a power law spectrum with photon index 1.5
and absorption column density $N_\mathrm{H} =1 \times
10^{20}$\,cm$^{-2}$ would have equal photon numbers in all three
bands. North is up and East to the left. The size of the images
is typically $30\arcmin\times30\arcmin$ but varies slightly with
camera orientation.}
\label{fig:xray}
\end{figure*}
The XMM-Newton data, both in raw and pipeline reduced form, are
available through the XMM-Newton Science
Archive.\footnote{\url{http://xmm.vilspa.esa.es/external/xmm_data_acc/xsa/index.shtml}}
These data were used to create a wide range of products which include:
\begin{itemize}
\item Combined EPIC images in the XID-band 0.5--4.5 keV (FITS);
\item Combined EPIC images in the total band 0.1--12 keV (FITS);
\item Color images using three sub-bands, 0.5--1.0 keV (red), 1.0--2.0 keV
(green), 2.0--4.5 keV (blue), in the XID-band (JPG).
\end{itemize}
As an illustration, Fig.~\ref{fig:xray} shows color composites of
the final combined X-ray images for the 12 fields considered. Note
that the X-ray images have a non-uniform exposure time over the field
of view due to (1) the arrangements of the CCDs in the focal plane,
which is different for the three cameras, and (2) the vignetting of
the camera optics.
\section{Optical observations}
\label{sec:observations}
As mentioned earlier, the optical observations were carried out using
WFI at the ESO/MPG-2.2m telescope in service mode. WFI is a focal
reducer-type mosaic camera mounted at the Cassegrain focus of the
telescope. The mosaic consists of $4 \times 2$ CCD chips with $2048
\times 4096$ pixels with a projected pixel size of $0\farcs{238}$,
giving a FOV of $8\farcm{12} \times 16\farcm{25}$ for each individual
chip. The chips are separated by gaps of $23\farcs{8}$ and
$14\farcs{3}$ along the right ascension and declination direction,
respectively. The full FOV of WFI is thus $34\arcmin \times 33\arcmin$
with a filling factor of $95.9$\%.
The WFI data described in this paper are from the following two
sources:
\begin{enumerate}
\item the ESO Large Programme 170.A-0789(A) (Principal In\-vest\-igator:
J. Krautter, as chair of the SWG) which has accumulated data from
January 27, 2003 to March 24, 2004 at the time of writing.
\item the contributing programs 70.A-0529(A); 71.A-0110(A);
71.A-0110(B) with P. Schneider as the Principal In\-vest\-igator, which
have contributed data from October 14, 2002 to September 29, 2003.
\end{enumerate}
Observations were performed in the $B$-, $V$-, $R$-, and
$I$-passbands. These were split into observing blocks (OBs)
consisting of a sequence of
five (ten in the $I$-band) dithered sub-exposures with the typical
exposure time given in Table~\ref{tab:strategy}. The table gives: in
Col.~1 the passband; in Col.~2 the filter id adopting the unique
naming convention of the La Silla Science Operations Team; in Col.~3
the total exposure time in seconds; in Col.~4 the number of OBs per
field; and in Col.~5 the integration time of the
individual sub-exposures in the OB. The dither pattern with a radius
of 80\arcsec\ was optimized for the best filling of the gaps. Filter
curves can be found in \citet{2001A&A...379..740A} and on the web page
of the La Silla Science Operations
Team.\footnote{\url{http://www.ls.eso.org/lasilla/sciops/2p2/E2p2M/WFI/filters/}}
Even though the nominal total survey exposure time for the $R$-band is
3500~s, the data contributed by the Bonn group provided additional
exposures totaling 11\,500~s each, spread over 4~OBs. For the
same reason the $B$-band data for the field XMM-07 have a
significantly larger exposure time than that given in
Table~\ref{tab:strategy} (see Table~\ref{tab:img-products}).
\begin{table}
\centering
\caption{Planned observing strategy for the XMM-Newton follow-up
survey.}
\begin{tabular}{llrrr}
\hline\hline
Passband & Filter & $T_\mathrm{tot}$ (s) & $N_\mathrm{OB}$ &
$T_\mathrm{exp}$ (s) \\\hline
$B$ & B/123\_ESO879 & 1800 & 1 & 360 \\
$V$ & V/89\_ESO843 & 4400 & 2 & 440 \\
$R$ & Rc/162\_ESO844 & 3500 & 1 & 700 \\
$I$ & I/203\_ESO879 & 9000 & 3 & 300 \\\hline
\end{tabular}
\label{tab:strategy}
\end{table}
Service mode observing provides the option for constraints on e.g.,
seeing, transparency, and airmass to be specified in order to meet the
requirements of the survey. The adopted constraints were: (1) dark sky
with a fractional lunar illumination of less than $0.4$; (2) clear sky
with no cirrus though not necessarily photometric; and (3) seeing
$\leq 1\farcs2$. The $R$-band images of the contributing program were
taken with a seeing constraint of $\lesssim 1\farcs0$ so that the data
can be used for weak lensing studies.
The total integration time in some fields may be higher than the
nominal one listed in Table~\ref{tab:strategy} because unexpected
variations in ambient conditions during the execution of an OB can
cause, for instance, the seeing and transparency to exceed the
originally imposed constraints. If this happens, the OB is normally
executed again at a later time. In these cases the decision of using
or not all the available data must be taken during the data reduction
process. In the case of the present survey all available data were
included in the reduction, which explains why in some cases the total
integration time exceeds that originally planned.
This paper describes data accumulated prior to October 16, 2003,
amounting to about 80~h on-target integration. The science data
comprises 720 exposures split into 130 OBs. About 15\% of the
$B$-band and 85\% of the $R$-band data are from the contributing
programs.
\section{Data reduction}
\label{sec:reduction}
The accumulated optical exposures were reduced and calibrated using
the EIS Data Reduction System (da Costa et al., in preparation) and its
associated image processing engine based on the C++ EIS/MVM library
routines \citep[][Vandame et al., in
preparation]{2004PhDVandame}.\footnote{The PhD thesis is available
from \url{http://www.eso.org/science/eis/publications.html}} This
library incorporates routines from the multi-resolution visual model
package (MVM) described in \citet{1995SigPr.....46..345R} and
\citet{1997ExA.....7..129R}. It was developed by the EIS project
to enable handling and reducing, using a single environment,
the different observing strategies and the variety of
single/multi-chip, optical/infrared cameras used by the different
surveys carried out by the EIS team. The platform independent EIS/MVM
image processing engine is publicly available and can be retrieved
from the EIS
web-pages.\footnote{\url{http://www.eso.org/science/eis/}}
The system automatically recognizes calibration and science exposures
and treats them accordingly. For the reduction, frames are associated
and grouped into \emph{Reduction Blocks} (RBs) based on the frame
type, spatial separation and time interval between consecutive
frames. The end point of the reduction of an RB is a
\emph{reduced image} and an associated weight map describing the local
variations of noise and exposure time in the reduced image. The data
reduction algorithms are fully described in
\citet{2004PhDVandame}.
In order to produce a \emph{reduced image}, the individual exposures
within an RB are: (1) normalized to 1~s integration; (2)
astrometrically calibrated with the Guide Star Catalog version
2.2 (GSC-2.2) as reference catalog, using a second-order polynomial
distortion model; (3) warped into a user-defined reference grid
(pixel, projection and orientation), using a third-order Lanczos
kernel; and (4) co-added only using the weight for discarding the flux
contribution from masked pixels (e.g. satellite tracks
automatically detected and masked using a Hough transformation),
for which the pixel value is zero. Note that individual exposures in
the RB are not scaled to the same flux level. This assumes that the
time interval corresponding to an RB is small enough to neglect
significant changes in airmass.
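As an illustration of step (2), a minimal least-squares sketch of a
second-order polynomial distortion model (not the actual EIS/MVM
implementation; variable names are placeholders) is:
\begin{verbatim}
import numpy as np

def fit_distortion(x, y, xi, eta):
    # Least-squares second-order polynomial distortion model
    # mapping pixel coordinates (x, y) to tangent-plane
    # coordinates (xi, eta) of sources matched against the
    # reference catalog (here GSC-2.2).
    A = np.column_stack([np.ones_like(x), x, y,
                         x**2, x * y, y**2])
    cx = np.linalg.lstsq(A, xi, rcond=None)[0]
    cy = np.linalg.lstsq(A, eta, rcond=None)[0]
    return cx, cy  # coefficients for xi(x, y) and eta(x, y)
\end{verbatim}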
The 720 raw exposures were converted into 160 fully calibrated reduced
images, of which 146 were released in the $B$- (36), $V$- (32), $R$-
(43) and $I$- (35) passbands. Of the remaining 14, 10 were observed
with wrong coordinates, three (XMM-05 ($R$), XMM-06 ($I$), XMM-12
($V$)) were rejected after visual inspection and one (XMM-12) was
discarded due to a very short integration time (73~s), associated to a
failed OB. The number of reduced images (150) exceeds that of OBs
(130) because the RBs were built by splitting the OBs in order to
improve the cosmetic quality of the final stacked images, as discussed
below.
The photometric calibration of the reduced images was obtained using
the photometric pipeline integrated into the EIS data reduction system
as described in more detail in Appendix~\ref{sec:photometry}. In
particular, the XMM-Newton survey data presented here were obtained in
41 different nights of which 37 included observations of standard star
fields. For these 37 nights it was attempted to obtain photometric
solutions. The four nights without standard star observations are:
February 2, 3 and 4, 2003 (Public Survey); and November 8, 2002
(contributing program). For the nights with standard star
observations, the number of measurements ranges from a few to over
300, covering from 1 to 3 Landolt fields.
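As an illustration of what such a solution involves, a minimal
least-squares sketch of the 3-parameter fit (not the actual pipeline
implementation; the variable names are placeholders) is:
\begin{verbatim}
import numpy as np

def fit_photometric_solution(m_inst, m_std, airmass, color):
    # Three-parameter fit: m_std - m_inst = ZP - k*X + c*color.
    # Dropping columns of A gives the 2- and 1-parameter fits
    # (extinction and/or color term held at default values).
    A = np.column_stack([np.ones_like(airmass), -airmass, color])
    zp, k, c = np.linalg.lstsq(A, m_std - m_inst, rcond=None)[0]
    return zp, k, c
\end{verbatim}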
\begin{table}
\centering
\caption{Summary of the number of nights with standard star observations
and type of solution.}
\begin{tabular}{lrrrrr}
\hline\hline
Passband & default & 1-par & 2-par & 3-par & total \\
\hline
$B$ & 0 & 3 & 4 & 3 & 10 \\
$V$ & 0 & 8 & 3 & 5 & 16 \\
$R$ & 0 & 8 & 3 & 3 & 14 \\
$I$ & 4 & 8 & 2 & 5 & 19 \\
\hline
\end{tabular}
\label{tab:bestsol}
\end{table}
Table~\ref{tab:bestsol} summarizes the available photometric
observations. The table lists: in Col.~1 the passband; in Col.~2--5
the number of nights assigned a default solution or a 1--3-parameter
solution; and in Col.~6 the total number of nights with standard star
observations. For three nights (March 26, 2003; April 2, 2003; August
6, 2003) the solutions obtained in the passbands $V$, $I$, $R$,
respectively (either 2- or 3-parameter fits) deviate from the median by
$-0.26$, $-0.5$, $-0.25$ mag. Of those, only the $I$-band zeropoint
obtained for April 2, 2003 deviates by more than $3\sigma$ from the
solutions obtained for other nights. Note that the type of solution
obtained depends on the available airmass and color coverage, which in
the case of the XMM-Newton survey depends on the calibration plan adopted by
the La Silla Science Operations Team.
Because the EIS Survey System automatically carries out the
photometric calibrations, it is interesting to compare the solutions to
those obtained by other means. Therefore, the automatically computed
3-parameter solutions of the EIS Survey System are compared with the
\emph{best solution} recently obtained by the La Silla Science
Operations Team. The results of this comparison are presented in
Table~\ref{tab:photcomp} which lists: in Col.~1 the passband; in
Cols.~2--4 the mean offsets in zeropoint ($ZP$), extinction ($k$) and
color term (color), respectively. The agreement of the solutions is
excellent for all passbands. However, it is worth emphasizing that the
periods of observations of standard stars available to the two teams
do not coincide.
\begin{table}
\centering
\caption{Comparison between the EIS 3-parameter fit solutions and
the Telescope Team's best solution.}
\begin{tabular}{lrrr}
\hline\hline
Passband & $\Delta ZP$ & $\Delta k$ & $\Delta$ color \\
\hline
$B$ & $0.00$ & $-0.03$ & $-0.06$ \\
$V$ & $-0.05$ & $0.00$ & $-0.01$ \\
$R$ & $0.00$ & $0.10$ & $0.00$ \\
$I$ & $-0.04$ & $0.05$ & $-0.02$ \\
\hline
\end{tabular}
\label{tab:photcomp}
\end{table}
Not surprisingly, larger offsets are found when 2- and 1-parameter
fits are included, depending on the passband and estimator used to
derive the estimates for extinction and color term. Finally, taking
into consideration only 3-parameter fit solutions and after rejecting
$3\sigma$ outliers one finds that the scatter of the zeropoints is
$\lesssim 0.08$~mag. This number is still uncertain given the small
number of 3-parameter fits currently available, especially in the
$R$-band. The obtained scatter is a reasonable estimate for the current
accuracy of the absolute photometric calibration of the XMM-Newton survey
data.
There are two more points that should be considered in evaluating the
accuracy of the photometric calibration of the present data. First,
for detectors consisting of a mosaic of individual CCDs it is
important to estimate and correct for possible chip-to-chip variations
of the gain. For the present data these variations were estimated by
comparing the median background values of sub-regions bordering
adjacent CCDs. The determined variations were used to bring the gain
to a common value for all CCDs in the mosaic. This was applied to
both science and standard exposures. Second, it is also known that
large-scale variations due to non-uniform illumination over the field
of view of a wide-field camera exist. The significance of this effect
is passband-dependent and becomes more pronounced with increasing
distance from the optical axis
(\citealp{2001Msngr.104...16M,2004AN....325..299K}; Vandame et~al. in
preparation). Automated software to correct for this effect has been
developed but due to time constraints it has not yet been applied to
these data.
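As an illustration of the chip-to-chip gain estimate described above,
a minimal sketch (assuming a simple left-to-right chip ordering; the
actual mosaic geometry is more involved) is:
\begin{verbatim}
import numpy as np

def gain_ratios(chips, border=50):
    # Relative gains from the median background in strips
    # bordering adjacent CCDs; chips is a list of 2-D arrays
    # ordered along the mosaic, the first chip sets the scale.
    ratios = [1.0]
    for left, right in zip(chips[:-1], chips[1:]):
        b_left = np.median(left[:, -border:])
        b_right = np.median(right[:, :border])
        ratios.append(ratios[-1] * b_left / b_right)
    return ratios  # multiply chip i by ratios[i]
\end{verbatim}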
The final step of the data reduction process involves the assessment
of the quality of the reduced images. Following visual inspection,
each reduced image is graded, with the grades ranging from A (best) to
D (worst). This grade refers only to the visual aspect of the data
(e.g. background, cosmetics). Out of the 150 reduced images covering
the selected XMM-Newton fields (see Sect.~\ref{sec:observations}), 104 were
graded A, 35 B, 7 C and 4 D. The images with grades C and D are listed
in Table~\ref{tab:red-grade}. The table, ordered by field and date,
lists: in Col.~1 the field name; in Col.~2 the passband; in Col.~3 the
civil date when the night started (YYYY-MM-DD); in Col.~4 the grade
given by the visual inspection; and in Col.~5 the primary motive for
the grade. It is important to emphasize that the reduced images must
be graded, as grades are used in the preparation of the final image
stacks. In particular, reduced images with grade D have no
scientific value and were not released and were discarded in the
stacking process discussed in the next section.
\begin{table*}
\centering
\caption{Grades representing the visual assessment of the reduced
images.}
\label{tab:red-grade}
\begin{tabular}{lcccl}
\hline\hline
Field & Passband & Date & Grade & Comment \\\hline
XMM-05 & $R$ & 2002-10-14 & D & strong stray light contamination \\
XMM-06 & $I$ & 2003-01-29 & D & inadequate fringing correction \\
XMM-12 & $I$ & 2003-03-29 & D & very short integration time \\
XMM-12 & $V$ & 2003-09-27 & D & out-of-focus \\
XMM-01 & $V$ & 2003-02-01 & C & strong shape distortions \\
XMM-07 & $R$ & 2003-08-06 & C & stray light contamination \\
XMM-10 & $R$ & 2003-08-06 & C & fringing \\
XMM-10 & $R$ & 2003-09-23 & C & fringing \\
XMM-10 & $R$ & 2003-09-29 & C & fringing \\
XMM-10 & $R$ & 2003-09-29 & C & fringing \\
XMM-10 & $R$ & 2003-09-29 & C & fringing \\
\hline
\end{tabular}
\end{table*}
The success rate of the automatic reduction process is better than
95\% and most of the lower grades are associated with observational
problems rather than inadequate performance of the software operating
in an un-supervised mode. An interesting point is that occasionally
$R$-band images are also affected by fringing (see Table
\ref{tab:red-grade}) -- for instance, in the
nights of August 6 and September 23 and 29, 2003, all from the
contributing program. The night of August 6 is one of the nights for
which the computed $R$-band zeropoint deviates from the median. This
points to the need to consider applying a fringing correction in the
$R$-band as well, at least in some cases. The $R$-band fringing problem
accounts for five out of seven grade C images. The remaining cases are
due to stray-light and strong shape distortions.
It should also be pointed out that the reduced images show a number of
cosmic ray hits. This is because the construction of RBs was optimized
for removing cosmic ray features in the final stacks using a
thresholding technique. To this end the number of images in an RB was
minimized for some field and filter combinations so as to have at
least three reduced images entering the stack block (SB).
\section{Final products}
\label{sec:products}
\subsection{Images}
\label{sec:image-products}
The 146 reduced images with grades better than D were converted into
44 stacked (co-added) images using the EIS Data Reduction System. The
system creates both a final stack, by co-adding different reduced
images taken of the same field with the same filter (see
Appendix~\ref{sec:image-stacks}), and an associated product log with
additional information about the stacking process and the final image.
Note that all stacks (and catalogs) and their associated product logs
are publicly available from the EIS survey release and ESO
Science Archive Facility
pages.\footnote{\url{http://www.eso.org/science/eis/surveys/release_XMM.html}
for catalogs and
\url{http://archive.eso.org/archive/public_datasets.html} for the
latest release of stacked images made in July 2005.}
The final stacks are illustrated in Fig.~\ref{fig:xmm-overview} which
shows cutouts from color composite images of the 12 fields. From this
figure, one can easily see the broad variety of fields observed by
this survey -- dense stellar fields (XMM-01, XMM-02, XMM-12),
sometimes with diffuse emission (XMM-11), extended objects (e.g.
XMM-08), and empty fields at high galactic latitude (e.g. XMM-07). While
the constraints imposed by the system normally lead to good results,
visual inspection of the images after stacking revealed that at least
in one case the final stacked image was significantly degraded by the
inclusion of a reduced image (graded B) with high-amplitude noise.
Therefore, this image was not included in the production of the
corresponding stack. The reason for this problem is being investigated
and may lead to the definition of additional constraints for the
automatic rejection algorithm being currently used.
\begin{figure*}[ht]
\centering
\includegraphics[width=17cm]{3785fi02.eps}
\caption{Above are cut-outs from color images of XMM-01 to XMM-12
(from top left to bottom right) to illustrate the wide variety of
fields the pipeline can successfully handle. The color images are
$BVR$ composites where $R$-band data are available, $BVI$
otherwise. The side length of the images displayed here is
$7\farcm{9} \times 5\farcm{6}$. In these images North is up and
East is to the left. These composite color images also demonstrate
the accuracy of the astrometric calibration independently achieved
in each passband.}
\label{fig:xmm-overview}
\end{figure*}
Before being released the stacks were again examined by eye and
graded. Out of 44 stacks, 33 were graded A, 10 B, and 1 C, with no
grade D being assigned. In addition to the grade a comment may be
associated and a list of all images with some comment can be found in
the README file associated to this release in the EIS web-pages. The
comments refer mostly to images with poor background subtraction
either due to very bright stars (XMM-12) or extended, bright galaxies
(XMM-08, XMM-09) in the field. It is important to emphasize that the
reduction mode for these data was optimized for extragalactic,
non-crowded fields, which is not optimal for some of these fields.
Residual fringing is also observed in some stacks such as that of
XMM-10 in the $R$-band and XMM-04, XMM-06 in the $I$-band.
As mentioned in the previous section, to improve the rejection of
cosmic rays, the RBs were constructed so that in most cases the stack
blocks (SB) consist of at least 3 reduced images as input. This allows
for the use of a thresholding procedure, with the threshold set to
$2.5\sigma$, to remove cosmic ray hits from the final stacked
image. Even with this thresholding the stacks consisting of only three
RBs (totaling 5 exposures), mostly $B$-band images, still show some
cosmic ray hits. This happens primarily in the regions of the
inter-chip gaps, where fewer images contribute to the final
stack. Also, the automatic satellite track masking algorithm has
proven to be efficient in removing both bright and faint tracks. The
most extreme case is 3~satellite tracks of varying intensity in a
single exposure. The regions affected by satellite tracks in the
original images were flagged in the weight-map images and thus are
properly removed from the stacked image. Naturally, in the regions
where a satellite track was found in one of the contributing images
the noise is slightly higher in the stacked image. This is also
reflected in the final weight-map image.
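A minimal sketch of such a thresholded, weighted co-addition (using a
robust median/MAD estimate of the per-pixel scatter; the actual
EIS/MVM implementation may differ in detail) is:
\begin{verbatim}
import numpy as np

def stack(images, weights, nsigma=2.5):
    # Weighted co-addition with 2.5-sigma thresholding against
    # a robust stack estimate; masked pixels (satellite tracks,
    # gaps) carry zero weight and never contribute flux.
    images = np.asarray(images, dtype=float)
    weights = np.asarray(weights, dtype=float)
    med = np.median(images, axis=0)
    mad = np.median(np.abs(images - med), axis=0)
    good = np.abs(images - med) <= nsigma * 1.4826 * mad
    w = weights * good
    num = (w * images).sum(axis=0)
    den = np.maximum(w.sum(axis=0), 1e-12)
    return num / den
\end{verbatim}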
The accuracy of the final photometric calibration of course depends on
the accuracy of the photometric calibration of the reduced images
which are used to produce the final co-added stacks and the number of
independent photometric nights in which these were observed (see
Table~\ref{tab:nights}). The former depends not only on the quality of
the night but also on the adopted calibration plan. To preview the
quality of the photometric calibration, Table~\ref{tab:nights}
provides information on the number of reduced images and number of
independent nights for each passband and filter. The table gives for
each field in: Col.~1 the field identification; Cols.~2--4 for each
passband the number of reduced images with the number in parenthesis
being the number of independent nights in which they were observed.
Complementing this information Table~\ref{tab:zp-field} shows the best
type of solution available for each field/filter combination. The
table gives: in Col.1 the field name; in Cols. 2--5 the number of free
parameters in the type of the \emph{best solution} available for the
passbands indicated. Solutions with more free parameters in general
indicate better airmass and color coverage, yielding better
photometric calibration. Examination of these two tables provide some
insight into the quality of the photometric calibration of each final
stack, as reported below.
\begin{table}
\centering
\caption{Summary of available data -- number of reduced images and
in parentheses number of independent nights -- for each field and passband.}
\label{tab:nights}
\begin{tabular}{lrrrr}
\hline\hline
Field & $B$ & $V$ & $R$ & $I$ \\
\hline
XMM-01 & 3 (1) & 3 (2) & 3 (1) & 3 (3)\\
XMM-02 & 3 (1) & 3 (1) & 3 (1) & 3 (1)\\
XMM-03 & 3 (1) & 3 (2) & 5 (3) & 3 (1)\\
XMM-04 & 3 (1) & 3 (2) & 4 (2) & 3 (2)\\
XMM-05 & 3 (1) & 3 (2) & 5 (1) & 3 (2)\\
XMM-06 & 3 (1) & 3 (2) & 6 (4) & 3 (2)\\
XMM-07 & 3 (2) & 3 (2) & 6 (4) & 3 (2)\\
XMM-08 & 3 (1) & 3 (1) & --- & 3 (2)\\
XMM-09 & 3 (1) & 3 (2) & --- & 3 (2)\\
XMM-10 & 3 (1) & --- & 5 (3) & --- \\
XMM-11 & 3 (1) & 3 (2) & 3 (1) & 5 (4)\\
XMM-12 & 3 (1) & 2 (1) & 3 (1) & 3 (2)\\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Type of best photometric solution available for each field.}
\label{tab:zp-field}
\begin{tabular}{lcccc}
\hline\hline
Field & Default & 1-par & 2-par & 3-par \\\hline
XMM-01 & $R$ & $BV$ & --- & $I$ \\
XMM-02 & $RI$ & $BV$ & --- & --- \\
XMM-03 & --- & $I$ & $R$ & $BV$ \\
XMM-04 & --- & $V$ & --- & $BRI$ \\
XMM-05 & --- & $R$ & $I$ & $BV$ \\
XMM-06 & --- & $V$ & $BR$& $I$ \\
XMM-07 & --- & $I$ & $B$ & $VR$ \\
XMM-08 & --- & $V$ & --- & $BI$ \\
XMM-09 & --- & --- & $B$ & $VI$ \\
XMM-10 & --- & --- & $B$ & $R$ \\
XMM-11 & --- & --- & --- & $BVRI$\\
XMM-12 & --- & $BR$ & $V$ & $I$ \\
\hline
\end{tabular}
\end{table}
The main properties of the stacks produced for each field and filter
are summarized in Table~\ref{tab:img-products}. The table gives: in
Col.~1 the field identifier; in Col.~2 the passband; in Col.~3 the
total integration time $T_\mathrm{int}$ in seconds, of the final stack; in
Col.~4 the number of contributing reduced images or RBs; in Col.~5 the
total number of science frames contributing to the final stack; in
Cols.~6 and 7 the seeing in arcseconds and the point-spread function
(PSF) anisotropy measured in the final stack; in Col.~8 the limiting
magnitude, $m_\mathrm{lim}$, estimated for the final image stack for a
2\arcsec\ aperture, $5\sigma$ detection limit in the AB system; in
Col.~9 the grade assigned to the final image during visual inspection
(ranging from A to D); in Col.~10 the fraction (in percentage) of
observing time relative to that originally planned.
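For reference, a minimal sketch of the limiting-magnitude estimate
quoted in Col.~8, under the simplifying assumption of uncorrelated
background noise (only approximate for resampled, co-added images;
the aperture is taken here as a diameter), is:
\begin{verbatim}
import numpy as np

def limiting_magnitude(sky_sigma, zp, pixel_scale=0.238,
                       aperture=2.0, nsigma=5.0):
    # n-sigma AB limiting magnitude in a circular aperture of
    # the given diameter (arcsec); the noise grows as the
    # square root of the number of pixels in the aperture.
    npix = np.pi * (0.5 * aperture / pixel_scale)**2
    return zp - 2.5 * np.log10(nsigma * sky_sigma * np.sqrt(npix))
\end{verbatim}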
\begin{table*}
\centering
\caption{Overview of the properties of the produced image
stacks.}
\begin{tabular}{llrrrrrrrr}
\hline\hline
Field & Passband & $T_\mathrm{int}$ & \#RBs & \#Exp. &
Seeing & PSF rms & $m_\mathrm{lim}$ & Grade &
Completeness\\
& & (s)& & & (arcsec) & &
(mag) & &(\%) \\
\hline
XMM-01 & $B$ & 1800 & 3 & 5 & 1.19 & 0.056 & 24.94 & A & 100 \\
XMM-01 & $V$ & 6599 & 3 & 15 & 0.97 & 0.056 & 25.32 & A & 150 \\
XMM-01 & $R$ & 3500 & 3 & 5 & 0.82 & 0.074 & 23.97 & B & 100 \\
XMM-01 & $I$ & 8998 & 3 & 30 & 0.69 & 0.063 & 23.53 & A & 100 \\
\hline
XMM-02 & $B$ & 1800 & 3 & 5 & 1.17 & 0.051 & 24.51 & A & 100 \\
XMM-02 & $V$ & 4399 & 3 & 10 & 0.96 & 0.076 & 24.63 & A & 100 \\
XMM-02 & $R$ & 3500 & 3 & 5 & 0.64 & 0.087 & 24.69 & A & 100 \\
XMM-02 & $I$ & 5998 & 3 & 20 & 0.94 & 0.079 & 23.84 & A & 67 \\
\hline
XMM-03 & $B$ & 1800 & 3 & 5 & 1.01 & 0.031 & 25.44 & A & 100 \\
XMM-03 & $V$ & 4399 & 3 & 10 & 0.86 & 0.068 & 25.35 & A & 100 \\
XMM-03 & $R$ & 11\,748 & 5 & 20 & 0.83 & 0.152 & 25.15 & A & 336 \\
XMM-03 & $I$ & 9297 & 3 & 31 & 0.96 & 0.061 & 24.39 & A & 103 \\
\hline
XMM-04 & $B$ & 1800 & 3 & 5 & 1.17 & 0.041 & 25.22 & A & 100 \\
XMM-04 & $V$ & 4399 & 3 & 10 & 1.07 & 0.050 & 25.05 & A & 100 \\
XMM-04 & $R$ & 11\,748 & 4 & 20 & 0.76 & 0.069 & 25.57 & A & 336 \\
XMM-04 & $I$ & 8998 & 3 & 30 & 0.87 & 0.066 & 24.83 & A & 100 \\
\hline
XMM-05 & $B$ & 1800 & 3 & 5 & 1.24 & 0.076 & 25.18 & A & 100 \\
XMM-05 & $V$ & 4399 & 3 & 10 & 1.51 & 0.063 & 24.80 & A & 100 \\
XMM-05 & $R$ & 12\,348 & 5 & 21 & 0.94 & 0.072 & 25.58 & A & 353 \\
XMM-05 & $I$ & 8998 & 3 & 30 & 1.09 & 0.056 & 24.58 & A & 100 \\
\hline
XMM-06 & $B$ & 1800 & 3 & 5 & 0.87 & 0.052 & 25.57 & A & 100 \\
XMM-06 & $V$ & 4399 & 3 & 10 & 0.73 & 0.039 & 25.43 & A & 100 \\
XMM-06 & $R$ & 14\,998 & 6 & 25 & 0.85 & 0.060 & 24.54 & A & 429 \\
XMM-06 & $I$ & 8998 & 3 & 30 & 0.74 & 0.044 & 24.40 & A & 100 \\
\hline
XMM-07 & $B$ & 2699 & 2 & 8 & 1.24 & 0.035 & 25.55 & A & 150 \\
XMM-07 & $V$ & 4399 & 3 & 10 & 1.10 & 0.050 & 25.37 & A & 100 \\
XMM-07 & $R$ & 15\,698 & 6 & 27 & 1.03 & 0.058 & 25.66 & A & 449 \\
XMM-07 & $I$ & 8998 & 3 & 30 & 0.95 & 0.048 & 24.96 & A & 100 \\
\hline
XMM-08 & $B$ & 1800 & 3 & 5 & 1.28 & 0.062 & 25.62 & A & 100 \\
XMM-08 & $V$ & 4399 & 3 & 10 & 1.03 & 0.082 & 24.93 & A & 100 \\
XMM-08 & $I$ & 8998 & 3 & 30 & 0.79 & 0.052 & 24.76 & B & 100 \\
\hline
XMM-09 & $B$ & 1800 & 3 & 5 & 0.94 & 0.045 & 24.59 & B & 100 \\
XMM-09 & $V$ & 4839 & 3 & 11 & 0.83 & 0.031 & 24.20 & B & 110 \\
XMM-09 & $I$ & 8998 & 3 & 30 & 0.72 & 0.038 & 23.81 & B & 100 \\
\hline
XMM-10 & $B$ & 1500 & 3 & 5 & 1.12 & 0.042 & 24.26 & B & 83 \\
XMM-10 & $R$ & 11\,748 & 5 & 20 & 0.88 & 0.049 & 24.62 & C & 336 \\
\hline
XMM-11 & $B$ & 1800 & 3 & 5 & 1.09 & 0.058 & 25.25 & A & 100 \\
XMM-11 & $V$ & 4399 & 3 & 10 & 0.77 & 0.075 & 24.03 & A & 100 \\
XMM-11 & $R$ & 3500 & 3 & 5 & 0.60 & 0.090 & 23.10 & A & 100 \\
XMM-11 & $I$ & 12\,297 & 5 & 41 & 0.77 & 0.087 & 22.64 & A & 137 \\
\hline
XMM-12 & $B$ & 1800 & 3 & 5 & 1.09 & 0.087 & 23.41 & B & 100 \\
XMM-12 & $V$ & 3519 & 2 & 8 & 0.79 & 0.085 & 23.48 & B & 80 \\
XMM-12 & $R$ & 4899 & 3 & 7 & 0.64 & 0.111 & 23.16 & B & 140 \\
XMM-12 & $I$ & 3599 & 3 & 12 & 1.21 & 0.093 & 22.01 & B & 40 \\
\hline
\end{tabular}
\label{tab:img-products}
\end{table*}
This table shows that for most stacks the desired limiting magnitude
was met in $V$ (24.92~mag) or even slightly exceeded in $B$
(25.20~mag). The $R$- and $I$-band images are slightly shallower than
originally proposed with median limiting magnitudes of $24.66$~mag and
$24.39$~mag. Still, when only the high-galactic latitude fields are
included the median limiting magnitudes are fainter -- 25.33 ($B$),
25.05 ($V$), 25.36 ($R$) and 24.58 ($I$)~mag. All magnitudes are given
in the AB system. The median seeing of all stacked images is
$0\farcs94$ with the best and worst values being $0\farcs60$ and
$1\farcs51$, respectively. This is significantly better than the
seeing requirement of $1\farcs2$ specified for this survey.
Finally, the following remarks can be made concerning the image stacks
and their calibration:
\begin{itemize}
\item \textbf{XMM-01 ($R$)} -- The background subtraction near bright stars
is poor. This field was observed as a single OB on February 3, 2003
for which no standard stars were observed. Since this is a galactic
field there are no complementary observations from the contributing
program, and therefore these observations cannot be calibrated.
\item \textbf{XMM-01 ($I$)} -- This field at low galactic latitude is
very crowded and no acceptable fringing map could be produced from the
science exposures in the field. The de-fringing was done with an
\emph{external fringing} map generated from science images taken on
empty fields close in time to the XMM-01 $I$-band observations.
\item \textbf{XMM-02 ($R$)} -- The observations for this pointing and
filter were done with one OB (5 exposures) on February 2,
2003, for which no standard star observations were carried out.
\item \textbf{XMM-02 ($I$)} -- The observations for this pointing and
filter were done with two OBs (10 exposures each) on
February 2, 2003, for which no standard star observations were
carried out. Like for XMM-01 ($I$) an external fringing map was
used.
\item \textbf{XMM-03 ($V$)} -- The $V$-band calibration on the night of March
26, 2003 yields a 3-parameter fit that deviates from the median of
the solutions by roughly 0.26 mag (less than $3\sigma$).
\item \textbf{XMM-04 ($I$)} -- Low level fringing is still visible in
the final stacked image.
\item \textbf{XMM-06 ($I$)} -- As in XMM-04, low level fringing is
still visible in the final stack.
\item \textbf{XMM-07 ($B$)} -- From the three reduced images available
only two were used for stacking because of the high amplitude of
noise in one of them which greatly affected the final product.
\item \textbf{XMM-07 ($R$)} -- This field was observed in four nights
(August 6, and September 23, 27, and 28, 2003) as part of the
contributing program. For the night of August 6 a 3-parameter fit
solution was obtained. However, this solution deviates by roughly
0.25~mag relative to the median of all $R$-band solutions.
\item \textbf{XMM-07 ($I$)} -- There is a visible stray light
reflection at the lower right corner of the image.
\item \textbf{XMM-08 ($V$)} -- The bright central galaxy is larger
than the dithering pattern, thus making it difficult to estimate the
background in its neighborhood. As a consequence the background
subtraction procedure does not work properly.
\item \textbf{XMM-08 ($I$)} -- The comments about the background
subtraction for the $V$-band image also apply to the $I$-band. This
field was observed using 3 OBs (which in this case also correspond
to 3 RBs) on two nights (March 30, 2003, one OB and April 2, 2003,
two OBs). On the night of April 2 a 3-parameter solution was
obtained for which the ZP determined deviates significantly (more
than $3\sigma$) from the median of all solutions, even though the
conditions of the night seem to have been adequate. The reason for
this poor solution is at present unknown. Poor fringing correction
is a possibility but needs to be confirmed. The zeropoint for the
two reduced images taken in this night has been replaced by a
default value.
\item \textbf{XMM-09 ($BVI$)} -- The preceding comment about
background subtraction (see XMM-08) can be repeated here for the
large galaxy in the North-West corner of the image. The background
subtraction procedure fails, creating a strong variation around the
galaxy.
\item \textbf{XMM-10 ($B$)}: This stack has a shorter exposure time
than the others released, leading to higher background noise.
\item \textbf{XMM-10 ($R$)} -- This field was observed in the nights of
August 6, and September 23 and 29, 2003 as part of contributing
program. As in case of XMM-07 the solution for August 6
deviates somewhat from the median.
\item \textbf{XMM-11 ($V$)} -- The same comments as for the photometric
calibration of XMM-03 ($V$) apply to this image.
\item \textbf{XMM-11 ($I$)} -- Like for XMM-01 ($I$) an external
fringing map was used.
\item \textbf{XMM-12 ($BR$)} -- The background subtraction near
bright stars is poor.
\item \textbf{XMM-12 ($V$)} -- The preceding comment about background
subtraction also applies to this image. In addition the comment
about the photometric calibration of XMM-03 ($V$-band) also applies
to this image.
\item \textbf{XMM-12 ($I$)} -- The comment about background
subtraction also applies to the $I$-band image. Like for XMM-01
($I$) an external fringing map was used.
\end{itemize}
Some improvements in the image quality may be possible in the future
by adopting a different observing strategy such as larger dithering
patterns to deal with more extended objects or shorter exposure times
to minimize the impact of fringing.
\subsection{Catalogs}
\label{sec:datacatalogs}
For the 8 fields located at high-galactic latitudes with $|b| >
30^\circ$, a total of 28 catalogs were produced (not all fields were
observed in all filters, see Table~\ref{tab:nights}). Catalogs for the
remaining low-galactic latitude fields were not produced since these
are crowded stellar fields for which SExtractor alone is not well
suited. As in the case of the Pre-FLAMES survey (Zaggia et al., in
preparation), it is preferable to use a PSF fitting algorithm such as
DAOPHOT \citep{1987PASP...99..191S}. Details about the catalog
production pipeline available in the EIS data reduction system are
presented in Appendix~\ref{sec:catalogs}.
As mentioned earlier, the fields considered here cover a range of
galactic latitudes of varying density of objects, in some cases with
bright point and extended sources in the field. In this sense this
survey is a useful benchmark to evaluate the performance of the
procedures adopted for the un-supervised extraction of sources and the
production of science-grade catalogs. This also required carrying out
tests to fine-tune the choice of input parameters to provide the best
possible compromise. Still, it should be emphasized that the catalogs
produced are in some sense general-purpose catalogs. Specific
science goals may require other choices of software (e.g. DAOPHOT,
IMCAT) and/or input parameters.
A key issue in the creation of catalogs is to minimize the number of
spurious detections and in general, the adopted extraction parameters
work well. However, there are unavoidable situations where this is not
the case. Among these are: (1) the presence of ghost images near
bright stars. Their location and size vary with position and magnitude
making it difficult to deal with them in an automatic way; (2) the
presence of bright galaxies because the algorithm for automatic
masking does not work well in this case; (3) residual fringing in the
image; (4) the presence of stray light, in particular, associated with
bright objects just outside the observed field; (5) when the image is
slightly rotated, the trimming procedure does not trim the corners of
the image correctly, leading to the inclusion of regions with a low
$S/N$. In these corners many spuriously detected objects are not
flagged as such. The XMM-Newton fields are a good showcase for these
various situations.
Another important issue to consider is the choice of the parameter
that controls the deblending of sources. Experience shows that the
effects of deblending depend on the type of field being considered
(e.g. empty or crowded fields, extended object, etc.) and vary across
the image. Some tests were carried out but further analysis of this
topic may be required.
A number of tests have also been carried out to find an adequate
compromise for the scaling factor used in the calculation of the size
of the automatic masks (see Appendix~\ref{sec:catalogs}) which depends
on the passband and the magnitude of the object. While the current
masking procedure generally works well, the optimal scaling will
require further investigation. It is also clear that for precision
work, such as e.g. lensing studies, additional masking by hand is
unavoidable. It should also be mentioned that occasionally the
masking of saturated stars fails. This occurs in five out of the 28
catalogs released and only for $\sim$10\% of the saturated stars in
them. These cases are likely to be of stars just barely saturated, at
the limit of the settings for automatic masking.
Bearing these points in mind, the following comments can be made
regarding some of the released catalogs:
\begin{itemize}
\item \textbf{XMM-03 ($B$)} -- The automatic masking misses a few
saturated stars.
\item \textbf{XMM-06 ($B$)} -- Due to a small rotation of the image by a few
degrees, the trimming frame does not mask the borders completely.
\item \textbf{XMM-06 ($V$)} -- The deblending near bright
galaxies is insufficient. Deblending near bright stars is too
strong.
\item \textbf{XMM-06 ($R$)} -- As in the $V$-band image the deblending
near bright galaxies is insufficient.
\item \textbf{XMM-06 ($I$)} -- As in the $V$-band image the deblending
near bright galaxies is insufficient. Spurious object detections are
caused by reflection features of bright stars and stray light
reflections.
\item \textbf{XMM-07 ($B$)} -- Spurious objects in the corners are
caused by insufficient trimming.
\item \textbf{XMM-08 ($B$)} -- Masks are missing for a number of
saturated stars. XMM-08 contains an extended, bright galaxy (NGC
4666) at the center of the image, plus a companion galaxy located
South-East of it. The presence of these galaxies leads to a large
number of spurious object detections in their surroundings in all
bands.
\item \textbf{XMM-08 ($VRI$)} -- See the comments about spurious
object detections for XMM-08 $B$-band.
\item \textbf{XMM-09 ($B$)} -- Cosmic rays are misidentified as real
objects. The very bright galaxy located at the North-West of the
image leads to the detection of a large number of spurious objects
extending over a large area ($10\arcmin\times10\arcmin$) in all
bands. Even though the galaxy has been automatically masked, the
affected area is much larger than that predicted by the algorithm,
which is optimized for stars. Thus, additional masking by hand would
be required.
\item \textbf{XMM-09 ($VI$)} -- See the comments about spurious
object detections for XMM-09 $B$-band.
\item \textbf{XMM-10 ($R$)} -- The stacked image was graded C because
of fringing. The fringing pattern causes a high number of spurious
object detections along the fringing pattern, leading to a catalog
with no scientific value. \emph{This catalog is released exclusively
as an illustration.}
\end{itemize}
\section{Discussion}
\label{sec:discussion}
\subsection {Comparison of counts and colors}
\label{sec:comp-counts-colors}
A key element in public surveys is to provide potential users with
information regarding the quality of the products released. To this
end a number of checks of the data are carried out and several
diagnostic plots summarizing the results are automatically produced
by the EIS Survey System. They are an integral part of the product
logs available from the survey release page. Due to the large number
of plots produced in the verification process these are not reproduced
here. Instead a small set illustrating the results are presented.
A relatively simple statistic that can be used to check the catalogs
and the star/galaxy separation criteria is to compare the star and
galaxy number counts derived from the data to that of other authors
and/or to model predictions. As an example, Fig.~\ref{fig:xmm07counts}
shows the galaxy counts in different observed passbands for the field
XMM-07. Here objects with CLASS\_STAR$<0.95$ or fainter than the
object classification limit were used to create the sample of
galaxies. Note that the number counts shown in the figure take into
account the effective area of the catalog, which is available in its
\texttt{FIELDS} table (see Appendix~\ref{sec:catalogs}). As can be
seen, the computed counts are consistent with those obtained by
previous authors for all passbands \citep{2001A&A...379..740A,
2001MNRAS.323..795M}.
\begin{figure*}
\sidecaption
\includegraphics[width=12cm]{3785fi03.eps}
\caption{Galaxy number counts for the XMM-07 field for the different
passbands as indicated in each panel. Full circles represent EIS
data points, open triangles \citet{2001MNRAS.323..795M}, open
squares~\citet{2001A&A...379..740A}.}
\label{fig:xmm07counts}
\end{figure*}
A complementary test is to compare the stellar counts to those
predicted by models, such as the galactic model of \citet[][and
references therein]{2005A&A...436..895G}. Generally, the agreement of
model predictions is excellent for $B$- and $V$-band catalogs,
becoming gradually worse for $R$- and especially in $I$-band, near the
classification limit, with the counts falling below model predictions
(e.g. XMM-09 $I$-band). Note, however, that plots of CLASS\_STAR
versus magnitude show a less well defined stellar locus for these
bands. It is thus reasonable to assume that the observed differences
between catalogs and model predictions are due to misclassification of
stars as galaxies. Alternatively, these may also reflect shortcomings
in the model adopted. However, a detailed discussion of this issue is
beyond the scope of the present paper.
While useful to detect gross errors, number counts are not
sufficiently sensitive to identify more subtle differences. The
comparison of expected colors of stars with theoretical models
provides a better test of the accuracy of the photometric calibration
in the different bands. Using color transformations computed in the
same way as in \citet{2002A&A...391..195G}, the theoretical colors of
stars can be obtained. Such comparisons were made for all five fields
with data in four passbands. The results for two cases, XMM-06 and
XMM-07 are illustrated in Figs.~\ref{fig:xmm06stellartracks}
and~\ref{fig:xmm07stellartracks}, respectively, which show $(B-V)
\times (V-I)$ and $(V-R)\times (R-I)$ diagrams. For XMM-06 the data
are in excellent agreement with the colors of stars predicted by the
theoretical model, with only a small ($\lesssim 0.05$ mag) offset in
$R-I$, indicating a good calibration. On the other hand, for XMM-07,
one observes a significant offset ($\sim 0.2$~mag) in $B-V$. This
field was chosen because it exemplifies the worst offset observed
relative to the theoretical models. Since this offset is only visible
in the $(B-V) \times (V-I)$ diagram, it suggests a problem in the
$B$-band data. Data for this field/filter combination comes from two
nights 2003-06-30 and 2003-08-06. Closer inspection of the
observations in the night of 2003-06-30 shows that: (1) the
standard stars observations span only 2 hours in the middle of the
night; (2) the photometric zeropoint derived using the available
measurements (24.59~mag) is reasonably close ($\sim 1\sigma$) to the
median value of the long-term trend (24.71~mag); (3) the $B$
exposures were taken close to sunrise; and (4) there was a significant
increase in the amplitude of the DIMM seeing at the time the XMM-07
exposures under consideration were taken. For the night of 2003-08-06
standards cover a much larger time interval, yielding a
zeropoint of 24.82~mag with comparable difference relative to the
long-term median value for this filter as given above. The results
suggest that the observed problem is not related to the calibration of
the night, as will be seen below.
\subsection{Comparison with other reductions}
\label{sec:external-comparison}
\begin{table*}
\centering
\caption{Summary of the astrometric and photometric comparison. All
differences were computed EIS$-$GaBoDS.}
\begin{tabular}{lllrrr}
\hline\hline
Field & Target & Passband & $\Delta \alpha \cos(\delta)$ &
$\Delta \delta$ & $\Delta m$ \\
& & & (arcsec) & (arcsec) & (mag) \\\hline
XMM-03 & HE~1104$-$1805 & $B$ & $0.02 \pm 0.04$ & $-0.01 \pm 0.03$ & $0.13
\pm 0.02$ \\
XMM-03 & HE~1104$-$1805 & $R$ & $0.02 \pm 0.05$ & $-0.01 \pm 0.05$ & $0.01
\pm 0.04$ \\
XMM-04 & MS~1054.4$-$0321 & $B$ & $0.00 \pm 0.05$ & $0.00 \pm 0.05$ & $0.12
\pm 0.04$ \\
XMM-04 & MS~1054.4$-$0321 & $V$ & $0.02 \pm 0.05$ & $-0.00 \pm 0.05$ & $0.00
\pm 0.04$ \\
XMM-04 & MS~1054.4$-$0321 & $R$ & $0.03 \pm 0.06$ & $0.00 \pm 0.06$ & $0.00
\pm 0.03$ \\
XMM-05 & BPM~16274 & $B$ & $0.00 \pm 0.06$ & $-0.01 \pm 0.06$ & $0.05 \pm
0.04$ \\
XMM-05 & BPM~16274 & $R$ & $0.03 \pm 0.09$ & $-0.01 \pm 0.09$ & $0.18 \pm
0.04$ \\
XMM-06 & RX J0505.3$-$2849 & $B$ & $0.02 \pm 0.04$ & $0.00 \pm 0.04$ &
$0.00 \pm 0.02$ \\
XMM-06 & RX J0505.3$-$2849 & $V$ & $0.02 \pm 0.04$ & $-0.01 \pm 0.04$ &
$-0.04 \pm 0.04$ \\
XMM-06 & RX J0505.3$-$2849 & $R$ & $0.01 \pm 0.03$ & $0.01 \pm 0.04$ &
$0.05 \pm 0.03$ \\
XMM-07 & LBQS~2212$-$1759 & $B$ & $0.02 \pm 0.06$ & $-0.02 \pm 0.06$ &
$0.34 \pm 0.04$ \\
XMM-08 & NGC~4666 & $B$ & $0.00 \pm 0.06$ & $-0.01 \pm 0.05$ & $0.00 \pm
0.03$ \\
XMM-08 & NGC~4666 & $V$ & $0.00 \pm 0.06$ & $-0.01 \pm 0.05$ & $-0.01 \pm
0.03$ \\
XMM-09 & QSO~B1246$-$057 & $B$ & $0.02 \pm 0.06$ & $-0.01 \pm 0.05$ & $0.02
\pm 0.02$ \\
XMM-10 & PB~5062 & $B$ & $-0.03 \pm 0.05$ & $-0.01 \pm 0.07$ & $0.05 \pm
0.02$ \\\hline
\end{tabular}
\label{tab:comparison}
\end{table*}
As shown above, comparison of different statistics, based on the
sources extracted from the final image stacks, to those of other
authors and to model predictions provide an internal means to assess
the quality of the data products. However, in the particular case of
this survey one can also benefit from the fact that about one third of
the accumulated data has been independently reduced by the Bonn group
in charge of the contributing program (Sect.~\ref{sec:observations}).
The images in common are used in this section to make a direct
comparison of the astrometric and photometric calibrations. In their
reduction, the Bonn group used their ``Garching-Bonn Deep Survey''
(GaBoDS) pipeline \citep{2005AN....326..432E}.
A total of 15 stacked images in the $B$-, $V$-, and $R$-bands were
produced and compared to those produced by the EIS/MVM pipeline. The
astrometric calibration was done using the GSC-2.2 catalog, the same
as that of the EIS reduction. In contrast to the reductions carried
out by the EIS system, images were photometrically calibrated using
the measurements of standard stars compiled by
\citet{2000PASP..112..925S}. The type of solution (number of free
parameters of a linear fit) for a night was decided on a case by case
basis after visual inspection of the linear fits.
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{3785fi04.eps}}
\caption{$BVI$ (\emph{left panel}) and $VRI$ (\emph{right panel})
color-color plots for stellar objects in the XMM-06 field (large
dots) compared with the colors obtained from theoretical models (small dots).}
\label{fig:xmm06stellartracks}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{\includegraphics{3785fi05.eps}}
\caption{Same as Fig.~\ref{fig:xmm06stellartracks} for XMM-07.}
\label{fig:xmm07stellartracks}
\end{figure}
To carry out the comparison of the data products, catalogs were
produced from the EIS and GaBoDS images using the same extraction
parameters. These catalogs were associated with each other to produce
a merged catalog for each field and passband. The results of this
comparison for all the available images in common are presented in
Table~\ref{tab:comparison}. The table gives: in Col.~1 the field name;
in Col.~2 the original target name; in Cols.~3 and 4 the mean offset
and standard deviation in right ascension and declination in
arcseconds; in Col.~5 the mean and standard deviation of the magnitude
differences as measured within an aperture of 3\arcsec. The mean and
standard deviation of the magnitude differences were determined in the
interval $17<m<21$. This range was chosen to avoid saturated objects
at the bright end and to limit the comparison to objects whose
estimated error in magnitude is smaller than about $0.01$~mag at the
faint end. An iterative $5\sigma$ rejection, which allowed rejected
points to re-enter if they are compatible with later determinations of
the mean and variance, was employed to ignore obvious outliers in the
computation of the mean and the standard deviation.
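For concreteness, the clipping scheme can be sketched as follows (a
minimal Python illustration, not the actual implementation used by the
EIS system; the array \texttt{dm} of per-object magnitude differences,
already restricted to $17<m<21$, is assumed):
\begin{verbatim}
import numpy as np

def clipped_stats(dm, nsigma=5.0, max_iter=20):
    # Iterative clipping with re-entry: *all* points are re-tested
    # against each new estimate of the mean and standard deviation,
    # so previously rejected points may re-enter the sample.
    dm = np.asarray(dm, dtype=float)
    keep = np.ones(dm.size, dtype=bool)
    for _ in range(max_iter):
        mean, std = dm[keep].mean(), dm[keep].std()
        new_keep = np.abs(dm - mean) < nsigma * std
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return mean, std
\end{verbatim}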
Fig.~\ref{fig:sharc2r.astrom} illustrates the results obtained from
the comparison of the position of sources extracted from images
produced by the two pipelines for the particular case of XMM-06 in
$R$-band. From the figure one can see that the positions of the
sources agree remarkably well. In fact, as summarized in
Table~\ref{tab:comparison}, the typical mean deviation is $\sim
20$~mas with a standard deviation of $\sim 50$~mas, confirming the
excellent agreement in the \emph{external} (absolute) astrometric
calibration to be distinguished from the \emph{internal} calibration
discussed later.
Fig.~\ref{fig:MS1054.R.EIS-GaBoDS} shows a plot of the magnitude
differences measured on the GaBoDS $R$-band image of the field
MS~1054.4$-$0321 (XMM-04) versus the magnitudes measured on the
corresponding EIS image. This field shows that the photometry of both
reductions agree remarkably well. The measured scatter of the
magnitude differences is small ($\sim 0.03$~mag) for this as well as
for most other fields. This result indicates that the internal
procedures used by the two pipelines to estimate chip-to-chip
variations are consistent. Moreover, inspection of the last column of
Table~\ref{tab:comparison} shows that for 11 out of 15 cases the mean
offsets are $\lesssim$0.05~mag. This is reassuring for both pipelines
considering all the differences involved in the process, which include
differences in the routines, procedures and the standard stars
used. It is important to emphasize that differences in the computed
zeropoint of the photometric solutions are $\lesssim$ 0.08~mag, even for
the cases with the largest differences such as XMM-05 ($R$) and XMM-07
($B$). The value of 0.08~mag is consistent with the scatter measured
from the long-term trend shown by the zeropoints computed over a large
time interval, as presented in the EIS release of WFI photometric
solutions, thus representing the uncertainty in the photometric
calibration. Therefore, the offsets reported in the table cannot be
explained by differences in the photometric calibration alone. This
point is investigated in more detail for XMM-05 $R$-band and XMM-07
$B$-band.
\begin{figure}[ht]
\resizebox{\hsize}{!}{\includegraphics{3785fi06.eps}}
\caption{Comparison of astrometry for the $R$-band image of
XMM-06 (RX~J0505.3$-$2849), selected to represent a
typical case. The offsets are
computed EIS$-$GaBoDS. The dashed lines are centered on $(0,0)$,
while the solid lines denote the actual barycenter of the points.}
\label{fig:sharc2r.astrom}
\end{figure}
All $R$-band images for XMM-05 were taken in one night and the
photometric solutions determined by both teams agree very well. While
the source of the discrepancy has not yet been identified, the stellar
locus in the $(B-V) \times (V-I)$ and $(V-R) \times (R-I)$ diagrams
based on the source catalog extracted from the EIS images yield
results which are consistent with model predictions, suggesting that
the problem may lie in the Bonn reductions. On the other hand, the
large offset (0.34~mag) between the $B$-band observations of the field
XMM-07 is most likely caused by the data taken in the night
2003-06-30. While the standard star observations in this night suggest
relatively good photometric conditions, the available measurements
span only about 2 hours in the middle of the night, while the science
exposures were taken at the very end of the night. Inspection of the
ambient condition shows a rapid increase in the amplitude of the DIMM
seeing which could be related to a localized variation in the
transparency. In fact, the Bonn pipeline, which monitors the relative
differences in magnitude for objects extracted from different
exposures in an OB, finds strong flux variations that could be caused
by changes in the sky transparency or by the twilight at sunrise. The
latter could also account for the fact that these observations were
later repeated in August of that year. The important point is that the
Bonn group discarded the calibration of the frames taken in
2003-06-30, while the automatic procedure adopted by EIS did not.
\begin{figure}[ht]
\resizebox{\hsize}{!}{\includegraphics{3785fi07.eps}}
\caption{Comparison of aperture magnitudes (3\arcsec\ aperture)
measured on the $R$-band image of the field XMM-04
(MS~1054.4$-$0321). The dashed line is at a magnitude difference of
0, while the solid line denotes the actual offset between the EIS
and the GaBoDS reduction. The difference at the bright end is caused
by different treatments of saturated objects in both pipelines.}
\label{fig:MS1054.R.EIS-GaBoDS}
\end{figure}
\begin{figure*}
\sidecaption
\includegraphics[width=12cm]{3785fi08.eps}
\caption{Stellar PSF pattern in the $R$-band images of the field
MS~1054.4$-$0321 in the EIS reduction (left panel) and the GaBoDS
reduction (right panel). The encircled stick in the left panel
denotes an ellipticity of $\varepsilon = 0.01$. Both plots have the
same scale.}
\label{fig:psf_comp}
\end{figure*}
In addition to evaluating the accuracy of the image registration and
photometric calibration, the independent reductions also offer the
possibility to evaluate the shape of the images. To this end the PSF
of bright, non-saturated stars on the $R$-band images for XMM-04
(MS~1054.4$-$0321) and XMM-06 (RX~J0505.3$-$2849) were measured and
compared. These are the only two cases in which the final stacked
images were produced from exactly the same set of reduced images, a
consequence of the differences in the criteria adopted in building the
SBs. In the case of XMM-06, one finds that the size and pattern of
the PSF are in good agreement and both reductions yield a smooth PSF
with no obvious effects of chip boundaries over the whole field. The
situation is different for XMM-04 as can be seen in
Fig.~\ref{fig:psf_comp}, which shows a map of the PSF distortion
obtained by the EIS (left panel) and Bonn (right panel) groups. While
the overall pattern of distortion is similar, the amplitude of the PSF
distortion of the EIS reduction is significantly larger and exhibits
jumps across chip borders. Although the effect is small in absolute
terms, it should be taken into account for applications relying on
accurate shape measurements. The reason for these differences is
likely due to the fact that the astrometric calibration in the EIS
pipeline is done for each chip relative to an absolute \emph{external}
reference, without using the additional constraint that the chips are
rigidly mounted to form a mosaic. By neglecting this constraint, the
solution for each chip in the mosaic may vary slightly depending on
the dithered exposure being considered and the density and spatial
distribution of the reference stars in and around the field of
interest. Since the accuracy of the GSC-2.2 of $250$~mas is
approximately equal to the pixel size of WFI of $0\farcs{238}$, in
addition to the absolute calibration of the image centroid, finding an
\emph{internal} relative astrometric solution further ensures that
images in different dithered exposures map more precisely onto each
other during co-addition. Imperfections in the \emph{internal}
relative astrometry result in objects not being matched exactly onto
each other, thereby degrading the PSF of the co-added image.
\subsection{X-ray/optical correlation}
\label{sec:x-rayopt-corr}
As pointed out in the introduction, the ultimate goal of this optical
survey has been to provide catalogs from which one can identify and
characterize the optical properties of X-ray sources detected with
deep XMM-Newton exposures.
X-ray source lists for the high-galactic latitude fields were produced
by the AIP-node of the SSC. These are based on
pipeline processed event lists which were obtained with the latest
official version of the Software Analysis System (SAS-V6.1). In its
current version this SAS-based pipeline does not work with stacked
images.
Source detection was performed as a three-stage process using
\texttt{eboxdetect} in local and in map mode followed by a multi-PSF
fit with \texttt{emldetect} for all sources present in the initial
source lists. The multi-PSF fit invoked here works on 15 input images,
i.e.~5 per EPIC camera. The five energy bands used per camera cover
the ranges: (1) 0.1--0.5~keV; (2) 0.5--1.0~keV; (3) 1.0--2.0~keV; (4)
2.0--4.5~keV; and (5) 4.5--12.0~keV.
The SAS task \texttt{eposcorr} was applied to the X-ray source list.
\texttt{Eposcorr} correlates the X-ray source positions with the
positions from an optical source catalog, in this case the EIS
catalog, to correct the X-ray positions, assuming that the true
counterparts are contained in the reference catalogue.
The source detection scheme used here is very similar to the pipeline
implemented for the production of the second XMM-Newton catalog of
X-ray sources to be published by the XMM-Newton-SSC later in 2005
(Watson et al., in preparation). This approach is superior to that
used for the creation of source lists which are currently stored in
the XMM-Newton Science Archive since it makes use of X-ray photons
from all cameras simultaneously. It also distinguishes between
point-like and extended X-ray sources. In this paper we only consider
point-like sources. Extended sources at high galactic latitudes are
almost exclusively galaxy clusters and cannot be matched with
individual objects in the optical catalogs. Examining their properties
is beyond the scope of this work.
In carrying out the matching between the XMM-Newton source lists and those
extracted from the optical images, it is important to note that the
X-ray images lie fully within the FOV of the WFI images. Hence an
optical counterpart can be potentially found for any of the X-ray
sources.
Among the high-galactic latitude fields, three have more than one
observation. For XMM-05 only the two available observations with
good time $t > 10$~ks were considered; one of them had technical
problems that prevented it from being used for catalog extraction.
For XMM-09 the source list created contained many spurious sources due
to remaining calibration uncertainties in the pipeline processed
images and was not considered further. Figures showing the results of
the source detection process with all sources indicated on an image in
TIFF format, the composite X-ray images, and the source lists can be
found on the web-page of the
AIP-SSC-node.\footnote{\url{http://www.aip.de/groups/xray/XMM_EIS/}}
Below these source lists are used to identify their optical
counterparts.
The extraction yields 995 point-like X-ray sources of which 742 are
unique. The difference between these two numbers reflects differences
in the three independent source lists extracted from the field XMM-07.
The mean flux of the 742 unique X-ray sources is
$F_\mathrm{mean}(0.5-2.0\,\mathrm{keV}) = 8.5 \times
10^{-15}$\,erg\,cm$^{-2}$\,s$^{-1}$, the median flux in this band is
$F_\mathrm{med}(0.5-2.0\,\mathrm{keV}) = 3.7 \times
10^{-15}$\,erg\,cm$^{-2}$\,s$^{-1}$. Sources with
$F(0.5-2.0\,\mathrm{keV}) = 4 \times
10^{-15}$\,erg\,cm$^{-2}$\,s$^{-1}$ are detected already with an
exposure time of 5~ks, while the limiting flux in the EIS-XMM fields at the
deepest exposure levels is $F_\mathrm{lim} \simeq 3 \times
10^{-16}$\,erg\,cm$^{-2}$\,s$^{-1}$.
\begin{table*}
\centering
\caption{Contents of X-ray source lists for high-latitude XMM-EIS fields.}
\begin{tabular}{llrcrrrrrrrrrrr}
\hline\hline
Field & Obs. ID & $N_\mathrm{s}$ & Passband & $N_\mathrm{m}$ &
$N_\mathrm{1}$ & $(N_\mathrm{1}/N_\mathrm{s})(\%)$ &
$N_\mathrm{all}$ & $N_\mathrm{1,all}$ & $B$ & $V$ & $I$ & $BV$ &
$BI$ & $BVI$\\
\multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)} & \multicolumn{1}{c}{(9)} & \multicolumn{1}{c}{(10)} & \multicolumn{1}{c}{(11)}
& \multicolumn{1}{c}{(12)} & \multicolumn{1}{c}{(13)} &
\multicolumn{1}{c}{(14)} & \multicolumn{1}{c}{(15)} \\\hline
XMM-03 & 0112630101 & 69 & $R$ & 92 & 61 & 88 & 62 & 47 &
1 & 1 & 1 & 0 & 0 & 0 \\
& & 69 & & 36 & 35 & 51 & 27 &
27 & 0 & 0 & 0 & 0 & 0 & 0 \\
XMM-04 & 0094800101 & 101 & $R$ & 156 & 91 & 90 & 84 & 68 &
0 & 0 & 1 & 1 & 0 & 0 \\
& & 101 & & 74 & 73 & 72 & 57 & 57 &
0 & 1 & 1 & 0 & 0 & 1 \\
XMM-05 & 0125320401 & 89 & $R$ & 130 & 79 & 89 & 52 & 42 &
1 & 0 & 2 & 0 & 0 & 0 \\
& & 89 & & 59 & 57 & 64 & 37 & 37 &
1 & 0 & 2 & 1 & 0 & 0 \\
XMM-06 & 0111160201 & 110 & $R$ & 173 & 101 & 92 & 105 & 79 &
1 & 1 & 0 & 0 & 0 & 0 \\
& & 110 & & 76 & 72 & 65 & 61 & 58 &
3 & 0 & 1 & 0 & 0 & 0 \\
XMM-07 & 0106660101 & 144 & $R$ & 191 & 119 & 83 & 84 & 66 &
3 & 0 & 2 & 1 & 0 & 2 \\
& & 144 & & 82 & 81 & 56 & 52 & 52 &
2 & 2 & 1 & 1& 0 & 1 \\
XMM-07 & 0106660201 & 110 & $R$ & 134 & 91 & 83 & 65 & 53 &
3 & 0 & 2 & 1 & 0 & 2 \\
& & 110 & & 62 & 61 & 56 & 39 & 39 &
1 & 0 & 0 & 2 & 1 & 0 \\
XMM-07 & 0106660601 & 162 & $R$ & 211 & 139 & 86 & 93 & 76 &
1 & 0 & 1 & 2 & 0 & 0 \\
& & 162 & & 100 & 98 & 60 & 59 & 59 &
1 & 1 & 1 & 2 & 0 & 1 \\
XMM-08 & 0110980201 & 123 & $I$ & 130 & 97 & 79 & 82 & 70 &
1 & 0 & --- & 0 & --- & --- \\
& & 123 & & 73 & 73 & 59 & 58 & 58 &
0 & 2 & --- & 1 & --- & --- \\
XMM-10 & 0012440301 & 88 & $R$ & 113 & 76 & 86 & 50 & 46 &
1 & --- & --- & --- & --- & --- \\
& & 88 & & 57 & 56 & 64 & 35 & 34 &
3 & --- & --- & --- & --- & --- \\\hline
\end{tabular}
\label{tab:xs}
\end{table*}
Nearly all the X-ray source lists were matched to catalogs extracted
from the $R$-band images, with the exception of field XMM-08, which
was correlated with the $I$-band. Two search radii, 2\arcsec\ and
5\arcsec, were used. The larger value reflects the typical statistical
error in X-ray source position determination (typically in the range
$\sim0.5\arcsec\text{--}2\arcsec$), coupled with an additional
systematic error component ($\sim1\arcsec$) in the attitude of the
spacecraft. Hence, a matching radius of 5\arcsec\ corresponds to
roughly a $2\text{--}3\sigma$ uncertainty for most of the sources.
The smaller correlation radius is justified by the distribution of the
positional accuracy of the X-ray sources, which peaks at $\sim
1\farcs3$. It extends up to 3\arcsec\ with the majority of sources
(92\%) being within 2\arcsec.
The results of X-ray source extraction and their cross-identification
with their optical counterparts for 7 high-galactic latitude fields (9
observations) are summarized in Table~\ref{tab:xs}. For each field two
rows are given: the first row refers to the matching done with a
5\arcsec\ search radius, in the second row the numbers for the smaller
2\arcsec\ search radius are reported. The table lists: in Col.~1 the
field name; in Col.~2 the Obs.~ID of the XMM-Newton observation; in
Col.~3 the number of detected X-ray point sources with a likelihood of
existence larger than \texttt{detml = 6}, $N_\mathrm{s}$; in Col.~4
the passband of the catalog used as the optical reference for
matching; in Col.~5 the number of matches, $N_\mathrm{m}$ within
5\arcsec\ (2\arcsec); in Col.~6 the number of X-ray sources with at
least one match $N_\mathrm{1}$. In the case of multiple matches the
$m=1$ sources refer to the closest matching optical source; in Col.~7
the identification rate $N_\mathrm{1}/N_\mathrm{s}$; in Col.~8, the
number of X-ray sources, which have at least one counterpart in the
optical reference catalog, and are also detected in all other
available optical passbands. This is a subset of the objects listed in
Col.~5; in Col.~9, the same as in the previous column but for the
$m=1$ optical counterparts, $N_\mathrm{1,all}$, which is a subset of
the objects listed in Col.~6; finally, in Col.~10--15 the number of
optical counterparts in other passbands, which are not detected in the
reference catalog. In Col.~13--15 $BV$, $BI$ and $BVI$ refer to
objects which are simultaneously detected in the respective passband
but do not correspond to matches of X-ray sources with the reference
catalog. Because we only list objects \emph{without} match in the
reference catalog the number of objects reported in Col.~10--15 is in
some cases higher in the second row than in the first row. These are
X-ray sources with matches in $B$-, $V$-, or $I$-band within a circle
of 2\arcsec\ having matches in the reference catalog only in the
larger 5\arcsec\ search radius.
The results of the X-ray/optical cross-correlation for all fields with
available $R$-band catalogs (619 unique sources) are displayed in
Fig.~\ref{fig:xoc}. The figure shows: (top left) the multiplicity
function; (top right) the cumulative fraction of X-ray sources with
optical counterparts in a 5\arcsec search radius (dashed line) and a
2\arcsec search radius (solid line); (bottom left) the distribution
of the positional offsets between X-ray and optical sources; and
(bottom right) the corresponding scatter plot in the
$\alpha~\times~\delta$ plane. Note that in three panels all $m=1$
matches are represented by filled histograms and/or larger symbols.
Inspection of Fig.~\ref{fig:xoc} shows that: (1) about 87\% (61\%) of
the X-ray point sources have at least one optical counterpart within
the search radius of 5\arcsec\ (2\arcsec) down to $R\sim25$~mag, and
very few sources have more than 3 matches. In only very few cases one
finds up to five associated optical sources, i.e.~potential physical
counterparts; (2) only about 15\% of the X-ray sources have
counterparts down to the Digital Sky Survey magnitude limit ($R \sim
20.5$), underscoring the need for dedicated optical imaging in order
to identify the X-ray source population; (3) the distribution of the
X-ray/optical positional offset peaks at around 1\arcsec\ for the
sources with $m=1$. The $m=1$ matches are well concentrated within a
circle of 2\arcsec. The distribution is almost flat if all
associations are considered. This underlines that the true physical
counterparts to the X-ray sources will be found predominantly among
the $m = 1$ sources, i.e.~the nearest and in most cases single
associated optical sources; (4) the positional differences between
X-ray and optical coordinates seem to be randomly distributed.
\begin{figure*}
\centering
\includegraphics[angle=-90, width=17cm]{3785fi09.eps}
\caption{X-ray/optical $R$-band positional correlation. (\textit{top
left}) number of correlated optical sources to X-ray sources
within 5\arcsec; (\textit{top right}) cumulative fraction of X-ray
sources with optical counterparts in the $R$-band catalog. The
dashed line is for the 5\arcsec search radius, the solid line
for the 2\arcsec search radius. The vertical short-dashed line
denotes the approximate limit of the DSS; (\textit{bottom left})
distribution of X-ray minus optical positional offset of all and
$m=1$ sources; (\textit{bottom right}) distribution of positional
offsets in the right ascension -- declination plane.}
\label{fig:xoc}
\end{figure*}
The statement that the additional correlations found within the
greater search radius are chance alignments is strengthened by an
estimate of the number of random matches between X-ray and optical
sources. Using an average of 110 X-ray sources per field we can
compute the total area covered by the search circles with 5\arcsec
(2\arcsec) radius. Multiplying this with the typical number density
of sources in the optical catalogs (30~arcmin$^{-2}$) we estimate 70
(12) random matches for an average field. The number of random matches
within the smaller correlation radius is well below the observed number
of optical/X-ray counterparts.
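This estimate reduces to simple arithmetic, reproduced below (a sketch
using the numbers quoted above):
\begin{verbatim}
import math

n_xray  = 110    # average number of X-ray sources per field
density = 30.0   # optical source density [arcmin^-2]
for r in (5.0, 2.0):                         # search radii [arcsec]
    area = n_xray * math.pi * (r / 60.0)**2  # total search area [arcmin^2]
    print(r, density * area)                 # -> about 70 and 12 matches
\end{verbatim}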
In addition, from this preliminary analysis the following conclusions
can be drawn: (1) about 39\% of the X-ray sources have no associated
optical source within 2\arcsec. This optical identification
completeness is comparable with that found by
\citet{2005ApJS..156...35E} in a similar study of X-ray source
samples, but there may also be small contributions from sources with
larger offsets than allowed by the adopted search radius,
contamination by spurious X-ray sources, and random matches with the
optical catalogs; (2) over 50\% of the $m=1$ sources are detected in
all the other bands available, within 1\arcsec; (3) correlations which
occur at search radii greater than 2\arcsec\ are most likely random
correlations with the comparably dense optical catalogs; (4) all
sources which are detected in $RI$ are also detected in $BV$,
indicating that the optical counterparts of the X-ray sources are not
excessively red or, even if they are red, the blue images are
sufficiently deep to detect them; (5) the small number of X-ray
sources matched with objects in the $B$- and $V$-band catalogs without
matches in the $R$-band catalog suggests that we are also not dealing
with excessively blue objects. Figure~\ref{fig:colors06} shows
color-color diagrams for the field XMM-06 for stars and galaxies in
the field and compares it with the optical colors of X-ray sources
matching objects in the optical catalogs. Stars and galaxies were
selected using the SExtractor CLASS\_STAR classifier with the cuts
made at $\mathrm{CLASS\_STAR}<0.1$ and $\mathrm{CLASS\_STAR}>0.99$ for
galaxies and stars, respectively. These diagrams show that, as one
would expect, no specific sub-population of stars or galaxies can be
identified with the X-ray sources.
\begin{figure*}
\centering
\includegraphics[angle=-90,width=17cm]{3785fi10.eps}
\caption{Optical colors for galaxies (top panels) and stars (bottom
panels) in the field XMM-06. The black squares mark X-ray sources
in this field with matches to the optical catalogs in all four
passbands.}
\label{fig:colors06}
\end{figure*}
\section{Summary}
\label{sec:summary}
This paper describes the data products -- reduced and stacked images
as well as science-grade catalogs extracted from the latter --
produced and released for the XMM-Newton follow-up survey performed
with WFI at the ESO/MPG-2.2m telescope as part of the ESO Imaging
Survey project. The survey was carried out as a collaboration between
the EIS, XMM-Newton-SSC and IAEF-Bonn groups. At the time of writing 15 WFI
fields (3.75~square degrees) have been observed for this survey of
which 12 were released in the fall of 2004, with corrections to the
weight maps in July 2005, and are described in this
paper. For the 8 fields at high galactic latitude catalogs are also
presented.
The images were reduced employing the EIS/MVM image processing library
and photometrically calibrated using the EIS data reduction
system. The EIS system was also used to produce more advanced survey
products (stacks and catalogs), to assess their quality, and to make
them publicly available via the web, together with comprehensive
product logs. The quality of the data products reported in the logs is
based on the comparison of different statistical measures such as
galaxy and star number counts and the locus of stars in color-color
diagrams with results obtained in previous works as well as
predictions of theoretical models calibrated by independent
studies. These diagnostics are regularly produced by the system
forming an integral part of it.
In the particular case of this survey, a number of frames have been
reduced by both EIS and the Bonn group, using independent software
thus allowing a direct comparison of the resulting images and catalogs
to be made. From this comparison one finds that the position of the
sources extracted from images produced for the same field/filter
combination by the different pipelines are in excellent agreement with
a mean offset of $\sim$ 20~mas and a standard deviation of $\sim$
50~mas. Comparison of the magnitudes of the extracted sources shows
that in general the mean offset is $\lesssim 0.05$~mag, consistent with
the estimated error of the photometric calibration of about
$0.08$~mag. Cases with larger deviations were investigated further and
the problems with the two most extreme cases were found to be unrelated
to the calibration procedure. Instead, they demonstrate the need for
the implementation of additional procedures to cope with the specific
situation encountered and the need for a better calibration plan.
This discussion illustrates a couple of important points. First, that
while an automatic process is prone to errors in dealing with extreme
but rare situations, reductions carried out with human
intervention are prone to random errors which can never be eliminated.
Second, more robust procedures can always be added or existing ones
tuned to deal with exceptions once they are found. However, as always
when dealing with automatic reduction of large volumes of data, the
real issue is to decide on the trade-off between coping with these
rare exceptions and the speed of the process and margin of failures
one is willing to accept.
Finally, a comparison of the PSF distortions suggests that some
improvement could be achieved by requiring the EIS/MVM to impose an
additional constraint on the astrometric solution to improve the
\emph{internal} registration. As mentioned earlier this can be
achieved by imposing the geometrical constraint that the CCDs form a
mosaic.
Preliminary catalogs were also extracted from the available X-ray
images and cross-correlated with the source lists produced from the
$R$-band images. From this analysis one finds that about 61\% of the
X-ray sources have an optical counterpart within 2\arcsec, most of
which are unique. Out of these about 70\% are detected in all the
available passbands. Combined, these results indicate that the adopted
observing strategy successfully yields the expected results of
producing a large population of X-ray sources ($\sim$ 300) with
photometric information in four passbands, therefore enabling a
tentative classification and redshift estimation, sufficiently faint
to require follow-up observations with the VLT.
The present paper is one in the series presenting the
results of a variety of optical/infrared surveys carried out by the
EIS project.
\begin{acknowledgement}
The results presented in this paper are partly based on observations
obtained with XMM-Newton, an ESA science mission with instruments
and contributions directly funded by ESA Member States and
NASA. This paper was supported in part by the German DLR under
contract number 50OX0201. JPD was supported by the EIS visitor
programme, by the German Ministry for Science and Education (BMBF)
through DESY under the project 05AE2PDA/8, and by the Deutsche
Forschungsgemeinschaft under the project SCHN 342/3--1. LFO
acknowledges financial support from the Carlsberg Foundation, the
Danish Natural Science Research Council and the Poincar\'e
Fellowship program at Observatoire de la C\^ote d'Azur.
\end{acknowledgement}
\bibliographystyle{aa}
\section*{Introduction}
Free probability has received much attention since its discovery as
an algebraic structure for noncommuting op\-e\-ra\-tors.\cite{Voiculescu1985}
Subsequently, it has found a place in combinatorics with the deep
relationship between free cumulants and noncrossing partitions.\cite{Nica2006a,Novak2011}
Free probability for random matrices usually focuses on the asymptotic
freeness of infinite matrices.\cite{Voiculescu1991,Biane1998} In
contrast, we investigate here how free probability offers us new perspectives
on finite-dimensional matrices. We develop and extend the notion of
freeness to finite random matrices using linear algebra and elementary
statistics, without requiring intricate knowledge of operator algebras
or combinatorics.
In this paper, we consider the problem of calculating eigenvalues
of sums of finite matrices given the eigenvalues of the individual
matrices, as an illustration of the power of free probability theory.
In general, the eigenvalues of the sum of two matrices $A+B$ are
not simply the sums of the eigenvalues of the individual matrices
$A$ and $B$;\cite{Knutson2001} as matrices do not generally commute,
the addition of eigenvalues must take into account the relative orientations
of eigenvectors. However, free probability does allow us to do this
calculation in the limiting case where the rotation matrix between
the two bases is so random as to be uniformly oriented, i.e. of uniform
Haar measure. The matrices $A$ and $B$ are then said to be in generic
position, or free, and the eigenvalue spectrum of $A+B$ converges,
in a sense, to the additive free convolution $A\boxplus B$ of two
random matrices $A$ and $B$ as the matrix dimensions increase to
infinity.\cite{Nica2006a}
A natural question to ask is how accurately $A\boxplus B$ approximates
the exact eigenvalue spectrum, or density of states (d.o.s.), of the
sum $A+B$ when the individual matrices are known to be noncommuting
but not necessarily free. We seek to quantify this statement in this
paper. In addition to classifying two random matrices as being free
or not free relative to each other, we can characterize them as having
an intermediate, graduated property which we call partial freeness,
and furthermore we are able to quantify the leading-order discrepancy
between freeness and partial freeness. This has already helped us
explain the unexpected accuracy of approximations to the Hamiltonians
of disordered condensed matter systems.\cite{Chen2012}
We begin with a brief, self-contained review of free probability from
a random matrix theoretic perspective, and provide an elementary illustration
of how computing the additive free convolution using an integral transform
allows us to calculate the d.o.s.\ for the sum of free random matrices.
Next, we recap how the additive free convolution can also be approached
via the moments of random matrices, and in particular how both classical
and free independence can be interpreted as imposing precise rules
for the decomposition of joint moments of arbitrary orders. We then
show how we can generalize this to the notion of partial freeness
and describe a procedure for detecting it numerically from samples
of random matrices.
\section{Freeness of two matrices}
We use the notation $\left\langle \cdot\right\rangle $ for the normalized
expected trace (n.e.t.) $\frac{1}{N}\mathbb{E}\mbox{ Tr}\cdot$ of
a $N\times N$ matrix.
\begin{defn}
The random matrices $A$ and $B$ are free (or synonymously, freely
independent) with respect to the n.e.t.\ if for all $k\in\mathbb{N}$,
\begin{equation}
\left\langle p_{1}\!\left(A\right)q_{1}\!\left(B\right)p_{2}\!\left(A\right)q_{2}\!\left(B\right)\cdots p_{k}\!\left(A\right)q_{k}\!\left(B\right)\right\rangle =0\label{eq:free-defn}
\end{equation}
for all polynomials $p_{1},q_{1},p_{2},q_{2},\dots p_{k},q_{k}$ such
that $\left\langle p_{1}\!\left(A\right)\right\rangle =\left\langle q_{1}\!\left(B\right)\right\rangle =\cdots=0$.\cite[Definition 4.2]{Voiculescu1985}
This generalizes the notion of (classical) independence of scalar
random variables: were $A$ and $B$ to commute, the preceding with
$k=1$ would suffice.\end{defn}
\begin{fact}
The preceding is equivalent to defining free independence using the
special case of the centering polynomials $p_{i}\left(x\right)=x^{n_{i}}-\left\langle x^{n_{i}}\right\rangle $,
$q_{i}\left(x\right)=x^{m_{i}}-\left\langle x^{m_{i}}\right\rangle $,
$i=1,\dots,k$ for positive integers $n_{1},m_{1},\dots,n_{k},m_{k}$.\cite[Proposition 4.3]{Voiculescu1985}
\end{fact}
That this is sufficient follows from the linearity of the n.e.t. In
principle, this would allow us to check if two matrices $A$ and $B$
were free by checking that all centered joint moments
\[
\left\langle \left(A^{n_{1}}\!-\!\left\langle A^{n_{1}}\right\rangle \right)\left(B^{m_{1}}\!-\!\left\langle B^{m_{1}}\right\rangle \right)\cdots\left(A^{n_{k}}\!-\!\left\langle A^{n_{k}}\right\rangle \right)\left(B^{m_{k}}\!-\!\left\langle B^{m_{k}}\right\rangle \right)\right\rangle
\]
vanish for all positive exponents $n_{1}$,$m_{1}$,$\ldots$,$n_{k}$,$m_{k}$.
However, this is numerically impractical due to the need to check
joint moments of all orders, as well as the presence of fluctuations
from sampling error if using Monte Carlo, which causes the higher
order joint moments to converge slowly. In practice, it is far easier
to check for freeness by examining how the d.o.s.\ of the exact sum
$A+B$ converges to the p.d.f.\ defined by the free convolution $A\boxplus B$,\cite{Voiculescu1986}
which we will now define.
\subsection{The free convolution}
\begin{defn}
The $R$-transform of the p.d.f. $f_{A}$, denoted by $R_{A}$, is
defined implicitly via the following Cauchy trans\-form:\cite{Voiculescu1985,Nica2006a}%
\footnote{There are unfortunately two extant notations for the $R$-transform.
We use here the $R$-transform as presented in \cite{Nica2006a};
this differs slightly from the original notation of Voiculescu,\cite{Voiculescu1985}
which is the $\mathcal{R}$-transform elsewhere, e.g.\ in \cite{Nica2006a}.
The relationship between the two is $R\left(w\right)=w\mathcal{R}\left(w\right)$.%
}
\begin{equation}
w=\lim_{\epsilon\downarrow0}\int_{\mathbb{R}}\frac{f_{A}\left(z\right)}{R_{A}\left(w\right)-\left(z+i\epsilon\right)}dz.\label{eq:r-transform}
\end{equation}
Some intuition for the $R$-transform may be achieved by expanding
the Cauchy integral as a formal power series:%
\footnote{Physicists may recognize $G_{A}\left(w\right)$ as the retarded Green
function corresponding to the Hamiltonian $A$.%
}
\begin{equation}
G_{A}\left(w\right)=\lim_{\epsilon\downarrow0}\int_{\mathbb{R}}\frac{f_{A}\left(z\right)}{w-z-i\epsilon}dz=\sum_{k=0}^{\infty}\frac{\mu_{k}\left(A\right)}{w^{k+1}},
\end{equation}
where $\mu_{k}$ is the $k$th moment
\begin{equation}
\mu_{k}\left(A\right)=\int_{\mathbb{R}}x^{k}f_{A}\left(x\right)dx=\left\langle A^{k}\right\rangle .
\end{equation}
In other words, the Cauchy transform of a p.d.f.\ is a generating
function of its moments. We then have that the $R$-transform $R_{A}$
inverts the Cauchy transform $G_{A}$ in the functional sense, i.e.\ that
\begin{equation}
G_{A}\left(R_{A}\left(w\right)\right)=w.
\end{equation}
Viewing both $G_{A}$ and $R_{A}$ as formal power series, the latter
is simply the reversion of the former,\cite{Morse1953,Henrici1974}
in the sense that $R_{A}$ is a series in $w$ whose inverse with
respect to composition is $G_{A}$ as a series in $1/z$. The coefficients
of the $R$-transform are then the free cumulants $\nu_{k}$, i.e.
\end{defn}
\begin{equation}
R_{A}\left(w\right)=\sum_{k=0}^{\infty}\nu_{k}w^{k-1},
\end{equation}
with $\nu_{0}=1$. The free cumulants are particular combinations
of moments $\nu_{k}=\nu_{k}\left(\mu_{1},\dots,\mu_{k}\right)$ which
shall be made more explicit later.
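For numerical purposes they can already be generated from the moments
through the free moment--cumulant recursion
$\mu_{n}=\sum_{k=1}^{n}\nu_{k}\sum_{i_{1}+\cdots+i_{k}=n-k}\mu_{i_{1}}\cdots\mu_{i_{k}}$,
one standard form of the relation, consistent with the series reversion
above. A minimal Python sketch (with the convention $\nu_{0}=1$; the
inner sum is the coefficient of $x^{n-k}$ in the $k$th power of the
moment series):
\begin{verbatim}
import numpy as np

def free_cumulants(m):
    # m = [1, mu_1, ..., mu_N]; returns [nu_1, ..., nu_N]
    N = len(m) - 1
    nu = [0.0] * (N + 1)
    for n in range(1, N + 1):
        acc = 0.0
        for k in range(1, n):            # the k = n term is nu_n itself
            poly = np.array([1.0])
            for _ in range(k):           # k-th power of the moment series
                poly = np.convolve(poly, m[:n])[:n]
            acc += nu[k] * poly[n - k]
        nu[n] = m[n] - acc
    return nu[1:]

# Two-point spectrum at +-1 versus the arcsine law of Example 1 below:
print(free_cumulants([1, 0, 1, 0, 1, 0, 1]))   # -> 0, 1, 0, -1, 0, 2
print(free_cumulants([1, 0, 2, 0, 6, 0, 20]))  # -> 0, 2, 0, -2, 0, 4
\end{verbatim}
The two printed lists add componentwise, anticipating the linearization
property stated below.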
\begin{defn}
The free convolution $A\boxplus B$ is defined via its $R$-transform
\begin{equation}
R_{A\boxplus B}\left(w\right)=R_{A}\left(w\right)+R_{B}\left(w\right)-\frac{1}{w}.
\end{equation}
\end{defn}
The free cumulants linearize the free convolution in the sense that
for all $k>0$,\cite{Nica2006a,Novak2011}
\begin{equation}
\nu_{k}\left(A\boxplus B\right)=\nu_{k}\left(A\right)+\nu_{k}\left(B\right),
\end{equation}
and the subtraction of $1/w$ produces a properly normalized p.d.f.
by conserving $\nu_{0}\left(A\boxplus B\right)=\mu_{0}\left(A\boxplus B\right)=1$.
In Section~\ref{sub:ex-free-finite}, we show an example of calculating
$f_{A\boxplus B}$ analytically via the $R$-transform. In general,
such analytic calculations are hindered by the functional inversions
required in (\ref{eq:r-transform}). This has inspired interesting
work in calculating $A\boxplus B$ numerically, such as in the RMTool
package.\cite{Rao2007,Olver2012} We discuss instead an alternate
strategy starting directly from numerical samples of random matrices,
which generalizes naturally to general pairs of matrices. In situations
where only the numerical samples are known, it may be convenient instead
to use the result of Fact~\ref{A-QBQ} described in the next section.
\subsection{Free convolution from random rotations}
\begin{defn}
A square matrix $Q$ is a unitary/\-orthogonal/\-symplectic random
matrix of Haar measure if for any constant unitary/orthogonal/symplectic
matrix $P$, the integral of any function over $dQ$ is identical
to the integral over $d\left(PQ\right)$ or that over $d\left(QP\right)$.\end{defn}
\begin{example}
Unitary matrices of dimension $N=1$ are simply scalar unit complex
phases of the form $e^{i\theta}$. Haar measure over $e^{i\theta}$
can be written simply as $\mbox{d}\theta/2\pi$. This is manifestly
rotation invariant, as multiplying $e^{i\theta}$ by any constant
phase factor $e^{i\phi}$ simply changes the measure to $\mbox{d}(\theta+\phi)/2\pi=\mbox{d}\theta/2\pi$.
\end{example}
Uniform Haar measure generalizes the concept of uniformity to higher
dimensions by preserving the notion of invariance with respect to
arbitrary rotations. Consequently, the eigenvalues of $Q$ lie uniformly
on the unit circle on the complex plane.\cite{Diaconis1994} Explicit
samples can be generated numerically by performing $QR$ decompositions
on $N\times N$ matrices with independent real (complex) standard
Gaussian entries.\cite{Diaconis2005}
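One implementation subtlety of this recipe: the sign ambiguity of the
$QR$ factorization must be fixed, e.g.\ by making the diagonal of $R$
positive, since otherwise the resulting $Q$ is not distributed with
Haar measure. A minimal NumPy sketch:
\begin{verbatim}
import numpy as np

def haar_orthogonal(n, rng=None):
    # QR-decompose a Gaussian matrix and flip column signs so that
    # R has a positive diagonal; Q is then Haar-distributed.
    rng = np.random.default_rng() if rng is None else rng
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))
\end{verbatim}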
\begin{fact}
\label{A-QBQ}For a pair of Hermitian (real symmetric) random matrices
$A$ and $B$, the d.o.s.\ of $A+QBQ^{\dagger}$, where $Q$ is a
unitary (orthogonal) random matrix of Haar measure, coincides with
the p.d.f.\ of $A\boxplus B$ in the limit of infinitely large matrices
$N\rightarrow\infty$.
\end{fact}
Consider the diagonalization of $A=Q_{A}\Lambda_{A}Q_{A}^{\dagger}$
and $B=Q_{B}\Lambda_{B}Q_{B}^{\dagger}$. The d.o.s.\ of $A+QBQ^{\dagger}$
is identical to that of $\Lambda_{A}+\left(Q_{A}^{\dagger}QQ_{B}\right)\Lambda_{B}\left(Q_{B}^{\dagger}Q^{\dagger}Q_{A}\right)$,
since these matrices are related by the similarity transformation
$Q_{A}^{\dagger}\left(\cdot\right)Q_{A}$. However, the Haar property
of $Q$ means that the d.o.s.\ of this matrix is identical to that
of $\Lambda_{A}+Q\Lambda_{B}Q^{\dagger}$. This gives us another interpretation
of free convolution: it describes the statistics resulting from adding
two random matrices when the basis of one matrix is randomly rotated
or ``spun around'' relative to the other. The information about
the relative orientations of the two bases is effectively ignored,
retaining only the knowledge that they are not parallel so that $A$
and $B$ do not commute.
The freeness of random matrices is usually discussed only in the limit
of infinitely large matrices, where it is called asymptotic freeness.\cite{Voiculescu1991}
For example, two matrices sampled from the Gaussian ensembles (orthogonal,
unitary or symplectic) are free.\cite{Nica2006a} Nevertheless, finite-dimensional
random matrices can exhibit freeness as well. We now provide some
examples and illustrate the analytic calculation of the free convolution
using the $R$-transform.
\subsection{Examples of free finite-dimensional matrices\label{sub:ex-free-finite}}
\begin{example}
Consider the $2\times2$ real symmetric random matrices
\begin{equation}
A\left(t\right)=U\left(t\right)\sigma_{z}U\left(-t\right),\quad B\left(t\right)=U\left(-t\right)\sigma_{z}U\left(t\right),
\end{equation}
where $\sigma_{z}$ is the Pauli matrix $\left(\begin{array}{cc}
1 & 0\\
0 & -1
\end{array}\right)$, $U\left(t\right)$ is the rotation matrix $\left(\begin{array}{cc}
\cos t & \sin t\\
-\sin t & \cos t
\end{array}\right)$, and the rotation angle $t$ is uniformly sampled on the interval
$\left[0,\pi\right)$. By construction, the d.o.s.\ of $A\left(t\right)$
and $B\left(t\right)$ are identical; their eigenvalues have the p.d.f.
\begin{equation}
f_{A}\left(x\right)=f_{B}\left(x\right)=\frac{1}{2}\left(\delta\left(x+1\right)+\delta\left(x-1\right)\right),
\end{equation}
where $\delta\left(x\right)$ is the Dirac delta distribution. Furthermore,
for any particular $t$, the sum of $A\left(t\right)$ and $B\left(t\right)$
can be written in the basis where $A\left(t\right)$ is diagonal as
\begin{equation}
M\left(t\right)=\sigma_{z}+U\left(-2t\right)\sigma_{z}U\left(2t\right).
\end{equation}
By construction, $U\left(2t\right)$ is of uniform Haar measure and
so the d.o.s.\ of $M\left(t\right)$ is given exactly by the additive
free convolution of $A\left(t\right)$ and $B\left(t\right)$.\cite{Voiculescu1991,Nica2006a}
The $R$-transforms of $f_{A}$ and $f_{B}$ are
\begin{equation}
R_{A}\left(w\right)=R_{B}\left(w\right)=\frac{1\pm\sqrt{1+4w^{2}}}{2w}.
\end{equation}
Performing the free convolution,
\begin{equation}
R_{A\boxplus B}\left(w\right)=R_{A}\left(w\right)+R_{B}\left(w\right)-\frac{1}{w}=\pm\frac{\sqrt{1+4w^{2}}}{w}.
\end{equation}
Finally, we calculate the p.d.f.\ using the Plemelj inversion formula:\begin{subequations}
\begin{align}
f_{A\boxplus B}\left(x\right) & =\frac{1}{\pi}\left[\mbox{Im }R_{A\boxplus B}^{-1}\left(w\right)\right]_{w=x}\label{eq:plemelj}\\
& =\frac{1}{\pi\sqrt{4-x^{2}}},\label{eq:arcsine}
\end{align}
\end{subequations}which is the arcsine distribution on the interval
$\left[-2,2\right]$, and we have retained only the positive root
to obtain a nonnegative probability density. The odd moments all vanish
by the even symmetry of $f_{A\boxplus B}$, and the
even moments are the central binomial coefficients
$\mu_{2n}\left(A\boxplus B\right)=\binom{2n}{n}$.
\end{example}
This example shows that the free convolution of two discrete probability
distributions can be a continuous probability distribution. In contrast,
the classical convolution $A\star B$ produces the p.d.f.
\begin{equation}
f_{A\star B}\left(x\right)=f_{A}\star f_{B}=\int_{\mathbb{R}}f_{A}\left(y\right)f_{B}\left(x-y\right)dy=\frac{1}{4}\left(\delta\left(x+2\right)+2\delta\left(x\right)+\delta\left(x-2\right)\right),\label{eq:discrete-binomial}
\end{equation}
which is simply a discrete binomial distribution. The results of the
two convolutions are plotted in Figure~\ref{fig:arcsine-vs-binomial}.
\begin{figure}[h]
\caption{\label{fig:arcsine-vs-binomial}The d.o.s. $f_{A\boxplus B}\left(x\right)$
and $f_{A\star B}\left(x\right)$ for the free (dashed blue line)
and classical convolutions (solid black bars) of the matrices in Example
1, as given in (\ref{eq:arcsine}) and (\ref{eq:discrete-binomial})
respectively. The heights of the lines in the plot of $f_{A\star B}$
indicate the point masses.}
\includegraphics[width=0.95\columnwidth,height=0.95\columnwidth,keepaspectratio]{fig-freevsclassical}
\end{figure}
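The arcsine law derived above can also be checked by direct Monte Carlo
sampling of $\Lambda_{A}+Q\Lambda_{B}Q^{T}$, using the Haar sampler
sketched earlier (a rough numerical check; the matrix size and number
of samples are arbitrary choices):
\begin{verbatim}
import numpy as np

def haar_orthogonal(n, rng):
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(0)
n = 200
lam = np.diag(np.concatenate([np.ones(n // 2), -np.ones(n // 2)]))
eigs = []
for _ in range(50):
    q = haar_orthogonal(n, rng)
    eigs.append(np.linalg.eigvalsh(lam + q @ lam @ q.T))
eigs = np.concatenate(eigs)
# arcsine moments on [-2, 2]: mu_2 = 2, mu_4 = 6
print(np.mean(eigs**2), np.mean(eigs**4))   # -> close to 2 and 6
\end{verbatim}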
Here is another example of matrices with the same d.o.s.\ as before.
This time, the matrices are not random at all.
\begin{example}
\label{example:pauli-sx}The $2N\times2N$ deterministic matrices
\begin{equation}
A=\left(\begin{array}{ccccc}
0 & 1\\
1 & 0\\
& & 0 & 1\\
& & 1 & 0\\
& & & & \ddots
\end{array}\right),\quad B=\left(\begin{array}{ccccc}
0 & & & & 1\\
& 0 & 1\\
& 1 & 0\\
& & & \ddots\\
1 & & & & 0
\end{array}\right)
\end{equation}
are asymptotically free as $N\rightarrow\infty$. Each consists of
$N$ direct sums of the Pauli matrix $\sigma_{x}=\left(\begin{array}{cc}
0 & 1\\
1 & 0
\end{array}\right)$, with $B$ having a basis shifted relative to $A$ with circulant
(periodic) boundary conditions. The d.o.s.\ are the same as in the
previous example and so the calculations of $A\boxplus B$ and $A\star B$
proceed identically. Considering the matrix $B^{\prime}$, being $B$
without the circulant entries on the lower left and upper right, we
also have that $A$ and $B^{\prime}$ are asymptotically free as $N\rightarrow\infty$.
\end{example}
\subsection{Comparison of free and classical convolutions}
To conclude this introductory survey, we compare the free additive
convolution $\boxplus$ and the classical convolution $\star$ in
more detail. We note that the Fourier transform $\;\widehat{\cdot}\;$
turns classical convolutions into products according to the convolution
theorem, i.e. $\widehat{f_{A}\star f_{B}}=\widehat{f_{A}}\widehat{f_{B}}$;
taking the logarithm allows this to be written as a linear sum:
\begin{equation}
\log\widehat{f_{A}\star f_{B}}=\log\widehat{f_{A}}+\log\widehat{f_{B}}.
\end{equation}
Furthermore if $f_{A}$ and $f_{B}$ are p.d.f.s, then $\log\widehat{f_{A}}$
and $\log\widehat{f_{B}}$ can be identified as the corresponding
(classical) cumulant generating functions, i.e. $\log\widehat{f_{A}}$
can be expanded in a formal power series
\begin{equation}
\log\widehat{f_{A}}\left(w\right)=\sum_{n=0}^{\infty}\frac{\kappa_{n}\left(A\right)w^{n}}{n!},
\end{equation}
where $\kappa_{n}\left(A\right)$ is the $n$th (classical) cumulant
of $f_{A}$, and similarly for $f_{B}$. We also have that $\kappa_{n}\left(A\star B\right)=\kappa_{n}\left(A\right)+\kappa_{n}\left(B\right)$
for $n\ge1$, i.e. that cumulants linearize the convolution. Drawing
an analogy between the cumulants $\kappa_{n}$ and the free cumulants
$\nu_{n}$, the $R$-transform is often described as the free analogue
of the log-Fourier transform, with the Cauchy transform being the
analogue of the Fourier transform and the functional inversion playing
the part analogous to the logarithm.
Finally, we note that the sum of two scalar random variables $a$
and $b$, sampled with p.d.f.s $f_{A}$ and $f_{B}$ respectively,
itself has the resulting p.d.f.\ $f_{A\star B}$.\cite{Feller1971}
For matrices $A$ and $B$ with d.o.s. $f_{A}$ and $f_{B}$ respectively,
the matrix $A+\Pi B\Pi^{T}$, where $\Pi$ is a random permutation
matrix, has d.o.s.\ $f_{A}\star f_{B}$. This is equivalent to the
p.d.f.\ formed by picking an eigenvalue of $A$ at random and an
eigenvalue of $B$ at random. In this sense, the discrete random permutation
$\Pi$ which generates the classical convolution is the analogue of
the continuous random rotation $Q$ of uniform Haar measure in free
convolution.
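This analogue is easy to verify directly: in the eigenbasis of $A$,
conjugating $B=\mathrm{diag}\left(b\right)$ by a permutation matrix
merely re-pairs eigenvalues, so the spectrum of $A+\Pi B\Pi^{T}$ is
$\left\{ a_{i}+b_{\pi\left(i\right)}\right\} $. A short sketch with
arbitrary illustrative spectra:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
a = np.array([1.0, -1.0, 0.5])
b = np.array([2.0, 0.0, -2.0])
pi = rng.permutation(len(a))
P = np.eye(len(a))[pi]            # permutation matrix, P[i, pi[i]] = 1
M = np.diag(a) + P @ np.diag(b) @ P.T
print(np.sort(np.linalg.eigvalsh(M)))   # identical to the line below
print(np.sort(a + b[pi]))
\end{verbatim}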
Table~\ref{tab:free-vs-classical} summarizes the analogies between
the free and classical convolutions.
\begin{table}[h]
\caption{\label{tab:free-vs-classical}Correspondence between the free and
classical convolutions.}
\begin{tabular}{|c|c|}
\hline
$A\boxplus B$ & $A\star B$\tabularnewline
\hline
\hline
$R$-transform $R$ & log-Fourier transform $\log\hat{f}$\tabularnewline
\hline
Cauchy transform $G$ & Fourier transform $\hat{f}$\tabularnewline
\hline
functional inversion & logarithm\tabularnewline
\hline
free cumulants $\nu_{n}$ & (classical) cumulants $\kappa_{n}$\tabularnewline
\hline
Plemelj inversion & inverse Fourier transform\tabularnewline
\hline
\hline
$A+QBQ^{\dagger}$ & $A+\Pi B\Pi^{T}$\tabularnewline
\hline
uniform Haar measure $Q$ & random permutations $\Pi$\tabularnewline
\hline
\end{tabular}
\end{table}
\section{Density of states of sums of random matrices}
The introductory examples show that if $A$ and $B$ are free, then
the d.o.s.\ of $A+B$ can be calculated exactly without any detailed
knowledge of the eigenvectors of $A$ and $B$. However, not all pairs
of random matrices are free. Nevertheless, the free convolution can
provide surprisingly accurate approximations to their d.o.s.\ even
in the general case, and the degree of approximation can even be quantified
by examining individual moments of the sum, $\mu_{n}\left(A+B\right)$
as shown in Section~\ref{partial-freeness}. In order to do this,
however, we will first need to examine how each moment of $A+B$ subdivides
into sums over joint moments of $A$ and $B$, and how the
original definition of freeness in (\ref{eq:free-defn}) determines
what these moments must be for free $A$ and $B$. For simplicity,
we assume in this paper that all necessary moments $\left\{ \mu_{n}\right\} $
exist, so that the moments capture all the information contained in
the corresponding p.d.f. For random matrices, this simply means that
all powers of the matrix must have a finite n.e.t.
\subsection{Calculating moments of $A+B$ from the joint moments of $A$ and
$B$}
For general $A$ and $B$, it is possible to characterize the d.o.s.\ of
$A+B$ completely without constructing the sum and diagonalizing it
if all their joint moments are known. We can then calculate all the
moments $\left\{ \mu_{n}\left(A+B\right)\right\} $ from the definition
of the moment of a random matrix:\begin{subequations}
\begin{align}
\mu_{n}= & \left\langle \left(A+B\right)^{n}\right\rangle \label{eq:moment-def}\\
= & \langle A^{n}+A^{n-1}B+A^{n-2}BA+\cdots+BA^{n-1}+A^{n-2}B^{2}+\cdots+B^{2}A^{n-2}+\cdots+B^{n}\rangle\label{eq:moment-binomial}\\
= & \left\langle A^{n}\right\rangle +n\left\langle A^{n-1}B\right\rangle +n\left\langle A^{n-2}B^{2}\right\rangle +\cdots+\left\langle B^{n}\right\rangle .\label{eq:moment-expanded}
\end{align}
\end{subequations}The second equality follows directly from the noncommutative
binomial expansion of (\ref{eq:moment-binomial}), and the third equality
follows from the linearity of $\left\langle \cdot\right\rangle $
and its cyclic invariance, i.e.\ $\left\langle AB\right\rangle =\left\langle BA\right\rangle $.
We refer to (\ref{eq:moment-expanded}) as the \emph{word expansion
of $\mu_{n}$}. As written, there are $2^{n}$ terms in (\ref{eq:moment-binomial})
but some of them yield the same n.e.t.\ in (\ref{eq:moment-expanded})
identically because of cyclic invariance. The equivalence classes
defined by grouping identical terms in this manner are exactly those
of combinatorial necklaces.\cite{MacMahon1892,Riordan1957}
\begin{defn}
\label{def:necklace}An $\left(n,k\right)$-word $W$ is a string
of $n$ symbols, each of which can have any of $k$ values. An $\left(n,k\right)$-necklace
$\left[\mathcal{N}\right]$ is the equivalence class over $\left(n,k\right)$-words
$W$ with respect to cyclic permutations $\Pi$ of length $n$, i.e.
\begin{equation}
\left[\mathcal{N}\right]=\left\{ w\in W\vert\exists\pi\in\Pi:\;\mathcal{N}=\pi w\right\} .
\end{equation}
\end{defn}
There are efficient algorithms for enumerating all $\left(n,k\right)$-necklaces
for a given $n$ and $k$.\cite{Ruskey1992,Sawada2001} Furthermore,
the total number of terms in the word expansion (\ref{eq:moment-expanded})
is well-known:
\begin{fact}
The number of $\left(n,k\right)$-necklaces is
\begin{equation}
N\left(n,k\right)=\frac{1}{n}\sum_{d\vert n}\phi\left(d\right)k^{n/d}=\frac{1}{n}\sum_{i=1}^{n}k^{\gcd\left(i,n\right)},
\end{equation}
where $d\vert n$ means that $d$ divides $n$, $\phi$ is the Euler
totient function, and $\gcd$ is the greatest common divisor.\cite{MacMahon1892,Riordan1957}
By definition, $\phi\left(d\right)$ is the number of integers $m$
in the range $1\le m\le d$ that are relatively prime to $d$, i.e.
$\gcd\left(d,m\right)=1$.
\end{fact}
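As a sanity check of this counting formula, the following short Python
implementation (ours, for illustration) evaluates $N\left(n,k\right)$
directly from the totient sum; for instance, it confirms that there are
$N\left(6,2\right)=14$ binary necklaces of length six.
\begin{lstlisting}[language=Python]
from math import gcd

def phi(d):
    # Euler totient by direct count (adequate for small d)
    return sum(1 for m in range(1, d + 1) if gcd(m, d) == 1)

def necklace_count(n, k):
    # N(n,k) = (1/n) * sum over divisors d of n of phi(d) * k^(n/d)
    total = sum(phi(d) * k**(n // d)
                for d in range(1, n + 1) if n % d == 0)
    return total // n

assert necklace_count(6, 2) == 14
\end{lstlisting}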
In addition, we can determine the multiplicity of each term in (\ref{eq:moment-expanded}),
which is identical to the number of words in the equivalence class
defined by each corresponding particular necklace. We state this very
simple fact without proof and provide an example.
\begin{fact}
Let $m=\#\left(\left[\mathcal{N}\right]\right)$ be the number of
$\left(n,k\right)$-words belonging to the equivalence class that
defines the necklace $\left[\mathcal{N}\right]$. Then $m$ is the
smallest number of one-symbol cyclic permutations that leaves any word $W\in\mathcal{N}$
unchanged, i.e. it is the length of the shortest subword $S$ of a
word $W\in\mathcal{N}$ such that $W=S^{n/m}$.\end{fact}
\begin{example}
The necklace $\left[AABAAB\right]=\left[A^{2}BA^{2}B\right]$ is an
equivalence class over $\left(6,2\right)$-words of size 3, since
applying a (one-symbol) cyclic permutation three times leaves $A^{2}BA^{2}B$
unchanged:
\begin{equation}
AABAAB\mapsto ABAABA\mapsto BAABAA\mapsto AABAAB,
\end{equation}
i.e. $\#\left(\left[A^{2}BA^{2}B\right]\right)=3$ which follows from
the fact that $AABAAB=\left(AAB\right)^{2}$.
\end{example}
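The multiplicity can likewise be computed mechanically: the orbit size
of a word under one-symbol cyclic shifts is the smallest shift that
returns the word to itself. A short Python sketch (ours):
\begin{lstlisting}[language=Python]
def multiplicity(word):
    # smallest m with word equal to its cyclic shift by m symbols;
    # this is the orbit size #([N]) and also the shortest period
    n = len(word)
    for m in range(1, n + 1):
        if word == word[m:] + word[:m]:
            return m

assert multiplicity("AABAAB") == 3   # the example above
assert multiplicity("AAAAAA") == 1
assert multiplicity("AABABB") == 6   # aperiodic word
\end{lstlisting}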
The algorithmic enumeration of necklaces and their multiplicities
allow us to sum the joint moments in the word expansion (\ref{eq:moment-expanded})
to obtain $\mu_{n}$. As $N\left(n,k\right)=\mathcal{O}\left(k^{n}/n\right)$
asymptotically as $n\rightarrow\infty$, the word expansion saves
approximately a factor of $n$ in effort relative to working with
the naive noncommutative binomial expansion, which has $k^{n}$ terms.
\subsection{Decomposition rules for joint moments}
We have reduced the problem of calculating $\mu_{n}$ to that of calculating
joint moments; each has the form $\left\langle A^{n_{1}}B^{m_{1}}\cdots A^{n_{k}}B^{m_{k}}\right\rangle $
for positive integers $n_{1}$, $m_{1}$, $\ldots$, $n_{k}$, $m_{k}$.
In general, this is not the most compact way to specify the relationship
between $A$ and $B$. However, classical and free independence each
provide a prescription for computing such joint moments in terms of
the pure moments $\left\langle A\right\rangle $, $\left\langle A^{2}\right\rangle $,
$\ldots$ and $\left\langle B\right\rangle $, $\left\langle B^{2}\right\rangle $,
$\ldots$ of $A$ and $B$ respectively.
\begin{fact}
For classically independent random matrices $A$ and $B$,
\begin{equation}
\left\langle A^{n_{1}}B^{m_{1}}\cdots A^{n_{k}}B^{m_{k}}\right\rangle =\left\langle A^{n_{1}+\cdots+n_{k}}B^{m_{1}+\cdots+m_{k}}\right\rangle =\left\langle A^{n_{1}+\cdots+n_{k}}\right\rangle \left\langle B^{m_{1}+\cdots+m_{k}}\right\rangle ,\label{eq:class-indep}
\end{equation}
i.e. $A$ and $B$ behave as if they commute.\cite{Nica2006a}
\end{fact}
The analogous rule for free independence is more complicated; however,
an implicit formula can be derived from the primordial definition
of freeness in (\ref{eq:free-defn}) by using the linearity of the
n.e.t.:\begin{subequations}
\begin{align}
0= & \left\langle \left(A^{n_{1}}-\left\langle A^{n_{1}}\right\rangle \right)\left(B^{m_{1}}-\left\langle B^{m_{1}}\right\rangle \right)\cdots\left(A^{n_{k}}-\left\langle A^{n_{k}}\right\rangle \right)\left(B^{m_{k}}-\left\langle B^{m_{k}}\right\rangle \right)\right\rangle \\
= & \left\langle A^{n_{1}}B^{m_{1}}\cdots A^{n_{k}}B^{m_{k}}\right\rangle \nonumber \\
& -\left\langle A^{n_{1}}\right\rangle \left\langle A^{n_{2}}B^{m_{2}}\cdots A^{n_{k}}B^{m_{k}+m_{1}}\right\rangle -\left\langle B^{m_{1}}\right\rangle \left\langle A^{n_{1}+n_{2}}B^{m_{2}}\cdots A^{n_{k}}B^{m_{k}}\right\rangle \nonumber \\
& +\dots+\left(-1\right)^{2k}\left\langle A^{n_{1}}\right\rangle \cdots\left\langle A^{n_{k}}\right\rangle \left\langle B^{m_{1}}\right\rangle \cdots\left\langle B^{m_{k}}\right\rangle ,\label{eq:free-expandjoint}
\end{align}
\end{subequations}which can be rearranged immediately to give a recurrence
relation for the joint moment $\left\langle A^{n_{1}}B^{m_{1}}\cdots A^{n_{k}}B^{m_{k}}\right\rangle $
in terms of joint moments of lower order.
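For concreteness, solving this recurrence for the shortest alternating
word yields the well-known formula for free $A$ and $B$ (see e.g.
\cite{Nica2006a}): $\left\langle ABAB\right\rangle =\left\langle A^{2}\right\rangle \left\langle B\right\rangle ^{2}+\left\langle A\right\rangle ^{2}\left\langle B^{2}\right\rangle -\left\langle A\right\rangle ^{2}\left\langle B\right\rangle ^{2}$.
The following Python sketch (ours; spectra, sizes, and trial counts are
arbitrary) checks this prediction against Monte Carlo samples using a
Haar orthogonal conjugation:
\begin{lstlisting}[language=Python]
import numpy as np

def haar_orthogonal(n, rng):
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(1)
n, trials = 300, 50
a = rng.uniform(0.0, 2.0, n)    # fixed spectrum of A
b = rng.uniform(0.0, 1.0, n)    # fixed spectrum of B
A = np.diag(a)

acc = 0.0
for _ in range(trials):
    Q = haar_orthogonal(n, rng)
    B = Q @ np.diag(b) @ Q.T    # asymptotically free of A
    acc += np.trace(A @ B @ A @ B) / n
lhs = acc / trials

m = lambda x, k: np.mean(x**k)
rhs = m(a, 2)*m(b, 1)**2 + m(a, 1)**2*m(b, 2) - (m(a, 1)*m(b, 1))**2
print(lhs, rhs)   # agree up to O(1/n) fluctuations
\end{lstlisting}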
\section{\label{sec:Partial-freeness}Partial freeness}
The main result of our paper is to show that the following notion
of partial freeness is a useful generalization of freeness, particularly
for finite random matrices.
\begin{defn}
\label{partial-freeness} Two random matrices $A$ and $B$ are partially
free to order $p$ if the first difference between $A+B$ and $A\boxplus B$
occurs at the $p$th moment $\mu_{p}$, i.e. (\ref{eq:free-expandjoint})
holds for all joint moments of the form
\[
\left\langle A^{n_{1}}B^{m_{1}}\cdots A^{n_{k}}B^{m_{k}}\right\rangle
\]
with positive integers $n_{1}$, $m_{1}$, $\dots$, $n_{k}$, $m_{k}$ satisfying $\sum_{i=1}^{k}\left(n_{i}+m_{i}\right)=q$,
for all $q<p$, but there exists at least one joint moment for $q=p$
for which (\ref{eq:free-expandjoint}) does not hold. We say that
$A$ and $B$ are free to $p$ moments.
\end{defn}
In numerical applications, the difference between $A+B$ and $A\boxplus B$
must be tested for statistical significance if the joint moments are
calculated from Monte Carlo samples of $A$ and $B$.
This definition immediately allows us to restate the matching three
moments theorem of Refs.~\cite{Movassagh2010,Movassagh2011a}:
\begin{fact}
Let $A$ and $B$ be a pair of $N\times N$ diagonalizable random
matrices with $A=Q_{A}\Lambda_{A}Q_{A}^{\dagger}$ and $B=Q_{B}\Lambda_{B}Q_{B}^{\dagger}$.
If $\mathbb{E}\left[\left(Q_{B}^{\dagger}Q_{A}\right)_{ij}\right]=1/N$
for each matrix element of $Q_{B}^{\dagger}Q_{A}$, then $A$ and
$B$ are free to $p>3$ moments.
\end{fact}
Our definition is a natural generalization of the concept of freeness,
and they coincide if all the moments match.
\begin{claim}
Two random matrices $A$ and $B$ are free if they are partially free
to all orders.
\end{claim}
This follows immediately from the definitions of freeness and partial
freeness, so long as the limit $N\rightarrow\infty$ exists.
The following example illustrates that (partial) freeness can also
be a property of a pair of random and deterministic matrices.
\begin{example}
\label{example:tridiagonal}The $N\times N$ random matrix $A$, a
diagonal matrix with elements i.i.d.\ standard Gaussian random variates,
and $B$, the tridiagonal matrix
\[
\left(\begin{array}{cccc}
0 & 1 & & 0\\
1 & \ddots & \ddots\\
& \ddots & \ddots & 1\\
0 & & 1 & 0
\end{array}\right)
\]
are partially free of order 8. This can be verified by explicit calculation
of the first eight moments. Again, by even symmetry, all the odd moments
of $A+B$ vanish, while the even moments are 1, 3, 17, 125, 1099, 11187,
129759, $\dots$ Furthermore, we can identify the leading order deviation
as arising from the term $\left\langle \left(AB\right)^{4}\right\rangle =2$.
The significance of this result for condensed matter physics is discussed
in Ref.~\cite{Chen2012}.
\end{example}
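These moments are straightforward to reproduce numerically. The Python
sketch below (ours; $N$, the trial count, and the seed are arbitrary,
and we use the periodic form of $B$ consistent with the lattice
interpretation of Example~\ref{example:tridiag-hopping} below) estimates
$\mu_{2},\mu_{4},\mu_{6},\mu_{8}$ of $A+B$ by Monte Carlo, approaching
$3,17,125,1099$ up to sampling error and $\mathcal{O}\left(1/N\right)$
effects:
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(2)
N, trials = 400, 200
B = np.zeros((N, N))
idx = np.arange(N)
B[idx, (idx + 1) % N] = 1.0   # periodic one-dimensional chain
B[(idx + 1) % N, idx] = 1.0

moments = np.zeros(9)
for _ in range(trials):
    M = np.diag(rng.standard_normal(N)) + B
    lam = np.linalg.eigvalsh(M)
    for k in range(2, 9, 2):
        moments[k] += np.mean(lam**k)
print(moments[2::2] / trials)   # ~ [3, 17, 125, 1099]
\end{lstlisting}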
In fact, partial freeness can also be a property of purely deterministic
matrices. Revisiting Example~\ref{example:pauli-sx}, we can show
that the $2N\times2N$-dimensional matrices $A$ and $B$ in that
Example are partially free to $2N$ moments. Since $A$ and $B$ are
each constructed out of direct sums of the same Pauli matrix, $A^{2}=B^{2}=I$
where $I$ is the identity matrix. Therefore, the demonstration of
partial freeness reduces to finding a $k$ for which $\mbox{tr }\left(AB\right)^{k}\ne0$.
As an illustration of what happens, consider that for $N=3$, we have
the sequence of matrices
\begin{equation}
\left\{ \left(AB\right)^{k}\right\} _{k=0}^{3}=\left\{ I,\left(\begin{array}{cccccc}
0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0
\end{array}\right),\left(\begin{array}{cccccc}
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 1 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 1 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0
\end{array}\right),I\right\} .
\end{equation}
As $k$ increments by 1, the 1s in the odd columns move up two rows
(with wraparound) and the 1s in the even columns move down two rows.
Thus $k=N$ is the smallest positive integer for which the trace of
$\left(AB\right)^{k}$ does not vanish, and $A$ and $B$ are partially
free to $2N$ moments. We then recover the asymptotic freeness of
$A$ and $B$ immediately by taking the $N\rightarrow\infty$ limit.
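This pattern is easy to verify by direct computation. In the Python
sketch below (ours), we assume the construction described above: $A$ is
the direct sum of $N$ copies of $\sigma_{x}$, and $B$ is the same
pattern shifted by one site with wraparound; for $N=3$ this reproduces
the $6\times6$ matrices displayed above, and $\mbox{tr }\left(AB\right)^{k}$
vanishes until $k=N$.
\begin{lstlisting}[language=Python]
import numpy as np

def pauli_block_pair(N):
    # assumed construction: sigma_x blocks on sites (1,2),(3,4),...
    # for A, and the same blocks shifted by one site (with
    # wraparound) for B
    dim = 2 * N
    A = np.zeros((dim, dim))
    B = np.zeros((dim, dim))
    for j in range(0, dim, 2):
        A[j, j + 1] = A[j + 1, j] = 1.0
        B[(j + 1) % dim, (j + 2) % dim] = 1.0
        B[(j + 2) % dim, (j + 1) % dim] = 1.0
    return A, B

for N in (3, 6):
    A, B = pauli_block_pair(N)
    traces = [np.trace(np.linalg.matrix_power(A @ B, k))
              for k in range(1, N + 1)]
    print(N, traces)   # zeros until k = N, where tr (AB)^N = 2N
\end{lstlisting}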
Returning to the problem of computing the d.o.s.\ of the sum $A+B$,
we ask how good an approximation the free convolution $A\boxplus B$
is to the d.o.s.\ of the sum $A+B$ when $A$ and $B$ are not free,
but only partially free. We now quantify this statement using asymptotic
moment expansions.\cite{Chen2012}
\section{Distinguishing between two distributions using asymptotic moment
expansions}
We have described partial freeness in terms of how the moments of
the sum $A+B$ differ from what free probability requires it to be.
This suggests that asymptotic moment expansions,\cite{Wallace1958}
which expand a p.d.f.\ $f$ about a reference p.d.f.\ $\tilde{f}$
and are parameterized by the moments (or cumulants) of the two distributions
being compared, provide a natural framework for examining how the
exact p.d.f.\ differs from the free convolution. We develop this
notion using the two standard expansions, namely the Gram--Charlier
series (of Type A) and the Edgeworth series.\cite{Stuart1994}
\subsection{The Gram--Charlier series}
The Gram--Charlier series arises immediately from the orthogonal polynomial
expansion with respect to $\tilde{f}$ as the weight:\cite[Chapter IX]{Szego1975}
\begin{equation}
f\left(x\right)=\sum_{n=0}^{\infty}c_{n}\phi_{n}\left(x\right)\tilde{f}\left(x\right),
\end{equation}
where the coefficients can be shown, by the orthonormality of the
orthogonal polynomials, to be
\begin{align}
\int_{\mathbb{R}}\phi_{m}\left(x\right)f\left(x\right)dx & =\sum_{n=0}^{\infty}c_{n}\int_{\mathbb{R}}\phi_{m}\left(x\right)\phi_{n}\left(x\right)\tilde{f}\left(x\right)dx=c_{m},
\end{align}
i.e.\ the $m$th coefficient is the expected value of the $m$th
orthogonal polynomial with respect to the probability density $f$.
By expressing the orthogonal polynomials in the monomial basis,
\begin{equation}
\phi_{m}\left(x\right)=\sum_{k=0}^{m}a_{mk}x^{k},\quad c_{m}=\sum_{k=0}^{m}a_{mk}\int_{\mathbb{R}}x^{k}f\left(x\right)dx=\sum_{k=0}^{m}a_{mk}\mu_{k},
\end{equation}
we get an explicit expansion of the Gram--Charlier coefficients $\left\{ c_{m}\right\} $
as linear combinations of the moments $\left\{ \mu_{k}\right\} $
of $f$. The so--called Gram--Charlier Type A series%
\footnote{This is often referred to simply as the Gram--Charlier series; however,
in this paper we mean the latter to be the generalization of the commonly
used Type A series to possibly non-Gaussian weight functions $\tilde{f}$.%
} is simply the special case of a standard Gaussian weight:
\begin{equation}
\tilde{f}\left(x\right)=\Phi\left(x\right)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^{2}}{2}\right).
\end{equation}
The corresponding orthogonal polynomials $\left\{ \phi_{n}\right\} _{n}$
are the (probabilist's) Hermite polynomials $\left\{ He_{n}\right\} _{n}$.
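As an illustration, the coefficients $c_{m}=\sum_{k}a_{mk}\mu_{k}$ can
be generated directly from a moment sequence. The Python sketch below
(ours) reads off the monomial coefficients of $He_{m}$ and divides by
$\left\Vert He_{m}\right\Vert ^{2}=m!$, the normalization needed when
the conventional, non-normalized Hermite polynomials are used; feeding
in the moments of the standard Gaussian itself returns $c_{0}=1$ and
$c_{m}=0$ for $m\ge1$, as it should.
\begin{lstlisting}[language=Python]
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import herme2poly

def gram_charlier_coeffs(moments):
    # c_m = (1/m!) sum_k a_{mk} mu_k, He_m(x) = sum_k a_{mk} x^k
    cs = []
    for m in range(len(moments)):
        a = herme2poly([0.0] * m + [1.0])  # coefficients of He_m
        cs.append(sum(a[k] * moments[k]
                      for k in range(m + 1)) / factorial(m))
    return np.array(cs)

mu = [1, 0, 1, 0, 3]   # standard Gaussian moments mu_0..mu_4
print(gram_charlier_coeffs(mu))   # -> [1, 0, 0, 0, 0]
\end{lstlisting}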
\subsection{Edgeworth series}
The Gram--Charlier series can be seen as the output of an operator
$T$:
\begin{equation}
T:\tilde{f}\rightarrow f,\quad T\left(x\right)=\sum_{n=0}^{\infty}c_{n}\phi_{n}\left(x\right),
\end{equation}
as applied to the reference p.d.f.\ $\tilde{f}$. In contrast, the
Edgeworth series is derived by rewriting $T$ as a differential operator,
which can be derived using the relations between a probability density,
its characteristic function $\chi\left(t\right)$, its moment generating
function, and its cumulant generating function:
\begin{equation}
\chi\left(t\right)=\int_{\mathbb{R}}e^{itx}f\left(x\right)dx=\mathbb{E}_{f\left(x\right)}\left(e^{itx}\right)=\sum_{n=0}^{\infty}\frac{\mu_{n}}{n!}\left(it\right)^{n}=\exp\left(\sum_{n=1}^{\infty}\frac{\kappa_{n}}{n!}\left(it\right)^{n}\right).
\end{equation}
Writing down the analogous relations for $\tilde{f}$ and dividing
yields
\begin{equation}
\frac{\chi\left(t\right)}{\tilde{\chi}\left(t\right)}=\exp\left(\sum_{n=1}^{\infty}\frac{\kappa_{n}-\tilde{\kappa}_{n}}{n!}\left(it\right)^{n}\right),
\end{equation}
which, after rearrangement and taking the inverse Fourier transform,
yields
\begin{equation}
f\left(x\right)=\exp\left(\sum_{n=1}^{\infty}\frac{\kappa_{n}-\tilde{\kappa}_{n}}{n!}\left(-\frac{d}{dx}\right)^{n}\right)\tilde{f}\left(x\right).\label{eq:moment-expand}
\end{equation}
As with the Gram--Charlier series, the Edgeworth series is usually
presented for the Gaussian case $\tilde{f}=\Phi$. Although these
series are formally identical, they yield different partial sums when
truncated to a finite number of terms and hence have different convergence
properties. The Edgeworth form is generally considered more compact
than the Gram--Charlier series, as only the former is a true asymptotic
series.\cite{Blinnikov1998a,Stuart1994}
\subsection{Deriving the Gram--Charlier series from the Edgeworth series}
Rederiving the Gram--Charlier form from the Edgeworth series reveals
additional interesting relationships. One such relation follows from
the identity
\begin{equation}
\exp\left(\sum_{n=1}^{\infty}\frac{a_{n}}{n!}t^{n}\right)=\sum_{n=0}^{\infty}\frac{B_{n}(\left\{ a_{k}\right\} _{k=1}^{n})}{n!}t^{n},
\end{equation}
where $B_{n}$ is the complete Bell polynomial of order $n$ with
parameters $a_{1},\dots,a_{n}$.\cite{Bell1927} Setting $t=-d/dx$
gives immediately the differential operator
\begin{equation}
T\left(x\right)=\exp\left(\sum_{n=1}^{\infty}\frac{\left(\kappa_{n}-\tilde{\kappa}_{n}\right)}{n!}\left(-\frac{d}{dx}\right)^{n}\right)=\sum_{n=0}^{\infty}\frac{B_{n}\left(\left\{ \kappa_{k}-\tilde{\kappa}_{k}\right\} _{k=1}^{n}\right)}{n!}\left(-\frac{d}{dx}\right)^{n}.\label{eq:direct-series}
\end{equation}
We will call the last series of (\ref{eq:direct-series}) the direct
series of $T$.
Further specializing again to the Gaussian reference, we can use Rodrigues's
formula
\begin{equation}
He_{n}\left(x\right)\Phi\left(x\right)=\left(-1\right)^{n}\left(\frac{d}{dx}\right)^{n}\Phi\left(x\right),
\end{equation}
so that
\begin{equation}
T\left(x\right)\Phi\left(x\right)=\sum_{n=0}^{\infty}\frac{B_{n}\left(\left\{ \kappa_{k}-\tilde{\kappa}_{k}\right\} _{k=1}^{n}\right)}{n!}He_{n}\left(x\right)\Phi\left(x\right).
\end{equation}
The first few coefficients $c_{n}=B_{n}\left(\left\{ \kappa_{k}-\tilde{\kappa}_{k}\right\} _{k=1}^{n}\right)$
have been tabulated explicitly,\cite{Stuart1994} but to our knowledge
the relationship to the Bell polynomials has not been previously
discussed in the literature.
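The complete Bell polynomials are convenient to generate via the
recurrence $B_{n+1}\left(a_{1},\dots,a_{n+1}\right)=\sum_{k=0}^{n}\binom{n}{k}a_{k+1}B_{n-k}$
with $B_{0}=1$, so the coefficients $c_{n}=B_{n}\left(\left\{ \kappa_{k}-\tilde{\kappa}_{k}\right\} _{k=1}^{n}\right)$
are easy to tabulate. A short Python sketch (ours):
\begin{lstlisting}[language=Python]
from math import comb

def complete_bell(a):
    # a = [a_1, ..., a_n]; returns [B_0, B_1, ..., B_n] using
    # B_{n+1} = sum_k C(n,k) a_{k+1} B_{n-k}, with B_0 = 1
    B = [1]
    for n in range(len(a)):
        B.append(sum(comb(n, k) * a[k] * B[n - k]
                     for k in range(n + 1)))
    return B

# sanity checks: B_1 = a_1, B_2 = a_1^2 + a_2,
# B_3 = a_1^3 + 3 a_1 a_2 + a_3
assert complete_bell([1, 0, 0]) == [1, 1, 1, 1]
assert complete_bell([0, 0, 1]) == [1, 0, 0, 1]
\end{lstlisting}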
\subsection{Quantifying the effect of differing moments}
The Edgeworth series yields a useful result for error quantification.
If the first $k-1$ moments of two p.d.f.s $f$ and $\tilde{f}$ are
the same, but the $k$th moments differ, then the leading term in
the Edgeworth series is\begin{subequations}
\begin{align}
f\left(x\right) & =\tilde{f}\left(x\right)+\frac{\left(-1\right)^{k}B_{k}\left(\left\{ \kappa_{l}-\tilde{\kappa}_{l}\right\} _{l=1}^{k}\right)}{k!}\tilde{f}^{\left(k\right)}\left(x\right)+\mathcal{O}\left(\tilde{f}^{\left(k+1\right)}\right)\\
& =\tilde{f}\left(x\right)+\frac{\left(-1\right)^{k}\left(\mu_{k}-\tilde{\mu}_{k}\right)}{k!}\tilde{f}^{\left(k\right)}\left(x\right)+\mathcal{O}\left(\tilde{f}^{\left(k+1\right)}\right).
\end{align}
\end{subequations}The second equality follows from the definition
of cumulants: the $k$th cumulant is a function of only the first
$k$ moments and can be written as $\kappa_{k}=\kappa_{k}\left(\mu_{1},\dots,\mu_{k}\right)=\mu_{k}+\dots$.
\subsection{The locus of differences in moments}
The first-order term in the preceding expansion is a quantitative,
asymptotic estimate of the difference between $f_{A+B}$ and $f_{A\boxplus B}$.
The word expansion (\ref{eq:moment-expanded}) allows us to refine
the error analysis in terms of specific joint moments that contribute
to $\mu_{k}\left(A+B\right)-\mu_{k}\left(A\boxplus B\right)$. Further
insight may be gained from the lattice sum approach pioneered by Wigner\cite{Wigner1955}
to interpret each term in (\ref{eq:moment-expanded}), being a trace
of a product of $k$ matrices, as a closed path with up to $k$ hops
as allowed by the structure of the matrices being multiplied.
\begin{example}
\label{example:tridiag-hopping}Consider $A$ and $B$ as in Example~\ref{example:tridiagonal},
which are partially free of degree 8 and whose discrepancy in the
eighth moments relative to complete freeness is solely in the term
$\left\langle \left(AB\right)^{4}\right\rangle $. $B$ is the adjacency
matrix of the one--dimensional chain $\cdot-\cdot-\cdots-\cdot-\cdot$
with $N$ nodes and periodic boundary conditions, and we can interpret
$\left\langle \left(AB\right)^{4}\right\rangle $ as the expected
sum of weights of particular paths on this lattice. These paths must
have exactly four hops, as $A$, being diagonal, does not permit hops,
whereas $B$, having nonzero entries only on the super- and sub-diagonals,
requires exactly one hop either to the immediate left or the immediate
right. This gives rise to four different paths as illustrated in Figure~\ref{fig:tridiag-hopping}.
\begin{figure}[h]
\caption{\label{fig:tridiag-hopping} Paths contributing to the term $\left\langle \left(AB\right)^{4}\right\rangle $
in Example~\ref{example:tridiag-hopping}.}
\includegraphics{fig-hops.png}
\end{figure}
We can show this by writing out the explicit matrix multiplication.
Writing the diagonal elements of $A$ as the i.i.d.\ standard Gaussians
$A_{ii}=g_{i}$ and using the Einstein implicit summation convention,
\begin{subequations}
\begin{align}
\left\langle \left(AB\right)^{4}\right\rangle = & \frac{1}{N}\mathbb{E}A_{i_{1}i_{2}}B_{i_{2}i_{3}}A_{i_{3}i_{4}}B_{i_{4}i_{5}}A_{i_{5}i_{6}}B_{i_{6}i_{7}}A_{i_{7}i_{8}}B_{i_{8}i_{1}}\\
= & \frac{1}{N}\mathbb{E}g_{i_{1}}g_{i_{3}}g_{i_{5}}g_{i_{7}}\delta_{i_{1}i_{2}}\left(\delta_{i_{2}-1,i_{3}}+\delta_{i_{2}+1,i_{3}}\right)\delta_{i_{3}i_{4}}\left(\delta_{i_{4}-1,i_{5}}+\delta_{i_{4}+1,i_{5}}\right)\nonumber \\
& \qquad\times\delta_{i_{5}i_{6}}\left(\delta_{i_{6}-1,i_{7}}+\delta_{i_{6}+1,i_{7}}\right)\delta_{i_{7}i_{8}}\left(\delta_{i_{8}-1,i_{1}}+\delta_{i_{8}+1,i_{1}}\right)\\
= & \frac{1}{N}\mathbb{E}\left(g_{i}g_{i-1}g_{i}g_{i+1}+g_{i}g_{i+1}g_{i}g_{i-1}+g_{i}g_{i-1}g_{i}g_{i-1}+g_{i}g_{i+1}g_{i}g_{i+1}\right)\\
= & 2\mathbb{E}\left(g_{1}\right)^{2}\mathbb{E}\left(g_{1}^{2}\right)+2\mathbb{E}\left(g_{1}^{2}\right)^{2}=2.
\end{align}
\end{subequations}This simple example illustrates several important
concepts. First, if the underlying matrices can be interpreted as
adjacency matrices of graphs, these graphs are topologically significant
for the calculation of joint moments by controlling the number of
allowed returning paths in the lattice sum. The third equality makes
explicit use of this fact, and the numerical factor of 2 can be seen
as encoding the average degree of the underlying graph, which in this
case is the infinite one--dimensional chain. Second, the analyses
of joint moments of random matrices can be reduced to studying moments
and correlations of scalar matrix elements. While in this example
we assumed that the $g_{i}$s were uncorrelated, the calculation up
to the penultimate line is still valid in the general case where they
are correlated, and results like Wick's theorem\cite{Wick1950} or
Isserles's theorem\cite{Isserlis1916,Isserlis1918} can be applied
instead to finish the calculation. In an even more general setting,
random variates other than standard Gaussians can be analyzed
with little difficulty so long as the required moments and correlations
exist. Third, these calculations can be performed for both finite
and infinite random matrices; for the former, there is even some information
about boundary conditions. If $B$ were replaced by $B^{\prime}$,
the analogous chain with free (non-periodic) boundary conditions,
the implicit sum after the third equality would exclude some terms
for the boundary cases $i=1$ and $i=N$, since the paths would not
be able to travel beyond the edges of the lattice. As a result, the
numerical factors of 2 in the last line would be replaced by $2-2/N$
instead. In general, we expect boundary conditions to give rise to
corrections from the bulk behavior of order $\mathcal{O}\left(1/N\right)$.
To summarize this section, the notion of partial freeness unites two
disparate ideas in probability theory. First, the violation of free
independence in specific joint moments leads to asymptotic correction
factors to the density of states that show up as leading--order terms
in Edgeworth series expansions of the free convolution. These correction
factors have magnitudes that decay strongly with the lengths of the
words in question. Second, the coefficient of the correction also
encodes information about lattice sums over closed paths on random
graphs, whose topologies are encoded by the random matrices in question,
and can be related to quantities such as the average degree of a node
in the graph. Our generalization of the Edgeworth series to correct
for deviations from freeness (rather than to correct for nonnormality
in its classical usage) thus elucidates new connections between the
combinatorics of joint moments, sums over lattice paths on random
graphs, and asymptotic moment expansions of probability distributions.
\end{example}
\section{Computational implementation}
The relationship between joint moments and corrections to the density
of states is a particular feature of partial freeness which lends
itself naturally to numerical investigation. In this section, we sketch
how the characterization of partial freeness can be calculated in
an entirely automated fashion, by combining algorithms for enumerating
all joint moments of a given order with new algorithms using the results
above. Perhaps interestingly, partial freeness generates useful statistics
even when the analytic forms of the random matrices $A$ and $B$
are not known \emph{a priori}. We will treat this as a separate case
below.
\subsection{Symbolic computation of moments and joint moments}
If the analytic forms of the random matrices $A$ and $B$ are known
and can be multiplied analytically, a computer algebra system can
be used to calculate the necessary moments symbolically. The algorithm
for characterizing partial freeness then proceeds as follows:
\begin{enumerate}
\item Calculate the moments of $A$ and $B$ as well as the moments $\mu_{k}\left(A\boxplus B\right)$.
The $k$th moment of $A\boxplus B$ can be calculated by generating
all terms in the word expansion of $\left\langle \left(A+B\right)^{k}\right\rangle $
using Sawada's algorithm to generate all $\left(k,2\right)$--necklaces\cite{Sawada2001}.
\item For each word, check whether the relation (\ref{eq:free-defn}) required
by free independence holds. The first order $p$ for which this fails
is the degree of partial freeness.
\item Calculate the density of states $f_{A\boxplus B}$ using the $R$--transform
(\ref{eq:r-transform}) and its $p$th derivative $f_{A\boxplus B}^{\left(p\right)}$.
\item The leading--order correction to $f_{A\boxplus B}$ due to lack of
freeness is then $f_{A\boxplus B}+\left(-1\right)^{p}\left(\mu_{p}-\tilde{\mu}_{p}\right)f_{A\boxplus B}^{\left(p\right)}/p!$.
\end{enumerate}
To illustrate Step 2, we provide some \emph{Mathematica} code for
calculating the necessary moments and joint moments in Algorithm~\ref{alg:ma-joint-moments}.
The code also provides a function for calculating n.e.t.s of an arbitrary
joint moment or centered joint moment in terms of the distribution
of matrix elements. For simplicity, only the i.i.d.\ case of one
scalar probability distribution with moments $\left\{ m_{k}\right\} $
is illustrated, although this approach can be extended to more complicated
situations as necessary.
\begin{algorithm}[h]
\caption{\label{alg:ma-joint-moments}Mathematica code for calculating normalized
expected traces of joint matrix products and centered joint matrix
products for finite dimensional random matrices.}
\begin{lstlisting}[language=Mathematica]
NN = 100; (* Size of matrix *)
(* The following generates the map which
formally evaluates the expectation of the
G random variables assuming that they
are i.i.d. with vanishing mean. *)
MomentsOfG := Flatten[{
Table[Subscript[G, j]^i -> Subscript[m, i],
{i, 2, NN}, {j, 1, NN}],
Table[Subscript[G, j] -> 0, {j, 1, NN}]
}];
ExpectationOfG[x_] := x /. MomentsOfG;
(* Normalized expected trace *)
AngleBracket[x_] := ExpectationOfG[
Tr[x]/NN // Expand ];
(* centering operator *)
c[x_] := (x - AngleBracket[x] IdentityMatrix[NN])
(* Example 19 *)
A = DiagonalMatrix[Array[Subscript[G, #] &, NN]];
B = Normal[SparseArray[{Band[{1, 2}] -> 1}, {NN, NN}]];
B[[1, -1]] = 1; (* Add circulant boundary *)
B = B + Transpose[B];
AngleBracket[c[A.A].c[B.B]]
(* Output: 0 *)
AngleBracket[MatrixPower[c[A].c[B], 4]]
(* Output: 2 Subscript[m, 2]^2 *)
(* Specialized to standard Gaussian Gs *)
GaussianG = Array[Subscript[m, #] ->
Moment[NormalDistribution[], #] &, NN];
AngleBracket[MatrixPower[A + B, 4]] /. GaussianG
(* Output: 1099 *)
\end{lstlisting}
\end{algorithm}
\subsection{Numerical calculations on empirical samples}
The partial freeness formalism can also be used when the underlying
distributions of the random matrices $A$ and $B$ are unknown, but
when samples of each are available, e.g.\ from Monte Carlo simulations
or from empirical data. The algorithm for characterizing the partial
freeness of numerical samples is as follows:
\begin{enumerate}
\item Generate $t$ pairs of samples $\left\{ \left(A_{i},B_{i}\right)\right\} _{i=1}^{t}$
of $N\times N$ diagonalizable random matrices $A$ and $B$.
\item For each pair:
\begin{enumerate}
\item Calculate the exact eigenvalues of $A_{i}$ and $B_{i}$.
\item Calculate the eigenvalues of the sample $A_{i}+B_{i}$ of the exact
sum $A+B$. (This is for comparison purposes only and can be omitted.)
\item Calculate $N$ samples of the free convolution $f_{A\boxplus B}$
using the eigenvalues of $M_{i}=A_{i}+Q_{i}B_{i}Q_{i}^{\dagger}$
using a numerically generated Haar orthogonal (or unitary) matrix
$Q_{i}$.
\item Calculate $N$ samples from the classical convolution $f_{A\star B}$
using the eigenvalues of $L_{i}=A_{i}+\Pi_{i}^{-1}B_{i}\Pi_{i}$ using
a random permutation matrix $\Pi_{i}$.
\end{enumerate}
\item Calculate the first $2K$ moments of $A$ and $B$ as well as the
first $K$ moments $\mu_{k}\left(A\boxplus B\right)$, $\mu_{k}\left(A\star B\right)$,
and $\mu_{k}\left(A+B\right)$.
\item Calculate the degree $k$ for which $A$ and $B$ are partially free
by testing for the smallest $k$ such that the moments of the free
convolution differ from the exact result, i.e. test the hypothesis
\[
\mu_{k}\left(A+B\right)\ne\mu_{k}\left(A\boxplus B\right).
\]
\item Using Sawada's algorithm,\cite{Sawada2001} enumerate all unique terms
$T_{j}=\left\langle A^{m_{1j}}B^{n_{1j}}\cdots A^{m_{k_{j}j}}B^{n_{k_{j}j}}\right\rangle $
in $\left\langle \left(A+B\right)^{k}\right\rangle $. For each term
$T_{j}$:
\begin{enumerate}
\item Calculate
\[
T_{j}^{\left(cl\right)}=\left\langle A^{m_{1j}+\cdots+m_{k_{j}j}}\right\rangle \left\langle B^{n_{1j}+\cdots+n_{k_{j}j}}\right\rangle ,
\]
which would be its value expected from classical independence. Test
the hypothesis of equality $T_{j}=T_{j}^{\left(cl\right)}$.
\item Calculate the normalized expected trace of the centered term
\[
T_{j}^{\left(c\right)}=\left\langle \left(A^{m_{1j}}-\left\langle A^{m_{1j}}\right\rangle \right)\left(B^{n_{1j}}-\left\langle B^{n_{1j}}\right\rangle \right)\cdots\right\rangle ,
\]
which would be expected to vanish if $A$ and $B$ were truly free.
Test the hypothesis of equality $T_{j}^{\left(c\right)}=0$.
\end{enumerate}
\item Calculate the $k$th derivative $f_{A\boxplus B}^{\left(k\right)}$
using numerical finite difference.
\item Plot $f_{A+B}$, $f_{A\boxplus B}$ and $f_{A\boxplus B}+\left(-1\right)^{k}\left(\mu_{k}-\tilde{\mu}_{k}\right)/k!\cdot f_{A\boxplus B}^{\left(k\right)}$.
\end{enumerate}
This algorithm tests for partial freeness of degree $k\le K$, attempts
to identify the locus of discrepancy by testing all possible $\left(k,2\right)$-words,
and calculates the leading order correction term to the density of
states. The calculation of the classical convolution and exact density
of states are purely for comparative purposes and can be omitted.
In practice, we also account for sampling error in the hypothesis
tests in Steps 4 and 5 by calculating the standard error of each term
being tested, and evaluating the $p$--value for each such hypothesis.
For example, the standard error of the $k$th moment is
\begin{equation}
SE\left(\mu_{k}\right)=\sqrt{\frac{\mu_{2k}-\mu_{k}^{2}}{t}},
\end{equation}
and the standard error of a term $T_{j}=\left\langle \left(A^{m_{1j}}B^{n_{1j}}\cdots A^{m_{k_{j}j}}B^{n_{k_{j}j}}\right)\right\rangle $
in the expansion of $\left\langle \left(A+B\right)^{k}\right\rangle $
is
\begin{equation}
SE\left(T_{j}\right)=\sqrt{\frac{\left\langle \left(A^{m_{1j}}B^{n_{1j}}\cdots A^{m_{k_{j}j}}B^{n_{k_{j}j}}\right)^{2}\right\rangle -T_{j}^{2}}{t}}.
\end{equation}
The calculation of the necessary standard errors requires information up
to the $2K$th moment for calculating variances stemming from the
$K$th moment, which is why $2K$ moments of $A$ and $B$ are calculated
in Step 3. Alternatively, other measures of statistical fluctuation
could be used, such as bootstrap or jackknife errors.
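As a minimal illustration (ours; names are placeholders), given the
pooled eigenvalues of $t$ independent draws, one can compute each
moment, its standard error, and a $z$-score for the equality tests
above:
\begin{lstlisting}[language=Python]
import numpy as np

def moment_and_se(eigs, k, t):
    # k-th moment and its standard error from t matrix draws
    mu_k = np.mean(eigs ** k)
    mu_2k = np.mean(eigs ** (2 * k))
    return mu_k, np.sqrt((mu_2k - mu_k ** 2) / t)

def z_score(x, se_x, y, se_y):
    # test statistic for the hypothesis x == y
    return (x - y) / np.hypot(se_x, se_y)
\end{lstlisting}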
The \emph{Supplementary Information} includes an implementation of
the algorithm described in this section for analyzing empirical samples
of random matrices that is written in MATLAB. In numerical tests of
this algorithm, we observe the expected $O\left(1/\sqrt{Nt}\right)$
rate of convergence in the word values with the number of eigenvalues
$Nt$. Thus in practical numerical studies where $N$ and $t$ can
be controlled, we recommend, for maximum numerical efficiency,
that $N$ be set only as large as necessary to minimize finite-size
effects, and $t$ be taken as large as necessary to ensure numerical
convergence, as the diagonalization of typical matrices is superlinear
in $N$.
\section{Summary}
Partial freeness is a relationship between random matrices that brings
together ideas from various aspects of probability theory. First,
the notion of asymptotic freeness arises naturally as a special limiting
case of partial freeness for infinite--dimensional matrices, but unlike
the former, partial freeness is still well--defined for arbitrary
diagonalizable random or deterministic matrices of finite or infinite dimensions.
Second, partial freeness allows for deviations from asymptotic freeness
to be quantified in terms of well--defined asymptotic corrections
to quantities such as the empirical density of states. These asymptotic
corrections generalize the notions of Gram--Charlier and Edgeworth
series which arise from classical probability in the study of deviations
from Gaussianity. Third, the organization of joint moments by
words of a given length reveals new combinatorial structure, which
to our knowledge, has not been elucidated in the context of free probability
before. The enumeration of joint moments evokes the combinatorics
of necklaces, which also shows how the ideas in this paper generalize
straightforwardly to multiple additive free convolutions: the $k$
parameter of the necklaces in Definition~\ref{def:necklace} counts
the number of matrices whose sum $M=A+B+\dots$ is being investigated.
We have also demonstrated that partial freeness is not only a theoretically
interesting abstract relation between random matrices, but one that also
comes with a statistical framework which can be tested in numerical
computations in a practical manner. Partial freeness organizes clearly
the relationships between the joint moments of random matrices and
the moments and correlations of the scalar random variables in their
matrix elements. Additionally, partial freeness can be tested for
statistically using purely empirical data, without resorting to any
model for the random matrices in question. These ideas can be stated
in algorithmic form and thus we expect partial freeness to be useful
both theoretically and in practical numerical applications. We are
currently exploring how the theoretical ideas brought together by
partial freeness can be used to construct new computational statistical
tools.
\thanks{We acknowledge funding from NSF SOLAR Grant No.~1035400. A.E.\ acknowledges
additional funding from NSF DMS Grant No.~1016125. We gratefully
acknowledge useful discussions with D. Shlyakhtenko (UCLA), N. Raj
Rao (Michigan) and A. Su\'arez (Univ. Aut\'onoma Madrid) that have
led us to pursue this avenue of investigation. We thank M. Welborn
(MIT) and E. Hontz (MIT) for graphics assistance with Figure~\ref{fig:tridiag-hopping}.}
\bibliographystyle{focm}
\label{intro}
\subsection{$G$-corks.}
A {\it cork} $(C,g)$ is a pair consisting of a compact contractible (Stein\footnote{This condition is included in some original papers by Akbulut et.al., for example \cite{AK}}) 4-manifold $C$ and a diffeomorphism $g$ of the boundary $\partial C$
such that $g$ cannot extend over the inside of $C$ as a smooth diffeomorphism.
A {\it cork twist} is the 4-dimensional surgery given by the following cut-and-paste:
$$X'=(X-C)\cup_gC.$$
The manifold presented by the diagram as in {\sc Figure}~\ref{AKB} becomes a cork.
The map $g$ is the 180$^\circ$ rotation about the horizontal line in the picture.
In particular, $C(1)$ is the first cork which was used by Akbulut.
Here a box with the integer $x$ stands for the $x$-fold right handed full twist.
\begin{figure}[htpb]
\begin{center}
\includegraphics{AkbCork2.eps}
\caption{The handle diagram of $C(m)$.}
\label{AKB}
\end{center}
\end{figure}
In the definition of the original cork the condition $g^2=\text{id}_{\partial C}$ is included.
Recently, in some papers the order of the gluing map $g$ has been generalized to finite order (\cite{TM1}, \cite{AKMR}), to infinite order \cite{G}, or generally to any group $G$ in \cite{AKMR}.
In terms of the view by Auckly, Kim, Melvin, and Ruberman \cite{AKMR},
if a group $G$ smoothly and effectively acts on the boundary of a contractible 4-manifold $C$ and any non-trivial diffeomorphism $g\in G$ cannot smoothly extend to the inside $C$, then the pair $(C,G)$ is called a {\it $G$-cork}.
As examples of finite order corks, the author \cite{TM1} gave pairs $(X_{n,m},\tau_{n,m})$ for $X=C, D, E$ or, more generally, $X=X({\bf x})$ for a $\{\ast,0\}$-sequence ${\bf x}\neq (0,\cdots, 0)$ or $(\ast,\cdots, \ast)$, where we call such a sequence ${\bf x}$ {\it non-trivial}.
The diffeomorphism $\tau_{n,m}$ is the $2\pi/n$-rotation with respect to the diagram.
In the paper \cite{TM1}, we put the index $X$ on $\tau_{n,m}$, like the notation $\tau^X_{n,m}$.
We remove the indexes if it is understood from the context.
We describe $C_{n,m}$ in {\sc Figure}~\ref{Cnm}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.5\textwidth]{cyclic2.eps}
\caption{The handle decomposition of $C_{n,m}$.}
\label{Cnm}
\end{center}
\end{figure}
$D_{n,m}$ is obtained by exchanging all the dots and $0$-framings in $C_{n,m}$.
$E_{n,m}$ is obtained by modifying $C_{n,m}$ as in {\sc Figure}~\ref{beisotopy}.
The concrete diagrams for these examples are described in \cite{TM1}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.5\textwidth]{henkan.eps}
\caption{The modification.}
\label{beisotopy}
\end{center}
\end{figure}
\begin{thm}[\cite{TM1}]
\label{tange}
For $X=C,D, E$ or $X({\bf x})$, for any non-trivial $\{\ast,0\}$-sequence ${\bf x}$, $(X_{n,m},\tau_{n,m})$ is a finite order cork.
Furthermore, $C_{n,m}$ is a ${\mathbb Z}_n$-cork with Stein structure.
\end{thm}
Auckly, Kim, Melvin, and Ruberman \cite{AKMR} gave the examples of $G$-corks for any finite subgroup $G$ of $SO(4)$.
\begin{thm}[\cite{AKMR}]
Let $G$ be any finite subgroup in $SO(4)$.
Then there exists a $G$-cork.
\end{thm}
Let $Y_1,Y_2$ be two $n$-manifolds with boundary.
We call the surgery of attaching an $n$-dimensional 1-handle along neighborhoods of two points $p_i\in \partial Y_i$ the {\it boundary-sum},
and we denote the resulting manifold by $Y_1\natural Y_2$.
Their Stein corks in \cite{AKMR} were constructed by the boundary-sum of several copies of $C(1)$.
They also announce the existence of finite order cork with hyperbolic boundary in \cite{AKMR}.
We say that an $n$-manifold $X$ with boundary is {\it boundary-sum irreducible} if, whenever $X=X_1\natural X_2$, $X_1$ or $X_2$ is homeomorphic to an $n$-disk.
If $X$ is not boundary-sum irreducible, then we call $X$ {\it boundary-sum reducible}.
Here a 4-manifold $X$ is called {\it irreducible} if for any connected-sum decomposition $X=X_1\#X_2$, $X_1$ or $X_2$ is a homotopy 4-sphere.
We call a 3-manifold $Y$ {\it prime} if for any connected-sum decomposition $Y=Y_1\#Y_2$, $Y_1$ or $Y_2$ is a $3$-sphere.
The following holds.
\begin{lem}
\label{equiv}
Let $X$ be a 4-manifold.
If $X$ is irreducible and $\partial X$ is prime, then $X$ is boundary-sum irreducible.
\end{lem}
The problem of whether the examples $X_{n,m}$ in Theorem~\ref{tange} are boundary-sum irreducible corks or not has remained open.
Our main theorem answers this question for the case of $X({\bf x})_{n,m}$.
\begin{thm}
\label{main}
For any integer $m$ and any positive integer $n$,
there exist boundary-sum irreducible ${\mathbb Z}_n$-corks $(C_{n,m},\tau_{n,m})$ with Stein structure.
For any non-trivial $\{\ast, 0\}$-sequence ${\bf x}$, $(X({\bf x})_{n,m},\tau_{n,m})$ are boundary-sum irreducible
finite order corks.
Another variation, $(E_{n,1},\tau_{n,1})$, is a boundary-sum irreducible ${\mathbb Z}_n$-cork.
\end{thm}
Indeed, $X({\bf x})_{n,m}$ is irreducible and $\partial X({\bf x})_{n,m}$ is prime.
This result means that $(X({\bf x})_{n,m},\tau_{n,m})$ is a different finite order cork from the one used in Theorem A in \cite{AKMR}.
We do not know whether our examples are different from their finite order corks with hyperbolic boundary.
We can show the following result which follows immediately from the proof of Theorem~\ref{main}.
We set $Y_{n,m}:=\partial C_{n,m}$.
Clearly this 3-manifold is diffeomorphic to any $\partial X_{n,m}({\bf x})$, for any $\{\ast,0\}$-sequence ${\bf x}$.
We set $Y'_{n,m}:=\partial E_{n,m}$.
\begin{thm}
\label{irre}
Let $n,m$ be integers as above.
Then $Y_{n,m}$ and $Y'_{n,1}$ are prime homology spheres.
\end{thm}
Furthermore we can prove the following hyperbolicity.
In \cite{AKMR} it was suggested that any $Y_{n,m}$ would be a hyperbolic 3-manifold.
\begin{thm}
\label{hyper}
Let $n,m$ be integers with $0\le m\le 2$ and $1\le n\le 4$.
$Y_{n,m}$ and $Y'_{n,m}$ are hyperbolic 3-manifolds.
\end{thm}
These are direct results obtained by the computer software HIKMOT \cite{HIKMOT2}.
It was proven in \cite{KOU} that $Y_{1,m}=Y'_{1,m}=\partial C(m)$ are hyperbolic 3-manifolds, by using the fact that these are Dehn surgeries on the pretzel knot $Pr(-3,3,-3)$.
We put a question here.
\begin{que}
Let $X$ be $X({\bf x})$ for non-trivial $\{\ast,0\}$-sequence or $E$.
\begin{itemize}
\item Is $(X_{n,m},\tau_{n,m})$ finite order cork with Stein structure?
\item Is $(X_{n,m},\tau_{n,m})$ finite order cork with hyperbolic boundary?
\end{itemize}
\end{que}
Notice that it is not known at all what effect on exotic structures a cork twist along a cork with hyperbolic boundary gives.
This theme is left up to a future study of exotic 4-manifolds.
\section*{Acknowledgements}
I thank Akitoshi Takayasu and Hidetoshi Masai for help with the installation of HIKMOT, for useful comments, and for strategies to find hyperbolic solutions.
I also thank Kouichi Yasui, Kouki Sato, Takahiro Oba, and Robert Gompf for useful advice, comments, and suggestions.
\section{Primeness of $K_{n,m}$ and $K'_{n,m}$.}
$Y_{n,m}$ and $Y'_{n,m}$ are $n$-fold cyclic branched covers of $Y(m):=\partial C(m)$ and of $Y'(m)$, with branch loci $K_{n,m}$ and $K'_{n,m}$ respectively.
See {\sc Figure}~\ref{seifert} for $K_{n,m}$.
The picture of $K'_{n,m}$ is obtained by modifying the diagram of $K_{n,m}\subset Y(m)$ in {\sc Figure}~\ref{seifert} according to {\sc Figure}~\ref{beisotopy}.
In this picture, the slice disks of $K_{n,m}$ and $K'_{n,m}$ intersect with the 0-framed 2-handle at $2n$ points.
Let $d(K)$ be the top degree of the symmetrized Alexander polynomial $\Delta_K(t)$.
\begin{lem}
For any integer $m$ and positive integer $n$, the Alexander polynomials of $K_{n,m}$ and $K'_{n,m}$ are $\Delta_{K_{n,m}}\doteq 2t^{n}-5+2t^{-n}$ and $\Delta_{K'_{n,m}}\doteq 6t^{n}-13+6t^{-n}$.
Furthermore, the genera of $K_{n,m}$ and $K'_{n,m}$ are $n$.
\end{lem}
Note that the computation in the case of $K_{1,m}$ was done in \cite{KOU}.\\
\begin{proof}
$K_{n,m}$ has the genus $n$ Seifert surface $\Sigma_{n,m}$ as in {\sc Figure}~\ref{seifert}.
We compute the Seifert matrix for $\Sigma_{n,m}$.
We take the generators $\{\lambda_i,\mu_i|i=1,\cdots,n\}$ in $H_1(\Sigma_{n,m})$ as in {\sc Figure}~\ref{generators}.
We define $\lambda_i^+$ and $\mu_i^+$ to be the push-offs of $\lambda_i$ and $\mu_i$ to
one side of the neighborhood of $\Sigma_{n,m}$.
Consider the order of the generators as
$$\lambda_1,\lambda_2,\cdots,\lambda_n,\mu_1,\mu_2,\cdots,\mu_n.$$
The $(r,s)$-entry of the Seifert matrix $S_{n,m}$ is the linking number $lk(x_r^+,x_s)$,
where $x_i$ is the $i$-th generator above.
Here we have the following:
$$lk(\lambda_i^+,\lambda_j)=0,\ lk(\mu_i^+,\mu_j)=0,\ \ lk(\lambda_i^+,\mu_j)=\begin{cases}2&i\le j\\1&i>j\end{cases}$$
and
$$lk(\mu_i^+,\lambda_j)=\begin{cases}1&i\le j\\2&i>j.\end{cases}$$
These calculations are done by considering the linking indicated in {\sc Figure}~\ref{linking1}.
The Seifert matrix $S_n$ is $\begin{pmatrix}O_n&A_n\\B_n&O_n\end{pmatrix}$, where $O_n$ is the $n\times n$ zero matrix, $A_n$ and $B_n$ are $n\times n$ matrices satisfying the following:
$$A_n=(a_{ij}),\ a_{ij}=\begin{cases}2&j\ge i\\1&j< i.\end{cases}\text{ and }B_n=(b_{ij}),\ b_{ij}=\begin{cases}1&j\ge i\\2&j< i.\end{cases}$$
Then we have
\begin{eqnarray*}
\Delta_{K_{n,m}}&=&\det(tS_n-S_n^T)=\det\begin{pmatrix}O_n&tA_n-B_n^T\\tB_n-A_n^T&O_n\end{pmatrix}\\
&=&(-1)^n\det(tA_n-B_n^T)\det(tB_n-A_n^T)\\
&=&\det(tA_n-B_n^T)\det(A_n-tB_n^T).
\end{eqnarray*}
We set
$(\alpha_{ij})=tA_n-B_n^T$, where $\alpha_{ij}=\begin{cases}2t-2&j>i\\2t-1&i=j\\t-1&j<i.\end{cases}$
We define $\det(tA_n-B_n^T)$ to be $\alpha_n$.
By expanding $\alpha_n$ and deforming it, we have
$$\alpha_n=\det\begin{pmatrix}1&2t-2&\cdots&\cdots&2t-2\\-t&2t-1&2t-2&\cdots&2t-2\\0&t-1&2t-1&\ddots&\vdots\\\vdots&\vdots&\ddots&\ddots&2t-2\\0&t-1&\cdots&t-1&2t-1\end{pmatrix}=\alpha_{n-1}+t\beta_{n-1},$$
where $\beta_{n-1}$ is the $(n-1)\times (n-1)$ matrix satisfying the following:
\begin{eqnarray*}
\beta_{n-1}&=&
\det
\begin{pmatrix}
2t-2&2t-2&\cdots&\cdots&2t-2\\
t-1&2t-1&2t-2&\cdots&2t-2\\
\vdots&t-1&2t-1&\ddots&\vdots\\
\vdots&\vdots&\ddots&\ddots&2t-2\\
t-1&t-1&\cdots&t-1&2t-1\end{pmatrix}\\
&=&
\det
\begin{pmatrix}
0&2t-2&\cdots&\cdots&2t-2\\
-t&2t-1&2t-2&\cdots&2t-2\\
0&t-1&2t-1&\ddots&\vdots\\
\vdots&\vdots&\ddots&\ddots&2t-2\\
0&t-1&\cdots&t-1&2t-1\end{pmatrix}=t\beta_{n-2}
\end{eqnarray*}
$$\beta_{2}=\det\begin{pmatrix}2t-2&2t-2\\t-1&2t-1\end{pmatrix}=2t(t-1).$$
Thus $\beta_{n-1}=2t^{n-2}(t-1)$, therefore, we have
$\alpha_n=2t^2-1+\sum_{k=3}^n2t^{k-1}(t-1)=2t^n-1$.
By using the following equality
$$\det(A_n-tB_n^T)=(-t)^n\det(1/tA_n-B_n^T)$$
we have $\det(A_n-tB_n^T)=(-t)^n(2t^{-n}-1)=(-1)^n(2-t^n)$.
Therefore, we have
$$\Delta_{K_{n,m}}(t)=(2t^n-1)(-1)^n(2-t^n)\doteq 2t^{n}-5+2t^{-n}.$$
Hence, since $d(K_{n,m})=n$ coincides with the genus of $\Sigma_{n,m}$ and $d(K)\le g(K)$ holds in general,
we can see that the surface is a minimal genus Seifert surface.
Thus, we have $g(K_{n,m})=n$.
In the case of $K'_{n,m}$, we can carry out a similar computation by taking the corresponding generators in the Seifert surface.
The Seifert matrix $S'_n$ is $\begin{pmatrix}O_n&A'_n\\B'_n&O_n\end{pmatrix}$, where $O_n$ is the $n\times n$ zero matrix, $A'_n$ and $B'_n$ are $n\times n$ matrices satisfying the following:
$$A'_n=(a'_{ij}),\ a'_{ij}=\begin{cases}-2&j\ge i\\-3&j< i.\end{cases}\text{ and }B'_n=(b'_{ij}),\ b'_{ij}=\begin{cases}-3&j\ge i\\-2&j< i.\end{cases}$$
Then we have
\begin{eqnarray*}
\Delta_{K'_{n,m}}&=&\det(tS'_n-{S'}_n^T)=\det\begin{pmatrix}O_n&tA'_n-{B'}_n^T\\tB'_n-{A'}_n^T&O_n\end{pmatrix}\\
&\doteq&6t^n-13+6t^{-n}.
\end{eqnarray*}
\hfill$\Box$
\end{proof}
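As an independent check of this computation (not part of the original
argument), the determinant $\det\left(tS_{n}-S_{n}^{T}\right)$ can be
evaluated symbolically. The following Python/SymPy sketch (ours) builds
$S_{n}$ from the block structure above and factors the result,
recovering $\left(2t^{n}-1\right)\left(t^{n}-2\right)$ up to units:
\begin{lstlisting}[language=Python]
import sympy as sp

t = sp.symbols('t')

def alexander_from_seifert(n):
    # Seifert matrix S_n = [[0, A_n], [B_n, 0]] as in the proof
    A = sp.Matrix(n, n, lambda i, j: 2 if j >= i else 1)
    B = sp.Matrix(n, n, lambda i, j: 1 if j >= i else 2)
    Z = sp.zeros(n, n)
    S = sp.Matrix(sp.BlockMatrix([[Z, A], [B, Z]]))
    return sp.factor((t * S - S.T).det())

print(alexander_from_seifert(3))
# -> (2*t**3 - 1)*(t**3 - 2), i.e. 2t^3 - 5 + 2t^{-3} up to units
\end{lstlisting}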
\begin{figure}[htbp]
\begin{center}
\includegraphics{seifert.eps}
\caption{$K_{n,m}$ in $Y(m)$}
\label{seifert}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics{gene.eps}
\caption{Generators of $H_1(\Sigma_{n,m})$.}
\label{generators}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics{linking1.eps}
\caption{$lk(\lambda_i^+,\mu_j)$ and $lk(\mu_i^+,\lambda_j)$}
\label{linking1}
\end{center}
\end{figure}
\begin{lem}
Let $K_1$ and $K_2$ be two knots in two homology spheres $Y_1$, $Y_2$ respectively.
Then
$$g(K_1\#K_2)=g(K_1)+g(K_2)$$
holds.
\end{lem}
This is a classical result; however, we include a proof here.\\
\begin{proof}
Let $S\subset Y_1\#Y_2$ be the embedded separating sphere for $Y_1$ and $Y_2$.
We suppose that $S$ separates $K_1\#K_2$, i.e., $(K_1\#K_2)\cap S$ consists of two points.
Let $\Sigma$ be a minimal genus Seifert surface of $K_1\#K_2$.
In general position, the intersection $\Sigma\cap S$ consists of finitely many circles and a single arc connecting the two points.
We take an innermost circle $C$ not enclosing the arc.
$C$ also bounds a disk in $\Sigma$, because $\Sigma$ is a minimal genus surface.
Then, by cutting $\Sigma$ along $C$ and capping off with two new disks, we can decrease the number of the intersection circles.
We call the new embedded surface $\Sigma$ again.
This cut-and-paste process preserves the genus of $\Sigma$.
The isotopy class of $\Sigma$ may be changed.
By iterating this process we eliminate all the intersection circles.
Then $\Sigma=\Sigma_1\natural\Sigma_2$ is obtained and
$$g(K_1\# K_2)=g(\Sigma)=g(\Sigma_1)+g(\Sigma_2)\ge g(K_1)+g(K_2)$$
holds.
Conversely, since $g(K_1)+g(K_2)\ge g(K_1\#K_2)$, we obtain
$g(K_1\#K_2)=g(K_1)+g(K_2)$.\hfill$\Box$
\end{proof}
Let $K$ be a knot in a homology sphere.
If $\Delta_K(t)$ cannot be decomposed as $\Delta_K(t)=f_1(t)f_2(t)$, where each $f_i(t)$ is a non-trivial Alexander polynomial of a knot in a homology sphere, then we call $\Delta_K(t)$ {\it A-irreducible}.
\begin{lem}
\label{prime}
Let $K$ be a knot in a homology sphere $Y$.
If $\Delta_K(t)$ is A-irreducible and $g(K)=d(K)$, then $K$ is prime.
\end{lem}
\noindent{\begin{proof}
Suppose that $K$ is not prime.
Then $K$ is isotopic to a composite knot $K_1\#K_2$.
Then $\Delta_K(t)=\Delta_{K_1}(t)\Delta_{K_2}(t)$.
Hence, $d(K)=d(K_1)+d(K_2)$ holds.
Since $g(K)=d(K)$, we have $d(K)=g(K)=g(K_1)+g(K_2)\ge d(K_1)+d(K_2)$.
Therefore, $g(K_1)+g(K_2)= d(K_1)+d(K_2)$ holds.
From the inequalities $g(K_i)\ge d(K_i)$, $g(K_i)=d(K_i)$ holds for $i=1,2$.
On the other hand, since $K$ is A-irreducible, $\Delta_{K_1}(t)=1$ or $\Delta_{K_2}(t)=1$.
Thus $g(K_1)=0$ or $g(K_2)=0$ holds, i.e., one of the summands is a trivial knot.
This means that $K$ is prime.
\hfill$\Box$
\end{proof}}
We prove $K_{n,m}$ is a prime knot.
\begin{lem}
$K_{n,m}$ and $K'_{n,m}$ are prime knots in $Y(m)$ and $Y'(m)$ respectively.
\end{lem}
\noindent{\begin{proof}
The Alexander polynomials of $K_{n,m}$ and $K'_{n,m}$ are $2t^{n}-5+2t^{-n}$ and $6t^n-13+6t^{-n}$.
These polynomials are A-irreducible.
Indeed, the polynomials decompose completely as $\Delta_{K_{n,m}}\doteq 2t^{n}-5+2t^{-n}\doteq(2t^n-1)(t^n-2)$ and $\Delta_{K'_{n,m}}\doteq 6t^{n}-13+6t^{-n}\doteq(2t^n-3)(3t^n-2)$ over ${\mathbb Z}$.
Any factor of these decompositions is an irreducible polynomial by Eisenstein's criterion (applied to the factor or to its reciprocal polynomial),
and, failing the symmetry $\Delta(t)\doteq\Delta(t^{-1})$, is not the Alexander polynomial of a knot in a homology sphere.
Since the genus of $K_{n,m}$ and $K'_{n,m}$ is $n$, from Lemma~\ref{prime}, $K_{n,m}$ and $K'_{n,m}$ are prime.
\hfill$\Box$
\end{proof}}
\section{Boundary-sum irreducibility of $X({\bf x})_{n,m}$.}
Before proving Theorem~\ref{main}, we prove Lemma~\ref{equiv}.\\
\begin{proof}
Suppose that $X^4$ is boundary-sum reducible.
Then there exists a decomposition $X=X_1\natural X_2$ such that neither $X_1$ nor $X_2$ is homeomorphic to a 4-disk.
If neither $\partial X_1$ nor $\partial X_2$ is diffeomorphic to $S^3$, then $\partial X=\partial X_1\#\partial X_2$ is a non-trivial connected-sum, which contradicts the primeness of $\partial X$.
Hence we may assume $\partial X_1\cong S^3$.
Then $X$ is the connected-sum $\hat{X}_1\#X_2$, where $\hat{X}_1$ is $X_1$ capped off by a 4-disk $D^4$, and $\hat{X}_1$ is not homeomorphic to $S^4$ (otherwise $X_1$ would be homeomorphic to a 4-disk).
This contradicts the irreducibility of $X$.
Therefore we get the desired assertion.
\hfill$\Box$
\end{proof}
Here we prove Theorem~\ref{main}.\\
\begin{proof}
For any non-trivial $\{\ast, 0\}$-sequence ${\bf x}$ we set $X=X({\bf x})$.
We first show that $X_{n,m}$ is irreducible.
In any connected-sum decomposition $X_{n,m}=X_1\#X_2$, the closed summand is a simply-connected homology 4-sphere, hence a homotopy 4-sphere by Freedman's classification \cite{F}.
Thus $X_{n,m}$ is irreducible.
We prove that $Y_{n,m}$ is a prime 3-manifold.
$Y_{n,m}$ is the $n$-fold cyclic branched cover of $Y(m)$ along $K_{n,m}$.
Namely, $Y_{n,m}/\langle \tau\rangle=Y(m)$, where $\tau=\tau_{n,m}$.
If $S\subset Y_{n,m}$ is an embedded 2-sphere, then by \cite{MSY} and \cite{Dun} we may assume, up to isotopy, that $S$ satisfies one of the following conditions for any $g\in\langle \tau\rangle$:
\begin{itemize}
\item $g(S)\cap S=\emptyset$
\item $g(S)=S$.
\end{itemize}
Suppose that the first condition is satisfied.
$S$ does not intersect the branch locus; hence $S$ projects to an embedded sphere in $Y(m)$.
Since $Y(m)$ is a prime 3-manifold by \cite{KOU}, the sphere bounds a 3-ball in $Y(m)$.
Hence, lifting the ball to $Y_{n,m}$, we can find a 3-ball in $Y_{n,m}$ with the boundary $S$.
Suppose that the second condition is satisfied.
Then the action restricts to $S$.
The action is orientation-preserving, because if the action on $S$ is orientation-reversing,
then the quotient space has a connected-sum component of $L(2,1)$.
Then, in general position, $S$ intersects the branch locus transversely in finitely many points.
By this argument, we can rule out the case where the branch locus is included in $S$.
This means that $\langle \tau\rangle$ acts on $S$ with discrete fixed-point set.
A finite cyclic action on the 2-sphere is conjugate, up to homotopy, to a rotation in $SO(3)$ by \cite{smale}.
In particular, there are exactly two fixed points.
Let $S'$ be the image of $S$ in $Y(m)$; then $S'\cap K_{n,m}$ consists of two points.
Since $Y(m)$ is prime, $S'$ bounds a 3-disk $D$ in $Y(m)$.
$D\cap K_{n,m}$ is a trivial arc, because $K_{n,m}$ is a prime knot in $Y(m)$.
Since the branched cover along the trivial arc is a 3-disk, $S$ bounds a 3-disk in $Y_{n,m}$.
In each case, any embedded sphere in $Y_{n,m}$ bounds a 3-disk.
This means $Y_{n,m}$ is prime and it follows that $X_{n,m}$ is boundary-sum irreducible.
$Y'_{n,1}$ is the $n$-fold branched cover of $Y'(1)=\partial C(1)$ branched along $K'_{n,1}$.
The same argument works for $Y'_{n,1}$.
This means $Y'_{n,1}$ is prime.
Thus, for any $n$, $(E_{n,1},\tau_{n,1})$ is a boundary-sum irreducible ${\mathbb Z}_n$-cork.
\hfill$\Box$
\end{proof}
Here, we give a quick proof of Theorem~\ref{irre}.\\
{\bf Proof of Theorem~\ref{irre}.}
It follows immediately from the latter part of the proof of Theorem~\ref{main}.
\hfill$\Box$
For any integer $m$ with $m\neq 1$, we do not know whether $E_{n,m}$ is boundary-sum irreducible or not.
We need to prove the primeness of $Y'(m)$.
The Dehn surgery diagram of $Y'(m)$ is drawn in {\sc Figure}~\ref{YPnm}.
This manifold is obtained by Dehn surgery from $S^3_1(Pr(-3,3,-3))$.
\begin{figure}[htbp]
\begin{center}
\includegraphics{YPnm.eps}
\caption{The Dehn surgery diagram of $Y'(m)$.}
\label{YPnm}
\end{center}
\end{figure}
\section{Proof of hyperbolicity.}
Finally, we prove Theorem~\ref{hyper}.\\
\begin{proof}
The output ``True" for the program HIKMOT means that the 3-manifold admits hyperbolic structure \cite{HIKMOT}.
To get True-output, we need apply Algorithm 2 in \cite{HIKMOT}.
The data after using the algorithm are updated in the site \cite{TM2}.
We can get ``True" for these four examples by running the data by HIKMOT.
\hfill$\Box$
\end{proof}
With the increasing interest in deploying relays in 4th
generation mobile networks, multi-user multi-hop systems
have drawn substantial research attention. In
spite of the rapid advances in the understanding of
single-hop networks, our knowledge on how to deal with inter-user
interference and design efficient transmission schemes in
multi-hop systems is relatively limited. For instance, we consider a
wireless communication system in which a $K$-antenna source
intends to communicate to $K$ single-antenna destinations. If the
source's transmission can directly reach the destinations, this
system is a well-studied $K$-user MIMO broadcast channel. It is
already known that if perfect channel state information at
transmitter (CSIT) is available, the optimal sum degrees of
freedom (DoF) of the system is $K$, while without CSIT the result
is only one. Clearly, CSIT serves as a very important factor that
influences system capacity. In practice, channel estimation is in
general performed by receivers and CSIT is typically obtained via
feedback signals sent from them. However, attaining perfect
\emph{instantaneous} CSIT in realistic systems may be a challenging task
when feedback delay is not negligible compared with channel
coherence time. To gain understanding in such scenarios,
Maddah-Ali and Tse \cite{MaddahAli2010} proposed a \emph{delayed
CSIT} concept to model the extreme case where channel coherence
time is smaller than feedback delay so that CSIT would be
completely outdated. They showed that by interference alignment (IA)
design even the outdated CSIT can be advantageous to offer DoF
gain achieving the optimal sum DoF of
a $K$-user MIMO broadcast channel $\frac{K}{1+\frac{1}{2}+\ldots+\frac{1}{K}}$
\cite{MaddahAli2010}. Hence, from a DoF perspective, communication
in this single-hop network is relatively well understood.
Nevertheless, if the source and the destinations are not
physically connected so that the communication has to be assisted
by intermediate relays, how many DoF are available is not clear,
especially when potentially \emph{multiple layers} of relays are
required and only delayed CSIT can be available. To study the DoF
of a multi-hop network, a straightforward \emph{cascade approach}
sees the network as a concatenation of individual single-hop
sub-networks. The network DoF is limited by the minimum DoF of all
sub-networks. In this paper, we consider a class of relay-aided
MIMO broadcast networks with a $K$-antenna source, $K$ single-antenna
destinations, and $N-2$ relay layers, each containing $K$
single-antenna full-duplex relays. Following the cascade approach,
the first hop can be treated as a $K$-user MIMO broadcast channel.
Each of the remaining hops can be seen as a $K \times K$
single-antenna X channel \cite{Cadambe2009}. Hence, the achievable
sum DoF of the considered network is $\frac{4}{3} -
\frac{2}{3(3K-1)}$, i.e. that of a $K \times K$ X channel
\cite{Abdoli2011}.
However, separating the network into individual sub-networks may
not always be a good strategy. For instance, provided perfect
instantaneous CSIT, references \cite{Jeon2009,Gou2010,Chao2011}
showed that in certain systems designing transmission by treating
all hops as a whole entity can perform strictly better than
applying the cascade approach. In this paper, we will show that
with delayed CSIT this is also the case for the considered
$N$-layer relay-aided MIMO broadcast networks. Specifically, we
focus on two delayed CSIT scenarios. In a \emph{global-range}
feedback scenario, where the CSI of all layers
can be decoded by the source, we propose a joint transmission design to
prove the optimal network sum DoF to be
$\frac{K}{1+\frac{1}{2}+\ldots+\frac{1}{K}}$. In addition, in a
\emph{one-hop-range} feedback scenario, where the CSI feedback
signals sent from each layer can only be received by its adjacent
upper layer, we show that when $K=2$ the optimal sum DoF $\frac{4}{3}$
is achievable, and when $K \geq 3$ a sum DoF of $\frac{3}{2}$ is
achievable. These results depend not on $N$ but only on $K$,
and are clearly better than those attained by the cascade
approach.
\section{System Model}
\label{section:System_Model}
As shown in Fig.~\ref{Fig:N_K_Relay_BC_Def}, we consider a
multi-hop MIMO broadcast network in which a source node with $K$
transmit antennas intends to communicate to $K$ single-antenna
destinations. There is no physical link between them so that $N-2$
($N \geq 3$) layers of intermediate relay nodes, each with $K$
full-duplex single-antenna relays, are deployed to aid the
communication. The network contains a total of $N$ layers of
nodes. No connection exists between non-adjacent layers. We term
this network an $(N,K)$ relay-aided MIMO broadcast network
throughout the paper. $n_{k}$ is used to represent the node $k$
($k \in\{1,2,\ldots,K\}$) at layer-$n$ ($n \in \{2,3,
\ldots,N\}$).
\begin{figure}[t!]
\centerline{\epsfig{file=NKModelLatexDrew.eps,width=80mm}}
\caption{$(N,K)$ relay-aided MIMO broadcast networks.}
\label{Fig:N_K_Relay_BC_Def}
\end{figure}
Assume the rate tuple $(R_{1},R_{2},...,R_{K})$ between the source
and destinations can be achieved. Let $\mathcal{C}$ denote
the capacity region and $P$ denote the power constraint of each
layer. The sum DoF of the $(N,K)$ relay-aided MIMO broadcast
network with delayed CSIT is defined as \cite{MaddahAli2010}
\begin{equation} \label{Eqn:Eqn_DoF_Def}
D^{d-CSI}(N,K) = \max_{(R_{1},...,R_{K}) \in
\mathcal{C}}\left\{\lim_{P\rightarrow\infty}\frac{\Sigma_{i=1}^{K}R_{i}(P)}{\textrm{log}{P}}\right\}.
\end{equation}
Let a $K \times K$ matrix $\mathbf{H}^{[n-1]}(t)$ denote the
channel matrix between the $(n-1)$th and the $n$th layers (i.e.
the $(n-1)$th hop) at time slot $t$. The $i$th row
and $k$th column element of $\mathbf{H}^{[n-1]}(t)$, $h_{ik}^{[n-1]}(t)$,
represents the channel gain
from node $(n\!-\!1)_{k}$ to node $n_{i}$. We consider block
fading channels. All fading coefficients remain constant within
one time slot, but change independently across different time
slots. Let $x_{k}^{[n-1]}(t)$ ($E[|x_k^{[n-1]}(t)|^{2}] \leq \frac{P}{K}$)
and $y_{k}^{[n]}(t)$ represent the transmit signal of node
$(n\!-\!1)_{k}$ and the received signal of node $n_{k}$ at time
slot $t$, respectively. The received signals of layer-$n$ are \be
\label{Eqn:InOut_Relation_Nlayer}
\mathbf{y}^{[n]}(t) = \mathbf{H}^{[n-1]}(t)\mathbf{x}^{[n-1]}(t) + \mathbf{z}^{[n]}(t), n=2,3,...,N,
\ee where $\mathbf{x}^{[n-1]}(t) = [x_{1}^{[n-1]}(t)~
x_{2}^{[n-1]}(t)~ \ldots~ x_{K}^{[n-1]}(t)]^{T}$ is the vector of transmit
signals of layer-$(n\!-\!1)$, $\mathbf{y}^{[n]}(t) =
[y_{1}^{[n]}(t)~ y_{2}^{[n]}(t)~ \ldots~ y_{K}^{[n]}(t)]^{T}$, and
$\mathbf{z}^{[n]}(t)$ is the unit-power complex additive white
Gaussian noise (AWGN) vector.
At each time slot $t$, each receiver is able to obtain the CSI of
its incoming channels by a proper training process. That is, $n_i$
knows $h_{ik}^{[n-1]}(t)$, $\forall k \in \{1,2, \ldots, K\}$.
Such knowledge can be directly delivered to nodes in later layers
along with data transmission. To transmit CSI to
previous layers, feedback signals are used from each receiver. We
assume that the feedback delay is larger than the channel coherence
time. Thus if any transmitter can receive and decode the feedback
signals, its obtained CSIT is in fact delayed by one time slot. In
this paper, we consider two scenarios of delayed CSIT feedback in
the $(N,K)$ relay-aided MIMO broadcast network:
\emph{1) Global-range delayed CSIT:} In this scenario, the source
node can receive and successfully decode the feedback signals
transmitted by all nodes. Hence it can obtain the global CSI
$\mathbf{H}^{[1]}(t), \mathbf{H}^{[2]}(t), ...,
\mathbf{H}^{[N-1]}(t)$ at time slot $t+1$.
\emph{2) One-hop-range delayed CSIT:} In this case, the feedback
signals can be delivered only between
adjacent layers. Then at time slot $t+1$, $\mathbf{H}^{[n-1]}(t)$
is known at only layer-$(n-1)$.
\section{Main Results and Discussions} \label{section:Main_Results}
We study the sum DoF of the considered $(N,K)$
relay-aided MIMO broadcast network, for both global-range and
one-hop-range delayed CSIT scenarios. Our main results are
summarized in the following two theorems.
\begin{theorem}\label{Theorem:DoF_Global_Range_Delayed_CSIT}
With \emph{global-range} delayed CSIT, the sum DoF of the $(N,K)$
relay-aided MIMO broadcast network is
\begin{equation}\label{eq:DoF_Global_Range_Delayed_CSIT}
D^{d-CSI}(N,K) = \frac{K}{1+\frac{1}{2}+\ldots+\frac{1}{K}}.
\end{equation}
\end{theorem}
\begin{proof}
Please see Section \ref{section:Proof_Theorem1} for the proof.
\end{proof}
\begin{theorem}\label{Theorem:DoF_One_Hop_Range_Delayed_CSIT}
With \emph{one-hop-range} delayed CSIT, the sum DoF of the $(N,K)$
relay-aided MIMO broadcast network is
\begin{eqnarray}\label{eq:DoF_One_Hop_Range_Delayed_CSIT}
&\!\!\!\!D^{d-CSI}(N,2)\!\!\!\!& = \frac{4}{3}, \nonumber \\
\frac{3}{2} \leq &\!\!\!\!D^{d-CSI}(N,K)\!\!\!\!& \leq
\frac{K}{1+\frac{1}{2}+\ldots+\frac{1}{K}},~ K\geq3.
\end{eqnarray}
\end{theorem}
\begin{proof}
Please see Section
\ref{section:Proof_Theorem2} for the proof.
\end{proof}
We can see that $N$ does not appear in
(\ref{eq:DoF_Global_Range_Delayed_CSIT}) or
(\ref{eq:DoF_One_Hop_Range_Delayed_CSIT}). Thus, the sum
DoF of the $(N,K)$ relay-aided broadcast network would not be
limited by the number of layers in the network, but may be related
only to $K$, the number of antennas/users.
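To make the gain concrete, the following minimal Python sketch (our
own illustration; the function names are not from any reference)
evaluates the optimal sum DoF
$\frac{K}{1+\frac{1}{2}+\ldots+\frac{1}{K}}$ under global-range delayed
CSIT against the cascade-approach benchmark
$\frac{4}{3} - \frac{2}{3(3K-1)}$ of a $K \times K$ X channel:
\begin{verbatim}
from fractions import Fraction

def mat_dof(K):
    # optimal sum DoF of a K-user MIMO BC with delayed CSIT:
    # K / (1 + 1/2 + ... + 1/K)
    return K / sum(Fraction(1, i) for i in range(1, K + 1))

def cascade_dof(K):
    # achievable sum DoF of a K x K X channel with delayed CSIT
    return Fraction(4, 3) - Fraction(2, 3 * (3 * K - 1))

for K in (2, 3, 4, 5):
    print(K, float(mat_dof(K)), float(cascade_dof(K)))
# K=2: 1.333 vs. 1.200; K=3: 1.636 vs. 1.250; the gap grows with K
\end{verbatim}
Already for $K=3$ the joint design promises roughly $30\%$ more DoF
than the cascade benchmark.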
With \emph{global-range} feedback, the sum DoF of the network
is the same as that in a single-hop $K$-user MIMO broadcast
channel. The result reveals the importance of providing the
CSI of the whole network to the source. In practice, this
can be achieved by e.g., each node broadcasting its feedback
signal with a sufficiently high power.
However, this may not be possible in some systems, and
\emph{one-hop-range} feedback may be more feasible. In this
case, the CSI flow is limited within only one
hop, which in turn affects the interference management in the
network. When $K\!=\!2$, the sum DoF is shown
to be $\frac{4}{3}$, by a joint transmission design among all
hops. Following a similar strategy, the sum DoF can be
lower bounded by $\frac{3}{2}$ for $K \geq 3$. Although currently
it is difficult to quantify the distance between this lower
bound and the actual achievable sum DoF, we can see that
when $K$ is small, e.g., $K=3,4$, the lower bound is nearly tight,
since it is only slightly smaller than the sum DoF upper bound.
Recall that applying the cascade approach the achievable sum DoF
is limited by that of a $K \times K$ X channel, i.e., $\frac{4}{3}
- \frac{2}{3(3K-1)}$ \cite{Abdoli2011}. By a joint transmission
design among all hops, our scheme strictly surpasses the cascade
approach. The task of proving the optimality of our results or
finding even better schemes to attain the actual sum DoF of an
$(N,K)$ relay-aided MIMO broadcast network will be left for future
investigation.
\section{Proof of Theorem \ref{Theorem:DoF_Global_Range_Delayed_CSIT}}
\label{section:Proof_Theorem1}
\subsubsection{Outer Bound}
We assume that all the relays in each layer can fully
cooperate and jointly process their signals. Since this assumption
would not reduce network performance, the sum DoF of this new
system, which is clearly limited by that of the last hop (i.e. a
single-hop $K$-user MIMO broadcast channel), would serve as an
outer bound of the sum DoF of the considered $(N,K)$ relay-aided
broadcast network. According to \cite{MaddahAli2010}, the outer
bound is $\frac{K}{1+\frac{1}{2}+\ldots+\frac{1}{K}}$.
\subsubsection{Achievability} Consider full-duplex
amplify-and-forward relays. At time slot $t$, node $n_{i}$ chooses
$g^{[n]}_{i}(t)$ as its amplification coefficient such that
$|g^{[n]}_{i}(t)|^{2} \left( \sum_{k=1}^{K} |h^{[n-1]}_{ik}(t)|^{2} +
\frac{1}{K} \right) \leq 1$. We define $\mathbf{G}^{[n]}(t) =
\textrm{diag}\{g^{[n]}_{1}(t),g^{[n]}_{2}(t), \ldots, g^{[n]}_{K}(t)\}$
and focus on the high-SNR regime (where the DoF metric is meaningful). Hence
we omit the noise term in (\ref{Eqn:InOut_Relation_Nlayer}).
At time slot $t$, the received signals at
the layer-$N$ (i.e. the destinations) can be denoted by
\begin{equation}\label{eq:Input_Output_Relation_Global_Range_CSIT}
\mathbf{y}^{[N]}(t) = \left( \prod_{n=3}^{N}
\mathbf{H}^{[n-1]}(t) \mathbf{G}^{[n-1]}(t) \right) \mathbf{H}^{[1]}(t)
\mathbf{x}^{[1]}(t).
\end{equation}
Let $\tilde{\mathbf{H}}(t) = \prod_{n=3}^{N} \left(
\mathbf{H}^{[n-1]}(t) \mathbf{G}^{[n-1]}(t) \right)
\mathbf{H}^{[1]}(t)$ and substitute it into
(\ref{eq:Input_Output_Relation_Global_Range_CSIT}). We obtain an
equivalent single-hop $K$-user MIMO broadcast channel with
an equivalent channel matrix $\tilde{\mathbf{H}}(t)$. Because the
source has the delayed CSI of the whole network,
$\tilde{\mathbf{H}}(t-1)$ is known at time slot $t$. Thus the
transmission scheme proposed in \cite{MaddahAli2010} for achieving
the sum DoF of a single-hop $K$-user MIMO broadcast channel with
delayed CSIT can also be employed in the equivalent system.
The sum DoF outer bound
$\frac{K}{1+\frac{1}{2}+\ldots+\frac{1}{K}}$ is achievable.
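The cascade of AF gains is straightforward to reproduce numerically.
The following Python sketch (sizes, seed, and variable names are our
own assumptions) forms the equivalent channel matrix
$\tilde{\mathbf{H}}(t)$ and confirms that it is generically full rank,
so that the scheme of \cite{MaddahAli2010} applies to the equivalent
broadcast channel:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

K, N = 3, 5   # illustrative sizes, not fixed by the analysis
# H[j] plays the role of H^{[j+1]}(t), one K x K matrix per hop
H = [rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))
     for _ in range(N - 1)]

H_eq = H[0]                                  # H^{[1]}(t)
for j in range(1, N - 1):
    # gains of the layer-(j+1) relays, chosen from their incoming
    # channel so that |g_i|^2 (sum_k |h_ik|^2 + 1/K) = 1
    g = 1.0 / np.sqrt(np.sum(np.abs(H[j - 1]) ** 2, axis=1) + 1.0 / K)
    H_eq = H[j] @ np.diag(g) @ H_eq          # append hop j+1

# H_eq is tilde{H}(t); full rank for generic channel realizations
assert np.linalg.matrix_rank(H_eq) == K
\end{verbatim}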
\section{Proof of Theorem \ref{Theorem:DoF_One_Hop_Range_Delayed_CSIT}}
\label{section:Proof_Theorem2}
Clearly, the outer bound above still holds for
\emph{one-hop-range} feedback. When $K=2$ it can be shown
that the outer bound $\frac{4}{3}$ is tight. However,
it may not be true for $K \geq 3$. In this proof we will present a new
multi-round transmission scheme that treats all hops as a whole
entity aiming for aligning interference. The achievable sum DoF is
higher than that obtained by the cascade approach and thus will serve
as a lower bound to the sum DoF of the considered network.
Due to space limitations, we will mainly focus on the example of a
$(3,3)$ relay-aided MIMO broadcast network. Let $l \geq 1$ be an
integer. We will show that $9l$ independent messages can be delivered
from the $3$-antenna source to the $3$ single-antenna destinations
through a layer of $3$ single-antenna full-duplex relays, using a
total of $6l+3$ time slots. Then when $l \rightarrow \infty$, the
sum DoF $\frac{3}{2}$ can be asymptotically achieved. The
corresponding approach for general networks will be given later.
\begin{table*}
\addtolength{\tabcolsep}{-2pt} \centering \caption{Transmission scheme for the $(3,3)$ relay-aided MIMO broadcast network (first $12$ time slots).}
\begin{tabular}{c | c c c | c c c | c c c | c c c}
\hline
$t$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$
& $7$ & $8$ & $9$ & $10$ & $11$ & $12$ \\ \hline
$x_{1}^{[1]}(t)$ & $\mu_{1}(1)$ & $\mu_{2}(1)$ & $\mu_{3}(1)$ & $L_{2}^{[2]}(1)$ & $L_{3}^{[2]}(1)$ &
& $\mu_{1}(2)$ & $\mu_{2}(2)$ & $\mu_{3}(2)$ & $L_{2}^{[2]}(2)$ & $L_{3}^{[2]}(2)$ & \\
$x_{2}^{[1]}(t)$ & $\nu_{1}(1)$ & $\nu_{2}(1)$ & $\nu_{3}(1)$ & $L_{4}^{[2]}(1)$ & & $L_{6}^{[2]}(1)$
& $\nu_{1}(2)$ & $\nu_{2}(2)$ & $\nu_{3}(2)$ & $L_{4}^{[2]}(2)$ & & $L_{6}^{[2]}(2)$ \\
$x_{3}^{[1]}(t)$ & $\omega_{1}(1)$ & $\omega_{2}(1)$ & $\omega_{3}(1)$ & & $L_{7}^{[2]}(1)$ & $L_{8}^{[2]}(1)$
& $\omega_{1}(2)$ & $\omega_{2}(2)$ & $\omega_{3}(2)$ & & $L_{7}^{[2]}(2)$ & $L_{8}^{[2]}(2)$ \\ \hline
$y_{1}^{[2]}(t)$ & $L_{1}^{[2]}(1)$ & $L_{4}^{[2]}(1)$ & $L_{7}^{[2]}(1)$ & $\gamma_{12}^{[2]}(1)$ & $\gamma_{13}^{[2]}(1)$ &
& $L_{1}^{[2]}(2)$ & $L_{4}^{[2]}(2)$ & $L_{7}^{[2]}(2)$ & $\gamma_{12}^{[2]}(2)$ & $\gamma_{13}^{[2]}(2)$ & \\
$y_{2}^{[2]}(t)$ & $L_{2}^{[2]}(1)$ & $L_{5}^{[2]}(1)$ & $L_{8}^{[2]}(1)$ & $\gamma_{12}^{[2]}(1)$ & & $\gamma_{23}^{[2]}(1)$
& $L_{2}^{[2]}(2)$ & $L_{5}^{[2]}(2)$ & $L_{8}^{[2]}(2)$ & $\gamma_{12}^{[2]}(2)$ & & $\gamma_{23}^{[2]}(2)$ \\
$y_{3}^{[2]}(t)$ & $L_{3}^{[2]}(1)$ & $L_{6}^{[2]}(1)$ & $L_{9}^{[2]}(1)$ & & $\gamma_{13}^{[2]}(1)$ & $\gamma_{23}^{[2]}(1)$
& $L_{3}^{[2]}(2)$ & $L_{6}^{[2]}(2)$ & $L_{9}^{[2]}(2)$ & & $\gamma_{13}^{[2]}(2)$ & $\gamma_{23}^{[2]}(2)$ \\ \hline
$x_{1}^{[2]}(t)$ & & & & $L_{1}^{[2]}(1)$ & $L_{4}^{[2]}(1)$ & $L_{7}^{[2]}(1)$
& $L_{2}^{[3]}(1)$ & $L_{3}^{[3]}(1)$ & & $L_{1}^{[2]}(2)$ & $L_{4}^{[2]}(2)$ & $L_{7}^{[2]}(2)$ \\
$x_{2}^{[2]}(t)$ & & & & $L_{2}^{[2]}(1)$ & $L_{5}^{[2]}(1)$ & $L_{8}^{[2]}(1)$
& $L_{4}^{[3]}(1)$ & & $L_{6}^{[3]}(1)$ & $L_{2}^{[2]}(2)$ & $L_{5}^{[2]}(2)$ & $L_{8}^{[2]}(2)$ \\
$x_{3}^{[2]}(t)$ & & & & $L_{3}^{[2]}(1)$ & $L_{6}^{[2]}(1)$ & $L_{9}^{[2]}(1)$
& & $L_{7}^{[3]}(1)$ & $L_{8}^{[3]}(1)$ & $L_{3}^{[2]}(2)$ & $L_{6}^{[2]}(2)$ & $L_{9}^{[2]}(2)$ \\ \hline
$y_{1}^{[3]}(t)$ & & & & $L_{1}^{[3]}(1)$ & $L_{4}^{[3]}(1)$ & $L_{7}^{[3]}(1)$
& $\gamma_{12}^{[3]}(1)$ & $\gamma_{13}^{[3]}(1)$ & & $L_{1}^{[3]}(2)$ & $L_{4}^{[3]}(2)$ & $L_{7}^{[3]}(2)$ \\
$y_{2}^{[3]}(t)$ & & & & $L_{2}^{[3]}(1)$ & $L_{5}^{[3]}(1)$ & $L_{8}^{[3]}(1)$
& $\gamma_{12}^{[3]}(1)$ & & $\gamma_{23}^{[3]}(1)$ & $L_{2}^{[3]}(2)$ & $L_{5}^{[3]}(2)$ & $L_{8}^{[3]}(2)$ \\
$y_{3}^{[3]}(t)$ & & & & $L_{3}^{[3]}(1)$ & $L_{6}^{[3]}(1)$ & $L_{9}^{[3]}(1)$
& & $\gamma_{13}^{[3]}(1)$ & $\gamma_{23}^{[3]}(1)$ & $L_{3}^{[3]}(2)$ & $L_{6}^{[3]}(2)$ & $L_{9}^{[3]}(2)$ \\ \hline
\end{tabular}
\label{Table:BlockCoding}
\end{table*}
Recall that we use $y_k^{[n]}(t)$ and $x_k^{[n]}(t)$ respectively to
denote the received and transmitted signals of the
$k$th node in layer-$n$ (or the $k$th antenna if $n=1$) at time
slot $t$. The transmission process in the $(3,3)$ relay-aided MIMO
broadcast network, for the first $12$ time slots, is shown in
Table \ref{Table:BlockCoding}. Specifically, $2$ rounds of
messages, each containing $9$ independent messages, are delivered
to the destinations. Let $\mu_k(l)$, $\nu_k(l)$, and $\omega_k(l)$
($k\in \{1,2,3\}$) denote the source messages intended for
destination $3_k$ (the index $l$ indicates that the notation applies
to the $l$th transmission round). In what follows, we will
explain the first round of transmission. It consists of two
\emph{phases}.
\textbf{Phase One:} The first phase takes the first $3$ time
slots. At time slot $t$ ($t \in \{1,2,3\}$), $\mu_{t}(1),
\nu_{t}(1), \omega_{t}(1)$ are transmitted by the three source
antennas respectively. Hence each relay (i.e. each node of
layer-$2$) receives a linear combination of three messages at each
time slot. Again, we ignore the noise in
(\ref{Eqn:InOut_Relation_Nlayer}). The received signal at $2_k$
is expressed as ($t \in \{1,2,3\}$)
\begin{eqnarray}
y_k^{[2]}(t) = h_{k1}^{[1]}(t)\mu_{t}(1) + h_{k2}^{[1]}(t)\nu_{t}(1)
+ h_{k3}^{[1]}(t)\omega_{t}(1).
\end{eqnarray}
Let $L_{3(t-1)+k}^{[2]}(1)=y_k^{[2]}(t)$ denote the linear
equation known by $2_k$ at time slot $t$.
After the $3$rd time slot, since $\mathbf{H}^{[1]}(1)$,
$\mathbf{H}^{[1]}(2)$, and $\mathbf{H}^{[1]}(3)$ are known at the
source, all the equations $L_{i}^{[2]}(1)$, $\forall
i=1,2,\cdots,9$, can be recovered by the source.
\textbf{Phase Two:} This phase takes the next $6$ time slots after
phase one. At each time slot $t$ ($t \in \{4,5,6\}$) only two
source antennas are activated to retransmit the equations
$L_{i}^{[2]}(1)$. According to $x_{k}^{[1]}(t)$
shown in Table \ref{Table:BlockCoding}, we have
\begin{align}
\label{Eqn:Eqn_OrdertwoSymbol}
y_{k}^{[2]}(4) = h_{k1}^{[1]}(4) L_{2}^{[2]}(1) + h_{k2}^{[1]}(4) L_{4}^{[2]}(1); \\
y_{k}^{[2]}(5) = h_{k1}^{[1]}(5) L_{3}^{[2]}(1) + h_{k3}^{[1]}(5) L_{7}^{[2]}(1); \\
y_{k}^{[2]}(6) = h_{k2}^{[1]}(6) L_{6}^{[2]}(1) + h_{k3}^{[1]}(6) L_{8}^{[2]}(1).
\end{align}
Since node $2_1$ obtains $L_{4}^{[2]}(1)$ in phase one, at time
slot $4$ it can recover $L_{2}^{[2]}(1)$. Similarly, both
$L_{2}^{[2]}(1)$ and $L_{4}^{[2]}(1)$ are also known at node
$2_2$. Use $\gamma_{ij}^{[n]}(l)=(a,b)$ to represent that
equations $a$ and $b$ are recovered by both nodes $n_i$ and $n_j$.
As shown in Table \ref{Table:BlockCoding}, we can replace both
$y_{1}^{[2]}(4)$ and $y_{2}^{[2]}(4)$ with $\gamma_{12}^{[2]}(1) =
(L_{2}^{[2]}(1),L_{4}^{[2]}(1))$. Clearly, we also have
$\gamma_{13}^{[2]}(1) = (L_{3}^{[2]}(1), L_{7}^{[2]}(1))$ and
$\gamma_{23}^{[2]}(1) = (L_{6}^{[2]}(1),L_{8}^{[2]}(1))$.
Meanwhile, the relay nodes also send the equations they received
in phase one to the destinations, as shown in Table
\ref{Table:BlockCoding}. The received equations at the
destinations $3_{k}$ are:
\begin{eqnarray}
\label{Eqn:Eqn_NlayerRepre2_1}
\!\!\!\!\!y_{k}^{[3]}(4) \!\!\!\!\!&=&\!\!\!\!\! h_{k1}^{[2]}(4)L_{1}^{[2]}(1) \!+\!
h_{k2}^{[2]}(4)L_{2}^{[2]}(1) \!+\! h_{k3}^{[2]}(4)L_{3}^{[2]}(1), \\
\label{Eqn:Eqn_NlayerRepre2_2}
\!\!\!\!\!y_{k}^{[3]}(5) \!\!\!\!\!&=&\!\!\!\!\! h_{k1}^{[2]}(5)L_{4}^{[2]}(1) \!+\!
h_{k2}^{[2]}(5)L_{5}^{[2]}(1) \!+\! h_{k3}^{[2]}(5)L_{6}^{[2]}(1), \\
\label{Eqn:Eqn_NlayerRepre2_3}
\!\!\!\!\!y_{k}^{[3]}(6) \!\!\!\!\!&=&\!\!\!\!\! h_{k1}^{[2]}(6)L_{7}^{[2]}(1) \!+\!
h_{k2}^{[2]}(6)L_{8}^{[2]}(1) \!+\! h_{k3}^{[2]}(6)L_{9}^{[2]}(1).
\end{eqnarray}
Let $L_{3(t-4)+k}^{[3]}(1)=y_k^{[3]}(t)$. Clearly, if the
destination $3_1$ knows the three equations $L_{1}^{[3]}(1)$,
$L_{2}^{[3]}(1)$, $L_{3}^{[3]}(1)$, it can recover its desired
messages $\mu_1(1)$, $\nu_1(1)$, $\omega_1(1)$. After time slot
$6$, the node $3_1$ has $L_{1}^{[3]}(1)$. Thus if $L_{2}^{[3]}(1)$
and $L_{3}^{[3]}(1)$ can be provided to node $3_1$, the problem is
solved. Similarly, having $L_{5}^{[3]}(1)$, the destination
$3_{2}$ needs $L_{4}^{[3]}(1)$ and $L_{6}^{[3]}(1)$ to recover
$\mu_{2}(1), \nu_{2}(1), \omega_{2}(1)$. $L_{7}^{[3]}(1)$ and
$L_{8}^{[3]}(1)$ are desired by the destination $3_{3}$, who
already has $L_{9}^{[3]}(1)$, to recover $\mu_{3}(1)$,
$\nu_{3}(1)$, $\omega_{3}(1)$. Therefore, we aim to deliver these
six equations from the relays to the destinations in the next
three time slots.
According to the above description, we can see that after time
slot $6$, node $2_{1}$ knows the equations $L_{1}^{[2]}(1)$,
$L_{2}^{[2]}(1)$ and $L_{3}^{[2]}(1)$. Node $2_2$ knows the
equations $L_{4}^{[2]}(1)$, $L_{5}^{[2]}(1)$ and $L_{6}^{[2]}(1)$.
Node $2_3$ knows the equations $L_{7}^{[2]}(1)$,
$L_{8}^{[2]}(1)$ and $L_{9}^{[2]}(1)$. Since the channel matrices
$\mathbf{H}^{[2]}(4)$, $\mathbf{H}^{[2]}(5)$, and
$\mathbf{H}^{[2]}(6)$ are available at all nodes in layer-$2$,
the node $2_1$ can formulate the equations
$L_{2}^{[3]}(1)$ and $L_{3}^{[3]}(1)$ using
(\ref{Eqn:Eqn_NlayerRepre2_1}). Similarly, the node $2_2$ can
formulate the equations $L_{4}^{[3]}(1)$ and $L_{6}^{[3]}(1)$
according to (\ref{Eqn:Eqn_NlayerRepre2_2}). The node $2_{3}$
can formulate the equations $L_{7}^{[3]}(1)$ and $L_{8}^{[3]}(1)$
using (\ref{Eqn:Eqn_NlayerRepre2_3}).
At time slot $7$, let $2_1$ transmit $L_{2}^{[3]}(1)$ and $2_2$
transmit $L_{4}^{[3]}(1)$, as shown in Table
\ref{Table:BlockCoding}. Node $3_1$, which already knows
$L_{4}^{[3]}(1)$, can recover $L_{2}^{[3]}(1)$ by eliminating
$L_{4}^{[3]}(1)$ from its received signal. The node $3_2$ can also
attain both $L_{2}^{[3]}(1)$ and $L_{4}^{[3]}(1)$, following the
similar approach. Thus the received signals $y_1^{[3]}(7)$ and
$y_2^{[3]}(7)$ in Table \ref{Table:BlockCoding} can be replaced
with a simpler expression $\gamma_{12}^{[3]}(1) =
(L_{2}^{[3]}(1),L_{4}^{[3]}(1))$. Then we can also have
$\gamma_{13}^{[3]}(1) = (L_{3}^{[3]}(1), L_{7}^{[3]}(1))$ and
$\gamma_{23}^{[3]}(1) = (L_{6}^{[3]}(1),L_{8}^{[3]}(1))$, at the
$8$th and $9$th time slots, respectively.
Consequently, equations $L_{1}^{[3]}(1)$, $L_{2}^{[3]}(1)$,
and $L_{3}^{[3]}(1)$ are known at the destination $3_1$. The
desired messages can be recovered now. The same result holds also
for the destinations $3_2$ and $3_3$. $9$ independent
messages are delivered successfully from the source to the
destinations in one transmission round.
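The first-round recovery can be verified numerically. Writing
$[L_1^{[3]}(1)~L_2^{[3]}(1)~L_3^{[3]}(1)]^T =
\mathbf{H}^{[2]}(4)\mathbf{H}^{[1]}(1)\,[\mu_1(1)~\nu_1(1)~\omega_1(1)]^T$,
destination $3_1$ only has to invert this composite $3 \times 3$
matrix, which is full rank for generic channels. A minimal Python
sketch (our own illustration):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)
cgauss = lambda *s: rng.normal(size=s) + 1j * rng.normal(size=s)

H1_t1 = cgauss(3, 3)                 # H^{[1]}(1)
H2_t4 = cgauss(3, 3)                 # H^{[2]}(4)
msgs = cgauss(3)                     # (mu_1(1), nu_1(1), omega_1(1))

L2 = H1_t1 @ msgs                    # L_1..L_3^{[2]}(1), phase one
L3 = H2_t4 @ L2                      # L_1..L_3^{[3]}(1), phase two

# destination 3_1 inverts the composite channel once it holds all
# three equations (slot 4 directly, slots 7 and 8 via the relays)
recovered = np.linalg.solve(H2_t4 @ H1_t1, L3)
assert np.allclose(recovered, msgs)
\end{verbatim}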
The same process can continue until $l$ rounds of transmissions
are finished using a total of $6l+3$ time slots (the second round
transmission is shown in Table \ref{Table:BlockCoding} partially).
When $l \rightarrow \infty$, this scheme achieves a sum
DoF $\frac{3}{2}$. The lower bound for $D^{d-CSI}(3,3)$ is proven.
To generalize this scheme to $N$ ($N\!>\!3$) layers, we first denote
the messages from the source as:
$L^{[1]}_{3(k-1)+1}(l)=\mu_{k}(l) $, $L^{[1]}_{3(k-1)+2}(l)=\nu_{k}(l)$
and $L^{[1]}_{3(k-1)+3}(l)=\omega_{k}(l)$. The
$l$th-round transmission at layer-$n$ ($n \in \{1,2,\ldots,N-1\}$)
can be denoted by the following formula. It takes the time slots
$t=6(l\!-\!1)\!+\!3(n\!-\!1)\!+\!\hat{t}$ ($\hat{t}=1,2,\ldots,6$):
\begin{equation} \label{Eqn:GeneralizedFrom}
\mathbf{x}^{[n]}(t) =
\begin{cases}
\{L^{[n]}_{3(\hat{t}-1)+k}(l)\}_{k=1}^{3} & \quad \hat{t} = 1,2,3; \\
[L^{[n+1]}_{2}(l), L^{[n+1]}_{4}(l), 0]^{T} & \quad \hat{t} = 4; \\
[L^{[n+1]}_{3}(l), 0, L^{[n+1]}_{7}(l)]^{T} & \quad \hat{t} = 5; \\
[0, L^{[n+1]}_{6}(l), L^{[n+1]}_{8}(l)]^{T} & \quad \hat{t} = 6. \\
\end{cases}
\end{equation}
Here $\{L^{[n]}_{3(\hat{t}-1)+k}(l)\}_{k=1}^{3}$ represents the column
vector composed of $L^{[n]}_{3(\hat{t}-1)+k}(l)$ ($k \in \{1,2,3\}$).
We denote the received equation at node $(n\!+\!1)_{k}$ when
$\hat{t} \in \{1,2,3\}$ as $L_{3(\hat{t}-1)+k}^{[n+1]}(l)=
\sum_{i=1}^{3} h_{ki}^{[n]}(t) L_{3(\hat{t}-1)+i}^{[n]}(l)$. By
induction, we assume $n_{k}$ can recover $L_{3(k-1)+i}^{[n+1]}(l)$
after the first three time slots ($i \in \{1,2,3\}$). Then the
transmission can be designed as shown in (\ref{Eqn:GeneralizedFrom}) when
$\hat{t} \in \{4,5,6\}$. Therefore, $(n\!+\!1)_{1}$ and $(n\!+\!1)_{2}$
can recover $\gamma_{12}^{[n+1]}(l) \!=\! (L_{2}^{[n+1]}(l),
L_{4}^{[n+1]}(l))$; $(n\!+\!1)_1$ and $(n\!+\!1)_3$ can recover
$\gamma_{13}^{[n+1]}(l)\!=\!(L_{3}^{[n+1]}(l),L_{7}^{[n+1]}(l))$;
and $(n\!+\!1)_2$ and $(n\!+\!1)_3$ can recover $\gamma_{23}^{[n+1]}(l) =
(L_{6}^{[n+1]}(l),L_{8}^{[n+1]}(l))$ after time slot $6l+3(n-1)$.
Since the destinations refer to the $N$th layer,
the $l$-round transmission takes $6l+3(N-2)$ time slots
to deliver $9l$ independent messages. The achievable
sum DoF is $\frac{9l}{6l+3(N-2)}$, which tends to $\frac{3}{2}$ as
$l \rightarrow \infty$. The result still holds for $K>3$.
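As a quick sanity check on the slot count, the rate
$\frac{9l}{6l+3(N-2)}$ can be evaluated for finite $l$; the following
Python sketch (with $N=5$ as an arbitrary example of our choosing)
shows the convergence to $\frac{3}{2}$:
\begin{verbatim}
def achievable_dof(l, N):
    # 9l messages delivered in 6l + 3(N-2) time slots
    return 9 * l / (6 * l + 3 * (N - 2))

for l in (1, 10, 100, 1000):
    print(l, achievable_dof(l, N=5))
# 0.600, 1.304, 1.478, 1.498, ... -> 3/2 for any fixed N
\end{verbatim}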
Now we consider $K=2$. In this case, $4$ messages are delivered
using $3$ time slots. Let $L^{[1]}_{1}=\mu_{1}$ and
$L^{[1]}_{2}=\nu_{1}$ denote the messages for the first
destination, and $L^{[1]}_{3}=\mu_{2}$ and $L^{[1]}_{4}=\nu_{2}$
denote those for the second destination. During time slot $1$, the
nodes (or antennas) $n_1$ and $n_2$ ($n \in \{1,2,\cdots,N\}$)
send $L^{[n]}_{1}$ and $L^{[n]}_{2}$, respectively. The received
signal at node $(n+1)_k$ ($k \in \{1,2\}$) is $L^{[n+1]}_{k} =
h_{k1}^{[n]}(1)L_1^{[n]}+ h_{k2}^{[n]}(1)L_{2}^{[n]}$. During time
slot $2$, $n_1$ sends $L^{[n]}_{3}$ and $n_2$ sends $L^{[n]}_{4}$.
$(n+1)_k$ receives $L^{[n+1]}_{k+2} = h_{k1}^{[n]}(2)L_3^{[n]}+
h_{k2}^{[n]}(2)L_{4}^{[n]}$. Assume $n_1$ can recover
$L^{[n+1]}_{2}$, and $n_2$ can recover $L^{[n+1]}_{3}$. During
time slot $3$, $n_1$ and $n_2$ transmit $L^{[n+1]}_{2}$ and
$L^{[n+1]}_{3}$, respectively. Since $(n+1)_1$ knows
$L^{[n+1]}_{3}$, it can recover $L^{[n+1]}_{2}$. Similarly,
$(n+1)_2$ can recover $L^{[n+1]}_{3}$. As a result, the
destination $N_1$ can thus obtain both $\mu_{1}$ and $\nu_{1}$
because it can have $L^{[N]}_{1}$ and $L^{[N]}_{2}$. The
destination $N_2$ can obtain $\mu_{2}$ and $\nu_{2}$ from
$L^{[N]}_{3}$ and $L^{[N]}_{4}$. The achieved sum DoF is
$\frac{4}{3}$ to meet the upper bound.
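For a single hop, the three-slot exchange can again be verified
numerically (a short Python sketch of ours; the recovery of
$L_2^{[n+1]}$ and $L_3^{[n+1]}$ at the layer-$n$ transmitters is taken
as given, as in the induction above):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)
cg = lambda *s: rng.normal(size=s) + 1j * rng.normal(size=s)

L = cg(4)                        # L_1..L_4^{[n]} held at layer n
H1, H2, H3 = cg(2, 2), cg(2, 2), cg(2, 2)   # H^{[n]}(1), (2), (3)

r1 = H1 @ L[:2]                  # slot 1: (n+1)_k receives L_k^{[n+1]}
r2 = H2 @ L[2:]                  # slot 2: (n+1)_k receives L_{k+2}^{[n+1]}
# slot 3: n_1 and n_2 resend L_2^{[n+1]} and L_3^{[n+1]}
r3 = H3 @ np.array([r1[1], r2[0]])

# (n+1)_1 already knows L_3^{[n+1]} = r2[0]; it cancels it and
# recovers L_2^{[n+1]}
L2_hat = (r3[0] - H3[0, 1] * r2[0]) / H3[0, 0]
assert np.isclose(L2_hat, r1[1])
\end{verbatim}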
\section{Conclusions}
We investigate the sum DoF of a class of multi-hop MIMO
broadcast networks with delayed CSIT feedback. Our results
show that a transmission design treating the
multi-hop network as a whole entity can achieve a better sum DoF
than the cascade approach, which separates each hop individually.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
In recent years, a wide range of methodological developments on FPGAs
{\color{black} aim at combining} the performance of an ASIC implementation
with the flexibility of software realizations. One important development
is partial runtime reconfiguration, which allows overcoming significant area overhead,
monetary cost, higher power consumption, or speed penalties (see e.g.~\cite{rose_FPGAgap}).
As described in~\cite{fks-ddrd-12}, the idea is to load a sequence of different
modules by partial runtime reconfiguration.
In a general setting, we are faced with a dynamically changing set of modules,
which may be modified by deletions and insertions. Typically, there is no full
a-priori knowledge of the arrival or departure of modules, i.e., we have to deal
with an online situation. The challenge is to ensure that
arriving modules can be allocated. Because previously deleted modules may
have been located in different areas of the layout, free space may be fragmented,
making it necessary to {\em relocate} existing modules in order to provide
sufficient area. In principle, this can be achieved by completely {\em defragmenting}
the layout when necessary; however, the lack of control over
the module sequence makes it hard to avoid frequent full defragmentation,
resulting in expensive operations for insertions if a na\"ive approach is used.
Dynamic insertion and deletion are classic problems of Computer Science.
Many data structures (from simple
to sophisticated) have been studied
that result in low-cost operations and efficient maintenance of
a changing set of objects. These data structures are mostly
one-dimensional (or even dimensionless) by nature, making it hard to
fully exploit the 2D nature of an FPGA. In this
paper, we propose a 2D data structure based on a quadtree
for maintaining the module layout under partial reconfiguration and reallocation.
The key idea is to control the overall structure of the layout,
such that future insertions can be performed with a limited
amount of relocation, even when free space is limited.
Our main contribution is to introduce a 2D
approach that is able to achieve provable constant-factor efficiency
for different types of relocation cost. To this end,
we give detailed mathematical proofs for a slightly simplified setting, along
with sketches of extensions to the more general cases. {\color{black} We also provide
basic simulation runs for various scenarios, indicating the quality of
our approach.}
The rest of this paper is organized as follows. The following Section~2
provides a survey of related work. For better accessibility of the key
ideas and due to limited space, our technical description
in Section~3, Section~4, and Section~5 focuses on the case of discretized quadratic modules
on a quadratic chip area. We discuss in Section~6 how general
rectangles can be dealt with, with corresponding {\color{black} simulations}
in Section~7. {\color{black}Along the same lines, we do not explicitly elaborate on
the dynamic maintenance of the communication infrastructure; see Figure~\ref{fig:config} for the basic
idea. Further details are left to future work, with groundwork laid in~\cite{meyer}.}
\section{Related Work}
The problem considered in our paper has a resemblance to one-dimensional
{\em dynamic storage allocation}, in which a sequence of storage requests of varying
size has to be assigned to blocks of memory cells, such
that the length of each block corresponds to the size of the request.
In its classic form (without virtual memory), this block needs to be contiguous;
in our setting, contiguity of two-dimensional allocation is a must, as reconfigurable devices
do not provide techniques such as paging and virtual memory.
Once the allocation has been performed,
it is static in space: after a block has been occupied,
it will remain fixed until the corresponding data is no longer needed
and the block is released. As a consequence, a sequence of
allocations and releases can result in fragmentation of
the memory array, making it hard or even impossible to store
new data.
On the practical side,
classic buddy systems partition the one-dimensional storage into a number of standard block
sizes and allocate a block in a smallest free standard interval
to contain it. Differing only in the
choice of the standard size, various systems have been
proposed \cite{Bromley80,Hinds75,Hirs73,Know65,Shen74}.
Newer approaches based on cache-oblivious structures
in memory hierarchies
include Bender et al.~\cite{Bender05,Bender05a}.
Theoretical work on one-dimensional contiguous allocation
includes
Bender and Hu~\cite{bender_adaptive_2007}, who consider
maintaining $n$ elements in sorted
order, with not more than $O(n)$ space.
Bender et
al.~\cite{bender_maintaining_2009} aim at reducing
fragmentation when maintaining $n$ objects that require
contiguous space. Fekete et
al.~\cite{fks-ddrd-12} study
complexity results and consider practical applications on FPGAs.
Reallocations have also been studied in the context of heap
allocation. Bendersky and Petrank~\cite{bendersky_space_2012} observe
that full compaction, i.e., creating a contiguous block of free space
on the heap,
is prohibitively expensive and consider partial compaction.
Cohen and Petrank~\cite{cohen_limitations_2013} extend these
to practical applications.
Bender et al.~\cite{bender_cost-oblivious_2014}
describe a strategy that achieves good amortized movement costs
for reallocations, where allocated blocks {\color{blue}can} be moved at a cost to a new position that is disjoint
from the old position.
Another paper by the same authors~\cite{bender_reallocation_2014} deals
with reallocations in the context of scheduling.
Examples for packing problems in applied computer science come from
allocating FPGAs. Fekete et al.~\cite{fekete_efficient_2014} examined
a problem dealing with the allocation of different types of resources
on an FPGA that had to satisfy additional properties. For example, to
achieve specified clock frequencies, diameter restrictions had to be
obeyed by the packing. The authors were able to solve the problem
using integer linear programming techniques.
Over the years, a large variety of methods and results for
allocating storage have been proposed. The classical sequential fit
algorithms, First Fit, Best Fit, Next Fit and Worst Fit can be found
in Knuth~\cite{Knuth97} and Wilson et al.~\cite{Wils95}.
These are closely related to problems of offline and online packing of
two-dimensional objects. One of the earliest considered packing variants is the problem of finding
a dense packing of a known set of squares for a rectangular container; see
Moser~\cite{m66}, Moon and Moser~\cite{mm67} and
Kleitman and Krieger~\cite{kk70}, as well as more recent work by
Novotn{\'y}~\cite{n95,n96} and Hougardy~\cite{h11}.
There is also a considerable number of other related work on offline packing squares, cubes, or hypercubes;
see~\cite{ck-soda04,js-ptas08,h09} for prominent examples.
The {\em online} version of square packing has been studied by
Januszewski and Lassak~\cite{jl97} and Han et al.~\cite{hiz08}, with more recent
progress due to Fekete and Hoffmann~\cite{fh-ossp-13,fh-ossp-17}.
A different kind of online square packing was considered by
Fekete et al.~\cite{fks-osp-09,fks-ospg-14}. The container is an unbounded strip,
into which objects enter from above in a Tetris-like fashion; any new
object must come to rest on a previously placed object, and the
path to its final destination must be collision-free.
There are various ways to generalize the online packing of squares; see Epstein and van Stee~\cite{es-soda04,es05,es07} for online bin packing variants
in two and higher dimensions. In this context, also see parts of Zhang et al.~\cite{zcchtt10}.
A natural generalization of online packing of squares is online packing of rectangles,
which have also received a serious amount of attention. Most notably, online strip packing
has been considered; for prominent examples, see Azar and Epstein~\cite{ae-strip97}, who employ
shelf packing, and Epstein and van Stee~\cite{es-soda04}.
Offline packing of rectangles into a unit square or rectangle has also been considered
in different variants; for examples, see \cite{fgjs05}, as well as \cite{jz-profit07}.
Particularly interesting for methods for online packing into a single container may be the work by Bansal et
al.~\cite{bcj-struct-09}, who show that for any complicated packing of rectangular items into a rectangular container,
there is a simpler packing with almost the same value of items. For another variant of online allocation, see~\cite{frs-csdaosa-14},
which extends previous work on optimal shapes for allocation~\cite{bbd-wosc-04}.
From within the FPGA community, there is a huge amount of related work
dealing with problems related to relocation.
Becker et al.~\cite{blc-erpbr-07} present a method for
enhancing the relocability of partial
bitstreams for FPGA runtime configuration, with a special focus on
heterogeneities. They study the underlying prerequisites and
technical conditions for dynamic relocation.
Gericota et al.~\cite{gericota05} present a relocation procedure for
Configurable Logic Blocks (CLBs) that is able to carry out online
rearrangements, defragmenting the available FPGA resources without
disturbing functions currently running. Another relevant approach was
given by Compton et al.~\cite{clckh-crdrt-02}, who present a new
reconfigurable architecture design extension based on the ideas of
relocation and defragmentation.
Koch et al.~\cite{kabk-faepm-04} introduce efficient hardware extensions to
typical FPGA architectures in order to allow hardware task
preemption.
These papers do not consider the algorithmic implications and how the
relocation capabilities can be exploited
to optimize module layout in a fast, practical fashion, which is what we consider in this paper. Koester et
al.~\cite{koester07} also address the problem of
defragmentation. Different defragmentation algorithms that minimize
different types of costs are analyzed.
The general concept of defragmentation is well known, and has been applied to
many fields, e.g., it is typically employed for memory management. Our approach
is significantly different from defragmentation techniques which have been
conceived so far: these require a freeze of the system, followed by
a computation of the new layout and a complete reconfiguration of all modules
at once. Instead, we just copy one module
at a time, and simply switch the execution to the new module as soon as the move is complete.
{\color{blue} This concept aims at providing a seamless, dynamic defragmentation
of the module layout, eventually resulting in much better
utilization of the available space for modules.} All this
makes our work a two-dimensional extension of the one-dimensional approach
described in \cite{fks-ddrd-12}.
\section{Preliminaries}
We are faced with an (online) sequence of configuration requests
that are to be carried out on a rectangular chip area.
A request may consist of {\em deleting} an existing module, which
simply means that the module may be terminated and
its occupied area can be released to free space.
On the other hand,
a request may consist of {\em inserting} a new module,
requiring an axis-aligned, rectangular module
to be allocated to an unoccupied section of the chip;
if necessary, this may require rearranging the
allocated modules in order to create free space of
the required dimensions, incurring some cost.
Previous work on reallocation problems of this type
has focused on one-dimensional approaches.
Using these in a two-dimensional setting does not result in
satisfactory performance.
The main contribution of our paper is to demonstrate a two-dimensional
approach that is able to achieve an efficiency that is provably within a constant factor of the optimum,
even in the worst case, which
requires a variety of mathematical details.
{\color{black} For better accessibility of the key ideas,
our technical description in the rest of this Section~3,
as well as in Section~4 and Section~5 focuses on the case of quadratic modules
on a quadratic chip area. Section~6 addresses how to deal with general
rectangles.}
The rest of this section provides technical notation and descriptions.
A square is called \emph{aligned} if its edge length equals $2^{-r}$
for any $r \in \mathbb{N}_0$. It is called an $r$-square if its size is
$2^{-r}$ for a specific $r \in \mathbb{N}_0$. The \emph{volume} of an $r$-square $Q$ is $|Q|=4^{-r}$.
A {\em quadtree} is a rooted tree in which every node has either four
children or none. As a quadtree can be interpreted as the subdivision
of the unit square into nested $r$-squares, we can use quadtrees to
describe certain packings of aligned squares into the unit square.
\begin{defi}
A \emph{(quadtree) configuration} $T$ assigns a set of axis-aligned squares to the
nodes of a quadtree. The nodes with a distance $j$ to the root of the
quadtree form \emph{layer} $j$. Nodes are also called \emph{pixels}
and pixels in layer $j$ are called \emph{$j$-pixels}. Thus,
$j$-squares can only be assigned to $j$-pixels. A
pixel $p$ \emph{contains} a square $s$ if $s$ is assigned to $p$ or
one of the children of $p$ contains $s$. A $j$-pixel that has
an assigned $j$-square is \emph{occupied}.
For a pixel $p$ that is not occupied, with $P$ the unique path from $p$ to the root,
we call $p$
\begin{itemize}
\item \emph{blocked} if there is a $q \in P$ that is occupied,
\item \emph{free} if it is not blocked,
\item \emph{fractional} if it is free and contains a square,
\item \emph{empty} if it is free but not fractional,
\item \emph{maximally empty} if it is empty but its parent is not.
\end{itemize}
The \emph{height $h(T)$} of a configuration $T$ is defined as $0$ if the root of $T$ is empty, and otherwise as the maximum $i+1$ such that $T$ contains an $i$-square.
\end{defi}
\begin{obs}\label{obs:disjoint}
Let $p \ne q$ be two maximally empty pixels and $P$ and $Q$ be the
paths from the root to $p$ and $q$, respectively. Then $p \notin Q$
and $q \notin P$.
\end{obs}
\begin{proof}
Without loss of generality, it is sufficient to show $p \notin Q$.
Assume $p \in Q$. Let $r \in Q$ be the parent of $q$. As $p$ is
maximally empty and $r$ is on the path from $p$ to $q$, $r$ must be
empty. However, that would imply that $q$ is not maximally empty, in
contradiction to the assumption.\qed
\end{proof}
The \emph{(remaining) capacity $\mathrm{cap}(p)$} of a $j$-pixel $p$
is defined as $0$ if $p$ is occupied or blocked and as $4^{-j}$ if $p$ is empty. Otherwise, $\mathrm{cap}(p) := \sum_{p' \in C(p)} \mathrm{cap}(p')$, where $C(p)$ is the set of children of $p$. The \emph{(remaining)
capacity} of $T$, denoted $\mathrm{cap}(T)$, is the remaining
capacity of the root of $T$.
\begin{lem}\label{lem:fullcap}
Let $p_1, p_2, \ldots, p_k$ be all maximally empty pixels of a quadtree
configuration $T$. Then we have $\mathrm{cap}(T) = \sum_{i=1}^k \mathrm{cap}(p_i)$.
\end{lem}
\begin{proof}
The claim follows directly from the definition of the capacity, as the only
positive capacities considered for $\mathrm{cap}(T)$ are exactly those
of the maximally empty pixels. \qed
\end{proof}
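These definitions translate directly into code. The following minimal
Python sketch (our own illustration; class and method names are not
part of the formal model) represents a configuration as a tree of
pixels and evaluates capacities and maximally empty pixels with exact
rational arithmetic:
\begin{verbatim}
from fractions import Fraction

class Pixel:
    """A j-pixel of a quadtree configuration."""
    def __init__(self, layer, parent=None):
        self.layer = layer          # j: edge length 2^-j, volume 4^-j
        self.parent = parent
        self.occupied = False       # a j-square is assigned here
        self.children = []          # none, or exactly four (j+1)-pixels

    def volume(self):
        return Fraction(1, 4 ** self.layer)

    def cap(self):
        # remaining capacity as defined above; occupied subtrees
        # (whose descendants are blocked) contribute nothing
        if self.occupied:
            return Fraction(0)
        if not self.children:
            return self.volume()    # empty pixel
        return sum((c.cap() for c in self.children), Fraction(0))

    def maximally_empty(self):
        # enumerate the maximally empty pixels in this subtree,
        # assuming the parent of `self` is not empty
        if self.occupied:
            return
        if self.cap() == self.volume():      # self is empty
            yield self
        else:                                # fractional: recurse
            for c in self.children:
                yield from c.maximally_empty()
\end{verbatim}
With this, Lemma~\ref{lem:fullcap} is the runtime identity
\texttt{root.cap() == sum(p.cap() for p in root.maximally\_empty())}.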
\begin{figure}[tbh]
\centering
\includegraphics[width=0.45\textwidth]{img/config.pdf}
\caption{A quadtree configuration {\color{black}(above)} and the corresponding dynamically generated quadtree layout {\color{black}(below)}.
Gray nodes are occupied, white ones with gray stripes
fractional, black ones blocked, and white nodes without stripes
empty. Maximally empty nodes have a circle inscribed. Red lines in the module layout
indicate the dynamically produced communication infrastructure, induced by the quadtree structure.}
\label{fig:config}
\end{figure}
See Figure~\ref{fig:config} for an example of a quadtree configuration
and the corresponding packing of aligned squares in the unit square.
Quadtree configurations are transformed using \emph{moves}
(\emph{reallocations}). A $j$-square $s$ assigned to a $j$-pixel
$p$ can be \emph{moved} (\emph{reallocated}) to another $j$-pixel $q$
by creating a new assignment from $q$ to $s$ and deleting the old
assignment from $p$ to $s$. $q$ must have been empty for this to be
allowed.
We allow only one move at a time. For example, two squares cannot
change places unless there is a sufficiently large pixel to
temporarily store one of them. Furthermore, we do not put limitations
on how to transfer a square from one place to another, i.e., we can
always move a square even if there is no collision-free path between
the origin and the destination.
\begin{defi}
A fractional pixel is \emph{open} if at least one of its children is
(maximally) empty. A configuration is called \emph{compact} if there
is at most one open $j$-pixel for every $j \in \mathbb{N}_0$.
\end{defi}
In (one-dimensional) storage allocation and scheduling, there are
techniques that avoid reallocations by requiring more space than the
sum of the sizes of the allocated
pieces. See Bender et al.~\cite{bender_reallocation_2014} for an
example. From there we adopt the term \emph{underallocation}. In particular, given two squares $s_1$ and $s_2$, $s_2$ is an $x$-underallocated copy
of $s_1$, if $|s_2| = x \cdot |s_1|$ for $x > 1$.
\begin{defi}
A \emph{request} has one of the forms \textsc{Insert($x$)} or
\textsc{Delete($x$)}, where $x$ is a unique identifier for a
square. Let $v \in [0, 1]$ be the volume of the square $x$. The
\emph{volume} of a request $\sigma$ is defined as
\[
\mathrm{vol}(\sigma) = \left\{ \begin{array}{ccl}
v & \text{if} & r=\textsc{Insert($x$)},\\
-v & \text{if} & r=\textsc{Delete($x$)}.
\end{array} \right.
\]
\end{defi}
\begin{defi}
A sequence of requests $\sigma_1, \sigma_2, \ldots, \sigma_k$
is \emph{valid} if $\sum_{i=1}^j \mathrm{vol}(\sigma_i) \le 1$ holds
for every $j=1,2,\ldots,k$. It is called \emph{aligned}, if
$|\mathrm{vol}(\sigma_j)| = 4^{-\ell_j}, \ell_j \in \mathbb{N}_0,$
where $|.|$ denotes the absolute value, holds for every
$j=1,2,\ldots,k$, i.e., if only aligned squares are packed.
\end{defi}
Our goal is to minimize the costs of reallocations. Costs can be
measured in different ways, for example in the number of moves or the
reallocated volume.
\begin{defi}
Assume we fulfill a request $\sigma$ and as a consequence reallocate
a set of squares $\{s_1, s_2, \ldots, s_k\}$. The \emph{movement
cost} of $\sigma$ is defined as $c_{\mathrm{move}}(\sigma) = k$,
the \emph{total volume cost} of $\sigma$ is defined as
$c_{\mathrm{total}}(\sigma) = \sum_{i=1}^k |s_i|$,
and the \emph{(relative) volume cost} of $\sigma$ is defined as
$c_{\mathrm{vol}}(\sigma) = \frac{c_{\mathrm{total}}(\sigma)}{|\mathrm{vol}(\sigma)|}$.
\end{defi}
\section{Inserting into a Given Configuration}
In this section we examine the problem of rearranging a given
configuration in such a way that the insertion of a new square is
possible. {\color{black}
Before we present our results in mathematical detail, including all
necessary proofs, we give a short overview of the individual
propositions and their significance: We first examine properties of
quadtree configurations culminating in Theorem~\ref{thm:qtmoves}, which
establishes that any configuration with sufficient capacity allows the
insertion of a square. Creating the required contiguous space for the insertion
comes at a cost due to required reallocations. This cost is analysed
in detail in Subsection~\ref{sec:costs}. There, we present matching
upper and lower bounds on the reallocation cost for our three cost
functions -- total volume cost (Theorems~\ref{thm:volumebound} and
\ref{thm:volumexample}), (relative) volume cost
(Corollary~\ref{cor:relvoltight}), and movement cost
(Theorems~\ref{thm:movbound} and \ref{thm:movexample}).
}
\subsection{Coping with Fragmented Allocations}\label{sec:delete}
Our strategy follows one general idea: larger empty pixels can be
built from smaller ones; e.g., four empty $i$-pixels can
be combined into one empty $(i-1)$-pixel. This can be iterated
to build an empty pixel of suitable volume.
\begin{lem}\label{lem:order}
Let $p_1, p_2, \ldots, p_k$ be a sequence of empty pixels sorted by
volume in descending order. Then
$\sum_{i=1}^k \mathrm{cap}(p_i) \ge 4^{-\ell} > \sum_{i=1}^{k-1} \mathrm{cap}(p_i)$
implies the following properties:
\begin{equation}\label{eq:four_p1}
k < 4 \Leftrightarrow k = 1
\end{equation}
\begin{equation}\label{eq:exact}
k \ge 4 \Rightarrow \sum_{i=1}^k \mathrm{cap}(p_i) = 4^{-\ell}
\end{equation}
\begin{equation}\label{eq:four_p2}
k \ge 4 \Rightarrow \mathrm{cap}(p_k) = \mathrm{cap}(p_{k-1}) =
\mathrm{cap}(p_{k-2}) = \mathrm{cap}(p_{k-3})
\end{equation}
\end{lem}
\begin{proof}
For $k \ge 2$, $p_1$ must be a pixel of smaller capacity than an
$\ell$-pixel, because otherwise we would not need $p_2$ for the sum to
be at least $4^{-\ell}$ -- in contradiction to the
assumption. Thus, we need to add up smaller capacities to at least
$4^{-\ell}$. As we need at least four $(\ell+1)$-pixels for that,
statement~\eqref{eq:four_p1} holds.
In the following we assume $k \ge 4$. Let $x=\sum_{i=1}^{k-1}
\mathrm{cap}(p_i)$. We know from the assumption that $x$ is strictly
less than $4^{-\ell}$, but $x+\mathrm{cap}(p_k)$ is at least
$4^{-\ell}$. Consider the base-4 (quaternary) representation
of $x/4^{-\ell}$: $x_4=(x/4^{-\ell})_4$. It has a zero before the
decimal point and a sequence of base-4 digits after. Let $n$ be the
rightmost non-zero digit of $x_4$. As the sequence is sorted in
descending order and the capacities are all negative powers of four,
adding the capacity of $p_k$ can only increase $n$, or a digit right
of $n$, by one. Since all digits right of $n$ are zero, increasing one
of them by one does not increase $x$ to at least
$4^{-\ell}$. Therefore, it must increase $n$. But if increasing $n$ by
one means increasing $x$ to at least $4^{-\ell}$, then every digit of
$x_4$ after the decimal point and up to $n$ must have been
three. Consequently, increasing $n$ by one leads not only to
$x + \mathrm{cap}(p_k) \ge 4^{-\ell}$ but also to
$x + \mathrm{cap}(p_k)=4^{-\ell}$, which is statement~\eqref{eq:exact}.
Furthermore, as $n$ must have been three and the sequence is sorted,
the previous three capacities added must have each increased $n$ by
exactly one as well. This proves statement~\eqref{eq:four_p2}. \qed
\end{proof}
\begin{figure}[t!hp]
\centering
\includegraphics[width=0.25\textwidth]{img/four_pixels.pdf}
\caption{Illustration to Lemma~\ref{lem:four}.}
\label{fig:four_pixels}
\end{figure}
\begin{lem}\label{lem:four}
Given a quadtree configuration $T$ with four maximally empty
$j$-pixels. Then $T$ can be transformed (using a sequence
of moves) into a configuration $T^*$ with one more maximally
empty $(j-1)$-pixel and four fewer maximally empty $j$-pixels than $T$
while retaining all its maximally empty $i$-pixels for $i < j-1$.
\end{lem}
\begin{proof}
Let $p_1, p_2, p_3$ and $p_4$ be four maximally empty $j$-pixels and
$q_1, q_2, q_3$ and $q_4$ be the parents of $p_1, p_2, p_3$ and $p_4$,
respectively. Then $q_i$ has at most three children that are not
empty. Now, we can move the at most three non-empty subtrees from one
of the $q_i$ to the others, $i=1,2,3,4$. Without loss of generality,
we choose $q_1$. Let $a, b$ and $c$ be the children of $q_1$ that are
not $p_1$. We move $a$ to $p_2$, $b$ to $p_3$ and $c$ to $p_4$. See
Figure~\ref{fig:four_pixels} for an illustration. Thus, we get a new
configuration $T^*$ with the empty $(j-1)$-pixel $q_1$ and occupied or
fractional pixels {\color{black}$q_2$, $q_3$, $q_4$}. Note that $p_1$ is still empty,
but no longer maximally empty, because its parent $q_1$ is now empty.
The construction does not affect any other maximally empty pixels. \qed
\end{proof}
\begin{thm}\label{thm:qtmoves}
Given a quadtree configuration $T$ with a remaining capacity of at least
$4^{-j}$, you can transform $T$ into a quadtree configuration $T^*$
with an empty $j$-pixel using a sequence of moves.
\end{thm}
\begin{proof}
Let $S=p_1, p_2, \ldots, p_n$ be the sequence containing all maximally
empty pixels of $T$ sorted by capacity in descending order. If the
capacity of $p_1$ is at least $4^{-j}$, then there already is an empty
$j$-pixel in $T$ and we can simply set $T^* = T$.
Assume $\mathrm{cap}(p_1) < 4^{-j}$. In this case we inductively build an empty
$j$-pixel. Let $S'=p_1, p_2, \ldots, p_k$ be the shortest prefix of
$S$ satisfying $\sum_{i=1}^{k} \mathrm{cap}(p_i) \ge 4^{-j}$.
Such a prefix has to exist because of {\color{black}Lemma~\ref{lem:fullcap}}.
Note that due to Observation~\ref{obs:disjoint} no pixel
$p_i$ is contained in another pixel $p_j$, $i, j \in
\{1,2,\ldots,k\}$, $i \ne j$.
Lemma~\ref{lem:order} tells us $k \ge 4$ and the
last four pixels in $S'$, $p_{k-3}, p_{k-2}, p_{k-1}$ and
$p_k$, are from the same layer, say layer $\ell$. Thus, we can apply
Lemma~\ref{lem:four} to $p_{k-3}, p_{k-2}, p_{k-1}, p_k$ to get a new
maximally empty $(\ell - 1)$-pixel $q$. We remove $p_{k-3}, p_{k-2},
p_{k-1}, p_k$ from $S'$ and insert $q$ into $S'$ according to its
capacity.
The length of the resulting sequence $S''$ is three less than
the length of $S'$. This does not change the sum of the capacities, since
an empty $(\ell-1)$-pixel has the same capacity as four empty
$\ell$-pixels. That is, $\sum_{p \in S'} \mathrm{cap}(p) = \sum_{p \in
S''} \mathrm{cap}(p)$ holds.
We can repeat these steps until $k < 4$ holds. Then
Lemma~\ref{lem:order} implies that $k=1$, i.e., the sequence contains
only one pixel $p_1$, and because $\mathrm{cap}(p_1)=4^{-j}$, $p_1$ is
an empty $j$-pixel. \qed
\end{proof}
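The proof is constructive, and the procedure translates directly into
code. The following sketch builds on the \texttt{Pixel} class above;
\texttt{merge\_four} realizes Lemma~\ref{lem:four} by subtree moves,
and both function names are our own:
\begin{verbatim}
def merge_four(p1, p2, p3, p4):
    # Lemma: empty the parent of p1 by moving its non-empty children
    # into same-layer empty pixels; siblings of p1 among p2..p4 are
    # already empty and need no move, so enough spares remain
    spares = iter([p for p in (p2, p3, p4)
                   if p.parent is not p1.parent])
    for child in p1.parent.children:
        if child is not p1 and child.cap() != child.volume():
            dest = next(spares)
            dest.occupied, dest.children = child.occupied, child.children
            for c in dest.children:
                c.parent = dest
            child.occupied, child.children = False, []

def make_empty_pixel(root, j):
    # Theorem: cap(root) >= 4^-j guarantees success; returns an empty
    # pixel of volume at least 4^-j (it contains an empty j-pixel)
    target = Fraction(1, 4 ** j)
    assert root.cap() >= target
    while True:
        empties = sorted(root.maximally_empty(), key=Pixel.cap,
                         reverse=True)
        if empties[0].cap() >= target:
            return empties[0]
        prefix, total = [], Fraction(0)
        for p in empties:              # shortest prefix reaching 4^-j
            prefix.append(p)
            total += p.cap()
            if total >= target:
                break
        # by the lemma on sorted capacities, the prefix has length at
        # least 4 and its last four pixels lie in the same layer
        merge_four(*prefix[-4:])       # 3 fewer maximally empty pixels
\end{verbatim}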
\subsection{Reallocation Cost}\label{sec:costs}
Reallocation cost is made non-trivial by \emph{cascading moves}:
Reallocated squares may cause further reallocations, when there is no
empty pixel of the required size available.
\begin{obs}\label{obs:largebad}
In the worst case, reallocating an $\ell$-square is not cheaper than
reallocating four $(\ell+1)$-squares -- using any of the three defined
cost types.
\end{obs}
\begin{proof}
It is straightforward to see this for volume costs, total or relative:
Wherever you can move one $\ell$-square you can also move four
$(\ell+1)$-squares without causing more cascading moves.
For movement costs a single move of an $\ell$-square is less than four
moves of $(\ell+1)$-squares, but it can cause cascading moves of three
$(\ell+1)$-squares plus the cascading moves caused by the reallocation
of an $(\ell+1)$-square and, therefore, does not cause lower costs in
total. \qed
\end{proof}
\begin{thm}\label{thm:volumebound}
The maximum total volume cost caused by the insertion of an
$i$-square $Q$, $i \in \mathbb{N}_0$,
into a quadtree configuration $T$ with
$\mathrm{cap}(T) \ge 4^{-i}$ is bounded by
\[
c_\mathrm{total,max} \le \frac{3}{4} \cdot
4^{-i} \cdot \mathrm{min} \{(s-i), i\} \in O(|Q| \cdot h(T))
\]
when the smallest previously inserted square is an $s$-square.
\end{thm}
\begin{proof}
For $s \le i$ there has to be an empty $i$-pixel in $T$, as
$\mathrm{cap}(T) \ge 4^{-i}$, and we can insert $Q$ without any moves. In
the following, we assume $s > i$.
Let $Q$ be the $i$-square to be inserted. We can assume that we do not
choose an $i$-pixel with a remaining capacity of zero to pack $Q$ -- if
there were no other pixels, $\mathrm{cap}(T)$ would be zero as
well. Therefore, the chosen pixel, say $p$, must have a remaining
capacity of at least $4^{-s}$. From Observation~\ref{obs:largebad}
follows that the worst case for $p$ would be to be filled with 3
$k$-squares, for every $i < k \le s$. Let $v_i$ be the worst-case
volume of a reallocated $i$-pixel. We get $v_i \le \sum_{j=i+1}^s \frac{3}{4^j} = 4^{-i} - 4^{-s}$.
Now we have to consider cascading moves. Whenever we move an
$\ell$-square, $\ell > i$, to make room for $Q$, we might
have to reallocate a volume of $v_{\ell}$ to make room for the
$\ell$-square. Let $x_i$ be the total volume that is at most
reallocated when inserting an $i$-square.
Then we get the recurrence $x_i = v_i + \sum_{j=i+1}^s 3 \cdot x_j$
with $x_s=v_s=0$. This resolves to $x_i=3/4 \cdot 4^{-i} \cdot (s-i)$.
$v_i$ cannot get arbitrarily large, as the
remaining capacity must suffice to insert an $i$-square. Therefore,
if all the possible $i$-pixels contain a volume of $4^{-s}$ (if some
contained more, we would choose those and avoid the worst case), we
can bound $s$ by $4^i \cdot 4^{-s} \ge 4^{-i} \Leftrightarrow s \le 2i$,
which leads to $c_\mathrm{total,max} \le \frac{3}{4} \cdot 4^{-i} \cdot i$.
With $|Q|=4^{-i}$ and $i < s < h(T)$ we get $c_\mathrm{total,max} \in
O(|Q| \cdot h(T))$. \qed
\end{proof}
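The closed form of the recurrence is easy to verify numerically; the
following small Python sketch (ours) iterates
$x_i = v_i + \sum_{j=i+1}^s 3 x_j$ with exact rational arithmetic:
\begin{verbatim}
from fractions import Fraction

def worst_case_volume(i, s):
    # x_s = 0 and x_j = v_j + 3 * sum_{m=j+1..s} x_m,
    # with v_j = 4^-j - 4^-s
    x = {s: Fraction(0)}
    for j in range(s - 1, i - 1, -1):
        v = Fraction(1, 4 ** j) - Fraction(1, 4 ** s)
        x[j] = v + 3 * sum(x[m] for m in range(j + 1, s + 1))
    return x[i]

for i, s in [(1, 2), (3, 6), (5, 10)]:
    assert worst_case_volume(i, s) == \
        Fraction(3, 4) * Fraction(1, 4 ** i) * (s - i)
\end{verbatim}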
\begin{cor}\label{cor:maxtotalvol}
Inserting a square into a quadtree configuration has a total volume
cost of no more than $3/16=0.1875$.
\end{cor}
\begin{proof}
Looking at Theorem~\ref{thm:volumebound} it is easy to see that the
worst case is attained for $i=1$: $c_\mathrm{total}
= 3/4 \cdot 4^{-1} \cdot 1 = 3/16=0.1875$. \qed
\end{proof}
\begin{figure}[h!]
\centering
\includegraphics[width=0.40\textwidth]{img/wc_volume.pdf}
\caption{The worst-case construction for volume cost for $s=6$ and
$i=3$. Every 3-pixel contains three 4-, 5-, and 6-squares with only
one remaining empty 6-pixel.}
\label{fig:wc_volume}
\end{figure}
\begin{thm}\label{thm:volumexample}
For every $i \in \mathbb{N}_0$ there are quadtree configurations $T$ for which the
insertion of an $i$-square $Q$ causes a total volume
cost of
\[
c_\mathrm{total,max} \ge \frac{3}{4} \cdot
4^{-i} \cdot \mathrm{min} \{(s-i), i\} \in \Omega(|Q| \cdot h(T))
\]
when the smallest previously inserted square is an $s$-square.
\end{thm}
\begin{proof}
We build a quadtree configuration to match the upper bound of
Theorem~\ref{thm:volumebound}. Let $s=2i$ and consider a subtree
rooted at an $i$-pixel
that contains three $k$-squares for every $i < k \le s$. They do not have
to be arranged in such a way that the single free $s$-pixel is in the
lower right corner, but the nesting structure is important. Assume all
$4^i$ $i$-pixels of $T$ are constructed in such a way. Then you have
to reallocate three $k$-squares for every $i < k \le s$. However, every
fractional $k$-pixel in the configuration in turn contains three
$k'$-pixel for every $k < k' < s$, i.e., moving every $k$-square
causes cascading moves. See Figure~\ref{fig:wc_volume} for the whole
construction for $s=6$ and $i=3$. The reallocated volume without
cascading moves adds up to $v_i = \sum_{k=i+1}^s 3 \cdot 4^{-k}$.
Including cascading moves we get $x_i = v_i + \sum_{k=i+1}^s 3 \cdot x_k$,
which resolves to $x_i=3/4 \cdot 4^{-i} \cdot (s-i)$.
With
$s=h(T)-1$, $i=s/2$ and $|Q|=4^{-i}$ we get
$c_\mathrm{total,max} \in \Omega(|Q| \cdot h(T))$. \qed
\end{proof}
As a corollary we get an upper bound for the (relative) volume cost
and a construction matching the bound.
\begin{cor}\label{cor:relvoltight}
Inserting an $i$-square into a quadtree configuration $T$ with sufficient capacity
$\mathrm{cap}(T) \ge 4^{-i}$ causes a (relative) volume cost of at
most
\[
c_\mathrm{vol,max} \le \frac{3}{4} \cdot \mathrm{min} \{(s-i), i\} \in
\Theta(h(T)),
\]
when the smallest previously inserted square is an $s$-square,
and this bound is tight, i.e., there are configurations for which the
bound is matched.
\end{cor}
It is important to note that
relative volume cost can be arbitrarily bad by increasing the height
of the configuration, as opposed to total volume cost with the upper
bound derived in
Corollary~\ref{cor:maxtotalvol}. What is more, large total volume
cost is achieved by inserting $i$-squares for small $i$, whereas
large relative volume cost is only possible for large $i$ (and large
$s-i$). This has an interesting interpretation with regard to the structure of
the quadtree: Large total volume cost can happen when you assign a
square to a node close to the root. To get large relative volume cost
you need a high quadtree and assign a square to a node roughly in the
middle (with respect to height).
The same methods we used to derive worst case bounds for volume cost
can also be used to establish bounds for movement cost, which results
in $c_\mathrm{move,max} \le 4^{\mathrm{min}\{s-i, i\}}-1 \in
O(2^{h(T)})$. A matching construction is the same as the one in the
proof of Theorem~\ref{thm:volumexample}.
\begin{thm}\label{thm:movbound}
The maximum movement cost caused by the insertion of an
$i$-square $Q$, $i \in \mathbb{N}_0$, into a quadtree configuration $T$ with
$\mathrm{cap}(T) \ge 4^{-i}$ is bounded by
\[
c_\mathrm{move,max} \le 4^{\mathrm{min}\{s-i, i\}}-1 \in O(2^{h(T)})
\]
when the smallest previously inserted square is an $s$-square.
\end{thm}
\begin{proof}
The proof is analogous to the proof of
Theorem~\ref{thm:volumebound}. We can use
Observation~\ref{obs:largebad} and formulate a new recurrence. The
number of reallocations without cascading moves caused by the
insertion of $Q$ can be bounded by $v_i \le 3(s-i)$ and including
cascading moves we get $x_i = v_i + \sum_{j=i+1}^s 3 x_j$, which
resolves to $x_i = 4^{s-i} - 1$, since $x_i = 4x_{i+1} + 3$ and $x_s = 0$.
As we need at least $4^{-i}$ remaining capacity to insert $Q$ we can
again deduce $s \le 2i$. With $s=h(T)-1$ we get $\mathrm{min}\{s-i,
i\} \le h(T)/2$, which results in the claimed bound. \qed
\end{proof}
\begin{thm}\label{thm:movexample}
For every $i \in \mathbb{N}_0$ there are quadtree configurations $T$ for which the
insertion of an $i$-square $Q$ causes a movement
cost of
\[
c_\mathrm{move,max} \ge 4^{\mathrm{min}\{s-i, i\}}-1 \in \Omega(2^{h(T)})
\]
when the smallest previously inserted square is an $s$-square.
\end{thm}
\begin{proof}
The example from Theorem~\ref{thm:volumexample} works here as well. As
every fractional $j$-pixel, $j < s$, contains three $(j+1)$-pixels,
you have to move three squares for every $j=i,\ldots,s-1$ and account
for cascading moves. This results in a number of moves $c_\mathrm{move,max}
\ge x_i = 3(s-i) + \sum_{j=i+1}^s 3 x_j = 4^{s-i} - 1$, where
$s=2i=h(T)-1$. \qed
\end{proof}
\section{Online Packing and Reallocation}
Applying Theorem~\ref{thm:qtmoves} repeatedly to successive configurations
yields a strategy for the dynamic allocation of aligned squares.
\begin{cor}\label{cor:strategy}
Starting with an empty square and given a valid, aligned sequence of
requests, there is a strategy that fulfills every request in the
sequence.
\end{cor}
\begin{proof}
We only have to deal with aligned squares and can use quadtree
configurations to pack the squares, since the sequence of requests
$\sigma_1, \sigma_2, \ldots, \sigma_k$ is aligned. We start with the
empty configuration that contains only one empty $0$-pixel. Thus, we
have a configuration with capacity $1$. We only have to consider
insertions, because deletions can always be fulfilled by definition.
As the sequence of requests is valid, whenever a request $\sigma_\ell$
demands to insert a $j$-square $s$, we have
$\sum_{i=1}^{\ell-1} \mathrm{vol}(\sigma_i) + 4^{-j} \le 1$, so the remaining
capacity of the current quadtree configuration $T$ is
$1 - \sum_{i=1}^{\ell-1} \mathrm{vol}(\sigma_i) \ge 4^{-j}$.
Therefore, we can use Theorem~\ref{thm:qtmoves} to transform $T$ into
a configuration $T^*$ with an empty $j$-pixel $p$. We assign $s$
to $p$. \qed
\end{proof}
This strategy may incur the heavy insertion cost
derived in the previous section. However, when we do not have to
work with a given configuration and have the freedom to handle all
requests starting from the empty unit square, we can use the added
flexibility to derive a more sophisticated strategy. In particular,
we can use reallocations to clean up a configuration when squares
are deleted. This can make deletions costly operations, but allows us
to eliminate insertion cost entirely.
\subsection{First-Fit Packing}
We present an algorithm that fulfills any valid, aligned sequence of
requests and does not cause any reallocations on insertions. We call
it \emph{First Fit} in imitation of the well-known technique employed
in one-dimensional allocation problems.
Given a one-dimensional
packing and a request to allocate space for an additional item,
First Fit chooses the first suitable location. In one dimension it is
trivial to define an order in which to check possible locations: for
example, assume the resources are arranged horizontally and proceed
from left to right.
{\color{black}
Finding an order in two or more dimensions
is less straightforward than in 1D. We use space-filling curves to overcome this
impediment. Space-filling curves are of theoretical interest, because
they fill the entire unit square
(i.e., their Hausdorff dimension is $2$).
More useful for us are the schemes used to create a
space-filling curve, which employ a recursive construction on the nodes
of a quadtree and become space-filling as the height of the tree
approaches infinity. In particular, they provide an order for the
nodes of a quadtree. In the following, we make use of the z-order
curve~\cite{morton_1966}.
}
First Fit assigns items to be packed to the next available position in
z-order. We denote the position
of a pixel $p$ in z-order by $z(p)$, i.e.,
$z(p) < z(q)$ if and only if $p$ comes before $q$ in z-order.
In
general, the z-order is only a partial order, as it does not make
sense to compare nodes with their parents or children. However, there
are three important occasions for which the z-order is a total order: If
you only consider pixels in one layer, if you only consider
occupied pixels, and if you only consider maximally empty pixels. In
all three cases pixels are pairwise disjoint, which
leads to a total order.
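For concreteness, a z-order key can be computed by bit interleaving; the following minimal sketch is our illustration (the quadrant order may need to be adapted to match Figure~\ref{fig:firstfit}):
\begin{verbatim}
def z_key(x: int, y: int, i: int) -> int:
    """Morton key of the i-pixel at column x, row y (0 <= x, y < 2**i):
    interleave the i bits of x and y, most significant bits first."""
    key = 0
    for b in range(i - 1, -1, -1):
        key = (key << 2) | (((y >> b) & 1) << 1) | ((x >> b) & 1)
    return key

# Pixels within one layer are totally ordered by their keys, e.g.,
# z_key(0, 0, 2) == 0 is the first pixel of layer 2 in z-order.
\end{verbatim}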
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{img/firstfit.pdf}
\caption{The z-order for layer 2 pixels (left); a First Fit allocation and the z-order of the occupied pixels
-- which is not necessarily the insertion order (right).}
\label{fig:firstfit}
\end{figure}
First Fit proceeds as follows: A request to insert an $i$-square
$Q$ is handled by assigning $Q$ to the first empty $i$-pixel in
z-order; see Figure~\ref{fig:firstfit}. Deletions are more
complicated. After unassigning a
deleted square $Q$ from a pixel $p$ the following procedure handles
reallocations (an example deletion can be seen in
Figure~\ref{fig:invstrategy}):
\begin{algorithmic}[1]
\State $S \gets \{p'\}$, where $p'$ is the maximally empty pixel
containing $p$
\While{$S \ne \varnothing$}
\State Let $a$ be the element of $S$ that is first in z-order.
\State $S \gets S \setminus \{a\}$
\State Let $b$ be the last occupied pixel in z-order.
\While{$z(b) > z(a)$}
\If{the square assigned to $b$, $B$, can be packed into $a$}
\State Assign $B$ to the first suitable descendant of $a$ in
z-order.
\State Unassign $B$ from $b$.
\State Let $b'$ be the maximally empty pixel containing $b$.
\State $S \gets S \cup \{b'\}$
\State $S \gets S \setminus \{b'': b''\text{ is child of }b'\}$
\EndIf
\State Move $b$ backwards in z-order to the previous occupied
pixel.
\EndWhile
\EndWhile
\end{algorithmic}
The general idea is to reallocate squares from the current end of the
z-order to empty spots. As reallocating creates new empty squares, we
need to apply the method repeatedly in what can be considered an
inverse case of cascading moves. We ensure termination by always
moving the currently {\color{black}considered} empty pixel in positive z-order and
reallocating squares in negative z-order. Before analyzing the strategy
in more detail, we sketch the insertion step.
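The following sketch (our illustration, using a deliberately naive linear scan over the layer) finds the first empty $i$-pixel in z-order:
\begin{verbatim}
from bisect import bisect_left, insort

def first_fit_insert(occupied: dict, i: int) -> int:
    """Return the z-key of the first empty i-pixel and mark it occupied.
    occupied maps layer j -> sorted list of z-keys of occupied j-pixels."""
    def is_empty(k: int) -> bool:
        for j, keys in occupied.items():
            if j <= i:                       # occupied ancestor (or k itself)?
                anc = k >> 2 * (i - j)
                pos = bisect_left(keys, anc)
                if pos < len(keys) and keys[pos] == anc:
                    return False
            else:                            # any occupied descendant?
                lo, hi = k << 2 * (j - i), (k + 1) << 2 * (j - i)
                pos = bisect_left(keys, lo)
                if pos < len(keys) and keys[pos] < hi:
                    return False
        return True

    for k in range(4 ** i):                  # scan layer i in z-order
        if is_empty(k):
            insort(occupied.setdefault(i, []), k)
            return k
    raise ValueError("no empty i-pixel available")
\end{verbatim}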
\begin{inv}\label{inv:inv}
For every empty $i$-pixel $p$ in a quadtree configuration $T$ there is
no occupied $i$-pixel $q$ with $z(q) > z(p)$.
\end{inv}
\begin{lem}\label{lem:invcompact}
Every quadtree configuration $T$ satisfying Invariant~\ref{inv:inv}
is compact.
\end{lem}
\begin{proof}
Assume a quadtree configuration $T$ is not compact. Then it contains
two fractional $i$-pixels, $i \in \mathbb{N}$, $p$ and $q$ with maximally
empty children
$p'$ and $q'$, respectively. Without loss of generality, assume $z(p)
< z(q)$. As $q$ is fractional, there is a $j$-square, $j > i$, assigned
to some descendant of $q$, say $q''$. However, $p'$ is an empty
$(i+1)$-pixel and therefore contains an empty $j$-pixel, $p''$. As
$z(p) < z(q)$, we also have $z(p'') < z(q'')$ and
Invariant~\ref{inv:inv} does not hold. \qed
\end{proof}
\begin{lem}\label{lem:3max}
In a compact quadtree configuration $T$ there are at most three
maximally empty $j$-pixels for every $j \in \mathbb{N}_0$.
\end{lem}
\begin{proof}
The statement holds for $j=0$, since there is only one $0$-pixel. For
$j>0$ there is at most one open $(j-1)$-pixel $p$ in $T$, because $T$
is compact. Therefore, all other $(j-1)$-pixels except for $p$ either
do not have an empty child or are maximally empty themselves. Thus,
all maximally empty $j$-pixels have to be children of $p$. Since $p$
is not empty, there can be at most three. \qed
\end{proof}
\begin{lem}\label{lem:compactspace}
Given an $\ell$-square $s$ and a compact quadtree configuration $T$,
then $s$ can be assigned to an empty $\ell$-pixel in $T$, if and only
if $\mathrm{cap}(T) \ge 4^{-\ell}$.
\end{lem}
\begin{proof}
The direction from left to right is obvious, as there can be no empty
$\ell$-pixel if the capacity is less than $4^{-\ell}$. For the other
direction assume there is no empty $\ell$-pixel in $T$. Since
there is no empty $\ell$-pixel, there is also no empty $j$-pixel for
any $j < \ell$. Let the smallest square assigned to a node be an
$s$-square. As $T$ is compact, we can use Lemma~\ref{lem:3max}
and {\color{black}Lemma~\ref{lem:fullcap}} to bound the remaining capacity of $T$ from
above: $\mathrm{cap}(T) \le \sum_{k=\ell+1}^{s} 3 \cdot 4^{-k} = 4^{-\ell} -
4^{-s} < 4^{-\ell}$. \qed
\end{proof}
In other words, packing an $\ell$-square in a compact configuration
requires no reallocations.
\begin{thm}\label{thm:ff}
The strategy presented above is correct. In particular,
\begin{enumerate}
\item every valid insertion request is fulfilled at zero cost,
\item every deletion request is fulfilled,
\item after every request Invariant~\ref{inv:inv} holds.
\end{enumerate}
\end{thm}
\begin{proof}
The first part follows from Lemmas~\ref{lem:compactspace}
and \ref{lem:invcompact} and point 3. Insertions maintain the invariant,
because the inserted square is assigned to the first suitable empty
pixel in z-order. Deletions can obviously always be fulfilled. We
still need to prove the important part, which is that the invariant
holds after a deletion.
We show this by proving that whenever the procedure reaches line 3 and
sets $a$, the invariant holds for all squares in
z-order up to $a$. As we only move squares in negative z-order, the
sequence of pixels $a$ refers to is increasing in z-order. Since we
have a finite number of squares, the procedure terminates after a
finite number of steps when no suitable $a$ is left. At that point the
invariant holds throughout the configuration.
Assume we are at step 3 of the procedure and the invariant holds for
all squares up to $a$. None of the squares considered to be moved to
$a$ fit anywhere before $a$ in z-order -- otherwise the invariant
would not hold for pixels before $a$. Afterwards, no square that has
not been moved to $a$ fits into $a$, because it would have been moved
there otherwise. Once we reach line 3 again and set the new $a$, say
$a'$, consider the pixels between $a$ and $a'$ in z-order. If any
square after $a'$ would fit somewhere into a pixel between $a$ and
$a'$, then the invariant would not have held before the
deletion. Therefore, the invariant holds up to $a'$. \qed
\end{proof}
{\color{black}
Comparing our results in Section~4 to those in this section, a major
advantage of an empty initial configuration becomes apparent. For all
examined cost functions there are configurations into which no square
can be inserted at zero cost (cf. Theorem~\ref{thm:volumexample},
Corollary~\ref{cor:relvoltight}, Theorem~\ref{thm:movexample}). This
is in contrast to First Fit, which achieves insertion at zero
cost (Theorem~\ref{thm:ff}). The downside is the potentially large cost
of deletions. The thorough analysis of a strategy with provably low
cost for both insertions and deletions is the subject of future work.
}
\begin{figure}[h!]
\centering
\includegraphics[width=0.25\textwidth]{img/invstrategy.pdf}
\caption{Deleting a square causes several moves. The
deleted square is marked with a cross. Once it is unassigned, the
squares are checked in reverse z-order until square 1, which
fits. Afterwards, there is a now maximally empty pixel into which
square 2 can be moved. Finally, the same happens for square 3.}
\label{fig:invstrategy}
\end{figure}
\section{General Squares and Rectangles}\label{sec:generals}
Due to limited space and for clearer exposition,
the description in the previous three sections considered aligned squares.
We can adapt the technique to general squares
and even rectangles at the expense of a constant factor.
{\color{black}
To accommodate a non-aligned square, we pack it like an
aligned square of the next larger volume. That is, a square of side length
$s$ with $2^{-(i+1)} < s < 2^{-i}$ for some $i \in \mathbb{N}_0$ is
assigned to an $i$-pixel. This approach results in space that cannot
be used to assign squares, even though the remaining capacity
would suffice, and we can no longer guarantee to fit every valid sequence of
squares into the unit square. However,
we can guarantee to pack every such sequence into a $4$-underallocated
unit square (i.e., a $2 \times 2$ square), as every square is assigned
to a pixel that can hold no more than four times its volume. Most
importantly, our reallocation schemes continue to work in this setting
without any modifications. An example allocation is shown in
Figure~\ref{fig:quadtree}, where solid gray areas are assigned squares
and shaded areas indicate wasted space.
Note that a satisfactory reallocation scheme for arbitrary squares
with no or next to no underallocation is unlikely. Even the problem
of handling a sequence of insertions of total volume at most one,
without considering dynamic deletions and reallocation, requires
underallocation. This problem is known as {\em online square packing}, see
Fekete and Hoffmann~\cite{fh-ossp-13,fh-ossp-17};
currently, the best known approach results in
$5/2$-underallocation, see Brubach~\cite{brubach_improved_2014}.
}
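The rounding step can be sketched as follows (our illustration; the function name is ours):
\begin{verbatim}
import math

def pixel_level(s: float) -> int:
    """Level i of the smallest aligned pixel (side 2**-i) that can
    hold a square of side s with 0 < s <= 1; at most a factor 4 of
    area is wasted, and aligned squares fit exactly."""
    return math.floor(-math.log2(s))

# Example: a square of side 0.3 goes into a 1-pixel (side 0.5);
# the wasted area factor is (0.5 / 0.3)**2 < 4.
\end{verbatim}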
\begin{figure}[h!]
\centering
\includegraphics[width=.7\linewidth]{img/general_squares.pdf}
\caption{\label{fig:quadtree}Example of a dynamically generated
quadtree layout. {\color{black}The solid gray areas are packed
squares. Shaded areas represent space lost due to rounding.}
}
\end{figure}
Rectangles of bounded aspect ratio $k$ are dealt with in the same way.
Also accounting for intermodule communication, every rectangle is
padded to the size of the next larger aligned square and assigned to
the node of a quadtree, at a cost not exceeding a factor of $4k$
compared to the one we established for the worst case.
{\color{black} As described in the following section, this theoretical bound
is rather pessimistic: the performance in basic simulation runs
is considerably better.}
\section{\color{black}Simulation Results}\label{sec:experiments}
We carried out a number of {\color{black} simulation runs to get an idea of the potential performance
of our approach}. For each test, we generated a random sequence of $1000$ requests
that were chosen as \textsc{Insert($\cdot$)} (probability $0.7$) or \textsc{Delete($\cdot$)} (probability $0.3$).
We apply a larger probability for \textsc{Insert($\cdot$)} to avoid the (relatively simple)
situation that repeatedly just a few rectangles are inserted and deleted, and in order
to observe the effects of increasing congestion. The individual modules were generated
by considering an upper
bound $b \in [0,1]$ for the side lengths of the considered squares. For
$b=0.125$, the value of the current underallocation seems to be stable except
for the range of the first $50$-$150$ requests. For $b=1$, the current
underallocation may be unstable, which can be explained by a
simple observation: a larger $b$ allows larger rectangles, which induce
underallocations of up to $4k$.
Our {\color{black} simulations indicate
that the theoretical worst-case bound of $1/(4k)$ may be overly pessimistic, see Figures~\ref{fig:experimentsA}--\ref{fig:experimentsF}}.
\textcolor{black}{In particular, the $x$-axis represents the number of operations and the $y$-axis the inverse value of the underallocation. The red curves, which illustrate the inverse values of the underallocation, stay above the worst-case value of $1/(4k)$, i.e., the underallocation stays below $4k$.}
Taking into account that a purely one-dimensional approach cannot provide
an upper bound on the achievable underallocation, this {\color{black} provides reason to be optimistic about the potential
practical performance.}
{\color{black} Simulations of} the First-Fit approach for different values of
$k$ and upper bounds of $b = 0.125$ and $b=1$ for the side length of the
considered squares {\color{black} are shown in Figures~\ref{fig:experimentsA}--\ref{fig:experimentsF}.
Each diagram illustrates one run of $1000$ requests generated as described
above. The red graph shows the total current
underallocation after each request. The green graph shows the average of the
total underallocation in the range between the first and the current request.
We denote by~$c$ the number of collisions, i.e., the situations in which an
\textsc{Insert($\cdot$)} cannot be processed.}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=5.2cm]{img/experiments/daten_n1000_k1_bound0125_p07.pdf}
\end{center}
\caption{\textcolor{black}{Number of operations ($x$-axis) vs. the inverse value of underallocation ($y$-axis) for the setting} $k=1$, $b=0.125$, $c=219$}
\label{fig:experimentsA}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=5.2cm]{img/experiments/daten_n1000_k1_bound1_p07.pdf}
\end{center}
\caption{\textcolor{black}{Number of operations ($x$-axis) vs. the inverse value of underallocation ($y$-axis) for the setting} $k=1$, $b=1$, $c=419$}
\label{fig:experimentsB}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=5.2cm]{img/experiments/daten_n1000_k2_bound0125_p07.pdf}
\end{center}
\caption{\textcolor{black}{Number of operations ($x$-axis) vs. the inverse value of underallocation ($y$-axis) for the setting} $k=2$, $b=0.125$, $c=232$}
\label{fig:experimentsC}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=5.2cm]{img/experiments/daten_n1000_k2_bound1_p07.pdf}\\
\end{center}
\caption{\textcolor{black}{Number of operations ($x$-axis) vs. the inverse value of underallocation ($y$-axis) for the setting} $k=2$, $b=1$, $c=438$}
\label{fig:experimentsD}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[height=5.2cm]{img/experiments/daten_n1000_k3_bound0125_p07.pdf}
\end{center}
\caption{\textcolor{black}{Number of operations ($x$-axis) vs. the inverse value of underallocation ($y$-axis) for the setting} $k=5$, $b=0.125$, $c=264$}
\label{fig:experimentsE}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[height=5.2cm]{img/experiments/daten_n1000_k5_bound1_p07.pdf}\\
\end{center}
\caption{\textcolor{black}{Number of operations ($x$-axis) vs. the inverse value of underallocation ($y$-axis) for the setting} $k=5$, $b=1$, $c=421$}
\label{fig:experimentsF}
\end{figure}
\section{Conclusions}
\label{sec:conc}
We have presented a data structure for exploiting
the full dimensionality of dynamic geometric storage and reallocation
tasks, such as online maintenance of the module layout for an FPGA.
These first results indicate that our approach is suitable for
making progress over purely one-dimensional approaches.
There are several possible refinements and extensions, including
a more sophisticated way of handling rectangles inside of
square pieces of the subdivision, handling heterogeneous chip areas,
and advanced algorithmic methods. These will be addressed in future work.
{\color{black}
Another aspect of forthcoming work is explicitly self-refining intermodule wiring.
As indicated in Section~3 (and illustrated in Figure~\ref{fig:config}),
dynamically maintaining this communication infrastructure can be envisioned along the subdivision
of the recursive quadtree structure: reserving a certain proportion of each cell area for routing provides
dynamically adjustable bandwidth, along with intersection-free routing, as shown in Figure~\ref{fig:config}.
First steps in this direction have been taken with an MA thesis~\cite{meyer}, with more work
to follow; this also addresses the aspect of robustness of communication in a hostile
environment that may cause individual connections to fail.}
\section{Introduction}
Outflows in active galactic nuclei (AGNs) play an important role in galaxy evolution. Recent studies indicate that the outflow is regulated by the accretion process (Sulentic et al. 2000; Leighly \& Moore 2004; Richards et al. 2011; Wang et al. 2011; Marziani \& Sulentic 2012).
By carrying away angular momentum,
the outflowing gas is crucial for maintaining the accretion onto the central black hole (BH) (Sulentic et al. 2000; Higginbottom et al. 2013; Feruglio et al. 2015; Fontanot et al. 2015), thereby regulating the growth of the central supermassive BH. Moreover, outflows are considered able to affect star formation in the host galaxies (Silk \& Rees 1998).
As one of the important phenomena in quasars, outflows leave prominent imprints in quasar spectra,
such as blueshifted broad absorption lines (BALs; Weymann et al. 1991) and broad emission lines (BELs; Gaskell 1982).
To date, our understanding of outflows is mainly based on the analysis of BALs and/or BELs.
BALs appear in the spectra of 10-15\% of optically selected quasars. These quasars often show absorption from both high- and low-ionization ions, such as N\,{\footnotesize V}, C\,{\footnotesize IV}, Si\,{\footnotesize IV}, O\,{\footnotesize VI}, Al\,{\footnotesize III}\ and Mg\,{\footnotesize II}\ (Hall et al. 2002; Tolea et al. 2002; Hewett \& Foltz 2003; Reichard et al. 2003; Trump et al. 2006; Gibson et al. 2009; Zhang et al. 2010, 2014). Studies of BALs can place constraints on the physical properties of the outflows, which is helpful for understanding the connection between the evolution of SMBHs and their host galaxies.
However, because each quasar is observed along a single line of sight, the covering factor, an important parameter of BAL outflows, is difficult to determine for an individual quasar. For most BAL quasars,
the covering factor of outflows is usually derived in a statistical way from a sample of sources,
so that the estimates of other properties may be unreliable.
As another important signature of outflows, blueshifted BELs were first detected in the
high-ionization lines (e.g., C\,{\footnotesize IV}; Gaskell 1982; Wilkes 1984). The blueshifted BELs are
difficult to reconcile with gravitationally bound BELR models, but can be considered a
signature of outflowing gas (Gaskell 1982; Marziani et al. 1996; Leighly 2004; Wang et al. 2011).
Recently, blueshifted BELs have also been found in low-ionization lines, such as Mg\,{\footnotesize II}, which
can be interpreted as the signature of a radiation-driven wind or outflow (Marziani et al. 2011).
Different from BALs, the integrated flux of blueshifted BELs reflects the global properties of
the outflowing gas. The equivalent widths (EWs) and line ratios can be used to impose strong constraints on the
density, ionization state, and geometry of the line-emitting gas (Liu et al. 2016). However, in most
of the quasars with blueshifted BELs, these components are blended with the normal BELs
emitted from the broad line region (BLR), and decomposing them is a challenging task.
This paper presents a detailed emission-line and absorption-line analysis of SDSS J163345.22+512748.4 (hereafter SDSS J1633+5127), a type-1 quasar at $z = 0.6289$ with outflows revealed in both blueshifted BELs and BALs. Since its Mg\,{\footnotesize II}\ emission line is dominated by the blueshifted BELs, the uncertainty in decomposing them from the normal BELs is small. Besides Mg\,{\footnotesize II}, the UV Fe\,{\footnotesize II}\ and H$\alpha$\ lines also show similar blueshifted BEL components. These blueshifted lines can be considered to be emitted from outflows, whose properties can be inferred from the EWs and line ratios of the BELs.
Combined with the properties of the BALs, we provide new insights into the outflowing gas.
The paper is organized as follows. The observational data are described in Section 2,
the emission lines are analyzed in Section 3, and the absorption lines in Section 4.
Throughout this paper, we adopt the cosmological parameters $H_0=70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\rm M}=0.3$, and
$\Omega_{\Lambda}=0.7$.
\section{Observation and Data Reduction}
SDSS J1633+5127 was imaged by the SDSS on February 8, 2001. The point-spread-function magnitudes measured from the images are $18.59 \pm 0.04$, $18.04 \pm 0.01$, $18.23 \pm 0.01$, $17.80 \pm 0.01$, and $17.76 \pm 0.02$ in the u, g, r, i, and z bands,
respectively, shown as black diamonds in Fig.\ref{f1} (a).
The optical spectrum of SDSS J1633+5127 was obtained by the Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al. 2013) on October 23, 2011, whose spectrographs (Smee et al. 2013) cover the wavelength range 3600-10500 \AA.
The spectrum we used was extracted from the BOSS Data Release 10 (DR10; Ahn et al. 2014). After correcting
for the Galactic reddening of $E(B - V) = 0.051$ (Schlafly \& Finkbeiner 2011), the spectrum
is presented as the black curve in panel (a) of Fig.\ref{f1}.
The comparison with the SDSS photometry clearly indicates that the spectrum has a bluer continuum slope and a lower flux density
than the photometry at longer wavelengths.
We also calculated the spectral synthetic magnitudes in the g, r, i, and z bands,
which are shown as blue diamonds.
The latter three magnitudes are even $\sim 1$ mag lower than the photometry.
This difference is possibly due to the BOSS spectrophotometric calibration uncertainty
or to variability in the 6.5 rest-frame years between the two observations.
The Catalina Surveys Data Release 2\footnote{The Catalina Web site is http://nesssi.cacr.caltech.edu/DataRelease/. } (Drake et al. 2014) gives us an opportunity to clarify this issue. SDSS J1633+5127 was monitored for eight observing seasons, beginning on April 10, 2005. Each observing season, spanning from October to April of the next year, contains about 50 photometric observations.
Since SDSS J1633+5127 is faint, the individual photometric errors are large (about 0.5 mag) and
some observed magnitudes show large offsets from their neighbouring data, likely due to noise fluctuations.
To display the light curve clearly, the photometric data in each observing season are combined and presented in
panel (b) of Fig.\ref{f1}; they show very weak long-term variability with large measurement errors.
The intrinsic variability amplitude $\sigma_V=0.06~\rm mag$ ($=\sqrt{\Sigma_V^2-\xi^2}$; Ai et al. 2010)
is much smaller than the offset between the SDSS photometry and the spectrum.
This suggests that the difference between the SDSS photometry and the spectrum is more likely
caused by the spectrophotometric calibration uncertainties.
Thus, we fitted a 2nd-order polynomial to the flux ratios between the SDSS photometry and the
spectral synthetic magnitudes
in the g, r, i, and z bands,
which are shown in panel (c) of Fig.\ref{f1}.
Using the fitted flux ratio at each wavelength bin,
we then scaled the spectrum to match the SDSS photometry,
obtaining a recalibrated spectrum, which is shown in green in Fig.\ref{f1} (panel (a)).
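A minimal sketch of this recalibration step (our illustration; the band wavelengths are approximate SDSS effective wavelengths, and the flux arrays are placeholders, not the measured values):
\begin{verbatim}
import numpy as np

# Approximate SDSS effective wavelengths (Angstrom) for g, r, i, z and
# placeholder photometric / synthetic fluxes (arbitrary units).
band_wl    = np.array([4686., 6166., 7480., 8932.])
phot_flux  = np.array([1.00, 1.10, 1.25, 1.30])
synth_flux = np.array([1.00, 1.00, 0.95, 0.90])

# 2nd-order polynomial fit to the per-band flux ratios ...
coeffs = np.polyfit(band_wl, phot_flux / synth_flux, deg=2)

# ... applied to every pixel of the (placeholder) spectrum.
spec_wl    = np.linspace(3600., 10500., 4000)
spec_flux  = np.ones_like(spec_wl)
spec_recal = spec_flux * np.polyval(coeffs, spec_wl)
\end{verbatim}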
In the infrared bands, we collected photometric data of SDSS J1633+5127 from
the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006) and
the Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010).
We also obtained near-infrared (NIR) spectra of SDSS J1633+5127
using TripleSpec (Wilson et al. 2004) on the 200-inch Hale telescope at Palomar Observatory.
Four exposures of 300 s each were taken in an A-B-B-A dithering mode with the primary configuration
of the instrument. A 1.1\arcsec\ slit was chosen to match the seeing. The TripleSpec NIR spectrograph provides
simultaneous wavelength coverage from 0.9 to 2.46 microns at a resolution of 1.4 - 2.9 \AA.
The raw data were processed using the IDL-based Spextool software (Cushing et al. 2004).
There are two gaps in the infrared spectrum, around 1.35 and 1.85 microns, due to the
atmospheric transmissivity.
Fortunately, the redshifted H$\alpha$\ emission line is detected with TripleSpec in the J band.
After masking the bad and seriously skyline-polluted pixels, we created a new spectrum by combining
the recalibrated optical spectrum with the TripleSpec NIR spectrum for the following analysis.
The systemic redshift of $z = 0.6289\pm0.0051$ reported by Paris et al. (2014) is consistent with that
derived from the narrow [O\,{\footnotesize II}]\ and [O\,{\footnotesize III}]\ lines and from the peaks of the broad H$\beta$\ and H$\alpha$\ lines. However, different from these lines, Mg\,{\footnotesize II}\ shows a blueshifted profile, with a blueshift of the peak emission of about 2000 $\rm km~s^{-1}$. After conversion to the quasar rest frame, the spectrum and the spectral energy distribution (SED) from the ultraviolet (UV) to the mid-infrared (MIR) from the SDSS, 2MASS, and WISE are shown as the black curve and green points in panel (a) of Fig.\ref{f2}.
The broad-band SED of SDSS J1633+5127 is decomposed into a power law with an
index of -1.3 (cyan) and two black bodies with temperatures of 1232 K and 312 K (red dotted), respectively.
Compared to the quasar composite spectrum (Zhou et al. 2010), the SED of SDSS J1633+5127 shows a
clear excess in the NIR bands.
As this is a common feature of BAL quasars, in which strong hot dust emission is found (Zhang et al. 2014),
this excess may hint at the existence of BALs in the spectrum.
Indeed, as shown in the inset panel of Fig.\ref{f2} (a),
a BAL trough is present at about 7000 $\rm km~s^{-1}$\ blueshifted with respect to He\,{\footnotesize I*} $\lambda$10830\ in the NIR spectrum.
\section{Emission Lines Analysis}
\subsection{UV \& Optical Fe\,{\footnotesize II}\ Multiplets}
The Mg\,{\footnotesize II}\ broad emission line, which is dominated by the blueshifted component, is the most
remarkable characteristic of SDSS J1633+5127. The blueshift velocity of the Mg\,{\footnotesize II}\ peak is about 2200 $\rm km~s^{-1}$.
To obtain the profile of the Mg\,{\footnotesize II}\ emission line precisely, the UV Fe\,{\footnotesize II}\ multiplets should be
fitted and subtracted first. Interestingly, in the analysis of the UV Fe\,{\footnotesize II}\ multiplets, we find that they are
also blueshifted, with a velocity close to that of Mg\,{\footnotesize II}.
This is supported by the following three pieces of evidence.
First, the valley between the two spikes of the Fe\,{\footnotesize II}\ multiplets UV 60 and UV 61 is an important feature
of the UV Fe\,{\footnotesize II}\ pseudocontinuum emission around Mg\,{\footnotesize II}.
In panel (a) of Fig.\ref{f3}, we present this valley in the rest frame of SDSS J1633+5127 as the black curve.
For comparison, we also plot the scaled spectrum of the NLS1 AGN IZW1 in cyan (shifted to its
rest frame).
Despite the influence of Fe\,{\footnotesize I}\ and \heitnff\ in 2930-3000\AA, the valley of SDSS J1633+5127
is blueshifted with respect to that of IZW1. To make this clearer, we manually blueshifted the scaled spectrum of
IZW1 by 2200 $\rm km~s^{-1}$\ (orange curve).
The valley then agrees with that of SDSS J1633+5127, indicating that
the UV Fe\,{\footnotesize II}\ multiplets of SDSS J1633+5127 are blueshifted.
Second, we selected from the BOSS DR10 five normal quasars whose UV Fe\,{\footnotesize II}\ multiplets
and Mg\,{\footnotesize II}\ are not blueshifted.
After scaling with a power-law curve, we constructed the composite spectrum of the five quasars and then
matched it to the Mg\,{\footnotesize II}\ peak of SDSS J1633+5127; it is plotted as the yellow line in panel (b) of Fig.\ref{f3}.
Since the UV Fe\,{\footnotesize II}\ and Mg\,{\footnotesize II}\ in normal quasars are considered to have the same velocity relative to
the systemic redshift, the agreement between the normal quasars and SDSS J1633+5127 suggests that
the blueshift velocity of the UV Fe\,{\footnotesize II}\ multiplets in SDSS J1633+5127 is close to that of Mg\,{\footnotesize II}.
The last piece of evidence comes from quantitative measurements of the UV Fe\,{\footnotesize II}\ multiplets. We used a combination of a single power-law continuum and UV Fe\,{\footnotesize II}\ multiplets to fit the spectrum of SDSS J1633+5127 in the rest-frame wavelength range 2200-3000 \AA. The model can be written as
\begin{equation}
F_{\rm model}(\lambda)=C_{1} \lambda^{C_{2}}+C_3 f(v_0,\sigma),
\end{equation}
where $C_{1} \lambda^{C_{2}}$ is a power law fitting the continuum and $C_3 f(v_0,\sigma)$ models the UV Fe\,{\footnotesize II}\ multiplets; $v_0$ and $\sigma$ represent the velocity shift and the broadening width
of the UV Fe\,{\footnotesize II}, respectively. In the fitting process, $v_0$ is fixed at a given value,
while $C_1$, $C_2$, $C_3$ and $\sigma$ are free parameters whose best-fit values are found by minimizing $\chi^2$.
To compare the fitting results for different given values of $v_0$, we select the most remarkable
UV Fe\,{\footnotesize II}\ features, the red shape of UV 1 and the gap between UV 60 and UV 61, marked by the gray-shaded
regions in panel (c) of Fig.\ref{f3}, to calculate the reduced $\chi_e^2$. The parameter $v_0$ is first fixed at 0, meaning that the
UV Fe\,{\footnotesize II}\ has no shift with respect to the quasar rest frame. The result is displayed in red in panel (c) of Fig.\ref{f3}, with
a reduced $\chi_e^2=3.03$. We then fixed $v_0$ at -2200 $\rm km~s^{-1}$, meaning that the UV Fe\,{\footnotesize II}\ multiplets are blueshifted at
the same velocity as Mg\,{\footnotesize II}.
This result is displayed in blue, with a reduced $\chi_e^2=1.27$, an obvious improvement
over $v_0=0$.
To trace the variation of the reduced $\chi_e^2$ as a function of $v_0$, we ran a series of fits over
a grid of $v_0$ values.
The resulting $\chi_e^2$ as a function of $v_0$ is plotted in the inset panel of Fig.\ref{f3}(c).
The reduced $\chi_e^2$ at $v_0=-2200$ $\rm km~s^{-1}$\ is very close to the minimum value of the reduced $\chi_e^2$ (1.17),
confirming that the velocity shift of the UV Fe\,{\footnotesize II}\ is indeed close to that of Mg\,{\footnotesize II}.
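The grid search over $v_0$ can be sketched as follows (our illustration, with placeholder arrays standing in for the spectrum, the errors, and the Fe\,{\footnotesize II}\ template):
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares
from scipy.ndimage import gaussian_filter1d

C_KMS = 299792.458
wl   = np.linspace(2200., 3000., 1600)       # rest wavelengths (placeholder)
flux = np.ones_like(wl)                      # placeholder spectrum
err  = 0.05 * flux                           # placeholder errors
fe   = np.ones_like(wl)                      # placeholder Fe II template
win  = (wl > 2565) & (wl < 2665)             # chi^2 window (UV 1 red shape)

def model(p, v0):
    c1, c2, c3, sigma_pix = p
    shifted = np.interp(wl, wl * (1 + v0 / C_KMS), fe)   # Doppler shift
    return c1 * wl**c2 + c3 * gaussian_filter1d(shifted, sigma_pix)

chi2 = {}
for v0 in np.arange(-4000., 1000., 200.):    # km/s grid for the shift
    res = least_squares(lambda p: ((model(p, v0) - flux) / err)[win],
                        x0=[1.0, -1.5, 1.0, 3.0],
                        bounds=([0., -5., 0., 0.1],
                                [np.inf, 5., np.inf, 50.]))
    chi2[v0] = 2 * res.cost / (win.sum() - 4)            # reduced chi^2
\end{verbatim}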
Previous studies of the UV and optical Fe\,{\footnotesize II}\ have shown that there is no obvious velocity offset between
the two components (Sameshima et al. 2011). However, this conclusion is based on quasar samples in which the
UV and optical Fe\,{\footnotesize II}\ are nearly at the systemic redshift. As mentioned above, the UV Fe\,{\footnotesize II}\ multiplets
of SDSS J1633+5127 are blueshifted by a velocity of about 2200 $\rm km~s^{-1}$,
and it is not clear whether the optical Fe\,{\footnotesize II}\ in SDSS J1633+5127 has the same blueshift.
Thus, we first compared the optical Fe\,{\footnotesize II}\ of SDSS J1633+5127 (black) to the scaled spectrum of IZW1 (cyan)
in the wavelength range 5100\AA\ to 5400\AA, as shown in the inset panel of Fig.\ref{f3} (d).
Different from the UV Fe\,{\footnotesize II}\ multiplets, the peaks of the strong optical Fe\,{\footnotesize II}\ lines are close to those of IZW1, whose
velocity shift corresponds to the systemic redshift of the source.
Furthermore, a model with a single power-law continuum and optical Fe\,{\footnotesize II}\ multiplets was also used to fit the
spectrum of SDSS J1633+5127 in the wavelength range 4000-6000\AA.
The fitting result with the velocity shift fixed at 0 is plotted in red in Fig.\ref{f3} (d).
Consistent with the empirical comparison above, no obvious velocity shift is found.
According to the analysis above, the UV and optical Fe\,{\footnotesize II}\ have different velocity shifts.
A reasonable assumption is that there are two Fe\,{\footnotesize II}\ emitters in SDSS J1633+5127.
One emits the blueshifted, strong UV Fe\,{\footnotesize II}\ but faint optical Fe\,{\footnotesize II},
which could arise from the outflowing gas (Gaskell 1982; Marziani et al. 1996; Leighly 2004).
The other is the normal BLR, where the UV Fe\,{\footnotesize II}\ is faint but the optical Fe\,{\footnotesize II}\ is strong.
Thus, we decomposed the UV-optical Fe\,{\footnotesize II}\ multiplets in SDSS J1633+5127 into two components,
one blueshifted and the other at the systemic redshift of the quasar.
For each component, we employed the same program as above to fit the UV and optical Fe\,{\footnotesize II}.
In the fitting program, the velocity shifts of the UV and optical Fe\,{\footnotesize II}\ of each component are tied and allowed to vary.
The initial velocity shift of the blueshifted component is -2200 $\rm km~s^{-1}$\ and that of
the rest-frame component is 0.
The fitting results for the two components are listed in Table 2 and shown in Fig.\ref{f4}.
Following Sameshima et al. (2011), the total optical Fe\,{\footnotesize II}\ flux in 4435-4685 \AA\ is chosen
as the intensity measure of the optical Fe\,{\footnotesize II}\ multiplets,
and the total flux of the Fe\,{\footnotesize II}\ UV 1 multiplet (2565-2665 \AA; Baldwin et al. 2004) is selected
as the intensity measure of the UV Fe\,{\footnotesize II}.
It should be noted that F-tests indicate that the blueshifted optical Fe\,{\footnotesize II}\ and the rest-frame UV Fe\,{\footnotesize II}\
components are not statistically required in the fits.
Thus, we consider the fitted fluxes of the blueshifted optical Fe\,{\footnotesize II}\ and the rest-frame UV Fe\,{\footnotesize II}\ as upper limits.
\subsection{Narrow Emission Lines}
After subtracting the UV \& optical continuum and the UV \& optical Fe\,{\footnotesize II}\ multiplets, we obtain
the Mg\,{\footnotesize II}\ broad emission line, the H$\beta$\ broad emission line blended with the H$\beta$\ and [O\,{\footnotesize III}]\ narrow lines,
and the H$\alpha$\ broad emission line blended with the H$\alpha$, [N\,{\footnotesize II}], and [S\,{\footnotesize II}]\ narrow emission lines.
With the help of the isolated narrow emission line [O\,{\footnotesize II}], we can derive the profiles of the other
narrow emission lines, which are then used to deblend the H$\beta$\ and H$\alpha$\ broad lines.
To measure the [O\,{\footnotesize II}]\ emission line, we masked out the spectrum in the velocity range -1500 to 1500 $\rm km~s^{-1}$\ and used
a 3rd-order spline curve to fit the local continuum of [O\,{\footnotesize II}], displayed as the cyan dashed line
in the top panel of Fig.\ref{f5}. The [O\,{\footnotesize II}]\ feature consists of two narrow emission lines, [O\,{\footnotesize II}]\ 3729 and [O\,{\footnotesize II}]\ 3726.
Each narrow emission line is fitted with one Gaussian, and
the two Gaussians have the same profile in their own velocity space.
The line ratio of [O\,{\footnotesize II}]\ I(3729)/I(3726) is first fixed at 1.
The fitting result is given in Table 2.
According to Pradhan et al. (2006), the line ratio of [O\,{\footnotesize II}]\ can vary from 0.35 to 1.5.
We also tried models with different line ratios, with the width and wavelength shift constrained to vary by less than 30 $\rm km~s^{-1}$.
Assuming that the width of [O\,{\footnotesize II}]\ approximates the velocity dispersion of the host bulge,
the mass of the central BH can be estimated as log $M_{BH}/M_\odot = 8.8 \pm 1.3$,
according to the relation of Ferrarese \& Merritt (2000).
Besides [O\,{\footnotesize II}], we also fitted the [O\,{\footnotesize III}]\ 5007 narrow emission line, although it is blended with the H$\beta$\ broad emission line.
As shown in the bottom panel of Fig.\ref{f5}, the intensity of the H$\beta$\ BEL extended to the [O\,{\footnotesize III}]\ wavelength region is $\sim$1,
while the flux of the [O\,{\footnotesize III}]\ NEL is $\sim$10. Thus, the influence of the H$\beta$\ BEL on the [O\,{\footnotesize III}]\ fit can be ignored.
We modeled the [O\,{\footnotesize III}]\ line with one Gaussian and the results are given in Table 2.
Note that the modeled profile of [O\,{\footnotesize III}]\ is very close to that of [O\,{\footnotesize II}].
\subsection{Broad Emission Lines}
Based on the analysis above, we derived the Mg\,{\footnotesize II}, H$\beta$\ and H$\alpha$\ BELs of SDSS J1633+5127;
these BELs are displayed in the corresponding velocity space in Fig.\ref{f6}.
Following Wang et al. (2011), for a specific emission line, the parameter BAI is defined as the flux ratio of the blue part to the total profile, where the blue part is the portion of the emission line at wavelengths shorter than
its laboratory rest-frame wavelength. For the Mg\,{\footnotesize II}\ doublet, the rest-frame wavelength is set to 2799.4 \AA, obtained from the Mg\,{\footnotesize II}\ line core of IZW1. Based on this definition, the BAI of Mg\,{\footnotesize II}\ in SDSS J1633+5127 is $0.85 \pm 0.01$; note that
if a possible Mg\,{\footnotesize II}\ NEL were taken into account, this value would be even larger.
This indicates that Mg\,{\footnotesize II}\ is dominated by the blueshifted component.
Similarly, the BAI of H$\beta$\ and H$\alpha$\ is $\sim$0.56 and $\sim$0.54, respectively,
suggesting that blueshifted components of the H$\beta$\ and H$\alpha$\ BELs are also detected.
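The BAI measurement itself is a simple integral ratio; a minimal sketch (ours, with a synthetic line as input):
\begin{verbatim}
import numpy as np

def bai(wl, flux, wl_rest):
    """Blueshift asymmetry index: line flux blueward of the rest
    wavelength divided by the total line flux (Wang et al. 2011)."""
    blue = wl < wl_rest
    return np.trapz(flux[blue], wl[blue]) / np.trapz(flux, wl)

# Example: a Gaussian line peaking ~2200 km/s blueward of 2799.4 A.
wl   = np.linspace(2700., 2900., 400)
flux = np.exp(-0.5 * ((wl - 2779.) / 15.) ** 2)
print(bai(wl, flux, 2799.4))   # ~0.91: dominated by the blue part
\end{verbatim}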
Given these asymmetries, we decomposed the Mg\,{\footnotesize II}, H$\beta$, and H$\alpha$\ BELs into two components: one blueshifted and emitted
from the outflow, the other in the quasar rest frame from the normal BLR.
The blueshifted component was modelled with one Gaussian,
while the component from the normal BLR was fitted with multiple Gaussians.
For the latter, we started from one Gaussian and inspected visually the resulting
$\chi^2$ and residuals to determine the goodness of fit.
When the best possible fit was not achieved, we added another Gaussian with a relative
velocity shift of less than 100 $\rm km~s^{-1}$. The fit was repeated until the $\chi^2$ was minimized
with no further improvement in statistics.
In the fitting process, the intensity of each line is free except the ratio of the Mg\,{\footnotesize II}\ doublet,
which was held fixed at 1:1.
For SDSS J1633+5127, three Gaussians are sufficient to fit the non-blueshifted BEL component.
All these Gaussians were simultaneously fitted through the above iterative $\chi^2$-minimization process,
and the fitting results are summarized in Table 2.
Besides the BELs, the H$\beta$\ and H$\alpha$\ regions also include
the [N\,{\footnotesize II}], [S\,{\footnotesize II}], and Balmer NELs.
Each NEL was modelled with one Gaussian, for which the velocity shift and
width were fixed at the values derived from [O\,{\footnotesize II}], assuming that all NELs in
the spectrum have a profile similar to [O\,{\footnotesize II}].
During the fitting, we noted that absorption troughs are present
around the Mg\,{\footnotesize II}\ emission line. To eliminate the effect of the absorption lines,
we first fitted the Mg\,{\footnotesize II}\ emission line with one Gaussian, and then masked out those pixels
of absorption features deviating strongly from the model.
In addition to the NEL, [O\,{\footnotesize III}]\ often contains a blue
outlier (e.g. Komossa et al. 2008; Zhang et al. 2011). For SDSS J1633+5127,
however, an F-test suggests that an additional Gaussian for the blue outlier is not required.
Based on the profile of the H$\beta$\ rest-frame component, we derived a central BH mass of
log $M_{BH}/M_\odot=8.37\pm0.27$, consistent with the mass estimated from [O\,{\footnotesize II}].
The intensity ratio of the blueshifted Mg\,{\footnotesize II}\ to H$\alpha$\ is useful for constraining the properties of the outflowing gas.
However, the decomposition of the blueshifted H$\alpha$\ may be model-dependent, leading to uncertainty in the intensity ratio.
We therefore determined its upper and lower limits. The lower limit of the line ratio of Mg\,{\footnotesize II}\ to H$\alpha$\
can be estimated as shown in the left panel of Fig. 7,
in which the fluxes of H$\alpha$\ and Mg\,{\footnotesize II}\ are normalized by the peak of H$\alpha$.
The total Mg\,{\footnotesize II}\ emission line (red) is obviously
blueshifted and its observed red side reaches only about 1000 $\rm km~s^{-1}$.
Under the assumption that the rest-frame component of the broad emission lines arises from the normal BLR, in which
the predominant motion is either Keplerian or virial
(see Gaskell 2009 for a review), the Mg\,{\footnotesize II}\ rest-frame component is expected to be symmetric.
The red side of the observed Mg\,{\footnotesize II}\ is affected by the absorption line (Fig. 6), and
the red side of the modelled total Mg\,{\footnotesize II}\ reaches 3000 $\rm km~s^{-1}$;
by symmetry, the blue side of the Mg\,{\footnotesize II}\ rest-frame component then extends to -3000 $\rm km~s^{-1}$.
Thus, we selected the part of Mg\,{\footnotesize II}\ with relative velocities between -5000 $\rm km~s^{-1}$\ and -3000 $\rm km~s^{-1}$,
where the Mg\,{\footnotesize II}\ flux is prominent and the influence of the Mg\,{\footnotesize II}\ rest-frame component is small.
For H$\alpha$\ in the same relative velocity range, however, the emission-line flux includes that
of the rest-frame component.
Hence the line ratio of the blueshifted Mg\,{\footnotesize II}\ to H$\alpha$\ in this velocity range can be considered
a lower limit, which is estimated to be 0.46.
As shown in the right panel of Fig. 7, based on the same assumption that the H$\alpha$\ rest-frame component is symmetric, a lower limit on the blueshifted broad H$\alpha$\ can be estimated by subtracting the mirrored red-side flux from the total on the blue side.
The residual flux on the blue side is shown in green.
The resulting line ratio of Mg\,{\footnotesize II}\ to H$\alpha$\ in the velocity range between -5000 $\rm km~s^{-1}$\ and -3000 $\rm km~s^{-1}$\
is close to 1, which can be considered the upper limit.
\subsection{Ionization Model for Blueshifted Emission Lines}
Because the blueshift velocities of the UV Fe\,{\footnotesize II}, Mg\,{\footnotesize II}, and Balmer lines are nearly the same,
we suppose that these blueshifted components arise from the same outflowing gas.
Thus, we can infer the properties of the outflow from these line ratios, using mainly the
blueshifted Mg\,{\footnotesize II}/H$\alpha$\ and UV Fe\,{\footnotesize II}/H$\alpha$.
We did not use H$\beta$/H$\alpha$, as the H$\beta$\ and H$\alpha$\ blueshifted components are relatively weak;
the ratio between them may have large errors and hence be unreliable.
The blueshifted Mg\,{\footnotesize II}/H$\alpha$, as discussed above, was estimated to be $\sim$0.46--1.
The blueshifted UV Fe\,{\footnotesize II}/H$\alpha$\ equals the UV Fe\,{\footnotesize II}/Mg\,{\footnotesize II}\ times Mg\,{\footnotesize II}/H$\alpha$.
Because the blueshifted H$\alpha$\ component is relatively weak compared to the total line flux,
we expect the error of the H$\alpha$\ blueshifted component to be much larger than those of Mg\,{\footnotesize II}\ and the UV Fe\,{\footnotesize II}.
Taking this into account, the error of the blueshifted UV Fe\,{\footnotesize II}/H$\alpha$\ mainly comes from the error of
Mg\,{\footnotesize II}/H$\alpha$, and the error of the UV Fe\,{\footnotesize II}/Mg\,{\footnotesize II}\ can be neglected.
As shown in Table 2, the value of the blueshifted UV Fe\,{\footnotesize II}/Mg\,{\footnotesize II}\ is 0.75. Multiplied by the range
of the blueshifted Mg\,{\footnotesize II}/H$\alpha$, the blueshifted UV Fe\,{\footnotesize II}/H$\alpha$\ is estimated to be in the range $\sim$0.35-0.75.
The blueshifted Mg\,{\footnotesize II}/H$\alpha$\ is consistent with the value (0.48) derived from the quasar composite spectrum (Vanden Berk et al. 2001).
Although the blueshifted UV Fe\,{\footnotesize II}/H$\alpha$\ is much larger than that from the quasar composite spectrum (0.01),
it is consistent with the line ratio of a typical BLR derived from photoionization models
(Baldwin et al. 2004; Sameshima et al. 2011).
Therefore, the blueshifted components can be modelled with a
photoionization model, and the physical conditions of the outflowing gas are supposed to be
similar to those of the BLR in normal quasars.
The large-scale spectral synthesis code CLOUDY (c13.03; Ferland et al. 1998) is employed to
perform the photoionization modeling of the blueshifted broad emission lines.
The simulation results are compared with the luminosities and ratios of the
blueshifted components measured from the spectrum of SDSS J1633+5127.
In the photoionization simulations, solar elemental abundances are adopted and the gas is
assumed to be free of dust. To model the Fe\,{\footnotesize II}\ emission lines, we used a 371-level $\rm Fe^+$ model
that includes all energy levels up to 11.6 eV and calculates the strengths of 68,000 emission
lines (Verner et al. 1999). For simplicity in the computation, the geometry is assumed to be a slab-shaped emission medium with uniform density, metallicity, and abundance.
This medium is exposed to the ionizing continuum from the central engine,
with an SED as defined by Mathews \& Ferland (1987, hereafter MF87).
As shown in Fig. 8, an array of hydrogen column densities (N$_{\rm H}$)
from $10^{21}$ to $10^{24}$ cm$^{-2}$, stepped by 1 dex, was set in the simulations.
As discussed above, the ionization conditions may be nearly the same as in the BLR.
Thus, for each column density, the electron density (n$\rm _H$) of the outflowing gas was varied
from $10^7$ to $10^{14}$ cm$^{-3}$, and a grid of models was calculated by
varying the $n_H$ of the emitting gas in steps of 0.5 dex.
Finally, the logarithmic ionization parameter (log U) was sampled from -3.5 to 1.5 in steps of 0.5 dex.
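Such a grid can be generated programmatically. The sketch below is our illustration; the CLOUDY input keywords shown follow the c13 syntax as we recall it and should be verified against the documentation before use:
\begin{verbatim}
import itertools

logNH = [21, 22, 23, 24]                   # stop column density, log cm^-2
lognH = [x / 2 for x in range(14, 29)]     # 7.0 ... 14.0, step 0.5
logU  = [x / 2 for x in range(-7, 4)]      # -3.5 ... 1.5, step 0.5

for nh, u, col in itertools.product(lognH, logU, logNH):
    deck = "\n".join([
        "table agn",                       # MF87-like AGN continuum
        "hden %.1f" % nh,                  # log hydrogen density
        "ionization parameter %.1f" % u,   # log U
        "stop column density %.1f" % col,
        "iterate to convergence",
    ])
    with open("grid_n%.1f_U%.1f_N%d.in" % (nh, u, col), "w") as f:
        f.write(deck)
\end{verbatim}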
The calculated results are shown in Fig. 8, where we plot the contours of the blueshifted Mg\,{\footnotesize II}/H$\alpha$\ and
UV Fe\,{\footnotesize II}/H$\alpha$\ as functions of n$\rm _H$ and U. In each panel, the solid lines denote the basic models
and the filled areas represent the observed ranges at the $\rm 1 \sigma$ confidence level.
The simulation results indicate that the observed regions of Mg\,{\footnotesize II}/H$\alpha$\ and UV Fe\,{\footnotesize II}/H$\alpha$\
have no overlap when N$\rm _H \leq 10^{22}$ cm$^{-2}$; an overlap region starts to appear
for column densities N$\rm _H \geq 10^{23}$ cm$^{-2}$. Hence we consider $10^{23}$ cm$^{-2}$
a lower limit on the column density, which implies that the outflow in SDSS J1633+5127 may be
optically thick.
Thus, we adopted an ionization-bounded model (Ferland et al. 1998) to simulate the emitting gas in the outflow,
and the simulation results are plotted in Fig. 9.
In this model, the parameters n$\rm _H$ and U of the emitting gas are constrained to n$\rm _H$ from $10^{10.6}$ to $10^{11.3}$ cm$^{-3}$ and log U from -2.1 to -1.5. With n$\rm _H$ and U, the distance of the emitting gas from the central ionizing
source is derived as $R_{\rm emit} = (Q(H)/(4 \pi c U n_e))^{0.5}$, where $Q(H)$ is the number of ionizing photons,
$Q(H) = \int_{\nu_0}^{\infty} L_{\nu}/(h \nu)\, d\nu$. Based on the continuum luminosity at 5100\AA\ ($\rm \lambda L_{\lambda} (5100\AA) = 1.3 \times 10^{45}~erg~s^{-1}$) and the MF87 SED,
we derived $\rm Q(H) \approx 1.1 \times 10^{56}~photon~s^{-1}$.
Thus, we obtained a distance between the emitting gas and the central ionizing source of $\sim$0.1 pc.
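This estimate is easy to verify numerically from the quoted quantities (our check):
\begin{verbatim}
import numpy as np

C_CM  = 2.99792458e10          # speed of light [cm/s]
Q_H   = 1.1e56                 # ionizing photon rate [photon/s]
U     = 10 ** -1.8             # best-fit ionization parameter
N_E   = 10 ** 11.1             # best-fit density [cm^-3]
PC_CM = 3.0857e18              # one parsec [cm]

R_emit = np.sqrt(Q_H / (4 * np.pi * C_CM * U * N_E))
print(R_emit / PC_CM)          # ~0.12 pc, i.e., ~0.1 pc
\end{verbatim}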
Based on the best-fit parameter values from our photoionization modeling, namely
$\rm n_H = 10^{11.1}$ cm$^{-3}$ and log U = -1.8, we obtained a simulated EW of Mg\,{\footnotesize II}\ of 257 \AA.
Since this value is modelled under the assumption of full sky coverage of the outflowing gas,
the ratio of the observed EW of Mg\,{\footnotesize II}\ to the modelled one can be used to constrain the covering
factor $C_{f,emit}$ of the emitting gas. The observed EW of Mg\,{\footnotesize II}\ is about 45 \AA, suggesting that
$C_{f,emit}$ is about 0.16.
\section{Absorption Lines Analysis}
\subsection{Absorption-free Spectrum for the Absorption Lines}
As shown in Fig.\ref{f2}, a prominent BAL trough is present in the spectrum at about 7000 $\rm km~s^{-1}$\
blueshifted with respect to He\,{\footnotesize I*} $\lambda$10830.
This trough can be identified as the He\,{\footnotesize I*} $\lambda$10830\ BAL.
Guided by the location of this trough, we detected another BAL trough at about 7000 $\rm km~s^{-1}$\ blueshifted
with respect to \heiteen. In addition, at the same location in the respective velocity space,
a Mg\,{\footnotesize II}\ BAL is found in the spectrum. With these BALs, we are able to place constraints on the properties
of the absorbing outflow gas.
To measure these BALs, we first used the pair-matching method (Zhang et al. 2014; Liu et al. 2015) to
recover the absorption-free spectrum of SDSS J1633+5127. The absorption lines of interest
in the observed spectral regime include He\,{\footnotesize I*} $\lambda$10830, \heiteen, \heitoen\ and Mg\,{\footnotesize II}. For each absorption line, the pair-matching method was employed to obtain the absorption-free spectrum.
(1) He\,{\footnotesize I*} $\lambda$10830\ regime: As can be seen from Fig.\ref{f10}, with a large blueshift of 7000 $\rm km~s^{-1}$, the He\,{\footnotesize I*} $\lambda$10830\ BAL is well detached from the corresponding emission line. This spectral regime is largely free from other emission lines (see the quasar composite spectrum displayed in Fig.\ref{f2}; Zhou et al. 2010). The absorption-free flux recovered by the pair-matching method is mostly contributed by the featureless continuum, which is well reproduced by a power law plus black-body emission. We did not detect starlight from the host galaxy, and we interpret the power-law component as originating from the accretion disk of the quasar. The black-body component is generally believed to be hot-dust reradiation from the torus posited by the AGN unification schemes (e.g., Netzer 1995). After removal of the black-body component, we found that the residual flux in the BAL trough is still significant.
(2) \heiteen\ regime: The absorption-free flux around the \heiteen\ BAL is mainly contributed by the power-law continuum radiated by the accretion disk. The absorption depth of the deepest part of the \heiteen\ BAL trough is $\sim 20\%$ in the normalized spectrum (see Fig.\ref{f10}). Since the absorption strength ratio ($gf_{ik}\lambda$) of He\,{\footnotesize I*} $\lambda$10830\ to \heiteen\ is as large as 23.3 (e.g., Leighly et al. 2011), and there are still significant residuals in the He\,{\footnotesize I*} $\lambda$10830\ trough after removal of the hot-dust contribution, the BAL region must only partially cover the accretion disk. Detailed analysis yields a covering factor of the absorbing gas with respect to the accretion disk of about 0.4 (see Section 4.2).
{(3) \heitoen\ and Mg\,{\footnotesize II}\ regime:
The emission and absorption characteristics around these two absorption lines are nearly the same:}
the absorption-free flux consists of the power-law continuum from the accretion disk and the Mg\,{\footnotesize II}\ and Fe\,{\footnotesize II}\ broad lines from the outflow.
Considering that the absorbing gas only partially covers the accretion disk, and that the emission-line gas
is of a size similar to the normal BLR, the UV Fe\,{\footnotesize II}\ multiplets should not be included in the absorption-free
spectrum. The normalized absorption spectra of He\,{\footnotesize I*} $\lambda$10830, \heiteen, \heitoen, and Mg\,{\footnotesize II}\ are displayed in Fig.\ref{f10} (right).
\subsection{Characterizing the Absorption Line Gas}
Before investigating the properties of the BALs, we first constrain the distance of the BAL outflow gas
in a qualitative way. According to the discussion above, the absorbing medium partially obscures the
accretion disk. Thus, we consider the distance of the absorbing medium to be comparable to the
size of the accretion disk at 10830 \AA. Based on Equation 3.2 of Peterson (1997), the size of the accretion
disk at 10830 \AA\ is about 1500 $r_g$ ($r_{g} \equiv G{M_{BH}}/c^2 $), or 0.017 pc.
A more quantitative constraint on the distance of the
absorption gas requires measurements of the ionization
parameter U and the gas density $n_H$. The latter can be constrained
by comparing photoionization simulations with the observed line
ratios of multiple ions (e.g., Leighly et al. 2011; Liu et
al. 2016). However, only He I$^*$ and Mg\,{\footnotesize II}\ absorptions are
detected in J1633. As we will show below, they are not sufficient to set a useful constraint on
the gas density, but they do allow us to determine the ionization parameter and a lower limit on the total column density of the BAL gas.
For a BAL, the normalized intensity is
\begin{equation}
I(v) = [1-C_f(v)]+C_f(v)e^{-\tau(v)},
\end{equation}
where $C_f(v)$ is the covering factor and $\tau(v)$ is the true optical depth as a function of
radial velocity. For transitions from an ion in a given level, $\tau(v)$
is proportional to $f\lambda N_{col}$, where $f$ is the oscillator strength, $\lambda$ is the rest wavelength of the transition, and $N_{col}$ is the column density of the ion in that level.
In principle, two absorption lines arising from the same level of the same ion are
needed to derive the physical conditions of the outflowing gas, such as $C_f$ and $N_{col}$.
For SDSS J1633+5137, three absorption lines, \heitoen, \heiteen\ and He\,{\footnotesize I*} $\lambda$10830, arise from the same
metastable level of He\,{\footnotesize I*}\ and can be used to derive the $C_f$ and $N_{col}$ of this level.
As the \heitoen\ trough is weak, we used \heiteen\ and He\,{\footnotesize I*} $\lambda$10830\ to solve equation (2) and
obtain $C_f$\ and $\tau(v)$\ of HeI*.
The \heitoen\ trough is employed to check for consistency.
Because the He\,{\footnotesize I*} $\lambda$10830\ absorption line is seriously affected by sky lines,
the pixels in these regions were masked and the data were interpolated using a third-order spline.
As the normalized flux at the bottom of the He\,{\footnotesize I*} $\lambda$10830\ trough is about 0.6, we defined the edges of the absorption trough
as the locations where three consecutive pixels fall below 0.96,
corresponding to an absorption depth of 4\%, i.e., 10\% of the depth of the He\,{\footnotesize I*} $\lambda$10830\ BAL.
For every pixel in the absorption line regions of \heiteen\ and He\,{\footnotesize I*} $\lambda$10830,
we derived the $C_f$ and $N_{col}$ of He\,{\footnotesize I*}, and $\tau_{\rm HeI*3889}$, which are shown in Fig.\ref{f11}.
The integral $N_{col}$ of He I$^*$ along with the absorption trough in the velocity space
is found to be $(5.0 \pm 1.7) \times 10^{14}$ cm$^{-2}$.
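The pixel-by-pixel solution of the partial-coverage relation above reduces to a one-dimensional root finding problem, as sketched below; the oscillator strengths are illustrative values (chosen to reproduce the strength ratio of 23.3 quoted above), and the column-density conversion uses the standard relation $N(v)=3.768\times10^{14}\,\tau(v)/(f\lambda)$ cm$^{-2}$ per km s$^{-1}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Illustrative atomic data (chosen to reproduce gf*lambda ratio ~ 23.3)
F_10830, LAM_10830 = 0.539, 10830.0
F_3889,  LAM_3889  = 0.0645, 3889.0
R = (F_10830 * LAM_10830) / (F_3889 * LAM_3889)     # ~ 23.3

def solve_pixel(I_10830, I_3889):
    """Solve I = (1 - Cf) + Cf*exp(-tau) for both lines at one pixel;
    tau refers to HeI*3889, the stronger line carries R*tau.
    Assumes 1 < depth ratio < R (otherwise no physical solution)."""
    d_s, d_w = 1.0 - I_10830, 1.0 - I_3889          # absorption depths
    g = lambda t: (1.0 - np.exp(-R * t)) / (1.0 - np.exp(-t)) - d_s / d_w
    tau = brentq(g, 1.0e-6, 50.0)                   # depth ratio fixes tau
    Cf = d_w / (1.0 - np.exp(-tau))
    return Cf, tau

def dN_col(tau_3889, dv_kms):
    # column density per pixel: N = 3.768e14 * tau * dv / (f * lambda[A])
    return 3.768e14 * tau_3889 * dv_kms / (F_3889 * LAM_3889)

# Example pixel near the trough bottom (I_10830 ~ 0.6, I_3889 ~ 0.8)
Cf, tau = solve_pixel(0.60, 0.80)                   # gives Cf ~ 0.4
\end{verbatim}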
With the $C_f$ and $N_{col}$, we also simulated the absorption trough of \heitoen\ and compared it
with the observed data in Fig.\ref{f12}. We found that the simulated and observed absorption
trough are consistent with each other, indicating that the derived $C_f$ and $N_{col}$ are
reliable.
In addition, with the derived $C_f$ and the Mg\,{\footnotesize II}\ absorption trough, we tried to constrain the
$N_{col}$ of Mg$^+$.
{In Fig.\ref{f13}, we compare the Mg\,{\footnotesize II}\ trough with 1-$C_f$;
the two coincide at some velocities, indicating that the Mg\,{\footnotesize II}\ absorption trough is saturated there.
Thus, the $N_{col}$ of Mg$^+$ cannot be obtained directly from Eq. 2.}
Nevertheless, making use of the saturation of Mg\,{\footnotesize II}, we can derive log U from the $N_{col}$ of He\,{\footnotesize I*}\
through equation (3)
in Ji et al. (2015); the value of log U is -1.9 $\pm$ 0.2.
{\bf This value is in the range of
log U derived from the blueshifted emission lines.
}
With our measurement of the total column density in the He I metastable lines, we
can set a minimum He$^+$ column density of $\sim1\times10^{20}$ cm$^{-2}$ in the outflow,
taking the maximum density ratio of HeI$^*$ to He$^+$ (Rudy et al. 1985; Arav et al. 2001).
Assuming solar abundances, this estimate yields a minimum H II column
density N$_{\rm H}\sim1\times10^{21}$ cm$^{-2}$.
On the other hand, we can estimate the H II column density of the BAL gas through the relation
$\rm \log N_{H} \approx 23+\log U$ (Ji et al. 2015), yielding N$_{\rm H}\sim10^{21}$ cm$^{-2}$.
However, it should be noted that the $N_{col}$ of He\,{\footnotesize I*}\ is not a suitable indicator of $\rm N_{H}$
for optically thick gas. This is because HeI* is a high-ionization line and its column
density grows mainly just ahead of the hydrogen ionization front and stops growing behind it
(e.g., Arav et al. 2001; Ji et al. 2015). Instead, absorption lines with lower ionization
potentials, such as CaII, Mg\,{\footnotesize II}, and Fe\,{\footnotesize II}, are useful to probe the total column density
of the outflow. Unfortunately, the CaII and Fe\,{\footnotesize II}\ absorption lines are not detected, and the
$N_{col}$ of Mg$^+$ is difficult to derive due to the saturation effect.
Therefore, we can only set a lower limit on the total
H II column density of the BAL gas, N$_{\rm H}>1\times10^{21}$ cm$^{-2}$.
On the other hand, a further constraint on the column density of the absorbing medium
can be placed using the non-detection of the
corresponding UV Fe\,{\footnotesize II}\ BALs.
This is because, for a given ionization parameter,
the MgII and FeII absorption lines are both sensitive to the total column density.
Similar to our analysis of the BEL outflow (Section 3.4), we employ photoionization simulations to
evaluate the dependence of the FeII BALs on the total column density.
We assume the BAL gas to be a slab-shaped medium of uniform density exposed to the ionizing continuum from the central
engine.
The model setups are the same as those for the BEL simulations except for the ionization parameter, which is
fixed at logU = -1.9 as derived from the HeI* BALs.
Each individual simulation is specified by the column density ($\rm N_H$), which is
set to vary in the range 21 $\le$ log$\rm N_H$ $\rm (cm^{-2})$ $\le$ 22 in steps of 0.2 dex.
The model predicts the populations of the various levels of Fe$^+$ and the strengths of the absorption lines arising from these levels.
Fig.14 (upper panel) presents a series of models on the grid of log N$\rm _{H}$.
To better visualize the models, each photoionization model is broadened using the absorption profile of He\,{\footnotesize I*}.
As can be seen from the simulation results, at log$\rm N_H$ $\ge$ 21.4 there are obvious absorption troughs from
the iron multiplets arising from the ground state (e.g.,
Fe II UV2+3 at approximately 2400 \AA\ and Fe II UV1 at approximately 2600 \AA; since the BALs are blueshifted by 7000 $\rm km~s^{-1}$, the troughs appear blueward of these wavelengths). Such absorption features are, however, not
observed in the spectrum. Thus the upper limit on the column density of the BAL gas can be constrained to be
log$\rm N_H$ = 21.4. In fact, when compared to the observed spectrum in detail (Fig.14, lower panel), we found that a model
with column density of log$\rm N_H$ = 21.2 matches the data well in the
spectral range of 2300-3000\AA.
Therefore, in combination with the lower limit on the column density
given by the
HeI* BAL, the most probable column density for the BAL gas is log$\rm N_H$ $\sim$ 21.2. This suggests that
the physical conditions of BAL and BEL gas are not strictly the same, at least in terms of the total column density.
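A sketch of the broadening step used above is given below; the function and variable names are illustrative, and the partial-coverage factor $C_f=0.4$ is taken from the He\,{\footnotesize I*}\ analysis:
\begin{verbatim}
import numpy as np

C_KMS = 2.99792458e5

def fe2_bal_model(wave, lines, v_grid, tau_he1, Cf=0.4):
    """Paint the model FeII optical depths onto the spectrum using the
    normalized HeI* trough as the common velocity profile.
    wave    : rest-frame wavelength grid [A]
    lines   : iterable of (lambda_rest [A], tau_scale) from the model
    v_grid  : velocity grid of the HeI* profile [km/s] (trough near -7000)
    tau_he1 : HeI* optical depth on v_grid, normalized to unit peak"""
    tau_tot = np.zeros_like(wave)
    for lam0, tau_scale in lines:
        v = C_KMS * (wave / lam0 - 1.0)       # velocity w.r.t. line rest
        tau_tot += tau_scale * np.interp(v, v_grid, tau_he1,
                                         left=0.0, right=0.0)
    # partial coverage, as in Eq. 2: I = (1 - Cf) + Cf * exp(-tau)
    return (1.0 - Cf) + Cf * np.exp(-tau_tot)
\end{verbatim}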
\section{Summary and Discussion}
In this paper, we present a detailed study of the emission and absorption line properties of J1633+5137.
In the optical and NIR spectra, in addition to the normal emission lines
originating from the BLR and NLR, there are several blueshifted emission components
at a common velocity of $\sim-2200$ $\rm km~s^{-1}$\ in the MgII, UV FeII and hydrogen Balmer lines, suggestive
of AGN BEL outflows.
These lines can be mutually used to constrain the physical properties of the outflowing gas
by confronting the observations with the photoionization simulations.
The physical parameters for the BEL outflow are constrained to be
$10^{10.6}$ $\le $ $\rm n_H$ $\le$ $10^{11.3}$ cm$^{-3}$, -2.1 $\le$ log$ U_E$ $\le$ -1.5,
and $\rm N_H$ $ \lower.5ex\hbox{\gtsima} $ $10^{23}$ cm$^{-2}$.
Using the ionization parameter, gas density and EW of MgII, we
estimated the covering factor and distance of the BEL outflow material from the central source,
which are $C_{\rm f,emit}$ $\sim$ 0.16 and $r$ $\sim$ 0.1 pc.
In addition, strong BALs from the Mg\,{\footnotesize II}\ and HeI* metastable lines are also detected.
Using a simple partial-coverage model, we derived the integrated column density of HeI* and the
ionization parameter of the BAL gas,
which are $(5.0\pm1.7)\times10^{14}$ cm$^{-2}$ and logU = -1.9 $\pm$ 0.2, respectively.
The total column density is estimated to be in the range 21 $\le$ log(N$_{\rm H}$/cm$^{-2}$) $\le$ 21.4,
which is about two orders of magnitude less than that derived for the BEL gas, suggesting that
the physical conditions of BAL and BEL gas are not strictly the same.
Though blueshifted BELs are crucial for studying AGN outflows, since they reflect the
global properties of the outflowing gas, their physical conditions and locations
are difficult to investigate, except in a limited number of sources where
lines from multiple ionic species can be reliably measured.
Liu et al. (2016) identified both the BELs and BALs produced by the
AGN outflows in the quasar SDSS J163459.82+204936.0.
The physical parameters determined for the BEL and BAL outflows are very
close, with $10^{4.5}$ $\le$ $\rm n_H$ $\le$ $10^{5}$ cm$^{-3}$, -1.3 $\le$ log$U$ $\le$ -1.0, $\rm N_H$ $ \sim$ $10^{22.5}$ cm$^{-2}$, and
the outflow materials are 48--65 pc from the central source, likely exterior to the
torus. The similarity of the physical parameters strongly suggests that the
blueshifted BELs and BALs are generated in the same outflowing gas.
Zhang et al. (2017) reported similar UV and optical emission line outflows in the
heavily obscured quasar SDSS J000610.67+121501.2, and inferred a distance at the scale of
the dusty torus (and beyond).
Conversely, the emission line outflow identified in J1633+5137 has a much higher density ($\rm n_H$ $\sim$ $10^{11}$ cm$^{-3}$)
with a distance at the scale of BLR to the central source, reflecting the diversity
of physical conditions for the outflowing gas.
\subsection{Energetic Properties of the Outflow}
Since the physical conditions for the BELs and BALs are not the same,
we discuss separately the energetic properties of the BEL and BAL outflows.
As discussed in Borguet et al. (2012),
assuming that the outflowing material is described as a thin ($\Delta R/R \ll 1$),
partially filled shell, the mass-outflow rate ($\dot{M}$) and kinetic luminosity ($\dot{E_{k}}$)
are given by
\begin{equation}
\dot{M}=4 \pi R \Omega \mu m_{p} N_{H} v
\label{fun:M}
\end{equation}
and
\begin{equation}
\dot{E_{k}}=2 \pi R \Omega \mu m_{p} N_{H} v^3
\label{fun:EK}
\end{equation}
where $R$ is the distance of the outflow from the central source, $\Omega$ is the global covering
fraction of the outflow, $\mu = 1.4$ is the mean atomic mass per proton, $m_{p}$ is the proton mass,
N$\rm _{H}$ is the total hydrogen column density of the outflowing gas, and $v$ is the radial velocity.
Based on the physical parameters inferred for the BEL outflow, and
taking the velocity of outflow $v$ as the peak of blueshifted Mg\,{\footnotesize II}\ BEL, which is -2200 $\rm km~s^{-1}$,
the mass-outflow rate and the kinetic luminosity can be derived as $\dot{M}$ = 0.9 M$ _{\odot} $ yr$ ^{-1} $ and $\dot{E_{k}}$ = 1.5 $\times$ 10$ ^{42} $ erg s$ ^{-1} $, respectively.
Similar to the BEL outflow, we can also obtain the $ \dot{M} $ and $ \dot{E_k} $ for the BAL outflow.
However, the global covering factor and density of BAL outflow gas in SDSS J1633+5127 cannot be directly
constrained by the observations. In the studies of BAL quasars, the
global covering fraction of the BAL outflow gas is generally derived from the fraction of BAL quasars, which is
about 10\%-20\% in optically selected quasars (e.g., Trump
et al. 2006; Gibson et al. 2010; Zhang et al. 2014).
Moreover, we assumed that the BAL outflow is located at the same distance from the central source as the BEL outflow.
With the column density of BAL outflow log N$ \rm _{H} $ (cm$ ^{-2} $) = 21.2 and radial velocity of $\sim$7000 km s$^{-1}$,
the $ \dot{M} $ and $ \dot{E_k} $ for the BAL outflow can be estimated as $ \dot{M} $ = 0.01 M$ _{\odot} $ yr$ ^{-1} $ and $ \dot{E_k} $ = 2.2 $ \times $ 10$ ^{41} $ erg s$ ^{-1} $, respectively. These values are factors of $\sim$90 and $\sim$7, respectively, below
those obtained for the BEL outflow.
Therefore, the mass flux and kinetic luminosity are dominated by the BEL outflow,
and the contribution from BAL outflow is minor.
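For reference, the numerical evaluation of Eqs.~(\ref{fun:M}) and (\ref{fun:EK}) is straightforward; the sketch below reproduces the order of magnitude of the BAL values quoted above, with the global covering fraction $\Omega\approx0.15$ (from the 10\%-20\% BAL fraction) and the BEL distance of 0.1 pc treated as assumed inputs:
\begin{verbatim}
import numpy as np

PC, MSUN, YR, MP = 3.086e18, 1.989e33, 3.156e7, 1.6726e-24  # cgs units

def outflow_energetics(R_pc, Omega, logNH, v_kms, mu=1.4):
    R, NH, v = R_pc * PC, 10.0**logNH, v_kms * 1.0e5
    Mdot = 4.0 * np.pi * R * Omega * mu * MP * NH * v   # mass rate [g/s]
    Ekdot = 0.5 * Mdot * v**2                           # kinetic lum. [erg/s]
    return Mdot * YR / MSUN, Ekdot                      # Msun/yr, erg/s

# BAL outflow, assumed at the BEL distance with Omega ~ 0.15
mdot, ekdot = outflow_energetics(R_pc=0.1, Omega=0.15,
                                 logNH=21.2, v_kms=7000.0)
\end{verbatim}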
Previous studies suggest that efficient AGN feedback in the form of high-velocity outflows typically requires kinetic luminosity
to be the order of a few percent of the Eddington luminosity (L$ _{EDD} $)(e.g., Scannapieco \& Oh 2004; Di
Matteo et al. 2005; Hopkins \& Elvis 2010). For SDSS J1633+5127, the mass of black hole (log M$ _{BH} $/M$ _{\odot} $) derived from H$\beta$\ is about 8.37 and L$ _{EDD} $ is about 3$ \times $ 10$^{46} $ erg s$ ^{-1} $. Taking the calculation results above,
the sum of the kinetic luminosities of the BEL and BAL outflows is only $\sim$1.7 $ \times $ 10$^{42} $ erg s$ ^{-1} $ ( $ < $ 10$ ^{-4} $ L$ _{EDD} $), apparently far too small to drive efficient AGN feedback.
Note that the kinetic luminosity of the total outflow gas can be considered only a lower limit for the following reasons:
(1) The column density of 10$ ^{23} $ cm$ ^{-2} $ we inferred for the BEL outflow is a lower limit.
(2) The velocity $ v $ measured for the BEL gas is a projection of the outflow velocities
along the line of sight, and is thus only a lower limit on the true outflow velocity (Liu et al. 2016; Zhang et al. 2017).
(3) The distance assumed for the BAL outflowing gas may also be a lower limit; if it were located at much greater distances from the central source, its mass-outflow rate and kinetic luminosity would be correspondingly larger.
\subsection{Outflow Geometry and the Profile of Outflow Emission Line}
As mentioned above, blueshifted BELs from multiple ionic species are rarely observed in quasars, and
the simultaneous presence of such BELs and BALs in the spectrum of the same quasar is even rarer.
In Section 3 and 4, we investigated the physical properties of the BEL and BAL outflows respectively,
and obtained similar ionization parameters for them.
Therefore, though the physical conditions are not strictly the same, the BEL and BAL outflows may not be independent.
In order to further constrain the outflow geometry,
we attempted to reproduce the profile of BELs with the radial velocity of BALs.
Outflows have often been described as a biconical structure in previous works
(e.g. Elvis 2000), and the emission line profile can be successfully modeled with this structure
(Zheng, Binette, \& Sulentic 1990; Marziani et al. 1993; Sulentic et al. 1995).
However, the biconical structure is two-dimensional and needs a number of free parameters
to reproduce the emission line profile. For simplicity,
we employed a one-dimensional ``ring'' model to reproduce the outflow emission line profile of
SDSS J1633+5137. The cross-section of this model is displayed in the left panel of Fig.\ref{f15}.
The ring model for the outflow assumes that the line originates on a ring above the disk,
whose axis is inclined by an angle $ i $ relative to the line of sight.
The ring makes an angle $ \theta_r $ with the normal direction of the accretion disk.
For SDSS J1633+5137, as the blueshifted BELs and BALs of the outflow are observed at the same time,
it is natural to assume that our line of sight penetrates the outflow,
which means the angle $i$ = $ \theta_r $.
{To reproduce the velocity range of the BALs, $v_r$ in this model is fixed
at 7000 $\rm km~s^{-1}$, corresponding to the blueshifted velocity of the BALs.}
The distance from the ring to the black hole is $r$ (expressed in units of the gravitational
radius, $r_{g}$).
The distance of the outflow from the central source derived from the blueshifted BELs is about
0.1 pc, i.e., about 9000 $r_{g}$.
The outflow velocity along the radial direction is $v_r$.
Besides the radial velocity, the outflow ring also has a rotational velocity.
However, if we assume the outflow is launched as a disk wind arising from the accretion disk
at about 100 $r_g$, then, by angular momentum conservation, the rotational velocity has dropped
to about 300 $\rm km~s^{-1}$\ by the time the outflow reaches 9000 $r_g$. This rotational velocity is much smaller than $v_r$.
Therefore, in our model, the rotational velocity is ignored. The coordinates of gas in the ring can be
expressed as ($r, \theta_r, \phi$), where $\phi$ ranges from -$\pi$ to $\pi$ and our line of sight
corresponds to $\phi$ = 0. Taking $\phi$ to increase in the direction of the ring's rotation,
for a ring at given $r$, $\theta_r$, and $\phi$, the velocity along the line of sight, $v_{obs}$, can be expressed
as
\begin{equation}
v_{obs}(r,\theta_r, \phi)=-\left(v_r\sin^2\theta_r\cos\phi+v_r\cos^2\theta_r\right).
\label{functions:vobs}
\end{equation}
For ease of comparison with the observed profile, we define motion away from the central BH as
the positive direction of $v_{obs}$. With this equation, the emission line profile of
the outflow ring can be derived with only one free parameter, $\theta_r$, in our model.
In the right panel of Fig.\ref{f15}, we display three model results.
For comparison, we also show the fitting result to the Mg\,{\footnotesize II}\ blueshifted BEL (black).
All the three models are different from the Mg\,{\footnotesize II}\ blueshifted BEL.
The model profile is single-peaked when the $\theta_r$ is small,
but the blueshifted velocity is higher than Mg\,{\footnotesize II}.
For the case of larger $\theta_r$, the model profile becomes double-peaked,
which is also inconsistent with Mg\,{\footnotesize II}.
Even so, we found that when $\theta_r$ = 40$^\circ$,
the red part of the model profile matches the red side of Mg\,{\footnotesize II}\ well.
If the emission at the blue side is obscured under a certain condition, and
only the emission at the red side can be observed,
the modelled profile could be consistent with Mg\,{\footnotesize II}.
According to Eq.~5, for a given $\theta_r$, larger blueshifted velocities correspond
to smaller absolute values of $\phi$. Thus, we propose an amended toy model.
All the parameters in this model are the same as in the model above, except that we add a free parameter,
"shadow". The top view of this model is shown in the left panel of Fig.\ref{f16}.
The parameter "shadow" is in the range from 0 to 1.
For a given shadow parameter, the outflowing gas in the range $-shadow \times \pi$ to $shadow \times \pi$
is obscured and only the photons emitted from the rest of the ring can be detected.
In the middle panel of Fig.\ref{f16}, we display a modelled profile which can reproduce the
profile of Mg\,{\footnotesize II}\ well. The parameters of this best-fitted profile are
$\theta_r$ = 40$^\circ$ and shadow = 0.48.
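A minimal numerical sketch of the ring-plus-shadow profile is given below; the sampling in $\phi$ and the histogram binning are implementation choices of ours, not part of the model itself:
\begin{verbatim}
import numpy as np

def ring_profile(theta_r_deg, v_r=7000.0, shadow=0.0,
                 n_phi=200001, bins=200):
    """Line profile of the outflow ring: distribution of v_obs over phi
    (Eq. 5), hiding the gas with |phi| < shadow*pi."""
    th = np.radians(theta_r_deg)
    phi = np.linspace(-np.pi, np.pi, n_phi)
    v_obs = -(v_r * np.sin(th)**2 * np.cos(phi) + v_r * np.cos(th)**2)
    keep = np.abs(phi) > shadow * np.pi          # shadowed gas is hidden
    hist, edges = np.histogram(v_obs[keep], bins=bins, density=True)
    return 0.5 * (edges[1:] + edges[:-1]), hist  # velocity, relative flux

v, f = ring_profile(theta_r_deg=40.0, shadow=0.48)  # best-fit values above
\end{verbatim}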
Note that the free parameters $\theta_r$ and $shadow$ can be well constrained
in our model.
Fig.\ref{f16} (right panel) shows the 1, 2, and 3$\sigma$ confidence levels of the parameter $\theta_r$
versus $shadow$.
At 1$\sigma$ confidence level, the $\theta_r$ was constrained in the range from 33$^\circ$ to 43$^\circ$, while
the $shadow$ was from 0.33 to 1.
In Section 3.4, we estimated the distance of the BEL gas to be $\sim$0.1 pc,
which is much smaller than the distance of the dusty torus, typically at the $\sim$pc scale (e.g., Barvainis 1987; Koshida et al. 2014;
Kishimoto et al. 2012).
In addition, previous studies of the dusty torus yielded torus covering angles of $\sim$48$^\circ$
(Schmitt et al. 2001) and within a large range of 38$^\circ$--56$^\circ$ (Osterbrock \& Martel 1993; Sazonov et al. 2015).
Therefore, it is possible that the dusty torus provides the shielding of the outflowing gas in SDSS J1633+5137.
Future spectropolarimetric observations will be required to further test this model,
and will be helpful in placing new constraints on the geometry of the outflowing gas.
It should be noted that this model is based on the assumption that the
density and column density are uniformly distributed in the outflow gas, which may be over-simplified.
Using more complex models, e.g., with an inhomogeneous column density distribution, to explain the emission and absorption features observed in
SDSS J1633+5127 is beyond the scope of this paper. We present in the Appendix such a multi-column-density modelling of
the outflow emission lines, and a detailed investigation will be presented elsewhere.
This work is supported by the National Natural Science
Foundation of China (NSFC-11573024, 11473025, 11421303, {11573001 and 11822301})
and the National Basic Research Program of China (the 973
Program 2013CB834905 {and 2015CB857005}). T.J. is supported by the National
Natural Science Foundation of China (NSFC-11503022) and
the Natural Science Foundation of Shanghai (NO. 15ZR1444200). P.J. is supported by the National Natural Science Foundation of China (NSFC-11233002).
{X.S. acknowledges support from the Anhui Provincial NSF (1608085QA06)
and the Young Wanjiang Scholar program. }
We acknowledge the use of the Hale 200-inch Telescope at Palomar
Observatory through the Telescope Access Program (TAP), as
well as the archive data from the SDSS, 2MASS, and WISE
surveys. TAP is funded by the Strategic Priority Research
Program, the Emergence of Cosmological Structures
(XDB09000000), National Astronomical Observatories, Chinese Academy of Sciences, and the Special Fund for
Astronomy from the Ministry of Finance. Observations
obtained with the Hale Telescope at Palomar Observatory
were obtained as part of an agreement between the National
Astronomical Observatories, Chinese Academy of Sciences,
and the California Institute of Technology. Funding for SDSS-
III has been provided by the Alfred P. Sloan Foundation, the
Participating Institutions, the National Science Foundation, and
the U.S. Department of Energy Office of Science. The SDSS-
III Web site is http://www.sdss3.org/.
\begin{deluxetable}{cccc}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablenum{1}
\tablecaption{Photometric Observations of SDSS J1633+5127}
\tablehead{
\colhead{Wavelength Band/Range} & Mag. & Survey & \textit{MJD} }
\startdata
\ $\textit{u}$ &$ 18.59\pm 0.04 $ &SDSS &51948\\
\ $\textit{g}$ &$ 18.04\pm 0.01 $ &SDSS &51948\\
\ $\textit{r}$ &$ 18.23\pm 0.01 $ &SDSS &51948\\
\ $\textit{i}$ &$ 17.80\pm 0.01 $ &SDSS &51948\\
\ $\textit{z}$ &$ 17.76\pm 0.02 $ &SDSS &51948\\
$ J $ &$ 16.57\pm 0.13 $ &2MASS & 50937\\
$ H $ &$ 15.73\pm 0.16 $ &2MASS & 50937\\
$ K_{s} $ &$ 14.41\pm 0.08 $ &2MASS & 50937\\
$ W1 $ &$ 12.25\pm 0.02 $ &WISE &55332 \\
$ W2 $ &$ 10.93\pm 0.02 $ &WISE &55332 \\
$ W3 $ &$ 8.31\pm 0.02 $ &WISE &55332 \\
$ W4 $ &$ 6.22\pm 0.04 $ &WISE &55332 \\
$ V $ &- &Catalina&53653-56454\\
\enddata
\end{deluxetable}
\begin{deluxetable}{lccc}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablenum{2}
\tablecaption{Decomposition Measurements of Emission Lines}
\tablehead{
\colhead{}& $\rm Int.^a$ & $\rm Shift^b$ & $\rm FWHM^b$}
\startdata
[O\,{\footnotesize II}]\ 3729& $72 \pm 4$ & $21 \pm 4$ &$700 \pm 22$\\
[O\,{\footnotesize III}]\ 5008& $111 \pm 13$ & $26 \pm 5$ &$677 \pm 35$\\
blueshift Mg\,{\footnotesize II}\ 2796 &$457 \pm 4$ &$-2210 \pm 8$ &$3889 \pm 20$\\
blueshift Mg\,{\footnotesize II}\ 2803 &$457 \pm 4$ &$-2210 \pm 8$ &$3889 \pm 20$\\
rest H$\beta$\ &$671 \pm 12$ &$39 \pm 18$ &$2623 \pm 71$ \\
rest H$\beta$\ Gaussian1 &$336 \pm 6$ &$81 \pm 15$ &$11242 \pm 41$ \\
rest H$\beta$\ Gaussian2 &$72 \pm 8$ &$-65 \pm 12$ &$1161 \pm 54$ \\
rest H$\beta$\ Gaussian3 &$263 \pm 6$ &$-61 \pm 13$ &$2541 \pm 25$ \\
blueshift H$\beta$\ &$93 \pm 6$ &$-2210 \pm 8$ &$3889 \pm 20$\\
rest H$\alpha$\ &$4836 \pm 47$ &$19 \pm 14$ &$2623 \pm 73$ \\
rest H$\alpha$\ Gaussian1 &$2420 \pm 43$ &$81 \pm 15$ &$11242 \pm 41$ \\
rest H$\alpha$\ Gaussian2 &$522 \pm 56$ &$-65 \pm 12$ &$1161 \pm 54$ \\
rest H$\alpha$\ Gaussian3 &$1892 \pm 46$ &$-61 \pm 13$ &$2541 \pm 25$ \\
blueshift H$\alpha$\ &$1069 \pm 40$ &$-2210 \pm 8$ &$3889 \pm 20$\\
$\textit{rest UV Fe\,{\footnotesize II}\ } ^{c,d}$ &$96 \pm 54$ &$-92 \pm 87$ &$8133 \pm 103$\\
blueshift UV $\rm Fe\,{\footnotesize II}\ ^e$ &$623 \pm 47$ &$-2135 \pm 52$ &$1417 \pm 31$\\
rest optical $\rm Fe\,{\footnotesize II}\ ^f$ &$1002 \pm 20$ &$ -92 \pm 87 $ &$2544 \pm 86$\\
$\textit{blueshift optical Fe\,{\footnotesize II}\ } ^{c,g}$ &$189 \pm 19$ &$-2135 \pm 52$ &$8372 \pm 95$\\
\enddata
\tablecomments{\\
\textbf{a}: In units of $\rm 10^{-17} erg~s^{-1}~cm^{-2}$.\\
\textbf{b}: In units of $\rm km~s^{-1}$.\\
\textbf{c}: The upper limits.\\
\textbf{d}: The total computed UV Fe\,{\footnotesize II}\ flux over the wavelength ranges 2565-2665 \AA.\\
\textbf{e}: The total computed blueshifted UV Fe\,{\footnotesize II}\ flux over the wavelength ranges 2548-2647 \AA\ in the quasar's
rest-frame.\\
\textbf{f}: The total computed optical Fe\,{\footnotesize II}\ flux over the wavelength ranges 4435-4685 \AA.\\
\textbf{g}: The total computed optical Fe\,{\footnotesize II}\ flux over the wavelength ranges 4405-4653 \AA\ in the quasar's rest-frame.\\
}
\end{deluxetable}
\begin{figure}[ht]
\epsscale{1.1}
\plotone{j1633_var.eps}
\caption{\textbf{Panel (\textit{a})}: The observed spectra and photometry of SDSS J1633+5127.
The BOSS spectrum (MJD 56191) is displayed as the black curve. For comparison, we plot the
SDSS five-band photometry (MJD 51948) and the BOSS spectral synthetic magnitudes at the \textit{g, r, i, }
and \textit{z} bands as black and blue diamonds, respectively.
The recalibrated BOSS spectrum is shown in green. \textbf{Panel (\textit{b})}: The light curve of SDSS J1633+5127 in the V band monitored by the
Catalina Sky Survey. The red dots represent the mean magnitude for each season.
The intrinsic source variability is about 0.06 magnitude
in 6.5 years in the rest-frame, which indicates the difference between the BOSS spectrum and SDSS photometry is likely
due to the spectrophotometric calibration uncertainty. \textbf{Panel (\textit{c})}:
The correction curve between the SDSS photometry and BOSS spectrum. The blue circles present the ratios of
SDSS photometry to BOSS spectrum in the \textit{g, r, i, z} bands.
The correction curve (grey) was fitted with a second-order polynomial.
}
\label{f1}
\end{figure}
\begin{figure}[ht]
\epsscale{1.1}
\plotone{j1633_sed.eps}
\caption{\textbf{Panel (\textit{a})}: The UV to mid-infrared spectra and SED of SDSS J1633+5137 in the
quasar's rest-frame. The spectrum and photometry are shown as the black curve and green diamonds, respectively.
We modelled the broadband SED with a power law (cyan solid line) and two black bodies (red dotted line).
The sum of all modelled components (blue solid line) can roughly reproduce the continuum.
Compared to the quasar composite spectrum (gray), the excess in NIR bands implies that this source is
a BAL quasar which is confirmed in the NIR spectrum.
\textbf{Panel (\textit{b})}: The spectrum near the Mg\,{\footnotesize II}\ emission line.
Compared to the intrinsic wavelength (dotted line), the Mg\,{\footnotesize II}\ line of SDSS J1633+5137 is obviously blueshifted.
\textbf{Panel (\textit{c,d,e})}: The spectrum near the [O\,{\footnotesize II}], [O\,{\footnotesize III}], H$\beta$, and H$\alpha$\ emission lines.
All these lines fall at their intrinsic wavelengths for the adopted redshift, which indicates
that the redshift is reliable.
}
\label{f2}
\end{figure}
\begin{figure*}[htbp]
\centering
\epsscale{0.6}
\plotone{j1633_izw1_two.ps}
\caption{ \textbf{Panel (\textit{a})}: The comparison between SDSS J1633+5137 (black) and scaled spectrum of
IZW1 (cyan) in the wavelength range from 2850\AA\ to 3050\AA. The black and cyan dashed lines mark the
valley between UV 60 and UV 61 spikes of SDSS J1633+5137 and IZW1, respectively. Vertical offsets have been
applied for clarity. After manually shifting the spectrum by --2200 $\rm km~s^{-1}$\, the valley in the scaled spectrum of
IZW1 (orange) looks the same as that of SDSS J1633+5137. \textbf{Panel (\textit{b})}:
The composite spectrum of five normal quasars (yellow) for which the UV Fe\,{\footnotesize II}\ and Mg\,{\footnotesize II}\ are matched to that of
SDSS J1633+5137 (black). \textbf{Panel (\textit{c})}: The UV Fe\,{\footnotesize II}\ fitting results for the blueshifted
velocity $v_0$ = -2200 $\rm km~s^{-1}$\ (blue) and $v_0$ = 0 (red).
The corresponding reduced $\chi_e^2$ in the wavelength range marked with grey are also presented.
Besides, the variations of $\chi_e^2$ as a function of the blueshifted velocity of UV Fe\,{\footnotesize II}\ are displayed.
Compared to $v_0=0$, the case of $v_0=-2200$ $\rm km~s^{-1}$\ is favored by the fitting results.
\textbf{Panel (\textit{d})}: The fitting results (red) of optical Fe\,{\footnotesize II}\ multiples with the blueshifted velocity fixed
at zero. The strong Fe\,{\footnotesize II}\ lines (Fe\,{\footnotesize II}\ 5169.03 \AA, 5197.57 \AA, 5234.62 \AA, 5264.8 \AA, and 5316.61 \AA) in the wavelength range from 5100 \AA\ to 5400 \AA\ are also marked by dotted lines and correspond to the peak of SDSS J1633+5137 spectrum. }
\label{f3}
\end{figure*}
\begin{figure*}[ht]
\epsscale{1.1}
\plotone{j1633_dou.ps}
\caption{The fitting results of UV and optical Fe\,{\footnotesize II}\ multiples. The continuum is plotted in green.
The blueshifted and rest Fe\,{\footnotesize II}\ components are displayed by blue and red curve respectively.
The sum of the continuum and Fe\,{\footnotesize II}\ components is shown in magenta. The best-fitted velocity of the blueshifted Fe\,{\footnotesize II}\ component relative
to the quasar's rest-frame is -2135 $\rm km~s^{-1}$ ($\sim$-2200 $\rm km~s^{-1}$) and that of the rest Fe\,{\footnotesize II}\ component is -92 $\rm km~s^{-1}$ ($\sim$0 $\rm km~s^{-1}$).
Compared to the observed spectrum, the trough near 2730 \AA\ is considered to be Mg\,{\footnotesize II}\ absorption line.
The excess in 2930-3000\AA\ might be Fe\,{\footnotesize I}\ and \heitnff.
The disagreement near 4300\AA\ is due to H$\gamma$\ and [O\,{\footnotesize III}] $\lambda$4353\ emission lines.
}
\label{f4}
\end{figure*}
\begin{figure*}[ht]
\epsscale{0.7}
\plotone{mgiihb.ps}
\caption{The NELs of [O\,{\footnotesize II}]\ and [O\,{\footnotesize III}]\ in SDSS J1633+5137. The two lines are fitted freely.
The modelled velocity shifts and widths of these NELs are very close.
}
\label{f5}
\end{figure*}
\begin{figure*}[ht]
\epsscale{0.7}
\plotone{mgiihb2_bal.ps}
\caption{The Mg\,{\footnotesize II}, H$\beta$\ and H$\alpha$\ of SDSS J1633+5137 shown in their common velocity space.
From top to bottom, emission lines are sorted from shorter to longer wavelengths.
{The modelled [O\,{\footnotesize III}]\ (dotted) has been subtracted and the blueshifted Mg\,{\footnotesize II}\ doublets are shown in green.}
The Balmer emission lines are decomposed into broad (red)
and blueshifted (green) components.
The broad components of the Balmer lines are fitted with three Gaussians,
each of which is shown as a pink curve.
The narrow emission lines are plotted in purple. }
\label{f6}
\end{figure*}
\begin{figure*}[ht]
\epsscale{1.0}
\plotone{ratio.ps}
\caption{ Estimation of lower and upper limit of Mg\,{\footnotesize II}/H$\alpha$. The Mg\,{\footnotesize II}\ and H$\alpha$\ flux are all normalized by the peak of H$\alpha$. \textbf{Left}:
The H$\alpha$\ flux in the velocity range between -5000 and -3000 $\rm km~s^{-1}$\ includes photons
emitted from the BLR, hence the flux of the blueshifted component may be overestimated, providing the lower limit on Mg\,{\footnotesize II}/H$\alpha$.
\textbf{Right}: The estimation of the upper limit on Mg\,{\footnotesize II}/H$\alpha$. The H$\alpha$\ flux in
3000-5000 $\rm km~s^{-1}$\ may include photons of the blueshifted component. Thus,
the mirror-symmetric rest-component H$\alpha$\ flux from -5000 to -3000 $\rm km~s^{-1}$\ may be underestimated,
giving the upper limit on Mg\,{\footnotesize II}/H$\alpha$.
}
\label{f7}
\end{figure*}
\begin{figure*}[ht]
\epsscale{0.8}
\plotone{cloudy2_final.eps}
\caption{Contours of Mg\,{\footnotesize II}/H$\alpha$\ (blue) and UV Fe\,{\footnotesize II}/H$\alpha$\ (green) as a function
of $n_H$ and U calculated by CLOUDY for the column density $N_H=10^{21}-10^{24}$ cm$^{-2}$, solar abundance, and MF87 SED.
When the $N_H \geq 10^{23}$ cm$^{-2}$, the $\rm 1-\sigma $ confidence levels of Mg\,{\footnotesize II}/H$\alpha$\ and UV Fe\,{\footnotesize II}/H$\alpha$\ start to overlap.
$N_H = 10^{23}$ cm$^{-2}$ can be considered as the lower limit on column density of the outflow gas.}
\label{f8}
\end{figure*}
\begin{figure*}[ht]
\epsscale{1}
\plotone{cloudy2_nh25.eps}
\caption{Contours of Mg\,{\footnotesize II}/H$\alpha$\ (blue) and UV Fe\,{\footnotesize II}/H$\alpha$\ (green) as a function
of $n_H$ and U. The calculations are the same as in Figure 8, but assuming an ionization boundary.
The overlapped region constrains the parameters of outflow gas to a narrow region of $n_H$ from $10^{10.6}$ to $10^{11.3}$ cm$^{-3}$ and log U from -2.1 to -1.5.}
\label{f9}
\end{figure*}
\begin{figure*}[ht]
\epsscale{0.8}
\plotone{j1633_abs2_mgi.ps}
\caption{The pair matching results and normalized spectra of Mg\,{\footnotesize II}, \heitoen, \heiteen\ and He\,{\footnotesize I*} $\lambda$10830.
The pair matching results can be directly considered as the absorption-free spectra of \heiteen. For He\,{\footnotesize I*} $\lambda$10830, based on the conclusion that the absorption component partially obscures the accretion disk, the radiation from the hot dust near He\,{\footnotesize I*} $\lambda$10830\ has been subtracted.
For the same reason, the continuum from the accretion disk is chosen as the absorption-free spectrum of Mg\,{\footnotesize II}\ and \heitoen.}
\label{f10}
\end{figure*}
\begin{figure*}[ht]
\epsscale{0.6}
\plotone{j1633_tau_cf0.4.ps}
\caption{ Normalized absorption spectrum of \heiteen\ (green) and He\,{\footnotesize I*} $\lambda$10830\ (red).
After masking the pixels seriously affected by sky lines (gray), the equation 1 is employed at each pixel to
obtain the $C_f$ and $N_{col}$ of He\,{\footnotesize I*}. The typical $C_{f}$ is around 0.3 and the integral $N_{col}$ of He\,{\footnotesize I*}\ is $(5.0 \pm 0.7) \times 10^{14}$ cm$^{-2}$.}
\label{f11}
\end{figure*}
\begin{figure}[ht]
\epsscale{0.6}
\plotone{j1633_3189.ps}
\caption{The comparison between the simulated \heitoen\ trough (cyan) and the observed \heitoen\ trough (blue). The consistency of the two
indicates that the derived $C_f$ and $N_{col}$ of He\,{\footnotesize I*}\ are reliable.}
\label{f12}
\end{figure}
\begin{figure}[ht]
\epsscale{0.6}
\plotone{j1633_mgii.ps}
\caption{The comparison between the Mg\,{\footnotesize II}\ BAL trough (red) and 1-$C_f$ (blue). It indicates the saturation of Mg\,{\footnotesize II}\ absorption trough. }
\label{f13}
\end{figure}
\begin{figure}[ht]
\epsscale{0.8}
\plotone{abs3_fe1_3.ps}
\caption{{\it Upper panel:} The modelled absorption profile of UV Fe\,{\footnotesize II}\ in the wavelength range of 2300 to 3000 \AA.
The density of BAL outflow in the model is set to be log n$\rm _{H}$ = 11 and the ionization parameter is log U = -1.9.
A series of column densities (N$\rm _H$) from 10$^{21}$ to 10$^{22}$ cm$^{-2}$, in steps of 0.2 dex, is employed in the model.
For clarity, we only show the models with logN$\rm _H$=21.0, 21.2, 21.4, 21.6, 21.8 and 22.0, as color-coded curves.
The absorption profile of each single Fe\,{\footnotesize II}\ line is assumed to be the same as that of He\,{\footnotesize I*}.
{\it Lower panel:} The observed spectrum is shown in black. We also added each modelled spectrum above to the observed spectrum
for comparison.
}
\label{f14}
\end{figure}
\begin{figure}[ht]
\epsscale{1}
\plotone{wind_model1.ps}
\caption{\textbf{Left}: The cross-section of the ring model. The angle relative to the normal line of the accretion disk is $ \theta_r $. The distance of the outflow is estimated at r = 9000$r_g$. The radial velocity $v_r$ = 7000$\rm km~s^{-1}$. The angle for the line of sight is $i$(=$\theta_r$). Only one parameter $\theta_r$ is free in this model. \textbf{Right}: Comparisons of the three modelled results ($\theta_r$ = 10$^\circ$, 40$^\circ$, 70$^\circ$) with the fitted blueshifted Mg\,{\footnotesize II}\ profile. }
\label{f15}
\end{figure}
\begin{figure}[ht]
\epsscale{1}
\plotone{wind_model2.ps}
\caption{\textbf{Left}: The top view of the proposed "shadow" model. The setting is the same as in the ring model except for
the addition of a free parameter "shadow". \textbf{Middle}: Comparisons of the best-fitted result ($\theta_r$ = 40$^\circ$, shadow = 0.48)
with the blueshifted Mg\,{\footnotesize II}. \textbf{Right}: The 1, 2, and 3$\sigma$ confidence levels for $\theta_r$ versus the shadow parameter.
Red star denotes the best-fitted value.
}
\label{f16}
\end{figure}
The Harmonic Hyperspherical (HH) method is extensively used in the description
of few-body systems. For example the HH method has been applied to describe
bound states of $A=3,4$ nuclei (for a recent review see
Refs.~\cite{kievsky:1997_few-bodysyst,kievsky:2008_j.phys.g}). In these applications the HH basis
elements, extended to spin and isospin degrees of freedom, have been combined
in order to construct antisymmetric basis functions; in fact, the HH functions,
as normally defined, do not have well defined properties under particle
permutation, but several schemes have been proposed to construct HH
functions with an arbitrary permutational symmetry, see
Refs.~\cite{novoselsky:1994_phys.rev.a,%
novoselsky:1995_phys.rev.a,barnea:1999_phys.rev.a,timofeyuk:2008_phys.rev.c}.
All of the proposed symmetrization schemes share an increasing computational
difficulty as the number of particles $A$ increases; to cope with this issue,
the authors proposed in Ref.~\cite{gattobigio:2009_phys.rev.a} to forgo the
symmetrization step. If the Hamiltonian commutes with the group of
permutations of $A$ objects, $S_A$, the eigenvectors
can be organized in accordance with the irreducible
representations of $S_A$; in the absence of accidental degeneracy, each eigenvector
has a well defined permutation symmetry. After the identification of the eigenvectors
belonging to the desired symmetry, the corresponding energies are variational
estimates. The disadvantage of this method is the large dimension of
the matrices to be diagonalized. However, at present, different techniques are
available to treat (at least partially) this problem.
In order to show the main characteristics of this method, we will discuss
results for bound states up to six particles interacting through a central
potential in two different systems: (i) a nucleon system interacting {\em via}
the Volkov potential, used many times in the
literature~\cite{barnea:1999_phys.rev.a,varga:1995_phys.rev.c,timofeyuk:2002_phys.rev.c,%
viviani:2005_phys.rev.c,gattobigio:2011_phys.rev.c,gattobigio:2011_j.phys.:conf.ser.},
and thus useful to test our approach, and (ii) a system composed of helium
atoms interacting through a soft-core potential. The {\em ab initio} helium
potentials have a strong repulsion at small distances which makes calculations
quite difficult; few calculations exist on clusters of helium with these
potentials~\cite{lewerenz:1997_j.chem.phys.,blume:2000_j.chem.phys.,hiyama:2012_phys.rev.a}. On the
other hand, descriptions of few-atom systems using soft-core potentials are
currently performed (see for example
Ref.~\cite{von_stecher:2009_phys.rev.a,kievsky:2011_few-bodysyst,gattobigio:2011_phys.rev.a}).
The paper is organized as follows. In Sect.~\ref{sec:method} a brief description of
the method is given. In Sect.~\ref{sec:applications}, applications of the method
to a system of nucleons and to helium atoms are shown. The conclusions are given
in Sect.~\ref{sec:conclusions}.
\section{The unsymmetrized HH expansion}\label{sec:method}
In the present section we give a brief description of the HH basis, showing
some properties that allow the use of unsymmetrized basis
elements to describe a system of identical particles.
\subsection{The HH basis set}
Following Refs.\cite{gattobigio:2011_phys.rev.c,gattobigio:2011_phys.rev.a,gattobigio:2009_phys.rev.a},
we start with the definition of the Jacobi
coordinates for an equal mass $A$ body system,
with Cartesian coordinates $\mathbf r_1 \dots \mathbf r_A$
\begin{equation}
\mathbf x_{N-j+1} = \sqrt{\frac{2 j}{j+1} } \,
(\mathbf r_{j+1} - \mathbf X_j)\,,
\qquad
j=1,\dots,N\,.
\label{eq:jc2}
\end{equation}
with $\mathbf X_j = \sum_{i=1}^j \mathbf r_{i}/j$ the center of mass of the first $j$ particles.
For a given set of Jacobi coordinates $\mathbf x_1, \dots, \mathbf x_N$,
we can introduce the hyperspherical coordinates. A useful tool to represent
hyperspherical coordinates is the hyperspherical
tree. This is a rooted-binary tree whose leaves represent the modules of Jacobi coordinates.
Once we introduce the hyperradius,
\begin{equation}
\rho = \bigg(\sum_{i=1}^N x_i^2\bigg)^{1/2}
= \bigg(2\sum_{i=1}^A (\mathbf r_i - \mathbf X)^2\bigg)^{1/2}
= \bigg(\frac{2}{A}\sum_{j>i}^A (\mathbf r_j - \mathbf r_i)^2\bigg)^{1/2} \,,
\label{}
\end{equation}
the moduli of the Jacobi coordinates live on an $(N-1)$-sphere of radius $\rho$, and
we can introduce $N-1$ hyperangles to express the Jacobi coordinates as functions
of the hyperradius. The choice is not unique, and different choices are
represented by different hyperspherical trees~\cite{kildyushov:1972_sov.j.nucl.phys.,kildyushov:1973_sov.j.nucl.phys.}.
The relation between a tree and the corresponding Jacobi coordinates is the
following: for each node of the tree, labelled by $a$, we have a hyperangle
$\phi_a$. The rule to reconstruct the modulus of a Jacobi coordinate reads: start
from the root node, and follow the path leading to the leaf corresponding to
that Jacobi coordinate; for each branch turning toward the left (right) we
multiply the hyperradius $\rho$ by the cosine (sine) of the hyperangle attached to
the branching point. As an example, in Eq.~(\ref{eq:A5_nonStandard}) we have a
tree choice for $A=5$ with the corresponding relations between Jacobi and hyperspherical
coordinates
\begin{equation}
\begin{aligned}
x_1 &= \rho \sin\phi_4\sin\phi_2 \\
x_2 &= \rho \sin\phi_4\cos\phi_2 \\
x_3 &= \rho \cos\phi_4\sin\phi_3 \\
x_4 &= \rho \cos\phi_4\cos\phi_3 \,, \\
\end{aligned}\quad\quad\quad
\begin{minipage}{0.25\linewidth}
\includegraphics[width=\linewidth]{nonStandardTree}
\end{minipage} \,.
\label{eq:A5_nonStandard}
\end{equation}
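As a concrete check of Eq.~(\ref{eq:jc2}), the following sketch builds the Jacobi coordinates from Cartesian positions and verifies the hyperradius identity numerically; the array layout and names are our own choices:
\begin{verbatim}
import numpy as np

def jacobi_coordinates(r):
    """Jacobi coordinates of Eq. (jc2) for equal masses.
    r : (A, 3) array of Cartesian positions; returns (N, 3), N = A-1."""
    A = len(r)
    N = A - 1
    x = np.zeros((N, 3))
    for j in range(1, N + 1):            # j = 1, ..., N
        X_j = r[:j].mean(axis=0)         # center of mass of first j bodies
        x[N - j] = np.sqrt(2.0 * j / (j + 1.0)) * (r[j] - X_j)
    return x

r = np.random.randn(5, 3)                # A = 5 particles
x = jacobi_coordinates(r)
rho2 = (x**2).sum()
rij2 = sum(((r[j] - r[i])**2).sum()
           for i in range(5) for j in range(i + 1, 5))
assert np.isclose(rho2, 2.0 / 5.0 * rij2)  # rho^2 = (2/A) sum_{j>i} r_ij^2
\end{verbatim}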
Different trees have different topologies; given a node $a$,
the left (right) branch connects the node to a sub-binary tree made up
of $N_a^{l(r)}$ nodes and $L_a^{l(r)}$ leaves. We can use this information to
construct useful topological numbers as
\begin{equation}
C_a = N_a^l + \frac{1}{2} L_a^l + \frac{1}{2}\,,
\label{eq:CtopologicalNumber}
\end{equation}
and
\begin{equation}
S_a = N_a^r + \frac{1}{2} L_a^r + \frac{1}{2}\,.
\label{eq:StopologicalNumber}
\end{equation}
The set of the hyperangles together with the direction of the Jacobi coordinates
$\hat{\mathbf x}_i =(\varphi_i,\theta_i)$ form the hyperangular coordinates
\begin{equation}
\Omega_N = (\hat {\bm x}_1, \dots, \hat {\bm x}_N, \phi_2, \dots, \phi_N) \,.
\label{}
\end{equation}
in terms of which the HH functions ${\mathcal
Y}_{[K]}(\Omega_N)$ are defined.
The subscript
$[K]$ stands for the set of $3N-1$ quantum numbers $l_1,\dots,l_N,m_1, \dots,m_N,
K_2, \dots, K_N$, with $K_N=K$ the grand-angular momentum.
They can be expressed
in terms of the usual harmonic functions $Y_{lm}(\hat {\bm x})$ and of the
Jacobi polynomials $P_n^{a,b}(z)$
\begin{equation}
{\mathcal Y}_{[K]}^{LM}(\Omega_N) =
\left[\prod_{j=1}^N Y_{l_jm_j}(\hat {\bm x}_j) \right]_{LM}
\left[ \prod_{a\in\text{nodes}}
{\mathcal P}_{K_a}^{\alpha_{K_a^l},\alpha_{K_{a}^{r\phantom{l}}}}(\phi_a)\right] \,,
\label{eq:hh}
\end{equation}
with
\begin{equation}
{\mathcal P}_{K_a}^{\alpha_{K_a^l},\alpha_{K_{a}^{r\phantom{l}}}}(\phi_a)
=
{\mathcal
N}_{n_a}^{\alpha_{K_{a}^{r\phantom{l}}},\alpha_{K_a^l}}
(\cos\phi_a)^{K_a^l} (\sin\phi_a)^{K_a^r}
P^{\alpha_{K_{a}^{r\phantom{l}}},\alpha_{K_a^l}}_{n_a}(\cos2\phi_a) \,,
\end{equation}
where we have defined
\begin{equation}
\alpha_{K_a^{l(r)}} = K_a^{l(r)} + N_a^{l(r)} + \frac{1}{2} L_a^{l(r)}\, .
\label{}
\end{equation}
The normalization factor reads
\begin{equation}
{\cal N}_{n}^{\alpha\beta} =
\sqrt{\frac{2(2n+\alpha+\beta+1) n!\,
\Gamma(n+\alpha+\beta+1)}{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}}\,.
\label{eq:norma}
\end{equation}
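For large quantum numbers the Gamma functions in Eq.~(\ref{eq:norma}) overflow in double precision; a log-domain evaluation, as sketched below, avoids this (a standard numerical device, not specific to the present work):
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def norm_factor(n, alpha, beta):
    """Normalization of Eq. (norma), evaluated via log-Gamma for stability."""
    log_n2 = (np.log(2.0 * (2 * n + alpha + beta + 1))
              + gammaln(n + 1) + gammaln(n + alpha + beta + 1)
              - gammaln(n + alpha + 1) - gammaln(n + beta + 1))
    return np.exp(0.5 * log_n2)
\end{verbatim}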
With the above definitions, the HH functions have well defined total
orbital angular momentum $L$ and $z$-projection $M$.
The standard choice of hyperspherical coordinates, and of the
corresponding HH, is represented in the left panel of
Fig.~\ref{fig:tree}; this is the one we use as our basis set.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.35\linewidth]{genericTree}%
\hspace{2cm}%
\includegraphics[width=0.35\linewidth]{treeBodyTree}
\end{center}
\caption{In the left panel we have drawn the standard hyperspherical tree;
this is the one used in the standard definition of the basis, and the one
used to calculate the two-body potential between particles at $\mathbf r_1$
and $\mathbf r_2$. In the right panel we have drawn the non-standard tree,
used to calculate the three-body force between particles at $\mathbf r_1$,
$\mathbf r_2$, and $\mathbf r_3$.}
\label{fig:tree}
\end{figure}
\subsection{Rotation matrices between HH basis elements of different Jacobi coordinates}
Here we are interested in a particular set of coefficients relating the
reference HH basis to a basis in which two adjacent
particles have been transposed.
In the transposition between particles $j,j+1$, only the Jacobi vectors
$\mathbf x_i$ and $\mathbf x_{i+1}$, with $i=N-j+1$, are different.
We label them $\mathbf x'_i$ and $\mathbf x'_{i+1}$, and explicitly they are
\begin{equation}
\begin{aligned}
\mathbf x'_{i} &= - \frac{1}{j} \,\mathbf x_i +
\frac{\sqrt{(j+1)^2-2(j+1)}}{j} \,\mathbf x_{i+1} \\
\mathbf x'_{i+1}&= \frac{\sqrt{(j+1)^2-2(j+1)}}{j} \,\mathbf x_i
+ \frac{1}{j} \,\mathbf x_{i+1} \,,
\end{aligned}
\label{eq:jc3}
\end{equation}
with $i=1,\ldots,N-1$. The corresponding moduli verify
${x'}^2_i+{x'}^2_{i+1}=x^2_i+x^2_{i+1}$.
Let us call ${\mathcal Y}^{LM}_{[K]}(\Omega^i_N)$ the HH basis element
constructed in terms of a set of Jacobi coordinates in which the $i$-th
and $(i+1)$-th Jacobi vectors are given by Eq.~(\ref{eq:jc3}) with all the
other vectors equal to the original ones (transposed basis).
The coefficients
\begin{equation}
{\mathcal A}^{i,LM}_{[K][K']}=\int d\Omega_N[{\mathcal Y}^{LM}_{[K]} (\Omega_N)]^*
{\mathcal Y}^{LM}_{[K']}(\Omega^i_N)\,,
\label{eq:ca1}
\end{equation}
are the matrix elements of a matrix ${\mathcal A}^{LM}_i$
that allows one to express the transposed HH basis
elements in terms of the reference basis.
The total angular momentum as well as the grand angular quantum number $K$
are conserved in the above integral ($K=K'$).
The coefficients ${\mathcal A}^{i,LM}_{[K][K']}$ form a very sparse matrix and
they can be calculated analytically using angular and
${\cal T}$-coupling coefficients (Kil'dyushov coefficients)
and the Raynal-Revai matrix elements~\cite{gattobigio:2011_phys.rev.c,gattobigio:2011_few-bodysyst.,%
gattobigio:2011_phys.rev.a} .
We are now interested in obtaining the rotation coefficients between the
reference HH basis and a basis in which the last
Jacobi vector is defined as $\mathbf x'_N=\mathbf r_j-\mathbf r_i$,
where without loss of generality we consider $j>i$.
A generic rotation coefficient of this kind can be
constructed as successive products of the ${\mathcal A}^{k,LM}_{[K][K']}$
coefficients.
Defining ${\mathcal Y}^{LM}_{[K]}(\Omega^{ij}_N)$ the HH basis element
constructed in terms of a set of Jacobi coordinates in which the
$N$-th Jacobi vector is defined $\mathbf x'_N=\mathbf r_j-\mathbf r_i$,
the rotation coefficient relating this basis to the reference basis
can be given in the following form
\begin{equation}
{\mathcal B}^{ij,LM}_{[K][K']}=\int d\Omega[{\mathcal Y}^{LM}_{[K]} (\Omega_N)]^*
{\mathcal Y}^{LM}_{[K]}(\Omega^{ij}_N) =
\left[{\mathcal A}^{LM}_{i_1}\cdots{\mathcal A}^{LM}_{i_n}\right]_{[K][K']} \,.
\label{eq:ca2}
\end{equation}
The particular values of the indices $i_1,\ldots,i_n$, labelling
the matrices ${\mathcal A}^{LM}_{i_1},\ldots,{\mathcal A}^{LM}_{i_n}$,
depend on the pair $(i,j)$.
The number of factors cannot be greater than $2(j-2)$ and it increases,
at maximum, by two units from $j$ to $j+1$.
The matrix
\begin{equation}
{\mathcal B}_{ij}^{LM}={\mathcal A}^{LM}_{i_1}\cdots{\mathcal A}^{LM}_{i_n}\,,
\label{eq:matrixb}
\end{equation}
is written as a product of the sparse matrices ${\mathcal A}^{LM}_{i}$'s,
a property which
is particularly well suited for a numerical implementation of the
potential energy matrix.
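Numerically, Eq.~(\ref{eq:matrixb}) is just a chain of sparse matrix products; a minimal sketch follows, in which the ${\cal A}_i$ matrices are assumed to be available, e.g. precomputed in scipy.sparse format:
\begin{verbatim}
import scipy.sparse as sp

def rotation_matrix(A, indices):
    """B_ij = A_{i_1} ... A_{i_n} (Eq. matrixb).
    A       : dict {i: sparse transposition-coefficient matrix A_i}
    indices : tuple (i_1, ..., i_n) selected by the pair (i, j)"""
    B = sp.identity(A[indices[0]].shape[0], format="csr")
    for i in indices:
        B = B @ A[i]          # product of sparse matrices stays sparse
    return B
\end{verbatim}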
\subsection{The two-body and three-body potential energy matrices}\label{sec:pot}
We consider the potential energy of an $A$-body system constructed in terms
of two-body interactions
\begin{equation}
V=\sum_{i<j} V(i,j) \;\;\; .
\end{equation}
In the case of a central two-body interaction, its matrix
elements in terms of the HH basis are
\begin{equation}
V_{[K][K']}(\rho)=\sum_{i<j}
\langle{\cal Y}^{LM}_{[K]}(\Omega_N)|V(i,j)|{\cal
Y}^{LM}_{[K']}(\Omega_N)\rangle \, .
\end{equation}
In each element $\langle{\cal Y}^{LM}_{[K]}|V(i,j)|{\cal Y}^{LM}_{[K']}\rangle$ the integral
is understood over all the hyperangular variables and depends parametrically on
$\rho$. Explicitly, for the pair $(1,2)$, the matrix elements of the matrix
$V_{12}(\rho)$ are
\begin{equation}
\begin{aligned}
& V^{(1,2)}_{[K][K']}(\rho)=
\langle{\cal Y}^{LM}_{[K]}(\Omega_N)|V(1,2)|{\cal Y}^{LM}_{[K']}(\Omega_N)\rangle= \cr
&\delta_{l_1,l^\prime_1}\cdots\delta_{l_N,l^\prime_N}
\delta_{L_2,L^\prime_2}\cdots\delta_{L_N,L^\prime_N}
\delta_{K_2,K^\prime_2}\cdots\delta_{K_N,K^\prime_N} \cr
&\times \int d\phi_N(\cos\phi_N\sin\phi_N)^2
\;{}^{(N)}{\cal P}^{l_N,K_{N-1}}_{K_N}(\phi_N)
V(\rho\cos\phi_N)\;{}^{(N)}{\cal P}^{l_N,K_{N-1}}_{K'_N}(\phi_N)\,.
\end{aligned}
\label{eq:v12}
\end{equation}
Using the rotation coefficients, a general term of the potential $V(i,j)$ results
\begin{equation}
V^{(i,j)}_{[K][K']}(\rho)=
\sum_{[K''][K''']}{\cal B}^{ij,LM}_{[K''][K]}{\cal B}^{ij,LM}_{[K'''][K']}
\langle{\cal Y}^{LM}_{[K'']}(\Omega^{ij}_N)|V(i,j)|{\cal
Y}^{LM}_{[K''']}(\Omega^{ij}_N)\rangle \,.
\label{eq:vij}
\end{equation}
or, in matrix notation,
\begin{equation}
V_{ij}(\rho)= [{\cal B}^{LM}_{ij}]^{t} \,V_{12}(\rho)\,{\cal B}^{LM}_{ij}\,.
\label{eq:mij}
\end{equation}
The complete potential matrix energy results
\begin{equation}
\sum_{i<j} V_{ij}(\rho)=\sum_{i<j}
[{\cal B}^{LM}_{ij}]^t\, V_{12}(\rho)\,{\cal B}^{LM}_{ij} \,.
\label{eq:vpot}
\end{equation}
Each term of the sum in Eq.(\ref{eq:vpot}) results in a product of sparse
matrices, a property which allows an efficient implementation of the matrix-vector
product, the key ingredient in the solution of the Schr\"odinger equation using
iterative methods.
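A sketch of such a matrix-vector product, in which the full potential matrix is never formed, is given below (names are illustrative):
\begin{verbatim}
import numpy as np

def potential_matvec(psi, B_list, V12):
    """Apply Eq. (vpot) to a vector: sum_{i<j} B_ij^t V12 B_ij psi.
    B_list : sparse rotation matrices B_ij, one per pair (i, j)
    V12    : potential matrix of the (1,2) pair at fixed rho"""
    out = np.zeros_like(psi)
    for B in B_list:
        out += B.T @ (V12 @ (B @ psi))   # three sparse/dense products
    return out
\end{verbatim}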
The three-body force used in the present work depends on the hyperradius
$\rho_{ijk}$ of a triplet of particles $\mathbf r_i,\mathbf r_j,\mathbf r_k$.
For an $A$-body system, there are $\binom{A}{3}$ three-body terms
\begin{equation}
V^{(3)} = \sum_{i<j<k} W(\rho_{ijk})\,,
\label{}
\end{equation}
and one of them is the force between the triplet $\mathbf r_1,\mathbf
r_2,\mathbf r_3$ for which we have $\rho^2_{123} = x^2_N+x^2_{N-1}$. This term
can be easily calculated on a hyperspherical basis set relative to a
non-standard hyperspherical tree with the branches attached to the leaves $x_N$ and
$x_{N-1}$ going to the same node, as shown in the right panel of
Fig.~\ref{fig:tree}. The transition between this tree and the standard tree is
simply given by the ${\cal T}$-coefficients
\begin{equation}
\begin{minipage}{3.5cm}
\includegraphics[width=\linewidth]{treeDX_4b}
\end{minipage}
=
\sum_{\tilde n_{N-1}}
{\cal T}^{\alpha_{K_{N-2}}\alpha_{l_{N-1}}\alpha_{l_N}}_{n_{N-1} \;\tilde
n_{N-1}\; K}
\begin{minipage}{3.5cm}
\includegraphics[width=\linewidth]{treeSX_4b}~\,,
\end{minipage}
\end{equation}
or
\begin{equation}
{\cal Y}^{LM}_{[K]}(\Omega_N)
=
\sum_{\tilde n_{N-1}}
{\cal T}^{\alpha_{K_{N-2}}\alpha_{l_{N-1}}\alpha_{l_N}}_{n_{N-1} \;\tilde
n_{N-1}\; K}
{\cal Y}^{LM}_{[\tilde K]}(\tilde\Omega_N) \,,
\label{}
\end{equation}
where all the variables with a tilde refer to the non-standard tree.
In fact, with this choice we simply have
\begin{equation}
\rho_{123} = \rho\cos\phi_N\,,
\label{}
\end{equation}
and the fixed-rho matrix elements of the matrix $W_{123}(\rho)$ are
\begin{equation}
\begin{aligned}
& \langle {\cal Y}^{LM}_{[\tilde K']}(\tilde\Omega_N) | W(\rho_{123}) | {\cal Y}^{LM}_{[\tilde K]}(\tilde\Omega_N) \rangle
= \cr
& \delta_{l'_1,l_1} \cdots \delta_{l'_N,l_N}
\delta_{L'_2,L_2} \cdots \delta_{L',L} \delta_{M',M}
\delta_{\tilde K'_2,\tilde K_2} \cdots \delta_{\tilde K'_{N-1},\tilde K_{N-1}}\times \cr
& \int (\cos\phi_N)^{C_K}(\sin\phi_N)^{S_K} d\phi_N\;
{\mathcal P}_{K'}^{\alpha_{\tilde K_{N-1}},\alpha_{K_{N-2}}}(\phi_N)
{\mathcal P}_{K}^{\alpha_{\tilde K_{N-1}},\alpha_{K_{N-2}}}(\phi_N)
W(\rho\cos\phi_N) \,,
\label{}
\end{aligned}
\end{equation}
where $C_K$ and $S_K$ are the topological quantum numbers
relative to the grand-angular-$K$ root node. In practice the matrix is extremely
sparse, being diagonal in all quantum numbers but the grand-angular
momentum.
The three-body force matrix in the standard basis is obtained by means of
the ${\cal T}$-coefficients
\begin{equation}
\begin{aligned}
& \langle {\cal Y}^{LM}_{[K']}(\Omega_N) | W(\rho_{123}) | {\cal Y}^{LM}_{[K]}(\Omega_N) \rangle
= \cr
& \sum_{\tilde n_{N-1}}
{\cal T}^{\alpha_{K_{N-2}}\alpha_{l_{N-1}}\alpha_{l_N}}_{n'_{N-1} \;\tilde
n_{N-1}\; K'}
{\cal T}^{\alpha_{K_{N-2}}\alpha_{l_{N-1}}\alpha_{l_N}}_{n_{N-1} \;\tilde
n_{N-1}\; K}
\langle {\cal Y}^{LM}_{[\tilde K']}(\tilde\Omega_N) | W(\rho_{123}) | {\cal
Y}^{LM}_{[\tilde K]}(\tilde\Omega_N) \rangle \,,
\end{aligned}
\end{equation}
which for all practical purposes reduces to a product of sparse matrices.
In order to calculate the other terms of the three-body force, we use the
matrices ${\cal A}_p^{LM}$, defined in Eq.~(\ref{eq:ca1}), that transpose
particles; with a suitable product of these sparse matrices
\begin{equation}
{\cal D}_{ijk}^{LM} = {\cal A}_{p_1}^{LM} \cdots {\cal A}_{p_m}^{LM}\,,
\label{}
\end{equation}
we can permute the
particles in such a way that $\mathbf x_N = \mathbf r_i-\mathbf r_j$,
and $\mathbf x_{N-1} = 2/\sqrt{3}(\mathbf r_k - (\mathbf r_i+\mathbf r_j)/2)$,
and $\rho^2_{ijk} = x_{N-1}^2 + x_N^2$, and the total three-body force reads
\begin{equation}
V^{(3)} = \sum_{i<j<k} [{\cal D}_{ijk}^{LM}]^t\, W_{123}(\rho) \,
{\cal D}_{ijk}^{LM} \,.
\label{}
\end{equation}
\section{Applications of the HH expansion up to six particles}\label{sec:applications}
In this section we present results for $A=3-6$ systems obtained by
a direct diagonalization of the Hamiltonian of the system. The
corresponding Hamiltonian matrix is obtained using
the following orthonormal basis
\begin{equation}
\langle\rho\,\Omega\,|\,m\,[K]\rangle =
\bigg(\beta^{(\alpha+1)/2}\sqrt{\frac{m!}{(\alpha+m)!}}\,
L^{(\alpha)}_m(\beta\rho)
\,{\text e}^{-\beta\rho/2}\bigg)
{\cal Y}^{LM}_{[K]}(\Omega_N) \,,
\label{mhbasis}
\end{equation}
where $L^{(\alpha)}_m(\beta\rho)$ is a Laguerre polynomial with
$\alpha=3N-1$ and $\beta$ a variational non-linear parameter.
The matrix elements of the Hamiltonian are obtained after
integrations in the $\rho,\Omega$ spaces. They depend on
the indices $m,m'$ and $[K],[K']$ as follows
\begin{equation}
\begin{aligned}
\langle m'\,[K']|H|\,m\,[K] \rangle = -\frac{\hbar^2\beta^2}{m}
( T^{(1)}_{m'm}-K(K+3N-2) T^{(2)}_{m'm}) \delta_{[K'][K]} \cr
+ \sum_{i<j} \left[
\sum_{[K''][K''']}{\cal B}^{ij,LM}_{[K][K'']}{\cal B}^{ij,LM}_{[K'''][K']}
V^{m,m'}_{[K''][K''']}\right] \cr
+ \sum_{i<j<k} \left[
\sum_{[K''][K''']}{\cal D}^{ijk,LM}_{[K][K'']}{\cal D}^{ijk,LM}_{[K'''][K']}
W^{m,m'}_{[K''][K''']}\right] \cr
\,.
\end{aligned}
\label{eq:hmm}
\end{equation}
The matrices $T^{(1)}$ and $T^{(2)}$ have an analytical form
and are given in Ref.~\cite{gattobigio:2011_phys.rev.c}. The matrix elements
$V^{m,m'}_{[K][K']}$ are obtained after integrating the matrix $V_{12}(\rho)$
in $\rho$-space whereas the matrix elements $W^{m,m'}_{[K][K']}$ are
obtained after integration of the matrix $W_{123}(\rho)$
(we will call the corresponding matrices $V_{12}$ and $W_{123}$, respectively).
Introducing the diagonal matrix $D$ such that
$\langle [K']\,|\,D\, | [K]\rangle = \delta_{[K],[K']} K(K+3N-2)$, and the identity
matrix $I$ in $K$-space, we can rewrite the Hamiltonian schematically as
\begin{equation}
H = -\frac{\hbar^2\beta^2}{m} (T^{(1)} \otimes I + T^{(2)}\otimes D )
+ \sum_{ij} [{\cal B}^{LM}_{ij}]^t\, V_{12}\,{\cal B}^{LM}_{ij}
+ \sum_{ijk} [{\cal D}^{LM}_{ijk}]^t\, W_{123}\,{\cal D}^{LM}_{ijk} \,,
\label{eq:schemH}
\end{equation}
in which the tensor product character of the kinetic energy is explicitly
given.
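As an illustration of this structure, a schematic assembly of
Eq.~(\ref{eq:schemH}) in Python with \texttt{scipy.sparse} might look as
follows. All inputs (the Laguerre kinetic matrices, the diagonal $D$, the
potential matrices and the rotation-coefficient matrices) are assumed to be
precomputed elsewhere; this is a sketch of the assembly step only, not of
their construction:
\begin{verbatim}
from scipy.sparse import identity, kron, csr_matrix

def assemble_hamiltonian(T1, T2, DK, V12, B_list, W123, D_list,
                         hbar2_m, beta):
    # T1, T2 : Laguerre kinetic matrices in m-space
    # DK     : diagonal matrix K(K+3N-2) in K-space
    # V12, W123 : potential matrices on the full (m x K) product space
    # B_list, D_list : sparse coupling matrices acting on K-space only
    Im = identity(T1.shape[0])
    IK = identity(DK.shape[0])
    H = -hbar2_m * beta**2 * (kron(T1, IK) + kron(T2, DK))
    for BK in B_list:                 # one term per pair (i, j)
        B = kron(Im, BK)
        H = H + B.T @ V12 @ B
    for DKp in D_list:                # one term per triplet (i, j, k)
        D3 = kron(Im, DKp)
        H = H + D3.T @ W123 @ D3
    return csr_matrix(H)
\end{verbatim}
The lowest eigenvalues of the resulting sparse matrix can then be extracted
with a Lanczos-type solver such as \texttt{scipy.sparse.linalg.eigsh}.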
\subsection{Nuclear systems}\label{sec:appli_nucleons}
As a first example we consider a nuclear system interacting through a
simple two-body potential, the Volkov potential
\begin{equation}
V(r)=V_R \,{\rm e}^{-r^2/R^2_1} + V_A\, {\rm e}^{-r^2/R^2_2} \,,
\end{equation}
with $V_R=144.86$ MeV, $R_1=0.82$ fm, $V_A=-83.34$ MeV, and $R_2=1.6$ fm.
The nucleons are considered to have the same mass chosen to be equal to the
reference mass $m$ and corresponding to
$\hbar^2/m = 41.47~\text{MeV\,fm}^{2}$.
With this parametrization of the potential, the
two-nucleon system has a binding energy $E_{2N}=0.54592\;$MeV and a
scattering length $a_{2N}=10.082\;$fm.
This potential has been used several times in the literature, which makes it
very useful for comparing different methods
\cite{barnea:1999_phys.rev.a,varga:1995_phys.rev.c,timofeyuk:2002_phys.rev.c,viviani:2005_phys.rev.c}. The use of central
potentials in general produces too much binding; in particular, the
$A=5$ system turns out to be bound. Conversely, the use of the
$s$-wave version of the potential produces a spectrum much closer
to the experimental situation. This is a direct consequence of the
weakness of the nuclear interaction in $p$-waves. Accordingly,
we analyze this version of the potential, the $s$-wave projected potential.
The results are obtained
after a direct diagonalization of the Hamiltonian matrix of
Eq.~(\ref{eq:hmm}), including $m_{max}+1$ Laguerre polynomials with a fixed
value of $\beta$, and all
HH states up to a maximum value of the grand angular momentum
$K_{max}$. The scale parameter $\beta$ can be used as a non-linear
parameter to study the convergence in the index $m=0,1,\ldots,m_{max}$, with
$m_{max}$ the maximum value considered.
We found that $20$ Laguerre polynomials (with proper
values of $\beta$) were sufficient for an accuracy of $0.1$\%
in the calculated eigenvalues.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\linewidth]{convergence_A.tex.eps}%
\includegraphics[width=0.5\linewidth]{convergence_B.tex.eps}
\end{center}
\caption{In the left panel we have the convergence of the $^3$H and $^3$He
binding energies as a function of $K$. The excited state of the alpha
particle $^4$He$^*$ is also shown. In the right panel we have the
convergence of the $^4$He, $^6$He and $^6$Li binding energies as a function
of $K$.}
\label{fig:convergences}
\end{figure}
The results of the present analysis are given in Fig.~\ref{fig:convergences}
where the convergence of the $A=3-6$ binding energies are given as a function
of $K$. In the left panel of Fig.~\ref{fig:convergences} the convergence for
the excited state $^4$He$^*$ of the $\alpha$ particle is also shown. For
$A=3,4$ a very extended HH expansion has been used with the maximum value of
$K=80$ and $K=40$ respectively. For $A=3$, the obtained results are 8.431 MeV
and 7.725 MeV for $^3$H and $^3$He respectively. For $A=4$, the ground state
binding energy converges at the level of $1-2$~keV for $K_{max}=40$. The
convergence of the excited state $^4$He$^*$ has been estimated at the level of
$50$~keV. Though the convergence was not completely achieved, the description
is close to the experimental observation of a $0^+$ resonance between the two
thresholds and centered 395 keV above the $p$-$^3$H threshold. Despite its
simplicity, the $s$-wave potential describes the $A=3,4$ systems in reasonable
agreement with experiment.
For the $A=6$ system a maximum value of $K=22$ has been used, which greatly
improves on previous attempts at using the HH basis in $A=6$ systems
\cite{novoselsky:1995_phys.rev.a,timofeyuk:2008_phys.rev.c}. The obtained
results are 33.016 MeV and 32.087 MeV for $^6$He and $^6$Li respectively. It
should be noticed that these states belong to the mixed symmetry
$[\bf{4}\,{\bf 2}]$ (without the Coulomb interaction). When the Coulomb
interaction between two nucleons is included the state belongs to the symmetry
$[\bf{2}]\otimes[{\bf 2}^2]$ and when it is included between three nucleons the
state belongs to the symmetry $[\bf{2}\,\bf{1}]\otimes[{\bf 2}\,\bf{1}]$.
These states are embedded in a very dense spectrum. In order to follow these
states in the projected Lanczos method a projection-purification procedure is
performed.
\subsection{Atomic systems}\label{sec:appli_atoms}
As an example of an atomic system we describe clusters of up to six $^4$He atoms.
The $^4$He-$^4$He interaction presents a
strong repulsion at short distances, below 5 a.u. This characteristic makes
a detailed description of systems with more than four atoms difficult.
Accordingly, we study small clusters of helium
interacting through soft-core two- and three-body potentials.
Following Refs.~\cite{nielsen:1998_j.phys.b,kievsky:2011_few-bodysyst,gattobigio:2011_phys.rev.a}
we use the gaussian two-body potential
\begin{equation}
V(r)=V_0 \,\, {\rm e}^{-r^2/R^2}\,,
\label{twobp}
\end{equation}
with $V_0=-1.227$ K and $R=10.03$~a.u.. In the following we use
$\hbar^2/m=43.281307~\text{(a.u.)}^2\text{K}$. This parametrization of the
two-body potential approximately reproduces the dimer binding energy $E_{2}$,
the atom-atom scattering length $a_0$ and the effective range $r_0$ given by
the LM2M2 potential. Specifically, the results for the gaussian potential are
$E_{2}=-1.296$ mK, $a_0=189.95$ a.u. and $r_0=13.85$ a.u., to be compared to
the corresponding LM2M2 values $E_{2}=-1.302$ mK, $a_0=189.05$ a.u. and
$r_0=13.84$ a.u.. As shown in Ref.~\cite{kievsky:2011_few-bodysyst}, the use of the
gaussian potential in the three-atom system produces a ground state binding
energy $E^{(0)}_{3}=150.4$ mK, which is appreciably larger than the LM2M2 helium
trimer ground state binding energy of $126.4$ mK.
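The quoted dimer energy is easy to reproduce independently. A minimal
finite-difference solution of the $s$-wave radial equation
$-(\hbar^2/m)\,u''+V(r)\,u=E\,u$ (for two identical atoms $\hbar^2/m$ plays
the role of $\hbar^2/2\mu$) gives $E_2\approx-1.3$ mK; the grid parameters
below are our own illustrative choices, not taken from the references:
\begin{verbatim}
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

hbar2_m = 43.281307             # (a.u.)^2 K
V0, R = -1.227, 10.03           # gaussian strength (K) and range (a.u.)
h, rmax = 0.25, 4000.0          # radial step and box size (a.u.), assumed
r = np.arange(h, rmax, h)
V = V0 * np.exp(-(r / R)**2)
off = -hbar2_m / h**2 * np.ones(len(r) - 1)
H = diags([V + 2.0 * hbar2_m / h**2, off, off], [0, 1, -1])
E2 = eigsh(H, k=1, which='SA', return_eigenvectors=False)[0]
print(f"E_2 = {1e3 * E2:.3f} mK")   # about -1.3 mK
\end{verbatim}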
In order to have a closer description to the $A=3$ system obtained with the
LM2M2 potential, we introduce the following three-body interaction
\begin{equation}
W(\rho_{ijk})=W_0 \,\, {\rm e}^{-2\rho^2_{ijk}/\rho^2_0}\,,
\label{eq:hyptbf}
\end{equation}
where $\rho^2_{ijk}=\frac{2}{3}(r^2_{ij}+r^2_{jk}+r^2_{ki})$ is the three-body
hyperradius in terms of the distances of the three interacting particles.
Moreover, the strength $W_0$ is fixed to reproduce the LM2M2 helium trimer binding
energy of $126.4$ mK. In Ref.~\cite{gattobigio:2011_phys.rev.a} a detailed analysis of this
force has been performed by varying the range $\rho_0$ between 4 and 16 a.u..
Here we present results for small clusters, up to $A=6$, formed by
$^4$He atoms using the soft two-body force plus the hyperradial
three-body force with parameters $W_0=0.422$ K and $\rho_0=14.0$ a.u..
The results are collected in Figs.~\ref{fig:A3F2},\ref{fig:A4F2} where we
show the convergence in terms of $K$ of the ground state and first excited
state of the bosonic helium clusters. Starting from $A=3$ the bosonic spectrum
is formed by two states, one deep and one shallow close to the threshold
formed by the $A-1$ system with one atom far away.
The calculations have been performed up to $K=40$ in $A=4$, $K=24$ in $A=5$ and
$K=22$ in $A=6$. From the figure we can observe that the ground state
binding energy, $E^{(0)}_{A}$, has a very fast convergence in terms of $K$.
The convergence of
$E^{(1)}_{A}$ is much slower than for the ground state; however, with the
extended basis used it has been determined with an accuracy well below $1\%$.
The results confirm
previous analyses in the four body sector that the lower Efimov state in the
$A=3$ system produces two bound states, one deep and one shallow. Here, we
have extended this observation up to the $A=6$ system. Specifically we have
obtained the following ground state energies: $E^{(0)}_4=568.8$ mK,
$E^{(0)}_5=1326.6$ mK, and $E^{(0)}_6=2338.9$ mK, and first excited state
energies: $E^{(1)}_4=129.0$ mK, $E^{(1)}_5=574.9$ mK, and $E^{(1)}_6=1351.6$ mK.
It is interesting to compare the results obtained using the soft-core
representation of the LM2M2 potential with the results of
Refs.~\cite{lewerenz:1997_j.chem.phys.,blume:2000_j.chem.phys.}
obtained using the original LM2M2 interaction. For the
ground state the agreement is around $2\%$ for $A=4,5$ and around $1\%$ for
$A=6$. The agreement is worse for the excited states; however, the results from
Ref.~\cite{blume:2000_j.chem.phys.} were obtained using approximate solutions of the
adiabatic hyperspherical equations. The recent, and very accurate, results
of Ref.~\cite{hiyama:2012_phys.rev.a} for $A=4$ ($E^{(0)}_4=558.98$ mK and
$E^{(1)}_4=127.33$ mK) show good agreement, in particular for the first
excited state.
Finally, it is possible to analyze the ratios $E^{(1)}_{A}/E^{(0)}_{A-1}$.
In the case of Efimov physics these ratios present a universal character.
The He-He potential is not located exactly at the unitary limit
(infinite value of $a_0$) but it is close to it.
Using the soft potential models these ratios are:
$E^{(1)}_{4}/E^{(0)}_{3}=1.020$,
$E^{(1)}_{5}/E^{(0)}_{4}=1.011$ and
$E^{(1)}_{6}/E^{(0)}_{5}=1.018$. The ratios between
the trimer ground state and the ground states of the $A=4,5,6$ systems are
$E^{(0)}_{4}/E^{(0)}_{3}=4.5$, $E^{(0)}_{5}/E^{(0)}_{3}=10.5$ and
$E^{(0)}_{6}/E^{(0)}_{3}=18.5$, respectively. These ratios are in good
agreement with those given in
Refs.~\cite{von_stecher:2009_natphys,deltuva:2010_phys.rev.a,von_stecher:2010_j.phys.b:at.mol.opt.phys.}.
The overall agreement of the $A=4,5,6$ systems between LM2M2 and the soft
potential model gives a further indication that these systems are a nice
realization of what is called Efimov physics.
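These ratios follow directly from the energies quoted above, as the trivial
check below shows (differences in the last digit can occur because the quoted
energies are themselves rounded):
\begin{verbatim}
E0 = {3: 126.4, 4: 568.8, 5: 1326.6, 6: 2338.9}  # ground states (mK)
E1 = {4: 129.0, 5: 574.9, 6: 1351.6}             # first excited (mK)
for A in (4, 5, 6):
    print(f"E1({A})/E0({A-1}) = {E1[A] / E0[A - 1]:.3f}, "
          f"E0({A})/E0(3) = {E0[A] / E0[3]:.1f}")
\end{verbatim}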
\begin{figure}[h]
\includegraphics[width=0.5\linewidth]{a3_vs_a4.tex.eps}
\includegraphics[width=0.5\linewidth]{a4_vs_a5.tex.eps}
\caption{The trimer bound state and tetramer first excited state
(left panel) and tetramer bound state and pentamer first excited state
(right panel) as functions of $K$.}
\label{fig:A3F2}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.5\linewidth]{a5_vs_a6.tex.eps}
\includegraphics[width=0.5\linewidth]{a6.tex.eps}
\caption{The pentamer bound state and hexamer first excited state
(left panel) and hexamer bound state (right panel) as functions of $K$.}
\label{fig:A4F2}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In this work we have shown results using the HH expansion in the description
of $A$-body systems with $A=3-6$. The basis has not been symmetrized or
antisymmetrized as required for a system of identical particles. However, the
eigenvectors of the Hamiltonian have well defined permutation symmetry.
The benefit of the direct use of the HH basis rests on the particularly simple
form used to represent the potential energy.
We have limited the analysis to central potentials. In a first
example we have described a system of nucleons interacting through the Volkov
potential, used several times in the literature.
Though the use of a central potential leads to an unrealistic description of
light nuclei structure, the study has served to analyze the
characteristics of the method: the capability of the diagonalization
procedure to construct the proper symmetry of the state and the particular structure,
in terms of products of sparse matrices, of the Hamiltonian matrix. The
success of this study makes feasible
the extension of the method to treat interactions depending on spin and isospin
degrees of freedom, as in the realistic NN potentials. A preliminary analysis
in this direction has been done~\cite{gattobigio:2009_few-bodysyst.}.
In a second example we have
studied the possibility of calculating bound and excited states in a bosonic
system consisting of helium atoms interacting through soft two- and three-body
forces. The potential model has been adjusted to approximate the description of
small helium clusters interacting through one of the realistic helium-helium
interactions, the LM2M2 potential. After the
direct diagonalization of the $A$-body system we have observed that clusters
with $A=3,4,5,6$ atoms present a deep bound state and a shallow bound state
just below the energy of the $A-1$ system.
Since the He-He potential predicts a large two-body scattering length we have
studied the universal ratios $E^{(1)}_{A}/E^{(0)}_{(A-1)}$.
These ratios have been studied in detail in
the $A=4$ case (see Refs.~\cite{hammer:2007_eur.phys.j.a,deltuva:2010_phys.rev.a}).
Estimates have been obtained also for bigger
systems~\cite{von_stecher:2010_j.phys.b:at.mol.opt.phys.}. Our calculations, obtained for one
particular value of the ratio $r_0/a$, are in agreement with those results. An
analysis of the universal ratios as $a\rightarrow\infty$ is at present under
way.
\section{Introduction}
Rotating neutron stars are the main candidates for sources of persistent, periodic
gravitational radiation detectable by ground-based, long-baseline
gravitational wave interferometers. Time varying quadrupole moments in (and
thus gravitational-wave emission from) these sources can result from
deformations of the solid crust (and possibly a solid core) supported by
elastic stresses \cite{Bildsten_1998, UCB_2000, Owen_2005, HaskellEA_2007,
Lin2007, JM_Owen_2013}, deformations of various parts of the star supported by
magnetic stresses \cite{B_G_1996, Cutler_2002, Melatos_Payne_2005,
Vigelius_Melatos_2008, HaskellEA_2008, Mastrano_Melatos_2012}, or free
precession \cite{Jones_Andersson_2002, Jones_2012} or long-lived oscillation
modes of the entire star \cite{OwenEA_1998, AKS_1999, Bondarescu2007,
Haskell2012, Bondarescu2013}.
Neutron stars in accreting binary systems are an important sub-class
of periodic gravitational wave sources.
Accretion may trigger or enhance the aforementioned gravitational-wave
emission mechanisms, creating or driving the quadrupole moment toward its maximum
value through thermal, magnetic, or other effects \cite{Bildsten_1998,
AKS_1999, Nayyar_Owen_2006, Melatos2007, vEysden_Melatos_2008,
Vigelius_Melatos_2009, AnderssonEA_2011}.
If a balance is assumed between the
gravitational radiation-reaction torque and the accretion
torque~\cite{Papaloizou_Pringle_1978, Wagoner_1984, Bildsten_1998}, then the
strongest emitters of continuous gravitational waves are predicted to be sources in low-mass X-ray binaries
(\acsp{LMXB}\acused{LMXB}), specifically those accreting at
the highest rate~\cite{Verbunt_1993, WattsEA_2008}.
Given the estimated ages (${\sim}10^{10}$ yrs) and observed accretion
rates of \acsp{LMXB} (reaching near the Eddington limit of
$\dot{M}_{\text{Edd}} = 2 \times 10^{- 8} M_{\odot} \text{yr}^{- 1}$),
accretion is expected to spin up the neutron star beyond the breakup
frequency (${\sim}1.5$ kHz for standard neutron star equations-of-state
\cite{CST_1994,UBC_2000}). However, measured spin frequencies of LMXB
neutron stars (from X-ray pulsations or thermonuclear bursts) so far range
only from 95 to 619 Hz \cite{ChakrabartyEA_Nature_2003, Bhattacharyya2007, WattsEA_2008, GallowayEA_2010}. The spin
frequency cut-off lies well below breakup, and suggests the existence
of a spin-down torque to balance the spin-up from accretion. A possible explanation,
proposed by \citet{Papaloizou_Pringle_1978} and advanced by \citet{Wagoner_1984} and
\citet{Bildsten_1998}, is gravitational radiation. The torque-balance
scenario implies a relation between the X-ray flux, spin frequency and gravitational wave strain; the more luminous the X-ray source, the
greater the strain. \ac{ScoX1}, the brightest \ac{LMXB}, is therefore
a promising target for periodic gravitational wave searches.
The global network of kilometer-scale, Michelson-type laser
interferometers is sensitive to gravitational waves in the $O(10-1000)$ Hz
frequency band. The \ac{LIGO} detectors achieved design sensitivity
during the fifth science run (S5), between November 2005 and October
2007 \cite{LIGO_2009, LIGO_S5cal_Abadie2010}. \ac{LIGO} consists of
three Michelson interferometers (one with 4 km arms at Livingston,
Louisiana, and two co-located at Hanford, Washington, with 4 km and 2
km arms) separated by ${\sim}3000$ km. Together the \ac{LIGO}
\cite{LIGO}, Virgo \cite{VIRGO, Virgo2} and GEO600 \cite{GEO2,
GEO600} detectors form a world-wide network of broad-band
interferometric gravitational wave observatories in an international effort to
directly detect gravitational wave emission for the first
time.
The \ac{LIGO} Scientific Collaboration has so far published three main
types of searches for periodic or continuous gravitational wave
emission: targeted, directed and all-sky searches. Targeted searches
are the most sensitive since they have the most tightly constrained parameter
space. They target known sources, such as radio pulsars, with very well-constrained sky position, spin frequency and frequency evolution, and binary parameters (if any). These searches are fully-coherent, requiring
accurately known prior phase information, making them
computationally expensive to perform over large regions of parameter
space~\cite{BradyEA_1998,LIGO_2004_PRD69,LIGO_S5_KnownPulsars_2010}. Targeted
\ac{LIGO} and Virgo searches have already set astrophysically interesting upper
limits (e.g. beating theoretical indirect limits) on some pulsar
parameters such as the gravitational wave strain from the Crab
pulsar~\cite{LIGO_S5_Crab_2008,LIGO_S5_KnownPulsars_2010} and the Vela pulsar
\cite{LIGO_Vela_2011}. Directed searches aim at a particular sky location but search for unknown
frequency (and/or frequency evolution). In most cases so far, directed searches
have used a fully-coherent approach and approached the limits of
computational feasibility. The search directed at the (possible) neutron star in
the direction of the supernova remnant Cassiopeia A was able to beat indirect limits
\cite{LIGO_CasA_2010}. The third type of continuous gravitational wave
search, the wide parameter-space search, is also
computationally intensive. They can involve searching over the entire
sky or any comparably large parameter space, and usually employ
semi-coherent approaches, combining short coherently analyzed segments
in an incoherent manner. This process is tuned to balance the
trade-off between reduced computational load and reduced
sensitivity. The all-sky searches presented
in~\cite{LIGO_S4_AllSKy_2008,LIGO_S4_EaH_2009,LIGO_S5i_Powerflux_2009,
LIGO_S5_EaH_2009,LIGO_S5full_Powerflux_2012} target isolated neutron star
sources (i.e. those not in binary systems). There is also an
all-sky search for neutron stars in binary systems currently being run
on \ac{LIGO}'s latest S6 data run~\cite{Goetz_Riles_TwoSpect_2011}.
Directed searches can also be made for known accreting neutron stars in
binaries, and \ac{LIGO} has previously conducted two of these searches for
\ac{ScoX1}. The first, a coherent analysis using data from the second
science run (S2), was computationally limited to the analysis of
six-hour data segments from the \ac{LIGO} interferometers, and placed
$95\%$ upper limits on the wave strain of $\ho^{95\%} \approx 2\times
10^{-22}$ for two different 20 Hz bands \cite{LIGO_S2_ScoX1_2007}. This
search utilized a maximum likelihood detection statistic based on matched
filtering called the $\mathcal{F}$-statistic\xspace~\cite{JKS1998}. The second, a directed version of
the all-sky, stochastic, cross-correlation analysis, known as the
``Radiometer'' search, was first conducted on all 20 days of data from
the S4 science run ~\cite{LIGO_S4_ScoX1_2007}, and later on the
${\sim}2$ year S5 data reporting a $90\%$ upper limit on the \ac{rms} strain
$h_\text{rms}^{90\%} > 5\times10^{-25}$ (over the range 40 -- 1500 Hz, with the minimum around 150 Hz) ~\cite{LIGO_S5stoch_ScoX1_2011}.
Semi-coherent search methods provide a compromise between sensitivity and the computational cost of a fully coherent search. They should be the most sensitive at fixed computing cost \cite{Brady_Creighton_2000,Prix_Shaltev_2012}. A fast and robust search strategy for the detection of signals from binary
systems in gravitational wave data was proposed in~\cite{MW2007}. The signal from a source in a binary system is phase- (or frequency-) modulated due to its
periodic orbital motion, forming ``sidebands'' in the gravitational wave frequency spectrum. In searching detector data, this technique, called the sideband search, uses the same coherent ($\mathcal{F}$-statistic\xspace) stage as the previous coherent (S2) search. It then combines the frequency-modulated sidebands arising in $\mathcal{F}$-statistic data in a (computationally inexpensive) incoherent stage, reducing the need for a large template bank. This approach is based on a method that has successfully been employed in searches for binary pulsars in radio data~\cite{RansomEA_2003}.
Here we develop this sideband technique into a search pipeline and
present a detailed description of how it is applied to gravitational
wave detector data, as well as the expected sensitivity. The paper is
set out as follows. Section \ref{sec:LMXBs} briefly describes the
astrophysics of \acp{LMXB} and their predicted gravitational wave
signature. The search algorithm is described in detail in Section
\ref{sec:sideband}. Section \ref{sec:params} outlines the parameter
space of the search allowing primary sources to be identified in Section \ref{sec:sources}. The statistical
analysis of the results of the search is described in Section
\ref{sec:stats}, along with a definition of the upper limits of the search.
The sensitivity of the search is discussed in Section \ref{sec:sensitivity}. A brief
summary is provided in Section \ref{sec:discussion}, with a discussion of the limitations and future prospects of the search.
\section{Low Mass X-ray Binaries}\label{sec:LMXBs}
\acp{LMXB} are stellar systems where a low-magnetic-field ($\lesssim
10^9$ G) compact object (primary) accretes matter from a lower-mass
(secondary) companion ($ < 1 M_\odot$) \cite{Verbunt_1993,Tauris_vandenHeuvel_ch16}. The compact objects in \ac{LMXB} systems can be black
holes, neutron stars or white dwarfs. For gravitational wave emission we are
interested in \acp{LMXB} with neutron stars as the primary mass
(typically $\sim{1.4}M_\odot$), since neutron stars can sustain the largest
quadrupole moment.
Observations of thermal X-ray emission from the inner region of the
accretion disk provide a measurement of the accretion rates in
\acp{LMXB}. The range of observed accretion rates is broad, ranging
from $10^{- 11} M_{\odot} \text{yr}^{- 1}$ to the Eddington limit,
$\dot{M}_{\text{Edd}} = 2 \times 10^{- 8}M_{\odot} \text{yr}^{- 1}$
\cite{UBC_2000}. Some \acp{LMXB} also exhibit periodic pulsations or
burst type behavior, and so provide a means of measuring the spin
frequency $\spinf$ of the neutron star in the system. The measured
values of $\spinf$ for these systems lie in the range $95 \leq \spinf \leq
619$ Hz \cite{ChakrabartyEA_Nature_2003, Bhattacharyya2007,
GallowayEA_2010}. The broad range of accretion rates coupled with
the estimated age of these systems (${\sim}10^{10}$ years implied by
evolutionary models~\cite{vanParadijs_White_1995, UBC_2000}) would
suggest a greater upper limit on observed spin frequency since
accretion exerts substantial torque on the neutron star. However, none
of these systems have yet been observed to spin at or near the breakup
frequency $\nu_b{\sim}1.5$ kHz ($\nu_b \gtrsim$ 1 kHz for most equations
of state \cite{CST_1994, UBC_2000}). The maximum observed spin
frequency falls far below the theorized breakup frequency and suggests
a competing (damping) mechanism to the spin-up caused by
accretion. One explanation for the observed spin frequency
distribution of \acp{LMXB} is that the spin-up from the accretion
torque is balanced by a gravitational wave spin-down torque \cite{Wagoner_1984,
Bildsten_1998}. Since the gravitational wave spin-down torque scales like
$\spinf^5$ (see Eq.~\ref{eq:N_gw} below), a wide range of accretion
rates then leads to a rather narrow range of equilibrium rotation
rates, as observed.
\subsection{Gravitational wave emission}\label{subsec:GWs}
Using the torque balancing argument from \citet{Wagoner_1984} and
\citet{Bildsten_1998}, we can estimate the gravitational wave strain
amplitude emitted from accreting binary systems from their observable
X-ray flux. This is a conservative upper limit as it assumes all
angular momentum gained from accretion is completely converted into
gravitational radiation.
The intrinsic strain amplitude $\ho$ for a system with angular spin frequency
$\spinomega=2\pi\spinf$, at a distance $d$ from an observer, emitting gravitational waves via a
mass quadrupole can be expressed as
\begin{equation}
\ho=\frac{4 G Q}{c^4 d}\spinomega^2\,,
\end{equation}
where $G$ is the gravitational constant, $c$ is the speed of light and
the quadrupole moment $Q=\epsilon I$ is a function of the ellipticity
$\epsilon$ and moment of inertia $I$ \cite{JKS1998}. This can be
expressed in terms of the spin-down (damping) torque $\Ngw$ due to
gravitational radiation giving
\begin{equation}
\ho^2 = \frac{5 G}{8c^3 d^2\spinomega^2} \Ngw, \label{eq:h02}
\end{equation}
where
\begin{equation}
\Ngw = \frac{32GQ^2}{5c^5}\spinomega^5. \label{eq:N_gw}
\end{equation}
The accretion torque $\Na$ applied to a neutron star of mass $M$ and radius
$R$ accreting at a rate $\dot{M}$ is given by
\begin{equation}
\Na=\dot{M}\sqrt{GMR} \label{eq:Na}.
\end{equation}
Assuming that the X-ray luminosity can be written as
$\Lx=GM\dot{M}/R$, the accretion rate $\dot{M}$ can be expressed as a
function of the X-ray flux $\Fx$, such that
\begin{equation}
\dot{M}=\frac{4\pi R d^2}{GM} \Fx \label{eq:Mdot},
\end{equation}
since $\Lx=4\pi d^2 \Fx$. In equilibrium, where gravitational
radiation balances accretion torque, $\Ngw=\Na$. The square of the
gravitational wave strain from Eq.~\ref{eq:h02} can then be expressed
in terms of the observable X-ray flux such that
\begin{equation} \label{eq:h02fx}
\ho^2 = \frac{5}{3}\frac{\Fx}{\spinf} {\left(\frac{GR^3}{M}\right)}^{1/2}.
\end{equation}
Selecting fiducial values for the neutron star mass, radius, X-ray flux,
and spin frequency (around the middle of the observed range), we can
express the equilibrium strain upper limit $\htorq$ in terms of $\spinf$
and $\Fx$ via
\begin{eqnarray}
\htorq &=& 5.5\times 10^{- 27}
\left(\frac{F_{\text{x}}}{F_*}\right)^{1/2}\left(\frac{R}{10\text{km}}\right)^{
3/4} \left(\frac{1.4M_{\odot}}{M}\right)^{1/4}
\left(\frac{300\text{Hz}}{\spinf}\right)^{1/2},
\nonumber \\ \label{eq:hc_Fx_300}
\end{eqnarray}
where $F_*=10^{-8}\text{ erg cm}^{-2}\text{ s}^{-1}$. If the system emits gravitational waves via current quadrupole radiation instead, as is the
case with $r$-mode oscillations, the relation between gravitational wave frequency and spin frequency differs. In this case the preceding equations are modified slightly, requiring roughly $\spinf \rightarrow (2/3) \spinf$ \cite{Owen2010}. However these expressions, and the rest of the analysis except where otherwise noted, do not change if expressed in terms of the gravitational-wave frequency.
The resulting relation in Eq. \ref{eq:hc_Fx_300} implies that \acp{LMXB} that accrete close to the
Eddington limit are potentially strong gravitational wave emitters. Of these
potentially strong sources, \ac{ScoX1} is the most promising due to its observed X-ray flux ~\cite{WattsEA_2008}.
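For reference, Eq.~\ref{eq:hc_Fx_300} can be evaluated with a few lines of
Python; the function below simply encodes the quoted scaling, and the example
inputs are placeholders rather than measured values for any particular source:
\begin{verbatim}
import numpy as np

def h_torq(Fx, nu, R_km=10.0, M_msun=1.4):
    # equilibrium strain of Eq. (hc_Fx_300); Fx in erg/cm^2/s, nu in Hz
    return (5.5e-27 * np.sqrt(Fx / 1e-8) * (R_km / 10.0)**0.75
            * (1.4 / M_msun)**0.25 * np.sqrt(300.0 / nu))

print(h_torq(Fx=1e-8, nu=300.0))   # recovers 5.5e-27 by construction
\end{verbatim}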
\section{Search Algorithm}\label{sec:sideband}
In this section we define our detection statistic and show how it
exploits the characteristic frequency-modulation pattern inherent to
sources in binary systems.
Fully-coherent, matched-filter searches for continuous gravitational
waves can be described as procedures that maximize the likelihood
function over a parameter space. The amplitude parameters
(gravitational wave strain amplitude $\ho$, inclination $\iota$,
polarization $\psi$ and reference phase $\phio$) can, in general, be
analytically maximized, reducing the dimensions of this parameter
space ~\cite{JKS1998}. These parameters define the signal amplitudes
in our signal model. Analytic maximization leaves the phase-evolution
parameters (gravitational wave frequency $\fo$ and its derivatives
$f^{(k)}$ and sky position $[\alpha,\delta]$) to be numerically
maximized over. Numerical maximization is accomplished through a
scheme of repeated matched-filtering performed over a template bank of
trial waveforms defined by specific locations in the phase parameter
space, which is typically highly computationally expensive
\cite{BradyEA_1998,LIGO_S2_ScoX1_2007,WattsEA_2008}.
The method we outline here makes use of the fact that we know the
sky position of our potential sources and, hence, the phase evolution
due to the motion of the detector can be accurately accounted for. We
also know that the phase evolution due to the binary motion of the
source will result in a specific distribution of signal power in
the frequency domain. This distribution has the characteristics that
signal power is divided amongst a finite set of frequency-modulation
sidebands. The number of sidebands and their relative frequency spacing
can be predicted with some knowledge of the binary
orbital parameters.
In order to avoid the computational limits imposed by a fully-coherent
parameter space search, we propose that a single fully-coherent
analysis stage, accounting for detector motion only, be followed by
a single incoherent stage in which the signal power contained within
the frequency-modulated sidebands is summed to form a new detection
statistic. This summing procedure is accomplished via the convolution
of an approximate frequency domain power template with the output of
the coherent stage.
The three main stages of the search, the $\mathcal{F}$-statistic\xspace, sideband template,
and $\mathcal{C}$-statistic\xspace, are graphically illustrated in Figure~\ref{fig:search}.
In this noise-free example, the frequency-modulation sidebands are
clearly visible. The $\mathcal{F}$\xspace-statistic is also amplitude-modulated due
to the daily variation of the detector antenna response, resulting in the
amplitude-modulation applied to each frequency-modulation sideband.
The second panel represents the approximate frequency domain template,
a flat comb function with unit amplitude teeth (the spikes or delta functions). When
convolved with the $\mathcal{F}$\xspace-statistic in the frequency domain we obtain
the $\mathcal{C}$-statistic\xspace shown in the right-hand panel. The maximum power is
clearly recovered at the simulated source frequency.
\begin{figure*}
\includegraphics[width=2\columnwidth]{figure1}
\caption{\label{fig:search}Graphical illustration of the sideband search
pipeline, showing the frequency-modulated $\mathcal{F}$-statistic\xspace (left, red), the sideband
template (middle, black), and their convolution, known as the $\mathcal{C}$-statistic\xspace{} (right,
blue). In this noise free case, a signal of $\fo = 200$ Hz with amplitude
$\ho=1$ in a system with $\asini = 0.005$ s, $\Porb = 7912.7$ s, sky position
$\alpha = 2.0$ rad, $\delta=1.0$ rad, and phase parameters $\psi = 0.2 \text{
rad},\, \cos\iota = 0.5,\, \phi_0 = 1$ rad, was simulated for a 10 day
observation span. Frequency increases along the horizontal axis, which ranges
from 199.998 to 200.002 Hz on each plot. In each case the location of the
injected signal at 200 Hz is indicated by the vertical dashed black line.}
\end{figure*}
The following sub-sections discuss each of the search components in
more detail. Section~\ref{subsec:phase} presents the phase model used
to characterize the gravitational wave signal from a source in a
binary system. The signal model is introduced in
Sec.~\ref{subsec:signal}. The $\mathcal{F}$-statistic\xspace{} is introduced in
Section~\ref{subsec:fstat} and its behavior as a function of search
frequency is described in Sec. \ref{subsec:fstatmm}. Section
\ref{subsec:fmod} then goes on to describe how matching a filter for
an isolated neutron star system to a signal from a source in a binary
system results in frequency-modulated sidebands appearing in the
output of the $\mathcal{F}$-statistic\xspace. The detection statistic for gravitational wave
sources in binary orbits is fully described as the incoherent sum of
frequency-modulated $\mathcal{F}$-statistics\xspace in Sec.~\ref{subsec:Cstat}. The simplest,
unit amplitude, sideband template, and its justification over a more
realistic template, are discussed in Sec.~\ref{subsec:template}. A
more sensitive implementation, incorporating an approximate binary
phase model in the calculation of the $\mathcal{F}$-statistic\xspace and reducing the width
of the frequency-modulated sideband pattern by the fractional errors
on the semi-major axis parameters, is discussed in
Sec.~\ref{subsec:demod}. Beginning in this section and continuing in
the following sections, we have made a distinction between the
intrinsic values of a search parameter $\bf{\theta_\submath{0}}$
(denoted with a subscript zero) and the observed values $\bf{\theta}$
(no subscripts).
\subsection{Phase model}\label{subsec:phase}
The phase of the signal at the source can be modeled by a Taylor
series such that
\begin{equation}\label{eq:phitaylor}
\Phi(\tau) = 2\pi\sum^s_{k=0}\frac{f^{(k)}}{(k+1)!}(\tau-\tau_{0})^{k+1},
\end{equation}
where $f^{(k)}$ represents the $k^\text{th}$ time derivative of the
gravitational wave frequency evaluated at reference time $\tau_0$. For
the purposes of this work we restrict ourselves to a monochromatic
signal and hence set $f^{(k)}=0$ for all $k>0$ and define
$f_0=f^{(0)}|_{\tau_0}=f(\tau_0)$ as the intrinsic frequency. We
discuss this choice in Sec.~\ref{subsec:spinf}. The phase received at
the detector is $\Phi[\tau(t(t'))]$, where we define the retarded
times measured at the \ac{SSB} and detector as $t$ and $t'$
respectively. The relation between $t$ and $t'$ is a function of
source sky position relative to detector location and, since we only
concern ourselves with sources of known sky position, we assume that
the effects of phase contributions from detector motion can be exactly
accounted for. For this reason we work directly within the \ac{SSB}
frame. The relationship between the source and retarded times for a
non-relativistic eccentric binary orbit is given
by~\cite{Taylor_Weisberg_1989}
\begin{equation}\label{eq:bintime}
t = \tau +\atrue\left[\sqrt{1-e^{2}}\cos{\omega}\sin{E}+\sin{\omega}\left(\cos{E}-e\right)\right],
\end{equation}
where $\atrue$ is the light crossing time of the semi-major axis
projected along the line of sight. The orbital eccentricity is defined
by $e$ and the argument of periapse, given by $\omega$, is the angle
in the orbital plane from the ascending node to the direction of
periapse. The variable $E$ is the eccentric anomaly defined by
$2\pi(\tau-\tp)/P=E-e\sin{E}$, where $\tp$ is the time of passage through
periapse (the point in the orbit where the two bodies are at their
closest) and $P$ is the orbital period.
It is expected that dissipative processes within \acp{LMXB} drive the
orbits to near circularity. In the low eccentricity limit $e\ll1$, we
obtain the following expression
\begin{eqnarray}\label{eq:bintime2}
t &=& \tau + \atrue\left\{\sin\left[\Omega(\tau-\ta)\right]+\frac{e\cos\omega}{2}\sin\left[2\Omega(\tau-\tp)\right]\right.\nonumber\\
&&\left.+\frac{e\sin\omega}{2}\cos\left[2\Omega(\tau-\tp)\right]\right\}+\mathcal{O}(e^{2}),
\end{eqnarray}
where $\Omega=2\pi/P$ and we have used the time of passage through the
ascending node $\ta=\tp-\omega/\Omega$ as our reference phase in the
first term. For circular orbits, where $e=0$, the expression reduces
to only this first term. Using $\ta$ is sensible in this case since
$\tp$ and $\omega$ are not defined in a circular orbit. Note that the
additional eccentric terms are periodic at multiples of twice the
orbital frequency. Using Eq. \ref{eq:bintime2}, we would expect to
accumulate timing errors of order $\mu$s for the most eccentric known
\ac{LMXB} systems. We shall return to this feature in
Sec.~\ref{subsec:ecc}.
To write the gravitational wave phase as a function of \ac{SSB} time, we invert
Eq.~\ref{eq:bintime2} to obtain $\tau(t)$. The binary phase can be
corrected for in a standard matched filter approach, where
Eq.~\ref{eq:bintime} is solved numerically. In our method we instead
approximate the inversion as
\begin{equation}\label{eq:binphase}
\Phi(t)\simeq 2\pi\fo(t - \to) - 2\pi\fo\atrue\sin\left[\Omega(t-\ta)\right]
\end{equation}
for circular orbits.\footnote{Phase errors caused by this inversion approximation amount to maximum phase offsets of $\sim 2\pi\fo \atrue^{2}\Omega$.} Under this approximation the signal phase can
be represented as a linear combination of the phase contributions from
the spin of the neutron star $\phi_\text{spin}$ and from the binary
orbital motion of the source $\phi_\text{bin}$, such that
\begin{subequations}
\begin{eqnarray}
\phi_\text{spin} &=& 2\pi\fo (t-\to) \label{eq:phi_spin}, \\
\phi_\text{bin} &=& -2\pi\fo \atrue\sin \left[\Omega(t-\ta)\right]. \label{eq:phi_bin}
\end{eqnarray}
\end{subequations}
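For concreteness, the circular-orbit phase model of Eqs.~\ref{eq:phi_spin}
and~\ref{eq:phi_bin} amounts to the following short Python function (a sketch
only; parameter values would be supplied per source):
\begin{verbatim}
import numpy as np

def signal_phase(t, f0, asini, P, t0=0.0, ta=0.0):
    # Eq. (phi_spin) plus Eq. (phi_bin) for a circular orbit
    Omega = 2.0 * np.pi / P
    return (2.0 * np.pi * f0 * (t - t0)
            - 2.0 * np.pi * f0 * asini * np.sin(Omega * (t - ta)))
\end{verbatim}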
\subsection{Signal model} \label{subsec:signal}
We model the data $\bm{x}(t)$ collected by a detector located at the
\ac{SSB} as the signal $\bm{s}(t)$ plus stationary Gaussian noise $\bm{n}(t)$ so that
\begin{equation}
\bm{x}(t) = \bm{s}(t) + \bm{n}(t) \label{eq:tseries}
\end{equation}
with
\begin{equation}
\bm{s}(t) = \mathcal{A}_\mu \bm{h}_\mu(t)\label{eq:s},
\end{equation}
where we employ the Einstein summation convention for
$\mu=1,2,3,4$. The coefficients $\mathcal{A}_\mu$ are independent of
time, detector location and orientation. They depend only on the
signal amplitude parameters $\bm{\lambda} =
\{\ho,\,\psi,\,\iota,\,\phio\}$, where $\ho$ is the dimensionless
gravitational wave strain amplitude, $\psi$ is the gravitational wave
polarization angle, $\iota$ is the source inclination angle and
$\phio$ is the signal phase at a fiducial reference time. The
coefficients $\mathcal{A}_\mu$ are defined as
\begin{subequations} \label{eq:A1234}
\begin{align}
\mathcal{A}_1 &= A_+\cos\phio\cos 2\psi - A_\times \sin\phio\sin2\psi\\
\mathcal{A}_2 &= A_+\cos\phio\sin 2\psi + A_\times \sin\phio\cos2\psi\\
\mathcal{A}_3 &= -A_+\sin\phio\cos 2\psi - A_\times \cos\phio\sin2\psi\\
\mathcal{A}_4 &= -A_+\sin\phio\sin 2\psi + A_\times \cos\phio\cos2\psi
\end{align}
\end{subequations}
where
\begin{eqnarray} \label{eq:Apc}
A_+ = \tfrac{1}{2} \ho(1+\cos^2\iota) &,& A_\times = \ho\cos\iota
\end{eqnarray}
are the polarization amplitudes. The time dependent signal components
$\bm{h}_\mu(t)$ are defined as
\begin{equation}
\begin{split}
h_1=a(t)\cos\Phi(t), &\hspace{0.5cm} h_2=b(t)\cos\Phi(t), \\
h_3=a(t)\sin\Phi(t), &\hspace{0.5cm} h_4=b(t)\sin\Phi(t), \label{eq:hs}
\end{split}
\end{equation}
where $\Phi(t)$ is the signal phase at the detector (which we model as
located at the \ac{SSB}) given by Eq.\ref{eq:binphase} and the antenna
pattern functions $a(t)$ and $b(t)$ are described by Eqs.~(12) and
(13) in~\cite{JKS1998}.
\subsection{$\mathcal{F}$-statistic\xspace}\label{subsec:fstat}
The $\mathcal{F}$-statistic\xspace is a matched-filter based detection statistic derived via
analytic maximization of the likelihood over unknown amplitude
parameters~\cite{JKS1998}. Let us first introduce the multi-detector inner product
\begin{equation}
(\bm{x}|\bm{y})=\sum_{X}(x_X|y_X)=\sum_{X}\frac{2}{S_X(f)}\int\limits_{-\infty}^{\infty}w_{X}(t)\,x_X(t)y_X(t)\,dt, \label{eq:ipx}
\end{equation}
where $X$ indexes each detector and $S_{X}(f)$ is the detector
single-sided noise spectral density. We modify the definitions
of~\cite{JKS1998} and~\cite{Prix_2007} to explicitly include gaps in
the time-series by introducing the function $w_{X}(t)$ which has value
$1$ when data is present and $0$ otherwise. This also allows us to
extend the limits of our time integration to $(-\infty,\infty)$ since
the window function will naturally account for the volume and span of
data for each detector.
The $\mathcal{F}$-statistic\xspace itself is defined as
\begin{equation}
2\mathcal{F} = x_{\mu}\mathcal{M}^{\mu\nu}x_{\nu}, \label{eq:fstat}
\end{equation}
where $\mathcal{M}^{\mu\nu}$ form the matrix inverse of $\mathcal{M}_{\mu\nu}$ and we follow the shorthand notation of~\cite{Prix_2007} defining
$x_\mu \equiv (\bm{x}|\bm{h}_\mu)$ and $\mathcal{M}_{\mu\nu} \equiv (\bm{h}_\mu|\bm{h}_\nu)$. Evaluation of $\mathcal{M}$ leads to a
matrix of the form
\begin{equation}
\mathcal{M}= \frac{1}{2}
\left( \begin{array}{cc}
\mathcal{C} & 0 \\
0 & \mathcal{C}
\end{array} \right),\hspace{1cm}
\text{where }\mathcal{C}=
\left( \begin{array}{cc}
A & C \\
C & B
\end{array} \right)
\end{equation}
where the components
\begin{equation}
A=(a|a),\, B=(b|b),\, C=(a|b),
\end{equation}
are antenna pattern integrals. For a waveform with exactly known phase
evolution $\Phi(t)$ in Gaussian noise, the $\mathcal{F}$-statistic\xspace is a random
variable distributed according to a non-central
$\chi^{2}$-distribution with 4 degrees of freedom. The non-centrality
parameter is equal to the optimal \ac{SNR}
\begin{equation}
\ro^{2}=\frac{1}{2}\left[A (\mathcal{A}_1^2 + \mathcal{A}_3^2) + B(\mathcal{A}_2^2 + \mathcal{A}_4^2) + 2C
(\mathcal{A}_1 \mathcal{A}_2 + \mathcal{A}_3 \mathcal{A}_4)\right]\label{eq:rho2}
\end{equation}
such that the expectation value and variance of $2\mathcal{F}$ are given by
\begin{subequations}
\begin{eqnarray}
\text{E}[2\mathcal{F}]&=&4+\ro^{2}\\
\text{Var}[2\mathcal{F}]&=&8+4\ro^{2},
\end{eqnarray}
\end{subequations}
respectively. In the case where no signal is present in the data, the
distribution becomes a central $\chi^{2}$-distribution with 4 degrees
of freedom.
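These moments are easily verified by direct sampling from the non-central
$\chi^{2}$ distribution, as in the short check below (the SNR value is an
arbitrary illustration):
\begin{verbatim}
from scipy.stats import ncx2

rho2 = 30.0                                  # illustrative SNR^2
s = ncx2.rvs(df=4, nc=rho2, size=200_000, random_state=0)
print(s.mean(), 4 + rho2)                    # E[2F]   = 4 + rho^2
print(s.var(), 8 + 4 * rho2)                 # Var[2F] = 8 + 4 rho^2
\end{verbatim}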
\subsection{$\mathcal{F}$-statistic\xspace and mismatched frequency}\label{subsec:fstatmm}
In this section we describe the behavior of the $\mathcal{F}$-statistic\xspace as a function
of search frequency $f$ for a fixed source frequency $\fo$. In this
case the inner product that defines $x_{\mu}$ becomes
\begin{equation}\label{eq:xmu2}
x_{\mu}= \mathcal{A}_{\nu}(\bm{h}_{\nu}|{\bm{h}_{\mu}}') + (\bm{n}|{\bm{h}_{\mu}}'),
\end{equation}
where $\bm{h}_{\nu}$ are the components of a signal with frequency
$\fo$, and $\bm{h}'_{\mu}$ is a function of search frequency $f$. If
we focus on the $\mu=\nu=1$ component as an example we find that
\begin{align}\label{eq:hp1}
(\bm{h}_{1}|{\bm{h}_{1}}')&\cong\sum_{X}\frac{2}{S_X(f)}\int\limits_{-\infty}^{\infty}w_{X}(t)\,a^{2}_X(t)\cos\left(2\pi
ft\right)\cos\left(2\pi\fo t\right)dt
\end{align}
where we note that the product of cosine functions results in an
integrand that contains frequencies at $f-\fo$ and $f+\fo$. Since
both $a_{X}(t)$ and $w_{X}(t)$ are functions that evolve on timescales
of hours--days we approximate the contribution from the $f+\fo$
component as averaging to zero. We are left with
\begin{align}
(\bm{h}_{1}|{\bm{h}_{1}}')&\cong\text{Re}\left[\frac{1}{2}\sum_{X}\frac{2}{S_X(f)}\int\limits_{0}^{\infty}w_{X}(t)\,a^{2}_X(t)e^{-2\pi i
(f-\fo)t}dt\right] \nonumber \\
&\cong\frac{1}{2}\sum_{X}\text{Re}\left[A_{X}(f-\fo)\right] \label{eq:hp2}
\end{align}
where we have defined the result of the complex integral as
$A_{X}(f)$. This is the Fourier transform of the antenna pattern
functions weighted by the window function and evaluated at $f-\fo$.
It is equal to the quantity $(a_{X}|a_{X})$ when its argument is zero.
The quantities $a^{2}_{X}(t)$ (and similarly $b_{X}^{2}(t)$ and
$a_{X}(t)b_{X}(t)$) are periodic with periods of 12 and 24
hours plus a non-oscillating component. When in a product with a
sinusoidal function and integrated over time they will result in
discrete amplitude-modulated sidebands with frequencies at $0,\pm
1/P_{\oplus},\pm 2/P_{\oplus},\pm3/P_{\oplus},\pm4/P_{\oplus}$ Hz
where $P_{\oplus}$ represents the orbital period of the Earth (1
sidereal day). We will ignore all but the zero-frequency components of
these functions for the remainder of this paper. We do note that
complications regarding the overlap of amplitude-modulated and
frequency-modulated sidebands (discussed in the next section) will
only arise for sources in binary orbits with periods equal to those
present in the antenna pattern functions.
In addition, the window function describing the gaps in the data will
influence $A_{X}(f)$. For a gap-free observation the window function
serves to localize signal power to within a frequency range $\sim 1/T$
where $T$ is the typical observation length. When gaps are present
this range is broadened and has a deterministic shape given by the
squared modulus of the Fourier transform of the window function. We
can therefore use a further approximation that
\begin{equation}\label{eq:AXf}
A_{X}(f-\fo)\approx (a_{X}|a_{X})\frac{\tilde{w}_{X}(f-\fo)}{\tilde{w}_{X}(0)}
\end{equation}
where the Fourier transform of the window function is normalized by
$\tilde{w}_{X}(0)\equiv \int dt w_{X}(t)$ such that it has a value of unity at
the true signal frequency.
We now define the antenna-pattern weighted window function as
\begin{subequations}\label{eq:Wf}
\begin{align}
\tilde{W}(f-\fo)&\cong\sum_{X}\frac{(a_{X}|a_{X})}{A}\frac{\tilde{w}_{X}(f-\fo)}{\tilde{w}_{X}(0)}\\
&\cong\sum_{X}\frac{(b_{X}|b_{X})}{B}\frac{\tilde{w}_{X}(f-\fo)}{\tilde{w}_{X}(0)}\\
&\cong\sum_{X}\frac{(a_{X}|b_{X})}{C}\frac{\tilde{w}_{X}(f-\fo)}{\tilde{w}_{X}(0)},
\end{align}
\end{subequations}
which is true for observation times $T_{X}\gg$ days. This complex
window function has the property that $\tilde{W}(0)$ is a real
quantity with maximum absolute value of unity when the template
frequency matches the true signal frequency.
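In practice $\tilde{w}_{X}$ can be evaluated directly from the 0/1 data mask;
a minimal sketch, assuming a uniform time grid, is:
\begin{verbatim}
import numpy as np

def window_transform(w, dt, df_offsets):
    # normalized transform w(f - f0)/w(0) of Eq. (AXf);
    # w is the 0/1 mask sampled with step dt
    t = np.arange(len(w)) * dt
    w0 = np.sum(w) * dt
    return np.array([np.sum(w * np.exp(-2j * np.pi * f * t)) * dt
                     for f in df_offsets]) / w0
\end{verbatim}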
Finally we are able to combine Eqs.~\ref{eq:hp2},~\ref{eq:AXf},
and~\ref{eq:Wf} to obtain
\begin{equation}
(\bm{h}_{1}|{\bm{h}_{1}}')\cong\frac{A}{2}\text{Re}\left[\tilde{W}(f-\fo)\right]
\end{equation}
which together with similar calculations for the additional components
in Eq.~\ref{eq:xmu2} give us
\begin{eqnarray} \label{eq:hpmn}
(\bm{h}_{\nu}|{\bm{h}_{\mu}}')\cong\frac{\mathcal{M}}{2}\left\{\left( \begin{array}{cc}\text{I}&0\\0&\text{I}\end{array}\right)
\text{Re}\left[\tilde{W}(f-\fo)\right]+\left( \begin{array}{cc}0&\text{I}\\-\text{I}&0\end{array}\right)
\text{Im}\left[\tilde{W}(f-\fo)\right]\right\} \hspace{-1.2cm} \nonumber \\
\end{eqnarray}
as the complete set of inner products between frequency-mismatched
signal components where $\text{I}$ is the $2\times 2$ identity matrix.
Note that when $f=\fo$ this expression reduces to Eq.~\ref{eq:fstat}.
If we now form the expectation value of the $\mathcal{F}$-statistic\xspace
(Eq.~\ref{eq:fstat}) for mismatched frequencies we find that
\begin{equation}
\text{E}[2\mathcal{F}(f)]=4+\ro^{2}|\tilde{W}(f-\fo)|^{2}.
\end{equation}
Here we see that the fraction of the optimal \ac{SNR} that contributes
to the non-centrality parameter of the $\mathcal{F}$-statistic\xspace $\chi^{2}$ distribution
is reduced by evaluation of the mod-squared of the antenna-pattern
weighted window function with a non-zero argument.
\subsection{Frequency-modulation and the $\mathcal{F}$-statistic\xspace}\label{subsec:fmod}
We now consider the computation of the $\mathcal{F}$-statistic\xspace in the case where the
data contains a signal from a source in a circular binary orbit but
the phase model used in the $\mathcal{F}$-statistic\xspace template is that of a
monochromatic signal of frequency $f$. We again expand $x_{\mu}$ as
done in Eq.~\ref{eq:xmu2} where no prime indicates the signal and the
prime represents the monochromatic template. We again focus on the
mismatched signal inner-product $(\bm{h}_{1}|{\bm{h}_{1}}')$ as an
example. Starting with Eq.~\ref{eq:hp1} we discard the rapidly
oscillating terms inside the integral that will average to zero. We
are then left with
\begin{align}
(\bm{h}_{1}|{\bm{h}_{1}}')\cong\frac{1}{2}\sum_{X}\text{Re}\Bigg\{&\frac{2}{S_{X}(f)}\int\limits_{0}^{\infty}w_{X}(t)a^{2}_{X}(t)e^{-2\pi
i t(f-\fo)}\nonumber\\
&\times\exp{\left(-2\pi i \fo \atrue\sin\left(\Omega(t-t_{a})\right)\right)}\,dt\Bigg\}
\end{align}
where the final term involving the exponential of a sinusoidal
function can be represented using the Jacobi-Anger expansion
\begin{equation}\label{eq:eiz}
e^{iz\sin\theta} = \sum_{n=-\infty}^{\infty} J_{n}(z)e^{in\theta},
\end{equation}
where $J_{n}(z)$ is the $n^\text{th}$ order Bessel function of the first
kind. This expansion allows us to transform the binary phase term
into an infinite sum of harmonics such that we can now write
\begin{align}
(\bm{h}_{1}|{\bm{h}_{1}}')\cong\sum_{n=-\infty}^{\infty}&J_{n}(2\pi
\fo\atrue)\nonumber \\
\quad \times\text{Re}\Bigg\{&\frac{e^{i\phi_{n}}}{2}\sum_{X}\frac{2}{S_{X}(f)}\int\limits_{0}^{\infty}w_{X}(t)a^{2}_{X}(t)e^{-2\pi it(f-\fn)}dt\Bigg\}\nonumber\\
\cong\sum_{n=-\infty}^{\infty}&J_{n}(2\pi
\fo\atrue)\frac{A}{2}\text{Re}\left\{\tilde{W}(f-f_{n})e^{i\phi_{n}}\right\} \label{eq:hpmn2}
\end{align}
It follows that all of the signal components can be expanded in the
same way giving us
\begin{align} \label{eq:hpmn3}
(\bm{h}_{\nu}|{\bm{h}_{\mu}}')\cong\frac{\mathcal{M}}{2}&\sum_{n=-\mo}^{\mo}J_{n}(2\pi\fo\atrue)\left\{\left( \begin{array}{cc}\text{I}&0\\0&\text{I}\end{array}\right)
\text{Re}\left[\tilde{W}(f-f_{n})e^{i\phi_{n}}\right]\right.\nonumber\\
& \quad +\left.\left( \begin{array}{cc}0&\text{I}\\-\text{I}&0\end{array}\right)
\text{Im}\left[\tilde{W}(f-f_{n})e^{i\phi_{n}}\right]\right\} \hspace{-1cm}
\end{align}
where we have truncated the infinite summation (explained below) and
defined the frequency-modulation sideband frequencies and their
respective phases as
\begin{subequations}
\begin{align}
f_{n}&=\fo - n/\Po\,,\\
\phi_{n}&=n\Omega t_{a}\,.
\end{align}
\end{subequations}
The Jacobi-Anger expansion has allowed us to represent the complex phase of
a frequency-modulated signal as an infinite sum of discrete signal
harmonics, or sidebands, each separated in frequency by $1/\Po$ Hz.
Each is weighted by the Bessel function of order $n$ where $n$ indexes
the harmonics and has a complex phase factor determined by the orbital
reference time $t_{a}$. In the limit where the order exceeds the
argument, $n\gg z$, the Bessel function rapidly approaches zero
allowing approximation of the infinite sum in Eqs.~\ref{eq:eiz} and~\ref{eq:hpmn2}
as a finite sum over the finite range $[-\mo,\mo]$ where
$\mo=\texttt{ceil}[2\pi \fo \atrue]$. The summation format of Eq.~\ref{eq:hpmn3}
highlights the effects of the binary phase modulation. The signal can
be represented as the sum of $\Mo=2\mo+1$ discrete harmonics at
frequencies $\fn$ centered on the intrinsic frequency $\fo$, where
each harmonic peak is separated from the next by $1/\Po$
Hz.
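The quality of this truncation is easy to check numerically. Using the
illustrative values $\fo=200$ Hz and $\atrue=5$ ms from Fig.~\ref{fig:search},
the residual of the truncated sum in Eq.~\ref{eq:eiz} falls rapidly once a few
orders beyond $\mo$ are retained:
\begin{verbatim}
import numpy as np
from scipy.special import jv

z = 2.0 * np.pi * 200.0 * 0.005        # 2*pi*f0*a, illustrative values
theta = np.linspace(0.0, 2.0 * np.pi, 50)
lhs = np.exp(1j * z * np.sin(theta))
m0 = int(np.ceil(z))
for m in (m0, m0 + 5, m0 + 10):
    n = np.arange(-m, m + 1)[:, None]
    rhs = (jv(n, z) * np.exp(1j * n * theta)).sum(axis=0)
    print(m, np.abs(lhs - rhs).max())  # residual shrinks quickly with m
\end{verbatim}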
Combining Eqs.~\ref{eq:fstat},~\ref{eq:xmu2}, and~\ref{eq:hpmn3}, we
can express the expectation value of the $\mathcal{F}$-statistic\xspace for a binary signal
as a function of search frequency $f$ as
\begin{equation}
E[2\mathcal{F}(f)]=4+\ro^{2}\sum_{n=-\mo}^{\mo}J^{2}_{n}(2\pi \fo\atrue)|\tilde{W}(f-f_{n})|^{2}.\label{eq:Fstat3}
\end{equation}
This expression should be interpreted in the following manner. For a
given search frequency $f$ the contribution to the non-centrality
parameter (the \ac{SNR} dependent term) is equal to the sum of all
sideband contributions at that frequency. Each sideband will
contribute a fraction of the total optimal \ac{SNR} weighted by the
$n^\text{th}$ order Bessel function squared, but will also be strongly
weighted by the window function. The window function will only
contribute significantly if the search frequency is close to the
sideband frequency. Hence, at a given search frequency close to a
sideband, for observation times $\gg \Porb$, the sidebands will be far
enough separated in frequency such that only one sideband will
contribute to the $\mathcal{F}$-statistic\xspace.
\subsection{$\mathcal{C}$-statistic\xspace} \label{subsec:Cstat}
The $\mathcal{F}$-statistic\xspace is numerically maximized over the phase parameters of
the signal on a discrete grid. For this search the search frequency
$f$ is such a parameter and consequently the $\mathcal{F}$-statistic\xspace is computed over
a uniformly spaced set of frequency values $f_{j}$ spanning the region
of interest. In this section we describe how this $\mathcal{F}$-statistic\xspace
frequency-series can be used to approximate a search template that is
then used to generate a new statistic sensitive to signals from
sources in binary systems.
The expectation value of the $\mathcal{F}$-statistic\xspace (Eq.~\ref{eq:Fstat3}) resolves
into localized spikes at $\Mo$ frequencies separated by $1/\Po$ Hz and
centered on the intrinsic gravitational wave frequency $\fo$. A template
$\stat{T}$ based on this pattern with amplitude defined by $G_n$, takes
the general form
\begin{equation}
\mathcal{T}(f)=\sum_{n=-m'}^{m'}G_n|\tilde{W}\left(f-\fn'\right)|^{2}\label{eq:template}\,,
\end{equation}
with $m'=\texttt{ceil}[2\pi f a']$ and $\fn'=\fo'-n/P'$ where we make a distinction between the intrinsic (unknown) values of each parameter (subscript zero) and values selected in the template construction (denoted with a prime). The
window function $\tilde{W}$ is dependent only on the times for which
data is present and is, therefore, also known exactly.
We define our new detection statistic $\mathcal{C}$\xspace as
\begin{eqnarray}
\mathcal{C}(f)&\equiv&\sum_{j}2\mathcal{F}(f_{j})\mathcal{T}(f_{j}-f)\nonumber\\
&=&\left(2\mathcal{F}\ast\mathcal{T}\right)(f),
\end{eqnarray}
where the sum over the index $j$ indicates the sum over the discrete
frequency bins $f_{j}$ and $f$ is the search
frequency. Since the template's ``zero frequency'' represents the intrinsic gravitational wave frequency, $f$ corresponds to the intrinsic frequency. We see that the $\mathcal{C}$-statistic\xspace is, in fact, the convolution of the
$\mathcal{F}$-statistic\xspace with our template, assuming the template is constant
with search frequency (an issue we address in the next section).
The benefit of this approach is that the computation of the $\mathcal{F}$-statistic\xspace for a
known sky position and without accounting for binary effects has
relatively low computational cost. Similarly, the construction of a
template on the $\mathcal{F}$-statistic\xspace is independent of the orbital phase parameter
and only weakly dependent upon the orbital semi-major axis and
eccentricity. The template is highly dependent upon the orbital
period, which, for the sources of interest, is known to high
precision. Also, since the $\mathcal{C}$-statistic\xspace is the result of a convolution, we
can make use of the convolution theorem and the speed of the \ac{FFT}.
Computing $\mathcal{C}$ for all frequencies requires only three applications
of the \ac{FFT}. In practice, the $\mathcal{C}$-statistic\xspace is computed using
\begin{align}\label{eq:disC}
&\mathcal{C}(f_{k})=\left(2\mathcal{F}\ast\mathcal{T}\right)(f_{k})\nonumber\\
&=\sum_{j=0}^{N-1}e^{2\pi
ijk/N}\left(\sum_{p=0}^{N-1}e^{-2\pi ipj/N}2\mathcal{F}(f_{p})\right)\left(\sum_{q=0}^{N-1}e^{-2\pi ijq/N}\mathcal{T}(f_{q})\right),
\end{align}
which is simply the inverse Fourier transform of the product of the
Fourier transforms of the $\mathcal{F}$-statistic\xspace and the template. The $\mathcal{F}$-statistic\xspace and
the template are both sampled on the same uniform frequency grid
containing $N$ frequency bins. The $\mathcal{C}$-statistic\xspace is then also output as a
function of the same frequency grid.
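Schematically, Eq.~\ref{eq:disC} is three FFT calls, assuming the
$\mathcal{F}$-statistic and the template are sampled on the same frequency
grid; since the sideband template is symmetric about its central tooth, this
circular convolution coincides with the correlation sum written above:
\begin{verbatim}
import numpy as np

def cstat(twoF, template):
    # Eq. (disC): inverse FFT of the product of forward FFTs;
    # np.fft.ifft supplies the 1/N factor of the discrete convolution
    return np.real(np.fft.ifft(np.fft.fft(twoF) * np.fft.fft(template)))
\end{verbatim}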
\subsection{Choice of $\mathcal{F}$-statistic\xspace template} \label{subsec:template}
Treating the $\mathcal{F}$-statistic\xspace as the pre-processed input dataset to the $\mathcal{C}$-statistic\xspace
computation, it might be assumed that the optimal choice of template is
that which exactly matches the expected form of $2\mathcal{F}$ in the
presence of a signal. As shown in Fig.~\ref{fig:ROCtempcompare}, this
approach is highly sensitive to the accuracy with which the projected orbital
semi-major axis is known.
We instead propose the use of a far simpler template: one that
captures the majority of the information contained within the $\mathcal{F}$-statistic\xspace
and, by design, is relatively insensitive to the orbital semi-major
axis. We explicitly choose
\begin{equation}\label{eq:flattemp}
\mathcal{T}_{\text{F}}(f_k)=\sum_{j=-m}^{m}\delta_{k\,\lj}\,,
\end{equation}
for discrete frequency $f_k$, where $\delta_{ij}$ is the Kronecker delta-function. The frequency
index $l$, defined by,
\begin{equation}\label{eq:findex}
\lj\equiv\texttt{round}\left[\frac{j}{P'd_f}\right],
\end{equation}
is a function of the best-guess orbital period $P'$ and the frequency resolution $d_f$. The $\texttt{round}[]$ function returns the integer closest to its argument. The template is therefore composed of the sum of $M=2m+1$ unit amplitude ``spikes''
positioned at discrete frequency bins closest to the predicted locations of the frequency-modulated sidebands (relative to the intrinsic gravitational wave frequency). The subscript $\text{F}$ refers to the constant amplitude, ``Flat'' template.
If we now convolve this template with the frequency-modulated $\mathcal{F}$-statistic\xspace we obtain the corresponding $\mathcal{C}$-statistic\xspace, which reduces to
\begin{equation}
\mathcal{C}(f_{k})=
\sum_{j=-m}^{m}2\mathcal{F}(f_{k-\lj})\,.
\label{eq:cstat_comb}
\end{equation}
Equation \ref{eq:cstat_comb} is simply the sum of $2\mathcal{F}$ values taken from discrete
frequency bins positioned at the predicted locations of the
frequency-modulation sidebands. An example is shown in the right hand
panel of Fig. \ref{fig:search}, where the most significant $\mathcal{C}$-statistic\xspace
value is located at $f=\fo$, and is the point where all sidebands included in
the sum contain some signal.
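For concreteness, a minimal sketch (with illustrative variable names of our own choosing) of the flat comb of Eq.~\ref{eq:flattemp} and the direct sum of Eq.~\ref{eq:cstat_comb} might read:
\begin{verbatim}
import numpy as np

def flat_template(N, m, P, df):
    # Flat template: M = 2m+1 unit spikes at bins
    # l_j = round(j/(P*df)); assumes df <= 1/P so that the
    # comb teeth fall in distinct bins.
    T = np.zeros(N)
    for j in range(-m, m + 1):
        T[int(round(j / (P * df))) % N] = 1.0
    return T

def c_stat_comb(twoF, m, P, df):
    # Direct evaluation of the comb sum: add 2F over the
    # predicted sideband bins around every search frequency.
    N, k = len(twoF), np.arange(len(twoF))
    C = np.zeros(N)
    for j in range(-m, m + 1):
        C += twoF[(k - int(round(j / (P * df)))) % N]
    return C
\end{verbatim}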
From Eq. \ref{eq:Fstat3}, the expectation value for the $\mathcal{C}$-statistic\xspace using the flat-template can be expressed as
\begin{equation}\label{eq:ECstat}
\text{E}[\mathcal{C}(f_k)]= 4M +
\ro^{2}\sum_{n=-m}^{m}J^{2}_{n}(2\pi
\fo\atrue)|\tilde{W}(f_{n}-f_{k}+l_{[n]}\fres)|^2,
\end{equation}
where the argument of the window function is the frequency difference between
the location of the $n^\text{th}$ signal sideband and the $n^\text{th}$ template sideband on the discrete frequency grid. Note that the $\mathcal{C}$-statistic\xspace is a sum of $M$ statistically independent non-central $\chi^{2}_4$ statistics and hence the result is
itself a non-central $\chi^{2}_{4M}$ statistic, i.e. with $4M$ degrees of freedom, where $M=2\texttt{ceil}[2\pi f \asini] +1$ is the number of sidebands in the expected modulation pattern. The non-centrality parameter is equal to the sum of the
non-centrality parameters from each of the summed $2\mathcal{F}$ values.
For a flat template with perfectly matched intrinsic frequency $f=f_0$ and orbital
period $P'=P$, infinite precision $\fres\rightarrow 0$, and where
the number of sidebands included in the analysis matches or exceeds
the true number, the second term in the above equation reduces to
$\ro^{2}$. In this case we will have recovered all of
the power from the signal but also significantly increased the
contribution from the noise through the incoherent summation of
$\mathcal{F}$-statistic\xspace values from independent frequency bins. In general, where the
orbital period is known well, but not exactly, and the frequency
resolution is finite, the signal power recovery will be reduced by
imperfect sampling of the window function term in Eq.~\ref{eq:ECstat},
i.e. evaluation at arguments $\neq 0$.
In terms of the generic template defined in
Eq.~\ref{eq:template}, the discrete-frequency flat template is
approximately equivalent to the weighting scheme $G_n=1$. A more sensitive approach could use
\begin{equation}
\mathcal{T}_{\text{B}}(f_k)=\sum_{j=-m}^{m}J^{2}_{j}(2\pi \fo\, \atrue)\delta_{k\,\lj}\label{eq:besseltemp}
\end{equation}
for the template, following the expected form of the $\mathcal{F}$-statistic\xspace given in
Eq.~\ref{eq:Fstat3} and using a subscript $\text{B}$ to denote
Bessel function weighting. Although this would
increase sensitivity for closely matched signal templates (constructed with well constrained signal parameters), this
performance is highly sensitive to the number of sidebands included in the template and therefore sensitive to the semi-major
axis since $M=2\texttt{ceil}[2\pi f \asini] +1$. This is mainly due to the
``double horned'' shape of the expected signal (see the left hand
panel of Fig.~\ref{fig:search}). A large enough offset between the
true and assumed semi-major axis will significantly change the template's overlap with the sidebands in the $\mathcal{F}$-statistic\xspace and reduce the significance of the $\mathcal{C}$-statistic\xspace. Since the semi-major axis is not well constrained for many \acp{LMXB}, a search over many templates would be necessary,
each with an incrementally different semi-major axis value.
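For comparison, a Bessel-weighted comb of the form of Eq.~\ref{eq:besseltemp} could be sketched as follows (illustrative only); note that the comb half-width $m$ is itself set by the assumed semi-major axis, which is the origin of the fragility just described:
\begin{verbatim}
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def bessel_template(N, f0, a, P, df):
    m = int(np.ceil(2.0 * np.pi * f0 * a))  # half-width follows a
    T = np.zeros(N)
    for j in range(-m, m + 1):
        # spike j carries weight J_j(2 pi f0 a)^2
        T[int(round(j / (P * df))) % N] += jv(j, 2*np.pi*f0*a)**2
    return T
\end{verbatim}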
The simpler, flat-template (Eq. \ref{eq:flattemp}) has the benefit of being far more
robust against the semi-major axis uncertainty. In this case the semi-major axis
parameter controls only the number of sidebands to use in the template
and does not control the weighting applied to each sideband. It also
simplifies the statistical properties of the $\mathcal{C}$-statistic\xspace, making a
Bayesian analysis of the output statistics (as described in Section
\ref{sec:stats}) far easier to apply.
The \ac{ROC} curves shown in Fig.~\ref{fig:ROCtempcompare} compare the
performance of the sideband search with both choices of template (flat
and Bessel function weighted) for the case of signal with optimal
\ac{SNR} $\ro=20$. As seen from the figure, the Bessel function
weighted template for exact number of sidebands provides improved
sensitivity over the flat template. However, when considering the
possible (and highly likely) error on the number of sidebands in the
template, the performance of the Bessel template is already
drastically diminished, even with only a $10\%$ error on the semi-major axis
parameter. It is also interesting to note that for the flat-template
the result of an incorrect semi-major axis is asymmetric with respect
to an under or over-estimate. The sensitivity degradation is far
less pronounced when the template has over-estimated the semi-major
axis and, therefore, also over-estimated the number of sidebands. This
feature is discussed in more detail in Sec.~\ref{subsec:asini}.
\begin{figure}
\includegraphics[width=\columnwidth]{figure2}
\caption{\label{fig:ROCtempcompare}\ac{ROC} curves comparing performance of the flat (blue) and Bessel function weighted (red)
templates, described by Eqs \ref{eq:flattemp} and \ref{eq:besseltemp} respectively. The theoretical (fine black) curve is constructed from a
non-central $\chi^2_{4M}(\lambda)$ distributed statistic with non-centrality parameter $\lambda=\ro^2=20$ and represents the expected performance of the flat template. Dashed and dotted curves represent a template with a positive and negative $10\%$ error on the semi-major axis respectively. Here the signal parameters were chosen such that the number of sidebands were $M=2001$ and curves were constructed using $10^6$ realizations of noise.}
\end{figure}
\subsection{Approximate binary demodulation}\label{subsec:demod}
When a putative source has a highly localized position in the sky, the effect of the Earth's motion with respect to the \ac{SSB} can be accurately removed from the signal during the calculation of the $\mathcal{F}$-statistic\xspace. This leaves only the Doppler
modulation from the binary orbit. It is also possible to demodulate
the binary orbit (Doppler) modulation in the $\mathcal{F}$-statistic\xspace calculation provided
the binary orbital parameters $(a,\Porb,\ta)$ are well known. A fully-coherent (sky position- and binary-) demodulated
$\mathcal{F}$-statistic\xspace search would be very sensitive to any errors in the
sky position or binary orbital parameters. It would therefore be necessary to construct
a bank of templates spanning the parameter space defined by the
uncertainties in these parameters. Adding dimensions to the parameter space increases computational costs and
the search becomes infeasible, considering we are already searching over frequency. A fully coherent search of this type would be possible for
known sources with known emission frequencies, for example pulsing
sources like \acp{msp}.
In this section we show how prior information regarding the binary
orbit of a source can be used to increase the sensitivity of our
semi-coherent approach, without increasing computational costs. By
performing a ``best guess'' binary phase demodulation within the
$\mathcal{F}$-statistic\xspace, we show that the number of sidebands in the template is
reduced by a factor proportional to the fractional uncertainty in the
orbital semi-major axis. Consequently a reduction in the number of
sidebands increases the sensitivity of the search by reducing the
number of degrees of freedom (see Sec. \ref{sec:stats}).
Expressing our current best estimate for each parameter $\bm{\theta}$ as the sum of
the true value $\bm{\theta_\submath{0}}$ and an error $\Delta\bm{\theta_\submath{0}}$, such that
\begin{equation}
\bm{\theta} = \bm{\theta_\submath{0}} +\Delta\bm{\theta_\submath{0}},
\end{equation}
we can determine the phase offset of the binary orbit from the
error in the binary orbital parameters. The offset in phase
is the difference between the true and best estimate binary phase and
using Eq.~\ref{eq:phi_bin} can be approximated by
\begin{eqnarray}
\Delta \phi_\text{bin} &\simeq&-2\pi\left\{\phantom{\frac{}{}}\left(\atrue\Delta f + f\Delta\atrue\right)\sin\left[\Omega_\submath{0}(t -
\ta)\right] \right. \nonumber \\
& +&
\left. \left[f \atrue\left(\Delta\Omega_\submath{0}\left(t-\ta\right)-\Omega_\submath{0}\Delta
\ta\right)\right]\cos\left[\Omega_\submath{0}(t -
\ta)\right]\phantom{\frac{}{}}\right\}+\mathcal{O}(\bm{\Delta\theta}^{2}) \nonumber \\
&\simeq& -2\pi f \Delta \atrue\,\kappa \sin\left[\Omega_\submath{0}(t-\ta) + \gamma\right],
\end{eqnarray}
with
\begin{eqnarray}
\kappa &=& \sqrt{\left(1+\frac{\atrue}{\Delta \atrue}\frac{\Delta f}{f}\right)^{2}+\left(\frac{\atrue}{\Delta
\atrue}\left(\Delta\Omega_\submath{0}(t-\ta)-\Omega_\submath{0}\Delta \ta\right)\right)^{2}}, \label{eq:moddepth}\\
\gamma &=& \text{tan}^{-1} \left[
\frac{\Delta\Omega_\submath{0}(t-\ta) - \Omega_\submath{0}\Delta
\ta}{(\Delta \atrue/\atrue)+(\Delta f/f)} \right]+\left\{
\begin{array}{l l}
0 &\,\text{if}\,\left(\frac{\Delta \atrue}{\atrue}\right)+\left(\frac{\Delta f}{f}\right)\ge 0\\
\pi &\,\text{if}\,\left(\frac{\Delta \atrue}{\atrue}\right)+\left(\frac{\Delta f}{f}\right)< 0.
\end{array} \right.\nonumber\\
\end{eqnarray}
Here we have expanded the binary phase difference to leading order in
the parameter uncertainties and obtained a phase expression similar in
form to the original binary phase. In the specific regime where the
fractional uncertainty in the orbital semi-major axis far exceeds the
fractional uncertainty in the intrinsic frequency we see that the
first term in Eq.~\ref{eq:moddepth} becomes $\approx 1$. Similarly,
if the fractional uncertainty in the orbital semi-major axis also far
exceeds the deviation in orbital angular position
$\Delta\Omega(t-\ta)-\Omega\Delta \ta$ then the
second term $\approx 0$. This is generally the case for the known
\acp{LMXB} (see Table~\ref{tab:targets}) and in this regime $\kappa$
can be accurately approximated as unity, yielding
\begin{align}
\Delta \phi_{\text{bin}} &\approx -2\pi f\Delta \atrue\sin[\Omega(t-\ta)+\gamma]. \label{eq:Dphi_bin}
\end{align}
Hence, after approximate binary demodulation, the argument of the
Bessel function and the summation limits in the expected form of the
$\mathcal{F}$-statistic\xspace (in Eq.~\ref{eq:Fstat3} for example) can be replaced with
$\Delta z_\submath{0}=2\pi f \Delta \atrue$. The number of frequency-modulated
sidebands is now reduced by a factor of $\Delta \atrue/\atrue < 1$. We must
stress that $\Delta a$ is an unknown quantity and is the difference
between the best estimate value of $\asini$ and the true value $\atrue$. The
$\mathcal{F}$-statistic\xspace after such a demodulation process will therefore have a
reduced but unknown number of sidebands, although it will still retain
the standard sideband frequency spacing $1/P$. The sideband phasing
$\phi_{n}$ will also be unknown due to the presence of the phase term
$\gamma$ but is of no consequence to the search since the $\mathcal{F}$-statistic\xspace is
insensitive to phase.
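As a rough numerical illustration, using the Sco X-1 values quoted later in Table~\ref{tab:ScoX1} ($\ameas=1.44$ s, $\Delta\ameas=0.18$ s) at $f=1$ kHz, the comb shrinks by roughly the fractional semi-major axis uncertainty:
\begin{verbatim}
import numpy as np

f, a, da = 1.0e3, 1.44, 0.18      # Hz, s, s (Table ScoX1 values)
M_standard = 2 * int(np.ceil(2 * np.pi * f * a)) + 1
M_demod = 2 * int(np.ceil(2 * np.pi * f * da)) + 1
print(M_standard, M_demod)        # 18097 -> 2263 sidebands
\end{verbatim}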
\section{Parameter space}\label{sec:params}
In this section we will discuss each of the parameters involved in the
search and how the search sensitivity depends upon the uncertainty in
these parameters. Demodulation of the signal phase due to the Earth's
motion requires accurate knowledge of the source sky position. If the
observation time is long enough, we need to consider the sky position
as a search parameter, as discussed in Sec.~\ref{subsec:skypos}. The
gravitational wave frequency is the primary search parameter. In
Sec.~\ref{subsec:spinf} we discuss the limitations on
our search strategy due to its uncertainty. The orbital period
and semi-major axis are discussed in Sections \ref{subsec:period} and
\ref{subsec:asini} respectively. The effects of ignoring the orbital
eccentricity are discussed in Sec. \ref{subsec:ecc}.
\subsection{Sky position and proper motion}\label{subsec:skypos}
In order to quantify the allowable uncertainty in sky position we will
define a simplistic model describing the phase $\Psi(t)$ received at
Earth from a monochromatic source at infinity at sky position
$(\alpha,\delta)$. If we neglect the detector motion due to the spin of
the Earth and consider only the Earth's orbital motion then we have
\begin{equation}
\Psi(t) = 2\pi\fo \left[t + R_{\oplus}\cos\delta\cos\left(\Omega_{\oplus} (t-t_{\text{ref}}) + \alpha\right)\right],
\end{equation}
where $\fo$ is the signal frequency, $\alpha$ and $\delta$ are the
true right ascension and declination and $R_{\oplus}$ and
$\Omega_{\oplus}$ are the distance of the Earth from the \ac{SSB} and
the Earth's orbital angular frequency respectively. We also define a
reference time $t_{\text{ref}}$ that represents the time at which the
detector passes through the vernal equinox. For an observed sky
position $(\alpha',\delta')=(\alpha+\Delta\alpha,\delta+\Delta\delta)$
the corresponding phase offset $\Delta
\Psi(t,\Delta\alpha,\Delta\delta) = \Psi(t,\alpha',\delta') -
\Psi(t,\alpha,\delta)$ amounts to
\begin{eqnarray}
\Delta \Psi &\approx& -2\pi\fo R_{\oplus}
\Big[\Delta\delta\sin\delta\cos\left(\Omega_{\oplus}(t-t_{\text{ref}})+\alpha\right)\nonumber\\
&&+\Delta\alpha\cos\delta\sin\left(\Omega_{\oplus}(t-t_{\text{ref}})+\alpha\right)\Big],
\end{eqnarray}
where we have expanded the expression to leading order in the sky
position errors. We now make the reasonable assumption that our
analysis would be unable to tolerate a deviation in phase between the
signal and our template of more than $\mathcal{O}(1)$ radian over the
course of an observation lasting on the order of the Earth's orbital period.
If we also notice that the worst case scenario (smallest allowable sky
position errors) corresponds to sky positions for which the
trigonometric terms in the previous expression are largest, i.e. of
order unity, then we have
\begin{equation} \label{eq:dbeta}
|\Delta \alpha,\Delta\delta| \leq (2\pi\fo R_{\oplus})^{-1}.
\end{equation}
If we consider signals of frequency $1$~kHz, this gives a maximum
allowable sky position offset of $|\Delta \alpha, \Delta\delta| \simeq
100$ mas. This expression also validates our model
assumption that, for long observation times, the phase sensitivity to
sky position errors is dominated by the Earth's orbital motion rather
than by its spin, since the Earth's radius is smaller than $R_{\oplus}$
by a factor of $\sim 2\times 10^{4}$.
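A quick order-of-magnitude check of Eq.~\ref{eq:dbeta}, with $R_{\oplus}$ expressed as the light travel time across 1 AU, is (a sketch, with our own rounding):
\begin{verbatim}
import numpy as np

f0 = 1.0e3                              # signal frequency (Hz)
R = 1.495978707e11 / 2.99792458e8       # 1 AU in light seconds
dbeta = 1.0 / (2.0 * np.pi * f0 * R)    # radians
print(dbeta * (180.0 / np.pi) * 3.6e6)  # ~66 mas
\end{verbatim}
which is consistent, at the order-of-magnitude level, with the $\simeq 100$ mas figure quoted above.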
A similar argument can be made for the proper motion of the source
where we would be safe to model the sky position as fixed if the
change
$(\Delta\alpha,\Delta\delta)=(\mu_{\alpha},\mu_{\delta})\Tspan$, over
the course of the observation also satisfied Eq.~\ref{eq:dbeta}.
\subsection{Spin frequency}\label{subsec:spinf}
The spin frequency $\spinf$ of some \acp{LMXB} can be directly
measured from X-ray pulsations, believed to originate from a hot-spot
on the stellar surface, where accreted material is funneled onto the
magnetic pole with the magnetic axis generally misaligned with the
spin axis. X-ray pulsations have been observed in 13 \ac{LMXB} systems
so far, three of which are intermittent~\cite{LambEA_2011}.
Some \acp{LMXB} exhibit recurrent thermonuclear X-ray bursts. Fourier
spectra reveal oscillations during the rise and tail of many bursts,
which are believed to originate from asymmetric brightness patterns on
the stellar surface. In seven \acp{LMXB} which exhibit both pulsations
and bursts, the asymptotic burst oscillation frequency at late times
matches the pulse frequency. Where there are no pulsations, many
bursts need to be observed to measure the asymptotic burst oscillation
reliably. The spin frequencies of an additional ten systems have been
determined from burst oscillations only~\cite{Watts_2012}, but due to
the uncertainties involved they are usually quoted only to within
$\pm(5$--$10)$ Hz.
Another class of \acp{LMXB} exhibit high frequency \acp{QPO} in their
persistent X-ray emission. These kHz \acp{QPO} usually come in pairs,
although singles and triples are occasionally observed and the
\ac{QPO} peak frequencies usually change over time. In some cases the
separation of the \ac{QPO} peaks is roughly constant, but this is not
always the case \cite{vanderKlis_1998, ZhangEA_2012,
vanderKlisEA_1996}. For the few \ac{QPO} systems where $\spinf$ can
be determined from pulses or burst oscillations, no existing \ac{QPO}
model linking the \ac{QPO} and spin frequencies has proved consistent
with the observations. For our purposes, $\spinf$
is considered unknown in sources without pulsations or confirmed
bursts.
In addition to potentially broad uncertainties in $\spinf$, we know
little about how its value may fluctuate over time due to
accretion. Changes in the accretion flow will exert a time varying
torque on the star which will result in a stochastic wandering of the
spin frequency. In this case the signal can no longer be assumed
monochromatic over a given observation time. To quantify the resulting
phase wandering, we assume that the fluctuating component of the
torque $\delta \Na$ flips sign randomly on the timescale $\tspinw$
consistent with the inferred variation in accretion rate. If the mean
torque is $\Na = \dot{M}(GMR)^{1/2}$, as for steady-state disk-fed
accretion, then the angular spin frequency $\Omega_{s}=2\pi\spinf$
experiences a random walk with step size $(\delta \Na/I)\tspinw$, where
$I$ is the stellar moment of inertia. After time $\Tspan$, the
root-mean-square drift is
\begin{equation}
\langle (\delta \Omega)^2 \rangle ^{1/2} = \left({\Tspan}/{\tspinw}\right)^{1/2} \frac{\delta \Na \tspinw}{I}.
\end{equation}
This frequency drift will wander outside a Fourier frequency bin width if $\langle
(\delta \Omega)^2 \rangle ^{1/2} > 2\pi/\Tspan$. If we choose $\tspinw$ such that the accretion rate can vary up to a factor of two in this time, then the worst case $\delta\Na=\Na$ leads to the restriction
\begin{eqnarray} \label{eq:Ts_spin}
\Tspansw < \frac{(2\pi)^{2/3}}{(GMR)^{1/3}}
\left(\frac{I}{\dot{M}}\right)^{2/3} \left(\frac{1}{\tspinw}\right)^{1/3}.
\end{eqnarray}
This is the primary reason why an application of the basic
sideband search, as described here, must be limited in the length of
data it is allowed to analyse. By exceeding this limit it becomes
increasingly likely that the spin wandering inherent to a true signal
will cause signal power to leak between adjacent frequency bins.
Consequently the assumption that $\mathcal{F}$-statistic\xspace signal power is localized in
frequency-modulated sidebands will become invalid and the sensitivity
of the $\mathcal{C}$-statistic\xspace will deteriorate.
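Evaluating Eq.~\ref{eq:Ts_spin} numerically for fiducial Sco X-1 parameters (the $I$ and $\dot{M}$ values of Table~\ref{tab:ScoX1}, plus a canonical $1.4\,M_\odot$, 10 km neutron star, our assumption) reproduces the $\approx 13$ day limit quoted in Sec.~\ref{subsec:ScoX1}:
\begin{verbatim}
import numpy as np

G, Msun = 6.674e-11, 1.989e30
M, R = 1.4 * Msun, 1.0e4          # canonical NS mass (kg), radius (m)
I, Mdot = 1.0e38, 1.23e15         # kg m^2, kg/s (Table ScoX1)
tau = 86400.0                     # spin-wandering timescale: 1 day

Tmax = ((2 * np.pi) ** (2.0 / 3.0) / (G * M * R) ** (1.0 / 3.0)
        * (I / Mdot) ** (2.0 / 3.0) * tau ** (-1.0 / 3.0))
print(Tmax / 86400.0)             # ~13.6 days
\end{verbatim}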
\subsection{Orbital Period}\label{subsec:period}
The sideband search relies on relatively precise \ac{EM} measurements
of the orbital period in order to construct a search template. The
duration of the orbit defines the minimum observation time, since $T
\gtrsim 3\Porb /2$ is required before sidebands
become clearly resolved in the spectrum~\cite{RansomEA_2003}. The
uncertainty in the orbital period will determine the number of
templates required to fully sample the search space, or equivalently,
the maximum observation time allowed for a single value of $\Porb$.
We will now provide an indication of the sensitivity of the search to
errors in the orbital period. If our estimate (observation) of the orbital
period $\Pmeas$ is offset from the true value $P_\submath{0}$ by an amount
$\Delta P$, we would expect the error to seriously affect the $\mathcal{C}$-statistic\xspace recovered
from the search once it is large enough to shift the outermost ``tooth'' in the sideband template by one canonical frequency bin away from the true sideband location. In this case, the offset between the template and true sideband frequency is
proportional to the number of sidebands from the central spike. There
will be low mismatch at the center of the template extending to
$\mathcal{O}(100\%)$ mismatch at the edges. It follows that the
average signal power recovered from such a mismatched template will
be $\mathcal{O}(50\%)$ and therefore serves as a useful threshold by
which to determine the maximum allowed $\Delta\Porb$.
If we use the measured value of $\Porb$ as our template parameter, the template
centered at frequency $f$ then consists of $\simeq 4\pi
f\asini$ unit spikes (or teeth) separated by $1/\Porb$. Assuming that the
central spike is exactly equal to the true intrinsic gravitational wave frequency, any
errors in the orbital period will be propagated along the comb, causing the
offset between the true and template frequency of any particular sideband to
grow progressively larger. The frequency difference $\Delta f_P$
between the outermost template sideband, at frequency $f + 2\pi f
\asini/\Porb'$, and the outermost signal sideband at $f + 2\pi f
\asini/\Porb$, is given by
\begin{equation} \label{eq:deltaf}
\Delta f_P \approx 2\pi f \asini \left(\frac{|\Delta \Porb|}{\Porb^{2}}\right),
\end{equation}
for $\Delta \Porb \ll \Porb$. To satisfy the condition described
above we now require that this frequency shift should be less than the size of
one frequency bin. The true frequency bin size $\fres$ is determined by
the observation time span and is given by
\begin{equation}\label{eq:fres}
\fres=\frac{1}{r\Tspan},
\end{equation}
where $r$ is the resolution used in the $\mathcal{F}$-statistic\xspace calculation.\footnote{The default resolution factor is $r=2$.} Using Eqs. \ref{eq:deltaf} and \ref{eq:fres} and imposing the condition that $\Delta f_P < \fres$ provides an estimate for the maximum allowable (orbital period limited) coherent
observation timespan,
\begin{eqnarray}\label{eq:Ts_Porb}
\Tspan^{\Porb} \approx \frac{\Porb^{2}}{2\pi r f\asini |\Delta \Porb|}.
\end{eqnarray}
Given a relatively poorly constrained orbital period uncertainty, this
restriction may provide too short a duration. This could be because
it is then in conflict with the requirement that $T>3\Porb/2$ or
simply because more \ac{SNR} is desired from the signal. In either
case, the orbital period space must then be divided into templates
with spacing $\delta \Porb$ derived from simply rearranging
Eq.~\ref{eq:Ts_Porb} to solve for $\Delta \Porb$. In relative terms the sideband search places very strong constraints
on the prior knowledge of the orbital period compared to the other
search parameters.
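For reference, plugging the Sco X-1 ephemeris of Table~\ref{tab:ScoX1} into Eq.~\ref{eq:Ts_Porb} (with the default $r=2$) reproduces the $\sim 50$ day entry of Table~\ref{tab:targets}:
\begin{verbatim}
import numpy as np

f, r = 1.0e3, 2                      # search frequency (Hz), resolution
P, dP, a = 68023.82, 0.06048, 1.44   # s, s, s (Table ScoX1)
Tmax = P ** 2 / (2 * np.pi * r * f * a * dP)
print(Tmax / 86400.0)                # ~49 days
\end{verbatim}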
\subsection{Semi-major axis}\label{subsec:asini}
An error in the value of the orbital semi-major axis results in an
incorrect choice for the number of sidebands in the template. As can
be seen in Eq.~\ref{eq:ECstat}, an underestimate results in the
summation of a fraction of the total power in the signal whereas an
overestimate results in a dilution of the total power by summing
additional noise from sideband frequencies containing no signal
contribution.
If we define the true semi-major axis parameter $\atrue$ as the measured value
$\ameas$ and some (unknown) fraction $\xi$ (where $\xi\in \mathbb{R}$) of the
measurement error $\Delta \ameas$ (i.e. $\atrue = \ameas +\xi\Delta \ameas$), we can investigate
the effects of errors on the semi-major axis parameter in terms of this offset
parameter $\xi$. We consider the advantage of using a deliberately offset value
$\atemp$ instead of the observed value $\ameas$ in order to minimize losses in
recovered \ac{SNR}.
The \ac{ROC} curves shown in Fig. \ref{fig:asiROCflat} show the
effects of these offsets, and clearly illustrate degradation in the
performance of the $\mathcal{C}$-statistic\xspace as $|\Delta \ameas|$ ($|\xi|$) increases. The
reduction in detection confidence at a given false alarm probability
is much faster for $\atemp < \atrue$ ($\xi<0$), when the template underestimates
the width of the sideband structure, than for $\atemp>\atrue$ ($\xi>0$). This is
natural considering the ``horned'' shape of the signal (see the left
hand panel of Fig.~\ref{fig:search} and
Sec.~\ref{subsec:template}). Although it is already clear from this
figure that the performance of the search is not symmetric about
$\atemp=\atrue$, this asymmetry is much better illustrated in
Fig. \ref{fig:asiROCperformance} where for different values of the
false alarm rate we show the detection probability plotted against the
offset parameter $\xi$.
This plot provides us with a rough scheme by which to improve the
search performance by exploiting the asymmetry in search sensitivity
with respect to $\xi$. In general, we are keen to probe the
low false alarm and high detection probability regime in which it is
clear that using a template based on an orbital semi-major axis value
greater than the best estimate reduces the possibility that the bulk of the
signal power (in the horns) will be missed. Based on
Fig.~\ref{fig:asiROCperformance} we choose
\begin{equation}
\atemp = \ameas + \Delta \ameas
\end{equation}
as our choice of semi-major axis with which to generate the search template.
\begin{figure}
\includegraphics[width=\columnwidth]{figure3}
\caption{\label{fig:asiROCflat}\ac{ROC} curves showing how the
performance of the flat template $\mathcal{C}$-statistic\xspace is affected by an offset in
the orbital semi-major axis assuming it is measured exactly (i.e. $\ameas=\atrue$).
The thick black curve represents a zero offset ($\xi=0$). Thick colored curves
represent a positive offset in the semi-major axis ($\xi>0$). Dashed colored
curves represent negative offsets ($\xi<0$). The fainter black curve is
constructed from a statistic governed by a non-central $\chi^2_{4M}(\lambda)$
distribution with a non-centrality parameter $\lambda=\ro^2$ and represents
the theoretical expected behavior of a perfectly matched template. Signal
parameters are the same as described in Fig. \ref{fig:ROCtempcompare}, with
$\ro=20$, $M=2001$ and using $10^6$ realizations of noise.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{figure4}
\caption{\label{fig:asiROCperformance}Performance of the $\mathcal{C}$-statistic\xspace with respect to offsets in the semi-major axis at given false alarm probability $F_a$. The
offset parameter $\xi$ quantifies the semi-major axis error in terms of known parameters $\ameas$ and $\Delta \ameas$. The color-bar represents the false alarm probability on a logarithmic scale, ranging from $10^{-3}$ (bottom, blue) to $10^{0}$ (top, red). Signal parameters are the same as Figs. \ref{fig:ROCtempcompare} and \ref{fig:asiROCflat}, with $\ro=20$, $M=2001$ and a $\Delta \ameas/\ameas=10\%$ fractional uncertainty on the semi-major axis, using $10^6$ realizations of noise.}
\end{figure}
\subsection{Orbital eccentricity}\label{subsec:ecc}
The orbits of the \ac{LMXB} sources are expected to be
highly circularized ($e<10^{-3}$) by the time mass transfer occurs
within the system. In Eq.~\ref{eq:bintime2} we give the first-order
correction (proportional to $e$) of the retarded time at the
\ac{SSB}. If we include higher order terms in the expansion, the phase
(Eq.~\ref{eq:binphase}) can be written as
\begin{align}
\Phi(t) \simeq
2\pi\fo\atrue\sum_{k=1}^{\infty}\Big\{c_{k}\sin\omega\cos\left[k\Omega\left(t-\tp\right)\right]\nonumber\\
+d_{k}\cos\omega\sin\left[k\Omega\left(t-\tp\right)\right]\Big\},
\end{align}
where the first 4 coefficients (expanded to $\mathcal{O}(e^4)$) in the
sum are given by
\begin{subequations}
\begin{eqnarray}
c_{1} &=& 1-\frac{3}{8}e^{2}+\frac{5}{192}e^{4} +
\mathcal{O}(e^{6})\\
d_{1} &=& 1-\frac{5}{8}e^{2}-\frac{11}{192}e^{4} +
\mathcal{O}(e^{6})\\
c_{2} &=& \frac{1}{2}e-\frac{1}{3}e^{3} + \mathcal{O}(e^{5})\\
d_{2} &=& \frac{1}{2}e-\frac{5}{12}e^{3} + \mathcal{O}(e^{5}).
\end{eqnarray}
\end{subequations}
Hence the phase for $e \neq 0$ is a sum of harmonics of the orbital
frequency. When these additional eccentric phase components are included,
the exponential of the sum of harmonics factorizes into a product, so the Jacobi-Anger expansion (Eq.~\ref{eq:eiz}) generalizes to
\begin{equation}
\exp\left[i\sum_{k}z_{k}\sin k\theta\right]=\prod_{k=1}^{\infty}\sum_{n=-\infty}^{\infty}J_{n}(z_{k})e^{ink\theta}, \label{eq:eizP}
\end{equation}
where $z_k$ corresponds to the $k^\text{th}$ amplitude term (on the left hand side) that defines the argument of the Bessel function for each $k$ in the product (on the right hand side). Equation \ref{eq:eizP} tells us that eccentric signals can be thought of in a similar
way to circular orbit cases. The signal can be modeled as being
composed of many harmonics all separated by some integer number of the
inverse of the orbital period. In the eccentric case $k$ is allowed
to be $>1$ and power can be spread over a far greater range of
harmonics. What is important to note however, is that the signal
power remains restricted to those discrete harmonics.
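The product form of Eq.~\ref{eq:eizP} is easy to verify numerically; a sketch (the truncation limits are our own choice) for a low-eccentricity case with $k=1,2$:
\begin{verbatim}
import numpy as np
from scipy.special import jv

theta = 0.7
z = {1: 2.0, 2: 0.05}     # z_k for k = 1, 2 (z_2 ~ z_1 * e, e << 1)
lhs = np.exp(1j * sum(zk * np.sin(k * theta) for k, zk in z.items()))
rhs = np.prod([sum(jv(n, zk) * np.exp(1j * n * k * theta)
                   for n in range(-60, 61))
               for k, zk in z.items()])
print(abs(lhs - rhs))     # both sides agree to machine precision
\end{verbatim}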
If we consider only leading order terms in the eccentricity expansion
(as in Eq.~\ref{eq:bintime2}) the form of the Jacobi-Anger expression given
above becomes the product of 2 sums where we consider only $k=1,2$.
The $k=1$ terms are simply the circular orbit terms and describe how
the signal power is distributed amongst $\approx 2z_{1}$
sidebands at frequencies offset from the intrinsic source frequency by
integer multiples of $1/\Porb$.
In our low eccentricity case we notice that the next to leading order
term in the expansion, $k=2$, has a corresponding Bessel function
argument of $\mathcal{O}(z_{1}e)$ and will therefore have far fewer,
$\sim 2z_{1}e$, non-negligible terms in the sum over $n$. Taking the
product between the $k=1$ and $k=2$ sums will then produce a
redistribution of the signal power amongst a slightly expanded range
of harmonic frequencies. For the circular orbit case we expect power
to be spread amongst $\approx 2z_{1}$ sidebands whereas we now expect
the same power to be divided amongst $\approx 2z_{1}(1+2e)$ sidebands.
In general, orbital eccentricity causes a redistribution of signal
power amongst the existing circular orbit sidebands and will cause
negligible leakage of signal power into additional sidebands at the
boundaries of the sideband structure. Orbital eccentricity also has
the effect of modifying the phase of each sideband. However, as shown
in Section~\ref{subsec:fmod}, the standard sideband search is
insensitive to the phase of individual sidebands.
\section{Primary sources}\label{sec:sources}
The benefit of the sideband search is that it is robust and
computationally cheap enough to be run over a wide frequency
band~\cite{MW2007}. The most suitable targets are those with
well-measured sky position and orbital periods, reasonably well
constrained semi-major axes, and poorly or unconstrained spin
frequency.
The most suitable candidates in terms of these criteria are \acp{LMXB} due to their high accretion rate (directly related to gravitational wave amplitude) and their visibility in the electromagnetic regime (predominantly X-ray, but optical and radio observations also provide accurate sky position, ephemeris and sometimes orbital information). They are classified into three main types depending on the behavior of their X-ray emissions: pulsing, bursting or \ac{QPO} sources. Pulsating and frequently bursting \acp{LMXB} usually have a well determined spin frequency and are better suited to the more sensitive, narrow-band
techniques, such as \ac{LIGO}'s known pulsar pipeline \cite{LIGO_S5_Crab_2008,LIGO_S5_KnownPulsars_2010}, including corrections for the binary motion. Non-pulsing burst sources with irregular or infrequent bursts still have a fairly wide (of order a few Hz) uncertainty range around the suspected spin frequency. A convincing relationship between \acp{QPO} and the spin frequency of the neutron star has not yet been determined, so the spin frequency of purely \ac{QPO} sources is considered unknown for our applications.
The gravitational wave strain amplitude is directly
proportional to the square root of the X-ray flux, $\ho \propto
\Fx^{1/2}$ (Eq. \ref{eq:hc_Fx_300}), so the most luminous sources,
which are usually the \ac{qpo} sources, will be the most detectable. In addition, the
(already weak) gravitational wave strain amplitude is proportional to the inverse of the distance to the source, so closer (i.e. galactic) sources
are also favorable.
In this section we present possible sources to which the sideband
search can be applied. We start with galactic \acp{LMXB} and consider
the most detectable sources in terms of their parameter
constraints. We exclusively consider sources requiring wide frequency
search bands ($\gtrsim 5$ Hz), and so neglect the accreting
millisecond pulsars. The detectability of a wider range of accreting
sources, with some measurement or estimate of spin frequency, in terms
of general gravitational wave searches was reviewed in \cite{WattsEA_2008}.
\subsection{Galactic LMXBs}\label{subsec:population}
\begin{table*}
\caption{Target sources for the sideband method. The different columns list the X-ray flux $F_X$ (in units of $10^{-8} \text{ erg cm}^{-2}\text{s}^{-1}$), distance $d$, sky position uncertainty $\Delta\beta$, fractional error on the semi-major axis $\Delta a/a$ and orbital period $\Delta P/P$, and the orbital period limited observation time at a frequency of 1 kHz $\Tspan^\Porb|_\text{1 kHz}$. The horizontal line separates \ac{qpo} (top) and burst (bottom) sources.}\label{tab:targets}
\begin{ruledtabular}
\begin{tabular}{lcccccc}
Source & $\Fx \;(F_*)$ \footnote{$F_*= 10^{-8} \text{ erg cm}^{-2}\text{s}^{-1}$} & d (kpc) &
$\Delta\beta$ (arc sec) & $\Delta \ameas/\ameas$ & $(\Delta \Porb/\Porb)
(\times 10^{-7})$ & $\Tspan^\Porb|_\text{1 kHz}$ \\ \hline
Sco X-1 & 40 & 2.8 & $3\times 10^{-4}$ & 0.13 & 9 & 50 days \\
4U 1820-30 & 2.1 & 7.4 & 0.15 & 0.48 & 2 & 300 days \\
Cyg X-2 & 1.1 & 10.55 & 0.5 & 0.12 & 0.004 & 400 years \\
J 2123-058 & 0.21 & 9.6 & 0.6 & 0.19 & 0.9 & 3.5 years \\ \hline
4U 1636-536 & 0.84 & 6 & $< 60$ & 0.11 & 10 & 17 days \\
X 1658-298 & 0.67 & 12 & 0.1 & 0.82 & 0.1 & 5 years \\
XB 1254-690 & 0.09 & 13 & 0.6 & 0.12 & 500 & 6 hours \\
EXO 0748-676 & 0.036 & 7.4 & 0.7 & 0.77 & 6 & 40 days \\
4U 1916-053 & 0.027 & 8 & 0.6 & 0.72 & 3 & 2 years
\end{tabular}
\end{ruledtabular}
\end{table*}
The sideband search is best suited to \acp{LMXB} with a relatively
large uncertainty in the spin frequency ($\gtrsim$ a few Hz), so
\ac{qpo} and poorly constrained burst sources are the best targets. The requirement of a
relatively well defined sky position and orbital period excludes many
sources including those that are considered to be X-ray bright.
Table~\ref{tab:targets} lists some of the galactic \acp{LMXB}, and
their limiting parameters, for which the sideband search is most
applicable. The parameters displayed in the table allow us to
determine the most suitable targets for the search.
For each source the table lists the bolometric X-ray flux $\Fx$, the
distance to the source $\dist$, the error in the sky position $\Delta\beta=(\Delta
\alpha,\Delta\delta)$, the fractional error in the semi-major axis
$\Delta \ameas/\ameas$ and the orbital period limited observation time
$\Tspan^\Porb|_\text{1 kHz}$ calculated using
Eq.~\ref{eq:Ts_Porb} at a frequency of 1 kHz. Although we could expect gravitational wave emission up to $\sim 1500$~Hz (from the currently measured spin distributions of \acp{LMXB}, which extend up to $\sim 720$~Hz), 1 kHz is chosen as an upper bound on the search frequency since the sensitivity of \ac{LIGO} detectors is limited at high frequencies and the amplitudes of these systems are not expected to be very strong. Sources with poorly constrained ($\Delta \asini/\asini > 0.9$) semi-major axis and sky position ($\Delta\beta > 60''$) have not been included. The sources are
listed in order of their bolometric X-ray flux within each source
group with \ac{QPO} sources in the top and burst sources in the bottom
half of the table. The distance is included as a reference but is
already taken into account in calculation of $\Fx$. From these factors
alone, \ac{ScoX1} is already the leading candidate source. The sky
position error $\Delta\beta$ should be less than 100
mas for a source with $\fo=1$ kHz (see
Sec. \ref{subsec:skypos}). \ac{ScoX1} is the only candidate that falls
easily within this basic limit, although a few other sources are
borderline cases. The fractional error in semi-major axis is included
also as a guide. Although a smaller error on this parameter improves
our sensitivity, as shown in Sec.~\ref{subsec:asini} we are relatively
insensitive to $\asini$ uncertainties on the scale of $10$'s of
percent. The final column lists the orbital period limited
observation timespan $\Tspan^\Porb$ at a frequency of 1 kHz. Although the spin frequencies of the burst sources are better constrained than those of the \ac{QPO} sources, the comparison of $\Tspan^\Porb$ is still made at 1 kHz so that a direct comparison of the source parameters (rather than search performance) can be made. This column is included for reference as the orbital
period may not be the tightest constraint on the observation time
(c.f. Sec. \ref{subsec:spinf}). It does, however, give an indication of
how well the orbital period of the source is constrained and specifically how it affects the search performance.
\subsection{Sco X-1}\label{subsec:ScoX1}
\begin{table*}
\caption{Sco X-1 system parameters required for the sideband search. Directly observable parameters are presented in the top half of the table. The bottom half, separated by the horizontal line, displays search limits and constraints derived from these.}\label{tab:ScoX1}
\begin{ruledtabular}
\begin{tabular}{lccrllcr}
Parameter & Symbol && Value & Units & Uncertainty && References \\ \hline
Right Ascension & $\alpha$ && $16^h19^m55^s.0850$ & mas & $\pm0.3$ && \cite{BradshawEA_1999} \\
Declination & $\delta$ && $-15^\circ 38' 24.9" $ & mas & $\pm0.3$ && \cite{BradshawEA_1999} \\
Proper motion & $\mu$ && $14.1$ & mas yr${}^{-1}$ & && \cite{BradshawEA_1999} \\
Parallax & $\pi_\beta$ && $0.36 $ & mas &$\pm0.04$ && \cite{BradshawEA_1999} \\
Moment of inertia & $I$ && $10^{38}$ & $\text{kg m}^2$ & && \cite{BradshawEA_1999} \\
Accretion rate & $\dot{M}$ && $1.23 \times 10^{15}$ & kg s${}^{-1}$& && \cite{BradshawEA_1999} \\
Bolometric X-ray flux & $\Fx$ && $40 \times 10^{-8}$ & $\text{erg cm}^{-2}\text{s}^{-1}$ & && \cite{WattsEA_2008} \\
Projected semi-major axis light travel time & $\ameas$ && 1.44 & s & $\pm 0.18$ && \cite{Steeghs_Casares_2002} \\
Orbital Period & $\Pmeas$ && 68023.82 & s & $\pm 0.06048$ && \cite{GallowayEA_2012} \\
NS spin inclination angle & $\iota$ && $44 $ & deg & $\pm 6$ && \cite{FomalontEA_2001b} \\
GW polarization angle & $\psi$ && $234$ & deg & $\pm 3$ && \cite{FomalontEA_2001b} \\
Time of periapse passage (SSB) & $\tp$ && 614638484 & s & $\pm$ 400 && \cite{Steeghs_Casares_2002, Messenger2005} \\ \hline \\
Strain amplitude (at $\spinf = 300$ Hz) & $\ho^{300}$ && $3.5 \times 10^{-26}$ && && Eq. \ref{eq:hc_Fx_300} \\
Spin limited observation timespan & $\Tspansw$ && 13 &days & && Eq. \ref{eq:Ts_spin}
\end{tabular}
\end{ruledtabular}
\end{table*}
\ac{ScoX1}, the first \ac{LMXB} to be discovered, is also the
brightest extra-solar X-ray source in the sky. The direct relation
between gravitational wave strain and X-ray flux given by Eq.~\ref{eq:hc_Fx_300}
makes it also the most likely to be a strong gravitational wave emitter. This, as
well as the parameter constraints displayed in
Table~\ref{tab:targets}, makes it an ideal candidate for the sideband
search. Table~\ref{tab:ScoX1} provides a list of \ac{ScoX1}
parameters determined from various electromagnetic observations. The
table includes the parameters required to run the sideband search
together with some values used for calculating limits and constraints
on the performance and sensitivity of the search. The bottom section
of the table lists some of the limits and constraints derived using
the above mentioned parameters.
Running the standard version of the sideband search requires accurate
knowledge of the sky position and orbital period and approximate
knowledge of the semi-major axis. The sky position $\beta= (\alpha,\delta)$ listed for \ac{ScoX1} is accurate to within 0.3 mas. This error is well within the 100 mas limit defined in Sec.~\ref{subsec:skypos}, justifying the assumption of a fixed sky
position. The orbital period is measured accurately enough that
only a single sideband template is required if the observation timespan satisfies
$\Tspan < \Tspan^{\Porb} \approx 49$ days (for \ac{ScoX1} at 1
kHz). The semi-major axis and its measurement error are also required
for construction of the sideband template.
Estimates of the primary (accreting neutron star) and secondary (donor
star) masses, as well as measurements of the bolometric X-ray flux
($\Fx$) are required to estimate the indirect, torque balance, gravitational wave
strain upper limit $\htorq$ using Eq.~\ref{eq:hc_Fx_300}, displayed in
the bottom section of the table. The spin frequency limited
observation timespan is also listed here and requires values for the
accretion rate $\dot{M}$ and moment of inertia $I$ to calculate this
value for \ac{ScoX1} using Eq.~\ref{eq:Ts_spin} assuming a
spin-wandering timescale $\tspinw = 1$ day.\footnote{Assuming the instantaneous accretion rate does not vary more than the X-ray flux, observations of the X-ray variability of \ac{ScoX1} show that the accretion rate can vary by roughly a factor of two over a timescale $\tspinw =1$ day~\cite{BradtEA_1975, HertzEA_1992}.} The corresponding value of
$\Tspansw\approx 13$ days displayed in the table is more
restrictive than the orbital period limited timespan and is our
limiting time constraint in the search.
\section{Statistical analysis}\label{sec:stats}
Let us first assume that our analysis has yielded no significant
candidate signal given a designated significance threshold. In this
case, with no evidence for detection, we place an upper limit on the
possible strength of an underlying signal. In the literature on
continuous-wave gravitational signals, it is common to determine these
upper limits
numerically~\cite{LIGO_S2_ScoX1_2007,LIGO_2008_PRD77,LIGO_S5_2012arXiv}
or semi-analytically~\cite{LIGO_CasA_2010, Wette2012} using
frequentist Monte Carlo methods. In these cases simulated signals are
repeatedly added to data over a range of frequencies and recovered
using a localized, computationally cheap, search around the point of
injection.
The sideband algorithm combines signal from many (typically $\sim
10^3$) correlated $\mathcal{F}$\xspace-statistic frequency bins which must be
computed over a relatively wide frequency band for each simulated
signal. Such computations represent a computational cost far in excess
of existing methods and are only manageable for a small parameter
space, e.g. injection studies where the signal frequency is known and
$\mathcal{O}(10^2)$ realizations are feasible. The computations become
daunting for a wide-band search covering more than a few Hz.
We choose to optimize the process by calculating upper limits within a Bayesian
framework. This is an especially appealing alternative since the \ac{pdf} of the
$\mathcal{C}$\xspace-statistic takes a relatively simple, closed, analytic form. Bayesian
upper limits have been computed in time-domain gravitational wave
searches targeting known sources (pulsars)
\cite{DupuisWoan_2005,LIGO_S5_KnownPulsars_2010,LIGO_Vela_2011}, and
cross-correlation searches for the stochastic background
\cite{LIGO_S4_ScoX1_2007,LIGO_2007_PRD76,LIGO_S5stoch_ScoX1_2011}. Comparisons
on specific data sets have shown that Bayesian and frequentist upper limits are
consistent \cite{LIGO_2004_PRD69,RMP2011,LIGO_Vela_2011}.
\subsection{Bayes Theorem}\label{subsec:Bayes}
In the Bayesian framework, the posterior probability density of the parameters
$\bm{\theta}$ of a hypothesis $H$, given the data $D$ and our background information $I$, is defined as
\begin{equation}\label{eq:Bayes}
p(\bm{\theta}|D,H,I) = p(\bm{\theta}|H,I)\frac{p(D|\bm{\theta},H,I)}{p(D|H,I)},
\end{equation}
where $p(\bm{\theta}|H,I)$ denotes the prior probability distribution
of our model parameters $\bm{\theta}$ given a model $H$ assuming the
background information $I$. The quantity $p(D|\bm{\theta},H,I)$ is
the direct probability density (or likelihood function) of the data
given the parameters, model and background information. The term
$p(D|H,I)$ is known as the evidence of $D$ given our model and acts as
a normalisation constant and does not affect the shape of the
posterior distribution
$p(\bm{\theta}|D,H,I)$~\cite{Bretthorst1988}. The background
information $I$ (which represents our signal model, assumptions on
Gaussian noise, physicality of parameters etc.) remains constant
throughout our analysis and will not be mentioned hereafter.
\subsection{Likelihood}\label{subsec:likelihood}
When there is no signal in the data, we will say the null hypothesis
$\nullH$, that the data contains only Gaussian noise, is true. Under
these conditions, each $\mathcal{C}$ value is drawn from a central
$\chi^2_{4M}$ distribution. Hence the $\nullH$ model is parametrized
entirely by $M=2m+1$, the number of sidebands in the template, where
$m=\texttt{ceil}[2\pi f a]$ and depends on the search
frequency $f$ and semi-major axis $a$.
The signal hypothesis $\sigH$ is true if the data contains Gaussian
noise plus a signal. The signal is defined by the set of parameters $\bm{\theta}=\{\ho, \cos\iota, \psi, \phio, a, P\}$. In the case of a signal present in the data, each $\mathcal{C}$\xspace-statistic is drawn from
a non-central $\chi^2_{4M}[\lambda(\bm{\theta})]$ distribution. The non-centrality
parameter $\lambda(\bm{\theta})$ is defined by the signal parameters $\bm{\theta}$ and is given by
\begin{equation}
\lambda(\bm{\theta}) = \ro^{2}\sum_{n=-m}^{m}J^{2}_{n}(2\pi
\fo\atrue)|\tilde{W}(f_{n}-f_{k}+l_{[n]}\fres)|^2. \label{eq:lambda}
\end{equation}
It represents the total recovered optimal \ac{SNR} contained within
the sidebands. The likelihood function (the probability of our measured $\mathcal{C}$
value given a parameter set $\bm{\theta}$) is then given by
\begin{equation}\label{eq:likelihood}
p(\mathcal{C}|\bm{\theta}) = \frac{1}{2}
\exp\left(-\frac{1}{2}\left[\mathcal{C}+\lambda\left(\bm{\theta}\right)\right]\right)
\left(\frac{\mathcal{C}}{\lambda\left(\bm{\theta}\right)}\right)^{M-\frac{1}{2}}
\textrm{I}_{2M-1}\Bigg(
\sqrt{\mathcal{C} \lambda\left(\bm{\theta}\right)}\Bigg).
\end{equation}
It should be noted that although the quantity $M$ is a function of the
semi-major axis and intrinsic gravitational wave frequency, it has been
fixed according to the predefined number of teeth used in the sideband
template. It is therefore not a function of $\bm{\theta}$.
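In practice the likelihood of Eq.~\ref{eq:likelihood} need not be coded by hand, since it is exactly a non-central $\chi^2_{4M}$ density; a sketch comparing both routes (the variable values are illustrative only):
\begin{verbatim}
import numpy as np
from scipy.stats import ncx2
from scipy.special import iv     # modified Bessel function I_nu

M, lam, C = 5, 12.0, 40.0        # illustrative values only
closed = (0.5 * np.exp(-0.5 * (C + lam))
          * (C / lam) ** (M - 0.5) * iv(2 * M - 1, np.sqrt(C * lam)))
library = ncx2.pdf(C, df=4 * M, nc=lam)
print(closed, library)           # identical: the likelihood is ncx2
\end{verbatim}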
\subsection{Priors}\label{subsec:priors}
When searching for weak signals, an overly prescriptive prior is
undesirable because it may dominate the posterior. Hence, to be
conservative, we adopt a uniform prior on $\ho\geq 0$; the possibility
of $\ho=0$ excludes the use of a fully scale-invariant Jeffreys prior
$\propto 1/\ho$ \cite{DupuisWoan_2005}. The upper limit thus derived
is consistent with the data, not just a re-iteration of the prior. The
same $\ho$ prior has been adopted in previous searches
\cite{LIGO_2004_PRD69, LIGO_S2_ScoX1_2007, LIGO_2008_PRD77,
PrixKrishnan_2009}; the motivation is discussed in more detail in
\cite{DupuisWoan_2005}.
Electromagnetic measurements of the orbital period $\Pmeas$ and
semi-major axis $\ameas$ are assumed to carry normally distributed
random errors. Hence we adopt Gaussian priors on the actual values
$\Po$ and $\atrue$. Specifically we take
$p(\Po)=\mathcal{N}(\Pmeas,\Delta\Pmeas)$ and $p(\atrue)=
\mathcal{N}(\ameas,\Delta\ameas)$, where $\mathcal{N}(\mu,\sigma)$
denotes a Gaussian (normal) distribution with mean $\mu$ given by the
electromagnetic observation and standard
deviation $\sigma$ taken as the error in that observation.
The reference phase $\phi_{0}$ is automatically maximized over
within the $\mathcal{F}$\xspace stage of the analysis and therefore does not
directly affect our (semi-coherent) analysis. The remaining amplitude
parameters serve only to influence the optimal \ac{SNR}, and therefore
also the $\mathcal{C}$-statistic\xspace. Without prior information from electromagnetic
observations, we select the least informative (ignorant)
\emph{physical} priors such that $p(\cos\iota)=1/2$ and
$p(\psi)=1/2\pi$ on the domains $(-1,1)$ and $(0,2\pi)$ respectively.
Any prior informative measurements (e.g. electromagnetic) on the
amplitude parameters can be incorporated into the analysis, and serve
to narrow the prior probability distributions. For the Sco X-1
search, we can deduce measurements for $\cos\iota$ and $\psi$ from
observations if we assume the rotation axes of the neutron star and
accretion disk are aligned. This implies the neutron star inclination
$\iota$ is equal to the orbital inclination. We can then set $\iota =
44^\circ \pm 6^\circ$ from the inclination of the orbital plane suggested
from observations of the radio components of Sco
X-1~\cite{FomalontEA_2001b}. The same observations measure a position
angle of these radio jets of $54\pm 3^\circ$. Under the alignment
assumption, the position angle is directly related to the gravitational wave
polarization angle $\psi$, but with a phase shift of $180^\circ$,
i.e. $\psi = 234\pm 3^\circ$.
The above assumes the usual mass-quadrupole emission; for current-quadrupole
emission from $r$-modes the results are the same with $\psi \rightarrow \psi + 45^\circ$
\cite{Owen2010}.
\subsection{Posteriors}\label{subsec:posts}
The \ac{PDF} on our search parameters given a single $\mathcal{C}$\xspace-statistic
value is
\begin{equation}
p({\bm{\theta}}|\mathcal{C}) \propto p(\mathcal{C}|\bm{\theta})p(\bm{\theta}),
\end{equation}
and assuming that the prior \acp{PDF} on our parameters are
independent, we can express the posterior \ac{PDF} as
\begin{eqnarray}
p(\ho,\cos\iota,\psi, \Porb, \asini| \mathcal{C})\propto p(\mathcal{C}|\bm{\theta})p(\ho)p(\cos\iota)p(\psi)p(\Porb)p(\asini). \nonumber \\
\end{eqnarray}
To perform inference on the gravitational wave strain $\ho$, we can marginalize
this joint distribution over the other parameters leaving us with
\begin{eqnarray}\label{eq:h0_post}
p(\ho|\mathcal{C})\!&\propto&\!\int\limits_{-\infty}^{\infty}\!d\asini
\!\int\limits_{-\infty}^{\infty}\!d\Porb\!\int\limits_{0}^{2\pi}\!d\psi
\!\int\limits_{-1}^{1}\!d\cos\iota~p(\mathcal{C}|\bm{\theta})\mathcal{N}(\Porb,\Delta{P})\mathcal{N}(\asini,\Delta{\asini}), \nonumber \\
\end{eqnarray}
where the flat priors on $\ho$, $\cos\iota$ and $\psi$ are absorbed into the
proportionality. Note that the amplitude parameters act through the
non-centrality parameter $\lambda(\bm{\theta})$ (Eq.~\ref{eq:lambda})
via the optimal \ac{SNR} term (Eq.~\ref{eq:rho2}) in the likelihood.
The orbital parameters $\asini,P$ dictate the fraction of recovered
\ac{SNR} based on the mismatch in the predicted quantity and location
of frequency-modulated sidebands (Eq.~\ref{eq:lambda}).
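A brute-force numerical route to Eq.~\ref{eq:h0_post} is marginalization on a grid. The sketch below is deliberately simplified (the simplifications are ours): the orbital parameters are held fixed at their measured values, and \texttt{snr2} is a user-supplied, hypothetical function mapping $(\ho,\cos\iota,\psi)$ to the optimal $\ro^{2}$ of Eq.~\ref{eq:rho2}.
\begin{verbatim}
import numpy as np
from scipy.stats import ncx2

def h0_posterior(C, M, h0_grid, snr2):
    # Unnormalized p(h0|C), marginalized over cos(iota) and psi
    # with flat priors; orbital parameters held fixed.
    cosi = np.linspace(-1.0, 1.0, 101)[:, None]
    psi = np.linspace(0.0, 2.0 * np.pi, 101)[None, :]
    post = np.empty_like(h0_grid)
    for i, h0 in enumerate(h0_grid):
        lam = np.maximum(snr2(h0, cosi, psi), 1e-12)  # keep nc > 0
        post[i] = ncx2.pdf(C, df=4 * M, nc=lam).mean()
    dh = h0_grid[1] - h0_grid[0]       # assumes a uniform grid
    return post / (post.sum() * dh)
\end{verbatim}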
\subsection{Detection criteria and upper limits}\label{subsed:detection}
To determine whether or not a signal is present in the data, we
compute a threshold value of the $\mathcal{C}$-statistic\xspace such that the probability of
achieving such a value or greater due to noise alone is $\Pa$, the
false alarm probability. For a single measurement of the $\mathcal{C}$-statistic\xspace
this threshold is computed via
\begin{eqnarray}
\Pa &=& \int\limits_{\cstar}^{\infty}p(\mathcal{C}|\ho=0)\,d\mathcal{C} \nonumber \\
&=& 1 - \mathcal{P}\left(2M,\cstar/2\right),\label{eq:Pa}
\end{eqnarray}
where the likelihood on $\mathcal{C}$ in the noise-only case becomes a
central $\chi^2$ distribution, and $\mathcal{P}(k/2,x/2)$ is the regularized lower incomplete gamma function, i.e. the cumulative distribution function of a central $\chi^2_k$ distribution with $k$ degrees of freedom evaluated at $x$.
In the case of $N$ measurements of the $\mathcal{C}$-statistic\xspace, assuming statistically
independent trials, the false alarm probability is given by
\begin{eqnarray}
P_{a|N} &=& 1-(1-\Pa)^N \nonumber \\
&=& 1-\left[\mathcal{P}\left(2M, \cstar/2\right)\right]^N.
\end{eqnarray}
The corresponding threshold $\cstar_{N}$, such that the probability that one or more of these values
exceeds it equals $P_{a|N}$, is obtained by solving
\begin{equation}
\mathcal{P}(2M,\cstar_{N}/2)= \left(1-P_{a|N}\right)^{1/N}. \label{eq:P_cstar}
\end{equation}
This solution is obtained numerically but can be represented notationally by
\begin{equation}\label{eq:cstar}
\cstar_N = 2 \mathcal{P}^{-1}\left(2M,\left[1-P_{a|N}\right]^{1/N}\right),
\end{equation}
where $\mathcal{P}^{-1}$ represents the inverse function of $\mathcal{P}$.
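Since $\mathcal{P}(2M,\cstar/2)$ is the $\chi^2_{4M}$ cumulative distribution function, Eq.~\ref{eq:cstar} is a one-line quantile evaluation in any standard statistics library; a sketch:
\begin{verbatim}
from scipy.stats import chi2

def c_threshold(Pa_N, M, N):
    # C*_N: the chi^2_{4M} quantile at probability
    # (1 - Pa_N)^(1/N), per Eq. (cstar)
    return chi2.ppf((1.0 - Pa_N) ** (1.0 / N), df=4 * M)

# e.g. a 1% false alarm probability over 10^4 independent trials
print(c_threshold(0.01, 2001, 10 ** 4))
\end{verbatim}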
In practice the $\mathcal{C}$-statistic\xspace values will not be statistically
independent as assumed above. The level of independence between
adjacent frequency bins will be reduced (i.e. values will
become increasingly correlated) as the frequency resolution of the
$\mathcal{C}$-statistic\xspace is made finer. Additionally, due to the comb structure of the
signal and template we find that results at frequencies separated by
an integer number of frequency-modulated sideband spacings $j/P$ Hz for $j<m$ are
highly correlated. This is due to the fact that these results will
have been constructed from sums of $\mathcal{F}$-statistic\xspace values containing many
common values. This latter effect is dominant over the former and as
an approximation it can be assumed that within the frequency span of a
single comb template there are $rT/P$ independent $\mathcal{C}$-statistic\xspace results.\footnote{This comes from the number of bins in between each sideband, given by the sideband separation $1/P$ divided by the bin size $\fres = (r\Tspan)^{-1}$ (Eq. \ref{eq:fres}).} The number of template spans per unit search frequency is
$\sim P/rM$, which leaves us with $\sim T/M$ independent $\mathcal{C}$-statistic\xspace values per unit Hz.
This is a reduction by a factor of $M$ in the number of statistically
independent results expected.
In the event of there being no candidate $\mathcal{C}$-statistic\xspace values, the search
allows us to compute upper-limits on the amplitude of gravitational waves from
our target source. We define the upper limit on the wave strain
$\ho$ as the value $\hUL$ that bounds the fraction UL of the area of
the marginalized posterior distribution $p(\ho|\mathcal{C})$. This value is
obtained numerically by solving
\begin{equation}
\text{UL} = \int\limits_{0}^{\hUL}p(\ho|\mathcal{C}) \,\, d\ho. \label{eq:h0_UL}
\end{equation}
We note that this procedure allows us to compute an upper-limit for
each $\mathcal{C}$-statistic\xspace value output from a search. The standard practice in
continuous gravitational wave data analysis is to compute a frequentist
upper-limit using computationally expensive Monte-Carlo simulations
involving repeated signal injections. The results of these injections
are then compared to the loudest detection statistic recovered from the
actual search~\cite{LIGO_S2_ScoX1_2007}. In our approach, since we can
compute upper-limits very efficiently for each $\mathcal{C}$-statistic\xspace value, and
since the upper-limit is a monotonic function of $\mathcal{C}$, we
naturally also include the worst case (loudest
event) result. The difference in the upper-limits obtained from both
strategies then becomes an issue of Bayesian versus Frequentist
interpretation. However, as shown in~\cite{RMP2011}, in the limit of
large \ac{SNR} these upper-limit results become indistinguishable.
When searching wide parameter spaces with large numbers of templates,
as is the case for the sideband search, the most likely largest
detection statistic value will be consistent with large \ac{SNR}.
\section{Sensitivity}\label{sec:sensitivity}
\begin{figure}
\includegraphics[width=\columnwidth]{figure5}
\caption{\label{fig:Bayes_sens}
Sensitivity estimate for a 10 day standard, approximate demodulated and approximate demodulated with known priors sideband search (fine, medium and bold solid curves respectively) using \ac{LIGO} (H1L1) S5 data (upper, purple group), and using the 3-detector (H1L1V) advanced \ac{LIGO} configuration (lower, red group). Also shown are results from the previous coherent search for \ac{ScoX1} in S2 data (solid black dashes) \cite{LIGO_S2_ScoX1_2007} and the maximum upper limits for each Hz band of the directed stochastic (radiometer) search in S4 and S5 data (light and dark blue dashed curves, respectively) \cite{LIGO_S4_ScoX1_2007,LIGO_S5stoch_ScoX1_2011}. The theoretical torque-balance gravitational wave strain upper
limit ($\htorq$ from Eq. \ref{eq:hc_Fx_300}) for Sco X-1 is indicated by the thick gray straight line.}
\end{figure}
The sensitivity of a future search can be predicted in a variety of ways. We choose to estimate the expected gravitational wave strain upper-limits
for Initial \ac{LIGO} data in order to compare against previous results. We also compare this to the expected sensitivity of the search with Advanced \ac{LIGO}.
If the search is conducted such that the frequency space is split into small sub-bands, the sensitivity can be estimated by computing upper limits on the expected maximum from each of the sub-bands in Gaussian noise. This is equivalent to assigning a false alarm probability $\Pa=50\%$ to $N=\Tspan/M$ trials in each (say, one Hz wide) frequency sub-band, and using Eq.~\ref{eq:cstar} as the expected $\mathcal{C}$-statistic\xspace. We can then calculate the posterior distribution of $\ho$ from Eqs.~\ref{eq:likelihood} and~\ref{eq:h0_post}.
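This threshold can be sketched numerically as follows, under the assumption (ours, for illustration only) that in Gaussian noise the $\mathcal{C}$-statistic\xspace, being a sum of $M$ $\mathcal{F}$-statistic\xspace values with 4 degrees of freedom each, follows a central $\chi^2$ distribution with $4M$ degrees of freedom; the numbers used are purely indicative.
\begin{verbatim}
from scipy.stats import chi2

def expected_loudest_C(N, M, Pa=0.5):
    # per-trial false-alarm probability such that the probability of at
    # least one of N independent trials exceeding the threshold is Pa
    p_single = 1.0 - (1.0 - Pa)**(1.0 / N)
    return chi2.isf(p_single, df=4 * M)

M = 2400                  # illustrative number of sidebands
N = 864000 // M           # Tspan = 10 days = 864000 s, trials per Hz
print(expected_loudest_C(N, M))
\end{verbatim}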
Figure~\ref{fig:Bayes_sens} shows the sensitivity estimate of the
$90\%$ upper limit (UL=0.9) for the sideband search in different
modes: standard (described in Section \ref{sec:sideband}, represented by the thin solid curves), binary demodulated
(described in Section \ref{subsec:demod}, represented by the medium solid curves), and binary demodulated with known priors
on $\cos\iota$ and $\psi$ (described in Section \ref{subsec:priors}, represented by the bold solid curves). It compares
the sensitivity of the search in two-detector (H1L1) LIGO S5 data (upper, purple group) and three-detector (H1L1V) Advanced LIGO data (red group) with previous searches for \ac{ScoX1} in LIGO S2 (black dashes) \cite{LIGO_S2_ScoX1_2007}, S4 and S5 data (light and dark blue dashed curves, respectively) \cite{LIGO_S4_ScoX1_2007,LIGO_S5stoch_ScoX1_2011}. The $h_{\text{rms}}$ upper limit quoted in the latter two (radiometer) searches is optimized for the special case of a circularly polarized signal and hence less conservative than the angle averaged $\ho$ quoted in \cite{LIGO_S2_ScoX1_2007} and commonly used when quoting upper limits for continuous gravitational wave searches. Converting detector-strain rms upper limits $h_\text{rms}$ to source-strain amplitude upper limits $\ho$ requires $\ho\sim 2.43 h_{\text{rms}}$ (see \cite{Messenger2010}). The different confidence levels of the coherent S2 analysis and the S4 and S5 radiometer analyses (90 and 95$\%$, respectively) also complicate direct comparisons. The theoretical indirect wave strain limit $\htorq$ for gravitational waves from \acp{LMXB}, represented by the thick gray line, comes from Eq. \ref{eq:hc_Fx_300}.
The sensitivity curves in Fig. \ref{fig:Bayes_sens} show that the standard sideband search should improve current upper limits on gravitational waves from \ac{ScoX1}, even though it is limited to only 10 days of consecutive data. Running a demodulated search with known $\cos \iota$ and $\psi$ comes close to setting constraints on the indirect (torque-balance) upper limits in the advanced detector era.
\section{Discussion}\label{sec:discussion}
We have described the sideband algorithm and shown that it provides a
computationally efficient method to search for gravitational waves from sources
in binary systems. It requires accurate knowledge of the sky position
of the source and the orbital period of the binary, and less accurate
knowledge of the semi-major axis. Effects of spin wandering can be
ignored over a short enough coherent integration time.
The tolerance on the errors of relevant search parameters was computed, defining the range over which they can be assumed constant (Section \ref{sec:params}). In light of these limits, electromagnetic observations suggest several candidates (Section \ref{sec:sources}). Of these sources, \ac{ScoX1} is
identified as the strongest candidate based on the gravitational wave strain
recovered from the torque-balance argument (Eq.
\ref{eq:hc_Fx_300}). In the future, the search can also be directed at
several of the other suitable \ac{LMXB} candidates presented in
Section \ref{sec:sources}.
A Bayesian upper limit strategy was presented in Section \ref{sec:stats}, rather than the frequentist methods commonly employed in
frequency-based (\ac{LIGO}) searches. Knowing the likelihood function in closed
analytic form makes the Bayesian approach computationally more
feasible than Monte Carlo simulations (see Section
\ref{subsec:likelihood}). Knowing the gravitational wave polarization angle and
inclination leads to additional sensitivity improvements using this
framework (see Section \ref{subsec:priors}).
The sensitivity of the search, described in Section
\ref{sec:sensitivity}, is estimated by performing the Bayesian
analysis on the design curves of the S5 and Advanced \ac{LIGO} noise
floors. The sensitivity of a 10 day limited \ac{ScoX1} directed sideband search compared to previous \ac{LIGO} searches is shown in Fig. \ref{fig:Bayes_sens}.
It shows that measurements of an orbital reference time and phase (the time and argument of periapse)
can be employed to improve search sensitivity by a factor of $\sim 1.5$ in the approximate
demodulated version of the search. Also, prior information on the polarization and inclination of the gravitational wave signal constrains the upper limit calculation, improving the sensitivity by another factor of $\sim 1.5$. In its most sensitive configuration (approximate binary demodulation assuming known $\cos\iota$ and $\psi$ in the Advanced detector era), the sideband search brings us closer to testing the theoretical indirect torque-balance limit.
The studies presented here assume pure Gaussian noise. The performance
for realistic \ac{LIGO}-like noise will be presented elsewhere, in a report on the results from the \ac{ScoX1} directed search performed
on \ac{LIGO} (S5) data. The search is also well suited to run on
next-generation Advanced \ac{LIGO} data.
\section*{ACKNOWLEDGEMENTS}
\input acknowledgement.tex
\label{sec:intro}
For about a century, three main research fields have taken an interest in the various space plasma environments found around the Sun. On the one hand, two of them, namely planetary science and solar physics, have been exploring the solar system, to understand the functioning and history of its central star, and of its myriad of orbiting bodies. On the other hand, the third one, namely fundamental plasma physics, has been using the solar wind as a handy wind tunnel which allows researchers to study fundamental plasma phenomena not easily reproducible on the ground in laboratories. During the last two decades, bridges between these communities have been developing, as the growing knowledge of each community was bringing the fields ever closer, to a point where overlapping topics made the communities work together. For instance, while initially planetary scientists were studying the interaction between solar system bodies and a steady, ideally laminar solar wind, they soon had to consider the eventful and turbulent nature of the solar wind to go further in the analysis of in situ space data, and further in their understanding of the interactions at various obstacles. While plasma physicists were originally interested in a pristine solar wind unaffected by the presence of obstacles, they realised that the environment close to these obstacles could provide combinations of plasma parameters otherwise not accessible to their measurements in the unaffected solar wind. For a while now, we have seen planetary studies focusing on the effects of solar wind transients (such as Coronal Mass Ejections, CMEs, or Co-rotating Interaction Regions, CIRs) on planetary plasma environments, at Mars \cite{ramstad2017grl}, Mercury \cite{exner2018pss}, Venus \cite{luhmann2008jgr} and comet 67P/C-G \cite{edberg2016mnras, hajra2018mnras} to cite only a few, the effect of large scale fluctuations in the upstream flow on Earth's magnetosphere \cite{tsurutani1987pss}, and more generally the effect of solar wind turbulence on Earth's magnetosphere and ionosphere \cite{damicis2020fass, guio2021fass}. Similarly, plasma physicists have developed comprehensive knowledge of plasma waves and plasma turbulence in Earth's magnetosheath, which presents relatively high particle densities and electromagnetic field strengths favourable for space instrumentation, in a region more easily accessible to space probes than regions of unaffected solar wind \cite{borovsky2003jgr, rakhmanova2021fass}. More recently, the same community took an interest in various planetary magnetospheres, characterising plasma turbulence at various locations and for various parameters (\cite{saur2021fass} and references therein).
Various numerical codes have been used for the global simulation of the interaction between a laminar solar wind and solar system bodies, using MHD \cite{gombosi2004cse}, hybrid \cite{bagdonat2002jcp}, or fully kinetic \cite{markidis2010mcs} solvers. Similarly, solar wind turbulence in the absence of an obstacle has also been simulated using similar MHD \cite{boldyrev2011apj}, hybrid \cite{franci2015apj}, and fully kinetic \cite{valentini2007jcp} solvers. In this context, we identify the lack of a numerical approach for the study of the interaction between a turbulent plasma flow (such as the solar wind) and an obstacle (such as a magnetosphere, either intrinsic or induced). Such a tool would provide the first global picture of these complex interactions. By shedding new light on the long-lasting dilemma between intrinsic phenomena and phenomena originating from the upstream flow, it would allow invaluable comparisons between self-consistent, global, numerical results, and the wealth of observational results provided by the various past, current and future exploratory space missions in our solar system.\\
The main points of interest and main questions motivating such a model can be organised as such:
\begin{itemize}
\item Macroscopic effects of turbulence on the obstacle
\begin{itemize}
\item shape and position of the plasma boundaries (e.g. bow shock, magnetopause),
\item large scale magnetic reconnection,
\item atmospheric escape,
\item dynamical evolution of the magnetosphere.
\end{itemize}
\item Microscopic physics and instabilities within the interaction region, induced by upstream turbulence
\begin{itemize}
\item energy transport by plasma waves,
\item energy conversion by wave-particle interactions,
\item energy transfers by instabilities.
\end{itemize}
\item The way incoming turbulence is processed by planetary plasma boundaries
\begin{itemize}
\item sudden change of spatial and temporal scales,
\item change of spectral properties,
\item existence of a memory of turbulence downstream magnetospheric boundaries.
\end{itemize}
\end{itemize}
Indirectly, because of the high numerical resolution required to properly simulate plasma turbulence, this numerical experiment will provide an exploration of the various obstacles with the same high resolution in both turbulent and laminar runs, a resolution that has rarely been reached in planetary simulations, except for Earth's magnetosphere.\\
\emph{Menura}, the new code presented in this publication, splits the numerical modelling of the interaction into two steps. Step 1 is a decaying turbulence simulation, in which electromagnetic energy initially injected at the large spatial scales of the simulation box cascades towards smaller scales. Step 2 uses the output of Step 1 to introduce an obstacle moving through this turbulent solar wind.
The code is written in \texttt{c++} and uses \texttt{CUDA} APIs for running its solver exclusively on Graphics Processing Units (GPUs). Section \ref{sec:solver} introduces the solver, which is tested against classical plasma phenomena in Section \ref{sec:physical_tests}. Sections \ref{sec:step1} and \ref{sec:step2} tackle the first and second step of the new numerical modelling approach, illustrating the decaying turbulence phase, and introducing the algorithm for combining the output of Step 1 together with the modelling of an obstacle (Step 2). Section \ref{sec:result} presents the first global result of \emph{Menura}, providing a glimpse of the potential of this numerical approach, and introducing the forthcoming studies.\\
\emph{Menura}'s source code is open source, available under the GNU General Public License.
\section{The solver}
\label{sec:solver}
In order to (i) achieve global simulations of the interactions while (ii) modelling the plasma kinetic behaviour, given the computational capabilities currently available, a hybrid Particle-In-Cell (PIC) solver has been chosen for \emph{Menura}. This well-established type of model resolves the Vlasov equation for the ions by discretising the ion distribution function as macro-particles characterised by discrete positions in phase space, and treats the electrons as a fluid, with characteristics evaluated at the nodes\footnote{Only the discrete nodes of the grid are considered in the solver, though the term ``cell'' is equivalently used by other authors.} of a grid, together with ion moments and electromagnetic fields. The fundamental computational steps of a hybrid PIC solver are:
\begin{itemize}
\item Particles' position advancement, or ``push''.
\item Particles' moments mapping, or ``gathering'': density, current, eventually higher order, as required by the chosen Ohm's law.
\item Electromagnetic field advancement, using either an ideal, resistive or generalised Ohm's law and Faraday's law.
\item Particles' velocity advancement, or ``push''.
\end{itemize}
These steps are summarised in Figure \ref{fig:algo}. Details about these classical principles can be found in \cite{tskhakaya2008} and references therein. The bottleneck of PIC solvers is the treatment of the particles, especially the velocity advancement and the moment computation (namely density and current). The simulation of plasma turbulence in particular requires a large number of macro-particles per grid node. We therefore want to minimise both the number of operations performed on the particles and the number of particles itself. A popular method which minimises the number of computational passes through all particles is the Current Advance Method (CAM) \cite{matthwes1994jcp}, for instance used for the hybrid modelling of turbulence by \cite{franci2015apj}. Figure \ref{fig:algo} presents \emph{Menura}'s solver algorithm, built around the CAM, similar to the implementation of \cite{bagdonat2002jcp}. In this scheme, only four passes through all particles are performed: one position and one velocity push, and two particle moment mappings. The second moment mapping in Figure \ref{fig:algo}, i.e. step 2, also produces the two pseudo-moments $\Lambda$ and $\Gamma$ used to advance the current as:
\begin{align}
\Lambda & = \sum_{p} \frac{q^2}{m} W(\mathbf{r}_{n+1}) , \\
\Gamma & = \sum_{p} \frac{q^2}{m} \mathbf{v}_{n+1/2} W(\mathbf{r}_{n+1}) , \\
J_{n+1} & = J_{n+1/2} + \frac{\Delta t}{2} (\Lambda \mathbf{E}^* + \Gamma \times \mathbf{B}) ,
\end{align}
with $\mathbf{E}^*$ the estimated electric field after the magnetic field advancement of step 4. $W(\mathbf{r}_{n+1})$ is the shape function, which attributes a different weight to each node surrounding the macro-particle \cite{tskhakaya2008}.
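As a one-dimensional illustration (ours; \emph{Menura} itself uses the order-two triangular shape function described below, on GPUs), the gathering of the pseudo-moments in step 2 can be sketched with linear weights:
\begin{verbatim}
import numpy as np

def gather_cam_1d(x, v, q, m, dx, n_nodes):
    # deposit Lambda = sum q^2/m W and Gamma = sum q^2/m v W on the grid
    Lam = np.zeros(n_nodes)
    Gam = np.zeros(n_nodes)
    for xp, vp in zip(x, v):
        i = int(np.floor(xp / dx))    # index of the left-hand node
        w = xp / dx - i               # linear (order-one) weight
        for node, weight in ((i % n_nodes, 1.0 - w),
                             ((i + 1) % n_nodes, w)):
            Lam[node] += q * q / m * weight
            Gam[node] += q * q / m * vp * weight
    return Lam, Gam
\end{verbatim}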
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/algo_cam.pdf}
\caption{Algorithm of \emph{Menura}'s solver, with its main operations numbered from 0 to 8, as organised in the \texttt{main} file of the code. $\mathbf{r}$ and $\mathbf{v}$ are the position and velocity vectors of the macro particles. Together with the magnetic field $\mathbf{B}$, they are the only variables necessary for the time advancement. The electric field $\mathbf{E}$, the current $\mathbf{J}$, the charge density $\rho$, as well as the CAM pseudo-moments $\Lambda$ and $\Gamma$, are obtained from $\mathbf{r}$, $\mathbf{v}$ and $\mathbf{B}$.}
\label{fig:algo}
\end{figure}
Central finite differences for evaluating derivatives and second order interpolations are used throughout the solver. The grid covering the physical simulation domain has an additional 2-node wide band, the guard or ghost nodes, allowing derivatives to be solved using (central) finite differences at the very edge of the physical domain. For periodic boundary conditions, as used along all directions during Step 1 of the simulation, the values at the opposite edge of the physical domain are copied into the guard nodes. Other boundary conditions will be discussed later when introduced.
The mapping of the particle moments is done using an order-two, triangular shape function: one macro-particle contributes to 9 grid nodes in 2D space (respectively 27 in 3D space), using 9 (respectively 27) different weights. The interpolation of the field values from the nodes to the macro-particles' positions uses the exact same weights, with 9 (respectively 27) neighbouring nodes contributing to the field values at a particle position.
As illustrated in Figure \ref{fig:algo}, the position and velocity advancements are done at interleaved times, similarly to a classical second-order leap-frog scheme. However, since the positions of the particles are needed to evaluate their acceleration, the CAM scheme is not strictly speaking a leap-frog integration scheme. Another difference in this implementation is that velocities are advanced using the Boris method \cite{boris1970relativistic}.\\
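For reference, the Boris push is the textbook half-kick/rotation/half-kick scheme; the following sketch (ours, for a single particle, not \emph{Menura} source code) shows the standard non-relativistic form:
\begin{verbatim}
import numpy as np

def boris_push(v, E, B, q, m, dt):
    qmdt2 = 0.5 * q * dt / m
    v_minus = v + qmdt2 * E                  # first half electric kick
    t = qmdt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
    return v_plus + qmdt2 * E                # second half electric kick
\end{verbatim}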
Ohm's law is at the heart of the hybrid modelling of plasmas. \emph{Menura} uses the following form of the law, here given in SI units. In this formulation, the electron inertia is neglected, and the quasi-neutral approximation $n\sim n_i \sim n_e$ is used \cite{valentini2007jcp}. Additionally, neglecting the time derivative of the electric field in the Amp\`ere-Maxwell law (Darwin's hypothesis), one gets the total current from the curl of the magnetic field. This formulation highlights the need for only three variables to be followed through time, namely the magnetic field and the particle positions and velocities, while all other variables can be reconstructed from these three.
\begin{equation}\label{eq:ohm}
\mathbf{E} = -\mathbf{u_i}\times\mathbf{B} + \frac{1}{e\, n \mu_0} \mathbf{J} \times \mathbf{B} - \frac{1}{e\, n} \nabla p_e - \eta_h \nabla^2 \mathbf{J}
\end{equation}
Faraday's law is used for advancing the magnetic field in time:
\begin{equation}
\frac{\partial \mathbf{B}}{\partial t} = -\mathbf{\nabla}\times \mathbf{E}
\end{equation}
The electron pressure is obtained assuming it results from a polytropic process, with an arbitrary index $\kappa$, to be carefully chosen by the user.
\begin{equation}
p_e = p_{e0}\left(\frac{n_e}{n_{e0}}\right)^\kappa
\end{equation}
Since the fields require much less memory than the particle variables, they can be advanced in time using a smaller time step and another leap-frog-like approach, as illustrated in Figure \ref{fig:algo}, step 4 \cite{matthwes1994jcp}.\\
Spurious high-frequency oscillations are an inherent behaviour of finite difference schemes. Two main families of methods are used to filter out these features, the first being an additional step of field smoothing, the second the direct inclusion of a diffusive term in the differential equations of the system, acting as a filter \cite{maron2008apj}. For \emph{Menura}, we have retained the second approach, implementing a hyper-resistivity term in Ohm's law.\\
The stability of hybrid solvers is sensitive to low ion densities. We use a threshold value equal to a few percent of the background density (5\% in the following examples), below which a node is considered as a vacuum node, and only the resistive terms of the generalised Ohm's law of Equation \ref{eq:ohm} are solved, using a higher value of resistivity $\eta_{h\ \text{vacuum}}$ \cite{holmstrom2013}. This way, terms proportional to $1/n$ do not exhibit nonphysical values where the density may get locally very low, due to the thermal noise of the PIC macro-particle discretisation.
All variables in the code are normalised using the background magnetic field amplitude $B_0$ and the background plasma density $n_0$. All variables are then expressed in terms of either these two background values, or equivalently in terms of the proton gyrofrequency $\omega_{ci0}$ and the Alfv\'en velocity $v_{A0}$. We define a normalised variable $\tilde{a}$ as the physical value $a$ divided by its ``background'' value $a_0$:
\begin{equation}
\tilde{a} = \frac{a}{a_0}
\end{equation}
All background values are given in Table \ref{tab:normalisation}, and the normalised equations of the solver are given in Appendix \ref{app:normalised_equations}.
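As a consistency check of this normalisation (a sketch of ours, in SI units), the characteristic values later used in Section \ref{sec:step1} (Table \ref{tab:decay_param}) follow directly from $B_0$ and $n_0$:
\begin{verbatim}
import numpy as np

mu0, e, m_i = 4e-7 * np.pi, 1.602e-19, 1.673e-27   # SI constants
B0, n0 = 2.5e-9, 1e6                               # 2.5 nT, 1 cm^-3

v_A0  = B0 / np.sqrt(mu0 * m_i * n0)   # Alfven speed,    ~ 55 km/s
w_ci0 = e * B0 / m_i                   # gyrofrequency,   ~ 0.24 s^-1
d_i0  = v_A0 / w_ci0                   # inertial length, ~ 228 km
print(v_A0 / 1e3, w_ci0, d_i0 / 1e3)
\end{verbatim}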
\begin{table}[]
\centering
\begin{tabular}{c|c}
$B_0$ & $B_0$ \\
$n_0$ & $n_0$ \\
$v_0$ & $v_{A0}=B_0/\sqrt{\mu_0 m_i n_0}$\\
$\omega_0$ & $\omega_{ci0}=e B_0/m_i$\\
$x_0$ & $d_{i0}=v_{A0}/\omega_{ci0}$ \\
$t_0$ & $1/\omega_{ci0}$\\
$p_0$ & $B_0^2/(2\mu_0)$\\
$m_0$ & $m_i n_0 x_0^3$\\
$q_0$ & $e\, n_0 x_0^3$\\
\end{tabular}
\caption{Normalisation constants used in \emph{Menura}, expressed in terms of the background magnetic field amplitude $B_0$ and density $n_0$.}
\label{tab:normalisation}
\end{table}
\section{Physical tests}
\label{sec:physical_tests}
In this section, the code is tested against well-known, collisionless plasma processes, and their solutions given by the linear full kinetic solver \emph{WHAMP} \cite{ronnmark1982waves}. We first explore MHD scales, simulating Alfv\'enic and magnetosonic modes. We use a 2-dimensional spatial domain with one preferential dimension chosen as $\mathbf{x}$. A sum of six cosine modes in the component of the magnetic field along the $x$-direction is initialised, corresponding to the first six harmonics of this periodic box. The amplitude of these modes is 0.05 times the background magnetic field $B_0$, which is taken either along (Alfv\'en mode) or across (magnetosonic mode) the propagation direction $\mathbf{x}$. Data are recorded along time and along the main spatial dimension $\mathbf{x}$, resulting in the 2D field $B(x, t)$. The 2-dimensional Fourier transform of this field is given in Figure \ref{fig:MHD}. In this ($\omega, k$)-plane, each mode can be identified as a point of higher power, six points for six initial modes. The solutions given by \emph{WHAMP} for the same plasma parameters are shown by the solid lines, and a perfect match is found between the two models. Close to the ion scale $k\, d_i=1$, \emph{WHAMP} and \emph{Menura} display two different branches that originate from the Alfv\'en mode, splitting at higher frequencies into the whistler and the ion cyclotron branches. The magnetosonic modes were also tested using different polytropic indices, resulting in a shift of the dispersion relation along the $\omega$-axis. Changing polytropic indices for both models resulted in the same agreement.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/MHD.pdf}
\caption{MHD modes dispersion relations, as solved by \emph{WHAMP} and \emph{Menura}. $B_0=1.8$ nT, $n_0=1.$ cm$^{-3}$, $T_{i0}=10^4$ K, $T_{e0}=10^5$ K}
\label{fig:MHD}
\end{figure}
With the MHD scales down to ion inertial scales now validated, we explore the ability of the solver to account for further ion kinetic phenomena, first with the classical case of the two-stream instability (also known as the ion-beam instability, given the following configuration). Two Maxwellian ion beams are initialised propagating with opposite velocities along the main dimension $\mathbf{x}$. A velocity separation of $15 v_{th}$ is used in order to excite only one unstable mode. The linear kinetic solver \emph{WHAMP} is used to identify the expected growth rate associated with the linear phase of the instability, before both beams get strongly distorted and mixed in phase space during the nonlinear phase of the instability (not captured by \emph{WHAMP}). During this linear phase, \emph{Menura} produces a growing circularly polarised wave, and the growth of the wave amplitude is shown in Figure \ref{fig:kinetics}. Both growth rates match perfectly.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/kinetics.pdf}
\caption{Left-hand side, growth during the linear phase of the ion-ion two-stream instability; Right-hand side, Landau damping of an ion acoustic mode. Two-stream instability: $B_0=1.8$ nT, $n_0=1.$ cm$^{-3}$, $T_{i0}=10^2$ K, $T_{e0}=10^3$ K. Landau Damping: $B_0=1.8$ nT, $n_0=5.$ cm$^{-3}$, $T_{i0}=1.5\cdot 10^4$ K, $T_{e0}=10^5$ K}
\label{fig:kinetics}
\end{figure}
Finally, we push the capacities of the model to the case of the damping of an ion acoustic wave through Landau resonance. A very high number of macro-particles per grid node is required to resolve this phenomenon, so that enough resonant particles take part in the interaction with the wave. The amplitude of the initial, single acoustic mode is taken as 0.01 times the background density, along the main spatial dimension of the box. The decrease in the density fluctuation through time, spatially averaged, is shown in Figure \ref{fig:kinetics}, with again the corresponding solution from \emph{WHAMP}. A satisfying agreement is found during the first 6 oscillations, before the noise in the hybrid solver output (likely associated with the macro-particle thermal noise) takes over. Admittedly, the number of particles per node necessary to resolve this phenomenon well is not practical for the global simulations which \emph{Menura} (together with all global PIC simulations) aims for.
For the classical tests presented above, spanning MHD and ion kinetic scales, \emph{Menura} agrees with theoretical and linear results. In the next section, the simulation of a decaying turbulent cascade provides one final physical validation of the solver, through all these scales at once.
\section{Step 1: Decaying turbulence}
\label{sec:step1}
We use \emph{Menura} to simulate plasma turbulence using a decaying turbulent cascade approach: at initial time $t=0$, a sum of sine modes with various wave vectors $\mathbf{k}$, spanning the largest spatial scales of the simulation domain, is added to both the homogeneous background magnetic field $\mathbf{B}_0$ and the particle velocities $\mathbf{u}_i$. Particle velocities are initialised according to a Maxwellian distribution, with a thermal speed equal to one Alfv\'en speed. Without any other forcing later on, this initial energy cascades, as time advances, towards lower spatial and temporal scales, while forming vortices and reconnecting current sheets \cite{franci2015apj}. Using such Alfv\'enic perturbations is motivated by the predominantly Alfv\'enic nature of the solar wind turbulence measured at 1 au \cite{bruno2013turbulence}.
In this 2-dimensional set-up, $\mathbf{B}_0$ is taken along the $\mathbf{z}$-direction, perpendicular to the simulated spatial domain $(\mathbf{x}, \mathbf{y})$, whereas all initial perturbations are defined within the simulation plane. Their amplitude is 0.5 $B_0$, while their wave vectors are taken with amplitudes between $k_\text{inj\, min}=0.01\ d_{i0}^{-1}$ and $k_\text{inj\, max}=0.1\ d_{i0}^{-1}$, so energy is only injected at MHD scales, in the inertial range. Because we need these perturbation fields to be periodic along both directions, the $k_x$ and $k_y$ of each mode correspond to harmonics of the simulation box dimensions. Therefore, a finite number of wave vector directions is initialised. Along these constrained directions, each mode in both fields has two different, random phases. The magnetic field is initialised such that it is divergence-free.
For this example, the box is chosen to be 500 $d_{i0}$ wide in both dimensions, subdivided by a grid of 1000$^2$ nodes. The corresponding $\Delta x$ is 0.5 $d_{i0}$, and spatial frequencies are resolved over the range [0.0062, 6.2] $d_{i0}^{-1}$. The time step is 0.05 $\omega_{ci0}^{-1}$. 2000 particles per grid node are initialised with a thermal speed of 1 $v_A$. The temperature is isotropic and a plasma beta of 1 is chosen for both the ion macro-particles and the massless electron fluid.
At time $t=500\ \omega_{ci0}^{-1}$, the perpendicular (in-plane) fluctuations of the magnetic field have reached the state displayed in Figure \ref{fig:decay}, left-hand panel. Vortices and current sheets give a maximum $B_{\perp}/B_0$ of about 1, a result consistent with solar wind turbulence observed at 1 au \cite{bruno2013turbulence}. The omni-directional power spectra of both the in-plane magnetic field fluctuations and the in-plane ion bulk velocity fluctuations are shown in the right-hand panel of the same figure. Omni-directional spectra are computed as follows, with $\hat f$ the (2D) Fourier transform of $f$:
\begin{equation}
P_{f}(k_x, k_y) = |\hat f|^2
\end{equation}
These spectra are not further normalised and are given in arbitrary units. We then compute a binned statistic over this 2-dimensional array to sum up its values within the chosen bins of $k_\perp = |\mathbf{k}|$, which correspond to rings in the $(k_x, k_y)$-plane. The width of the rings is arbitrarily chosen so that the resulting 1-dimensional spectrum is well resolved (not too few bins) and not too noisy (not too many bins).
\begin{equation}
P_{f}(k_\perp) = \sum_{|\mathbf{k}| \in [k_\perp,\ k_\perp+\delta k_\perp]}|\hat f|^2
\end{equation}
For a vector field such as $\mathbf{B}_\perp=(B_x, B_y)$, the spectrum is computed as the sum of the spectra of each field component:
\begin{equation}
P_{\mathbf{B}_\perp}(k_\perp) = P_{B_x}(k_\perp) + P_{B_y}(k_\perp) .
\end{equation}
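The whole procedure can be condensed in a few lines (a simplified sketch of ours; spectra remain in arbitrary units, as in the text):
\begin{verbatim}
import numpy as np

def omni_spectrum(f, dx, n_bins=64):
    # 2D power spectrum, then summation over rings of constant |k|
    power = np.abs(np.fft.fftshift(np.fft.fft2(f)))**2
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(f.shape[0], d=dx))
    ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(f.shape[1], d=dx))
    k = np.hypot(*np.meshgrid(kx, ky, indexing='ij'))
    bins = np.linspace(0.0, k.max(), n_bins + 1)
    P, _ = np.histogram(k, bins=bins, weights=power)
    return 0.5 * (bins[1:] + bins[:-1]), P

# vector field: sum the spectra of the components, e.g.
# P_Bperp = omni_spectrum(Bx, dx)[1] + omni_spectrum(By, dx)[1]
\end{verbatim}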
The perpendicular magnetic and kinetic energy spectra exhibit power laws over the inertial (MHD) range consistent with spectral indices -5/3 and -3/2, respectively, between $k_\text{inj\, max}=0.1\ d_{i0}^{-1}$ and break points around 0.5 $d_{i0}^{-1}$. We recall that a spectral index -5/3 is consistent with the Goldreich-Sridhar strong turbulence phenomenology~\cite{Goldreich&Sridhar1997} that leads to a Kolmogorov-like scaling in the plane perpendicular to the background magnetic field, while a spectral index -3/2 is consistent with the Iroshnikov-Kraichnan scaling~\cite{Kraichnan1965PhFl}. These spectral slopes are themselves consistent with observations of magnetic and kinetic energy spectra associated with solar wind turbulence~\cite{Podestaetal2007,ChapmanHnat2007}. For higher wavenumbers, both spectral slopes get much steeper and, after a transition region within [0.5, 1] $d_{i0}^{-1}$, reach values of about -3.2 and -4.5 at the proton kinetic scales for the perpendicular magnetic and kinetic energies, respectively, consistent with the spectral indices found at sub-ion scales by previous authors~\cite[e.g.,][]{franci2015apj, Sahraouietal2010}.
Additionally, the initial spectra of the magnetic field and bulk velocity perturbations are over-plotted, to show respectively where the energy is injected in spatial frequencies and the level of noise introduced by the finite number of particles per node used.
\begin{table}
\centering
\begin{tabular}{c|c}
$B_0$ & 2.5 nT \\
$n_0$ & 1 cm$^{-3}$ \\
$\omega_{ci0}$ & 0.24 s$^{-1}$ \\
$d_{i0}$ & 228 km \\
$v_{A0}$ & 55 km/s \\
$v_{th i0}$ & 55 km/s \\
$\beta_{i0}=\beta_{e0}$ & 1 \\
$B_{\perp 0}/B_0$ & 0.7 \\
\end{tabular}
\caption{Initial parameters of the decaying turbulence run}
\label{tab:decay_param}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/spectra_technical_2.pdf}
\caption{Decaying turbulence at time $t=500\ \omega_{ci0}^{-1}$: perpendicular (in-plane) magnetic field fluctuations (left) and omni-directional power spectra of the in-plane magnetic field and ion bulk velocity fluctuations (right).}
\label{fig:decay}
\end{figure}
\section{Step 2: Obstacle}
\label{sec:step2}
\emph{Menura} has shown satisfactory results on plasma turbulence, over three orders of magnitude in wavenumbers. We now start the second phase of the simulation, resuming it at $t=500\ \omega_{ci0}^{-1}$, corresponding to the snapshot studied in the previous section. We keep \emph{all} parameters unchanged, but add an obstacle with a relative velocity with regard to the frame used in the first phase, evolving through this developed turbulence. Particles and fields are advanced with the exact same time and spatial resolutions as previously, so the interaction between this obstacle and the already-present turbulence is solved with the same self-consistency as in the first phase, with only one ingredient added: the obstacle.
\subsection{A comet}
This obstacle is chosen here to be an intermediate activity comet, meaning that its neutral outgassing rate is typical of an icy nucleus at a distance of about 2 au from the Sun. Comprehensive knowledge on this particular orbital phase of comets has recently been generated by the European \emph{Rosetta} mission, which orbited its host comet for two years \cite{glassmeier2007rosetta}. The first and foremost interest of such an object for this study is its size, which can be evaluated using the gyroradius of water ions in the solar wind at 2 au. The expected size of the interaction region is about 4 times this gyro-radius \cite{behar2018aa}, and with the characteristic physical parameters of Table~\ref{tab:decay_param}, the estimated size of the interaction region is 480 $d_{i0}$. In other words, the interaction region spans exactly over the range of spatial scales probed during the first phase of the simulation, including MHD and ion kinetic scales.
The second interest of a comet is its relatively simple numerical implementation. Without a solid body, without gravity and without an intrinsic magnetic field, the obstacle is only made of cometary neutral particles being photo-ionised \emph{within} the solar wind. Over the scales of interest for this study, the neutral atmosphere can be modelled by a $1/r^2$ radial density profile, and, considering the coma to be optically thin, ions are injected in the system at a rate following the same profile. This is the Haser model \cite{haser1957distribution}, and simulating a comet over scales of hundreds of $d_{i0}$ only requires injecting cold cometary ions at each time step at the rate
\begin{equation}\label{eq:haser}
q_i(r) = \nu_i \cdot n_0(r) = \frac{\nu_i Q}{4 \pi u_0 r^2} ,
\end{equation}
with $r$ the distance from the comet nucleus of negligible size, $\nu_i$ the ionisation rate of cometary neutral molecules, $n_0$ the neutral cometary density, $Q$ the neutral outgassing rate, $u_0$ the radial expansion speed of the neutral atmosphere.
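The corresponding injection step can be sketched as follows (ours; the macro-particle weighting and the near-nucleus regularisation are simplifications of ours), using the parameters later listed in Table \ref{tab:comet_param}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def n_new_ions(r, nu_i, Q, u0, dV, dt):
    # expected number of new cold cometary ions per cell and time step,
    # drawn with Poisson statistics; r is regularised near the nucleus
    q_i = nu_i * Q / (4.0 * np.pi * u0 * np.maximum(r, 1.0)**2)
    return rng.poisson(q_i * dV * dt)

nu_i, Q, u0 = 2e-7, 5e26, 1e3   # s^-1, s^-1, m/s (SI units)
\end{verbatim}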
\subsection{Reference frame}
The first phase of the simulation, the decaying turbulence phase, was done in the plasma frame, in which the average ion bulk velocity is 0. Classically, planetary plasma simulations are done in the planet reference frame: the obstacle is static and the wind flows through the simulation domain. In this case, a global plasma reference frame is most of the time not defined. In \emph{Menura}, we have implemented the second phase of the simulation -- the interaction phase -- in the exact same frame as the first phase, which then corresponds to the plasma frame of the upstream solar wind, before interaction. In other words, the turbulent solar wind plasma is kept ``static'', and the obstacle is moving through this plasma. The reason motivating this choice is to keep the turbulent solar wind ``pristine'', by continuing its resolution over the exact same grid as in phase one. Another motivation for working in the solar wind reference frame is illustrated in Figure \ref{fig:orf_swrf}, in which we compare the exact same simulation done in each frame, using a laminar upstream flow. While the macroscopic result remains unchanged between the two frames, we find strong small-scale numerical artifacts propagating upstream of the interaction in the comet reference frame, which are absent in the solar wind reference frame. Small-scale oscillations are common in hybrid PIC simulations, and are usually filtered with either resistivity and/or hyper-resistivity, or with an ad-hoc smoothing method. Note that none of these methods are used in the present example. We demonstrate here the role of the reference frame in the production of one type of small-scale oscillations, and ensure that their influence over the spectral content of upstream turbulence is minimised, already without the implemented hyper-resistivity.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/ORF_SWRF.pdf}
\caption{The interaction between a comet and a laminar flow, in the object rest frame (right) and the upstream solar wind reference frame (left). The magnetic field amplitude is shown. }
\label{fig:orf_swrf}
\end{figure}
To summarise, by keeping the same reference frame during Steps 1 and 2, the only effective difference between the two phases is the addition of sunward moving cometary macro-particles.
\subsection{Algorithm}
Since we work in the solar wind reference frame, the obstacle moves within the simulation domain. Eventually, the obstacle would reach the boundaries of the box before steady-state is reached. We therefore need to keep the obstacle close to the centre of the simulation domain. This is done by shifting all particles and fields by $n \Delta x$ every $m$ iterations, $n, m \in \mathbb{N}$, as illustrated in Figure \ref{fig:algo_inj}. Using integers, the shift of the fields is simply a side-way copy of themselves without the need for any interpolation, and the shift of the particles is simply the subtraction of $n \Delta x$ from their $x$-coordinate. Field values as well as particles ending up downstream of the simulation domain are discarded. \\
This leaves only the injection boundary to be dealt with. There, we simply inject a slice of fields and particles picked from the output of Step 1, using the right slice index in order to inject the continuous turbulent solution, as shown in Figure \ref{fig:algo_inj}. These slices are $n \Delta x$ wide.
With \texttt{idx\_it} the index of the iteration, the algorithm illustrated in Figure \ref{fig:algo_inj} is then as follows (a minimal code sketch of the shift-and-inject step is given after the list):
\begin{itemize}
\item Inject cometary ions according to $q_i(r)$ (cf. Eq. \ref{eq:haser})
\item Advance particles and fields (cf. Figure \ref{fig:algo})
\item If \texttt{idx\_it\%m=0}
\begin{itemize}
\item Shift particles and fields of \texttt{-n$\Delta$x}
\item Discard downstream values
\item Inject upstream slice \texttt{idx\_slice} from Step 1 output
\item Increment \texttt{idx\_slice}
\end{itemize}
\item Increment \texttt{idx\_it}
\end{itemize}
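A minimal sketch (ours) of the shift-and-inject step for one scalar field follows; particles are treated analogously by subtracting $n\Delta x$ from their $x$-coordinate and discarding those leaving the domain.
\begin{verbatim}
import numpy as np

def shift_and_inject(field, tank, idx_slice, n):
    # field: (nx, ny) array; tank: Step 1 output of shape (Nx, ny)
    field = np.roll(field, -n, axis=0)   # shift by -n dx; the wrapped
    nx_tank = tank.shape[0]              # downstream values are replaced
    rows = np.arange(idx_slice * n, (idx_slice + 1) * n) % nx_tank
    field[-n:, :] = tank[rows, :]        # inject the upstream slice
    return field, idx_slice + 1          # increment idx_slice
\end{verbatim}
The modulo on the slice index implements the looping over the injection tank used in Section \ref{sec:result}.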
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/algo_injection.pdf}
\caption{Injection algorithm for simulating a moving object within the simulation domain.}
\label{fig:algo_inj}
\end{figure}
This approach has one constraint: we cannot freely choose the relative speed $v_0$ between the wind and the obstacle, which has to be
\begin{equation}
v_0 = \frac{n}{m} \frac{\Delta x}{\Delta t}
\end{equation}
in order for the obstacle to come back to its position every $m$ iterations, and therefore not drift up- or downstream of the simulation domain.
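For instance, with the resolutions of Section \ref{sec:step1} ($\Delta x = 0.5\ d_{i0}$, $\Delta t = 0.05\ \omega_{ci0}^{-1}$), $\Delta x / \Delta t = 10\ v_{A0}$, and one possible integer choice (an inference of ours from the numbers used later) is:
\begin{verbatim}
dx_over_dt = 0.5 / 0.05   # in units of v_A0
n, m = 2, 3               # one possible integer choice
v0 = n / m * dx_over_dt
print(v0)                 # 6.66... (in units of v_A0)
\end{verbatim}
which reproduces the relative speed $v_0 \simeq 6.66\ v_{A0}$ used in Section \ref{sec:result}.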
\subsection{CUDA and MPI implementation, performances}
The computation done by \emph{Menura}'s solver (Figure \ref{fig:algo}) is entirely executed on multiple GPUs (Graphics Processing Units), written in \texttt{c++} in conjunction with the \texttt{CUDA} programming model and the \texttt{MPI} standard, which allows the problem to be split and distributed over multiple cards (i.e., processors). GPUs can run thousands of threads simultaneously, and can therefore tremendously accelerate such applications. The first implementation of a hybrid-PIC model on such devices was done by \cite{fatemi2017jp}. However, their still limited memory (up to 80GB at the time of writing) is a clear constraint for large problems, especially for turbulence simulations, which require a large range of spatial scales \emph{and} a very large number of particles per grid node. The use of multiple cards then becomes unavoidable, and the communication between them is implemented using a CUDA-aware version of MPI, the Message Passing Interface. The division of the simulation domain in the current version of \emph{Menura} is kept very simple, with equal-size rectangular sub-domains distributed along the direction perpendicular to the motion of the obstacle: one sub-domain spans the entire domain along the x-axis with its major dimension, as shown in Figure \ref{fig:algo_inj}. MPI communications are done for particles after each position advancement, and for fields after each solution of Ohm's law and Faraday's law. But since the shift of fields and particles described in the previous section is done purely along the obstacle motion direction, no MPI communication is needed after the shifts, thanks to the orientation of the sub-domains.
Another limitation in using GPUs is the data transfer time between the CPU and the cards. In \emph{Menura}, all variables are initialised on the CPU, and are saved from the CPU. Data transfers are then unavoidable, before starting the main loop, and every time we want to save the current state of the variables. During Step 2 of the simulation, a copy of the outputs of Step 1 is needed, which effectively doubles the memory needed for Step 2. This copy is kept on the CPU (in the \texttt{tank} object) in order to make the most out of the GPUs memory, in turn implying that more CPU-GPU communications are needed for this second step. Every time we inject a slice of fields and particles upstream of the domain, only this amount of data is copied from the CPU to the GPUs, using the \texttt{injector} variable as sketched in Figure \ref{fig:algo_inj}.\\
For Step 1, the decaying turbulence ran 10000 iterations; four NVIDIA V100 GPUs were used with 16GB memory each, corresponding to one complete node of the IDRIS cluster Jean Zay. A total of 2 billion particles (500 million per card) were initialised. The time for the solver on each card reached a bit more than three hours, with a final total cost of about 13 hours of computation time for this simulation, taking into account all four cards, and the variable initialisation and output. Step 2 was executed on larger V100 32GB cards, providing much more room for the addition of 60000 cometary macro-particles per iteration.
During Step 1, 87.3\% of the computation time was spent on moments mapping, i.e. steps 0 and 2 in the algorithm of Figure \ref{fig:algo}, while respectively 2.7\% and 0.8\% were spent on advancing the particles velocity and position. The computation of the Ohm's and Faraday's laws sums up to 0.5\%. 0.9\% was utilised for MPI communications of field variables, while only 0.08\% was dedicated for particles MPI communications, due to the limited particle transport happening in Step 1.
91\% of the total solver computation time is devoted to particle treatment, with 96\% of that part spent on particle moment mapping, which might seem a suspiciously large fraction. We note however that such a simulation is characterised by its large number of particles per node, 2000 in our case. 99.6\% of the total allocated memory is devoted to particles. The time spent to map the particles on the grid is also remarkably larger than the time spent to update their velocity, despite both operations being based on the same interpolation scheme. However, during the mapping of the particle moments, thousands of particles need to \emph{increment} the value at particular memory addresses (corresponding to macro-particle density and flux), whereas during the particle velocity advancement, thousands of particles only need to \emph{read} the values at the same addresses (electric and magnetic field).
\section{First result}
\label{sec:result}
We now focus on the result of Step 2, in which cometary ions were steadily added to the turbulent plasma of Step 1, with the obstacle moving at a super-Alfv\'enic and super-sonic relative speed. Table \ref{tab:comet_param} lists the physical parameters used for Step 2. After 4000 iterations, the total number of cometary macro-particles in the simulation domain reaches a constant average value: the comet is fully developed and has reached an average ``steady'' state. From this point, we simulate more than one full injection period, 1500 iterations after which we loop over the domain of the injection tank in Figure \ref{fig:algo_inj}. As an example, Figure \ref{fig:result} displays the state of the system at iteration 6000, focusing here again on the perpendicular fluctuations of the magnetic field. This time the colour scale is logarithmic, since the magnetic field fluctuations span a much wider range than previously. While being advected through a dense cometary atmosphere, the solar wind magnetic field \emph{piles up} (an increase of its amplitude due to the slowing of the plasma bulk flow) and \emph{drapes} (a deformation of its field lines due to the differential pile-up around the density profile of the coma), as first theorised by \cite{alfven1957theory}. This general result was always applied to the global, average magnetic field, and was observed in situ at the various comets visited by space probes.
Without diving very deep into the first results of \emph{Menura}, we see that the pile-up and the draping of upstream perpendicular magnetic field fluctuations also have an important impact on the tail of the comet, with the creation of large amplitude magnetic field vortices of medium and small size. This phenomenon, together with a deeper exploration of the impact of solar wind turbulence on the physics of a comet, is addressed in a subsequent publication.
\begin{table}[]
\centering
\begin{tabular}{c|c}
$v_0$ & 363 km/s (6.66 $v_{A0}$) \\
$Q$ & $5\cdot 10^{26}$ s$^{-1}$\\
$\nu_i$ & $2\cdot 10^{-7}$ s$^{-1}$\\
$u_0$ & 1 km/s\\
\end{tabular}
\caption{Physical parameters of the comet.}
\label{tab:comet_param}
\end{table}
\begin{figure}
\centering
\includegraphics[width=1.\textwidth]{figures/B_perp_comet.pdf}
\caption{Perpendicular magnetic fluctuations during the interaction.}
\label{fig:result}
\end{figure}
\section{Conclusion}
This publication introduces a new hybrid PIC plasma solver, \emph{Menura}, which allows for the first time the investigation of the impact of a turbulent plasma flow on an obstacle. For this purpose, a new 2-step simulation approach has been developed, which consists of, first, developing a turbulent plasma, and second, injecting it periodically in a box containing an obstacle. The model has been validated with respect to well-known fluid and kinetic plasma phenomena. \emph{Menura} has also proven to provide the results expected at the output of the first step of the model -- namely decaying magnetised plasma turbulence.
Until now, planetary science oriented simulations have altogether ignored the turbulent nature of the solar wind plasma flow, in terms of structures and in terms of energy. \emph{Menura} has been designed to fill this gap, and it will now allow us to explore, for the first time, some fundamental questions that have remained open regarding the impact of the solar wind on different solar system objects, such as: what happens to turbulent magnetic field structures when they impact an obstacle such as a magnetosphere? How are the properties of a turbulent plasma flow reset as it crosses a shock, such as the solar wind crossing a planetary bow shock? How does the additional energy, stored in the perpendicular magnetic and velocity field components, impact the large-scale structures and dynamics of planetary magnetospheres?
On top of the study of the interaction between the turbulent solar wind and solar system obstacles, we are confident that the new modelling framework developed in this work, in particular its 2-step approach, may also be suitable for the study of energetic solar wind phenomena, namely Co-rotating Interaction Regions and Coronal Mass Ejections, which could similarly be simulated first in the absence of an obstacle, and then used as inputs of a second step including obstacles.
\section{Acknowledgements}
E. Behar acknowledges support from Swedish National Research Council, Grant 2019-06289. This research was conducted using computational resources provided by the Swedish National Infrastructure for Computing (SNIC), Project SNIC 2020/5-290 and SNIC 2021/22-24 at the High Performance Computing Center North (HPC2N), Umeå University, Sweden. This work was granted access to the HPC resources of IDRIS under the allocation 2021-AP010412309 made by GENCI. Work at LPC2E and Lagrange was partly funded by CNES.
\input{01-introduction}
\section{Related Work}
\input{02-related-work}
\section{Annotation Study}
\input{03-annotation-study}
\section{Methodology}
\input{04-methodology}
\section{Experiments}
\input{05-experiments}
\section{Conclusion}
\input{06-conclusion}
\section*{Acknowledgment}
This work was supported by the Program for Leading Graduate Schools, "Graduate Program for Embodiment Informatics" of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan and JST ACCEL (JPMJAC1602).
Computational resource of AI Bridging Cloud Infrastructure (ABCI) provided by National Institute of Advanced Industrial Science and Technology (AIST) was used.
\bibliographystyle{splncs04}
\subsection{Implementation Details}
We conduct our experiments using
BERT's implementation from the Hugging Face library \cite{Wolf2019HuggingFacesTS}.
In all fine-tuning procedures, we use the Adam optimizer \cite{kingma2014adam} with a learning rate of $10^{-6}$, train in batches of size $32$, and apply dropout at a rate of $0.2$ and a gradient clipping threshold of $5$. We train the model for 1 epoch. To extract text from collected PDF versions of papers, we rely on the Science Parse library\footnote{https://github.com/allenai/science-parse}.
Explicit inline references to figures are identified via the keywords ``Figure'' or ``Fig.''. To extract figure captions, we employ the image-based approach from \cite{Siegel2018}.
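A simplified sketch (ours) of this pairing heuristic is given below; the actual pipeline operates on the Science Parse output, and the helper names are hypothetical.
\begin{verbatim}
import re

FIG_RE = re.compile(r'\b(?:Figure|Fig\.)\s*(\d+)')

def paragraph_figure_pairs(paragraphs, captions):
    # captions: dict mapping figure number -> caption text
    pairs = []
    for par in paragraphs:
        for num in {int(n) for n in FIG_RE.findall(par)}:
            if num in captions:
                pairs.append((par, captions[num]))
    return pairs
\end{verbatim}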
\subsection{Experimental Setting}
\noindent
\textbf{PubMed.} In \cite{Yang2019Identifying}, $7,295$ biomedical and life science papers from PubMed are annotated for central figure identification. We managed to obtain the PDFs from PubMed for $7,113$ of those papers. We used the training, validation, and test splits provided by Yang et al. Using our figure mention heuristic, we created $40k$ paragraph-figure pairs from the training portion of the dataset.
Following \cite{Yang2019Identifying}, we use the top-1 and top-3 accuracy as evaluation metrics.
\vspace{1em}
\noindent
\textbf{Computer Science (CS).} We additionally evaluate our model on our new CS dataset (Section \ref{sec:data}). Unlike the PubMed dataset, in which only a single figure is annotated as central, our annotators ranked three figures for each CS paper. Consequently, we use Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR) as our evaluation metrics on the CS dataset.
For the training, we collect papers from the same subdomains as the annotated test data, from between $2015$ and $2018$. Here we also obtain around $40k$ paragraph-figure instances for model training.
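For clarity, both metrics can be sketched as follows for a single paper, with MAP and MRR obtained by averaging over all test papers (a sketch of ours; \texttt{relevant} is the set of annotator-ranked figures):
\begin{verbatim}
def average_precision(ranking, relevant):
    hits, score = 0, 0.0
    for i, fig in enumerate(ranking, start=1):
        if fig in relevant:
            hits += 1
            score += hits / i
    return score / max(len(relevant), 1)

def reciprocal_rank(ranking, relevant):
    for i, fig in enumerate(ranking, start=1):
        if fig in relevant:
            return 1.0 / i
    return 0.0
\end{verbatim}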
\subsection{Experiments}
\noindent
\textbf{Performance on the PubMed dataset.} We first evaluate our self-supervised approach using the PubMed dataset (Table \ref{tab:pubmed_result}).
We follow \cite{Yang2019Identifying} and make use of two baselines: random and `select first image'. For comparison, we also provide the results of two methods from \cite{Yang2019Identifying}, a text-only model that uses cosine similarity of TF-IDF between the abstract and the figure caption as the input feature, and a full model that takes the figure type label (e.g., diagram, plot) and layout (e.g., section index, figure order) as inputs, as well as text features.
We compare these against the performance of our models based on three different pretrained Transformers, namely vanilla BERT \cite{devlin-etal-2019-bert}, RoBERTa \cite{liu2019roberta} and SciBERT \cite{beltagy-etal-2019-scibert}.
\setlength{\tabcolsep}{16pt}
\begin{table}[t]
\centering
\caption{Performance evaluation on the PubMed dataset \cite{Yang2019Identifying}.}
\label{tab:pubmed_result}
\begin{tabularx}{\linewidth}{llcc}
\toprule
Method & Model & Accuracy@1 & Accuracy@3 \\ \hline
\multirow{2}{*}{Baseline}&Random & 0.280 & 0.701 \\
&Pick first & 0.301 & 0.733 \\\hline
\multirow{2}{*}{Yang et al. \cite{Yang2019Identifying}}&Text-only & 0.333 & \textbf{0.810} \\
&Full & 0.344 & 0.793 \\ \hline
\multirow{3}{*}{Ours}&Vanilla BERT & 0.331 & 0.770 \\
&RoBERTa & 0.347 & 0.741 \\
&SciBERT & \textbf{0.383} & 0.787 \\
\bottomrule
\end{tabularx}
\end{table}
Regardless of the text encoder, our approach outperforms the baselines in terms of both top-1 and top-3 accuracy.
This result indicates that our method for training data generation is effective for central figure identification.
Among the text encoders, SciBERT performs the best for both metrics, arguably because it has been trained on a corpus of scientific papers, thus minimizing problems related to domain transfer.
Despite not requiring manual annotation for training, our approach with SciBERT also outperforms the supervised approach of Yang et al. \cite{Yang2019Identifying} in terms of top-1 accuracy.
\vspace{1em}
\noindent
\textbf{Performance on the CS dataset.} We also evaluate the performance of the model on our CS dataset (Table \ref{tab:cs_result}). We follow the same setting as for the PubMed data and use the random and `pick first' methods as baselines.
Here, we compare SciBERT with vanilla BERT and RoBERTa, so as to additionally verify the effectiveness of SciBERT in the CS domain -- since over $80\%$ of the papers in SciBERT's pretraining corpus come from the biomedical domain, while CS papers account for only $18\%$ \cite{beltagy-etal-2019-scibert}.
\setlength{\tabcolsep}{24pt}
\begin{table}[t]
\centering
\caption{Performance evaluation on CS papers.}
\label{tab:cs_result}
\begin{tabularx}{\linewidth}{llcc}
\toprule
Method & Model & MAP & MRR \\ \hline
\multirow{2}{*}{Baseline}&Random & 0.616 & 0.693 \\
&Pick first & 0.754 & 0.827 \\\hline
\multirow{3}{*}{Ours}&Vanilla BERT & 0.694 & 0.773 \\
&RoBERTa & 0.702 & 0.793 \\
&SciBERT & 0.731 & 0.822 \\
\bottomrule
\end{tabularx}
\end{table}
Our approach outperforms the random baseline both in terms of MAP and MRR.
Among the base models, SciBERT outperforms vanilla BERT and RoBERTa, as for the PubMed papers.
Though most of SciBERT's pretraining corpus is from the biomedical domain, the CS papers seen during pretraining still contribute to the downstream performance on central figure identification.
However, as opposed to the case of PubMed papers, the `pick first' baseline here is much stronger and harder to beat, even for our Transformer-based approach.
This result indicates that CS papers tend to place a Graphical Abstract (GA) at the beginning, and empirically highlights that our new dataset is more challenging than the PubMed-based dataset.
\vspace{1em}
\noindent
\textbf{Cross-domain Experiments.} Image usage in scientific publications is known to differ across scientific fields \cite{Lee2018}. Accordingly, we next set out to empirically evaluate the robustness of our approach in a domain transfer setup. Due to the different granularity of our PubMed and CS datasets, the latter including papers from four different research areas of computer science (AI, NLP, ML, and CV), we are able to perform two sets of domain transfer experiments, namely biomedical vs.\ computer science as well as across different CS subdomains.
\setlength{\tabcolsep}{6pt}
\begin{table}[t]
\centering
\caption{Performance comparison of training on different research fields.}
\label{tab:field}
\begin{minipage}[t]{.45\textwidth}
\centering
(a) Test on PubMed papers.
\begin{tabular}{c|cc}
\toprule
Training data & ACC@1 & ACC@3 \\ \hline
-- (Random b.) & 0.280 & 0.701 \\ \hline
PubMed & 0.383 & 0.787 \\
CS & 0.368 & 0.777 \\
\bottomrule
\end{tabular}
\\
\end{minipage}
\begin{minipage}[t]{.45\textwidth}
\centering
(b) Test on CS papers.
\\
\begin{tabular}{c|cc}
\toprule
Training data & MAP & MRR \\ \hline
-- (Random b.) & 0.616 & 0.693 \\ \hline
PubMed & 0.728 & 0.822 \\
CS & 0.731 & 0.822 \\
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
We first compare model performance by training and testing on datasets from different domains -- i.e., biomedical papers from PubMed vs.\ computer science publications from our CS dataset -- using SciBERT as a base model (Table \ref{tab:field}).
When testing on PubMed papers, training on the same domain performs better in terms of both top-1 and top-3 accuracy.
Despite the slightly lower performance, training on the CS domain also outperforms the random baseline.
On the other hand, when testing on CS papers, training on the out-of-domain PubMed data, somewhat surprisingly, hardly degrades the performance.
This would imply that papers from different domains exhibit similar text-figure (caption) matching properties.
\begin{table*}[t]
\centering
\caption{Performance comparison of the model trained on papers from different research topics in CS.}
\label{tab:topic}
(a) MAP
\\
\begin{tabular}{c|cccc|cc}
\toprule
\multirow{2}{*}{Test data} & \multicolumn{4}{c|}{Training data} & \multicolumn{2}{c}{Baseline} \\ \cline{2-7}
& NLP & CV & AI & ML & Random &Pick first \\ \hline
NLP & 0.727 & 0.728 & 0.728 & 0.730 & 0.631 &0.751 \\
CV & 0.716 & 0.721 & 0.716 & 0.719 & 0.585 &0.758 \\
AI & 0.727 & 0.729 & 0.728 & 0.730 & 0.637 &0.776 \\
ML & 0.676 & 0.682 & 0.679 & 0.681 & 0.617 &0.732 \\
\bottomrule
\end{tabular}
\\
\vspace{5pt}
(b) MRR
\\
\begin{tabular}{c|cccc|cc}
\toprule
\multirow{2}{*}{Test data} & \multicolumn{4}{c|}{Training data} & \multicolumn{2}{c}{Baseline} \\ \cline{2-7}
& NLP & CV & AI & ML & Random &Pick first \\ \hline
NLP & 0.791 & 0.795 & 0.795 & 0.798 & 0.705 &0.816 \\
CV & 0.826 & 0.833 & 0.826 & 0.831 & 0.664 &0.831 \\
AI & 0.828 & 0.834 & 0.830 & 0.828 & 0.711 &0.847 \\
ML & 0.759 & 0.769 & 0.763 & 0.769 & 0.686 &0.814 \\
\bottomrule
\end{tabular}
\end{table*}
Next, we examine domain transfer across the CS subdomains, i.e., across papers belonging to different areas of computer science, since image usage and volume may vary among fields such as natural language processing and computer vision, with papers from the latter typically containing more images.
We train models on four different areas (NLP, CV, AI, and ML) and test them on all others. Domain comparison within several research topics in CS is summarized in Table \ref{tab:topic}.
Overall, the results are rather consistent across areas and indicate that, within computer science, the research topics of papers do not affect the model performance.
Among the four topics, the performance is the lowest on machine learning (ML) papers.
One possible explanation is that many ML papers tend to describe the research from a more theoretical perspective, which does not call for a Graphical Abstract. The `pick first' baseline scores higher for CV papers than for NLP and ML papers, whereas the random baseline naturally performs worse on CV papers, which contain more figures.
\vspace{1em}
\noindent
\textbf{Model Analysis.} To understand the model behavior, we visualize and analyze the attention patterns in the SciBERT-based model.
We find that most attention maps are consistent with typical classes reported in \cite{kovaleva-etal-2019-revealing}, such as vertical or diagonal attention patterns. In some attention heads, the model attends to the lexical overlap between abstract and caption.
Examples of attention matrices produced by heads attending over the same or semantically similar tokens are shown in Figure \ref{fig:attention}.
In this example, instances of tokens like `tracking' and `when' appearing in both the abstract and the caption have mutually high attention weights. Additionally, pairs of tokens with similar meaning like `restore' and `recovering' also receive high mutual attention weights.
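A minimal sketch of how such attention maps can be extracted with the HuggingFace \texttt{transformers} library is given below; the input strings are placeholders, and the exact preprocessing in our pipeline may differ:
\begin{verbatim}
import torch
from transformers import AutoModel, AutoTokenizer

name = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True)

# Abstract and caption are encoded as a single sentence pair.
inputs = tokenizer("abstract text ...", "figure caption ...",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    out = model(**inputs)

# out.attentions: one (batch, heads, seq, seq) tensor per layer.
attn = torch.stack(out.attentions)  # (layers, batch, heads, seq, seq)
print(attn.shape)
\end{verbatim}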
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/attention.eps}
\caption{Examples of attention heads in SciBERT attending to semantically similar tokens (training data is from the CS domain).}
\label{fig:attention}
\end{figure}
We also compare attention patterns among models trained on different topics (NLP, CV, AI, and ML).
Following \cite{kovaleva-etal-2019-revealing}, we calculate the cosine similarity of attention maps.
We show the mean cosine similarity of the flattened attention maps for $100$ randomly selected samples in Table \ref{tab:attention}.
Cosine similarity is high for all combinations; this means that the attention patterns across the different CS domains are virtually identical, confirming empirically our previous assumption that there are no relevant differences between CS domains when it comes to text-figure matching (see Table \ref{tab:topic}).
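Concretely, the similarities in Table \ref{tab:attention} can be computed along these lines (a sketch; random tensors stand in for the attention maps of two models on the same input):
\begin{verbatim}
import numpy as np

def attention_similarity(attn_a, attn_b):
    # Cosine similarity of two equally-shaped attention tensors,
    # flattened across all layers and heads into a single vector.
    va, vb = attn_a.ravel(), attn_b.ravel()
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

rng = np.random.default_rng(0)
sims = [attention_similarity(rng.random((12, 8, 64, 64)),
                             rng.random((12, 8, 64, 64)))
        for _ in range(100)]
print(np.mean(sims))  # mean similarity over the sampled pairs
\end{verbatim}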
\begin{table}[t]
\centering
\caption{Cosine similarities of attention weight maps obtained from models trained for different CS domains. Attention maps from all layers are flattened into a single vector.}
\begin{tabular}{c|ccc}
\toprule
& NLP & CV & AI \\ \hline
CV & 0.9997 & 0.9998 & 0.9998 \\
AI & 0.9998 & 0.9998 & - \\
ML & 0.9997 & - & - \\
\bottomrule
\end{tabular}
\label{tab:attention}
\end{table}
Kovaleva et al. \cite{kovaleva-etal-2019-revealing} reported that, after fine-tuning, attention maps change the most in the last two transformer layers. We therefore analyze the change in attention patterns after our task-specific fine-tuning.
We compare the standard fine-tuning, in which we update all SciBERT's parameters (and which we used in all our previous experiments), and the feature-based training, in which we freeze SciBERT's parameters and train only the regressor's parameters. The comparison of attention patterns between fine-tuned and frozen SciBERT is summarized in Table \ref{tab:finetuning}.
On the one hand, if we freeze SciBERT's parameters, we observe a major drop in performance (6 MAP points). On the other hand, the high cosine similarity of attention maps between the fine-tuned and frozen SciBERT indicates that the two transformers still exhibit similar attention patterns.
This suggests that only slight changes in the parameters of Transformer's attention heads have the potential to substantially change the predictions of the regressor.
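The feature-based variant corresponds to freezing the encoder and training only the regression head, roughly as follows (the linear regressor is a stand-in for our scoring head):
\begin{verbatim}
import torch
from transformers import AutoModel

encoder = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
regressor = torch.nn.Linear(encoder.config.hidden_size, 1)

# Feature-based training: keep SciBERT fixed, learn only the regressor.
for p in encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(regressor.parameters(), lr=1e-3)
\end{verbatim}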
\begin{table}[t]
\centering
\caption{Comparison between fine-tuned and frozen SciBERT trained on CS papers.}
\centering
(a) Model performance. \\
\begin{tabular}{c|cc}
\toprule
SciBERT & MAP & MRR \\ \hline
fine-tune & 0.733 & 0.827 \\
freeze & 0.677 & 0.752 \\
\bottomrule
\end{tabular}
\vspace{5pt}
\raggedright
(b) Cosine similarity of attention map for randomly sampled 100 sentence-caption pairs in each layer.\\
\centering
\begin{tabular}{c|cccccc}
\toprule
Layer & 1 & 2 & 3 & 4 & 5 & 6 \\
Similarity & 0.9999 & 0.9993 &0.9983 &0.9982 &0.9983 &0.9980 \\ \hline
Layer& 7 & 8 & 9 & 10 & 11 & 12 \\
Similarity&0.9977 &0.9971 &0.9964 &0.9951 &0.9948 &0.9945 \\
\bottomrule
\end{tabular}
\label{tab:finetuning}
\end{table}
\section{Introduction}
\label{sec:Introduction}
The present work is part of an
ongoing project \cite{Dunne2016:NewPerspective,Dunne2017:OnTheStructure} to
marry conceptually the monoidal approach to quantum
theory initiated by Abramsky and Coecke
\cite{AbramskyCoecke2004:CategoricalSemantics},
and the topos approach
to quantum theory initiated by Butterfield, Doering, and Isham
\cite{Doering2008:WhatIsAThing,IshamButterfield1998:AToposPerspective}.
Both of these
approaches to
quantum theory are algebraic, in that they seek to represent some aspect of
physical reality with algebraic structures. By taking the concept of
a ``physical observable'' as a fixed point of reference we cast the difference
between these
approaches
as internal vs. external algebraic perspectives; that is, algebras
\emph{internal} to a monoidal category $\ensuremath{\mathcal{A}}\xspace$ vs.
representations of algebras on $\ensuremath{\mathcal{A}}\xspace$, a construction \emph{external} to
$\ensuremath{\mathcal{A}}\xspace$. The topos approach to quantum theory
considers
representations of commutative algebraic structures (for example
$C^*$--algebras, or von Neumann algebras
\cite{Conway2000:ACourseInOperatorTheory}) on $\ensuremath{\textnormal{\textbf{Hilb}}}$. The topos approach makes
essential use of the fact that the sets $\ensuremath{\textnormal{\text{Hom}}}(H,H)$ in $\ensuremath{\textnormal{\textbf{Hilb}}}$ carry the
structure of a $C^*$--algebra. In
\cite{Dunne2016:NewPerspective} we
showed that the categories considered in the monoidal approach have a similarly
rich algebraic structure on their sets of endomorphisms $\ensuremath{\textnormal{\text{Hom}}}(A,A)$, thus
allowing
one to take the ``external perspective'' for any such $\ensuremath{\mathcal{A}}\xspace$, and not just
$\ensuremath{\textnormal{\textbf{Hilb}}}$.
There are various incarnations of the topos approach to quantum theory, here we
follow a construction introduced in
\cite{Doering2008:WhatIsAThing}, which is developed in
\cite{Flori2013:Topos}. For
a fixed Hilbert space $H$ one takes $\ensuremath{\textnormal{\textbf{Hilb}}}\ensuremath{\textnormal{-\textbf{Alg}}}(H)$ to be the poset of
commutative
$C^*$--subalgebras of $\ensuremath{\textnormal{\text{Hom}}}(H,H)$ considered as a category, and
$\ensuremath{\textnormal{\textbf{Hilb}}}\ensuremath{\Alg_{\textnormal{vN}}}(H)$ its subcategory whose objects are the commutative von
Neumann $C^*$--subalgebras of $\ensuremath{\textnormal{\text{Hom}}}(H,H)$. We will briefly
discuss a physical interpretation for this definition. Physical experiments have
made clear that quantum mechanical
systems are faithfully
represented by non--commutative $C^*$--algebras of the form
$\ensuremath{\textnormal{\text{Hom}}}(H,H)$. What nature does not make clear however is
how to
interpret this algebraic structure. According to Bohr's
interpretation of quantum theory \cite{Bohr1949:DiscussionWithEinstein},
although physical reality is by its nature quantum, as classical beings
conducting
experiments in our labs we only have
access to the ``classical parts'' of a quantum system. Much of
classical physics can be reduced to the study of commutative algebras; this
approach is carefully constructed and motivated in
\cite{Nestruev2003:SmoothManifoldsAndObservables} where the following picture is
given:
\begin{align*}
\text{Physics lab}& \qquad\qquad\to&&\text{Commutative unital} \\
& &&\mathbb{R}\text{--algebra } A \\
\text{Measuring device}& \qquad\qquad\to&&\text{Element of the algebra } A\\
\text{State of the observed}& \qquad\qquad\to&& \text{Homomorphism of unital}
\\
\text{physical system}& &&\mathbb{R} \text{--algebras } h:A \to \mathbb{R}\\
\text{Output of the}& \qquad\qquad\to&& \text{Value of this function }
h(a), \\
\text{ measuring device}& && a \in A
\end{align*}
\begin{center}Figure 1: Algebraic interpretation of classical physics
\end{center}
In \cite{Nestruev2003:SmoothManifoldsAndObservables} the author stresses that
the choice of ground ring is somewhat unimportant to this construction and
interpretation, however $\mathbb{R}$ is a reasonable choice given that in
classical physics most of the quantities we want to measure, length, energy,
time, etc., can be represented by real numbers. In quantum mechanics one
traditionally takes scalar values in $\mathbb{C}$, but one can take any ring,
or, as we will see, a semiring in its place and the physical
interpretation of Figure 1. remains valid.
According to Bohr's interpretation of quantum theory, a quantum system
represented by a non--commutative algebra $\ensuremath{\textnormal{\text{Hom}}}(H,H)$, can only be
understood in terms of its
classical components, that is, the commutative subalgebras of
$\ensuremath{\textnormal{\text{Hom}}}(H,H)$; in particular, we can only make observations on the classical
subsystems. Hence the category $\ensuremath{\textnormal{\textbf{Hilb}}}\ensuremath{\Alg_{\textnormal{vN}}}(H)$ is a collection of observable
subsystems of a physical system, and we consider all of these subsystems at once
by considering the category of
presheaves $[\ensuremath{\textnormal{\textbf{Hilb}}}\ensuremath{\Alg_{\textnormal{vN}}}(H)^{\ensuremath{^{\mathrm{op}}}\xspace}, \,\ensuremath{\textnormal{\textbf{Set}}}]$, which is a topos. One presheaf of
particular significance is the presheaf which is characterised by the
\emph{Gelfand
spectrum}.
Recall the Gelfand spectrum of a commutative $C^*$--algebra $\ensuremath{\textnormal{\textbf{A}}}$ is
characterised by the set
$\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}}) = \{ \ \rho:\ensuremath{\textnormal{\textbf{A}}} \to \mathbb{C} \ | \ \rho \, \text{ a } \
C^*\text{--algebra homomorphism } \}
$ of \emph{characters}
which defines a functor
\[
\begin{tikzpicture}
\node(A) at (0,0) {$\ensuremath{\textnormal{\textbf{Hilb}}}\ensuremath{\Alg_{\textnormal{vN}}}(H)^{\ensuremath{^{\mathrm{op}}}\xspace}$};
\node(B) at (3,0) {$\ensuremath{\textnormal{\textbf{Set}}}$};
\draw[->](A) to node [above]{$\ensuremath{\Spec_\textnormal{\text{G}}}$}(B);
\end{tikzpicture}
\]
with the action on morphisms given by restriction. By Figure 1.
we interpret
this functor as assigning to each classical subsystem its set of possible
states.
\begin{remark}\label{rem:DifferentSpectra}
The \emph{prime spectrum} $\ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$ of a commutative $C^*$--algebra
$\ensuremath{\textnormal{\textbf{A}}}$ is defined to be the set of prime ideals of $\ensuremath{\textnormal{\textbf{A}}}$, and is naturally
isomorphic to the Gelfand spectrum. The correspondence comes from
the fact that an ideal $J \subset \ensuremath{\textnormal{\textbf{A}}}$ is prime if and only if it is the
kernel of a character $\rho: \ensuremath{\textnormal{\textbf{A}}} \to \mathbb{C}$. The prime spectrum is also
equivalent to
the \emph{maximal spectrum}, taken to be the collection of maximal ideals.
\end{remark}
In a presheaf category one can generalise the notion of elements of a set by
considering the morphisms out of the terminal object. The terminal object $T:
\ensuremath{\mathcal{C}}\xspace^{\ensuremath{^{\mathrm{op}}}\xspace}\to \ensuremath{\textnormal{\textbf{Set}}}$ in a presheaf category sends all objects to the singleton
set $\{*\}$ and
all morphisms to the identity $\id{}:\{* \} \to \{*\}$. A \emph{global section}
(or \emph{global element}) of a presheaf $F:
\ensuremath{\mathcal{C}}\xspace^{\ensuremath{^{\mathrm{op}}}\xspace} \to \ensuremath{\textnormal{\textbf{Set}}}$ is a natural
transformation $\chi: T \rightarrow F$.
The Kochen--Specker
theorem \cite{KochenSpecker1975:LogicalStructures} asserts the
\emph{contextual} nature of quantum theory. The principle of
\emph{non--contextuality} is that the outcome of a measurement should not depend
on the \emph{context} in which that measurement is performed, that is, it should
not depend on which other measurements are made simultaneously. Classical
physics is typically formulated as non--contextual \cite[Chap.
4]{Isham1995:LecturesOnQuantumTheory}. The Kochen--Specker theorem
states that it is a feature of quantum
theory that one can find collections of measurements for which the outcomes are
context dependent. For a mathematical
treatment of this theorem see \cite[Chap.
9]{Isham1995:LecturesOnQuantumTheory}.
The following
theorem was first shown in \cite{Doering2008:WhatIsAThing} but here we
present it as in \cite[Corollary 9.1]{Flori2013:Topos}.
\begin{theorem}\label{thm:IshamKS}
The Kochen-Specker theorem is equivalent to the statement that for a Hilbert
space $H$ with $\dim (H)\geq 3$, the presheaf
$\ensuremath{\Spec_\textnormal{\text{G}}}:\ensuremath{\textnormal{\textbf{Hilb}}}\ensuremath{\Alg_{\textnormal{vN}}}(H)^{\ensuremath{^{\mathrm{op}}}\xspace}\to \ensuremath{\textnormal{\textbf{Set}}}$
has no global sections.
\end{theorem}
The monoidal approach to quantum theory of Abramsky and Coecke
\cite{AbramskyCoecke2004:CategoricalSemantics} is an entirely separate approach
to quantum theory using very different mathematical structures. This approach
begins with identifying the essential
properties of the category $\ensuremath{\textnormal{\textbf{Hilb}}}$ which one needs to
formulate concepts from quantum theory.
\begin{definition}
A $\ensuremath{\dag}$\emph{--category} consists of a category $\ensuremath{\mathcal{A}}\xspace$ together with an
identity on objects functor
$\ensuremath{\dag}: \ensuremath{\mathcal{A}}\xspace^{\ensuremath{^{\mathrm{op}}}\xspace}
\to \ensuremath{\mathcal{A}}\xspace$ satisfying $\ensuremath{\dag} \circ \ensuremath{\dag} = \id{\ensuremath{\mathcal{A}}\xspace}$. A
$\ensuremath{\dag}$\emph{--symmetric monoidal category} consists of a symmetric monoidal
category $(\ensuremath{\mathcal{A}}\xspace,\otimes,I)$ which is a $\ensuremath{\dag}$--category such that $\ensuremath{\dag}$ is a
strict
monoidal functor and all of the
symmetric monoidal structure isomorphisms satisfy $\lambda^{-1} =
\lambda^{\ensuremath{\dag}}$.
\end{definition}
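For orientation, a standard example beyond $\ensuremath{\textnormal{\textbf{Hilb}}}$ (recalled here, not specific to this work): the category $\ensuremath{\textnormal{\textbf{Rel}}}$ of sets and relations, with monoidal product the cartesian product and $\ensuremath{\dag}$ given by relational converse,
\[
R^{\ensuremath{\dag}} \;=\; \{ \ (y,x) \ | \ (x,y) \in R \ \},
\]
is a $\ensuremath{\dag}$--symmetric monoidal category.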
\begin{definition}
A category $\ensuremath{\mathcal{A}}\xspace$ is said to have \emph{finite biproducts} if it has a zero
object $0$, and if for all objects $X_1$ and $X_2$ there exists an object $X_1
\oplus X_2$ which is both the coproduct and the product of $X_1$ and $X_2$.
If $\ensuremath{\mathcal{A}}\xspace$ is a $\ensuremath{\dag}$--category and has finite biproducts such
that the
coprojections $\kappa_i : X_i \to X_1 \oplus X_2$ and projections $\pi_i :
X_1 \oplus X_2 \to X_i$ are related by $\kappa_i^{\ensuremath{\dag}} = \pi_i$, then we say
$\ensuremath{\mathcal{A}}\xspace$ has \emph{finite} $\ensuremath{\dag}$\emph{--biproducts}.
\end{definition}
In a category with a zero object $0$, for every pair of objects $X$ and $Y$
we call the unique map $X \to 0 \to Y$ the \emph{zero--morphism}, which we
denote $0_{X,Y}: X \to Y$, or simply $0:X \to Y$.
For a category with finite biproducts each hom-set $\ensuremath{\textnormal{\text{Hom}}}(X,Y)$ is
equipped with a commutative monoid operation
\cite[Lemma 18.3]{Mitchell1965:TheoryOfCategories} called \emph{biproduct
convolution}, where for
$f,g:X \to Y$, we define $f+g:X \to Y$ by the composition
\begin{equation*}\label{eq:Enrich}
\begin{tikzpicture}[baseline=(current bounding box.center)]
\node(A) at (0,0) {$X$};
\node(B) at (1.9,0) {$X\oplus X$};
\node(C) at (4.1,0) {$Y\oplus Y$};
\node(D) at (6,0) {$Y$};
\draw[->](A) to node [above]{$\Delta$}(B);
\draw[->](B) to node [above]{$f \oplus g$}(C);
\draw[->](C) to node [above]{$\nabla$}(D);
\end{tikzpicture}
\end{equation*}
where the monoid unit is given by the zero--morphism $0_{X,Y}:X\to Y$.
Categories with finite $\ensuremath{\dag}$--biproducts admit a matrix
calculus \cite[Chap. I. Sect. 17.]{Mitchell1965:TheoryOfCategories}
characterised as follows. For
$X= \bigoplus\limits_{j=1}^{n}
X_j$ and $Y= \bigoplus\limits_{i=1}^{m} Y_i$ a morphism $f:X \to Y$ is
determined
completely by the morphisms $f_{i,j} : X_i \to Y_j$, and
morphism composition is given by matrix multiplication. If $f$ has matrix
representation $f_{i,j}$ then $f^\ensuremath{\dag}$ has matrix representation $f_{j,i}^\ensuremath{\dag}$.
The category $\ensuremath{\textnormal{\textbf{Hilb}}}$ is the archetypal example of a category with these
properties.
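To spell this matrix calculus out in a small case (our notation), for $f: X_1 \oplus X_2 \to Y_1 \oplus Y_2$ the matrix entries are $f_{i,j} = \pi_j \circ f \circ \kappa_i : X_i \to Y_j$, and for a further $g: Y_1 \oplus Y_2 \to Z_1 \oplus Z_2$ composition unfolds as
\[
(g \circ f)_{i,k} \;=\; \sum_{j=1}^{2} g_{j,k} \circ f_{i,j},
\]
where the sum is biproduct convolution; in $\ensuremath{\textnormal{\textbf{Hilb}}}$ this is exactly block--matrix multiplication of bounded linear maps.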
A notion of ``observable'' in quantum theory can be axiomatised in terms
of the monoidal
structure of the category of Hilbert spaces by Frobenius algebras
\cite{CoeckeEtAl2008:NewDescriptionOrthogonal} or $H^*$--algebras
\cite{AbramskyHeunen2012:HAlgebras}, and hence we can consider any
$\ensuremath{\dag}$--symmetric
monoidal category as a categorical model for a toy theory of
observables. For example, Spekkens Toy Theory \cite{Spekkens2007:Epistemic}
is a
toy physical theory
exhibiting some quantum--like properties but which is given by a local hidden
variable model. This theory can be modelled in the category of sets and
relations $\ensuremath{\textnormal{\textbf{Rel}}}$ using Frobenius algebras to represent observables
\cite{CoeckeEdwards2008:ToyQuantum}.
The monoidal approach provides a general framework in which a broad class of
physical theories
can be compared in a high--level but mathematically rigorous way. This is
useful
for exploring interdependencies of quantum or quantum--like phenomena, for
example the many notions of non--locality and contextuality. In particular, in
\cite{GogiosoZeng2015:MerminNonlocality} an abstract notion of
Mermin--locality is formulated in the language of Frobenius algebras, and the
category of finite sets and relations $\ensuremath{\textnormal{\textbf{FRel}}}$ is shown to be Mermin--local.
In this work we present a completely abstract notion of Kochen--Specker
contextuality and we show that categories of quantale--valued relations do not
exhibit this form of contextuality. This is done using the abstract
Gelfand spectrum introduced in \cite{Dunne2016:NewPerspective}. In order to
prove this non--contextuality result we define
the \emph{prime spectrum}
which we relate to the physical interpretation of Figure 1. by examining the
topological structure which these spectra carry.
\section{The Spectrum and Kochen--Specker Contextuality}
\label{sec:StateSpaceTopos}
In this section we review a construction introduced
in \cite{Dunne2016:NewPerspective}, and we introduce an abstract
definition of
Kochen--Specker contextuality. This is done using the language of semirings,
semimodules \cite{Golan1992:TheoryOfSemirings} and semialgebras.
\begin{definition}\label{def:Semiring}
A \emph{semiring} $(R,\cdot,1,+,0)$ consists of a set $R$ equipped with a
commutative
monoid operation $+: R \times R \to R$ with unit
$0\in R$, and a monoid
operation $\cdot : R \times R \to R$, with unit $1\in R$, such that $\cdot$
distributes over $+$ and such that $s\cdot 0 = 0 \cdot s =0$ for all $s\in R$.
A semiring is called \emph{commutative} if $\cdot$ is commutative. A
\emph{$*$-semiring}, or \emph{involutive semiring} is one equipped
with an operation $*: R \to R$ which is an involution, a homomorphism for
$(R,+,0)$, and satisfies $(s \cdot t)^* = t^* \cdot s^*$ and $1^* = 1$.
\end{definition}
As the
notation
suggests we will refer to the monoid operations of a semiring
as \emph{addition} and \emph{multiplication} respectively. We say that a
semiring $R$ is \emph{zero--divisor free (ZDF)} if for all
$s,t \in R$ we have $s\cdot t = 0 $ implies $s=0 $ or $t=0$.
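For concreteness, two standard examples (recalled here): the natural numbers $(\mathbb{N},\cdot,1,+,0)$ form a commutative ZDF semiring which is not a ring, as no non--zero element has an additive inverse; and the Boolean semiring $\{0,1\}$, with multiplication $\wedge$ and addition $\vee$, is a commutative ZDF $*$--semiring under the trivial involution.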
Many concepts associated with rings can be lifted directly to the level of
semirings in the obvious way, for example homomorphisms, kernels and direct
sums. However, some concepts have to be treated with care when generalising to
semirings, for example \emph{ideals}.
\begin{definition}
Let $R$ be a commutative semiring. A subset $ J\subset R$ is
called an \emph{ideal} if it contains $0$, is closed under addition, and
such that for all
$s \in R$ and $a \in J$, $as \in J$. An ideal is called \emph{prime} if $st
\in J$
implies $s \in J$ or $t \in J$. A $k$--\emph{ideal} is an ideal $J$ such that
if $a \in J$ and $a+b \in J$ then $b \in J$. A $k^*$--ideal of a $*$--semiring
is a $k$--ideal closed under involutions.
\end{definition}
The $k$--ideals are to semirings what normal subgroups are to groups;
they are the ideals which one can quotient by. For any ring considered as
a semiring the ideals and $k$--ideals coincide.
\begin{definition}
Let $(R,\cdot ,1, +,0)$ be a commutative semiring, an
$R$--\emph{semimodule} consists
of a commutative
monoid $+_M :M\times M \to M$, with unit $0_M$, together with a \emph{scalar
multiplication}
$\bullet:
R \times M \to M$ such that for all $r,s \in R$ and $m,n \in M$:
\begin{multicols}{2}
\begin{enumerate}
\item $s \bullet (m +_M n) = s\bullet m +_M s \bullet n$ ;
\item $(r\cdot s) \bullet m = r\bullet ( s \bullet m)$ ;
\item $(r+s) \bullet m = (r \bullet m) +_M (s \bullet m) $;
\item $0 \bullet m = s \bullet 0_M = 0_M$;
\item $1 \bullet m = m$.
\end{enumerate}
\end{multicols}
\end{definition}
\begin{definition}
An \emph{$R$--semialgebra} $(M,\cdot_M,1_M,+_M,0_M)$ consists of an
$R$-semimodule $(M,+_M,0_M)$
equipped with a monoid operation $\cdot_M:M \times M \to M$, with unit $1_M$,
such that
$(M,\cdot_M,1_M,+_M,0_M)$ forms a semiring, and where scalar multiplication
obeys
$s
\bullet (m \cdot_M n) = (s
\bullet m )\cdot_M n =
m \cdot_M (s \bullet n)$. An $R$--semialgebra is called
\emph{commutative} if $\cdot_M$ is commutative.
\end{definition}
The ideals and $k$--ideals of a semialgebra are defined in the obvious way.
Notice that every semiring $R$ is an $R$-semialgebra, where the scalar
multiplication by $R$ is taken to be the usual multiplication in $R$. Non--zero
elements $s,t$ of a semialgebra are \emph{orthogonal} if $s\cdot t = 0$. A
\emph{subunital idempotent} in a semialgebra is an idempotent element $p$ such
that there is an orthogonal idempotent $q$ where $p+q = 1_M$. A
\emph{primitive subunital idempotent} is a subunital idempotent $p$ such that
there exist no non--trivial subunital idempotents $s$ and $t$ with $s+t =
p$.
\begin{definition}\label{def:DaggerSemialgebra}
Let $R$ be a $*$--semiring. An \emph{$R^*$--semialgebra}
consists of an $R$--semialgebra $(M,\cdot_M,1_M,+_M,0_M)$, such that $M$ and
$R$ have compatible involutions, that is, one that satisfies $(s \bullet
m)^{*}=s^{*} \bullet m^{*}$.
\end{definition}
A \emph{unital subsemialgebra} $i: N \hookrightarrow M$
of $M$ is a subset $N$ containing $0_M$ and $1_M$ closed under all the
algebraic
operations. A \emph{subsemialgebra} $N\subset M$ is a subset $N$ containing
$0_M$ which is closed
under multiplication and which is a semialgebra in its own right, though may
have a different unit from $M$.
A
\emph{(unital) $*$--subsemialgebra} of a $*$--semialgebra is a
(unital) subsemialgebra closed
under taking involutions.
An $R$--semialgebra is said to be \emph{indecomposable} if it cannot be
expressed as a
non--trivial direct sum of $R$--semialgebras. An $R$--semialgebra is
\emph{completely
decomposable} if it is isomorphic to the direct sum of its indecomposable
subsemialgebras.
The following two results are shown in detail in
\cite{Dunne2016:NewPerspective}.
\begin{theorem}
For a locally small $\ensuremath{\dag}$--symmetric monoidal category $(\ensuremath{\mathcal{A}}\xspace, \otimes, I)$
with finite $\ensuremath{\dag}$--biproducts
the
set $S=\ensuremath{\textnormal{\text{Hom}}}(I,I)$ is a commutative $*$--semiring.
\end{theorem}
Biproduct convolution gives us the additive operation on $S$ while
morphism
composition gives us the multiplicative operation, and the functor $\ensuremath{\dag}$
provides the involution. It is shown in
\cite{KellyLaplaza1980:CoherenceForCompact} that this multiplicative operation
is commutative.
\begin{theorem}
Let $(\ensuremath{\mathcal{A}}\xspace, \otimes, I)$ be a locally small $\ensuremath{\dag}$--symmetric monoidal
category and let $S=\ensuremath{\textnormal{\text{Hom}}}(I,I)$. For
any pair of objects the set $\ensuremath{\textnormal{\text{Hom}}}(X,Y)$ is an $S$--semimodule, and
$\ensuremath{\textnormal{\text{Hom}}}(X,X)$ is a $S^*$--semialgebra.
\end{theorem}
Addition on the set $\ensuremath{\textnormal{\text{Hom}}}(X,Y)$ is given by biproduct convolution. For a
morphism $f:X \to Y$ scalar multiplication $s \bullet f$ for $s:I \to I$ is
defined
\[
\begin{tikzpicture}
\node(A) at (0,0) {$X$};
\node(B) at (1.8,0) {$X\otimes I$};
\node(C) at (4.2,0) {$Y \otimes I$};
\node(D) at (6,0) {$Y$};
\draw[->] (A) to node [above]{$\sim$}(B);
\draw[->] (B) to node [above]{$f \otimes s$}(C);
\draw[->] (C) to node [above]{$\sim$}(D);
\end{tikzpicture}
\]
which gives a semimodule structure \cite{Heunen2008:SemimoduleEnrichment}.
For $\ensuremath{\textnormal{\text{Hom}}}(X,X)$ multiplication is given by morphism composition and the
functor $\ensuremath{\dag}$ provides the involution.
\begin{definition}
For $(\ensuremath{\mathcal{A}}\xspace, \otimes, I)$ a locally small $\ensuremath{\dag}$--symmetric monoidal category
and
$X$ and object, we define the category $\ensuremath{\mathcal{A}}\xspace\ensuremath{\textnormal{-\textbf{Alg}}}(X)$ to be the category with
objects commutative unital $S^*$--subsemialgebras of $\ensuremath{\textnormal{\text{Hom}}}(X,X)$, and arrows
inclusion
of unital subsemialgebras.
\end{definition}
For any subset $B\subset \ensuremath{\textnormal{\text{Hom}}}(X,X)$ the set $B' = \{\ f:X \to X \ | \
f\circ g= g \circ f \text{ for all } g\in B \ \}$ is called the
\emph{commutant} of $B$ \cite[Sect.
12]{Conway2000:ACourseInOperatorTheory}. We
define the full subcategory of
\emph{von Neumann $S^*$--subsemialgebras}
\[
\begin{tikzpicture}
\node(A) at (0,0) {$\ensuremath{\mathcal{A}}\xspace\ensuremath{\Alg_{\textnormal{vN}}}(X)$};
\node(B) at (3,0) {$\ensuremath{\mathcal{A}}\xspace\ensuremath{\textnormal{-\textbf{Alg}}}(X)$};
\draw[right hook->](A) to node [above]{}(B);
\end{tikzpicture}
\]
to have objects those $S^*$--subsemialgebras $\ensuremath{\textnormal{\textbf{A}}}$ which satisfy $\ensuremath{\textnormal{\textbf{A}}} =
\ensuremath{\textnormal{\textbf{A}}}''$.
\begin{example}
If we take $(\ensuremath{\mathcal{A}}\xspace, \otimes,I)$ to be $(\ensuremath{\textnormal{\textbf{Hilb}}},\otimes, I)$ then the category
$\ensuremath{\textnormal{\textbf{Hilb}}}\ensuremath{\Alg_{\textnormal{vN}}}(H)$ is precisely the category considered in the topos approach
\cite{Doering2008:WhatIsAThing,Flori2013:Topos}.
\end{example}
\begin{remark}\label{rem:FrobeniusLifts}
In \cite{Dunne2017:OnTheStructure} we showed that any special commutative
unital
Frobenius algebra, and any (possibly non--unital) commutative $H^*$--algebra
$\mu:A \otimes A \to A$ in
$\ensuremath{\mathcal{A}}\xspace$ generates an object
$\ensuremath{\textnormal{\textbf{A}}}$ in $\ensuremath{\mathcal{A}}\xspace\ensuremath{\Alg_{\textnormal{vN}}}(A)$. Furthermore, there is a natural
correspondence between
the set--like elements of the internal algebra and the Gelfand spectrum of the
semialgebra it generates. Hence the notion of observable in the
monoidal
approach naturally lifts to the notion of observable
in our
generalised topos approach.
\end{remark}
We can
generalise the
spectrum of a commutative $C^*$--algebra to an $S^*$--semialgebra
\cite{Dunne2016:NewPerspective}.
\begin{definition}
Let $(\ensuremath{\mathcal{A}}\xspace, \otimes, I)$ be a locally small $\ensuremath{\dag}$--symmetric monoidal category
with finite
$\ensuremath{\dag}$--biproducts, and $X$ an object. The \emph{Gelfand spectrum}
for $X$ is the presheaf
\[
\begin{tikzpicture}
\node(A) at (0,0) {$\ensuremath{\mathcal{A}}\xspace\ensuremath{\Alg_{\textnormal{vN}}}(X)^{\ensuremath{^{\mathrm{op}}}\xspace}$};
\node(B) at (3,0) {$\ensuremath{\textnormal{\textbf{Set}}}$};
\draw[->](A) to node [above]{$\ensuremath{\Spec_\textnormal{\text{G}}}$}(B);
\end{tikzpicture}
\]
defined on objects
$
\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}}) = \{ \ \rho:\ensuremath{\textnormal{\textbf{A}}} \to S \ | \ \rho \text{ an }
S^*\text{--semialgebra homomorphism } \}
$ to be the set of
\emph{characters}
while the action on morphisms is defined in the obvious way by restriction.
\end{definition}
\begin{definition}
Let $(\ensuremath{\mathcal{A}}\xspace,\otimes, I)$ be a locally small $\ensuremath{\dag}$--symmetric monoidal category
with finite
$\ensuremath{\dag}$--biproducts, and $X$ an object. The \emph{prime spectrum}
for $X$ is the presheaf
\[
\begin{tikzpicture}
\node(A) at (0,0) {$\ensuremath{\mathcal{A}}\xspace\ensuremath{\Alg_{\textnormal{vN}}}(X)^{\ensuremath{^{\mathrm{op}}}\xspace}$};
\node(B) at (3,0) {$\ensuremath{\textnormal{\textbf{Set}}}$};
\draw[->](A) to node [above]{$\ensuremath{\Spec_\textnormal{\text{P}}}$}(B);
\end{tikzpicture}
\]
defined on objects $\ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}}) = \{ \ J \subset \ensuremath{\textnormal{\textbf{A}}} \ | \ J \text{ a prime
$k^*$--ideal}
\}$
while for $i:\ensuremath{\textnormal{\textbf{A}}}
\hookrightarrow \ensuremath{\textnormal{\textbf{B}}}$ the action on morphisms is given by $\tilde{i}:
\ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{B}}}) \to
\ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$
is defined $\tilde{i}(K) = \{ \ x \in
\ensuremath{\textnormal{\textbf{A}}} \ | \ i(x) \in K \ \}$.
\end{definition}
To see that $\tilde{i}(K)$ is a prime $k^*$--ideal one can see the proof of a
similar
statement \cite[Proposition 6.13]{Golan1992:TheoryOfSemirings}.
\begin{remark}
One can also define a functor which assigns to each $\ensuremath{\textnormal{\textbf{A}}}$ the collection of
all prime ideals, not just the prime $k^*$--ideals
\cite[Chap. 6]{Golan1992:TheoryOfSemirings}, although for the purposes of this
work $k^*$--ideals are a more natural choice. One can define the maximal
spectrum
for an
arbitrary semialgebra or semiring, although this fails to be functorial in
general \cite[Chap. 2, Sect. 5]{Smith2014:AlgebraicGeometry}.
\end{remark}
We have already discussed that for $\ensuremath{\textnormal{\textbf{Hilb}}}$ the prime spectrum and Gelfand
spectrum coincide. In \cite{Dunne2016:NewPerspective} we
showed that the same is true for the category of sets and relations $\ensuremath{\textnormal{\textbf{Rel}}}$,
although we will
see in Example \ref{example:1} that this
is not the case in general.
The Gelfand spectrum presheaf formulation of the
Kochen--Specker theorem justifies the
following definition.
\begin{definition}\label{def:KScontextual}
Let $\ensuremath{\mathcal{A}}\xspace$ be a locally small $\ensuremath{\dag}$--symmetric monoidal category with finite
$\ensuremath{\dag}$--biproducts. An object $X$ in $\ensuremath{\mathcal{A}}\xspace$ is said to be \emph{Kochen--Specker
contextual} if the presheaf $\ensuremath{\Spec_\textnormal{\text{G}}}$ on $\ensuremath{\mathcal{A}}\xspace\ensuremath{\Alg_{\textnormal{vN}}}(X)$
has no global sections. We say $X$ is \emph{Kochen--Specker non--contextual}
if $\ensuremath{\Spec_\textnormal{\text{G}}}$
does admit a global section.
\end{definition}
Such a global section, if it exists, will pick out an element
$\chi_\ensuremath{\textnormal{\textbf{A}}} : \{*\}\to \ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}})$ from each spectrum, i.e. according to
Figure 1. it would specify a state
from
each ``classical subsystem'' $\ensuremath{\textnormal{\textbf{A}}}$. Naturality ensures that these choices of
states are consistent with measurement outcomes irrespective of which subsystem
-- that is, which ``context'' -- the measurement
appears in.
There are more general formulations of contextuality and non--locality using
the
language
of sheaves and presheaves
\cite{AbramskyBrandenburger2011:UnifiedSheafTheoretic,
AbramskyEtAl2015:ContextualityCohomology}.
Future work will show how the
categories $\ensuremath{\mathcal{A}}\xspace\ensuremath{\Alg_{\textnormal{vN}}}(X)$ naturally generate \emph{empirical
models} which can be examined within the framework of
\cite{AbramskyEtAl2015:ContextualityCohomology}, and how the contextual nature
of those empirical models is related to the existence of global sections for
the
corresponding Gelfand spectrum. This connection, together with Remark
\ref{rem:FrobeniusLifts} will give us a means of applying the techniques of
\cite{AbramskyEtAl2015:ContextualityCohomology}, for example sheaf cohomology,
to the Frobenius algebras in an
arbitrary $\ensuremath{\mathcal{A}}\xspace$.
\section{Quantale--Valued Relations}
\label{sec:Quantales}
We now turn our attention to a class of categories for which we will prove a
non--contextuality result, namely quantale--valued relations over a fixed
quantale $Q$. A standard reference for quantales is
\cite{Rosenthal1990:Quantales}.
\begin{definition}
A \emph{quantale} $(Q, \bigvee, \cdot, 1_Q)$ is a
complete join--semilattice $(Q,\bigvee)$ equipped with a monoid
operation $\cdot : Q\times Q \to Q$ with unit $1_Q$ such that for any $x \in Q$
and
$P \subset Q$
\[
x \cdot (\bigvee\limits_{y \in P} y) = \bigvee\limits_{y \in P}(x \cdot
y)\qquad \text{and}\qquad (\bigvee\limits_{y \in P} y)\cdot x =
\bigvee\limits_{y \in P}(y \cdot x)
\]
An \emph{involutive quantale} is one equipped with an involution
map $*:Q \to Q$ which is a semilattice homomorphism which is an involution
$(x^*)^* = x$ satisfying $(x\cdot y)^* = y^*\cdot x^*$ and $1_Q^* = 1_Q$.
A \emph{commutative quantale} is one for which the monoid operation is
commutative. A \emph{subquantale} is a subset of $Q$ closed under all joins and
the
monoid operation and containing $1_Q$.
\end{definition}
We are primarily interested in involutive commutative quantales, but note
every
commutative quantale can be equipped with the trivial involution. A
quantale has a least element $\bot$, defined to be the join of the
empty set, and this is an absorbing element, i.e. for all $x \in Q$ we
have $x \cdot \bot = \bot$. We assume all quantales are non--trivial, that
is, $\bot
\not=\top$, where $\top = \bigvee\limits_{x \in Q} x$.
\begin{remark}\label{rem:QuantalIsSemiring}
An involutive quantale $Q$ is a $*$--semiring with addition given by the join
and multiplication
given by the monoid operation. The bottom element
$\bot$ is the zero element of the semiring and will hence be denoted $0$. We
say a quantale is
\emph{zero--divisor free} if it is zero--divisor free as a semiring.
\end{remark}
\begin{definition}
For a commutative involutive quantale $Q$, the category of
\emph{quantale--valued
relations} $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}$ has sets as objects and morphisms $f:X \to Y$ consist of
functions $f: X\times Y \to Q$. For $f:X \to Y$ and $g :Y \to Z$ composition
is defined where $g\circ f: X \times Z \to Q$ by $g\circ f(x,z) =
\bigvee\limits_{y \in Y} f(x,y)\cdot g(y,z)$.
We say that a morphism $f:X \to Y$ in $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}$
\emph{relates} $x\in X$ to $y \in Y$ if $f(x,y) \not=0$.
\end{definition}
The category $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}$ is a
$\ensuremath{\dag}$--symmetric monoidal category with $\ensuremath{\dag}$--biproducts. The monoidal
product is given by the cartesian product, with unit the
one element set, the biproduct is given by
disjoint union, and the dagger is given by reordering and pointwise application
of the involution $f^\ensuremath{\dag}(y,x) = f(x,y)^*$.
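For intuition, the following minimal computational sketch (ours; finite sets, $Q=[0,1]$ with join given by $\sup$ and the usual multiplication) realises composition and the dagger as operations on matrices over $[0,1]$:
\begin{verbatim}
import numpy as np

def compose(f, g):
    # (g o f)(x, z) = sup_y f(x, y) * g(y, z);
    # f is an |X| x |Y| matrix, g a |Y| x |Z| matrix over [0, 1].
    return np.max(f[:, :, None] * g[None, :, :], axis=1)

def dagger(f):
    # f^dag(y, x) = f(x, y)^*; the involution on [0, 1] is trivial.
    return f.T

f = np.array([[0.5, 0.0], [0.2, 1.0]])
g = np.array([[1.0, 0.3], [0.0, 0.4]])
h = compose(f, g)
assert np.allclose(dagger(h), compose(dagger(g), dagger(f)))
\end{verbatim}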
\begin{example}
Any complete Heyting algebra or Boolean algebra is a quantale. In particular
the two--element Boolean algebra $\textbf{2}= \{0,1\}$, where the corresponding
category $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{\textbf{2}}}$ is the usual category of sets and relations $\ensuremath{\textnormal{\textbf{Rel}}}$. The
intervals $[0,1]$ and $[0,\infty]$ are quantales when equipped with the usual
multiplication, and where $\bigvee S = \sup S$.
\end{example}
We now turn our attention to the category $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$ for a set $X$.
Clearly the scalars $\ensuremath{\textnormal{\text{Hom}}}(I,I)\cong Q$, and for each set $X$ the
$Q$--semialgebra $\ensuremath{\textnormal{\text{Hom}}}(X,X)$ is a quantale, with the join given pointwise
and multiplication given by morphism composition.
\begin{lemma}\label{thm:IsSubquantale}
Each $\ensuremath{\textnormal{\textbf{A}}}$ in $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$ is a subquantale of $\ensuremath{\textnormal{\text{Hom}}}(X,X)$.
\proof{ By definition $\ensuremath{\textnormal{\textbf{A}}}$ is a subsemiring; we need to show that $\ensuremath{\textnormal{\textbf{A}}}$ is
closed under arbitrary joins. Let $B \subset \ensuremath{\textnormal{\textbf{A}}}$ be any subset, we need to
show that $\bigvee\limits_{x
\in B} x \in \ensuremath{\textnormal{\textbf{A}}}$. Let $g \in \ensuremath{\textnormal{\textbf{A}}}'$ then for all $x \in B$ we have $g\cdot
x= x
\cdot g$. So we have $g \cdot \bigl( \bigvee\limits_{x \in B} x \bigr)=
\bigvee\limits_{x \in B} (g \cdot x) = \bigvee\limits_{x \in B} (x \cdot g) =
\bigl( \bigvee\limits_{x \in B} x \bigr) \cdot g$ and hence $\bigvee\limits_{x
\in B} x \in \ensuremath{\textnormal{\textbf{A}}}''$, and since $\ensuremath{\textnormal{\textbf{A}}}$ is von Neumann $\bigvee\limits_{x
\in B} x \in \ensuremath{\textnormal{\textbf{A}}}$, as required.
\ensuremath{\null\hfill\square}}
\end{lemma}
We now give an important structure theorem for these semialgebras.
\begin{theorem}\label{thm:CompletelyDecomposable}
Let $\ensuremath{(Q,\leq,\bigvee,\bot, \cdot, 1_Q)}$ be a commutative ZDF quantale and let $\ensuremath{\textnormal{\textbf{A}}} \in
\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$. There are primitive subunital idempotents $\{ e_i \}$ such
that $\ensuremath{\textnormal{\textbf{A}}} = \prod\limits_{i} e_i \ensuremath{\textnormal{\textbf{A}}}$, a direct product of
$S^*$--semialgebras.
\proof{ Let $f:X \to X$ be a $Q$--relation. Let $\ensuremath{\textnormal{\text{supp}}}(f)\subset X$, the
\emph{support} of $f$ be the set of elements $x$ such that there exists $y\in X$
such that $f$ relates $x$ to $y$. Let $\ensuremath{\textnormal{\text{cosupp}}}(f)\subset X$, the
\emph{cosupport} of $f$ be the set of elements $x$ such that there exists $y\in
X$ such that $f$ relates $y$ to $x$. First, we
claim
that for $Q$--relations satisfying $f \circ f^\ensuremath{\dag} = f^\ensuremath{\dag} \circ f$ we have
$\ensuremath{\textnormal{\text{supp}}}(f)= \ensuremath{\textnormal{\text{cosupp}}}(f)$. Suppose $x\in \ensuremath{\textnormal{\text{supp}}}(f)$; since $Q$ is ZDF, $f^\ensuremath{\dag}
\circ f$ relates $x$ to itself. However, if $x \not\in \ensuremath{\textnormal{\text{cosupp}}}(f)$ then
$f \circ f^\ensuremath{\dag}$ cannot relate $x$ to any element, contradicting $f \circ f^\ensuremath{\dag} = f^\ensuremath{\dag} \circ f$; hence $x\in \ensuremath{\textnormal{\text{supp}}}(f)$
iff $x \in \ensuremath{\textnormal{\text{cosupp}}}(f)$. So $X = \ensuremath{\textnormal{\text{supp}}}(f) \sqcup \overline{\ensuremath{\textnormal{\text{supp}}}(f)}$ and $f$
has a corresponding matrix representation
$f = \bigl(\begin{smallmatrix}
f_1 & 0 \\
0 & 0 \\
\end{smallmatrix}\bigr)$.
For each $f\in \ensuremath{\textnormal{\textbf{A}}}$ let $f_\ensuremath{\textnormal{\text{supp}}} = \bigl(\begin{smallmatrix}
\id{} & 0 \\
0 & 0 \\
\end{smallmatrix}\bigr) $ be the relation which is the identity
on
the support of $f$ and zero otherwise.
Let $g = \bigl(\begin{smallmatrix}
g_1 & g_2 \\
g_3 & g_4 \\
\end{smallmatrix}\bigr)\in \ensuremath{\textnormal{\textbf{A}}}'$ then in particular $g\circ f = f \circ
g$ and hence $\bigl(\begin{smallmatrix}
g_1 f_1 & 0 \\
g_3 f_1 & 0 \\
\end{smallmatrix}\bigr) = \bigl(\begin{smallmatrix}
f_1 g_1 & f_1 g_2 \\
0 & 0 \\
\end{smallmatrix}\bigr)$ and so if $Q$ is ZDF then $g_2= 0$ and $g_3 =
0$, and hence $g = \bigl(\begin{smallmatrix}
g_1 & 0 \\
0 & g_4 \\
\end{smallmatrix}\bigr)$. Then clearly $\bigl(\begin{smallmatrix}
g_1 & 0 \\
0 & g_4 \\
\end{smallmatrix}\bigr) \bigl(\begin{smallmatrix}
1 & 0 \\
0 & 0 \\
\end{smallmatrix}\bigr) =\bigl(\begin{smallmatrix}
1 & 0 \\
0 & 0 \\
\end{smallmatrix}\bigr) \bigl(\begin{smallmatrix}
g_1 & 0 \\
0 & g_4 \\
\end{smallmatrix}\bigr)$, i.e. $f_\ensuremath{\textnormal{\text{supp}}} \circ g = g \circ f_\ensuremath{\textnormal{\text{supp}}}$, and
hence we have shown that $f_\ensuremath{\textnormal{\text{supp}}} \in \ensuremath{\textnormal{\textbf{A}}}''$, and hence by the assumption
that $\ensuremath{\textnormal{\textbf{A}}}$ is von Neumann we have $f_\ensuremath{\textnormal{\text{supp}}} \in \ensuremath{\textnormal{\textbf{A}}}$. By a similar argument
$f_{\overline{\ensuremath{\textnormal{\text{supp}}}}} = \bigl(\begin{smallmatrix}
0 & 0 \\
0 & \id{} \\
\end{smallmatrix}\bigr)$ also belongs to $\ensuremath{\textnormal{\textbf{A}}}$.
Consider the collection of elements $f_\ensuremath{\textnormal{\text{supp}}}$ for all $f \in \ensuremath{\textnormal{\textbf{A}}}$. Each
$f_\ensuremath{\textnormal{\text{supp}}}$ corresponds to a subset of $X$ and hence this collection forms a
Boolean subalgebra
of $P(X)$, the powerset of $X$. By Lemma \ref{thm:IsSubquantale} $\ensuremath{\textnormal{\textbf{A}}}$ has
all joins and hence this collection
of subunital maps forms a complete Boolean subalgebra of $P(X)$ which by
\cite[Chap. 14, Theorem 8]{GivantHalmos2009:BooleanAlgebras} is atomic. The
atoms $e_i$ of this Boolean algebra are the primitive subunital idempotents of
$\ensuremath{\textnormal{\textbf{A}}}$, and $1_\ensuremath{\textnormal{\textbf{A}}} =
\bigvee e_i$. For every
element $f \in \ensuremath{\textnormal{\textbf{A}}}$ we have $f = \bigvee f\circ e_i$ for pairwise orthogonal
subunital idempotents, and hence $\ensuremath{\textnormal{\textbf{A}}}$ is the direct product of the
subalgebras $e_i \ensuremath{\textnormal{\textbf{A}}}$.
\ensuremath{\null\hfill\square}}
\end{theorem}
For a commutative ZDF quantale $Q$ and for any set $X$ Theorem
\ref{thm:CompletelyDecomposable} states that all
semialgebras in $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$ are \emph{completely
decomposable}, that is, a direct sum of their indecomposable
(non--unital) subalgebras $e_i \ensuremath{\textnormal{\textbf{A}}} \subset \ensuremath{\textnormal{\textbf{A}}}$. We
call each $e_i \ensuremath{\textnormal{\textbf{A}}}$ for $e_i$ a primitive subunital idempotent an
\emph{indecomposable component} of $\ensuremath{\textnormal{\textbf{A}}}$.
We now give a characterisation of the prime spectrum for semialgebras of
quantale--valued relations.
\begin{lemma}\label{lem:CharacterisingPSpec}
For $Q$ a commutative ZDF quantale and $\ensuremath{\textnormal{\textbf{A}}}$ an
object in
$\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$, then $J \subset \ensuremath{\textnormal{\textbf{A}}}$ is a prime $k^*$--ideal if and only if it is the
kernel
of some $S^*$--semialgebra homomorphism $\gamma: \ensuremath{\textnormal{\textbf{A}}} \to \ensuremath{\textnormal{\textbf{2}}}$.
\end{lemma}
\begin{lemma}\label{lem:SpecAndPrimitiveIdempotents}
For each semialgebra homomorphism $\gamma:\ensuremath{\textnormal{\textbf{A}}}\to \ensuremath{\textnormal{\textbf{2}}}$ there is
exactly one primitive subunital idempotent $e_a$ in $\ensuremath{\textnormal{\textbf{A}}}$ such that
$\gamma(e_a) = 1$.
\proof{First we show there is at most one primitive idempotent $e_a$ such that
$\gamma(e_a)=
1$. Suppose there is
another $e_b$ such that $\gamma(e_b)= 1$. Since $e_a$ and $e_b$ are
orthogonal we would have $\gamma(e_a)\gamma(e_b) = 1 \not= 0 = \gamma(e_a \circ e_b)$, contradicting that $\gamma$ is a homomorphism; hence there is at most one $e_a$ such that $\gamma(e_a) = 1$. Suppose however
there are no primitive idempotents which map to 1. We still have
$\gamma(1_\ensuremath{\textnormal{\textbf{A}}}) = \gamma(\bigvee\limits_i e_i) = 1$. If there is only a finite
number of primitive idempotents then we have $\gamma(\bigvee\limits_i e_i) =
\bigvee\limits_i \gamma(e_i)$, a contradiction, and hence there is exactly one primitive
idempotent satisfying $\gamma(e_a) = 1$. Suppose then that there are an
infinite number of primitive idempotents. Partition the primitive idempotents
into two infinite sets $K$ and $L$. Then $\gamma(1_\ensuremath{\textnormal{\textbf{A}}}) =\gamma(\bigvee e_i)
= \gamma(\bigvee\limits_K e_k) + \gamma(\bigvee\limits_L e_l) = 1$ but clearly
$\gamma(\bigvee\limits_K e_k) \gamma(\bigvee\limits_L e_l)=0$ and hence either
$\gamma(\bigvee\limits_K e_k)=0$ or $\gamma(\bigvee\limits_L e_l)=0$. Suppose
$\gamma(\bigvee\limits_L e_l)=0$, then by Lemma \ref{lem:CharacterisingPSpec}
$\ker(\gamma)$ is a prime ideal, however there are elements $e_{k_1}$ and
$e_{k_2}$ in $K$ and
therefore not in $\ker(\gamma)$ and hence we have $e_{k_1} \cdot e_{k_2}= 0$
contradicting the primeness of $\ker(\gamma)$, and hence for each semialgebra
homomorphism $\gamma: \ensuremath{\textnormal{\textbf{A}}} \to \ensuremath{\textnormal{\textbf{2}}}$ there is exactly one primitive
idempotent such that $\gamma(e_a)=1$.
\ensuremath{\null\hfill\square}}
\end{lemma}
\begin{theorem}\label{thm:NicePrimeIdeals}
Let $Q$ be a ZDF quantale and let $\ensuremath{\textnormal{\textbf{A}}}$ in $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$ have decomposition
$\ensuremath{\textnormal{\textbf{A}}} = \prod\limits_{i} e_i \ensuremath{\textnormal{\textbf{A}}}$. Then for each indecomposable subalgebra $e_a
\ensuremath{\textnormal{\textbf{A}}}$ the \emph{complement} $\overline{e_a \ensuremath{\textnormal{\textbf{A}}}}= \prod\limits_{i \not=a} e_i
\ensuremath{\textnormal{\textbf{A}}}$ of $e_a \ensuremath{\textnormal{\textbf{A}}}$ is a prime ideal.
\proof{This follows directly from Lemma \ref{lem:SpecAndPrimitiveIdempotents},
simply define the map $\gamma_a:\ensuremath{\textnormal{\textbf{A}}} \to \ensuremath{\textnormal{\textbf{2}}}$ with kernel $\overline{e_a
\ensuremath{\textnormal{\textbf{A}}}}$, sending all other elements to 1.
\ensuremath{\null\hfill\square}}
\end{theorem}
Although we will see in Example \ref{example:1} that the prime spectrum and the
Gelfand spectrum for
$\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}$ do not coincide in general, the following lemma shows that they are
related.
\begin{lemma}\label{lem:NaturalTransformationsGP}
For $Q$ a commutative ZDF quantale there are natural transformations
$\xi:\ensuremath{\Spec_\textnormal{\text{G}}} \to \ensuremath{\Spec_\textnormal{\text{P}}}$ and $\tau: \ensuremath{\Spec_\textnormal{\text{P}}} \to \ensuremath{\Spec_\textnormal{\text{G}}}$ such that $\xi\circ \tau
\cong \id{}$.
\proof{ For $Q$ a quantale there is exactly one quantale homomorphism
$!:\ensuremath{\textnormal{\textbf{2}}} \to Q$. For $Q$ a ZDF quantale there is at least one homomorphism
$w :Q \to \ensuremath{\textnormal{\textbf{2}}}$,
which sends all non--zero elements to 1.
Since $\ensuremath{\Spec_\textnormal{\text{P}}}$ can be characterised by the collection of homomorphisms
$\gamma: \ensuremath{\textnormal{\textbf{A}}} \to \ensuremath{\textnormal{\textbf{2}}}$ let $\tau(\gamma) = ! \circ \gamma$. Similarly
for
$\rho:\ensuremath{\textnormal{\textbf{A}}} \to Q$ define $\xi(\rho)= w \circ \rho$. Naturality is easy to
check and clearly $w \circ ! \circ \gamma = \gamma$, as required. \ensuremath{\null\hfill\square}}
\end{lemma}
In Sect. \ref{sec:Topologising} we discuss a topological interpretation of
this map $\xi_\ensuremath{\textnormal{\textbf{A}}}: \ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}}) \to \ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$, in particular how to
relate the prime spectrum to the state space interpretation of the Gelfand
spectrum of Figure 1.
The following theorem follows directly from Lemma
\ref{lem:NaturalTransformationsGP}.
\begin{theorem}\label{thm:GSpecAndPSpec}
For a ZDF quantale $Q$, the Gelfand spectrum for $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$ has a global
section if and only if the prime spectrum has a global section.
\end{theorem}
\begin{example}\label{example:1}
Let $Q$ be the commutative involutive quantale $[0,1]$ with usual
multiplication, trivial involution, and where $\bigvee S = \sup S$.
Let
$X$ be a two element set and consider $\ensuremath{\textnormal{\textbf{A}}}$ the von Neumann $Q$--semialgebra
\[
\ensuremath{\textnormal{\textbf{A}}} \, =\, \big\{ \bigl(\begin{smallmatrix}
p & 0 \\
0 & q
\end{smallmatrix}\bigr) \ | \ p,q \in Q \ \big\}\, \cong \, Q \oplus Q.
\]
It is easy to see that there are four elements of $\ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$:
\[
J_1 = \big\{ \bigl(\begin{smallmatrix}
p & 0 \\
0 & 0
\end{smallmatrix}\bigr) \ | \ p \in Q \ \big\} \qquad \qquad J_2 =
\big\{
\bigl(\begin{smallmatrix}
p & 0 \\
0 & q
\end{smallmatrix}\bigr) \ | \ p \in Q, \ q < 1 \ \big\}
\]\[
K_1 = \big\{ \bigl( \begin{smallmatrix}
0 & 0 \\
0 & q
\end{smallmatrix}\bigr) \ | \ q \in Q \ \big\} \qquad \qquad K_2 =
\big\{
\bigl(\begin{smallmatrix}
p & 0 \\
0 & q
\end{smallmatrix}\bigr) \ | \ q \in Q, \ p < 1 \ \big\}
\]
There are three semialgebra homomorphisms from $Q$ to itself: $u:Q \to Q$
defined $u(x) = 1$ for all
$x\not=0$; $d:Q \to Q$ defined $d(x)=0$ for all $x<1$; and the
identity
$\id{}:Q \to Q$. Hence there are six homomorphisms
\[
\varphi_1= \langle d ,0\rangle: Q \oplus Q
\to Q \quad\quad \varphi_2 = \langle u,0\rangle: Q \oplus Q
\to Q \quad\quad \varphi_3= \langle \id{},0\rangle: Q \oplus Q
\to Q
\]
\[
\theta_1= \langle 0,d \rangle: Q \oplus Q
\to Q \quad \quad \theta_2 = \langle 0, u \rangle: Q \oplus Q
\to Q \quad \quad \theta_3= \langle 0,\id{} \rangle: Q \oplus Q
\to Q
\]
corresponding to the six elements of $\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}})$.
\end{example}
\section{A Proof of Non--Contextuality}
We now show that for a ZDF quantale $Q$ every object $X$ in $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}$ is
Kochen--Specker non--contextual. We do this by showing that picking an
element from the underlying set $X$ allows one to construct a global section
of $\ensuremath{\Spec_\textnormal{\text{P}}}$. By Theorem
\ref{thm:GSpecAndPSpec} we can then conclude that $\ensuremath{\Spec_\textnormal{\text{G}}}$ has global
sections and thus every object
in $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}$ is Kochen--Specker non--contextual. We then show a partial converse
to this result, that every global section for $\ensuremath{\Spec_\textnormal{\text{P}}}$ in turn picks out an
element from $X$.
\begin{theorem}\label{thm:MainTheorem}
For $Q$ a commutative ZDF quantale, and $X$ a set, each $x \in X$
determines a global section of the prime spectrum $\ensuremath{\Spec_\textnormal{\text{P}}}: \ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)^{\ensuremath{^{\mathrm{op}}}\xspace}
\to \ensuremath{\textnormal{\textbf{Set}}}$.
\proof{ We show that each element $x \in X$ determines a global section. By
Theorem \ref{thm:CompletelyDecomposable} each semialgebra $\ensuremath{\textnormal{\textbf{A}}}$ in
$\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$ has a decomposition $\prod\limits_i e_i \ensuremath{\textnormal{\textbf{A}}}$ for subunital
idempotents $e_i$. Note that $x$ is in the support of exactly one of the
primitive subunital idempotents, which we will denote $e_x$.
By Theorem \ref{thm:NicePrimeIdeals} $\overline{e_x \ensuremath{\textnormal{\textbf{A}}}}$ is a prime ideal.
Let $\tilde{x}_\ensuremath{\textnormal{\textbf{A}}}: \ensuremath{\textnormal{\textbf{A}}} \to \ensuremath{\textnormal{\textbf{2}}}$ be
the map corresponding to this prime ideal defined $\tilde{x}_\ensuremath{\textnormal{\textbf{A}}}(e_x)= 1$.
The claim is that $\tilde{x}$ determines a natural transformation. We
need to show that for each $\ensuremath{\textnormal{\textbf{A}}} \hookrightarrow \ensuremath{\textnormal{\textbf{B}}}$ the restriction
of
$\tilde{x}_\ensuremath{\textnormal{\textbf{B}}}$ to $\ensuremath{\textnormal{\textbf{A}}}$ is equal to $\tilde{x}_\ensuremath{\textnormal{\textbf{A}}}$. Let $\ensuremath{\textnormal{\textbf{B}}} =
\prod\limits_j d_j \ensuremath{\textnormal{\textbf{B}}}$ with $\tilde{x}_\ensuremath{\textnormal{\textbf{B}}} (d_x)=1$. Since
$e_x$ and $d_x$ both relate $x$ to itself we have $e_x \circ d_x \not=
0$. Clearly then $e_x \circ d_x$ is a non--zero element of the subsemialgebra
$d_x \ensuremath{\textnormal{\textbf{B}}} \subset \ensuremath{\textnormal{\textbf{B}}}$ and hence $\tilde{x}_\ensuremath{\textnormal{\textbf{B}}}(e_x \circ d_x) = 1$. This
implies that $\tilde{x}_\ensuremath{\textnormal{\textbf{B}}}(e_x) = 1$ and therefore $\tilde{x}_\ensuremath{\textnormal{\textbf{A}}} (e_x)=
\tilde{x}_\ensuremath{\textnormal{\textbf{B}}} (e_x)$.
\ensuremath{\null\hfill\square}}
\end{theorem}
Central to the proof of Theorem \ref{thm:MainTheorem} is reducing the problem
to a consideration of the partitions of the underlying set $X$. The proof of the
Kochen--Specker theorem
also reduces the problem to a consideration of the ``partitions'' on the
Hilbert space $H$, that is, the orthonormal bases of $H$. At the heart
of the difference between the contextuality results for $\ensuremath{\textnormal{\textbf{Hilb}}}$ and $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}$ is
that given an element of a set $X$ we can pick a component from every partition
of $X$ in a canonical way. However, for a Hilbert space if we choose a vector
$| \psi \rangle \in H$ there is not a canonical way of picking an element from
each orthonormal basis of $H$.
This non--contextuality result for $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}$ is consistent with a theorem which
states
that the category of finite sets and relations is Mermin--local
\cite{GogiosoZeng2015:MerminNonlocality}, lending some credibility to our
definition of
Kochen--Specker contextuality.
We now show a partial converse of Theorem \ref{thm:MainTheorem}, that
is,
every global section of $\ensuremath{\Spec_\textnormal{\text{P}}}$ isolates some $x \in X$, although we do not
claim that every global section is of the form $\tilde{x}$ as defined in the
proof of
Theorem \ref{thm:MainTheorem}.
\begin{lemma}\label{lem:TrivialIsVonNeumann}
For $X$ a set, the set of relations $\ensuremath{\textnormal{\textbf{E}}} = \{ \ q \bullet \id{X}:X \to X \ |
\ q \in Q \ \}$ belongs to $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$.
\end{lemma}
We call $\ensuremath{\textnormal{\textbf{E}}}$ (as defined in Lemma \ref{lem:TrivialIsVonNeumann}) the
\emph{trivial semialgebra on} $X$. Clearly there is an inclusion $\ensuremath{\textnormal{\textbf{E}}}
\hookrightarrow \ensuremath{\textnormal{\textbf{A}}}$ for every $\ensuremath{\textnormal{\textbf{A}}}$ in $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$.
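For instance (a small sanity check, not needed in the sequel): over the
two--element quantale $\ensuremath{\textnormal{\textbf{2}}}$ the only scalars are $0$ and $1$, so the trivial
semialgebra on $X$ consists of exactly two relations, the empty relation and
the identity relation.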
\begin{lemma}\label{lem:ClosedUnderJoins}
Suppose $\ensuremath{\textnormal{\textbf{A}}} \subset \ensuremath{\textnormal{\text{Hom}}}(A,A)$ belongs to $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(A)$ and suppose
$\ensuremath{\textnormal{\textbf{B}}} \subset \ensuremath{\textnormal{\text{Hom}}}(B,B)$ belongs to $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(B)$ then $\ensuremath{\textnormal{\textbf{A}}} \oplus \ensuremath{\textnormal{\textbf{B}}}
\subset \ensuremath{\textnormal{\text{Hom}}}(A\sqcup B, A \sqcup B)$ belongs to $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(A\sqcup B)$.
\end{lemma}
\begin{lemma}\label{lem:ClosedUnderRestriction}
If $\ensuremath{\textnormal{\textbf{A}}} = e_1\ensuremath{\textnormal{\textbf{A}}} \oplus e_2\ensuremath{\textnormal{\textbf{A}}}$ belongs to $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(A)$ where $e_1$
is the identity morphism
on some subset $E \subset A$ then $e_1\ensuremath{\textnormal{\textbf{A}}}$ viewed as a subsemialgebra
$e_1\ensuremath{\textnormal{\textbf{A}}} \subset \ensuremath{\textnormal{\text{Hom}}}(E,E)$ belongs to $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(E)$.
\end{lemma}
\begin{theorem}
Let $Q$ be a ZDF quantale and $X$ an object in $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}$. Every global section
$\chi:T \to \ensuremath{\Spec_\textnormal{\text{P}}}(-)$ uniquely determines some $x \in X$.
\proof{ By Lemma \ref{lem:SpecAndPrimitiveIdempotents} for $\ensuremath{\textnormal{\textbf{A}}} =
\prod\limits_i e_i \ensuremath{\textnormal{\textbf{A}}}$ there is one primitive idempotent element $e_a$ such
that $\chi(e_a) = 1$. For $\ensuremath{\textnormal{\textbf{B}}} = \prod\limits_j d_j \ensuremath{\textnormal{\textbf{B}}}$ there is one $d_b$
such that $\chi(d_b)=1$. We claim that for $e_a$ and $d_b$ we have $e_a \circ
d_b\not= 0$.
Suppose, for contradiction, that $e_a \circ d_b = 0$, so that $E_a =
\ensuremath{\textnormal{\text{supp}}}(e_a)$ and $E_b = \ensuremath{\textnormal{\text{supp}}}(d_b)$ are disjoint, and let $\ensuremath{\textnormal{\textbf{E}}}_1$ be the
trivial semialgebra defined on the set $X \backslash (E_a \sqcup E_b)$. Let
$\ensuremath{\textnormal{\textbf{E}}}_2$ be the trivial semialgebra on $X\backslash E_a$ and $\ensuremath{\textnormal{\textbf{E}}}_3$ be the
trivial semialgebra on $X\backslash E_b$. Hence we have unital subsemialgebra
inclusions
\[
\begin{tikzpicture}
\node(A) at (0,0) {$e_a \ensuremath{\textnormal{\textbf{A}}} \oplus \ensuremath{\textnormal{\textbf{E}}}_2$};
\node(B) at (-1,1) {$\ensuremath{\textnormal{\textbf{A}}}$};
\node(C) at (1,1) {$e_a \ensuremath{\textnormal{\textbf{A}}} \oplus d_b \ensuremath{\textnormal{\textbf{B}}} \oplus \ensuremath{\textnormal{\textbf{E}}}_1$};
\node(E) at (2,0) {$d_b \ensuremath{\textnormal{\textbf{B}}} \oplus \ensuremath{\textnormal{\textbf{E}}}_3$};
\node(F) at (3,1) {$\ensuremath{\textnormal{\textbf{B}}}$};
\draw[left hook->] (A) to node {}(B);
\draw[right hook->] (A) to node {}(C);
\draw[left hook->] (E) to node {}(C);
\draw[right hook->] (E) to node {}(F);
\end{tikzpicture}
\]
By naturality, if $\chi_\ensuremath{\textnormal{\textbf{A}}}(e_a)=1$ then $\chi_{e_a \ensuremath{\textnormal{\textbf{A}}} \oplus d_b
\ensuremath{\textnormal{\textbf{B}}} \oplus \ensuremath{\textnormal{\textbf{E}}}_1}(e_a)=1$, which implies that $\chi_{e_a \ensuremath{\textnormal{\textbf{A}}}
\oplus d_b
\ensuremath{\textnormal{\textbf{B}}} \oplus \ensuremath{\textnormal{\textbf{E}}}_1}(d_b)=0$, which in turn implies that $\chi_\ensuremath{\textnormal{\textbf{B}}}(d_b)=0$,
which
is a contradiction.
Since there is a semialgebra $\ensuremath{\textnormal{\textbf{A}}} = \prod\limits_{x \in X} Q$, whose
primitive idempotents are supported on singletons, picking a global section
for this semialgebra amounts to picking a single element of $X$.
\ensuremath{\null\hfill\square}}
\end{theorem}
\begin{remark}
Spekkens' toy theory \cite{Spekkens2007:Epistemic} can be modelled in $\ensuremath{\textnormal{\textbf{Rel}}}$
using Frobenius algebras as a notion of observable
\cite{CoeckeEdwards2008:ToyQuantum}, and hence by Remark
\ref{rem:FrobeniusLifts} can be modelled by commutative von Neumann
semialgebras. In
Spekkens' toy theory the \emph{ontic states} of
the physical system, which represent local hidden variables, are
represented by the singleton elements of the underlying set, and hence we see a
correspondence between the ontic states of the theory and the global sections
in the model.
\end{remark}
\section{Topologising the State Space}
\label{sec:Topologising}
The concept of the ``spectrum'' of an algebraic object is a broad one,
appearing across many fields of
mathematics: it lies at the heart of a family of deep
results connecting algebra and topology \cite{Johnstone1982:StoneSpaces}; it
is a fundamental concept in algebraic geometry
\cite{Smith2014:AlgebraicGeometry}; and it is central to the algebraic
approach to classical physics described in Figure 1
\cite{Nestruev2003:SmoothManifoldsAndObservables}. In each case one endows the
spectrum of an algebraic object with a topology called the \emph{Zariski
topology}. Here we
extend the definition of Zariski topology to the $k^*$--ideals of a semialgebra
and to the characters
on a semialgebra and hence to the prime spectrum and Gelfand spectrum of
$S^*$--semialgebras.
\begin{definition}\label{def:ZariskiTopology}
Let $\ensuremath{\mathcal{A}}\xspace$ be a locally small $\ensuremath{\dag}$--symmetric monoidal category with finite
$\ensuremath{\dag}$--biproducts. Let $X$ be some object, and let $\ensuremath{\textnormal{\textbf{A}}}$ be an object in
$\ensuremath{\mathcal{A}}\xspace\ensuremath{\Alg_{\textnormal{vN}}}(X)$. For each ideal $J\subset \ensuremath{\textnormal{\textbf{A}}}$ define the sets
$\mathbb{V}_P(J) = \{ K \in \ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}}) \ | \
J \subset K \
\}$. We take the collection of $\mathbb{V}_P(J)$ to be a basis of closed sets
for
the \emph{Zariski topology} on $\ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$.
Consider the set $\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}})$. For each ideal
$J\subset \ensuremath{\textnormal{\textbf{A}}}$ define the set $\mathbb{V}_G(J) = \{ \rho \in \ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}}) \ |
\
J
\subset \ker(\rho) \ \}$. We take the collection of $\mathbb{V}_G(J)$ to be a
basis of
closed sets for the \emph{Zariski topology} on $\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}})$.
\end{definition}
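As a point of orientation (our remark, following the classical
commutative--ring computation), the closure of a point $\{K\}$ in
$\ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$ can be read off from this basis of closed sets:
\[
\overline{\{K\}} = \mathbb{V}_P(K) = \{ \ K' \in \ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}}) \ | \ K \subset K' \ \},
\]
so two prime ideals are topologically indistinguishable precisely when each
contains the other.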
Hence, under the interpretation of Figure
1 we see that our sets of states are in fact topological spaces. Recall that a
space is $T_0$ if all points are \emph{topologically
distinguishable},
that is, for every pair of points $x$ and $y$ there is at least one open set
containing one but not both of these points.
\begin{theorem}\label{thm:FunctorToTop}
For an $S^*$--semialgebra $\ensuremath{\textnormal{\textbf{A}}}$ the Zariski topology on $\ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$ is
compact and $T_0$, and for $i:\ensuremath{\textnormal{\textbf{A}}} \hookrightarrow \ensuremath{\textnormal{\textbf{B}}}$ the
function
$\tilde{i}: \ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{B}}}) \to \ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$ is continuous with respect to this
topology.
For an $S^*$--semialgebra $\ensuremath{\textnormal{\textbf{A}}}$ the Zariski topology on $\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}})$ is
compact, and for $i:\ensuremath{\textnormal{\textbf{A}}} \hookrightarrow \ensuremath{\textnormal{\textbf{B}}}$ the function
$\tilde{i}: \ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{B}}}) \to \ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}})$ is continuous with respect to this
topology.
\end{theorem}
Theorem \ref{thm:FunctorToTop} states that the prime spectrum and Gelfand
spectrum give us functors of the form
\[
\begin{tikzpicture}
\node(A) at (0,0) {$\ensuremath{\mathcal{A}}\xspace\ensuremath{\textnormal{-\textbf{Alg}}}(A)^{\ensuremath{^{\mathrm{op}}}\xspace}$};
\node(B) at (3,0) {$\ensuremath{\textnormal{\textbf{Top}}}$};
\draw[->](A) to node [above]{$\ensuremath{\Spec_\textnormal{\text{P}}}$}(B);
\end{tikzpicture}\qquad\qquad
\begin{tikzpicture}
\node(A) at (0,0) {$\ensuremath{\mathcal{A}}\xspace\ensuremath{\textnormal{-\textbf{Alg}}}(A)^{\ensuremath{^{\mathrm{op}}}\xspace}$};
\node(B) at (3,0) {$\ensuremath{\textnormal{\textbf{Top}}}$};
\draw[->](A) to node [above]{$\ensuremath{\Spec_\textnormal{\text{G}}}$}(B);
\end{tikzpicture}
\]
Note that the Gelfand
spectrum need not even be $T_0$ in general, as we will see in
Example \ref{example:2}.
Theorem \ref{thm:GSpecAndPSpec} relates the prime spectrum and the Gelfand
spectrum for the case when $\ensuremath{\mathcal{A}}\xspace$ is the category $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}$ for $Q$ a ZDF
quantale. The following theorem gives us an insight into the
nature of this relationship in terms of the topological structure on these
spectra.
\begin{theorem}\label{thm:GSpecAndPSpecAsTopologcalSpaces}
For $Q$ a ZDF quantale and $\ensuremath{\textnormal{\textbf{A}}}$ in $\ensuremath{\ensuremath{\textnormal{\textbf{Rel}}}_{Q}}\ensuremath{\Alg_{\textnormal{vN}}}(X)$, each $\xi_\ensuremath{\textnormal{\textbf{A}}}:
\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}}) \to \ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$, as defined in Theorem
\ref{thm:GSpecAndPSpec},
is a quotient map
of topological spaces
where $\rho_1 \sim \rho_2$ iff $\rho_1$ and $\rho_2$ are not distinguishable by
the Zariski topology.
\end{theorem}
The map $\xi_\ensuremath{\textnormal{\textbf{A}}}$ identifies those characters which have the same kernel,
which are precisely those characters which the Zariski topology on
$\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}})$ cannot distinguish.
Theorem \ref{thm:GSpecAndPSpecAsTopologcalSpaces} allows us to think of
$\ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$ as a coarse--graining of the state space $\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}})$ of our
physical system. To illustrate this we revisit Example \ref{example:1}.
\begin{example}\label{example:2}
Let $\ensuremath{\textnormal{\textbf{A}}}$ be as in Example \ref{example:1}. The Zariski
topology on $\ensuremath{\Spec_\textnormal{\text{P}}}(\ensuremath{\textnormal{\textbf{A}}})$ has a basis consisting of the sets $\{ J_1, J_2
\}$, $\{K_1, K_2\}$,
$\{J_1\}$, and $\{K_1\}$. It is easy to check that this topology is $T_0$ but
that it is not $T_1$
and therefore not
Hausdorff. For $\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}})$ the Zariski topology has a basis of
closed
sets $\{ \varphi_1,
\varphi_2, \varphi_3 \}$, $\{ \varphi_1 \}$, $\{ \theta_1,
\theta_2, \theta_3 \}$, $\{ \theta_1 \}$. It is easy to check that there is no
open set distinguishing $\varphi_2$ and $\varphi_3$ from one
another, nor
$\theta_2$ from $\theta_3$ as these respective pairs of characters have the
same kernels and
hence $\ensuremath{\Spec_\textnormal{\text{G}}}(\ensuremath{\textnormal{\textbf{A}}})$ fails even to be $T_0$. Note that
$\xi_\ensuremath{\textnormal{\textbf{A}}}(\varphi_2) = \xi_\ensuremath{\textnormal{\textbf{A}}}(\varphi_3) = K_1$ and $\xi_\ensuremath{\textnormal{\textbf{A}}}(\theta_2) =
\xi_\ensuremath{\textnormal{\textbf{A}}}(\theta_3) = J_1$, and hence the topologically indistinguishable
points are identified by $\xi_\ensuremath{\textnormal{\textbf{A}}}$.
\end{example}
\section{Introduction}
Backward stochastic differential equations (in short, BSDEs) were first
introduced by Bismut in 1973 in the paper \cite{Bismut:73} as the equation for
the adjoint process in the stochastic version of the Pontryagin maximum
principle. In 1990, Pardoux and Peng \cite{Pardoux/Peng:90} generalized and
consecrated the now well-known notion of nonlinear backward stochastic
differential equation, providing existence and uniqueness results for the
solution of this kind of equation. Starting with the paper of Pardoux and Peng
\cite{Pardoux/Peng:92}, a stochastic approach to the existence problem of a
solution for many types of deterministic partial differential equations has
been developed. Since then the interest in BSDEs has kept growing, both in the
direction of generalization of the emerging equations and construction of
approximation schemes for them. BSDEs have been widely used as a very useful
instrument for modelling various physical phenomena, in stochastic control and
especially in mathematical finance, as any pricing problem, by replication,
can be written in terms of linear BSDEs, or non-linear BSDEs with portfolio
constraints. Pardoux and R\u{a}\c{s}canu \cite{Pardoux/Rascanu:98} proved,
using a probabilistic interpretation, the existence of the viscosity solution
for a multivalued PDE (with subdifferential operator) of parabolic and
elliptic type.
Backward stochastic variational inequalities (for short, BSVIs) were first
analyzed by Pardoux and R\u{a}\c{s}canu in \cite{Pardoux/Rascanu:98} and
\cite{Pardoux/Rascanu:99} (the extension for Hilbert spaces case), by using a
method that consisted of a penalizing scheme, followed by its convergence.
Even though this type of penalization approach is very useful when dealing
with multivalued backward stochastic dynamical systems governed by a
subdifferential operator, it fails when dealing with a general maximal
monotone operator. This motivated a new approach for the latter case of
equations, via convex analysis instruments. In \cite{Rascanu/Rotenstein:11},
R\u{a}\c{s}canu and Rotenstein established, using the Fitzpatrick function, a
one-to-one correspondence between the solutions of those types of equations
and the minimum points of some proper, convex, lower semicontinuous functions,
defined on well-chosen Banach spaces.
Multi-dimensional backward stochastic differential equations with oblique
reflection (in fact BSDEs reflected on the boundary of a special unbounded
convex domain along an oblique direction), which arise naturally in the study
of the optimal switching problem, were recently studied by Hu and Tang in
\cite{Hu/Tang:10}. As applications, the authors apply the results to solve the
optimal switching problem for stochastic differential equations of functional
type, and they give also a probabilistic interpretation of the viscosity
solution to a system of variational inequalities.
It is worth mentioning that, until now, even for quite complex problems like the
ones analyzed by Maticiuc and R\u{a}\c{s}canu in \cite{Maticiuc/Rascanu:07} or
\cite{Maticiuc/Rascanu:10}, when dealing with BSVIs, the reflection was made
along the normal direction at the boundary of the domain and it was caused by
the presence of the subdifferential operator of a convex lower semicontinuous
function. As the main achievement of this paper we prove the existence and
uniqueness of the solution for the more general BSVI with oblique subgradient
\[
\left\{
\begin{array}
[c]{l}
-dY_{t}+H\left( t,Y_{t}\right) \partial\varphi\left( Y_{t}\right) \left(
dt\right) \ni F\left( t,Y_{t},Z_{t}\right) dt-Z_{t}dB_{t},\quad t\in\left[
0,T\right] ,\smallskip\\
Y_{T}=\eta,
\end{array}
\right.
\]
where $B$ is a standard Brownian motion defined on a complete probability
space, $F$ is the generator function and the random variable $\eta$ is the
terminal data. The term $H\left( t,Y_{t}\right) $ acts on the set of
subgradients, a fact which will determine an oblique reflection for the feedback
process. A similar setup
was constructed and studied for forward stochastic variational inequalities by
Gassous, R\u{a}\c{s}canu and Rotenstein in
\cite{Gassous/Rascanu/Rotenstein:12} by considering first a (deterministic)
generalized Skorokhod problem with oblique subgradients, prior to the general
stochastic case. In the current paper the problems also arise when we operate
with the product $H\left( t,Y_{t}\right) \partial\varphi\left(
Y_{t}\right) $, which inherits neither the monotonicity of the
subdifferential operator nor the Lipschitz property of the matrix involved,
problems which will be overcome by using different methods compared to the
ones used for subgradients reflected along the normal direction. We will split
our problem into two new ones. For the situation when we have only a time
dependence for the matrix $H$ we obtain the existence of a strong solution,
together with the existence of an absolutely continuous feedback-subgradient
process. However, for the general case of a state dependence for $H$ we will
use tightness criteria in order to get a solution for the equation. In
\cite{Buckdahn/Engelbert/Rascanu:04}, Buckdahn, Engelbert and R\u{a}\c{s}canu
discussed the concept of weak solutions of a certain type of backward
stochastic differential equations (not multivalued). Using weak convergence in
the Meyer--Zheng topology, they provided a general existence result. We will
also put our problem into a Markovian framework. The problem consists in
determining in which sense we can take the limit in the sequence $\{\left(
Y^{n},Z^{n},U^{n}\right) \}_{n}$ given by the solutions of the approximating
equations. We have to prove that it is tight in a certain topology. Even
though the $S-$topology introduced by Jakubowski in \cite{Jakubowski:97} (and used for
similar setups by Boufoussi and Casteren \cite{Boufoussi/Casteren:04} or LeJai
\cite{LeJai:02}) seems suitable for our context, the regularity of the
subgradient process given by the approximating equation as part of its
solution permits us to show a convergence in the sense of the Meyer--Zheng
topology, that is, the laws converge weakly if we equip the space of paths with
the topology of convergence in $dt-$measure. The tightness of $\{Z^{n}\}_{n}$
is hard to obtain, therefore we give up the dependence on $Z$ for the
generator function of the multivalued backward equation. This framework
also permits the analysis of the existence of viscosity solutions for systems of
parabolic variational inequalities driven by generalized subgradients.
The article is organized as follows. Section 2 presents the framework of our
study, the assumptions and the hypotheses on the data, the notions of weak and
strong solution for the equations and it closes with the enunciations of the
main results of the paper, the complete proofs representing the core of
Sections 4 and 5. Section 3 is dedicated to some useful apriori estimates for
the solutions of the approximating equations. Section 4 proves the strong
existence and uniqueness of the solution when the matrix $H$ does not depend
on the state of the system, while Section 5 deals with the existence of a weak
solution for the general case of $H=H(t,y)$. For the clarity of the
presentation, the last part of the paper groups together, under the form of an
Annex with three subsections, some useful results that are used throughout
this article.
\section{Setting the problem}
This section is dedicated to the construction of the problem that we will
study in the sequel. We present the hypotheses imposed on the coefficients and
we formulate the main results of this article. The proofs will be detailed in
the next three sections.
Let $T>0$ be fixed and consider the backward stochastic variational inequality
with oblique reflection (for short, we will write $BSVI\left( H\left(
t,y\right) ,\varphi,F\right) $, $BSVI\left( H\left( t\right)
,\varphi,F\right) $ or, respectively, $BSVI\left( H\left( y\right)
,\varphi,F\right) $ if the matrix $H$ depends only on time or, respectively,
on the state of the system), $\mathbb{P}-a.s.~\omega\in\Omega$
\begin{equation}
\left\{
\begin{array}
[c]{l}
Y_{t}+
{\displaystyle\int_{t}^{T}}
H\left( s,Y_{s}\right) dK_{s}=\eta+
{\displaystyle\int_{t}^{T}}
F\left( s,Y_{s},Z_{s}\right) ds-
{\displaystyle\int_{t}^{T}}
Z_{s}dB_{s},\quad t\in\left[ 0,T\right] ,\smallskip\\
dK_{s}\in\partial\varphi\left( Y_{s}\right) \left( ds\right) ,
\end{array}
\right. \label{main oblique BSVI}
\end{equation}
where
\begin{enumerate}
\item[$\left( H_{1}\right) $] $\quad\left( \Omega,\mathcal{F}
,\mathbb{P},\{\mathcal{F}_{t}\}_{t\geq0}\right) $ is a stochastic basis and
$\{B_{t}:t\geq0\}$ is a $\mathbb{R}^{k}-$valued Brownian motion. Moreover,
$\mathcal{F}_{t}=\mathcal{F}_{t}^{B}=\sigma(\{B_{s}:0\leq s\leq t\})\vee
\mathcal{N}$.
\item[$\left( H_{2}\right) $] $\quad H(\cdot,\cdot,y):\Omega\times
\mathbb{R}_{+}\rightarrow\mathbb{R}^{d\times d}$ is progressively measurable
for every $y\in\mathbb{R}^{d}$; there exist $\Lambda,b>0$ such that
$\mathbb{P-}a.s.$ $\omega\in\Omega$, $H=\left( h_{i,j}\right) _{d\times
d}\in C^{1,2}\left( \mathbb{R}_{+}\mathbb{\times R}^{d};\mathbb{R}^{d\times
d}\right) $ and, for all $t\in\left[ 0,T\right] $ and $y,\tilde{y
\in\mathbb{R}^{d}$, $\mathbb{P-}a.s.$ $\omega\in\Omega$
\begin{equation}
\left\{
\begin{array}
[c]{ll}
\left( i\right) \quad & h_{i,j}\left( t,y\right) =h_{j,i}\left(
t,y\right) ,\quad\forall i,j\in\overline{1,d},\medskip\\
\left( ii\right) \quad & \left\langle H\left( t,y\right) u,u\right\rangle
\geq a\left\vert u\right\vert ^{2},\quad\forall u\in\mathbb{R}^{d}\text{ (for
some }a\geq1\text{)},\medskip\\
\left( iii\right) \quad & |H\left( t,\tilde{y}\right) -H\left(
t,y\right) |+|\left[ H\left( t,\tilde{y}\right) \right] ^{-1}-\left[
H\left( t,y\right) \right] ^{-1}|\leq\Lambda|\tilde{y}-y|,\medskip\\
\left( iv\right) \quad & |H\left( t,y\right) |+|\left[ H\left(
t,y\right) \right] ^{-1}|~\leq b,
\end{array}
\right. \label{hypothesis on H}
\end{equation}
where $\left\vert H\left( x\right) \right\vert \overset{def}{=}\left(
\sum_{i,j=1}^{d}\left\vert h_{i,j}\left( x\right) \right\vert ^{2}\right)
^{1/2}$. We denoted by $\left[ H\left( t,y\right) \right] ^{-1}$ the
inverse matrix of $H\left( t,y\right) $. Therefore, $\left[ H\left(
t,y\right) \right] ^{-1}$ has the same properties ((\ref{hypothesis on H})
$-\left( i\right) ,\left( ii\right) $) as $H\left( t,y\right) $.
\item[$\left( H_{3}\right) $] $\quad$the function
\[
\varphi:\mathbb{R}^{d}\rightarrow\left] -\infty,+\infty\right] \text{ is a
proper lower semicontinuous convex function.}
\]
\end{enumerate}
The generator function $F\left( \cdot,\cdot,y,z\right) :\Omega\times\left[
0,T\right] \rightarrow\mathbb{R}^{d}$ is progressively measurable for every
$\left( y,z\right) \in\mathbb{R}^{d}\times\mathbb{R}^{d\times k}$ and there
exist $L,\ell,\rho\in L^{2}\left( 0,T;\mathbb{R}_{+}\right) $ such that
\begin{equation}
\left\{
\begin{array}
[c]{l}
\begin{array}
[c]{l}
\left( i\right) \quad\text{\textit{Lipschitz conditions: for all}
}y,y^{\prime}\in\mathbb{R}^{d},\;z,z^{\prime}\in\mathbb{R}^{d\times
k},\;d\mathbb{P}\otimes dt-a.e.:\medskip\\
\quad\quad\quad
\begin{array}
[c]{l}
\left\vert F(t,y^{\prime},z)-F(t,y,z)\right\vert \leq L\left( t\right)
|y^{\prime}-y|\text{,}\medskip\\
|F(t,y,z^{\prime})-F(t,y,z)|\leq\ell\left( t\right) |z^{\prime}-z|\text{;}
\end{array}
\end{array}
\medskip\\%
\begin{array}
[c]{l}
\left( ii\right) \quad\text{\textit{Boundedness condition:}}\medskip\\
\quad\quad
\begin{array}
[c]{ll}
& \left\vert F\left( t,0,0\right) \right\vert \leq\rho\left( t\right)
,\quad d\mathbb{P}\otimes dt-a.e.\text{.}
\end{array}
\end{array}
\end{array}
\right. \tag{$H_4$}\label{hypothesis on F}
\end{equation}
$\smallskip$
Denote by $\partial\varphi$ the subdifferential operator of $\varphi$
\[
\partial\varphi\left( x\right) \overset{def}{=}\left\{ \hat{x}\in
\mathbb{R}^{d}:\left\langle \hat{x},y-x\right\rangle +\varphi\left( x\right)
\leq\varphi\left( y\right) ,\text{ for all }y\in\mathbb{R}^{d}\right\}
\]
and $Dom\left( \partial\varphi\right) =\{x\in\mathbb{R}^{d}:\partial
\varphi\left( x\right) \neq\emptyset\}$. We will use the notation
$(x,\hat{x})\in\partial\varphi$ in order to express that $x\in Dom\left(
\partial\varphi\right) $ and $\hat{x}\in\partial\varphi\left( x\right) $.
The vector given by the quantity $H\left( x\right) \hat{x}$, with $\hat
{x}\in\partial\varphi\left( x\right) $, will be called in what follows an
\textit{oblique subgradient}.
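A one--dimensional illustration (ours): for $d=1$, $\varphi\left( x\right)
=\left\vert x\right\vert $ and $H\left( x\right) \equiv2$ we have
$\partial\varphi\left( 0\right) =\left[ -1,1\right] $, so the set of
oblique subgradients at $x=0$ is
\[
H\left( 0\right) \partial\varphi\left( 0\right) =\left[ -2,2\right] ;
\]
the action of $H$ rescales and, in higher dimensions, tilts the subgradient
directions.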
\begin{remark}
If $E$ is a closed convex subset of $\mathbb{R}^{d}$ then the convex indicator
functio
\[
\varphi\left( x\right) =I_{E}\left( x\right) =\left\{
\begin{array}
[c]{cc}
0, & \text{if }x\in E,\smallskip\\
+\infty, & \text{if }x\notin E,
\end{array}
\right.
\]
is a convex l.s.c. function and, for $x\in E$
\[
\partial I_{E}\left( x\right) =\{\hat{x}\in\mathbb{R}^{d}:\left\langle
\hat{x},y-x\right\rangle \leq0,\quad\forall y\in E\}=N_{E}\left( x\right) ,
\]
where $N_{E}\left( x\right) $ is the closed external normal cone to $E$ at
$x$. We have $N_{E}\left( x\right) =\emptyset$ if $x\notin E$ and
$N_{E}\left( x\right) =\{0\}$ if $x\in int(E)$ (we denote by $int(E)$ the
interior of the set $E$).
\end{remark}
\noindent We shall call \textit{oblique reflection directions} at time $t$ the
vectors given by
\[
\nu_{t,x}=H\left( t,x\right) n_{x},\quad x\in Bd\left( E\right) ,
\]
where $n_{x}\in N_{E}\left( x\right) $ (we denote by $Bd(E)$ the boundary of
the set $E$).
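For example (an illustration of obliqueness, ours): take $d=2$, $E=\{x\in
\mathbb{R}^{2}:x_{2}\leq0\}$ and the constant symmetric, positive definite
matrix
\[
H=\left(
\begin{array}
[c]{cc}
1 & \theta\\
\theta & 1
\end{array}
\right) ,\quad\left\vert \theta\right\vert <1.
\]
At a boundary point $x$ the normal cone is $N_{E}\left( x\right) =\{\left(
0,\lambda\right) :\lambda\geq0\}$, so the reflection directions $\nu
_{t,x}=Hn_{x}=\lambda\left( \theta,1\right) $ are genuinely oblique as soon
as $\theta\neq0$.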
Let $k:\left[ t,T\right] \rightarrow\mathbb{R}^{d}$, where $0\leq t\leq T$.
We denote $\left\Vert k\right\Vert _{\left[ t,T\right] }\overset{def}
{=}\sup\left\{ \left\vert k\left( s\right) \right\vert :t\leq s\leq
T\right\} $ and, for $t=0$, $\left\Vert k\right\Vert _{T}\overset{def}
{=}\left\Vert k\right\Vert _{\left[ 0,T\right] }$. Considering
$\mathcal{D}\left[ t,T\right] $ the set of the partitions of the time
interval $\left[ t,T\right] $, of the form $\Delta=(t=t_{0}<t_{1}
<...<t_{n}=T)$, let
\[
S_{\Delta}(k)=
{\displaystyle\sum\limits_{i=0}^{n-1}}
|k(t_{i+1})-k(t_{i})|
\]
and $\left\updownarrow k\right\updownarrow _{\left[ t,T\right] }
\overset{def}{=}\sup\limits_{\Delta\in\mathcal{D}}S_{\Delta}(k)$; if $t=0$,
denote $\left\updownarrow k\right\updownarrow _{T}\overset{def}{=}
\left\updownarrow k\right\updownarrow _{\left[ 0,T\right] }$. We consider
the space of bounded variation functions $BV(\left[ 0,T\right]
;\mathbb{R}^{d})=\{k~|~k:\left[ 0,T\right] \rightarrow\mathbb{R}^{d},$
$\left\updownarrow k\right\updownarrow _{T}<\infty\}.$ Taking on the space of
continuous functions $C\left( \left[ 0,T\right] ;\mathbb{R}^{d}\right) $
the usual supremum norm, we have the duality connection $(C(\left[
0,T\right] ;\mathbb{R}^{d}))^{\ast}=\{k\in BV(\left[ 0,T\right]
;\mathbb{R}^{d})~|~k(0)=0\}$, with the duality between these spaces given by
the Riemann-Stieltjes integral $\left( y,k\right) \mapsto\int_{0}
^{T}\left\langle y\left( t\right) ,dk\left( t\right) \right\rangle .$ We
will say that a function $k\in BV_{loc}(\mathbb{R}_{+};\mathbb{R}^{d})$ if,
for every $T>0$, $k\in BV(\left[ 0,T\right] ;\mathbb{R}^{d})$.
\begin{definition}
Given two functions $x,k:\mathbb{R}_{+}\rightarrow\mathbb{R}^{d}$ we say
that $dk\left( t\right) \in\partial\varphi\left( x\left( t\right)
\right) \left( dt\right) $ if
\[
\begin{array}
[c]{ll}
\left( a\right) \quad & x:\mathbb{R}_{+}\rightarrow\mathbb{R}^{d}
\quad\text{is continuous,}\smallskip\\
\left( b\right) \quad &
{\displaystyle\int_{0}^{T}}
\varphi\left( x\left( t\right) \right) dt<\infty,\text{ for all }
T\geq0,\smallskip\\
\left( c\right) \quad & k\in BV_{loc}\left( \mathbb{R}_{+};\mathbb{R}
^{d}\right) ,\text{\quad}k\left( 0\right) =0\text{,}\smallskip\\
\left( d\right) \quad &
{\displaystyle\int_{s}^{t}}
\left\langle y\left( r\right) -x(r),dk\left( r\right) \right\rangle +
{\displaystyle\int_{s}^{t}}
\varphi\left( x\left( r\right) \right) dr\le
{\displaystyle\int_{s}^{t}}
\varphi\left( y\left( r\right) \right) dr,\smallskip\\
\multicolumn{1}{r}{} & \multicolumn{1}{r}{\text{for all }0\leq s\leq t\leq
T\quad\text{and }y\in C\left( \left[ 0,T\right] ;\mathbb{R}^{d}\right)
.\smallskip}
\end{array}
\]
\end{definition}
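In the particular case $\varphi=I_{E}$, for $E$ a closed convex set,
conditions $\left( b\right) $ and $\left( d\right) $ force $x\left(
r\right) \in E$ for all $r$ and, for every continuous $E$--valued function
$y$,
\[
{\displaystyle\int_{s}^{t}}
\left\langle y\left( r\right) -x\left( r\right) ,dk\left( r\right)
\right\rangle \leq0,\quad0\leq s\leq t,
\]
that is, $dk\left( r\right) \in N_{E}\left( x\left( r\right) \right)
\left( dr\right) $: we recover (our remark) the classical Skorokhod
condition, in which $k$ acts only when $x$ lies on the boundary of $E$ and
then along the external normal cone.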
We introduce now the notion of solution for Eq. (\ref{main oblique BSVI}). We
will study two types of solution, given by the following definitions. For the
case $H\left( t,y\right) \equiv H\left( t\right) $ we obtain the existence
of a strong solution while, for the general $H\left( t,y\right) $, we obtain
a weak solution for Eq. (\ref{main oblique BSVI}).
\begin{definition}
\label{Def of strong solution}Given $\left( \Omega,\mathcal{F},\mathbb{P},
\{\mathcal{F}_{t}\}_{t\geq0}\right) $ a fixed stochastic basis and
$\{B_{t}:t\geq0\}$ a $\mathbb{R}^{k}-$valued Brownian motion, we state that a
triplet $\left( Y,Z,K\right) $ is a strong solution of the $BSVI\left(
H\left( t\right) ,\varphi,F\right) $ if $(Y,Z,K):\Omega\times\left[
0,T\right] \rightarrow\mathbb{R}^{d}\times\mathbb{R}^{d\times k}
\times\mathbb{R}^{d}$ are progressively measurable continuous stochastic
processes and $\mathbb{P}-a.s.~\omega\in\Omega$
\[
\left\{
\begin{array}
[c]{l}
Y_{t}+
{\displaystyle\int_{t}^{T}}
H\left( s\right) dK_{s}=\eta+
{\displaystyle\int_{t}^{T}}
F\left( s,Y_{s},Z_{s}\right) ds-
{\displaystyle\int_{t}^{T}}
Z_{s}dB_{s},\quad\forall t\in\left[ 0,T\right] ,\smallskip\\
dK_{s}\in\partial\varphi\left( Y_{s}\right) \left( ds\right) .
\end{array}
\right.
\]
\end{definition}
\noindent Consider now the case when the matrix $H$ depends on the state of
the system. We can reconsider the backward stochastic variational inequality
with oblique reflection in the following manner, $\mathbb{P}-a.s.~\omega
\in\Omega$
\begin{equation}
\left\{
\begin{array}
[c]{l}
Y_{t}+
{\displaystyle\int_{t}^{T}}
H\left( s,Y_{s}\right) dK_{s}=\eta+
{\displaystyle\int_{t}^{T}}
F\left( s,Y_{s},Z_{s}\right) ds-\left( M_{T}-M_{t}\right) ,\quad\forall
t\in\left[ 0,T\right] ,\smallskip\\
dK_{s}\in\partial\varphi\left( Y_{s}\right) \left( ds\right) ,
\end{array}
\right. \label{eq with martingale}
\end{equation}
where $M$ is a continuous martingale (possibly with respect to its natural
filtration, if no other filtration is available). If
\[
H\left( \omega,t,y\right) \equiv H\left( t,y\right) \quad\text{and}\quad
F\left( \omega,t,y,z\right) \equiv F(t,y,z)
\]
we introduce the notion of weak solution of the equation.
\begin{definition}
\label{Def of weak solution}If there exists a probability space $\left(
\Omega,\mathcal{F},\mathbb{P}\right) $ and a triplet $\left( Y,M,K\right)
:\Omega\times\left[ 0,T\right] \rightarrow(\mathbb{R}^{d})^{3}$ such that
\[
\begin{array}
[c]{cl}
\begin{array}
[c]{c}
\left( a\right) \bigskip\medskip\\
\end{array}
&
\begin{array}
[c]{l}
M\text{ is a continuous martingale with respect to the filtration given, for
}\forall t\in\left[ 0,T\right] \text{, by}\\
\mathcal{F}_{t}\overset{def}{=}\mathcal{F}_{t}^{Y,M}=\sigma(\{Y_{s}
,M_{s}:0\leq s\leq t\})\vee\mathcal{N}\text{,}\smallskip
\end{array}
\\
\left( b\right) & Y,K\text{ are c\`{a}dl\`{a}g stochastic processes, adapted
to }\{\mathcal{F}_{t}\}_{t\geq0}\text{,}\smallskip\\
\left( c\right) & \text{relation }(\ref{eq with martingale})\text{ is
verified for every }t\in\left[ 0,T\right] \text{, }\mathbb{P}-a.s.~\omega
\in\Omega\text{,}
\end{array}
\]
the collection $(\Omega,\mathcal{F},\mathbb{P},\mathcal{F}_{t},Y_{t}
,M_{t},K_{t})_{t\in\left[ 0,T\right] }$ is called a weak solution of the
$BSVI\left( H\left( y\right) ,\varphi,F\right) $.
\end{definition}
\noindent In both cases given by Definition \ref{Def of strong solution} or
Definition \ref{Def of weak solution} we will say that $(Y,Z,K)$ or $(Y,M,K)$
is a solution of the considered oblique reflected backward stochastic
variational inequality.\medskip
Now we are able to formulate the main results of this article. Denote
\[
\nu_{t}=
{\displaystyle\int_{0}^{t}}
L\left( s\right) \left[ \mathbb{E}^{\mathcal{F}_{s}}\left\vert
\eta\right\vert ^{p}\right] ^{1/p}ds\quad\text{and}\quad\theta=\sup
_{t\in\left[ 0,T\right] }\left( \mathbb{E}^{\mathcal{F}_{t}}\left\vert
\eta\right\vert ^{p}\right) ^{1/p}~.
\]
\begin{theorem}
\label{Th. for strong existence}Let $p>1$ and the assumptions $(H_{1}-H_{4})$
be satisfied, with $l(t)\equiv l<\sqrt{a}$. If
\begin{equation}
\mathbb{E}e^{\delta\theta}+\mathbb{E}\left\vert \varphi\left( \eta\right)
\right\vert <\infty\label{boundedness of exp moments}
\end{equation}
for all $\delta>0$ then the $BSVI\left( H\left( t\right) ,\varphi,F\right)
$ admits a unique strong solution $\left( Y,Z,K\right) \in S_{d}^{0}\left[
0,T\right] \times\Lambda_{d\times k}^{0}\left( 0,T\right) \times S_{d}
^{0}\left[ 0,T\right] $ such that, for all $\delta>0$,
\begin{equation}
\mathbb{E}\sup\limits_{s\in\left[ 0,T\right] }e^{\delta p\nu_{s}}\left\vert
Y_{s}\right\vert ^{p}+\mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
e^{2\delta\nu_{s}}\left\vert Z_{s}\right\vert ^{2}ds\right) ^{p/2}<\infty.
\label{boundedness with weight}
\end{equation}
Moreover, there exists a positive constant, independent of the terminal time
$T$, $C=C(a,b,\Lambda)$ such that, $\mathbb{P}-a.s.~\omega\in\Omega$,
\[
\left\vert Y_{t}\right\vert \leq C\left( 1+\left[ \mathbb{E}^{\mathcal{F}
_{t}}\left\vert \eta\right\vert ^{p}\right] ^{1/p}\right) ,\quad~\text{for
all }t\in\left[ 0,T\right]
\]
and the process $K$ can be represented as
\[
K_{t}=
{\displaystyle\int_{0}^{t}}
U_{s}ds,
\]
where
\[
\mathbb{E}\int_{0}^{T}|U_{t}|^{2}dt+\mathbb{E}\int_{0}^{T}|Z_{t}|^{2}dt\leq
C\left( \mathbb{E}|\eta|^{2}+\mathbb{E}\left\vert \varphi\left( \eta\right)
\right\vert +\mathbb{E}\int_{0}^{T}|F(t,0,0)|^{2}dt\right) .
\]
\end{theorem}
\begin{remark}
The boundedness condition imposed on the exponential moments in
(\ref{boundedness of exp moments}) is not a very restrictive one. For example,
it holds if we consider $k=1$ and $\eta=B_{T}^{\alpha}$, with
$0<\alpha<2$.
\end{remark}
\begin{theorem}
\label{Th. for weak existence}Let the assumptions $(H_{2}-H_{4})$ be
satisfied. Then the $BSVI\left( H\left( t,y\right) ,\varphi,F\right) $
(\ref{main oblique BSVI}) admits a unique weak solution $(\Omega
,\mathcal{F},\mathbb{P},\mathcal{F}_{t},B_{t},Y_{t},M_{t},K_{t})_{t\in\left[
0,T\right] }$.
\end{theorem}
The proofs of the above results are detailed in the next sections. Section 3
deals with a sequence of approximating equations and apriori estimates of
their solutions. The estimates will be valid for both cases covered by Theorem
\ref{Th. for strong existence} and Theorem \ref{Th. for weak existence}. After
this, the proof is split between Section 4 and Section 5, each one being
dedicated to the particularities brought by Theorem
\ref{Th. for strong existence} and Theorem \ref{Th. for weak existence},
respectively.
\section{Approximating problems and apriori estimates}
In order to prove the existence of the solution (strong or weak) we can
assume, without losing the generality, that
\[
\varphi\left( y\right) \geq\varphi\left( 0\right) =0
\]
because, otherwise, we can change the functions $\varphi$, $F$ and $H$ as
follows:
\begin{align*}
\tilde{\varphi}\left( y\right) & =\varphi\left( y+u_{0}\right)
-\varphi\left( u_{0}\right) -\left\langle \hat{u}_{0},y\right\rangle
\geq0,\smallskip\\
\tilde{F}\left( t,y,z\right) & =F\left( t,y+u_{0},z\right) -H\left(
t,y+u_{0}\right) \hat{u}_{0},\smallskip\\
\tilde{H}\left( t,y\right) & =H\left( t,y+u_{0}\right) ,
\end{align*}
with $u_{0}\in Dom\left( \partial\varphi\right) $ and $\hat{u}_{0
\in\partial\varphi\left( u_{0}\right) $. The solution is now given by
$\left( Y,Z,K\right) =(\tilde{Y}+u_{0},\tilde{Z},\tilde{K})$, where
\[
\left\{
\begin{array}
[c]{l}
\tilde{Y}_{t}+
{\displaystyle\int_{t}^{T}}
\tilde{H}(s,\tilde{Y}_{s})d\tilde{K}_{s}=\left( \eta-u_{0}\right) +
{\displaystyle\int_{t}^{T}}
\tilde{F}(s,\tilde{Y}_{s},\tilde{Z}_{s})ds-
{\displaystyle\int_{t}^{T}}
\tilde{Z}_{s}dB_{s},\text{ }\forall t\in\left[ 0,T\right] ,\smallskip\\
d\tilde{K}_{s}\left( \omega\right) \in\partial\tilde{\varphi}(\tilde{Y}
_{s}\left( \omega\right) )\left( ds\right) ,\text{ }\forall s,\text{
}\mathbb{P}-a.s.~\omega\in\Omega.
\end{array}
\right.
\]
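Let us briefly verify (for the reader's convenience) that the transformed data
are admissible: $\tilde{\varphi}$ is convex and lower semicontinuous,
$\tilde{\varphi}\left( 0\right) =0$ and, by the subdifferential inequality
$\varphi\left( u_{0}\right) +\left\langle \hat{u}_{0},y\right\rangle
\leq\varphi\left( y+u_{0}\right) $, we get $\tilde{\varphi}\geq0$; moreover,
\[
\partial\tilde{\varphi}\left( y\right) =\partial\varphi\left(
y+u_{0}\right) -\hat{u}_{0},
\]
which is precisely the identity used to pass between the two formulations.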
We start simultaneously the proofs of Theorem \ref{Th. for strong existence}
and Theorem \ref{Th. for weak existence} by obtaining some apriori estimates
for the solutions of the approximating equations.\medskip
\begin{proof}
Let $p>1$.
\noindent\textbf{Step 1.} \textit{Boundedness under the assumption}
\[
0\leq\ell\left( t\right) \equiv\ell<\sqrt{a}.
\]
\noindent Let $0<\varepsilon\leq1.$ Consider the approximating BSDE
\begin{equation}
Y_{t}^{\varepsilon}+
{\displaystyle\int_{t}^{T}}
H\left( s,Y_{s}^{\varepsilon}\right) \nabla\varphi_{\varepsilon}\left(
Y_{s}^{\varepsilon}\right) ds=\eta+
{\displaystyle\int_{t}^{T}}
F\left( s,Y_{s}^{\varepsilon},Z_{s}^{\varepsilon}\right) ds-
{\displaystyle\int_{t}^{T}}
Z_{s}^{\varepsilon}dB_{s},\quad\forall t\in\left[ 0,T\right] .
\label{approximating eq for general case}
\end{equation}
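Throughout the proof, $\varphi_{\varepsilon}$ denotes the Moreau--Yosida
regularization of $\varphi$ and $J_{\varepsilon}=\left( I+\varepsilon
\partial\varphi\right) ^{-1}$ its resolvent. We recall (standard facts,
collected here only for the convenience of the reader) the properties which
will be used repeatedly below:
\[
\varphi_{\varepsilon}\left( y\right) =\inf\left\{ \dfrac{1}{2\varepsilon
}\left\vert y-z\right\vert ^{2}+\varphi\left( z\right) :z\in\mathbb{R}
^{d}\right\} ,\quad\nabla\varphi_{\varepsilon}\left( y\right)
=\dfrac{y-J_{\varepsilon}\left( y\right) }{\varepsilon}\in\partial
\varphi\left( J_{\varepsilon}\left( y\right) \right) ,
\]
and, since $\varphi\geq\varphi\left( 0\right) =0$, also $0\leq
\varphi_{\varepsilon}\left( y\right) \leq\varphi\left( y\right) $ and
$\left\vert \nabla\varphi_{\varepsilon}\left( y\right) \right\vert
\leq\dfrac{1}{\varepsilon}\left\vert y\right\vert $ for all $y\in
\mathbb{R}^{d}$.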
Let $\tilde{F}\left( t,y,z\right) =F\left( t,y,z\right) -H\left(
t,y\right) \nabla\varphi_{\varepsilon}\left( y\right) $. Using the
Lipschitz and boundedness hypothesis imposed on $F$ and $H$, we have, for all
$t\in\left[ 0,T\right] $, $y,y^{\prime}\in\mathbb{R}^{d}$, $z,z^{\prime
\in\mathbb{R}^{d\times k}$
\begin{align*}
& |\tilde{F}(t,y^{\prime},z^{\prime})-\tilde{F}(t,y,z)|\\
& \leq|F(t,y^{\prime},z^{\prime})-F(t,y,z)|+|\left[ H\left( t,y\right)
-H\left( t,y^{\prime}\right) \right] \nabla\varphi_{\varepsilon}\left(
y\right) |+|H\left( t,y^{\prime}\right) \left[ \nabla\varphi_{\varepsilon
}\left( y\right) -\nabla\varphi_{\varepsilon}\left( y^{\prime}\right)
\right] |\\
& \leq L\left( t\right) |y^{\prime}-y|+\ell|z^{\prime}-z|+\dfrac{\Lambda
}{\varepsilon}|y^{\prime}-y|\left\vert y\right\vert +\dfrac{b}{\varepsilon
}|y^{\prime}-y|\\
& \leq\left( L\left( t\right) +\frac{\Lambda}{\varepsilon}+\frac
{b}{\varepsilon}\right) (1+\left\vert y\right\vert \vee|y^{\prime
}|)|y^{\prime}-y|+\ell|z^{\prime}-z|
\end{align*}
and
\[
|\tilde{F}(t,y,0)|\leq\rho\left( t\right) +L\left( t\right) \left\vert
y\right\vert +\dfrac{b}{\varepsilon}\left\vert y\right\vert .
\]
By Theorem \ref{Corollary existence sol} (see Annex 6.1), the BSDE
(\ref{approximating eq for general case}) has a unique solution
$(Y^{\varepsilon},Z^{\varepsilon})\in S_{d}^{0}\left[ 0,T\right]
\times\Lambda_{d\times k}^{0}\left( 0,T\right) $ such that for all
$\delta>0$
\begin{equation}
\mathbb{E}\sup\limits_{s\in\left[ 0,T\right] }e^{\delta p\nu_{s}}\left\vert
Y_{s}^{\varepsilon}\right\vert ^{p}+\mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
e^{2\delta\nu_{s}}\left\vert Z_{s}^{\varepsilon}\right\vert ^{2}ds\right)
^{p/2}<\infty. \label{space of sol for approximating equation}
\end{equation}
By the energy equality we obtain
\begin{align*}
|Y_{t}^{\varepsilon}|^{2}+2
{\displaystyle\int_{t}^{s}}
\left\langle Y_{r}^{\varepsilon},H\left( r,Y_{r}^{\varepsilon}\right)
\nabla\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon}\right) \right\rangle
dr+
{\displaystyle\int_{t}^{s}}
|Z_{r}^{\varepsilon}|^{2}dr & =|Y_{s}^{\varepsilon}|^{2}+2
{\displaystyle\int_{t}^{s}}
\left\langle Y_{r}^{\varepsilon},F\left( r,Y_{r}^{\varepsilon},Z_{r}
^{\varepsilon}\right) \right\rangle dr\\
& -2
{\displaystyle\int_{t}^{s}}
\left\langle Y_{r}^{\varepsilon},Z_{r}^{\varepsilon}dB_{r}\right\rangle .
\end{align*}
Since $y\longmapsto\varphi_{\varepsilon}\left( y\right) :\mathbb{R}
^{d}\rightarrow\mathbb{R}$ is a convex $C^{1}-$function, then by the
subdifferential inequality (\ref{subdiff ineq 1}) (see Annex 6.3)
\begin{align*}
& \varphi_{\varepsilon}\left( Y_{t}^{\varepsilon}\right) +
{\displaystyle\int_{t}^{s}}
\left\langle \nabla\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon}\right)
,H(r,Y_{r}^{\varepsilon})\nabla\varphi_{\varepsilon}\left( Y_{r}
^{\varepsilon}\right) \right\rangle dr\\
& \leq\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon}\right) +
{\displaystyle\int_{t}^{s}}
\left\langle \nabla\varphi_{\varepsilon}(Y_{r}^{\varepsilon}),F(r,Y_{r}
^{\varepsilon},Z_{r}^{\varepsilon})\right\rangle dr-
{\displaystyle\int_{t}^{s}}
\left\langle \nabla\varphi_{\varepsilon}(Y_{r}^{\varepsilon}),Z_{r}
^{\varepsilon}dB_{r}\right\rangle .
\end{align*}
As a consequence, combining the previous two inequalities, we obtain
\begin{align}
& |Y_{t}^{\varepsilon}|^{2}+\varphi_{\varepsilon}(Y_{t}^{\varepsilon})+
{\displaystyle\int_{t}^{s}}
\left\langle \nabla\varphi_{\varepsilon}(Y_{r}^{\varepsilon}),H(r,Y_{r}
^{\varepsilon})\nabla\varphi_{\varepsilon}(Y_{r}^{\varepsilon})\right\rangle
dr+
{\displaystyle\int_{t}^{s}}
|Z_{r}^{\varepsilon}|^{2}dr\label{ineq2}\\
& \leq|Y_{s}^{\varepsilon}|^{2}+\varphi_{\varepsilon}(Y_{s}^{\varepsilon})+2
{\displaystyle\int_{t}^{s}}
\left\langle Y_{r}^{\varepsilon},F(r,Y_{r}^{\varepsilon},Z_{r}^{\varepsilon
})\right\rangle dr\nonumber\\
& +
{\displaystyle\int_{t}^{s}}
\left\langle \nabla\varphi_{\varepsilon}(Y_{r}^{\varepsilon}),F(r,Y_{r}
^{\varepsilon},Z_{r}^{\varepsilon})-2H(r,Y_{r}^{\varepsilon})^{\ast}
Y_{r}^{\varepsilon}\right\rangle dr-
{\displaystyle\int_{t}^{s}}
\left\langle 2Y_{r}^{\varepsilon}+\nabla\varphi_{\varepsilon}(Y_{r}
^{\varepsilon}),Z_{r}^{\varepsilon}dB_{r}\right\rangle .\nonumber
\end{align}
Let $\lambda>0$. In the sequel we denote by $C$ a generic positive constant,
independent of $\varepsilon,\delta\in(0,1]$, which can change from
one line to another without affecting the result. The assumptions $(H_{2})$
and (\ref{hypothesis on F}) lead to the following estimates:
\begin{itemize}
\item $\quad\left\langle \nabla\varphi_{\varepsilon}\left( Y_{r}
^{\varepsilon}\right) ,H\left( r,Y_{r}^{\varepsilon}\right) \nabla
\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon}\right) \right\rangle \geq
a\left\vert \nabla\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon}\right)
\right\vert ^{2}$\medskip
\item $\quad2\left\langle Y_{r}^{\varepsilon},F\left( r,Y_{r}^{\varepsilon
},Z_{r}^{\varepsilon}\right) \right\rangle \leq2\ell\left\vert Y_{r}
^{\varepsilon}\right\vert \left\vert Z_{r}^{\varepsilon}\right\vert +2L\left(
r\right) \left\vert Y_{r}^{\varepsilon}\right\vert ^{2}+2\left\vert
Y_{r}^{\varepsilon}\right\vert \left\vert F\left( r,0,0\right) \right\vert
$\medskip
$\quad\leq\lambda\left\vert Z_{r}^{\varepsilon}\right\vert ^{2}+\left(
2L\left( r\right) +\dfrac{\ell^{2}}{\lambda}+1\right) \left\vert
Y_{r}^{\varepsilon}\right\vert ^{2}+\rho^{2}\left( r\right) $\medskip
\item $\quad\left\langle \nabla\varphi_{\varepsilon}\left( Y_{r}
^{\varepsilon}\right) ,F\left( r,Y_{r}^{\varepsilon},Z_{r}^{\varepsilon
}\right) -2H\left( r,Y_{r}^{\varepsilon}\right) ^{\ast}Y_{r}^{\varepsilon
}\right\rangle $\medskip
$\quad\leq\left\vert \nabla\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon
}\right) \right\vert \left[ \ell\left\vert Z_{r}^{\varepsilon}\right\vert
+L\left( r\right) \left\vert Y_{r}^{\varepsilon}\right\vert +\left\vert
F\left( r,0,0\right) \right\vert +2b\left\vert Y_{r}^{\varepsilon
}\right\vert \right] $\medskip
$\quad\leq\lambda\left\vert Z_{r}^{\varepsilon}\right\vert ^{2}+\dfrac
{1}{4\lambda}\ell^{2}\left\vert \nabla\varphi_{\varepsilon}\left(
Y_{r}^{\varepsilon}\right) \right\vert ^{2}+\dfrac{a}{4\lambda}\left\vert
\nabla\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon}\right) \right\vert
^{2}+\dfrac{2\lambda}{a}\left[ \left( L\left( r\right) +2b\right)
^{2}\left\vert Y_{r}^{\varepsilon}\right\vert ^{2}+\left\vert F\left(
r,0,0\right) \right\vert ^{2}\right] $\medskip
\end{itemize}
\noindent Inserting the above estimates in (\ref{ineq2}), we obtain,
$\mathbb{P}-a.s.,$ for all $0\leq t\leq s\leq T$,
\[
\begin{array}
[c]{l}
\left\vert Y_{t}^{\varepsilon}\right\vert ^{2}+\varphi_{\varepsilon}\left(
Y_{t}^{\varepsilon}\right) +\left( a-\dfrac{a+\ell^{2}}{4\lambda}\right)
{\displaystyle\int_{t}^{s}}
\left\vert \nabla\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon}\right)
\right\vert ^{2}dr+\left( 1-2\lambda\right)
{\displaystyle\int_{t}^{s}}
\left\vert Z_{r}^{\varepsilon}\right\vert ^{2}dr\medskip\\
\quad\quad\quad\leq\left\vert Y_{s}^{\varepsilon}\right\vert ^{2}
+\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon}\right) +
{\displaystyle\int_{t}^{s}}
\left( 1+\dfrac{2\lambda}{a}\right) \left\vert F\left( r,0,0\right)
\right\vert ^{2}dr\medskip\\
\quad\quad\quad+
{\displaystyle\int_{t}^{s}}
\left( 2L\left( r\right) +\dfrac{\ell^{2}}{\lambda}+1+\dfrac{2\lambda}
{a}\left( L\left( r\right) +2b\right) ^{2}\right) \left\vert
Y_{r}^{\varepsilon}\right\vert ^{2}dr-
{\displaystyle\int_{t}^{s}}
\left\langle 2Y_{r}^{\varepsilon}+\nabla\varphi_{\varepsilon}\left(
Y_{r}^{\varepsilon}\right) ,Z_{r}^{\varepsilon}dB_{r}\right\rangle .
\end{array}
\]
Denote
\[
K_{t}^{\lambda}=
{\displaystyle\int_{0}^{t}}
\left[ \left( 1+\dfrac{2\lambda}{a}\right) \left\vert F\left(
r,0,0\right) \right\vert ^{2}-\left( a-\dfrac{a+\ell^{2}}{4\lambda}\right)
\left\vert \nabla\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon}\right)
\right\vert ^{2}-\left( 1-2\lambda\right) \left\vert Z_{r}^{\varepsilon
}\right\vert ^{2}\right] dr
\]
and
\[
A\left( t\right) =
{\displaystyle\int_{0}^{t}}
\left( 2L\left( r\right) +\dfrac{\ell^{2}}{\lambda}+1+\dfrac{2\lambda}
{a}\left( L\left( r\right) +2b\right) ^{2}\right) dr.
\]
Since $\varphi_{\varepsilon}\left( y\right) \geq\varphi_{\varepsilon}\left(
0\right) =0$ we have
\[
\begin{array}
[c]{c}
\left\vert Y_{t}^{\varepsilon}\right\vert ^{2}+\varphi_{\varepsilon}\left(
Y_{t}^{\varepsilon}\right) \leq\left\vert Y_{s}^{\varepsilon}\right\vert
^{2}+\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon}\right) +
{\displaystyle\int_{t}^{s}}
\left[ dK_{r}^{\lambda}+\left[ \left\vert Y_{r}^{\varepsilon}\right\vert
^{2}+\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon}\right) \right]
dA\left( r\right) \right] \medskip\\
-
{\displaystyle\int_{t}^{s}}
\left\langle 2Y_{r}^{\varepsilon}+\nabla\varphi_{\varepsilon}\left(
Y_{r}^{\varepsilon}\right) ,Z_{r}^{\varepsilon}dB_{r}\right\rangle
\end{array}
\]
and, by Proposition \ref{exp prop ineq} (see Annex 6.3), we infer
\begin{equation}
\begin{array}
[c]{r}
e^{A\left( t\right) }\left( \left\vert Y_{t}^{\varepsilon}\right\vert
^{2}+\varphi_{\varepsilon}\left( Y_{t}^{\varepsilon}\right) \right) \leq
e^{A\left( s\right) }\left[ \left\vert Y_{s}^{\varepsilon}\right\vert
^{2}+\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon}\right) \right] +
{\displaystyle\int_{t}^{s}}
e^{A\left( r\right) }dK_{r}^{\lambda}\medskip\\
-
{\displaystyle\int_{t}^{s}}
e^{A\left( r\right) }\left\langle 2Y_{r}^{\varepsilon}+\nabla\varphi
_{\varepsilon}\left( Y_{r}^{\varepsilon}\right) ,Z_{r}^{\varepsilon}
dB_{r}\right\rangle .
\end{array}
\label{ineq3}
\end{equation}
Let $\lambda=\dfrac{1}{2}\left( \dfrac{a+\ell^{2}}{4a}+\dfrac{1}{2}\right) $
be fixed. It follows that, for all $0\leq t\leq s\leq T$
\begin{equation}
\begin{array}
[c]{l}
\left\vert Y_{t}^{\varepsilon}\right\vert ^{2}+\varphi_{\varepsilon}\left(
Y_{t}^{\varepsilon}\right) +\mathbb{E}^{\mathcal{F}_{t}}
{\displaystyle\int_{t}^{s}}
\left\vert \nabla\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon}\right)
\right\vert ^{2}dr+\mathbb{E}^{\mathcal{F}_{t}}
{\displaystyle\int_{t}^{s}}
\left\vert Z_{r}^{\varepsilon}\right\vert ^{2}dr\medskip\\
\quad\quad\quad\leq C\mathbb{E}^{\mathcal{F}_{t}}\left\vert Y_{s}
^{\varepsilon}\right\vert ^{2}+\mathbb{E}^{\mathcal{F}_{t}}\varphi
_{\varepsilon}\left( Y_{s}^{\varepsilon}\right) +C\mathbb{E}^{\mathcal{F}
_{t}}
{\displaystyle\int_{t}^{s}}
\left\vert F\left( r,0,0\right) \right\vert ^{2}dr.
\end{array}
\label{ineq4}
\end{equation}
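Let us note (a short verification) that this choice of $\lambda$ makes both
constants on the left--hand side positive: since $\ell^{2}<a$ we have
$\dfrac{a+\ell^{2}}{4a}<\dfrac{1}{2}$, hence $\lambda<\dfrac{1}{2}$ and
$1-2\lambda>0$; also
\[
4\lambda=\dfrac{a+\ell^{2}}{2a}+1>\dfrac{a+\ell^{2}}{a},
\]
which gives $a-\dfrac{a+\ell^{2}}{4\lambda}>0$; dividing by these positive
constants leads to (\ref{ineq4}).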
In particular, we consider $s=T$ and, since $0\leq\varphi_{\varepsilon}\left(
\eta\right) \leq\varphi\left( \eta\right) $,
\begin{equation}
\mathbb{E}
{\displaystyle\int_{0}^{T}}
\left\vert \nabla\varphi_{\varepsilon}\left( Y_{r}^{\varepsilon}\right)
\right\vert ^{2}dr+\mathbb{E}
{\displaystyle\int_{0}^{T}}
\left\vert Z_{r}^{\varepsilon}\right\vert ^{2}dr\leq C\left[ \mathbb{E}
\left\vert \eta\right\vert ^{2}+\mathbb{E}\varphi\left( \eta\right)
+\mathbb{E}
{\displaystyle\int_{0}^{T}}
\left\vert F\left( r,0,0\right) \right\vert ^{2}dr\right] =\tilde{C}.
\label{ineq5}
\end{equation}
Using the identity $y-J_{\varepsilon}(y)=\varepsilon\nabla\varphi
_{\varepsilon}\left( y\right) $ we also obtain
\begin{equation}
\mathbb{E}
{\displaystyle\int_{0}^{T}}
\left\vert Y_{r}^{\varepsilon}-J_{\varepsilon}(Y_{r}^{\varepsilon})\right\vert
^{2}dr\leq\tilde{C}\varepsilon^{2}. \label{ineq6}
\end{equation}
\noindent We write the approximating BSDE
(\ref{approximating eq for general case}) under the form
\[
Y_{t}^{\varepsilon}=\eta+
{\displaystyle\int_{t}^{T}}
dK_{s}^{\varepsilon}-
{\displaystyle\int_{t}^{T}}
Z_{s}^{\varepsilon}dB_{s},
\]
where
\[
dK_{s}^{\varepsilon}=\left[ F\left( s,Y_{s}^{\varepsilon},Z_{s}
^{\varepsilon}\right) -H\left( s,Y_{s}^{\varepsilon}\right) \nabla
\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon}\right) \right] ds.
\]
\noindent If we denote
\[
N_{t}=
{\displaystyle\int_{0}^{t}}
\left[ \left\vert F\left( s,0,0\right) \right\vert +b\left\vert
\nabla\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon}\right) \right\vert
\right] ds\quad\text{and}\quad V\left( t\right) =
{\displaystyle\int_{0}^{t}}
\left( L\left( s\right) +\ell^{2}\right) ds,
\]
then
\[
\left\langle Y_{t}^{\varepsilon},dK_{t}^{\varepsilon}\right\rangle
\leq\left\vert Y_{t}^{\varepsilon}\right\vert dN_{t}+\left\vert
Y_{t}^{\varepsilon}\right\vert ^{2}dV\left( t\right) +\dfrac{1}{4}\left\vert
Z_{t}^{\varepsilon}\right\vert ^{2}dt.
\]
We apply Proposition \ref{ineq cond exp} (see Annex 6.3) and it follows, for
$p=2$,
\[
\begin{array}
[c]{l}
\mathbb{E}^{\mathcal{F}_{t}}\sup\limits_{s\in\left[ t,T\right] }\left\vert
e^{V\left( s\right) }Y_{s}^{\varepsilon}\right\vert ^{2}+\mathbb{E}
^{\mathcal{F}_{t}}\left(
{\displaystyle\int_{t}^{T}}
e^{2V\left( s\right) }\left\vert Z_{s}^{\varepsilon}\right\vert
^{2}ds\right) \medskip\\
\quad\quad\quad\leq C\mathbb{E}^{\mathcal{F}_{t}}\left[ \left\vert
e^{V\left( T\right) }\eta\right\vert ^{2}+\left(
{\displaystyle\int_{t}^{T}}
e^{V\left( s\right) }\left[ \left\vert F\left( s,0,0\right) \right\vert
+b\left\vert \nabla\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon}\right)
\right\vert \right] ds\right) ^{2}\right] .
\end{array}
\]
Taking into account (\ref{ineq5}) it follows
\begin{equation}
\left\vert Y_{0}^{\varepsilon}\right\vert ^{2}\leq\mathbb{E}\sup
\limits_{s\in\left[ 0,T\right] }\left\vert Y_{s}^{\varepsilon}\right\vert
^{2}\leq C\left[ \mathbb{E}\left\vert \eta\right\vert ^{2}+\mathbb{E}
\varphi\left( \eta\right) +\mathbb{E}
{\displaystyle\int_{0}^{T}}
\left\vert F\left( r,0,0\right) \right\vert ^{2}dr\right] . \label{ineq8}
\end{equation}
\noindent The Lipschitz and the boundedness hypotheses $\left( H_{4}\right)
$ imposed on $F$ lead, due to the fact that $l$ is constant, $L\in
L^{2}\left( 0,T;\mathbb{R}_{+}\right) $ and $\rho\in L^{1}\left(
0,T;\mathbb{R}_{+}\right) $, to
\[
\begin{array}
[c]{l}
\mathbb{E}
{\displaystyle\int_{0}^{T}}
|F(r,Y_{r}^{\varepsilon},Z_{r}^{\varepsilon})|^{2}dr\leq2\mathbb{E}
{\displaystyle\int_{0}^{T}}
|F(r,Y_{r}^{\varepsilon},Z_{r}^{\varepsilon})-F(r,Y_{r}^{\varepsilon}
,0)|^{2}dr+2\mathbb{E}
{\displaystyle\int_{0}^{T}}
|F(r,Y_{r}^{\varepsilon},0)|^{2}dr\medskip\\
\quad\quad\quad\quad\quad\quad\quad\quad\quad\leq2l^{2}\mathbb{E}
{\displaystyle\int_{0}^{T}}
|Z_{r}^{\varepsilon}|^{2}dr+4\mathbb{E}
{\displaystyle\int_{0}^{T}}
L^{2}(r)|Y_{r}^{\varepsilon}|^{2}dr+4\mathbb{E}
{\displaystyle\int_{0}^{T}}
|F(r,0,0)|^{2}dr\leq C,
\end{array}
\]
\begin{equation}
\mathbb{E}
{\displaystyle\int_{0}^{T}}
|F(r,Y_{r}^{\varepsilon},Z_{r}^{\varepsilon})|dr\leq\mathbb{E}
{\displaystyle\int_{0}^{T}}
\left[ L\left( r\right) |Y_{r}^{\varepsilon}|+l|Z_{r}^{\varepsilon
}|+|F(r,0,0)|\right] dr\leq C. \label{ineq7}
\end{equation}
For the convenience of the reader, we will group together, under the form of a
lemma, some useful estimates on the solution of the approximating equation,
estimates that we just obtained in Step 1.
\begin{lemma}
\label{Lemma with the estimations from Step 1}Consider the approximating BSDE
(\ref{approximating eq for general case}), with its solution $\left(
Y^{\varepsilon},Z^{\varepsilon}\right) $ and denote $U^{\varepsilon}
=\nabla\varphi_{\varepsilon}(Y^{\varepsilon})$. There exists a positive
constant $C=C(a,b,\Lambda,l,L(\cdot))$, independent of $\varepsilon$, such
that
\begin{equation}
\mathbb{E}\sup\limits_{s\in\left[ 0,T\right] }\left\vert Y_{s}^{\varepsilon
}\right\vert ^{2}+\mathbb{E}
{\displaystyle\int_{0}^{T}}
(\left\vert U_{r}^{\varepsilon}\right\vert ^{2}+\left\vert Z_{r}^{\varepsilon
}\right\vert ^{2})dr\leq C\left[ \mathbb{E}\left\vert \eta\right\vert
^{2}+\mathbb{E}\varphi\left( \eta\right) +\mathbb{E}
{\displaystyle\int_{0}^{T}}
\left\vert F\left( r,0,0\right) \right\vert ^{2}dr\right] .
\label{ineq Y,Z,U}
\end{equation}
$\smallskip$
\end{lemma}
\noindent\textbf{Step 2.} \textit{Convergences under the assumption}
\[
0\leq\ell\left( t\right) \equiv\ell<\sqrt{a}.
\]
The estimates of Step 1 imply that there exist a sequence $\left\{
\varepsilon_{n}:n\in\mathbb{N}^{\ast}\right\} ,$ $\varepsilon_{n}
\rightarrow0$ as $n\rightarrow\infty$, and six progressively measurable
stochastic processes $Y,Z,U,F,\chi,h$ such that
\begin{align*}
Y_{0}^{\varepsilon_{n}} & \rightarrow Y_{0},\quad\text{in }\mathbb{R}^{d},\\
Z^{\varepsilon_{n}} & \rightharpoonup Z,\quad\text{weakly in }L^{2
(\Omega\times\left( 0,T\right) ;\mathbb{R}^{d\times k}),
\end{align*}
and, weakly in $L^{2}(\Omega\times\left( 0,T\right) ;\mathbb{R}^{d})$
\[
\begin{array}
[c]{c}
Y^{\varepsilon_{n}}\rightharpoonup Y,\quad\quad\nabla\varphi_{\varepsilon_{n
}\left( Y^{\varepsilon_{n}}\right) \rightharpoonup U,\quad\quad H\left(
\cdot,Y^{\varepsilon_{n}}\right) \rightharpoonup h,\medskip\\
H\left( \cdot,Y^{\varepsilon_{n}}\right) \nabla\varphi_{\varepsilon_{n
}\left( Y^{\varepsilon_{n}}\right) \rightharpoonup\chi\quad\quad
\text{and}\quad\quad F\left( \cdot,Y^{\varepsilon_{n}},Z^{\varepsilon_{n
}\right) \rightharpoonup F.\medskip
\end{array}
\]
The convergence $Y^{\varepsilon_{n}}\rightharpoonup Y$ and the inequality
(\ref{ineq6}), written for $\varepsilon=\varepsilon_{n}$, imply that, on the
sequence $\left\{ \varepsilon_{n}:n\in\mathbb{N}^{\ast}\right\} $
\[
J_{\varepsilon_{n}}(Y^{\varepsilon_{n}})\rightharpoonup Y,\quad\text{weakly in
}L^{2}(\Omega\times\left( 0,T\right) ;\mathbb{R}^{d})\text{.}
\]
We write (\ref{space of sol for approximating equation}) for $\varepsilon
=\varepsilon_{n}$ and, passing to $\liminf_{n\rightarrow+\infty}$, we obtain
\[
\mathbb{E}\sup\limits_{s\in\left[ 0,T\right] }e^{\delta p\nu_{s}}\left\vert
Y_{s}\right\vert ^{p}+\mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
e^{2\delta\nu_{s}}\left\vert Z_{s}\right\vert ^{2}ds\right) ^{p/2}<\infty.
\]
From the approximating BSDE (\ref{approximating eq for general case}) we have
that, at the limit
\[
Y_{t}+\int_{t}^{T}\chi_{s}ds=\eta+\int_{t}^{T}F_{s}ds-\int_{t}^{T}Z_{s}
dB_{s}.
\]
The continuity of the three integrals from the above equation implies also the
continuity of the process $Y$, but the previous convergences are not yet
sufficient to conclude that $\left( Y,Z\right) $ is a solution of the
considered equation. The remaining problems consist in proving that, for every
$s\in\left[ 0,T\right] $, $\mathbb{P}-a.s.~\omega\in\Omega$,
\[
\chi_{s}=h_{s}U_{s},\quad h_{s}=H(s,Y_{s}),\quad U_{s}\in\partial\varphi
(Y_{s})\quad\text{and}\quad F_{s}=F(s,Y_{s},Z_{s}).
\]
$\smallskip$
\noindent\textbf{Step 3.} \textit{Boundedness under the assumptions}
\[
0\leq\ell\left( t\right) \equiv\ell<\sqrt{a}\quad\text{\textit{and}}
\quad\left\vert \eta\right\vert ^{2}+\left\vert \varphi\left( \eta\right)
\right\vert \leq c,\quad\mathbb{P}-a.s.~\omega\in\Omega.
\]
\noindent From inequality (\ref{ineq4}), written for $s=T$ it follows,
$\mathbb{P}-a.s.$
\[
\left\vert Y_{t}^{\varepsilon}\right\vert ^{2}+\varphi_{\varepsilon}\left(
Y_{t}^{\varepsilon}\right) \leq C\left( c+
{\displaystyle\int_{0}^{T}}
\rho\left( r\right) dr\right) =C^{\prime},\quad\text{for all }t\in\left[
0,T\right] .
\]
\hfill
\end{proof}
From this point on, the proofs of Theorem \ref{Th. for strong existence}
and Theorem \ref{Th. for weak existence} take two separate paths.
\section{Strong existence and uniqueness for $H\left( t,y\right) \equiv
H_{t}$}
We will continue in this section the proof of Theorem
\ref{Th. for strong existence}.\medskip
\begin{proof}
We continue the proof of the existence of a solution. Under the assumptions of
Step 3 (Section 3) we prove that $\left\{ Y^{\varepsilon}:0<\varepsilon
\leq1\right\} $ is a Cauchy sequence. To simplify the presentation of this
task we assume $k=1$.
The form of the matrix $H$ leads to
\[
H\nabla\varphi_{\varepsilon_{n}}\left( Y^{\varepsilon_{n}}\right)
\rightharpoonup HU,\quad\text{weakly in }L^{2}(\Omega\times\left( 0,T\right)
;\mathbb{R}^{d}),
\]
that is
\[
\lim_{n\rightarrow\infty}\mathbb{E}\int_{0}^{T}H_{r}\nabla\varphi
_{\varepsilon_{n}}\left( Y_{r}^{\varepsilon_{n}}\right) dr=\mathbb{E}
\int_{0}^{T}H_{r}U_{r}dr.
\]
From now on, for the symmetric and strictly positive matrix $H_{s}$, by $H_{s}
{}^{-1}$ we will understand the inverse of the matrix $H_{s}$ and not the
inverse of the function $H$.
We have
\[
H_{t}{}^{-1/2}=H_{T}{}^{-1/2}+
{\displaystyle\int_{t}^{T}}
D_{s}ds\quad\text{and}\quad H_{t}^{-1}=H_{T}^{-1}+
{\displaystyle\int_{t}^{T}}
\tilde{D}_{s}ds,
\]
where $D_{s}=-\dfrac{1}{2}H_{s}^{-3/2}\dfrac{d}{ds}H_{s}$ and $\tilde{D}
_{s}=-\dfrac{d}{ds}H_{s}^{-1}$ are $\mathbb{R}^{d\times d}-$valued
progressively measurable stochastic processes such that, $\mathbb{P-}a.s.$,
$|D_{s}|\leq C=\frac{1}{2}b^{3/2}\Lambda$ and $|\tilde{D}_{s}|\leq\Lambda$.
Denote $\Delta_{s}^{\varepsilon,\delta}=H_{s}^{-1/2}\left( Y_{s}
^{\varepsilon}-Y_{s}^{\delta}\right) $. We have
\[
\begin{array}
[c]{l}
\Delta_{t}^{\varepsilon,\delta}=
{\displaystyle\int_{t}^{T}}
dH_{s}^{-1/2}\left( Y_{s}^{\varepsilon}-Y_{s}^{\delta}\right) +
{\displaystyle\int_{t}^{T}}
H_{s}^{-1/2}d(Y_{s}^{\varepsilon}-Y_{s}^{\delta})\medskip\\
\quad=
{\displaystyle\int_{t}^{T}}
d\mathcal{K}_{s}^{\varepsilon,\delta}-
{\displaystyle\int_{t}^{T}}
\mathcal{Z}_{s}^{\varepsilon,\delta}dB_{s},
\end{array}
\]
where
\[
\begin{array}
[c]{l}
d\mathcal{K}_{s}^{\varepsilon,\delta}=D_{s}\left( Y_{s}^{\varepsilon}
-Y_{s}^{\delta}\right) ds+H_{s}^{-1/2}\left[ F\left( s,Y_{s}^{\varepsilon
},Z_{s}^{\varepsilon}\right) -F\left( s,Y_{s}^{\delta},Z_{s}^{\delta
}\right) \right] ds\medskip\\
\quad\quad\quad-H_{s}^{-1/2}\left[ H_{s}\nabla\varphi_{\varepsilon}\left(
Y_{s}^{\varepsilon}\right) -H_{s}\nabla\varphi_{\delta}(Y_{s}^{\delta
})\right] ds\medskip\\
\quad\quad\quad=D_{s}\left( Y_{s}^{\varepsilon}-Y_{s}^{\delta}\right)
ds+H_{s}^{-1/2}\left[ F\left( s,Y_{s}^{\varepsilon},Z_{s}^{\varepsilon
}\right) -F\left( s,Y_{s}^{\delta},Z_{s}^{\delta}\right) \right]
ds\medskip\\
\quad\quad\quad-H_{s}^{1/2}\left[ \nabla\varphi_{\varepsilon}\left(
Y_{s}^{\varepsilon}\right) -\nabla\varphi_{\delta}(Y_{s}^{\delta})\right] ds
\end{array}
\]
and $\mathcal{Z}_{s}^{\varepsilon,\delta}=H_{s}^{-1/2}\left( Z_{s}
^{\varepsilon}-Z_{s}^{\delta}\right) $. By denoting with $C$ a generic
positive constant independent of $\varepsilon$ and $\delta$ that can change
from one line to another, we obtain that
\[
\begin{array}
[c]{l}
\left\langle \Delta_{s}^{\varepsilon,\delta},d\mathcal{K}_{s}^{\varepsilon
,\delta}\right\rangle \leq C\left( |D_{s}|+L\left( s\right) \right)
|Y_{s}^{\varepsilon}-Y_{s}^{\delta}|^{2}ds+Cl|Y_{s}^{\varepsilon}
-Y_{s}^{\delta}||Z_{s}^{\varepsilon}-Z_{s}^{\delta}|ds\medskip\\
\quad\quad\quad\quad\quad-\left\langle \nabla\varphi_{\varepsilon}\left(
Y_{s}^{\varepsilon}\right) -\nabla\varphi_{\delta}(Y_{s}^{\delta}
),Y_{s}^{\varepsilon}-Y_{s}^{\delta}\right\rangle ds\medskip\\
\quad\quad\quad\quad\leq C\left( |D_{s}|+L\left( s\right) \right)
|Y_{s}^{\varepsilon}-Y_{s}^{\delta}|^{2}ds+Cl|Y_{s}^{\varepsilon}
-Y_{s}^{\delta}||\mathcal{Z}_{s}^{\varepsilon,\delta}|ds\medskip\\
\quad\quad\quad\quad\quad+\left( \varepsilon+\delta\right) \left\vert
\nabla\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon}\right) \right\vert
|\nabla\varphi_{\delta}(Y_{s}^{\delta})|ds.
\end{array}
\]
Therefore, from the formula of $\Delta_{s}^{\varepsilon,\delta}$ we have
\[
\left\langle \Delta_{s}^{\varepsilon,\delta},d\mathcal{K}_{s}^{\varepsilon
,\delta}\right\rangle \leq\left( \varepsilon+\delta\right) \left\vert
\nabla\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon}\right) \right\vert
|\nabla\varphi_{\delta}(Y_{s}^{\delta})|ds+|\Delta_{s}^{\varepsilon,\delta
}|^{2}dV_{s}+\frac{1}{4}|\mathcal{Z}_{s}^{\varepsilon,\delta}|^{2}ds,
\]
where, for $\tilde{C}=\tilde{C}(l,a,b,\Lambda)>0$, $V_{t}=\tilde{C}\int
_{0}^{t}(|D_{s}|+L(s))ds$. We apply now Proposition \ref{ineq cond exp} (see
Annex 6.3) with $p\geq2,$ $\lambda=1/2,$ $D=N\equiv0$ and we obtain, for a
positive constant $C=C(l,a,b,p)$ and for $C_{1}>0$ well chosen,
$C_{1}\mathbb{E}\sup_{s\in\left[ 0,T\right] }|Y_{s}^{\varepsilon}
-Y_{s}^{\delta}|^{p}+\mathbb{E}
{\displaystyle\int_{0}^{T}}
|Z_{s}^{\varepsilon}-Z_{s}^{\delta}|^{2}ds\smallskip$
$\leq\mathbb{E}\sup_{s\in\left[ 0,T\right] }e^{pV_{s}}|\Delta_{s}
^{\varepsilon,\delta}|^{p}+\mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
e^{2V_{s}}|\mathcal{Z}_{s}^{\varepsilon,\delta}|^{2}ds\right) ^{p/2}
\smallskip$
$\leq C(\varepsilon+\delta)\mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
e^{2V_{s}}\left\vert \nabla\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon
}\right) \right\vert |\nabla\varphi_{\delta}(Y_{s}^{\delta})|ds\right)
^{p/2}\smallskip$
$\leq C(\varepsilon+\delta)\left( \mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
|\nabla\varphi_{\varepsilon}\left( Y_{s}^{\varepsilon}\right) |^{2}
ds\right) ^{p/2}+\mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
|\nabla\varphi_{\delta}\left( Y_{s}^{\delta}\right) |^{2}ds\right)
^{p/2}\right) ,\smallskip$
\noindent which implies, according to (\ref{ineq5}), that $\left\{
Y^{\varepsilon}:0<\varepsilon\leq1\right\} $ is a Cauchy sequence.
With standard arguments, passing to the limit in the approximating equation
(\ref{approximating eq for general case}), we infer that
\[
Y_{t}+\int_{t}^{T}H_{s}U_{s}ds=\eta+\int_{t}^{T}F(s,Y_{s},Z_{s})ds-\int
_{t}^{T}Z_{s}dB_{s},\quad\forall t\in\left[ 0,T\right] .
\]
From (\ref{space of sol for approximating equation}), by Fatou's Lemma,
(\ref{boundedness with weight}) easily follows. Moreover, since $\nabla
\varphi_{\varepsilon}(x)\in\partial\varphi(J_{\varepsilon}x)$ we have, on the
subsequence $\varepsilon_{n}$
\[
\mathbb{E}
{\displaystyle\int_{0}^{T}}
\left\langle \nabla\varphi_{\varepsilon_{n}}(Y_{t}^{\varepsilon_{n}}
),v_{t}-Y_{t}^{\varepsilon_{n}}\right\rangle dt+\mathbb{E}
{\displaystyle\int_{0}^{T}}
\varphi(J_{\varepsilon_{n}}(Y_{t}^{\varepsilon_{n}}))dt\leq\mathbb{E}
{\displaystyle\int_{0}^{T}}
\varphi(v_{t})dt,
\]
for every progressively measurable continuous stochastic process $v$. Hence
$U_{s}\in\partial\varphi(Y_{s})$ for every $s\in\left[ 0,T\right] ,$
$\mathbb{P}-a.s.~\omega\in\Omega$ and we can conclude that the triplet
$\left( Y,Z,K\right) $ is a strong solution of the $BSVI\left( H\left(
t\right) ,\varphi,F\right) $.\medskip
\noindent\textbf{Uniqueness. }Suppose that the $BSVI\left( H\left( t\right)
,\varphi,F\right) $ admits two strong solutions, denoted by $\left(
Y,Z,K\right) $ and respectively $(\tilde{Y},\tilde{Z},\tilde{K})$, with the
processes $K$ and $\tilde{K}$ represented as
\[
K_{t}=\int_{0}^{t}U_{s}ds\quad\text{and}\quad\tilde{K}_{t}=\int_{0}^{t}
\tilde{U}_{s}ds.
\]
Following the same arguments found in the existence part of the theorem,
denoting $\Delta_{s}=H_{s}^{-1/2}(Y_{s}-\tilde{Y}_{s})$, we have
\[
\Delta_{t}=
{\displaystyle\int_{t}^{T}}
d\mathcal{K}_{s}-
{\displaystyle\int_{t}^{T}}
\mathcal{Z}_{s}dB_{s},
\]
where
\[
d\mathcal{K}_{s}=D_{s}(Y_{s}-\tilde{Y}_{s})ds+H_{s}^{-1/2}[F\left(
s,Y_{s},Z_{s}\right) -F(s,\tilde{Y}_{s},\tilde{Z}_{s})]ds-H_{s}^{1/2}
(U_{s}-\tilde{U}_{s})ds
\]
and $\mathcal{Z}_{s}=H_{s}^{-1/2}(Z_{s}-\tilde{Z}_{s})$.
Since $Y$ and $\tilde{Y}$ are two solutions of the equation, $U_{s}\in
\partial\varphi(Y_{s})$ and $\tilde{U}_{s}\in\partial\varphi(\tilde{Y}_{s})$,
$\forall s\in\left[ 0,T\right] $, $\mathbb{P}-a.s.~\omega\in\Omega$,
\[
\left\langle Y_{s}-\tilde{Y}_{s},U_{s}-\tilde{U}_{s}\right\rangle \geq0
\]
and we obtain, for a positive constant $\bar{C}=\bar{C}(l,a,b)$,
\[
\begin{array}
[c]{l}
\left\langle \Delta_{s},d\mathcal{K}_{s}\right\rangle \leq C\left(
|D_{s}|+L\left( s\right) \right) |Y_{s}-\tilde{Y}_{s}|^{2}ds+Cl|Y_{s}
-\tilde{Y}_{s}||\mathcal{Z}_{s}|ds\bigskip\\
\quad\quad\quad\quad~\leq\bar{C}|\Delta_{s}|^{2}(|D_{s}|+L(s))ds+\dfrac{1}
{4}|\mathcal{Z}_{s}|^{2}ds.
\end{array}
\]
Since
\[
\mathbb{E}\sup_{t\in\left[ 0,T\right] }(e^{pV_{t}}|\Delta_{t}|^{p})\leq
C\mathbb{E}\sup_{t\in\left[ 0,T\right] }|Y_{t}-\tilde{Y}_{t}|^{p}<+\infty
\]
we obtain by Proposition \ref{ineq cond exp} (see Annex 6.3) that
\[
e^{pV_{t}}|\Delta_{t}|^{p}\leq\mathbb{E}^{\mathcal{F}_{t}}e^{pV_{T}}
|\Delta_{T}|^{p}=0
\]
and the uniqueness of a strong solution for $BSVI\left( H\left( t\right)
,\varphi,F\right) $ easily follows.\hfill
\end{proof}
\begin{remark}
Inequality (\ref{ineq2}) allows us to derive some further estimates for the
limit processes. We write (\ref{ineq2}) for $s=T$ and, since
$\varphi(J_{\varepsilon}\left( x\right) )\leq\varphi_{\varepsilon}
(x)\leq\varphi(x)$, by passing to $\liminf_{\varepsilon\rightarrow0}$ in
(\ref{ineq2}), we have for all $t\in\left[ 0,T\right] $, $\mathbb{P}
-a.s.~\omega\in\Omega$
\begin{equation}
\begin{array}
[c]{l}
|Y_{t}|^{2}+\varphi(Y_{t})+
{\displaystyle\int_{t}^{T}}
|U_{r}|^{2}dr+
{\displaystyle\int_{t}^{T}}
|Z_{r}|^{2}dr\leq|\eta|^{2}+\varphi(\eta)+
{\displaystyle\int_{t}^{T}}
\left\langle Y_{r},F(r,Y_{r},Z_{r})\right\rangle dr\medskip\\
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+
{\displaystyle\int_{t}^{T}}
\left\langle U_{r},F(r,Y_{r},Z_{r})-2H_{r}Y_{r}\right\rangle dr-
{\displaystyle\int_{t}^{T}}
\left\langle 2Y_{r}+U_{r},Z_{r}dB_{r}\right\rangle .
\end{array}
\label{estimation from remark}
\end{equation}
\end{remark}
\section{Weak existence for $H\left( t,y\right) $}
We will continue in this section the proof of Theorem
\ref{Th. for weak existence}. All the a priori estimates obtained in Section 3
remain valid. In Section 4 we proved that the approximating sequence given by
BSDE (\ref{approximating eq for general case}) is a Cauchy sequence when the
matrix $H$ does not depend on the state of the system and, as a consequence,
we derived the existence and uniqueness of a strong solution for $BSVI\left(
H\left( t\right) ,\varphi,F\right) $. In the current setup, allowing the
dependence on $Y$, we will situate ourselves in a Markovian framework and we
will use tightness criteria in order to prove the existence of a weak solution
for $BSVI\left( H\left( t,y\right) ,\varphi,F\right) $.
First, let $b:\left[ 0,T\right] \times\mathbb{R}^{k}\rightarrow\mathbb{R}
^{k}$, $\sigma:\left[ 0,T\right] \times\mathbb{R}^{k}\rightarrow
\mathbb{R}^{k\times k}$ be two continuous functions satisfying the classical
Lipschitz conditions, which imply the existence of a non-exploding solution
for the following SDE
\begin{equation}
X_{s}^{t,x}=x+
{\displaystyle\int_{t}^{s}}
b(r,X_{r}^{t,x})dr+
{\displaystyle\int_{t}^{s}}
\sigma(r,X_{r}^{t,x})dB_{r},\quad t\leq s\leq T.
\label{SDE Markovian framework}
\end{equation}
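\medskip\noindent\textit{Numerical illustration.} Equation
(\ref{SDE Markovian framework}) can be simulated by the classical
Euler--Maruyama scheme. The Python sketch below is illustrative only: the
scalar coefficients $b(t,x)=-x$ and $\sigma(t,x)=0.5+0.1\cos x$ are our own
toy choices (Lipschitz, hence admissible), not objects from the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
b = lambda t, x: -x                          # toy drift, Lipschitz in x
sigma = lambda t, x: 0.5 + 0.1 * np.cos(x)   # toy diffusion, Lipschitz in x
t0, x0, T, N, M = 0.0, 1.0, 1.0, 500, 10000
dt = (T - t0) / N

X = np.full(M, x0)                 # M Monte Carlo copies of X^{t0,x0}
sup_abs = np.abs(X)
for i in range(N):
    t = t0 + i * dt
    dB = rng.normal(0.0, np.sqrt(dt), M)     # Brownian increments
    X = X + b(t, X) * dt + sigma(t, X) * dB  # Euler-Maruyama step
    sup_abs = np.maximum(sup_abs, np.abs(X))

print(np.mean(sup_abs ** 2))  # E sup |X_s|^2 stays finite (no explosion)
\end{verbatim}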
According to Friedman \cite{Friedman:75} it follows that, for every
$(t,x)\in\left[ 0,T\right] \times\mathbb{R}^{k}$, the equation
(\ref{SDE Markovian framework}) admits a unique solution $X^{t,x}$. Moreover,
for $p\geq1$, there exists a positive constant $C_{p,T}$ such that
\begin{equation}
\left\{
\begin{array}
[c]{l}
\mathbb{E}\sup\nolimits_{s\in\left[ 0,T\right] }|X_{s}^{t,x}|^{p}\leq
C_{p,T}(1+|x|^{p})\quad\text{and}\medskip\\
\mathbb{E}\sup\nolimits_{s\in\left[ 0,T\right] }|X_{s}^{t,x}-X_{s}
^{t^{\prime},x^{\prime}}|^{p}\leq C_{p,T}(1+|x|^{p})(|t-t^{\prime}
|^{p/2}+|x-x^{\prime}|^{p}),
\end{array}
\right. \label{estimations sol forward eq}
\end{equation}
for all $x,x^{\prime}\in\mathbb{R}^{k}$ and $t,t^{\prime}\in\left[
0,T\right] $.
Let us now consider the continuous generator function $F:\left[ 0,T\right]
\times\mathbb{R}^{k}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ and assume
that there exists $L\in L^{2}\left( 0,T;\mathbb{R}_{+}\right) $ such that,
for all $t\in\left[ 0,T\right] $ and $x\in\mathbb{R}^{k}$,
\begin{equation}
\left\vert F(t,x,y^{\prime})-F(t,x,y)\right\vert \leq L\left( t\right)
|y^{\prime}-y|\text{,}\quad\text{\textit{for all} }y,y^{\prime}\in
\mathbb{R}^{d}. \tag{$H_4^\prime$}\label{hypothesis on F - weak sol}
\end{equation}
Given a continuous function $g:\mathbb{R}^{k}\rightarrow\mathbb{R}^{d}$
satisfying a sublinear growth condition, consider now the $BSVI\left(
H\left( t,y\right) ,\varphi,F\right) $
\begin{equation}
\left\{
\begin{array}
[c]{l}
Y_{s}^{t,x}+
{\displaystyle\int_{s}^{T}}
H(r,Y_{r}^{t,x})dK_{r}^{t,x}=g(X_{T}^{t,x})+
{\displaystyle\int_{s}^{T}}
F(r,X_{r}^{t,x},Y_{r}^{t,x})dr-
{\displaystyle\int_{s}^{T}}
Z_{r}^{t,x}dB_{r},\quad t\leq s\leq T,\smallskip\\
dK_{r}^{t,x}\in\partial\varphi(Y_{r}^{t,x})\left( dr\right) ,\quad\text{for
every }r\text{.}
\end{array}
\right. \label{BSVI Markovian}
\end{equation}
\begin{remark}
The utility of studying the notion of weak solution for our problem is
justified by the non-linear Feynman-Kac representation formula. Following
the same arguments as the ones from \cite{Pardoux/Rascanu:98}, for $k=1$, it
can easily be proven that $u(t,x)=Y_{t}^{t,x}$ is a continuous function and it
represents a viscosity solution for the following semilinear parabolic PDE
\[
\left\{
\begin{array}
[c]{l}
\dfrac{\partial u}{\partial t}(t,x)+\mathcal{A}_{t}u(t,x)+F(t,x,u(t,x))\in
H(t,u(t,x))\partial\varphi(u(t,x)),\smallskip\\
(t,x)\in\lbrack0,T)\times\mathbb{R}^{k}\quad\quad\text{and}\quad\quad
u(T,x)=g(x),\quad\forall x\in\mathbb{R}^{k},
\end{array}
\right.
\]
where the operator $\mathcal{A}_{t}$ is the infinitesimal generator of the
Markov process $\{X_{s}^{t,x},$ $t\leq s\leq T\}$ and it is given by
\[
\mathcal{A}_{t}v(x)=\frac{1}{2}\mathbf{Tr}[(\sigma\sigma^{\ast})(t,x)D^{2}
v(x)]+\left\langle b(t,x),\nabla v(x)\right\rangle .
\]
However, for the multi-dimensional case, the situation changes and the proof
of the existence and uniqueness of a viscosity solution for the above system
of parabolic variational inequalities must follow the approach from Maticiuc,
Pardoux, R\u{a}\c{s}canu and Z\u{a}linescu
\cite{Maticiuc/Pardoux/Rascanu/Zalinescu:10}.
\end{remark}
\noindent More details concerning the restriction to the case when the
generator function does not depend on $Z$ can be found in the comments from
Pardoux \cite{Pardoux:99}, Section 6, page 535. Assume also that all the
hypotheses given by $(H_{2})$ still hold for the deterministic matrix
$H:\left[ 0,T\right] \times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d\times d}$.
For the clarity of the presentation we will omit writing the superscript
$t,x$, especially when dealing with sequences of approximating equations and solutions.
Consider now the Skorokhod space $\mathcal{D}(\left[ 0,T\right]
;\mathbb{R}^{m})$ of c\`{a}dl\`{a}g functions $y:\left[ 0,T\right]
\rightarrow\mathbb{R}^{m}$ (i.e. right continuous with left limits). It
can be shown (see Billingsley \cite{Billingsley:99}) that, although
$\mathcal{D}(\left[ 0,T\right] ;\mathbb{R}^{m})$ is not a complete space
with respect to the Skorokhod metric, there exists a topologically equivalent
metric with respect to which it is complete, so that the Skorokhod space is a
Polish space. The space of continuous functions $C(\left[ 0,T\right]
;\mathbb{R}^{m})$, equipped with the supremum norm topology is a subspace of
$\mathcal{D}(\left[ 0,T\right] ;\mathbb{R}^{m})$; the Skorokhod topology
restricted to $C(\left[ 0,T\right] ;\mathbb{R}^{m})$ coincides with the
uniform topology. We will use on $\mathcal{D}(\left[ 0,T\right]
;\mathbb{R}^{m})$ the Meyer-Zheng topology, which is the topology of
convergence in measure on $\left[ 0,T\right] $, weaker than the Skorokhod
topology. The Borel $\sigma-$field for the Meyer-Zheng topology is the same
canonical $\sigma-$field as for the Skorokhod topology. Note that for the
Meyer-Zheng topology, $\mathcal{D}(\left[ 0,T\right] ;\mathbb{R}^{m})$ is a
metric space but not a Polish space. Contrary to the Skorokhod topology, the
Meyer-Zheng topology on the product space is the product topology.$\smallskip$
\noindent We continue now the proof of Theorem \ref{Th. for weak existence}.\medskip
\begin{proof}
For any fixed $n\geq1$ consider the following approximating equation, which is
in fact BSDE (\ref{approximating eq for general case}) from Section 3, adapted
to our new setup. We have, $\mathbb{P}-a.s.~\omega\in\Omega$
\begin{equation}
Y_{t}^{n}+
{\displaystyle\int_{t}^{T}}
H\left( s,Y_{s}^{n}\right) \nabla\varphi_{1/n}\left( Y_{s}^{n}\right)
ds=g(X_{T}^{t,x})+
{\displaystyle\int_{t}^{T}}
F\left( s,X_{s},Y_{s}^{n}\right) ds-
{\displaystyle\int_{t}^{T}}
Z_{s}^{n}dB_{s},\quad\forall t\in\left[ 0,T\right] .
\label{approx eq for weak existence}
\end{equation}
The estimations obtained in Section 3, Lemma
\ref{Lemma with the estimations from Step 1} apply also to the triplet
$(Y^{n},Z^{n},U^{n})=(Y^{n},Z^{n},\nabla\varphi_{1/n}\left( Y^{n}\right)
)$, which satisfies the uniform boundedness condition given by
(\ref{ineq Y,Z,U}) with the positive constant $C=C(a,b,\Lambda,L(\cdot))$ now
independent of $n$. We will prove a weak convergence in the sense of the
Meyer-Zheng topology, that is, the laws converge weakly if we equip the space
of paths with the topology of convergence in $dt-$measure.$\smallskip$
In the sequel we will employ the following notations
\[
M_{t}^{n}=
{\displaystyle\int_{0}^{t}}
Z_{s}^{n}dB_{s}\quad\text{and}\quad K_{t}^{n}=
{\displaystyle\int_{0}^{t}}
\nabla\varphi_{1/n}\left( Y_{s}^{n}\right) ds.
\]
Our goal is to prove the tightness of the sequence $\{Y^{n},M^{n}\}_{n}$ with
respect to the Meyer-Zheng topology. To do this we must prove the uniform
boundedness (with respect to $n$) of quantities of the type
\[
\mathrm{CV}_{T}\left( \Psi\right) +\mathbb{E}\sup_{s\in\lbrack0,T]}|\Psi
_{s}|,
\]
where the conditional variation $\mathrm{CV}_{T}$ is defined for any adapted
process $\Psi$ with paths a.s. in $\mathcal{D}(\left[ 0,T\right]
;\mathbb{R}^{m})$ and with $\Psi_{t}$ an integrable random variable, for all
$t\in\left[ 0,T\right] $. The conditional variation of $\Psi$ is given by
\begin{equation}
\mathrm{CV}_{T}(\Psi)\overset{def}{=}\sup_{\pi}{\sum_{i=0}^{m-1}{\mathbb{E
}\Big[{\big|}}\mathbb{E}^{{{\mathcal{F}}_{t_{i}}}}{[\Psi_{t_{i+1}}-\Psi
_{t_{i}}]{\big|}\Big],} \label{def cond var}
\end{equation}
where the supremum is taken over all the partitions $\pi:0=t_{0}<t_{1}
<\cdots<t_{m}=T$. If $\mathrm{CV}_{T}(\Psi)<\infty$ then the process $\Psi$ is
called a quasi-martingale. It is clear that if $\Psi$ is a martingale then
$\mathrm{CV}_{T}(\Psi)=0$.
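\medskip\noindent\textit{Illustration.} For the drifted Brownian motion
$\Psi_{t}=\mu t+B_{t}$ the independence of the increments gives
$\mathbb{E}^{\mathcal{F}_{t_{i}}}[\Psi_{t_{i+1}}-\Psi_{t_{i}}]=\mu(t_{i+1}-t_{i})$,
hence $\mathrm{CV}_{T}(\Psi)=|\mu|T$. The Python sketch below (our toy
example, not part of the argument) evaluates the quantity
$\mathrm{CV}_{T}(\Psi)+\mathbb{E}\sup_{s\in[0,T]}|\Psi_{s}|$ that enters the
tightness criterion used next.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu, T, N, M = 0.7, 1.0, 200, 5000
dt = T / N
# Psi_t = mu*t + B_t on M simulated paths
dPsi = mu * dt + rng.normal(0.0, np.sqrt(dt), (M, N))
Psi = np.concatenate([np.zeros((M, 1)), np.cumsum(dPsi, axis=1)], axis=1)

CV = abs(mu) * T   # exact: E[dPsi | F_t] = mu*dt on every partition
Esup = np.mean(np.max(np.abs(Psi), axis=1))  # Monte Carlo E sup |Psi_s|
print(CV + Esup)   # uniform bounds on such quantities give tightness
\end{verbatim}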
We will denote by $C$ a generic constant that can vary from one line to
another, but which remains independent of $n$. Since $M^{n}$ is an
$\mathcal{F}_{t}^{B}-$martingale, we have, by using the hypothesis on $F$ and
the boundedness of $H$,
$\mathrm{CV}_{T}(Y^{n})=\sup\limits_{\pi}
{\displaystyle\sum\limits_{i=0}^{m-1}}
{{\mathbb{E}}\Big[{\big|}}\mathbb{E}^{{{\mathcal{F}}_{t_{i}}}}{[Y_{t_{i+1}}
^{n}-Y_{t_{i}}^{n}]{\big|}\Big]\leq}\mathbb{E}
{\displaystyle\int_{0}^{T}}
|F(s,X_{s},Y_{s}^{n})|ds+
{\displaystyle\int_{0}^{T}}
|H(s,Y_{s}^{n})|d\left\updownarrow K^{n}\right\updownarrow _{s}$\medskip
$\quad\quad\quad\quad\leq C\mathbb{E}
{\displaystyle\int_{0}^{T}}
(1+|Y_{s}^{n}|)ds+b\left\updownarrow K^{n}\right\updownarrow _{T}.\smallskip$
\noindent Since $\left\updownarrow K^{n}\right\updownarrow _{T}=
{\displaystyle\int_{0}^{T}}
|U_{s}^{n}|ds\leq\sqrt{T}{\Big(}
{\displaystyle\int_{0}^{T}}
|U_{s}^{n}|^{2}ds{\Big)}^{1/2}\leq C$, it follows, together with the uniform
boundedness condition given by (\ref{ineq Y,Z,U}), that
\[
\sup_{n\geq1}{\Big(}\mathrm{CV}_{T}(Y^{n})+\mathbb{E}\sup_{s\in\left[
0,T\right] }|Y_{s}^{n}|{\Big)}<+\infty.
\]
\noindent For the rest of the quantities, by standard computations and using
(\ref{ineq Y,Z,U}), we have the following estimates.$\smallskip$
$\mathrm{CV}_{T}(M^{n})=0$ because $M^{n}$ is an $\mathcal{F}_{t}-$martingale.
Using the Burkholder-Davis-Gundy inequality we obtain the second bound,
which involves $M^{n}$.\medskip
$\mathbb{E}\sup\limits_{t\in\left[ 0,T\right] }|M_{t}^{n}|=\mathbb{E}
\sup\limits_{t\in\left[ 0,T\right] }\left\vert
{\displaystyle\int_{0}^{t}}
Z_{s}^{n}dB_{s}\right\vert \leq3\mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
|Z_{s}^{n}|^{2}ds\right) ^{1/2}\leq3\left( \mathbb{E}
{\displaystyle\int_{0}^{T}}
|Z_{s}^{n}|^{2}ds\right) ^{1/2}\leq C$.\medskip
\noindent Therefore, taking the supremum over $n\geq1$ we obtain that the
conditions from the tightness criteria in $\mathcal{D}(\left[ 0,T\right]
;\mathbb{R}^{d})\times\mathcal{D}(\left[ 0,T\right] ;\mathbb{R}^{d}
)[\equiv\mathcal{D}(\left[ 0,T\right] ;\mathbb{R}^{d+d})]$ for the sequence
$\{(Y^{n},M^{n})\}_{n}$ are verified. Using the Prohorov theorem, we have that
there exists a subsequence, still denoted by $n$, such that, as
$n\rightarrow\infty$
\[
(X,B,Y^{n},M^{n})\longrightarrow(X,B,Y,M),\quad\text{in law}
\]
in $C(\left[ 0,T\right] ;\mathbb{R}^{k+k})\times\mathcal{D}(\left[
0,T\right] ;\mathbb{R}^{d+d})$. We equip the previous space with the
product of the topology of uniform convergence on the first factor and the
topology of convergence in measure on the second factor. For each $0\leq s\leq
t$, the mapping $\left( x,y\right) \mapsto\int_{s}^{t}F(r,x(r),y(r))dr$ is
continuous from $C(\left[ 0,T\right] ;\mathbb{R}^{k})\times\mathcal{D}
(\left[ 0,T\right] ;\mathbb{R}^{d})$ topologically equipped in the same
manner, into $\mathbb{R}$. By the Skorokhod theorem, we can choose now a
probability space $\left( \bar{\Omega},\mathcal{\bar{F}},\mathbb{\bar{P}}
\right) $ (it is in fact $([0,1],\mathcal{B}_{[0,1]},\mu)$) on which we
define the processes
\[
\{(\bar{X}^{n},\bar{B}^{n},\bar{Y}^{n},\bar{M}^{n})\}_{n}\quad\text{and}
\quad(\bar{X},\bar{B},\bar{Y},\bar{M}),
\]
having the same law as $\{(X,B,Y^{n},M^{n})\}_{n}$ and $(X,B,Y,M)$,
respectively, such that, in the product space $C(\left[ 0,T\right]
;\mathbb{R}^{k+k})\times\mathcal{D}(\left[ 0,T\right] ;\mathbb{R}^{d+d})$,
as $n\rightarrow\infty$
\[
(\bar{X}^{n},\bar{B}^{n},\bar{Y}^{n},\bar{M}^{n})\overset{\mathbb{\bar{P}}
-a.s.}{\longrightarrow}(\bar{X},\bar{B},\bar{Y},\bar{M}).
\]
Moreover, for each $n\in\mathbb{N}^{\ast}$, $(\bar{X}^{n},\bar{Y}^{n})$
satisfies, for $t\in\left[ 0,T\right] $, $\mathbb{\bar{P}}-a.s.~\omega\in
\bar{\Omega}$
\begin{equation}
d\bar{X}_{s}^{n}=b(s,\bar{X}_{s}^{n})ds+\sigma(s,\bar{X}_{s}^{n})d\bar{B}
_{s}^{n},\quad t\leq s\leq T,\quad\bar{X}_{t}^{n}=x\quad\text{and}
\label{n_bar_forward_equation}
\end{equation}
\begin{equation}
\bar{Y}_{t}^{n}+
{\displaystyle\int_{t}^{T}}
H\left( s,\bar{Y}_{s}^{n}\right) \nabla\varphi_{1/n}\left( \bar{Y}_{s}
^{n}\right) ds=g(\bar{X}_{T}^{n})+
{\displaystyle\int_{t}^{T}}
F\left( s,\bar{X}_{s}^{n},\bar{Y}_{s}^{n}\right) ds-(\bar{M}_{T}^{n}-\bar
{M}_{t}^{n}). \label{n_bar_equation}
\end{equation}
We focus now on passing to the limit and on the identification of
a solution for our problem. Since $dK_{s}^{n}=\nabla\varphi_{1/n}(Y_{s}
^{n})ds\in\partial\varphi(J_{n}(Y_{s}^{n}))(ds)$ we have, for all
$v\in\mathbb{R}^{d}$ and $0\leq t\leq s_{1}\leq s_{2}$
\[
\int_{s_{1}}^{s_{2}}\varphi(J_{n}(Y_{s}^{n}))ds\leq\int_{s_{1}}^{s_{2}}
(J_{n}(Y_{s}^{n})-v)\nabla\varphi_{1/n}(Y_{s}^{n})ds+\int_{s_{1}}^{s_{2}
}\varphi(v)ds.
\]
Using similar arguments to the ones found in Pardoux and R\u{a}\c{s}canu
\cite{Pardoux/Rascanu:09}, Proposition 1.19, it easily follows that, also for
all $v\in\mathbb{R}^{d},$ $0\leq t\leq s_{1}\leq s_{2}$ and every
$A\in\mathcal{\bar{F}}$,
\begin{equation}
\mathbb{\bar{E}}\int_{s_{1}}^{s_{2}}\mathbf{1}_{A}\varphi(J_{n}(\bar{Y}
_{s}^{n}))ds\leq\mathbb{\bar{E}}\int_{s_{1}}^{s_{2}}\mathbf{1}_{A}(J_{n}
(\bar{Y}_{s}^{n})-v)\nabla\varphi_{1/n}(\bar{Y}_{s}^{n})ds+\mathbb{\bar{E}
}\int_{s_{1}}^{s_{2}}\mathbf{1}_{A}\varphi(v)ds, \label{n_bar_inequality}
\end{equation}
that is, $\mathbb{\bar{P}}-a.s.~\omega\in\bar{\Omega},$ $\nabla\varphi
_{1/n}(\bar{Y}_{s}^{n})\in\partial\varphi(J_{n}(\bar{Y}_{s}^{n}))$, for all
$s\in\left[ t,T\right] $. We write (\ref{ineq Y,Z,U}) for $\bar{Y}^{n}$ and,
by using the definition of the Yosida approximation, we obtain that there
exists a positive constant $C$, independent of $n$, such that $\mathbb{E}
{\textstyle\int_{0}^{T}}
|Y_{s}^{n}-J_{n}(Y_{s}^{n})|^{2}ds\leq\frac{1}{n^{2}}C$. The fact that
$Y^{n}\overset{\mathcal{L}}{\sim}\bar{Y}^{n}$ yields
\[
\mathbb{\bar{E}}
{\displaystyle\int_{0}^{T}}
|\bar{Y}_{s}^{n}-J_{n}(\bar{Y}_{s}^{n})|^{2}ds\leq\frac{1}{n^{2}}C.
\]
Consequently, $\bar{Y}^{n}-J_{n}(\bar{Y}^{n})\longrightarrow0$ as
$n\rightarrow\infty$ in $L^{2}(\bar{\Omega}\times(0,T);\mathbb{R}^{d})$.
Therefore, $J_{n}(\bar{Y}^{n})$ converges also in $L^{2}(\bar{\Omega}
\times(0,T);\mathbb{R}^{d})$ to $\bar{Y}$ when $n\rightarrow\infty$. The
boundedness (\ref{ineq Y,Z,U}) also implies the existence of a process
$\bar{U}$ such that
\[
\nabla\varphi_{1/n}(\bar{Y}^{n})\rightharpoonup\bar{U}\quad\text{as
}n\rightarrow\infty\text{, in}\quad L^{2}(\bar{\Omega}\times(0,T);\mathbb{R}
^{d}).
\]
In addition, passing to $\liminf_{n\rightarrow+\infty}$ in
(\ref{n_bar_inequality}), due to the lower-semicontinuity of $\varphi$ we
obtain, for all $v\in\mathbb{R}^{d}$ and all $0\leq t\leq s_{1}\leq s_{2}$,
$\mathbb{\bar{P}}-a.s.~\omega\in\bar{\Omega}$
\[
\int_{s_{1}}^{s_{2}}\varphi(\bar{Y}_{s})ds\leq\int_{s_{1}}^{s_{2}}(\bar{Y}
_{s}-v)\bar{U}_{s}ds+\int_{s_{1}}^{s_{2}}\varphi(v)ds,
\]
which means $d\bar{K}_{s}\overset{def}{=}\bar{U}_{s}ds\in\partial\varphi
(\bar{Y}_{s})(ds)$.
Finally, we pass to the limit, as $n\rightarrow\infty$, in the equations
(\ref{n_bar_forward_equation}) and (\ref{n_bar_equation}). The convergence of
$(\bar{X}^{n},\bar{B}^{n},\bar{Y}^{n},\bar{M}^{n})$ to $(\bar{X},\bar{B}
,\bar{Y},\bar{M})$ implies, $\mathbb{\bar{P}}-a.s.~\omega\in\bar{\Omega}$,
\[
\bar{X}_{s}=x+
{\displaystyle\int_{t}^{s}}
b(r,\bar{X}_{r})dr+
{\displaystyle\int_{t}^{s}}
\sigma(r,\bar{X}_{r})d\bar{B}_{r},\quad t\leq s\leq T
\]
and
\[
\bar{Y}_{t}+
{\displaystyle\int_{t}^{T}}
H\left( s,\bar{Y}_{s}\right) \bar{U}_{s}ds=g(\bar{X}_{T})+
{\displaystyle\int_{t}^{T}}
F\left( s,\bar{X}_{s},\bar{Y}_{s}\right) ds-(\bar{M}_{T}-\bar{M}_{t}).
\]
Since the processes $\bar{Y}$ and $\bar{M}$ are c\`{a}dl\`{a}g, the above
equality holds for any $t\in\left[ 0,T\right] $.
Summarizing, we obtained that the collection $(\bar{\Omega},\mathcal{\bar{F}
},\mathbb{\bar{P}},\mathcal{F}_{t}^{\bar{Y},\bar{M}},\bar{Y}_{t},\bar{M}
_{t},\bar{K}_{t})_{t\in\left[ 0,T\right] }$ is a weak solution of
Eq.(\ref{BSVI Markovian}), in the sense of Definition
\ref{Def of weak solution}, and the proof is now complete.
\hfill
\end{proof}
\begin{remark}
Alternatively, one can use another approximating equation instead of
(\ref{approx eq for weak existence}) to prove the existence of a weak
solution. This new approach comes with additional benefits from the
perspective of constructing numerical approximating schemes for our stochastic
variational inequality. For $n\in\mathbb{N}^{\ast}$ we consider a partition of
the time interval $\left[ 0,T\right] $ of the form $0=t_{0}<t_{1}
<...<t_{n}=T$ with $t_{i}=\frac{iT}{n}$ for every $i=\overline{0,n-1}$ and
define
\begin{equation}
\left\{
\begin{array}
[c]{l}
Y_{t_{n}}^{n}=\eta,\medskip\\
Y_{t}^{n}+
{\displaystyle\int_{t}^{t_{i+1}}}
H_{s}^{n}dK_{s}^{n}=Y_{t_{i+1}}^{n}+
{\displaystyle\int_{t}^{t_{i+1}}}
F(s,X_{s},Y_{s}^{n})ds-
{\displaystyle\int_{t}^{t_{i+1}}}
Z_{s}^{n}dB_{s},\text{ }\forall t\in\lbrack t_{i},t_{i+1}),\medskip\\
dK_{s}^{n}=U_{s}^{n}ds\in\partial\varphi(Y_{s}^{n})(ds),
\end{array}
\right. \label{alternatively approx eq for weak existence}
\end{equation}
where, for $s\in\lbrack\frac{iT}{n},\frac{(i+1)T}{n})$,
\[
H_{s}^{n}\overset{def}{=}\frac{n}{T}
{\displaystyle\int_{s-\frac{T}{n}}^{s}}
\mathbb{E}^{\mathcal{F}_{r}}\left( H\left( r,Y_{r+\frac{2T}{n}}^{n}\right)
\right) dr.
\]
For the consistency of (\ref{alternatively approx eq for weak existence}) we
must extend $Y_{t}^{n}=\eta$, $U_{t}^{n}=0$ for $t\notin\left[ 0,T\right] $
and, $\mathbb{P}-a.s.~\omega\in\Omega$, $U_{t}^{n}\in\partial\varphi(Y_{t}
^{n})$ a.e. $t\in(0,T)$. The mapping $s\mapsto H_{s}^{n}$ is a bounded
$C^{1}$ progressively measurable matrix on each interval $(t_{i},t_{i+1})$;
$H^{n}$ and its inverse $[H^{n}]^{-1}$ satisfy (\ref{hypothesis on H}). We
highlight that all the constants that appear in (\ref{hypothesis on H}) remain
independent of $n$. Also, it is clear that, for any continuous process $V$
\[
\frac{n}{T}
{\displaystyle\int_{s-\frac{T}{n}}^{s}}
\mathbb{E}^{\mathcal{F}_{r}}\left( H\left( r,V_{r+\frac{2T}{n}}\right)
\right) dr\underset{n\rightarrow\infty}{\longrightarrow}H(s,V_{s}).
\]
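For a deterministic continuous path $V$ the conditional expectation drops out
and the convergence above can be checked directly. The sketch below is a
scalar stand-in (the functions $H$ and $V$ are our illustrative choices, not
those of the paper).
\begin{verbatim}
import numpy as np

T = 1.0
H = lambda r, v: 2.0 + np.sin(r + v)   # bounded scalar stand-in for H(r, y)
V = lambda r: np.cos(r)                # a continuous deterministic path

s = 0.5
for n in (10, 100, 1000):
    r = np.linspace(s - T / n, s, 2000)
    # (n/T) * integral over a window of length T/n = average over the window
    H_n = H(r, V(r + 2 * T / n)).mean()
    print(n, H_n, H(s, V(s)))          # H_s^n approaches H(s, V_s)
\end{verbatim}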
By Theorem \ref{Th. for strong existence} the triplet $(Y^{n},Z^{n},U^{n})$ is
uniquely defined by Eq.(\ref{alternatively approx eq for weak existence}) as
its strong solution. One can rewrite Eq.
(\ref{alternatively approx eq for weak existence}) under a global form on the
entire time interval $\left[ 0,T\right] $. We have, $\mathbb{P}
-a.s.~\omega\in\Omega$,
\begin{equation}
\left\{
\begin{array}
[c]{l}
Y_{t_{n}}^{n}=\eta,\medskip\\
Y_{t}^{n}+
{\displaystyle\int_{t}^{T}}
H_{s}^{n}dK_{s}^{n}=\eta+
{\displaystyle\int_{t}^{T}}
F(s,X_{s},Y_{s}^{n})ds-
{\displaystyle\int_{t}^{T}}
Z_{s}^{n}dB_{s},\text{ }\forall t\in\left[ 0,T\right] ,\medskip\\
dK_{s}^{n}=U_{s}^{n}ds\in\partial\varphi(Y_{s}^{n})(ds)
\end{array}
\right. \label{alternatively global approx eq for weak existence}
\end{equation}
and we obtain that the triplet $(Y^{n},Z^{n},U^{n})$ satisfies a boundedness
property similar to (\ref{ineq Y,Z,U}). This permits us to prove, in the same
manner as in Theorem \ref{Th. for weak existence}, the tightness criteria
followed by the existence of a weak solution.
\end{remark}
\section{Annex}
For the clarity of the proofs from the main body of this article we will group
in this section some useful results that are used throughout this paper. For
more details the interested reader can consult the monograph of Pardoux and
R\u{a}\c{s}canu \cite{Pardoux/Rascanu:09}.
\subsection{BSDEs with Lipschitz coefficient}
We first introduce the spaces that will appear in the next results. Denote by
$S_{d}^{p}\left[ 0,T\right] $, $p\geq0$, the space of progressively
measurable continuous stochastic processes $X:\Omega\times\left[ 0,T\right]
\rightarrow\mathbb{R}^{d}$ such that
\[
\left\Vert X\right\Vert _{S_{d}^{p}}=\left\{
\begin{array}
[c]{ll
\left( \mathbb{E}\left\Vert X\right\Vert _{T}^{p}\right) ^{\frac{1}{p}
\wedge1}<{\infty}, & \;\text{if }p>0,\medskip\\
\mathbb{E}\left[ 1\wedge\left\Vert X\right\Vert _{T}\right] , & \;\text{if
}p=0,
\end{array}
\right.
\]
where $\left\Vert X\right\Vert _{T}=\sup_{t\in\left[ 0,T\right] }\left\vert
X_{t}\right\vert $. The space $(S_{d}^{p}\left[ 0,T\right] ,\left\Vert
\cdot\right\Vert _{S_{d}^{p}}),\ p\!\geq1,$ is a Banach space and $S_{d}
^{p}\left[ 0,T\right] $, $0\leq p<1$, is a complete metric space with the
metric $\rho(Z_{1},Z_{2})=\left\Vert Z_{1}-Z_{2}\right\Vert _{S_{d}^{p}}$
(when $p=0$ the metric convergence coincides with convergence in probability).
Denote by $\Lambda_{d\times k}^{p}\left( 0,T\right) ,\ p\in\lbrack0,{\infty
})$, the space of progressively measurable stochastic processes $Z:{\Omega
}\times(0,T)\rightarrow\mathbb{R}^{d\times k}$ such that
\[
\left\Vert Z\right\Vert _{\Lambda^{p}}=\left\{
\begin{array}
[c]{ll
\left[ \mathbb{E}\left( \displaystyle\int_{0}^{T}\Vert Z_{s}\Vert
^{2}ds\right) ^{\frac{p}{2}}\right] ^{\frac{1}{p}\wedge1}, & \;\text{if
}p>0,\bigskip\\
\mathbb{E}\left[ 1\wedge\left( \displaystyle\int_{0}^{T}\Vert Z_{s}\Vert
^{2}ds\right) ^{\frac{1}{2}}\right] , & \;\text{if }p=0.
\end{array}
\right.
\]
The space $(\Lambda_{d\times k}^{p}\left( 0,T\right) ,\left\Vert
\cdot\right\Vert _{\Lambda^{p}}),\ p\geq1,$ is a Banach space and
$\Lambda_{d\times k}^{p}\left( 0,T\right) $, $0\leq p<1,$ is a complete
metric space with the metric $\rho(Z_{1},Z_{2})=\left\Vert Z_{1}
-Z_{2}\right\Vert _{\Lambda^{p}}$.\medskip
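\noindent Both norms are easy to estimate from simulated paths; a minimal
Monte Carlo sketch with arbitrary toy processes (our choices) for $p=2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, N, T, p = 10000, 200, 1.0, 2
dt = T / N
# toy paths: X a Brownian motion (continuous), Z i.i.d. noise
X = np.concatenate([np.zeros((M, 1)),
                    np.cumsum(rng.normal(0.0, np.sqrt(dt), (M, N)), axis=1)],
                   axis=1)
Z = rng.normal(size=(M, N))

S_p = np.mean(np.max(np.abs(X), axis=1) ** p) ** (1 / p)          # ||X||_{S^p}
L_p = np.mean((np.sum(Z**2, axis=1) * dt) ** (p / 2)) ** (1 / p)  # ||Z||_{Lambda^p}
print(S_p, L_p)
\end{verbatim}
\medskip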
Let us consider the following generalized BSDE
\begin{equation}
Y_{t}=\eta+\int_{t}^{T}\Phi\left( s,Y_{s},Z_{s}\right) dQ_{s}-\int_{t}
^{T}Z_{s}dB_{s},\;t\in\left[ 0,T\right] ,\text{ }\mathbb{P}-a.s.~\omega
\in\Omega, \label{gen BSDE Lip}
\end{equation}
where
\begin{itemize}
\item $\quad\eta:\Omega\rightarrow\mathbb{R}^{d}$\textit{ }is an\textit{
}$\mathcal{F}_{T}-$measurable random vector;
}$\mathcal{F}_{T}-$measurable random vector;
\item $\quad Q$ is a\textit{ }progressively measurable increasing continuous
stochastic process such that $Q_{0}=0$;
\item $\quad\Phi:\Omega\times\left[ 0,T\right] \times\mathbb{R}
^{d}\times\mathbb{R}^{d\times k}\rightarrow\mathbb{R}^{d}$, for which we denote
$\Phi_{\rho}^{\#}\left( t\right) \overset{def}{=}\sup_{\left\vert
y\right\vert \leq\rho}\left\vert \Phi(t,y,0)\right\vert .$
\end{itemize}
\noindent We shall assume that:\medskip
\noindent\textbf{(BSDE-LH) }:
\begin{itemize}
\item[$\left( i\right) $] \text{for all }$y\in\mathbb{R}^{d}$ and
$z\in\mathbb{R}^{d\times k}$ the function $\Phi\left( \cdot,\cdot,y,z\right)
:\Omega\times\left[ 0,T\right] \rightarrow\mathbb{R}^{d}$ is progressively measurable;
\item[$\left( ii\right) $] there exist the progressively measurable
stochastic processes\textit{ }$L,\ell,\alpha:\Omega\times\left[ 0,T\right]
\rightarrow\mathbb{R}_{+}$ such that
\[
\alpha_{t}dQ_{t}=dt\quad\quad\quad\text{and}\quad\quad\quad
{\displaystyle\int_{0}^{T}}
\left( L_{t}dQ_{t}+\ell_{t}^{2}dt\right) <\infty,\;\mathbb{P}-a.s.~\omega
\in\Omega
\]
and, for all $t\in\left[ 0,T\right] $, $y,y^{\prime}\in\mathbb{R}^{d}$
and $z,z^{\prime}\in\mathbb{R}^{d\times k}$, $\mathbb{P}-a.s.~\omega\in\Omega$,
\begin{equation}
\begin{array}
[c]{r}
\text{\textit{Lipschitz conditions :}}\\
\\
\\
\text{\textit{Boundedness condition :}}
\end{array}
\begin{array}
[c]{rl}
\left( a\right) \quad & \left\vert \Phi(t,y^{\prime},z)-\Phi
(t,y,z)\right\vert \leq L_{t}|y^{\prime}-y|,\medskip\\
\left( b\right) \quad & |\Phi(t,y,z^{\prime})-\Phi(t,y,z)|\leq\alpha_{t}
\ell_{t}|z^{\prime}-z|,\medskip\\
\left( c\right) \quad &
{\displaystyle\int_{0}^{T}}
\Phi_{\rho}^{\#}\left( t\right) dQ_{t}<\infty,\quad\forall\rho\geq0.
\end{array}
\label{ch5-gl1}
\end{equation}
\end{itemize}
\noindent Remark that condition $\alpha_{t}dQ_{t}=dt$ implies
\[
\Phi\left( t,Y_{t},Z_{t}\right) dQ_{t}=F\left( t,Y_{t},Z_{t}\right)
dt+G\left( t,Y_{t}\right) dA_{t},
\]
where $G$ does not depend on the $z$ variable.
\noindent Let $p>1$ and $n_{p}\overset{def}{=}1\wedge\left( p-1\right) $.
The following existence and uniqueness result holds.
\begin{theorem}
[See Theorem 5.29 from Pardoux and R\u{a}\c{s}canu \cite{Pardoux/Rascanu:09}
]\label{ch5-t2aa}Suppose that the assumptions \textbf{(BSDE-LH)} are
satisfied. Consider
\[
V_{t}=
{\displaystyle\int_{0}^{t}}
L_{s}dQ_{s}+\dfrac{1}{n_{p}}
{\displaystyle\int_{0}^{t}}
\ell_{s}^{2}ds.
\]
If, for all $\delta>1$
\begin{equation}
\mathbb{E}|e^{\delta V_{T}}\eta|^{p}+\mathbb{E}\Big(
{\displaystyle\int_{0}^{T}}
e^{\delta V_{t}}\left\vert \Phi\left( t,0,0\right) \right\vert
dQ_{t}\Big)^{p}<\infty\label{ip-V-delta}
\end{equation}
then the BSDE (\ref{gen BSDE Lip}) admits a unique solution $\left(
Y,Z\right) \in S_{d}^{0}\left[ 0,T\right] \times\Lambda_{d\times k
^{0}\left( 0,T\right) $ such that
\[
\mathbb{E}\sup\limits_{s\in\left[ 0,T\right] }e^{\delta pV_{s}}\left\vert
Y_{s}\right\vert ^{p}+\mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
e^{2\delta V_{s}}\left\vert Y_{s}\right\vert ^{2}L_{s}dQ_{s}\right)
^{p/2}+\mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
e^{2\delta V_{s}}\left\vert Z_{s}\right\vert ^{2}ds\right) ^{p/2}<\infty.
\]
\end{theorem}
Consider now the BSDE
\begin{equation}
Y_{t}=\eta+\int_{t}^{T}F\left( s,Y_{s},Z_{s}\right) ds-\int_{t}^{T}
Z_{s}dB_{s},\;t\in\left[ 0,T\right] ,\;\mathbb{P}-a.s.~\omega\in\Omega,
\label{ch5-ll8}
\end{equation}
where for all $y\in\mathbb{R}^{d}$, $z\in\mathbb{R}^{d\times k}$, the function
$F\left( \cdot,y,z\right) :\left[ 0,T\right] \rightarrow\mathbb{R}^{d}$ is
measurable and there exist some measurable deterministic functions
$L,\kappa,\rho\in L^{1}\left( 0,T;\mathbb{R}_{+}\right) $ and $\ell\in
L^{2}\left( 0,T;\mathbb{R}_{+}\right) $ such that, for all $y,y^{\prime}
\in\mathbb{R}^{d}$, $z,z^{\prime}\in\mathbb{R}^{d\times k}$, $dt$-a.e.,
\begin{equation}
\begin{array}
[c]{l}
\left\vert F(t,y^{\prime},z)-F(t,y,z)\right\vert \leq L\left( t\right)
\left( 1+\left\vert y\right\vert \vee\left\vert y^{\prime}\right\vert
\right) |y^{\prime}-y|,\medskip\\
|F(t,y,z^{\prime})-F(t,y,z)|\leq\ell\left( t\right) |z^{\prime}
-z|,\medskip\\
\left\vert F(t,y,0)\right\vert \leq\rho\left( t\right) +\kappa\left(
t\right) \left\vert y\right\vert .
\end{array}
\label{ch5-ll9}
\end{equation}
Letting $\gamma\left( t\right) =\kappa\left( t\right) +\dfrac{1}{n_{p}
}\ell^{2}\left( t\right) $ and $\bar{\gamma}\left( t\right) =
{\displaystyle\int_{0}^{t}}
\left( \kappa\left( s\right) +\dfrac{1}{n_{p}}\ell^{2}\left( s\right)
\right) ds$, consider the stochastic process $\beta\in S_{1}^{0}\left[
0,T\right] $ given by
\[
\beta_{t}=C^{\prime}\left( 1+\left( \mathbb{E}^{\mathcal{F}_{t}}\left\vert
\eta\right\vert ^{p}\right) ^{1/p}\right) \geq\left( C_{p}\right)
^{1/p}e^{-\bar{\gamma}\left( t\right) }\left\{ \mathbb{E}^{\mathcal{F}_{t}
}\left[ |e^{\bar{\gamma}\left( T\right) }\eta|^{p}+\left(
{\displaystyle\int_{t}^{T}}
e^{\bar{\gamma}\left( s\right) }\rho\left( s\right) ds\right)
^{p}\right] \right\} ^{1/p},
\]
where $C^{\prime}=C^{\prime}\left( p,\bar{\gamma}\left( T\right) ,
{\displaystyle\int_{0}^{T}}
\rho\left( s\right) ds\right) $.
\noindent Denote
\[
\nu_{t}=
{\displaystyle\int_{0}^{t}}
L\left( s\right) \left[ \mathbb{E}^{\mathcal{F}_{s}}\left\vert
\eta\right\vert ^{p}\right] ^{1/p}ds\quad\quad\text{and}\quad\quad\theta
=\sup_{t\in\left[ 0,T\right] }\left( \mathbb{E}^{\mathcal{F}_{t}}\left\vert
\eta\right\vert ^{p}\right) ^{1/p}~.
\]
\begin{theorem}
\label{Corollary existence sol}Let $p>1$ and the assumptions (\ref{ch5-ll9})
be satisfied. If $\mathbb{E}e^{\delta\theta}<\infty,$ for all $\delta>0$, then
the BSDE (\ref{ch5-ll8}) admits a unique solution $\left( Y,Z\right) \in
S_{d}^{0}\left[ 0,T\right] \times\Lambda_{d\times k}^{0}\left( 0,T\right)
$ such that, for all $\delta>0$
\[
\mathbb{E}\sup\limits_{s\in\left[ 0,T\right] }e^{\delta p\nu_{s}}\left\vert
Y_{s}\right\vert ^{p}+\mathbb{E~}\left(
{\displaystyle\int_{0}^{T}}
e^{2\delta\nu_{s}}\left\vert Z_{s}\right\vert ^{2}ds\right) ^{p/2}<\infty.
\]
Moreover, $\mathbb{P}-a.s.~\omega\in\Omega$,
\[
\left\vert Y_{t}\right\vert \leq C^{\prime}\left( 1+\left( \mathbb{E}
^{\mathcal{F}_{t}}~\left\vert \eta\right\vert ^{p}\right) ^{1/p}\right)
,\quad~\text{for all }t\in\left[ 0,T\right] .
\]
\end{theorem}
\begin{proof}
Consider the projection operator $\pi:\Omega\times\left[ 0,T\right]
\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d},$
\[
\pi_{t}\left( \omega,y\right) =\pi\left( \omega,t,y\right) =y\left[
1-\left( 1-\frac{\beta_{t}\left( \omega\right) }{\left\vert y\right\vert
}\right) ^{+}\right] =\left\{
\begin{array}
[c]{ll}
y, & \quad\text{if }\left\vert y\right\vert \leq\beta_{t}\left(
\omega\right) ,\medskip\\
\dfrac{y}{\left\vert y\right\vert }\beta_{t}\left( \omega\right) , &
\quad\text{if }\left\vert y\right\vert >\beta_{t}\left( \omega\right) .
\end{array}
\right.
\]
Remark that, for all $y,y^{\prime}\in\mathbb{R}^{d}$, $\pi\left( \cdot
,\cdot,y\right) $ is a progressively measurable stochastic process,
$\left\vert \pi_{t}\left( y\right) \right\vert \leq\beta_{t}$ and
\[
|\pi_{t}\left( y\right) -\pi_{t}(y^{\prime})|\leq|y-y^{\prime}|.
\]
Let $\tilde{\Phi}\left( s,y,z\right) =\Phi\left( s,\pi_{s}\left( y\right)
,z\right) .$ The function $\tilde{\Phi}$ is globally Lipschitz with respect
to $\left( y,z\right) $:
\begin{align*}
|\tilde{\Phi}\left( s,y,z\right) -\tilde{\Phi}(s,y^{\prime},z)| &
=|\Phi\left( s,\pi_{s}\left( y\right) ,z\right) -\Phi(s,\pi_{s}(y^{\prime
}),z)|\\
& \leq L(s)\left( 1+\left\vert \pi_{s}\left( y\right) \right\vert \vee
|\pi_{s}(y^{\prime})|\right) |\pi_{s}\left( y\right) -\pi_{s}(y^{\prime
})|\\
& \leq L(s)\left( 1+\beta_{s}\right) |y-y^{\prime}|
\end{align*}
and
\[
|\tilde{\Phi}\left( s,y,z\right) -\tilde{\Phi}(s,y,z^{\prime})|=\left\vert
\Phi\left( s,\pi_{s}\left( y\right) ,z\right) -\Phi(s,\pi_{s}\left(
y\right) ,z^{\prime})\right\vert \leq\alpha_{s}\ell(s)|z-z^{\prime}|.
\]
Then, according to Theorem \ref{ch5-t2aa}, the BSDE
\begin{equation}
Y_{t}=\eta+\int_{t}^{T}\tilde{\Phi}\left( s,Y_{s},Z_{s}\right) dQ_{s}
-\int_{t}^{T}Z_{s}dB_{s},\;t\in\left[ 0,T\right] , \label{eq-localiz}
\end{equation}
admits a unique solution $\left( Y,Z\right) \in S_{d}^{0}\left[ 0,T\right]
\times\Lambda_{d\times k}^{0}\left( 0,T\right) $ satisfying
\[
\mathbb{E}\sup\limits_{s\in\left[ 0,T\right] }e^{\delta pV_{s}}\left\vert
Y_{s}\right\vert ^{p}+\mathbb{E}\left(
{\displaystyle\int_{0}^{T}}
e^{2\delta V_{s}}\left\vert Z_{s}\right\vert ^{2}ds\right) ^{p/2}<\infty,
\]
where
\[
V_{t}=
{\displaystyle\int_{0}^{t}}
\left[ \kappa\left( s\right) +L\left( s\right) \left( 1+\beta
_{s}\right) +\frac{1}{n_{p}}\ell^{2}\left( s\right) \right] ds\leq C+
{\displaystyle\int_{0}^{t}}
L\left( s\right) \left[ \mathbb{E}^{\mathcal{F}_{s}}\left\vert
\eta\right\vert ^{p}\right] ^{1/p}.
\]
Since we have
\begin{align*}
\left\langle Y_{t},\tilde{\Phi}\left( t,Y_{t},Z_{t}\right) dQ_{t}
\right\rangle & =\left\langle Y_{t},\Phi\left( t,\pi_{t}\left(
Y_{t}\right) ,Z_{t}\right) dQ_{t}\right\rangle \\
& \leq\left\vert Y_{t}\right\vert \rho(t)dQ_{t}+\left\vert Y_{t}\right\vert
^{2}\gamma(t)dQ_{t}+\frac{n_{p}}{4}\left\vert Z_{t}\right\vert ^{2}dt
\end{align*}
it follows that $\left\vert Y_{t}\right\vert \leq\beta_{t}$ and, consequently,
$\tilde{\Phi}\left( t,Y_{t},Z_{t}\right) =\Phi\left( t,Y_{t},Z_{t}\right)
$, that is, $\left( Y,Z\right) $ is the unique solution of BSDE
(\ref{ch5-ll8}).
\hfill
\end{proof}
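\noindent\textit{Remark (numerical illustration).} BSDEs of the type
(\ref{ch5-ll8}) are commonly approximated by a backward Euler scheme in which
the conditional expectations are computed by least-squares regression. The
sketch below is a minimal one-dimensional example with an assumed driver
$F(t,y,z)=-y/2$ and terminal value $\eta=\cos(B_{T})$ (both our illustrative
choices); it is not the localization construction used in the proof above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T, N, M = 1.0, 50, 20000        # horizon, time steps, Monte Carlo paths
dt = T / N
F = lambda t, y, z: -0.5 * y    # assumed driver, Lipschitz in (y, z)
g = lambda x: np.cos(x)         # terminal condition eta = g(B_T)

dB = rng.normal(0.0, np.sqrt(dt), size=(M, N))
B = np.concatenate([np.zeros((M, 1)), np.cumsum(dB, axis=1)], axis=1)

Y = g(B[:, -1])
for i in range(N - 1, -1, -1):
    basis = np.vander(B[:, i], 4)   # polynomial basis for E[ . | F_{t_i}]
    cZ, *_ = np.linalg.lstsq(basis, Y * dB[:, i] / dt, rcond=None)
    Z = basis @ cZ                  # Z_{t_i} ~ E[Y_{t_{i+1}} dB_i | F_{t_i}]/dt
    cY, *_ = np.linalg.lstsq(basis, Y, rcond=None)
    Ey = basis @ cY                 # E[Y_{t_{i+1}} | F_{t_i}]
    Y = Ey + F(i * dt, Ey, Z) * dt  # explicit backward Euler step
print("Y_0 ~", Y.mean())
\end{verbatim}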
\subsection{Moreau-Yosida regularization of a convex function}
By $\nabla\varphi_{\varepsilon}$ we denote the gradient of the Yosida's
regularization $\varphi_{\varepsilon}$ of the function $\varphi$. More
precisely (see Br\'{e}zis \cite{Brezis:73})
\[
\varphi_{\varepsilon}(x)=\inf\,\{\frac{1}{2\varepsilon}|z-x|^{2}
+\varphi(z):\;z\in\mathbb{R}^{d}\}=\dfrac{1}{2\varepsilon}|x-J_{\varepsilon
}x|^{2}+\varphi(J_{\varepsilon}x),
\]
where $J_{\varepsilon}x=x-\varepsilon\nabla\varphi_{\varepsilon}(x).$ The
function $\varphi_{\varepsilon}:\mathbb{R}^{d}\rightarrow\mathbb{R}$ is
convex and differentiable and has the following main properties: for all
$x,y\in\mathbb{R}^{d}$ and $\varepsilon,\delta>0$,
\begin{equation}
\begin{array}
[c]{ll}
a)\quad & \nabla\varphi_{\varepsilon}(x)=\partial\varphi_{\varepsilon}\left(
x\right) \in\partial\varphi(J_{\varepsilon}x),\text{ and }\varphi
(J_{\varepsilon}x)\leq\varphi_{\varepsilon}(x)\leq\varphi(x),\medskip\\
b)\quad & \left\vert \nabla\varphi_{\varepsilon}(x)-\nabla\varphi
_{\varepsilon}(y)\right\vert \leq\dfrac{1}{\varepsilon}\left\vert
x-y\right\vert ,\medskip\\
c)\quad & \left\langle \nabla\varphi_{\varepsilon}(x)-\nabla\varphi
_{\varepsilon}(y),x-y\right\rangle \geq0,\medskip\\
d)\quad & \left\langle \nabla\varphi_{\varepsilon}(x)-\nabla\varphi_{\delta
}(y),x-y\right\rangle \geq-(\varepsilon+\delta)\left\langle \nabla
\varphi_{\varepsilon}(x),\nabla\varphi_{\delta}(y)\right\rangle .
\end{array}
\label{sub6a}
\end{equation}
If $0=\varphi\left( 0\right) \leq\varphi\left( x\right) $ for all
$x\in\mathbb{R}^{d}$ then
\begin{equation}
\begin{array}
[c]{l}
\left( a\right) \quad\quad0=\varphi_{\varepsilon}(0)\leq\varphi
_{\varepsilon}(x)\quad\text{and}\quad J_{\varepsilon}\left( 0\right)
=\nabla\varphi_{\varepsilon}\left( 0\right) =0,\smallskip\\
\left( b\right) \quad\quad\dfrac{\varepsilon}{2}|\nabla\varphi_{\varepsilon
}(x)|^{2}\leq\varphi_{\varepsilon}(x)\leq\left\langle \nabla\varphi
_{\varepsilon}(x),x\right\rangle ,\quad\forall x\in\mathbb{R}^{d}.
\end{array}
\label{sub6c}
\end{equation}
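\noindent For $d=1$ and $\varphi(x)=|x|$ everything above is explicit:
$J_{\varepsilon}$ is the soft-thresholding map, $\nabla\varphi_{\varepsilon}$
the clipped gradient and $\varphi_{\varepsilon}$ the Huber function. A short
sketch checking some of the properties (\ref{sub6a})--(\ref{sub6c}) on this
example (the choice of $\varphi$ is ours, for illustration only):
\begin{verbatim}
import numpy as np

phi = np.abs                                   # phi(x) = |x|, convex, phi(0)=0
J = lambda x, e: np.sign(x) * np.maximum(np.abs(x) - e, 0.0)  # resolvent J_e
grad = lambda x, e: (x - J(x, e)) / e          # Yosida gradient: clip(x/e,-1,1)
phi_e = lambda x, e: np.abs(x - J(x, e))**2 / (2*e) + phi(J(x, e))  # envelope

x, eps = np.linspace(-2.0, 2.0, 401), 0.1
assert np.all(phi(J(x, eps)) <= phi_e(x, eps) + 1e-12)  # phi(J_e x) <= phi_e
assert np.all(phi_e(x, eps) <= phi(x) + 1e-12)          # phi_e <= phi
assert np.all(eps/2 * grad(x, eps)**2 <= phi_e(x, eps) + 1e-12)  # (sub6c)(b)
assert np.all(phi_e(x, eps) <= grad(x, eps) * x + 1e-12)         # (sub6c)(b)
\end{verbatim}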
\begin{proposition}
\label{p12annexB}Let\ $\varphi:\mathbb{R}^{d}\rightarrow]-\infty,+\infty]$ be
a proper convex lower semicontinuous function such that $int\left( Dom\left(
\varphi\right) \right) \neq\emptyset.$ Let $\left( u_{0},\hat{u}
_{0}\right) \in\partial\varphi,$ $r_{0}\geq0$ and
\[
\varphi_{u_{0},r_{0}}^{\#}\overset{def}{=}\sup\left\{ \varphi\left(
u_{0}+r_{0}v\right) :\left\vert v\right\vert \leq1\right\} .
\]
Then, for all $\,0\leq s\leq t$ and $dk\left( t\right) \in\partial
\varphi\left( x\left( t\right) \right) \left( dt\right) $
\begin{equation}
r_{0}\left( \left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\right) +
{\displaystyle\int_{s}^{t}}
\varphi(x(r))dr\leq
{\displaystyle\int_{s}^{t}}
\left\langle x\left( r\right) -u_{0},dk\left( r\right) \right\rangle
+\left( t-s\right) \varphi_{u_{0},r_{0}}^{\#} \label{Ba6a}
\end{equation}
and, moreover
\begin{equation}
\begin{array}
[c]{l}
r_{0}\left( \left\updownarrow k\right\updownarrow _{t}-\left\updownarrow
k\right\updownarrow _{s}\right) +
{\displaystyle\int_{s}^{t}}
\left\vert \varphi(x(r))-\varphi\left( u_{0}\right) \right\vert dr\leq
{\displaystyle\int_{s}^{t}}
\left\langle x\left( r\right) -u_{0},dk\left( r\right) \right\rangle
\smallskip\smallskip\\
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+
{\displaystyle\int\nolimits_{s}^{t}}
(2\left\vert \hat{u}_{0}\right\vert \left\vert x(r)-u_{0}\right\vert
+\varphi_{u_{0},r_{0}}^{\#}-\varphi\left( u_{0}\right) )dr.
\end{array}
\label{Ba6b}
\end{equation}
\end{proposition}
\subsection{Basic inequalities}
We shall derive some important estimates on the stochastic processes
$\left( Y,Z\right) \in S_{d}^{0}\left[ 0,T\right] \times\Lambda_{d\times
k}^{0}\left( 0,T\right) $ satisfying for all $t\in\left[ 0,T\right] $,
$\mathbb{P}-a.s.~\omega\in\Omega$
\[
Y_{t}=Y_{T}+\int_{t}^{T}dK_{s}-\int_{t}^{T}Z_{s}dB_{s},
\]
with $K\in S_{d}^{0}$ such that $K_{\cdot}\left( \omega\right) \in
BV_{loc}\left( \left[ 0,\infty\right[ ;\mathbb{R}^{d}\right)
,\;\mathbb{P}-a.s.~\omega\in\Omega.$ For more details concerning the results
found in this subsection one can consult Section 6.3.4 from Pardoux and
R\u{a}\c{s}canu \cite{Pardoux/Rascanu:09}.\medskip
\noindent\textbf{Backward It\^{o}'s formula. }\textit{If }$\psi\in
C^{1,2}\left( \left[ 0,T\right] \times\mathbb{R}^{d}\right) $\textit{,
then }$\mathbb{P}-a.s.~\omega\in\Omega$\textit{, for all }$t\in\left[
0,T\right] $
\begin{equation}
\begin{array}
[c]{l}
\psi\left( t,Y_{t}\right) +
{\displaystyle\int_{t}^{T}}
\left\{ \dfrac{\partial\psi}{\partial t}\left( s,Y_{s}\right) +\dfrac{1}
{2}\mathbf{Tr}\left[ Z_{s}Z_{s}^{\ast}\psi_{xx}^{\prime\prime}\left(
s,Y_{s}\right) \right] \right\} ds\medskip\\
\;\;\;\;\;=\psi\left( T,Y_{T}\right) +
{\displaystyle\int_{t}^{T}}
\left\langle \psi_{x}^{\prime}\left( s,Y_{s}\right) ,dK_{s}\right\rangle -
{\displaystyle\int_{t}^{T}}
\left\langle \psi_{x}^{\prime}\left( s,Y_{s}\right) ,Z_{s}dB_{s}
\right\rangle
\end{array}
\label{ch5-if}
\end{equation}
\noindent According to Lemma 2.35 from \cite{Pardoux/Rascanu:09}, if
$\psi\,:\left[ 0,T\right] \times\mathbb{R}^{d}\rightarrow\mathbb{R}$\textit{
}is a\textit{ }$C^{1}$-class function, convex in the second argument, then,
$\mathbb{P}-a.s.~\omega\in\Omega$, for every $t\in\left[ 0,T\right] $, the
following stochastic subdifferential inequality holds:
\begin{equation}
\psi(t,Y_{t})+
{\displaystyle\int_{t}^{T}}
\dfrac{\partial\psi}{\partial t}\left( s,Y_{s}\right) ds\leq\psi(T,Y_{T})+
{\displaystyle\int_{t}^{T}}
\left\langle \nabla\psi(s,Y_{s}),\,dK_{s}\right\rangle -
{\displaystyle\int_{t}^{T}}
\left\langle \nabla\psi(s,Y_{s}),\,Z_{s}dB_{s}\right\rangle .
\label{subdiff ineq 1}
\end{equation}
\medskip
\textbf{A fundamental inequality}\medskip\newline Let $\left( Y,Z\right) \in
S_{d}^{0}\left[ 0,T\right] \times\Lambda_{d\times k}^{0}\left( 0,T\right)
$ satisfying an identity of the form
\begin{equation}
Y_{t}=Y_{T}+\int_{t}^{T}dK_{s}-\int_{t}^{T}Z_{s}dB_{s},\quad\;t\in\left[
0,T\right] ,\quad\mathbb{P}-a.s.~\omega\in\Omega, \label{bsde-Fineq}
\end{equation}
where $K\in S_{d}^{0}\left( \left[ 0,T\right] \right) $ and $K_{\cdot
}\left( \omega\right) \in BV\left( \left[ 0,T\right] ;\mathbb{R}
^{d}\right) ,\;\mathbb{P}-a.s.~\omega\in\Omega.$\bigskip
\noindent Assume there exist
\begin{itemize}
\item $\quad D,R,N$ - three progressively measurable increasing continuous
stochastic processes with $D_{0}=R_{0}=N_{0}=0,$
\item $\quad V$ - a progressively measurable bounded variation continuous
stochastic process with $V_{0}=0,$
\item $\quad0\leq\lambda<1<p,$
\end{itemize}
\noindent such that, as measures on $\left[ 0,T\right] $, $\mathbb{P}
-a.s.~\omega\in\Omega$,
\begin{equation}
dD_{t}+\left\langle Y_{t},dK_{t}\right\rangle \leq\left[ \mathbf{1}_{p\geq
2}dR_{t}+|Y_{t}|dN_{t}+|Y_{t}|^{2}dV_{t}\right] +\dfrac{n_{p}}{2}
\lambda\left\vert Z_{t}\right\vert ^{2}dt, \label{Ch5-ip1}
\end{equation}
where
\[
n_{p}\overset{def}{=}1\wedge\left( p-1\right) .
\]
Proposition 6.80 from Pardoux and R\u{a}\c{s}canu \cite{Pardoux/Rascanu:09}
yields the following important result.
\begin{proposition}
\label{ineq cond exp}If (\ref{bsde-Fineq}) and (\ref{Ch5-ip1}) hold, and
moreover
\[
\mathbb{E}\left\Vert Ye^{V}\right\Vert _{T}^{p}<\infty,
\]
then there exists a positive constant $C_{p,\lambda},$ depending only upon $\left(
p,\lambda\right) ,$ such that, $\mathbb{P}-a.s.~\omega\in\Omega$, for all
$t\in\left[ 0,T\right] $
\begin{equation}
\begin{array}
[c]{r}
\mathbb{E}^{\mathcal{F}_{t}}\sup\limits_{s\in\left[ t,T\right] }\left\vert
e^{V_{s}}Y_{s}\right\vert ^{p}+\mathbb{E}^{\mathcal{F}_{t}}\left(
{\displaystyle\int_{t}^{T}}
e^{2V_{s}}dD_{s}\right) ^{p/2}+\mathbb{E}^{\mathcal{F}_{t}}\left(
{\displaystyle\int_{t}^{T}}
e^{2V_{s}}\left\vert Z_{s}\right\vert ^{2}ds\right) ^{p/2}\medskip\\
+\mathbb{E}^{\mathcal{F}_{t}}
{\displaystyle\int_{t}^{T}}
e^{pV_{s}}\left\vert Y_{s}\right\vert ^{p-2}\mathbf{1}_{Y_{s}\neq0}
dD_{s}+\mathbb{E}^{\mathcal{F}_{t}}
{\displaystyle\int_{t}^{T}}
e^{pV_{s}}\left\vert Y_{s}\right\vert ^{p-2}\mathbf{1}_{Y_{s}\neq0}\left\vert
Z_{s}\right\vert ^{2}ds\medskip\\
\leq C_{p,\lambda}\,\mathbb{E}^{\mathcal{F}_{t}}\left[ \left\vert
e^{V_{T}}Y_{T}\right\vert ^{p}+\left(
{\displaystyle\int_{t}^{T}}
e^{2V_{s}}\mathbf{1}_{p\geq2}dR_{s}\right) ^{p/2}+\left(
{\displaystyle\int_{t}^{T}}
e^{V_{s}}dN_{s}\right) ^{p}\right] .
\end{array}
\label{ch5-b2}
\end{equation}
In addition, if $R=N=0,$ then, for all $t\in\left[ 0,T\right] $
\begin{equation}
e^{pV_{t}}\left\vert Y_{t}\right\vert ^{p}\leq\mathbb{E}^{\mathcal{F}_{t}
}e^{pV_{T}}\left\vert Y_{T}\right\vert ^{p},\quad\mathbb{P}-a.s.~\omega
\in\Omega. \label{ch5-b2a}
\end{equation}
\end{proposition}
\begin{corollary}
\label{Ch5-p1-cor1} Under the assumptions of Proposition \ref{ineq cond exp},
if $V$ is a deterministic process and $\sup_{s\geq0}\left\vert V_{s}\right\vert
\leq c$ then, $\mathbb{P}-a.s.~\omega\in\Omega$, for all $t\in\left[
0,T\right] $
\[
\begin{array}
[c]{l}
\mathbb{E}^{\mathcal{F}_{t}}\sup\limits_{s\in\left[ t,T\right] }\left\vert
Y_{s}\right\vert ^{p}+\mathbb{E}^{\mathcal{F}_{t}}\left(
{\displaystyle\int_{t}^{T}}
\left\vert Z_{s}\right\vert ^{2}ds\right) ^{p/2}\medskip\\
\quad\quad\quad\quad\quad\quad\leq C_{p,\lambda}e^{2c}\mathbb{E}
^{\mathcal{F}_{t}}\left[ \left\vert Y_{T}\right\vert ^{p}+\left(
{\displaystyle\int_{t}^{T}}
\mathbf{1}_{p\geq2}dR_{s}\right) ^{p/2}+\left(
{\displaystyle\int_{t}^{T}}
dN_{s}\right) ^{p}\right] .
\end{array}
\]
\end{corollary}
\begin{proposition}
[See Proposition 6.69 from \cite{Pardoux/Rascanu:09}]\label{exp prop ineq}
\textit{Let }$\delta\in\left\{ -1,1\right\} $ and consider
$Y,K,A:\Omega\times\mathbb{R}_{+}\rightarrow\mathbb{R}\;$and $G:\Omega
\times\mathbb{R}_{+}\rightarrow\mathbb{R}^{k}$, four progressively measurable
stochastic processes, such that
\[
\begin{array}
[c]{rll}
i)\;\; & & Y,K,A\;\text{are continuous stochastic processes,}\\
ii)\;\; & & A_{\cdot},K_{\cdot}\in BV_{loc}\left( \left[ 0,\infty\right[
;\mathbb{R}\right) ,\;A_{0}=K_{0}=0,\;\mathbb{P}-a.s.~\omega\in\Omega
\text{,}\\
iii)\;\; & &
{\displaystyle\int_{t}^{s}}
\left\vert G_{r}\right\vert ^{2}dr<\infty,\;\mathbb{P}-a.s.~\omega\in
\Omega,\;\forall0\leq t\leq s.
\end{array}
\]
\textit{If, for all }$0\leq t\leq s$
\[
\delta\left( Y_{t}-Y_{s}\right) \leq\int_{t}^{s}\left( dK_{r}+Y_{r}
dA_{r}\right) +\int_{t}^{s}\left\langle G_{r},dB_{r}\right\rangle
,\quad\mathbb{P}-a.s.~\omega\in\Omega,
\]
\textit{then}
\[
\delta\left( Y_{t}e^{\delta A_{t}}-Y_{s}e^{\delta A_{s}}\right) \leq\int
_{t}^{s}e^{\delta A_{r}}dK_{r}+\int_{t}^{s}e^{\delta A_{r}}\left\langle
G_{r},dB_{r}\right\rangle ,\quad\mathbb{P}-a.s.~\omega\in\Omega.
\]
\end{proposition}
\bigskip
\section{System Model}\label{sec:model}
Consider that a system administrator needs to protect all secret information in the system. The administrator desires to do so in such a way that every secret is protected with at least a certain number of protections and these protections are of at least a certain security level.
Meanwhile, the administrator needs to balance secret protection with the associated cost. There are two sources of cost often considered in practice. One is the cost of purchasing, implementing, and maintaining the device or program for protection. This cost evidently varies depending on the means of protection; for example, a biometric device is much more costly than a password protection. Correspondingly, the higher the cost is, the higher the {\em security level} of the protection becomes.
The other source of cost is due to the fact that secret protection can have the side effect of negatively impacting the convenience of regular users of the system. Unlike intruders, regular users when using the system do not always try to see the secret information (e.g. personal data), but more often use various services that the system provides (e.g. watching a movie, reading an e-book, launching an app). If protecting secrets simultaneously requires regular users to undergo many security checks before using any services, user experience or the system's {\em usability} will decline, and if this causes users to stop using the system, the cost can be significant.
In this section, we will formulate the above-described system and cost considerations for secret protection. Our objective is to design for the administrator a {\em protection policy} that ensures the required level of secret protection while minimizing the incurred cost.
To model the system, we employ the framework of discrete-event systems (DES) \cite{wonham2019supervisory,cassandras2009introduction}, and consider the system modeled as a finite-state automaton
\begin{equation}\label{eq:plant:model}
\mathbf{G} = (Q, \Sigma, \delta, q_0, Q_m).
\end{equation}
Here $Q$ is the set of states, $\Sigma$ the set of events, $\delta: Q
\times \Sigma \to Q$ the (partial) transition function,\footnote{It is sometimes convenient to view $\delta$ as a set of triples: $\delta = \{(q,\sigma,q') \mid (q,\sigma) \mapsto q'\}$.} $q_0 \in Q$ the initial state, and $Q_m \subseteq Q$ the set of {\em marker states} which models the set of services/functions provided by the system to its users.
We denote by $Q_s \subseteq Q$ the set of {\em secret states} in
$\mathbf{G}$; no particular relation is assumed between $Q_s$ and $Q_m$, i.e. a secret state may or may not coincide with a marker state.
In addition we extend the transition function $\delta$ to $\delta: Q \times \Sigma^* \to Q$ (where $\Sigma^*$ is the set of all finite-length strings of events in $\Sigma$ including the empty string $\epsilon$) in the
standard manner, and write $\delta(q, s)!$ to mean that string $s$ is defined at state $q$. The {\it closed behavior} of {\bf G}, written $L({\bf G})$, is the set of all strings that are defined at the initial state $q_0$:
\begin{align*}
L({\bf G}) = \{s \in \Sigma^* \mid \delta(q_0, s)!\}.
\end{align*}
Also define the {\it marked behavior} of {\bf G}:
\begin{align*}
L_m({\bf G}) = \{s \in L(\bf G) \mid \delta(q_0, s)! \ \&\ \delta(q_0, s) \in Q_m\}.
\end{align*}
That is, every string in $L_m({\bf G})$ is a member of the closed behavior $L({\bf G})$, and moreover reaches a marker state in $Q_m$.
A state $q \in Q$ is {\em reachable} (from the initial state $q_0$) if there is a string $s$ such that $\delta(q_0, s)!$ and $\delta(q_0, s)=q$.
A state $q \in Q$ is {\em co-reachable} (to the set of marker states $Q_m$) if there is a string $s$ such that $\delta(q, s)!$ and $\delta(q, s) \in Q_m$. $\mathbf{G}$ is said to be {\em trim} if every state is both reachable and co-reachable. Unless otherwise specified, we consider trim automaton {\bf G} for the system model in the sequel.
In practice, not all events in the system can be protected by the administrator for reasons such as exceeding administrative permissions. Thus we partition the event set $\Sigma$ into a disjoint union of the subset of {\em protectable}
events $\Sigma_p$ and the subset of {\em unprotectable} events $\Sigma_{up}$, namely
$\Sigma = \Sigma_p \disjoint \Sigma_{up}$.
Moreover, protecting different events in $\Sigma_p$ may incur different costs. As described at the beginning of this section, we consider two sources of cost.
For the first source of purchasing/implementing/maintaining the protection device/program, we partition the set of protectable events $\Sigma_p$ further into $n$ disjoint subsets $\Sigma_i$ where
$i \in \{0, 1, \dots, n-1\}$, namely
\begin{equation} \label{eq:Sigmai}
\Sigma_p = \bigdisjoint_{i=0}^{n-1} \Sigma_i.
\end{equation}
The index $i$ of $\Sigma_i$ indicates the cost level
when the system administrator protects one or more events in $\Sigma_i$; the larger the index $i$, the higher the cost level of protecting events in $\Sigma_i$. For simplicity we assume that the index is the deciding factor for the first source of cost; that is, the cost of protecting one event in $\Sigma_i$ is sufficiently higher
than the cost of protecting all events in $\Sigma_{i-1}$. While this assumption might be restrictive, it is also reasonable in many situations: for example, purchasing/installing/maintaining a biometric sensor is more costly than setting up multiple password protections.
Since this source of cost is directly related to the strength of protection, we will also refer to these cost levels as {\em security levels}.
For the second source of cost regarding regular users' convenience, we investigate the impact of protecting an event $\sigma \in \Sigma_p$ at a state $q$ on the {\em usability} of services/functions provided by the system (which are modeled by the marker states in $Q_m$). In particular, we define for each pair $(q,\sigma)$, with $\delta(q,\sigma)!$, the following set of non-secret marker states that can be reached from the state $\delta(q,\sigma)$:
\begin{align} \label{eq:U}
U(q,\sigma) := \{ q' \in Q_m \setminus Q_s \mid (\exists s \in \Sigma^*) \delta(\delta(q,\sigma),s)! \ \&\ \delta(q,\sigma s) = q'\}.
\end{align}
This $U(q,\sigma)$ is the set of (non-secret) marker states that would be affected if $\sigma$ is protected at $q$; namely, regular users would also have to go through the protected $\sigma$ in order to use any of the services in this set. The reason why we focus on marker states that are {\em not} secrets is that inconvenience to the users is unavoidable if the services/functions to be used coincide with the secrets to be protected.
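A minimal sketch of computing $U(q,\sigma)$ is given below, under the same assumed encoding of $\delta$ as a set of triples; it performs a forward search from $\delta(q,\sigma)$ and intersects the result with $Q_m \setminus Q_s$.
\begin{verbatim}
# Illustrative sketch: U(q, sigma) of (eq:U), i.e. the non-secret
# marker states reachable once sigma has been taken at q.
def U(delta, q, sigma, Qm, Qs):
    succ = {}
    for (p, e, qp) in delta:
        succ.setdefault((p, e), set()).add(qp)
    start = succ.get((q, sigma))
    if not start:
        return set()              # sigma is not defined at q
    nxt = {}
    for (p, _, qp) in delta:
        nxt.setdefault(p, set()).add(qp)
    seen, stack = set(start), list(start)
    while stack:                  # forward search from delta(q, sigma)
        p = stack.pop()
        for qp in nxt.get(p, ()):
            if qp not in seen:
                seen.add(qp)
                stack.append(qp)
    return seen & (set(Qm) - set(Qs))
\end{verbatim}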
With the set defined in (\ref{eq:U}), it is intuitive that the cost of protecting $\sigma$ at $q$ is large (resp. small) if the size of this set, i.e. $|U(q,\sigma)|$, is large (resp. small). In case the cost is overly large, this event $\sigma$ (at $q$) belonging to (say) $\Sigma_i$ (i.e. the $i$th cost level of the first source) may be just as costly as the events in the one-level-higher $\Sigma_{i+1}$. For example, if setting up a password at a particular point to protect a secret simultaneously requires all regular users to enter a password for most services the system provides, this could largely reduce the users' satisfaction; hence this password protection may be as costly as using a biometric sensor (when the latter is used to protect a secret while affecting no regular users' experience).
How large this cost (measured by $|U(q,\sigma)|$) must be before $\sigma$ at $q$ is treated as having a one-level higher cost is case dependent: different systems (or businesses) have different criteria. Thus we use a positive integer $T (\geq 1)$ as a threshold: if the cost of the second source reaches this threshold, i.e. $|U(q,\sigma)| \geq T$, then the event $\sigma$ at $q$ belonging to (say) $\Sigma_i$ will be treated as having the same cost level as the events in $\Sigma_{i+1}$.
The more important the system deems user experience, the smaller the threshold $T$ should be set. As a final note, the same event $\sigma$ at different $q$ generally has different $|U(q,\sigma)|$; hence this second source of cost is state-dependent (in contrast with the state-independent first source of cost).
With the above preparation, we now combine the aforementioned two sources of cost as follows. Consider the partition of $\Sigma_p$ in (\ref{eq:Sigmai}) and let $T \geq 1$ be the threshold. First define
\begin{align} \label{eq:C0}
C_0 := \{ (\sigma, |U(q,\sigma)|) \mid q \in Q \ \&\ \sigma \in \Sigma_0 \ \&\ \delta(q, \sigma)! \ \&\ |U(q,\sigma)|<T \}.
\end{align}
Thus $C_0$ is the set of pairs in which the event belongs to $\Sigma_0$ (the lowest level of the first cost) and $|U(q,\sigma)|$ (the second cost) is below the threshold $T$. In other words, these events at their respective states are the least costly ones when the first and second costs are combined.
Next for each $i \in \{1,\ldots,n-1\}$, define
\begin{align} \label{eq:Ci}
C_i :=& \{ (\sigma, |U(q,\sigma)|) \mid q \in Q \ \&\ \sigma \in \Sigma_i \ \&\ \delta(q, \sigma)! \ \&\ |U(q,\sigma)|<T \} \notag\\
&\cup \{ (\sigma, |U(q,\sigma)|) \mid q \in Q \ \&\ \sigma \in \Sigma_{i-1} \ \&\ \delta(q, \sigma)! \ \&\ |U(q,\sigma)| \geq T \}.
\end{align}
As defined, $C_i$ is the union of two sets of pairs. The first set is analogous to $C_0$ (here for events in $\Sigma_i$). The second set is the collection of those pairs in which the event belongs to $\Sigma_{i-1}$ (one lower level of the first cost) and $|U(q,\sigma)|$ (the second cost) is at least the threshold $T$.
Thus the events corresponding to the second set have different levels when only the first cost is considered and when the two costs are combined.
Finally define
\begin{align} \label{eq:Cn}
C_n := \{ (\sigma, |U(q,\sigma)|) \mid q \in Q \ \&\ \sigma \in \Sigma_{n-1} \ \&\ \delta(q, \sigma)! \ \&\ |U(q,\sigma)| \geq T \}.
\end{align}
Thus $C_n$ is the set of pairs in which the event belongs to $\Sigma_{n-1}$ (the highest level of the first cost) and $|U(q,\sigma)|$ (the second cost) reaches the threshold $T$. That is, these events at their respective states are the most costly ones when the first and second costs are combined.
It is convenient to define the set of events corresponding to $C_i$ ($i \in [0,n]$), by projecting the elements (i.e. pairs) to their first components. Hence for $i \in [0,n]$ we write
\begin{align} \label{eq:SigmaCi}
\Sigma(C_i) := \{\sigma \mid (\exists q \in Q) (\sigma, |U(q,\sigma)|) \in C_i\}.
\end{align}
From (\ref{eq:C0})-(\ref{eq:Cn}), it is evident that
\begin{align} \label{eq:SigmaC}
\Sigma(C_0) \subseteq \Sigma_0,\quad \Sigma(C_{n}) \subseteq \Sigma_{n-1},\quad (\forall i \in [1,n-1]) \Sigma(C_i) \subseteq \Sigma_{i-1} \cup \Sigma_i.
\end{align}
To illustrate the system modeling and cost definitions presented so far, we provide the following example, which will also be used as the running example in subsequent sections.
\begin{exmp}\label{exmp:model}
\begin{figure}[htp]
\centering
\subimport{figures/}{plant-general.tex}
\caption{System $\mathbf{G}$: initial state $q_0$ (circle with an incoming arrow), marker state set $Q_m=\{q_3, q_4, q_7, q_{10}\}$ (double circles), secret state set $Q_s = \{q_7, q_8, q_{10}\}$ (shaded circles)}\label{fig:exmp:model:plant}
\end{figure}
The finite-state automaton $\mathbf{G}$ in \cref{fig:exmp:model:plant} represents a
simplified system model of using a software application in which there are three
restricted realms. There are also four services that this system provides. Consider that this application works according to the
users' permission levels. Several authentication
points can be (though need not be) set up so that the users have to pass them in order to obtain the permission to reach the restricted realms.
States $q_7$, $q_8$ and $q_{10}$ represent the restricted realms modeled as secret states, i.e. $Q_s = \{q_7, q_8, q_{10}\}$. On the other hand, states $q_3$, $q_4$, $q_7$, and $q_{10}$ represent the services provided by the system and hence the set of marker states is $Q_m=\{q_3, q_4, q_7, q_{10}\}$.
Thus $q_7$ and $q_{10}$ are simultaneously marker and secret states.
The initial state $q_0$ indicates that a user is about to log into the
system. Accordingly, events $\sigma_0$ and $\sigma_1$ represent logging
into the system as a regular user or a system administrator
respectively; then $q_1$ and $q_2$ mean that the user has logged in
corresponding to $\sigma_0$ and $\sigma_1$ respectively. Typically, an
administrator has higher-level permission in the system compared to a
regular user. Also, $\sigma_2$ indicates switching permission from the
administrator to a regular user, $\sigma_5$ denotes launching the application, and $\sigma_6$ means that a regular user launches the
application with the administrative permission, e.g. \emph{sudo} in
Unix-like operating systems. Events $\sigma_3$ and $\sigma_4$ are respectively the starting and finishing actions of using a system service. Moreover, $\sigma_7$ and $\sigma_8$ indicate the
authentication points to obtain access to the secret states $q_7$ and $q_8$. On the other
hand, the administrative realm denoted by the secret state $q_{10}$ requires users to pass
two-factor authentication represented by $\sigma_9$ (first factor) and
$\sigma_{10}$ (second factor). In order to keep secret states secure, the
system administrator needs to configure several authentication points to restrict access.
According to the above description, the set of protectable events is
\begin{align*}
\Sigma_p = \{\sigma_0,\sigma_1,\sigma_5,\sigma_6,\sigma_7,\sigma_8,\sigma_9,\sigma_{10}\}
\end{align*}
which can be partitioned into four different cost/security levels (low to high):
\begin{align} \label{eq:exSigmai}
\Sigma_0 = \{\sigma_0,
\sigma_1, \sigma_5\},\quad \Sigma_1 = \{\sigma_6, \sigma_7, \sigma_8\},\quad \Sigma_2 = \{\sigma_9\},\quad
\Sigma_3 = \{\sigma_{10}\}.
\end{align}
That is, $\Sigma_p = \Sigma_0 \disjoint \Sigma_1 \disjoint \Sigma_2 \disjoint \Sigma_3$ and $n = 4$. This is the first source of cost we consider, which corresponds to the level of security of these events. The remaining events are deemed unprotectable, i.e. $\Sigma_{up} = \{\sigma_2, \sigma_3,
\sigma_4\}$.
For the second source of cost due to usability (user experience), in this example we set the threshold $T=2$; namely, if protecting an event at a state affects two or more (non-secret) services provided by the system, this cost is deemed so large that the event at the state needs to be moved one level up in terms of the total cost. In fact in ${\bf G}$, there are exactly two marker states that are not secret states: $q_3, q_4$; hence if both of these states are affected when protecting an event at a state, the threshold is reached.
Inspecting the set $U(q,\sigma)$ as defined in (\ref{eq:U}), we find $U(q_0,\sigma_1) = \{q_3,q_4\}$ because $\delta(q_0, \sigma_1 \sigma_3) = q_3$ and $\delta(q_0, \sigma_1 \sigma_2 \sigma_5 \sigma_3) = q_4$. As a result, $|U(q_0,\sigma_1)|=2=T$ and $\sigma_1 \in \Sigma_0$ at $q_0$ must be moved one level up in the total cost. Continuing this inspection, in fact $U(q_0,\sigma_1)$ is the only case where the threshold $T=2$ is reached. Also note that event $\sigma_5$ has different $|U(\cdot,\sigma_5)|$ at different states where it is defined: $|U(q_1,\sigma_5)|=1$ whereas $|U(q_2,\sigma_5)|=0$. This shows that the second cost is state-dependent.
Finally we present the cost level sets with the two sources of cost combined:
\begin{align}
C_0 &=\{(\sigma_0, 1), (\sigma_5, 1), (\sigma_5, 0)\} \notag\\
C_1 &=\{(\sigma_1, 2), (\sigma_6, 0), (\sigma_7, 0), (\sigma_8,0)\} \notag\\
C_2 &=\{(\sigma_9, 0)\} \label{eq:exCi} \\
C_3 &=\{(\sigma_{10}, 0)\} \notag\\
C_4 &=\emptyset. \notag
\end{align}
%
\end{exmp}
\section{Usability Aware Heterogeneous Secret Securing with Minimum Cost}\label{sec:multilevel}
In this section, we move on to address Problem~\ref{prob:hssmcp} (UHSCP), in which the set of secret states
$Q_s$ is partitioned into $k (\geq 1)$ groups $Q_{s1},\ldots,Q_{sk}$ with heterogeneous importance; as the index $j \in [1,k]$ increases, the importance of $Q_{sj}$ rises. Similar to the preceding section, we begin with a characterization of the solvability of Problem~\ref{prob:hssmcp}, then present a solution algorithm, and finally use our running example to illustrate the results.
\subsection{Solvability of UHSCP}\label{subsec:solvability:multilevel}
The following theorem provides a necessary and sufficient condition under
which there exists a solution to Problem~\ref{prob:hssmcp}.
\begin{thm}\label{thm:solvable:multilevel}
Consider a system $\mathbf{G}$ in \cref{eq:plant:model}, a set of secret states $Q_s = \dot{\bigcup}^k_{j = 1} Q_{sj}$, the cost level sets $C_i$ ($i \in [0,n]$) in (\ref{eq:C0})-(\ref{eq:Cn}), the required least number of protections $u \geq 1$, and the required lowest security levels $v_j \geq 0$ for $Q_{sj}$ such that $v_1 \leq \cdots \leq v_k$.
Problem~\ref{prob:hssmcp} is solvable (i.e.
there exists a protection policy $\mathcal{P}:
Q \to \power(\Sigma_p)$ such that for every $j \in [1,k]$, $Q_{sj}$ is $u-v_j-$securely reachable and the index $i$ of $C_i$ is minimum)
if and only if
there exists $i \in [v_1,n]$ such that
\begin{equation} \label{eq:thm:solvable:multilevel:condition}
\begin{gathered}
\text{$(\forall j \in [1,k]) Q_{sj}$ is $u-v_j-$securely reachable w.r.t. $\tilde{\Sigma}_j = \bigcup^i_{l = v_j} \Sigma(C_l) \setminus \Sigma_{v_j-1}$} \\
\sand \\
\text{$(\exists j \in [1,k]) Q_{sj}$ is not $u-v_j-$securely reachable w.r.t. $\tilde{\Sigma}_j = \bigcup^{i-1}_{l = v_j} \Sigma(C_l) \setminus \Sigma_{v_j-1}$}.
\end{gathered}
\end{equation}
\end{thm}
Condition
\cref{eq:thm:solvable:multilevel:condition} means that there exists an index $i \in [v_1, n]$ such that for every $j \in [1,k]$, the secret states in $Q_{sj}$ can be protected with at least $u$
protections using protectable events in $\bigcup^i_{l = v_j} \Sigma(C_l) \setminus \Sigma_{v_j-1} \subseteq \Sigma^{\geq v_j}_p$, but there is $j \in [1,k]$ such that if only
protectable events in $\bigcup^{i-1}_{l = v_j} \Sigma(C_l) \setminus \Sigma_{v_j-1} \subseteq \Sigma^{\geq v_j}_p$ are used, secrets cannot be
protected with $u$ protections. That these two conditions in (\ref{eq:thm:solvable:multilevel:condition}) simultaneously hold indicates that the cost level index $i$ is minimum.
\begin{proof}
($\Leftarrow$) If condition~\cref{eq:thm:solvable:multilevel:condition} holds, then for every $j \in [1,k]$, the secret subset $Q_{sj}$ is
$u-v_j-$securely reachable w.r.t. $\bigcup^i_{l = v_j} \Sigma(C_l) \setminus \Sigma_{v_j-1}$, and moreover the index $i$ of $C_i$ is minimum. The latter is because at least one secret subset $Q_{sj'}$ ($j' \in [1,k]$) is not $u-v_{j'}-$securely reachable w.r.t. $\bigcup^{i-1}_{l = v_{j'}} \Sigma(C_l) \setminus \Sigma_{v_{j'}-1}$ and
$\bigcup^{i-1}_{l = v_{j'}} \Sigma(C_l) \setminus \Sigma_{v_{j'}-1} \subseteq \bigcup^{i}_{l = v_{j'}} \Sigma(C_l) \setminus \Sigma_{v_{j'}-1}$.
In this case, for every $Q_{sj}$ there exists a protection policy $\mathcal{P}_j : Q \to \power(\bigcup^{i}_{l = v_j} \Sigma(C_l) \setminus \Sigma_{v_j-1})$ such that protectable
events in $\bigcup^{i}_{l = v_j} \Sigma(C_l) \setminus \Sigma_{v_j-1}$ may be used to satisfy the required least number of protections $u$ and the lowest security level $v_j$. These protection policies $\mathcal{P}_j$ ($j \in [1,k]$) together comprise a solution for \cref{prob:hssmcp}. Therefore, if
\cref{eq:thm:solvable:multilevel:condition} holds, then \cref{prob:hssmcp}
is solvable.
($\Rightarrow$) If \cref{prob:hssmcp} is solvable with the minimum index of $C_i$ being $i \in [v_1, n]$, then
for every $j\in [1,k]$, $Q_{sj}$ is $u-v_j-$securely reachable w.r.t. $\bigcup^i_{l = v_j} \Sigma(C_l) \setminus \Sigma_{v_j-1}$. Since the index $i$ is minimum, it indicates that there exists at least one $j' \in [1,k]$ such that
$Q_{sj'}$ is not $u-v_{j'}-$securely reachable w.r.t. $\bigcup^{i-1}_{l = v_{j'}} \Sigma(C_l) \setminus \Sigma_{v_{j'}-1}$. Therefore
\cref{eq:thm:solvable:multilevel:condition} holds.
\end{proof}
\subsection{Policy Computation for UHSCP}\label{subsec:computation:multilevel}
When \cref{prob:hssmcp} is solvable under the condition presented in Theorem~\ref{thm:solvable:multilevel}, we design an algorithm to compute a solution protection policy.
To compute such a protection policy, like in Section~4.2 we again convert the security problem to a corresponding control problem by changing protectable events to controllable events. Then we employ Algorithm~1 to compute a control policy for each secret subset $Q_{sj}$ ($j \in [1,k]$) to satisfy the required least number of protections $u$ and the lowest security level $v_j$. This is done by inputting Algorithm~1 with ${\bf G}$ in (\ref{eq:plant:control}), $Q_{sj}$, $u$ and $v_j$.
If a solution exists, Algorithm~1 outputs $u$ supervisors ${\bf S}_{0,j},\ldots,{\bf S}_{u-1,j}$ and the minimum cost index $i_{\min, j}$.
For these supervisors, one obtains
the corresponding control policies $\mathcal{D}_{0,j},\ldots,\mathcal{D}_{u-1,j}$, which may be combined into a single control policy
\begin{align} \label{eq:Dj}
\mathcal{D}_j(q) = \bigcup_{l=0}^{u-1} \mathcal{D}_{l,j}(q),\quad q \in Q.
\end{align}
If the above holds for all $j \in [1,k]$, further combining all resulting $\mathcal{D}_j$ ($j \in [1,k]$) yields an overall control policy $\mathcal{D}$ as follows:
\begin{equation}\label{eq:merge:multilevel}
\mathcal{D}(q) = \bigcup_{j=1}^{k}\mathcal{D}_j(q), \quad q \in Q.
\end{equation}
On the other hand, the overall minimum cost index $i_{\min}$ satisfies:
\begin{align*}
i_{\min} = \max(i_{\min, 1},\ldots,i_{\min, k} ).
\end{align*}
\begin{algorithm}[htp]
\caption{UHCC$u$}\label{alg:2-mrcmc}
\begin{algorithmic}[1]
\Require{System ${\bf G}$ in (\ref{eq:plant:control}), secret state set $Q_s = \dot{\bigcup}_{j=1}^k Q_{sj}$, protection number $u$, security levels $0 \leq v_1 \leq \cdots \leq v_k \leq n$. }
\Ensure{Control policy $\mathcal{D}$, minimum cost index $i_{\min}$}
\For{$j = 1,\ldots,k$}
\State{$\mathbf{S}_{0,j}, \dots, \mathbf{S}_{u-1,j}, i_{\min,j} =$
\Call{UCC$u$}{$\mathbf{G}$, $Q_{sj}$, $u$, $v_{j}$}}
\If{all $\mathbf{S}_{0,j}, \dots, \mathbf{S}_{u-1,j}$ are nonempty (or equivalently $i_{\min,j} \neq -1$)}
\State{Derive $\mathcal{D}_j$ from
$\mathbf{S}_{0,j}, \dots, \mathbf{S}_{u-1,j}$ as in
(\ref{eq:Dj})}%
\EndIf%
\EndFor%
\If{all $i_{\min,1},\ldots,i_{\min,k}$ are not equal to $-1$}
\State{Derive $\mathcal{D}$ from $\mathcal{D}_{1},\ldots,\mathcal{D}_k$ as in (\ref{eq:merge:multilevel})}
\State\Return{$\mathcal{D}$ and $i_{\min} = \max(i_{\min, 1},\ldots,i_{\min, k} )$}%
\EndIf%
\State\Return{Empty control policy $\mathcal{D}$ and index $-1$}
\end{algorithmic}
\end{algorithm}
The above procedure is summarized in Algorithm~2 UHCC$u$.
The time complexity of Algorithm~2 is $k$ (from line~1, where $k$ is the number of heterogeneous secret subsets) times that of Algorithm~1, namely $O(ku(n-v_1)|Q|^2)$.
In fact, the $k$ calls to Algorithm~1 in line~2 can be done independently; hence the $k$ executions of lines~2--5 may be implemented on multi-core processors in a distributed (thus more efficient) manner.
If Algorithm~2 successfully outputs a (nonempty) control policy $\mathcal{D}$, then we convert it to a protection policy $\mathcal{P}: Q \to \power(\Sigma_p)$ by changing all controllable events back to protectable events. In terms of $\mathcal{P}$, we
interpret the events disabled by $\mathcal{D}$ as {\em protected events}.
Our main result in this section below asserts that the converted protection policy $\mathcal{P}$ is a solution for our original security problem UHSCP (\cref{prob:hssmcp}).
\begin{thm}
Consider a system $\mathbf{G}$ in \cref{eq:plant:model}, a set of secret states $Q_s = \dot{\bigcup}^k_{j = 1} Q_{sj}$, the cost level sets $C_i$ ($i \in [0,n]$) in (\ref{eq:C0})-(\ref{eq:Cn}), the required least number of protections $u \geq 1$, and the required lowest security levels $v_j \geq 0$ for $Q_{sj}$ such that $v_1 \leq \cdots \leq v_k$.
If
\cref{prob:hssmcp} is solvable, then the protection policy $\mathcal{P}$
derived from $\mathcal{D}$ in (\ref{eq:merge:multilevel}) (computed by Algorithm~2) is a solution.
\end{thm}
\begin{proof}
Suppose that \cref{prob:hssmcp} is solvable. Then it follows from Theorem~\ref{thm:solvable:multilevel} that (\ref{eq:thm:solvable:multilevel:condition}) holds, i.e. there is $i \in [v_1,n]$ such that the two conditions in (\ref{eq:thm:solvable:multilevel:condition}) are satisfied.
Convert all protectable events to controllable events. The first condition in (\ref{eq:thm:solvable:multilevel:condition}) ensures that Algorithm~2 passes the test in line~3 for all $j \in [1,k]$. Hence, $k$ control policies $\mathcal{D}_j$ ($j \in [1,k]$) are obtained, each $\mathcal{D}_j$ ensuring that the secret subset $Q_{sj}$ is protected by $u$ protections, and the lowest security level of these protections is $v_j$. Again by the first condition in (\ref{eq:thm:solvable:multilevel:condition}), Algorithm~2 passes the test in line~7 and a combined control policy $\mathcal{D}$ is obtained from $\mathcal{D}_j$ ($j \in [1,k]$). Converting all controllable events back to protectable events, we derive the corresponding protection policy $\mathcal{P}$ which ensures $u-v_j-$secure reachability of $Q_{sj}$ for all $j \in [1,k]$.
Finally, since each index $i_{\min,j}$ ($j \in [1,k]$) is minimum for the respective call to UCC$u$(${\bf G}, Q_{sj}, u, v_j$) and $i_{\min} = \max_{j \in [1,k]} i_{\min,j}$, it follows from the second condition in (\ref{eq:thm:solvable:multilevel:condition}) that $i_{\min}$ is the minimum cost index for the derived protection policy $\mathcal{P}$ as a solution for \cref{prob:hssmcp}.
\end{proof}
\subsection{Running Example}\label{subsec:example:multilevel}
For illustration let us revisit \cref{exmp:model}.
Consider the system $\mathbf{G}$ in
\cref{fig:exmp:model:plant}, with the secret state set $Q_s$ partitioned into two subsets: $Q_{s1} = \{q_7, q_8\}$ (regular users' secrets) and $Q_{s2}=\{q_{10}\}$ (administrator's secret).
Accordingly, we require the lowest security levels to be $v_1 = 0$ and $v_2=1$, respectively.
For the required number of protections, we let $u=2$ (the same as Section~4.3).
In addition, the security level sets are $\Sigma_i$ ($i \in [0, 3]$) as in (\ref{eq:exSigmai}), and the cost level sets are $C_i$ ($i\in [0, 4]$) as in (\ref{eq:exCi}). We demonstrate how to use Algorithm~2 to compute a protection policy $\mathcal{P}:
Q \to \power(\Sigma_p)$ and the minimum index $i$ of $C_i$ as a solution for \cref{prob:hssmcp}.
First, convert protectable events to controllable events and input Algorithm~2 with the converted ${\bf G}$, $Q_s = Q_{s1} \dot{\cup} Q_{s2}$, $u=2$, $v_1=0$ and $v_2=1$.
For $j=1$, call UCC$u$(${\bf G}, Q_{s1}, u, v_1$) to compute $u$ (nonempty) supervisors $\mathbf{S}_{0,1}, \dots, \mathbf{S}_{u-1,1}$ and the minimum cost index $i_{\min,1}=1$. From these supervisors, we obtain the corresponding control policy $\mathcal{D}_1$ as in (\ref{eq:Dj}):
\begin{align*}
\mathcal{D}_1(q) = \begin{dcases}
\{\sigma_5\}, & \text{if $q = q_1$} \\
\{\sigma_7, \sigma_8\}, & \text{if $q = q_5$} \\
\emptyset, & \text{if $q \in Q \setminus \{q_1,q_5\}$}
\end{dcases}
\end{align*}
Similarly for $j=2$, call UCC$u$(${\bf G}, Q_{s2}, u, v_2$) to compute $u$ (nonempty) supervisors $\mathbf{S}_{0,2}, \dots, \mathbf{S}_{u-1,2}$ and the minimum cost index $i_{\min,2}=3$. From these supervisors, we obtain the corresponding control policy $\mathcal{D}_2$ as in (\ref{eq:Dj}):
\begin{align*}
\mathcal{D}_2(q) = \begin{dcases}
\{\sigma_9\}, & \text{if $q = q_6$} \\
\{\sigma_9\}, & \text{if $q = q_8$} \\
\{\sigma_{10}\}, & \text{if $q = q_9$} \\
\emptyset, & \text{if $q \in Q \setminus \{q_6, q_8, q_9\}$}
\end{dcases}
\end{align*}
It is interesting to observe that due to the required lowest security level $v_2=1$, events in $\Sigma_0=\{\sigma_0,\sigma_1,\sigma_5\}$ cannot be used (even though the event $\sigma_1$ at state $q_0$ belongs to $\Sigma(C_1)$). Consequently in this example, the events in the highest two security levels $\Sigma_2, \Sigma_3$ have to be used in order to meet this requirement.
Finally combining the above $\mathcal{D}_1$ and $\mathcal{D}_2$ yields an overall control policy $\mathcal{D}$ as in (\ref{eq:merge:multilevel}), which is shown in \cref{fig:policy:example:multilevel}.
Observe that every string from the initial state $q_0$ that can reach the secret states in $Q_{s1}=\{q_7,q_8\}$ has at least two disabled events in $\Sigma(C_0) \cup \Sigma(C_1) \subseteq \Sigma_0 \cup \Sigma_1$. Thus the least number of protections $u=2$ and the lowest security level $v_1=0$ are satisfied.
Moreover, every string from $q_0$ that can reach the secret state in $Q_{s2}=\{q_{10}\}$ has at least two disabled events in $(\Sigma(C_1) \cup \Sigma(C_2) \cup \Sigma(C_3)) \setminus \Sigma_0 \subseteq \Sigma_1 \cup \Sigma_2 \cup \Sigma_3$. Thus the least number of protections $u=2$ and the lowest security level $v_2=1$ are also satisfied.
\begin{figure}[htp]
\centering
\adjustbox{}{%
\subimport{figures/multilevel/}{example-d.tex}
}
\caption{Overall control policy $\mathcal{D}$ for $\mathbf{G}$ (with protectable events converted to controllable events)}
\label{fig:policy:example:multilevel}
\end{figure}
Now changing all disabled
transitions in \cref{fig:policy:example:multilevel} denoted by ``\faTimes'' to ``\faLock'', we obtain a protection policy $\mathcal{P}$ for the system ${\bf G}$ as follows:
\begin{align*}
\mathcal{P}(q) = \begin{dcases}
\{\sigma_5\}, & \text{if $q = q_1$} \\
\{\sigma_7, \sigma_8\}, & \text{if $q = q_5$} \\
\{\sigma_9\}, & \text{if $q = q_6$} \\
\{\sigma_9\}, & \text{if $q = q_8$} \\
\{\sigma_{10}\}, & \text{if $q = q_9$} \\
\emptyset, & \text{if $q \in Q \setminus \{q_1,q_5, q_6,q_8, q_9\}$}
\end{dcases}.
\end{align*}
Finally, the minimum cost index is $i_{\min} =\max(i_{\min,1},i_{\min,2}) =3$.
For this example, the protection of each event specified by the policy $\mathcal{P}$ may be implemented as follows:
\begin{itemize}
\item $\sigma_5,\sigma_7,\sigma_8$: already described at the end of Section~4.3.
\item $\sigma_9$: setting up the first factor of two-factor authentication with a security question.
\item $\sigma_{10}$: setting up the second factor of two-factor authentication with a physical security key.
\end{itemize}
\section{Problem Formulation}\label{sec:problem}
Given the system model ${\bf G}$ in (\ref{eq:plant:model}), the $n$ security levels $\Sigma_0,\ldots,\Sigma_{n-1}$ in (\ref{eq:Sigmai}), and the $n+1$ cost levels $C_0,\ldots,C_n$ in (\ref{eq:C0})-(\ref{eq:Cn}), we formulate in this section two secret protection problems.
To proceed, we need several definitions.
Let $u \geq 1$ be the least number of events that are required to be protected before any secret state may be reached along any system trajectory from the initial state. Also let $v \geq 0$ be the least security level that is needed for protecting the secrets. Write
\begin{align} \label{eq:geq-v}
\Sigma^{\geq v}_p := \bigdisjoint_{i=v}^{n-1} \Sigma_i
\end{align}
for the collection of protectable events whose security levels are at least $v$.
The following definition formalizes the notion that the secret states are protected with at least $u$ number of protections with at least $v$ security level of protectable events.
\begin{defn}[$u-v-$secure reachability]\label{defn:u-reach:model}
Consider a system $\mathbf{G}$ in \cref{eq:plant:model} with a set of secret states $Q_s$, the security level sets $\Sigma_i$ ($i\in [0, n-1]$) in (\ref{eq:Sigmai}), and let $u \geq
1$, $v \geq 0$, and $\tilde{\Sigma}$ be a nonempty subset of $\Sigma_p^{\geq v}$ in (\ref{eq:geq-v}). We say that $Q_s$ is {\it reachable
with at least $u$ protectable events of security level at least $v$ w.r.t. $\tilde{\Sigma}$} (or simply $Q_s$ is $u-v-$securely reachable) if the following condition holds:
\begin{equation}\label{eq:condition:u-reach}
(\forall s \in \Sigma^*) (\delta(q_0, s)! \sand \delta(q_0, s) \in
Q_s) \implies s \in \underbrace{\Sigma^\ast \tilde{\Sigma}
\Sigma^\ast \cdots \Sigma^\ast \tilde{\Sigma}
\Sigma^\ast}_{\text{$\tilde{\Sigma}$ appears $u$ times}}.
\end{equation}
\end{defn}
Condition~(\ref{eq:condition:u-reach}) means that every string from the initial state that can reach a secret state must contain at least $u$ protectable events of security level at least $v$.
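Condition (\ref{eq:condition:u-reach}) can be checked without enumerating strings. One way (our own illustrative approach, not prescribed by the formal development) is a 0/1-weighted shortest path over the transition graph: transitions labeled by events in $\tilde{\Sigma}$ have weight $1$ and all others weight $0$, and the condition holds iff the resulting distance to every secret state is at least $u$.
\begin{verbatim}
from collections import deque

# Illustrative sketch: dist[q] = minimum number of transitions labeled
# by events in Sigma_tilde on any path from q0 to q (0/1-weighted BFS).
def min_protections(delta, q0, Sigma_tilde):
    adj = {}
    for (q, sigma, qp) in delta:
        w = 1 if sigma in Sigma_tilde else 0
        adj.setdefault(q, []).append((w, qp))
    dist = {q0: 0}
    dq = deque([q0])
    while dq:
        q = dq.popleft()
        for (w, qp) in adj.get(q, ()):
            if dist[q] + w < dist.get(qp, float("inf")):
                dist[qp] = dist[q] + w
                if w == 0:
                    dq.appendleft(qp)   # zero-weight edges first
                else:
                    dq.append(qp)
    return dist

def uv_securely_reachable(delta, q0, Qs, Sigma_tilde, u):
    dist = min_protections(delta, q0, Sigma_tilde)
    return all(dist.get(q, float("inf")) >= u for q in Qs)
\end{verbatim}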
Next we define a \emph{protection policy} that identifies which protectable events to protect at which states. Such a policy is what we aim to design for the system administrator.
\begin{defn}[protection policy]\label{defn:protectpolicy}
For the system $\mathbf{G} = (Q, \Sigma = \Sigma_p \cup \Sigma_{up}, \delta, q_0, Q_m)$ in \cref{eq:plant:model}, a \emph{protection policy} $\mathcal{P}$ is a mapping that assigns to each state a subset of protectable events:
\begin{equation}\label{eq:protectpolicy}
\mathcal{P}: Q \to \power(\Sigma_p)
\end{equation}
where $\power(\Sigma_p)$ denotes the power set of $\Sigma_p$.
\end{defn}
Note that what a protection policy specifies can also be interpreted as the
protection of a transition labeled by a protectable event at a given
state. For example, $\mathcal{P}(q) = \{\sigma_i, \sigma_j\}$ represents that
protectable events $\sigma_i$ and $\sigma_j$ occurring at state $q$ are
protected.
Now we are ready to formulate two secret protection problems studied in this paper.
The first problem is to find a protection policy (if it exists) that protects all the secret states with at least a prescribed number of protections of at least a prescribed security level, and moreover the protection cost should be minimum.
\begin{prob}[Usability Aware Secret Securing with Multiple Protections and
Minimum Cost Problem, USCP]\label{prob:ssmcp}
Consider a system $\mathbf{G}$ in \cref{eq:plant:model} with a set of secret states $Q_s$, the cost level sets $C_i$ ($i \in [0,n]$) in (\ref{eq:C0})-(\ref{eq:Cn}), and let $u \geq 1$, $v \geq 0$.
Find a protection policy $\mathcal{P}:
Q \to \power(\Sigma_p)$ such that $Q_s$ is $u-v-$securely reachable and the index $i$ of $C_i$ is minimum.
\end{prob}
More generally, and this is typical in practice, secrets may have different importance. For example in online shopping systems, customers' credit card information is (likely) more important than their email address information (though the latter certainly also needs to be protected).
Thus the set of secret states $Q_s$ may be partitioned into $k \geq 1$ disjoint (nonempty) subsets $Q_{s1}, \cdots, Q_{sk}$; the level of importance rises as the index increases.
Naturally the administrator wants to protect secrets of higher importance with events of higher security levels. Hence we associate each $Q_{sj}$ ($j \in [1,k]$) with a number $v_j$ that indicates the least security level required for protecting the secrets in $Q_{sj}$. These $v_j$ satisfy $0 \leq v_1 \leq \cdots \leq v_k (\leq n-1)$ according to the rising importance. With this additional consideration, we formulate our second problem.
\begin{prob}[Usability Aware Heterogeneous Secret Securing with Multiple Protections and
Minimum Cost Problem, UHSCP]\label{prob:hssmcp}
Consider a system $\mathbf{G}$ in \cref{eq:plant:model}, a set of secret states $Q_s$ partitioned into disjoint (nonempty) subsets $Q_{s1}, \cdots, Q_{sk}$ with rising importance, the cost level sets $C_i$ ($i \in [0,n]$) in (\ref{eq:C0})-(\ref{eq:Cn}), and let $u \geq 1$, $0 \leq v_1 \leq \cdots \leq v_k \leq n-1$.
Find a protection policy $\mathcal{P}:
Q \to \power(\Sigma_p)$ such that for every $j \in [1,k]$ the $j$th important secret state subset $Q_{sj}$ is $u-v_j-$securely reachable and the index $i$ of $C_i$ is minimum.
\end{prob}
Let us revisit \cref{exmp:model} to explain the above formulated two problems.
\begin{exmp}
Consider the system model $\mathbf{G}$ in \cref{fig:exmp:model:plant}, with the secret state set $Q_s = \{q_7, q_8, q_{10}\}$, the security level sets $\Sigma_i$ ($i \in [0, n-1]$) in (\ref{eq:exSigmai}),
and the cost level sets $C_i$ ($i\in [0, n]$) in (\ref{eq:exCi}).
For Problem~\ref{prob:ssmcp}, let $u=2$ and $v=0$; namely it is required that at least $2$ events be protected for every system trajectory (from the initial state) that may reach a secret state in $Q_s$, and the least security level is $0$. Then our goal is to find a protection policy $\mathcal{P}:
Q \to \power(\Sigma_p)$ (if it exists) such that $Q_s$ is $2-0-$securely reachable, and moreover the index $i$ of $C_i$ is minimum (i.e. least cost).
Next for Problem~\ref{prob:hssmcp},
we consider that $Q_s$ is partitioned into two disjoint subsets $Q_{s1} = \{q_7,q_8\}$ and $Q_{s2} = \{q_{10}\}$. This means that $q_{10}$, the administrative realm, is a more important secret than $q_7$ and $q_8$ (regular users' secrets). Accordingly, let $v_1=0$ and $v_2=1$, namely the least security level for $Q_{s1}$ is $0$ while the least security level for $Q_{s2}$ is $1$; the latter means that when protecting the secret state $q_{10} \in Q_{s2}$, events $\sigma_0,\sigma_1,\sigma_5 \in \Sigma_0$ cannot be used due to their insufficient security level.
As for the required number of protections, we again let $u=2$. Then the objective here is to find a protection policy $\mathcal{P}:
Q \to \power(\Sigma_p)$ (if it exists) such that $Q_{s1}$ is $2-0-$securely reachable, $Q_{s2}$ is $2-1-$securely reachable, and moreover the index $i$ of $C_i$ is minimum (i.e. least cost).
\end{exmp}
\section{Usability Aware Secret Securing with Minimum Cost}
In this section, we address Problem~\ref{prob:ssmcp} (USCP). We start by characterizing the solvability of Problem~\ref{prob:ssmcp}, then present an algorithm to compute a solution, and finally illustrate the results using the running example (Example~2.1).
\subsection{Solvability of USCP}\label{subsec:solvability:uniform}
It is evident that if there are too few protectable events or the requirement for protection numbers and security levels is too high, then there might not exist a solution to Problem~\ref{prob:ssmcp}.
The following theorem provides a necessary and sufficient condition under
which there exists a solution to Problem~\ref{prob:ssmcp}.
\begin{thm}\label{thm:solvable:uniform}
Consider a system $\mathbf{G}$ in \cref{eq:plant:model} with a set of secret states $Q_s$, the cost level sets $C_i$ ($i \in [0,n]$) in (\ref{eq:C0})-(\ref{eq:Cn}), the required least number of protections $u \geq 1$, and the required lowest security level $v \geq 0$.
Problem~\ref{prob:ssmcp} is solvable (i.e.
there exists a protection policy $\mathcal{P}:
Q \to \power(\Sigma_p)$ such that $Q_s$ is $u-v-$securely reachable and the index $i$ of $C_i$ is minimum)
if and only if
either
\begin{equation}\label{eq:thm:solvable:uniform:condition:1}
\text{$Q_s$ is $u-0-$securely reachable w.r.t. $\tilde{\Sigma} = \Sigma(C_0)$};
\end{equation}
or there exists $i \in [v,n]$ such that
\begin{equation}\label{eq:thm:solvable:uniform:condition:2}
\begin{gathered}
\text{$Q_s$ is $u-v-$securely reachable w.r.t. $\tilde{\Sigma} = \bigcup^i_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$} \\
\sand \\
\text{$Q_s$ is not $u-v-$securely reachable w.r.t. $\tilde{\Sigma} = \bigcup^{i-1}_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$}.
\end{gathered}
\end{equation}
\end{thm}
Condition \cref{eq:thm:solvable:uniform:condition:1} means that in the special case where the required lowest security level $v =
0$, every system trajectory reaching the secret states in $Q_s$ contains at least $u$ protectable events in $\Sigma(C_0) \subseteq \Sigma_0$. This is the easiest case, and the index $0$ is minimum.
More generally, condition
\cref{eq:thm:solvable:uniform:condition:2} means that there exists an index $i \in [v, n]$ for which every system trajectory reaching the secret states in $Q_s$ contains at least $u$
protectable events in $\bigcup^i_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1} \subseteq \Sigma^{\geq v}_p$, but there exists at least one trajectory reaching $Q_s$ that contains fewer than $u$
protectable events in $\bigcup^{i-1}_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1} \subseteq \Sigma^{\geq v}_p$. That these two conditions in (\ref{eq:thm:solvable:uniform:condition:2}) simultaneously hold indicates that the index $i$ of the cost level sets $C_i$ is minimum.
Note that in \cref{eq:thm:solvable:uniform:condition:2} the set minus ``$\setminus \Sigma_{v-1}$'' is needed because $\Sigma(C_v) \subseteq \Sigma_{v-1} \cup \Sigma_{v}$ (as in (\ref{eq:SigmaC})), and the protectable events in $\Sigma_{v-1}$ do not satisfy the required security level $v$.
\begin{proof}
($\Leftarrow$) If condition~\cref{eq:thm:solvable:uniform:condition:1} holds,
i.e. $Q_s$ is $u-0-$securely reachable w.r.t. $\Sigma(C_0) \subseteq \Sigma_0$, then the index $0$ is evidently the smallest. In this case, there exists a protection policy $\mathcal{P} : Q \to \power(\Sigma(C_0))$ as a
solution for \cref{prob:ssmcp} using protectable events only in
$\Sigma(C_0) \subseteq \Sigma_0$ which satisfies the required security level $0$. Therefore, if \cref{eq:thm:solvable:uniform:condition:1}
holds, then \cref{prob:ssmcp} is solvable (for the special case $v=0$).
If
\cref{eq:thm:solvable:uniform:condition:2} holds, then $Q_s$ is
$u-v-$securely reachable w.r.t. $\bigcup^i_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$, and moreover the index $i$ of $C_i$ is minimum. The latter is because $Q_s$ is not $u-v-$securely reachable w.r.t. $\bigcup^{i-1}_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$ and
$\bigcup^{i-1}_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1} \subseteq \bigcup^{i}_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$.
In this case, there exists a protection policy $\mathcal{P} : Q \to \power(\bigcup^{i}_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1})$ as a solution for \cref{prob:ssmcp} using protectable
events in $\bigcup^{i}_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1} \subseteq \Sigma_p^{\geq v}$ which satisfies the required security level $v$. Therefore, if
\cref{eq:thm:solvable:uniform:condition:2} holds, then \cref{prob:ssmcp}
is solvable.
($\Rightarrow$) If \cref{prob:ssmcp} is solvable with the minimum index of $C_i$ being $i = 0$, then $Q_s$ is $u-0-$securely reachable w.r.t. $\Sigma(C_0)$. This is exactly condition~\cref{eq:thm:solvable:uniform:condition:1}.
If \cref{prob:ssmcp} is solvable with the minimum index of $C_i$ satisfying $v \leq i \leq n$, then
$Q_s$ is $u-v-$securely reachable w.r.t. $\bigcup^i_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$. Since the index $i$ is minimum, it indicates that
$Q_s$ is not $u-v-$securely reachable w.r.t. $\bigcup^{i-1}_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$. Therefore
\cref{eq:thm:solvable:uniform:condition:2} holds.
\end{proof}
\subsection{Policy Computation for USCP}\label{subsec:computation:uniform}
When \cref{prob:ssmcp} is solvable under the condition presented in Theorem~\ref{thm:solvable:uniform}, we design an algorithm to compute a solution, namely a protection policy.
To compute such a protection policy, our approach is to convert \cref{prob:ssmcp} (a security problem) to a corresponding control problem and adapt methods from supervisory control theory.
By this conversion, the sets of protectable events $\Sigma_p$ and
unprotectable events $\Sigma_{up}$ are interpreted as the sets of
\emph{controllable events} $\Sigma_c$ and \emph{uncontrollable events}
$\Sigma_{uc}$, respectively. Accordingly, a system $\mathbf{G}$ in
\cref{eq:plant:model} is changed to
\begin{equation}\label{eq:plant:control}
\mathbf{G} = (Q, \Sigma, \delta, q_0, Q_m)
\end{equation}
where $\Sigma = \Sigma_c \disjoint \Sigma_{uc}$ and $\Sigma_c =
\bigdisjoint_{i=0}^{n-1} \Sigma_i$. Recall from (\ref{eq:Sigmai}) that $\Sigma_i$ ($i =
0, \ldots, n-1$) denote the partition of protectable events in $\Sigma_p$ as
the index $i$ represents the security level (and the first source of cost); accordingly, here $\Sigma_i$ denote the partition of controllable events in $\Sigma_c$. Similar to (\ref{eq:geq-v}), for a given $v \geq 0$ write
\begin{align} \label{eq:Sigmacv}
\Sigma^{\geq v}_c := \bigdisjoint_{i=v}^{n-1} \Sigma_i.
\end{align}
In addition, protection policy
$\mathcal{P}: Q \to
\power(\Sigma_p)$ is changed to {\em control policy} $\mathcal{D}: Q \to
\power(\Sigma_c)$, which is a control decision (of a supervisor) specifying
which controllable events to disable at any given state.
More specifically, let $\mathbf{S}
= (X, \Sigma, \xi, x_0, X_m)$ be a supervisor for system ${\bf G} = (Q,\Sigma,\delta,q_0,Q_m)$ and assume without loss of generality that ${\bf S}$ is a subautomaton of
$\mathbf{G}$. The control policy $\mathcal{D}:Q \to
\power(\Sigma_c)$ is given by
\begin{equation}\label{eq:policy:control}
\mathcal{D}(q) \coloneqq \begin{dcases}
\{\sigma \in \Sigma_c \mid \neg \xi(q, \sigma)! \sand \delta(q, \sigma)!\}, & \text{if $q \in X$} \\
\emptyset, & \text{if $q \in Q \setminus X$}
\end{dcases}
\end{equation}
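A minimal sketch of deriving this control policy from a supervisor is given below, under the assumed encoding of $\delta$ and $\xi$ as sets of transition triples; an event is disabled at $q$ exactly when it is defined in ${\bf G}$ but not in ${\bf S}$.
\begin{verbatim}
# Illustrative sketch: the control policy of (eq:policy:control), with
# both delta (of G) and xi (of the subautomaton S) encoded as sets of
# transition triples and X the state set of S.
def control_policy(deltaG, deltaS, X, Sigma_c):
    D = {}
    defined_S = {(q, s) for (q, s, _) in deltaS}
    for (q, s, _) in deltaG:
        # disable s at q if it is defined in G but not in S
        if q in X and s in Sigma_c and (q, s) not in defined_S:
            D.setdefault(q, set()).add(s)
    return D   # states absent from D implicitly map to the empty set
\end{verbatim}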
Based on the above conversion, \cref{defn:u-reach:model} and
\cref{prob:ssmcp} are changed to the following definition and problem.
\begin{defn}[$u-v-$controllable reachability]\label{defn:u-reach:control}
Consider a system $\mathbf{G}$ in \cref{eq:plant:control} with a set of secret states $Q_s$, the (security) level sets $\Sigma_i$ ($i\in [0, n-1]$) in (\ref{eq:Sigmai}), and let $u \geq
1$, $v \geq 0$, and $\tilde{\Sigma}$ be a nonempty subset of $\Sigma_c^{\geq v}$ in (\ref{eq:Sigmacv}). We say that $Q_s$ is {\it reachable
with at least $u$ controllable events of (security) level at least $v$ w.r.t. $\tilde{\Sigma}$} (or simply $Q_s$ is $u-v-$controllably reachable) if the following condition holds:
\begin{equation}\label{eq:condition:u-reach:control}
(\forall s \in \Sigma^*) (\delta(q_0, s)! \sand \delta(q_0, s) \in
Q_s) \implies s \in \underbrace{\Sigma^\ast \tilde{\Sigma}
\Sigma^\ast \cdots \Sigma^\ast \tilde{\Sigma}
\Sigma^\ast}_{\text{$\tilde{\Sigma}$ appears $u$ times}}.
\end{equation}
\end{defn}
\begin{prob}[Usability Aware Reachability Control with Multiple Controllable
Events and Minimum Cost Problem, UCCP]\label{prob:rcmcp}
Consider a system $\mathbf{G}$ in \cref{eq:plant:control} with a set of secret states $Q_s$, the cost level sets $C_i$ ($i \in [0,n]$) in (\ref{eq:C0})-(\ref{eq:Cn}), and let $u \geq 1$, $v \geq 0$.
Find a control policy $\mathcal{D}:
Q \to \power(\Sigma_c)$ such that $Q_s$ is $u-v-$controllably reachable and the index $i$ of $C_i$ is minimum.
\end{prob}
The solvability condition of \cref{prob:rcmcp}, stated in the corollary below,
follows directly from \cref{thm:solvable:uniform} and the above presented conversion.
\begin{cor}\label{prop:solvable:uniform}
Consider a system $\mathbf{G}$ in \cref{eq:plant:control} with a set of secret states $Q_s$, the cost level sets $C_i$ ($i \in [0,n]$) in (\ref{eq:C0})-(\ref{eq:Cn}), the required least number of protections $u \geq 1$, and the required lowest (security) level $v \geq 0$.
Problem~\ref{prob:rcmcp} is solvable (i.e.
there exists a control policy $\mathcal{D}:
Q \to \power(\Sigma_c)$ such that $Q_s$ is $u-v-$controllably reachable and the index $i$ of $C_i$ is minimum)
if and only if
either
\begin{equation}\label{eq:prop:solvable:uniform:condition:1}
\text{$Q_s$ is $u-0-$controllably reachable w.r.t. $\tilde{\Sigma} = \Sigma(C_0)$};
\end{equation}
or there exists $i \in [v,n]$ such that
\begin{equation}\label{eq:prop:solvable:uniform:condition:2}
\begin{gathered}
\text{$Q_s$ is $u-v-$controllably reachable w.r.t. $\tilde{\Sigma} = \bigcup^i_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$} \\
\sand \\
\text{$Q_s$ is not $u-v-$controllably reachable w.r.t. $\tilde{\Sigma} = \bigcup^{i-1}_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$}.
\end{gathered}
\end{equation}
\end{cor}
When Problem~\ref{prob:rcmcp} is solvable (equivalently Problem~\ref{prob:ssmcp} is solvable), we present an algorithm to compute a control policy as a solution for Problem~\ref{prob:rcmcp}.
Such a control policy
specifies at least $u$ controllable events of (security) level at least $v$ to disable in every string from
the initial state $q_0$ to the secret state set $Q_s$.
This control policy will finally be converted back to a protection policy as a solution for Problem~\ref{prob:ssmcp} (our original security problem).
The algorithm that we design to solve Problem~\ref{prob:rcmcp} is presented on the next page (Algorithm~1 UCC$u$). In the following we explain the main ingredients and steps of this algorithm.
First, the inputs of Algorithm~1 are the system $\mathbf{G}$ in \cref{eq:plant:control}, a set of secret states $Q_s$, the least number of protections $u \geq 1$, and the least (security) level $v \geq 0$. Then Algorithm~1 will output $u$
supervisors $\mathbf{S}_0, \dots, \mathbf{S}_{u-1}$ for $\mathbf{G}$ (if they exist) as well as the minimum cost index $i_{\min}$. Each supervisor is computed by the UCC function (lines~14--24), and provides a different control policy such that every string
reaching secret states has at least one controllable event of (security) level at least $v$.
So in total, $\mathbf{S}_0, \dots, \mathbf{S}_{u-1}$ specify $u$ controllable events to disable in every string reaching $Q_s$.
To compute the first supervisor $\mathbf{S}_0$, at line~1 of Algorithm~1
we need to design the control specification $\mathbf{G}_K$. This is done by removing from ${\bf G}$ all the secret
states in $Q_s$ and the transitions to and from the removed states. Hence
\begin{equation}\label{eq:spec:uniform}
\mathbf{G}_K = (Q \setminus Q_s, \Sigma, \delta_K, q_0, Q \setminus Q_s)
\end{equation}
where $\delta_K = \delta \setminus \{(q, \sigma, q') \mid q \mbox{ or } q' \in Q_s,
\sigma \in \Sigma, \delta(q,\sigma)!, \delta(q,\sigma)=q'\}$.\footnote{Note that in real systems, secret states should still be reachable. Even though the computed supervisors specify
which controllable events to disable in the control context, we consider
the {\em protection} of these specified events so that secret states are still reachable
but protected. Our view is that in real systems, it is not desirable to disable controllable events and make secret states unreachable, because it would prevent regular users from ever accessing these secret states as well.} We remark that for ${\bf G}_K$ we let all of its states be marked; this is because we do not want to introduce extra control actions merely to ensure nonblocking behavior.
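The construction of ${\bf G}_K$ amounts to deleting states and their incident transitions, e.g. as in the following sketch (same assumed encodings as before).
\begin{verbatim}
# Illustrative sketch: the specification automaton G_K of
# (eq:spec:uniform): delete the secret states and all transitions to
# and from them; every remaining state is marked.
def make_spec(Q, delta, q0, Qs):
    Qs = set(Qs)
    QK = [q for q in Q if q not in Qs]
    deltaK = {(q, s, qp) for (q, s, qp) in delta
              if q not in Qs and qp not in Qs}
    return (QK, deltaK, q0, QK)   # marker set = all remaining states
\end{verbatim}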
\begin{algorithm}[htp]
\caption{UCC$u$}
\label{alg:rcmc-u}
\begin{algorithmic}[1]
\Require{System $\mathbf{G}$, secret state set $Q_s$, protection number $u$, security level $v$}
\Ensure{Supervisors $\mathbf{S}_0$, $\mathbf{S}_1$, \dots, $\mathbf{S}_{u-1}$, minimum cost index $i_{\min}$}
\State{$\mathbf{G}_0 = (Q,\Sigma^0,\delta^0,q_0, Q_m) = \mathbf{G}, \mathbf{G}_{K,0} = \mathbf{G}_K$ as in (\ref{eq:spec:uniform})}%
\For{$j = 0, 1, \dots, u-1$}%
\State{$(\mathbf{S}_j, i_j) =$ \Call{UCC}{$\mathbf{G}_j$, $\mathbf{G}_{K,j}$, $v$}}%
\If{$\mathbf{S}_j$ is nonempty}%
\State{Derive $\mathcal{D}_j$ from $\mathbf{S}_j$ as in
\cref{eq:policy:control}}%
\State{Form $\mathbf{G}_{j+1} = (Q, \Sigma^{j+1}, \delta^{j+1}, q_0, Q_m)$ from $\mathbf{G}_j$ and $\mathcal{D}_j$ as in \cref{eq:plant:relabeled}}
\State{$\delta_K^{j+1} = \delta^{j+1} \setminus \{(q, \sigma, q') \mid q \mbox{ or } q' \in Q_s,
\sigma \in \Sigma^{j+1}, \delta^{j+1}(q,\sigma)=q'\}$}
\State{$\mathbf{G}_{K,j+1} = (Q \setminus Q_s, \Sigma^{j+1}, \delta_K^{j+1}, q_0,Q \setminus Q_s)$}
\Else%
\State\Return{Empty supervisors, index $-1$}%
\EndIf%
\EndFor%
\State\Return{$\mathbf{S}_0$, $\mathbf{S}_1$, \dots, $\mathbf{S}_{u-1}$, $i_{\min} = i_{u-1}$}
\Statex{}
\Function{UCC}{$\mathbf{G}$, $\mathbf{G}_K$, $v$}%
\State{$K = L(\mathbf{G}_K)$}%
\For{$i = v, v+1, \dots, n$}%
\State{$\Gamma = \bigcup^i_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$}%
\State{Compute a supervisor $\mathbf{S}$ s.t. $L(\mathbf{S}) = \supc(K)$
w.r.t. ${\bf G}$ and $\Gamma$}%
\If{$\mathbf{S}$ is nonempty}%
\State\Return{$(\mathbf{S}, i)$}%
\EndIf%
\EndFor%
\State\Return{(empty supervisor, index $-1$)}%
\EndFunction%
\end{algorithmic}
\end{algorithm}
\begin{exmp}\label{exmp:spec:uniform}
\begin{figure}[htp]
\centering
\adjustbox{}{
\subimport{figures/}{spec-uniform.tex}
}
\caption{Specification automaton $\mathbf{G}_K$}
\label{fig:exmp:spec:uniform}
\end{figure}
Displayed in \cref{fig:exmp:spec:uniform} is the
specification automaton ${\bf G}_K$ derived from the system ${\bf G}$ in \cref{exmp:model} and the secret state set $Q_s = \{q_7, q_8, q_{10}\}$. To design $\mathbf{G}_K$, secret states
in $Q_s = \{q_7, q_8, q_{10}\}$ and transitions $(q_5, \sigma_7, q_7)$,
$(q_5, \sigma_8, q_8)$, $(q_7, \sigma_8, q_8)$, $(q_8, \sigma_9, q_9)$
and $(q_9, \sigma_{10}, q_{10})$ are removed from $\mathbf{G}$ in
\cref{fig:exmp:model:plant} and all the states of $\mathbf{G}_K$ are marked.
\end{exmp}
With ${\bf G}_K$ constructed, line~2 of Algorithm~1 starts from $j=0$ and line~3 calls the UCC function (with arguments ${\bf G}_0 = {\bf G}$, ${\bf G}_{K,0}={\bf G}_K$, $v$) to compute the first supervisor ${\bf S}_0$ and the minimum cost index $i_0$.
To this end, several
standard concepts of supervisory control theory (SCT) \cite{wonham2019supervisory,Cai:2020,Wonham:2018} are employed and briefly reviewed below.
Consider a system $\mathbf{G} =(Q,\Sigma=\Sigma_c \cup \Sigma_{uc}, \delta, q_0, Q_m)$ in
\cref{eq:plant:control}, and let $K = L(\mathbf{G}_K) \subseteq L(\mathbf{G})$
be a specification language derived from the specification automaton
$\mathbf{G}_K$ in (\ref{eq:spec:uniform}).
For a subset of
the controllable events $\Gamma (\subseteq \Sigma_c)$,
$K$ is said to be {\em controllable} with respect to $\mathbf{G}$ and
$\Gamma$ if $\prefix{K}(\Sigma \setminus \Gamma) \cap L(\mathbf{G}) \subseteq \prefix{K}$
where $\prefix{K}$ is the {\em prefix closure} of $K$.
We denote by $\mathcal{C}(K) \coloneqq \{K' \subseteq K \mid
\prefix{K'}(\Sigma \setminus \Gamma) \cap L(\mathbf{G}) \subseteq
\prefix{K'}\}$ the family of all controllable sublanguages of $K$ with respect to
$\mathbf{G}$ and $\Gamma$, and by $\supc(K) \coloneqq \bigcup\{K' \mid K' \in
\mathcal{C}(K)\}$ the supremal controllable sublanguage of $K$ with respect
to $\mathbf{G}$ and $\Gamma$ (which is known to always exist).
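For the prefix-closed specifications used here, $\supc(K)$ may be computed by the standard fixpoint iteration that removes states at which some event outside $\Gamma$ exits the specification. The following sketch (our own illustrative encoding, with ${\bf G}_K$ a subautomaton of ${\bf G}$) realizes this.
\begin{verbatim}
# Illustrative sketch: supC(K) for the prefix-closed K = L(G_K), with
# G_K a subautomaton of G and Gamma the events that may be disabled
# (so Sigma \ Gamma plays the role of the uncontrollable events).
def supcon_closed(deltaG, deltaK, QK, q0, Gamma):
    QK = set(QK)
    changed = True
    while changed:
        changed = False
        for (q, s, qp) in deltaG:
            # an event outside Gamma cannot be disabled; if it exits
            # the current specification at a kept state, remove q
            if q in QK and s not in Gamma and \
               ((q, s, qp) not in deltaK or qp not in QK):
                QK.discard(q)
                changed = True
    if q0 not in QK:
        return None               # supC(K) is empty
    deltaS = {(q, s, qp) for (q, s, qp) in deltaK
              if q in QK and qp in QK}
    return (QK, deltaS)           # restrict to reachable states if needed
\end{verbatim}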
\begin{lem}\label{lem:supc}
(cf.~\cite{wonham2019supervisory}) Consider a plant $\mathbf{G}=(Q,\Sigma=\Sigma_c \cup \Sigma_{uc}, \delta, q_0, Q_m)$ in
\cref{eq:plant:control} and a specification language $K \subseteq L(\mathbf{G})$. It holds that
\begin{equation}\label{eq:lem:supc}
\supc(K) = \emptyset~\text{(w.r.t. $\mathbf{G}$ and $\Sigma_{uc}$)}
\iff (\exists s \in \Sigma_{uc}^*) s \in L(\mathbf{G}) \setminus K.
\end{equation}
\end{lem}
From \cref{lem:supc} and the construction of $\mathbf{G}_K$ in
\cref{eq:spec:uniform}, letting $K = L(\mathbf{G}_K)$ and $i \in [v,n]$, we know that the first supervisor
$\mathbf{S}_0 = \supc(K)$ (with respect to $\mathbf{G}$ in
\cref{eq:plant:control} and $\bigcup^i_{l=v} \Sigma(C_l) \setminus \Sigma_{v-1}$) is nonempty if and only if every
string reaching the secret states in $Q_s$ from the initial state $q_0$ has
at least one controllable event belonging to $\bigcup^i_{l=v} \Sigma(C_l) \setminus \Sigma_{v-1}$. In other words,
$\supc(K) \neq \emptyset$ (with respect to $\mathbf{G}$ and $\bigcup^i_{l=v} \Sigma(C_l) \setminus \Sigma_{v-1}$)
if and only if
\begin{align} \label{eq:nonemptysup}
\left( \forall s \in \left(\Sigma \setminus \left(\bigcup^i_{l=v} \Sigma(C_l) \setminus \Sigma_{v-1}\right) \right)^* \right) \delta(q_0,
s) \not\in Q_s.
\end{align}
The computation of ${\bf S}_0$ is carried out in lines~15--22 of Algorithm~1. If a nonempty ${\bf S}_0$ is obtained (line~19; condition~(\ref{eq:nonemptysup}) holds), then it is returned together with the current index $i$ of the cost level sets (line~20). Since the index is incrementally increased (line~16), we know that the index $i$ in line~20 is minimum (for this is the first time that ${\bf S}_0$ is nonempty).
Once a nonempty supervisor ${\bf S}_j$ ($j \geq 0$) is obtained (line~4), Algorithm~1 proceeds to compute the next supervisor ${\bf S}_{j+1}$ (until we acquire $u$ nonempty supervisors).
To ensure that each supervisor
provides a different control
policy (disabling different transitions) so as to meet the requirement of $u$ protections, we need to change the status of those transitions already disabled by ${\bf S}_j$ from controllable to uncontrollable, so that the next supervisor ${\bf S}_{j+1}$ is forced to disable other controllable transitions.
This status change is done by event relabeling. Specifically, let $\mathbf{G}_{j} = (Q, \Sigma^{j} = \Sigma_{uc,j} \dot\cup \Sigma_{c,j}, \delta^{j}, q_0, Q_m)$ be the $j$th system model and
$\mathcal{D}_j$ be the control policy in \cref{eq:policy:control} corresponding to supervisor ${\bf S}_j$.
Then the set of controllable
transitions specified (or disabled) by $\mathcal{D}_j$ is
\begin{align*}
\delta_{\mathcal{D}_j} := \{(q, \sigma, q') \mid q \in Q \sand \sigma \in
\mathcal{D}_j(q) \sand q' = \delta^j(q,\sigma)\}.
\end{align*}
We relabel the above transitions and obtain
\begin{align*}
\delta'_{\mathcal{D}_j} := \{(q, \sigma', q') \mid (q, \sigma, q') \in \delta_{\mathcal{D}_j} \sand \sigma' \notin \Sigma^j \}.
\end{align*}
Moreover, we designate these relabeled transition as uncontrollable, so the new uncontrollable event set is:
\begin{align*}
\Sigma_{uc,j+1} = \Sigma_{uc,j} \disjoint{} \{\sigma' \mid (q,
\sigma', q') \in \delta'_{\mathcal{D}_j}\}.
\end{align*}
On the other hand, the new controllable event set is:
\begin{align*}
\Sigma_{c,j+1} = \Sigma_{c,j} \setminus \{\sigma \mid (\forall q \in Q) \delta^j(q,\sigma)! \sand \delta^j(q,\sigma) = q' \Rightarrow (q,
\sigma, q') \in \delta_{\mathcal{D}_j}\}.
\end{align*}
In words, a controllable event all of whose transitions are specified by $\mathcal{D}_j$ (and therefore relabeled) no longer exists in the new model and is consequently removed from the controllable event set.
Therefore we obtain the new system model
\begin{equation}\label{eq:plant:relabeled}
\mathbf{G}_{j+1} = (Q, \Sigma^{j+1}, \delta^{j+1}, q_0, Q_m)
\end{equation}
where
\begin{align}
\Sigma^{j+1} &= \Sigma_{uc,j+1} \disjoint{} \Sigma_{c,j+1}\\
\delta^{j+1} &= (\delta^j \setminus \delta_{\mathcal{D}_j}) \disjoint
\delta'_{\mathcal{D}_j}.
\end{align}
The above is carried out in lines~5--6 of Algorithm~1. Moreover, lines~7--8 update the specification model ${\bf G}_{K,j+1}$ similar to (\ref{eq:spec:uniform}).
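The relabeling step may be sketched as follows; the fresh labels are our own illustrative encoding, and any labels outside $\Sigma^j$ would do.
\begin{verbatim}
# Illustrative sketch: relabel the transitions disabled by policy D_j,
# as in (eq:plant:relabeled).  Fresh labels (here tuples, assumed to be
# outside Sigma) are made uncontrollable, and events left with no
# enabled transition are dropped from the controllable event set.
def relabel(delta, Sigma_c, Sigma_uc, Dj):
    new_delta, new_uc = set(), set(Sigma_uc)
    survives = set()   # events still labeling some kept transition
    k = 0
    for (q, s, qp) in delta:
        if s in Dj.get(q, set()):
            sp = (s, "relabeled", k)
            k += 1
            new_delta.add((q, sp, qp))
            new_uc.add(sp)
        else:
            new_delta.add((q, s, qp))
            survives.add(s)
    return new_delta, set(Sigma_c) & survives, new_uc
\end{verbatim}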
With the updated system ${\bf G}_{j+1}$ and specification ${\bf G}_{K,j+1}$, Algorithm~1 again calls the UCC function (line~3) to compute the next supervisor ${\bf S}_{j+1}$ and the corresponding minimum cost index $i_{j+1}$. This process continues until $j=u-1$, unless an empty supervisor is returned by the UCC function. In the latter case, Algorithm~1 returns empty supervisors and index $-1$.
If \cref{alg:rcmc-u} succeeds in computing $u$ nonempty supervisors ${\bf S}_0, \ldots, {\bf S}_{u-1}$, then these supervisors will be returned, together with the minimum cost index $i_{\min} = \max(i_0,\ldots,i_{u-1})$ (line~13). It is evident from the above construction that the inequality chain $v \leq i_0 \leq \cdots \leq i_{u-1} \leq n$ holds; hence $i_{\min} = i_{u-1}$.
Let $\mathcal{D}_j$ be the control policy of $\mathbf{S}_j$ ($j=0,\ldots,u-1$). Then define the overall control policy $\mathcal{D}: Q \to \power(\Sigma_c)$ by taking the union of the controllable events specified by individual $\mathcal{D}_j$ at every state, namely
\begin{equation}\label{eq:merge:uniform}
\mathcal{D}(q) = \bigcup_{j=0}^{u-1}\mathcal{D}_j(q),\quad q \in Q.
\end{equation}
Since each control policy $\mathcal{D}_j$ ($j \in [0, u-1]$) specifies controllable events such
that every string reaching secret states has at least one disabled event, $\mathcal{D}$ in \cref{eq:merge:uniform} specifies at least
$u$ controllable events to disable in every string reaching secret states
from the initial state. Moreover, it follows from line~16 of Algorithm~1 that the (security) levels of all these $u$ events are at least $v$.
The time complexity of Algorithm~1 is $O(u(n-v)|Q|^2)$, where $u$ is from line~2, $n-v$ from line~16, and $|Q|^2$ from line~18.
The correctness of Algorithm~1 is asserted in the following proposition.
\begin{prop}\label{prop:solution:urcmcp}
\cref{alg:rcmc-u} (with inputs $\mathbf{G}$, $Q_s$, $u$ and $v$) returns
$u$ nonempty supervisors and minimum cost index $i_{\min} (\in [v,n])$ if and only if \cref{prob:rcmcp} is solvable.
\end{prop}
\begin{proof}
By the aforementioned constructions in \cref{alg:rcmc-u}, in particular line~16 (incrementally increasing the index of cost level sets) and line~17 ($\bigcup^{i}_{l = v} \Sigma(C_l) \setminus \Sigma_{v-1}$ monotonically becoming larger as index $i$ increases),
Algorithm~1 returns
$u$ nonempty supervisors and minimum cost index $i_{\min} \in [v,n]$ if and only if either of the two conditions (\ref{eq:prop:solvable:uniform:condition:1}), (\ref{eq:prop:solvable:uniform:condition:2})
holds. By Corollary~\ref{prop:solvable:uniform}, the latter is a necessary and sufficient condition for the solvability of \cref{prob:rcmcp}. Therefore our conclusion ensues.
\end{proof}
From the derived control policy $\mathcal{D}$ in \cref{eq:merge:uniform}, a solution for \cref{prob:ssmcp}, namely a protection policy
$\mathcal{P}: Q \to \power(\Sigma_p)$, is obtained by converting the
controllable events back to protectable events. In terms of $\mathcal{P}$, we
interpret the events disabled by $\mathcal{D}$ as {\em protected events}.
Finally, we state the main result in this section, which provides a solution to our original security protection problem USCP (\cref{prob:ssmcp}).
\begin{thm}
Consider a system $\mathbf{G}$ in \cref{eq:plant:model} with a set of secret states $Q_s$, the cost level sets $C_i$ ($i \in [0,n]$) in (\ref{eq:C0})-(\ref{eq:Cn}), the required least number of protections $u \geq 1$, and the required lowest security level $v \geq 0$.
If
\cref{prob:ssmcp} is solvable, then the protection policy $\mathcal{P}$
derived from $\mathcal{D}$ in \cref{eq:merge:uniform} is a solution.
\end{thm}
\begin{proof}
Suppose that \cref{prob:ssmcp} is solvable. Then \cref{prob:rcmcp} is
also solvable by conversion of protectable events to controllable events. Then
by \cref{prop:solution:urcmcp}, \cref{alg:rcmc-u} returns $u$ nonempty supervisors and the minimum cost index $i_{\min} \in [v,n]$. Based on these $u$ supervisors, control policies $\mathcal{D}_0, \dots, \mathcal{D}_{u-1}$ may be derived as in \cref{eq:policy:control}. Hence, a combined control policy $\mathcal{D}$ in \cref{eq:merge:uniform} is obtained. Due to the event relabeling in (\ref{eq:plant:relabeled}), each control policy uniquely
specifies transitions in $\mathbf{G}$ to disable. Also it follows from the specifications
$\mathbf{G}_{K,0}, \dots, \mathbf{G}_{K,u-1}$ in \cref{alg:rcmc-u} that
$Q_s$ is $1$-$v$-controllably reachable under each of $\mathcal{D}_0, \dots,
\mathcal{D}_{u-1}$. Therefore, under control policy $\mathcal{D}$, $Q_s$
is $u$-$v$-controllably reachable. Hence, the control policy $\mathcal{D}$ is a solution for
\cref{prob:rcmcp}. Consequently, from the inverse conversion of controllable events back to protectable events, the protection
policy $\mathcal{P}$ derived from $\mathcal{D}$ is a solution for
\cref{prob:ssmcp}.
\end{proof}
\subsection{Running Example}\label{subsec:example:uniform}
Let us again use \cref{exmp:model} to demonstrate our developed solution via Algorithm~1 for
\cref{prob:ssmcp}.
Consider the system $\mathbf{G}$ in \cref{fig:exmp:model:plant}, with the secret state set $Q_s = \{q_7, q_8, q_{10}\}$, the security level sets $\Sigma_i$ ($i \in [0, 3]$) in (\ref{eq:exSigmai}),
and the cost level sets $C_i$ ($i\in [0, 4]$) in (\ref{eq:exCi}). Let $u=2$ and $v=0$; namely it is required that at least $2$ events be protected for every system trajectory (from the initial state) that may reach a secret state in $Q_s$, and the least security level is $0$. We demonstrate how to use Algorithm~1 to compute a protection policy $\mathcal{P}:
Q \to \power(\Sigma_p)$ and the minimum index $i$ of $C_i$ as a solution for \cref{prob:ssmcp}.
First, convert protectable events to controllable events such that
\begin{align*}
\Sigma_c = \{\sigma_0,\sigma_1,\sigma_5,\sigma_6,\sigma_7,\sigma_8,\sigma_9,\sigma_{10}\}.
\end{align*}
Accordingly the uncontrollable event set $\Sigma_{uc} = \{\sigma_2,\sigma_3,\sigma_4\}$.
Then input Algorithm~1 with the converted
system model $\mathbf{G}$, $Q_s$, $u = 2$ and $v = 0$.
In the first iteration ($j=0$), system $\mathbf{G}_0 = {\bf G}$ in \cref{fig:exmp:model:plant} and specification ${\bf G}_{K,0}=\mathbf{G}_K$ in
\cref{fig:exmp:spec:uniform}. Then the RCMC function is called to compute the first supervisor $\mathbf{S}_0$. It is verified that when $i=0$ (line~16), the supervisor ${\bf S}$ is empty (line~18), whereas when $i=1$, the supervisor ${\bf S}$ is nonempty. Thus this nonempty supervisor is returned as ${\bf S}_0$ and the index $1$ is returned as $i_0$ (line~20).
The control policy
$\mathcal{D}_0$ corresponding to $\mathbf{S}_0$ is:
\begin{align*}
&\mathcal{D}_0(q_1) = \{\sigma_6\},\quad \mathcal{D}_0(q_2) = \{\sigma_5\},\quad \mathcal{D}_0(q_5) = \{\sigma_7, \sigma_8\},\\
&(\forall q \in Q \setminus \{q_1,q_2,q_5\}) \mathcal{D}_0(q) = \emptyset.
\end{align*}
\begin{figure}[htp]
\centering
\adjustbox{}{%
\subimport{figures/uniform/}{example-d0.tex}
}
\caption{Control policy $\mathcal{D}_0$ of $\mathbf{S}_0$}\label{fig:poicy0:example:uniform}
\end{figure}
\cref{fig:poicy0:example:uniform} depicts the control policy $\mathcal{D}_0$
over the plant $\mathbf{G}$ in \cref{fig:exmp:model:plant}, indicating the
disabled transitions by ``\faTimes''.
We remark that since the lowest security level set is $\Sigma_0 = \{\sigma_0, \sigma_1, \sigma_5\}$, it would have been sufficient to disable $\sigma_0,\sigma_1$ at $q_0$ to satisfy the required $v=0$. However, disabling $\sigma_1$ would simultaneously affect regular users' access to the (non-secret) marker states $q_3,q_4$, which is deemed too costly in this example setting (the threshold for the number of affected non-secret marker states is $T=2$). This observation makes it evident that taking the cost of usability into account generally requires the administrator to adopt a different protection policy.
After obtaining $\mathcal{D}_0$, \cref{alg:rcmc-u} proceeds to relabel
the disabled transitions by $\mathcal{D}_0$ as follows:
\begin{align*}
\delta_{\mathcal{D}_0} &= \{(q_1, \sigma_6, q_6), (q_2, \sigma_5, q_6), (q_5, \sigma_7, q_7), (q_5, \sigma_8, q_8)\} \\
\delta_{\mathcal{D}_0}' &= \{(q_1, \sigma'_6, q_6), (q_2, \sigma'_5, q_6), (q_5, \sigma'_7, q_7), (q_5, \sigma'_8, q_8)\}.
\end{align*}
The relabeled events are designated as uncontrollable; thus the new uncontrollable event set is
\begin{align*}
\Sigma_{uc,1} = \Sigma_{uc} \disjoint{}\{\sigma'_5, \sigma'_6, \sigma'_7, \sigma'_8\}.
\end{align*}
On the other hand, the new controllable event set is
\begin{align*}
\Sigma_{c,1} = \Sigma_{c} \setminus \{\sigma_6, \sigma_7\}.
\end{align*}
Note that events $\sigma_5, \sigma_8$ remain in $\Sigma_{c,1}$ since they have other instances (of transitions) that are not disabled by $\mathcal{D}_0$.
From the above, the new system becomes $\mathbf{G}_1 = (Q, \Sigma^1, \delta^1, q_0, Q_m)$ where
\begin{align*}
\Sigma^1 = \Sigma_{uc,1} \disjoint \Sigma_{c,1},\quad
\delta^1 = (\delta \setminus \delta_{\mathcal{D}_0}) \disjoint \delta_{\mathcal{D}_0}'
\end{align*}
and the new specification automaton becomes
\begin{align*}
\mathbf{G}_{K,1} = (Q \setminus Q_s, \Sigma^1, \delta^1_K, q_0, Q \setminus Q_s)
\end{align*}
where
\begin{align*}
\delta^1_K = \delta^1 \setminus \{(q, \sigma, q') \mid q \mbox{ or } q' \in Q_s, \sigma \in \Sigma^1, \delta^1(q,\sigma)=q'\}.
\end{align*}
The new system $\mathbf{G}_1$ and specification $\mathbf{G}_{K,1}$ are displayed in
\cref{fig:relabeled:example:uniform} and \cref{fig:spec1:example:uniform},
respectively.
\begin{figure}[htp]
\centering
\adjustbox{}{%
\subimport{figures/uniform/}{example-relabeled.tex}
}
\caption{Relabeled system $\mathbf{G}_1$}\label{fig:relabeled:example:uniform}
\end{figure}
\begin{figure}[htp]
\centering
\adjustbox{}{%
\subimport{figures/uniform/}{example-spec1.tex}
}
\caption{Updated specification $\mathbf{G}_{K,1}$}
\label{fig:spec1:example:uniform}
\end{figure}
With $\mathbf{G}_1$ and $\mathbf{G}_{K,1}$, \cref{alg:rcmc-u} in the second iteration ($j=1$) again calls the RCMC function to compute the second supervisor $\mathbf{S}_1$. Like in the first iteration, when $i=0$ (line~16) the supervisor ${\bf S}$ is empty (line~18), whereas when $i=1$ the supervisor ${\bf S}$ is nonempty. Thus this nonempty supervisor is returned as ${\bf S}_1$ and the index $1$ is returned as $i_1$ (line~20).
The control policy
$\mathcal{D}_1$ corresponding to $\mathbf{S}_1$ is:
\begin{align*}
\mathcal{D}_1(q_0) = \{\sigma_0, \sigma_1\},\quad
(\forall q \in Q \setminus \{q_0\}) \mathcal{D}_1(q) = \emptyset.
\end{align*}
By now Algorithm~1 has succeeded in computing two nonempty supervisors. Since $u=2$, Algorithm~1 terminates and returns ${\bf S}_0$, ${\bf S}_1$, and the minimum cost index $i_{\min}=i_1=1$.
Now we combine the two corresponding control policies into $\mathcal{D}$ as follows:
\begin{align*}
\mathcal{D}(q) = \begin{dcases}
\{\sigma_0, \sigma_1\}, & \text{if $q = q_0$} \\
\{\sigma_6\}, & \text{if $q = q_1$} \\
\{\sigma_5\}, & \text{if $q = q_2$} \\
\{\sigma_7,\sigma_8\}, & \text{if $q = q_5$} \\
\emptyset, & \text{if $q \in Q \setminus \{q_0, q_1,q_2, q_5\}$}
\end{dcases}
\end{align*}
This $\mathcal{D}$ is a solution of \cref{prob:rcmcp}.
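For concreteness, this union can be reproduced by the following small computation; this is our own illustration, writing $\sigma_k$ as \texttt{sk}.
\begin{verbatim}
# Combining D0 and D1 from the running example (illustration only).
D0 = {"q1": {"s6"}, "q2": {"s5"}, "q5": {"s7", "s8"}}
D1 = {"q0": {"s0", "s1"}}
D = {q: D0.get(q, set()) | D1.get(q, set())
     for q in ["q0", "q1", "q2", "q5"]}
# D == {'q0': {'s0', 's1'}, 'q1': {'s6'},
#       'q2': {'s5'}, 'q5': {'s7', 's8'}}
\end{verbatim}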
Finally, by inverse conversion of controllable events back to protectable events we obtain a corresponding protection policy
$\mathcal{P}$ as a solution of the original \cref{prob:ssmcp}.
\cref{fig:solution:example:uniform} illustrates this protection policy $\mathcal{P}$, where
``\faLock'' means the transitions that need to be ``protected''.
Observe that based on this protection policy $\mathcal{P}$,
every string from $q_0$ that can reach the secret states in $Q_s$ has at least two protected events in $\Sigma(C_0) \cup \Sigma(C_1) \subseteq \Sigma_0 \cup \Sigma_1$. Thus the least number of protections $u=2$ and the lowest security level $v=0$ are satisfied; moreover, the minimum cost index is $i_{\min} =1$.
\begin{figure}[htp]
\centering
\adjustbox{}{%
\subimport{figures/uniform/}{example-solution.tex}
}
\caption{Protection policy $\mathcal{P}$ for $\mathbf{G}$}
\label{fig:solution:example:uniform}
\end{figure}
For this example, the protection of each event specified by the policy $\mathcal{P}$ may be implemented as follows:
\begin{itemize}
\item $\sigma_0$, $\sigma_1$: setting up a password on each account of the
regular user and the administrator.
\item $\sigma_5$: setting up a password for launching the application.
\item $\sigma_6$: setting up one-time password authentication.
\item $\sigma_7$, $\sigma_8$: setting up fingerprint authentication.
\end{itemize}
\section{Introduction}\label{sec:introduction}
In real networked systems, risks and threats due to cybersecurity breach are increasingly prominent. Effectively protecting systems so that confidential information remains undisclosed to adversarial access has become an indispensable system design requirement \cite{P.Barrett2018, Brooks2018}.
Recently cyber-physical systems (CPS) have emerged as a general modeling framework for real networked systems consisting of both physical and computational components.
CPS security issues have attracted much attention in the literature \cite{Hoffman:2009,Teixeira:2012,Modi:2013,Pasqualetti:2015}. For example, \cite{Teixeira:2012} discusses several attack scenarios with a typical architecture of networked control systems.
Focusing primarily on the abstracted level of dynamic systems, the research community of discrete-event systems (DES) has actively studied a number of security related problems.
An earlier and widely investigated problem is {\em opacity} (e.g. \cite{Lin:2011aut,Hadjicostis:2011tase,Lafortune:2018arc,Toni:2017tacSO}). This is a system property under partial observation such that an intruder cannot infer a given set of {\em secrets} by (passively) observing the system behavior. Depending on the definitions of secrets, opacity takes different forms. Recent work extends opacity notions to networked, nondeterministic settings as well as Petri net models (e.g. \cite{Yin:2019,Xie:2020,Lan:2020}).
Another well studied problem is {\em fault-tolerance} and {\em attack-resilience} (e.g. \cite{Moor:2016,Fritz:2018,Lin:2019,Paape:2020,Yao:2020}). This is a design requirement that a supervisory controller should remain (reasonably) operational even after faults occur in the system or the system is under malicious attacks.
\emph{Intrusion detection} is another problem that has recently attracted much interest (e.g. \cite{Lafortune:2018aut,Zhang:2018wodes,Agarwal:2019smcConf,Gao:2019smcConf,Goes:2020}). In this problem, the aim of the system administrator is to detect invasion of intruders by identifying abnormal behaviors in the system; if invasion is detected, an alarm can be set off before any catastrophic damage can be done by intruders.
From a distinct perspective, in our previous work a {\em minimum cost secret protection} problem is introduced \cite{KaiCai:2018Japan,KaiCai:2019cdc,Ma:2020,Matsui:2021}. This problem is concerned with the scenario that the system contains sensitive information or critical components to which attackers want to gain access, and attackers may be able to observe all events and disguise themselves as regular users without being detected. Then the system administrator is required to protect the sensitive information or critical components with proper security levels, while practically balancing this requirement against the costs associated with the implementation and maintenance of the adopted protection methods.
In this paper, we make two important generalizations of the minimum cost secret protection problem. First, we take into account the system's {\em usability}, which means regular users' convenience in using the various services and functions provided by the system. These services and functions for regular users are often different from the sensitive information or critical components that need to be protected. However, bad choices of protection points/locations may simultaneously affect access to services/functions by regular users.
For example, when setting up a password to protect a user's credit card information, it is not reasonable that the user should have to input the same password in order to access every website or file. If the system's usability is significantly reduced owing to setting up too many protections at inappropriate locations, users may stop using the system, and this can be costly (to different extents depending on specific situations/applications). Accordingly, we formulate usability as another source of protection cost, in addition to the implementation/maintenance cost of protection methods (considered in previous work).
The second extension to the minimum cost secret protection problem is that on top of the usability consideration, we further differentiate sensitive information and critical components (or simply secrets) with distinct degrees of importance. This is a typical situation in practice; for instance, in e-commerce, customers' email addresses and credit card numbers are both sensitive information, but it is common that the latter are deemed more important and expected to be protected with stronger measures.
Accordingly, we formulate heterogeneous secrets by a partition on the set of all secrets, and require that more important secrets be protected using more secure methods (while system usability still needs to be balanced).
The main contributions of this work are summarized as follows.
\begin{itemize}
\item A novel concept of system usability is introduced and formulated. This notion was absent in our previous work \cite{KaiCai:2018Japan,KaiCai:2019cdc,Ma:2020,Matsui:2021}, and to the best of our knowledge is new in the DES security literature. Roughly speaking, the formulation of usability is based on counting the number of affected services/functions provided to regular users when a protection is implemented at a certain location, and comparing this number to a prescribed threshold to determine if such a protection is too costly.
\item A new usability-aware minimum cost secret protection problem is formulated, its solvability condition characterized, and a solution algorithm designed. In contrast to the problem without usability consideration \cite{KaiCai:2018Japan,KaiCai:2019cdc,Ma:2020,Matsui:2021}, in our problem less secure protection methods that significantly undermine usability may be just as costly as more secure methods that make little impact on usability. This new feature due to usability makes our problem more challenging because security levels and cost levels of the same protection methods are generally different, and hence need to be treated separately (security levels and cost levels are treated as the same in \cite{KaiCai:2018Japan,KaiCai:2019cdc,Ma:2020,Matsui:2021} since usability is not considered).
\item A new minimum cost secret protection problem featuring both usability awareness and heterogeneous secrets is formulated, its solvability condition characterized, and a solution algorithm developed. Not only are the formulated problem and developed solution algorithm new as compared to the existing literature, but this problem also covers a general and practical scenario in the context of secret protection.
\end{itemize}
The rest of this paper is organized as follows. Section~2 introduces system model and definitions of cost; Section~3 formulates two usability aware minimum cost secret protection problems; Section~4 solves the first problem in which all secrets are deemed equally important, while Section~5 solves the second problem in which the secrets have different importance; finally in Section~6 we state our conclusions and future work.
\section{Conclusions}\label{sec:conclusions}
We have studied a cybersecurity problem of protecting a system's secrets with multiple protections and a required security level, while minimizing the associated cost due to implementation/maintenance of these protections as well as the affected system usability. Two usability-aware minimum cost secret protection problems have been formulated; the first one considers secrets of equal importance, whereas the second considers heterogeneous secrets. In both cases, a necessary and sufficient condition that characterizes problem solvability has been derived, and when the condition holds, a solution algorithm has been developed. Finally, we have demonstrated the effectiveness of our solutions with a running example.
In future work, we aim to extend the usability-aware secret protection problem to the setting of decentralized systems (which are typical in CPS), and develop efficient distributed protection policies. Other directions of extension from a broader perspective include generalizing the system model from a deterministic purely-logical finite-state automaton with full observation to nondeterministic/probabilistic, timed, nonterminating, or partially-observed settings, and formulating/solving the usability-aware secret protection problem in those settings with different features.
\section{Introduction}
A holomorphic vector bundle $H$ on ${\mathbb C}\times M$, $M$ a complex
manifold, with a meromorphic connection $\nabla$ with a
pole of Poincar\'e rank 1 along $\{0\}\times M$ and no pole
elsewhere, is called a~$(TE)$-structure.
The aim of this paper is the local classification of all rank $2$
$(TE)$-structures, over arbitrary germs $\big(M,t^0\big)$ of manifolds.
Before we talk about the results, we will put these structures
into a context, motivate their definition,
mention their occurence in algebraic geometry,
and formulate interesting problems.
The rank $2$ case is the first interesting case and already
very rich. In~many aspects it is probably typical for
arbitrary rank, in some not. And it is certainly the only
case where such a thorough classification is feasible.
The pole of Poincar\'e rank 1 along $\{0\}\times M$ of the
pair $(H,\nabla)$ means the following. Let $t=(t_1,\dots ,t_n)$
be holomorphic coordinates on $M$ with coordinate
vector fields $\partial_1,\dots ,\partial_n$, and~let~$z$ be the standard
coordinate on ${\mathbb C}$. Then $\nabla_{\partial_z}\sigma$ for a
holomorphic section $\sigma\in{\mathcal O}(H)$ of~$H$
is in~$z^{-2}{\mathcal O}(H)$, and $\nabla_{\partial_j}\sigma$ is in
$z^{-1}{\mathcal O}(H)$. The pole of order two along $\partial_z$
is the first case beyond the easy and tame case of a
pole of order~1, i.e., a logarithmic pole.
The pole of order~1 along~$\partial_i$ gives a good variation
property, a generalization of Griffiths transversality
for variations of Hodge structures. It~is the most natural
constraint for an isomonodromic family of bundles on~${\mathbb C}$
with poles of order~2 at~$0$.
So, a pole of Poincar\'e rank~1 is in some sense the first
case beyond the case of connections with logarithmic poles.
(A pole of Poincar\'e rank $r\in{\mathbb N}_0$
is defined for example in~\cite[Section~0.14]{Sa02}.)
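A simple rank $1$ illustration: for a holomorphic function $g$ on $M$, the trivial line bundle with the connection $\nabla={\rm d}+{\rm d}\big(\frac{g}{z}\big)$ (the structure ${\mathcal E}^{g/z}$ of Lemma~\ref{t3.10}$(a)$ below) satisfies for the constant section $\sigma$
\begin{gather*}
\nabla_{\partial_z}\sigma=-\frac{g}{z^2}\,\sigma\in z^{-2}{\mathcal O}(H),\qquad
\nabla_{\partial_j}\sigma=\frac{\partial_jg}{z}\,\sigma\in z^{-1}{\mathcal O}(H),
\end{gather*}
so it has a pole of Poincar\'e rank $1$ along $\{0\}\times M$.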
In algebraic geometry, such connections arise naturally.
A distinguished case is the Fourier--Laplace transformation
(with respect to the coordinate $z$)
of the Gauss--Manin connection of a~family of
holomorphic functions with isolated singularities
(see \cite[Chapter~8]{He03} and~\cite[Chapter~VII]{Sa02}). The paper~\cite{He03}
defines $(TERP)$-structures, which are $(TE)$-structures
with additional real structure and pairing
and which generalize variations of Hodge structures.
Also the notion of a $(TEZP)$-structure makes sense, which is a~$(TE)$-structure with a flat ${\mathbb Z}$-lattice bundle
on ${\mathbb C}^*\times M$ and a~certain pairing.
A family of holomorphic functions with isolated singularities
(and some topological well-behavedness) gives rise to a~$(TEZP)$-structure over the base space of the family
(see \cite[Chapter~11.4]{He02} and~\cite[Chapter~8]{He03}).
In~\cite{He02} and other papers of the author,
a Torelli problem is considered.
We formulate it here as the following question:
Does the $(TEZP)$-structure of a holomorphic function germ with
an~isolated singularity determine the $(TEZP)$-structure
of the universal unfolding of the function germ?
The first one is a $(TE)$-structure over a point $t^0$.
The second one is a $(TE)$-structure over a germ
$\big(M,t^0\big)$ of a manifold $M$. It~is an {\it unfolding}
of the first $(TE)$-structure with a {\it primitive
Higgs field}. The base space $M$ is an
{\it $F$-manifold with Euler field}.
We explain these notions. A second $(TE)$-structure over
a manifold $M$ is an unfolding of a~first $(TE)$-structure
over a submanifold of $M$ if the restriction of the second
$(TE)$-structure to the submanifold
is isomorphic to the first $(TE)$-structure.
If $\varphi\colon M'\to M$ is a morphism
and if $(H,\nabla)$ is a $(TE)$-structure over $M$,
then the pull back $\varphi^*(H,\nabla)$ is a $(TE)$-structure
over~$M'$. An unfolding of a $(TE)$-structure
is {\it universal} if it {\it induces}
any unfolding via a unique map $\varphi$
(see Definition~\ref{t3.15}$(b){+}(c)$ for details).
If $(H\to{\mathbb C}\times M,\nabla)$ is a $(TE)$-structure, then
define the vector bundle $K:=H|_{\{0\}\times M}$ on $M$
and the Higgs field
$C:=[z\nabla]\in \Omega^1(M,{\rm End}(K))$ on $K$.
The endomorphisms $C_X=[z\nabla_X]\colon {\mathcal O}(K)\to{\mathcal O}(K)$ for
$X\in {\mathcal T}_M$ commute with one another,
and they commute with the
endomorphism ${\mathcal U}:=\big[z^2\nabla_{\partial_z}\big]\colon {\mathcal O}(K)\to{\mathcal O}(K)$
(see Definition~\ref{t3.8} and Lemma~\ref{t3.12}).
The Higgs field~$C$ is {\it primitive} if on each sufficiently
small open subset $U\subset M$ a section $\zeta_U$ of $K$ exists such
that the map ${\mathcal T}_U\to {\mathcal O}(K),$ $X\mapsto C_X\zeta_U$, is
an isomorphism (see Definition~\ref{t3.13}).
An {\it $F$-manifold} with {\it Euler field}
is a complex manifold $M$
together with a holomorphic commutative and associative
multiplication $\circ$ on ${\mathcal T}_M$ which comes equipped
with the integrability condition~\eqref{2.1},
with a unit field $e\in{\mathcal T}_M$ (with $e\circ =\id$)
and an Euler field $E\in{\mathcal T}_M$
with $\text{Lie}_E(\circ)=\circ$
(see~\cite{HM99} or Definition~\ref{t2.1}).
A $(TE)$-structure over $M$ with
primitive Higgs field indu\-ces on the base manifold $M$ the
structure of an $F$-manifold with Euler field
(see Theorem~\ref{t3.14} for details).
A result of Malgrange~\cite{Ma86} (cited in
Theorem~\ref{t3.16}$(c)$) says that a $(TE)$-structure
over a~point~$t^0$ has a universal unfolding if the
endomorphism ${\mathcal U}\colon K\to K$ (here $K$ is a vector space)
is regular, i.e., it has only one Jordan block for each
eigenvalue. Theorem~\ref{t3.16}$(b)$ gives a generalization
from~\cite{HM04}. A special case of this generalization
says that a $(TE)$-structure with primitive
Higgs field over a germ $\big(M,t^0\big)$ is its own universal
unfolding (see Theorem~\ref{t3.16}$(a)$).
A supplement from~\cite{DH17} says that then the base space
is a {\it regular} $F$-manifold (see Definition~\ref{t2.4}
and Theorem~\ref{t2.5}).
Malgrange's result gives a universal unfolding
if one starts with a $(TE)$-structure over a~point
whose endomorphism ${\mathcal U}$ is regular.
However, if one starts with a~$(TE)$-structure over a~point such that ${\mathcal U}$ is not regular, then in general it
has no universal unfolding, and the study of all its
unfoldings becomes very interesting. The second half of
this paper (Sections~\ref{c6}--\ref{c8}) studies
this situation in rank~2. The Torelli problem for a
holomorphic function germ with an isolated singularity
is similar: The endomorphism
${\mathcal U}$ of its $(TEZP)$-structure is never regular
(except if the function has an $A_1$-singularity),
but I hope that the $(TEZP)$-structure
determines nevertheless somehow the specific
unfolding with primitive Higgs field, which comes from the
universal unfolding of the original function germ.
Now sufficient background is given.
We describe the contents of this paper.
The short Section~\ref{c2} recalls the classification of the
2-dimensional germs of $F$-manifolds with Euler fields
(Theorem~\ref{t2.2} from~\cite{He02} and Theorem~\ref{t2.3}
from~\cite{DH20-3}). It~also treats regular $F$-manifolds
(Definition~\ref{t2.4} and Theorem~\ref{t2.5} from~\cite{DH17}).
Section~\ref{c3} recalls many general facts on
$(TE)$-structures: their definition, their presentation
by matrices, formal $(TE)$-structures, unfoldings and
universal unfoldings of $(TE)$-structures, Malgrange's
result and the generalization in~\cite{HM04},
$(TE)$-structures over $F$-manifolds, $(TE)$-structures
with primitive Higgs fields, regular singular $(TE)$-structures
and elementary sections, Birkhoff normal form for
$(TE)$-structures (not all have one, Theorem~\ref{t3.20} cites
existence results of~Ple\-melj and of Bolibroukh and Kostov).
Not written before, but elementary is a correspondence
between $(TE)$-structures with trace free endomorphism
${\mathcal U}$ and arbitrary $(TE)$-structures (Lem\-mata~\ref{t3.9},~\ref{t3.10} and~\ref{t3.11}).
New is the notion of a marked $(TE)$-structure. It~is needed for the construction of moduli spaces.
Theorem~\ref{t3.29} (which builds on results in
\cite{HS10}) constructs such moduli spaces, but
only in the case of regular singular $(TE)$-structures. It~starts with a {\it good family} of regular
singular $(TE)$-structures. There are two open problems. It~is not clear how to generalize this notion of a good
family beyond the case of regular singular $(TE)$-structures.
We hope, but did not prove for rank $\geq 3$, that
any regular singular $(TE)$-structure (over $M$ with
$\dim M\geq 1$) is a good family of regular singular
$(TE)$-structures. For rank $2$ this is true, it follows
from Theorem~\ref{t8.5}.
Section~\ref{c4} gives the classification of rank $2$
$(TE)$-structures over a point $t^0$. There are 4 types,
which we call (Sem), (Bra), (Reg) and (Log)
(for {\it semisimple, branched, regular singular} and
{\it logarithmic}). In~the type (Sem) ${\mathcal U}$ has two
different eigenvalues, in the type (Log) ${\mathcal U}\in{\mathbb C}\cdot\id$,
in~the types (Bra) and (Reg) ${\mathcal U}$ has a $2\times 2$
Jordan block. In~the cases when ${\mathcal U}$ is trace free,
a $(TE)$-structure of type (Log) has a logarithmic pole,
a $(TE)$-structure of type (Reg) has a~regular singular,
but not logarithmic pole, and the pull back of a~$(TE)$-structure of type (Bra) by a~branched cover of ${\mathbb C}$
of order 4 has a meromorphic connection with
semisimple pole of order~3 (see Lemma~\ref{t4.9}).
The semisimple case (Sem) is not central in this paper.
Therefore we do not discuss it in detail
and do not introduce Stokes structures. For the other types
(Bra), (Reg) and (Log), Section~\ref{c4} discusses normal forms
and their parameters. All $(TE)$-structures of type (Bra)
have nice Birkhoff normal forms (Theorem~\ref{t4.11}),
but not all of type (Reg) (Theorem~\ref{t4.17} and Remark~\ref{t4.19}) and type (Log) (Theorem~\ref{t4.20} and
Remark~\ref{t4.22}). The types (Reg) and (Log) become
transparent by the use of elementary sections.
A $(TE)$-structure of type (Sem) or (Bra) or (Reg) over
a point $t^0$ satisfies the hypothesis of Malgrange's result,
namely, the endomorphism ${\mathcal U}\colon K\to K$ is regular. Therefore it
has a~uni\-ver\-sal unfolding, and any unfolding of it is
induced by this universal unfolding.
Section~\ref{c5} discusses this.
Also because of this fact, the semisimple case is not
central in this paper.
Sections~\ref{c6}--\ref{c8} are devoted to the study
of $(TE)$-structures over a germ $\big(M,t^0\big)$ such that
the restriction to $t^0$ is a $(TE)$-structure of type (Log).
Then the set of points over which the $(TE)$-structure
restricts to one of type (Log) is either a hypersurface or
the whole of $M$. In~the first case, it restricts to a fixed
{\it generic} type (Sem) or (Bra) or (Reg) over points
not in the hypersurface. In~the second case, the generic
type is (Log).
Section~\ref{c6} starts this study. It~considers the
cases with trace free ${\mathcal U}$ and $\dim M=1$. It~has three parts. In~the first part, invariants of
such 1-parameter families are studied. In~a surprisingly
direct way, constraints on the difference of the
{\it leading exponents} (defined in Theorem~\ref{t4.20})
of the logarithmic $(TE)$-structure over $t^0$ are found,
and the monodromy in the generic cases (Sem) and (Bra) turns
out to be semisimple (Theorem~\ref{t6.2}).
By Plemelj's result (and our direct calculations),
these cases come equipped with Birkhoff normal forms.
Theorem~\ref{t6.3} in the second part
classifies all $(TE)$-structures over $\big(M,t^0\big)$ with
trace free ${\mathcal U}$, $\dim M=1$, logarithmic restriction to $t^0$
and Birkhoff normal form.
Theorem~\ref{t6.7} in the third part classifies all
generically regular singular $(TE)$-structures over $\big(M,t^0\big)$
with $\dim M=1$, logarithmic restriction to~$t^0$, and
whose monodromy has a $2\times 2$ Jordan block.
The majority of these cases has no Birkhoff normal
form. Theorems~\ref{t6.3} and~\ref{t6.7} overlap
in the cases which have Birkhoff normal forms.
Section~\ref{c7} makes the moduli spaces of marked regular
singular $(TE)$-structures from Theorem~\ref{t3.28}
explicit in the rank $2$ cases. It~builds on the classification
results for the types (Reg) and (Log) in Section~\ref{c4}.
The long Theorem~\ref{t7.4} describes the moduli spaces
and offers~5 figures in order to make this more transparent.
The moduli spaces have countably many topological components,
and each component consists of an infinite chain of
projective spaces which are either the projective line $\P^1$
or the Hirzebruch surface ${\mathbb F}_2$ or $\widetilde{\mathbb F}_2$ (which is
obtained by blowing down in ${\mathbb F}_2$ the unique $(-2)$-curve).
These moduli spaces simplify in the generic case (Reg)
the main proof in Section~\ref{c8}, the proof of Theorem~\ref{t8.5}.
\looseness=1
Section~\ref{c8} gives complete classification results,
from different points of view. It~has three parts.
Theorem~\ref{t8.1} lists all rank~2 $(TE)$-structures
over a 2-dimensional germ $\big(M,t^0\big)$ such that the restriction
to $t^0$ has a logarithmic pole, such that the
Higgs field is generically primi\-tive,
and such that the induced structure of an $F$-manifold with
Euler field extends to all of~$M$. Theorem~\ref{t8.1}$(d)$
offers explicit normal forms.
Corollary~\ref{t8.3} starts with any logarithmic
rank~2 $(TE)$-structure over a point $t^0$ and lists
the $(TE)$-structures in Theorem~\ref{t8.1}$(d)$
which unfold~it.
Theorem~\ref{t8.5} is the most fundamental result of
Section~\ref{c8}. Table~\eqref{8.12} in it is a sublist
of the $(TE)$-structures in Theorem~\ref{t8.1}$(d)$. Theorem~\ref{t8.5} states that {\it any} unfolding of a rank~2
$(TE)$-structure of type (Log) over a point
is induced by one $(TE)$-structure in table~\eqref{8.12}. In~the generic cases (Reg) and (Log) these are precisely
those in Theorem~\ref{t8.1}$(d)$ with primitive Higgs field,
but in the generic cases (Sem) and (Bra) table~\eqref{8.12}
contains many $(TE)$-structures with only generically
primitive Higgs field. All the $(TE)$-structures in
table~\eqref{8.12} are universal unfoldings of themselves,
also those with only generically primitive Higgs field.
Almost all logarithmic $(TE)$-structures over a point
have several unfoldings which do not induce one another.
Only the logarithmic $(TE)$-structures over a point whose
monodromy has a $2\times 2$ Jordan block and whose
two leading exponents coincide have a universal unfolding.
This follows from Theorem~\ref{t8.5} and Corollary~\ref{t8.3}.
The second part of Section~\ref{c8} starts from the
2-dimensional $F$-manifolds with Euler fields and
discusses how many and which $(TE)$-structures exist over
each of them. It~turns out that the nilpotent $F$-manifold
${\mathcal N}_2$ with the Euler field
$E=t_1\partial_1+t_2^r\big(1+c_3t_2^{r-1}\big)\partial_2$ for $r\geq 2$
(case~\eqref{2.12} in~Theorem~\ref{t2.3}) does not have
any $(TE)$-structure over it if $c_3\neq 0$,
and it has no $(TE)$-structure with primitive Higgs field
over it if $c_3\neq 0$ or $r\geq 3$.
However, most 2-dimensional $F$-manifolds with Euler fields
have one or countably many families of $(TE)$-structures
with 1 or 2 parameters over them.
The third part of Section~\ref{c8} is the proof of Theorem~\ref{t8.5}.
In many aspects, the $(TE)$-structures of rank $2$ are probably
typical also for higher rank. But Section~\ref{c9} makes
one phenomenon explicit which arises only in rank $\geq 3$.
Section~\ref{c9} presents a~family of
rank 3 $(TE)$-structures with primitive Higgs fields
over a fixed 3-dimensional globally irreducible $F$-manifold
with nowhere regular Euler field, such that the family
has a {\it functional parameter}. The example is essentially
due to M.~Saito, it is a Fourier--Laplace transformation
of the main example in a preliminary version of~\cite{SaM17}
(though he considers only the bundle and connection over
a 2-dimensional submanifold of the $F$-manifold).
This paper has some overlap with~\cite{DH20-3}
and~\cite{DH20-2}. In~\cite{DH20-3} $(TE)$-structures over the 2-dimensional
$F$-manifold ${\mathcal N}_2$ (with all possible Euler fields)
were studied. They are of generic types (Bra), (Reg) or (Log). In~\cite[Chapter~8]{DH20-2} $(TE)$-structures over the
2-dimensional $F$-manifolds $I_2(m)$ were studied.
They are of generic type (Sem).
However, in~\cite{DH20-3} and~\cite{DH20-2} the focus was on
$(TE)$-structures with primitive Higgs fields.
Those with generically primitive, but not primitive Higgs
fields were not considered. And the approach to the
classification was very different. It~relied on the formal
classification of rank $2$ $(T)$-structures in~\cite{DH20-1}.
The approach here is independent of these three papers.
\section[The two-dimensional $F$-manifolds and their Euler fields]{The two-dimensional $\boldsymbol{F}$-manifolds and their Euler fields}\label{c2}
$F$-manifolds were first defined in~\cite{HM99}.
Their basic properties were developed in~\cite{He02}.
An overview on them and on more recent results is given
in~\cite{DH20-2}.
\begin{Definition}\label{t2.1}\quad
\begin{enumerate}\itemsep=0pt
\item[$(a)$] An {\it $F$-manifold} $(M,\circ,e)$ (without Euler field)
is a complex manifold $M$ with a holomorphic
commutative and associative multiplication $\circ$
on the holomorphic tangent bundle $TM$, and with a
global holomorphic vector field $e\in{\mathcal T}_M$ with
$e\circ=\id$ ($e$ is called a {\it unit field}),
which satisfies the following integrability condition:
\begin{gather}\label{2.1}
\Lie_{X\circ Y}(\circ)= X\circ\Lie_Y(\circ)+Y\circ\Lie_X(\circ)\qquad
\text{for}\quad X,Y\in{\mathcal T}_M.
\end{gather}
\item[$(b)$] Given an $F$-manifold $(M,\circ,e)$, an {\it Euler field} on it is a global
vector field $E\in{\mathcal T}_M$ with $\Lie_E(\circ)=\circ$.
\end{enumerate}
\end{Definition}
In this paper we are mainly interested in the 2-dimensional
$F$-manifolds and their Euler fields.
They were classified in~\cite{He02}.
\begin{Theorem}[{\cite[Theorem~4.7]{He02}}]\label{t2.2}
In dimension $2$, $($up to isomorphism$)$ the germs of
$F$-manifolds fall into three types:
\begin{enumerate}\itemsep=0pt
\item[$(a)$] The semisimple germ. It~is called $A_1^2$,
and it can be given as follows
\begin{gather*}
(M,0)=\big({\mathbb C}^2,0\big)\qquad\text{with coordinates}\quad
u=(u_1,u_2)\quad\text{and}\quad e_k=\frac{\partial}{\partial u_k},
\\
e= e_1+e_2,\qquad e_j\circ e_k=\delta_{jk}\cdot e_j.
\end{gather*}
Any Euler field takes the shape
\begin{gather}\label{2.3}
E= (u_1+c_1)e_1+(u_2+c_2)e_2\qquad\text{for some}\quad
c_1,c_2\in{\mathbb C}.
\end{gather}
\item[$(b)$] Irreducible germs which are semisimple at generic points
$($i.e., some holomorphic representatives of them are$)$.
They form a series $I_2(m)$, $m\in{\mathbb Z}_{\geq 3}$.
The germ of type $I_2(m)$ can be given as follows
\begin{gather*}
(M,0)=\big({\mathbb C}^2,0\big)\qquad \text{with coordinates}\quad t=(t_1,t_2)\quad
\text{and}\quad\partial_k:=\frac{\partial}{\partial t_k},
\\
e=\partial_1,\qquad \partial_2\circ\partial_2 =t_2^{m-2}e.\label{2.4}
\end{gather*}
Any Euler field takes the shape
\begin{gather*}
E= (t_1+c_1)\partial_1 + \frac{2}{m}t_2\partial_2
\qquad\text{for some}\quad c_1\in{\mathbb C}.
\end{gather*}
\item[$(c)$] An irreducible germ, such that the multiplication is
everywhere irreducible. It~is called~${\mathcal N}_2$, and it
can be given as follows
\begin{gather*}
(M,0)=\big({\mathbb C}^2,0\big)\qquad \text{with coordinates}\quad t=(t_1,t_2)
\quad \text{and}\quad
\partial_k:=\frac{\partial}{\partial t_k},\\
e=\partial_1,\quad \partial_2\circ\partial_2 =0
\end{gather*}
Any Euler field takes the shape
\begin{gather}
E= (t_1+c_1)\partial_1 + g(t_2)\partial_2
\qquad\text{for some}\quad c_1\in{\mathbb C}\nonumber
\\
\text{and some function}\quad
g(t_2)\in{\mathbb C}\{t_2\}.\label{2.7}
\end{gather}
\end{enumerate}
\end{Theorem}
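As a quick illustration of part $(b)$, one can verify $\Lie_E(\circ)=\circ$ directly on the generators, using the standard formula $\Lie_E(\circ)(X,Y)=[E,X\circ Y]-[E,X]\circ Y-X\circ [E,Y]$: here
\begin{gather*}
[E,\partial_2\circ\partial_2]=\big[E,t_2^{m-2}\partial_1\big]
=\bigg(\frac{2(m-2)}{m}-1\bigg)t_2^{m-2}\partial_1,\qquad
[E,\partial_2]=-\frac{2}{m}\partial_2,
\end{gather*}
so $\Lie_E(\circ)(\partial_2,\partial_2)
=\big(\frac{2(m-2)}{m}-1+\frac{4}{m}\big)t_2^{m-2}\partial_1
=t_2^{m-2}\partial_1=\partial_2\circ\partial_2$, as required.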
The family of Euler fields in~\eqref{2.7} on ${\mathcal N}_2$
can be reduced by coordinate changes, which respect the
multiplication of ${\mathcal N}_2$, to a family
with two continuous parameters and one discrete parameter.
This classification is proved in~\cite{DH20-3}. It~is recalled
in Theorem~\ref{t2.3}.
The group $\Aut({\mathcal N}_2)$ of automorphisms of the germ
${\mathcal N}_2$ of an $F$-manifold is the group of coordinate
changes of $\big({\mathbb C}^2,0\big)$ which respect the multiplication
of ${\mathcal N}_2$. It~is
\begin{gather*}
\Aut({\mathcal N}_2)=\{(t_1,t_2)\mapsto(t_1,\lambda(t_2))\,|\,
\lambda\in {\mathbb C}\{t_2\}\text{ with }\lambda'(0)\neq 0\text{ and }\lambda(0)=0\}.
\end{gather*}
\begin{Theorem}\label{t2.3}
Any Euler field on the germ ${\mathcal N}_2$ of an $F$-manifold
can be brought by a coordinate change in $\Aut({\mathcal N}_2)$
to a unique one in the following family of Euler fields
\begin{gather}\label{2.9}
E = (t_{1} + c_1) \partial_{1} +\partial_{2},
\\
E = (t_{1}+c_1) \partial_{1},\label{2.10}
\\
E = (t_{1}+c_1) \partial_{1} +{c}_2 t_{2}\partial_{2},\label{2.11}
\\
E = (t_{1}+c_1) \partial_{1} + t_{2}^{r}\big(1 + c_3 t_{2}^{r-1}\big)\partial_{2},\label{2.12}
\end{gather}
where $c_1, c_3\in {\mathbb C}$, $c_2\in {\mathbb C}^*$ and $r\in
{\mathbb Z}_{\geq 2}.$
The group $\Aut({\mathcal N}_2,E)$ of coordinate changes of
$\big({\mathbb C}^2,0\big)$ which respect the multiplication of ${\mathcal N}_2$
and this Euler field is
\begin{gather}
\Aut({\mathcal N}_2,E)=\{(t_1,t_2)\mapsto (t_1,\gamma(t_2)t_2)\,|\,
\gamma\text{ as in~\eqref{2.14}}\},\nonumber
\\[1ex]
\def\arraystretch{1.3}
\begin{tabular}{c|c|c|c|c}
\hline
\text{Case} &~\eqref{2.9} &~\eqref{2.10} &~\eqref{2.11}&~\eqref{2.12}
\\ \hline
$\gamma\in$ & \{1\} & ${\mathbb C}\{t_2\}^*$ & ${\mathbb C}^*$
&$\big\{{\rm e}^{2\pi {\rm i} l/(r-1)}\,|\, l\in{\mathbb Z} \big\}$\label{2.14}
\\
\hline
\end{tabular}
\end{gather}
\end{Theorem}
A special class of $F$-manifolds, the regular $F$-manifolds,
is related to a result of Malgrange on universal unfoldings
of $(TE)$-structures, see Remarks~\ref{t3.17}.
\begin{Definition}[{\cite[Definition 1.2]{DH17}}]\label{t2.4}
A {\it regular $F$-manifold} is an $F$-manifold
$(M,\circ,e)$ with Euler field $E$ such that at each
$t\in M$ the endomorphism $E\circ|_t\colon T_tM\to T_tM$
is a regular endomorphism, i.e., it has for each eigenvalue
only one Jordan block.
\end{Definition}
\begin{Theorem}[{\cite[Theorem~1.3(ii)]{DH17}}]\label{t2.5}
For each regular endomorphism of a finite dimensional
${\mathbb C}$-vector space, there is a unique (up to unique
isomorphism) germ $\big(M,t^0\big)$ of a regular $F$-manifold
such that $E\circ|_{t^0}$ is isomorphic to this endomorphism.
\end{Theorem}
\begin{Remarks}\label{t2.6}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] For a normal form of this germ of an $F$-manifold,
see~\cite[Theorem~1.3(i)]{DH17}.
\item[$(ii)$]
In dimension 2, this theorem is an easy consequence
of Theorems~\ref{t2.2} and~\ref{t2.3}. The germs of
regular 2-dimensional $F$-manifolds are as follows:
\begin{enumerate}\itemsep=0pt
\item[$(a)$]
The germ $A_1^2$ in Theorem~\ref{t2.2}$(a)$ with any Euler field
$E=(u_1+c_1)e_1+(u_2+c_2)e_2$
as in~\eqref{2.3} with $c_1,c_2\in{\mathbb C}$, $c_1\neq c_2$.
\item[$(b)$]
The germ ${\mathcal N}_2$ in Theorem~\ref{t2.2}$(c)$ with any Euler field
$E=(t_1+c_1)\partial_1+\partial_2$ as in~\eqref{2.9}
with $c_1\in{\mathbb C}$.
\end{enumerate}
\end{enumerate}
\end{Remarks}
\section[$(TE)$-structures in general]{$\boldsymbol{(TE)}$-structures in general}\label{c3}
\subsection{Definitions}
A $(TE)$-structure is a holomorphic vector bundle
on ${\mathbb C}\times M$, $M$ a complex manifold, with a~meromorphic connection $\nabla$ with a pole of
Poincar\'e rank 1 along $\{0\}\times M$ and no pole elsewhere.
Here we consider them together with the weaker notion
of $(T)$-structure and the more rigid notions
of a $(TL)$-structure and a $(TLE)$-structure.
The structures had been considered before in~\cite{HM04},
and they are related to structures in~\cite[Chapter~VII]{Sa02}
and in~\cite{Sa05}.
\begin{Definition}\label{t3.1}\qquad
\begin{enumerate}\itemsep=-2pt
\item[$(a)$] Definition of a {\it $(T)$-structure}
$(H\to{\mathbb C}\times M,\nabla)$:
$H\to{\mathbb C}\times M$ is a holomorphic vector bundle.
$\nabla$ is a map\vspace{-.5ex}
\begin{gather}\label{3.1}
\nabla\colon\ {\mathcal O}(H)\to z^{-1}{\mathcal O}_{{\mathbb C}\times M}\cdot \Omega^1_M\otimes {\mathcal O}(H),
\end{gather}
which satisfies the Leibniz rule,\vspace{-.5ex}
\begin{gather*}
\nabla_X(a\cdot s)= X(a)\cdot s+a\cdot \nabla_X s
\qquad\text{for}\quad X\in{\mathcal T}_M,\quad a\in{\mathcal O}_{{\mathbb C}\times M},\quad s\in {\mathcal O}(H),
\end{gather*}
and which is flat (with respect to $X\in{\mathcal T}_M$,
not with respect to $\partial_z$),\vspace{-.5ex}
\begin{gather*}
\nabla_X\nabla_Y-\nabla_Y\nabla_X=\nabla_{[X,Y]}
\qquad\text{for}\quad X,Y\in {\mathcal T}_M.
\end{gather*}
Equivalent: For any $z\in{\mathbb C}^*$, the restriction of $\nabla$ to
$H|_{\{z\}\times M}$ is a flat holomorphic connection.
\item[$(b)$] Definition of a {\it $(TE)$-structure}
$(H\to{\mathbb C}\times M,\nabla)\colon H\to{\mathbb C}\times M$ is a holomorphic vector bundle.
$\nabla$ is a flat connection on $H|_{{\mathbb C}^*\times M}$ with a pole
of Poincar\'e rank 1 along $\{0\}\times M$, so it is a map\vspace{-.5ex}
\begin{gather*}
\nabla\colon\ {\mathcal O}(H)\to
\bigl(z^{-1}{\mathcal O}_{{\mathbb C}\times M}\cdot\Omega^1_M
+z^{-2}{\mathcal O}_{{\mathbb C}\times M}\cdot{\rm d}z\bigr)\otimes{\mathcal O}(H)
\end{gather*}
which satisfies the Leibniz rule and is flat.
\item[$(c)$] Definition of a {\it $(TL)$-structure} $\big(H\to\P^1\times M,\nabla\big)\colon
H\to\P^1\times M$ is a holomorphic vector bundle.
$\nabla$ is a map\vspace{-.5ex}
\begin{gather*
\nabla\colon\ {\mathcal O}(H)\to \big(z^{-1}{\mathcal O}_{\P^1\times M}+{\mathcal O}_{\P^1\times M}\big)
\cdot \Omega^1_M\otimes {\mathcal O}(H),
\end{gather*}
such that for any $z\in\P^1\setminus\{0\}$, the restriction
of $\nabla$ to $H|_{\{z\}\times M}$ is a flat connection. It~is called {\it pure} if for any $t\in M$ the restriction
$H|_{\P^1\times\{t\}}$ is a trivial holomorphic bundle on~$\P^1$.
\item[$(d)$] Definition of a {\it $(TLE)$-structure}
$\big(H\to\P^1\times M,\nabla\big)$:
It is simultaneously a $(TE)$-structure and a
$(TL)$-structure, where the connection $\nabla$
has a logarithmic pole along $\{\infty\}\times M$.
The $(TLE)$-structure is called {\it pure} if the
$(TL)$-structure is pure.
\end{enumerate}
\end{Definition}
\begin{Remark}\label{t3.2}
Here we write the data in Definition~\ref{t3.1}$(a)$--$(b)$
and the compatibility conditions between them in terms
of matrices.
Consider a $(TE)$-structure $(H\to{\mathbb C}\times M,\nabla)$
of rank $\rk H=r\in{\mathbb N}$.
We will fix the notations for a trivialization
of the bundle $H|_{U\times M}$ for some small neighborhood
$U\subset {\mathbb C}$ of 0. Trivialization means the choice of a
basis $\underline{v}=(v_1,\dots ,v_r)$ of the bundle $H|_{U\times M}$.
Also, we choose local coordinates $t=(t_1,\dots ,t_n)$
with coordinate vector fields $\partial_i=\partial/\partial t_i$ on $M$.
We write\vspace{-1ex}
\begin{gather}
\nabla\underline{v}=\underline{v}\cdot\Omega\qquad\text{with}\quad
\Omega = \sum_{i=1}^n z^{-1}\cdot A_i(z,t){\rm d} t_i +z^{-2}B(z,t){\rm d} z,
\label{3.4}
\\
A_i(z,t)= \sum_{k\geq 0}A_i^{(k)}z^k\in M_{r\times r}({\mathcal O}_{U\times M}),
\label{3.5}
\\
B(z,t)=\sum_{k\geq 0} B^{(k)}z^k\in M_{r\times r}({\mathcal O}_{U\times M}),
\label{3.6}
\end{gather}
with $A_i^{(k)},B^{(k)}\in M_{r\times r}({\mathcal O}_M)$,
but this dependence on $t\in M$ is usually not written
explicitly.
The flatness $0={\rm d} \Omega+\Omega\land\Omega$ of the connection
$\nabla$ says for $i,j\in\{1,\dots ,n\}$ with $i\neq j${\samepage
\begin{gather}\label{3.7}
0= z\partial_iA_j-z\partial_jA_i+[A_i,A_j],\\
0= z\partial_i B-z^2\partial_z A_i + zA_i + [A_i,B].
\label{3.8}
\end{gather}}
\pagebreak
\noindent
These equations split into the parts for the different powers
$z^k$ for $k\geq 0$ as follows (with $A_i^{(-1)}=B^{(-1)}=0$),
\begin{gather}\label{3.9}
0= \partial_iA_j^{(k-1)}-\partial_jA_i^{(k-1)}+\sum_{l=0}^k\big[A_i^{(l)},A_j^{(k-l)}\big],
\\
0= \partial_i B^{(k-1)}-(k-2)A_i^{(k-1)}+\sum_{l=0}^k\big[A_i^{(l)},B^{(k-l)}\big].
\label{3.10}
\end{gather}
In the case of a $(T)$-structure, $B$ and all
equations which contain $B$, except~\eqref{3.4}, are dropped.
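As a concrete sanity check of~\eqref{3.8}, the following small sympy sketch (our own illustration) verifies the flatness equation for the rank~$2$ structure ${\mathcal E}^{t_1/z}\oplus{\mathcal E}^{t_1/z}$ over $({\mathbb C},0)$, for which $A_1={\bf 1}_2$ and $B=-t_1\cdot{\bf 1}_2$ (with ${\mathcal E}^{g/z}$ as in Lemma~\ref{t3.10}$(a)$ below):
\begin{verbatim}
# Check 0 = z*dB/dt - z^2*dA/dz + z*A + [A,B] for A = 1, B = -t*1
# (illustration only).
import sympy as sp

z, t = sp.symbols('z t')
A = sp.eye(2)            # A_1 = identity
B = -t * sp.eye(2)       # B = -t * identity
flat = z*B.diff(t) - z**2*A.diff(z) + z*A + (A*B - B*A)
assert sp.simplify(flat) == sp.zeros(2, 2)
\end{verbatim}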
Consider a second $(TE)$-structure
$\big(\widetilde H\to{\mathbb C}\times M,\widetilde\nabla\big)$ of rank $r$ over $M$,
where all data except~$M$ are written with a tilde.
Let $\underline{v}$ and $\underline{\widetilde v}$ be trivializations.
A holomorphic isomorphism from the first to the second
$(TE)$-structure maps $\underline{v}\cdot T$ to $\underline{\widetilde v}$,
where
$T=T(z,t) =\sum_{k\geq 0}T^{(k)}z^k\in M_{r\times r}({\mathcal O}_{({\mathbb C},0)\times M})$
with $T^{(k)}\in M_{r\times r}({\mathcal O}_{M})$ and $T^{(0)}$
invertible satisfies
\begin{gather}\label{3.11}
\underline{v}\cdot\Omega\cdot T+\underline{v}\cdot{\rm d} T=
\nabla(\underline{v}\cdot T)=\underline{v}\cdot T\cdot\widetilde\Omega.
\end{gather}
Equation~\eqref{3.11} says more explicitly
\begin{gather}\label{3.12}
0= z\partial_i T+A_i\cdot T-T\cdot\widetilde A_i,
\\
0= z^2\partial_z T+B\cdot T-T\cdot \widetilde B.\label{3.13}
\end{gather}
These equations split into the parts for the different
powers $z^k$ for $k\geq 0$ as follows (with $T^{(-1)}:=0$):
\begin{gather*}
0= \partial_i T^{(k-1)}+\sum_{l=0}^k \big(A_i^{(l)}\cdot T^{(k-l)}
-T^{(k-l)}\cdot\widetilde A_i^{(l)}\big),
\\
0= (k-1)T^{(k-1)}+\sum_{l=0}^k\big(B^{(l)}\cdot T^{(k-l)}
-T^{(k-l)}\cdot \widetilde B^{(l)}\big)
\end{gather*}
The isomorphism here fixes the base manifold $M$.
Such isomorphisms are called {\it gauge iso\-mor\-phisms}.
A general isomorphism is a composition of a
gauge isomorphism and a coordinate change on $M$
(a coordinate change induces an isomorphism of
$(TE)$-structures, see Lemma~\ref{t3.6}).
\end{Remark}
\begin{Remark}
In this paper we care mainly about $(TE)$-structures
over the 2-dimensional germs of $F$-manifolds with
Euler fields. For each of them except
$({\mathcal N}_2,E=(t_1+c_1)\partial_1)$, the group of coordinate
changes of $(M,0)=\big({\mathbb C}^2,0\big)$ which respect the multiplication
and $E$ is quite small, see Theorem~\ref{t2.3}.
Therefore in this paper, we care mainly about {\it gauge}
isomorphisms of the $(TE)$-structures over these
$F$-manifolds with Euler fields.
\end{Remark}
\begin{Definition}
Let $M$ be a complex manifold.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] The sheaf ${\mathcal O}_M[[z]]$ on $M$ is defined by
${\mathcal O}_M[[z]](U):={\mathcal O}_M(U)[[z]]$
for an open subset $U\subset M$
(with ${\mathcal O}_M(U)$ and ${\mathcal O}_M[[z]](U)$ the sections of
${\mathcal O}_M$ and ${\mathcal O}_M[[z]]$ on $U$).
Observe that the germ $({\mathcal O}_M[[z]])_{t^0}$ for $t^0\in M$
consists of formal power series $\sum_{k\geq 0}f_kz^k$ whose
coefficients $f_k\in{\mathcal O}_{M,t^0}$ have a common
convergence domain. In~the case of $\big(M,t^0\big)=\big({\mathbb C}^n,0\big)$ we write
${\mathcal O}_{{\mathbb C}^n}[[z]]_0=:{\mathbb C}\{t,z]]$.
\item[$(b)$] A {\it formal $(T)$-structure} over $M$ is a free
${\mathcal O}_M[[z]]$-module ${\mathcal O}(H)$ of some finite rank $r\in{\mathbb N}$
together with a map
$\nabla$ as in~\eqref{3.1}, where ${\mathcal O}_{{\mathbb C}\times M}$ is replaced
by ${\mathcal O}_M[[z]]$ which satisfies properties analogous to
$\nabla$ in Definition~\ref{t3.1}$(a)$, i.e., the
Leibniz rule for $X\in{\mathcal T}_M$, \mbox{$a\in{\mathcal O}_M[[z]]$}, $s\in{\mathcal O}(H)$
and the flatness condition for $X,Y\in{\mathcal T}_M$.
A {\it formal $(TE)$-structure} is defined analogously:
In Definition~\ref{t3.1}$(b)$ one has to replace
${\mathcal O}_{{\mathbb C}\times M}$ by ${\mathcal O}_M[[z]]$.
\end{enumerate}
\end{Definition}
\begin{Remark}
The formulas in Remark~\ref{t3.2} hold also
for formal $(T)$-structures and formal $(TE)$-structures
if one replaces ${\mathcal O}_{{\mathbb C}\times M}$, ${\mathcal O}_{U\times M}$
and ${\mathcal O}_{({\mathbb C},0)\times M}$ by ${\mathcal O}_M[[z]]$.
\end{Remark}
The following lemma is obvious.
\begin{Lemma}\label{t3.6}
Let $(H\to {\mathbb C}\times M,\nabla)$ be a $(TE)$-structure over $M$,
and let $\varphi\colon M'\to M$ be a~holo\-morphic map between
manifolds. One can pull back $H$ and $\nabla$ with
$\id\times\varphi\colon {\mathbb C}\times M'\to{\mathbb C}\times M$.
We~call the pull back $\varphi^*(H,\nabla)$. It~is a
$(TE)$-structure over $M'$. We say that the
pull back $\varphi^*(H,\nabla)$ is induced by the $(TE)$-structure
$(H,\nabla)$ via the map $\varphi$.
\end{Lemma}
\begin{Remarks}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] We will give in Theorem~\ref{t8.5}
and in Corollary~\ref{t5.1} and Lemma~\ref{t5.2}$(iv)$
a~classification of rank $2$ $(TE)$-structures
over germs $\big(M,t^0\big)=\big({\mathbb C}^2,0\big)$ of 2-dimensional manifolds
such that any rank $2$ $(TE)$-structure over a germ
$(M',s^0)$ is obtained as the pull back $\varphi^*(H,\nabla)$
of a~rank~2 $(TE)$-structure in the classification
via a holomorphic map $\varphi\colon \big(M',s^0\big)\to \big(M,t^0\big)$.
\item[$(ii)$] Here the behaviour of the $(TE)$-structure $(H,\nabla)$
over $\big(M,t^0\big)=\big({\mathbb C}^2,0\big)$ with coordinates $t=(t_1,t_2)$
along $t_1$ is quite trivial. It~is convenient to
split it off. The next subsection does this in greater
generality.
\end{enumerate}
\end{Remarks}
\subsection[$(TE)$-structures with trace free pole part]{$\boldsymbol{(TE)}$-structures with trace free pole part}
\begin{Definition}\label{t3.8}
Let $(H\to{\mathbb C}\times M,\nabla)$ be a $(TE)$-structure.
Define the vector bundle $K:=H|_{\{0\}\times M}$ over $M$.
The {\it pole part} of the $(TE)$-structure is the endomorphism
${\mathcal U}\colon K\to K$ which is defined by
\begin{eqnarray}\label{3.16}
{\mathcal U}:=\big[z^2\nabla_{\partial_z}\big]\colon\ K\to K.
\end{eqnarray}
The pole part is {\it trace free} if $\tr{\mathcal U}=0$ on $M$.
\end{Definition}
The following lemma gives formal invariants of a $(TE)$-structure.
\begin{Lemma}\label{t3.9}
Let $(H\to{\mathbb C}\times M,\nabla)$ be a $(TE)$-structure
of rank $r\in{\mathbb N}$ over a manifold $M$.
By~a~for\-mal invariant of the $(TE)$-structure,
we mean an invariant of its formal isomorphism class.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] Its pole part ${\mathcal U}$, that means the pair $(K,{\mathcal U})$
up to isomorphism, is a formal invariant of the
$(TE)$-structure. Especially, the holomorphic functions
$\delta^{(0)}:=\det{\mathcal U}\in{\mathcal O}_M$ and
$\rho^{(0)}:=\frac{1}{r}\tr{\mathcal U}\allowbreak\in{\mathcal O}_M$ are formal invariants.
\item[$(b)$] For any $t^0\in M$, fix an ${\mathcal O}_{M,t^0}$-basis $\underline{v}$
of ${\mathcal O}(H)_{(0,t^0)}$,
consider the matrices in~\eqref{3.4}--\eqref{3.6},
consider the function $\rho^{(1)}:=\frac{1}{r}\tr B^{(1)}\in{\mathcal O}_{M,t^0}$,
and consider the functions
$\delta^{(k)}\in{\mathcal O}_{M,t^0}$ for~$k\in{\mathbb N}_0$
which are defined by writing $\det B$ as a power series
\begin{gather*}
\det B=\sum_{k\geq 0}\delta^{(k)}z^k.
\end{gather*}
Then the functions $\delta^{(1)}$ and $\rho^{(1)}$ are
independent of the choice of the basis $\underline{v}$.
The functions $\delta^{(1)}$ and $\rho^{(1)}$, defined locally near any~$t^0$,
glue to global holomorphic functions
$\delta^{(1)}\in{\mathcal O}_M$ and $\rho^{(1)}\in{\mathcal O}_M$.
They are formal invariants.
Furthermore, the function $\rho^{(1)}$ is constant
on any component of $M$.
\end{enumerate}
\end{Lemma}
\begin{proof} ${\mathcal U}$, $\delta^{(0)}$, $\rho^{(0)}$ and $\delta^{(1)}$
are formal invariants because of~\eqref{3.13}:
$\widetilde B = T^{-1}BT+z^2 \cdot T^{-1}\partial_z T.$
For $\rho^{(1)}$, observe additionally
\begin{gather*}
{\widetilde B}^{(1)}= \big(T^{(0)}\big)^{-1}B^{(1)}T^{(0)}
+ \big[\big(T^{(0)}\big)^{-1}B^{(0)}T^{(0)}, \big(T^{(0)}\big)^{-1}T^{(1)}\big].
\end{gather*}
Recall also that the trace of a commutator of matrices is 0.
Therefore $\rho^{(1)}$ is a formal invariant.
Equation~\eqref{3.10} for $k=2$ implies
$\partial_i\tr \big(B^{(1)}\big)=0$, so the function $\rho^{(1)}$ is constant.
\end{proof}
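The step with the commutator can also be checked mechanically; the following sympy sketch, with arbitrarily chosen $2\times 2$ matrices, is our own illustration that the correction terms do not change the trace:
\begin{verbatim}
# Check tr(T0^{-1} B1 T0 + [T0^{-1} B0 T0, T0^{-1} T1]) = tr B1
# (illustration only; the matrices are arbitrary choices).
import sympy as sp

B0 = sp.Matrix([[1, 2], [3, 4]])
B1 = sp.Matrix([[0, 1], [5, -2]])
T0 = sp.Matrix([[2, 1], [1, 1]])   # T^{(0)}, invertible
T1 = sp.Matrix([[1, 0], [7, 3]])   # T^{(1)}, arbitrary
X = T0.inv() * B0 * T0
Y = T0.inv() * T1
B1_new = T0.inv() * B1 * T0 + (X*Y - Y*X)
assert sp.simplify(B1_new.trace() - B1.trace()) == 0
\end{verbatim}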
The following lemma is obvious.
\begin{Lemma}\label{t3.10}
Let $(H\to{\mathbb C}\times M,\nabla)$ be a $(TE)$-structure
of rank $r\in{\mathbb N}$ over a manifold $M$.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] Consider a holomorphic function $g\colon M\to{\mathbb C}$.
The trivial line bundle $H^{[1]}={\mathbb C}\times\allowbreak({\mathbb C}\times M)\to
{\mathbb C}\times M$ over ${\mathbb C}\times M$ with connection
$\nabla^{[1]}:={\rm d}+{\rm d}\big(\frac{g}{z}\big)$ defines a~$(TE)$-structure of rank~$1$ over~$M$, whose sheaf of sections
with connection is called ${\mathcal E}^{g/z}$.
\item[$(b)$] $({\mathcal O}(H),\nabla)\otimes {\mathcal E}^{g/z}$ for $g$ as in $(a)$
is a $(TE)$-structure.
\item[$(c)$] The $(TE)$-structure $\big(H^{[2]}\to{\mathbb C}\times M,\nabla^{[2]}\big)$ with
$\big({\mathcal O}\big(H^{[2]}\big),\nabla^{[2]}\big)=({\mathcal O}(H),\nabla)\otimes {\mathcal E}^{\rho^{(0)}/z}$
has trace free pole part. And, of course,
$({\mathcal O}(H),\nabla)\cong \big({\mathcal O}\big(H^{[2]}\big),\nabla^{[2]}\big)\otimes{\mathcal E}^{-\rho^{(0)}/z}$.
If $\underline{v}$ is a ${\mathbb C}\{t,z\}$-basis of ${\mathcal O}(H)_0={\mathcal O}\big(H^{[2]}\big)_0$,
then the matrix valued connection $1$-forms $\Omega$ and
$\Omega^{[2]}$ of $\nabla$ and $\nabla^{[2]}$ with respect to this
basis satisfy
$\Omega=\Omega^{[2]}-{\rm d}\big(\frac{\rho^{(0)}}{z}\big)\cdot {\bf 1}_r$.
\item[$(d)$] $($Definition$)$ Consider a $(TE)$-structure
$\big(H^{[3]}\to {\mathbb C}\times M^{[3]},\nabla^{[3]}\big)$ with trace free
pole part. Consider the manifold $M^{[4]}:={\mathbb C}\times M^{[3]}$
with $($local$)$ coordinates $t_1$ on ${\mathbb C}$ and $t'$ on $M^{[3]}$,
and the projection $\varphi^{[4]}\colon M^{[4]}\to M^{[3]}$,
$(t_1,t')\mapsto t'$.
Define the $(TE)$-structure $\big(H^{[4]}\to {\mathbb C}\times M^{[4]},
\nabla^{[4]}\big)$ with
$\big({\mathcal O}\big(H^{[4]}\big),\nabla^{[4]}\big)
=\big(\varphi^{[4]}\big)^*\big({\mathcal O}\big(H^{[3]}\big),\nabla^{[3]}\big)\otimes {\mathcal E}^{t_1/z}$.
\item[$(e)$] If the $(TE)$-structure $\big(H^{[2]},\nabla^{[2]}\big)$ is induced
by the $(TE)$-structure $\big(H^{[3]},\nabla^{[3]}\big)$ via a map
$\varphi\colon M\to M^{[3]}$, then the $(TE)$-structure $(H,\nabla)$
is induced by the $(TE)$-structure $\big(H^{[4]},\nabla^{[4]}\big)$
via the map $(-\rho^{(0)},\varphi)\colon M\to M^{[4]}={\mathbb C}\times M^{[3]}$.
\end{enumerate}
\end{Lemma}
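To make part $(a)$ concrete, one can compute the connection matrices of
${\mathcal E}^{g/z}$ directly: for the constant section $v:=1$ of $H^{[1]}$
(a direct check, using that $g$ does not depend on $z$)
\begin{gather*}
z^2\nabla^{[1]}_{\partial_z}v=z^2\partial_z\big(gz^{-1}\big)\cdot v=-g\cdot v,
\qquad
z\nabla^{[1]}_{\partial_i}v=\partial_ig\cdot v,
\end{gather*}
so in the notations~\eqref{3.4}--\eqref{3.6} the rank $1$ $(TE)$-structure
${\mathcal E}^{g/z}$ has $B=-g$ and $A_i=\partial_ig$.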
Part $(c)$ allows one to pass from an arbitrary $(TE)$-structure to one
with trace free pole part, and to go back to the original one.
Part $(e)$ considers two $(TE)$-structures as in part $(c)$,
an original one and an associated one with trace free pole part.
If the associated one is induced by a third $(TE)$-structure,
then the original one is induced by a closely related
$(TE)$-structure with one parameter more.
Lemma~\ref{t3.11} continues Lemma~\ref{t3.10}.
\begin{Lemma}\label{t3.11}
Let $\big(H\to{\mathbb C}\times \big(M,t^0\big),\nabla\big)$ be a $(TE)$-structure
of rank $r\in{\mathbb N}$ over a germ $\big(M,t^0\big)$ of a manifold,
with coordinates $t=(t_1,\dots ,t_n)$ and $\partial_i:=\partial/\partial t_i$.
We suppose $t^0=0$ so that ${\mathcal O}_{({\mathbb C}\times M,(0,t^0))}={\mathbb C}\{t,z\}$.
Recall the functions $\rho^{(0)}$ and $\rho^{(1)}$
of the $(TE)$-structure from Lemma~$\ref{t3.9}$.
Consider the $(TE)$-structure
$\big(H^{[2]},\nabla^{[2]}\big)$ from Lemma~$\ref{t3.10}$
with trace free pole part which is defined by
$\big({\mathcal O}\big(H^{[2]}\big),\nabla^{[2]}\big):= ({\mathcal O}(H),\nabla)\otimes {\mathcal E}^{\rho^{(0)}/z}$.
Here $H^{[2]}=H$, but
$\nabla^{[2]}=\nabla+{\rm d}\big(\frac{\rho^{(0)}}{z}\big)\cdot\id$.
The matrices $A_i$ and $B$ in~\eqref{3.4}--\eqref{3.6}
for the $(TE)$-structure $\big(H^{[2]},\nabla^{[2]}\big)$
of any ${\mathbb C}\{t,z\}$-basis $\underline{v}$ of ${\mathcal O}\big(H^{[2]}\big)_0$
satisfy
\begin{gather}\label{3.18}
0=\tr A_i^{(0)}=\tr B^{(0)}=\tr \big(B^{(1)}-\rho^{(1)}{\bf 1}_r\big).
\end{gather}
The basis $\underline{v}$ can be chosen such that
the matrices satisfy
\begin{eqnarray}\label{3.19}
0=\tr A_i=\tr\big(B-z\rho^{(1)}{\bf 1}_r\big).
\end{eqnarray}
\end{Lemma}
\begin{proof}
Any ${\mathbb C}\{t,z\}$-basis $\underline{v}$ of
${\mathcal O}\big(H^{[2]}\big)_0={\mathcal O}(H)_0$ satisfies
\begin{gather*}
\tr B^{(0)}=\tr{\mathcal U}^{[2]}=0 \qquad
\text{as}\ \big(H^{[2]},\nabla^{[2]}\big)\ \text{has trace free pole part,}
\\
\tr A_i^{(0)}=0 \qquad
\text{because of}\ \tr\partial_iB^{(0)}=\partial_i\tr B^{(0)}=0
\ \text{and}\ \eqref{3.10}\ \text{for}\ k=1 ,
\\
\tr\big(B^{(1)}- \rho^{(1)}{\bf 1}_r\big)=0 \qquad
\text{by Lemma~\ref{t3.9} and especially}
\\
\Omega =\Omega^{[2]}-{\rm d}\bigg(\frac{\rho^{(0)}}{z}\bigg)\cdot{\bf 1}_r
=\Omega^{[2]}-\sum_{i=1}^n z^{-1}\frac{\partial\rho^{(0)}}{\partial t_i}
\cdot{\bf 1}_r{\rm d} t_i + z^{-2}\rho^{(0)}\cdot{\bf 1}_r {\rm d} z.
\end{gather*}
Start with an arbitrary basis $\underline{v}$,
consider the function
\begin{gather}\label{3.20}
g:=\frac{1}{r}\sum_{k\geq 2}
\frac{-\tr B^{(k)}}{k-1}\cdot z^{k-1}\in z{\mathbb C}\{t,z\},
\end{gather}
{\samepage
consider $T:={\rm e}^g\cdot{\bf 1}_r$, and
$\underline{\widetilde v}:=\underline{v}\cdot T$. Then~\eqref{3.13} gives
\begin{gather*}
\widetilde B=B+T^{-1}z^2\partial_z T = B+ \bigg({-}\sum_{k\geq 2}\tr B^{(k)}z^k\bigg)\cdot \frac{1}{r}{\bf 1}_r,
\end{gather*}
so $\tr \widetilde B^{(k)}=0$ for $k\geq 2$, $\widetilde B^{(1)}=B^{(1)}$,
$\widetilde B^{(0)}=B^{(0)}$.}
Therefore now suppose $\tr \big(B-z\rho^{(1)}{\bf 1}_r\big)=0$.
\eqref{3.10} for $k\geq 3$ gives
$\tr A_i^{(l)}=0$ for $l\geq 2$, because
$\tr \partial_i B^{(l)}=\partial_i \tr B^{(l)}=0$.
Finally, we consider $T=T^{(0)}={\rm e}^h\cdot{\bf 1}_r$ for a
suitable function $h\in{\mathbb C}\{t\}$.
Then $\widetilde B=B$, $\widetilde A_i^{(k)}=A_i^{(k)}$ for $k\neq 1$,
and $\widetilde A_i^{(1)}=A_i^{(1)}+\partial_i h\cdot{\bf 1}_r$.
So we need $h\in{\mathbb C}\{t\}$ with
$\partial_ih=-\frac{1}{r}\tr A_i^{(1)}$.
Such a function exists because~\eqref{3.9} for $k=2$
implies $\partial_i \tr A_j^{(1)}=\partial_j \tr A_i^{(1)}$.
We have obtained a basis $\underline{v}$ with
$\tr\big(B-z\rho^{(1)}{\bf 1}_r\big)=0$ and $\tr A_i=0$ for all $i$.
\end{proof}
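As a small worked example of the normalization via~\eqref{3.20}
(with hypothetical data): if $r=2$ and $\tr B=2z^2$, then $g=-z$,
$T={\rm e}^{-z}{\bf 1}_2$ and
\begin{gather*}
\widetilde B=B+T^{-1}z^2\partial_z T=B-z^2{\bf 1}_2,
\end{gather*}
so indeed $\tr\widetilde B^{(k)}=0$ for $k\geq 2$, whereas
$\widetilde B^{(0)}=B^{(0)}$ and $\widetilde B^{(1)}=B^{(1)}$ are untouched.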
\subsection[$(TE)$-structures over $F$-manifolds with Euler fields]{$\boldsymbol{(TE)}$-structures over $\boldsymbol{F}$-manifolds with Euler fields}
The pole part of a $(T)$-structure (or a $(TE)$-structure)
over ${\mathbb C}\times M$ along $\{0\}\times M$ induces a~Higgs bundle (together with ${\mathcal U}$).
This is elementary (e.g.,~\cite{DH20-1} or~\cite{He03}).
\begin{Lemma}\label{t3.12}
Let $(H\to{\mathbb C}\times M,\nabla)$ be a $(T)$-structure.
Define $K:=H|_{\{0\}\times M}$.
Then $C:=[z\nabla]\in\Omega^1(M,{\rm End}(K))$,
more explicitly
\begin{eqnarray}\label{3.21}
C_X[a]:= [z\nabla_X a]\qquad\text{for}\quad X\in {\mathcal T}_M,\quad a\in{\mathcal O}(H),
\end{eqnarray}
is a Higgs field, i.e., the endomorphisms $C_X,C_Y\colon K\to K$
for $X,Y\in{\mathcal T}_M$ commute.
If $(H\to{\mathbb C}\times M)$ is a $(TE)$-structure,
then its pole part ${\mathcal U}\colon K\to K$ commutes with all
endomorphisms $C_X$, $X\in{\mathcal T}_M$, short: $[C,{\mathcal U}]=0$.
\end{Lemma}
\begin{Definition}\label{t3.13}
The Higgs field of a $(T)$-structure or a $(TE)$-structure
$(H\to{\mathbb C}\times M,\nabla)$ is {\it primitive}
if there is an open cover $\mathcal V$ of $M$ and for any
$U\in \mathcal V$
a section $\zeta_{U}\in {\mathcal O} (K\vert_{U})$
(called a~{\it local primitive section})
with the property that the map
${\mathcal T}_U\ni X\rightarrow C_{X} \zeta_{U} \in {\mathcal O}(K)$
is an~isomorphism.
\end{Definition}
Theorems~\ref{t3.14} and~\ref{t3.16} show in two ways
that primitivity of a Higgs field is a good condition.
Theorem~\ref{t3.14} was first proved in
\cite[Theorem~3.3]{HHP10} (but see also
\cite[Lemma~10]{DH20-1}).
\begin{Theorem}\label{t3.14}
A $(T)$-structure $(H\rightarrow \mathbb{C}\times M, \nabla )$
with primitive Higgs field induces a multiplication $\circ$
on $TM$ which makes $M$ an $F$-manifold.
A $(TE)$-structure $(H\rightarrow \mathbb{C}\times M, \nabla )$
with primitive Higgs field induces in addition
a vector field $E$ on $M$, which, together with $\circ$,
makes $M$ an $F$-manifold with Euler field.
The multiplication $\circ$, unit field $e$ and Euler field $E$
$($the latter in the case of a $(TE)$-structure$)$, are defined by
\begin{gather*}
C_{X\circ Y} = C_{X} C_{Y},\qquad
C_{e} =\mathrm{Id},\qquad
C_{E} = - {\mathcal U},
\end{gather*}
where $C$ is the Higgs field defined by $\nabla$, and
$\mathcal U$ is defined in~\eqref{3.16}.
\end{Theorem}
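As an illustration of Theorem~\ref{t3.14}, consider a hypothetical rank $2$
$(TE)$-structure over a $2$-dimensional base with coordinates $(u_1,u_2)$,
whose induced Higgs field and pole part are given in a suitable basis of $K$ by
\begin{gather*}
C_{\partial/\partial u_1}=\begin{pmatrix}1&0\\0&0\end{pmatrix},\qquad
C_{\partial/\partial u_2}=\begin{pmatrix}0&0\\0&1\end{pmatrix},\qquad
{\mathcal U}=\begin{pmatrix}-u_1&0\\0&-u_2\end{pmatrix}.
\end{gather*}
The sum of the two basis sections is a primitive section. One reads off
$e=\partial/\partial u_1+\partial/\partial u_2$, the multiplication
$\partial/\partial u_i\circ\partial/\partial u_j=\delta_{ij}\,\partial/\partial u_i$,
and from $C_E=-{\mathcal U}$ the Euler field
$E=u_1\,\partial/\partial u_1+u_2\,\partial/\partial u_2$;
this is the $F$-manifold of type $A_1^2$ with its standard Euler field.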
Definition~\ref{t3.15} recalls the notions of an
{\it unfolding} and of a {\it universal unfolding}
of a $(TE)$-structure over a germ of a manifold
from~\cite[Definition 2.3]{HM04}. It~turns out that any $(TE)$-structure over a germ of a manifold
with primitive Higgs field is a universal unfolding of itself.
Interestingly, we will see in Theorem~\ref{t8.5}
also $(TE)$-structures which are universal unfoldings
of themselves, but where the Higgs bundle is only
generically primitive. Still in the examples which
we consider, the base manifold is an $F$-manifold with
Euler field globally.
Malgrange~\cite{Ma86} proved that a $(TE)$-structure
over a point $t^0$ has a universal unfolding
with primitive Higgs field if the endomorphism
${\mathcal U}\colon K_{t^0}\to K_{t^0}$ is {\it regular}, i.e.,
it has for each eigenvalue only one Jordan block.
A generalization was given by Hertling and Manin~\cite[Theorem~2.5]{HM04}.
Theorem~\ref{t3.16} cites in part $(b)$
the generalization.
Part $(a)$ is the special case of a $(TE)$-structure
with primitive Higgs field.
Part $(c)$ is the special case of a $(TE)$-structure
over a~point, Malgrange's result.
\begin{Definition}\label{t3.15}
Let $\big(H\to{\mathbb C}\times \big(M,t^0\big),\nabla\big)$ be a~$(TE)$-structure over a~germ $\big(M,t^0\big)$ of a~mani\-fold.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] An {\it unfolding} of it is a $(TE)$-structure
$\big(H^{[1]}\to{\mathbb C}\times \big(M\times {\mathbb C}^{l_1},\big(t^0,0\big)\big),\nabla^{[1]}\big)$
over a germ $\big(M\times {\mathbb C}^{l_1},\big(t^0,0\big)\big)$
(for some $l_1\in{\mathbb N}_0$)
together with a fixed isomorphism
\begin{gather*}
i^{[1]}\colon \ \big(H\to{\mathbb C}\times \big(M,t^0\big),\!\nabla\big)
\to \big(H^{[1]}\to{\mathbb C}\times \big(M\times {\mathbb C}^{l_1},\big(t^0,0\big)\big),
\!\nabla^{[1]}\big)|_{{\mathbb C}\times (M\times\{0\},(t^0,0))} .
\end{gather*}
\item[$(b)$] One unfolding $\big(H^{[1]}\to{\mathbb C}\times\big(M\times {\mathbb C}^{l_1},
\big(t^0,0\big)\big),\nabla^{[1]},i^{[1]}\big)$ {\it induces}
a second unfolding $\big(H^{[2]}\to{\mathbb C}\times\big(M\times {\mathbb C}^{l_2},
\big(t^0,0\big)\big),\nabla^{[2]},i^{[2]}\big)$ if there
are a holomorphic map germ
\begin{gather*}
\varphi\colon\ \big(M\times {\mathbb C}^{l_2},\big(t^0,0\big)\big)\to \big(M\times {\mathbb C}^{l_1},\big(t^0,0\big)\big),
\end{gather*}
which is the identity on $M\times \{0\}$,
and an isomorphism $j$ from the second unfolding to the
pullback of the first unfolding by $\varphi$ such that
\begin{gather}\label{3.25}
i^{[1]}= j|_{{\mathbb C}\times (M\times \{0\},(t^0,0))}\circ i^{[2]}.
\end{gather}
(Then $j$ is uniquely determined by $\varphi$ and~\eqref{3.25}.)
\item[$(c)$] An unfolding is {\it universal} if it induces any unfolding
via a unique map $\varphi$.
\end{enumerate}
\end{Definition}
By definition of a universal unfolding in part $(c)$,
a $(TE)$-structure has (up to canonical isomorphism)
at most one universal unfolding, because any two universal
unfoldings induce one another by unique maps.
\begin{Theorem}\label{t3.16}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(a)$] {\rm (\cite[Theorem~2.5]{HM04})}
A $(TE)$-structure over a germ $\big(M,t^0\big)$ with
primitive Higgs field is a~universal unfolding of itself.
\item[$(b)$] {\rm (\cite[Theorem~2.5]{HM04})}
Let $\big(H\to{\mathbb C}\times\big(M,t^0\big),\nabla\big)$ be a $(TE)$-structure over
a germ $\big(M,t^0\big)$ of a~manifold.
Let $\big(K\to \big(M,t^0\big),C\big)$ be the induced Higgs bundle over
$\big(M,t^0\big)$. Suppose that a~vec\-tor $\zeta_{t^0}\in K_{t^0}$
with the following properties exists:
\begin{enumerate}\itemsep=0pt\setlength{\leftskip}{0.25cm}
\item[$(IC)$] $($Injectivity condition$)$ The map
$C_\bullet \zeta_{t^0}\colon T_{t^0}M\to K_{t^0}$ is injective.
\item[$(GC)$] $($Generation condition$)$ $\zeta_{t^0}$ and
its images under iteration of the maps
${\mathcal U}|_{t^0}\colon K_{t^0}\to K_{t^0}$ and $C_X\colon K_{t^0}\to K_{t^0}$
for $X\in T_{t^0}M$ generate $K_{t^0}$.
\end{enumerate}
Then a universal unfolding of the $(TE)$-structure over
a germ $\big(M\times {\mathbb C}^l,\big(t^0,0\big)\big)$ $(l\in{\mathbb N}_0$ suitable$)$
exists. It~is unique up to isomorphism.
Its Higgs field is primitive.
\item[$(c)$] {\rm (\cite{Ma86})} A $(TE)$-structure over a point $t^0$ has
a universal unfolding with primitive Higgs field
if the endomorphism
$\big[z^2\nabla_{\partial_z}\big]={\mathcal U}\colon K_{t^0}\to K_{t^0}$
is regular, i.e., it has for each eigenvalue only one
Jordan block. In~that case, the germ of the $F$-manifold with Euler field
which underlies the universal unfolding,
is by definition $($Definition~$\ref{t2.4})$ regular.
\end{enumerate}
\end{Theorem}
\begin{Remarks}\label{t3.17}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] As said above, the parts $(a)$ and $(c)$ are special cases
of part $(b)$.
\item[$(ii)$] A germ $\big(\big(M,t^0\big),\circ,e,E\big)$ of a regular $F$-manifold
is uniquely determined by the regular endomorphism
$E\circ|_{t^0}\colon T_{t^0}\to T_{t^0}$ (Theorem~\ref{t2.5}).
\item[$(iii)$] Consider the germ $(M,0)=\big({\mathbb C}^2,0\big)$ of a 2-dimensional
$F$-manifold with Euler field $E$ in~Theorem~\ref{t2.2}. It~is regular if and only if
$E\circ|_{t=0}\notin\{\lambda\id\,|\, \lambda\in{\mathbb C}\}$. In~the semisimple case (Theorem~\ref{t2.2}$(a)$) this
holds if and only if $c_1\neq c_2$. In~the cases $I_2(m)$ $(m\geq 3)$ it does not hold. In~the case of ${\mathcal N}_2$ with $E=t_1\partial_1+g(t_2)\partial_2$
it holds if and only if $g(0)\neq 0$.
See also Remark~\ref{t2.6}$(ii)$.
\item[$(iv)$] Theorem~\ref{t3.16}$(c)$ implies that a $(TE)$-structure
with primitive Higgs field
over a germ $\big(M,t^0\big)$ of a regular $F$-manifold with
Euler field is determined up to gauge isomorphism
by the restriction of the $(TE)$-structure to $t^0$.
\item[$(v)$] Lemma~\ref{t3.6}, Definition~\ref{t3.8},
Lemmata~\ref{t3.9}--\ref{t3.12}, Definition~\ref{t3.13},
Theorem~\ref{t3.14} and Definition~\ref{t3.15}
hold or make sense also for {\it formal} $(T)$-structures
or $(TE)$-structures. However, the proof of Theorem~\ref{t3.16}
used in an essential way {\it holomorphic} $(TE)$-structures.
We do not know whether Theorem~\ref{t3.16}
holds also for formal $(TE)$-structures.
\end{enumerate}
\end{Remarks}
\subsection{Birkhoff normal form}
\begin{Definition}\label{t3.18}
Let $(H\to{\mathbb C}\times M,\nabla)$ be a $(TE)$-structure
over a manifold $M$ with coordinates $t=(t_1,\dots ,t_n)$.
A {\it Birkhoff normal form} consists of a basis
$\underline{v}$ of $H$ and associated matrices
$A_1,\dots ,A_n,B$ as in~\eqref{3.4} such that
\begin{gather*}
A_1^{(k)}=\dots =A_n^{(k)}=0\qquad \text{for}\quad k\geq 1,\qquad
B^{(k)}=0\qquad\text{for}\quad k\geq 2,\qquad
\partial_i B^{(1)}=0.
\end{gather*}
\end{Definition}
\begin{Remarks}\label{t3.19}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Such a basis defines an extension of the
$(TE)$-structure to a pure $(TLE)$-structure.
Then it is a basis of the $(TLE)$-structure
whose restriction
to $\{\infty\}\times M$ is flat with respect to the
residual connection (that is just the restriction
of the connection $\nabla$ of the underlying $(TL)$-structure
to~$H|_{\{\infty\}\times M}$). Then the conditions
\eqref{3.9} and~\eqref{3.10} boil down to
\begin{gather}\label{3.27}
0= \big[A_i^{(0)},A_j^{(0)}\big],\qquad
\partial_iA_j^{(0)}=\partial_jA_i^{(0)},
\\
0= \big[A_i^{(0)},B^{(0)}\big],\qquad
0=\partial_iB^{(0)}+A_i^{(0)}+\big[A_i^{(0)},B^{(1)}\big],\qquad
0=\partial_iB^{(1)}.\label{3.28}
\end{gather}
Such a basis is relevant for the construction of
Frobenius manifolds (see, e.g.,~\cite{DH20-2}).
\item[$(ii)$] Vice versa, if the $(TE)$-structure has an extension
to a pure $(TLE)$-structure, then a~basis~$\underline{v}$
of the $(TLE)$-structure exists whose restriction to
$\{\infty\}\times M$ is flat with respect to the
residual connection.
Then this basis $\underline{v}$ and the associated matrices
form a Birkhoff normal form.
\item[$(iii)$] A Birkhoff normal form does not always exist.
But if a Birkhoff normal form of the restriction of
a $(TE)$-structure over $M$ to a point $t^0\in M$
exists, it extends to a Birkhoff normal form
of the $(TE)$-structure over the germ $\big(M,t^0\big)$~\cite[Chapter~VI, Theorem~2.1]{Sa02}
(or~\cite[Theorem~5.1(c)]{DH20-2}).
\item[$(iv)$] The problem whether a $(TE)$-structure over a point
has an extension to a pure $(TLE)$-structure is a
special case of the {\it Birkhoff problem},
which itself is a special case of the Riemann--Hilbert--Birkhoff
problem. The book~\cite{AB94} and Chapter~IV in~\cite{Sa02} are devoted to these problems and results
on them.
\end{enumerate}
\end{Remarks}
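For instance, in rank $1$ all commutators in~\eqref{3.27} and~\eqref{3.28}
vanish, and the conditions reduce to
\begin{gather*}
\partial_iA_j^{(0)}=\partial_jA_i^{(0)},\qquad
A_i^{(0)}=-\partial_iB^{(0)},\qquad
\partial_iB^{(1)}=0.
\end{gather*}
This is consistent with the rank $1$ $(TE)$-structures ${\mathcal E}^{g/z}$
from Lemma~\ref{t3.10}$(a)$, where $A_i=\partial_ig$ and $B=-g$:
these matrices are constant in $z$, so they form a Birkhoff normal form
(with $B^{(1)}=0$).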
Here the following two results on the Birkhoff problem will
be useful. However, we will use part $(a)$ only in the
case of a $(TE)$-structure over a point $t^0$ with
a logarithmic pole at $z=0$, in which case it is trivial.
\begin{Theorem}\label{t3.20}
Let $\big(H\to{\mathbb C}\times\big\{t^0\big\},\nabla\big)$ be a $(TE)$-structure
over a point $t^0$.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] {\rm (Plemelj,~\cite[Chapter~IV, Corollary~2.6(1)]{Sa02})}
If the monodromy is semisimple, the $(TE)$-structure
has an extension to a pure $(TLE)$-structure.
\item[$(b)$] {\rm (Bolibrukh and Kostov,~\cite[Chapter~IV, Corollary 2.6(3)]{Sa02})}
The germ ${\mathcal O}(H)_0\otimes_{{\mathbb C}\{z\}}{\mathbb C}\{z\}\big[z^{-1}\big]$ is a~${\mathbb C}\{z\}\big[z^{-1}\big]$-vector space of dimension $r=\rk H\in{\mathbb N}$
on which $\nabla$ acts.
If no ${\mathbb C}\{z\}\big[z^{-1}\big]$ sub vector space of dimension
in $\{1,\dots ,r-1\}$ exists which is $\nabla$-invariant,
then the $(TE)$-structure has an extension to a
pure $(TLE)$-structure.
\end{enumerate}
\end{Theorem}
\subsection[Regular singular $(TE)$-structures]{Regular singular $\boldsymbol{(TE)}$-structures}
A $(TE)$-structure over a point $t^0$ is regular singular
if all its holomorphic sections have moderate growth
near 0. A good tool to treat this situation is given by
special sections of moderate growth, the
{\it elementary sections}.
Definition~\ref{t3.21} explains them and other basic notations.
We work with a simply connected manifold~$M$,
so that the only monodromy is the monodromy along closed
paths in the punctured $z$-plane going around~0.
One important case is the case of a germ $\big(M,t^0\big)$
of a manifold.
The most important case is the case of a point, $M=\big\{t^0\big\}$.
\begin{Definition}\label{t3.21}
Let $(H\to {\mathbb C}\times M,\nabla)$ be a $(TE)$-structure
of rank $r=\rk H\in{\mathbb N}$ over a simply connected manifold $M$.
We associate the following data to it.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] $H':=H|_{{\mathbb C}^*\times M}$ is the flat bundle on
${\mathbb C}^*\times M$.
$H^\infty$ denotes the ${\mathbb C}$-vector space (of dimen\-sion~$r$)
of global flat multivalued sections on $H'$.
Let the endomorphism $M^{\rm mon}\colon H^\infty\to H^\infty$
be the monodromy on it with semisimple part
$M^{\rm mon}_s$, unipotent part $M^{\rm mon}_u$
(with $M^{\rm mon}_sM^{\rm mon}_u=M^{\rm mon}_uM^{\rm mon}_s$),
nilpotent part $N^{\rm mon}:=\log M^{\rm mon}_u$
so that $M^{\rm mon}_u={\rm e}^{N^{\rm mon}}$,
and with eigenvalues in the finite set
$\Eig(M^{\rm mon})\subset{\mathbb C}$. For $\lambda\in{\mathbb C}$, let
\begin{gather*}
H^\infty_\lambda:=\ker\big(M^{\rm mon}_s-\lambda\id\colon H^\infty\to
H^\infty\big)
\end{gather*}
be the generalized eigenspace in $H^\infty$
of the monodromy with eigenvalue $\lambda$. It~is not $\{0\}$ if and only if $\lambda\in\Eig(M^{\rm mon})$.
\item[$(b)$] For $\alpha\in{\mathbb C}$, define the finite dimensional
${\mathbb C}$-vector space $C^\alpha$
of the following global sections of $H'$,
\begin{gather*}
C^\alpha:= \big\{\sigma\in{\mathcal O}(H')({\mathbb C}^*\times M)\,|\, (\nabla_{z\partial_z}-\alpha\id)^r(\sigma)=0,
\nabla_{\partial_i}(\sigma)=0\big\}
\end{gather*}
(where $t=(t_1,\dots ,t_n)$ are local coordinates and
$\partial_i$ are the coordinate vector fields).
Observe $z^k\cdot C^\alpha =C^{\alpha+k}$ for $k\in{\mathbb Z}$.
For each $\alpha$ the map
\begin{align*}
s(\cdot,\alpha)\colon\ H^\infty_{{\rm e}^{-2\pi {\rm i}\alpha}}&\to C^\alpha,
\\
A&\mapsto s(A,\alpha):=z^\alpha\cdot
{\rm e}^{-\log z\cdot N^{\rm mon}/2\pi {\rm i}}A(\log z),
\end{align*}
is an isomorphism. So, $C^\alpha\neq\{0\}$ if and only if
${\rm e}^{-2\pi {\rm i}\alpha}\in\Eig(M^{\rm mon})$.
The sections $s(A,\alpha)$ are called
{\it elementary sections}.
\item[$(c)$] A holomorphic section $\sigma$
of $H'|_{(U_1\setminus\{0\})\times U_2}$ for $U_1\subset {\mathbb C}$
a neighborhood of $0\in{\mathbb C}$ and $U_2\subset M$ open in $M$
can be written uniquely as an (in general infinite) sum
of elementary sections
$\operatorname{es}(\sigma,\alpha)\in {\mathcal O}_{U_2}\cdot C^\alpha$
with coefficients in ${\mathcal O}_{U_2}$,
\begin{gather*}
\sigma = \sum_{\alpha\colon {\rm e}^{-2\pi {\rm i}\alpha}\in\Eig(M^{\rm mon})}
\operatorname{es}(\sigma,\alpha).
\end{gather*}
In order to see this, choose numbers $\alpha_j\in{\mathbb C}$
and elementary sections $s_j\in C^{\alpha_j}$
for $j\in\{1,\dots ,r\}$ such that $s_1,\dots ,s_r$
form a global basis of $H'$. Then
\begin{gather}
\sigma =\sum_{j=1}^r a_js_j \qquad\text{with}\quad
a_j=a_j(z,t)= \sum_{k=-\infty}^\infty a_{kj}(t)z^k\in
{\mathcal O}_{(U_1\setminus\{0\})\times U_2}.\label{3.33}
\end{gather}
Here~\eqref{3.33} is the expansion of $a_j$ as a Laurent
series in $z$ with holomorphic coefficients
$a_{kj}\in {\mathcal O}_{U_2}$ in $t$. Then
\begin{eqnarray*}
\operatorname{es}(\sigma,\alpha)(z,t)&=& \sum_{j\colon \alpha-\alpha_j\in{\mathbb Z}}
a_{\alpha-\alpha_j,j}(t)z^{\alpha-\alpha_j}s_j.
\end{eqnarray*}
\item[$(d)$] A holomorphic section $\sigma$ as in $(c)$
has {\it moderate growth}
if a bound $b\in{\mathbb R}$ with $\operatorname{es}(\sigma,\alpha)=0$
for all $\alpha$ with $\Ree(\alpha)<b$ exists.
The sheaf ${\mathcal V}^{>-\infty}$ on ${\mathbb C}\times M$
of all sections of moderate growth is
\begin{gather*}
{\mathcal V}^{>-\infty}:= \bigoplus_{\alpha\colon -1<\Ree(\alpha)\leq 0}
{\mathcal O}_{{\mathbb C}\times M}\big[z^{-1}\big]\cdot C^\alpha.
\end{gather*}
The {\it Kashiwara--Malgrange $V$-filtration} is given by the
locally free subsheaves, for $r\in{\mathbb R}$,
\begin{gather*}
{\mathcal V}^{r}:= \bigoplus_{\alpha\colon \Ree(\alpha)\in[r,r+1[}
{\mathcal O}_{{\mathbb C}\times M}\cdot C^\alpha.
\end{gather*}
\end{enumerate}
\end{Definition}
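A simple illustration of $(c)$ and $(d)$ in rank $1$: suppose the monodromy
is trivial, so $N^{\rm mon}=0$ and $\Eig(M^{\rm mon})=\{1\}$, and
$C^{k}={\mathbb C}\cdot z^ks_0$ for $k\in{\mathbb Z}$ and a generating
flat section $s_0\in C^0$. Then a section such as $\sigma=\big(t+z^l\big)s_0$
(with $l\in{\mathbb N}$) has exactly two nonvanishing elementary parts,
\begin{gather*}
\operatorname{es}(\sigma,0)=t\cdot s_0,\qquad
\operatorname{es}(\sigma,l)=z^l\cdot s_0,
\end{gather*}
and it has moderate growth (compare the example in the Remarks
following Definition~\ref{t3.26}).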
\begin{Definition}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(a)$] A $(TE)$-structure $(H\to{\mathbb C}\times M ,\nabla)$
over a simply connected manifold $M$ is
{\it regular singular} if ${\mathcal O}(H)\subset {\mathcal V}^{>-\infty}$,
so if all its holomorphic sections have moderate growth
near 0.
\item[$(b)$] A $(TE)$-structure $(H\to{\mathbb C}\times M ,\nabla)$
over a simply connected manifold $M$ is {\it logarithmic} if
it has a basis $\underline{v}$ whose connection 1-form
$\Omega$ has a logarithmic pole along $\{0\}\times M$
(then this holds for any basis). In~the notations of~\eqref{3.4}--\eqref{3.6} that means
$A_i^{(0)}=B^{(0)}=0$.
Then the restriction of $\nabla$ to $K:=H|_{\{0\}\times M}$
is well-defined. It~is called the {\it residual connection}~$\nabla^{\rm res}$. The~{\it residue endomorphism} is
${\rm Res}_0=[\nabla_{z\partial_z}]\colon K\to K$.
\end{enumerate}
\end{Definition}
\begin{Theorem}[{well known, e.g.,~\cite[Theorems 7.10 and~8.7]{He02}}]\label{t3.23}
Let $(H\to{\mathbb C}\times M,\nabla)$ with $H|_{{\mathbb C}^*\times M}=H'$
be a logarithmic $(TE)$-structure over a simply
connected manifold.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] The bundle $H$ has a global basis which consists of
elementary sections $s_j\in C^{\alpha_j}$,
$j\in\{1,\dots ,\rk H\}$, for some $\alpha_j\in{\mathbb C}$.
Especially, $({\mathcal O}(H),\nabla)=\varphi_{t^0}^*
\big({\mathcal O}\big(H|_{{\mathbb C}\times\{t^0\}}\big),\nabla\big)$
for any $t^0\in M$, where $\varphi_{t^0}\colon M\to\big\{t^0\big\}$
is the projection. So it is just the pull back of
a logarithmic $(TE)$-structure over a point.
Especially, it is a regular singular $(TE)$-structure.
\item[$(b)$] The residual connection $\nabla^{\rm res}$ is flat. In~the notations~\eqref{3.4}--\eqref{3.6}, its
connection $1$-form is $\sum_{i=1}^n A_i^{(1)}{\rm d} t_i$.
The residue endomorphism ${\rm Res}_0$ is $\nabla^{\rm res}$-flat. In~the notations~\eqref{3.4}--\eqref{3.6}, it~is given
by $B^{(1)}$.
\item[$(c)$] The endomorphism ${\rm e}^{-2\pi {\rm i}{\rm Res}_0}\colon K\to K$
has the same eigenvalues as the monodromy $M^{\rm mon}$,
but it might have a simpler Jordan block structure.
If no eigenvalues of ${\rm Res}_0$
differ by a~nonzero integer
$($nonresonance condition$)$, then ${\rm e}^{-2\pi {\rm i}{\rm Res}_0}$
has the same Jordan block structure as the mono\-dromy $M^{\rm mon}$.
\end{enumerate}
\end{Theorem}
\begin{Remarks}\quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Part $(a)$ of Theorem~\ref{t3.23} implies that a
logarithmic $(TE)$-structure over a~simply connected
manifold $M$ is the pull back $\varphi^*((H,\nabla)|_{{\mathbb C}\times \{t^0\}})$
of its restriction to $t^0$ for any~$t^0\in M$.
\item[$(ii)$] In the case of a regular singular $(TE)$-structure
over a simply connected manifold $M$, one can choose
elementary sections $s_j\in C^{\alpha_j}$, $j\in\{1,\dots ,\rk H\}$,
for some $\alpha_j\in{\mathbb C}$, such that they form a basis of $H'$
and such that the extension to $\{0\}\times M$ which they define
is a~logarithmic $(TE)$-structure.
Then the base change from any local basis of $H$ to the basis
$(s_1,\dots ,s_{\rk H})$ of this new $(TE)$-structure is meromorphic,
so the two $(TE)$-structures give the same meromorphic bundle.
This observation fits to the usual definition of meromorphic
bundle with regular singular pole.
\item[$(iii)$] The property of a section to have moderate growth
is invariant under pull back. Therefore also the property
of a $(TE)$-structure to be regular singular is invariant
under pull back.
\end{enumerate}
\end{Remarks}
\subsection{Marked (\emph{TE})-structures and moduli spaces for them}
It is easy to give a $(TE)$-structure $(H\to{\mathbb C}\times M,\nabla)$
with nontrivial Higgs field, and which is thus not the
pull back of a $(TE)$-structure over a point, such that
nevertheless the $(TE)$-structures over all points
$t^0\in M$ are isomorphic as abstract $(TE)$-structures.
Examples are given in Remark~\ref{t7.1}$(ii)$.
The existence of such $(TE)$-structures
obstructs the construction of nice Hausdorff
moduli spaces for $(TE)$-structures up to isomorphism.
The notion of a {\it marked} $(TE)$-structure hopefully
remedies this. However, at the moment we only have results
in the regular singular cases.
Definition~\ref{t3.25} gives the notion of a {\it marked}
$(TE)$-structure. Definition~\ref{t3.26} defines {\it good}
families of marked regular singular $(TE)$-structures.
Definition~\ref{t3.28} defines a functor for such families.
Theorem~\ref{t3.29} states that this functor is represented
by a complex space. It~builds on results in~\cite[Chapter~7]{HS10}.
Several remarks discuss what is missing in the other cases
and what more we have in the regular singular rank $2$ case,
thanks to Theorems~\ref{t6.3},~\ref{t6.7} and~\ref{t8.5}.
\begin{Definition}\label{t3.25}\quad
\begin{enumerate}\itemsep=0pt
\item[$(a)$] A {\it reference pair} $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$
consists of a finite dimensional (reference)
${\mathbb C}$-vector space $H^{{\rm ref},\infty}$
together with an automorphism $M^{\rm ref}$ of it.
\item[$(b)$] Let $M$ be a simply connected manifold.
A {\it marking} on a $(TE)$-structure
$(H\to{\mathbb C}\times M,\nabla)$ is an isomorphism
$\psi\colon (H^{\infty},M^{\rm mon})\to \big(H^{{\rm ref},\infty},M^{\rm ref}\big)$.
Here $H^{\infty}$ is (as in Definition~\ref{t3.21})
the space of global flat multivalued sections
on the flat bundle $H':=H|_{{\mathbb C}^*\times M}$,
and $M^{\rm mon}$ is its monodromy.
$\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$ is a reference pair.
The isomorphism $\psi$ of pairs means an isomorphism
$\psi\colon H^{\infty}\to H^{{\rm ref},\infty}$
with $\psi\circ M^{\rm mon}=M^{\rm ref}\circ\psi$.
A {\it marked} $(TE)$-structure is a $(TE)$-structure
with a marking.
{\sloppy\item[$(c)$] An isomorphism between two marked $(TE)$-structures
$\big(\big(H^{(1)},\nabla^{(1)}\big),\psi^{(1)}\big)$ and
$\big(\big(H^{(2)},\nabla^{(2)}\big),\psi^{(2)}\big)$ over the same base space
$M^{(1)}=M^{(2)}$ and with the same reference pair
$\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$ is a gauge isomorphism $\varphi$
between the unmarked $(TE)$-structures such that
the induced isomorphism
$\varphi^{\infty}\colon H^{(1),\infty}\to H^{(2),\infty}$
is compatible with the marking,
\begin{gather*}
\psi^{(2)}\circ\varphi^{\infty}=\psi^{(1)}.
\end{gather*}
}
\item[$(d)$] ${\rm Set}^{(H^{{\rm ref},\infty},M^{\rm ref})}$ denotes the set of
marked $(TE)$-structures over a point with the same reference
pair $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$. Furthermore,
${\rm Set}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}\subset
{\rm Set}^{(H^{{\rm ref},\infty},M^{\rm ref})}$
denotes the subset of marked regular singular
$(TE)$-structures over a point with the same reference pair
$\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$.
\end{enumerate}
\end{Definition}
We hope that ${\rm Set}^{(H^{{\rm ref},\infty},M^{\rm ref})}$ carries for
any reference pair $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$ a natural
structure as a complex space.
Theorem~\ref{t3.29} says that this holds for
${\rm Set}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$
and that this space represents a functor of good families
of marked regular singular $(TE)$-structures.
Definition~\ref{t3.26} gives a notion of a
{\it family of marked $(TE)$-structures}
and the notion of a {\it good family of marked regular
singular $(TE)$-structures}.
\begin{Definition}\label{t3.26}
Let $X$ be a complex space.
Let $t^0$ be an abstract point and
$\varphi\colon X\to\big\{t^0\big\}$ be the projection.
Let $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$ be a reference pair.
Let $\big(H^{{\rm ref},*},\nabla^{\rm ref}\big)$ be a flat bundle
on~${\mathbb C}^*\times\big\{t^0\big\}$ with monodromy $M^{\rm ref}$
and whose space of global flat multivalued sections
is identified with $H^{{\rm ref},\infty}$.
Let $i\colon {\mathbb C}^*\times X\hookrightarrow {\mathbb C}\times X$
be the inclusion.
$(a)$ A {\it family of marked $(TE)$-structures over $X$}
is a pair $(H,\psi)$ with the following properties:
\begin{enumerate}\itemsep=0pt\setlength{\leftskip}{0.65cm}
\item[$(i)$]
$H$ is a holomorphic vector bundle on ${\mathbb C}\times X$,
i.e., the linear space associated to a locally free sheaf
${\mathcal O}(H)$ of ${\mathcal O}_{{\mathbb C}\times X}$-modules.
Denote $H':=H|_{{\mathbb C}^*\times X}$.
\item[$(ii)$]
$\psi$ is an isomorphism $\psi\colon H'\to \varphi^* H^{{\rm ref},*}$
such that the restriction of the induced flat connection on $H'$
to ${\mathbb C}^*\times\{x\}$ for any $x\in X$ makes $H|_{{\mathbb C}\times\{x\}}$
into a $(TE)$-structure over the point $x$, i.e.,
the connection has a pole of order $\leq 2$ on holomorphic
sections of~$H|_{{\mathbb C}\times\{x\}}$.
\end{enumerate}
$(b)$ Consider a family $(H,\psi)$ of marked
regular singular $(TE)$-structures over $X$.
The marking~$\psi$ induces for each $x\in X$
canonical isomorphisms
\begin{gather*}
\psi\colon\quad H^\infty(x)\to H^{{\rm ref},\infty}
\\
\psi\colon\quad C^{\alpha}(x)\to C^{{\rm ref},\alpha}\qquad
\big(\alpha\in{\mathbb C}\text{ with }{\rm e}^{-2\pi {\rm i}\alpha}\in\Eig\big(M^{\rm ref}\big)\big),
\nonumber
\\
\psi\colon\quad V^r(x)\to V^{{\rm ref},r} \qquad (r\in{\mathbb R}), \nonumber
\end{gather*}
where $H^{\infty}(x)$, $C^{\alpha}(x)$, $V^r(x)$ and
$C^{{\rm ref},\alpha}$, $V^{{\rm ref},r}$ are defined as in
Definition~\ref{t3.21}, for the $(TE)$-structure over $x$,
respectively for $\big(H^{{\rm ref},*},\nabla^{\rm ref}\big)$.
The family $(H,\psi)$ is called {\it good} if some $r\in{\mathbb R}$
and some $N\in{\mathbb N}$ exist which satisfy
\begin{gather}\label{3.39}
{\mathcal O}(H|_{{\mathbb C}\times\{x\}})_0\supset V^{r}(x)
\qquad\text{for any}\quad x\in X,
\\
\dim_{\mathbb C} {\mathcal O}(H|_{{\mathbb C}\times\{x\}})_0/V^{r}(x)=N
\qquad\text{for any}\quad x\in X.\label{3.40}
\end{gather}
\end{Definition}
\begin{Remarks}
\qquad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] The notion of a family of marked $(TE)$-structures
is too weak. For example, it contains the following
pathological family of logarithmic $(TE)$-structures
of rank 1 over $X:={\mathbb C}$ (with coordinate $t$)
and with trivial monodromy. Write $s_0\in C^0$ for a
generating flat section. Define $H$ by
\begin{gather*}
{\mathcal O}(H)={\mathcal O}_{{\mathbb C}\times X}\cdot \big(t+z^l\big)s_0\qquad \text{for some}\quad l\in{\mathbb N}.
\end{gather*}
The marked $(TE)$-structures over all points $t\in{\mathbb C}^*\subset X={\mathbb C}$
are isomorphic and even equal; the one over $t=0$ is
different. The dimension $\dim_{\mathbb C}{\mathcal O}\big(H|_{{\mathbb C}\times\{t\}}\big)_0/V^l(t)$
is equal to $l$ for $t\in {\mathbb C}^*$ and equal to $0$ for $t=0$.
Therefore this family is not good in the sense of
Definition~\ref{t3.26}$(b)$.
Also, $z\nabla_{\partial_z}\big(t+z^l\big)s_0=lz^ls_0$ is not a
section in ${\mathcal O}(H)$, although for each fixed $t\in X$,
the restriction to ${\mathbb C}\times\{t\}$ is a section in
${\mathcal O}(H|_{{\mathbb C}\times \{t\}})$.
\item[$(ii)$] Theorem~\ref{t3.29} gives evidence that the notion
of a good family of marked regular singular $(TE)$-structures
is useful. However, it is not clear a priori whether any
regular singular $(TE)$-structure $(H\to{\mathbb C}\times M,\nabla)$
over a simply connected manifold $M$ is a good family
of marked regular singular $(TE)$-structures over $X=M$.
A marking can be imposed as~$M$ is simply connected.
However, the condition~\eqref{3.40} is not clear a priori.
Theorem~\ref{t8.5} will show this for regular singular
rank $2$ $(TE)$-structures. It~builds on Theorems~\ref{t6.3} and~\ref{t6.7} which show this for
regular singular rank $2$ $(TE)$-structures over $M={\mathbb C}$.
\item[$(iii)$] For $(TE)$-structures which are not regular singular,
we do not see an easy replacement of condition~\eqref{3.40}.
Is the condition $z^2\nabla_{\partial_z}{\mathcal O}(H)\subset{\mathcal O}(H)$
useful?
\end{enumerate}
\end{Remarks}
\begin{Definition}\label{t3.28}
Fix a reference pair $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] Define the functor ${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$
from the category of complex spaces to the category of sets by
\begin{gather*}
{\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}(X)
:= \{(H,\psi)\,|\, (H,\psi) \text{ is a good family of marked regular}
\\ \hphantom{{\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}(X):= \{}
\text{singular }(TE)\text{-structures over }X\},
\end{gather*}
{\sloppy
and, for any morphism $f\colon Y\to X$ of complex spaces and any element
$(H,\psi)$ of ${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}(X)$, define
${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}(f)(H,\psi):=f^*(H,\psi)$.
}
\item[$(b)$] Choose $r\in{\mathbb R}$ and $N\in {\mathbb N}$.
Define the functor ${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}$
from the category of~comp\-lex spaces to the category of sets by
\begin{gather*}
{\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}(X)
:=\{(H,\psi)\,|\, (H,\psi) \text{ is a good family of marked regular }
\\ \hphantom{{\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}(X):=\{}
\text{singular }(TE)\text{-structures over }X
\text{ which satisfies~\eqref{3.39} }
\\ \hphantom{{\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}(X):=\{}
\text{and~\eqref{3.40} for the given }r\text{ and }N\},
\end{gather*}
{\sloppy
and, for any morphism $f\colon Y\to X$ of complex spaces and any element
$(H,\psi)$ of ${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}(X)$, define
${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}(f)(H,\psi):=f^*(H,\psi)$.
}
\end{enumerate}
\end{Definition}
{\sloppy\begin{Theorem}\label{t3.29}
The functors ${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$ and
${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}$ are represented by
complex spaces, which are called
$M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$ and
$M^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}$. In~the case of ${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}$, the complex space
has even the structure of a projective algebraic variety.
As sets $M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}
={\rm Set}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$.
\end{Theorem}
}
\begin{proof} The proof for ${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}$
can be copied from the proof of Theorem~7.3 in~\cite{HS10}.
Here it is relevant that $r$ and $N$ with~\eqref{3.39} and
\eqref{3.40} imply the existence of an $r_2\in {\mathbb R}$ with $r_2<r$
and
\begin{gather}\label{3.44}
V^{r_2}(x)\supset {\mathcal O}(H|_{{\mathbb C}\times\{x\}})_0
\qquad\text{for any}\quad x\in X.
\end{gather}
In~\cite{HS10}, $(TERP)$-structures are considered.
Conditions~\eqref{3.39} and~\eqref{3.44} are demanded there. Condition~\eqref{3.40} is not
demanded there explicitly, but it follows from the properties
of the pairing there, and this is used in Lemma~7.2 in~\cite{HS10}.
The additional conditions of $(TERP)$-structures are not
essential for the arguments in the proof of Lemma~7.2 and
Theorem~7.3 in~\cite{HS10}. Therefore these proofs apply
also here and give the statements for
${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}$.
Let us call $(r,N)\in{\mathbb R}\times {\mathbb N}$ and
$\big(\widetilde r,\widetilde N\big)\in{\mathbb R}\times {\mathbb N}$ compatible if $n\in{\mathbb Z}$
with $\big(\widetilde r,\widetilde N\big)=\big(r+n,\allowbreak N+n\cdot\dim H^{{\rm ref},\infty}\big)$ exists. In~the case $n>0$,
$M^{(H^{{\rm ref},\infty},M^{\rm ref}),\widetilde r,\widetilde N}$ is a union of
$M^{(H^{{\rm ref},\infty},M^{\rm ref}),r,N}$ and additional irreducible
components. Thus for fixed $(r,N)$ the union
\[
\bigcup_{n\in{\mathbb N}}M^{(H^{{\rm ref},\infty},M^{\rm ref}),r+n,N+n\cdot\dim H^{{\rm ref},\infty}}
\]
is a complex space with in general countably many irreducible
(and compact) components.
And~$M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$ is the union
of these unions for all possible $(r,N)$
(as $\Eig(M^{\rm mon})$ is finite, in each interval of length
1, only finitely many $r$ are relevant).
\end{proof}
\begin{Remarks}
\quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] For each reference pair $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$
with $\dim H^{{\rm ref},\infty}=2$,
the representing complex space
$M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$
for the functor ${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$
is given in Theorem~\ref{t7.4}.
There the topological components are unions
$\bigcup_{n\in{\mathbb N}}M^{(H^{{\rm ref},\infty},M^{\rm ref}),r+n,N+n\cdot\dim H^{{\rm ref},\infty}}$
and have countably many irreducible components which are either isomorphic
to $\P^1$ or to the Hirzebruch surface ${\mathbb F}_2$ or to the variety $\widetilde{\mathbb F}_2$
obtained by blowing down the $(-2)$-curve in ${\mathbb F}_2$.
The space
$M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$ is a union of
countably many copies of one topological component.
\item[$(ii)$] Corollary~\ref{t7.3} says that any marked rank $2$
regular singular $(TE)$-structure $(H\to{\mathbb C}\times M,\nabla,\psi)$
with reference pair $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$ is
a good family of marked regular singular $(TE)$-structures.
Therefore and because of Theorem~\ref{t3.29},
such a $(TE)$-structure is induced by a morphism
$\varphi\colon M\to M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$.
This is crucial for the usefulness of the
space $M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$.
We hope that Corollary~\ref{t7.3} and this implication
are also true for higher rank regular singular
$(TE)$-structures.
\end{enumerate}
\end{Remarks}
\section[Rank 2 $(TE)$-structures over a point]{Rank 2 $\boldsymbol{(TE)}$-structures over a point}\label{c4}
Here we will classify the rank $2$ $(TE)$-structures over
a point.
\subsection{Separation into 4 cases}
They separate naturally into 4 cases.
\begin{Definition}
Let $(H\to{\mathbb C},\nabla)$ be a rank $2$ $(TE)$-structure
over a point $t^0=0$. Its formal invariants
$\delta^{(0)}$, $\rho^{(0)}$, $\delta^{(1)}$, $\rho^{(1)}$
from Lemma~\ref{t3.9} are complex numbers.
The eigenvalues of $-{\mathcal U}$ are called $u_1,u_2\in{\mathbb C}$.
They are given by
$(x-u_1)(x-u_2)=x^2+2\rho^{(0)}x+\delta^{(0)}$.
We separate four cases:
\begin{enumerate}\itemsep=0pt\setlength{\leftskip}{0.3cm}
\item[(Sem)] ${\mathcal U}$ has two different eigenvalues
$-u_1$ and $-u_2\in{\mathbb C}$, i.e.,
$0\neq \delta^{(0)}-\big(\rho^{(0)}\big)^2$.
\item[(Bra)] ${\mathcal U}$ has only one eigenvalue
(which is then $\rho^{(0)}$)
and one $2\times 2$ Jordan block,
and $\delta^{(1)}-2\rho^{(0)}\rho^{(1)}\neq 0$.
\item[(Reg)] ${\mathcal U}$ has only one eigenvalue
(which is then $\rho^{(0)}$)
and one $2\times 2$ Jordan block,
and $\delta^{(1)}-2\rho^{(0)}\rho^{(1)}= 0$.
\item[(Log)] ${\mathcal U}=\rho^{(0)}\cdot\id$.
\end{enumerate}
Here (Sem) stands for {\it semisimple},
(Bra) for {\it branched}, (Reg) for {\it regular singular}
and (Log) for {\it logarithmic}.
\end{Definition}
\begin{Remark}\label{t4.2}
Rank $2$ $(TE)$-structures over a point are richer than the
germs of meromorphic rank $2$ vector bundles with a pole of
order $2$. However, the four types above are closely related
to the formal classification of the latter ones by their
slopes (the notion of slopes is developed for example in
\cite[Section~5]{Sa93}). In~rank $2$, three slopes are possible:
slope $1$, slope $\frac{1}{2}$ and slope~$0$.
Slope 1 corresponds to the type (Sem), slope $\frac{1}{2}$
to type (Bra), and slope 0 to the types (Reg) and (Log).
\end{Remark}
First we will treat the semisimple case (Sem).
Then the cases (Bra), (Reg) and (Log)
will be considered together.
Lemma~\ref{t4.9} will justify the names (Bra) and (Reg).
Finally, the three cases (Bra), (Reg) and (Log) will be
treated one after the other.
The following lemma gives some first information.
Its proof is straightforward.
\begin{Lemma}\label{t4.3}
Let $(H\to{\mathbb C},\nabla)$ be a rank $2$ $(TE)$-structure over a point.
Denote by $\big(\widetilde H\to{\mathbb C},\widetilde\nabla\big)$ the $(TE)$-structure
with trace free pole part with
$\big({\mathcal O}\big(\widetilde H\big),\widetilde\nabla\big)=({\mathcal O}(H),\nabla)\otimes{\mathcal E}^{\rho^{(0)}/z}$
from Lemma~$\ref{t3.10}(b)$
$\big($called $\big(H^{[2]}\to{\mathbb C},\nabla^{[2]}\big)$ there$\big)$,
and denote its invariants from Lemma~$\ref{t3.9}$
by~$\widetilde{\mathcal U}$,~$\widetilde\delta^{(0)}$, $\widetilde\rho^{(0)},
\widetilde\delta^{(1)}$, $\widetilde\rho^{(1)}$. Then
\begin{gather*}
\widetilde{\mathcal U}= {\mathcal U}-\rho^{(0)}\id,
\\
\widetilde\delta^{(0)}= \delta^{(0)}-\big(\rho^{(0)}\big)^2,\qquad
\widetilde\rho^{(0)}=0,\nonumber
\\
\widetilde\delta^{(1)}= \delta^{(1)}-2\rho^{(0)}\rho^{(1)},\qquad
\widetilde\rho^{(1)}=\rho^{(1)}.
\end{gather*}
$\big(\widetilde H\to{\mathbb C},\widetilde\nabla\big)$ is of the same type $($Sem$)$ or $($Bra$)$
or $($Reg$)$ or $($Log$)$ as $(H\to{\mathbb C},\nabla)$.
The following table characterizes the type of the
$(TE)$-structures $(H\to{\mathbb C},\nabla)$ and
$\big(\widetilde H\to{\mathbb C},\widetilde\nabla\big)$:
\[
\def\arraystretch{1.5}
\begin{tabular}{c|c|c|c}
\hline
$($Sem$)$ & $($Bra$)$ &$($Reg$)$ &$($Log$)$
\\ \hline
$\widetilde\delta^{(0)}\neq 0$ & $\widetilde\delta^{(0)}=0$, $\widetilde\delta^{(1)}\neq 0$
&$\widetilde\delta^{(0)}=\widetilde \delta^{(1)}=0$, $\widetilde{\mathcal U}\neq 0$ & $\widetilde{\mathcal U}=0$
\\
\hline
\end{tabular}
\]
Especially, $\widetilde{\mathcal U}=0$ implies $\widetilde\delta^{(0)}=
\widetilde\delta^{(1)}=0$.
\end{Lemma}
\subsection{The case (Sem)}
A $(TE)$-structure over a point with a semisimple
endomorphism ${\mathcal U}$ with pairwise different eigenvalues
is formally isomorphic to a so-called {\it elementary model},
and its holomorphic isomorphism class is determined
by its Stokes structure. These two facts are well known.
A good reference is~\cite[Chapter~II, Sections~5 and~6]{Sa02}.
The older reference~\cite{Ma83a} considers only
the underlying meromorphic bundle, so
$\big({\mathcal O}(H)_0\otimes_{{\mathbb C}\{z\}}{\mathbb C}\{z\}\big[z^{-1}\big],\nabla\big)$.
In order to formulate the result for rank $2$ $(TE)$-structures
more precisely, we need some notation.
\begin{Definition}\label{t4.4}
Choose numbers $u_1,u_2,\alpha_1,\alpha_2\in{\mathbb C}$.
Consider the flat bundle $H'\to{\mathbb C}^*$ with flat connection
$\nabla$ and a basis $\underline{f}=(f_1,f_2)$
of global flat multivalued sections $f_1$ and $f_2$
with the monodromy
\begin{gather*}
\underline{f}\big(z\cdot {\rm e}^{2\pi {\rm i}}\big)= \underline{f}(z)
\begin{pmatrix}{\rm e}^{-2\pi {\rm i} \alpha_1} & 0 \\
0 & {\rm e}^{-2\pi {\rm i} \alpha_2} \end{pmatrix}\!.
\end{gather*}
The new basis $\underline{v}=(v_1,v_2)$ which is defined by
\begin{gather*}
\underline{v}(z)= \underline{f}(z)
\begin{pmatrix}{\rm e}^{u_1/z}z^{\alpha_1} & 0 \\
0 & {\rm e}^{u_2/z}z^{\alpha_2}\end{pmatrix}
\end{gather*}
(for some choice of $\log(z)$) is univalued. It~defines a $(TE)$-structure with
\begin{gather*}
z^2\nabla_{\partial_z}\underline{v}=\underline{v}\cdot B
\qquad\text{and}\qquad
B=\begin{pmatrix}-u_1+ z\alpha_1 & 0 \\
0 & -u_2+z\alpha_2\end{pmatrix}\!.
\end{gather*}
This $(TE)$-structure is called an {\it elementary model}.
The numbers $\alpha_1$ and $\alpha_2$ are called
the {\it regular singular exponents}.
The formal invariants
$\delta^{(0)},\rho^{(0)},\delta^{(1)},\rho^{(1)}\in{\mathbb C}$
of the $(TE)$-structure and the tuple
$(u_1,u_2,\alpha_1,\alpha_2)$ (up to joint exchange of the indices 1 and 2) are equivalent because of
\begin{gather}\label{4.6}
\delta^{(0)}-\big(\rho^{(0)}\big)^2= -\frac{1}{4}(u_1-u_2)^2,\qquad
\rho^{(0)}=-\frac{u_1+u_2}{2},
\\
\delta^{(1)}-2\rho^{(0)}\rho^{(1)}=\frac{u_1-u_2}{2}(\alpha_1-\alpha_2),\qquad
\rho^{(1)}=\frac{\alpha_1+\alpha_2}{2}.\label{4.7}
\end{gather}
Therefore also the tuple $(u_1,u_2,\alpha_1,\alpha_2)$
(up to joint exchange of the indices 1 and 2)
is a formal invariant of the $(TE)$-structure.
\end{Definition}
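The matrix $B$ can be checked directly: since $f_1$ is flat,
\begin{gather*}
z^2\nabla_{\partial_z}v_1=z^2\partial_z\big({\rm e}^{u_1/z}z^{\alpha_1}\big)\cdot f_1
=({-}u_1+z\alpha_1)\,{\rm e}^{u_1/z}z^{\alpha_1}f_1
=({-}u_1+z\alpha_1)\,v_1,
\end{gather*}
and analogously for $v_2$. The univaluedness of $v_1$ holds as the factor
${\rm e}^{2\pi {\rm i}\alpha_1}$, which $z^{\alpha_1}$ picks up along a closed
path around $0$, cancels against the monodromy factor
${\rm e}^{-2\pi {\rm i}\alpha_1}$ of $f_1$.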
\begin{Theorem}\label{t4.5}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(a)$] Any rank $2$ $(TE)$-structure over a point with
endomorphism ${\mathcal U}$ with two different
eigen\-va\-lues is formally isomorphic to a unique
elementary model in Definition~$\ref{t4.4}$.
Here $-u_1$ and~$-u_2$ are the eigenvalues of ${\mathcal U}$.
\item[$(b)$] The $(TE)$-structure in $(a)$ is up to holomorphic
isomorphism determined by the numbers~$u_1$, $u_2$, $\alpha_1$, $\alpha_2$ and two more numbers
$s_1,s_2\in{\mathbb C}$, the Stokes parameters. It~is holomorphically isomorphic to the elementary
model to which it is formally isomorphic if and only if~$s_1=s_2=0$.
\item[$(c)$] Any such tuple $(u_1,u_2,\alpha_1,\alpha_2,s_1,s_2)
\in \big({\mathbb C}^2\setminus \{(u_1,u_1)\,|\, u_1\in{\mathbb C}\}\big)\times{\mathbb C}^4$
determines such a~$(TE)$-structure.
A second tuple $\big(\widetilde u_1,\widetilde u_2,\widetilde\alpha_1,
\widetilde \alpha_2,\widetilde s_1,\widetilde s_2\big)
\neq (u_1,u_2,\alpha_1,\alpha_2,s_1,s_2)$
determines an isomorphic $(TE)$-structure if and only
if $\big(\widetilde u_1,\widetilde u_2,\widetilde\alpha_1,\widetilde \alpha_2,\widetilde s_1,\widetilde s_2\big)
=(u_2,u_1,\alpha_2,\alpha_1,s_2,s_1)$.
\end{enumerate}
\end{Theorem}
Part $(a)$ follows for example from
\cite[Chapter~II, Theorem~5.7]{Sa02} together with
\cite[Chapter~II, Remark 5.8]{Sa02} (Theorem~5.7 considers
only the underlying meromorphic bundle;
Remark 5.8 takes care of the holomorphic bundle).
For the parts $(b)$ and $(c)$, one needs to deal in detail
with the Stokes structure. We will not do it here,
as the semisimple case is not central in this paper.
We~refer to~\cite[Chapter~II, Sections~5 and~6]{Sa02}
or to~\cite{HS11}.
\begin{Remarks}
\quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Malgrange's unfolding result, Theorem~\ref{t3.16}$(c)$,
applies to these $(TE)$-structures.
Such a $(TE)$-structure has a unique universal unfolding.
The parameters $(\alpha_1,\alpha_2,s_1,s_2)$ are constant,
the parameters $(u_1,u_2)$ are local coordinates
on the base space. The base space is an $F$-manifold
of type $A_1^2$ with Euler field $E=u_1e_1+u_2e_2$.
See Remark~\ref{t5.3}$(iii)$.
\item[$(ii)$] We do not offer normal forms for the $(TE)$-structures
in Theorem~\ref{t4.5} for three reasons: (1)~As said in $(i)$,
the $(TE)$-structures above unfold uniquely to
$(TE)$-structures over germs of $F$-manifolds. In~that sense they are easy to deal with.
(2)~It looks difficult to write down normal forms.
(3)~Normal forms should be considered
together with the Stokes parameters, and the corresponding
Riemann--Hilbert map from the space of {\it monodromy data}
$(\alpha_1,\alpha_2,s_1,s_2)$ to a space of
parameters for normal forms should be studied.
This is a nontrivial project,
which does not fit into the main aims of this paper.
\end{enumerate}
\end{Remarks}
\subsection{Joint considerations on the cases (Bra), (Reg) and (Log)}
\begin{Notation}
We shall use the following matrices,
\begin{gather*}
C_1:={\bf 1}_2,\qquad
C_2:=\begin{pmatrix}0&0\\1&0\end{pmatrix}\!,\qquad
D:=\begin{pmatrix}1&0\\0&-1\end{pmatrix}\!,\qquad
E:=\begin{pmatrix}0&1\\0&0\end{pmatrix}\!,
\end{gather*}
and the relations between them,
\begin{gather*}
C_2^2=0,\qquad D^2=C_1,\qquad E^2=0,
\\[.5ex]
C_2D=C_2=-DC_2,\qquad [C_2,D]=2C_2,
\\
C_2E=\frac{1}{2}(C_1-D),\qquad
EC_2=\frac{1}{2}(C_1+D),\qquad
[C_2,E]=-D,
\\
DE=E=-ED,\qquad [D,E]=2E.
\end{gather*}
\end{Notation}
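These relations are immediate $2\times 2$ matrix computations; for example,
\begin{gather*}
C_2E=\begin{pmatrix}0&0\\1&0\end{pmatrix}\!
\begin{pmatrix}0&1\\0&0\end{pmatrix}
=\begin{pmatrix}0&0\\0&1\end{pmatrix}
=\frac{1}{2}(C_1-D),\qquad
EC_2=\begin{pmatrix}1&0\\0&0\end{pmatrix}
=\frac{1}{2}(C_1+D).
\end{gather*}
Observe also that $(C_1,C_2,D,E)$ is a ${\mathbb C}$-basis of
$M_{2\times 2}({\mathbb C})$; this is used in the shape~\eqref{4.13} below.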
Consider a $(TE)$-structure $(H\to{\mathbb C},\nabla)$
over a point with ${\mathcal U}$ of type (Bra), (Reg) or (Log).
Then ${\mathcal U}$ has only one eigenvalue, which is $\rho^{(0)}\in{\mathbb C}$.
We can and will restrict to ${\mathbb C}\{z\}$-bases $\underline{v}$
of~${\mathcal O}(H)_0$ such that the matrix
$B\in M_{2\times 2}({\mathbb C}\{z\})$ with
$z^2\nabla_{\partial_z}\underline{v}=\underline{v}\cdot B$ has the shape
\begin{gather}\label{4.13}
B= b_1C_1+b_2C_2+zb_3D+zb_4E\qquad\text{with}\quad
b_1,b_2,b_3,b_4\in{\mathbb C}\{z\}.
\end{gather}
Write, as in Remark~\ref{t3.2}, $B=\sum_{k\geq 0}B^{(k)}z^k$
with $B^{(k)}\in M_{2\times 2}({\mathbb C})$, and write
\begin{gather}\label{4.14}
b_j=\sum_{k\geq 0}b_j^{(k)}z^k\qquad\text{with}\quad
b_j^{(k)}\in{\mathbb C}\qquad\text{for}\quad j\in\{1,2\},
\\
zb_j=\sum_{k\geq 1}b_j^{(k)}z^{k}\qquad\text{with}\quad
b_j^{(k)}\in{\mathbb C}\qquad\text{for}\quad j\in\{3,4\}.\label{4.15}
\end{gather}
Then the formal invariants $\delta^{(0)}$, $\rho^{(0)}$,
$\delta^{(1)}$ and $\rho^{(1)}$ of Lemma~\ref{t3.9} are
given by
\begin{gather*}
\rho^{(0)}=b_1^{(0)},\qquad \rho^{(1)}=b_1^{(1)},
\\
\delta^{(0)}-\big(\rho^{(0)}\big)^2=0,\qquad
\delta^{(1)}-2\rho^{(0)}\rho^{(1)}= -b_2^{(0)}b_4^{(1)}.
\end{gather*}
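Indeed, with respect to such a basis
\begin{gather*}
B=\begin{pmatrix} b_1+zb_3 & zb_4 \\ b_2 & b_1-zb_3 \end{pmatrix},\qquad
\det B=b_1^2-z^2b_3^2-zb_2b_4,
\end{gather*}
and comparing the coefficients of $z^0$ and $z^1$ in $\tr B=2b_1$ and in
$\det B$ gives $\delta^{(0)}=\big(b_1^{(0)}\big)^2$ and
$\delta^{(1)}=2b_1^{(0)}b_1^{(1)}-b_2^{(0)}b_4^{(1)}$,
which is equivalent to the formulas above.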
We are in the case (Bra) if $b_2^{(0)}\neq 0$ and
$b_4^{(1)}\neq 0$, in the case (Reg) if $b_2^{(0)}\neq 0$
and $b_4^{(1)}=0$,
and~in the case (Log) if $b_2^{(0)}=0$.
Consider $T\in {\rm GL}_2({\mathbb C}\{z\})$ and the new basis
$\underline{\widetilde v}=\underline{v}\cdot T$ and its matrix
$\widetilde B =\sum_{k\geq 0}\widetilde B^{(k)}z^k$ with~$z\nabla_{z\partial_z}\underline{\widetilde v}=\underline{\widetilde v}\cdot\widetilde B$.
Write
\begin{gather*}
T=\tau_1C_1+\tau_2C_2+\tau_3D+\tau_4E\qquad\text{with}\quad
\tau_j=\sum_{k\geq 0}\tau_j^{(k)}z^k,\quad
\tau_j^{(k)}\in{\mathbb C}.
\end{gather*}
Then $\widetilde B$ is determined by~\eqref{3.13}, which is
\begin{align}
0&=z^2\partial_zT+B\cdot T-T\cdot\widetilde B\nonumber
\\
&= C_1\bigg(z^2\partial_z\tau_1+\big(b_1-\widetilde b_1\big)\tau_1
+z\big(b_4-\widetilde b_4\big)\frac{\tau_2}{2}+z\big(b_3-\widetilde b_3\big)\tau_3
+\big(b_2-\widetilde b_2\big)\frac{\tau_4}{2}\bigg)\nonumber
\\
&\phantom{=}+ C_2\big(z^2\partial_z \tau_2+\big(b_2-\widetilde b_2\big)\tau_1
+\big(b_1-\widetilde b_1\big)\tau_2 + z\big({-}b_3-\widetilde b_3\big)\tau_2
+ \big(b_2+\widetilde b_2\big)\tau_3\big)\nonumber
\\
&\phantom{=}+ D\bigg(z^2\partial_z\tau_3+z\big(b_3-\widetilde b_3\big)\tau_1
+z\big(b_4+\widetilde b_4\big)\frac{\tau_2}{2}+\big(b_1-\widetilde b_1\big)\tau_3
+ \big({-}b_2-\widetilde b_2\big)\frac{\tau_4}{2}\bigg)\nonumber
\\
&\phantom{=}+ E\big(z^2\partial_z\tau_4 +z\big(b_4-\widetilde b_4\big)\tau_1
+z\big({-}b_4-\widetilde b_4\big)\tau_3+\big(b_1-\widetilde b_1\big)\tau_4
+z\big(b_3+\widetilde b_3\big)\tau_4\big).\label{4.19}
\end{align}
We will use this quite often in order to construct
or compare normal forms. The following immediate corollary
of the proof of
Lemma~\ref{t3.11} provides a reduction of $b_1$.
\begin{Corollary}\label{t4.8}
The base change matrix $T={\rm e}^g\cdot C_1$ with $g$ as in
\eqref{3.20} leads to $\widetilde b_j$ with
\begin{gather*}
\widetilde b_1=b_1^{(0)}+zb_1^{(1)}=\rho^{(0)}+z \rho^{(1)},\qquad
\widetilde b_2=b_2,\qquad \widetilde b_3=b_3,\qquad \widetilde b_4=b_4.
\end{gather*}
\end{Corollary}
From now on we will work in this section only with bases
$\underline{v}$ with $b_1=\rho^{(0)}+z \rho^{(1)}$. This is justified
by Corollary~\ref{t4.8}.
Furthermore, we will consider from now on in this section
mainly $(TE)$-structures with trace free pole part
(Definition~\ref{t3.8}, $\rho^{(0)}=\frac{1}{2}\tr{\mathcal U}=0$).
See Lemmata~\ref{t3.10} and~\ref{t3.11} for the
relation to the general case.
The next lemma separates the cases (Bra) and (Reg).
\begin{Lemma}\label{t4.9}
Consider a $(TE)$-structure over a point with
${\mathcal U}$ of type $($Bra$)$ or type $($Reg$)$ and with trace free pole part
$($so ${\mathcal U}$ is nilpotent but not~$0)$.
The $(TE)$-structure is regular singular if and only
if it is of type $($Reg$)$. If it is of type $($Bra$)$,
then the pullback of
${\mathcal O}(H)_0\otimes_{{\mathbb C}\{z\}}{\mathbb C}\{z\}\big[z^{-1}\big]$
by the map ${\mathbb C}\to{\mathbb C}$, $x\mapsto x^4=z$,
is the space of germs at~$0$ of sections of a meromorphic bundle
on ${\mathbb C}$ with a meromorphic connection
with an order~$3$ pole at~$0$ with semisimple pole part with
eigenvalues $\kappa_1$ and $\kappa_2=-\kappa_1$ with
$-\frac{1}{4}\kappa_1^2=\delta^{(1)}\in{\mathbb C}^*$.
Thus $\kappa_1^2$ is a formal invariant of the
$(TE)$-structure of type $($Bra$)$.
\end{Lemma}
\begin{proof}
Consider a ${\mathbb C}\{z\}$-basis $\underline{v}$ of ${\mathcal O}(H)_0$
such that its matrix $B$ is as in~\eqref{4.13}
and such that $b_1=z\rho^{(1)}$. This is possible
by Corollary~\ref{t4.8} and the assumption $\rho^{(0)}=0$.
As ${\mathcal U}$ is nilpotent, but not 0, $b_2^{(0)}\neq 0$.
Now $\delta^{(1)}=-b_2^{(0)}b_4^{(1)}$, so
$\delta^{(1)}\neq 0\iff b_4^{(1)}\neq 0$.
Consider the case $b_4^{(1)}\neq 0$, and consider
the pullback of the $(TE)$-structure by the map
${\mathbb C}\to{\mathbb C}$, $x\mapsto x^4=z$.
Then $\frac{{\rm d} z}{z}=4\frac{{\rm d} x}{x}$ and
$z\partial_z=\frac{1}{4}x\partial_x$ and
\begin{gather}
\nabla_{x\partial_x}\underline{v}=
\underline{v}\cdot 4\sum_{k\geq 0}B^{(k)}x^{4k-4},\nonumber
\\
\nabla_{x\partial_x}\big(\underline{v}\cdot x^D\big)=
\big(\underline{v}\cdot x^D\big)4\nonumber
\\ \hphantom{\nabla_{x\partial_x}\big(\underline{v}\cdot x^D\big)=}
{}\times\!\bigg(x^{-2}\!\sum_{k\geq 0}\big(b_2^{(k)}C_2+b_4^{(k+1)}E\big)x^{4k}
\!+\rho^{(1)}C_1+\bigg(\frac{1}{4}
+\!\sum_{k\geq 0}b_3^{(k+1)}x^{4k}\!\bigg)D\bigg).\!\label{4.22}
\end{gather}
One sees a pole of order 3 with matrix
$4\big(b_2^{(0)}C_2+b_4^{(1)}E\big)$ of the pole part. It~is trace free and has the eigenvalues
$\kappa_1$ and $\kappa_2=-\kappa_1$ with
$\kappa_1^2=4b_2^{(0)}b_4^{(1)}\in{\mathbb C}^*$.
This shows the claims in the case $b_4^{(1)}\neq 0$.
Consider the case $b_4^{(1)}= 0$, and consider
the pullback of the $(TE)$-structure by the map
${\mathbb C}\to{\mathbb C}$, $x\mapsto x^2=z$.
Then $\frac{{\rm d} z}{z}=2\frac{{\rm d} x}{x}$ and
$z\partial_z=\frac{1}{2}x\partial_x$ and
\begin{gather}
\nabla_{x\partial_x}\underline{v}=
\underline{v}\cdot 2\sum_{k\geq 0}B^{(k)}x^{2k-2},\nonumber
\\
\nabla_{x\partial_x}\big(\underline{v}\cdot x^D\big)=\nonumber
\big(\underline{v}\cdot x^D\big)2
\\ \hphantom{\nabla_{x\partial_x}\big(\underline{v}\cdot x^D\big)=}
{}\times\bigg(\rho^{(1)}C_1 + \frac{1}{2}D
+\sum_{k\geq 0}
\big(b_2^{(k)}C_2+b_4^{(k+2)}E+b_3^{(k+1)}D\big)x^{2k}\bigg).\label{4.24}
\end{gather}
One sees a logarithmic pole. Therefore also the
sections $v_1$ and $v_2$ have moderate growth,
and~the $(TE)$-structure is regular singular.
\end{proof}
\begin{Remark}
The two transformations in~\eqref{4.22}
(for the case (Bra)) and~\eqref{4.24} (for the case (Reg))
are special cases of a systematic development of such
ramified gauge transformations in~\cite{BV83}
(a short description is given in~\cite[p.~17]{Va96}).
The basic idea goes back to the shearing transformations
of Fabry (see~\cite{Fa85} and \cite[p.~4]{Va96}).
\end{Remark}
\subsection{The case (Bra)}
The following theorem gives complete control on the
$(TE)$-structures over a point of the type (Bra).
Here $\Eig(M^{\rm mon})\subset{\mathbb C}$ is the set of eigenvalues
of the monodromy of such a $(TE)$-structure
(it has 1 or 2 elements).
\begin{Theorem}\label{t4.11}\quad
\begin{enumerate}\itemsep=0pt
\item[$(a)$] Consider a $(TE)$-structure over a point of the type $($Bra$)$.
The formal invariants $\rho^{(0)}$, $\rho^{(1)}$ and
$\delta^{(1)}\in{\mathbb C}$ from Lemma~$\ref{t3.9}$ and the
set $\Eig(M^{\rm mon})$
are invariants of its isomorphism class.
Together they form a complete set of invariants.
That means, the isomorphism class of the $(TE)$-structure
is determined by these invariants.
\item[$(b)$] Any such $(TE)$-structure has a ${\mathbb C}\{z\}$-basis
$\underline{v}$ of ${\mathcal O}(H)_0$ such that its matrix is
in Birkhoff normal form, and more precisely, the matrix
$B$ has the shape
\begin{gather}\label{4.25}
B= \big(\rho^{(0)}+z\rho^{(1)}\big)C_1 + b_2^{(0)}C_2 + zb_3^{(1)}D+ zb_4^{(1)}E,
\end{gather}
where $b_2^{(0)},b_4^{(1)}\in{\mathbb C}^*$ and $b_3^{(1)}\in{\mathbb C}$ satisfy
$-b_2^{(0)}b_4^{(1)}=\delta^{(1)}-2\rho^{(0)}\rho^{(1)}$ and
$\Eig(M^{\rm mon})=\big\{{\rm e}^{-2\pi {\rm i} (\rho^{(1)}\pm b_3^{(1)})}\big\}$.
\end{enumerate}
\end{Theorem}
\begin{Remarks}\label{t4.12}\quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Because of part $(a)$, two Birkhoff normal forms as in~\eqref{4.25}
with data $\big(\rho^{(0)},\rho^{(1)},b_2^{(0)},\allowbreak b_3^{(1)},b_4^{(1)}\big)$
and $\big(\widetilde\rho^{(0)},\widetilde\rho^{(1)},\widetilde b_2^{(0)},\widetilde b_3^{(1)}, \widetilde b_4^{(1)}\big)$
give isomorphic $(TE)$-structures
if and only if $\widetilde \rho^{(0)}=\rho^{(0)}$,
$\widetilde\rho^{(1)}=\rho^{(1)}$,
$\widetilde b_2^{(0)}\widetilde b_4^{(1)}=b_2^{(0)} b_4^{(1)}$
and $\widetilde b_3^{(1)}\in \big\{{\pm} b_3^{(1)}+k\,|\, k\in{\mathbb Z}\big\}$.
However, the pure $(TLE)$-structures which they
define, are isomorphic only if additionally
$\widetilde b_3^{(1)}\in\big\{{\pm} b_3^{(1)}\big\}$.
\item[$(ii)$] We could restrict to Birkhoff normal forms with
$b_2^{(0)}=1$ or with $b_4^{(1)}=1$. However, in view of the
$(TE)$-structures in the 4th case in Theorem~\ref{t6.3},
we prefer not to do that.
\end{enumerate}
\end{Remarks}
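To fix ideas, here is a minimal instance of~\eqref{4.25},
with the concrete values chosen only for illustration:
$\rho^{(0)}=\rho^{(1)}=b_3^{(1)}=0$ and $b_2^{(0)}=b_4^{(1)}=1$ give
\begin{gather*}
B= C_2+zE,\qquad
\delta^{(1)}-2\rho^{(0)}\rho^{(1)}=-b_2^{(0)}b_4^{(1)}=-1,\qquad
\Eig(M^{\rm mon})=\{1\},
\end{gather*}
and by Remark~\ref{t4.12}$(i)$ precisely the data with
$\widetilde\rho^{(0)}=\widetilde\rho^{(1)}=0$,
$\widetilde b_2^{(0)}\widetilde b_4^{(1)}=1$ and
$\widetilde b_3^{(1)}\in{\mathbb Z}$ give isomorphic $(TE)$-structures.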
\begin{proof}[Proof of Theorem~\ref{t4.11}] The proof has 3 steps.
\medskip\noindent
{\it Step 1:} Birkhoff normal forms exist. In~order to show this, it is sufficient to prove
the hypothesis in Theorem~\ref{t3.20}$(b)$.
The hypothesis says that the germ of the meromorphic
bundle underlying a $(TE)$-structure of type (Bra)
is irreducible.
A proof by calculation is given in the proof of~Lemma~28
in~\cite{DH20-3}.
However, this is also well known, as the rank is two
and the slope is $\frac{1}{2}$ (see Remark~\ref{t4.2}).
\medskip\noindent
{\it Step 2:} Analysis of the Birkhoff normal forms.
The matrix $B$ of a Birkhoff normal form can be chosen
with $b_1=\rho^{(0)}+z\rho^{(1)}$ because of Corollary~\ref{t4.8}. Then it has the shape
\begin{gather*}
B= \big(\rho^{(0)}+z\rho^{(1)}\big)C_1+\big(b_2^{(0)}+zb_2^{(1)}\big)C_2
+zb_3^{(1)}D+zb_4^{(1)}E
\end{gather*}
with $b_2^{(0)}\neq 0$ and $b_4^{(1)}\neq 0$.
Consider the new basis $\underline{\widetilde v}=\underline{v}\cdot T$
and its matrix $\widetilde B$, where
\begin{gather}\label{4.29}
T=C_1+\tau_2^{(0)}C_2 \qquad\text{for some}\quad \tau_2^{(0)}\in{\mathbb C}.
\end{gather}
Equation~\eqref{4.19} gives
\begin{gather*}
0= \big(b_1-\widetilde b_1\big)+z\big(b_4^{(1)}-\widetilde b_4\big)\frac{\tau_2^{(0)}}{2},
\qquad
0= \big(b_2-\widetilde b_2\big)+\big(b_1-\widetilde b_1\big)\tau_2^{(0)}+z\big({-}b_3^{(1)}-\widetilde b_3\big)\tau_2^{(0)},
\\
0= \big(b_3^{(1)}-\widetilde b_3\big)+\big(b_4^{(1)}+\widetilde b_4\big)\frac{\tau_2^{(0)}}{2},
\qquad
0= \big(b_4^{(1)}-\widetilde b_4\big),
\end{gather*}
so
\begin{gather}
\widetilde b_4= \widetilde b_4^{(1)}=b_4^{(1)},\qquad
\widetilde b_1=b_1,\qquad
\widetilde b_3= \widetilde b_3^{(1)}=b_3^{(1)}+b_4^{(1)}\tau_2^{(0)},
\nonumber
\\
\widetilde b_2^{(0)}= b_2^{(0)},\qquad
\widetilde b_2^{(1)}= b_2^{(1)}-2b_3^{(1)}\tau_2^{(0)}-b_4^{(1)}\big(\tau_2^{(0)}\big)^2.
\label{4.30}
\end{gather}
$\tau_2^{(0)}$ can be chosen such that $\widetilde b_2^{(1)}=0$.
Then the Birkhoff normal form $\widetilde B$ has the shape in~\eqref{4.25}.
Suppose now that $B$ has this shape, so $b_2=b_2^{(0)}$.
The choice $\tau_2^{(0)}:=-2b_3^{(1)}/b_4^{(1)}$ in~\eqref{4.29} leads to
\begin{gather}\label{4.31}
\widetilde b_1=b_1,\qquad
\widetilde b_2=b_2,\qquad
\widetilde b_4=b_4\qquad\text{and}\qquad
\widetilde b_3=-b_3.
\end{gather}
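(This follows from~\eqref{4.30} with $b_2^{(1)}=0$:
$\widetilde b_3^{(1)}=b_3^{(1)}+b_4^{(1)}\tau_2^{(0)}=-b_3^{(1)}$ and
$\widetilde b_2^{(1)}=-2b_3^{(1)}\tau_2^{(0)}-b_4^{(1)}\big(\tau_2^{(0)}\big)^2
=\frac{4\big(b_3^{(1)}\big)^2}{b_4^{(1)}}-\frac{4\big(b_3^{(1)}\big)^2}{b_4^{(1)}}=0$,
so the shape in~\eqref{4.25} is preserved.)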
Consider the new basis $\underline{\widetilde v}=\underline{v}\cdot T$
and its matrix $\widetilde B$, where
\begin{gather*}
T=C_1+\tau_3^{(0)}D \qquad\text{for some}\quad \tau_3^{(0)}\in{\mathbb C}\setminus\{\pm 1\}.
\end{gather*}
Equation~\eqref{4.19} gives
\begin{gather*}
0= \big(b_1-\widetilde b_1\big)+z\big(b_3^{(1)}-\widetilde b_3\big)\tau_3^{(0)},
\\
0= \big(b_2^{(0)}-\widetilde b_2\big)+\big(b_2^{(0)}+\widetilde b_2\big)\tau_3^{(0)},
\\
0= z\big(b_3^{(1)}-\widetilde b_3\big)+\big(b_1-\widetilde b_1\big)\tau_3^{(0)},
\\
0= \big(b_4^{(1)}-\widetilde b_4\big) + \big({-}b_4^{(1)}-\widetilde b_4\big)\tau_3^{(0)},
\end{gather*}
so
\begin{gather}\label{4.33}
\widetilde b_1= b_1,\qquad
\widetilde b_3=b_3^{(1)},\qquad
\widetilde b_2= b_2^{(0)}\frac{1+\tau_3^{(0)}}{1-\tau_3^{(0)}},\qquad
\widetilde b_4= b_4^{(1)}\frac{1-\tau_3^{(0)}}{1+\tau_3^{(0)}}.
\end{gather}
So, in a Birkhoff normal form in~\eqref{4.25},
one can change $b_2^{(0)}$ and $b_4^{(1)}$ arbitrarily
with constant product $b_2^{(0)}b_4^{(1)}$ and without
changing $b_1=\rho^{(0)}+z\rho^{(1)}$ and $b_3^{(1)}$.
Consider the new basis $\underline{\widetilde v}=\underline{v}\cdot T$
and its matrix $\widetilde B$, where
\begin{gather*}
T=\big(1+z\tau_1^{(1)}\big)C_1+\tau_2^{(0)}C_2+z\tau_3^{(1)}D
+z\tau_4^{(1)}E\qquad \text{for some}\quad
\tau_1^{(1)},\tau_2^{(0)},
\tau_3^{(1)},\tau_4^{(1)}\in{\mathbb C}.
\end{gather*}
We are searching for coefficients
$\tau_1^{(1)},\tau_2^{(0)},\tau_3^{(1)},\tau_4^{(1)}\in{\mathbb C}$
such that
\begin{gather}\label{4.35}
\widetilde b_1=b_1,\qquad
\widetilde b_2=b_2,\qquad
\widetilde b_4=b_4,\qquad
\widetilde b_3=b_3+\varepsilon\qquad \text{with}\quad \varepsilon=\pm 1.
\end{gather}
Under these constraints,
\eqref{4.19} gives
\begin{gather*}
0= \tau_1^{(1)}-\varepsilon \tau_3^{(1)},
\\
0= \big({-}2b_3^{(1)}-\varepsilon\big)\tau_2^{(0)}+ 2b_2^{(0)}\tau_3^{(1)},
\\
0= z\tau_3^{(1)}-\varepsilon\big(1+z\tau_1^{(1)}\big)+b_4^{(1)}\tau_2^{(0)} -b_2^{(0)}\tau_4^{(1)},
\\
0=\tau_4^{(1)} -2b_4^{(1)}\tau_3^{(1)}+\big(2b_3^{(1)}+\varepsilon\big)\tau_4^{(1)}.
\end{gather*}
With $\tau_1^{(1)}=\varepsilon\tau_3^{(1)}$, these
equations boil down to the inhomogeneous linear system
of equations
\begin{gather}\label{4.36}
\begin{pmatrix}0\\ \varepsilon\\ 0 \end{pmatrix} =
\begin{pmatrix} -2b_3^{(1)}-\varepsilon & 2b_2^{(0)} & 0 \\
b_4^{(1)} & 0 & -b_2^{(0)} \\
0 & -2b_4^{(1)} & 2b_3^{(1)}+\varepsilon+1 \end{pmatrix}
\begin{pmatrix} \tau_2^{(0)}\\ \tau_3^{(1)}\\ \tau_4^{(1)}
\end{pmatrix}\!.
\end{gather}
The determinant of the $3\times 3$ matrix is
$-2 b_2^{(0)}b_4^{(1)}\neq 0$. Therefore the system~\eqref{4.36}
has a unique solution $\big(\tau_2^{(0)},\tau_3^{(1)},
\tau_4^{(1)}\big)^t$.
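Indeed, expanding the determinant along the first column gives
\begin{gather*}
\big({-}2b_3^{(1)}-\varepsilon\big)\big({-}2b_2^{(0)}b_4^{(1)}\big)
-b_4^{(1)}\cdot 2b_2^{(0)}\big(2b_3^{(1)}+\varepsilon+1\big)
=-2b_2^{(0)}b_4^{(1)}.
\end{gather*}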
Thus a new basis $\underline{\widetilde v}=\underline{v}\cdot T$
with~\eqref{4.35} exists.
Iterating this construction, one finds that one can
change the matrix $B$ in~\eqref{4.25} by a holomorphic
base change to a matrix $\widetilde B$ with
\begin{gather}\label{4.37}
\widetilde b_1=b_1,\qquad
\widetilde b_2=b_2,\qquad
\widetilde b_4=b_4,\qquad
\widetilde b_3=b_3+k,
\end{gather}
for any $k\in{\mathbb Z}$.
Putting together~\eqref{4.30},~\eqref{4.31},~\eqref{4.33}
and~\eqref{4.37}, one sees that two Birkhoff normal forms
as in~\eqref{4.25}
with data $\big(\rho^{(0)},\rho^{(1)},b_2^{(0)},b_3^{(1)},b_4^{(1)}\big)$
and $\big(\widetilde \rho^{(0)},\widetilde \rho^{(1)},\widetilde b_2^{(0)},\widetilde b_3^{(1)},
\widetilde b_4^{(1)}\big)$ give isomorphic $(TE)$-structures
if $\widetilde \rho^{(0)}=\rho^{(0)}$, $\widetilde\rho^{(1)}=\rho^{(1)}$,
$\widetilde b_2^{(0)}\widetilde b_4^{(1)}=b_2^{(0)} b_4^{(1)}$
and $\widetilde b_3^{(1)}\in \big\{{\pm}\, b_3^{(1)}+k\,|\, k\in{\mathbb Z}\big\}$.
This shows the {\it if} direction in Remark~\ref{t4.12}$(i)$.
\medskip\noindent
{\it Step 3:} Discussion of the invariants.
By Lemma~\ref{t3.9}, $\rho^{(0)}$, $\rho^{(1)}$ and $\delta^{(1)}$
are even formal inva\-ri\-ants of the $(TE)$-structure.
The set $\Eig(M^{\rm mon})$ is obviously an invariant of the
isomorphism class of the $(TE)$-structure.
The Birkhoff normal form in~\eqref{4.25} gives a pure
$(TLE)$-structure with a logarithmic pole at~$\infty$.
From its pole part at~$\infty$
and Theorem~\ref{t3.23}$(c)$ one reads off
\begin{gather*}
\Eig(M^{\rm mon})=\big\{{\rm e}^{-2\pi {\rm i}(\rho^{(1)}\pm b_3^{(1)})}\big\}.
\end{gather*}
As $\rho^{(1)}$ is an invariant of the $(TE)$-structure,
also the set $\big\{{\pm}\, b_3^{(1)}+k\,|\, k\in{\mathbb Z}\big\}$
is an invariant of the $(TE)$-structure.
\looseness=1
Together with Step 2, this shows the {\it only if} direction in
Remark~\ref{t4.12}$(i)$ and all statements in Theo\-rem~\ref{t4.11}.
\end{proof}
\begin{Corollary}
The monodromy of a $(TE)$-structure over a point
of the type $($Bra$)$ has a $2\times 2$ Jordan block if its
eigenvalues coincide $\big($equivalently, if
$b_3^{(1)}\in\frac{1}{2}{\mathbb Z}$ for some $($or any$)$
Birkhoff normal form in Theorem~$\ref{t4.11}(b)\big)$.
\end{Corollary}
\begin{proof}
Consider a $(TE)$-structure over a point
of the type (Bra) such that the eigenvalues of its monodromy
coincide. Then for any Birkhoff normal form in
Theorem~\ref{t4.11}$(b)$ $b_3^{(1)}\in\frac{1}{2}{\mathbb Z}$,
and one can choose a Birkhoff normal form with
$b_3^{(1)}\in\big\{0,-\frac{1}{2}\big\}$.
The induced pure $(TLE)$-structure
has at $\infty$ a logarithmic pole, and its residue
endomorphism $[\nabla_{\widetilde z\partial_{\widetilde z}}]$, where
$\widetilde z=z^{-1}$, is given by the
matrix $-\big(\rho^{(1)}C_1+b_3^{(1)}D+b_4^{(1)}E\big)$.
In the case $b_3^{(1)}=0$, the nonresonance condition
in Theorem~\ref{t3.23}$(c)$ is satisfied,
so Theorem~\ref{t3.23}$(c)$ can be applied.
Because of $b_4^{(1)}\neq 0$,
the monodromy has a $2\times 2$ Jordan block.
In the case $b_3^{(1)}=-\frac{1}{2}$, the meromorphic
base change
\begin{gather*}
\underline{\widetilde v}:=\underline{v}\cdot\begin{pmatrix}z&0\\0&1
\end{pmatrix}
\end{gather*}
gives the new connection matrix
\begin{gather*}
\widetilde B=\bigg(\rho^{(0)}+z\bigg(\rho^{(1)}+\frac{1}{2}\bigg)\bigg)C_1+zb_2^{(0)}C_2+b_4^{(1)}E.
\end{gather*}
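(Here $\widetilde B=T^{-1}BT+z^2T^{-1}\partial_zT$ for
$T=\left(\begin{smallmatrix}z&0\\0&1\end{smallmatrix}\right)$,
with $T^{-1}C_2T=zC_2$, $T^{-1}ET=z^{-1}E$ and
$z^2T^{-1}\partial_zT=\frac{z}{2}(C_1+D)$; the last term
cancels the $D$-part of $T^{-1}BT$ precisely because
$b_3^{(1)}=-\frac{1}{2}$.)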
Again, the pole at $\infty$ is logarithmic.
Now the nonresonance condition in Theorem~\ref{t3.23}$(c)$
is satisfied. Because of $b_2^{(0)}\neq 0$,
the monodromy has a $2\times 2$ Jordan block.
\end{proof}
For $(TE)$-structures of the type (Bra), formal isomorphism
is coarser than holomorphic isomorphism.
\begin{Lemma}\label{t4.14}
Consider a $(TE)$-structure over a point of the type $($Bra$)$.
By Lemma~$\ref{t3.9}$, the numbers $\rho^{(0)}$, $\rho^{(1)}$
and $\delta^{(1)}$ are formal invariants of the
$(TE)$-structure.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] The set $\Eig(M^{\rm mon})$ and the equivalent set
$\big\{{\pm}\, b_3^{(1)}+k\,|\, k\in{\mathbb Z}\big\}$ are holomorphic invariants,
but not formal invariants.
\item[$(b)$]
The $(TE)$-structure with Birkhoff normal form in~\eqref{4.25}
is formally isomorphic to the $(TE)$-structure
with Birkhoff normal form in~$\eqref{4.25}$
with the same values $\rho^{(0)}$, $\rho^{(1)}$, $b_2^{(0)}$ and $b_4^{(1)}$,
but with an arbitrary $\widetilde b_3^{(1)}$.
\end{enumerate}
\end{Lemma}
\begin{proof}
Part $(a)$ follows from part $(b)$.
For the proof of part (b), we have to find
$T\in {\rm GL}_2({\mathbb C}[[z]])$ such that $T$, $B$ in~\eqref{4.25}
and
\begin{gather*}
\widetilde B=\big(\rho^{(0)}+z\rho^{(1)}\big)C_1+b_2^{(0)}C_2+z\widetilde b_3^{(1)}D+zb_4^{(1)}E
\end{gather*}
with $\widetilde b_3^{(1)}\in{\mathbb C}$ arbitrary satisfy~\eqref{4.19}.
Here~\eqref{4.19} says
\begin{gather*}
0= z\partial_z\tau_1 + \big(b_3^{(1)}-\widetilde b_3^{(1)}\big)\tau_3,
\\
0= z^2\partial_z\tau_2 + z\big({-}b_3^{(1)}-\widetilde b_3^{(1)}\big)\tau_2+2b_2^{(0)}\tau_3,
\\
0= z^2\partial_z\tau_3 + z\big(b_3^{(1)}-\widetilde b_3^{(1)}\big)\tau_1+ zb_4^{(1)}\tau_2-b_2^{(0)}\tau_4,
\\
0= z\partial_z\tau_4 -2b_4^{(1)}\tau_3+ \big(b_3^{(1)}+\widetilde b_3^{(1)}\big)\tau_4.
\end{gather*}
This is equivalent to
\begin{gather*}
0=\tau_3^{(0)}=\tau_4^{(0)},
\\
0= k\tau_1^{(k)} + \big(b_3^{(1)}-\widetilde b_3^{(1)}\big)\tau_3^{(k)}\qquad
\text{for}\quad k\geq 1,
\\
0= \big(k-1-b_3^{(1)}-\widetilde b_3^{(1)}\big)\tau_2^{(k-1)}+2b_2^{(0)}\tau_3^{(k)}\qquad
\text{for}\quad k\geq 1,
\\
0= (k-1)\tau_3^{(k-1)}+ \big(b_3^{(1)}-\widetilde b_3^{(1)}\big)\tau_1^{(k-1)}
+ b_4^{(1)}\tau_2^{(k-1)}-b_2^{(0)}\tau_4^{(k)}\qquad
\text{for}\quad k\geq 1,
\\
0= -2b_4^{(1)}\tau_3^{(k)}+ \big(k+b_3^{(1)}+\widetilde b_3^{(1)}\big)\tau_4^{(k)}\qquad
\text{for}\quad k\geq 1.
\end{gather*}
This is equivalent to
\begin{gather}
\tau_3^{(0)}=\tau_4^{(0)}=0,\nonumber
\\
\tau_1^{(k)}= \frac{-1}{k}\big(b_3^{(1)}-\widetilde b_3^{(1)}\big)\tau_3^{(k)}\qquad
\text{for}\quad k\geq 1,\nonumber
\\
2b_2^{(0)}\tau_3^{(k)}= \big(b_3^{(1)}+\widetilde b_3^{(1)}+1-k\big)\tau_2^{(k-1)}\qquad
\text{for}\quad k\geq 1,\nonumber
\\
b_2^{(0)}\tau_4^{(1)}=b_4^{(1)}\tau_2^{(0)}+\big(b_3^{(1)}-\widetilde b_3^{(1)}\big)\tau_1^{(0)},\nonumber
\\
b_2^{(0)}\tau_4^{(k)}=b_4^{(1)}\tau_2^{(k-1)}
+\bigg(k-1+\frac{-1}{k-1}\big(b_3^{(1)}-\widetilde b_3^{(1)}\big)^2\bigg)
\big(2b_2^{(0)}\big)^{-1}\big(b_3^{(1)}+\widetilde b_3^{(1)}+2-k\big)\tau_2^{(k-2)}\nonumber
\\
\qquad\text{for}\quad k\geq 2,\nonumber
\\
0= b_4^{(1)}\tau_2^{(0)}+\big(1+b_3^{(1)}+\widetilde b_3^{(1)}\big)
\big(b_3^{(1)}-\widetilde b_3^{(1)}\big)\tau_1^{(0)},\nonumber
\\
0= b_4^{(1)}(2k+1)\tau_2^{(k)}+
\big(k+1+b_3^{(1)}+\widetilde b_3^{(1)}\big)\bigg(k+\frac{-1}{k}\big(b_3^{(1)}-\widetilde b_3^{(1)}\big)^2\bigg)\big(2b_2^{(0)}\big)^{-1}
\nonumber
\\ \hphantom{0=}
{}\times\big(b_3^{(1)}+\widetilde b_3^{(1)}+1-k\big)\tau_2^{(k-1)}
\qquad\text{for}\quad k\geq 1.
\label{4.41}
\end{gather}
One can choose $\tau_1^{(0)}\in{\mathbb C}^*$ freely.
Then the equations~\eqref{4.41} have unique
solutions $\tau_1-\tau_1^{(0)},\tau_2$, $\tau_3,\tau_4\in{\mathbb C}[[z]]$.
Therefore $T\in {\rm GL}_2({\mathbb C}[[z]])$ exists such that $T$,
$B$ as in~\eqref{4.25} and $\widetilde B$ as above satisfy~\eqref{4.19}. This shows part $(b)$.
\end{proof}
\begin{Remarks}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Because of Lemma~\ref{t4.14}, the set $\Eig(M^{\rm mon})$
takes here the role of the Stokes structure: It~distinguishes the
holomorphic isomorphism classes within one formal
isomorphism class.
\item[$(ii)$] The following $(TE)$-structures
have trivial Stokes structure; the proof of Lemma~\ref{t4.9}
leads to them. They are the $(TE)$-structures
with $\Eig(M^{\rm mon})=\{\lambda_1,\lambda_2\}$
where $\lambda_2=-\lambda_1$,
equivalently with $b_3^{(1)}\in\big({\pm}\,\frac{1}{4}+{\mathbb Z}\big)$
for any Birkhoff normal form in~\eqref{4.25}.
Choose a number $\alpha^{(1)}\in{\mathbb C}$.
Consider the rank $2$ bundle $H'\to{\mathbb C}^*$ with flat
connection $\nabla$ and flat multivalued basis
$\underline{f}=(f_1,f_2)$ with monodromy given by
\begin{gather*}
\underline{f}(z{\rm e}^{2\pi {\rm i}})=\underline{f}(z)\cdot {\rm i}{\rm e}^{-2\pi {\rm i}\alpha^{(1)}}
\cdot (C_2+E).
\end{gather*}
The eigenvalues are $\pm {\rm i}{\rm e}^{-2\pi {\rm i}\alpha^{(1)}}$, as $C_2+E$ has the eigenvalues $\pm 1$.
Choose numbers $t_1\in{\mathbb C}$ and $t_2\in{\mathbb C}^*$.
The following basis of $H'$ is univalued:
\begin{eqnarray}\label{4.43}
\underline{v}:= \underline{f}\cdot {\rm e}^{t_1/z}z^{\alpha^{(1)}}
\begin{pmatrix}z^{-1/4}{\rm e}^{t_2z^{-1/2}}
& z^{1/4}{\rm e}^{t_2z^{-1/2}}\\
z^{-1/4}{\rm e}^{-t_2z^{-1/2}}
&-z^{1/4}{\rm e}^{-t_2z^{-1/2}}
\end{pmatrix}\!.
\end{eqnarray}
The matrix $B$ with
$z^2\nabla_{\partial_z}\underline{v}=\underline{v}\cdot B$ is
\begin{gather}\label{4.44}
B= \big({-}t_1+z\alpha^{(1)}\big)C_1-\frac{t_2}{2} C_2
-z\frac{1}{4}D-z\frac{t_2}{2}E.
\end{gather}
Comparing with~\eqref{4.25} and Theorem~\ref{t4.11}$(b)$, here
$\rho^{(1)}=\alpha^{(1)}$, $\rho^{(0)}=-t_1$ and
$\delta^{(1)}-2\rho^{(0)}\rho^{(1)}=-b_2^{(0)}b_4^{(1)}=-\frac{1}{4}t_2^2$.
\item[$(iii)$] Part $(ii)$ generalizes to a $(TE)$-structure over
$M={\mathbb C}^2$ with coordinates $t=(t_1,t_2)$.
Consider the rank $2$ bundle $H'\to{\mathbb C}^*\times M$ with
flat connection and flat multivalued basis $\underline{f}=(f_1,f_2)$
with monodromy given by
\begin{gather*}
\underline{f}\big(z{\rm e}^{2\pi {\rm i}},t\big)=\underline{f}(z,t)\cdot {\rm i}{\rm e}^{-2\pi {\rm i}\alpha^{(1)}}
\cdot (C_2+E).
\end{gather*}
The basis $\underline{v}$ in~\eqref{4.43} is univalued.
The matrices $A_1$, $A_2$ and $B$ in its connection 1-form
$\Omega$ as in~\eqref{3.4}--\eqref{3.6} are given
by~\eqref{4.44} and
\begin{gather*}
A_1=C_1,\qquad
A_2=C_2+zE.
\end{gather*}
The restriction to a point $t\in {\mathbb C}\times{\mathbb C}^*$ is a
$(TE)$-structure of type (Bra) with trivial Stokes
structure.
The restriction to a point $t\in{\mathbb C}\times\{0\}$
is a $(TE)$-structure of type (Log).
\end{enumerate}
\end{Remarks}
\subsection[The case (Reg) with $\tr{\mathcal U}=0$]
{The case (Reg) with $\boldsymbol{\tr{\mathcal U}=0}$}\label{c4.5}
The $(TE)$-structures over a point of the type (Reg) with
$\tr{\mathcal U}=0$ are the regular singular $(TE)$-structures
over a point which are not logarithmic.
They can be easily classified using elementary sections.
Theorem~\ref{t4.17} splits them into three cases
(one in part $(a)$, two in part $(b)$: $\alpha_1=\alpha_2$
and $\alpha_1-\alpha_2\in{\mathbb N}$).
\begin{Notation}\label{t4.16}
Start with a $(TE)$-structure $(H\to{\mathbb C},\nabla)$ of rank $2$
over a point. Recall the notions from Definition~\ref{t3.21}:
$H':=H|_{{\mathbb C}^*}$, $M^{\rm mon}$, $M^{\rm mon}_s$, $M^{\rm mon}_u$, $N^{\rm mon}$,
$\Eig(M^{\rm mon})$, $H^\infty$, $H^\infty_{\lambda}$, $C^\alpha$
for $\alpha\in{\mathbb C}$ with ${\rm e}^{-2\pi {\rm i}\alpha}\in \Eig(M^{\rm mon})$,
$s(A,\alpha)\in C^\alpha$ for
$A\in H^{\infty}_{{\rm e}^{-2\pi {\rm i}\alpha}}$,
$\operatorname{es}(\sigma,\alpha)\in C^\alpha$ for~$\sigma$
a~holomorphic section on $H|_{U_1\setminus\{0\}}$ for
$U_1\subset{\mathbb C}$ a neighborhood of 0.
Now the eigenvalues of $M^{\rm mon}$ are called $\lambda_1$
and $\lambda_2$ ($\lambda_1=\lambda_2$ is allowed).
The sheaf ${\mathcal V}^{>-\infty}$ simplifies here to a~${\mathbb C}\{z\}\big[z^{-1}\big]$-vector space of dimension $2$,
\begin{gather*}
V^{>-\infty}:= \begin{cases}
{\mathbb C}\{z\}\big[z^{-1}\big]\cdot C^{\alpha_1}
\oplus {\mathbb C}\{z\}\big[z^{-1}\big]\cdot C^{\alpha_2}&
\text{if}\quad\lambda_1\neq\lambda_2,
\\[1ex]
{\mathbb C}\{z\}\big[z^{-1}\big]\cdot C^{\alpha_1}&
\text{if}\quad\lambda_1=\lambda_2,\end{cases}
\end{gather*}
where $\alpha_1,\alpha_2\in{\mathbb C}$ with
${\rm e}^{-2\pi {\rm i}\alpha_j}=\lambda_j$.
$V^{>-\infty}$ is the space of sections of moderate growth.
\end{Notation}
\begin{Theorem}\label{t4.17}
Consider a regular singular, but not logarithmic,
rank $2$ $(TE)$-structure $(H\to{\mathbb C},\nabla)$ over a point.
Associate to it the data in the Notation~$\ref{t4.16}$.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] The case $N^{\rm mon}=0$:
There exist unique numbers $\alpha_1$, $\alpha_2$
with ${\rm e}^{-2\pi {\rm i}\alpha_j}=\lambda_j$
and $\alpha_1\neq\alpha_2$ and the following properties:
There exist elementary sections $s_1\in C^{\alpha_1}\setminus\{0\}$
and $s_2\in C^{\alpha_2}\setminus\{0\}$ and a number $t_2\in{\mathbb C}^*$
such that
\begin{align}
\label{4.48}
{\mathcal O}(H)_0&={\mathbb C}\{z\}(s_1+t_2s_2)\oplus {\mathbb C}\{z\}(zs_2)
\\
&= {\mathbb C}\{z\}\big(s_2+t_2^{-1}s_1\big)\oplus {\mathbb C}\{z\}(zs_1).
\label{4.49}
\end{align}
The isomorphism class of the $(TE)$-structure is
uniquely determined by the information $N^{\rm mon}=0$
and the set $\{\alpha_1,\alpha_2\}$.
The numbers $\alpha_1$ and $\alpha_2$ are called
leading exponents.
\item[$(b)$] The case $N^{\rm mon}\neq 0$ $($thus $\lambda_1=\lambda_2)$:
There exist unique numbers $\alpha_1$, $\alpha_2$
with ${\rm e}^{-2\pi {\rm i}\alpha_j}=\lambda_1$ and
$\alpha_1-\alpha_2\in{\mathbb N}_0$
and the following properties:
Choose any elementary section
$s_1\in C^{\alpha_1}\setminus \ker(z\nabla_{\partial_z}-\alpha_1\colon
C^{\alpha_1}\to C^{\alpha_1})$.
The elementary section $s_2\in C^{\alpha_2}$ with
\begin{gather}\label{4.50}
(z\nabla_{\partial_z}-\alpha_1)(s_1)=z^{\alpha_1-\alpha_2}s_2
\end{gather}
is a generator of
$\ker(z\nabla_{\partial_z}-\alpha_2\colon C^{\alpha_2}\to C^{\alpha_2})$.
Then
\begin{gather}\label{4.51}
{\mathcal O}(H)_0={\mathbb C}\{z\}(s_1+t_2s_2)\oplus {\mathbb C}\{z\}(zs_2)
\end{gather}
for some $t_2\in{\mathbb C}$. If $\alpha_1>\alpha_2$ then
$t_2$ is in ${\mathbb C}^*$ and is independent of the choice of
$s_1$. If~\mbox{$\alpha_1=\alpha_2$}, then one can replace
$s_1$ by $s_1^{\rm[new]}:=s_1+t_2s_2$, and then
$t_2^{\rm[new]}=0$.
The~iso\-morphism class of the $(TE)$-structure is
uniquely determined by the information $N^{\rm mon}\neq 0$
and the pair $(\alpha_1,\alpha_2)$ if $\alpha_1=\alpha_2$
and the triple $(\alpha_1,\alpha_2,t_2)$ if $\alpha_1>\alpha_2$.
The numbers $\alpha_1$ and $\alpha_2$ are called
leading exponents.
\end{enumerate}
\end{Theorem}
\begin{proof} First, $(a)$ and $(b)$ are considered together.
Let $\beta_1,\beta_2\in{\mathbb C}$ be the unique numbers
with ${\rm e}^{-2\pi {\rm i}\beta_j}=\lambda_j$ and
$-1<\Ree(\beta_j)\leq 0$. Choose elementary sections
$\widetilde s_1\in C^{\beta_1}$ and
$\widetilde s_2\in C^{\beta_2}$ which
form a global basis of $H'$. In~the case $N^{\rm mon}\neq 0$ (then $\beta_1=\beta_2$)
choose them such that
$\widetilde s_1\notin \ker\big(z\nabla_{\partial_z}-\beta_1\colon C^{\beta_1}\to
C^{\beta_1}\big)$ and~$\widetilde s_2\in\ker\big(z\nabla_{\partial_z}-\beta_2\colon
C^{\beta_2}\to C^{\beta_2}\big)$.
Let $\sigma_1^{[1]},\sigma_2^{[1]}\in{\mathcal O}(H)_0$
be a ${\mathbb C}\{z\}$-basis of ${\mathcal O}(H)_0$. Write
\begin{gather*}
\big(\sigma_1^{[1]},\sigma_2^{[1]}\big)=\big(\widetilde s_1,\widetilde s_2\big)
\begin{pmatrix}
b_{11}& b_{12}\\b_{21}&b_{22}\end{pmatrix}
\qquad\text{with}\quad
b_{ij}\in{\mathbb C}\{z\}\big[z^{-1}\big].
\end{gather*}
Recall that the degree $\deg_z g$ of a Laurent series
$g=\sum_{j\in{\mathbb Z}}g^{(j)}z^j\in{\mathbb C}\{z\}\big[z^{-1}\big]$ is the
minimal $j$ with $g^{(j)}\neq 0$ if $g\neq 0$,
and $\deg_z0:=+\infty$.
In the case $N^{\rm mon}=0$ and $\lambda_1=\lambda_2$
(then $\beta_1=\beta_2$),
we suppose $\min(\deg_z b_{11},\deg_z b_{12})
\leq \min(\deg_z b_{21},\deg_z b_{22})$.
If it does not hold a priori, we can exchange
$\widetilde s_1$ and $\widetilde s_2$.
In any case, we suppose $\deg_z b_{11}\leq \deg_z b_{12}$.
If it does not hold a priori, we can exchange $\sigma_1^{[1]}$
and $\sigma_2^{[1]}$.
Again in the case $N^{\rm mon}=0$ and $\lambda_1=\lambda_2$,
we suppose $\deg_zb_{11}<\deg_zb_{21}$. If it does not hold
a priori, we can replace $\widetilde s_2$ by a certain linear
combination of $\widetilde s_2$ and $\widetilde s_1$.
Now $\widetilde b_{11}:=z^{-\deg_z b_{11}}b_{11}\in{\mathbb C}\{z\}^*$ is
a unit. Consider $\alpha_1:=\beta_1+\deg_z b_{11}$
and $s_1:=z^{\deg_z b_{11}}\widetilde s_1\allowbreak\in C^{\alpha_1}$
and the new basis $\big(\sigma_1^{[2]},\sigma_2^{[2]}\big)$
of ${\mathcal O}(H)_0$ with
\begin{gather*}
\big(\sigma_1^{[2]},\sigma_2^{[2]}\big)
:=\big(\sigma_1^{[1]},\sigma_2^{[1]}\big)
\begin{pmatrix} \widetilde b_{11}^{-1} &-b_{11}^{-1}b_{12} \\
0 & 1 \end{pmatrix}
= \big(s_1,\widetilde s_2\big)
\begin{pmatrix} 1 & 0 \\ \widetilde b_{11}^{-1} b_{21} &
b_{22}-b_{11}^{-1}b_{12}b_{21}\end{pmatrix}\!.
\end{gather*}
Consider $m:=\deg_z\big(b_{22}-b_{11}^{-1}b_{12}b_{21}\big)\in{\mathbb Z}$
($+\infty$ is impossible) and
$\alpha_2:=\beta_2+(m-1)$ and
$s_2:=z^{m-1}\widetilde s_2\in C^{\alpha_2}$.
Write $z^{-m+1}\widetilde b_{11}^{-1}b_{21}=c_1+c_2$
with $c_1\in{\mathbb C}\big[z^{-1}\big]$ and $c_2\in z{\mathbb C}\{z\}$.
We can replace $\sigma_2^{[2]}$ by
$\sigma_2^{[3]}:=zs_2$
and $\sigma_1^{[2]}=s_1+(c_1+c_2)s_2$ by
$\sigma_1^{[3]}=s_1+c_1s_2$.
$(a)$ Consider the case $N^{\rm mon}=0$.
If $\lambda_1=\lambda_2$ then $\deg_z b_{21}\geq \deg_z b_{11}+1$
and thus
\begin{align}
(c_1+c_2)s_2&=\widetilde{b}_{11}^{-1}b_{21}\widetilde{s}_2\in
{\mathbb C}\{z\}\cdot z^{\deg_zb_{21}}\cdot C^{\beta_2}\nonumber
\\
&{}\subset {\mathbb C}\{z\}\cdot z^{\deg_zb_{11}+1}\cdot C^{\beta_2}
= {\mathbb C}\{z\}\cdot C^{\alpha_1+1}.\label{4.54}
\end{align}
In any case (whether $\lambda_1=\lambda_2$ or
$\lambda_1\neq \lambda_2$), we must have $c_1\neq 0$,
since otherwise the $(TE)$-structure would be logarithmic.
As the pole has precisely order 2, $c_1$ is a constant
$\neq 0$ (if $\lambda_1=\lambda_2$, here we need~\eqref{4.54}),
which is now called $t_2$. This implies $m-1=\deg_z b_{21}$. In~the case $N^{\rm mon}=0$ and $\lambda_1=\lambda_2$
we have $\beta_1=\beta_2$ and
\begin{gather*}
\alpha_2-\alpha_1=m-1-\deg_z b_{11}
=\deg_z b_{21}-\deg_z b_{11}> 0
\end{gather*}
so especially $\alpha_2\neq \alpha_1$.
$(b)$ Consider the case $N^{\rm mon}\neq 0$.
Then $s_2$ is a generator of
$\ker(z\nabla_{\partial_z}-\alpha_2\colon C^{\alpha_2}\to C^{\alpha_2})$,
and we can rescale it such that~\eqref{4.50} holds.
First consider the case $c_1=0$.
As the pole has precisely order 2, we must have
$\alpha_2=\alpha_1$. Then~\eqref{4.51} holds with $t_2=0$.
Now consider the case $c_1\neq 0$. Then
$\underline{\sigma}^{[3]}=\big(\sigma_1^{[3]},\sigma_2^{[3]}\big)$ satisfies
\begin{gather*}
z\nabla_{\partial_z}\underline{\sigma}^{[3]}=\underline{\sigma}^{[3]}
\begin{pmatrix}\alpha_1 & 0 \\
z^{-1}(z\partial_z-\alpha_1+\alpha_2)(c_1)
+z^{\alpha_1-\alpha_2-1}& \alpha_2+1
\end{pmatrix}\!.
\end{gather*}
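(Indeed, \eqref{4.50} and $\nabla_{z\partial_z}s_2=\alpha_2s_2$ give
$\nabla_{z\partial_z}\sigma_1^{[3]}
=\alpha_1\sigma_1^{[3]}
+\big(z^{-1}(z\partial_z-\alpha_1+\alpha_2)(c_1)
+z^{\alpha_1-\alpha_2-1}\big)\cdot zs_2$.)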
First case, $\alpha_1-\alpha_2\in{\mathbb Z}_{<0}$:
The coefficient of $z^{\alpha_1-\alpha_2-1}$
in $z^{-1}(z\partial_z-\alpha_1+\alpha_2)(c_1)
+z^{\alpha_1-\alpha_2-1}$ is~$1$.
Therefore the pole order is $>2$, a contradiction.
Second case, $\alpha_1\geq \alpha_2$:
As the pole has precisely order 2, $c_1$ is a constant $\neq 0$,
which is now called $t_2$. Then~\eqref{4.51} holds,
and $t_2\in{\mathbb C}^*$. In~the case $\alpha_1-\alpha_2\in{\mathbb N}$,
$t_2$ is obviously independent of the choice of $s_1$.
\end{proof}
Corollary~\ref{t4.18} is an immediate consequence of
Theorem~\ref{t4.17}.
\begin{Corollary}\label{t4.18}
The set of regular singular, but not logarithmic,
rank~$2$ $(TE)$-structures over a point is in bijection
with the set
\begin{align*}
\{(0,\{\alpha_1,\alpha_2\})\,|\, \alpha_1,\alpha_2\in{\mathbb C},\alpha_1\neq\alpha_2\}
&\cup\{(1,\alpha_1,\alpha_2)\,|\, \alpha_1=\alpha_2\in{\mathbb C}\}
\\
&\cup\{(1,\alpha_1,\alpha_2,t_2)\,|\, \alpha_1,\alpha_2\in{\mathbb C},
\alpha_1-\alpha_2\in {\mathbb N},t_2\in{\mathbb C}^*\}.
\end{align*}
The first set parametrizes the cases with $N^{\rm mon}=0$,
the second and third set parametrize the cases with
$N^{\rm mon}\neq 0$. Theorem~$\ref{t4.17}$ describes the
corresponding $(TE)$-structures.
\end{Corollary}
\begin{Remark}\label{t4.19}
The connection matrices for the special bases in
Theorem~\ref{t4.17} can be written down easily.
The basis in~\eqref{4.48}:
\begin{gather}\label{4.57}
\nabla_{z\partial_z}(s_1+t_2s_2,zs_2)=
(s_1+t_2s_2,zs_2)\begin{pmatrix}\alpha_1 & 0 \\
z^{-1}(\alpha_2-\alpha_1)t_2 & \alpha_2+1\end{pmatrix}\!.
\end{gather}
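Indeed, since $N^{\rm mon}=0$, here $\nabla_{z\partial_z}s_j=\alpha_js_j$, so
\begin{gather*}
\nabla_{z\partial_z}(s_1+t_2s_2)
=\alpha_1(s_1+t_2s_2)+z^{-1}(\alpha_2-\alpha_1)t_2\cdot zs_2,
\qquad
\nabla_{z\partial_z}(zs_2)=(\alpha_2+1)zs_2.
\end{gather*}
Formula~\eqref{4.58} follows in the same way, and~\eqref{4.59}
additionally uses~\eqref{4.50}.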
The basis in~\eqref{4.49} with $\widetilde t_2:=t_2^{-1}$:
\begin{gather}\label{4.58}
\nabla_{z\partial_z}\big(s_2+\widetilde t_2s_1,zs_1\big)=
\big(s_2+\widetilde t_2s_1,zs_1\big)\begin{pmatrix}\alpha_2 & 0 \\
z^{-1}(\alpha_1-\alpha_2)\widetilde t_2 & \alpha_1+1\end{pmatrix}\!.
\end{gather}
The basis in~\eqref{4.51} with~\eqref{4.50}:
\begin{gather}\label{4.59}
\nabla_{z\partial_z}(s_1+t_2s_2,zs_2)=
(s_1+t_2s_2,zs_2)
\begin{pmatrix}\alpha_1 & 0 \\
z^{-1}(\alpha_2-\alpha_1)t_2 +z^{\alpha_1-\alpha_2-1}
& \alpha_2+1\end{pmatrix}\!.
\end{gather}
Finally, in the case $N^{\rm mon}\neq 0$ and $t_2\in{\mathbb C}^*$,
we also consider the basis
$\big(s_2+\widetilde t_2 s_1,zs_1\big)$ with $\widetilde t_2:=t_2^{-1}$. Again~\eqref{4.50} is assumed:
\begin{gather}
\nabla_{z\partial_z}\big(s_2+\widetilde t_2s_1,zs_1\big)=\big(s_2+\widetilde t_2s_1,zs_1\big)\nonumber
\\ \hphantom{\nabla_{z\partial_z}(s_2+\widetilde t_2s_1,zs_1)=}
{}\times\begin{pmatrix}\alpha_2
+z^{\alpha_1-\alpha_2}\widetilde t_2 &
z^{\alpha_1-\alpha_2+1} \\[.5ex]
z^{-1}(\alpha_1-\alpha_2)\widetilde t_2 -z^{\alpha_1-\alpha_2-1}
\widetilde t_2^2 & \alpha_1+1-z^{\alpha_1-\alpha_2}\widetilde t_2
\end{pmatrix}\!.
\label{4.60}
\end{gather}
\end{Remark}
\subsection[The case (Log) with $\tr{\mathcal U}=0$]{The case (Log) with $\boldsymbol{\tr{\mathcal U}=0}$}\label{c4.6}
The $(TE)$-structures over a point of the type (Log) with
$\tr{\mathcal U}=0$ are the logarithmic $(TE)$-structures
over a point. Just as the regular singular $(TE)$-structures,
they can easily be classified using
elementary sections. Theorem~\ref{t4.20}
splits them into two cases.
We use again the Notation~\ref{t4.16}.
\begin{Theorem}\label{t4.20}
Consider a logarithmic rank $2$ $(TE)$-structure $(H\to{\mathbb C},\nabla)$
over a point. Asso\-ci\-ate to it the data in the Notation~$\ref{t4.16}$.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] The case $N^{\rm mon}=0$: There exist unique numbers
$\alpha_1$, $\alpha_2$ with ${\rm e}^{-2\pi {\rm i}\alpha_j}=\lambda_j$
and the following property:
There exist elementary sections $s_1\in C^{\alpha_1}\setminus\{0\}$
and $s_2\in C^{\alpha_2}\setminus\{0\}$ such that
\begin{gather}\label{4.61}
{\mathcal O}(H)_0 = {\mathbb C}\{z\}\, s_1\oplus {\mathbb C}\{z\}\, s_2.
\end{gather}
The isomorphism class of the $(TE)$-structure is uniquely
determined by the information $N^{\rm mon}=0$ and
the set $\{\alpha_1,\alpha_2\}$.
The numbers $\alpha_1$ and $\alpha_2$ are called
leading exponents.
\item[$(b)$] The case $N^{\rm mon}\neq 0$ $($thus $\lambda_1=\lambda_2)$:
There exist unique numbers $\alpha_1$, $\alpha_2$
with ${\rm e}^{-2\pi {\rm i}\alpha_j}=\lambda_1$ and
$\alpha_1-\alpha_2\in{\mathbb N}_0$ and
the following properties:
Choose any elementary section
$s_1\in C^{\alpha_1}-\ker(\nabla_{z\partial_z}-\alpha_1\colon
C^{\alpha_1}\to C^{\alpha_1})$.
The elementary section $s_2\in C^{\alpha_2}$ with
\begin{gather}\label{4.62}
(z\nabla_{\partial_z}-\alpha_1)(s_1)=z^{\alpha_1-\alpha_2}s_2
\end{gather}
is a generator of
$\ker(z\nabla_{\partial_z}-\alpha_2\colon C^{\alpha_2}\to C^{\alpha_2})$.
Then
\begin{gather}\label{4.63}
{\mathcal O}(H)_0={\mathbb C}\{z\}\, s_1\oplus {\mathbb C}\{z\}\, s_2.
\end{gather}
The isomorphism class of the $(TE)$-structure is
uniquely determined by the information $N^{\rm mon}\neq 0$
and the set $\{\alpha_1,\alpha_2\}$.
The numbers $\alpha_1$ and $\alpha_2$ are called
leading exponents.
\end{enumerate}
\end{Theorem}
\begin{proof}
First, $(a)$ and $(b)$ are considered together.
By Theorem~\ref{t3.23}$(a)$, ${\mathcal O}(H)_0$ is
generated by two elementary sections
$s_1\in C^{\alpha_1}$ and $s_2\in C^{\alpha_2}$
for some numbers $\alpha_1$ and $\alpha_2$.
The numbers~$\alpha_1$ and~$\alpha_2$ are the
eigenvalues of the residue endomorphism.
So, they are unique.
This finishes already the proof of part $(a)$.
$(b)$ Consider the case $N^{\rm mon}\neq 0$.
We can renumber $s_1$ and $s_2$ if necessary, so that
afterwards $\alpha_1-\alpha_2\in{\mathbb N}_0$.
If $\alpha_1=\alpha_2$, then ${\mathcal O}(H)_0={\mathbb C}\{z\}C^{\alpha_1}$,
and $s_1$ and $s_2$ can be changed so that
$s_1\in C^{\alpha_1}\setminus \ker(\nabla_{z\partial_z}-\alpha_1)$
and $s_2\in \ker(\nabla_{z\partial_z}-\alpha_1\colon
C^{\alpha_1}\to C^{\alpha_1})\setminus\{0\}$ satisfy~\eqref{4.62}.
Then nothing more has to be shown.
Consider the case $\alpha_1-\alpha_2\in{\mathbb N}$.
If $s_2\in C^{\alpha_2}\setminus \ker(\nabla_{z\partial_z}-\alpha_2)$,
then $(\nabla_{z\partial_z}-\alpha_2)(s_2)$ is not in
${\mathcal O}(H)_0$, and thus the pole is not logarithmic,
a contradiction. Therefore
$s_2\in \ker(\nabla_{z\partial_z}-\alpha_2\colon
C^{\alpha_2}\to C^{\alpha_2})$. Then necessarily
$s_1\in C^{\alpha_1}\setminus \ker(\nabla_{z\partial_z}-\alpha_1\colon
C^{\alpha_1} \to C^{\alpha_1})$.
We~can rescale $s_2$ so that~\eqref{4.62} holds.
Nothing more has to be shown.
\end{proof}
Corollary~\ref{t4.21} is an immediate consequence of
Theorem~\ref{t4.20}.
\begin{Corollary}\label{t4.21}
The set of logarithmic rank $2$ $(TE)$-structures over a point
is in bijection with the set
\begin{gather*}
\{(0,\{\alpha_1,\alpha_2\})\,|\, \alpha_1,\alpha_2\in{\mathbb C}\}
\cup\{(1,\alpha_1,\alpha_2)\,|\, \alpha_1,\alpha_2\in{\mathbb C},\alpha_1-\alpha_2\in {\mathbb N}_0\}.
\end{gather*}
The first set parametrizes the cases with $N^{\rm mon}=0$,
the second set parametrizes the cases with $N^{\rm mon}\neq 0$.
Theorem~$\ref{t4.20}$ describes the corresponding
$(TE)$-structures.
\end{Corollary}
\begin{Remark}\label{t4.22}
The connection matrices for the special bases in
Theorem~\ref{t4.20} can be written down easily.
The basis in~\eqref{4.61}:
\begin{gather*}
\nabla_{z\partial_z}(s_1,s_2)=(s_1,s_2)
\begin{pmatrix}\alpha_1 & 0 \\ 0 & \alpha_2\end{pmatrix}\!.
\end{gather*}
The basis in~\eqref{4.63}:
\begin{gather*}
\nabla_{z\partial_z}(s_1,s_2)=(s_1,s_2)
\begin{pmatrix}\alpha_1 & 0 \\ z^{\alpha_1-\alpha_2} & \alpha_2\end{pmatrix}\!.
\end{gather*}
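(Indeed, here $\nabla_{z\partial_z}s_2=\alpha_2s_2$ in both cases,
$\nabla_{z\partial_z}s_1=\alpha_1s_1$ for the basis in~\eqref{4.61},
and $\nabla_{z\partial_z}s_1=\alpha_1s_1+z^{\alpha_1-\alpha_2}s_2$
for the basis in~\eqref{4.63} by~\eqref{4.62}.)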
The basis $(s_1,s_2)$ gives a Birkhoff normal form
in the cases $N^{\rm mon}=0$ and in the cases
$(N^{\rm mon}\neq 0$ and $\alpha_1=\alpha_2)$. In~the cases $(N^{\rm mon}\neq 0$ and $\alpha_1-\alpha_2\in{\mathbb N})$,
a Birkhoff normal form does not exist.
\end{Remark}
\section[Rank 2 $(TE)$-structures over germs of regular $F$-manifolds]{Rank 2 $\boldsymbol{(TE)}$-structures over germs of regular $\boldsymbol{F}$-manifolds}\label{c5}
This section discusses unfoldings of
$(TE)$-structures over a point $t^0$
of type (Sem) or (Bra) or~(Reg).
Here Malgrange's unfolding result
Theorem~\ref{t3.16}$(c)$ applies. It~provides a universal unfolding for the $(TE)$-structure
over $t^0$. Any unfolding is induced by the universal
unfolding.
The universal unfoldings turn out to be precisely
the $(TE)$-structures with primitive Higgs fields
over germs of regular $F$-manifolds.
Sections~\ref{c6} and~\ref{c8} discuss
unfoldings of $(TE)$-structures over a point of type (Log).
Section~\ref{c8} treats arbitrary such unfoldings.
Section~\ref{c6} prepares this. It~treats 1-parameter unfoldings with trace free pole parts
of logarithmic $(TE)$-structures over a point.
If one starts with a $(TE)$-structure with primitive Higgs field
over a germ $\big(M,t^0\big)$ of a regular $F$-manifold, then
the endomorphism ${\mathcal U}|_{t^0}\colon K_{t^0}\to K_{t^0}$ is regular.
Vice versa, if one starts with a $(TE)$-structure
over a point $t^0$ with a regular endomorphism ${\mathcal U}\colon
K_{t^0}\to K_{t^0}$,
then it unfolds uniquely to a $(TE)$-structure
with primitive Higgs field over a~germ of a regular $F$-manifold
by Malgrange's result Theorem~\ref{t3.16}$(c)$.
The germ of the regular $F$-manifold is uniquely determined by
the isomorphism class of ${\mathcal U}\colon K_{t^0}\to K_{t^0}$
(i.e., its Jordan block structure).
And the $(TE)$-structure is uniquely determined by its
restriction to $t^0$.
The following statement on the rank $2$ cases
is an immediate consequence of Malgrange's unfolding result
Theorem~\ref{t3.16}$(c)$, the classification of germs
of regular 2-dimensional $F$-manifolds in Remark~\ref{t2.6}$(ii)$
(building on Theorems~\ref{t2.2} and~\ref{t2.3},
see also Remark~\ref{t3.17}$(iii)$) and the classification
of the rank $2$ $(TE)$-structures into the cases
(Sem), (Bra), (Reg) and (Log) in Definition~\ref{t4.4}.
\begin{Corollary}\label{t5.1} \qquad
\begin{enumerate}\itemsep=0pt
\item[$(a)$] For any rank $2$ $(TE)$-structure over a point $t^0$
except those of type $($Log$)$, the endomorphism
${\mathcal U}\colon K_{t^0}\to K_{t^0}$ is regular.
The $(TE)$-structure has a unique universal unfolding.
This unfolding has a primitive Higgs field.
Its base space is a germ $\big(M,t^0\big)=\big({\mathbb C}^2,0\big)$
of an $F$-manifold with Euler field and is as follows:
\[
\def\arraystretch{1.5}
\begin{tabular}{c|c|c}
\hline
Type & $F$-manifold & Euler field
\\
\hline
$($Sem$)$ & $A_1^2$ & $\sum_{i=1}^2(u_i+c_i)e_i$ with $c_1\neq c_2$
\\
$($Bra$)$ or $($Reg$)$ & ${\mathcal N}_2$ & $t_1\partial_1+g(t_2)\partial_2$ with $g(0)\neq 0$
\\
\hline
\end{tabular}
\]
In the case of $($Bra$)$ or $($Reg$)$, a coordinate change brings
$E$ to the form $t_1\partial_1+\partial_2$.
\item[$(b)$] Any unfolding of a rank $2$ $(TE)$-structure over $t^0$
with regular endomorphism ${\mathcal U}\colon K_{t^0}\to K_{t^0}$
is induced by the universal unfolding in $(a)$.
\end{enumerate}
\end{Corollary}
Because of the existence and uniqueness of the universal
unfolding, it is not really necessary
to give it explicitly. On the other hand, in rank $2$,
it is easy to give it explicitly.
The following lemma offers one way.
\begin{Lemma}\label{t5.2}
Let $(H\to{\mathbb C},\nabla)$ be a $(TE)$-structure of some rank $r\in{\mathbb N}$
over a point with monodromy $M^{\rm mon}$. It~has an unfolding which is a $(TE)$-structure
$\big(H^{\rm (unf)}\to {\mathbb C}\times M,\nabla\big)$, where
$M={\mathbb C}\times{\mathbb C}^*$ with coordinates $t=(t_1,t_2)$
$($on ${\mathbb C}^2\supset M)$, with the following properties.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] The monodromy around $t_2=0$ is $(M^{\rm mon})^{-1}$.
\item[$(b)$] The original $(TE)$-structure is isomorphic to the
one over $t^0=(0,1)$.
\item[$(c)$] If $\underline{v}^0$ is a ${\mathbb C}\{z\}$-basis of ${\mathcal O}(H)_0$
with $z^2\nabla_{\partial_z}\underline{v}^0=\underline{v}^0\, B^0$, then
$H^{\rm (unf)}$ has over $({\mathbb C},0)\times M$ a~basis
$\underline{v}$ such that the matrices $A_1$, $A_2$ and $B$
in~\eqref{3.4}--\eqref{3.6} are as follows
\begin{gather}
A_1= C_1,\label{5.2}
\\
A_2= -B^0\bigg(\frac{z}{t_2}\bigg),\label{5.3}
\\
B = -t_1C_1+t_2B^0\bigg(\frac{z}{t_2}\bigg)=-t_1A_1-t_2 A_2.\label{5.4}
\end{gather}
\item[$(d)$] If ${\mathcal U}|_{t^0}$ is regular and $\rank H=2$, then
the Higgs field of the $(TE)$-structure $H^{\rm (unf)}$
is everywhere primitive. Therefore then $M$ is an $F$-manifold
with Euler field. The Euler field is $E=t_1\partial_1+t_2\partial_2$.
\item[$(e)$] If ${\mathcal U}|_{t^0}$ is regular and $\rank H=2$,
the $(TE)$-structure over the germ $\big(M,t^0\big)$
is the universal unfolding of the one over $t^0$.
\end{enumerate}
\end{Lemma}
\begin{proof}
Let $\underline{f}^0=\big(f_1^0,\dots ,f_r^0\big)$ be a flat multivalued basis
of $H':=H|_{{\mathbb C}^*}$. Let $M^{\rm mat}\in {\rm GL}_r({\mathbb C})$
be the matrix of its monodromy, so
$\underline{f}^0\big(z {\rm e}^{2\pi {\rm i}}\big)=\underline{f}^0\cdot M^{\rm mat}$.
Let $\underline{v}^0=\big(v_1^0,\dots ,v_r^0\big)$ be a ${\mathbb C}\{z\}$-basis
of~${\mathcal O}(H)_0$. Let $B^0\in {\rm GL}_r({\mathbb C}\{z\})$ be the matrix
with $z^2\nabla_{\partial_z}\underline{v}^0=\underline{v}^0\, B^0$.
Consider the mat\-rix~$\Psi(z)$ with multivalued entries with
\begin{gather*}
\underline{v}^0=\underline{f}^0\cdot \Psi(z).
\end{gather*}
Then
\begin{gather*}
\Psi\big(z{\rm e}^{2\pi {\rm i}}\big)= \big(M^{\rm mat}\big)^{-1}\cdot\Psi(z),
\\
\Psi^{-1}\partial_z \Psi = z^{-2}B^0(z).
\end{gather*}
Embed the flat bundle $H':=H|_{{\mathbb C}^*}$ as the bundle
over $t^0=(0,1)$ into a flat bundle
${H^{(mf)}}'\to{\mathbb C}^*\times M$ with monodromy $M^{\rm mon}$
around $z=0$ and monodromy $(M^{\rm mon})^{-1}$ around
$t_2=0$. The~flat multivalued basis $\underline{f}^0$ of $H'$ extends
to a flat multivalued basis $\underline{f}$ of ${H^{(mf)}}'$ with
\begin{gather*}
\underline{f}\big(z{\rm e}^{2\pi {\rm i}},t\big)=\underline{f}(z,t)M^{\rm mat},
\\
\underline{f}\big(z,t_1,t_2{\rm e}^{2\pi {\rm i}}\big)=\underline{f}(z,t)\big(M^{\rm mat}\big)^{-1}.
\label{5.9}
\end{gather*}
The tuple of sections $\underline{v}$ with
\begin{gather*}
\underline{v}= \underline{f}\cdot {\rm e}^{t_1/z}\Psi\bigg(\frac{z}{t_2}\bigg)
\end{gather*}
is univalued; it is a basis of ${H^{(mf)}}'$ in a neighborhood
of $\{0\}\times M$, and it has the connection matrices
in~\eqref{5.2}--\eqref{5.4}: The calculations for $A_2$
and $B$ are
\begin{align*}
\nabla_{\partial_2}\underline{v}&= \underline{f}\, {\rm e}^{t_1/z}\,
\bigg({-}\frac{z}{t_2^2}\bigg)(\partial_z\Psi)\bigg(\frac{z}{t_2}\bigg) = \underline{f}\, {\rm e}^{t_1/z}\, \bigg({-}\frac{z}{t_2^2}\bigg)
\Psi\bigg(\frac{z}{t_2}\bigg)\bigg(\frac{z}{t_2}\bigg)^{-2}B^0\bigg(\frac{z}{t_2}\bigg)\\
&=\underline{v}\, \bigg({-}\frac{1}{z}\bigg)B^0\bigg(\frac{z}{t_2}\bigg),
\\
\nabla_{\partial_z}\underline{v}
&= \underline{f}\, {\rm e}^{t_1/z}\,
\bigg(\bigg({-}\frac{t_1}{z^2}\bigg)\Psi\bigg(\frac{z}{t_2}\bigg)+ \bigg(\frac1{t_2}\bigg)(\partial_z\Psi)\bigg(\frac{z}{t_2}\bigg)\bigg)\\
&= \underline{f}\, {\rm e}^{t_1/z}\, \bigg(\bigg({-}\frac{t_1}{z^2}\bigg)\Psi\bigg(\frac{z}{t_2}\bigg)+ \bigg(\frac1{t_2}\bigg)
\Psi\bigg(\frac{z}{t_2}\bigg)\bigg(\frac{z}{t_2}\bigg)^{-2}B^0\bigg(\frac{z}{t_2}\bigg)\bigg)
\\
&=\underline{v}\, \bigg(\bigg({-}\frac{t_1}{z^2}\bigg)C_1+\bigg(\frac{t_2}{z^2}\bigg) B^0\bigg(\frac{z}{t_2}\bigg)\bigg).
\end{align*}
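(The calculation for $A_1$ is immediate: $\partial_1$ only hits the
factor ${\rm e}^{t_1/z}$, so $z\nabla_{\partial_1}\underline{v}=\underline{v}\cdot C_1$.)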
Therefore $\underline{v}$ defines a $(TE)$-structure,
which we call $(H^{\rm (unf)}\to{\mathbb C}\times M,\nabla)$. It~unfolds the one over $t^0=(0,1)$,
and that one is isomorphic to $(H\to{\mathbb C},\nabla)$.
It remains to show $(d)$ and $(e)$.
Suppose $\rank H=2$.
Then ${\mathcal U}|_{t^0}$ is regular if and only if
$\big(B^0\big)^{(0)}\notin {\mathbb C}\cdot C_1$.
Then also $A_2^{(0)}(t)=-\big(B^0\big)^{(0)}\notin{\mathbb C}\cdot C_1$,
so then the Higgs field of the $(TE)$-structure $H^{\rm (unf)}$
is everywhere primitive.
Because of $B^{(0)}=-t_1A_1^{(0)}-t_2A_2^{(0)}$,
the Euler field is $E=t_1\partial_1+t_2\partial_2$.
$(e)$ follows from $(d)$ and Malgrange's result
Theorem~\ref{t3.16}$(c)$.
\end{proof}
\begin{Remarks}\label{t5.3}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] In the cases (Reg) we will see the universal unfoldings
again in Section~\ref{c7}, in Remarks~\ref{t7.2}. In~a first step in Remarks~\ref{t7.1}, the value $t_2$
in the normal form in Remarks~\ref{t4.19} is
turned into a parameter in $\P^1$.
Remarks~\ref{t7.2} add another parameter~$t_1$ in~${\mathbb C}$.
Then the Higgs field becomes primitive and
the base space ${\mathbb C}\times\P^1$
becomes a~2-dimensional $F$-manifold with Euler field.
For each $t^0\in{\mathbb C}\times{\mathbb C}^*$, the $(TE)$-structure over~$t^0$
is of type (Reg), and the $(TE)$-structure over the
germ $\big(M,t^0\big)$ is a universal unfolding of the one over~$t^0$.
\item[$(ii)$] In the cases (Bra), the following formulas give
a universal unfolding over $\big({\mathbb C}^2,0\big)$ of any $(TE)$-structure
of type (Bra) over the point 0 (see Theorem~\ref{t4.11} for
their classification), such that the Euler field
is $E=(t_1+c_1)\partial_1+\partial_2$. Here $c_1,\rho^{(1)},b_3^{(1)}\in{\mathbb C}$
and $b_2^{(0)},b_4^{(1)}\in{\mathbb C}^*$,
\begin{gather*}
A_1= C_1,
\\
A_2= -b_2^{(0)}C_2 -z\bigg(\frac{1}{2}+b_3^{(1)}\bigg)D
- z b_4^{(1)}{\rm e}^{t_2}E,
\\
B= (-t_1-c_1)C_1+b_2^{(0)}C_2 + z\big(\rho^{(1)}C_1+b_3^{(1)}D+
b_4^{(1)}{\rm e}^{t_2}E\big)
\\ \hphantom{B}
{}= (-t_1-c_1)A_1 - A_2 + z\rho^{(1)}C_1-z\frac{1}{2}D.
\end{gather*}
\item[$(iii)$] In the cases (Sem), a $(TE)$-structure over a point
extends uniquely to a $(TE)$-structure over the
universal covering $M$ of the manifold
$\big\{(u_1,u_2)\in{\mathbb C}^2\,|\, u_1\neq u_2\big\}$ (see~\cite{Ma83b} and~\cite[Chapter~III, Theorem~2.10]{Sa02}).
For each $t^0\in M$ the $(TE)$-structure over $t^0$
is of type (Sem), and the $(TE)$-structure over the germ
$\big(M,t^0\big)$ is the universal unfolding of the $(TE)$-structure
over~$t^0$.
\end{enumerate}
\end{Remarks}
\section[1-parameter unfoldings of logarithmic $(TE)$-structures over a point]
{1-parameter unfoldings of logarithmic $\boldsymbol{(TE)}$-structures \\over a point}\label{c6}
This section classifies unfoldings over $\big(M,t^0\big)=({\mathbb C},0)$
with trace free pole part
of logarithmic $(TE)$-structures over the point~$t^0$.
It is a preparation for Section~\ref{c8},
which treats arbitrary unfoldings of $(TE)$-structures
of type (Log) over a point.
Section~\ref{c6.1}: An unfolding with trace free pole part
over $\big(M,t^0\big)=({\mathbb C},0)$ of a logarithmic rank $2$ $(TE)$-structure
over $t^0$ will be considered. Invariants of it will
be defined. Theorem~\ref{t6.2} gives constraints on these
invariants and shows that the monodromy is
semisimple if the generic type is (Sem) or (Bra).
By Theorem~\ref{t3.20}$(a)$ (which is trivial in our case
because of the logarithmic pole at $z=0$ of the $(TE)$-structure
over $t^0$) and Remark~\ref{t3.19}$(iii)$,
the $(TE)$-structure has a Birkhoff normal form,
i.e., an extension to a pure $(TLE)$-structure,
if its monodromy is semisimple.
Section~\ref{c6.2}: All pure $(TLE)$-structures over
$\big(M,t^0\big)=({\mathbb C},0)$ with trace free pole part and with
logarithmic restriction to $t^0$ are classified
in Theorem~\ref{t6.3}. These comprise all with
semisimple monodromy and thus all with generic
types (Sem) or (Bra).
Section~\ref{c6.3}: All $(TE)$-structures
over $\big(M,t^0\big)=({\mathbb C},0)$ with trace free pole part and
with logarithmic restriction over $t^0$ whose
monodromies have a $2\times 2$ Jordan block
are classified in Theorem~\ref{t6.7}.
Their generic types are (Reg) or (Log) because of
Theorem~\ref{t6.2}.
Most of them have no Birkhoff normal forms.
The intersection with Theorem~\ref{t6.3} is small
and consists of those which have Birkhoff normal forms.
Theorems~\ref{t6.3} and~\ref{t6.7} together give
all unfoldings with trace free pole parts
over $\big(M,t^0\big)=({\mathbb C},0)$ of logarithmic rank $2$
$(TE)$-structures over $t^0$.
\subsection{Numerical invariants for such (\emph{TE})-structures}
\label{c6.1}
The next definition gives some numerical invariants
for such $(TE)$-structures.
Recall the invariants $\delta^{(0)}$ and $\delta^{(1)}$
in Lemma~\ref{t3.9}.
\begin{Definition}\label{t6.1}
Let $\big(H\to{\mathbb C}\times \big(M,t^0\big),\nabla\big)$ be a $(TE)$-structure
with trace free pole part over $\big(M,t^0\big)=({\mathbb C},0)$
(with coordinate $t$) whose restriction over $t^0=0$ is
logarithmic. Let $M\subset {\mathbb C}$ be a neighborhood of 0
on which the $(TE)$-structure is defined.
On $M\setminus\{0\}$ it has a fixed type, (Sem) or (Bra) or
(Reg) or (Log), which is called the {\it generic type}
of the $(TE)$-structure. Lemma~\ref{t4.3} characterizes
the generic type in terms of (non)vanishing of
$\delta^{(0)},\delta^{(1)}\in t{\mathbb C}\{t\}$ and ${\mathcal U}$:
\[
\def\arraystretch{1.5}
\begin{tabular}{c|c|c|c}
\hline
(Sem) &(Bra) &(Reg) &(Log)
\\ \hline
$\delta^{(0)}\neq 0$ & $\delta^{(0)}=0$,\ $\delta^{(1)}\neq 0$ &
$\delta^{(0)}=\delta^{(1)}=0$, ${\mathcal U}\neq 0$ & ${\mathcal U}=0$
\\
\hline
\end{tabular}
\]
For the generic types (Sem), (Bra) and (Reg), define
$k_1\in{\mathbb N}$ by
\begin{eqnarray}\label{6.1}
k_1 :=\max\big(k\in{\mathbb N}\,|\, {\mathcal U}({\mathcal O}(H)_0)\subset t^k{\mathcal O}(H)_0\big).
\end{eqnarray}
For the generic types (Sem) and (Bra) define $k_2\in{\mathbb Z}$ by
\begin{gather*}
k_2:= \begin{cases}
\deg_t\delta^{(0)}-k_1 & \text{for the generic type (Sem)},\\
\deg_t\delta^{(1)}-k_1 & \text{for the generic type (Bra)}.
\end{cases}
\end{gather*}
\end{Definition}
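For example, if with respect to some ${\mathbb C}\{t,z\}$-basis the
pole part of a $(TE)$-structure of generic type (Sem) is
$B^{(0)}=\gamma t^kC_2+\gamma t^lE$ with $\gamma\in{\mathbb C}^*$ and
$k\leq l$ (a shape which appears in the proof of Theorem~\ref{t6.2}
below), then~\eqref{6.8} and~\eqref{6.9} give
\begin{gather*}
k_1=\min(k,l)=k,\qquad
k_1+k_2=\deg_t\big(\gamma^2t^{k+l}\big)=k+l,\qquad\text{so}\quad
k_2=l.
\end{gather*}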
The following theorem gives, for the generic type (Bra) and
part of the generic type (Sem), restrictions on the eigenvalues
of the residue endomorphism of the logarithmic pole at $z=0$
of the $(TE)$-structure over $t^0=0$.
It also shows that the monodromy is semisimple if the generic
type is (Sem) or (Bra).
\begin{Theorem}\label{t6.2}
Let $\big(H\to{\mathbb C}\times \big(M,t^0\big),\nabla\big)$ be a rank $2$ $(TE)$-structure
with trace free pole part over $\big(M,t^0\big)=({\mathbb C},0)$
whose restriction over $t^0=0$ is logarithmic.
Recall the invariant $\rho^{(1)}\in{\mathbb C}$ from Lemma~$\ref{t3.9}(b)$, and recall the invariants $k_1\in{\mathbb N}$ and $k_2\in{\mathbb Z}$
from Definition~$\ref{t6.1}$
if the generic type is $($Sem$)$ or $($Bra$)$.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] Suppose that the generic type is $($Sem$)$.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Then $k_2\geq k_1$.
\item[$(ii)$] If $k_2>k_1$ then the eigenvalues of the
residue endomorphism of the logarithmic pole at~$z=0$
of the $(TE)$-structure over $t^0$ are
$\rho^{(1)}\pm \frac{k_1-k_2}{2(k_1+k_2)}$. Their difference
is smaller than~$1$. Especially, the eigenvalues of the
monodromy are different, and the mono\-dromy is semisimple.
\item[$(iii)$] Also if $k_1=k_2$, the monodromy is semisimple.
\end{enumerate}
\item[$(b)$] Suppose that the generic type is $($Bra$)$.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Then $k_2\in{\mathbb N}$.
\item[$(ii)$] The eigenvalues of the residue endomorphism of the
logarithmic pole at $z=0$ of the $(TE)$-structure over $t^0$
are $\rho^{(1)}\pm \frac{-k_2}{2(k_1+k_2)}$. Their difference
is smaller than $1$. Especially, the eigenvalues of the
monodromy are different, and the monodromy is semisimple.
\end{enumerate}
\end{enumerate}
\end{Theorem}
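For instance, in the case $(a)(ii)$ with $(k_1,k_2)=(1,2)$,
the eigenvalues of the residue endomorphism are
$\rho^{(1)}\pm\frac{1-2}{2(1+2)}=\rho^{(1)}\mp\frac{1}{6}$,
with difference $\frac{1}{3}<1$, and the monodromy has the
two different eigenvalues ${\rm e}^{-2\pi {\rm i}(\rho^{(1)}\mp 1/6)}$.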
\begin{proof}
By Lemma~\ref{t3.11}, a ${\mathbb C}\{t,z\}$-basis $\underline{v}$
of the germ ${\mathcal O}(H)_{(0,0)}$
can be chosen such that the matrices
$A$ and $B\in M_{2\times 2}({\mathbb C}\{t,z\})$ with
$z\nabla_{\partial_t}\underline{v}=\underline{v} A$ and
$z^2\nabla_{\partial_z}\underline{v}=\underline{v} B$ satisfy~\eqref{3.19},
$0=\tr A=\tr\big(B-z\rho^{(1)} C_1\big)$, or, more explicitly,
\begin{gather*}
A= a_2C_2+a_3D+a_4E\qquad\text{with}\quad
a_2,a_3,a_4 \in {\mathbb C}\{t,z\},
\\
B= z\rho^{(1)}C_1+b_2C_2+b_3D+b_4E\qquad\text{with}\quad
b_2,b_3,b_4\in{\mathbb C}\{t,z\}.
\end{gather*}
Write $a_j=\sum_{k\geq 0}a_j^{(k)}z^k$ and
$a_j^{(k)}=\sum_{l\geq 0}a_{j,l}^{(k)}t^l\in{\mathbb C}\{t\}$,
and analogously for $b_j$.
Condition~\eqref{3.8} says here
\begin{gather}
0= z\partial_tB-z^2\partial_z A + zA +[A,B]\nonumber
\\ \hphantom{0}
{}= C_2\bigg[ z\partial_t b_2 + za_2^{(0)} -
\sum_{k\geq 2}(k-1)a_2^{(k)}z^{k+1}+2a_2b_3-2a_3b_2\bigg]
\label{6.5}
\\ \hphantom{0=}
{}+ D\bigg[z\partial_tb_3 + za_3^{(0)} -
\sum_{k\geq 2}(k-1)a_3^{(k)}z^{k+1}-a_2b_4+a_4b_2\bigg]
\label{6.6}
\\ \hphantom{0=}
{}+E\bigg[ z\partial_t b_4 + za_4^{(0)} -
\sum_{k\geq 2}(k-1)a_4^{(k)}z^{k+1}-2a_4b_3+2a_3b_4\bigg].
\label{6.7}
\end{gather}
\medskip
$(a)$ Suppose that the generic type is (Sem).
\medskip
$(i)$ By definition of $k_1$ and $k_2$,
\begin{gather}\label{6.8}
k_1 = \min\big(\deg_tb_2^{(0)},\deg_tb_3^{(0)},\deg_tb_4^{(0)}\big),
\\
k_1+k_2= \deg_t\big(\big(b_3^{(0)}\big)^2+b_2^{(0)}b_4^{(0)}\big)\geq 2k_1,
\label{6.9}
\end{gather}
thus $k_2\geq k_1$.
\medskip
$(ii)$ Suppose $k_2>k_1$.
By a linear change of the basis $\underline{v}$, we can arrange
that $k_1=\deg_tb_2^{(0)}$. The base change matrix
$T=C_1+b_3^{(0)}/b_2^{(0)}\cdot E\in {\rm GL}_2({\mathbb C}\{t\})$
gives the new basis $\underline{\widetilde v}=\underline{v}\cdot T$
with matrix
\begin{gather*}
\widetilde B^{(0)}=T^{-1}B^{(0)}T=b_2^{(0)}C_2+
\bigg(b_4^{(0)}+\frac{\big(b_3^{(0)}\big)^2}{b_2^{(0)}}\bigg)E.
\end{gather*}
We can make a coordinate change in $t$ such that afterwards
\begin{gather*}
b_2^{(0)}b_4^{(0)}+\big(b_3^{(0)}\big)^2=\gamma^2 t^{k_1+k_2}
\end{gather*}
for an arbitrarily chosen $\gamma\in{\mathbb C}^*$.
Then a diagonal base change leads to a basis
which is again called $\underline{v}$ with matrices
which are again called $A$ and $B$ with
\begin{gather*}
b_3^{(0)}=0,\qquad
b_2^{(0)}=\gamma t^{k_1},\qquad
b_4^{(0)}=\gamma t^{k_2}.
\end{gather*}
Now the vanishing of the coefficients in ${\mathbb C}\{t\}$ of
$C_2\cdot z^0$, $C_2\cdot z^1$, $D\cdot z^0$, $D\cdot z^1$
and $E\cdot z^1$ in~\eqref{6.5}--\eqref{6.7}
tells the following:
\begin{align*}
C_2\cdot z^0\colon \quad &a_3^{(0)}=0,
\\
C_2\cdot z^1\colon \quad &0=k_1\gamma t^{k_1-1}+a_2^{(0)}\big(1+2b_3^{(1)}\big)
-2a_3^{(1)}\gamma t^{k_1},
\\
&\text{so}\quad \deg_t a_2^{(0)}=k_1-1,\quad
0=k_1\gamma +a_{2,k_1-1}^{(0)}\big(1+2b_{3,0}^{(1)}\big),
\\
D\cdot z^0\colon\quad &a_2^{(0)}\gamma t^{k_2}=a_4^{(0)}\gamma t^{k_1},
\quad\text{so}\quad a_4^{(0)}=a_2^{(0)}t^{k_2-k_1},
\\
&\text{so}\quad \deg_t a_4^{(0)}=k_2-1,\quad\text{and}\quad
a_{4,k_2-1}^{(0)}=a_{2,k_1-1}^{(0)}.
\\
D\cdot z^1\colon\quad &a_2^{(0)}b_4^{(1)}+a_2^{(1)}\gamma t^{k_2}
=a_4^{(0)}b_2^{(1)}+a_4^{(1)}\gamma t^{k_1},
\\
&\text{so}\quad b_{4,0}^{(1)}=0 \quad \text{(here }
k_2>k_1\text{ is used)},
\\
E\cdot z^1\colon\quad & 0=k_2\gamma t^{k_2-1}+a_4^{(0)}\big(1-2b_3^{(1)}\big)
+2a_3^{(1)}\gamma t^{k_2},
\\
&\text{so}\quad 0=k_2\gamma +a_{4,k_2-1}^{(0)}\big(1-2b_{3,0}^{(1)}\big).
\end{align*}
This shows
\begin{gather*}
b_{4,0}^{(1)}=0,\qquad
b_{3,0}^{(1)}=\frac{k_1-k_2}{2(k_1+k_2)}
\in \bigg({-}\frac{1}{2},0\bigg)\cap{\mathbb Q}.
\end{gather*}
With respect to the basis $\underline{v}|_{(0,0)}$ of $K_{(0,0)}$,
the matrix of the residue endomorphism of the
logarithmic pole at $z=0$ of the $(TE)$-structure over $t^0=0$
is
\begin{eqnarray*}
B^{(1)}(0)&=& \rho^{(1)}C_1 + b_{3,0}^{(1)}D+b_{2,0}^{(1)}C_2.
\end{eqnarray*}
It is semisimple with the eigenvalues
$\rho^{(1)}\pm b_{3,0}^{(1)}$, whose difference is smaller
than 1. The monodromy is semisimple with the two different
eigenvalues $\exp\big({-}2\pi {\rm i}\big(\rho^{(1)}\pm b_{3,0}^{(1)}\big)\big)$.
\medskip
$(iii)$ Suppose $k_2=k_1$. As in the proof of $(ii)$, we
can make a coordinate change in $t$ and then obtain a
${\mathbb C}\{t,z\}$-basis $\underline{\widetilde v}$ of ${\mathcal O}(H)_{(0,0)}$ with
\begin{gather*}
\widetilde b_3^{(0)}=0 ,\qquad \widetilde b_2^{(0)}=\widetilde b_4^{(0)}=\gamma t^{k_1}
\end{gather*}
for an arbitrarily chosen $\gamma\in{\mathbb C}^*$.
Now the constant base change matrix
$T=\left(\begin{smallmatrix}1&\hphantom{-}1\\1&-1\end{smallmatrix}\right)$ gives the
basis $\underline{v}=\underline{\widetilde v}\cdot T$ with
\begin{gather*}
b_2^{(0)}=b_4^{(0)}=0,\qquad b_3^{(0)}=\gamma t^{k_1}.
\end{gather*}
The vanishing of the coefficients in ${\mathbb C}\{t\}$ of
$C_2\cdot z^0$, $E\cdot z^0$, $D\cdot z^1$,
$C_2\cdot z^1$ and $E\cdot z^1$ in~\eqref{6.5}--\eqref{6.7}
tells the following:
\begin{align*}
&C_2\cdot z^0\colon&&\hspace{-55mm} a_2^{(0)}=0,
\\
&E\cdot z^0\colon&&\hspace{-55mm} a_4^{(0)}=0,
\\
&D\cdot z^1\colon&&\hspace{-55mm} 0=k_1\gamma t^{k_1-1}+a_3^{(0)},\quad
\text{so}\quad a_3^{(0)}=-k_1\gamma t^{k_1-1},
\\
&C_2\cdot z^1\colon&&\hspace{-55mm} b_2^{(1)}=\frac{b_3^{(0)}}{a_3^{(0)}}a_2^{(1)}
= \frac{-1}{k_1} \cdot t \cdot a_2^{(1)},\quad
\text{so}\quad b_{2,0}^{(1)}=0,
\\
&E\cdot z^1\colon&&\hspace{-55mm} b_4^{(1)}=\frac{b_3^{(0)}}{a_3^{(0)}}a_4^{(1)}
= \frac{-1}{k_1} \cdot t \cdot a_4^{(1)},\quad
\text{so}\quad b_{4,0}^{(1)}=0.
\end{align*}
With respect to the basis $\underline{v}|_{(0,0)}$ of $K_{(0,0)}$,
the matrix of the residue endomorphism of the logarithmic
pole at $z=0$ of the $(TE)$-structure over $t^0=0$ is
\begin{gather*}
B^{(1)}(0)= \rho^{(1)}C_1+b_{3,0}^{(1)}D.
\end{gather*}
It is diagonal with the eigenvalues $\rho^{(1)}\pm b_{3,0}^{(1)}$.
Therefore the monodromy has the eigenvalues
$\exp\big({-}2\pi {\rm i}\big(\rho^{(1)}\pm b_{3,0}^{(1)}\big)\big)$.
If $b_{3,0}^{(1)}\in{\mathbb C}\setminus \big(\frac{1}{2}{\mathbb Z}\setminus\{0\}\big)$, the eigenvalues
of the residue endomorphism do not differ by a~nonzero integer.
Because of Theorem~\ref{t3.23}$(c)$, then the monodromy is
semisimple.
We will show that the monodromy is semisimple also in the cases
$b_{3,0}^{(1)}\in\frac{1}{2}{\mathbb Z}\setminus\{0\}$,
by reducing these cases to the case $b_{3,0}^{(1)}=0$.
Suppose $b_{3,0}^{(1)}\in\frac{1}{2}{\mathbb N}$. The case
$b_{3,0}^{(1)}\in \frac{1}{2}{\mathbb Z}_{<0}$ can be reduced to this
case by exchanging $v_1$ and~$v_2$.
We will construct a new $(TE)$-structure over $\big(M,t^0\big)=({\mathbb C},0)$
with the same monodromy and again with trace free pole part and
of generic type (Sem)
with logarithmic restriction over~$t^0$, but where
$B^{(1)}(0)$ is replaced by
\begin{gather*}
\widetilde B^{(1)}(0)=\bigg(\rho^{(1)}+\frac{1}{2}\bigg)C_1+
\bigg(b_{3,0}^{(1)}-\frac{1}{2}\bigg)D.
\end{gather*}
Applying this sufficiently often, we arrive at the case
$b_{3,0}^{(1)}=0$, which has semisimple monodromy.
The basis $\underline{\widetilde v}:= \underline{v}\cdot
\left(\begin{smallmatrix}1&0\\0&z\end{smallmatrix}\right)$ of
$H':=H|_{{\mathbb C}^*\times (M,t^0)}$ in a neighborhood of $(0,0)$
defines a new $(TE)$-structure over $(M,0)$ because of
\begin{gather*}
z\nabla_{\partial_t}\underline{\widetilde v}= \underline{\widetilde v}
\bigl(z^{-1}a_2C_2+a_3D+za_4E\bigr)\qquad\text{and}\qquad a_2^{(0)}=0,
\\[1ex]
z^2\nabla_{\partial_z}\underline{\widetilde v}= \underline{\widetilde v}
\bigg(z\bigg(\rho^{(1)}+\frac{1}{2}\bigg)C_1+z^{-1}b_2C_2+\bigg(b_3-z\frac{1}{2}\bigg)D
+zb_4E\bigg)\qquad \text{and}\qquad b_2^{(0)}=0.
\end{gather*}
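(These formulas follow from $\widetilde A=T^{-1}AT$ and
$\widetilde B=T^{-1}BT+z^2T^{-1}\partial_zT$ for
$T=\left(\begin{smallmatrix}1&0\\0&z\end{smallmatrix}\right)$,
using $T^{-1}C_2T=z^{-1}C_2$, $T^{-1}ET=zE$ and
$z^2T^{-1}\partial_zT=\frac{z}{2}(C_1-D)$.)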
Of course, it has the same monodromy.
The restriction over $t^0=0$ has a logarithmic pole at $z=0$
because $b_2^{(1)}=\frac{-1}{k_1}ta_2^{(1)}$
and $b_3^{(0)}=\gamma t^{k_1}$ with $k_1\in{\mathbb N}$.
Its generic type is still (Sem).
Its numbers $\widetilde k_1$ and $\widetilde k_2$ satisfy
$\widetilde k_1+\widetilde k_2=\deg_t\det\widetilde{\mathcal U}=\deg_t\big(b_3^{(0)}\big)^2=2k_1$.
The assumption $\widetilde k_1<\widetilde k_2$ would lead together with
part $(ii)$ to two different eigenvalues of the monodromy,
a contradiction. Therefore $\widetilde k_1=\widetilde k_2=k_1$.
Thus we are in the same situation as before, with
b_{3,0}^{(1)}$ diminished by~$\frac{1}{2}$.\looseness=-1
\medskip
$(b)$ Suppose that the generic type is (Bra).
$(i)$ and $(ii)$: ${\mathcal U}$ is nilpotent, but not $0$.
We can choose a ${\mathbb C}\{t,z\}$-basis $\underline{v}$ of
${\mathcal O}(H)_{(0,0)}$ such that
\begin{gather*}
B^{(0)}=b_2^{(0)}C_2,\qquad \text{so}\quad
b_3^{(0)}=b_4^{(0)}=0.
\end{gather*}
Then $\delta^{(1)}=-b_2^{(0)}b_4^{(1)}$.
Here $\deg_tb_2^{(0)}=k_1$ and $\deg_t\delta^{(1)}=k_1+k_2$,
so $k_2=\deg_tb_4^{(1)} \geq 0$. We can make a
coordinate change in $t$ such that afterwards
\begin{gather*}
b_2^{(0)}b_4^{(1)}=\gamma^2t^{k_1+k_2},
\end{gather*}
for an arbitrarily chosen $\gamma\in{\mathbb C}^*$.
Then a diagonal base change leads to a basis which is
again called $\underline{v}$ with matrices which are again
called $A$ and $B$ with
\begin{gather*}
b_2^{(0)}=\gamma t^{k_1},\qquad
b_3^{(0)}=b_4^{(0)}=0,\qquad
b_4^{(1)}=\gamma t^{k_2}.
\end{gather*}
The vanishing of the coefficients in ${\mathbb C}\{t\}$ of
$C_2\cdot z^0$, $D\cdot z^0$, $C_2\cdot z^1$, $D\cdot z^1$
and $E\cdot z^2$ in~\eqref{6.5}--\eqref{6.7}
tells the following
\begin{align*}
&C_2\cdot z^0\colon &&\hspace{-40mm} a_3^{(0)}=0,
\\
&D\cdot z^0\colon &&\hspace{-40mm}a_4^{(0)}=0,
\\
&C_2\cdot z^1\colon
&&\hspace{-40mm}0=k_1\gamma t^{k_1-1}+a_2^{(0)}\big(1+2b_3^{(1)}\big)-2a_3^{(1)}\gamma t^{k_1},
\\
&&&\hspace{-40mm}\text{so}\quad \deg_t a_2^{(0)}=k_1-1, \quad
0=k_1\gamma +a_{2,k_1-1}^{(0)}\big(1+2b_{3,0}^{(1)}\big),
\\
&D\cdot z^1\colon &&\hspace{-40mm}a_2^{(0)}\gamma t^{k_2}=a_4^{(1)}\gamma t^{k_1},\quad
\text{so}\quad t^{k_2}=a_4^{(1)}\frac{t^{k_1}}{a_2^{(0)}},
\\
&&&\hspace{-40mm}\text{so}\quad k_2=1+\deg_t a_4^{(1)}\geq 1,\quad\text{and}\quad
a_4^{(1)}=a_2^{(0)}t^{k_2-k_1},
\\
&E\cdot z^2\colon &&\hspace{-40mm}
0=k_2\gamma t^{k_2-1}+2a_3^{(1)}\gamma t^{k_2}-2a_4^{(1)}b_3^{(1)},
\\
&&&\hspace{-40mm} \text{so}\quad 0=k_2\gamma -2a_{2,k_1-1}^{(0)}b_{3,0}^{(1)}.
\end{align*}
This shows
\begin{gather*}
k_2\geq 1,\qquad b_{4,0}^{(1)}=0,\qquad
b_{3,0}^{(1)}=\frac{-k_2}{2(k_1+k_2)}
\in \bigg({-}\frac{1}{2},0\bigg)\cap{\mathbb Q}.
\end{gather*}
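In detail, the value of $b_{3,0}^{(1)}$ is obtained by eliminating
$a_{2,k_1-1}^{(0)}$ from the two constant coefficient relations above:
the relation from $E\cdot z^2$ gives
$2a_{2,k_1-1}^{(0)}b_{3,0}^{(1)}=k_2\gamma$,
and inserting this into the relation from $C_2\cdot z^1$ gives
\begin{gather*}
0=k_1\gamma+a_{2,k_1-1}^{(0)}+2a_{2,k_1-1}^{(0)}b_{3,0}^{(1)}
=(k_1+k_2)\gamma+a_{2,k_1-1}^{(0)},
\\
\text{so}\quad a_{2,k_1-1}^{(0)}=-(k_1+k_2)\gamma
\qquad\text{and}\qquad
b_{3,0}^{(1)}=\frac{k_2\gamma}{2a_{2,k_1-1}^{(0)}}
=\frac{-k_2}{2(k_1+k_2)}.
\end{gather*}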
With respect to the basis $\underline{v}|_{(0,0)}$ of $K_{(0,0)}$,
the matrix of the residue endomorphism of the
logarithmic pole at $z=0$ of the $(TE)$-structure over $t^0=0$
is
\begin{gather*}
B^{(1)}(0)= \rho^{(1)}C_1 + b_{3,0}^{(1)}D+b_{2,0}^{(1)}C_2.
\end{gather*}
It is semisimple with the eigenvalues
$\rho^{(1)}\pm b_{3,0}^{(1)}$, whose difference is nonzero and
smaller than $1$ in absolute value. The mono\-dromy is semisimple with the two different
eigenvalues $\exp\big({-}2\pi {\rm i}\big(\rho^{(1)}\pm b_{3,0}^{(1)}\big)\big)$.
\end{proof}
\subsection[1-parameter unfoldings with trace free pole part
of logarithmic pure $(TLE)$-structures over a point]
{1-parameter unfoldings with trace free pole part
of logarithmic \\pure $\boldsymbol{(TLE)}$-structures over a point}\label{c6.2}
\looseness=1
Such unfoldings are themselves pure $(TLE)$-structures
over $({\mathbb C},0)$, see Remark~\ref{t3.19}$(iii)$ respectively
\cite[Chapter~VI, Theorem~2.1]{Sa02} or~\cite[Theorem~5.1(c)]{DH20-2}.
Their restrictions over $t^0=0$ have a logarithmic
pole at $z=0$.
Theorem~\ref{t6.3} classifies such pure $(TLE)$-structures.
The underlying $(TE)$-structures were subject of
Definition~\ref{t6.1} and Theorem~\ref{t6.2}.
They gave their generic type and
invariants $(k_1,k_2)\in{\mathbb N}^2$ (for the generic types
(Sem) and (Bra)) and $k_1\in{\mathbb N}$ (for the generic type (Reg)).
Theorem~\ref{t6.3} will give an invariant $k_1\in{\mathbb N}$ also
for the generic type (Log) with Higgs field $\neq 0$.
Lemma~\ref{t3.9}$(b)$ gave the invariant $\rho^{(1)}\in{\mathbb C}$.
The coordinate on ${\mathbb C}$ is again called~$t$.
\begin{Theorem}\label{t6.3}
Any pure rank $2$ $(TLE)$-structure over $\big(M,t^0\big)=({\mathbb C},0)$
with trace free pole part and with logarithmic
restriction over $t^0$ has after a suitable coordinate change
in $t$ a unique Birkhoff normal form in the following list.
Here the Birkhoff normal form
consists of two matrices $A$ and $B$ which are associated
to a global basis $\underline{v}$ of $H$ whose restriction to
$\{\infty\}\times \big(M,t^0\big)$ is flat with
respect to the residual connection along
$\{\infty\}\times \big(M,t^0\big)$, via
$z\nabla_{\partial_t}\underline{v}=\underline{v}A$ and
\mbox{$z^2\nabla_{\partial_z}\underline{v}=\underline{v}B$}.
The matrices have the shape
\begin{gather}
A=a_2^{(0)}C_2+a_3^{(0)}D+a_4^{(0)}E,\nonumber
\\
B=z\rho^{(1)}C_1-\gamma tA
+zb_2^{(1)}C_2+zb_3^{(1)}D,\label{6.19}
\end{gather}
with $a_2^{(0)},a_3^{(0)},a_4^{(0)}\in{\mathbb C}[t]$,
$\rho^{(1)},\gamma\in{\mathbb C}$, $b_2^{(1)},b_3^{(1)}\in{\mathbb C}$
$($so here $zb_4^{(1)}E$ does not turn up, resp.~$b_4^{(1)}\allowbreak=0)$.
The left column of the following list gives the
generic type of the
underlying $(TE)$-structure and, depending on the type,
the invariant $k_1\in{\mathbb N}$ or the invariants $k_1,k_2\in{\mathbb N}$
from Definition~$\ref{t6.1}$ of the underlying $(TE)$-structure.
The invariant $\rho^{(1)}\in {\mathbb C}$ is arbitrary and
is not listed in the table.
$\zeta\in{\mathbb C}$, $\alpha_3\in {\mathbb R}_{\geq 0}\cup\H$,
$\alpha_4\in{\mathbb C}\setminus \{-1\}$, $k_1\in{\mathbb N}$ and $k_2\in{\mathbb N}$
are invariants in some cases. In~the first $6$ cases, $a_i^{(0)}$ is determined by
$b_i^{(0)}=-\gamma t a_i^{(0)}$.
\[
\def1.4{1.5}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\raisebox{1mm}[3.9ex][2.2ex]{\parbox[c]{25mm}{\centering Generic type and~invariants}} & $\gamma$
& $b_2^{(0)}$ & $b_3^{(0)}$ & $b_4^{(0)}$ &
$b_2^{(1)}$ & $b_3^{(1)}$
\\
\hline
$($Sem$)$ & & & & & &
\\
$k_2-k_1>0$\ odd & $\frac{2}{k_1+k_2}$ &
$t^{k_1}$ & $0$ & $t^{k_2}$ & $0$ & $\frac{k_1-k_2}{2(k_1+k_2)}$
\\
$k_2-k_1\in 2{\mathbb N}$& $\frac{2}{k_1+k_2}$ &
$t^{k_1}$ & $\zeta t^{(k_1+k_2)/2}$ & $\big(1-\zeta^2\big)t^{k_2}$ &
$0$ & $\frac{k_1-k_2}{2(k_1+k_2)}$
\\
$k_2=k_1$ & $\frac{1}{k_1}$ & $0$ & $t^{k_1}$ & $0$ &
$0$ & $\alpha_3$
\\
\hline
$($Bra$)$, $k_1$, $k_2$ & $\frac{1}{k_1+k_2}$ & $t^{k_1}$ &
$t^{k_1+k_2}$ & $-t^{k_1+2k_2}$ & $0$ & $\frac{-k_2}{2(k_1+k_2)}$
\\
\hline
$($Reg$)$, $k_1$ & $\frac{1+\alpha_4}{k_1}$ & $t^{k_1}$ & $0$ &
$0$ & $0$ & $\frac{1}{2}\alpha_4$
\\
$($Reg$)$, $k_1$ & $\frac{1}{k_1}$ & $t^{k_1}$ & $0$ & $0$ & $1$ & $0$
\\ \hline \hline
Generic type & $\gamma$
& $a_2^{(0)}$ & $a_3^{(0)}$ & $a_4^{(0)}$ &
$b_2^{(1)}$ & $b_3^{(1)}$
\\ \hline
$($Log$)$ & $0$ & $k_1t^{k_1-1}$ & $0$ & $0$ & $0$ & $-\frac{1}{2}$
\\
$($Log$)$ & $0$ & $0$ & $0$ & $0$ & $0$ & $\alpha_3$
\\
$($Log$)$ & $0$ & $0$ & $0$ & $0$ & $1$ & $0$
\\
\hline
\end{tabular}
\]
\end{Theorem}
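As a quick consistency check of the table, consider the $($Bra$)$
normal form: with $b_2^{(1)}=b_4^{(1)}=0$ one has
$\delta^{(1)}=-2b_3^{(1)}b_3^{(0)}$, so
\begin{gather*}
\delta^{(0)}=-b_2^{(0)}b_4^{(0)}-\big(b_3^{(0)}\big)^2
=t^{2(k_1+k_2)}-t^{2(k_1+k_2)}=0,
\qquad
\delta^{(1)}=\frac{k_2}{k_1+k_2}\,t^{k_1+k_2}\neq 0,
\end{gather*}
with $\deg_t\delta^{(1)}=k_1+k_2$, in accordance with the generic
type $($Bra$)$ and the invariants $(k_1,k_2)$
(see also~\eqref{6.26} below).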
Before the proof, several remarks on these Birkhoff
normal forms are made.
The proof is given after Remark~\ref{t6.6}.
\begin{Remarks}\label{t6.4}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] The matrix $B(0)=zB^{(1)}(0)$ is the matrix of the
logarithmic pole at $z=0$ of the restriction over $t^0=0$
of the $(TE)$-structure. In~all cases except the 6th case and the 9th case, it is
$z\big(\rho^{(1)}C_1+b_3^{(1)}D\big)$, so it is diagonal. In~these cases the monodromy is semisimple with eigenvalues
$\exp\big({-}2\pi {\rm i} \big(\rho^{(1)}\pm b_3^{(1)}\big)\big)$. In~the 6th case and the 9th case,
this matrix is $z\big(\rho^{(1)}C_1+C_2\big)$. Then the matrix and
the monodromy have a $2\times 2$ Jordan block,
and the monodromy has the eigenvalue $\exp\big({-}2\pi {\rm i}\rho^{(1)}\big)$. In~all cases, the leading exponents
(defined in Theorem~\ref{t4.20})
of the logarithmic $(TE)$-structure over $t^0$ are
called $\alpha_1^0$ and $\alpha_2^0$, and they~are
\begin{gather*}
\alpha_{1/2}^0=\rho^{(1)}\pm b_3^{(1)},\qquad
\text{i.e.},\qquad \frac{\alpha_1^0+\alpha_2^0}{2}=\rho^{(1)},\qquad
\alpha_1^0-\alpha_2^0=2b_3^{(1)}.
\end{gather*}
The 6th and 9th cases turn up again in Theorem~\ref{t6.7}.
See Remarks~\ref{t6.8}$(iv)$--$(vi)$.
\item[$(ii)$] In the generic types (Sem), the critical values satisfy
$u_2=-u_1$ because the pole part is trace free,
$-\frac{u_1+u_2}{2}=\rho^{(0)}=0$.
They and the regular singular exponents $\alpha_1$ and
$\alpha_2$ can be calculated with the formulas~\eqref{4.6} and~\eqref{4.7}:
\begin{gather}\label{6.22}
\delta^{(0)}= -b_2^{(0)}b_4^{(0)}-\big(b_3^{(0)}\big)^2=-t^{k_1+k_2},
\\
u_{1/2}=\pm\sqrt{\frac{1}{4}(u_1-u_2)^2}=\pm\sqrt{-\delta^{(0)}}
=\pm t^{(k_1+k_2)/2},\label{6.23}
\\
\frac{\alpha_1+\alpha_2}{2}= \rho^{(1)}, \label{6.24}
\\
\alpha_1-\alpha_2= u_1^{-1}\delta^{(1)}
= \begin{cases}
0,& \text{gen. type (Sem) with }k_2-k_1>0\text{ odd,}\\
\frac{k_2-k_1}{k_1+k_2}\zeta,
& \text{gen. type (Sem) with }k_2-k_1\in 2{\mathbb N},\\
-2\alpha_3,& \text{gen. type (Sem) with }k_2=k_1.
\end{cases} \label{6.25}
\end{gather}
If $k_2=k_1$ then $\{\alpha_1,\alpha_2\}
=\big\{\alpha_1^0,\alpha_2^0\big\}$, but if
$k_2>k_1$ then $\{\alpha_1,\alpha_2\}\neq
\big\{\alpha_1^0,\alpha_2^0\big\}$, except if $\zeta\in\{\pm 1\}$.
A direct check of~\eqref{6.25} in the 2nd case is carried out
after these remarks.
\item[$(iii)$] In the generic type (Bra), $\rho^{(1)}\in{\mathbb C}$ is arbitrary,
$b_3^{(1)}=\frac{-k_2}{2(k_1+k_2)}$, and $\delta^{(1)}$
varies as follows,
\begin{eqnarray}\label{6.26}
\delta^{(1)}= \frac{k_2}{k_1+k_2}t^{k_1+k_2}.
\end{eqnarray}
\item[$(iv)$] In the 5th, 7th and 8th cases in Theorem~\ref{t6.3},
the monodromy is semisimple and the $(TE)$-structure is regular singular.
Associate to it the data in Definition~\ref{t3.18}:
$H':=H|_{{\mathbb C}\times (M,t^0)}$, $M^{\rm mon}$, $N^{\rm mon}$,
$\Eig(M^{\rm mon})=\{\lambda_1,\lambda_2\}$, $H^\infty$, $C^{\alpha}$ for
$\alpha\in{\mathbb C}$ with ${\rm e}^{-2\pi {\rm i} \alpha_j}\in\{\lambda_1,\lambda_2\}$.
The~leading exponents of the logarithmic $(TE)$-structure
over $t^0$ are called $\alpha_1^0$ and $\alpha_2^0$ as in $(i)$.
The leading exponents of the $(TE)$-structure over
$t\in {\mathbb C}\setminus\{0\}$ are now called $\alpha_1$ and $\alpha_2$.
Possibly after renumbering $\lambda_1$ and $\lambda_2$,
$\alpha_1^0$ and $\alpha_2^0$, and $\alpha_1$ and $\alpha_2$,
we have ${\rm e}^{-2\pi {\rm i}\alpha_j^0}={\rm e}^{-2\pi {\rm i}\alpha_j}=\lambda_j$
and the relations in the following table:
\begin{gather}\label{6.27}
\def1.4{1.3}
\begin{tabular}{c|c|c|c|c}
\hline
In Theorem~\ref{t6.3} & $\alpha_1^0$ & $\alpha_2^0$&$\alpha_1$ &$\alpha_2$
\\ \hline
5th case &$\rho^{(1)}+\frac{1}{2}\alpha_4$ &$\rho^{(1)}-\frac{1}{2}\alpha_4$
&$\alpha_1^0$ & $\alpha_2^0-1$
\\
7th case & $\rho^{(1)}-\frac{1}{2}$ &
$\rho^{(1)}+\frac{1}{2}$ & $\alpha_1^0$ & $\alpha_2^0$
\\
8th case &$\rho^{(1)}+\alpha_3$ &$\rho^{(1)}-\alpha_3$ & $\alpha_1^0$ & $\alpha_2^0$
\\
\hline
\end{tabular}
\end{gather}
And there exist sections $s_j\in C^{\alpha_j}\setminus\{0\}$ with
\begin{gather}
{\mathcal O}(H)_0= {\mathbb C}\{t,z\}\bigg(s_1+\frac{-1}{1+\alpha_4}t^{k_1}s_2\bigg)
\oplus {\mathbb C}\{t,z\}(zs_2)
\qquad \text{in the 5th case,} \label{6.28}\\
{\mathcal O}(H)_0= {\mathbb C}\{t,z\}\big(s_1+t^{k_1}z^{-1}s_2\big)
\oplus {\mathbb C}\{t,z\}s_2
\qquad \text{in the 7th case,} \label{6.29}\\[.5ex]
{\mathcal O}(H)_0= {\mathbb C}\{t,z\}s_1
\oplus {\mathbb C}\{t,z\}s_2
\qquad \text{in the 8th case.}\label{6.30}
\end{gather}
{\sloppy
One confirms~\eqref{6.28}--\eqref{6.30}
immediately by calculating the matrices
$A$ and $B$ with \mbox{$z\nabla_{\partial_t}\underline{v}=\underline{v}A$}
and $z^2\nabla_{\partial_z}\underline{v}=\underline{v}B$ for
$\underline{v}$ the basis in~\eqref{6.28}--\eqref{6.30}.
}
\item[$(v)$] Theorem~\ref{t6.7} contains for the 6th and 9th cases
in Theorem~\ref{t6.3} a description similar to part $(iv)$.
See Remarks~\ref{t6.8}$(iv)$--$(vi)$.
\end{enumerate}
\end{Remarks}
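For instance, in the 2nd case in Theorem~\ref{t6.3} $($generic type
$($Sem$)$ with $k_2-k_1\in 2{\mathbb N})$, formula~\eqref{6.25} can be
verified directly: there $b_2^{(1)}=b_4^{(1)}=0$,
$b_3^{(0)}=\zeta t^{(k_1+k_2)/2}$ and
$b_3^{(1)}=\frac{k_1-k_2}{2(k_1+k_2)}$, so
\begin{gather*}
\delta^{(1)}=-2b_3^{(1)}b_3^{(0)}
=\frac{k_2-k_1}{k_1+k_2}\,\zeta t^{(k_1+k_2)/2},\qquad
\alpha_1-\alpha_2=u_1^{-1}\delta^{(1)}
=\frac{k_2-k_1}{k_1+k_2}\,\zeta,
\end{gather*}
with $u_1=t^{(k_1+k_2)/2}$ as in~\eqref{6.23}.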
\begin{Remarks}\label{t6.5}
These remarks study the behaviour of the $(TE)$-structures
in Theorem~\ref{t6.3} under pull back via maps
$\varphi\colon ({\mathbb C},0)\to({\mathbb C},0)$.
The normal forms in Theorem~\ref{t6.3}
are chosen such that the pull backs
by maps $\varphi$ with
$\varphi(s)=s^n$ for some $n\in{\mathbb N}$
are again normal forms in Theorem~\ref{t6.3}.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] A general observation:
Let $\big(H\to{\mathbb C}\times \big(M,t^0\big),\nabla\big)$ be a $(TE)$-structure
over $\big(M,t^0\big)=({\mathbb C},0)$ of rank $r\in{\mathbb N}$.
Let $\underline{v}$ be ${\mathbb C}\{t,z\}$-basis
of ${\mathcal O}(H)_0$ with $z\nabla_{\partial_t}\underline{v}=\underline{v}A$
and $z^2\nabla_{\partial_z}\underline{v}=\underline{v}B$ and~$A,B\in M_{r\times r}({\mathbb C}\{t,z\})$.
Consider a holomorphic map
$\varphi\colon ({\mathbb C},0)\to({\mathbb C},0)$, $s\mapsto \varphi(s)=t$.
Then the pull back $(TE)$-structure $\varphi^*(H,\nabla)$
has the basis $\underline{\widetilde v}(z,s)=\underline{v}(z,\varphi(s))$.
The~matrices $\widetilde A,\widetilde B\in M_{r\times r}({\mathbb C}\{s,z\})$
with $z\nabla_{\partial_s}\underline{\widetilde v}=\underline{\widetilde v}\widetilde A$,
$z^2\nabla_{\partial_z}\underline{\widetilde v}=\underline{\widetilde v}\widetilde B$ are
\begin{gather}\label{6.31}
\widetilde A=\partial_s\varphi(s)\cdot A(z,\varphi(s)),\qquad
\widetilde B=B(z,\varphi(s)).
\end{gather}
\item[$(ii)$] These formulas~\eqref{6.31} show for the 1st to 7th
cases in the list in Theorem~\ref{t6.3} the following:
The pull back via $\varphi\colon ({\mathbb C},0)\to({\mathbb C},0)$ with
$\varphi(s)=s^n$ for some $n\in{\mathbb N}$
of such a $(TE)$-structure with invariants $(k_1,k_2)$
or $k_1$
is a $(TE)$-structure in the same case, where the invariants
$(k_1,k_2)$ or $k_1$ are replaced by $\big(\widetilde k_1,\widetilde k_2\big)=(nk_1,nk_2)$
or $\widetilde{k}_1=nk_1$,
and where all other invariants coincide with the old invariants.
A sample computation for the $($Bra$)$ case is given after these remarks.
\item[$(iii)$] The following table says
which of the $(TE)$-structures in the 1st to 7th cases
in the list in Theorem~\ref{t6.3}
are not induced by other such $(TE)$-structures:
\[
\def1.4{1.3}
\begin{tabular}{l|c}
\hline
Generic type and invariants & Not induced if
\\ \hline
(Sem)$\colon\ k_2-k_1>0$ odd & $\gcd(k_1,k_2)=1$
\\
(Sem)$\colon\ k_2-k_1\in 2{\mathbb N}$, $\zeta= 0$ & $\gcd(k_1,k_2)=1$
\\
(Sem)$\colon\ k_2-k_1\in 2{\mathbb N}$, $\zeta\neq 0$ &
$\gcd\big(k_1,\frac{k_1+k_2}{2}\big)=1$
\\
(Sem)$\colon\ k_2=k_1$ &$ k_2=k_1=1 $
\\ \hline
(Bra) & $\gcd(k_1,k_2)=1$
\\ \hline
(Reg)$\colon\ N^{\rm mon}=0$ &$ k_1=1$
\\
(Reg)$\colon\ N^{\rm mon}\neq 0$ & $k_1=1$
\\ \hline
(Log) & $k_1=1$
\\
\hline
\end{tabular}
\]
\item[$(iv)$] In the 8th and 9th cases, the $(TE)$-structure is
induced by its restriction over $t^0$ via the map
$\varphi\colon \big(M,t^0\big)\to\big\{t^0\big\}$, so it is constant
along $M$.
\item[$(v)$] The formulas~\eqref{6.28} and~\eqref{6.29}
confirm part $(ii)$ for the 5th and 7th cases in Theorem~\ref{t6.3}.
Formula~\eqref{6.30} confirms part $(iv)$ in the
8th case in Theorem~\ref{t6.3}.
Analogous statements to part $(ii)$ and part $(iv)$
hold for the cases in Theorem~\ref{t6.7}.
They follow from the formulas~\eqref{6.49},~\eqref{6.50}
and~\eqref{6.51} there,
which are analogous to~\eqref{6.28},~\eqref{6.29}
and~\eqref{6.30}.
See Remarks~\ref{t6.8}$(ii)$ and $(iii)$.
\end{enumerate}
\end{Remarks}
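As an illustration of Remarks~\ref{t6.5}$(ii)$, consider the pull
back of the $($Bra$)$ normal form in Theorem~\ref{t6.3} via
$\varphi(s)=s^n$. Formula~\eqref{6.31} gives
\begin{gather*}
\widetilde A=ns^{n-1}A\big(z,s^n\big),\qquad
\widetilde B=B\big(z,s^n\big)
=z\rho^{(1)}C_1-\frac{\gamma}{n}\,s\widetilde A+zb_3^{(1)}D,
\end{gather*}
with $\widetilde b_2^{(0)}=s^{nk_1}$,
$\widetilde b_3^{(0)}=s^{n(k_1+k_2)}$,
$\widetilde b_4^{(0)}=-s^{n(k_1+2k_2)}$,
$\frac{\gamma}{n}=\frac{1}{nk_1+nk_2}$ and
$b_3^{(1)}=\frac{-nk_2}{2(nk_1+nk_2)}$.
This is the $($Bra$)$ normal form with invariants
$\big(\widetilde k_1,\widetilde k_2\big)=(nk_1,nk_2)$.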
\begin{Remark}\label{t6.6}
In the 2nd and 4th cases in the list in Theorem~\ref{t6.3},
another ${\mathbb C}\{t,z\}$-basis $\underline{\widetilde v}$
of~${\mathcal O}(H)_0$ with nice matrices $\widetilde A$ and $\widetilde B$
is
\begin{gather*}
\underline{\widetilde v}=\underline{v}\cdot T\quad\text{with}\quad
T=C_1+\frac{a_3^{(0)}}{a_2^{(0)}}E
=\begin{cases}
C_1+\zeta t^{(k_2-k_1)/2}E & \text{in the 2nd case,}\\
C_1+t^{k_2}E & \text{in the 4th case.}
\end{cases}
\end{gather*}
In the 2nd case
\begin{gather*}
\widetilde A= -\gamma^{-1}\big(t^{k_1-1}C_2 + t^{k_2-1}E\big) + z\frac{k_2-k_1}{2}\zeta t^{(k_2-k_1-2)/2}E,
\\
\widetilde B= z\rho^{(1)} C_1 -\gamma t\widetilde A +
z b_3^{(1)} D.\nonumber
\end{gather*}
In the 4th case
\begin{gather*}
\widetilde A= -\gamma^{-1}t^{k_1-1}C_2 + zk_2t^{k_2-1}E,\\
\widetilde B= z\rho^{(1)} C_1 -\gamma t\widetilde A +z b_3^{(1)} D.\nonumber
\end{gather*}
These matrices are not in Birkhoff normal form.
The basis $\underline{\widetilde v}$ is still a global basis
of the pure $(TLE)$-structure, but
the sections $\widetilde v_j|_{\{\infty\}\times M}$ are not
flat with respect to the residual connection along
$\{\infty\}\times M$.
\end{Remark}
\begin{proof}[Proof of Theorem~\ref{t6.3}]
Consider any pure $(TLE)$-structure over $\big(M,t^0\big)=({\mathbb C},0)$
with trace free pole part and with logarithmic restriction
to $t^0$. Choose a global basis $\underline{v}$ of $H$
whose restriction to
$\{\infty\}\times \big(M,t^0\big)$ is flat with
respect to the residual connection along
$\{\infty\}\times \big(M,t^0\big)$.
Its matrices $A$ and $B$ with
$z\nabla_{\partial_t}\underline{v}=\underline{v}A$ and
$z^2\nabla_{\partial_z}\underline{v}=\underline{v}B$
have because of~\eqref{3.18} (in Lemma~\ref{t3.11})
the shape~\eqref{6.19} and
\begin{gather*}
B=z\rho^{(1)}C_1+\big(b_2^{(0)}+zb_2^{(1)}\big)C_2 +
\big(b_3^{(0)}+zb_3^{(1)}\big)D+\big(b_4^{(0)}+zb_4^{(1)}\big)E
\end{gather*}
with $a_j^{(0)}\in{\mathbb C}\{t\}$, $b_j^{(0)}\in t{\mathbb C}\{t\}$,
$b_j^{(1)}\in{\mathbb C}$. They satisfy the relations~\eqref{3.28}
(and, equivalently,~\eqref{6.5}--\eqref{6.7}),
so, more explicitly,
\begin{gather}\label{6.37}
a_i^{(0)}b_j^{(0)}=a_j^{(0)}b_i^{(0)}\qquad\text{for}\quad
(i,j)\in\{(2,3),(2,4),(3,4)\},
\\[1ex]
\begin{pmatrix}-\partial_tb_2^{(0)} \\ -\partial_tb_3^{(0)} \\
-\partial_tb_4^{(0)} \end{pmatrix}
= \begin{pmatrix}1+2b_3^{(1)} & -2b_2^{(1)} & 0 \\
-b_4^{(1)} & 1 & b_2^{(1)} \\
0 & 2b_4^{(1)} & 1-2b_3^{(1)} \end{pmatrix}
\begin{pmatrix} a_2^{(0)} \\ a_3^{(0)} \\ a_4^{(0)}
\end{pmatrix}\!. \label{6.38}
\end{gather}
First we consider the cases when all $a_j^{(0)}$ are 0.
Then also all $b_j^{(0)}$ are 0, because of
$b_j^{(0)}\in t{\mathbb C}\{t\}$ and because of the differential
equations~\eqref{6.38}.
Then $B=zB^{(1)}$, and it is clear that this matrix
can be brought to the form $B=z\rho^{(1)}C_1+ z\alpha_3 D$
or $B=z\rho^{(1)}C_1+zC_2$
by a constant base change.
The number $\alpha_3\in{\mathbb C}$ can be replaced by $-\alpha_3$, so
$\alpha_3\in{\mathbb R}_{\geq 0}\cup\H$ is unique.
This gives the last two cases in the list.
There the generic type is (Log).
For the rest of the proof, we consider the cases
when at least one $a_j^{(0)}$ is not 0.
Then~\eqref{6.37} says
\begin{gather}\label{6.39}
\big(b_2^{(0)},b_3^{(0)},b_4^{(0)}\big)=\frac{b_j^{(0)}}{a_j^{(0)}}
\cdot \big(a_2^{(0)},a_3^{(0)},a_4^{(0)}\big),\qquad
\text{so} \quad B^{(0)} = \frac{b_j^{(0)}}{a_j^{(0)}}\cdot A^{(0)}.
\end{gather}
If $b_j^{(0)}=0$ then $b_2^{(0)}=b_3^{(0)}=b_4^{(0)}=0$,
and the generic type is (Log).
If $b_j^{(0)}\neq 0$, then the generic type is
(Sem) or (Bra) or (Reg).
Consider for a moment the cases when the residue endomorphism
of the logarithmic pole at $z=0$ of the $(TE)$-structure
over $t^0$ is semisimple. By Theorem~\ref{t6.2}, these cases
include the generic types (Sem) and (Bra).
Then a linear base change gives $b_2^{(1)}=b_4^{(1)}=0$,
so that the $3\times 3$-matrix in~\eqref{6.38} is diagonal.
Then denote $\widetilde\beta_j:=\deg_t b_j^{(0)}\in{\mathbb N}$.
A coordinate change in $t$ leads to
$b_j^{(0)}=b_{j,\widetilde\beta_j}^{(0)}\cdot t^{\widetilde\beta_j}$.
The differential equation in~\eqref{6.38} leads to
$a_j^{(0)}=a_{j,\widetilde\beta_j-1}^{(0)}\cdot t^{\widetilde\beta_j-1}$,
and to $b_j^{(0)}/a_j^{(0)}=-\gamma t$ for some $\gamma\in{\mathbb C}^*$.
Define
\begin{gather}\label{6.40}
\beta_2=\frac{1+2b_3^{(1)}}{\gamma},\qquad
\beta_3=\frac{1}{\gamma},\qquad
\beta_4=\frac{1-2b_3^{(1)}}{\gamma}.
\end{gather}
Now~\eqref{6.39} and the differential equations in~\eqref{6.38}
show $\widetilde\beta_j=\beta_j$ and
\begin{gather}
b_2^{(0)}=0\qquad\text{or}\qquad\big(\beta_2\in{\mathbb N}\text{ and }
b_2^{(0)}=b_{2,\beta_2}^{(0)}\cdot t^{\beta_2}\big)\neq 0,
\nonumber
\\
b_3^{(0)}=0\qquad\text{or}\qquad\big(\beta_3\in{\mathbb N}\text{ and }
b_3^{(0)}=b_{3,\beta_3}^{(0)}\cdot t^{\beta_3}\big)\neq 0,
\label{6.41}
\\
b_4^{(0)}=0\qquad\text{or}\qquad\big(\beta_4\in{\mathbb N}\text{ and }
b_4^{(0)}=b_{4,\beta_4}^{(0)}\cdot t^{\beta_4}\big)\neq 0.
\nonumber
\end{gather}
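For instance, for $j=2$:~\eqref{6.39} gives
$a_2^{(0)}=-\frac{b_2^{(0)}}{\gamma t}$, so the first differential
equation in~\eqref{6.38} (with $b_2^{(1)}=0$) becomes
\begin{gather*}
t\,\partial_tb_2^{(0)}=\frac{1+2b_3^{(1)}}{\gamma}\,b_2^{(0)}
=\beta_2 b_2^{(0)},
\end{gather*}
whose solutions in $t{\mathbb C}\{t\}$ are $b_2^{(0)}=0$ and, if
$\beta_2\in{\mathbb N}$, the monomials
$b_2^{(0)}=b_{2,\beta_2}^{(0)}t^{\beta_2}$.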
Now we discuss the generic types (Sem), (Bra), (Reg)
and (Log) separately.
\medskip\noindent
{\it Generic type $($Sem$)$}.
By Theorem~\ref{t6.2}, we can choose the basis $\underline{v}$
such that $b_2^{(1)}=b_4^{(1)}=0$. In~the cases $k_2>k_1$, by Theorem~\ref{t6.2},
$b_3^{(1)}$ is up to the sign unique, and we can choose it
to be
\begin{gather*}
b_3^{(1)}=\frac{k_1-k_2}{2(k_1+k_2)}\in\bigg({-}\frac{1}{2},0\bigg)\cap{\mathbb Q}
\end{gather*}
(possibly by exchanging $v_1$ and $v_2$). In~the cases $k_2=k_1$
we write $\alpha_3:=b_3^{(1)}\in{\mathbb C}$.
We can change its sign and get a unique
$\alpha_3\in{\mathbb R}_{\geq 0}\cup\H$.
We make a suitable coordinate change in $t$ and obtain
$b_2^{(0)}$, $b_3^{(0)}$, $b_4^{(0)}$ as in~\eqref{6.41}.
The relations~\eqref{6.8} and~\eqref{6.9} still hold.
Equation~\eqref{6.9} implies
\begin{gather*}
\big(b_2^{(0)}b_4^{(0)}\neq 0,\ \beta_2+\beta_4=k_1+k_2\big)
\qquad\text{or}\qquad
\big(b_3^{(0)}\neq 0,\ 2\beta_3=k_1+k_2\big)
\end{gather*}
(or both). In~both cases~\eqref{6.40} gives
\begin{gather*}
\gamma=\frac{2}{k_1+k_2}.
\end{gather*}
Thus
\begin{gather}\label{6.43}
(\beta_2,\beta_3,\beta_4)=
\begin{cases}
\big(k_1,\frac{k_1+k_2}{2},k_2\big)&\text{if}\quad k_2>k_1,
\\
(k_1(1+2\alpha_3),k_1,k_1(1-2\alpha_3))&\text{if}\quad k_2=k_1.\end{cases}
\end{gather}
In the cases $k_2>k_1$, we have $\beta_2<\beta_3<\beta_4$.
Then~\eqref{6.41} and the relation~\eqref{6.8} imply
$b_2^{(0)}\neq 0$, so $b_{2,\beta_2}^{(0)}\neq 0$.
The nonvanishing $\delta^{(0)}\neq 0 $ implies
$b_{2}^{(0)}b_{4}^{(0)}+\big(b_{3}^{(0)}\big)^2
\neq 0$.
In the case $k_2-k_1>0$ even, a linear coordinate change in $t$
and a diagonal base change allow to reduce the triple
$\big(b_{2,\beta_2}^{(0)},b_{3,\beta_3}^{(0)},b_{4,\beta_4}^{(0)}\big)
\in{\mathbb C}^3$ to a triple $\big(1,\zeta,1-\zeta^2\big)$
with $\zeta\in{\mathbb C}$ unique.
In the case $k_2-k_1>0$ odd, we have $\beta_3\notin{\mathbb N}$,
so $b_3^{(0)}=0$,
and a linear coordinate change in $t$ and a diagonal base
change allow to reduce the pair
$\big(b_{2,\beta_2}^{(0)},b_{4,\beta_4}^{(0)}\big)
\in({\mathbb C}^*)^2$ to the pair $(1,1)$.
In the cases $k_2=k_1$ and $\alpha_3\neq 0$,~\eqref{6.8}
and~\eqref{6.43} imply $b_2^{(0)}=b_4^{(0)}=0$.
Then a linear coordinate change in $t$ allows to reduce
$b_{3,\beta_3}^{(0)}$ to the value $1$.
In the cases $k_2=k_1$ and $\alpha_3=0$,
as in the proof of Theorem~\ref{t6.2}$(a)$ $(iii)$,
a base change with constant coefficients leads to
$b_2^{(0)}=b_4^{(0)}=0$. Then a linear coordinate change
in $t$ allows to reduce $b_{3,\beta_3}^{(0)}$ to the value
$1$. In~all cases of generic type (Sem), we obtain the
normal forms in the list in Theorem~\ref{t6.3}.
\medskip\noindent
{\it Generic type $($Bra$)$.}
By Theorem~\ref{t6.2}, we can choose the basis $\underline{v}$
such that $b_2^{(1)}=b_4^{(1)}=0$,
and~$b_3^{(1)}$ is up to the sign unique.
We can choose it to be
\begin{gather*}
b_3^{(1)}=\frac{-k_2}{2(k_1+k_2)}\in
\Bigl(-\frac{1}{2},0\Bigr)\cap{\mathbb Q}
\end{gather*}
(possibly by exchanging $v_1$ and $v_2$).
We make a suitable coordinate change in $t$ and obtain
$b_2^{(0)}$, $b_3^{(0)}$, $b_4^{(0)}$ as in~\eqref{6.41}.
The nonvanishing $\delta^{(1)}\neq 0$
and $\deg_t\delta^{(1)}=k_1+k_2$ say
\begin{gather*}
0\neq \delta^{(1)}=-2b_3^{(1)}b_3^{(0)},
\end{gather*}
so $b_3^{(0)}\neq 0$ and
\begin{gather*}
\frac1\gamma=\beta_3=\deg_t b_3^{(0)}=\deg_t\delta^{(1)}=k_1+k_2,\qquad
\gamma=\frac{1}{k_1+k_2},
\\
(\beta_2,\beta_3,\beta_4)=(k_1,k_1+k_2,k_1+2k_2).
\end{gather*}
The relation~\eqref{6.8} still holds, and it implies
$b_2^{(0)}\neq 0$. The vanishing $\delta^{(0)}=0$ says
$b_{2,\beta_2}^{(0)}b_{4,\beta_4}^{(0)}+\big(b_{3,\beta_3}^{(0)}\big)^2
= 0$. Together with $b_{2,\beta_2}^{(0)}\neq 0$ and
$b_{3,\beta_3}^{(0)}\neq 0$ it implies
$b_{4,\beta_4}^{(0)}\neq 0$.
A linear coordinate change in $t$
and a diagonal base change allow to reduce the triple
$\big(b_{2,\beta_2}^{(0)},b_{3,\beta_3}^{(0)},b_{4,\beta_4}^{(0)}\big)
\in({\mathbb C}^*)^3$ to the triple $(1,1,-1)$. We obtain the
normal form in the list in Theorem~\ref{t6.3}.
\medskip\noindent
{\it Generic type $($Reg$)$.}
First we consider the case when the residue endomorphism
of the logarithmic pole at $z=0$ of the $(TE)$-structure
over $t^0$ is semisimple. Then a linear base change
gives $b_2^{(1)}=b_4^{(1)}=0$. And a suitable coordinate
change in $t$ gives $b_2^{(0)}$, $b_3^{(0)}$, $b_4^{(0)}$
as in~\eqref{6.41}.
First consider the case $b_3^{(1)}\neq 0$. Then
the vanishing $0=\delta^{(1)}=-2b_3^{(1)}b_3^{(0)}$
says $b_3^{(0)}=0$. Now the vanishing
$0=\delta^{(0)}=-b_2^{(0)}b_4^{(0)}$ says that either
$b_2^{(0)}=0$ or $b_4^{(0)}=0$. They cannot both be
$0$, as the generic type is (Reg) and not (Log).
After possibly exchanging $v_1$ and $v_2$, we suppose
$b_2^{(0)}\neq 0$, $b_4^{(0)}=0$. Now $k_1=\beta_2$.
Write $\alpha_4:=2b_3^{(1)}\in{\mathbb C}$.
By~\eqref{6.40},
\begin{eqnarray}\label{6.46}
\gamma=\frac{1+\alpha_4}{k_1}.
\end{eqnarray}
A diagonal base change allows to reduce $b_{2,\beta_2}^{(0)}$
to $1$.
Now consider the case $b_3^{(1)}=0$.
Then $\beta_2=\beta_3=\beta_4=1/\gamma$, and this is
equal to $k_1$, as $\beta_j\in{\mathbb N}$ for at least one $j$.
Write $\alpha_4:=2b_3^{(1)}=0$. Then
\eqref{6.46} still holds.
By a base change with constant coefficients, we can obtain
$b_2^{(0)}=t^{k_1}$ and $b_3^{(0)}=0$.
The vanishing $0=\delta^{(0)}=-b_2^{(0)}b_4^{(0)}$
tells $b_4^{(0)}=0$.
For $\alpha_4\neq 0$ as well as for $\alpha_4=0$,
we obtain the normal form in the 5th case in the list
in Theorem~\ref{t6.3}.
Finally consider the case when the residue endomorphism
of the logarithmic pole at $z=0$ of the $(TE)$-structure
over $t^0$ has a $2\times 2$ Jordan block.
A base change with constant coefficients leads to
$b_3^{(1)}=b_4^{(1)}=0$ and $b_2^{(1)}=1$.
We will derive a contradiction from the assumption $b_4^{(0)}\neq 0$,
and likewise from the assumption $b_4^{(0)}=0$, $b_3^{(0)}\neq 0$.
{\samepage
Assume $b_4^{(0)}\neq 0$.
Denote $\beta_4:=\deg_t b_4^{(0)}\in{\mathbb N}$.
A coordinate change in $t$ leads to
$b_4^{(0)}=\frac{-1}{\beta_4}t^{\beta_4}$.
The differential equation in~\eqref{6.38} for $b_4^{(0)}$
gives $a_4^{(0)}=t^{\beta_4-1}$.
Now~\eqref{6.39} gives $b_3^{(0)}=\frac{-1}{\beta_4}ta_3^{(0)}$.
The differential equation
in~\eqref{6.38} for $b_3^{(0)}$ becomes
\begin{gather*}
\partial_t \big(ta_3^{(0)}\big)= \beta_4 a_3^{(0)}+\beta_4 t^{\beta_4-1}.
\end{gather*}
This equation has no solution in ${\mathbb C}\{t\}$, a contradiction.
}
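Indeed, writing $a_3^{(0)}=\sum_{l\geq 0}a_{3,l}^{(0)}t^l$ and
comparing in this equation the coefficients of $t^{\beta_4-1}$,
one finds
\begin{gather*}
\beta_4a_{3,\beta_4-1}^{(0)}=\beta_4a_{3,\beta_4-1}^{(0)}+\beta_4,
\qquad\text{so}\quad \beta_4=0,
\end{gather*}
which contradicts $\beta_4\in{\mathbb N}$.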
Assume $b_4^{(0)}=0$, $b_3^{(0)}\neq 0$.
The same arguments as for the case $b_4^{(0)}\neq 0$
give a contradiction if we replace
$\big(b_4^{(0)},a_4^{(0)},b_3^{(0)},a_3^{(0)}\big)$ by
$\big(b_3^{(0)},a_3^{(0)},b_2^{(0)},a_2^{(0)}\big)$.
Therefore $b_4^{(0)}=0$, $b_3^{(0)}=0$, $b_2^{(0)}\neq 0$.
Now $k_1=\deg_t b_2^{(0)}$.
A coordinate change in $t$ leads to
$b_2^{(0)}=t^{k_1}$. The differential equations~\eqref{6.38} give
$a_4^{(0)}=a_3^{(0)}=0$, $a_2^{(0)}=-k_1t^{k_1-1}$.
We obtain the normal form in the 6th case in the list
in Theorem~\ref{t6.3}.
\medskip\noindent
{\it Generic type $($Log$)$.}
Now $b_2^{(0)}=b_3^{(0)}=b_4^{(0)}=0$.
The cases when all $a_i^{(0)}=0$ were considered above.
We assume now $a_j^{(0)}\neq 0$ for some $j\in\{2,3,4\}$.
The equations~\eqref{6.38} become a~homogeneous system
of linear equations with a nontrivial solution.
Therefore the determinant of the $3\times 3$-matrix in
\eqref{6.38} vanishes. It~is $1-4\big(b_3^{(1)}\big)^2-4b_2^{(1)}b_4^{(1)}$.
Its vanishing tells $\det \big(B^{(1)}-\rho^{(1)}C_1\big)=\frac{-1}{4}$.
As $\tr\big(B^{(1)}-\rho^{(1)}C_1\big)=0$, this matrix has the
eigenvalues $\pm\frac{1}{2}$.
Therefore a~linear base change gives
\begin{gather*}
b_2^{(1)}=b_4^{(1)}=0,\qquad b_3^{(1)}=-\frac{1}{2}.
\end{gather*}
Now the system of equations~\eqref{6.38} gives
$a_3^{(0)}=a_4^{(0)}=0$, whereas $a_2^{(0)}$ is
arbitrary in ${\mathbb C}\{t\}\setminus\{0\}$.
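Explicitly, with $b_2^{(1)}=b_4^{(1)}=0$, $b_3^{(1)}=-\frac{1}{2}$
and $b_2^{(0)}=b_3^{(0)}=b_4^{(0)}=0$, the system~\eqref{6.38} reads
\begin{gather*}
\begin{pmatrix}0\\0\\0\end{pmatrix}
=\begin{pmatrix}0&0&0\\0&1&0\\0&0&2\end{pmatrix}
\begin{pmatrix}a_2^{(0)}\\a_3^{(0)}\\a_4^{(0)}\end{pmatrix}\!.
\end{gather*}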
Denote $k_1:=1+\deg_t a_2^{(0)}\in{\mathbb N}$. A coordinate change
in $t$ leads to $a_2^{(0)}=k_1t^{k_1-1}$.
We obtain the normal form in the 7th case in the list
in Theorem~\ref{t6.3}.
\end{proof}
\subsection[Generically regular singular $(TE)$-structures
over $({\mathbb C},0)$ with logarithmic restriction over $t^0=0$
and not semisimple monodromy]
{Generically regular singular $\boldsymbol{(TE)}$-structures
over $\boldsymbol{({\mathbb C},0)}$ \\with logarithmic restriction over $\boldsymbol{t^0=0}$
and not semisimple monodromy}\label{c6.3}
The only 1-parameter unfoldings with trace free pole part
of logarithmic $(TE)$-structures over a point,
which are not covered by Theorem~\ref{t6.3}, have
generic type (Reg) or (Log) and not semisimple monodromy.
This follows from Theorem~\ref{t6.2}
and Theorem~\ref{t3.20}$(a)$.
These $(TE)$-structures are classified in Theorem~\ref{t6.7}. Some of them are in the 6th or 9th case
in Theorem~\ref{t6.3}, but most are not.
\begin{Theorem}\label{t6.7}
Consider a rank $2$ $(TE)$-structure $\big(H\to{\mathbb C}\times \big(M,t^0\big),\nabla\big)$
over $\big(M,t^0\big)=({\mathbb C},0)$ which is generically regular singular
$($so of generic type $($Reg$)$ or $($Log$))$, which has trace free
pole part, whose restriction over $t^0$ is logarithmic,
and whose monodromy has a $2\times 2$ Jordan block.
Associate to it the data in Definition~$\ref{t3.18}$:
$H':=H|_{{\mathbb C}\times (M,t^0)}$, $M^{\rm mon}$, $N^{\rm mon}$,
$\Eig(M^{\rm mon})=\{\lambda\}$, $H^\infty$,
$C^\alpha$ for $\alpha\in{\mathbb C}$ with ${\rm e}^{-2\pi {\rm i}\alpha}=\lambda$.
The leading exponents of the $(TE)$-structures over $t\neq 0$
come from Theorem~$\ref{t4.15}(b)$ if the generic type is
$($Reg$)$ and from Theorem~$\ref{t4.20}(b)$ if the generic type
is $($Log$)$. In~both cases the leading exponents are independent
of $t$ and are still called $\alpha_1$ and $\alpha_2$.
Recall $\alpha_1-\alpha_2\in{\mathbb N}_0$.
The leading exponents of the logarithmic $(TE)$-structure
over $t^0=0$ from Theorem~$\ref{t4.20}(b)$ are now called
$\alpha_1^0$ and $\alpha_2^0$.
Recall $\alpha_1^0-\alpha_2^0\in{\mathbb N}_0$.
Precisely one of the three cases $(I)$, $(II)$ and $(III)$
in the following table holds:
\[
\def1.4{1.5}
\begin{tabular}{l|l|l|l}
\hline
case $(I)$ & $\alpha_1^0=\alpha_1$ &$ \alpha_2^0=\alpha_2+1$ &thus $\alpha_1-\alpha_2\in{\mathbb N}$
\\
case $(II)$ & $\alpha_1^0=\alpha_1+1$ & $\alpha_2^0=\alpha_2$
\\
case $(III)$ & $\alpha_1^0=\alpha_1$ & $\alpha_2^0=\alpha_2$
\\
\hline
\end{tabular}
\]
Choose any section
$s_1\in C^{\alpha_1}\setminus \ker(\nabla_{z\partial_z}-\alpha_1\colon
C^{\alpha_1}\to C^{\alpha_1})$. It~determines uniquely
a section
$s_2\in \ker(\nabla_{z\partial_z}-\alpha_2\colon C^{\alpha_2}\to
C^{\alpha_2})\setminus\{0\}$
with
\begin{gather}\label{6.48}
(\nabla_{z\partial_z}-\alpha_1)(s_1)=z^{\alpha_1-\alpha_2}s_2.
\end{gather}
Then
\begin{gather}
{\mathcal O}(H)_{(0,0)}={\mathbb C}\{t,z\}(s_1+fs_2)\oplus {\mathbb C}\{t,z\}zs_2\ \
\text{for some } f\in t{\mathbb C}\{t\}\setminus\{0\}
\text{ in case $(I)$,}\label{6.49}
\\
{\mathcal O}(H)_{(0,0)}= {\mathbb C}\{t,z\}(s_2+fs_1)\oplus {\mathbb C}\{t,z\}zs_1\ \
\text{for some }f\in t{\mathbb C}\{t\}\setminus\{0\}
\text{ in case $(II)$,}\!\!\label{6.50}
\\
{\mathcal O}(H)_{(0,0)}={\mathbb C}\{t,z\}s_1 \oplus {\mathbb C}\{t,z\}s_2\ \
\text{in case $(III)$.}\label{6.51}
\end{gather}
The function $f$ in the cases~\eqref{6.49} and~\eqref{6.50}
is independent of the choice of $s_1$, so it is an
invariant of the gauge equivalence class of the
$(TE)$-structure.
\end{Theorem}
Before the proof, some remarks are made.
\begin{Remarks}\label{t6.8}\quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Equation~\eqref{6.48} gives
\begin{gather}\label{6.52}
\nabla_{z\partial_z}((s_1,s_2))=
(s_1,s_2)\begin{pmatrix}\alpha_1 & 0 \\ z^{\alpha_1-\alpha_2}
& \alpha_2\end{pmatrix}
=(s_1,s_2)\cdot z^{-1}B,
\end{gather}
with
\begin{gather*}
B=z\frac{\alpha_1+\alpha_2}{2}C_1
+z^{\alpha_1-\alpha_2+1}C_2+z\frac{\alpha_1-\alpha_2}{2}D.
\end{gather*}
\item[$(ii)$] The generic type is (Log) in the case (III).
This $(TE)$-structure is induced by its restriction
over $t^0=0$ via the projection $\varphi\colon \big(M,t^0\big)\to\big\{t^0\big\}$.
The matrices $A$ and $B$ for the basis $\underline{v}=(s_1,s_2)$
are $A=0$ and $B$ as in~\eqref{6.52}.
\item[$(iii)$] The generic type is (Reg) in the cases
(I) and (II). In~these cases
the $(TE)$-structure is induced by the special
cases of~\eqref{6.49} respectively~\eqref{6.50} with
$\widetilde f=t$ via the map $\varphi=f\colon ({\mathbb C},0)\to({\mathbb C},0)$.
\item[$(iv)$] The matrices $A$ and $B$ for the basis
$\underline{v}=(s_1+fs_2,zs_2)$
in~\eqref{6.49} ($\Rightarrow$ case (I),
$\Rightarrow \alpha_1-\alpha_2\in{\mathbb N}$) are as follows;
a direct verification is given after these remarks:
\begin{gather}\label{6.53}
A=\partial_tf\cdot C_2,\qquad
B=\begin{pmatrix}z\alpha_1 & 0 \\ (\alpha_2-\alpha_1)f
+z^{\alpha_1-\alpha_2} & z(\alpha_2+1)\end{pmatrix}\!.
\end{gather}
The matrices $A$ and $B$ for the basis
$\underline{v}=(s_2+fs_1,zs_1)$
in~\eqref{6.50} ($\Rightarrow$ case (II),
$\Rightarrow \alpha_1-\alpha_2\allowbreak\in{\mathbb N}_0$) are
\begin{gather}\label{6.54}
A=\partial_tf\cdot C_2,\qquad
B=\begin{pmatrix}z\alpha_2+z^{\alpha_1-\alpha_2+1}f &
z^{\alpha_1-\alpha_2+2} \\ (\alpha_1-\alpha_2)f
-z^{\alpha_1-\alpha_2}f^2 & z(\alpha_1+1)-z^{\alpha_1-\alpha_2+1}f\end{pmatrix}\!.
\end{gather}
\item[$(v)$] The invariant $k_1\in{\mathbb N}$ from~\eqref{6.1} is here
$k_1=\deg_t f\in{\mathbb N}$ in case~\eqref{6.49}
and in case~\eqref{6.50} with $\alpha_1-\alpha_2\in{\mathbb N}$. It~is $k_1=2\deg_t f\in 2{\mathbb N}$ in case~\eqref{6.50} with $\alpha_1=\alpha_2$.
A~suitable coordinate change in $t$
reduces $f$ to $f=t^{k_1}$ respectively $f=t^{k_1/2}$.
\item[$(vi)$] The overlap of the $(TE)$-structures in Theorem~\ref{t6.3}
and in Theorem~\ref{t6.7} is as follows.
The case~\eqref{6.49} with $\alpha_1=\alpha_2+1$
and $f=-t^{k_1}$
is the 6th case in Theorem~\ref{t6.3}
with $\rho^{(1)}=\alpha_1$.
The case~\eqref{6.51} with $\alpha_1=\alpha_2$
is the 9th case in Theorem~\ref{t6.3}
with $\rho^{(1)}=\alpha_1$.
\end{enumerate}
\end{Remarks}
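As a verification of~\eqref{6.53} in Remarks~\ref{t6.8}$(iv)$:
for the basis $\underline{v}=(s_1+fs_2,zs_2)$,~\eqref{6.48} and
$\nabla_{z\partial_z}s_2=\alpha_2s_2$ give
\begin{gather*}
z^2\nabla_{\partial_z}(s_1+fs_2)
=z\alpha_1(s_1+fs_2)
+\big((\alpha_2-\alpha_1)f+z^{\alpha_1-\alpha_2}\big)zs_2,
\\
z^2\nabla_{\partial_z}(zs_2)=z(\alpha_2+1)zs_2,\qquad
z\nabla_{\partial_t}(s_1+fs_2)=\partial_tf\cdot zs_2,
\end{gather*}
and these are exactly the columns of the matrices $B$ and $A$
in~\eqref{6.53}. The matrices in~\eqref{6.54} follow in the same way.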
\begin{proof}[Proof of Theorem~\ref{t6.7}]
Choose any section
$s_1^0\in C^{\alpha_1^0}\setminus \ker\big(\nabla_{z\partial_z}-\alpha_1^0\colon C^{\alpha_1^0}
\to C^{\alpha_1^0}\big)$. It~determines uniquely a section
$s_2^0\in \ker\big(\nabla_{z\partial_z}-\alpha_2^0\colon C^{\alpha_2^0}\to C^{\alpha_2^0}\big)\setminus\{0\}$
with
\begin{gather*}
\big(\nabla_{z\partial_z}-\alpha_1^0\big)s_1^0
=z^{\alpha_1^0-\alpha_2^0}s_2^0.
\end{gather*}
Then
\begin{gather*}
{\mathcal O}\big(H|_{{\mathbb C}\times\{t^0\}}\big)_0={\mathbb C}\{z\}s_1^0|_{{\mathbb C}\times\{t^0\}}
\oplus {\mathbb C}\{z\}s_2^0|_{{\mathbb C}\times\{t^0\}}.
\end{gather*}
{\sloppy
Choose a ${\mathbb C}\{t,z\}$-basis $\underline{v}=(v_1,v_2)$ of
${\mathcal O}(H)_{(0,0)}$ which extends this ${\mathbb C}\{z\}$-basis of
${\mathcal O}\big(H|_{{\mathbb C}\times\{t^0\}}\big)_0$. It~has the shape
\begin{gather}
\underline{v}= \big(s_1^0,s_2^0\big)\cdot F\qquad \text{with}\quad
F=\begin{pmatrix} f_1 & f_2\\f_3&f_4\end{pmatrix}
\qquad\text{and}\nonumber
\\
f_1,f_4\in{\mathbb C}\{t,z\}\big[z^{-1}\big],\qquad
f_1(z,0)=f_4(z,0)=1,\qquad
f_2,f_3\in t{\mathbb C}\{t,z\}\big[z^{-1}\big].\label{6.57}
\end{gather}}\noindent
We write $f_j=\sum_{k\geq \deg_zf_j}f_j^{(k)}z^k$ with
$f_j^{(k)}=\sum_{l\geq 0}f_{j,l}^{(k)}t^l\in{\mathbb C}\{t\}$
and $f_{j,l}^{(k)}\in{\mathbb C}$.
Also we write
$\det F=\sum_{k\geq \deg_z\det F}(\det F)^{(k)}z^k$
with $(\det F)^{(k)}\in{\mathbb C}\{t\}$.
A meromorphic function $g\in{\mathbb C}\{t,z\}\big[z^{-1}\big]$ on a
neighborhood $U\subset {\mathbb C}\times M$ which is holomorphic
and not vanishing on $\big(U\setminus(\{0\}\times M)\big)\cup\{(0,0)\}$
is in ${\mathbb C}\{t,z\}^*$.
This and the facts that $\underline{v}$ and $\big(s_1^0,s_2^0\big)$ are bases
of $H|_{U\setminus\{0\}\times M}$ for some neighborhood
$U\subset{\mathbb C}\times M$ of $(0,0)$ and that
$\underline{v}|_{{\mathbb C}\times\{t^0\}}=\big(s_1^0,s_2^0\big)$ imply
\begin{gather*}
(\det F)\in{\mathbb C}\{t,z\}^*,\qquad \text{so especially}\quad
(\det F)^{(k)}=0\quad\text{ for}\quad k<0.\label{6.58}
\end{gather*}
Write $k_j:=\deg_z f_j\in{\mathbb Z}\cup\{\infty\}$
($\infty$ if $f_j=0$). Recall~\eqref{6.57}. It~implies
$f_1^{(0)},f_4^{(0)}\in{\mathbb C}\{t\}^*$ and
$f_1^{(k)},f_4^{(k)}\in t{\mathbb C}\{t\}$ for $k\neq 0$
and $f_2^{(k)},f_3^{(k)}\in t{\mathbb C}\{t\}$ for all $k$.
Especially $k_1\leq 0$ and $k_4\leq 0$.
We~distinguish four cases. Precisely one of them holds:
\begin{align*}
&\text{Case }\widetilde{\rm(I)}\colon&&\hspace{-60mm}
0=k_1\leq k_2,\qquad\ 0>\min(k_3,k_4),
\\%\label{6.59}\\
&\text{Case }\widetilde{\rm(II)}\colon&&\hspace{-60mm}
0=k_4\leq k_3,\qquad\ 0>\min(k_1,k_2),
\\%\label{6.60}\\
&\text{Case }\widetilde{\rm(III)}\colon&&\hspace{-60mm}
0=k_1=k_4,\qquad\ 0\leq k_2,\quad 0\leq k_3,
\\%\label{6.61}\\
&\text{Case }\widetilde{\rm(IV)}\colon&&\hspace{-60mm}
0>\min(k_1,k_2),\ \, \ 0>\min(k_3,k_4).
\end{align*}
We will show: Case $\widetilde{\rm (I)}$ leads to~\eqref{6.49}
and case (I),
case $\widetilde{\rm (II)}$ leads to~\eqref{6.50} and case (II),
case~$\widetilde{\rm (III)}$ leads to~\eqref{6.51} and case~(III),
and case $\widetilde{\rm (IV)}$ is impossible.
\medskip\noindent
{\it Case $\widetilde{(III)}$}:
Then $F\in {\rm GL}_2({\mathbb C}\{t,z\})$ and a base change leads to the
new basis $\underline{\widetilde v}=\big(s_1^0,s_2^0\big)$. With
\begin{gather*}
(\alpha_1,\alpha_2,s_1,s_2)=\big(\alpha_1^0,\alpha_2^0,s_1^0,s_2^0\big),
\end{gather*}
this gives~\eqref{6.51} and case (III).
\medskip\noindent
{\it Case $\widetilde{(I)}$}: Then $f_1\in{\mathbb C}\{t,z\}^*$, and
a base change leads to a new basis
$\underline{v}^{[1]}=\big(s_1^0,s_2^0\big)\cdot F^{[1]}$ with
\begin{gather*}
f_1^{[1]}=1,\qquad f_2^{[1]}=0,
\qquad f_4^{[1]}=\det F^{[1]}\in {\mathbb C}\{t,z\}^*,
\qquad f_3^{[1]}\in t{\mathbb C}\{t,z\}\big[z^{-1}\big].
\end{gather*}
As $k_4^{[1]}=0$, we have $k_3^{[1]}<0$.
A base change leads to a new basis
$\underline{v}^{[2]}=\big(s_1^0,s_2^0\big)\cdot F^{[2]}$ with
\begin{gather*}
F^{[2]}=C_1 + f_3^{[2]}C_2, \qquad\text{with}\quad
f_3^{[2]}\in tz^{-1}{\mathbb C}\{t\}\big[z^{-1}\big]\setminus\{0\}.
\end{gather*}
The covariant derivative
$z\nabla_{\partial_t}v_1^{[2]}=z\partial_t f_3^{[2]}\cdot v_2^{[2]}$
must be in ${\mathcal O}(H)_0$. This shows
$f_3^{[2]}\in tz^{-1}{\mathbb C}\{t\}\allowbreak\setminus\{0\}$. With
{\samepage\begin{gather*}
(\alpha_1,\alpha_2,s_1,s_2,f)=\big(\alpha_1^0,\alpha_2^0-1,s_1^0,z^{-1}s_2^0,zf_3^{[2]}\big),
\end{gather*}
this gives~\eqref{6.49} and case (I).
}
\medskip\noindent
{\it Case $\widetilde{(II)}$}: Then $f_4\in{\mathbb C}\{t,z\}^*$, and
a base change leads to a new basis
$\underline{v}^{[1]}=\big(s_1^0,s_2^0\big)\cdot F^{[1]}$ with
\begin{gather*}
f_4^{[1]}=1,\qquad f_3^{[1]}=0,
\qquad f_1^{[1]}=\det F^{[1]}\in {\mathbb C}\{t,z\}^*,
\qquad f_2^{[1]}\in t{\mathbb C}\{t,z\}\big[z^{-1}\big].
\end{gather*}
As $k_1^{[1]}=0$, we have $k_2^{[1]}<0$.
A base change leads to a new basis
$\underline{v}^{[2]}=\big(s_1^0,s_2^0\big)\cdot F^{[2]}$ with
\begin{gather*}
F^{[2]}=C_1 + f_2^{[2]}E, \qquad\text{with}\quad
f_2^{[2]}\in tz^{-1}{\mathbb C}\{t\}\big[z^{-1}\big]\setminus\{0\}.
\end{gather*}
The covariant derivative
$z\nabla_{\partial_t}v_2^{[2]}=z\partial_t f_2^{[2]}\cdot v_1^{[2]}$
must be in ${\mathcal O}(H)_0$. This shows
$f_2^{[2]}\in tz^{-1}{\mathbb C}\{t\}\allowbreak\setminus\{0\}$. With
\begin{gather*}
(\alpha_1,\alpha_2,s_1,s_2,f)=\big(\alpha_1^0-1,\alpha_2^0,z^{-1}s_1^0,s_2^0,zf_2^{[2]}\big),
\end{gather*}
this gives~\eqref{6.50} and almost case (II).
``Almost'' because we still have to show
$\alpha_1-\alpha_2\in{\mathbb N}_0$.
This follows from the summand $-z^{\alpha_1-\alpha_2}f^2$ in
the left lower entry in the matrix $B$ in~\eqref{6.54}.
\medskip\noindent
{\it Case $\widetilde{(IV)}$}: Exchange $v_1$ and $v_2$ if
$k_1>k_2$ or if $k_1=k_2$ and
$\deg_tf_1^{(k_1)}>\deg_tf_2^{(k_1)}$.
Keep the basis $\underline{v}$ if not. The new basis
$\underline{v}^{[1]}$ satisfies
$\min(k_1,k_2)=k_1^{[1]}\leq k_2^{[1]}$, and
in the case $k_1^{[1]}=k_2^{[1]}$ it satisfies
$\deg_t \big(f_1^{[1]}\big)^{(k_1^{[1]})}\leq \deg_t \big(f_2^{[1]}\big)^{(k_1^{[1]})}$.
By replacing $v_2^{[1]}$ by a suitable element in
$v_2^{[1]}+{\mathbb C}\{t,z\} v_1^{[1]}$, we obtain a new basis
$\underline{v}^{[2]}$ either with $f_2^{[2]}=0$ or
with $k_1^{[2]}< k_2^{[2]}$ and
$\deg_t \big(f_1^{[2]}\big)^{(k_1^{[2]})}
>\deg_t \big(f_2^{[2]}\big)^{(k_2^{[2]})}$.
The case $f_2^{[2]}=0$ is impossible, as then
we would have
$f_1^{[2]}f_4^{[2]}=\det F^{[2]}\in{\mathbb C}\{t,z\}^*$, so
$f_1^{[2]}\in{\mathbb C}\{t,z\}^*$ and
$0= k_1^{[2]}$, but also $k_1^{[2]}=k_1^{[1]}=\min(k_1,k_2)<0$.
For the same reason, $f_3^{[2]}=0$ is impossible.
$f_4^{[2]}=0$ is impossible as then we would have
$-f_2^{[2]}f_3^{[2]}=\det F^{[2]}\in {\mathbb C}\{t,z\}^*$,
so $f_2^{[2]},f_3^{[2]}\in{\mathbb C}\{t,z\}^*$,
$0=k_2^{[2]}=k_3^{[2]}$ and $k_4^{[2]}=\infty$,
so $0=\min\big(k_3^{[2]},k_4^{[2]}\big)=\min(k_3,k_4)<0$,
a contradiction.
Write
\begin{gather*}
l_2:=\deg_t\big(f_2^{[2]}\big)^{(k_2^{[2]})}\in{\mathbb N}_0,\qquad
l_1:=\deg_t \big(f_1^{[2]}\big)^{(k_1^{[2]})}-l_2\in{\mathbb N},
\\
l_3:=\deg_t\big(f_3^{[2]}\big)^{(k_3^{[2]})}\in{\mathbb N}_0,\qquad
l_4:=\deg_t \big(f_4^{[2]}\big)^{(k_4^{[2]})}\in{\mathbb N}_0.
\end{gather*}
Multiplying $v_1^{[2]}$ and $v_2^{[2]}$ by suitable units in
${\mathbb C}\{t\}$, we obtain a basis $\underline{v}^{[3]}$ with
$k_j^{[3]}=k_j^{[2]}$ and
\begin{gather*}
\big(f_1^{[3]}\big)^{(k_1^{[3]})}=t^{l_1+l_2},\qquad
\big(f_2^{[3]}\big)^{(k_2^{[3]})}=t^{l_2},
\\
\big(f_3^{[3]}\big)^{(k_3^{[3]})}=t^{l_3}\cdot u_3,\qquad
\big(f_4^{[3]}\big)^{(k_4^{[3]})}=t^{l_4}\cdot u_4
\end{gather*}
for some units $u_3,u_4\in{\mathbb C}\{t\}^*$.
We still have $k_1^{[3]}<0$, $k_1^{[3]}<k_2^{[3]}$ and
$\min\big(k_3^{[3]},k_4^{[3]}\big)<0$. Consider
\begin{gather*}
z\nabla_{\partial_t}\big(v_1^{[3]}\big)=z\partial_t f_1^{[3]}\cdot s_1^0 +
z\partial_t f_3^{[3]}\cdot s_2^0 \in {\mathcal O}(H)_{(0,0)}
={\mathbb C}\{t,z\}v_1^{[3]} \oplus {\mathbb C}\{t,z\}v_2^{[3]}.
\end{gather*}
The leading nonvanishing monomial in $z\partial_t f_1^{[3]}$
is $z^{k_1^{[3]}+1}t^{l_1+l_2-1}$.
This implies $k_2^{[3]}=k_1^{[3]}+1\leq 0$.
Therefore $k_1^{[3]}+k_4^{[3]}<0$ or
$k_2^{[3]}+k_3^{[3]}<0$. Each part
$\big(\det F^{[3]}\big)^{(k)}$ for $k<0$ vanishes. This shows
\begin{gather*}
k_1^{[3]}+k_4^{[3]}=k_2^{[3]}+k_3^{[3]}<0, \qquad \text{so}\quad
k_4^{[3]}=k_3^{[3]}+1\leq 0,\quad k_3^{[3]}<0,
\\
0=\big(f_1^{[3]}\big)^{(k_1^{[3]})} \big(f_4^{[3]}\big)^{(k_3^{[3]}+1)}
-\big(f_2^{[3]}\big)^{(k_1^{[3]}+1)} \big(f_3^{[3]}\big)^{(k_3^{[3]})}
= t^{l_2}\bigl(t^{l_1+l_4}u_4 - t^{l_3}u_3\bigr),
\\
\text{so}\quad l_3=l_1+l_4, \quad u_3=u_4.
\end{gather*}
We can write
\begin{gather*}
\underline{v}^{[3]}= \big(s_1^0,s_2^0\big)\begin{pmatrix}
(t^{l_1+l_2}+zg_1)z^{k_1^{[3]}} & (t^{l_2}+zg_2)z^{k_1^{[3]}+1} \\
(t^{l_1+l_4}u_3 + zg_3)z^{k_3^{[3]}} & (t^{l_4}u_3 + zg_4)z^{k_3^{[3]}+1}
\end{pmatrix}
\end{gather*}
with some suitable $g_1,g_2,g_3,g_4\in{\mathbb C}\{t,z\}$.
This shows
\begin{gather}
{\mathcal O}(H)_{(0,0)}\cap\bigl( z^{k_1^{[3]}+2}{\mathbb C}\{t,z\}s_1^0 +
{\mathbb C}\{t,z\}\big[z^{-1}\big]s_2^0\bigr)\nonumber
\\ \qquad
{}= z^2{\mathbb C}\{t,z\}v_1^{[3]} + z{\mathbb C}\{t,z\}v_2^{[3]} +
{\mathbb C}\{t,z\}\big(zv_1^{[3]}-t^{l_1}v_2^{[3]}\big)\nonumber
\\ \qquad
{}\subset {\mathcal O}(H)_{(0,0)}\cap \bigl(
{\mathbb C}\{t,z\}\big[z^{-1}\big]s_1^0 + z^{k_3^{[3]}+2}{\mathbb C}\{t,z\}s_2^0\bigr).\label{6.66}
\end{gather}
Now consider the element
\begin{gather*}
z\big(\nabla_{z\partial_z}-\big(\alpha_1^0+k_1^{[3]}\big)\big)\big(v_1^{[3]}\big)
\\ \qquad
{}= z^2\partial_z(zg_1)z^{k_1^{[3]}}s_1^0
+ \big(t^{l_1+l_2}+zg_1\big)z^{k_1^{[3]}+1+\alpha_1^0-\alpha_2^0}s_2^0 + z^2\partial_z(zg_3)z^{k_3^{[3]}}s_2^0
\\ \qquad \hphantom{=}
{}
+ \big(t^{l_1+l_4}u_3+zg_3\big)\big(k_3^{[3]}+\alpha_2^0
-\alpha_1^0-k_1^{[3]}\big)z^{k_3^{[3]}+1}s_2^0
\end{gather*}
of ${\mathcal O}(H)_{(0,0)}$. It~is contained in the first line of~\eqref{6.66},
and therefore also in the third line of~\eqref{6.66}.
But this leads to a contradiction, when we compare the
coefficient of $s_2^0$. Here observe
\begin{gather*}
k_3^{[3]}+\alpha_2^0-\alpha_1^0-k_1^{[3]}
\left\{\!\!\begin{array}{l}<\\=\\>\end{array}\!\!\right\} 0
\iff
k_1^{[3]}+1+\alpha_1^0-\alpha_2^0
\left\{\!\!\begin{array}{l}<\\=\\>\end{array}\!\!\right\} k_3^{[3]}+1.
\end{gather*}
This contradiction shows that case $\widetilde{\rm (IV)}$
is impossible.
\end{proof}
\section[Marked regular singular rank 2 $(TE)$-structures]{Marked regular singular rank 2 $\boldsymbol{(TE)}$-structures}\label{c7}
The regular singular rank $2$ $(TE)$-structures over points
were subject of Sections~\ref{c4.5} and~\ref{c4.6},
those over $({\mathbb C},0)$ were subject of Theorem~\ref{t6.3}
and Remark~\ref{t6.4}$(iv)$ and of Theorem~\ref{t6.7}
and~Remarks~\ref{t6.8}.
First we will consider in Remarks~\ref{t7.1}$(i){+}(ii)$
regular singular rank $2$ $(TE)$-structures over~$\P^1$,
which arise naturally
from Theorems~\ref{t4.17} and~\ref{t4.20}.
The $(TE)$-structures over the germs
$\big(\P^1,0\big)$ and $\big(\P^1,\infty\big)$ appeared already
in Remark~\ref{t6.4}$(iv)$ and in Theorem~\ref{t6.7}.
With the construction in Lemma~\ref{t3.10}$(d)$,
each of these $(TE)$-structures over $\P^1$
extends to a rank $2$ $(TE)$-structure
of generic type (Reg) or (Log)
over ${\mathbb C}\times\P^1$ with primitive Higgs field.
With Theorem~\ref{t3.14}, the base manifold
${\mathbb C}\times\P^1$ obtains a canonical structure
as $F$-manifold with Euler field.
For each $t^0\in{\mathbb C}\times{\mathbb C}^*$, the $(TE)$-structure
over the germ $\big({\mathbb C}\times\P^1,t^0\big)$ is a universal
unfolding of its restriction over $t^0$.
For each $t^0\in{\mathbb C}\times\{0,\infty\}$, the
$(TE)$-structure over the germ
$\big({\mathbb C}\times \P^1,t^0\big)$ will reappear in Theorems~\ref{t8.1},~\ref{t8.5} and~\ref{t8.6}.
See Remarks~\ref{t7.2}$(i){+}(ii)$.
Then we will observe in Theorem~\ref{t7.3} that
any marked regular singular $(TE)$-structure
is a~{\it good} family of marked regular singular
$(TE)$-structures (over points) in the sense of Definition~\ref{t3.26}$(b)$.
In Theorem~\ref{t7.4} we will determine the moduli spaces
$M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$
for {\it marked} regular singular
rank $2$ $(TE)$-structures, which were subject of
Theorem~\ref{t3.29}. The parameter space~$\P^1$ of each
$(TE)$-structure over $\P^1$ in Remarks~\ref{t7.1}$(i){+}(ii)$
embeds into one of these moduli spaces, after the choice
of a marking. Also these embeddings will be described
in Theorem~\ref{t7.4}.
Because of Theorem~\ref{t7.3}, any marked
regular singular rank $2$ $(TE)$-structure
over a mani\-fold~$M$ is induced by a holomorphic map
$M\to M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$,
where $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$ is the reference
pair used in the marking of the $(TE)$-structure.
Remark~\ref{t7.5} says something about the
horizontal direction(s) in the moduli spaces.
\begin{Remarks}\label{t7.1}\qquad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Consider the manifold $M^{(3)}:=\P^1$ with coordinate
$t_2$ on ${\mathbb C}\subset\P^1$ and coordinate
$\widetilde t_2:=t_2^{-1}$ on $\P^1\setminus\{0\}\subset\P^1$.
With the projection $M^{(3)}\to\{0\}$,
we pull back the flat bundle $H'\to{\mathbb C}^*$ in Theorem~\ref{t4.17} to a flat bundle ${{H^{(3)}}'}$ on
${\mathbb C}^*\times M^{(3)}$. Recall the notations~\eqref{4.14}.
Now we read $\underline{v}:=(s_1+t_2s_2,zs_2)$
in~\eqref{4.57} and~\eqref{4.59}
as a basis of sections on ${{H^{(3)}}'}|_{{\mathbb C}^*\times{\mathbb C}}$,
and $\underline{\widetilde v}:=\big(s_2+\widetilde t_2s_1,zs_1\big)$
in~\eqref{4.58} and~\eqref{4.60}
as a basis of sections on ${{H^{(3)}}'}|_{{\mathbb C}^*\times(\P^1\setminus\{0\})}$.
One sees immediately
\begin{gather}\label{7.1}
z\nabla_{\partial_2}\underline{v}=\underline{v}\, C_2,\qquad
z\nabla_{\widetilde \partial_2}\underline{\widetilde v}=\underline{\widetilde v}\, C_2,
\end{gather}
and again~\eqref{4.57} resp.~\eqref{4.59}
and~\eqref{4.58} resp.~\eqref{4.60}.
Therefore $\underline{v}$ and $\underline{\widetilde v}$ are in any case bases
of a $(TE)$-structure $\big(H^{(3)}\to{\mathbb C}\times M^{(3)},\nabla^{(3)}\big)$
on ${\mathbb C}\times {\mathbb C}\subset{\mathbb C}\times M^{(3)}$ respectively
${\mathbb C}\times \big(\P^1\setminus\{0\}\big)\subset{\mathbb C}\times M^{(3)}$.
The restricted $(TE)$-structures over $t_2\in{\mathbb C}^*$ are
those in~Theo\-rem~\ref{t4.17}.
They are regular singular, but not logarithmic.
Their leading exponents $\alpha_1$ and~$\alpha_2$ are
independent of $t_2\in{\mathbb C}^*$.
The $(TE)$-structures over $t_2=0$ and over $\widetilde t_2=0$
(so $t_2=\infty$) are logarithmic except for the case
($N^{\rm mon}\neq 0$ and~$\alpha_1=\alpha_2$), in which case
the one over $t_2=0$ is regular singular, but not logarithmic.
Their leading exponents are
called~$\alpha_1^0$ and~$\alpha_2^0$ and
$\alpha_1^\infty$ and~$\alpha_2^\infty$. Then
\[
\def1.4{1.3}
\begin{tabular}{l|l}
\hline
\quad\ over $0$ & \quad\ over $\infty$
\\ \hline
$\alpha_1^0=\alpha_1$ & $\alpha_1^\infty=\alpha_1+1$
\\
$\alpha_2^0=\alpha_2+1$ & $\alpha_2^\infty=\alpha_2$
\\
\hline
\end{tabular}
\]
except that in the case
($N^{\rm mon}\neq 0$ and~$\alpha_1=\alpha_2$)
we have $\alpha_1^0=\alpha_1$, $\alpha_2^0=\alpha_2$.
For use in Theorem~\ref{t7.4}, we write the base space
for the $(TE)$-structure over $\P^1$ with leading exponents
$\alpha_1$ and~$\alpha_2$ as
$M^{(3),0,\alpha_1,\alpha_2}\cong\P^1$ in the case
$N^{\rm mon}=0$ and as $M^{(3),\neq 0,\alpha_1,\alpha_2}\cong\P^1$
in the case $N^{\rm mon}\neq 0$.
\item[$(ii)$] We extend the case $N^{\rm mon}=0$ from Theorem~\ref{t4.17}$(a)$ to the case $\alpha_1=\alpha_2$.
Equations~\eqref{4.57} and~\eqref{4.58} still hold,
but now the restricted $(TE)$-structures over
points in $M^{(3)}=\P^1$ are all logarithmic,
though the $(TE)$-structure over $M^{(3)}$ is not
logarithmic, but only regular singular.
Equation~\eqref{7.1} still holds. In~this case, the leading exponents are constant
and are $\alpha_1$ and $\alpha_1+1$
(so, not $\alpha_1$ and $ \alpha_2=\alpha_1$).
Similarly to $(i)$, the base space is called
$M^{(3),0,\alpha_1,{\rm log}}\cong\P^1$.
\end{enumerate}
\end{Remarks}
\begin{Remarks}\label{t7.2}\quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] The construction in Lemma~\ref{t3.10}$(d)$
extends a $(TE)$-structure
$\big(H^{(3)}\to{\mathbb C}\times M^{(3)},\nabla^{(3)}\big)$ in~Remark~\ref{t7.1}$(i)$ or $(ii)$
with $M^{(3)}=\P^1$ to a $(TE)$-structure
$\big(H^{(4)}\to{\mathbb C}\times M^{(4)},\nabla^{(4)}\big)$ with
$M^{(4)}={\mathbb C}\times M^{(3)}={\mathbb C}\times \P^1$, via
$\big({\mathcal O}\big(H^{(4)}\big),\nabla^{(4)}\big)=\big(\varphi^{(4)}\big)^*
\big({\mathcal O}\big(H^{(3)}\big),\nabla^{(3)}\big)\otimes {\mathcal E}^{t_1/z}$,
where $t_1$ is the coordinate on the first factor ${\mathbb C}$
in ${\mathbb C}\times\P^1$, and where $\varphi^{(4)}\colon M^{(4)}\to
M^{(3)}$, $(t_1,t_2)\mapsto t_2,$ is the projection.
Define $\underline{v}^{(4)}:=\big(\varphi^{(4)}\big)^*
(\underline{v}\text{ in Remark~\ref{t7.1}$(i)$ or $(ii)$})$.
Then the matrices $A_1$, $A_2$ and $B$ with
$z\nabla_{\partial_i}\underline{v}^{(4)}=\underline{v}^{(4)}A_i$ and
$z^2\nabla_{\partial_z}\underline{v}^{(4)}=\underline{v}^{(4)}B$ are as follows:
$A_2$ and $B$ are unchanged, and $A_1=C_1$, so
\begin{gather*}
A_1=C_1,\quad A_2=C_2\quad \text{(as in~\eqref{7.1})},\qquad
B\quad\text{is as in~\eqref{4.57} or~\eqref{4.59}}.
\end{gather*}
The Higgs field is primitive everywhere on $M^{(4)}$.
By Theorem~\ref{t3.14},
$M^{(4)}={\mathbb C}\times\P^1$ is an~$F$-manifold with Euler field.
The unit field is $\partial_1$, the multiplication is given
by $\partial_2\circ\partial_2=0$ and $\widetilde\partial_2\circ\widetilde\partial_2=0$.
So, each germ $\big(M^{(4)},t^0\big)$ is the germ ${\mathcal N}_2$.
The Euler field is
\begin{gather*}
\begin{split}
&E= t_1\partial_1 + (\alpha_1-\alpha_2)t_2\partial_2
=t_1\partial_1 + (\alpha_2-\alpha_1)\widetilde t_2\widetilde\partial_2
\\
&\qquad\begin{cases}
\text{in the case~\eqref{4.57} and~\eqref{4.58}
and in $(ii)$ above,}
\\
\text{and in the case~\eqref{4.59} and~\eqref{4.60}
with }\alpha_1-\alpha_2\in{\mathbb N},\end{cases}
\\
&E= t_1\partial_1 -\partial_2
=t_1\partial_1 + \widetilde t_2^2\widetilde\partial_2\qquad
\text{in the case~\eqref{4.59} and~\eqref{4.60}
with }\alpha_1=\alpha_2.
\end{split}
\end{gather*}
\item[$(ii)$] For $t^{(4)}\in {\mathbb C}\times{\mathbb C}^*\subset M^{(4)}$,
the $(TE)$-structure
$\big(H^{(4)}\to {\mathbb C}\times \big(M^{(4)},t^{(4)}\big),\nabla\big)$
is a universal unfolding of the one over $t^{(4)}$,
because that one is of type (Reg) and because the
Higgs field is primitive. See Corollary~\ref{t5.1}.
\item[$(iii)$] Let $\big(H\to{\mathbb C}\times \big(M,t^0\big),\nabla\big)$ be a regular
singular unfolding of a regular singular, but not
logarithmic rank $2$ $(TE)$-structure over $t^0$.
Because of part $(ii)$, it is induced by the
$(TE)$-structure $\big(H^{(4)},\nabla^{(4)}\big)$ via a map
$\big(M,t^0\big)\to \big(M^{(4)},t^{(4)}\big)$ for some
$t^{(4)}\in\{0\}\times {\mathbb C}^*$.
Because it is regular singular, the image of the map
is in $\{0\}\times {\mathbb C}^*\subset \{0\}\times M^{(3)}$.
As there the leading exponents are constant,
they are also constant on the unfolding $(H,\nabla)$.
\end{enumerate}
\end{Remarks}
\begin{Theorem}\label{t7.3}
Any marked regular singular rank $2$ $(TE)$-structure
$($see Definition~$\ref{t3.15}(b)$, espe\-cially,
$M$ is simply connected$)$
is a {\it good} family of marked regular singular
$(TE)$-structures $($over points$)$ in the sense of Definition~$\ref{t3.26}(b)$.
\end{Theorem}
\begin{proof} Let $((H\to{\mathbb C}\times M,\nabla),\psi)$
be a regular singular rank $2$
$(TE)$-structure with a marking~$\psi$,
i.e., an isomorphism $\psi$ from $(H^\infty,M^{\rm mon})$
to a reference pair $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$.
We have to show the conditions~\eqref{3.39} and~\eqref{3.40}
for a good family of marked regular singular $(TE)$-structures.
By definition of a marking, $M$ is simply connected,
so especially it is connected. The subset
\begin{gather*}
M^{\rm [log]}:=\{t\in M\,|\, {\mathcal U}|_t=0\}
=\{t\in M\,|\, \text{the }(TE)\text{-structure over }
t\text{ is logarithmic}\}
\end{gather*}
is a priori a subvariety (in fact, it is either $\varnothing$
or a hypersurface or equal to $M$).
First consider the case $M^{\rm [log]}=M$.
Choose any point $t^0\in M$ and any disk $\Delta\subset M$
through $t^0$. The restriction of the $(TE)$-structure
over the germ $\big(\Delta,t^0\big)$ is in the case $N^{\rm mon}=0$
isomorphic to one in the 7th or 8th or 9th case in Theorem~\ref{t6.3}. In~the case $N^{\rm mon}\neq 0$, it is
isomorphic to one in case (III) in Theorem~\ref{t6.7}. In~either case the leading exponents are constant on $\Delta$,
because of table~\eqref{6.27}
in Remark~\ref{t6.4}$(iv)$ and because of the definition
of case (III) in Theorem~\ref{t6.7}.
Therefore they are constant on $M$.
We call them $\alpha_1^{\rm gen}$ and $\alpha_2^{\rm gen}$.
Now consider the case $M^{\rm [log]}\subsetneqq M$.
For each $t^0\in M\setminus M^{\rm [log]}$, the $(TE)$-structure
over the germ $\big(M,t^0\big)$ has constant leading
exponents because of Remark~\ref{t7.2}$(iii)$.
Therefore the leading exponents are constant on
$M\setminus M^{\rm [log]}$. We call these generic leading exponents
$\alpha_1^{\rm gen}$ and $\alpha_2^{\rm gen}$.
For $t^0\in M^{\rm [log]}$ choose a generic small disk
$\Delta\subset M$ through $t^0$. Then
$\Delta\setminus \big\{t^0\big\}\subset M\setminus M^{\rm [log]}$. The restriction of the
$(TE)$-structure over the germ $\big(\Delta,t^0\big)$ is
in the case $N^{\rm mon}=0$ isomorphic to one in the 5th case
in Theorem~\ref{t6.3}. In~the case $N^{\rm mon}\neq 0$,
it is isomorphic to one in case~(I) or case (II)
in Theorem~\ref{t6.7}. In~either case, the leading exponents
$\big(\alpha_1\big(t^0\big),\alpha_2\big(t^0\big)\big)$ of the $(TE)$-structure
over $t^0$ are either $\big(\alpha_1^{\rm gen}+1,\alpha_2^{\rm gen}\big)$
or $\big(\alpha_1^{\rm gen},\alpha_2^{\rm gen}+1\big)$,
because of table~\eqref{6.27} in~Remark~\ref{t6.4}$(iv)$ and
because of the definition of the cases~(I) and~(II)
in Theorem~\ref{t6.7}.
Remark~\ref{t6.4}$(iv)$ and Theorem~\ref{t6.7}
provide generators of ${\mathcal O}\big(H|_{{\mathbb C}\times(\Delta,t^0)}\big)_{(0,t^0)}$
which are certain linear combinations of elementary sections.
The shape of these generators and the almost constancy
of the leading exponents imply the two conditions,
\begin{gather*}
{\mathcal O}\big(H|_{{\mathbb C}\times\{t\}}\big)_{(0,t)}\supset V^r\quad
\text{for any}\ t\in M,
\ \text{where} \
r:=\max\big(\Ree\big(\alpha_1^{\rm gen}\big)+1,\,\Ree\big(\alpha_2^{\rm gen}\big)+1\big),\\
\dim_{\mathbb C}{\mathcal O}\big(H|_{{\mathbb C}\times\{t\}}\big)_{(0,t)}/V^r \quad
\text{is independent of $t\in M$},
\end{gather*}
which are the conditions~\eqref{3.39} and~\eqref{3.40}
for a good family of marked regular singular
$(TE)$-structures.
\end{proof}
The following theorem describes the moduli space
$M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$ from Theorem~\ref{t3.29}
for the marked regular singular rank $2$ $(TE)$-structures as infinite
unions of curves isomorphic to~$\P^1$ such that the families of
$(TE)$-structures over these curves are the $(TE)$-structures
in Remarks~\ref{t7.1}$(i){+}(ii)$.
Part $(a)$ treats the cases with $N^{\rm mon}=0$,
part $(b)$ treats the cases with $N^{\rm mon}\neq 0$.
Recall the definitions of $M^{(3),0,\alpha_1,\alpha_2}$,
$M^{(3),\neq 0,\alpha_1,\alpha_2}$ and
$M^{(3),0,\alpha_1,{\rm log}}$ in Remarks~\ref{t7.1}$(i)$ and~$(ii)$.
\begin{Theorem}\label{t7.4}
Let $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$ be a reference pair
with $\dim H^{{\rm ref},\infty}=2$.
Let $\Eig\big(M^{\rm ref}\big)=\{\lambda_1,\lambda_2\}$
be the set of eigenvalues of $M^{\rm ref}$.
Let $\beta_1,\beta_2\in{\mathbb C}$ be the unique numbers
with ${\rm e}^{-2\pi {\rm i}\beta_j}=\lambda_j$ and
$-1<\Ree\beta_j\leq 0$.
\begin{enumerate}\itemsep=0pt
\item[$(a)$] The case $N^{\rm mon}=0$.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] The cases with $\lambda_1\neq\lambda_2$. Then
\begin{gather*}
M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}=
\dot{\bigcup\limits_{l_1\in{\mathbb Z}}}\bigg(\bigcup_{l_2\in{\mathbb Z}}M^{(3),0,\beta_1+l_1+l_2,\beta_2-l_2}\bigg).
\end{gather*}
Its topological components are the unions in brackets, so
$\bigcup_{l_2\in{\mathbb Z}}M^{(3),0,\beta_1+l_1+l_2,\beta_2-l_2}$.
Each component is a chain of $\P^1$'s,
the point $\infty$ of $M^{(3),0,\alpha_1,\alpha_2}$ is
identified with the point $0$ of
$M^{(3),0,\alpha_1+1,\alpha_2-1}$.
\setlength{\unitlength}{1mm}
\begin{figure}[h]
\centering
\begin{picture}(120,41)
\multiput(10,15)(30,0){3}{\bezier{300}(0,0),(20,12),(40,0)}
\put(0,15){\bezier{20}(0,6),(10,6),(20,0)}
\put(100,15){\bezier{20}(0,0),(10,6),(20,6)}
\multiput(15,17.5)(30,0){4}{\circle*{2}}
\put(15,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1-1\\ \alpha_2+2\end{pmatrix}$}}
\put(30,30){\makebox(0,0){\small$\begin{pmatrix}\alpha_1-1\\ \alpha_2+1\end{pmatrix}$}}
\put(45,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1\\ \alpha_2+1\end{pmatrix}$}}
\put(60,30){\makebox(0,0){\small$\begin{pmatrix}\alpha_1\\ \alpha_2\end{pmatrix}$}}
\put(75,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+1\\ \alpha_2\end{pmatrix}$}}
\put(90,30){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+1\\ \alpha_2-1\end{pmatrix}$}}
\put(105,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+2\\ \alpha_2-1\end{pmatrix}$}}
\multiput(15,7)(30,0){4}{\makebox(0,0){\small(Log)}}
\multiput(30,13)(30,0){3}{\makebox(0,0){\small(Reg)}}
\thinlines
\multiput(15,10)(30,0){4}{\vector(0,1){5}}
\multiput(15,30)(30,0){4}{\vector(0,-1){10}}
\end{picture}
\caption{One topological component in part $(a)$ $(i)$.}\label{figure1}
\end{figure}
\item[$(ii)$] The cases with $\lambda_1=\lambda_2$ $($so
$\beta_1=\beta_2)$. Then
\begin{gather}
M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}
=\dot{\bigcup\limits_{l_1\in{\mathbb Z}}}\bigg(\bigcup_{l_2\in{\mathbb N}}
{\mathbb F}_2^{\beta_1+l_1+l_2,\beta_1+l_1-l_2}\bigg)\nonumber
\\ \hphantom{M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}=}
\cup \dot{\bigcup\limits_{l_1\in{\mathbb Z}}}
\bigg(\widetilde{\mathbb F}_2^{\beta_1+l_1+1,\beta_1+l_1}\cup
\bigcup_{l_2\in{\mathbb N}}
{\mathbb F}_2^{\beta_1+l_1+1+l_2,\beta_1+l_1-l_2}\bigg).
\label{7.10}
\end{gather}
Here ${\mathbb F}_2^{\alpha_1,\alpha_2}$ denotes, for all possible
$\alpha_1$, $\alpha_2$, the Hirzebruch surface ${\mathbb F}_2$,
and $\widetilde{\mathbb F}_2^{\alpha_1,\alpha_1-1}$ is the surface~$\widetilde{\mathbb F}_2$, which is obtained from ${\mathbb F}_2$
by blowing down the unique $(-2)$-curve in ${\mathbb F}_2$.
The unions in brackets are the topological components.
They are chains of Hirzebruch surfaces.
A $(+2)$-curve of ${\mathbb F}_2^{\alpha_1,\alpha_2}$
is identified with the $(-2)$-curve of
${\mathbb F}_2^{\alpha_1+1,\alpha_2-1}$
$($and a $(+2)$-curve of $\widetilde{\mathbb F}_2^{\alpha_1,\alpha_1-1}$
is identified with the $(-2)$-curve in
${\mathbb F}_2^{\alpha_1+1,\alpha_1-2})$. The $(TE)$-structures over
the points in the $(-2)$-curves are logarithmic,
and also the $(TE)$-structure over the singular point of
$\widetilde {\mathbb F}_2^{\alpha_1,\alpha_1-1}$ is logarithmic.
The $(TE)$-structures over all other points of
${\mathbb F}_2^{\alpha_1,\alpha_2}$ and $\widetilde{\mathbb F}_2^{\alpha_1,\alpha_2}$
are regular singular, but not logarithmic, and have
leading exponents $\alpha_1$ and~$\alpha_2$.
For each ${\mathbb F}_2^{\alpha_1,\alpha_2}$, and also for
$\widetilde{\mathbb F}_2^{\alpha_1,\alpha_2}$ after blowing up the singular
point to a $(-2)$-curve, its fibers as
a $\P^1$-fiber bundle over $\P^1$ are isomorphic
to $M^{(3),0,\alpha_1,\alpha_2}$.
The $(-2)$-curve in ${\mathbb F}_2^{\alpha_1,\alpha_1-2}$
$($the ${\mathbb F}_2$ with $l_2=1$ in each topological component in the first
line of~\eqref{7.10}$)$ is isomorphic to
$M^{(3),0,\alpha_1-1,{\rm log}}$, and the $(TE)$-structures
over its points are logarithmic with leading exponents
$\alpha_1$, $\alpha_1-1$.
\end{enumerate}
\setlength{\unitlength}{1mm}
\begin{figure}[h]
\centering
\begin{picture}(120,79)
\multiput(10,35)(30,0){3}{\bezier{300}(0,0),(20,12),(40,0)}
\multiput(10,45)(30,0){3}{\bezier{300}(0,0),(20,12),(40,0)}
\multiput(10,55)(30,0){3}{\bezier{300}(0,0),(20,12),(40,0)}
\put(100,35){\bezier{20}(0,0),(10,6),(20,6)}
\put(100,45){\bezier{20}(0,0),(10,6),(20,6)}
\put(100,55){\bezier{20}(0,0),(10,6),(20,6)}
\multiput(15,37.5)(30,0){4}{\circle*{2}}
\multiput(15,47.5)(30,0){4}{\circle*{2}}
\multiput(15,57.5)(30,0){4}{\circle*{2}}
\multiput(15,32.5)(30,0){4}{\line(0,1){30}}
\put(15,75){\makebox(0,0){\small$\begin{pmatrix}\alpha_1\\ \alpha_1-1\end{pmatrix}$}}
\put(30,70){\makebox(0,0){\small$\begin{pmatrix}\alpha_1\\ \alpha_1-2\end{pmatrix}$}}
\put(45,75){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+1\\ \alpha_1-2\end{pmatrix}$}}
\put(60,70){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+1\\ \alpha_1-3\end{pmatrix}$}}
\put(75,75){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+2\\ \alpha_1-3\end{pmatrix}$}}
\put(90,70){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+2\\ \alpha_1-4\end{pmatrix}$}}
\put(105,75){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+3\\ \alpha_1-4\end{pmatrix}$}}
\multiput(45,20)(30,0){3}{\makebox(0,0){\small(Log)}}
\multiput(30,33)(30,0){3}{\makebox(0,0){\small(Reg)}}
\put(10,20){\makebox{$M^{(3),0,\alpha_1-1,{\rm log}}$}}
\put(15,17){\makebox(0,0){(Log)}}
\thinlines
\multiput(15,25)(30,0){4}{\vector(0,1){5}}
\multiput(15,70)(30,0){4}{\vector(0,-1){5}}
\multiput(10,15)(30,0){3}{\bezier{50}(0,0)(0,-5)(20,-5)}
\multiput(30,15)(30,0){3}{\bezier{50}(0,-5)(20,-5)(20,0)}
\put(25,5){\makebox{${\mathbb F}_2^{\alpha_1,\alpha_1-2}$}}
\put(55,5){\makebox{${\mathbb F}_2^{\alpha_1+1,\alpha_1-3}$}}
\put(85,5){\makebox{${\mathbb F}_2^{\alpha_1+2,\alpha_1-4}$}}
\end{picture}\vspace{-2ex}
\caption{One topological component in part $(a)$ $(ii)$.}\label{figure2}
\end{figure}
\begin{figure}[h]
\centering
\begin{picture}(120,82)
\multiput(40,35)(30,0){2}{\bezier{300}(0,0),(20,12),(40,0)}
\multiput(40,45)(30,0){2}{\bezier{300}(0,0),(20,12),(40,0)}
\multiput(40,55)(30,0){2}{\bezier{300}(0,0),(20,12),(40,0)}
\put(10,41){\bezier{300}(0,0),(20,30),(40,13)}
\put(10,45){\bezier{300}(0,0),(20,12),(40,0)}
\put(10,48){\bezier{300}(0,0),(20,0),(40,-13.5)}
\put(100,35){\bezier{20}(0,0),(10,6),(20,6)}
\put(100,45){\bezier{20}(0,0),(10,6),(20,6)}
\put(100,55){\bezier{20}(0,0),(10,6),(20,6)}
\multiput(45,37.5)(30,0){3}{\circle*{2}}
\multiput(15,47.5)(30,0){4}{\circle*{2}}
\multiput(45,57.5)(30,0){3}{\circle*{2}}
\multiput(45,32.5)(30,0){3}{\line(0,1){30}}
\put(15,75){\makebox(0,0){\small$\begin{pmatrix}\alpha_1\\ \alpha_1\end{pmatrix}$}}
\put(30,70){\makebox(0,0){\small$\begin{pmatrix}\alpha_1\\ \alpha_1-1\end{pmatrix}$}}
\put(45,75){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+1\\ \alpha_1-1\end{pmatrix}$}}
\put(60,70){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+1\\ \alpha_1-2\end{pmatrix}$}}
\put(75,75){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+2\\ \alpha_1-2\end{pmatrix}$}}
\put(90,70){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+2\\ \alpha_1-3\end{pmatrix}$}}
\put(105,75){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+3\\ \alpha_1-3\end{pmatrix}$}}
\multiput(15,20)(30,0){4}{\makebox(0,0){\small(Log)}}
\multiput(30,33)(30,0){3}{\makebox(0,0){\small(Reg)}}
\thinlines
\multiput(45,25)(30,0){3}{\vector(0,1){5}}
\multiput(45,70)(30,0){3}{\vector(0,-1){5}}
\put(15,25){\vector(0,1){15}}
\put(15,70){\vector(0,-1){15}}
\multiput(10,15)(30,0){3}{\bezier{50}(0,0)(0,-5)(20,-5)}
\multiput(30,15)(30,0){3}{\bezier{50}(0,-5)(20,-5)(20,0)}
\put(25,5){\makebox{$\widetilde{\mathbb F}_2^{\alpha_1,\alpha_1-1}$}}
\put(55,5){\makebox{${\mathbb F}_2^{\alpha_1+1,\alpha_1-2}$}}
\put(85,5){\makebox{${\mathbb F}_2^{\alpha_1+2,\alpha_1-3}$}}
\end{picture}\vspace{-2ex}
\caption{Another topological component in part $(a)$ $(ii)$.}\label{figure3}
\end{figure}
\item[$(b)$] The cases with $N^{\rm mon}\neq 0$ $($and thus
$\lambda_1=\lambda_2$, $\beta_1=\beta_2)$. Then
\begin{gather*}
M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}=
\dot{\bigcup\limits_{l_1\in{\mathbb Z}}}\bigg(\bigcup_{l_2\in{\mathbb N}_0}
M^{(3),\neq 0,\beta_1+l_1+l_2,\beta_1+l_1-l_2}\bigg) \nonumber
\\ \hphantom{M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}=}
\cup \dot{\bigcup\limits_{l_1\in{\mathbb Z}}}\bigg(\bigcup_{l_2\in{\mathbb N}_0}
M^{(3),\neq 0, \beta_1+l_1+1+l_2,\beta_1+l_1-l_2}\bigg).
\end{gather*}
Its topological components are the unions in brackets.
Each component is a chain of $\P^1$'s,
the point $\infty$ of $M^{(3),\neq0,\alpha_1,\alpha_2}$ is
identified with the point $0$ of
$M^{(3),\neq0,\alpha_1+1,\alpha_2-1}$.
\end{enumerate}
\setlength{\unitlength}{1mm}
\begin{figure}[h]
\centering
\begin{picture}(120,35)
\multiput(10,15)(30,0){3}{\bezier{300}(0,0),(20,12),(40,0)}
\put(100,15){\bezier{20}(0,0),(10,6),(20,6)}
\multiput(15,17.5)(30,0){4}{\circle*{2}}
\put(22,30){\makebox(0,0){\small$\begin{pmatrix}\alpha_1\\ \alpha_1\end{pmatrix}$}}
\put(45,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+1\\ \alpha_1\end{pmatrix}$}}
\put(60,30){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+1\\ \alpha_1-1\end{pmatrix}$}}
\put(75,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+2\\ \alpha_1-1\end{pmatrix}$}}
\put(90,30){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+2\\ \alpha_1-2\end{pmatrix}$}}
\put(105,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+3\\ \alpha_1-2\end{pmatrix}$}}
\multiput(45,7)(30,0){3}{\makebox(0,0){\small(Log)}}
\put(22,13){\makebox(0,0){\small(Reg)}}
\multiput(60,13)(30,0){2}{\makebox(0,0){\small(Reg)}}
\thinlines
\multiput(45,10)(30,0){3}{\vector(0,1){5}}
\multiput(45,30)(30,0){3}{\vector(0,-1){10}}
\end{picture}\vspace{-2ex}
\caption{One topological component in part $(b)$.}\label{figure4}
\end{figure}
\begin{figure}[h]
\centering
\begin{picture}(120,35)
\multiput(10,15)(30,0){3}{\bezier{300}(0,0),(20,12),(40,0)}
\put(100,15){\bezier{20}(0,0),(10,6),(20,6)}
\multiput(15,17.5)(30,0){4}{\circle*{2}}
\put(15,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1\\ \alpha_1\end{pmatrix}$}}
\put(30,30){\makebox(0,0){\small$\begin{pmatrix}\alpha_1\\ \alpha_1-1\end{pmatrix}$}}
\put(45,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+1\\ \alpha_1-1\end{pmatrix}$}}
\put(60,30){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+1\\ \alpha_1-2\end{pmatrix}$}}
\put(75,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+2\\ \alpha_1-2\end{pmatrix}$}}
\put(90,30){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+2\\ \alpha_1-3\end{pmatrix}$}}
\put(105,35){\makebox(0,0){\small$\begin{pmatrix}\alpha_1+3\\ \alpha_1-3\end{pmatrix}$}}
\multiput(15,7)(30,0){4}{\makebox(0,0){\small(Log)}}
\multiput(30,13)(30,0){3}{\makebox(0,0){\small(Reg)}}
\thinlines
\multiput(15,10)(30,0){4}{\vector(0,1){5}}
\multiput(15,30)(30,0){4}{\vector(0,-1){10}}
\end{picture}\vspace{-2ex}
\caption{Another topological component in part $(b)$.}\label{figure5}
\end{figure}
\end{Theorem}
\begin{proof}
We consider only marked $(TE)$-structures with a
fixed reference pair $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$.
Because of the markings, we can identify for each such
$(TE)$-structure its pair $(H^\infty,M^{\rm mon})$
with the reference pair $\big(H^{{\rm ref},\infty},M^{\rm ref}\big)$.
Thus also the spaces $C^\alpha$ can be identified
for all marked $(TE)$-structures.
$(a)$ $(i)$ and $(b)$ In both parts, there is no harm in fixing
elementary sections $s_1\in C^{\alpha_1}$ and
$s_2\in C^{\alpha_2}$ as in Theorem~\ref{t4.17}.
Then Theorem~\ref{t4.17} lists all marked $(TE)$-structures
with the given reference pair.
Remarks~\ref{t7.1}$(i)$ just put these marked
$(TE)$-structures into families parametrized by the spaces
$M^{(3),0,\alpha_1,\alpha_2}$ resp.~$M^{(3),\neq 0,\alpha_1,\alpha_2}$.
Most logarithmic $(TE)$-structures (which are classified
in Theorem~\ref{t4.20}) turn up in two such
families. This leads to the identification of the point
$\infty$ in $M^{(3),0/\neq 0,\alpha_1,\alpha_2}$
with the point $0$ in
$M^{(3),0/\neq 0,\alpha_1+1,\alpha_2-1}$.
Only the logarithmic $(TE)$-structures
with $N^{\rm mon}\neq 0$ and equal leading exponents
$\alpha_1=\alpha_2$ turn up in a single $\P^1$,
namely in the space $M^{(3),\neq 0,\alpha_1,\alpha_1-1}$,
where they sit over the point $0$.
$(a)$ $(ii)$ Here the leading exponents satisfy
$\alpha_1-\alpha_2\in{\mathbb Z}\setminus\{0\}$, and we index them
such that $\alpha_1-\alpha_2\in{\mathbb N}$. We fix a basis
$\sigma_1$, $\sigma_2$ of $C^{\alpha_1}$
and define $\sigma_3:=z^{\alpha_2-\alpha_1}\sigma_1\in
C^{\alpha_2}$,
$\sigma_4:=z^{\alpha_2-\alpha_1}\sigma_2\in C^{\alpha_2}$.
Then because of Theorem~\ref{t4.17}$(a)$,
we can write all marked regular singular, but not
logarithmic $(TE)$-structures with leading exponents
$\alpha_1$ and~$\alpha_2$ in two charts ${\mathbb C}\times{\mathbb C}^*$
with coordinates $(r_1,t_2)$ and $(r_2,t_3)$,
\begin{gather*}
{\mathcal O}(H)_0= {\mathbb C}\{z\}(\sigma_1+t_2(\sigma_4+r_1\sigma_3))\oplus {\mathbb C}\{z\}(z(\sigma_4+r_1\sigma_3)),
\\
{\mathcal O}(H)_0= {\mathbb C}\{z\}(\sigma_2+t_3(\sigma_3+r_2\sigma_4))\oplus {\mathbb C}\{z\}(z(\sigma_3+r_2\sigma_4)).\nonumber
\end{gather*}
The charts overlap where $r_1,r_2\in{\mathbb C}^*$, with
\begin{gather*}
r_2=r_1^{-1},\qquad
t_3=-t_2r_1^2.
\end{gather*}
Compactification to $t_2= 0$ and $t_2=\infty$
(and $t_3=0$ and $t_3=\infty$) gives
the Hirzebruch surface ${\mathbb F}_2={\mathbb F}_2^{\alpha_1,\alpha_2}$.
The curve with $t_2=0$ (and $t_3=0$) is the $(-2)$-curve.
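That this curve has self-intersection number $-2$ can be read off
from the transition functions (a standard check, recorded here for
orientation): the fibre coordinate transforms by the factor $-r_1^2$,
and with the usual convention that a fibre coordinate transforming by
$r_1^k$ belongs to a line bundle of degree $-k$ on $\P^1$, one obtains
\begin{gather*}
t_3=-r_1^2t_2\qquad\Longrightarrow\qquad
\{t_2=0\}^2=\deg N_{\{t_2=0\}/{\mathbb F}_2}=-2,
\end{gather*}
and dually $\{t_2=\infty\}^2=+2$.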
Over this curve, we have the family of marked logarithmic
$(TE)$-structures (see Theorem~\ref{t4.20}) with
leading exponents~$\alpha_1$ and~$\alpha_2+1$,
\begin{gather*}
{\mathcal O}(H)_0= {\mathbb C}\{z\}(\sigma_1)\oplus {\mathbb C}\{z\}(z(\sigma_4+r_1\sigma_3)),
\\
{\mathcal O}(H)_0= {\mathbb C}\{z\}(\sigma_2)\oplus {\mathbb C}\{z\}(z(\sigma_3+r_2\sigma_4)).
\end{gather*}
The curve with $t_2=\infty$ (and $t_3=\infty$)
is a $(+2)$-curve.
Over this curve, we have the family of marked logarithmic
$(TE)$-structures with
leading exponents $\alpha_1+1$ and $\alpha_2$.
Therefore the $(+2)$-curve in ${\mathbb F}_2^{\alpha_1,\alpha_2}$
must be identified with the $(-2)$-curve in
${\mathbb F}_2^{\alpha_1+1,\alpha_2-1}$.
In the case $\alpha_1-\alpha_2=2$, the $(-2)$-curve
in ${\mathbb F}_2^{\alpha_1,\alpha_2}$ is the moduli space
$M^{(3),0,\alpha_1-1,{\rm log}}$ from Remark~\ref{t7.1}$(i)$.
In the case $\alpha_1-\alpha_2=1$, the $(-2)$-curve
in ${\mathbb F}_2^{\alpha_1,\alpha_2}$ has to be blown down,
as then for $t_2=0$
\begin{gather*}
{\mathcal O}(H)_0={\mathbb C}\{z\}(\sigma_1)\oplus{\mathbb C}\{z\}(z(\sigma_4+r_1\sigma_3))
={\mathbb C}\{z\}C^{\alpha_1}=V^{\alpha_1}
\end{gather*}
is independent of the parameter $r_1$.
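This can be seen explicitly: for $\alpha_1-\alpha_2=1$ one has
\begin{gather*}
z\sigma_3=z^{\alpha_1-\alpha_2}\sigma_3=\sigma_1,\qquad
z\sigma_4=\sigma_2,\qquad\text{so}\qquad
z(\sigma_4+r_1\sigma_3)=\sigma_2+r_1\sigma_1,
\end{gather*}
and $\sigma_1$ and $\sigma_2+r_1\sigma_1$ span $C^{\alpha_1}$ for every $r_1$.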
The projection $(r_1,t_2)\mapsto r_1$ extends
to the $\P^1$-fibration of ${\mathbb F}_2^{\alpha_1,\alpha_2}$
over $\P^1$.
The fibers are isomorphic to $M^{(3),0,\alpha_1,\alpha_2}$.
Affine coordinates on these fibers are $t_2$
and $\widetilde t_2=t_2^{-1}$ or $t_3$ and $\widetilde t_3=t_3^{-1}$.
\end{proof}
\begin{Remarks}\label{t7.5}\quad
\begin{enumerate}\itemsep=-1pt
\item[$(i)$] Consider a marked regular singular rank $2$ $(TE)$-structure
$((H\to{\mathbb C}\times M,\nabla),\psi)$. There is a unique
map $\varphi\colon M\to M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$,
which maps $t\in M$ to the unique point in~$M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$ over which one has up to
marked isomorphism the same marked $(TE)$-structure
as over $t$. Corollary~\ref{t7.3} and the fact that the
moduli space represents the moduli functor
${\mathcal M}^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$ imply that
$\varphi$ is holomorphic.
Because $M$ is (simply) connected, the map $\varphi$ goes to
one irreducible component of the moduli space,
so to one $M^{(3),0/\neq 0,\alpha_1,\alpha_2}\cong\P^1$
in the parts $(a)$ $(i)$ and $(b)$ in Theorem~\ref{t7.4}
and to one ${\mathbb F}_2^{\alpha_1,\alpha_2}$ or to
$\widetilde{\mathbb F}_2^{\alpha_1,\alpha_1-1}$ in part $(a)$ $(ii)$.
\item[$(ii)$] In fact, in part $(a)$ $(ii)$ the map $\varphi$ even goes
to a projective curve which is isomorphic to one curve
$M^{(3),0,\alpha_1,\alpha_2}$ or to the curve
$M^{(3),0,\alpha_1,{\rm log}}$.
This holds for the $(TE)$-structure over any manifold,
as it holds by Remark~\ref{t6.4}$(iv)$ and Theorem~\ref{t6.7} for the $(TE)$-structures over the
1-dimensional germ $({\mathbb C},0)$.
The curves isomorphic to $M^{(3),0,\alpha_1,\alpha_2}$
are the $(0)$-curves in the $\P^1$ fibration of
${\mathbb F}_2^{\alpha_1,\alpha_2}$ over~$\P^1$ \big(in the case of
$\widetilde{\mathbb F}_2^{\alpha_2+1,\alpha_2}$ each fiber of
${\mathbb F}_2^{\alpha_2+1,\alpha_2}$ over $\P^1$ embeds also into
the blown down surface $\widetilde{\mathbb F}_2^{\alpha_2+1,\alpha_2}$\big).
The curve isomorphic to $M^{(3),0,\alpha_1,{\rm log}}$
is the $(-2)$-curve in ${\mathbb F}_2^{\alpha_1,\alpha_1-2}$.
\item[$(iii)$] We have here a notion of horizontal directions which
is similar to that for classifying spaces of
Hodge structures. There it comes from Griffiths
transversality. Here it comes from the part of the
pole of Poincar\'e rank 1, which says that the covariant
derivatives $\nabla_{\partial_j}$ along vector fields on the
base space see only a pole of order 1.
In the cases of the ${\mathbb F}_2^{\alpha_1,\alpha_2}$
with $\alpha_1-\alpha_2\in{\mathbb N}\setminus \{1,2\}$, the horizontal
directions are the tangent spaces to the fibers of the
$\P^1$ fibration. In~the cases of ${\mathbb F}_2^{\alpha_1,\alpha_1-2}$ and
$\widetilde {\mathbb F}_2^{\alpha_1,\alpha_1-1}$, the horizontal directions
contain these tangent spaces.
However, on points in the $(-2)$-curve in
${\mathbb F}_2^{\alpha_1,\alpha_1-2}$
and on the singular point in
$\widetilde{\mathbb F}_2^{\alpha_1,\alpha_1-1}$,
any direction is horizontal.
\end{enumerate}
\end{Remarks}
{\sloppy\begin{Remark}
If we forget the markings of the $(TE)$-structures in one
moduli space $M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$
and consider the unmarked $(TE)$-structures up to isomorphism,
we obtain in the cases $N^{\rm mon}=0$ countably many
points, one for each intersection point or intersection
curve of two irreducible components,
and one for each irreducible component.
On the contrary, in the cases $N^{\rm mon}\neq 0$,
the unmarked and the marked $(TE)$-structures almost coincide,
as the choice of an elementary section $s_1$ in
Theorem~\ref{t4.17}$(b)$ fixes uniquely the elementary
section~$s_2$ with~\eqref{4.50}.
The set of unmarked $(TE)$-structures up to isomorphism
is still almost in~bijec\-tion with the moduli space
$M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$ in the case $N^{\rm mon}\neq0$.
Only the components $M^{(3),\neq 0,\alpha_1,\alpha_1}\setminus\{\infty\}$
boil down to single points.
\end{Remark}
}
\section[Unfoldings of rank 2 $(TE)$-structures of type (Log) over a point]
{Unfoldings of rank 2 $\boldsymbol{(TE)}$-structures of type (Log) \\over a point}\label{c8}
Sections~\ref{c5} and~\ref{c8} together
treat all rank $2$ $(TE)$-structures over germs $\big(M,t^0\big)$ of
manifolds. Section~\ref{c5} treated the unfoldings
of $(TE)$-structures of types (Sem) or (Bra) or (Reg)
over $t^0$.
Section~\ref{c8} will treat the unfoldings of
$(TE)$-structures of type (Log) over $t^0$.
It builds on Section~\ref{c6}, which classified the
unfoldings with trace free pole parts over
$\big(M,t^0\big)=({\mathbb C},0)$ of a logarithmic rank $2$ $(TE)$-structure
over $t^0$ and on Section~\ref{c7},
which treated arbitrary regular singular rank $2$
$(TE)$-structures.
Here Lemmata~\ref{t3.10} and~\ref{t3.11} are helpful.
They allow one to pass from arbitrary $(TE)$-structures
to $(TE)$-structures with trace free pole parts
and vice versa.
Section~\ref{c8.1} gives the classification results.
Section~\ref{c8.2} extracts from them a
characterization of the space of all $(TE)$-structures
with generically primitive Higgs fields over a given
germ of a~2-dimensional $F$-manifold with Euler field.
Section~\ref{c8.3} gives the proof of Theorem~\ref{t8.5}.
First we characterize in Theorem~\ref{t8.1}
the 2-parameter unfoldings of rank $2$
$(TE)$-structures of type (Log) over a point such that
the Higgs field is generically primitive
and induces an $F$-manifold structure on the underlying
germ $\big(M,t^0\big)$ of a manifold.
Theorem~\ref{t8.1} is a rather imme\-diate
implication of Theorem~\ref{t6.3} and Theorem~\ref{t6.7}
together with Lemmata~\ref{t3.10} and~\ref{t3.11}.
Part $(d)$ gives an explicit classification.
The other results in this section will all refer
to this classification.
Corollary~\ref{t8.3} lists for any logarithmic rank $2$
$(TE)$-structure over a point $t^0$ all unfoldings within
the set of $(TE)$-structures in Theorem~\ref{t8.1}$(a)$.
The proof consists of inspection of the explicit
classification in Theorem~\ref{t8.1}$(d)$.
Theorem~\ref{t8.5} is the main result of this section. It~lists a finite subset of the unfoldings in~Theo\-rem~\ref{t8.1}$(d)$ with the following property:
Any unfolding of a rank $2$ $(TE)$-structure of type (Log)
over a point is induced by a $(TE)$-structure in this list.
The $(TE)$-structures in the list turn out to be
universal unfoldings of themselves.
The proof of Theorem~\ref{t8.5} is long. It~is deferred
to Section~\ref{c8.3}. The results of Section~\ref{c6}
are crucial, especially Theorem~\ref{t6.3} and
Theorem~\ref{t6.7}.
Finally, Theorem~\ref{t8.6} lists the rank $2$
$(TE)$-structures over a germ $\big(M,t^0\big)$ of a manifold
such that the Higgs field is primitive (so that $\big(M,t^0\big)$
becomes a germ of an $F$-manifold with Euler field)
and the restriction over $t^0$ is of type (Log).
This list turns out to be a sublist of the one in
Theorem~\ref{t8.5}.
Theorem~\ref{t8.6} follows easily from Theorem~\ref{t8.1}.
Theorem~\ref{t8.6} is also contained in the papers~\cite{DH20-3}
and~\cite{DH20-2}, the generic types (Bra), (Reg) and (Log)
are in~\cite{DH20-3}, the generic type (Sem)
is in~\cite{DH20-2}.
The proofs there are completely different. They build on the
formal classification of $(T)$-structures in~\cite{DH20-1}.
\subsection{Classification results}\label{c8.1}
\begin{Theorem}\label{t8.1}\quad
\begin{enumerate}\itemsep=0pt
\item[$(a)$]
Consider a rank $2$ $(TE)$-structure $\big(H\to{\mathbb C}\times \big(M,t^0\big),\nabla\big)$
over a $2$-dimen\-sio\-nal germ $\big(M,t^0\big)$
with restriction over $t^0$ of type $($Log$)$,
with generically primitive Higgs field,
and such that the induced $F$-manifold structure on generic
points of $M$ extends to all of $M$.
There is a unique rank $2$ $(TE)$-structure
$\big(H^{[3]}\to{\mathbb C}\times({\mathbb C},0),\nabla^{[3]}\big)$ over $({\mathbb C},0)$
(with coordinate $t_2$) with trace free pole part,
with nonvanishing Higgs field
and with logarithmic restriction over $t_2=0$ such that
$({\mathcal O}(H),\nabla)$ arises from $\big({\mathcal O}(H^{[3]}),\nabla^{[3]}\big)$
as follows. There are coordinates $t=(t_1,t_2)$
on $\big(M,t^0\big)$ such that $\big(M,t^0\big)=\big({\mathbb C}^2,0\big)$
and a constant $c_1\in{\mathbb C}$ such that
\begin{gather}\label{8.1}
({\mathcal O}(H),\nabla)\cong\pr_2^*\big({\mathcal O}\big(H^{[3]}\big),\nabla^{[3]}\big)\otimes
{\mathcal E}^{(t_1+c_1)/z},
\end{gather}
where $\pr_2\colon \big(M,t^0\big)\to({\mathbb C},0)$, $(t_1,t_2)\mapsto t_2$
$\big($see Lemma~$\ref{t3.10}(a)$ for ${\mathcal E}^{(t_1+c_1)/z}\big)$.
The $(TE)$-structure $(H,\nabla)$ is of type $($Log$)$
over $({\mathbb C}\times\{0\},0)$ and of one generic type
$($Sem$)$ or $($Bra$)$ or $($Reg$)$ or $($Log$)$ over $({\mathbb C}\times{\mathbb C}^*,0)$.
\item[$(b)$] Vice versa, if $\big(H^{[3]},\nabla^{[3]}\big)$ is as in (a)
and $c_1\in{\mathbb C}$, then the $(TE)$-structure
$({\mathcal O}(H),\nabla):=\pr_2^*\big({\mathcal O}\big(H^{[3]}\big),\nabla^{[3]}\big)\otimes
{\mathcal E}^{(t_1+c_1)/z}$ over $\big(M,t^0\big)=\big({\mathbb C}^2,0\big)$ satisfies
the properties in $(a)$.
\item[$(c)$] The rank $2$ $(TE)$-structures $\big(H^{[3]},\nabla^{[3]}\big)$
over $({\mathbb C},0)$ with trace free pole part, nonvanishing Higgs field
and logarithmic restriction over $0$
are classified in Theorems~$\ref{t6.3}$ and~$\ref{t6.7}$.
They are in suitable coordinates the first $7$ of the $9$
cases in the list in Theorem~$\ref{t6.3}$
and the cases~\eqref{6.49} and~\eqref{6.50}
with $f=\frac{1}{k_1}t^{k_1}$ for some $k_1\in{\mathbb N}$
in Theorem~$\ref{t6.7}$. $($Though here the 6th case
in Theorem~$\ref{t6.3}$ is part of the cases~\eqref{6.49}
and~\eqref{6.50} in Theorem~$\ref{t6.7}.)$
\item[$(d)$] The explicit classification of the $(TE)$-structures
$(H,\nabla)$ in $(a)$ is as follows.
There are coordinates $(t_1,t_2)$ such that $\big(M,t^0\big)=\big({\mathbb C}^2,0\big)$,
and there is a ${\mathbb C}\{t,z\}$-basis $\underline{v}$ of ${\mathcal O}(H)_0$
whose matrices $A_1,A_2,B\in M_{2\times 2}({\mathbb C}\{t,z\})$ with
$z\nabla_{\partial_i}\underline{v}=\underline{v}A_i$,
$z^2\nabla_{\partial_z}\underline{v}=\underline{v}B$ are in the following
list of normal forms. The normal form is unique.
We always have
\begin{gather*}
A_1=C_1.
\end{gather*}
Always $M$ is an $F$-manifold with Euler field
in one of the normal forms in Theorems~$\ref{t2.2}$
and~$\ref{t2.3}$ $($in the case $(i)$ the product
$\partial_2\circ\partial_2$ is only almost in the normal form
in Theorem~$\ref{t2.2}$;
in the case $(iii)$ with $\alpha_4=-1$
the Euler field is only almost in the normal form
in Theorem~$\ref{t2.3})$.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Generic type $($Sem$){:}$
invariants $k_1,k_2\in{\mathbb N}$ with $k_2\geq k_1$,
$c_1,\rho^{(1)}\in{\mathbb C}$, $\zeta\in{\mathbb C}$ if $k_2-k_1\in 2{\mathbb N}$,
$\alpha_3\in{\mathbb R}_{\geq 0}\cup\H$ if $k_1=k_2$,
\begin{gather*}
\gamma := \frac{2}{k_1+k_2},
\\
A_2=\begin{cases} -\gamma^{-1}
\big(t_2^{k_1-1}C_2 + t_2^{k_2-1}E\big)
\quad\text{if}\ k_2-k_1>0\ \text{is odd,}
\\[.5ex]
-\gamma^{-1}\big(t_2^{k_1-1}C_2 +\zeta t_2^{(k_1+k_2)/2-1}D+\big(1-\zeta^2\big)t_2^{k_2-1}E\big)
\quad \text{if}\ k_2-k_1\in 2{\mathbb N},\!\!\!
\\[.5ex]
-\gamma^{-1}t_2^{k_1-1}D \quad\text{if}\ k_2=k_1,
\end{cases}
\\
B= \big({-}t_1-c_1+z\rho^{(1)}\big)C_1+(-\gamma t_2)A_2
+\begin{cases}
z\frac{k_1-k_2}{2(k_1+k_2)}D&\text{if}\ k_2>k_1,
\\
z\alpha_3 D&\text{if}\ k_2=k_1,\end{cases}
\\ \hphantom{B= }
\text{$F$-manifold }I_2(k_1+k_2)\ \big(\text{with }I_2(2)=A_1^2\big),\
\text{with}\ \partial_2\circ\partial_2
=\gamma^{-2}t_2^{k_1+k_2-2}\cdot \partial_1,
\\
E= (t_1+c_1)\partial_1 +\gamma t_2\partial_2\quad\text{Euler field}.
\end{gather*}
\item[$(ii)$] Generic type $($Bra$){:}$ invariants $k_1,k_2\in{\mathbb N}$,
$c_1,\rho^{(1)}\in{\mathbb C}$,
\begin{gather*}
\gamma := \frac{1}{k_1+k_2},
\\
A_2=-\gamma^{-1}\big(t_2^{k_1-1}C_2+t_2^{k_1+k_2-1}D-t_2^{k_1+2k_2-1}E\big),
\\
B= \big({-}t_1-c_1+z\rho^{(1)}\big)C_1+(-\gamma t_2)A_2+ z\frac{-k_2}{2(k_1+k_2)}D,
\\ \hphantom{B= }
\text{$F$-manifold }{\mathcal N}_2,
\text{ with }\partial_2\circ\partial_2=0,
\\
E= (t_1+c_1)\partial_1 +\gamma t_2\partial_2\quad\text{Euler field}.
\end{gather*}
\item[$(iii)$] Generic type $($Reg$){:}$
invariants $c_1,\rho^{(1)}\in{\mathbb C}$,
$\alpha_4\in{\mathbb C}\setminus\{-1\}$ if~$N^{\rm mon}=0$,
$\alpha_4\in{\mathbb Z}$ if~$N^{\rm mon}\neq 0$,
$k_1\in{\mathbb N}$ if~$N^{\rm mon}=0$,
$\widetilde k_1\in{\mathbb N}$ if~$N^{\rm mon}\neq 0$
$($with $k_1=\widetilde k_1$ if~$\alpha_4\neq -1$,
and $k_1=2\widetilde k_1$ if~$\alpha_4=-1)$,
\begin{gather*}
\gamma := \frac{1+\alpha_4}{k_1},
\\
A_2=\begin{cases}
-\gamma^{-1}t_2^{k_1-1}C_2&\text{if}\quad N^{\rm mon}=0,
\\[.5ex]
\widetilde k_1t_2^{\widetilde k_1-1}C_2&\text{if}\quad N^{\rm mon}\neq 0,
\end{cases}
\\
B= \big({-}t_1-c_1+z\rho^{(1)}\big)C_1+(-\gamma t_2)A_2+z\frac{1}{2}\alpha_4 D
\\ \hphantom{B= }
{}+ \begin{cases}
0 \qquad\text{if}\quad N^{\rm mon}=0,
\\
z^{\alpha_4+1}C_2\qquad\text{if}\quad N^{\rm mon}\neq 0,\quad
\alpha_4\in{\mathbb N}_0,
\\
-z^{-\alpha_4-1}t_2^{2\widetilde k_1}C_2 +z^{-\alpha_4}t_2^{\widetilde k_1}D+
z^{-\alpha_4+1}E\quad\text{if}\quad N^{\rm mon}\neq 0,\quad
\alpha_4\in{\mathbb Z}_{<0},\!\!\!\!\end{cases}
\\ \hphantom{B= }
\text{$F$-manifold }{\mathcal N}_2,\
\text{with}\ \partial_2\circ\partial_2=0,
\\
E= \left\{\!\!\begin{array}{ll}
(t_1+c_1)\partial_1 +\gamma t_2\partial_2&\text{if}\quad\alpha_4\neq -1,\\[.5ex]
(t_1+c_1)\partial_1 +\frac{1}{\widetilde k_1}t_2^{\widetilde k_1+1} \partial_2
&\text{if}\quad\alpha_4=-1\end{array}\!\!\right\}
\quad\text{Euler field}.
\end{gather*}
\item[$(iv)$] Generic type $($Log$)$:
invariants $k_1\in{\mathbb N}$,
$c_1,\rho^{(1)}\in{\mathbb C}$,
\begin{gather*}
A_2=k_1t_2^{k_1-1}C_2,
\\
B= \big({-}t_1-c_1+z\rho^{(1)}\big)C_1-z\frac{1}{2} D,
\quad
\text{$F$-manifold }{\mathcal N}_2,
\text{ with }\partial_2\circ\partial_2=0,\\
E= (t_1+c_1)\partial_1 \quad\text{Euler field}.
\end{gather*}
\end{enumerate}
\end{enumerate}
\end{Theorem}
Theorem~\ref{t8.1} is proved after Remark~\ref{t8.2}.
\begin{Remark}\label{t8.2}
The other normal forms in Remark~\ref{t6.6} for
the generic type (Sem) with $k_2-k_1\in 2{\mathbb N}$ and for the
generic type (Bra) give the following other normal
forms. In~both cases, the formulas for $A_1=C_1$,
$\gamma$, $B$, the $F$-manifold and $E$ are unchanged,
only the matrix $A_2$ changes.
For the generic type (Sem) with $k_2-k_1\in 2{\mathbb N}$, $A_2$ becomes
\begin{gather*}
A_2=-\gamma^{-1}\big(t_2^{k_1-1}C_2+t_2^{k_2-1}E\big) +
z\frac{k_2-k_1}{2}\zeta t_2^{(k_2-k_1-2)/2}E.
\end{gather*}
For the generic type (Bra), $A_2$ becomes
\begin{gather*}
A_2=-\gamma^{-1}t_2^{k_1-1}C_2+zk_2t_2^{k_2-1}E .
\end{gather*}
\end{Remark}
\begin{proof}[Proof of Theorem~\ref{t8.1}]
We prove the parts of Theorem~\ref{t8.1}
in the order $(c)$, $(d)$, $(b)$, $(a)$.
$(c)$ Consider a rank $2$ $(TE)$-structure
$\big(H^{[3]}\to{\mathbb C}\times({\mathbb C},0),\nabla^{[3]}\big)$
(with coordinate $t_2$ on $({\mathbb C},0)$)
with trace free pole part and with logarithmic
restriction over $t_2=0$. If it admits an
extension to a pure $(TLE)$-structure, it is contained in Theo\-rem~\ref{t6.3}. If not, then it is contained in~Theorem~\ref{t6.7}. The condition that the Higgs field
does not vanish excludes the 8th and 9th cases in Theo\-rem~\ref{t6.3} and the case~\eqref{6.51} = case (III)
in Theorem~\ref{t6.7}, see Remarks~\ref{t6.8}$(ii)$ and~$(iii)$.
$(d)$ Part $(d)$ makes for such a $(TE)$-structure
$\big(H^{[3]},\nabla^{[3]}\big)$ the $(TE)$-structure
$({\mathcal O}(H),\nabla)=\pr_2^*\big({\mathcal O}\big(H^{[3]}\big),\nabla^{[3]}\big)\otimes
{\mathcal E}^{(t_1+c_1)/z}$ explicit.
The coordinate $t$ and the matrix $A$ in Theorem~\ref{t6.3}
and in Remark~\ref{t6.8}$(iv)$ become now $t_2$ and $A_2$.
Here the matrices in the 6th case in Theorem~\ref{t6.3}
are not used, but the matrices in Remark~\ref{t6.8}$(iv)$.
The function $f$ in Remark~\ref{t6.8}$(iv)$ is now
specialized to $f=t_2^{k_1/2}$ if $\alpha_1=\alpha_2$
($\Rightarrow$ case (II) and~\eqref{6.54})
and to $f=t_2^{k_1}$ if $\alpha_1-\alpha_2\in{\mathbb N}$
(case~(I) and~\eqref{6.53} or case (II) and~\eqref{6.54}).
The new matrix $B$ is $(-t_1-c_1)C_1$ plus the matrix~$B$ in Theorem~\ref{t6.3} and in Remark~\ref{t6.8}$(iv)$.
In the normal forms in Remark~\ref{t6.8}$(iv)$ we replaced
$\alpha_1$ and $\alpha_2$ by $\rho^{(1)}$ and $\alpha_4$
as follows
\begin{gather*}
\rho^{(1)}:=\frac{\alpha_1+\alpha_2+1}{2},\qquad
\alpha_4:=\begin{cases}
\alpha_1-\alpha_2-1\in{\mathbb N}_0&\text{in}~\eqref{6.53},
\\
\alpha_2-\alpha_1-1\in{\mathbb Z}_{<0}&\text{in}~\eqref{6.54}.
\end{cases}
\end{gather*}
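These substitutions lose no information. An elementary check,
recorded here for convenience, recovers the leading exponents
from $\rho^{(1)}$ and $\alpha_4$:
\begin{gather*}
(\alpha_1,\alpha_2)=\Big(\rho^{(1)}+\frac{\alpha_4}{2},\,\rho^{(1)}-\frac{\alpha_4}{2}-1\Big)
\quad\text{in}~\eqref{6.53},\qquad
(\alpha_1,\alpha_2)=\Big(\rho^{(1)}-\frac{\alpha_4}{2}-1,\,\rho^{(1)}+\frac{\alpha_4}{2}\Big)
\quad\text{in}~\eqref{6.54}.
\end{gather*}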
$(b)$ Now part $(b)$ follows from inspection of the normal forms
in part $(d)$.
$(a)$ Consider a $(TE)$-structure
as in $(a)$. Choose coordinates $t=(t_1,t_2)$ on $\big(M,t^0\big)$
such that $\big(M,t^0\big)=\big({\mathbb C}^2,0\big)$ and the germ of the $F$-manifold
is in a normal form in Theorem~\ref{t2.2}
(especially $e=\partial_1$)
and the Euler field has the form $E=(t_1+c_1)\partial_1
+g(t_2)\partial_2$ for some $c_1\in{\mathbb C}$ and some
$g(t_2)\in{\mathbb C}\{t_2\}$.
Choose any ${\mathbb C}\{t,z\}$-basis $\underline{v}$ of
${\mathcal O}(H)_0$ and consider its matrices
$A_1$, $A_2 $, $B$ with $z\nabla_{\partial_i}\underline{v}=\underline{v}A_i$,
$z^2\nabla_{\partial_z}\underline{v}=\underline{v}B$.
Now $\partial_1=e$ implies $A_1^{(0)}=C_1$.
We make a base change with the matrix $T\in {\rm GL}_2({\mathbb C}\{t,z\})$
which is the unique solution of the differential equation
\begin{gather*}
\partial_1 T=-\bigg(\sum_{k\geq 1}A_1^{(k)}z^{k-1}\bigg)T,\qquad
T(z,0,t_2)=C_1.
\end{gather*}
Then the matrices $\widetilde A_1$, $\widetilde A_2$, $\widetilde B$
of the new basis $\underline{\widetilde v}=\underline{v}T$ satisfy
\begin{gather}\label{8.10}
\widetilde A_1=C_1,\qquad \partial_1\widetilde A_2=0,\qquad \partial_1\widetilde B=-C_1,
\end{gather}
because~\eqref{3.12} for $i=1$ and~\eqref{3.7}
and~\eqref{3.8} give
\begin{gather*}
0= z\partial_1T+A_1T-T\widetilde A_1=C_1T-T\widetilde A_1=T\big(C_1-\widetilde A_1\big),
\\
0= z\partial_1\widetilde A_2-z\partial_2\widetilde A_1+\big[\widetilde A_1,\widetilde A_2\big]= z\partial_1 \widetilde A_2,
\\
0= z\partial_1\widetilde B-z^2\partial_z\widetilde A_1+z\widetilde A_1+\big[\widetilde A_1,\widetilde B\big]= z\big(\partial_1\widetilde B+C_1\big).
\end{gather*}
In Lemmata~\ref{t3.10}$(c)$ and~\ref{t3.11}
we considered the $(TE)$-structure
$\big({\mathcal O}\big(H^{[2]}\big),\nabla^{[2]}\big)=({\mathcal O}(H),\nabla)\allowbreak\otimes {\mathcal E}^{\rho^{(0)}/z}$
with trace free pole part. Here $\rho^{(0)}=-t_1-c_1$.
Now~\eqref{8.10} shows that $\big(H^{[2]},\nabla^{[2]}\big)$ is the pull
back of its restriction $\big(H^{[3]},\nabla^{[3]}\big)$ to
$(\{0\}\times {\mathbb C},0)\subset \big({\mathbb C}^2,0\big)$. This and
$({\mathcal O}(H),\nabla)\cong \big({\mathcal O}\big(H^{[2]}\big),\nabla^{[2]}\big)\otimes
{\mathcal E}^{-\rho^{(0)}/z}$ in Lemma~\ref{t3.10}$(c)$ imply~\eqref{8.1}.
\end{proof}
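For concreteness, we spell out the smallest case of generic type (Sem)
in Theorem~\ref{t8.1}$(d)$ $(i)$; this is a pure specialization of the
normal form there, with no new content. For $(k_1,k_2)=(1,2)$, so for
the $F$-manifold $I_2(3)$, one has $\gamma=\frac{2}{3}$, and the normal
form becomes
\begin{gather*}
A_1=C_1,\qquad A_2=-\frac{3}{2}\big(C_2+t_2E\big),\\
B=\big({-}t_1-c_1+z\rho^{(1)}\big)C_1+t_2\big(C_2+t_2E\big)-\frac{z}{6}D,
\end{gather*}
with $\partial_2\circ\partial_2=\frac{9}{4}t_2\,\partial_1$ and Euler field
$E=(t_1+c_1)\partial_1+\frac{2}{3}t_2\partial_2$.
Because of $\gcd(k_1,k_2)=1$, this $(TE)$-structure appears also
in table~\eqref{8.12} below.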
\begin{Corollary}\label{t8.3}
The following table gives for each logarithmic rank $2$
$(TE)$-structure over a point~$t^0$ its unfoldings within
the set of $(TE)$-structures in Theorem~$\ref{t8.1}(d)$.
Here the set $\big\{\alpha_1^0,\alpha_2^0\big\}\subset{\mathbb C}$ is the
set of leading exponents in Theorem~$\ref{t4.20}$ of the
logarithmic $(TE)$-structure over~$t^0$.
So, in the case $N^{\rm mon}=0$, $\alpha_1^0$ and $\alpha_2^0\in{\mathbb C}$
are arbitrary. In~the case $N^{\rm mon}\neq 0$, they satisfy
$\alpha_1^0-\alpha_2^0\in{\mathbb N}_0$. Two conditions are
$c_1=0$ and $\rho^{(1)}=\frac{\alpha_1^0+\alpha_2^0}{2}$.
The other conditions and the other invariants
$($though without their definition domains$)$
are given in the table.
All invariants in Theorem~$\ref{t8.1}(d)$ which are not
mentioned here are $($intended to be$)$ arbitrary:
\begin{gather}\label{8.11}
\def\arraystretch{1.25}
\begin{tabular}{l|l|l|l}
\hline
\multicolumn{1}{c|}{Generic type}&\multicolumn{1}{c|}{Invariants} & \multicolumn{1}{c|}{$N^{\rm mon}$} &\multicolumn{1}{c}{Condition}
\\
\hline
$($Sem$)\colon\ k_2>k_1$ & $k_1,k_2,\zeta$ & $=0$&
$\alpha_1^0-\alpha_2^0=\pm \frac{k_1-k_2}{k_1+k_2}$
\\
$($Sem$)\colon\ k_2=k_1$ & $k_1,k_2,\alpha_3$ & $=0$&
$\alpha_1^0-\alpha_2^0=\pm 2\alpha_3$
\\
\hline
$($Bra$)$ & $k_1,k_2$ &$ =0$&$\alpha_1^0-\alpha_2^0=\pm \frac{-k_2}{k_1+k_2} $
\\
\hline
$($Reg$)$ & $k_1,\alpha_4$ & $=0$ &$\alpha_1^0-\alpha_2^0=\pm \alpha_4$
\\
\hline
$($Reg$)$ & $\widetilde k_1,\alpha_4$ & $\neq 0$ &$\alpha_1^0-\alpha_2^0=|\alpha_4| $
\\
\hline
$($Log$)$ &$ k_1$ & $=0$ & $\alpha_1^0-\alpha_2^0=\pm 1$
\\
\hline
\end{tabular}
\end{gather}
\end{Corollary}
\begin{proof} This follows from inspection of the cases
in Theorem~\ref{t8.1}$(d)$.
\end{proof}
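To illustrate how table~\eqref{8.11} is read (with an arbitrarily
chosen value, for illustration only), suppose $N^{\rm mon}=0$ and
$\alpha_1^0-\alpha_2^0=\frac{1}{3}$. The condition in the first line becomes
\begin{gather*}
\frac{k_2-k_1}{k_1+k_2}=\frac{1}{3}\qquad\Longleftrightarrow\qquad k_2=2k_1,
\end{gather*}
so the unfoldings of generic type (Sem) with $k_2>k_1$ have
$(k_1,k_2)\in\{(1,2),(2,4),(3,6),\dots\}$. Similarly, generic type (Bra)
requires $k_1=2k_2$, generic type (Sem) with $k_2=k_1$ requires
$\alpha_3=\frac{1}{6}$, generic type (Reg) requires $\alpha_4=\pm\frac{1}{3}$,
and there are no unfoldings of generic type (Log), as these would require
$\alpha_1^0-\alpha_2^0=\pm 1$.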
\begin{Remark}
Beware of the following:
\begin{enumerate}\itemsep=0pt
\item[$(i)$] In the generic case (Sem) with $k_1=k_2$ we have
$\alpha_3\in{\mathbb R}_{\geq 0}\cup\H$. Here $\widetilde\alpha_3=-\alpha_3$
is excluded, as it gives an isomorphic unfolding.
\item[$(ii)$] In the generic cases (Reg) with
$\alpha_1^0-\alpha_2^0\in{\mathbb C}\setminus\{0\}$ almost always
$\alpha_4=\alpha_1^0-\alpha_2^0$ and $\widetilde\alpha_4=-\alpha_4$
give (for the same $k_1\in{\mathbb N}$ respectively $\widetilde k_1\in{\mathbb N}$)
two different unfoldings.
The~only exception is the case $N^{\rm mon}=0$ and
$\alpha_1^0-\alpha_2^0=\pm 1$, as then $\alpha_4=-1$
is not allowed.
\item[$(iii)$] In the generic case (Log), one has
one unfolding (and not two unfoldings) for each $k_1\in{\mathbb N}$.
\item[$(iv)$] Unfoldings of generic type (Sem) with $k_2>k_1$ and of
generic type (Bra) exist only if
$\alpha_1^0-\alpha_2^0\in (-1,1)\cap{\mathbb Q}^*$ and $N^{\rm mon}=0$.
\end{enumerate}
\end{Remark}
\begin{Theorem}\label{t8.5}\quad
\begin{enumerate}\itemsep=0pt
\item[$(a)$] Any unfolding of a rank $2$ $(TE)$-structure of type $($Log$)$
over a point is induced by one in the
following subset of $(TE)$-structures in Theorem~$\ref{t8.1}(d)$:
\begin{gather}\label{8.12}
\def\arraystretch{1.25}
\begin{tabular}{l|l}
\hline
\multicolumn{1}{c|}{Generic type and invariants} & \multicolumn{1}{c}{Condition}
\\ \hline
$($Sem$)\colon\ k_2-k_1>0$ odd & $\gcd(k_1,k_2)=1$
\\
$($Sem$)\colon\ k_2-k_1\in 2{\mathbb N}$, $\zeta= 0$ & $\gcd(k_1,k_2)=1$
\\
$($Sem$)\colon\ k_2-k_1\in 2{\mathbb N}$, $\zeta\neq 0$ &$\gcd\big(k_1,\frac{k_1+k_2}{2}\big)=1$
\\
$($Sem$)\colon\ k_2=k_1$ & $k_2=k_1=1 $
\\ \hline
$($Bra$)$ & $\gcd(k_1,k_2)=1$
\\ \hline
$($Reg$)\colon\ N^{\rm mon}=0$ & $k_1=1$
\\ \hline
$($Reg$)\colon\ N^{\rm mon}\neq 0$ & $\widetilde k_1=1$
\\ \hline
$($Log$)\colon\ N^{\rm mon}=0$ &$ k_1=1$
\\
\hline
\end{tabular}
\end{gather}
\item[$(b)$] The inducing $(TE)$-structure is not unique only if
the original $(TE)$-structure has the form
$\varphi^*\big({\mathcal O}\big(H^{[5]}\big),\nabla^{[5]}\big)\otimes {\mathcal E}^{-\rho^{(0)}/z}$,
where $\big(H^{[5]},\nabla^{[5]}\big)$ is a logarithmic
$(TE)$-structure over a point~$t^{[5]}$ and
$\varphi\colon \big(M,t^0\big)\to\big\{t^{[5]}\big\}$ is the projection,
and $\big(H^{[5]},\nabla^{[5]}\big)$ is not one with $N^{\rm mon}\neq 0$
and equal leading exponents $\alpha_1=\alpha_2$.
Then the original $(TE)$-structure is of type $($Log$)$
everywhere with Higgs field endomorphisms
$C_X\in{\mathcal O}_{(M,t^0)}\cdot\id$ for any $X\in {\mathcal T}_{(M,t^0)}$.
\item[$(c)$] The $(TE)$-structures in the list in $(a)$ are universal
unfoldings of themselves.
\end{enumerate}
\end{Theorem}
The proof of Theorem~\ref{t8.5} will be given in
Section~\ref{c8.3}.
\begin{Theorem}\label{t8.6}
The set of rank $2$ $(TE)$-structures
with primitive $($not just generically primitive$)$
Higgs field over a germ $\big(M,t^0\big)$
of an $F$-manifold and with restriction of type $($Log$)$ over $t^0$
is $($after the choice of suitable coordinates$)$
the proper subset of those in the list~\eqref{8.12}
in Theorem~$\ref{t8.5}$ which satisfy $k_1=1$
respectively $\widetilde k_1=1$. In~the cases $($Reg$)$ and $($Log$)$, it coincides with
the list~\eqref{8.12}. In~the cases $($Sem$)$ and $($Bra$)$,
it is a proper subset.
\end{Theorem}
\begin{proof} The set of rank $2$ $(TE)$-structures
with primitive Higgs field over a germ $\big(M,t^0\big)$
of an~$F$-manifold and with restriction of type (Log) over $t^0$
consists by Theorem~\ref{t8.1}$(a){+}(d)$ of those
$(TE)$-structures in Theorem~\ref{t8.1}$(d)$
which satisfy $A_2(t_2=0)\notin{\mathbb C}\cdot C_1$.
This holds if and only if
$k_1=1$ respectively $\widetilde k_1=1$
$\big(\widetilde k_1=1$ if the generic type is (Reg) and $N^{\rm mon}\neq 0\big)$,
and then $A_2(t_2=0)\in\big\{{-}\gamma^{-1}C_2,-\gamma^{-1}D,C_2\big\}$.
Obviously, this is a proper
subset of those in table~\eqref{8.12} in the generic
cases (Sem) and (Bra), and it coincides with those
in table~\eqref{8.12} in the generic cases
(Reg) and (Log).
\end{proof}
\subsection[$(TE)$-structures over given $F$-manifolds with Euler fields]{$\boldsymbol{(TE)}$-structures over given $\boldsymbol{F}$-manifolds with Euler fields}\label{c8.2}
\begin{Remarks}
For a given germ $\big(\big(M,t^0\big),\circ,e,E\big)$ of an $F$-manifold
with Euler field, define
\begin{gather*}
B_1\big(\big(M,t^0\big),\circ,e,E\big):=
\big\{(TE)\text{-structures over }\big(M,t^0\big)
\text{ with generically primitive Higgs}
\\ \hphantom{B_1(\big(M,t^0\big),\circ,e,E):=\big\{}
\text{field, inducing the given $F$-manifold structure with Euler field}\big\},
\\
B_2\big(\big(M,t^0\big),\circ,e,E\big):=\{(TE)\text{-structures in }B_1
\text{ which are in table~\eqref{8.12}}\},
\\
B_3\big(\big(M,t^0\big),\circ,e,E\big):=
\{(TE)\text{-structures in }B_1\text{ with primitive Higgs fields}\}.
\end{gather*}
We can now answer the question of how big these sets are.
Often we write $B_j$ instead of $B_j\big(\big(M,t^0\big),\circ,e,E\big)$
when the germ $\big(\big(M,t^0\big),\circ,e,E\big)$ is fixed.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] First we consider the cases when
the germ $\big(\big(M,t^0\big),\circ, e,E\big)$ is regular.
Compare Remark~\ref{t2.6}$(ii)$ and Remark~\ref{t3.17}$(iii)$.
By Malgrange's unfolding result Theorem~\ref{t3.16}$(c)$,
any $(TE)$-structure over $\big(M,t^0\big)$ is the universal
unfolding of its restriction over $t^0$,
and it is its own universal unfolding.
So then $B_1=B_2=B_3$, and the classification of
the $(TE)$-structures over points in Section~\ref{c4}
determines this space $B_1$.
In the case of $A_1^2$ with $E=(u_1+c_1)e_1+(u_2+c_2)e_2$
with $c_1\neq c_2$, any $(TE)$-structure over~$t^0$
is of type (Sem). Theorem~\ref{t4.5}
shows that then $B_1$ is connected and 4-dimensional.
The parameters are the two regular singular exponents
and two Stokes parameters.
In the case of ${\mathcal N}_2$ with $E=(t_1+c_1)\partial_1+\partial_2$,
any $(TE)$-structure over $t^0$ is either of type (Bra)
or of type (Reg). Then $B_1$ has one component
for type (Bra) and countably many components for type (Reg).
The component for type (Bra) is connected and 3-dimensional.
The parameters are given in Theorem~\ref{t4.11},
they are $\rho^{(1)}$, $\delta^{(1)}$ and
$\Eig(M^{\rm mon})$ \big(here $\rho^{(0)}\big(t^0\big)=-c_1$ is fixed, and
one eigenvalue and $\rho^{(1)}$ determine the other eigenvalue\big).
Corollary~\ref{t4.18} gives the countably many components
for type (Reg). One is 1-dimensional, the others are
2-dimensional.
\item[$(ii)$] Now we consider the cases when the germ
$\big(\big(M,t^0\big),\circ,e,E\big)$ is not regular.
Then $E|_{t^0}=c_1\partial_1$ for some $c_1\in{\mathbb C}$.
If $({\mathcal O}(H),\nabla)$ is a $(TE)$-structure in
$B_j\big(\big(M,t^0\big),\circ,e,E\big)$, then
$({\mathcal O}(H),\nabla)\otimes {\mathcal E}^{-c_1/z}$ is a $(TE)$-structure in
$B_j\big(\big(M,t^0\big),\circ,e,E-c_1\partial_1\big)$.
Therefore we can and will restrict to the cases with
$E|_{t^0}=0$.
Theorem~\ref{t8.1}$(d)$ gives the $(TE)$-structures in $B_1$,
Theorem~\ref{t8.5}$(a)$ gives the $(TE)$-structures in $B_2$,
and Theorem~\ref{t8.6} gives the $(TE)$-structures in $B_3$.
For each germ $\big(\big(M,t^0\big),\circ,e,E\big)$ with $E|_{t^0}=0$
\begin{gather*}
B_1\supset B_2\supset B_3.
\end{gather*}
In the cases $A_1^2$ and $I_2(m)$, the Euler field with
$E|_{t^0}=0$ is unique on $\big(M,t^0\big)$, therefore we do not
write it down.
In {\it the case of $I_2(m)$ with $m\in 2{\mathbb N}$} (this includes
the case $A_1^2=I_2(2)$)
\begin{gather}
B_1(I_2(m))\cong \dot{\bigcup\limits_{(k_1,k_2)\in{\mathbb N}^2\colon
k_1+k_2=m,k_2\geq k_1}}{\mathbb C}^2,\nonumber
\\
B_2(I_2(m))\cong \dot{\bigcup\limits_{(k_1,k_2)\in{\mathbb N}^2\colon
k_1+k_2=m,k_2\geq k_1,\gcd(k_1,m/2)=1}}{\mathbb C}^2, \nonumber
\\
B_3(I_2(m))\cong {\mathbb C}^2,\quad\text{here}\quad(k_1,k_2)=(1,m-1).
\label{8.17}
\end{gather}
The 2 continuous parameters are the regular singular exponents of the
$(TE)$-structures at generic points in $M$.
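For instance, for $m=4$ the pairs with $k_1+k_2=4$ and $k_2\geq k_1$
are $(1,3)$ and $(2,2)$, and $\gcd(k_1,m/2)=\gcd(k_1,2)=1$ holds only
for $(1,3)$, so
\begin{gather*}
B_1(I_2(4))\cong{\mathbb C}^2\ \dot\cup\ {\mathbb C}^2,\qquad
B_2(I_2(4))=B_3(I_2(4))\cong{\mathbb C}^2,\quad\text{here}\quad (k_1,k_2)=(1,3).
\end{gather*}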
In {\it the case of $I_2(m)$ with $m\geq 3$ odd},
\begin{gather}
B_1(I_2(m))\cong \dot{\bigcup\limits_{(k_1,k_2)\in{\mathbb N}^2\colon
k_1+k_2=m,k_2>k_1}}{\mathbb C},\nonumber
\\
B_2(I_2(m))\cong \dot{\bigcup\limits_{(k_1,k_2)\in{\mathbb N}^2\colon
k_1+k_2=m,k_2>k_1,\gcd(k_1,k_2)=1}}{\mathbb C}, \nonumber
\\
B_3(I_2(m))\cong{\mathbb C}, \qquad\text{here}\quad (k_1,k_2)=(1,m-1).
\label{8.18}
\end{gather}
For odd $m\geq 3$, the regular singular exponents of the
$(TE)$-structures at generic points in~$M$ coincide and
give the continuous parameter.
Especially, for $m\in\{2,3\}$
\begin{gather*}
B_1(I_2(m))=B_2(I_2(m))=B_3(I_2(m))\cong
\begin{cases} {\mathbb C}^2&\text{if}\ m=2,\\
{\mathbb C}&\text{if}\ m=3.\end{cases}
\end{gather*}
The $F$-manifold ${\mathcal N}_2$ allows by Theorem~\ref{t2.3}
many nonisomorphic Euler fields with $E|_{t^0}=0$, the
cases~\eqref{2.10}--\eqref{2.12} with $c_1=0$.
{\it The case~\eqref{2.10}}, $E=t_1\partial_1$:
Here each $(TE)$-structure has generic type (Log)
and semisimple monodromy. Here
\begin{gather*}
B_1({\mathcal N}_2,E)\cong \dot{\bigcup\limits_{k_1\in{\mathbb N}}}{\mathbb C},
\\
B_2({\mathcal N}_2,E)= B_3({\mathcal N}_2,E) \cong {\mathbb C},\qquad
\text{here}\quad k_1=1.\nonumber
\end{gather*}
The continuous parameter is $\rho^{(1)}$ in Theorem~\ref{t8.1}$(d)$ $(iv)$ or, equivalently, one of the
two residue eigenvalues \big(which are $\rho^{(1)}\pm\frac{1}{2}$\big).
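Indeed, writing $B=\sum_{k\geq 0}B^{(k)}z^k$ and assuming, as the normal
forms suggest, that $C_1$ is the unit matrix and $D$ is trace free with
eigenvalues $\pm 1$, the coefficient of $z$ in the matrix $B$ in
Theorem~\ref{t8.1}$(d)$ $(iv)$ is
\begin{gather*}
B^{(1)}=\rho^{(1)}C_1-\frac{1}{2}D,\qquad
\Eig\big(B^{(1)}\big)=\Big\{\rho^{(1)}+\frac{1}{2},\,\rho^{(1)}-\frac{1}{2}\Big\}.
\end{gather*}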
{\it The case~\eqref{2.12}},
$E=t_1\partial_1+t_2^r\big(1+c_3t_2^{r-1}\big)\partial_2$
for some $r\in{\mathbb Z}_{\geq 2}$ and some $c_3\in{\mathbb C}$:
Here each $(TE)$-structure has generic type (Reg) and
satisfies $N^{\rm mon}\neq 0$. Here
\begin{gather*}
B_1({\mathcal N}_2,E) \begin{cases}
=\varnothing,& \text{if}\quad c_3\in{\mathbb C}^*,\\
\cong {\mathbb C}&\text{if}\quad c_3=0,\end{cases}
\\
B_2({\mathcal N}_2,E)=B_3({\mathcal N}_2,E) \begin{cases}
=\varnothing,& \text{if}\quad c_3\in{\mathbb C}^*\quad \text{or}\quad r\geq 3,\\
=B_1({\mathcal N}_2,E)&\text{if}\quad c_3=0\quad \text{and}\quad r=2.
\end{cases}
\end{gather*}
So, $({\mathcal N}_2,E)$ with $c_3\in{\mathbb C}^*$ does not allow
$(TE)$-structures over it, and $({\mathcal N}_2,E)$ with
$c_3=0$ and $r\geq 3$ does not allow $(TE)$-structures
over it with primitive Higgs field.
If $B_j({\mathcal N}_2,E)\allowbreak\neq\varnothing$ then $B_j({\mathcal N}_2,E)\cong{\mathbb C}$
and the continuous parameter is $\rho^{(1)}$ in
Theorem~\ref{t8.1}$(d)$ $(iii)$.
{\it The case~\eqref{2.11}}, $E=t_1\partial_1+c_2t_2\partial_2$
for some $c_2\in{\mathbb C}^*$: This is a rich case. Here we
decompose $B_j=B_j({\mathcal N}_2,E)$ as
\begin{gather*}
B_j=B_j^{({\rm Reg}),0}
\ \dot\cup\ B_j^{({\rm Reg}),\neq 0}
\ \dot\cup\ B_j^{({\rm Bra})},
\end{gather*}
where the first set contains $(TE)$-structures of generic
type (Reg) with $N^{\rm mon}=0$, the second set contains
$(TE)$-structures of generic type (Reg) with $N^{\rm mon}\neq 0$,
and the third set contains $(TE)$-structures of
generic type (Bra). Then
\begin{gather*}
B_1^{({\rm Reg}),0}\cong \dot{\bigcup\limits_{k_1\in{\mathbb N}}}{\mathbb C},\qquad
B_2^{({\rm Reg}),0}=B_3^{({\rm Reg}),0}\cong{\mathbb C},
\\
B_1^{({\rm Reg}),\neq 0}
\begin{cases}
=\varnothing&\text{if}\quad c_2\in{\mathbb C}^*\setminus{\mathbb Q}^*,
\\
\cong\dot\bigcup_{(k_1,\alpha_4)\in{\mathbb N}\times{\mathbb Z}\colon k_1c_2=1+\alpha_4}{\mathbb C}
&\text{if}\quad c_2\in{\mathbb Q}^*,
\end{cases}
\\
B_2^{({\rm Reg}),\neq 0}=B_3^{({\rm Reg}),\neq 0}
\begin{cases}
=\varnothing&\text{if}\quad c_2\in{\mathbb C}\setminus{\mathbb Z},
\\
\cong {\mathbb C}&\text{if}\quad c_2\in{\mathbb Z}\setminus\{0\},
\end{cases}
\\%\label{8.25}
B_1^{({\rm Bra})}=B_2^{({\rm Bra})}=B_3^{({\rm Bra})}=\varnothing
\qquad\text{if}\quad c_2^{-1}\in{\mathbb C}^*\setminus {\mathbb Z}_{\geq 2},
\\
\left.\!\!\! \begin{array}{l}
B_1^{({\rm Bra})} \cong\dot
\bigcup_{(k_1,k_2)\in{\mathbb N}^2\colon k_1+k_2=c_2^{-1}}{\mathbb C},\\
B_2^{({\rm Bra})} \cong\dot
\bigcup_{(k_1,k_2)\in{\mathbb N}^2\colon k_1+k_2=c_2^{-1},\gcd(k_1,k_2)=1}
{\mathbb C},\\
B_3^{({\rm Bra})} \cong{\mathbb C},\quad\text{here }(k_1,k_2)=\big(1,c_2^{-1}-1\big)
\end{array}\!\!\right\}\qquad\text{if}\quad c_2^{-1}\in {\mathbb Z}_{\geq 2}.
\nonumber
\end{gather*}
\end{enumerate}
\end{Remarks}
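To illustrate the last case (with an arbitrarily chosen value
$c_2=\frac{1}{3}$, for illustration only): then $c_2\in{\mathbb Q}^*\setminus{\mathbb Z}$
and $c_2^{-1}=3\in{\mathbb Z}_{\geq 2}$, so the formulas above give
\begin{gather*}
B_1^{({\rm Reg}),\neq 0}\cong\dot{\bigcup\limits_{\alpha_4\in{\mathbb N}_0}}{\mathbb C}
\quad\big(\text{with}\ k_1=3(1+\alpha_4)\big),\qquad
B_2^{({\rm Reg}),\neq 0}=B_3^{({\rm Reg}),\neq 0}=\varnothing,\\
B_1^{({\rm Bra})}\cong B_2^{({\rm Bra})}\cong{\mathbb C}\ \dot\cup\ {\mathbb C}
\quad\big(\text{from}\ (k_1,k_2)\in\{(1,2),(2,1)\}\big),\qquad
B_3^{({\rm Bra})}\cong{\mathbb C}.
\end{gather*}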
\begin{Remarks}\label{t8.8}\quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] Theorem~\ref{t8.1}$(d)$ $(i)$ says how many $(TE)$-structures
exist over the $F$-manifold with \mbox{Euler} field $I_2(m)$,
such that the Higgs field is generically primitive
and induces this $F$-manifold structure.
There are $\big[\frac{m}{2}\big]$ holomorphic families
from the different choices of $(k_1,k_2)\allowbreak\in{\mathbb N}^2$ with
$k_2\geq k_1$ and $k_1+k_2=m$.
They have 2 parameters if $m$ is even and 1 parameter
if $m$ is odd, compare~\eqref{8.17} and~\eqref{8.18}.
For each $I_2(m)$, only one of these families consists
of $(TE)$-structures with primitive Higgs fields.
\item[$(ii)$] Consider $m\geq 3$.
Write $M={\mathbb C}^2$ for the $F$-manifold $I_2(m)$
in Theorem~\ref{t2.2}, and $M^{\rm [log]}={\mathbb C}\times\{0\}$
for the subset of points where the multiplication
is not semisimple. Over these points the restricted
$(TE)$-structures are of type (Log).
We checked that there are $\big[\frac{m}{2}\big]$ Stokes
structures which give $(TE)$-structures on $M\setminus M^{\rm [log]}$.
Because of $(i)$, all these $(TE)$-structures extend
holomorphically over $M^{\rm [log]}$, and they give the
$\big[\frac{m}{2}\big]$ holomorphic families of $(TE)$-structures
on $I_2(m)$ in $(i)$.
\item[$(iii)$] Especially remarkable is the case $A_1^2=I_2(2)$.
There Theorem~\ref{t8.1}$(a){+}(d)$ $(i)$ implies directly that
each holomorphic $(TE)$-structure over $A_1^2$
with generically primitive Higgs field has primitive
Higgs field and is an elementary model (Definition~\ref{t4.4}),
so it has trivial Stokes structure.
\item[$(iv)$] This result is related to much more general work
in~\cite{CDG17} and~\cite{Sa19}
on meromorphic connections over the $F$-manifold
$A_1^n$ near points where some of the canonical
coordinates coincide.
Let us restrict to the special case of a neighborhood
of a point where all canonical coordinates coincide.
This generalizes the germ at 0 of $A_1^2$ to the germ at 0
of $A_1^n$.
\cite[Theorem~1.1]{CDG17} and~\cite[Theorem~3]{Sa19}
both give the triviality of the Stokes structure.
However, their starting points are slightly more restrictive.
\cite{CDG17} starts in our notation from pure
$(TLE)$-structures with primitive Higgs fields.
The step before in the case of $A_1^2$,
passing from a $(TE)$-structure over $A_1^2$ to a pure
$(TLE)$-structure, is done essentially in our Theorem~\ref{t6.2} $(a)$ $(iii)$.
Our argument for the triviality of the Stokes structure
is then contained in the proof of Theorem~\ref{t6.3}.
\cite{Sa19} starts in our notation from $(TE)$-structures
which are already formally isomorphic to sums
$\bigoplus_{i=1}^n{\mathcal E}^{u_i/z}z^{\alpha_i}$.
Then it is shown that they are also holomorphically
isomorphic to such sums. In~this special case, Corollary 5.7 in~\cite{DH20-2}
gives this implication, too.
\item[$(v)$] In $(ii)$ we stated that in the case of $I_2(m)$
with $m\geq 3$,
each $(TE)$-structure on $M\setminus M^{\rm [log]}$ with primitive
Higgs field extends holomorphically to $M$. In~the case of ${\mathcal N}_2$ this does not hold in general.
For example, start with the flat rank $2$ bundle
$H'\to{\mathbb C}^*\times M$, where $M={\mathbb C}^2$ (with coordinates
$t=(t_1,t_2)$) with semisimple monodromy with two
different eigenvalues $\lambda_1$ and $\lambda_2$.
Choose $\alpha_1,\alpha_2\in{\mathbb C}$ with
${\rm e}^{-2\pi {\rm i}\alpha_j}=\lambda_j$. Let $s_j\in C^{\alpha_j}$
be generating elementary sections.
Define the new basis
\begin{gather*}
\underline{v}=(v_1,v_2)=\big({\rm e}^{t_1/z}\big(s_1+{\rm e}^{-1/t_2}s_2\big),{\rm e}^{t_1/z}(zs_2)\big)
\end{gather*}
on $H'|_{M'}$, where $M':=M\setminus ({\mathbb C}\times\{0\})$. Then
\begin{gather*}
z\nabla_{\partial_1}\underline{v}=\underline{v}\cdot C_1,
\\
z\nabla_{\partial_2}\underline{v}=\underline{v}\cdot t_2^{-2}{\rm e}^{-1/t_2}C_2,
\\
z^2\nabla_{\partial_z}\underline{v}=\underline{v}\cdot
\bigg({-}t_1C_1+(\alpha_2-\alpha_1){\rm e}^{-1/t_2}C_2 +
z\begin{pmatrix}\alpha_1&0\\0&\alpha_2+1\end{pmatrix}\bigg).
\end{gather*}
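The second of these three formulas can be checked directly; one uses
only that the elementary sections $s_1$, $s_2$ are $\nabla$-flat in the
$t$-directions, as they are pulled back from~${\mathbb C}^*$:
\begin{gather*}
z\nabla_{\partial_2}v_1
=z\,{\rm e}^{t_1/z}\partial_2\big({\rm e}^{-1/t_2}\big)s_2
=t_2^{-2}{\rm e}^{-1/t_2}\,{\rm e}^{t_1/z}(zs_2)
=t_2^{-2}{\rm e}^{-1/t_2}v_2,\qquad
z\nabla_{\partial_2}v_2=0.
\end{gather*}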
So, we obtain a regular singular $(TE)$-structure on
$M'$ with primitive Higgs field.
The $F$-manifold structure on $M'$ is given by
$e=\partial_1$ and $\partial_2\circ\partial_2=0$, so it is ${\mathcal N}_2$,
and the Euler field is
$E=t_1\partial_1 + (\alpha_1-\alpha_2)t_2^2\partial_2$.
$F$-manifold and Euler field extend from
$M'$ to $M$, but not the $(TE)$-structure.
\end{enumerate}
\end{Remarks}
\subsection{Proof of Theorem~\ref{t8.5}}\label{c8.3}
$(a)$ Let $\big(H\to{\mathbb C}\times \big(M,t^0\big),\nabla\big)$ be an unfolding of
a $(TE)$-structure of type (Log) over $t^0$.
The $(TE)$-structure $\big(H^{[2]}\to{\mathbb C}\times \big(M,t^0\big),\nabla^{[2]}\big)$
in Lemma~\ref{t3.10}$(c)$ with
$\big({\mathcal O}\big(H^{[2]}\big),\nabla^{[2]}\big)=({\mathcal O}(H),\nabla)\otimes
{\mathcal E}^{-\rho^{(0)}/z}$ has trace free pole part.
Lemma~\ref{t3.10}$(d)$ and~$(e)$ apply.
Because of them, it is sufficient to prove that the
$(TE)$-structure $\big(H^{[2]},\nabla^{[2]}\big)$ is induced by
a~$(TE)$-structure
$\big(H^{[3]}\to{\mathbb C}\times \big(M^{[3]},t^{[3]}\big),\nabla^{[3]}\big)$
over $\big(M^{[3]},t^{[3]}\big)=({\mathbb C},0)$
via a map $\varphi\colon \big(M,t^0\big)\to\big(M^{[3]},t^{[3]}\big) $,
where the $(TE)$-structure $\big(H^{[3]},\nabla^{[3]}\big)$
is one of the $(TE)$-structures in the 1st to 7th cases
in Theorem~\ref{t6.3} or one of the $(TE)$-structures
in the cases (I) or (II) in Theorem~\ref{t6.7} with invariants as in table~\eqref{8.12}.
Then the $(TE)$-structure $\big(H^{[4]},\nabla^{[4]}\big)$ which is
constructed in Lemma~\ref{t3.10}$(d)$ from $\big(H^{[3]},\nabla^{[3]}\big)$
is one of the $(TE)$-structures in Theorem~\ref{t8.1}$(d)$
with invariants as in table~\eqref{8.12},
and it induces by Lemma~\ref{t3.10}$(e)$ the $(TE)$-structure
$(H,\nabla)$.
From now on we suppose $\rho^{(0)}=0$, so
$(H,\nabla)=\big(H^{[2]},\nabla^{[2]}\big)$.
We consider the invariants $\delta^{(0)},\delta^{(1)}\in
{\mathcal O}_{M,t^0}$ and ${\mathcal U}$ and the four possible {\it generic types}
(Sem), (Bra), (Reg) and (Log),
which are defined by the following table,
analogously to Definition~\ref{t6.1},{\samepage
\[
\def\arraystretch{1.3}
\begin{tabular}{c|c|c|c}
\hline
(Sem) & (Bra) & (Reg)& (Log)
\\ \hline
$\delta^{(0)}\neq 0$ & $\delta^{(0)}=0$, $\delta^{(1)}\neq 0$ &
$\delta^{(0)}=\delta^{(1)}=0$, ${\mathcal U}\neq 0$ & ${\mathcal U}=0$
\\
\hline
\end{tabular}
\]
First we treat the generic types (Reg) and (Log),
then the generic types (Sem) and (Bra).}
\medskip\noindent
{\it Generic types $($Reg$)$ and $($Log$)$.}
Then the $(TE)$-structure $(H,\nabla)$ is regular singular.
We can use the results in Section~\ref{c7}
(which built on Theorems~\ref{t6.3} and~\ref{t6.7}).
Choose a marking for the $(TE)$-structure $(H,\nabla)$.
Then by Remark~\ref{t7.5}$(i)$, there is a unique map
$\varphi\colon \big(M,t^0\big)\to M^{(H^{{\rm ref},\infty},M^{\rm ref}),{\rm reg}}$
which maps $t\in M$ to the point in the moduli space
over which one has up to isomorphism the same
marked $(TE)$-structure as over $t$. The map
$\varphi$ is holomorphic. By~Re\-mark~\ref{t7.5}$(i){+}(ii)$
it maps $\big(M,t^0\big)$ to one projective curve
which is isomorphic to
$M^{(3),0,\alpha_1,\alpha_2}$ or
$M^{(3),\neq 0,\alpha_1,\alpha_2}$ or
$M^{(3),0,\alpha_1,{\rm log}}$.
The $(TE)$-structure $(H,\nabla)$ is induced by the
$(TE)$-structure over this curve via the map $\varphi$.
The point $t^0$ is mapped to $0$ or $\infty$ in the
cases $M^{(3),0,\alpha_1,\alpha_2}$ or~$M^{(3),\neq 0,\alpha_1,\alpha_2}$
\big(not 0 in the case $M^{(3),\neq 0,\alpha_1,\alpha_1}$\big)
as the $(TE)$-structure over $t^0$ is logarithmic.
The germs at $0$ and $\infty$ in
$M^{(3),0,\alpha_1,\alpha_2}$ and
$M^{(3),\neq 0,\alpha_1,\alpha_2}$
\big(not 0 in the case $M^{(3),\neq 0,\alpha_1,\alpha_1}$\big)
and the germ at any point $t_2^{(3)}$ in
$M^{(3),0,\alpha_1,{\rm log}}$ are contained in table~\eqref{8.12}.
This shows Theorem~\ref{t8.5} for the generic
cases (Reg) and (Log).
\medskip\noindent
{\it Generic types $($Sem$)$ and $($Bra$)$.}
We choose a (connected and sufficiently small)
representative~$M$ of the germ
$\big(M,t^0\big)$, and we choose on it coordinates
$t=(t_1,\dots ,t_m)$ (with $m=\dim M$) with $t^0=0$.
We denote by $M^{\rm [log]}$ the analytic hypersurface
\begin{gather*}
M^{\rm [log]}:=\begin{cases}
\big(\delta^{(0)}\big)^{-1}(0),&\text{if the generic type is (Sem)},
\\[1ex]
\big(\delta^{(1)}\big)^{-1}(0),&\text{if the generic type is (Bra)}.
\end{cases}
\end{gather*}
It contains $t^0$.
Choose a disk $\Delta\subset M$ through $t^0$ with
$\Delta\setminus \{t^0\}\subset M\setminus M^{\rm [log]}$.
The restricted $(TE)$-structure
$(H,\nabla)|_{{\mathbb C}\times(\Delta,t^0)}$ has the same
generic type as the $(TE)$-structure $(H,\nabla)$.
The restricted $(TE)$-structure
$(H,\nabla)|_{{\mathbb C}\times(\Delta,t^0)}$
is isomorphic to a $(TE)$-structure in the cases
1, 2, 3 or 4 in Theorem~\ref{t6.3}.
The parameters of the restricted $(TE)$-structure
$(H,\nabla)|_{{\mathbb C}\times(\Delta,t^0)}$ are given in the
following table:
\[
\def\arraystretch{1.4}
\begin{tabular}{c|l}
\hline
\multicolumn{1}{c|}{Generic type} & \multicolumn{1}{c}{Parameters}
\\ \hline
(Sem)& $k_1,k_2\in{\mathbb N}$ with $k_2\geq k_1$, $\rho^{(1)}\in{\mathbb C}$,
\\
& $\begin{cases}
\zeta\in{\mathbb C} & \text{if}\ k_2-k_1\in 2{\mathbb N},\\
\alpha_3\in {\mathbb R}_{\geq 0}\cup\H & \text{if}\ k_1=k_2
\end{cases}$
\\
(Bra) & $k_1,k_2\in{\mathbb N}$, $\rho^{(1)}\in{\mathbb C}$
\\
\hline
\end{tabular}
\]
There is a unique pair $\big(k_1^0,k_2^0\big)\in{\mathbb N}^2$ with
$(k_1,k_2)\in{\mathbb Q}_{>0}\cdot \big(k_1^0,k_2^0\big)$ and with
the conditions in table~\eqref{8.30},
\begin{gather}\label{8.30}
\def\arraystretch{1.4}
\begin{tabular}{l|l}
\hline
\multicolumn{1}{c|}{Generic type and invariants} & \multicolumn{1}{c}{Conditions}
\\ \hline
(Sem$)\colon\ k_2-k_1>0$ odd& $\gcd\big(k^0_1,k^0_2\big)=1$
\\
(Sem$)\colon\ k_2-k_1\in 2{\mathbb N}$, $\zeta= 0$ & $\gcd\big(k^0_1,k^0_2\big)=1$
\\
(Sem$)\colon\ k_2-k_1\in 2{\mathbb N}$, $\zeta\neq 0$ &$k^0_2-k^0_1\in 2{\mathbb N}$, $\gcd\left(k^0_1,\frac{k_1^0+k^0_2}{2}\right)=1$
\\
(Sem$)\colon k_2=k_1$ & $k^0_2=k^0_1=1$
\\
\hline
(Bra) & $\gcd\big(k^0_1,k^0_2\big)=1$
\\
\hline
\end{tabular}
\end{gather}
In fact, it is the pair $\big(k_1^0,k_2^0\big)\in{\mathbb N}^2$
of minimal numbers which satisfies{\samepage
\begin{gather*}
(k_1,k_2)\in {\mathbb N}\cdot \big(k_1^0,k_2^0\big)
\end{gather*}
and which satisfies in the case (Sem) with $k_2-k_1\in 2{\mathbb N}$
and $\zeta\neq 0$ additionally $k_2^0-k_1^0\in 2{\mathbb N}$.}
We denote by $\big(H^{[3]}\to{\mathbb C}\times \big(M^{[3]},t^{[3]}\big),\nabla^{[3]}\big)$
the $(TE)$-structure over $\big(M^{[3]},t^{[3]}\big)=({\mathbb C},0)$
which has $\big(k_1^0,k_2^0\big)$ instead of $(k_1,k_2)$,
but which has the same other parameters as the restricted
$(TE)$-structure $(H,\nabla)|_{{\mathbb C}\times(\Delta,t^0)}$.
We have seen in Remarks~\ref{t6.5}$(ii)$ and $(iii)$
that the restricted $(TE)$-structure
$(H,\nabla)|_{{\mathbb C}\times(\Delta,t^0)}$
is induced by the $(TE)$-structure $\big(H^{[3]},\nabla^{[3]}\big)$
via the branched covering
$\varphi^\Delta\colon (\Delta,t^0)\to\big(M^{[3]},t^{[3]}\big)$
with $\varphi^\Delta(\tau)= \tau^{k_1/k_1^0}$.
Here $\tau$ denotes {\it that} coordinate on $\Delta$
with which $(H,\nabla)|_{{\mathbb C}\times (\Delta,t^0)}$ can be
brought to a normal form in the cases 1, 2, 3 and 4 in
Theorem~\ref{t6.3}.
It remains to extend $\varphi^\Delta$ to a map
$\varphi\colon M\to M^{[3]}$ such that $(H,\nabla)$ is induced
by $\big(H^{[3]},\nabla^{[3]}\big)$ via this map $\varphi$.
\begin{claim}\label{cl8.9}
There exists a unique holomorphic function
$\varphi\in{\mathcal O}_M$ with
\begin{gather}\label{8.32}
\varphi|_\Delta= \varphi^\Delta,
\\[.5ex]
\delta^{(0)}= -\varphi^{k_1^0+k_2^0}\qquad
\text{if the generic type is $($Sem$)$},\label{8.33}
\\
\delta^{(1)}= \frac{k_2^0}{k_1^0+k_2^0}\cdot\varphi^{k_1^0+k_2^0}
\qquad\text{if the generic type is $($Bra$)$}.\label{8.34}
\end{gather}
\end{claim}
\begin{proof}
Choose any point $t^{[1]}\in M^{\rm [log]}$ and any disk
$\Delta^{[1]}$ through $t^{[1]}$ with
$\Delta^{[1]}\setminus \big\{t^{[1]}\big\}\subset M\setminus M^{\rm [log]}$. In~order to show the existence of a function $\varphi\in{\mathcal O}_M$
with~\eqref{8.33} respectively~\eqref{8.34},
it is sufficient to show that $\delta^{(0)}|_{\Delta^{[1]}}$
respectively $\delta^{(1)}|_{\Delta^{[1]}}$
has at $t^{[1]}$ a zero of an order which is a~multiple of
$k_1^0+k_2^0$.
The restricted $(TE)$-structure
$(H,\nabla)|_{{\mathbb C}\times(\Delta^{[1]},t^{[1]})}$ has the same generic
type as $(H,\nabla)$ and is isomorphic to a $(TE)$-structure
in the cases 1, 2, 3 or 4 in Theorem~\ref{t6.3}.
Its invariants $k_1$ and~$k_2$ are here called
$k_1^{[1]}$ and~$k_2^{[1]}$, in order to distinguish them
from the invariants of $(H,\nabla)|_{(\Delta,t^0)}$.
We~want to show
\begin{gather}\label{8.35}
\big(k_1^{[1]},k_2^{[1]}\big)\in {\mathbb N}\cdot \big(k_1^0,k_2^0\big).
\end{gather}
We did not say much about the Stokes structure.
Here we need the following properties of it, if the
generic type is (Sem):
\begin{gather*}
k_2=k_1
\\
\quad \stackrel{(1)}{\iff} (H,\nabla)|_{{\mathbb C}\times\{t^{[2]}\}}
\text{ has trivial Stokes structure for }
t^{[2]}\in\Delta\setminus \big\{t^0\big\}
\\
\quad \stackrel{(2)}{\iff} (H,\nabla)|_{{\mathbb C}\times\{t^{[2]}\}}
\text{ has trivial Stokes structure for }
t^{[2]}\in\Delta^{[1]}\setminus \big\{t^{[1]}\big\}
\\
\quad \stackrel{(3)}{\iff} k_2^{[1]}=k_1^{[1]}.
\end{gather*}
$\stackrel{(1)}{\Longrightarrow}$ and
$\stackrel{(3)}{\Longleftarrow}$ are obvious from the
normal form in the 3rd case in Theorem~\ref{t6.3}. It~is not hard to see that the normal forms for fixed
$t\in{\mathbb C}^*$ in the 1st and 2nd case in Theorem~\ref{t6.3}
are not holomorphically isomorphic to an
elementary model in Definition~\ref{t4.4}
(see also Remark~\ref{t8.8}$(ii)$).
This shows $\stackrel{(1)}{\Longleftarrow}$ and
$\stackrel{(3)}{\Longrightarrow}$.
The equivalence $\stackrel{(2)}{\iff}$ is a consequence
of the invariance of the Stokes structure within
isomonodromic deformations.
In the generic type (Sem) with $k_1=k_2$ we have also
$k_2^{[1]}=k_1^{[1]}$ and $k_2^0=k_1^0=1$, and thus
especially~\eqref{8.35}.
Now consider the cases with $k_2>k_1$.
This comprises the generic type (Bra) and
gives in the generic type (Sem) also $k_2^0>k_1^0$
and $k_2^{[1]}>k_1^{[1]}$.
So $(H,\nabla)|_{{\mathbb C}\times(\Delta^{[1]},t^{[1]})}$ is
in the 1st, 2nd or 4th case in Theorem~\ref{t6.3}.
The number $b_3^{(1)}$ in Theorem~\ref{t6.3} is uniquely
determined by the properties
$b_3^{(1)}\in{\mathbb Q}\,\cap \big]\frac{-1}{2},0\big[$ and
$\Eig(M^{\rm mon})=\big\{\exp\big({-}2\pi {\rm i} \big(\rho^{(1)}\pm b_3^{(1)}\big)\big)\big\}$
(see Remark~\ref{t6.4}$(i)$ for the second property).
Therefore
\begin{gather}
\frac{k_1^0-k_2^0}{2\big(k_1^0+k_2^0\big)}
=\frac{k_1-k_2}{2(k_1+k_2)} =b_3^{(1)}
=\frac{k_1^{[1]}-k_2^{[1]}}{2\big(k_1^{[1]}+k_2^{[1]}\big)}\qquad
\text{in the case (Sem)},\nonumber
\\
\frac{-k_2^0}{2\big(k_1^0+k_2^0\big)}
=\frac{-k_2}{2(k_1+k_2)} =b_3^{(1)}
=\frac{-k_2^{[1]}}{2\big(k_1^{[1]}+k_2^{[1]}\big)}\qquad
\text{in the case (Bra)}.\label{8.37}
\end{gather}
This implies $\big(k_1^{[1]},k_2^{[1]}\big)\in
{\mathbb Q}_{>0}\cdot\big(k_1^0,k_2^0\big)$. In~the cases with $\gcd\big(k_1^0,k_2^0\big)=1$, \eqref{8.35} follows.
If $\gcd\big(k_1^0,k_2^0\big)\neq 1$, then the generic type
is (Sem), $k_2-k_1\in 2{\mathbb N}$, $k_2^0-k_1^0\in 2{\mathbb N}$,
and the invariant $\zeta$ of
$(H,\nabla)|_{{\mathbb C}\times(\Delta,t^0)}$ is $\zeta\neq 0$.
However, then the regular singular
exponents $\alpha_1$ and $\alpha_2$ of the restriction
of the $(TE)$-structure $(H,\nabla)$ over points in
$M\setminus M^{\rm [log]}$ are invariants of the $(TE)$-structure
$(H,\nabla)$. By~\eqref{6.25} and~\eqref{8.37}
also $\zeta$ is an invariant of the $(TE)$-structure
$(H,\nabla)$. Now $\zeta\neq 0$ implies
$k_2^{[1]}-k_1^{[1]}\in 2{\mathbb N}$. Again~\eqref{8.35} follows.
Equations~\eqref{6.22} and~\eqref{6.26} imply that
$\delta^{(0)}|_{\Delta^{[1]}}$
respectively $\delta^{(1)}|_{\Delta^{[1]}}$
has at $t^{[1]}$ a zero of an order which is a multiple of
$k_1^0+k_2^0$. Therefore a function $\varphi\in{\mathcal O}_M$
with~\eqref{8.33} respectively~\eqref{8.34} exists.
Equations~\eqref{6.22} and~\eqref{6.26} for $(H,\nabla)|_{{\mathbb C}\times(\Delta,t^0)}$
give
\begin{gather*}
\delta^{(0)}|_\Delta = -\tau^{k_1+k_2}
=-\big(\tau^{k_1/k_1^0}\big)^{k_1^0+k_2^0}
=-\big(\varphi^\Delta\big)^{k_1^0+k_2^0}\qquad
\text{in the case (Sem),}
\\
\delta^{(1)}|_\Delta = \frac{k_2}{k_1+k_2}\tau^{k_1+k_2}
=\frac{k_2^0}{k_1^0+k_2^0}\big(\tau^{k_1/k_1^0}\big)^{k_1^0+k_2^0}
=\frac{k_2^0}{k_1^0+k_2^0}\big(\varphi^\Delta\big)^{k_1^0+k_2^0}\quad\
\text{in the case (Bra).}
\end{gather*}
Therefore a function $\varphi$ as in the claim exists
and is unique.
\end{proof}
Now compare the $(TE)$-structures $(H,\nabla)$ and
$\varphi^*\big(H^{[3]},\nabla^{[3]}\big)$ over $M$.
Both extend to pure $(TLE)$-structures.
For $\varphi^*\big(H^{[3]},\nabla^{[3]}\big)$, one uses the
pull back $\varphi^*\big(\underline{v}^{[3]}\big)$ of the basis
$\underline{v}^{[3]}$ which gives for $\big(H^{[3]},\nabla^{[3]}\big)$
the Birkhoff normal form in Theorem~\ref{t6.3}.
For $(H,\nabla)$ one starts with the analogous basis
$\underline{v}^\Delta$ for $H|_{{\mathbb C}\times\Delta}$ which gives
for $(H,\nabla)|_{{\mathbb C}\times\Delta}$ the Birkhoff normal form
in Theorem~\ref{t6.3}. It~has a unique extension $\underline{v}$ to
${\mathbb C}\times M$ which still yields a Birkhoff normal form.
Compare Remark~\ref{t3.19}$(ii)$ for this.
Remarks~\ref{t6.5}$(ii)$ and $(iii)$ (or simply
the Birkhoff normal forms in Theorem~\ref{t6.3}) show that
the map $\big(\varphi^*\underline{v}^{[3]}\big)|_{{\mathbb C}\times\Delta}
\mapsto \underline{v}^\Delta=\underline{v}|_{{\mathbb C}\times\Delta}$ is an
isomorphism of pure $(TLE)$-structures.
Now consider a point $t^{[2]}\in \Delta\setminus \big\{t^0\big\}$ and
its image $t^{[4]}:=\varphi\big(t^{[2]}\big)\in
M^{[3]}\setminus \big\{t^{[3]}\big\} ={\mathbb C}\setminus\{0\}$. Over the germ
$\big(M^{[3]},t^{[4]}\big)$, the $(TE)$-structure $\big(H^{[3]},\nabla^{[3]}\big)$
is the part with trace free pole part of a universal
unfolding of $\big(H^{[3]},\nabla^{[3]}\big)|_{{\mathbb C}\times\{t^{[4]}\}}$.
Therefore in a neighborhood $U\subset M$ of $t^{[2]}$,
the $(TE)$-structure $(H,\nabla)|_{{\mathbb C}\times U}$
is induced by
$\big(H^{[3]},\nabla^{[3]}\big)|_{{\mathbb C}\times(M^{[3]},t^{[4]})}$
via a map $\widetilde\varphi\colon U\to M^{[3]}$.
We~can choose it such that
\begin{gather}\label{8.38}
\widetilde\varphi|_\Delta=\varphi^\Delta.
\end{gather}
Equations~\eqref{6.22} and~\eqref{6.26} give
\begin{gather}\label{8.39}
\delta^{(0)}|_U = -(\widetilde\varphi)^{k_1^0+k_2^0}
\qquad\text{in the case (Sem)},
\\
\delta^{(1)}|_U =
\frac{k_2^0}{k_1^0+k_2^0}(\widetilde\varphi)^{k_1^0+k_2^0}
\qquad\text{in the case (Bra)}.\label{8.40}
\end{gather}
Equations~\eqref{8.38}--\eqref{8.40} and Claim~\ref{cl8.9} imply
$\widetilde\varphi=\varphi|_U$.
Therefore the matrices in Birkhoff normal form
for the basis $\underline{v}$ of $(H,\nabla)$ coincide
on ${\mathbb C}\times U$ with the matrices in Birkhoff normal form
for the basis $\varphi^*\big(\underline{v}^{[3]}\big)$ of
$\varphi^*\big(H^{[3]},\nabla^{[3]}\big)$.
As all matrices are holomorphic on ${\mathbb C}\times M$,
they coincide pairwise on ${\mathbb C}\times M$.
Therefore the pure $(TLE)$-structure $(H,\nabla)$
with basis $\underline{v}$ is isomorphic to the pure
$(TLE)$-structure $\varphi^*\big(H^{[3]},\nabla^{[3]}\big)$
with basis $\varphi^*\big(\underline{v}^{[3]}\big)$.
This finishes the proof of part~$(a)$ of Theorem~\ref{t8.5}.
\medskip
$(b)$ If the original $(TE)$-structure $(H,\nabla)$ has
the form
$\varphi^*\big({\mathcal O}(H^{[5]}),\nabla^{[5]}\big)\otimes{\mathcal E}^{-\rho^{(0)}/z}$
then the $(TE)$-structure $\big(H^{[2]},\nabla^{[2]}\big)$
with trace free pole part which was associated to
$(H,\nabla)$ at the beginning of the proof of part~$(a)$,
has the form $\varphi^*\big({\mathcal O}(H^{[5]}),\nabla^{[5]}\big)$.
Then any $(TE)$-structure $\big(H^{[3]},\nabla^{[3]}\big)$
over $\big(M^{[3]},t^{[3]}\big)=({\mathbb C},0)$ works, whose restriction
over~$t^{[3]}$ is the given logarithmic $(TE)$-structure
$\big(H^{[5]},\nabla^{[5]}\big)$.
In the cases with $N^{\rm mon}=0$, table~\eqref{8.12} offers
one of generic type (Sem) with $k_1=k_2=1$
(and some with $k_2>k_1$ if $\alpha_1-\alpha_2\in{\mathbb Q}\cap (-1,1)$)
and one or two of generic type (Reg), see table~\eqref{8.11}. In~the cases with $N^{\rm mon}\neq 0$, table~\eqref{8.12}
offers two of generic type (Reg) if the leading exponents
$\alpha_1$ and $\alpha_2$ satisfy $\alpha_1-\alpha_2\in{\mathbb N}$,
and one if they satisfy $\alpha_1=\alpha_2$,
compare also Figures~\ref{figure4} and~\ref{figure5} in Theorem~\ref{t7.4}$(b)$.
Therefore the inducing $(TE)$-structure in table~\eqref{8.12}
is not unique except for the case $N^{\rm mon}\neq 0$ and
$\alpha_1=\alpha_2$,
if the original $(TE)$-structure has the form
$\varphi^*\big({\mathcal O}(H^{[5]}),\nabla^{[5]}\big)\otimes{\mathcal E}^{-\rho^{(0)}/z}$.
{\sloppy
In the other cases, the proof of part~$(a)$ shows the
uniqueness of the $(TE)$-structure $\big(H^{[3]},\nabla^{[3]}\big)$.
The uniqueness of $\big(H^{[3]},\nabla^{[3]}\big)$
gives also the uniqueness of $\big(H^{[4]},\nabla^{[4]}\big)$
in the first paragraph of the proof of part~$(a)$.
}
\medskip
$(c)$ This follows from the uniqueness in part $(b)$.
\hfill$\qed$
\section[A family of rank 3 $(TE)$-structures with a functional parameter]
{A family of rank 3 $\boldsymbol{(TE)}$-structures\\ with a functional parameter}\label{c9}
M.~Saito presents in~\cite{SaM17} a family of
Gauss--Manin connections with a functional parameter. In~the arXiv paper~\cite{SaM17}, the bundle has rank
$n$, but in a preliminary version
it has rank 3 and is more transparent.
Here we translate the rank 3 example
by a Fourier--Laplace transformation
into a family of $(TE)$-structures
with primitive Higgs fields over a fixed
3-dimensional globally irreducible
$F$-manifold with an Euler field, such that the
$F$-manifold with Euler field is nowhere regular.
The family of $(TE)$-structures has a functional parameter
$h(t_2)\in{\mathbb C}\{t_2\}$.
In the following, we write down a $(TE)$-structure of rank 3
on a manifold $M={\mathbb C}^3$ with coordinates $t_1$, $t_2$, $t_3$.
The restriction to
$\big\{t\in{\mathbb C}^3\,|\, t_1=0\big\}=\{0\}\times {\mathbb C}^2$
is an FL-transformation of~Saito's example.
The parameter $t_1$ and this $F$-manifold are
not considered in~\cite{SaM17}. There the base
space has only the two parameters $t_2$ and $t_3$.
Choose an arbitrary function $h(t_2)\in{\mathbb C}\{t_2\}$
with $h''(0)\neq 0$.
Let $H'\to{\mathbb C}^*\times M$ be a holomorphic vector bundle
with flat connection with trivial monodromy and basis of
global flat sections $s_1$, $s_2$, $s_3$. Define an extension
to a vector bundle $H\to{\mathbb C}\times M$ using the following
holomorphic sections of $H'$, which also form a basis
of sections of $H'$:
\begin{gather*}
v_1:= {\rm e}^{t_1/z}\cdot \big(zs_1+t_2\cdot zs_2+
h(t_2)\cdot zs_3+t_3\cdot z^2s_3\big),\\
v_2:= {\rm e}^{t_1/z}\cdot \big(z^2s_2+h'(t_2)\cdot z^2s_3\big),\qquad
v_3:= {\rm e}^{t_1/z}\cdot z^3s_3.\nonumber
\end{gather*}
Denote $\underline{v}:=(v_1,v_2,v_3)$.
Denote $\partial_{t_j}:=\partial_j$. Then
\begin{gather*}
z\nabla_{\partial_1}\underline{v}= \underline{v}\cdot {\bf 1}_3,\\
z\nabla_{\partial_2}\underline{v}= \underline{v}\cdot
\begin{pmatrix}0&0&0\\1&0&0\\0&h''(t_2)&0\end{pmatrix}\!,
\nonumber\\
z\nabla_{\partial_3}\underline{v}= \underline{v}\cdot
\begin{pmatrix}0&0&0\\0&0&0\\1&0&0\end{pmatrix}\!,\nonumber\\
z^2\partial_z\underline{v}= \underline{v}\cdot
\left({-}t_1\cdot {\bf 1}_3
+\begin{pmatrix}0&0&0\\0&0&0\\t_3&0&0\end{pmatrix}\right)
+z\cdot\underline{v}\cdot
\begin{pmatrix}1&0&0\\0&2&0\\0&0&3\end{pmatrix}\!.\nonumber
\end{gather*}
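As a sample check of these formulas (added here; it is immediate from the definitions of $v_1$, $v_2$, $v_3$), one computes for $v_1$
\begin{gather*}
z\nabla_{\partial_2}v_1={\rm e}^{t_1/z}\big(z^2s_2+h'(t_2)\cdot z^2s_3\big)=v_2,
\\
z^2\partial_z v_1=-t_1\cdot v_1+z^2{\rm e}^{t_1/z}\big(s_1+t_2s_2+h(t_2)s_3+2t_3zs_3\big)
=-t_1\cdot v_1+t_3\cdot v_3+z\cdot v_1,
\end{gather*}
in accordance with the first columns of the matrices above.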
Write $\underline\partial:=(\partial_1,\partial_2,\partial_3)$.
The pole parts give the multiplication $\circ$ on the
$F$-manifold and the Euler field $E$ by
\begin{gather*}
\partial_1\circ\underline{\partial}=\underline{\partial}\cdot{\bf 1}_3,\\
\partial_2\circ\underline{\partial}= \underline{\partial}\cdot
\begin{pmatrix}0&0&0\\1&0&0\\0&h''(t_2)&0\end{pmatrix}\!,\nonumber\\
\partial_3\circ\underline{\partial}= \underline{\partial}\cdot
\begin{pmatrix}0&0&0\\0&0&0\\1&0&0\end{pmatrix}\!,\nonumber\\
E\circ\underline{\partial}= -\underline{\partial}\cdot
\left(-t_1\cdot {\bf 1}_3
+\begin{pmatrix}0&0&0\\0&0&0\\t_3&0&0\end{pmatrix}\right)\!,\qquad
\text{so}\quad E= t_1\cdot\partial_1-t_3\cdot\partial_3.\nonumber
\end{gather*}
One can introduce a new coordinate system
$\big(\widetilde t_1,\widetilde t_2,\widetilde t_3\big)=\big(t_1,\widetilde t_2,t_3\big)$ on the germ
$(M,0)$ with
\begin{gather*}
\partial_{\widetilde t_2}=\frac{1}{\sqrt{h''(t_2)}}\cdot\partial_2.
\end{gather*}
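Explicitly (a remark added here; we fix one of the two branches of the square root, which exist since $h''(0)\neq 0$), the new coordinate is
\begin{gather*}
\widetilde t_2(t_2)=\int_0^{t_2}\sqrt{h''(s)}\,{\rm d}s,
\end{gather*}
because then ${\rm d}\widetilde t_2/{\rm d}t_2=\sqrt{h''(t_2)}$.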
Denote $\widetilde\partial_j:=\partial_{\widetilde t_j}$
and $\widetilde{\underline\partial}:=\big(\widetilde\partial_1,\widetilde\partial_2,
\widetilde\partial_3\big)=\big(\partial_1,\widetilde\partial_2,\partial_3\big)$.
Introduce also the new section
\begin{gather*}
\widetilde v_2:=\frac{1}{\sqrt{h''(t_2)}}\cdot v_2,
\end{gather*}
and the new basis
$\widetilde{\underline v}=\big(\widetilde v_1,\widetilde v_2,\widetilde v_3\big)
=\big(v_1,\widetilde v_2,v_3\big)$
of the given $(TE)$-structure. Then
\begin{gather*}
z\nabla_{\widetilde\partial_1}\underline{\widetilde v}=\underline{\widetilde v}\cdot {\bf 1}_3,
\\
z\nabla_{\widetilde\partial_2}\underline{\widetilde v}=\underline{\widetilde v}\cdot
\begin{pmatrix}0&0&0\\1&0&0\\0&1&0\end{pmatrix}
+z\cdot\underline{\widetilde v}\cdot
\begin{pmatrix}0&0&0\\0&\partial_2\frac{1}{\sqrt{h''(t_2)}}&0
\\
0&0&0\end{pmatrix}\!,
\\
z\nabla_{\widetilde\partial_3}\underline{\widetilde v}=\underline{\widetilde v}\cdot
\begin{pmatrix}0&0&0\\0&0&0\\1&0&0\end{pmatrix}\!,
\\
z^2\partial_z\underline{\widetilde v}=\underline{\widetilde v}\cdot
\left(-t_1\cdot {\bf 1}_3
+\begin{pmatrix}0&0&0\\0&0&0\\t_3&0&0\end{pmatrix}\right)
+z\cdot\underline{\widetilde v}\cdot
\begin{pmatrix}1&0&0\\0&2&0\\0&0&3\end{pmatrix}\!.\nonumber
\end{gather*}
In the new coordinates the multiplication becomes simpler
and independent of the choice of~$h(t_2)$ (as long as
$h''(t_2)\neq 0$):
\begin{gather*}
\widetilde\partial_1\circ\underline{\widetilde\partial}=\underline{\widetilde\partial}\cdot {\bf 1}_3,
\\
\widetilde\partial_2\circ\underline{\widetilde\partial}=\underline{\widetilde\partial}\cdot
\begin{pmatrix}0&0&0\\1&0&0\\0&1&0\end{pmatrix}\!,
\\
\widetilde\partial_3\circ\underline{\widetilde\partial}= \underline{\widetilde\partial}\cdot
\begin{pmatrix}0&0&0\\0&0&0\\1&0&0\end{pmatrix}\!,
\\
E\circ\underline{\widetilde\partial}= -\underline{\widetilde\partial}\cdot\left({-}t_1\cdot {\bf 1}_3
+\begin{pmatrix}0&0&0\\0&0&0\\t_3&0&0\end{pmatrix}\right)\!,
\\
\text{so}\quad E= t_1\cdot\widetilde\partial_1-t_3\cdot\widetilde\partial_3.
\end{gather*}
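In particular (a consequence noted here for emphasis), the matrices above give
\begin{gather*}
\widetilde\partial_2\circ\widetilde\partial_2=\widetilde\partial_3,\qquad
\widetilde\partial_2\circ\widetilde\partial_3=0,
\end{gather*}
so each tangent space is the algebra ${\mathbb C}[X]/\big(X^3\big)$ with $X=\widetilde\partial_2$ and unit $\widetilde\partial_1$.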
This is the nilpotent $F$-manifold for $n=3$ in
\cite[Theorem~3]{DH17}.
However, the Euler field here is different from the one in
\cite[Theorem~3]{DH17}. The endomorphism $E\circ$ here is not
regular: it has only the eigenvalue $t_1$; for $t_3\neq 0$ it has
one Jordan block of size $2\times 2$ and one Jordan block of size
$1\times 1$, and for $t_3=0$ it is semisimple.
The sections $v_1$, $v_2$, $v_3$ define also an
extension $\widehat{H}\to\P^1$ such that the $(TE)$-structure
extends to a pure $(TLE)$-structure.
Furthermore $v$ satisfies all properties of the
section $\zeta$ in Theorem~6.6(b) in~\cite{DH20-2}.
Thus the $F$-manifold with Euler field is enriched to a
flat $F$-manifold with Euler field (Definition~3.1(b) in~\cite{DH20-2}).
If we try to introduce a pairing which would make it
into a pure $(TLEP)$-structure, we get a constraint
$h''(t_2)={\rm const}$. However, similar higher-dimensional
examples probably also allow an extension to pure $(TLEP)$-structures
while keeping the functional freedom.
This would give families of Frobenius manifolds with
Euler fields with functional freedom on a fixed $F$-manifold
with Euler field.
In the example above, $t_1$, $t_2$, $t_3$ are flat coordinates
and $\widetilde t_1=t_1$, $\widetilde t_2$, $\widetilde t_3=t_3$ are
{\it generalized canonical coordinates}
(in which the multiplication has simple formulas).
\subsection*{Acknowledgements}
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)~-- 242588615. I would like to thank Liana David for a lot of joint work on $(TE)$-structures.
\addcontentsline{toc}{section}{References}
Suppose we are given a surface of revolution $M$ in ${\R}^{m+2}$
with $m \geq 1$. Using the coordinates $(x,y) \in {\R}^{m+2} = {\R}^1\times{\R}^{m+1}$, $M$ is represented as
\begin{equation}
y = f(x)\omega, \quad \omega \in \S^{m}, \quad x\in I = [0,x_0],
\end{equation}
where $f \in C^2(I)$, $f(x) > 0$.
Then the induced metric on $M$ is
\begin{equation}
ds^2 =\left(1 + f'(x)^2\right)(dx)^2 + f(x)^2g_{\S^{m}},
\end{equation}
$g_{\S^{m}}$ being the standard metric on $\S^{m}$. Making the change
of variable $t = t(x)$ by
\begin{equation}
{dt\/dx} = \sqrt{1 + f'(x)^2},
\label{S1dtdx}
\end{equation}
we can rewrite $ds^2$ as
\begin{equation}
\left\{
\begin{split}
& ds^2 = (dt)^2 + r(t)^2g_{\S^{m}}, \quad r(t) = f(x(t)), \\
& 0 \leq t
\leq t_0 = \int_0^{x_0}\sqrt{1+f'(x)^2}dx.
\end{split}
\right.
\label{S1Metricrewritten}
\end{equation}
Then we have
$$
|r'(t)| < 1,
$$
since
\begin{equation}
r'(t) = f'(x(t))\frac{dx}{dt} = \frac{f'(x(t))}{\sqrt{1+f'(x(t))^2}}.
\label{S1r'(t)}
\end{equation}
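As a simple illustration (an example added here, not taken from the references), consider the truncated cone $f(x)=1+cx$ with $c>0$. Then (\ref{S1dtdx}) gives $t=\sqrt{1+c^2}\,x$, hence
$$
r(t)=1+\frac{c\,t}{\sqrt{1+c^2}},\qquad r'(t)=\frac{c}{\sqrt{1+c^2}}<1,
$$
in agreement with (\ref{S1r'(t)}).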
Now, the Laplace-Beltrami operator on $M$ is written as
\begin{equation}
\Delta_M = \frac{1}{r^m}\partial_t\left(r^m\partial_t\right) + \frac{\Delta_Y}{r^2},
\label{S1DeltaMsurfrev}
\end{equation}
where $\Delta_Y$ is the Laplace-Beltrami operator on $\S^m$.
By imposing suitable boundary conditions at $t=0$ and $t=t_0$, one can obtain the spectral data of $M$. We are interested in the inverse spectral problem, i.e., the recovery of $M$ from its spectral data. Note that in this setting, the operator $\Delta_Y$ is given. The value of $t_0$ is not known a priori, since it is computed from (\ref{S1Metricrewritten}), which contains the unknown function $f(x)$.
However, the eigenvalue problem for (\ref{S1DeltaMsurfrev}) is reduced to the 1-dimensional Sturm-Liouville problem, and one can derive the value of $t_0$ from the asymptotics of eigenvalues. By virtue of (\ref{S1r'(t)}), (\ref{S1dtdx}) is rewritten as
\begin{equation}
\frac{dx}{dt} = \sqrt{1 - r'(t)^2},
\label{S1dxdt}
\end{equation}
from which one can compute $x(t)$ as well as $x_0$. We can then recover $f(x)$ from the formula $r(t) = f(x(t))$ and the inverse function theorem.
We have thus seen that our problem is reduced to the inverse spectral problem for (\ref{S1DeltaMsurfrev}) defined on $[0,t_0]\times Y$ with suitable boundary conditions. Since $t_0$ is known from the spectral asymptotics, we can assume without loss of generality that $t_0=1$. More precisely, in the general case, we have only to repeat the arguments below with
$M = [0,1]\times Y$ replaced by $M = [0,x_0]\times Y$, where $x_0$ is computed from (\ref{S1dxdt}) and $t_0$.
\subsection {Rotationally symmetric manifold}
Let us slightly generalize our problem. Assume that we are given a compact $m$-dimensional Riemannian manifold $(Y, g_0)$
(with or without boundary). We consider a cylindrical manifold $M = [0,1]\ts Y$ with warped product metric
\[
\lb{1}
g = (dx)^2 + r^2(x)g_0.
\]
The Laplace-Beltrami operator on $M$ is written as
\begin{equation}
\Delta_{M} = \frac{1}{r(x)^m}\partial_x\Big(r(x)^m\partial_x\Big)
+ \frac{1}{r^2(x)}\Delta_Y.
\label{S0DeltaMdefine}
\end{equation}
Two examples are given in Fig.~1, where $Y = \S^1$, and in Fig.~2, where $Y = [0,\alpha]$ with a suitable boundary condition on $\partial Y$.
\begin{figure}[hbtp]
\centering
\unitlength 0.7mm
\linethickness{0.4pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\begin{picture}(75.55,65)(0,0)
\qbezier(2.4,27.25)(3,42.375)(4.8,43)
\qbezier(35.6,27.25)(36.2,42.375)(38,43)
\qbezier(61.6,27.25)(62.2,42.375)(64,43)
\qbezier(2.4,26.75)(3,11.625)(4.8,11)
\qbezier(35.6,26.75)(36.2,11.625)(38,11)
\qbezier(61.6,26.75)(62.2,11.625)(64,11)
\qbezier[30](7.5,27.25)(6.9,42.375)(5.1,43)
\qbezier[30](40.7,27.25)(40.1,42.375)(38.3,43)
\qbezier(66.7,27.25)(66.1,42.375)(64.3,43)
\qbezier[30](7.5,26.75)(6.9,11.625)(5.1,11)
\qbezier[30](40.7,26.75)(40.1,11.625)(38.3,11)
\qbezier(66.7,26.75)(66.1,11.625)(64.3,11)
\qbezier(4.65,43)(21.075,58.625)(39,42.75)
\qbezier(4.65,11)(21.075,-4.625)(39,11.25)
\put(74.5,24){\makebox(0,0)[cc]{$x$}}
\put(1.25,22.25){\makebox(0,0)[cc]{$0$}}
\put(25,55){\makebox(0,0)[cc]{$r(x)$}}
\qbezier(39,42.75)(49.5,34.25)(64,42.75)
\qbezier(39,11.25)(49.5,19.75)(64,11.25)
\put(64.25,24.75){\makebox(0,0)[cc]{$1$}} \linethickness{0.2pt}
\put(5.25,64.7){\line(0,-1){60.4}} \put(3.15,27.5){\line(1,0){72.4}}
\multiput(7.75,29.25)(-.0517578125,-.0336914063){512}{\line(-1,0){.0517578125}}
\end{picture}
\caption{\footnotesize The surface with $Y=\{y\in \S^1\}$.}
\lb{F1}
\end{figure}
\begin{figure}[hbtp]
\tiny
\unitlength 0.7mm
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\begin{picture}(115.77,67.275)(0,0)
\linethickness{0.2pt} \put(14.41,14.775){\line(1,0){101.36}}
\put(17.35,6.025){\line(0,1){60.9}}
\multiput(18.4,15.3)(-.0431303116,-.0337110482){353}{\line(-1,0){.0431303116}}
\put(19.1,12.325){\makebox(0,0)[cc]{$0$}}
\put(12.1,66.275){\makebox(0,0)[cc]{$r(x)$}}
\put(99.95,10.925){\makebox(0,0)[cc]{$1$}}
\put(7.6,3.275){\makebox(0,0)[cc]{$x$}}
\put(113.95,11.4){\makebox(0,0)[cc]{$y$}} \linethickness{0.6pt}
\qbezier(16.51,36.475)(39.505,58.35)(64.6,36.125)
\qbezier(64.6,36.125)(79.3,24.225)(99.6,36.125)
\multiput(99.95,14.425)(.033510638,.0875){94}{\line(0,1){.0875}}
\qbezier[5](66,18)(66.7,20)(67.4,22)
\multiput(64.6,14.425)(.033510638,.0875){44}{\line(0,1){.0875}}
\multiput(17.175,14.425)(.033510638,.0875){30}{\line(0,1){.0875}}
\qbezier[5](18.5,17.5)(19.2,19.5)(19.9,21.5)
\multiput(99.95,14.6)(-.039095745,.033510638){94}{\line(-1,0){.039095745}}
\multiput(64.6,14.6)(-.039095745,.033510638){94}{\line(-1,0){.039095745}}
\multiput(17.175,14.6)(-.039095745,.033510638){94}{\line(-1,0){.039095745}}
\qbezier(103.1,21.775)(102.313,36.213)(99.775,36.3)
\qbezier[30](67.75,21.775)(66.963,36.213)(64.425,36.3)
\qbezier[30](20.325,21.775)(19.537,36.213)(17,36.3)
\qbezier(99.6,36.3)(96.975,36.388)(96.45,17.225)
\qbezier(64.25,36.3)(61.625,36.388)(61.1,17.225)
\qbezier(16.825,36.3)(14.2,36.388)(13.675,17.225)
\qbezier(13.675,17.05)(34.412,13.813)(61.1,17.225)
\qbezier(60.925,17.225)(81.225,20.025)(96.625,17.225)
\qbezier[50](20.15,22.3)(43.338,27.2)(67.575,22.3)
\qbezier[30](67.75,22.3)(79.3,20.375)(102.75,21.95)
\end{picture}
\caption{\footnotesize The surface of revolution with angle
$\a<\pi$} \lb{tune}
\end{figure}
\noindent
For the operator (\ref{S0DeltaMdefine}), we impose one of the following boundary conditions on $\partial M = \{0,1\}\times Y$: for $y \in Y$,
\[
\label{S1BC}
\left\{
\begin{array}{l}
{\rm Dirichlet \ b. c. } \quad f(0,y) = f(1,y) = 0,\\
{\rm Mixed\ b. c.} \quad f(0,y)=0, \ f'(1,y)+b f(1,y)=0, \ b\in \R, \\
{\rm Robin \ b. c.} \quad f'(0,y)-a f(0,y)=0, \
f'(1,y)+b f(1,y)=0, \ a,b\in \R.
\end{array}
\right.
\]
The Laplacian $-\D_Y$ on $Y$ has the discrete spectrum
$$
0\le
E_1\le E_2\le E_3\le \cdots
$$
with an associated orthonormal family of
eigenfunctions $\{\P_\n\}_{\n\ge 1}$ in $L^2(Y)$. Then, we have the orthogonal decomposition
$$
L^2(M)=\os_{\n\ge
1} \mL_\n^2(M),
$$
$$
\mL_\n^2(M)=\rt\{h(x,y)=f(x)\P_\n(y)\, ;\, \int_0^1|f(x)|^2r^m(x)dx < \infty
\rt\},\qq \n\ge 1.
$$
Thus, $-\Delta_M$
is unitarily equivalent to a
direct sum of one-dimensional operators,
$$
\lb{3} -\D_M\backsimeq\os_{\n=1}^{\iy}\left(-\D_\n\right),
$$
\begin{equation}
\label{S1-Deltanu1}
\begin{aligned}
-\D_\n =-{1\/\r^2} \pa_x \left(\r^2 \pa_x\right) +{E_\n\/r^2} ,\quad
\r=r^{m/2}, \quad {\rm on} \quad L^2\big([0,1];r^m(x)dx\big).
\end{aligned}
\end{equation}
We call $-\Delta_{\nu}$ a Sturm-Liouville operator.
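Indeed (a one-line check added here), for $h=f(x)\P_\n(y)$, formula (\ref{S0DeltaMdefine}) together with $-\D_Y\P_\n=E_\n\P_\n$ gives
$$
-\D_M\big(f\P_\n\big)=\Big(-{1\/r^m}\big(r^mf'\big)'+{E_\n\/r^2}f\Big)\P_\n=\big(-\D_\n f\big)\P_\n .
$$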
The boundary condition (\ref{S1BC}) is inherited by $-\Delta_{\nu}$:
\[
\lb{bc123}
\left\{
\begin{aligned}
& {\rm Dirichlet \ b. c. } \quad f(0)=f(1)=0,\\
&{\rm Mixed \ b. c.} \quad f(0)=0,\ f'(1)+b f(1)=0,\ b\in \R,\\
&{\rm Robin \ b. c.}\quad f'(0)-af(0)=0,\ f'(1)+b f(1)=0, \ a,b\in\R.
\end{aligned}
\right.
\]
The operator $-\D_\n$ actually depends on
\[
{\r'\/\r} \quad {\rm and} \quad \rho(0) = r(0)^{m/2}.
\]
For our purpose, it is convenient to introduce a parameter $q_0 = \rho'(0)/\rho(0)$ and put
\begin{equation}
\frac{\rho'(x)}{\rho(x)} = q_0 + q(x).
\end{equation}
Then $r(x)$ is written as
\[
\lb{2}
r(x)=r(0)e^{2Q(x)/m},\quad Q(x)=\int_0^x(q_0+q(t))dt.
\]
We then have
\begin{equation}
\frac{r'(0)}{r(0)} = \frac{2q_0}{m}, \quad \quad
\log\frac{r(1)}{r(0)} = \frac{2}{m}\Big(q_0 + \int_0^1q(x)dx\Big).
\label{S1r'(0)r(1)}
\end{equation}
This implies that, if we are given either $r(0)$ and $r'(0)$, or $r(0)$ and $r(1)$, we can reconstruct $r(x)$ from $q(x)$ for $0 \leq x \leq 1$.
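Indeed (a short check added here), taking the logarithmic derivative of \er{2} gives
$$
{r'(x)\/r(x)}={2\/m}\,\big(q_0+q(x)\big);
$$
evaluating at $x=0$ (where $q(0)=0$ for $q\in\mW_1^0$ below) and integrating over $[0,1]$ yields (\ref{S1r'(0)r(1)}).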
The problem we address in this paper is the characterization of the range of the {\it spectral data mapping}
$$
q \to \left\{\mu_n(q), \kappa_n(q)\right\}_{n=1}^{\infty},
$$
where $\mu_n$ and $\kappa_n$ are eigenvalues and norming constants for
(\ref{S1-Deltanu1}) with a fixed $\nu$.
\subsection{Function spaces}
Let us introduce the following spaces of real functions
\begin{equation}
\begin{aligned}
\lb{dWH}
&\mW_1^0 =\rt\{q \in L^2(0,1)\ ;\ q'\in L^2(0,1), \ q(0)=q(1)=0\rt\}, \qqq
\\
&\mH_\alpha =\rt\{q\in L^2(0,1)\ ; \ q^{(\a)} \in L^2(0,1), \int
_0^1q^{(j)}(x)dx=0, \forall\ j=0,\dots,\a\rt\},
\end{aligned}
\end{equation}
where $\alpha \geq 0$, equipped with norms
$$
\|q\|^2_{\mW_1^0}=\|q'\|^2=\int_0^1|q'(x)|^2dx,\qqq
\|q\|^2_{\mH_\a}=\|q^{(\a)}\|^2=\int_0^1|q^{(\a)}(x)|^2dx.
$$
Define the spaces of even functions $L_{even}^2(0,1)$, and of odd functions
$L_{odd}^2(0,1)$ by
\[
\begin{aligned}
\lb{oeL}
L_{even}^2(0,1)&=\rt\{q\in L^2(0,1)\, ; \, q(x)=q(1-x), \qq \forall
\ x\in (0,1)\rt\},\\
L_{odd}^2(0,1)&=\rt\{q\in L^2(0,1)\, ; \, q(x)=-q(1-x), \qq \forall
\ x\in (0,1)\rt\},\\
L^2(0,1)&=L_{even}^2(0,1) \os L_{odd}^2(0,1)
\end{aligned}
\]
and for $\o=even$ or $\o=odd$ we define
\[
\lb{oe}
\mW_1^{0,\o}= \mW_1^0 \cap L_{\o}^2(0,1),\quad
\mH_\a^{\o} =\mH_\a\cap L_{\o}^2(0,1), \qq \a\ge 0.
\]
We also introduce the space $\ell^2_{\a}$ of real sequences
$h=(h_n)_1^{\iy }$,
equipped with the norm
\[
\lb{1.0}
\|h\|_{\a}^2=2\sum _{n\ge 1}(2\pi n)^{2\a}|h_n|^2,\qqq \a\in \R,
\]
and let $\ell^2=\ell_0^2$. Finally we define the set $\cM_1 \ss\ell^2$ by
\[
\lb{S1Mj} \cM_1 =
\cM_1\left((\mu_n^0)_{n=1}^{\infty}\right)=\left\{(h_n)_{n=1}^{\iy}\in\ell^2\,
; \,\m_1^0\!+\! h_1\!<\!\m_{2}^0\!+\! h_{2}\! <\dots \right\},
\]
where the sequence $(\mu_n^0)_{n=1}^{\infty}$ will be specified below.
\subsection{Main results I. Spectral data mapping}
We are now in a position to state our main results of this paper.
\subsubsection{ Dirichlet boundary condition}
First we consider $-\D_\n, \n\ge 1$, on the
interval $[0,1]$ with Dirichlet boundary condition:
\[
\lb{ipC11}
\left\{
\begin{split}
&-\D_\n f=-{1\/\r^2}(\r^2f')'+{E_\n\/r^2}f, \quad \r=r^{m/2}, \quad {\rm on} \quad (0,1),\\ &f(0)=f(1)=0.
\end{split}
\right.
\]
Denote by $\m_n=\m_n(q), n=1,2,\dots$,
the eigenvalues of $-\D_{\nu}$. It is well-known that all $\m_n$ are
simple and satisfy
\begin{equation}
\lb{ipD12}
\begin{split}
&\m_n=\m_n^0+c_0+\wt\m_n, \\
& \m_n^0 = (n\pi)^2, \quad (\wt\m_n)_{1}^{\iy}\in\ell^2, \\ &c_0=\int_0^1\Big((q_0+q)^2+{E_\n\/r^2}\Big)dx,
\end{split}
\end{equation}
where $\m_n^0$, $n\ge 1$, are the eigenvalues for the unperturbed case $r=1$.
Following \cite{PT87}, \cite{IK13}, we introduce the norming constants
\[
\lb{ipC13} \vk_n(q)=\log\left|\r(1)f_n'(1,q)\/f_n'(0,q)\right|,
\qquad n\ge 1,
\]
where $f_n$ is the $n$-th eigenfunction of $-\Delta_{\nu}$. Note that $f_n'(0)\ne 0$ and
$f_n'(1)\ne 0$. Recall that
\begin{equation}
q_0 = \frac{\rho'(0)}{\rho(0)}.
\label{Defineq0}
\end{equation}
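For orientation (a simple example added here): in the unperturbed case $r=1$, $q=q_0=0$, the potential $E_\n/r^2=E_\n$ is constant, the eigenfunctions are $f_n(x)=\sin (n\pi x)$, and
$$
\vk_n(0)=\log\lt|{f_n'(1)\/f_n'(0)}\rt|=\log|\cos (n\pi)|=0,\qq n\ge 1,
$$
which is consistent with $(\vk_n)_{n\ge 1}\in\el2_1$ in Theorem~\ref{T1} below.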
\begin{theorem}
\label{T1}
Fix $\n\ge 1$, and consider
$-\D_\n$ with the Dirichlet boundary condition.
Assume either (i) or (ii) of the following conditions:
\noindent
(i) $q_0=0$,
\noindent
(ii) $\n=1$ and $E_1=0$.
Then the mapping
$$
\P : q\mapsto \Big((\wt\m_{n}(q))_{n=1}^{\iy}\,,
(\vk_{n}(q))_{n=1}^{\iy}\Big)
$$
defined by \er{ipD12}, \er{ipC13} is a real-analytic isomorphism
between $\mW_1^0$ and $\cM_1\ts \el2_1$, where $\mathcal M_1$ is
defined by (\ref{S1Mj}) with $\m_n^0=(\pi n)^2,
n\ge 1$. In particular, in the symmetric case (the function $q$ is
odd and the manifold $M$ is symmetric with respect to the plane
$x={1\/2}$) the spectral data mapping
\[
\wt\m: \mW_1^{0,odd} \ni q \to (\widetilde\mu_n)_1^\iy\in \cM_1
\]
is a real analytic isomorphism between $\mW_1^{0,odd}$ and $\cM_1$.
\end{theorem}
\subsubsection{Mixed boundary condition}
We next consider $-\D_\n,
\n\ge 1$, with mixed boundary condition:
\[
\lb{ipC21}
\left\{
\begin{aligned}
&- \Delta_{\nu}f = -{1\/\rho^2}(\rho^2f')'+{E_\n\/r^2}f,\quad \r=r^{m/2} \quad {\rm on} \quad (0,1),\\
& f(0)=0,\quad f'(1)+b
f(1)=0, \quad
(b,q)\in \R\ts\mW_1^0.
\end{aligned}
\right.
\]
Let $\mu_n=\mu_n(q,b), n=0,1,2,...$ be the associated eigenvalues. They satisfy
\[
\lb{ipC22}
\begin{split}
& \mu_n(q,b)=\mu_n^0+c_0+\wt\mu_n(q,b),\\
&\mu_n^0 = \pi^2(n+{1\/2})^2+2b, \\
&(\wt\mu_n)_{1}^{\iy}\in\ell^2,\qq c_0=\int_0^1\rt((q_0+q)^2+{E_\n\/r^2}\rt)dx.
\end{split}
\]
where the $\mu_n^0$'s are the eigenvalues for the unperturbed case $r=1$.
As in \cite{KC09}, we introduce the norming constants
\[
\label{ipC23}
\c_n(q,b)=\log\left|\r(1)f_n(1,q,b)\/f_n'(0,q,b)\right|,\qquad n\ge
0,
\]
where $f_n$ is the $n$-th eigenfunction satisfying $f_n'(0,q,b)\ne 0$
and $f_n(1,q,b)\ne 0$. When $q =b = 0$, a simple calculation gives
\[
\label{ipC24}
\c_n^0:=\c_n(0,0)=-\log \pi(n\!+\!{\textstyle{1\/2}}).
\]
\begin{theorem}
\lb{T2}
For any fixed $(b,q_0,\n)\in \R^2\ts\N$, consider
$-\D_\n$ with mixed boundary condition. Assume either (i) or (ii) of the following conditions:
\noindent
(i) $q_0=0$,
\noindent
(ii) $\n=1$ and $E_1=0$.
Then the mapping defined by \er{ipC22}-\er{ipC24}
$$
\P:q\mapsto
\left((\wt\mu_n(q,b))_{n=1}^{\iy}\,,(\c_{n-1}(q,b)-\c_{n-1}^0)_{n=1}^{\iy}\right)
$$
is a real-analytic isomorphism between $\mW_1^0$ and $\mathcal M_1\ts\ell^2_1$, where
$\mathcal M_1$ is defined by (\ref{S1Mj})
with $\m_n^0=\pi^2(n+{1\/2})^2+2b,
n\ge 1$. Moreover, for each $(q,b)\in \mW_1^0\ts\R$, the following
identity holds:
\[
\label{S1IdentityB}
b=\sum_{n=0}^{\iy} \lt(2-{e^{\c_n(q,b)}\/|{\pa w\/\pa \l}(\mu_n,q,b)|}\rt),
\]
where the function $w(\l,q,b)$ is given by
\[
\label{S1adam} w(\l,q,b)=
\cos\sqrt\l\cdot\prod_{n=0}^{\iy}{\l-\mu_n(q,b)\/\l-\mu_n^0}\,,\qquad \l\in\C.
\]
Here the series in (\ref{S1IdentityB}) and the product in (\ref{S1adam}) converge uniformly on bounded
subsets of $\C$.
\end{theorem}
\subsubsection{Robin boundary conditions}
The 3rd case is the Robin boundary condition:
\[
\lb{ipC31}
\left\{
\begin{aligned}
&- \Delta_{\nu}f = -{1\/\rho^2}(\rho^2f')'+{E_\n\/r^2}f,\quad \r=r^{m/2} \quad {\rm on} \quad (0,1), \\
& f'(0)-af(0)=0,\quad
f'(1)+b f(1)=0,\quad (a,b,q)\in \R^2\ts\mW_1^0.
\end{aligned}
\right.
\]
Let $\m_n=\m_n(q,a,b), n=0,1,2,...$ be the associated eigenvalues.
It is well-known that
\[
\lb{Case3EV}
\begin{split}
& \m_n= \mu_n^0+c_0+\wt\m_n(q,a,b), \\
&\m_n^0 = (n\pi)^2+2(a+b), \quad (\wt\m_n)_{1}^{\iy}\in\ell^2, \\ & c_0=\int_0^1\rt((q_0+q)^2+{E_\n\/r^2}\rt)dx.
\end{split}
\]
Note $\m_n^0, n\ge 0$ are the eigenvalues for $r=1$.
The norming constants are defined by
\[
\label{Case3NC}
\f_n(q,a,b)=\log\left|\r(1)f_n(1,q,a,b)\/f_n(0,q,a,b)\right|,\qquad
n\ge 0,
\]
where $f_n$ is the $n$-th eigenfunction. They satisfy
$f_n(1,q,a,b)\ne 0$ and $f_n(0,q,a,b)\ne 0$.
\begin{theorem}
\label{T3}
For any fixed $(a,b,q_0,\n)\in \R^3\ts\N$,
consider $-\D_\n$ with Robin boundary condition.
Suppose either (i) or (ii) of the following conditions hold:
\noindent
(i) $q_0=0$,
\noindent
(ii) $\n=1$ and $E_1=0$.
Then the mapping defined by \er{Case3EV}, \er{Case3NC}
\[
\lb{IPGbct}
\P_{a,b}:q\mapsto \Big((\wt\m_{n}(q,a,b))_{n=1}^{\iy}\, ,
(\f_{n}(q,a,b))_{n=1}^{\iy}\Big)
\]
is a real-analytic isomorphism between $\mW_1^0$ and $\cM_1\ts
\el2_1$, where $\mathcal M_1$ is defined by (\ref{S1Mj}) with $\m_n^0=(\pi n)^2+2(a+b), n\ge 1$.
\end{theorem}
\no {\bf Remark.} 1) In Theorems \ref{T1}--\ref{T3} we consider two
cases: (i) $q_0=0$, or (ii) $\n=1$ and $E_1=0$. The inverse problems
in Theorems \ref{T1}--\ref{T3} for the cases (1) $q_0\in \R$, $\n\ge 2$,
or (2) $\n=1$ and $E_1\ne 0$, are still open.
2) We have the standard asymptotics \er{ipD12}, \er{ipC22} and
\er{ipC31} for fixed $\n$. It is interesting to determine the
asymptotics uniformly in $\n\ge 1$.
\subsection {Main results II. The curvature mapping.}
The Minkowski problem in classical differential geometry asks for the existence of a convex surface with a prescribed Gaussian curvature. More precisely, for a given strictly positive real function $F$ defined on a sphere, one seeks a strictly convex compact surface
$\cS$, whose Gaussian curvature at $x$ is equal to
$F({\bf n}(x))$, where ${\bf n}(x)$ denotes the outer unit normal to
$\cS$ at $x$. The Minkowski problem was solved by Pogorelov \cite{P74} and
by Cheng-Yau \cite{CY76}.
We consider only the case $m=\dim Y=1$.
Note that our surface is not convex, in general.
We solve an analogue of the Minkowski problem
in the case of the surface of revolution by showing the existence of
a bijection between the Gaussian curvatures and the profiles of surfaces.
As it is well-known, the Gaussian curvature $\cK$ is given by
\[
\lb{K1} \cK=-{r''\/r},\qqq \r=r^{1\/2}.
\]
As above, we represent the profile $r(x)$ in the following way:
\[
\lb{K1r}
r(x)=r_0e^{2Q(x)}, \quad Q(x)=\int_0^x (q_0+q(t))dt,\quad (q_0,q)\in \R\ts \mW_1^0.
\]
Then we have
\[
\lb{K2}
\cK=-2q'-4(q_0+q)^2.
\]
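Indeed (a short derivation of \er{K2}, added here): \er{K1r} gives $r'=2(q_0+q)\,r$ and hence $r''=\big(2q'+4(q_0+q)^2\big)r$, so that $\cK=-r''/r$ equals the right-hand side of \er{K2}.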
Note that if $q=0$, then $\cK=-4q_0^2<0$ is a negative constant.
Letting
\[
\lb{K3}
\cK_0=4\int_0^1
(2q_0q+q^2)dx,\quad
G(q)=2q'+4(q_0+q)^2-4q_0^2-\cK_0,
\]
we rewrite $\cK$ into the form
\begin{equation}
\lb{K4}
\cK=-G(q)-\cK_0-4q_0^2.
\end{equation}
\begin{theorem}
\lb{T4} Let
the Gaussian curvature $\cK$ and the profile $r(x)$ be given by
\er{K1},
\er{K1r}, where $(q_0,q)\in \R\ts\mW_1^0$.
Then the mapping $G: \mW_1^0\to \mH_0$ defined by
\[
q\to G(q)=-\cK-\cK_0-4q_0^2
\]
is a real analytic isomorphism between
$\mW_1^0$ and $\mH_0$. Moreover, the constant $\cK_0$ is uniquely
defined by $G(q)$.
\end{theorem}
\noindent
{\bf Remark.} This theorem also holds with $\mW_1^0$ replaced by $\mH_1$.
\medskip
Theorem \ref{T4} gives the mapping between $\cK$ and the profile $r$.
Thus Theorems \ref{T1} $\sim$ \ref{T4} make the mapping
$$
{\rm Gaussian \ curvature} \ \cK\qq \to \qq {\rm eigenvalues\ +\ norming
\ constants}
$$
well-defined. We illustrate this by Theorem \ref{T5}.
We consider the Sturm-Liouville problem with Robin boundary condition:
\[
\lb{ipGc1}
-{1\/\r^2}(\r^2f')'+{E_\n\/r^2}f=\l f,\qq f'(0)-af(0)=0,\quad f'(1)+b f(1)=0.
\]
Let $\x = G(q) \in \mH_0$ and $A=(a,b,q_0)\in\R^3$. Let
$\m_n=\m_n(\x,A), n=0,1,2,...$ be the eigenvalues of \er{ipGc1}.
They satisfy
\begin{equation}
\lb{ipCc2}
\begin{split}
& \m_n(\x,A)= \mu_n^0+c_0+\wt\m_n(\x,A), \\
& \mu_n^0 = (n\pi)^2 + 2(a+b), \quad
(\wt\m_n)_{1}^{\iy}\in\ell^2,\\
&c_0=\int_0^1\rt((q_0+q)^2+{E_\n\/r^2}\rt)dx.
\end{split}
\end{equation}
Here $\mu_n^0$, $n\ge 0$, are the unperturbed eigenvalues for the
case $r=1$. We introduce the norming constants
\[
\label{ipGc3}
\f_n(\x,A)=\log\lt|{\r(1)f_n(1,\x,A)\/f_n(0,\x,A)}\rt|,\qq n\ge 1,
\]
where $f_n$ is the $n$-th eigenfunction. Note that $f_n(1,\x,A)\ne
0$ and $f_n(0,\x,A)\ne 0$.
\begin{theorem}
\label{T5}
Let $A=(a,b, q_0)\in\R^3, \n\ge 1$ be fixed and consider $-\D_\n$ with Robin boundary condition. Assume either (i) or (ii) of the following conditions:
\noindent
(i) $q_0=0$,
\noindent
(ii) \ $\n=1$ and $E_1=0$.
Then the mapping defined by \er{ipCc2}, \er{ipGc3}
\[
\lb{ipCc4} \x\to \F_{A}(\x)=\Big((\wt\m_{n}(\x,A))_{n=1}^{\iy}\, ,
(\f_{n}(\x,A))_{n=1}^{\iy}\Big)
\]
is a real-analytic isomorphism between $\mH_0$ and $\cM_1\ts
\el2_1$, where $\mathcal M_1$ is defined by (\ref{S1Mj})
with $\m_n^0=(\pi n)^2+2(a+b), n\ge 1$.
\end{theorem}
\subsection{Brief overview}
There is an abundance of works devoted to the spectral theory and inverse problems for the surface of revolution from the viewpoints of classical inverse Sturm-Liouville theory, integrable systems, and micro-local analysis, see \cite{AA07}, \cite{E98}, \cite{GWL05}, \cite{GL99}, \cite{M83} and references therein.
Brüning-Heintze \cite{BH84} proved that the symmetric metric is determined from the spectrum by using the 1-dimensional Gel'fand-Levitan theory \cite{L84}, \cite{M77}.
For integrable systems associated with
surfaces of revolution, see e.g. \cite{KT96}, \cite{Ta97},\cite{BeKo99},
\ \cite{SW03}, \cite{S08} and references therein. Here we mention the work of Zelditch \cite{Z98}, which proved that isospectral surfaces of revolution with simple length spectrum, under some additional conditions, are isometric. In fact, the assumptions ensure the existence of global action-angle variables for the geodesic flow, which entails that the Laplacian has a global quantum normal form in terms of action operators. From the singularity expansion of the trace of the wave group, one can then reconstruct the global quantum normal form, hence the metric. This argument, in due course, recovers the result of \cite{BH84}. Note, however, that the class of metrics considered is shown to be residual in
the class of metrics satisfying all the assumptions above concerning
the metric but not the simple length spectrum assumption.
In the proof we use the analytic approach of Trubowitz and his co-authors
(see \cite{PT87} and references therein), together with its development for periodic systems \cite{KK97}. Using these we obtain
the global transformation for inverse Sturm-Liouville theory \cite{IK13}.
Note that for \cite{IK13} the results of inverse Sturm-Liouville theory \cite{PT87}, \cite{KC09} and \cite{K02} are important.
\subsection{Plan of the paper}
We start by proving Theorem \ref{T4}, which is based on
an abstract theorem in non-linear functional analysis \cite{KK97}.
We do this in Section 2, after preparing the estimates for the
Riccati type mapping. The idea of the proof of Theorems \ref{T1},
\ref{T2} and \ref{T3} consists in converting the Sturm-Liouville equation
$$
- \frac{1}{\rho^2}\left(\rho^2f'\right)' + \frac{E_\n}{r^2}\,f = \l f
$$
into the Schr{\"o}dinger equation
$$
- y'' + py = \l y
$$
using a non-linear mapping. In Section 3, we explain the results on the isomorphism property of the spectral data mapping. The paper \cite{IK13} has been prepared for this purpose, and using the results there we shall prove Theorems \ref{T1}, \ref{T2} and \ref{T3} in Section 4.
\section {The curvature inverse problem and Riccati type mappings}
\setcounter{equation}{0}
\subsection{Estimates for Riccati type mappings.}
We define the mapping $G: \cH\to \mH_0$, where $\cH=\mH_1$ or
$\cH=\mW_1^0$ by
\[
\lb{dR}
\begin{aligned}
\ca p=G(q)=q'+q^2+2q_0q-c_0,\qqq c_0=\int_0^1 (q^2+2q_0q)dx, \\
q_0=\const \in \R, \qq q\in\mW_1^0\qq or \qq q\in\mH_1 \ac,
\end{aligned}
\]
\begin{lemma}
\lb{TR1} Let $p$ be given by \er{dR}, where $ q\in \mH_1$ or $q\in
\mW_1^0$. Then the following estimates hold true:
\[
\lb{Rx1} \|q'\|^2\le \|p\|^2=\|q'\|^2+\|q^2+2q_0q-c_0\|^2,
\]
\[
\lb{Rx2} \|p\|^2=\|q'\|^2+\|q^2\|^2+4q_0^2\|q\|^2+4q_0(q^3,1)-c_0^2,
\]
\[
\lb{Rx3} \|p\|^2\le \|q'\|^2+\|q^2\|^2+4q_0^2\|q\|^2+4q_0(q^3,1),
\]
where $(\cdot, \cdot)$ is the scalar product in $L^2(0,1)$.
\end{lemma}
\no {\bf Proof.} Let $h=q^2+2q_0q-c_0$. We have
$$
\begin{aligned}
&\|p\|^2=\|q'\|^2+\|h\|^2+2(q',h),\\
&(q',h)=(q',q^2+2q_0q-c_0)=\Big[{q^3\/3}+q_0q^2-c_0q\Big]_0^1=0,\\
\end{aligned}
$$
where we used that $q(0)=q(1)$ (for $q\in \mW_1^0$ one has $q(0)=q(1)=0$;
for $q\in \mH_1$ the condition $\int_0^1q'dx=0$ gives $q(1)=q(0)$). This yields \er{Rx1}.
We have
$$
\begin{aligned}
&\|h\|^2=\|q^2+2q_0q-c_0\|^2=\|q^2+2q_0q\|^2-2(q^2+2q_0q,c_0)+c_0^2
=\|q^2+2q_0q\|^2-c_0^2,\\
&\|q^2+2q_0q\|^2=\|q^2\|^2+4q_0^2\|q\|^2+4q_0(q^2,q)=
\|q^2\|^2+4q_0^2\|q\|^2+4q_0(q^3,1),
\end{aligned}
$$
and together with \er{Rx1} this yields \er{Rx2} and \er{Rx3}. \BBox
\medskip
We show that the mapping $G= G(q)=G(q,q_0)$ in \er{dR} is real analytic.
\begin{lemma}
\lb{TR2} Let $\cH=\mH_1$ or $\cH=\mW_1^0$ and let $q_0\in \R$. The
mapping $G: \cH\to \mH_0$ given by \er{dR} is real analytic and its
gradient is given by
\[
\lb{ar1}
{\pa G(q)\/\pa q} f=f'+2(q_0+q) f -\int_0^1 2(q_0+q)fdx,\qqq
\qqq \ \forall \ \ q, f\in \cH.
\]
Moreover, the operator ${\pa G(q)\/\pa q} $ is invertible for all
$q\in\cH$.
\end{lemma}
\no {\bf Proof}. By the standard arguments (see \cite{PT87}), we see
that $G(q)$ is real analytic and its gradient is given by \er{ar1}.
Due to \er{ar1}, the linear operator ${\pa G(q)\/\pa q}: \cH\to
\mH_0$ is a sum of a boundedly invertible operator and a
compact operator for all $q\in \cH$. Hence ${\pa G(q)\/\pa q}$ is a
Fredholm operator. We prove that the operator ${\pa G(q)\/\pa q}$ is
invertible by contradiction. Let $f\in \cH$ be a solution of the
equation
\[
\lb{ar2} {\pa G(q)\/\pa q}f=0, \qqq f\ne 0,
\]
for some fixed $q\in \cH$. Due to \er{ar1} we have the equation
\[
\lb{ar3} {\pa G(q)\/\pa q}f=f'+2(q_0+q) f -C=0,\qqq C=\int_0^1
2(q_0+q)fdx.
\]
This implies
\[
(e^{2Q}f)'=Ce^{2Q},\qqq Q=\int_0^x(q_0+q)dt.
\]
Let us first assume that the constant $C=0$. Then we get $ (e^{2Q}f)'=0 $, which
yields $(e^{2Q}f)(x)=(e^{2Q}f)(0), x\in [0,1]$. If $f\in \mW_1^0$,
then we obtain $(e^{2Q}f)=0$ and $f=0$. If $f\in \mH_1$, then we
obtain $(e^{2Q}f)(x)=f(0)$ and $f=e^{-2Q}f(0)$. This gives $f=0$,
since $\int_0^1fdt=0$. In any case, we have arrived at a
contradiction.
Next let us assume that $C\ne 0$. Without loss of generality, we can assume that
$C=1$. Then we get
$$
(e^{2Q}f)(x)=f(0)+\int_0^xe^{2Q}dt.
$$
If $f\in \mW_1^0$, then we obtain
$(e^{2Q}f)(1)=\int_0^1e^{2Q}dt>0$, which gives
a contradiction.
If $f\in \mH_1$, then we obtain
$(e^{2Q}f)(1)=f(0)+\int_0^1e^{2Q}dt>f(0)$, which again gives
a contradiction. Thus the operator ${\pa G\/\pa q} $ is invertible
for all $q\in\cH$.
\BBox
\subsection {Analytic isomorphism}
In order to prove Theorem \ref{T4} we use the ``direct approach''
in \cite{KK97} based on nonlinear functional analysis. Our main tool is the following theorem in \cite{KK97}.
\begin{theorem}
\lb{TA97}
Let $H, H_1$ be real separable Hilbert spaces equipped with norms
$\|\cdot \|, \|\cdot \|_1$. Suppose that the map $f: H \to H_1$
satisfies the following conditions:
\no i) $f$ is real analytic and the operator ${d \/dq}f$ has an
inverse for all $q\in H$,
\no ii) there is a nondecreasing function $\e: [0, \iy ) \to [0, \iy
), \e (0)=0,$ such that $\|q\|\le \e (\|f(q)\|_1)$ for all $q\in
H$,
\no iii) there exists a linear isomorphism $f_0:H\to H_1$ such that
the mapping $f-f_0: H \to H_1$ is compact.
\no Then $f$ is a real analytic isomorphism between $H$ and $H_1$.
\end{theorem}
{\bf Proof of Theorem \ref{T4}.} We check all conditions in Theorem
\ref{TA97} for the mapping $\x=G(q), q\in \mW_1^0$ given by \er{K3}.
The proof for the case $q\in \mH_1$ is similar. We rewrite this
mapping in the form
$$
\x=G(v/2)=v'+2v_0v+v^2-c_0,\qqq c_0=\int_0^1 (2v_0v+v^2)dt,
$$
where $v=2q\in \mW_1^0$ and $v_0=2q_0$ is a constant.
Lemma \ref{TR2} implies the assertion (i), and Lemma \ref{TR1} the assertion (ii).
Let us check iii). We take the model mapping $\x_0$ given by $\x_0(v)=v'$.
Suppose $v^\n\to v$ weakly in $\mW_1^0$ as $\n\to \iy$. Then
$v^\n\to v$ strongly in $\mH_0$ as $\n\to \iy$, since the embedding
$\mW_1^0\to\mH_0$ is compact. Hence the mapping $v\to
\x(v)-\x_0(v)$ is compact.
Therefore, all conditions in Theorem \ref{TA97} hold true and the mapping
$G:\mW_1^0 \to \mH_0$ is a real analytic isomorphism between
$\mW_1^0$ and $\mH_0$.
\BBox
\section{Spectral data mapping for the case $\nu=1$ and $E_1=0$}
\setcounter{equation}{0}
\subsection {Unitary transformations.}
Consider the Sturm-Liouville operator $-\D_{q}$ defined in
$L^2((0,1);\r^2dx)$, where $\r=\r(x)>0$, having the form
\[
\lb{if1}
-\D_{q} f=-{1\/ \r^2}(\r^2f')',\qqq\qqq \r=r^{m\/2}=e^Q,
\]
equipped with the boundary condition
\[
\lb{if2}
f'(0)-af(0)=0,\qquad f'(1)+b f(1)=0,\qquad a,b\in \R\cup \{\iy\}.
\]
Here $Q'$ is continuous on $[0,1]$.
We define the simple unitary transformation $\mU$ by
\[
\lb{dUx}
\mU:
L^2([0,1],\r^2dx)\to L^2([0,1],dx),\qqq \mU f= \r f.
\]
We transform the operator $-\D_{q}$ into the Schr\"odinger operator $S_p$ by
\[
\lb{5}
\begin{aligned}
\mU (-\D_{q}) \mU^{-1}=
-\r^{-1}\pa_x \r^2\pa_x \r^{-1}=\cD^*\cD=S_p+c_0,\qq
S_p =-{d^2\/dx^2}+p,\\
c_0=\int_0^1(Q''+(Q')^2)dx,\qqq
\qqq p=Q''+(Q')^2-c_0.
\end{aligned}
\]
Indeed, using the identity $\r=e^{Q}$, we obtain
\[
\begin{aligned}
\lb{a7}
\cD=\r\ \pa_x \ \r^{-1}= \pa_x -Q',\qqq \cD^*=
\rt(\r \ \pa_x \ \r^{-1}\rt)^*=-\pa_x-Q',\\
\cD^*\cD=-(\pa_x +Q')(\pa_x -Q')=-\pa_x^2+Q''+(Q')^2.
\end{aligned}
\]
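We note (a short check added here, linking \er{5} with Section 2): for $Q'=q_0+q$ with $q\in\mW_1^0$ we have $\int_0^1Q''dx=q(1)-q(0)=0$, hence $c_0=\int_0^1(q_0+q)^2dx$ and
$$
p=Q''+(Q')^2-c_0=q'+q^2+2q_0q-\int_0^1\big(q^2+2q_0q\big)dx=G(q),
$$
with $G$ from \er{dR}; the constant $q_0^2$ cancels.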
Here the operator $S_p=-{d^2\/dx^2}+p$ acts in $ L^2([0,1],dx)$. We
describe the boundary conditions for the operators $-\D_{q}$ and
$S_p$, where $y=\r f$. We have the following identities
\[
\begin{aligned}
\lb{bcfy}
y(0)=f(0),\qqq y'(0)=Q'(0)f(0)+f'(0),\\
y(1)=\r(1)f(1),\qqq y'(1)=Q'(1)\r(1)f(1)+\r(1)f'(1).
\end{aligned}
\]
The identities \er{bcfy} yield the relations between the boundary conditions for $f$ for $\D_{q}$ and $y$ for $S_p$:
\[
\lb{if2a}
\ca f'(0)-af(0)=0,\\
f'(1)+b f(1)=0,\ac
\ \Leftrightarrow \
\ca y'(0)-(a+Q'(0))y(0)=0,\\
y'(1)+(b-Q'(1)) y(1)=0,\ac \
a,b\in \R\cup \{\iy\}.
\]
We consider the eigenvalue problems for $-\Delta_q$ and $S_p$ on
$(0,1)$ subject to \er{if2a}. Our second main
theorem asserts that the above transformation $- \Delta_q \to S_p$ preserves the
boundary conditions and spectral data.
\begin{theorem}
\lb{TSD} Let $p=G(q), q\in \mW_1^0$, be defined by \er{dR}. Then
the operator $-\D_q$, subject to the boundary condition \er{if2},
is unitarily equivalent to $S_p+c_0$ (with $c_0$ as in \er{5}), subject to
the boundary condition \er{if2a}. Moreover, the eigenvalues of $-\D_q$
and of $S_p$ coincide up to the constant shift $c_0$, and the norming constants coincide.
\end{theorem}
{\bf Proof.} Let $p=G(q), q\in \mW_1^0$, be defined by \er{dR}. Then
under the transformation $y=\mU f=\r f$ the operators $S_p+c_0$ and
$-\D_q$ are unitarily equivalent by \er{5}. Moreover, due to $y=\r f$ and
\er{if2}, the boundary conditions transform
as in \er{if2a}.
Using \er{bcfy} we can define the same norming constants. \BBox
\bigskip
Assume that the mapping $p\to $ (eigenvalues + norming constants for
the operator $S_p$) gives the solution of the inverse problem for
the operator $S_p$. Then, since the mapping $q\to p$ is an analytic
isomorphism, we obtain the solution of the inverse problem for
the mapping $q\to $ (eigenvalues + norming constants for the
operator $-\D_q$).
\medskip
Similar arguments work for the operator $-\D_q$ and the associated inverse problem. We will give a more precise explanation in the proof of Theorems \ref{T1} $\sim$ \ref{T3}.
\medskip
Therefore, the inverse problem for $-\Delta_q$ is solvable if and
only if the one for $S_p$ is. In this section, we consider the case $E_1=0$, $\nu=1$.
\subsection{Robin boundary condition}
Consider the operator $-\Delta_qf=-{1\/\r^2}(\r^2f')'$ subject to
the boundary condition (\ref{if2}) for the case $a, b\in \R$. We consider
the case $q_0\in \R$ and $E_1=0$. Let $A=(a,b,q_0)\in \R^3$. Let
$\m_n=\m_n(q,A), n\geq 0,$ be the eigenvalues of $\Delta_q$. Then we
have
$$
\m_n(q,A)=\m_n^0+c_0+\wt\m_n(q,A),\quad \mathrm{where} \quad
(\wt\m_n)_{1}^{\iy}\in\ell^2,\qq c_0=\int_0^1(q_0+q)^2dx,
$$
and $\m_n^0=(\pi n)^2+2(a+b)$, $n\ge 0$, denote the unperturbed
eigenvalues. We introduce the norming constants
\[
\label{ncab}
\f_n(q,A)=\log\left|\r(1)f_n(1,q,A)\/f_n(0,q,A)\right|,\qquad n\ge
0,
\]
where $f_n$ is the $n$-th eigenfunction. Note that $f_n(1,q,A)\ne 0$
and $f_n(0,q,A)\ne 0$. The inverse problem for $S_p$ with Robin
boundary condition was solved in \cite{KC09}. Therefore, applying
Theorem \ref{T4} and the result of the inverse problem for $S_p$
\cite{KC09}, we have the following theorem.
\begin{theorem}
\label{Tipq3} Let $E_1=0$ for $\n=1$. For each $A=(a,b, q_0)\in\R^3$,
the mapping
$$
\P_{A}:q\mapsto \left((\wt\m_{n}(q,A))_{n=1}^{\iy}\,;
(\f_{n}(q,A))_{n=1}^{\iy}\right)
$$
is a real-analytic isomorphism between $\mW_1^0$ and $\cM_1\ts
\el2_1$, where $\cM_1$ is given by \er{S1Mj} with $\m_n^0=(\pi
n)^2+2(a+b), n\ge 1$.
\end{theorem}
{\bf Proof.} Let $q\in \mW_1^0$ and $A=(a,b,q_0)\in\R^3$. We
consider the Sturm-Liouville problem with the Robin boundary
conditions,
$$
-{1\/\r^2}(\r^2f')'=\l f,\qqq f'(0)-af(0)=0,\qqq f'(1)+b f(1)=0.
$$
Let $\m_n=\m_n(q,A), n=0,1,2,...$ be the eigenvalues of the
Sturm-Liouville problem. It is well known that
$$
\m_n(q,A)=\m_n^0+c_0+\wt\m_n(q,A),\quad \mathrm{where}
\quad (\wt\m_n)_{1}^{\iy}\in\ell^2,\qq c_0=\int_0^1(q_0+q)^2dx.
$$
Following \cite{KC09}, we introduce the norming constants
\[
\label{ncb}
\f_n(q,A)=\log\left|\r(1)f_n(1,q,A)\/f_n(0,q,A)\right|,\qquad n\ge
0,
\]
where $f_n$ is the $n$-th eigenfunction. Thus for fixed $A\in \R^3$
we have the mapping
$$
\P_{A}:q\mapsto \P_{A}(q)=
\left((\wt\m_n(q,A))_{n=1}^{\iy}\,;(\f_n(q,A))_{n=1}^{\iy}\right)
$$
Let $p=G(q), q\in \mW_1^0$. We use Theorem \ref{TSD}. Consider the
Sturm-Liouville problem
$$
\begin{aligned}
S_p y=-y''+p(x)y,\qquad y'(0)-a_0y(0)=0,\qquad y'(1)+b_0y(1)=0,\\
a,b,q_0\in \R,\qq a_0=a+q_0,\qq b_0=b-q_0.
\end{aligned}
$$
Denote by $\s_n=\s_n(p), n\ge 0$ the eigenvalues of $S_p$ and let
$\vk_n(p)$ be the corresponding norming constants given by
\[
\label{NuDef}
\vk_n(p)=\log\lt|{y_n(1,p,a_0,b_0)\/y_n'(0,p,a_0,b_0)}\rt|\,,\qquad
n\ge 0.
\]
Recall that due to \cite{KC09} (see Proposition 5.4 in \cite{KC09})
for each $a_0,b_0\in\R$ the mapping
\[
\lb{FKC} \F_{a_0,b_0}:p\mapsto \F_{a_0,b_0}(p)=
\left((\wt\s_{n}(p))_{n=1}^{\iy}\,;(\vk_n(p))_{n=1}^{\iy}\right)
\]
is a real-analytic isomorphism between $\mH_0$ and $\cM_1\ts\el2_1$.
Due to Theorem \ref{TSD} we obtain the identity
$$
\F_{a_0,b_0}(G(q))=\P_{A}(q),\qq \forall \ q\in \mW_1^0.
$$
The mapping $\P_{A}(\cdot)$ is the composition of two mappings $\F_{a_0,b_0}$ and
$G$, where each of them is the corresponding analytic isomorphism
(see \er{FKC} and Theorem~\ref{T4}). Then for each $A\in \R^3$ the
mapping
$$
\P_{A}:q\mapsto
\left((\wt\m_n(q,A))_{n=1}^{\iy}\,;(\f_n(q,A))_{n=1}^{\iy}\right)
$$
is a real-analytic isomorphism between $\mW_1^0$ and
$\cM_1\ts\ell^2_1$. \BBox
\subsection{Dirichlet boundary condition}
On the
interval $[0,1]$ we consider the operator $-\D_\n=-{1\/\r^2}(\r^2f')'$ with Dirichlet boundary condition. We consider the
case $\n=1$, $q_0\in \R$ and $E_1=0$. Denote by $\m_n=\m_n(q),
n=1,2,\dots$, the eigenvalues of $-\D_1$. It is well-known that all
$\m_n$ are simple and satisfy
\[
\lb{DBc0} \m_n=\m_n^0+c_0+\wt\m_n,\qq \m_n^0 = (n\pi)^2, \quad
(\wt\m_n)_{1}^{\iy}\in\ell^2, \qqq c_0=\int_0^1(q_0+q)^2dx,
\]
where $\m_n^0=(\pi n)^2$, $n\ge 1$, are the eigenvalues for the unperturbed
case $r=1$. We introduce the norming constants
\[
\lb{DBc1} \vk_n(q)=\log\left|\r(1)f_n'(1,q)\/f_n'(0,q)\right|,
\qquad n\ge 1,
\]
where $f_n$ is the $n$-th eigenfunction of $-\Delta_{\nu}$. Note
that $f_n'(0)\ne 0$ and $f_n'(1)\ne 0$. The inverse problem for
$S_p$ with the Dirichlet boundary condition was solved in
\cite{PT87}. Therefore, applying Theorem \ref{T4} and the result of
the inverse problem for $S_p$ \cite{PT87}, we have the following
theorem.
\begin{theorem}
\label{Tipq1}
Let $\n=1$ and $E_1=0$. For any $q_0\in\R$ the mapping
$$
\P : q\mapsto \left((\wt\m_{n}(q))_{n=1}^{\iy}\,;
(\vk_{n}(q))_{n=1}^{\iy}\right)
$$
is a real-analytic isomorphism between $\mW_1^0$ and $\cM_1\ts
\el2_1$, where $\cM_1$ is given by \er{S1Mj} with $\m_n^0=(\pi
n)^2, n\ge 1$. In particular, in the symmetric case the spectral
mapping
\[
\wt\m: \mW_1^{0,odd}\to \cM_1,\qqq {\rm given \ by } \qqq q\to
\wt\m
\]
is a real analytic isomorphism between the Hilbert space
$\mW_1^{0,odd}$ and $\cM_1$.
\end{theorem}
{\bf Proof}. The proof repeats the proof of Theorem \ref{Tipq3},
based on Theorem \ref{TSD} and the well-known results from
\cite{PT87}. \BBox
\subsection{Mixed boundary condition}
We consider the operator $-\D_\n=-{1\/\rho^2}(\rho^2f')'$ with
mixed boundary condition $f(0)=0,\quad f'(1)+b f(1)=0$, where
$(b,q)\in \R\ts\mW_1^0.$
We consider the
case $\n=1$, $q_0\in \R$ and $E_1=0$.
Let $\mu_n=\mu_n(q,b), n=0,1,2,...$ be the associated eigenvalues. They satisfy
\[
\lb{ipC22x}
\begin{split}
\mu_n(q,b)=\mu_n^0+c_0+\wt\mu_n(q,b), \qq (\wt\mu_n)_{1}^{\iy}\in\ell^2,\qq
c_0=\int_0^1(q_0+q)^2dx,
\end{split}
\]
where $\mu_n^0= \pi^2(n+{1\/2})^2+2b$ are the eigenvalues for
the unperturbed case $r=1$. We introduce the norming constants
\[
\label{ipC23x}
\c_n(q,b)=\log\left|\r(1)f_n(1,q,b)\/f_n'(0,q,b)\right|,\qquad n\ge
0,
\]
where $f_n$ is the $n$-th eigenfunction satisfying $f_n'(0,q,b)\ne
0$
and $f_n(1,q,b)\ne 0$. When $q =b = 0$, a simple calculation gives
$\c_n^0:=\c_n(0,0)=-\log \pi(n\!+\!{\textstyle{1\/2}}).
$
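For the reader's convenience we sketch this calculation: in the
unperturbed case $r=1$ (so that $\r=1$) the eigenvalue problem reads
$-f_n''=\mu_n^0 f_n$, $f_n(0)=0$, $f_n'(1)=0$, whence
$f_n(x)=\sin \sqrt{\mu_n^0}\, x$ with $\sqrt{\mu_n^0}=\pi(n+{1\/2})$.
Since $|f_n(1)|=|\sin \pi(n+{1\/2})|=1$ and $f_n'(0)=\sqrt{\mu_n^0}$,
we obtain $\c_n(0,0)=-\log \pi(n+{1\/2})$.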
\begin{theorem}
\lb{Tipq2}
Let $\n=1$ and $E_1=0$ and let $b,q_0\in\R$.
Consider the inverse problem for \er{ipC21} $\sim$ \er{ipC24} for
any fixed $(b,q_0)\in \R^2$.
(i) The mapping
$$
\P:q\mapsto
\left((\wt\mu_n(q,b))_{n=1}^{\iy}\,;(\c_{n-1}(q,b)-\c_{n-1}^0)_{n=1}^{\iy}\right)
$$
is a real-analytic isomorphism between $\mW_1^0$ and $\mathcal
M_1\ts\ell^2_1$, where $\cM_1$ is given by \er{S1Mj} with
$\m_n^0=\pi^2(n+{1\/2})^2+2b, n\ge 1$.
(ii) For each $(q;b)\in \mW_1^0\ts\R$ the following identity holds true:
\[
\label{IdentityB}
b=\sum_{n=0}^{+\iy} \lt(2-{e^{\c_n(q,b)}\/|{\pa w\/\pa \l}(\mu_n,q,b)|}\rt),
\]
where the function $w(\l,q,b)$ is given by
\[
\label{adamz} w(\l,q,b)=
\cos\sqrt\l\cdot\prod_{n=0}^{+\iy}{\l-\mu_n(q,b)\/\l-\mu_n^0}\,,\qquad \l\in\C.
\]
Here both the product and the series converge uniformly on bounded
subsets of the complex plane.
\end{theorem}
{\bf Proof}. The proof is based on Theorem \ref{TSD} and the results
from \cite{KC09}. We omit it, since it repeats the proof of Theorem
\ref {Tipq3}. \BBox
\subsection{Inverse problem for the curvature.}
We define the simple unitary transformation $\mU$ by
$$
\mU:
L^2([0,1],rdx)\to L^2([0,1],dx),\qqq y=\mU f= r^{{1\/2}}f, \qq \r=r^{{1\/2}}.
$$
{\bf Proof of Theorem \ref{T5}.} Consider the inverse problem for
\er{ipGc1}-\er{ipGc3} for fixed $A=(a,b, q_0)\in\R^3$.
i) Let $q_0=0, \n\ge 1$. We have two mappings $\x=G(q)$ and
$$
q\to \P_{A_0}(q)=\rt((\wt\m_{n}(q,A_0))_{n=1}^{\iy}\,;
(\f_{n}(q,A_0))_{n=1}^{\iy}\rt)
$$
and the composition of these mappings
\[
\lb{PG1} \x\to \P_{A_0}(G^{-1}(\x))=\P_{A_0}\circ G^{-1}(\x).
\]
Then due to Theorems \ref{T2} and \ref{T4}, we deduce that the
mapping $\P_{A_0}\circ G^{-1}$ is a real-analytic isomorphism
between $\mH_0$ and $\cM_1\ts \el2_1$, where $\cM_1$ is given by
\er{S1Mj} with $\m_n^0=(\pi n)^2+2(a+b)$.
ii) Let $q_0\in \R, \n=1,\ E_1=0$ and $(a,b,q)\in \R^2\ts\mW_1^0$.
Consider the Sturm-Liouville operator $-\D_q$ given by
\[
\lb{ipC31aa}
\begin{aligned}
-\D_qf=-{1\/\r^2}(\r^2f')',\qqq f'(0)-af(0)=0,\qquad f'(1)+b f(1)=0.
\end{aligned}
\]
Let $\m_n=\m_n(q,a,b), n=0,1,2,...$ be the eigenvalues of the
Sturm-Liouville problem \er{ipC31aa}. It is well known that
\[
\lb{ipC32} \m_n=\m_n^0+c_0+\wt\m_n(q,a,b),\quad
\mathrm{where} \quad (\wt\m_n)_{1}^{\iy}\in\ell^2,\qq c_0=\|q\|^2.
\]
Here $\m_n^0=(\pi n)^2+2(a+b)$, $n\ge 1$ are the unperturbed eigenvalues
for $r=1$. We introduce the norming constants
\[
\label{ipC33}
\f_n(q,a,b)=\log\left|\r(1)f_n(1,q,a,b)\/f_n(0,q,a,b)\right|,\qquad
n\ge 0,
\]
where $f_n$ is the $n$-th eigenfunction. Note that $f_n(1,q,a,b)\ne
0$ and $f_n(0,q,a,b)\ne 0$.
Under the transformation $\mU: L^2([0,1],\r^2dx)\to L^2([0,1],dx)$,
given by $y=\mU f= \r f$, we obtain
$$
\mU (-\D_{\r,u}) \mU^{-1}=S_p+c_0,\qq S_p y=-y''+py,
$$
where due to \er{bcfy} the function $y$ satisfies the following
boundary conditions
\[
\lb{if2x}
\ca f'(0)-af(0)=0,\\
f'(1)+b f(1)=0,\ac
\qq \Leftrightarrow \qq
\ca y'(0)-(a+q_0)y(0)=0,\\
y'(1)+(b-q_0) y(1)=0,\ac \qq
a,b\in \R\cup \{\iy\}.
\]
We have two mappings $\x=G(q)$ and
$$
q\to \P_{a,b}(q)=\rt((\wt\m_{n}(q,a,b))_{n=1}^{\iy}\,;
(\f_{n}(q,a,b))_{n=1}^{\iy}\rt)
$$
and the composition of these mappings
\[
\lb{PG1x} \x\to \P_{a,b}(G^{-1}(\x))=\P_{a,b}\circ G^{-1}(\x).
\]
Then due to Theorems \ref{T2} and \ref{T4}, we deduce that the
mapping $\P_{a,b}\circ G^{-1}$ is a real-analytic isomorphism
between $\mH_0$ and $\cM_1\ts \el2_1$, where $\cM_1$ is given by
\er{S1Mj}.
\BBox
\section {Spectral data mapping for the case $q_0=0$}
\setcounter{equation}{0}
\subsection {Non-linear mapping}
In view of (\ref{if1}), we take $\r(x)$ as follows
\[
\lb{dro} \r(x)=e^{Q(x)},\qqq Q=\int_0^xq(t)dt,\qqq q_0=0.
\]
We assume that the potential $u=u(Q)$ is related to $q$ in the
following way.
\medskip
\noindent
{\bf Condition U.} {\it
The function $u(\cdot)$ is real analytic and satisfies
\[
\lb{con1}
u'(t)\le 0, \qqq \forall \ t\in \R,
\]
and
\[
\lb{con2}
\|u'(Q)\|\le F( \|q\|), \quad q\in \mW_1^0,
\]
for some increasing function $F: [0, \iy ) \to [0, \iy )$. Here
$\|\cdot\|$ denotes the norm of $L^2(0,1)$.}
\medskip
Since $\r, u$ are related to $q$ by \er{dro} and Condition U,
we write $\Delta_q$ instead of $\Delta_{\r,u}$. Now, we recall the
theorem from \cite{IK13} about the following mapping
\[
\lb{Pu}
p=P(q)=q'+q^2+u(Q)-c_0,\qqq c_0=\int_0^1(q'+q^2+u(Q))dx.
\]
\begin{theorem}
\lb{TPg}
The mapping $P:\mW_1^0\to \mH_0$ given by \er{Pu} is a real analytic
isomorphism between the Hilbert spaces $\mW_1^0$ and $\mH_0$. In particular, the operator ${\pa P\/\pa q}$ has an inverse for each $q\in
\mW_1^0$. Moreover, it has the following properties.
\noindent
(1) Let $p=P(q), q\in \mW_0^1$. Then the following estimates hold true
\[
\begin{aligned}
\lb{eP1}
&\|q'\|^2\le \|p\|^2 \le \|q'\|^2+2\|q^2\|^2+2\|u\|^2-c_0^2,\\
&\|u\|\le \|q\| F(\|q\|).
\end{aligned}
\]
\noindent
(2) The mapping $P(q)-q' : \mW_1^0 \to \mH_0$ is compact.
Furthermore, the mapping $q\to p=P(q), q\in \mW_1^{0,odd}$
given by \er{Pu} is a real analytic
isomorphism between the Hilbert spaces $\mW_1^{0,odd}$ and $\mH_0^{even}$.
\end{theorem}
{\bf Remark.} 1) The mapping $q\to p=q'+q^2+u-c_0 : \mH_1 \to
\mH_0$ was considered in \cite{K02}. In some cases the mapping from
$\mH_0$ into $\mH_{-1}$ is also useful (see \cite{K03},
\cite{BKK03}).
2) In the case of inverse spectral theory for surfaces of
revolution, we study the case of the function $u=E\r^{-{4\/d}}$.
Here $d+1\ge 2$ is the dimension of the surface of revolution and $E\ge 0$
is a constant.
Our second main theorem asserts that the mapping in Theorem
\ref{TPg}
preserves the boundary conditions and spectral data.
\begin{theorem}
\lb{T2g} Let $p=P(q), q\in \mW_1^0$, be defined by \er{Pu}. Then
the operators $S_p$ and $\D_q$ are unitarily equivalent. In
particular, they
have the same boundary conditions, eigenvalues and the norming constants.
\end{theorem}
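For the reader's convenience we sketch the computation behind this
unitary equivalence. Let $y=\mU f=\r f$ with $\r=e^{Q}$, so that
$\r'/\r=q$ and $\r''/\r=Q''+(Q')^2=q'+q^2$. Then
$$
\r^2f'=\r y'-\r'y,\qqq (\r^2f')'=\r y''-\r''y,
$$
and hence the equation $\D_q f=\l f$ becomes
$$
-y''+\lt(q'+q^2+u(Q)\rt)y=\l y,\qqq {\rm i.e.}\qqq S_p y=(\l-c_0)y,
$$
with $p=q'+q^2+u(Q)-c_0$ as in \er{Pu}.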
Therefore, the inverse problem for $\Delta_q$ is solvable if and only
if the one for $S_p$ is. Let us consider the following three cases separately.
\subsection{Dirichlet boundary condition : $a=b=\iy$.}
Consider the Sturm-Liouville operator $\D_{q}$ defined in
$L^2((0,1);\r^2(x)dx)$, where $\r(x)=e^{Q(x)}$, having the form
$ \D_{q} f=-{1\/ \r^2}(\r^2f')'+u(Q) f$ equipped with the boundary
condition $f(0)=f(1)=0$. Here $Q(x)=\int_0^xq(t)dt,\qqq q_0=0$ and
$u$ satisfies Condition U.
Denote by $\m_n=\m_n(q), n\ge 1$, the eigenvalues of $\D_q$ subject
to the boundary condition $f(0)=f(1)=0$ for the case $a =b =\infty$.
It is well-known that all $\m_n$ are simple and satisfy
$$
\m_n=\m_n^0+c_0+\wt\m_n,\quad {\rm where}\quad
(\wt\m_n)_{1}^{+\iy}\in\ell^2, \qq c_0=\int_0^1(q^2+u)dt,
$$
where $\m_n^0=(\pi n)^2$, $n\ge 1$, denote the unperturbed eigenvalues.
The norming constants are defined by
\[
\label{nc00} \vk_n(q)=\log\left|\r(1)f_n'(1,q)\/f_n'(0,q)\right|,\qquad n\ge 1,
\]
where $f_n$ is the $n$-th eigenfunction. Note that $f_n'(0)\ne 0$ and
$f_n'(1)\ne 0$.
We recall a theorem from \cite{IK13}.
\begin{theorem}
\label{Tip1g} Let $a=b=\iy$.
Then the mapping
$$
\P : q\mapsto \left((\wt\m_{n}(q))_{n=1}^{\iy}\,;
(\vk_{n}(q))_{n=1}^{\iy}\right)
$$
is a real-analytic isomorphism between $\mW_1^0$ and $\cM_1\ts \el2_1$,
where $\mathcal M_1$ is
defined by (\ref{S1Mj}) with $\m_n^0=(\pi n)^2,
n\ge 1$.
In particular, in the symmetric case the spectral mapping
\[
\wt\m: \mW_1^{0,odd}\to \cM_1,\qqq {\rm given \ by } \qqq q\to
\wt\m
\]
is a real-analytic isomorphism between the Hilbert space
$\mW_1^{0,odd}$ and $\cM_1$.
\end{theorem}
\subsection{Mixed boundary condition : $a=\iy, b\in \R$.}
Consider the Sturm-Liouville operator $\D_{q}$ defined in
$L^2((0,1);\r^2(x)dx)$, where $\r(x)=e^{Q(x)}>0$, having the form
$ \D_{q} f=-{1\/ \r^2}(\r^2f')'+u(Q) f$ equipped with the mixed
boundary condition $f(0)=0, f'(1)+bf(1)=0$. Here $Q=\int_0^xq(t)dt,\
q_0=0$ and $u$ satisfies Condition U.
Let $\mu_n=\mu_n(q,b), n\geq 0$, be the eigenvalues of $-\D_q$ subject
to the boundary condition $f(0)=0, f'(1)+bf(1)=0$ for the case
$a=\iy, b\in \R$. We then have
$$
\mu_n=\mu_n^0+c_0+\wt\mu_n(q,b),\quad \mathrm{where} \quad
(\wt\mu_n)_{1}^{\iy}\in\ell^2,\qq c_0=\int_0^1(q^2+u)dt,
$$
and $\mu_n^0=\pi^2(n+{1\/2})^2+2b$, $n\ge 0$, denote the unperturbed eigenvalues.
The norming constants are defined by
\[
\label{ncbxx} \c_n(q,b)=\log\left|\r(1)f_n(1,q,b)\/f_n'(0,q,b)\right|,
\qquad n\ge 0,
\]
where $f_n$ is the $n$-th eigenfunction.
Note that $f_n'(0,q,b)\ne 0$ and $f_n(1,q,b)\ne 0$.
A simple calculation gives
$$
\c_n^0=\c_n(0,0)=-\log \pi(n\!+\!{\textstyle{1\/2}}),\qquad {\rm where}\qquad
\sqrt{\mu_n^0}=\pi(n\!+\!{\textstyle{1\/2}}).
$$
We recall a theorem from \cite{IK13}.
\begin{theorem}
\lb{Tip2g} i) For each fixed
$b\in \R$ the mapping
$$
\P:q\mapsto
\left((\wt\mu_n(q,b))_{n=1}^{\iy}\,;(\c_{n-1}(q,b)-\c_{n-1}^0)_{n=1}^{\iy}\right)
$$
is a real-analytic isomorphism between $\mW_1^0$ and $\mathcal M_1\ts\ell^2_1$,
where $\mathcal M_1$ is
defined by (\ref{S1Mj}) with $\m_n^0=\pi^2(n+{1\/2})^2+2b,
n\ge 1$.
ii) For each $(q,b)\in \mW_0^1\ts\R$ the following identity holds true:
\[
\label{IdentityBa}
b=\sum_{n=0}^{\iy} \lt(2-{e^{\c_n(q,b)}\/|{\pa w\/\pa \l}(\mu_n,q,b)|}\rt),
\]
where
\[
\label{adamy} w(\l,q,b)=
\cos\sqrt\l\cdot\prod_{n=0}^{+\iy}{\l-\mu_n(q,b)\/\l-\mu_n^0}\,,\qquad \l\in\C.
\]
Here both the product and the series converge uniformly on bounded subsets
of the complex plane.
\end{theorem}
\subsection{Robin boundary condition : $a,b\in \R$.}
Consider the Sturm-Liouville operator $\D_{q}$ defined in
$L^2((0,1);\r^2(x)dx)$, where $\r(x)=e^{Q(x)}>0$, having the form
$ \D_{q} f=-{1\/ \r^2}(\r^2f')'+u(Q) f$ equipped with the generic
boundary condition $f'(0)-af(0)=0, f'(1)+bf(1)=0$. Here
$Q=\int_0^xq(t)dt,\ q_0=0$ and $u$ satisfies Condition U.
Let $\m_n=\m_n(q,a,b), n\geq 0,$ be the eigenvalues of $\Delta_q$
subject to the boundary condition $f'(0)-af(0)=0, f'(1)+bf(1)=0$ for
the case $a, b\in \R$. Then we have
$$
\m_n=\m_n^0+c_0+\wt\m_n(q,a,b),\quad \mathrm{where} \quad
(\wt\m_n)_{1}^{\iy}\in\ell^2,\qq c_0=\int_0^1(q^2+u)dt,
$$
and $\m_n^0=(\pi n)^2+2(a+b)$ denote the unperturbed eigenvalues.
We introduce the norming constants
\[
\label{ncab4}
\f_n(q,a,b)=\log\left|\r(1)f_n(1,q,a,b)\/f_n(0,q,a,b)\right|, \qquad
n\ge 0,
\]
where $f_n$ is the $n$-th eigenfunction. Note that $f_n(1,q,a,b)\ne
0$ and $f_n(0,q,a,b)\ne 0$. We recall the results from \cite{IK13}.
\begin{theorem}
\label{Tip3g} For any $a,b\in\R$, the mapping
$$
\P_{a,b}:q\mapsto \left((\wt\m_{n}(q,a,b))_{n=1}^{\iy}\,;
(\f_{n}(q,a,b))_{n=1}^{\iy}\right)
$$
is a real-analytic isomorphism between $\mW_1^0$ and $\cM_1\ts \el2_1$, where
$\cM_1$ is given by
(\ref{S1Mj}) with $\m_n^0=(\pi n)^2+2(a+b)$.
\end{theorem}
\subsection{Proof of Theorems \ref{T1} $\sim$ \ref{T3}.}
Recall that due to \er{3} the Laplacian on $(M, g)$
is unitarily equivalent to a direct sum of one-dimensional
Schr\"odinger operators, namely, $-\D_{(M,g)}\backsimeq
\os_{\n\ge 1} \D_\n, $ where the direct sum acts in $\os_{\n\ge 1}
L^2([0,1],dx)$. We consider the inverse problem for the operator
$\D_\n$ for fixed $\n\ge 1$ and $q_0=0$.
{\bf Proof of Theorem \ref{T1}.} We consider
the inverse problem
for the operator $\D_\n$ given by
\[
\lb{snH}
\begin{aligned}
&\D_\n =-{1\/\r^2} \pa_x \r^2 \pa_x +{E_\n\/r^2} ,\\
&\r=r^{m\/2}=\r_0e^{Q}, \qq Q(x)=\int_0^x(q_0+q)dt,\qq q\in \mW_1^0,
\end{aligned}
\]
under the Dirichlet boundary conditions $f(0)=f(1)=0$ and for each
$\n\ge 1$.
Consider the case $q_0=0$. We apply Theorem \ref{Tip1g} to our
operator $\D_\n$, since the function $u={E_\n\/r^2}=E_\n
e^{-{4\/m}Q}$ satisfies Condition U. Then Theorem \ref{Tip1g} gives
that the mapping $ \P : q\mapsto \left((\wt\m_{n}(q))_{1}^{\iy}\,;
(\vk_{n}(q))_{1}^{\iy}\right) $ is a real-analytic isomorphism
between $\mW_1^0$ and $\cM_1\ts \el2_1$, where $\cM_1$ is given by
\er{S1Mj} with $\m_n^0=(\pi n)^2$. In particular, in the symmetric
case the spectral mapping $\wt\m: \mW_1^{0,odd}\to \cM_1$ given by $
q\to \wt\m $ is a real-analytic isomorphism between the Hilbert
space $\mW_1^{0,odd}$ and $\cM_1$.
The case $\n=1$ and $E_1=0$ has been considered in Theorem \ref{Tipq1}.
\BBox
\bigskip
{\bf Proof of Theorem \ref{T2}.}
We consider the inverse problem for the operator $-\D_\n$ given by
\er{snH},
under the mixed boundary conditions $f(0)=0, f'(1)+bf(1)=0$
for any fixed $(b,\n)\in \R\ts\N$.
Consider the case $q_0=0$. We apply Theorem \ref{Tip2g} to our
operator $-\D_\n$, since the function $u={E_\n\/r^2}=E_\n
e^{-{4\/m}Q}$ satisfies Condition U. Then Theorem \ref{Tip2g} gives
that the mapping
$$
\P:q\mapsto
\left((\wt\mu_n(q,b))_{n=1}^{\iy}\,;(\c_{n-1}(q,b)-\c_{n-1}^0)_{n=1}^{\iy}\right)
$$
is a real-analytic isomorphism between $\mW_1^0$ and $\mathcal M_1\ts\ell^2_1$,
where
$\mathcal M_1$ is given by \er{S1Mj} with
$\m_n^0=\pi^2(n+{1\/2})^2+2b$. Moreover, for each $(q;b)\in \mW_1^0\ts\R$ the
following identity holds true:
\[
\label{IdentityBb}
b=\sum_{n=0}^{+\iy} \lt(2-{e^{\c_n(q,b)}\/|{\pa w\/\pa \l}(\mu_n,q,b)|}\rt),
\]
where the function $w(\l,q,b)$ is given by
\[
\label{adamx} w(\l,q,b)=
\cos\sqrt\l\cdot\prod_{n=0}^{+\iy}{\l-\mu_n(q,b)\/\l-\mu_n^0}\,,\qquad \l\in\C.
\]
Here both the product and the series converge uniformly on bounded subsets
of the complex plane.
The case $\n=1$ and $E_1=0$ has been considered in Theorem \ref{Tipq2}.
\BBox
{\bf Proof of Theorem \ref{T3}.} We consider the inverse problem for
the operator $-\D_\n$ given by \er{snH}, under the generic boundary
conditions $f'(0)-af(0)=0, f'(1)+bf(1)=0$ for any fixed $(a,b,\n)\in
\R^2\ts\N$.
Consider the case $q_0=0$. We apply Theorem \ref{Tip3g} to our
operator $-\D_\n$, since the function $u={E_\n\/r^2}=E_\n
e^{-{4\/m}Q}$ satisfies Condition U. Then Theorem \ref{Tip3g} gives
that the mapping
$$
\P_{a,b}:q\mapsto \left((\wt\m_{n}(q,a,b))_{n=1}^{\iy}\,;
(\f_{n}(q,a,b))_{n=1}^{\iy}\right)
$$
is a real-analytic isomorphism between $\mW_1^0$ and $\cM_1\ts
\el2_1$, where $\cM_1$ is given by \er{S1Mj} with $\m_n^0=(\pi
n)^2+2(a+b)$.
The case $\n=1$ and $E_1=0$ has been considered in Theorem \ref{Tipq3}.
\BBox
\setlength{\itemsep}{-\parskip} {\footnotesize \no {\bf
Acknowledgments.} Various parts of this paper were written during
Evgeny Korotyaev's stay in the Mathematical Institute of University
of Tsukuba, Japan and Mittag-Leffler Institute, Sweden. He is
grateful to the institutes for the hospitality. His study was
supported by the RSF grant No. 15-11-30007. }
\section{Introduction}
Many mathematical models are described by non-linear ordinary differential equations (ODEs). It is therefore important to be able to visualise solutions to these equations, and study how their dynamic behaviour changes when parameter values are modified. The theory of bifurcation analysis provides a rigorous mathematical framework for understanding how the stability of fixed points and limit cycles changes as the parameters of the system are varied. Bifurcation theory can be combined with numerical continuation in order to trace the boundaries in parameter space that separate different dynamical regimes, and several software packages exist for this purpose, such as AUTO \cite{Doedel2012} and MATCONT \cite{Dhooge2003}. Before applying numerical continuation tools, in many cases it is useful to first get a rough understanding of the phase space and system behaviour for particular parameter values, since the initial points for continuation (such as approximate fixed point and limit cycle positions) must be known. This is where more qualitative investigation (such as visualisation of the system) can be useful.
In this paper we introduce new software, ``Fireflies''\footnote{Named after the classic dynamical systems example of firefly phase synchronisation, and also because the visualisations produced (slightly) resemble swarms of fireflies.} for the dynamic visualisation of ODE solutions. Instead of the traditional method of showing complete trajectories in phase space, Fireflies presents the user with a two- or three-dimensional view of a cloud of moving particles. Each particle represents the position in state space of one trajectory at the current point in time, and as the simulation runs the particles move around the screen according to the equations of the system. Different coloured groups of particles can be given different ranges of initial conditions, and each group can be either integrated forwards or backwards in time. Forwards moving particles illuminate attractors (e.g. stable fixed points and limit cycles) of the system, while backwards moving ones illuminate repelling (unstable) objects. By watching the motion of the particles, it is also possible to see features such as saddle separatrices, and how the speed of movement varies around limit cycles. Finally, Fireflies can also generate ``dynamic'' bifurcation diagrams, in which one of the visualised dimensions corresponds to a chosen parameter and each particle is given a different value for this parameter.
When only the current position of each trajectory is shown, a very large number of particles must be used in order for the structure of the system to be visible - we find that several million are required for good results. Numerically integrating this many equations quickly enough to display an interactive animation is not typically possible using the limited parallelism of traditional central processing units (CPUs), but is an ideal problem for so-called ``massively parallel'' graphical processing units (GPUs). Unlike CPUs, which typically contain a small number (up to 18, at time of writing) of largely independent processing cores, GPUs consist of hundreds or thousands of cores, all of which execute the same code at the same time. Nowadays GPUs, which were originally developed to accelerate rendering in 3D graphics applications, sit alongside the CPU in almost all modern computers, including tablets and smartphones. Despite their origins in the video games industry, the use of GPUs is rapidly becoming an essential part of many scientific computing applications, due to their low cost, widespread availability, and potential for large efficiency gains in certain types of computations. Fireflies consists of a user-friendly graphical interface which can be used to produce simulations of N-dimensional systems of ODEs. Thanks to the power of GPU computing, these simulations can contain millions of particles while still running quickly enough to be interactive, allowing the user to change parameter values and immediately observe the effect of this on the particles' motion.
In this paper we will demonstrate the capabilities of Fireflies by showing visualisations of three example systems of ODEs. We will show how using Fireflies to study these systems can give insights into their behaviour and how their dynamics depends on their parameters. Note that the figures show static screenshots from moving simulations; in order to properly appreciate the capabilities of the software we recommend viewing the movies included in the Supplementary Material.
\section{Example Systems}
\subsection{A Two-Dimensional Model of the Basal Ganglia}
\label{sec:stngpe}
In this section we show an example of a two dimensional system of non-linear ODEs, corresponding to a simple model of neuronal activity. We show how Fireflies makes the dynamics of this system clear, and how the bifurcations of the system can be observed in an exciting new way. Although the system is two-dimensional, we also show how Fireflies can be used to create an interactive three-dimensional bifurcation diagram, by extending the visualisation into parameter space.
Based on earlier modelling work \cite{Gillies2002,Pavlides2012} we developed a mathematical model of activity in two connected regions in the brain: the subthalamic nucleus (STN) and external globus pallidus (GPe) \cite{Merrison-Hort2013}. These regions are both part of the ``basal ganglia'', a group of nuclei that are known to be involved in movement control and that are severely affected by Parkinson's Disease. A current subject of much interest is the fact that the Parkinsonian basal ganglia show a much larger degree of rhythmically modulated (oscillatory) activity \cite{Boraud2005}. We therefore use our model to study the conditions under which oscillations can appear in the STN-GPe network.
The equations of our model are:
\begin{equation}
\label{eqn:stn_gpe}
{\tau_s}\dot{x} = -x + {Z_s}(w_{ss}x - w_{gs}y + I)
\end{equation}
\begin{equation}
\label{eqnGPe}
{\tau_g}\dot{y} = -y + {Z_g}(-w_{gg}y + w_{sg}x)
\end{equation}
In these equations, which are based on the Wilson-Cowan formulation \cite{Wilson1972}, the variables $x$ and $y$ correspond to the average level of neuronal activity in the STN and GPe populations, respectively. The populations are coupled to each other and themselves through chemical connections (synapses), and the strength of these connections are given by the parameters $w_{gg}$, $w_{gs}$, $w_{ss}$ and $w_{sg}$ (so, for example, $w_{sg}$ is the strength of the connection from STN to GPe). The STN population is excitatory, meaning it acts to increase the activity of its synaptic targets, while the GPe is inhibitory, meaning it acts to decrease the activity of its synaptic targets. The functions $Z_s$ and $Z_g$ are monotonically increasing sigmoid curves that describe how each population responds to synaptic input, and the parameters $\tau_s$ and $\tau_g$ are time constants that determine how quickly the activity in each population changes.
Here we will show how Fireflies can be used to investigate how the behaviour of the system changes with one of its parameters. The parameter we will vary is $w_{ss}$, which corresponds to the strength of self-excitation in the STN. \cite{Gillies2002} demonstrated that if the neurons within the STN are able to excite each other (i.e. $w_{ss} > 0$) then the STN-GPe circuit is able to generate oscillations, although currently there is no known biological mechanism for such self-excitation. The simulations shown all contain two ``particle groups'', one of which is integrated forwards in time and other backwards. Green and pink particles are used to render the forwards and backwards particle groups, respectively. Currently Fireflies supports only one solver method, 4\textsuperscript{th} order Runge-Kutta, with a constant time step that can be adjusted by the user during the simulation. All particles are given random initial conditions, with values of the system variables drawn from a uniform random distribution on the interval $(0,1)$; the values 0 and 1 correspond to the minimum and maximum levels of population activity respectively. Any trajectories that leave this square region, or which have not been reset for more than time $T_{max}$, are reset to a new random set of initial conditions.
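To make the integration scheme concrete, the following minimal NumPy
sketch advances a cloud of particles for the model using a classical
RK4 step. It is a CPU-side illustration only (Fireflies performs the
equivalent update inside an OpenCL kernel), and the logistic sigmoid
and the parameter values used here are placeholders rather than the
model's actual choices:
\begin{verbatim}
import numpy as np

def rhs(x, y, w_ss, w_gs, w_sg, w_gg, I, tau_s=6.0, tau_g=14.0):
    # Right-hand sides of the STN (x) and GPe (y) equations,
    # vectorised over all particles at once.
    Z = lambda v: 1.0 / (1.0 + np.exp(-v))   # placeholder sigmoid
    dx = (-x + Z(w_ss * x - w_gs * y + I)) / tau_s
    dy = (-y + Z(-w_gg * y + w_sg * x)) / tau_g
    return dx, dy

def rk4_step(x, y, dt, *p):
    # Classical 4th-order Runge-Kutta step for every particle.
    k1x, k1y = rhs(x, y, *p)
    k2x, k2y = rhs(x + 0.5*dt*k1x, y + 0.5*dt*k1y, *p)
    k3x, k3y = rhs(x + 0.5*dt*k2x, y + 0.5*dt*k2y, *p)
    k4x, k4y = rhs(x + dt*k3x, y + dt*k3y, *p)
    return (x + dt/6.0*(k1x + 2*k2x + 2*k3x + k4x),
            y + dt/6.0*(k1y + 2*k2y + 2*k3y + k4y))

n = 700_000                                  # particles
x, y = np.random.rand(n), np.random.rand(n)  # uniform on (0,1)
for _ in range(1000):                        # animation loop
    x, y = rk4_step(x, y, 0.1, 0.0, 10.0, 10.0, 1.0, 2.0)
\end{verbatim}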
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth]{Figures/stngpe_composite.png}
\caption{Exploring the two dimensional STN/GPe model in Fireflies under variation of $w_{ss}$. (A)-(D) show screenshots from Fireflies for different values of $w_{ss}$, with 700,000 particles, half of which are integrated forwards in time (green) and half of which are integrated backwards (pink). (A) Full screenshot of the Fireflies window with $w_{ss}=0$. All forward-moving particles approach the globally stable spiral (in this still picture it is very difficult to see the backwards particles, which diverge quickly to infinity). (B) $w_{ss}=4.9$: A stable spiral, unstable limit cycle, and stable limit cycle. (C) $w_{ss}=7.8$: An unstable spiral and a stable limit cycle. (D) $w_{ss}=11$: An unstable node, saddle (not visible), and stable node. (E) Phase portrait (not generated by Fireflies) of the STN/GPe system with $w_{ss}=0$. All trajectories start from the borders of the phase plane and spiral in to the fixed point. The trajectories that start in the four corners of the phase plane are shown in pink.}\label{fig:stngpe_composite}
\end{figure}
Figure \ref{fig:stngpe_composite}A shows a screenshot of the simulation when $w_{ss}=0$, corresponding to the situation with no STN self-excitation. Under these conditions there is a single stable fixed point, which all of the green (forwards time) particles spiral in towards. The pink (backwards time) particles are difficult to see in this image, as they move very quickly out of the bounds of the system (to infinity) and are constantly being reset to new random initial conditions. It is interesting to note from this image that the density of particles approaching the spiral is not uniform, and has several discontinuities where the density suddenly changes. We suspected that this phenomenon is a result of the boundaries of the phase space, and confirmed this with a traditional phase portrait where all the trajectories began on the edges of phase space (Figure \ref{fig:stngpe_composite}E). The borders where particle density changes quickly correspond to the trajectories that begin in the four corners of phase space (pink lines in Figure \ref{fig:stngpe_composite}E); these four trajectories divide the phase plane into regions that ``funnel'' trajectories into the spiral.
The slider controls (visible in the bottom left of Figure \ref{fig:stngpe_composite}A) can be used to explore the effects of changing the parameters of the system. When the value of a parameter is changed using its slider, the dynamics of the system change immediately and the difference can be clearly seen in the movement of the particles. Figure \ref{fig:stngpe_composite}B shows the STN/GPe system after the strength of STN self-excitation ($w_{ss}$) has been increased to 4.9, taking the system through a fold of limit cycles bifurcation. For this parameter value, a pair of limit cycles (stable and unstable) now encircle the original stable fixed point. This illustrates the purpose of the group of particles that are integrated backwards (pink): these particles are attracted to unstable objects; here they reveal the location of the unstable limit cycle. As $w_{ss}$ is increased further, the unstable limit cycle shrinks around the fixed point and eventually disappears in a subcritical Andronov-Hopf bifurcation, at which point the fixed point becomes unstable. After this bifurcation the limit cycle is globally stable, but as $w_{ss}$ is increased further the period of oscillation increases. This can be seen in the visualisation in the form of particles ``bunching up'' and moving very slowly around one part of the cycle (Figure \ref{fig:stngpe_composite}C). Eventually, a saddle-node on invariant circle (SNIC) bifurcation occurs, after which all trajectories approach a new, globally stable, fixed point.
Fireflies can also be used to generate three-dimensional animated bifurcation diagrams of two-dimensional systems. To see this, we redefine the system so that the bifurcation parameter ($w_{ss}$) is a new state variable, subject to $dw_{ss}/dt=0$. The initial condition range for this new state variable is set to be the range of parameter values that are of interest. With this configuration, each particle has its own value for $w_{ss}$, and the space consists of a ``continuum'' of phase portraits, beautifully illustrating the dynamics of the system (Figure \ref{fig:stngpe_3d}). The user can further investigate the effects of the other parameters on the system's dynamics by varying them interactively and observing how this changes the 3D bifurcation diagram.
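Continuing the sketch above, this trick amounts to giving each
particle its own frozen copy of the parameter, with $dw_{ss}/dt=0$;
the range $(0,12)$ below is illustrative:
\begin{verbatim}
# Each particle carries its own value of w_ss, drawn once and
# never updated, so the parameter axis becomes a state dimension.
w_ss = np.random.uniform(0.0, 12.0, size=n)
for _ in range(1000):
    x, y = rk4_step(x, y, 0.1, w_ss, 10.0, 10.0, 1.0, 2.0)
# Plotting (w_ss, x, y) then gives the 3D bifurcation diagram.
\end{verbatim}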
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth]{Figures/stngpe_3d.png}
\caption{3D bifurcation diagram of the STN/GPe system under variation of the parameter $w_{ss}$. The simulation contains 8 million particles, divided into a forwards in time group (green) and a backwards in time group (pink). The three visible bifurcations are marked with asterisks, from left to right: fold of limit cycles, subcritical Andronov-Hopf, saddle node on invariant circle. Note how the particles on the stable limit cycle become increasingly ``bunched'' at the top as the cycle approaches the SNIC bifurcation. The asterisks and direction arrows were added to the image manually and were not generated by Fireflies.}\label{fig:stngpe_3d}
\end{figure}
\subsection{The Lorenz Equations}
In this example we use Fireflies to visualise the dynamics of the three-dimensional Lorenz system. We describe in detail the results of various visualisations obtained for different values of a parameter of the system, and then produce an animated bifurcation diagram that summarises these results.
The classical Lorenz system consists of three coupled non-linear differential equations:
\begin{equation}
\label{eqn:lorenz_x}
\dot{x} = \sigma (y-x)
\end{equation}
\begin{equation}
\label{eqn:lorenz_y}
\dot{y} = x(r - z) - y
\end{equation}
\begin{equation}
\label{eqn:lorenz_z}
\dot{z} = xy - \beta z
\end{equation}
Where $\sigma$, $r$ and $\beta$ are parameters of the system ($\sigma,r,\beta > 0$). These equations were originally studied as a simplified model of convection in the atmosphere \cite{Lorenz1963}, and a straightforward description of the different dynamical regimes that they can produce, including chaos, can be found in any textbook on nonlinear dynamics (e.g. \cite[pp.311-320]{Strogatz1994}). In this section we will demonstrate the ability of Fireflies to visualise three dimensional systems by exploring the behaviour of the Lorenz equations in the case where $\sigma=10$ and $\beta=\frac{8}{3}$, while the parameter $r$ is gradually increased from zero.
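As an aside, the data-parallel structure that the GPU exploits is
already visible in a vectorised CPU sketch of the right-hand side,
which evaluates all particles simultaneously (a minimal illustration,
not Fireflies' implementation):
\begin{verbatim}
import numpy as np

def lorenz_rhs(state, sigma=10.0, beta=8.0/3.0, r=25.0):
    # state has shape (3, n): one column per particle.
    x, y, z = state
    return np.stack([sigma * (y - x),
                     x * (r - z) - y,
                     x * y - beta * z])
\end{verbatim}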
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth]{Figures/lorenz_composite.png}
\caption{Exploring the Lorenz equations by varying $r$. All particles are integrated forwards in time, different colours use different initial condition ranges. Asterisks approximately indicate the origin. (A) $r=0.5$: All particles approach the stable fixed point at the origin. The complex shape is due to the boundaries of the initial conditions ($x\in(-10,10), y\in(-30,30), z\in(0,50)$) and the shape of the fixed point's manifolds. (B) $r=5$: The origin is now a saddle and two new stable fixed points have emerged. Both groups of particles have initial conditions near to the origin, but on opposite sides of its incoming separatrix (green: $x,y\in(0,0.01), z\in(0,0.01)$, blue: $x,y\in(-0.01,0), z\in(0,0.01)$). Trajectories approach one of the spirals, based on which side of the separatrix they start on. (C-D) $r=15$: Trajectories now switch between the two spirals for some time, before settling down and approaching one of them (transient chaos). (C) has initial conditions near the saddle, as in (B); these trajectories loop once around one spiral before approaching the other. (D) has initial conditions further away from the saddle; these trajectories can switch sides repeatedly. (E) $r=25$: A strange attractor. Trajectories starting near the saddle (green and blue) begin by approaching one of the spirals, before slowly spiralling away and swapping sides chaotically. Another group of particles (bronze) with initial conditions spanning a large area show the shape of the attractor.}\label{fig:lorenz_composite}
\end{figure}
For values of $r$ less than 1, the origin is the only stable fixed point. Figure \ref{fig:lorenz_composite}A shows Fireflies when $r = 0.5$, shortly after the simulation has started and all the particles are quickly approaching the origin. As was also seen in the 2D system described above, Fireflies shows how the space occupied by the cube of initial conditions is deformed around the fixed point as the particles move. Note that we do not show any backwards-moving particles in this section, as they are not generally useful in visualisations of the Lorenz equations. This is because any unstable fixed points or cycles are of saddle type, which means that neither forward nor backward moving particles tend to them as $t\rightarrow\infty$.
At $r=1$ a supercritical pitchfork bifurcation occurs, and two new stable fixed points emerge from the origin. Figure \ref{fig:lorenz_composite}B shows a screenshot from Fireflies with $r = 5$. With the new parameter value the origin becomes a saddle, and particles begin to move away from it in one of two directions, spiralling in to one of the new stable fixed points that were created in the bifurcation. To clearly show how trajectories spiral in to the fixed point, this simulation has two groups of particles (green and blue) with initial conditions that are drawn from separate small volumes in state space, both very close to the saddle at the origin, but on opposite sides of its incoming separatrix. The paths that the particles in these two groups take away from the saddle are very close to the saddle's outgoing separatrices. Trajectories starting at other points in state space all approach the spiral that is on the same side of the saddle's incoming separatrix as their initial position.
At $r \approx 13.926$ a homoclinic bifurcation occurs, at which point the saddle's outgoing separatrices join up with its incoming ones, resulting in the creation of a pair of unstable limit cycles. For values of $r$ beyond the bifurcation the separatrices have ``crossed over'', and trajectories that begin near the saddle make one cycle around their nearest spiral before returning back towards the saddle, and then spiralling in to the stable fixed point on the opposite side of the incoming separatrix, as shown in Figure \ref{fig:lorenz_composite}C. Trajectories that start a bit further away from the saddle, however, rotate around their closest spiral at a much greater distance. When this rotation brings them close to the saddle's incoming separatrix, some of them split off and begin rotating around the opposite spiral. This unpredictable swapping, which corresponds to transient chaos, can happen many times before a particle finally spirals fully in to one of the two stable fixed points. Figure \ref{fig:lorenz_composite}D shows one group of particles which all start closer to the top spiral than the bottom one, but a little further away from the origin than those in Figure \ref{fig:lorenz_composite}C. Although most spiral in to the top fixed point, with each rotation a mass of particles splits off and switches to rotate around the bottom spiral.
Finally, at $r\approx24.06$ a strange attractor appears. Now, even though the two spirals remain stable until $r\approx24.74$, almost all trajectories flip backwards and forwards between the two spirals infinitely and chaotically. In Figure \ref{fig:lorenz_composite}E, the two groups of particles that start near to the origin (green and blue) initially behave as in \ref{fig:lorenz_composite}C, performing one large loop around their closest spiral before approaching quite close to the opposite spiral. The spirals are only very weakly stable now, however, and the trajectories spiral slowly away from them, instead of into them. After some time (as is just beginning to happen in \ref{fig:lorenz_composite}E), the particles come far enough away from the spiral that some of them switch sides, beginning an endless series of such seemingly random side swappings on the strange attractor. The figure also contains a third set of particles (bronze coloured) which start from a much wider set of initial conditions; these show the general shape of the strange attractor's surface.
By applying the same technique as in the previous section, we can also use Fireflies to generate an animated bifurcation diagram for the Lorenz equations. To achieve this, we make parameter $r$ a state variable with $\dot{r}=0$ and set up a two dimensional projection with axes $r, y$. Note that a three dimensional projection could also be used, but due to the perspective transformation the results are not shown as clearly in this case. Figure \ref{fig:lorenz_bif} shows a screenshot from a simulation using this configuration, where each particle's initial value of $r$ is chosen from the interval $(0\cdots110)$. The supercritical pitchfork bifurcation is clearly visible at $r=1$, and the appearance of a strange attractor at $r\approx13.9$ is marked by particles beginning to form a cloud that resembles noise. This cloud clearly has some very detailed structure, however, as can be seen more and more clearly as $r$ increases. Several small windows of parameter values where the dynamics become regular are also visible, for example at $r=92$. Again, it should be kept in mind that Figure \ref{fig:lorenz_bif} is a static screenshot of a moving animation of 8 million particles. The other two parameters of the system can be changed interactively, and the effect of this variation on the bifurcation diagram is seen immediately.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth]{Figures/lorenz_2dbif_full.png}
\caption{A ``live'' bifurcation diagram for the Lorenz equations. The diagram was created by setting up a simulation with 8 million particles, each with a random fixed value for $r$, projected onto the $(r, y)$ plane. The supercritical pitchfork at $r=1$ is clearly visible, as is the appearance of a strange attractor at $r\approx13.9$ and various transient windows of non-chaotic behaviour such as at $r=92$. Axes added manually.}\label{fig:lorenz_bif}
\end{figure}
\section{Coupled Hodgkin-Huxley Neurons}
\label{sec:hh}
In this section we demonstrate the use of Fireflies to visualise a system that is considerably more complex than those presented above, consisting of 15 coupled ODEs that describe the activity in three synaptically connected neurons.
In 1963 Alan Hodgkin and Andrew Huxley received a Nobel prize for their discovery of the mechanism that allows neurons to generate action potentials (the electrical impulses, or ``spikes'', that neurons use to communicate with each other). More than 50 years later the general structure of the equations that Hodgkin and Huxley used to describe this mechanism still forms the basis of an enormous number of biologically realistic computational models of neuronal activity. Here, however, we will use the original equations and parameters, which specifically relate to biophysical activity in the giant axon of the squid \cite{Hodgkin1952}. In this model, the electrical potential across a part of the neuron's membrane evolves according to the flow of sodium (Na) and potassium (K) ions through the membrane. The permeability of the membrane to these ions is controlled by gates, which open and close according to the membrane potential. Equations \ref{eqn:hh_v}--\ref{eqn:hh_n} describe the ``classical'' Hodgkin-Huxley model of neuronal dynamics in a population of $N$ neurons.
\begin{equation}
\label{eqn:hh_v}
C\dot{V_i} = \bar{g}_{lk}(e_{lk}-V_i) + h_i{m_i}^3\bar{g}_{na}(e_{na}-V_i) + {n_i}^4\bar{g}_k(e_k-V_i) + {I^i}_{syn}(t) + I_i
\end{equation}
\begin{equation}
\label{eqn:hh_h}
\dot{h_i} = \alpha_h(V_i)(1-h_i) - \beta_h(V_i)h_i
\end{equation}
\begin{equation}
\label{eqn:hh_m}
\dot{m_i} = \alpha_m(V_i)(1-m_i) - \beta_m(V_i)m_i
\end{equation}
\begin{equation}
\label{eqn:hh_n}
\dot{n_i} = \alpha_n(V_i)(1-n_i) - \beta_n(V_i)n_i
\end{equation}
\begin{equation*}
i = 1, 2, ..., N
\end{equation*}
Here $V_i$ is the membrane potential of the $i^{th}$ neuron and $h_i$, $m_i$ and $n_i$ represent the average state of the gates on its ion channels ($0\leq h_i,m_i,n_i \leq1$). The parameters $C$, $\bar{g}_{lk}$, $\bar{g}_{na}$, $\bar{g}_k$, $e_{lk}$, $e_{na}$ and $e_k$, along with the functions $\alpha_X(.)$ and $\beta_X(.)$ ($X \in \{h,m,n\}$), are described in \cite{Hodgkin1952} or any computational neuroscience textbook. All parameters mentioned so far take the values given in \cite{Hodgkin1952}. Parameter $I_i$ controls how much external current is injected into the cell - increasing this parameter causes a transition from quiescence to single action potential firing to repetitive firing. Finally, ${I^i}_{syn}$ represents the total synaptic current that arises as a result of inputs from the other neurons in the model. If $I_i={I^i}_{syn}=0$ then the membrane potential of neuron $i$ approaches a fixed point at the ``resting'' potential of 0 (mV). In this example we use a network where the neurons are arranged in a loop, with each receiving synaptic input from only the previous neuron:
\begin{equation}
\label{eqn:hh_isyn}
{I^i}_{syn}(t)=\bar{g}_{syn}(e_{syn}-V_i)s_{(i-1) mod N}
\end{equation}
\begin{equation}
\label{eqn:hh_s}
\dot{s_i}=\tau_r^{-1}(1+\exp(-\sigma(V_i-\theta)))^{-1}(1 - s_i) - \tau_d^{-1}s_i
\end{equation}
The parameter $\bar{g}_{syn}$ is the maximum conductance (mS) of a synaptic connection (its ``strength''); each synaptic connection has the same strength in this model. $e_{syn}$ is the synaptic equilibrium potential: if $e_{syn}>0$ (mV) then synaptic input is excitatory and acts to raise the membrane potential of the post-synaptic neuron above the resting state; in this example we set $e_{syn}=10$. For each neuron we consider a variable $s_i$, where $0<s_i<1$, which acts as an indicator for when the neuron is spiking. This variable increases to 1 very rapidly (with time constant $\tau_r$) when the neuron is firing an action potential, and then decays slowly (time constant $\tau_d$) after a spike, as described by equation \ref{eqn:hh_s}. Parameters $\sigma$ and $\theta$ are the slope and shift respectively of the sigmoid function which is used to increase $s_i$.
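The ring structure of the coupling is perhaps easiest to see in code;
in the vectorised sketch below \texttt{np.roll} implements the
$(i-1) \bmod N$ indexing, and the values of $\tau_r$, $\tau_d$,
$\sigma$ and $\theta$ are placeholders, since the text does not fix
them:
\begin{verbatim}
import numpy as np

def syn_current(V, s, g_syn=0.5, e_syn=10.0):
    # Neuron i receives input only from neuron (i-1) mod N.
    return g_syn * (e_syn - V) * np.roll(s, 1)

def ds_dt(V, s, tau_r=0.5, tau_d=8.0, sigma=1.0, theta=0.0):
    # Spike indicator: fast rise while V is above threshold,
    # slow decay afterwards (placeholder constants).
    gate = 1.0 / (1.0 + np.exp(-sigma * (V - theta)))
    return gate * (1.0 - s) / tau_r - s / tau_d
\end{verbatim}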
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth]{Figures/hh_single_final.png}
\caption{A screenshot of Fireflies running the single Hodgkin-Huxley neuron model with 500,000 particles. The position of each particle is projected into the 3D space $(h_1,m_1,n_1)$. Particles are coloured according to their position on each axis. Asterisk indicates the approximate position of the (weakly) unstable fixed point, with a cloud of particles moving slowly away from it. Axes added manually.}\label{fig:hh_single}
\end{figure}
We begin by briefly showing a visualisation of the four dimensional phase space of a single independent neuron under varying strengths of current injection. For low values of $I_1$ all trajectories approach the fixed point which corresponds to the resting state. As $I_1$ is increased past $I_1 \approx 6.25$ the resting state undergoes an Andronov-Hopf bifurcation and a stable limit cycle appears. This limit cycle corresponds to periodic spiking with a frequency that increases with $I_1$. Figure \ref{fig:hh_single} shows a screenshot of a simulation with a single neuron and $I_1=10$. The three dimensions of the projection are the three gating variables: $h_1$, $m_1$ and $n_1$. All the particles in this simulation eventually approach the stable limit cycle. However, there is also a spiral-shaped cloud of particles (marked with an asterisk), which is made up of trajectories that start close to the formerly stable fixed point. For parameter value $I_1=10$ this fixed point is only weakly unstable, so trajectories nearby spend a long time oscillating with a low (gradually increasing) amplitude before they approach the limit cycle.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth]{Figures/hh_3_composite.png}
\caption{A Hodgkin-Huxley model with three neurons spiking repetitively in response to current injection, connected by weak excitatory synapses. (A) An overview of the model's structure. (B) A screenshot of Fireflies running the model with 500,000 particles. The position of each particle is projected into the 3D space $(V_1,V_2,V_3)$. Particles are coloured according to their position on each axis. Asterisk indicates a small group of particles that are close to the limit cycle where each neuron spikes in turn. Axes added manually. (C) Individual plots (from XPPAUT) of trajectories starting from three different sets of initial conditions, leading to different limit cycles. i: Synchronous spiking; ii: Two neurons spiking, one suppressed; iii: Neurons spiking in turn.}\label{fig:hh_3}
\end{figure}
Next, we consider a ring of $N=3$ neurons (Figure \ref{fig:hh_3}A), where each neuron receives an identical current injection that is strong enough to cause repetitive firing ($I_{j}=10,j=1,2,3$). Such networks of coupled regular spiking neurons are typically able to synchronise their firing in either in-phase or anti-phase, depending on the strength and nature of synaptic connections \cite{Mirollo1990}. We set $e_{syn}=10$ (mV) and $\bar{g}_{syn}=0.5$ (mS) to simulate the case of weak excitatory coupling. From running a few simulations of the system from random initial conditions in XPPAUT \cite{Ermentrout2002} it was clear that with these parameter values there was a very stable synchronous state, where all three neurons fired regularly with the same frequency and phase shift. Figure \ref{fig:hh_3}B shows a screenshot of Fireflies after this system has been simulated for long enough that all particles have settled down onto stable attractors. Each particle's position is projected onto the space $(V_1,V_2,V_3)$. The colour of each particle varies with its position, using a projection from the three dimensional co-ordinates to an RGB colour code. The prominent solid white diagonal line is the synchronous limit cycle; most particles are attracted to this and continually move along the diagonal as the three neurons spike in unison. Figure \ref{fig:hh_3}C(i) shows the synchronous state as produced by XPPAUT.
In addition to the synchronous state, however, the visualisation also reveals four other stable limit cycles, three of which are easily visible with 500,000 particles. The three limit cycles appear as six coloured prongs in Figure \ref{fig:hh_3}B. By interactively moving around 3D space we were able to see that the particles were organised into three closed orbits, each of which included two differently coloured prongs. Each of these closed orbits corresponds to a limit cycle where two of the neurons are spiking and the third is silent. Simulations in XPPAUT revealed that each of these limit cycles corresponds to the state where the pre-synaptic neuron fires first, followed by the post-synaptic neuron, followed by a pause before the cycle repeats; this is shown in Figure \ref{fig:hh_3}C(ii). This detail about the order in which the neurons fire is not immediately visible in the Fireflies visualisation.
The other stable limit cycle that we have found has a relatively small basin of attraction, meaning that not many of the 500,000 particles approach it. It is also quite difficult to see in still screenshots, although some particles on this cycle are marked with an asterisk in Figure \ref{fig:hh_3}B. By following the path of these particles around the visualisation space, we can see that their trajectories approach a stable limit cycle corresponding to the state where all three neurons spike in sequence. Interestingly, the order of this sequence is opposite to that of the direction of synaptic coupling (i.e. the firing sequence is neuron 1, neuron 3, neuron 2, ...). In order to verify that this was indeed a real limit cycle and not a numerical error in Fireflies, we wrote a short script to repeatedly (and sequentially) run simulations from random initial conditions in XPPAUT. The script recorded any sets of initial conditions that led to solutions where all the neurons fired repetitively but non-synchronously. After performing a very large number of simulations this script had found only two sets of initial conditions that led to the limit cycle that we had identified in Fireflies, and the resulting plots of $V(t)$ are shown in Figure \ref{fig:hh_3}C(iii).
The example in this section demonstrates a case where Fireflies could reveal several stable dynamical regimes that were not immediately obvious from running individual simulations of the system from random initial conditions. Due to the fact that the system under study is composed of three identical elements, it is not surprising to find that multiple regimes corresponding to different symmetries are possible. Furthermore, it is likely that other limit cycles exist - for example there may be one in which all three neurons fire in sequence in order of synaptic coupling, rather than in the opposite order as we observed here. However, because none of the hundreds of thousands of particles in our simulations approached such attractors, it is likely that they are either unstable or have extremely small basins of attraction.
\section{Discussion}
To facilitate the qualitative study of systems of ODEs, software such as XPPAUT \cite{Ermentrout2002} often includes the ability to plot trajectories in phase space that start from multiple sets of initial conditions, chosen either from a regular grid in phase space or stochastically. However, in systems with high dimensional phase space (many ODEs) or with finely structured dynamical regimes, a very large number of initial conditions must be used in order to have a high level of confidence that all stable attractors have been found. This can make the qualitative investigation a very slow process, especially if one wishes to also examine how the picture varies with parameter values. It is also very difficult to visualise many different trajectories on the same phase portrait, as large numbers of them will completely fill the phase space. We believe that Fireflies offers a useful and exciting new way to investigate dynamical systems. This approach is more intuitive than traditional phase portraits, as it presents dynamical systems as they are: \textit{dynamic}.
In this paper we have demonstrated, through several examples, the power of Fireflies for qualitatively exploring the behaviour of dynamical systems. The visualisation of systems in explorable 2D or 3D space provide a new perspective on such systems that can help with intuitively understanding them. In addition to this, since Fireflies can simulate millions of trajectories in parallel, stable attractors can be discovered that might not have been found with other methods of qualitative investigation - as in section \ref{sec:hh} (although clearly as the dimensionality of systems increases, the number of particles required to uniformly sample phase space with a given resolution increases exponentially).
\begin{table}
\centering
\begin{tabular}{c|c|cc|c}
& & \multicolumn{2}{c|}{CPU} & \\
System & Particles & Single Core& Quad Core (est.) & GPU \\ \hline \hline
STN-GPe & 700k &182ms & 46ms & 1ms \\
Lorenz & 3m & 109ms & 27ms & 3ms \\
Hodgkin-Huxley & 500k & 942ms & 236ms & 22ms \\
\end{tabular}
\caption{The time taken to advance each of the systems shown in this paper by a single step (averaged over 1000 steps), when executed on a single core CPU and a GPU. The estimated best-case performance for a quad-core CPU was estimated by dividing the single core figure by four. The CPU used was an Intel Core i5-2500k (3.3Ghz) and the GPU was an Nvidia Geforce GTX 460. The C code for the CPU benchmark (available at www.bitbucket.org/rmerrison/fireflies) was adapted from the corresponding OpenCL kernel for each system, and was compiled using GCC 4.8.2 on Ubuntu 14.04 using the \texttt{-Ofast} option.}
\label{tab:benchmark}
\end{table}
Scientific computing is increasingly taking advantage of the power and availability of GPUs, and a range of techniques have emerged for using GPUs to solve problems in domains such as computational neuroscience \cite{Brette2012} and fluid dynamics \cite{Harris2004}. To demonstrate the necessity of ``massively parallel'' GPU computing for producing Fireflies' visualisations, we compared the average time taken to compute a single integration step using a traditional CPU (single core and estimated quad core) to the time taken when integration is performed on the GPU. Table \ref{tab:benchmark} shows the results of this comparison. For interactive graphical applications, it is normally considered necessary to render at least 30 frames per second in order to give the user a smooth experience, which means that each frame must be generated in 33ms or less. Since the times shown in table \ref{tab:benchmark} only represent the time taken to advance the simulation and do not include additional time needed to render particles to the screen, it would seem to be very difficult or impossible to run visualisations shown in this paper at an acceptable frame rate without using the GPU. Additionally, running the simulation on the CPU would also incur the significant additional overhead of passing the particles' current positions to the computer's graphical hardware for rendering; by performing integration on the GPU Fireflies minimizes the number of CPU-GPU memory transfers.
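The host/device split described above can be illustrated with a
minimal PyOpenCL example. The toy kernel below integrates a
one-dimensional logistic ODE with forward-Euler steps; it is not one
of Fireflies' actual kernels, but it shows the pattern of keeping
particle state on the GPU and copying it back only when needed:
\begin{verbatim}
import numpy as np
import pyopencl as cl

src = """
__kernel void step(__global float *x, const float dt, const int iters)
{
    int i = get_global_id(0);
    float xi = x[i];
    for (int k = 0; k < iters; k++)
        xi += dt * xi * (1.0f - xi);   // toy system dx/dt = x(1-x)
    x[i] = xi;
}
"""
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, src).build()

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                hostbuf=x)
prog.step(queue, (n,), None, buf, np.float32(0.01), np.int32(100))
cl.enqueue_copy(queue, x, buf)   # transfer back only when required
\end{verbatim}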
We have found that our new visualisation technique can be particularly useful in a teaching context, as it very easily demonstrates the behaviour that a particular set of equations produces. For example, visualisation of simple systems, such as the two-dimensional model of neuronal activity illustrated in section \ref{sec:stngpe}, can be used to show different bifurcations in a very intuitive way. For instance, as the parameter $w_{ss}$ is increased, Fireflies shows how particles move more and more slowly around one part of the limit cycle, becoming increasingly bunched together until finally being attracted into a new stable fixed point that appears in a SNIC bifurcation.
The ability of Fireflies to quickly simulate millions of trajectories at the same time is due to the fact that almost all processing occurs exclusively within the GPU, but this approach carries a number of disadvantages. In particular, due to the time taken to transfer data between GPU and CPU it is not possible to permanently record trajectories (e.g. for further analysis); however, in the future we plan to add the ability to record either a subset of trajectories, or all trajectories but with a recording interval that is greater than the time step. It should be noted, however, that this is not the main intended purpose of Fireflies, and other libraries for exploiting GPUs to solve ODEs, such as the Odeint library for C++ \cite{Ahnert2011}, may be more appropriate. Another limitation that arises from the use of the GPU is that Fireflies does not allow the equations of the system to contain branching (e.g. changing variable values in response to threshold crossing). This is because GPUs are based on a ``single instruction multiple data'' (SIMD) architecture, whereby each processing core executes the same instruction at the same time; due to this architecture branching causes a significant reduction in computation speed.
Fireflies is written in Python and utilises a number of cross-platform and open source libraries, namely NumPy \cite{VanDerWalt2011}, PyOpenGL\footnote{http://pyopengl.sourceforge.net/}, PyOpenCL \cite{Klockner2012}, PySide\footnote{http://www.pyside.org} and Mako\footnote{http://www.makotemplates.org}. The software has been tested in Linux and Windows, and should also work in Apple OSX. The full source code for Fireflies, along with instructions for its use, can be found at https://bitbucket.org/rmerrison/fireflies.
\section*{Acknowledgement}
I am extremely grateful to Roman Borisyuk for the valuable input that he gave during the development of Fireflies and the preparation of this manuscript.
\bibliographystyle{ieeetr}
\section{Introduction}
The use of random walks as a tool in
mathematical physics is now well established, and they have been, for
example, widely used in classical statistical mechanics to study critical phenomena
\cite{FFS}. Analogous methods in
quantum statistical mechanics require the study of random walks on oriented
lattices,
due to the intrinsic non commutative character of the (quantum) world
\cite{CP2,LR}.
Although random walks in random and non-random environments have been
intensively studied for
many years, only a few results on random walks on oriented lattices are known. The
recurrence or transience properties of simple random walks on oriented
versions of $\mathbb Z^2$ are studied in \cite{CP} when the horizontal
lines are
unidirectional towards a random or deterministic direction. An interesting
feature of this model is that, depending on the orientation, the walk could
be either recurrent or transient. In a particular deterministic case, with horizontal lines alternately oriented rightwards
and leftwards, the recurrence of the simple random walk is proved, whereas transience
naturally arises when the orientations are all identical in infinite regions.
More surprisingly, it is also proved that the recurrent character of the simple
random walk on $\mathbb Z^2$ is lost when the orientations are i.i.d. with zero mean.
In this paper, we study more general models and focus on spatially
inhomogeneous or dependent distributions of the orientations. We
introduce lattices for which the distribution of the orientation
is generated by a dynamical system and prove that the transience
of the simple random walk still holds under smoothness conditions
on this generation. We detail examples and counterexamples for
various standard dynamical systems. For ergodic dynamical systems,
we also prove a strong law of large numbers and, in the case of
i.i.d. orientations, a functional limit theorem with an
unconventional normalization due to the random character of the
environment of the walk, solving an open question of \cite{CP}.
The model and our results are stated in Section 2; Section 3 is
devoted to the proofs; and illustrative examples of such
dynamical orientations are given in Section 4.
\section{Model and results}
\subsection{Dynamically oriented lattices}
Let $S=(E,{\cal A},\mu,T)$ be a dynamical system where $(E,{\cal
A},\mu)$ is a probability space and $T$ is an invertible
transformation of $E$ preserving the measure $\mu$. This system is
used to introduce inhomogeneity or dependencies in the
distribution of the random orientations, together with a function
$f$ from $E$ to $[0,1]$, which satisfies $\int_E f d\mu=1/2$ to
avoid trivialities. By {\em orientations}, we mean a random field
$\ensuremath{\epsilon}=(\ensuremath{\epsilon}_y)_{y \in \mathbb{Z}} \in \{-1,+1\}^{\mathbb{Z}}$,
i.e. a family $\ensuremath{\epsilon}$ of $\{-1,+1\}$-valued random variables
$\ensuremath{\epsilon}_y,\;y \in \mathbb Z$, and we distinguish two different approaches
to introduce its distribution.
\subsubsection{Quenched case:} This case describes spatially inhomogeneously distributed orientations. For
$x \in E$ the {\em quenched law} $\mathbb{P}_{T}^{(x)}$ is the
product probability measure on $\{-1,+1\}^{\mathbb{Z}}$, equipped
with the product $\sigma-$algebra
$\mathcal{F}=\mathcal{P}(\{-1,+1\})^{\otimes \mathbb{Z}}$, whose
marginals can be given by:
\[
\mathbb P_{T}^{(x)}(\ensuremath{\epsilon}_y= + 1) =f(T^{y} x).
\]
To simplify, we have used the same notation for the quenched law
and its marginals, which should be written
$\mathbb{P}_{T,y}^{(x)}$ with $\mathbb{P}_{T}^{(x)}= \otimes_y
\mathbb{P}_{T,y}^{(x)}$. This quenched case is an extension of
the i.i.d. case, with independent but not necessarily identically
distributed random variables. These random variables can be viewed
as the increments of a dynamic random walk \cite{gui0, gui1}.
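As an illustration, the following sketch draws a quenched family of orientations when $T$ is the rotation of the torus by an angle $\alpha$ (an example developed in Section 4) and $f(x)=x$; the numerical values are assumptions made for the example only.
\begin{verbatim}
import random

alpha = 0.5 ** 0.5      # rotation angle, an arbitrary irrational (assumption)
f = lambda x: x         # generating function with integral 1/2

def quenched_orientations(x, y_min, y_max, seed=0):
    """Draw eps_y for y_min <= y <= y_max under P_T^(x):
    P(eps_y = +1) = f(T^y x), with T the rotation x -> x + alpha mod 1."""
    rng = random.Random(seed)
    return {y: +1 if rng.random() < f((x + y * alpha) % 1.0) else -1
            for y in range(y_min, y_max + 1)}

eps = quenched_orientations(x=0.2, y_min=-50, y_max=50)
\end{verbatim}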
\subsubsection{Annealed case:} We average over $x \in E$: the distribution of
$\ensuremath{\epsilon}$ is now $\mathbb{P}_\mu$ defined for all $A \in
\mathcal{F}$ by
\[
\mathbb{P}_\mu[\ensuremath{\epsilon} \in A] = \int_E \mathbb P_T^{(x)} [\ensuremath{\epsilon} \in A] d
\mu(x).
\]
The marginals are thus given for all $y \in \mathbb{Z}$ by
\[
\mathbb{P}_\mu[\ensuremath{\epsilon}_y=+1] = \int_E f(T^yx) d \mu(x)=\int_E f(x)
d\mu(x)=\frac{1}{2}
\]
and the hypothesis $\int_E f d\mu =\frac{1}{2}$ has been taken to
get $\mathbb{E}_\mu[\ensuremath{\epsilon}_y]=0$. The $T$-invariance of $\mu$
implies the translation-invariance of $\mathbb{P}_\mu$, but the
latter is not a product measure in general: the correlations of
the dynamical system for the function $f$, defined for all $y \in
\mathbb Z$ by
\begin{eqnarray}\label{cmu} C_\mu^f(y)&:=&\int_E f(x) \cdot f
\circ T^y(x) d\mu(x) - \int_E f(x)
d\mu(x) \cdot\int_E f \circ T^y(x) d \mu(x) \\
&=& \int_E f(x) f(T^y(x)) d\mu(x) - \frac{1}{4} \nonumber
\end{eqnarray}
are indeed related to the covariance of the $\ensuremath{\epsilon}$'s after a short
computation:
\begin{equation}\label{correlation} \forall y \in \mathbb Z, \rm{Cov}_\mu (\ensuremath{\epsilon}_0, \ensuremath{\epsilon}_y) =
4 \; C_\mu^f(y). \end{equation} One can thus construct dependent
variables whose dependence is directly related to the correlations
of the dynamical system. This annealed case leads in Section 4 to
another extension of the i.i.d. case, where independence is
dropped but translation-invariance is kept.
\subsubsection{Lattices}
We use these orientations to build {\em dynamically oriented
lattices}. They are oriented versions of $\mathbb Z^2$: the vertical
lines are not oriented and the horizontal ones are unidirectional,
the orientation at a level $y \in \mathbb Z$ being given by the random
variable $\ensuremath{\epsilon}_y$ (say right if the value is $+1$ and left if
it is $-1$). More formally we give the
\begin{definition}[Dynamically oriented lattices] Let $\ensuremath{\epsilon}=(\ensuremath{\epsilon}_y)_{y \in \mathbb Z}$
be a sequence of orientations defined as previously. The {\em
dynamically oriented lattice}
$\mathbb{L}^\ensuremath{\epsilon}=(\mathbb{V},\mathbb{A}^\ensuremath{\epsilon})$ is the (random)
directed graph with (deterministic) vertex set $\mathbb{V}=\mathbb Z^2$
and (random) edge set $\mathbb{A}^\ensuremath{\epsilon}$ defined by the condition
that for $u=(u_1,u_2), v=(v_1,v_2) \in \mathbb Z^2$, $(u,v) \in
\mathbb{A}^\ensuremath{\epsilon}$ if and only if $v_1=u_1$ and $v_2=u_2 \pm 1$, or
$v_2=u_2$ and $v_1=u_1+ \ensuremath{\epsilon}_{u_2}$.\end{definition}
\subsection{Simple random walk on $\mathbb{L}^\ensuremath{\epsilon}$}
We consider the simple random walk $M=(M_n)_{n \in \mathbb{N}}$
on $\mathbb{L}^\ensuremath{\epsilon}$.
For every realization $\ensuremath{\epsilon}$, it is a $\mathbb{Z}^2$-valued Markov chain defined on a probability space $(\Omega,
\mathcal{B},\mathbb{P})$, whose ($\ensuremath{\epsilon}$-dependent) transition probabilities are
defined for all $(u,v) \in \mathbb{V}\times \mathbb{V}$ by
\[\mathbb P[M_{n+1}=v | M_n=u]=\;\left\{
\begin{array}{lll} \frac{1}{3} \; &\rm{if} \; (u,v) \in \mathbb{A}^\ensuremath{\epsilon}&\\
\\
0 \; \; &\rm{otherwise.}&
\end{array}
\right.
\]
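For concreteness, here is a small simulation sketch of this Markov chain with i.i.d. uniform orientations, drawn lazily level by level; the time horizon and the counting of returns are illustrative choices.
\begin{verbatim}
import random
from collections import defaultdict

random.seed(1)
eps = defaultdict(lambda: random.choice((-1, +1)))   # i.i.d. orientations

def step(x, y):
    """One transition: up, down, or a horizontal move in the
    direction eps[y], each with probability 1/3."""
    move = random.randrange(3)
    if move == 0:
        return x, y + 1
    if move == 1:
        return x, y - 1
    return x + eps[y], y

x = y = returns = 0
for _ in range(10 ** 5):
    x, y = step(x, y)
    returns += (x, y) == (0, 0)
print("returns to the origin:", returns)   # typically very few: transience
\end{verbatim}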
Its transience is proved in \cite{CP} for almost every orientation
$\ensuremath{\epsilon}$ when the $\ensuremath{\epsilon}_y$'s are i.i.d. and we generalize it in
this dynamical context when the orientations are either {\em
annealed} or {\em quenched}.
\begin{theorem} \label{thm1} Assume that
\begin{equation} \label{C}
\int_{E}
\frac{1}{\sqrt{f(1-f)}}\ d\mu <\infty \end{equation}
then:
\begin{enumerate} \item In the annealed case, for
$\mathbb{P}_{\mu}$-a.e. orientation $\ensuremath{\epsilon}$, the simple random walk
on dynamically oriented lattice $\mathbb{L}^{\ensuremath{\epsilon}}$ is transient.
\item In the quenched case, for $\mu$-a.e. $x\in E$, for
$\mathbb{P}_{T}^{(x)}$-a.e. realization of the orientation $\ensuremath{\epsilon}$,
the simple random walk on the dynamically oriented lattice
$\mathbb{L}^\ensuremath{\epsilon}$ is transient.
\end{enumerate}
\end{theorem} {\bf Remarks}
\begin{enumerate}
\item Non-invertible transformations $T$ of the space $E$ can also
be considered and in this case it is straightforward to extend the
conclusions of Theorem \ref{thm1} if the distribution of the
orientations $(\epsilon_{y})_{y\in \mathbb Z}$ have marginals defined by
$\mathbb P_{T}^{(x)} (\epsilon_{y} = +1) = f(T^{|y|} x)$. The measure
$\mathbb P_{\mu}$ is not stationary anymore in the annealed case (see
the example of Manneville-Pomeau maps of the interval in Section
4). \item In Section 4 we exhibit dynamical systems for which
Theorem \ref{thm1} applies (Bernoulli or Markov Shifts,
Manneville-Pomeau maps), but also counter-examples from a family
of dynamical systems (irrational and rational rotations on the
torus). The latter case provides instructive examples: when the
function $f$ satisfies (\ref{C}), the ergodicity (or not) of the
dynamical system is not required, whereas when $f$ does not fulfil
(\ref{C}), the properties of the underlying dynamical system can
play a role, e.g. in the non-ergodic case when, according to the
rational angle we choose, the simple random walk on the
corresponding oriented lattice can be transient or recurrent.
\end{enumerate}
\subsection{Limit theorems in the ergodic case}
Let us assume that the
dynamical system $S=(E,{\cal A},\mu,T)$ defined in Section 2.1 is
ergodic. \begin{theorem}[Strong law of large numbers] The random walk on the
lattice $\mathbb{L}^{\ensuremath{\epsilon}}$ has $\mathbb P \otimes \mathbb P_\mu$-almost
surely zero speed, i.e.
\begin{equation}
\lim_{n\rightarrow +\infty}\frac{M_{n}}{n}=(0,0)\ \ \ \ \ \mathbb P
\otimes \mathbb P_\mu-{\rm almost} \; {\rm surely.}
\end{equation}
\end{theorem}
\subsubsection{Functional limit theorem for i.i.d. orientations}
In this paper we also answer an open question of \cite{CP} and
obtain a functional limit theorem with a suitable normalization.
We establish that the study of the simple random walk on
$\mathbb{L}^\ensuremath{\epsilon}$ is closely related to a {\it simple random walk
in a random scenery} defined for every $n\ge 1$ by
$$Z_n=\sum_{k=0}^{n} \ensuremath{\epsilon}_{Y_k}$$
where $(Y_k)_{k\ge 0}$ is the simple random walk on $\mathbb Z$ starting
from 0. Consider a standard Brownian motion $(B_{t})_{t\ge 0}$,
denote by $(L_{t}(x))_{t \ge 0}$ its corresponding local time at
$x\in\mathbb R$ and introduce a pair of independent Brownian motions
$(Z_{+}(x), Z_{-}(x)), x\geq 0$ defined on the same probability
space as $(B_{t})_{t\ge 0}$ and independent of it. The following
process is well-defined for all $t \geq 0$:
\begin{equation}\label{th}
\Delta_{t}=\int_{0}^{\infty}L_{t}(x)dZ_{+}(x)+\int_{0}^{\infty}L_{t}(-x)dZ_{-}(x).
\end{equation}
It has been proved by Kesten {\em et al.} \cite{KS} that this
process has a self-similar continuous version of index
$\frac{3}{4}$, with stationary increments. We write
$\stackrel{\mathcal{D}}{\Longrightarrow}$ for convergence in
the space of c\`adl\`ag functions $\mathcal{D}([0,\infty),\mathbb R)$
endowed with the Skorohod topology.
\begin{theorem}\label{thm11} \mbox{\bf [Kesten and Spitzer (1979)]}
\begin{equation} \Big(\frac{1}{n^{3/4}} Z_{[nt]} \Big)_{t \geq 0}
\; \stackrel{\mathcal{D}}{\Longrightarrow} (\Delta_t)_{t \geq 0}.
\end{equation} \end{theorem}
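The $n^{3/4}$ normalization can be checked numerically; the following Monte Carlo sketch (sample sizes are arbitrary choices) estimates $\mathbb E|Z_n|/n^{3/4}$, which should stabilize as $n$ grows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def Z(n):
    """Z_n = sum_{k=0}^{n} eps_{Y_k} for a simple random walk Y on Z
    and an i.i.d. +/-1 scenery eps on the range visited by Y."""
    Y = np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=n))))
    lo = Y.min()
    scenery = rng.choice([-1, 1], size=Y.max() - lo + 1)
    return scenery[Y - lo].sum()

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    m = np.mean([abs(Z(n)) for _ in range(200)])
    print(n, m / n ** 0.75)   # approximately constant in n
\end{verbatim}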
We introduce a real constant $m=\frac{1}{2}$, defined later as the
mean of some geometric random variables related to the behavior of
the walk in the horizontal direction\footnote{Our results are in
fact valid for a similar model for which $m \neq \frac{1}{2}$,
corresponding to non-symmetric nearest neighbors random walks. Of
course the transience is not at all surprising in this case, but
getting the limit theorems can be of interest.}. Using Theorem
\ref{thm11}, we shall prove
\begin{theorem}[Functional limit theorem]\label{thm2}
\begin{equation} \label{flt}
\Big(\frac{1}{n^{3/4}} M_{[nt]} \Big)_{t \geq 0} \;
\stackrel{\mathcal{D}}{\Longrightarrow} \frac{m}{(1+m)^{3/4}}(
\Delta_t,0)_{t \geq 0}. \end{equation} \end{theorem} {\bf Remark} : It is not
surprising that the vertical component is negligible towards
$n^{3/4}$ because its fluctuations are of order $\sqrt{n}$. We
suspect that we have in fact
\[
\Big(\frac{1}{n^{3/4}} M^{(1)}_{[nt]}, \frac{1}{n^{1/2}
}M^{(2)}_{[nt]}\Big)_{t \geq 0} \;
\stackrel{\mathcal{D}}{\Longrightarrow} \Big(\frac{m}{(1+m)^{3/4}}
\Delta_t,B_t \Big)_{t \geq 0}
\]
but this is not straightforward because the horizontal
($M^{(1)}$) and vertical ($M^{(2)}$) components are not
independent. We believe that $(B_t)_{t \geq 0}$ and
$(\Delta_t)_{t \geq 0}$ are independent but this also has to be
proved.
\section{Proofs}
\subsection{Vertical and horizontal embeddings of the simple random walk}
The simple random walk $M$ defined on $(\ensuremath{\Omega},\mathcal{B},\mathbb P)$
can be decomposed into vertical and horizontal embeddings by projection to
the corresponding axis. The vertical one is a simple random walk
$Y=(Y_n)_{n \in \mathbb N}$ on $\mathbb{Z}$ and we define for all $n \in \mathbb N$
its {\em local time} at the level $y \in \mathbb Z$ by
\[
\eta_n(y)=\sum_{k=0}^n \mathbf{1}_{Y_k=y}.
\]
The horizontal embedding is a random walk with $\mathbb{N}$-valued
geometric jumps: a doubly infinite family $(\xi_i^{(y)})_{i \in
\mathbb{N}^*, y \in \mathbb Z}$ of independent geometric random variables
of mean $m=\frac{1}{2}$ is given and one defines the embedded
horizontal random walk $X=(X_n)_{n\in\mathbb N}$ by $X_0=0$ and for $n
\geq 1$,
\[
X_n=\sum_{y \in \mathbb Z} \ensuremath{\epsilon}_y \sum_{i=1}^{\eta_{n-1}(y)} \xi_i^{(y)}
\]
with the convention that the last sum is zero when
$\eta_{n-1}(y)=0$. Of course, the walk $M_n$ does not coincide
with $(X_n,Y_n)$ but these objects are closely related: Define for
all $n \in \mathbb N$
\[
T_n=n + \sum_{y \in \mathbb Z} \sum_{i=1}^{\eta_{n-1}(y)} \xi_i^{(y)}
\]
to be the instant just after the random walk $M$ has performed its
n$^{\rm{th}}$ vertical move. A direct and useful consequence of
this decomposition is the following result \cite{CP}.
\begin{lemma} \label{lem1} \begin{enumerate}
\item $M_{T_n}=(X_n,Y_n),\; \forall n \in \mathbb N$. \item For a given
orientation $\ensuremath{\epsilon}$, the transience of $(M_{T_n})_{n \in \mathbb N}$
implies the transience of $(M_n)_{n \in \mathbb N}$. \end{enumerate} \end{lemma}
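The decomposition is easy to reproduce numerically; the sketch below builds the local times $\eta_{n-1}(y)$, the jumps $\xi_i^{(y)}$ and hence $X_n$ and $T_n$ from one trajectory of $Y$ (geometric variables on $\mathbb N$ of mean $m=\frac{1}{2}$ are simulated as failure counts with success probability $2/3$, one possible parametrization).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 10000

# vertical walk Y_0,...,Y_n and local times eta_{n-1}(y)
Y = np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=n))))
levels, eta = np.unique(Y[:n], return_counts=True)

eps = {y: rng.choice([-1, 1]) for y in levels}        # orientations
# total of eta_{n-1}(y) geometric jumps at each visited level y
jumps = {y: (rng.geometric(2 / 3, size=c) - 1).sum()
         for y, c in zip(levels, eta)}

X_n = sum(eps[y] * j for y, j in jumps.items())       # horizontal embedding
T_n = n + sum(jumps.values())         # instant of the n-th vertical move
print(X_n, Y[n], T_n)                 # M_{T_n} = (X_n, Y_n)
\end{verbatim}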
\subsection{Proof of the transience of the simple random walk}
The vertical walk $Y$, independent of $\ensuremath{\epsilon}$, is known to be
recurrent with fluctuations of order $\sqrt{n}$. For any $i \in
\mathbb N$, $\delta_i$ is a strictly positive real number and we write
$d_{n,i}=n^{\frac{1}{2}+\delta_i}$ to introduce a partition of
$\Omega$ into typical and untypical paths of $Y$:
\[
A_n=\big\{ \ensuremath{\omega} \in \ensuremath{\Omega}; \max_{0 \leq k \leq 2n} \; |Y_k| <
d_{n,1} \big\} \; \cap \; \big\{ \ensuremath{\omega} \in \ensuremath{\Omega}; \max_{y \in \mathbb Z} \;
\eta_{2n-1}(y) < d_{n,2}\big\}
\]
and
\[
B_n=\big\{\ensuremath{\omega} \in A_n; \Big| \sum_{y \in \mathbb Z} \ensuremath{\epsilon}_y \eta_{2n-1}(y)
\Big| > d_{n,3}\big\}.
\]
We first consider the joint measures $\tilde{\mathbb{P}}_\mu =
\mathbb P \otimes \mathbb P_\mu$ (annealed case) or $\tilde{\mathbb P}_{T}^{(x)}=
\mathbb P \otimes \mathbb P_{T}^{(x)}$ (quenched case) and prove that \begin{equation}
\label{eqn1} \sum_{n \in \mathbb N} \tilde{\mathbb{P}}_\mu
[X_{2n}=0;Y_{2n}=0] \; < \; \infty.
\end{equation}
By definition
\[
\sum_{n \in \mathbb N} \tilde{\mathbb{P}}_\mu [X_{2n}=0;Y_{2n}=0] =
\int_E\sum_n \mathbb P[ \mathbb P_{T}^{(x)}[X_{2n}=0;Y_{2n}=0] ]d \mu(x)
\]
and we first decompose $\tilde{\mathbb P}_{T}^{(x)}[X_{2n}=0;Y_{2n}=0]$
into \[ \tilde{\mathbb P}_{T}^{(x)}[X_{2n}=0;Y_{2n}=0;A_n^c] +
\tilde{\mathbb P}_{T}^{(x)}[X_{2n}=0;Y_{2n}=0;B_n]
+ \tilde{\mathbb P}_{T}^{(x)}[X_{2n}=0;Y_{2n}=0; A_n \setminus B_n].
\]
Some results of the i.i.d. case of \cite{CP} still hold uniformly in $x$ and in particular
we can prove using standard techniques the following
\begin{lemma} \label{lem3} \begin{enumerate} \item For every $x\in E$,
$\sum_{n \in \mathbb N} \tilde{\mathbb P}_{T}^{(x)}[X_{2n}=0;Y_{2n}=0;A_n^c]
\; < \; \infty$. \item For every $x\in E$, $\sum_{n \in \mathbb N}
\tilde{\mathbb P}_{T}^{(x)}[X_{2n}=0;Y_{2n}=0;B_n] \; < \; \infty$.
\end{enumerate}
\end{lemma} Define the $\ensuremath{\sigma}$-algebras $\mathcal{F}=\sigma(Y)$ and
$\mathcal{G}=\sigma(\ensuremath{\epsilon})$ generated by the families of r.v.'s
$Y$ and $\ensuremath{\epsilon}$. Then one has
$$
p_n^{(x)} :=\tilde{\mathbb P}_{T}^{(x)}[X_{2n}=0;Y_{2n}=0;A_n \setminus
B_n] = \mathbb E\Big[ \mathbf{1}_{Y_{2n}=0} \mathbb E \big[\mathbf{1}_{A_n
\setminus B_n} \tilde{\mathbb P}_{T}^{(x)} \big[X_{2n}=0 \big|
\mathcal{F} \vee \mathcal{G} \big] \big| \mathcal{F} \big] \Big].
$$
To prove the theorem, it remains to show that \begin{equation} \label{pn}\int_E
\left(\sum_{n \in \mathbb N} p_n^{(x)}\right)\ d\mu(x)<\infty. \end{equation} Recall
that for the simple random walk $Y$, there exists $C>0$ s.t. \begin{equation}
\label{srw} \mathbb P[Y_{2n}=0] \sim C \cdot n^{-\frac{1}{2}}, \; n
\rightarrow + \infty \end{equation} and we can prove as in \cite{CP} the
\begin{lemma} \label{lem5}
On the set $A_n \setminus B_n$, we have uniformly in $x\in E$, \begin{equation}
\label{sqrtln} \tilde{\mathbb P}_{T}^{(x)} \big[X_{2n}=0 \big|
\mathcal{F} \vee \mathcal{G}\big]= \mathcal{O} \Big(
\sqrt{\frac{\ln{n}}{n}}\Big). \end{equation}
\end{lemma} Hence, the transience of the simple random walk is a direct
consequence of the following
\begin{proposition} \label{prop1}
It is possible to choose
$\delta_1,\delta_2,\delta_3>0$ such that there exists $\delta>0$
and
\begin{equation} \label{eqn3}
\int_E \tilde{\mathbb P}_{T}^{(x)}\big[ A_n \setminus B_n \big|
\mathcal{F} \big]\ d\mu(x) = \mathcal{O} \big(n^{-\delta}). \end{equation}
\end{proposition}
{\bf Proof :} We have to estimate, on the event $A_{n}$, the
conditional probability
$$\tilde{\mathbb P}_{T}^{(x)}[|\sum_{y\in\mathbb{Z}}\zeta_{y}|\le
d_{n,3} \big| \mathcal{F} \big]$$
where
$\zeta_{y}=\ensuremath{\epsilon}_y \eta_{2n-1}(y), y\in\mathbb{Z}$. Let $G$ be a
centered Gaussian random variable with variance $d_{n,3}^2$,
(conditionally on $\mathcal{F}$) independent of the random
variables $\zeta_y$'s. Clearly,
$$\tilde{\mathbb P}_{T}^{(x)}\big[\sum_{y}\zeta_y\in [0,d_{n,3}]\big| \mathcal{F}\big] =
\frac{\tilde{\mathbb P}_{T}^{(x)}\big[\sum_{y}\zeta_y\in [0,d_{n,3}] ;
0\le G\le d_{n,3}\big| \mathcal{F}\big]}
{\tilde{\mathbb P}_{T}^{(x)}[0\le G\le d_{n,3}\big| \mathcal{F}]}$$
where $\tilde{\mathbb P}_{T}^{(x)}[0\le G\le d_{n,3}\big|
\mathcal{F}]=c>0$ is independent of $n$. Since $G$ is independent
of the random variables $\zeta_{y}$'s and using the symmetry of
the Gaussian distribution, we have
$$
\tilde{\mathbb P}_{T}^{(x)}\big[\sum_{y}\zeta_y\in [0,d_{n,3}] ; 0\le
G\le d_{n,3}\big|
\mathcal{F}\big]=\tilde{\mathbb P}_{T}^{(x)}\big[\sum_{y}\zeta_y\in
[0,d_{n,3}] ; -d_{n,3}\le G\le 0\big| \mathcal{F}\big].$$
Consequently, we obtain
$$\tilde{\mathbb P}_{T}^{(x)}\big[\sum_{y}\zeta_y\in [0,d_{n,3}]\big| \mathcal{F}\big]
\le \frac{1}{c}\tilde{\mathbb P}_{T}^{(x)}[|\sum_{y}\zeta_y +G|\le
d_{n,3} \big| \mathcal{F}\big] \ \rm{and}$$
$$\tilde{\mathbb P}_{T}^{(x)}\big[\sum_{y}\zeta_y\in [-d_{n,3},0]\big|
\mathcal{F}\big]\le
\frac{1}{c}\tilde{\mathbb P}_{T}^{(x)}\big[|\sum_{y}\zeta_y +G|\le
d_{n,3}\big| \mathcal{F}\big]$$ and then, we have the following
inequality
$$\tilde{\mathbb P}_{T}^{(x)}\big[|\sum_{y}\zeta_{y}|\le d_{n,3}\big|
\mathcal{F}\big]\le \frac{2}{c}\
\tilde{\mathbb P}_{T}^{(x)}\big[|\sum_{y}\zeta_y +G|\le d_{n,3}\big|
\mathcal{F}\big].$$ From Plancherel's formula, we deduce that
there exists a constant $C>0$ such that \begin{equation} \label{eqn4}
\tilde{\mathbb P}_{T}^{(x)}\big[ |\sum_{y}\zeta_y +G|\le d_{n,3} \big|
\mathcal{F} \big] \leq C \cdot d_{n,3} \cdot I_n(x) \end{equation} where
\[
I_n(x)=\int_{-\pi}^{\pi} \mathbb E\big[e^{it \sum_{y \in \mathbb Z} \ensuremath{\epsilon}_y
\eta_{2n-1}(y)} \big| \mathcal{F} \big]e^{-t^2 d_{n,3}^2/2} dt.
\]
To use that for $td_{n,3}$ small enough, $e^{-t^2 d_{n,3}^2/2}$
dominates the term under the expectation, we split the integral in
two parts. For $b_n=\frac{n^{\delta_2}}{d_{n,3}}$, we write
$I_n(x)=I_n^{1}(x) + I_n^2(x)$ with \begin{eqnarray*} I_n^1(x)=\int_{|t| \leq
b_n} \mathbb E\big[e^{it \sum_{y \in
\mathbb Z} \ensuremath{\epsilon}_y \eta_{2n-1}(y)} \big| \mathcal{F} \big] e^{-t^2 d_{n,3}^2/2} dt\\
I_n^2(x)=\int_{|t| > b_n} \mathbb E \big[e^{it \sum_{y \in \mathbb Z} \ensuremath{\epsilon}_y
\eta_{2n-1}(y)} \big| \mathcal{F} \big]e^{-t^2 d_{n,3}^2/2} dt.
\end{eqnarray*} To control the integral $I_n^2(x)$, we write \begin{eqnarray*}
|I_n^2(x)| &\leq& C \int_{|t| > b_n} e^{-t^2 d_{n,3}^2/2} dt=
\frac{C}{d_{n,3}} \int_{|s| > n^{\delta_2}} e^{-s^2/2} ds \; \leq
\; \frac{2C}{d_{n,3}} \/ n^{-\delta_2} \/ e^{-n^{2 \delta_2}
/2} \end{eqnarray*} to get uniformly in $x\in E$
\[
|I_n^2(x)|=\mathcal{O} \big(e^{-n^{2 \delta_2} / 2}).
\]
\begin{lemma} \label{lem6} For $\delta_3 > 2 \delta_2 $,
\[
\int_E |I_n^1(x)|\ d\mu(x)=\mathcal{O}
\big(n^{-\frac{3}{4}+\frac{\delta_1}{2}}\big).
\]
\end{lemma}
{\bf Proof :} From the definition of the orientations
$(\ensuremath{\epsilon}_{y})_{y}$, an explicit formula for the characteristic
function $\phi_{\ensuremath{\epsilon}_y}^{(x)}$ of the random variable $\ensuremath{\epsilon}_y$ can
be given and we deduce that
\[
|\phi_{\ensuremath{\epsilon}_y}^{(x)}(u)|^2 =\cos^2(u)+(2f(T^{y} x)-1)^2\sin^2(u) =
1-4f(T^{y} x)(1-f(T^{y} x))\sin^2(u) \] and by independence of the
$\ensuremath{\epsilon}$'s we get
$$|I_n^1(x)|\le \int_{|t| \leq b_n} |
\prod_{y}\phi_{\ensuremath{\epsilon}_y}^{(x)}(\eta_{2n-1}(y)t)|\ dt.$$ Denote
$p_{n,y}=\frac{\eta_{2n-1}(y)}{2n}$, $C_n=\{y:\eta_{2n-1}(y)\ne
0\}$ and use H\"older's inequality to get \[ |I_n^1(x)|\leq
\prod_{y} \Big[ \Big( \int_{|t| \leq b_n}
|\phi_{\ensuremath{\epsilon}_y}^{(x)}(\eta_{2n-1}(y)t)|^{1/p_{n,y}} dt
\Big)^{p_{n,y}}\Big]. \] Now, using the fact that we work on
$A_n$, we choose $\delta_3>2 \delta_2$ s.t.
$\lim_n b_n\eta_{2n-1}(y)= 0$ uniformly in $y$.
Using $\sin(x)\ge \frac{2}{\pi} x$ for $x\in [0,\frac{\pi}{2}]$
and $\exp(-x)\ge 1-x$, one has \begin{eqnarray*} |I_n^1(x)|& \leq
& \prod_{y\in C_n} \left(\frac{1}{\eta_{2n-1}(y)} \int_{|v|\leq
b_n \eta_{2n-1}(y)}\exp\left(-\frac{16}{p_{n,y}\pi^2}f(T^{y}
x)(1-f(T^{y}x))v^2\right)\ dv \right)^{p_{n,y}}\\
& \leq & \prod_{y\in C_n} \left(\frac{c
\textbf{1}_{f(T^{y}x)(1-f(T^{y}x))>0}}{\sqrt{2n\eta_{2n-1}(y)f(T^{y}x)(1-f(T^{y}x))}
}\right)^{p_{n,y}} \ \ (\mbox{with }\ \ c=\pi^{3/2}/{4})\\
& = & c \exp\big[-\frac{1}{2} \sum_{y\in C_n}
p_{n,y}\log(2n\eta_{2n-1}(y))\big]\cdot \prod_{y\in C_n}
\left(\frac{\textbf{1}_{f(T^{y}x)(1-f(T^{y}x))>0}}{\sqrt{f(T^{y}x)(1-f(T^{y}x))}
}\right)^{p_{n,y}}. \end{eqnarray*} The vector
$\textbf{p}=(p_{n,y})_{y \in C_n}$ defines a probability measure
on $C_n$ and we have \[ -\frac{1}{2}\sum_{y\in C_n}
p_{n,y}\log(2n\eta_{2n-1}(y))= -\log 2n -\frac{1}{2}\sum_{y\in
C_n} p_{n,y}\log p_{n,y} = -\log 2n+ \frac{1}{2}H(\textbf{p})
\]
where $H(\cdot)$ is the entropy of the probability vector
$\textbf{p}$, always bounded by $\log(\textrm{card}(C_n))$. We
thus have on the set $A_n$,
\[
|I_n^1(x)| \leq c\exp \left[-\log 2n + \frac{1}{2}\log (2 d_{n,1})\right]
\prod_{y\in C_n} \left(\frac{\textbf{1}_{\{f(T^{y}x)(1-f(T^{y}x))>0\}}}{\sqrt{f(T^{y}x)(1-f(T^{y}x))}}\right)^{p_{n,y}}.\]
By applying H\"{o}lder's inequality and the fact that $T$ preserves the measure $\mu$, we get
\begin{eqnarray*}
\int_{E}|I_n^1(x)|\ d\mu(x)& \leq & C\cdot n^{-\frac{3}{4}+\frac{\delta_{1}}{2}}
\int_{E}\prod_{y\in C_n}\left(\frac{\textbf{1}_{\{f(T^{y}x)(1-f(T^{y}x))>0\}}}{\sqrt{f(T^{y}x)(1-f(T^{y}x))}
}\right)^{p_{n,y}}\ d\mu(x)\\
&\leq & C \cdot n^{-\frac{3}{4}+\frac{\delta_{1}}{2}}
\prod_{y\in C_n}\left[\int_{E}\left(\frac{\textbf{1}_{\{f(T^{y}x)(1-f(T^{y}x))>0\}}}{\sqrt{f(T^{y}x)(1-f(T^{y}x))}
}\right)\ d\mu(x)\right]^{p_{n,y}}\\
&=& C\cdot n^{-\frac{3}{4}+\frac{\delta_{1}}{2}}
\int_{E}\frac{1}{\sqrt{f(x)(1-f(x))}}\ d\mu(x).\; \; \; \; \;
\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \;
\; \; \; \; \; \; \; \; \; \; \; \;
\diamond
\end{eqnarray*}
Now, using (\ref{eqn4}), write with the usual notation
$d_{n,3}= n^{\frac{1}{2}+\delta_3}$:
\[
\int_{E} \tilde{\mathbb P}_{T}^{(x)}[A_n \setminus B_n | \mathcal{F}]\
d\mu(x) \leq C\cdot d_{n,3} \int_{E}\left(|I_n^1(x)|+
|I_n^2(x)|\right)\ d\mu(x)
\]
and consider $\delta_3 > 2 \delta_2$. By the previous lemmata, we
have
\[
d_{n,3}\cdot \int_{E}|I_n^1(x)|\ d\mu(x)= \mathcal{O}
\big(n^{-\frac{1}{4}+\delta_3+\frac{\delta_1}{2}}\big), \; d_{n,3}
\cdot \int_{E}|I_n^2(x)|\ d\mu(x)= \mathcal{O} \big(e^{-n^{2
\delta_2} / 2})
\]
and the proposition follows by choosing $\delta_1, \delta_2,
\delta_3$ small enough. $\; \; \; \; \; \; \; \; \;\; \; \;
\; \; \; \; \; \;\; \; \; \; \; \; \; \; \;\; \; \; \;
\; \; \; \; \; \; \; \diamond$
Combining Equations (\ref{srw}), (\ref{sqrtln}) and (\ref{eqn3}),
we obtain (\ref{pn}) and then (\ref{eqn1}). By Borel-Cantelli's
Lemma, we get :
\[
\tilde{\mathbb P}_\mu \big[ M_{T_n}=(0,0)\;\rm{i.o.} \big]=\mathbb P_\mu
\big[ \mathbb P \big[ M_{T_n}=(0,0)\; \rm{i.o.} \big] \big] =0
\]
and thus for $\mathbb P_\mu$-almost every orientation $\ensuremath{\epsilon}$, $\mathbb P
\big[ M_{T_n}=(0,0) \; \rm{i.o.} \big] =0$. This proves that
$(M_{T_n})_{n \in \mathbb N}$ is transient for $\mathbb P_{\mu}$-almost every
orientation $\ensuremath{\epsilon}$, and by Lemma \ref{lem1}, the $\mathbb P_\mu$-almost
sure transience of the simple random walk on the annealed oriented
lattice. Transience in the quenched case is a direct consequence
of the transience in the annealed case.
\subsection{Proof of the strong law of large numbers}
\begin{lemma} \label{slln}\mbox{\bf (SLLN for the embedded random walk)}
\begin{equation}
\lim_{n\rightarrow +\infty}\frac{M_{T_n}}{n}=(0,0)\ \
\mbox{$\tilde{\mathbb P}_\mu$-almost surely.}
\end{equation}
\end{lemma}
{\bf Proof :} Since $(Y_{n})_{n\geq 0}$ is a simple random walk,
$\frac{Y_{n}}{n}$ goes to $0$
$\tilde{\mathbb P}_\mu$-a.s. as $n \to \infty$, and it is enough to
prove that $(\frac{X_{n}}{n})$ converges almost surely to 0.
Introduce
$$Z_{n}=\sum_{k=0}^{n-1} \ensuremath{\epsilon}_{Y_{k}}=\sum_{y\in\mathbb Z}\ensuremath{\epsilon}_{y} \eta_{n-1}(y).$$
Under the probability measure $\tilde{\mathbb P}_{\mu}$, the
stationary sequence $(\ensuremath{\epsilon}_{Y_{k}})_{k\geq 0}$ is ergodic
\cite{Ka}, so from Birkhoff's theorem, as $n$ tends to infinity,
\[
\frac{Z_{n}}{n}\rightarrow \mathbb E[\ensuremath{\epsilon}_{0}]=0\ \ \mbox{almost surely.}
\]
Clearly, $X_{n}-mZ_{n}=\sum_{y\in\mathbb Z}
\ensuremath{\epsilon}_{y}\sum_{i=1}^{\eta_{n-1}(y)} (\xi_{i}^{(y)}-m)$ and for an
even integer $r$
\small
$$\mathbb E[(X_{n}-mZ_{n})^{r}]=\sum_{y_1\in\mathbb Z,\ldots y_{r}\in\mathbb Z} \mathbb E\left[\ensuremath{\epsilon}_{y_1}\ldots
\ensuremath{\epsilon}_{y_{r}}\sum_{i_{1}=1}^{\eta_{n-1}(y_{1})}\ldots
\sum_{i_{r}=1}^{\eta_{n-1}(y_{r})} \mathbb E[(\xi_{i_1}^{(y_1)}-m)\ldots
(\xi_{i_{r}}^{(y_{r})}-m)|\mathcal{F} \vee \mathcal{G}]\right].$$
\normalsize
The $\xi_i^{(y)}$'s are independent of the vertical
walk and the orientations; moreover, the random variables
$\xi_{i}^{(y)}-m, i\geq 1, y\in \mathbb Z $ are i.i.d. and centered, so
the summands are non zero if and only if $i_{1}=\ldots=i_{r}$ and
$y_{1}=\ldots =y_{r}$. Then,
$$\mathbb E[(X_{n}-mZ_{n})^{r}]=n\mathbb E[(\xi_{1}^{(0)} -m)^{r}]:=nm_{r}\; \; \; \; \; \mbox{ (say)}.$$
Let $\delta>0$. By Tchebychev's inequality,
\begin{eqnarray*}
\mathbb P\Big[\left|\frac{X_{n}-mZ_{n}}{n}\right|\geq
\delta\Big]&\leq &\frac{1}{\delta^{r} n^{r}}
\mathbb E[(X_{n}-mZ_{n})^{r}] \; \leq \; \frac{m_{r}}{\delta^{r}
n^{r-1}}.
\end{eqnarray*}
We choose $r=4$ and thus from Borel-Cantelli Lemma, we deduce that
$\frac{X_{n}-mZ_{n}}{n}$ converges almost surely to 0 as $n$ goes
to infinity. $\diamond$
Using similar techniques, one also proves the
\begin{lemma} \label{tn} The sequence $(\frac{T_{n}}{n})_{n\ge 1}$
converges $\tilde{\mathbb P}_\mu$-a.s. to $(1+m)$ as $n\rightarrow
~+\infty.$ \end{lemma}
Let us prove now the almost sure convergence of the sequence
$(\frac{M_{n}}{n})_{n\geq 1}$ to $(0,0)$. Since the sequence
$(T_{n})_{n\geq 1}$ is strictly increasing, there exists a
non-decreasing sequence of integers $(U_{n})_{n \geq 1}$
such that $T_{U_{n}}\leq n< T_{U_{n}+1}$. Denote
$M_n=(M_n^{(1)},M_n^{(2)})$, then we have $ M_{n}^{(1)} \in
[\min(M_{T_{U_{n}}}^{(1)},M_{T_{U_{n}+1}}^{(1)}),\max(M_{T_{U_{n}}}^{(1)},M_{T_{U_{n}+1}}^{(1)})]$
and $M_{n}^{(2)}=M_{T_{U_{n}}}^{(2)}$, by definition of the
embedding. The (sub-)sequence $(U_{n})_{n\geq 1}$ is nondecreasing
and $\lim_{n\rightarrow +\infty}U_n=+\infty$, and then by
combining Lemmata \ref{slln} and \ref{tn}, we get that as
$n\rightarrow +\infty$, \begin{equation}\label{tun}
\frac{M_{T_{U_{n}}}}{T_{U_{n}}}\rightarrow (0,0) \mbox{
$\tilde{\mathbb P}_\mu$ a.s.} \end{equation} Now,
\[
\displaystyle\left|\frac{M_{n}^{(1)}}{n}\right|\leq
\max\left(\left|\frac{M_{T_{U_{n}}}^{(1)}}{n}\right|,
\left|\frac{M_{T_{U_{n}+1}}^{(1)}}{n}\right|\right)\leq
\max\left(\left|\frac{M_{T_{U_{n}}}^{(1)}}{T_{U_{n}}}\right|,
\left|\frac{M_{T_{U_{n}+1}}^{(1)}}{T_{U_{n}}}\right|\right)
\]
and
$$\left|\frac{M_{n}^{(2)}}{n}\right|=
\left|\frac{M_{T_{U_{n}}}^{(2)}}{T_{U_{n}}}\right| \cdot
\frac{T_{U_{n}}}{n} \leq
\left|\frac{M_{T_{U_{n}}}^{(2)}}{T_{U_{n}}}\right|.$$ From
(\ref{tun}), we deduce the almost sure convergence of the
coordinates to 0 and then this of the sequence
$(\frac{M_{n}}{n})_{n\geq 1}$ to (0,0) as $n\rightarrow\infty$.
\subsection{Proof of the functional limit theorem}
\begin{proposition}\label{pr1} The sequence of random processes
$n^{-3/4}(X_{[nt]})_{t\ge 0}$ weakly converges in the space ${\cal
D}([0,\infty[,\mathbb R)$ to the process $(m\Delta_{t})_{t\ge 0}$. \end{proposition}
{\bf Proof :} Let us first prove that the finite dimensional
distributions of $n^{-3/4}(X_{[nt]})_{t\ge 0}$ converge to those
of $(m\Delta_{t})_{t\ge 0}$ as $n\rightarrow\infty$. We can
rewrite for every $n\in\mathbb N$, $X_{n}=X_{n}^{(1)}+X_{n}^{(2)}$ where
$$X_{n}^{(1)}=\sum_{y\in\mathbb Z} \epsilon_{y}\Big(\sum_{i=1}^{\eta_{n-1}(y)}\xi_{i}^{(y)} -
m\Big) \; \; \; , \; \; \; X_{n}^{(2)}=m\sum_{y\in\mathbb Z}
\ensuremath{\epsilon}_{y}\eta_{n-1}(y).$$ Thanks to Theorem \ref{thm11} the finite
dimensional distributions of $n^{-3/4}(X_{[nt]}^{(2)})_{t\ge 0}$
converge to those of $(m\Delta_{t})_{t\ge 0}$ as
$n\rightarrow\infty$. To conclude we show that the sequence of
random variables $n^{-3/4} (X_{n}^{(1)})_{n\in\mathbb N}$ converges for
the $L^2$-norm to 0 as $n\rightarrow +\infty$. We have
$$
\mathbb E\Big[(X_{n}^{(1)})^{2}\Big]=\mathbb E\Big[\sum_{x,y\in\mathbb Z}\ensuremath{\epsilon}_{x}\ensuremath{\epsilon}_{y}
\sum_{i=1}^{\eta_{n-1}(x)}\sum_{j=1}^{\eta_{n-1}(y)}\mathbb E[(\xi_{i}^{(x)}-m)(\xi_{j}^{(y)}-m)|\mathcal{F}
\vee \mathcal{G}]\Big]
$$
{}From the equality
$$\mathbb E[(\xi_{i}^{(x)}-m)(\xi_{j}^{(y)}-m)|\mathcal{F} \vee
\mathcal{G}]=m^2\delta_{i,j}\delta_{x,y},$$
we obtain
$$
n^{-3/2}\mathbb E\Big[(X_{n}^{(1)})^{2}\Big]=m^2 n^{-3/2}\sum_{x\in\mathbb Z}
\eta_{n-1}(x)=m^2n^{-1/2}=o(1). \; \; \; \diamond
$$
Let us recall that $M_{T_{n}}=(X_{n},Y_{n})$ for every $n\ge 1$.
The sequence of random processes $n^{-3/4}(Y_{[nt]})_{t\ge 0}$
weakly converges in ${\cal D}([0,\infty[,\mathbb R)$ to 0, thus the
sequence of $\mathbb R^2-$valued random processes
$n^{-3/4}(M_{T_{[nt]}})_{t\ge 0}$ weakly converges in ${\cal
D}([0,\infty[,\mathbb R^2)$ to the process $(m\Delta_{t},0)_{t\ge 0}$.
Theorem \ref{thm2} follows from this remark and Lemma \ref{tn}.
\section{Examples}
The main motivation of this work is the generalization of the
transience of the i.i.d. case of \cite{CP} to dependent or
inhomogeneous orientations. We obtain various extensions
corresponding to well-known examples of dynamical systems such
as Bernoulli and Markov shifts, SRB measures, rotations on the
torus, etc.; our framework is very general from this point of
view. To get the transience of the walk, we need to generate the
orientations by choosing a suitable function $f$ satisfying
(\ref{C}), which requires in some sense that the model not be too
close to the deterministic case: to satisfy it, $f$
should not be 0 or 1 "$\mu$-too often". We now describe the
examples providing extensions of the i.i.d. case to various
disordered orientations.
{\bf 1. Shifts:} Bernoulli and Markov shifts provide the more
natural field of application of Theorem \ref{thm1},
including a dynamical construction of the i.i.d. case of \cite{CP} and a straightforward extension to inhomogeneous or dependent
orientations. Consider the {\em shift transformation} $T$ on the
product space $E=[0,1]^\mathbb Z$
endowed with the Borel $\sigma$-algebra, defined by
\begin{eqnarray*}
T: E & \longrightarrow & E\\
x=(x_{y})_{y \in \mathbb Z} & \longmapsto & (Tx)_y=x_{y+1}, \forall y\in \mathbb Z.
\end{eqnarray*}
{\em Bernoulli shifts} are considered when one starts from the
product Lebesgue measure $\mu=\lambda^{\otimes \mathbb Z}$ of the
Lebesgue measure $\lambda$ on $[0,1]$. It is $T$-invariant and
we choose as generating function $f$ the projection on the zero coordinate:
\begin{eqnarray*}
f:E & \longrightarrow & [0,1]\\
\;x & \longmapsto & x_0.
\end{eqnarray*}
For all $y \in \mathbb Z$, we then have $f \circ T^y(x)=x_y := \xi(y) \in
[0,1]$. We consider these $\xi$'s as new random variables on $E$
whose independence is inherited from the product structure of
$\mu$. The sufficient condition (\ref{C}) becomes
\[
\int_0^1 \frac{d\lambda(x)}{\sqrt{x(1-x)}} <\infty
\] and the transience holds in this particular case. In the annealed case, the
product form of $\mu$ allows another description of the i.i.d.
case of \cite{CP}, for which we check $\mathbb E[\xi(y)] = \frac{1}{2}$
for all $y\in\mathbb Z$ and $\rm{Cov}_{\mu}[\ensuremath{\epsilon}_0, \ensuremath{\epsilon}_y]=\mathbb E_{\mu}[\ensuremath{\epsilon}_0\ensuremath{\epsilon}_y]=4 \mathbb E[\xi(0) \xi(y)]-1=0$. The
result is also valid in the quenched case, for which the
distribution of the orientation has an inhomogeneous product form.
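This can be checked by direct simulation; the sketch below (sample sizes are arbitrary) draws the annealed orientations of the Bernoulli shift and estimates $\mathbb E_\mu[\ensuremath{\epsilon}_y]$ and ${\rm Cov}_{\mu}[\ensuremath{\epsilon}_0, \ensuremath{\epsilon}_y]$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
reps, width = 200000, 5

x = rng.random((reps, width))                # x_y ~ U[0,1] i.i.d. under mu
eps = np.where(rng.random((reps, width)) < x, 1, -1)

print(eps.mean(axis=0))                      # each ~ 0 : E_mu[eps_y] = 0
print(np.cov(eps[:, 0], eps[:, 3])[0, 1])    # ~ 0 : Cov_mu[eps_0, eps_3] = 0
\end{verbatim}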
If one considers a measure $\mu$ with correlations, then the same
holds for $\mathbb P_\mu$. Consider e.g. $\mu$ to be a
(shift-invariant) {\em Markovian measure} on $[0,1]^\mathbb Z$ whose
correlations are inherited from the shift via (\ref{correlation}),
with a stationary distribution $\pi$. The transience of the simple
random walk on this particular dynamically oriented lattice holds
then for $\mathbb P_{\mu}$-a.e. environment as soon as
\[
\int_0^1 \frac{d\pi(x)}{\sqrt{x(1-x)}} < \infty.
\]
This is the case when the invariant measure $\mu$ is the usual
Lebesgue measure or Lebesgue measure of index $p$. In the quenched
case, there are no correlations by construction and the law of the
orientations depends on the measurable transformation only. This
case is nevertheless different from this of the Bernoulli shift
because the typical set of points $x$ for which the transience
holds depends on the measure $\mu$.\\
{\bf 2. SRB measures:} They provide another source of examples for
dependent orientations generated by transformations on the
interval $E=[0,1]$. A measure $\mu$ of the dynamical system $S$ is
said to be an {\em SRB} measure if the empirical measure
$\frac{1}{n} \sum_{i=1}^{n} \delta_{T^i(x)}$ converge weakly to
$\mu$ for Lebesgue a.e. $x$. There exist many other definitions of
SRB measures, see e.g. \cite{J}. In particular, it has the {\em
Bowen boundedness property} in the sense that it is close to a
Gibbs measure on some increasing cylinder, i.e. there exists a
constant $C>0$ such that for all $x \in [0,1]$ and every $n \ge 1$
\[
\frac{1}{C} \leq \frac{\mu(I_{i_1,...,i_n}(x))}{\exp{(\sum_{k=0}^{n-1}
\Phi(T^k(x)))}} \leq C
\]
where $\Phi=- \log |T'|$ and $I_{i_1,...,i_n}$ is the interval of
monotonicity for $T^n$ which contains $x$.
In some cases, it is possible to control the correlations for SRB
measures and we detail now an example where our transience result
holds, the {\em Manneville-Pomeau maps} introduced in the 1980's
to study intermittency phenomenon in the study of turbulence in
chaotic systems \cite{BPV}. They are expanding interval maps on
$E=[0,1]$ and the original MP map is given by
\begin{eqnarray*}
T: [0,1]& \longrightarrow & [0,1]\\
x & \longmapsto & T(x)=x + x^{1+\alpha} \; {\rm mod} \; 1.
\end{eqnarray*}
The existence of an absolutely continuous (w.r.t. the Lebesgue
measure on $[0,1]$) SRB invariant measure $\mu$ has been
established in \cite{P}, and the following bounds on the Radon-Nikodym
derivative $h=\frac{d\mu}{d\lambda}$ have been proved \cite{MRTMV}:
\begin{equation}\label{h}
\exists C_\star,C^\star >0 \; \rm{s.t.} \;
\frac{C_\star}{x^\alpha} < h(x) < \frac{C^\star}{x^\alpha}.
\end{equation}
This measure is known to be mixing, and a polynomial decay of
correlations, with a power $\beta >0$, has even been proved for $g$
regular enough \cite{hu,L,MRTMV,Y}:
\[
\mid C_\mu^g(y) \mid = \mathcal{O} \big(\mid y \mid ^{-\beta}
\big).
\]
The map $T$ is not invertible but we use the remark following
Theorem \ref{thm1}. It remains to find a suitable function $f$ that
generates orientations for which the simple random walk is
transient. By (\ref{h}), a sufficient condition for the condition
$(\ref{C})$ to hold is
\[ \int_0^1 \frac{dx}{x^\alpha \sqrt{f(x)(1-f(x))}} < \infty
\]
and this is for example true for the function
$f(x)=\frac{1}{2}(1+x- T(x))$ and the choice of an $\alpha
<\frac{1}{3}$.\\
{\bf 3. Rotations:} We consider the dynamical system $S=([0,1],
{\cal B}([0,1]), \lambda, T_{\alpha})$ where $T_{\alpha}$ is the
rotation on the torus $[0,1]$ with angle $\alpha\in \mathbb R$ defined by
$$x\longmapsto x+\alpha \ \rm{ mod }\ 1$$
and $\lambda$ is the Lebesgue measure on $[0,1]$. For every function $f:[0,1]\mapsto [0,1]$ such that $\int_0^1 f(x) \ dx=\frac{1}{2}$ and
$$ \int_0^1 \frac{dx}{\sqrt{f(x)(1-f(x))}}<\infty,$$
conclusions of Theorem \ref{thm1} hold uniformly in $\alpha$. Such
functions are called {\it admissible}. Every function uniformly
bounded away from 0 and 1, with integral $\frac{1}{2}$, is admissible.
We also allow functions $f$ to take the values 0 and 1: for instance,
$f_{1}(x)=x$ is admissible whereas $f_{2}(x)=\cos^2(2\pi x)$ is
not. We actually have no explanation for this phenomenon;
moreover we do not know the behavior (recurrence or transience) of
the simple random walk on the dynamically oriented lattice generated by
$f_{2}$.\\
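Numerically, the dichotomy between $f_1$ and $f_2$ is visible with a one-line quadrature (the cutoffs below are illustrative): the integral for $f_1$ converges to $\pi$, while truncated integrals for $f_2$ diverge as the cutoff shrinks.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

g1 = lambda x: 1.0 / np.sqrt(x * (1.0 - x))       # f_1(x) = x
print(quad(g1, 0.0, 1.0)[0], np.pi)               # both ~ 3.14159

# f_2(x) = cos^2(2 pi x): the integrand equals 2/|sin(4 pi x)|,
# with non-integrable singularities, e.g. at x = 0 and x = 1/4
g2 = lambda x: 2.0 / abs(np.sin(4.0 * np.pi * x))
for cut in (1e-2, 1e-4, 1e-6):
    print(cut, quad(g2, cut, 0.25 - cut)[0])      # grows like |log(cut)|
\end{verbatim}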
When the generation function $f$ does not satisfy the condition
(\ref{C}), a variety of results can arise by tuning the angle
$\alpha$ to get different types of dynamical systems. Consider
$f_3={\bf 1}_{[0,1/2[ }$ and take $\alpha=\frac{1}{2q}$ for $q$ an
integer larger or equal to 1; the lattice we obtain is $\mathbb Z^2$ with
undirected vertical lines and horizontal strips of height $q$,
alternatively oriented to the left then to the right. The simple
random walk on this deterministic and periodic lattice is known to
be recurrent \cite{CP} and this provides an example of a non
ergodic system where (\ref{C}) is not fulfilled and the walk
recurrent. When the period becomes infinite, i.e. for $\alpha=0$,
the rotation is just the identity and the corresponding lattice is
$\mathbb Z^2$ with undirected vertical lines and horizontal lines all
oriented to the right (resp. all to the left) when $x\in [0,1/2[$
(resp. $x\notin [0,1/2[$). The simple random walk on this lattice
is known to be transient and this gives an example of a non
ergodic system where (\ref{C}) is not fulfilled and the walk
transient. In the ergodic case, i.e. when $\alpha$ is irrational,
we suspect the behavior of the walk to exhibit a transition
according to the type of the irrational $\alpha$. When its
approximation by rational numbers, via its continued fraction
expansion, is good, i.e. when its type is large,
the lattice is "quasi" periodic and the walk is believed to be
recurrent. On the other hand, the walk is believed to be transient
when this approximation is bad (when the type of the irrational is
close to 1). A deeper study of this particular choice of dynamical
system, in progress, is needed to describe more precisely the
transition between recurrence and transience in terms of the type
of the irrational.
\section{Comments}
We have extended the results of \cite{CP} to non-independent or
inhomogeneous orientations. In particular, we have proved that the
simple
random walk is still transient for a large class of models.
As the walk can be recurrent for deterministic orientations,
it would be interesting to perturb deterministic cases in order to get a
full picture of the transience versus recurrence properties
and a more systematic study of this problem is in progress. We
believe that the functional limit theorem could be extended, at least
to other ergodic dynamical systems, but this requires new
results on random walks in ergodic random sceneries. In the i.i.d. case, Campanino {\em et al.} have also proved
an improvement of the strong law of large numbers for the random walk in the random scenery $Z_n$: almost surely, $\frac{Z_n}{n^\beta} \longrightarrow 0$ for all
$\beta > \frac{3}{4}$.
Together with our functional limit theorem and the standard results for the vertical walk, this suggests the conjecture of a
local limit theorem, getting
a full picture of "purely random cases", for which the
condition on the generation $f$ holds. This work is in progress
and we also investigate the limit theorems in more general
cases.
\addcontentsline{toc}{section}{\bf References}
\section{Introduction}
The high complexity of biological systems has encouraged, during the last decades, extensive interdisciplinary research involving biology and other fields of science, such as computer science, mathematics, physics and chemistry. This interdisciplinary approach to biology has resulted in a new field of study, called systems biology, which focuses on the systematic study of complex interactions in biological systems.
Computer scientists find many interesting similarities between systems biology and theory of concurrency. Degano and Priami~\cite{degpri03} claim that both systems biology and formal methods for concurrency can cross-fertilize each other. Being based on sound and deep mathematics, concurrency theories may offer solid ways to describe biological systems and safely reason upon them. On the other hand, systems biology studies many complex biological phenomena. Modelling and reasoning about these complex phenomena may require techniques that are more efficient and reliable than existing techniques. It is expected that the effort to understand biological mechanisms in terms of computer technology will possibly lead to new techniques that are more robust, efficient and reliable to model and analyse complex systems.
Many mathematical formalisms have been proposed to model and analyse biological systems. Some of them are based on formalisms generally used for modelling concurrent systems, such as Petri Nets~\cite{redliemav96,redmavlie93,harrob04}, the $\pi$-Calculus~\cite{chiacurdegmar05,priqua05,verbus07,verbus08}, and CCS (Calculus of Communicating Systems)~\cite{dankri07}. Other formalisms are inspired by biological phenomena, such as compartmentalisation:
Brane Calculi~\cite{cardelli05,danpra05}, P Systems~\cite{paunroz02,paun02}, Calculi of Looping Sequences~\cite{barcarmagmilpar08,SpatialCLS}, and Biocham ~\cite{Biocham06,Biocham05}. In particular, Calculi of Looping Sequences is a class of formalisms:
Calculus of Looping Sequences (CLS) is the basic formalism~\cite{barcarmagmilpar08} and has two important extensions,
Stochastic CLS~\cite{barcarmagmilpar08} in which reactions are associated with rates, and Spatial CLS~\cite{SpatialCLS}, which includes spatial information.
These formalisms support the analysis of biological systems using tools based on numerical simulation, stochastic simulation and model-checking techniques.
Bianco and Castellini developed PSim, a simulator for Metabolic P Systems, a variant of P Systems~\cite{PSim}. Scatena developed a simulator for Stochastic CLS~\cite{scatena}.
Biocham is equipped with a tool for the simulation and model-checking of biological systems. The probabilistic model-checker PRISM has also been used to analyse some biological properties~\cite{heaetal03,KNP08,heaetal08}.
All these tools provide textual output as well as a plot of the simulation.
Texts and plots provide very detailed information on specific aspects of the analysed biological system. However, they are often inadequate when the aim is to acquire global knowledge about the high-level organisation and dynamics of the biological system. For example, in most experiments the analyst can only vary molecular concentrations in the environment and within cells, whereas the aim of an experiment or simulation may be to observe the resultant behaviour of cells or even the whole organ or organism. Such high-level behaviours can be better described through two or three dimensional visualisation/animation rather than using texts and plots.
One approach for modelling and visualising biological systems is based on the use of L-systems, which use rewriting systems for modelling biological processes~\cite{Gia04}. However, they are used mainly for visualising the development of plants~\cite{HamPru96,Pow99,GAU00}. Hammel and Prusinkiewicz model the behaviour of Anabaena at cellular level~\cite{HamPru96} by considering interactions of cells with external factors in the environment.
Michel, Spicher and Giavitto use the rule-based programming language MGS to model and simulate the $\lambda$ phage genetic switch~\cite{Mic09}. They present a multilevel model of the system: a molecular level defined using a built-in Gillespie algorithm, and a population-of-cells level defined using GBF (Group Based Field) and Delaunay topological collections.
David Harel and his group developed an approach to modelling at different levels of representation~\cite{harelvisual}.
They use an object-oriented approach and define the cell as the basic building block. Their approach uses scenarios to define system behaviour and uses animation on a two-dimensional grid~\cite{gemcell}. Scenarios define cell behaviour related to interactions between molecules in the environment and their receptors on cell membranes. Another interesting application of their approach is the modelling of pancreatic organogenesis~\cite{harelpancreas}. In this application they show how molecular interactions affect cell growth and, in the end, the growth of the mammalian pancreas. A three-dimensional visualisation is used to visualise the pancreatic organogenesis process.
Another tool used to visualise biological systems according to a model of the system at the molecular level is Virtual Cell~\cite{virtualcell}. This tool is based on a deterministic numerical simulation of the model, which is defined using differential equations.
Technology has helped biologists to observe biological systems at microscopic level, down to the molecular level. More knowledge has been gained about cell structure and behaviour. Although it is possible to track the causes of certain phenomena at cellular level down to specific biochemical reactions occurring at molecular level, we are still far from being able to entirely explain cell behaviour in terms of the biochemical reactions occurring within the cell and its environment. Moreover, based on natural observation and experiments, it is also possible to give an accurate description of all phenomena observable at cellular level.
We define an approach to model biological systems at different levels of representation. We consider the molecular level, in which organic molecules interact through biochemical reactions, as the lowest level of our hierarchy. At this level, the representation is merely a mathematical formalisation of biochemical reactions, with no visualisation. The representation of higher levels of organisation and dynamics such as a cell, a tissue, an organ or an organism, is inspired by observations of the system behaviour under normal conditions. At these levels the mathematical formalisation mimics organisation and dynamics observed in nature and replicated in controlled experiments, with the aim to give a visualisation, possibly in three dimensions, of the modelled phenomena. We refer to these higher levels as visual levels, in contrast to the molecular level, for which we do not provide any visualisation. Each visual level has to be linked to the molecular level by a formal description of the way biochemical reactions cause transitions of visual states. However such causal relations are not always fully understood in terms of biological theories. Therefore, transitions of visual states may sometimes be governed by rates observed in nature rather than by the underlying biochemistry modelled at molecular level or may be directly associated with the introduction or removal of biochemical signals, whose accumulation or degradation is not sufficiently explained in terms of biological theories.
This approach allows us to introduce changes to cells and their environment at the molecular level and observe the impact of such changes at the visual level. Changes at the molecular level may range from varying the concentration of a specific molecule or introducing a new kind of molecule with specific properties to the introduction of a plasmid within the cell or a virus in its environment.
The work of David Harel and his group~\cite{harelvisual,gemcell} and the work of Hammel and Prusinkiewicz~\cite{HamPru96} are similar to our approach, in the sense that they also represent both the molecular and the cellular level. However, their approaches consider molecules only as external factors that affect cell behaviour. The molecular level is modelled only in the environment and on the cell membrane, but not inside the cell.
Moreover in David Harel's approach, all rules are deterministic and randomness can only be introduced by defining random initial states of the system. In contrast, in our approach, we model and simulate stochastic behaviour by introducing reaction rates and controlling the occurrences of reactions using Gillespie's algorithm.
Slepchenko, Schaff, Macara and Loew take an approach closer to ours by developing a tool to model biological systems at molecular level and visualise them at cellular level~\cite{virtualcell}. Their approach, however, is based on differential equations, which are deterministically simulated using numerical simulation methods.
In this paper we use Spatial CLS to model both the molecular and visual level. We consider the cellular level as visual level and we use the cell cycle of budding yeast as a case study. However, spatial information is only introduced at the visual level, while the molecular level is modelled using a subset of Spatial CLS with no spatial information, which is equivalent to Stochastic CLS.
The simulation is performed using a variant of Gillespie's algorithm which extends the variant introduced by Basuki, Cerone and Milazzo~\cite{SCLSMaude} with the addition of a mechanism for choosing individual cells, governed by the exponential distribution of reaction times.
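For orientation, the following is a minimal sketch of the direct Gillespie method for plain stochastic rewriting over species counts; it does not implement the cell-selection extension just mentioned, and the rule format is an assumption of the example.
\begin{verbatim}
import math
import random

def gillespie(state, rules, t_end, seed=0):
    """Direct-method SSA. state: dict species -> count;
    rules: list of (reactants, products, k), where reactants and
    products are dicts of stoichiometric coefficients."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        # propensity = k times the number of reactant combinations
        props = [k * math.prod(math.comb(state.get(s, 0), m)
                               for s, m in reac.items())
                 for reac, _, k in rules]
        a0 = sum(props)
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)        # exponential waiting time
        r = rng.uniform(0.0, a0)        # choose a rule proportionally
        for (reac, prod, _), a in zip(rules, props):
            r -= a
            if r <= 0.0:
                for s, m in reac.items():
                    state[s] -= m
                for s, m in prod.items():
                    state[s] = state.get(s, 0) + m
                break
    return state

# toy usage: A + B -> C at rate 0.01
print(gillespie({"A": 100, "B": 100},
                [({"A": 1, "B": 1}, {"C": 1}, 0.01)], 5.0))
\end{verbatim}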
\section{Using Spatial CLS for Visualisation}\label{CLS}
Calculi of Looping Sequences is a class of formalisms developed at the University of Pisa for the purpose of modelling biological systems~\cite{milazzo,barcarmagmilpar08}. In this section we focus on one variant of the calculi called Spatial CLS~\cite{SpatialCLS}, which supports the modelling of details about position and movement of the components in the system.
A Spatial CLS model contains a term and a set of rewrite rules. The term describes the initial state of the system, and the rewrite rules describe the events which may cause the system to evolve. We start by defining the syntax of terms. We assume a possibly infinite alphabet $\mathcal{E}$ of symbols ranged over by $a,b,c,\ldots$ and a set $\mathcal{M}$ of names of movement functions. Each symbol represents an atomic component of the system.
\begin{definition} \emph{Terms} $T$, \emph{Branes} $B$ and \emph{Sequences} $S$ are
given by the following grammar:
\begin{eqnarray*}
T & \qqop{::=} & \lambda \quad\big|\quad (S)_d \quad\big|\quad \Loop{B_d} \ensuremath{\;\rfloor\;} T \quad\big|\quad T \ensuremath{\;|\;} T \\
B & \qqop{::=} & \lambda \quad\big|\quad (S)_d \quad\big|\quad B \ensuremath{\;|\;} B \\
S & \qqop{::=} & \epsilon \quad\big|\quad a \quad\big|\quad S \cdot S
\end{eqnarray*}
\normalsize
where $a$ is a generic element of $\mathcal{E}$, $\epsilon$ represents the empty sequence, and $d \in \mathcal{D} = ((\mathbb{R}^n\times\mathcal{M})\cup\{.\})\times\mathbb{R}^+$. We denote with $\mathcal{T}$, $\mathcal{B}$ and $\mathcal{S}$ the infinite sets of terms, branes and sequences, respectively.
\end{definition}
There are four operators in the formalism. A sequencing operator $\_\cdot\_$ is used to compose some components of the system in a structure that has the properties of a sequence. For instance, sequencing can be used to model DNA/RNA strands or proteins. The looping operator $\Loop{\_}$ and containment operator $\_\ensuremath{\;\rfloor\;}\_$ are always applied together, hence they can be considered as a single binary operator $\Loop{\_} \ensuremath{\;\rfloor\;} \_$ applied to one brane and one term. Looping and containment allow the modelling of membranes and their contents. Finally, the parallel composition operator $\_\ensuremath{\;|\;}\_$ is used to represent the juxtaposition of entities in the system. Brackets can be used to indicate the order of application of the operators, and we assume $\Loop{\_} \ensuremath{\;\rfloor\;} \_$ to have precedence over $\_\ensuremath{\;|\;}\_$.
In Spatial CLS there are two kinds of terms, positional and non-positional terms. Positional terms have spatial information, while non-positional terms do not. A term with a `.` in its spatial information represents a non-positional term.
Every positional term is assumed to occupy a space, modelled as a sphere. The spatial information of a term contains three parts: the position of the centre of the sphere, its radius and a movement function. In this way every object is assumed to have an autonomous movement. Variable $d$ in the above syntax models such spatial information. In this paper we only consider terms without autonomous movement. Therefore, we can omit movement functions from our spatial information.
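As a reading aid, the following sketch encodes a fragment of these terms as Python data; the names and the choice of tuples for parallel composition are ours, not part of the calculus.
\begin{verbatim}
from dataclasses import dataclass
from typing import Optional, Tuple, Union

Spatial = Optional[Tuple[Tuple[float, ...], float]]  # (centre, radius) or None

@dataclass(frozen=True)
class Seq:
    """A sequence (S)_d, e.g. a protein; spatial=None is the
    non-positional case (the '.' in the grammar)."""
    symbols: Tuple[str, ...]
    spatial: Spatial = None

@dataclass(frozen=True)
class Loop:
    """Looping and containment: a brane (parallel composition of
    sequences) enclosing a term."""
    brane: Tuple[Seq, ...]
    content: Tuple["Term", ...]
    spatial: Spatial = None

Term = Union[Seq, Loop]   # a parallel composition is a tuple of Terms

# a membrane made of the sequence a.b, radius 2 at the origin,
# containing two copies of the free symbol c:
cell = Loop(brane=(Seq(("a", "b")),),
            content=(Seq(("c",)), Seq(("c",))),
            spatial=((0.0, 0.0), 2.0))
\end{verbatim}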
We now define patterns, which are terms enriched with variables. We assume a set of position variables $PV$, a set of symbol variables $\mathcal{X}$ and a set of sequence variables $SV$. We denote by \textbf{T} the set of all instantiation functions $\tau : PV \rightarrow \mathcal{D}$ for the position variables.
\begin{definition} \emph{Left Brane Patterns} $BP_L$, \emph{Sequence Patterns} $SP$ and \emph{Right Brane Patterns} $BP_R$ are given by the following grammar:
$
\begin{array}{rcl}
BP_L & \qqop{::=} & (SP)_u \quad\big|\quad BP_L \ensuremath{\;|\;} BP_L \\
BP_R & \qqop{::=} & (SP)_g \quad\big|\quad BP_R \ensuremath{\;|\;} BP_R \\
SP & \qqop{::=} & \epsilon \quad\big|\quad a \quad\big|\quad SP \cdot SP \quad\big|\quad \tilde{x} \quad\big|\quad x
\end{array}
$
where $u\ \in\ PV$, $x\ \in\ \mathcal{X}$, $\tilde{x}\ \in\ SV$ and $g \in \mathbf{T}$
\end{definition}
\begin{definition} \emph{Left Patterns} $P_L$ and \emph{Right Patterns} $P_R$ are given by the following grammar:
\begin{eqnarray*}
P_L & \qqop{::=} & (SP)_u \quad\big|\quad \Loop{BP_{LX}}_u \ensuremath{\;\rfloor\;} P_{LX} \quad\big|\quad P_L \ensuremath{\;|\;} P_L\\
BP_{LX} & \qqop{::=} & BP_L \quad\big|\quad BP_L\ensuremath{\;|\;}\bar{X} \quad\big|\quad \bar{X}\\
P_{LX} & \qqop{::=} & P_L \quad\big|\quad P_L\ensuremath{\;|\;} X\\
P_R & \qqop{::=} & \epsilon \quad\big|\quad (SP)_g \quad\big|\quad \Loop{BP_{RX}}_g \ensuremath{\;\rfloor\;} P_R \quad\big|\quad P_R\ensuremath{\;|\;} P_R\quad\big|\quad X\quad\big|\quad \bar{X} \\
BP_{RX} & \qqop{::=} & BP_R \quad\big|\quad BP_R\ensuremath{\;|\;}\bar{X} \quad\big|\quad \bar{X}
\end{eqnarray*}
\normalsize
where $u\ \in\ PV$, $x\ \in\ \mathcal{X}$, $\tilde{x}\ \in\ SV$ and $g \in \mathbf{T}$.
\end{definition}
We denote the set of all left patterns by $\mathcal{P}_L$ and the set
of all right patterns by $\mathcal{P}_R$. We denote by Var($P$) the set of all variables appearing in a pattern
$P$, including position variables from $PV$.
\begin{definition} A \emph{rewrite rule} is a 4-tuple ($f_c, P_L, P_R, k$), usually written as
\[[f_c] P_L \stackrel{k}{\mapsto} P_R\]
where $f_c : \mathbf{T} \rightarrow \{tt,f\! f \}$, $k \in R^+$, Var($P_R$) $\subseteq$ Var($P_L$), and each function g appearing in $P_R$ refers only to position variables in Var($P_L$).
\end{definition}
Rewrite rules are used to define reactions that may occur in a system. The left and right patterns of a rule are matched against the term that represents the current state of the system, to check its applicability. It is possible to define a rewrite rule that has a precondition, defined by $f_c$. A precondition is checked against any term that matches the left pattern of a rule. The rate constant $k$ models the propensity of a reaction: the value $1/k$ represents the expected duration of a reaction involving a given combination of reactants.
Barbuti, Maggiolo-Schettini, Milazzo and Pardini~\cite{SpatialCLS} define a semantics for Spatial CLS, as a Probabilistic Transition System. In this semantics, they consider each evolution step of a biological system as composed of two phases: (1) application of at most one reaction; (2) updating of positions according to the movement functions. In this paper we omit movement functions, simplifying the evolution step into one phase only.
The combination of looping and containment operators and parallel composition operators in Spatial CLS defines a notion of layers in the terms. The parallel composition operators model the objects as multisets, while the looping and containment operators create boundaries between these multisets. Objects within the same boundary are considered to be in the same layer. We need to define some functions in order to calculate the rate of a rule application. In the following definitions, we denote the multiset of top-level elements appearing in a pattern P by \={P}, and assume the function $\mathbf{n} : \mathcal{T} \times \mathcal{T} \rightarrow \mathbb{N}$ such that $\mathbf{n}(T_1,T_2)$ gives the number of times $T_1$ appears at top-layer in $T_2$. We define $\tau$ as instantiation function for position variables and $\sigma$ as instantiation function for other variables.
$
\begin{array}{rlr}
comb(P_{L1}\ensuremath{\;|\;} P_{L2},\tau,\sigma) = & comb(P_{L1},\tau,\sigma) \cdot comb(P_{L2},\tau,\sigma) & \\
comb(\Loop{BP_{LX}}_u \ensuremath{\;\rfloor\;} P_{LX},\tau,\sigma) = & comb'(BP_{LX},\tau,\sigma) \cdot comb'(P_{LX},\tau,\sigma) & \\
comb((SP)_u,\tau,\sigma) = & 1 & \\
comb(P_L\ensuremath{\;|\;} U,\tau,\sigma) =
& \prod_{T\in \bar{P_L\tau\sigma}} \binom{\mathbf{n}((P_L\ensuremath{\;|\;} U)\tau\sigma, T )}{\mathbf{n}(P_L\tau\sigma, T )} \cdot comb(P_L,\tau,\sigma), &
U \in BV \cup TV \\
comb'(P_L,\tau,\sigma) = & comb(P_L,\tau,\sigma) & \\
binom(T_1,T_2,T_3) = & \prod_{T\in \bar{T_1}} \prod_{i=1}^{\mathbf{n}(T_3,T)}
\frac{\mathbf{n}(T_2,T)+i}{\mathbf{n}(T_2,T)-\mathbf{n}(T_1,T)+i}& \\
\end{array}
$
\normalsize
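As a small illustration of the $binom$ factor defined above, the following Python sketch evaluates it on terms represented as multisets of their top-level elements; this encoding and the function name are our own simplification.
\begin{verbatim}
# binom(T1,T2,T3): product over top-level elements T of T1 of
# prod_{i=1..n(T3,T)} (n(T2,T)+i) / (n(T2,T)-n(T1,T)+i),
# with terms represented as Counters of top-level elements.
from collections import Counter

def binom_factor(t1: Counter, t2: Counter, t3: Counter) -> float:
    result = 1.0
    for elem in t1:
        n1, n2, n3 = t1[elem], t2[elem], t3[elem]
        for i in range(1, n3 + 1):
            result *= (n2 + i) / (n2 - n1 + i)
    return result

# Toy usage: T1 = a, T2 = a, T3 = a|a yields (1+1)/1 * (1+2)/2 = 3.0
print(binom_factor(Counter("a"), Counter("a"), Counter("aa")))
\end{verbatim}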
Given a finite set of rewrite rules $\mathcal{R}$, let $\mathcal{R}_B \subseteq \mathcal{R}$ be the set of all brane rules and let $\stackrel{R,T,c}{\rightarrow}$
with $R \in \mathcal{R}$, $T \in \mathcal{T}$ and $c \in \mathbb{N}$, be the least labeled transition
relation on terms satisfying inference rules in Figure~\ref{Semantics}.
\begin{figure}[!htb]
\begin{center}
$
\begin{array}{cccccc}
\multicolumn{2}{c}{(R: [f_c] P_L\stackrel{k}{\mapsto} P_R) \in \mathcal{R}} & & f_c (\tau) = tt & \tau \in \mathbf{T} & \sigma \in \Sigma \\ \hline
& \multicolumn{4}{c}{P_L\tau\sigma \ltrans{R,P_L\tau\sigma,comb(P_L,\tau,\sigma)} P_R\tau\sigma} & \\ \\
B \ltrans{R,T,c} B' & R \in \mathcal{R}_B & & & \multicolumn{2}{l}{T_1 \ltrans{R,T,c} T'_1} \\ \cline{1-2} \cline{4-6}
\multicolumn{2}{c}{\Loop{B}_d \ensuremath{\;\rfloor\;}{T_1} \ltrans{R,\Loop{B}_d \ensuremath{\;\rfloor\;}{T_1},c} \Loop{B'}_d \ensuremath{\;\rfloor\;}{T_1}} &
& \multicolumn{3}{c}{\Loop{B}_d \ensuremath{\;\rfloor\;}{T_1} \ltrans{R,T,c} \Loop{B}_d \ensuremath{\;\rfloor\;}{T'_1}} \\ \\
& \multicolumn{3}{c}{T_1 \ltrans{R,T,c} T'_1} & & \\ \cline{2-4}
& \multicolumn{3}{c}{T_1\ensuremath{\;|\;} T_2 \ltrans{R,T,c \cdot binom(T,T_1,T_2)} T'_1\ensuremath{\;|\;} T_2} & &
\end{array}
$
\caption{Inference rules used for calculating rates of rewrite rules}
\label{Semantics}
\end{center}
\end{figure}
The following definition gives all the reactions enabled in a state, by also taking
into account the subsequent rearrangement:
\[ Appl(R, T) = \{(T_r, c, T', T'') \mid T \ltrans{R,T_r,c} T' \land T'' = Arrange(T') \neq \perp\} \]
The number $m_T^{(R)}$ of different reactant combinations enabled in state $T$, for
a reaction $R$, and the total number $m_T$ of reactions considering a set of rules $\mathcal{R}$,
are defined as:
\[ m_T^{(R)} = \sum_{(T_r,c,T',T'')\in Appl(R,T)} c\] and \[m_T = \sum_{R\in \mathcal{R}} m_T^{(R)}\]
Let $T$ describe the state of the system at a certain step, and $k_R$ denote the rate
associated with a rewrite rule $R$. At each step of the evolution of the system, in
order to assume that at most one reaction can occur, we have to choose a time
interval $\delta t$ such that $(\sum_{R\in \mathcal{R}} k_R m_T^{(R)})\delta t \leq 1$. Given a set of rewrite rules $\mathcal{R}$, we
choose an arbitrary value $N$ such that for each rule $R \in \mathcal{R}$ it holds $0 < k_R/N \leq 1$.
Then we compute the time interval for a step as $\delta t = 1/(Nm_T)$, thus satisfying the above
condition. The value of $N$ also determines the maximum permitted length of each
step as $1/N$ time units.
The probability that no reaction happens in the time interval $\delta t$ is:
\[\bar{p}_T = 1 - \sum_{R\in \mathcal{R}} (\sum_{(T_r,c,T',T'')\in Appl(R,T)} \frac{k_R}{Nm_T}c)\]
and the probability $P(T_1 \rightarrow T_2, t)$ of reaching state $T_2$ from $T_1$ within a time interval
$\delta t $ after $t$ is such that:
\[P(T_1 \rightarrow T_2, t) = \sum_{R\in \mathcal{R}} (\sum_{(T_r,c,T',T_2)\in Appl(R,T_1)} \frac{k_R}{Nm_{T_1}}c)+
\left\{ \begin{array}{ll} \bar{p}_{T_1} & \mbox{if $T_1 = T_2$}\\
0 & \mbox{otherwise} \end{array}\right.\]
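As a quick illustration of these quantities, the following Python sketch computes $m_T$, $\delta t$ and $\bar{p}_T$ from per-rule rates and combination counts; abstracting $Appl$ into a plain dictionary of total combination counts is our own simplification.
\begin{verbatim}
# rates: rule -> k_R; combos: rule -> m_T^(R) (total combinations c).
def step_quantities(rates, combos, N):
    m_T = sum(combos.values())
    # max(1, m_T) mirrors the second inference rule of the semantics
    dt = 1.0 / (N * max(1, m_T))
    p_none = 1.0
    if m_T > 0:
        p_none -= sum(rates[r] * combos[r] / (N * m_T) for r in combos)
    return m_T, dt, p_none

m_T, dt, p_none = step_quantities({"R1": 0.025}, {"R1": 4}, N=1)
print(m_T, dt, p_none)  # 4, 0.25, 0.975
\end{verbatim}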
The semantics of Spatial CLS is defined as a Probabilistic Transition System as follows.
\begin{definition} Given a finite set of rewrite rules $\mathcal{R}$, the semantics of Spatial CLS is the least relation satisfying the following inference rules:
\\
$
\begin{array}{p{0.5in}ccc}
& (T_r, c, T', T_2) \in Appl(R, T_1) & & R \in \mathcal{R} \\
& p = P(T_1 \rightarrow T_2 , t) & & \delta t = \frac{1}{Nm_{T_1}} \\ \cline{2-4}
& & \langle T_1, t\rangle \stackrel{p}{\rightarrow}
\langle T_2, t + \delta t\rangle & \\ \\
& p = P(T \rightarrow T' , t) & & \delta t = \frac{1}{N \max(1,m_{T})} \\ \cline{2-4}
& & \langle T, t\rangle \stackrel{p}{\rightarrow}
\langle T', t + \delta t\rangle &
\end{array}
$
\end{definition}
Application of a reaction may still result in non-well-formed terms (e.g.~a term in which two objects collide). To solve this problem, we define the algorithm described in Section \ref{Cell Cycle} to rearrange objects in the system. In the rest of this section we describe some assumptions about space and spatial information used in our approach.
First, we limit the directions in which any object in the system can move. We assume that the space occupied by each object in the system is in $\mathbb{R}^n$ and that each object can only move along one axis at a time. For every axis there are two possible directions, so in total there are $2n$ possible directions. Therefore in $\mathbb{R}^3$ any object can move along 6 possible directions.
Positional terms in Spatial CLS have a size, which is defined as the radius of the sphere that encloses the term. It is possible to define reactions that modify this size. In our algorithm, we assume that the maximum sizes of all objects in the same layer are known.
We assume the existence of an $n$-dimensional grid, which divides the space occupied by a biological system into cubes (we are working in $n=3$ dimensions) with a fixed size defined in such a way that every object in the system fits inside a cube.
Initially objects of a biological system are positioned inside cubes. Reactions defined for a biological system can create new objects, move existing objects to different positions or remove objects from the system. New objects are also positioned inside cubes.
Therefore, the only time we need rearrangement is when two objects are inside the same cube. In this case, we will need to run the rearrangement algorithm explained in Section \ref{Cell Cycle}.
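The occupancy test behind this decision can be sketched in a few lines of Python: positions in $\mathbb{R}^3$ are mapped to integer cube indices, and a rearrangement is needed exactly when two objects fall into the same cube (the function names are ours).
\begin{verbatim}
from math import floor

def cube_index(pos, cube_size):
    # map a point in R^n to the integer index of its enclosing cube
    return tuple(floor(c / cube_size) for c in pos)

def needs_rearrangement(positions, cube_size):
    seen = set()
    for p in positions:
        idx = cube_index(p, cube_size)
        if idx in seen:
            return True   # two objects share one cube
        seen.add(idx)
    return False

print(needs_rearrangement([(0.2, 0.1, 0.0), (0.8, 0.4, 0.3)], 1.0))  # True
print(needs_rearrangement([(0.2, 0.1, 0.0), (1.8, 0.4, 0.3)], 1.0))  # False
\end{verbatim}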
\section{Levels of Representation in Spatial CLS}
In this section we define an approach to model biological systems at different levels of representation using Spatial CLS. We distinguish between a molecular level, in which rewrite rules are used to model biochemical reactions among molecules, and one or more visual levels, in which rewrite rules define the dynamics of a higher level of organisation of the biological system under analysis.
These rewrite rules refer to a single level of representation. Therefore, we call them \textit{horizontal rules}.
In this paper we consider only the cellular level as a visual level. At visual level the state of the system, which is called visual state, is defined using the spatial information in positional terms of the system. A visual state describes three kinds of information:
\begin{enumerate}
\item spatial information;
\item a stage of the system evolution, which we call visual stage;
\item information on whether that stage has been visualised.
\end{enumerate}
Examples of stages of the system at cellular level are a small cell at the beginning of the growing phase or a cell with two nuclei during mitosis. Within a specific stage, state transitions are triggered by rewrite rules that modify spatial information and tag the current stage of the system evolution as ``visualised''.
In this way, we can attain visualisation using spatial information. Moreover, visual states represent both the biological stage of the system and the status of the visualisation,
while rewrite rules control the flow of the visualisation. Visualisation can then be used to simulate the behaviour of the system and to perform comparison with and prediction of in vivo and in vitro experiments.
At molecular level we can see biological systems as composed of molecules which are not associated with spatial information in our model. The state of the system is represented by the combination of molecular populations. State transitions are defined as rewrite rules representing biochemical reactions modifying molecular populations.
Therefore the molecular level is modelled using a subset of Spatial CLS equivalent to Stochastic CLS.
To model a biological system, we can start from the visual level. The information about the system behaviour at this level is purely descriptive. It is based on observation of visible events independently of the biochemical processes that cause them. Such visual events are modelled through transitions of visual states, whose spatial information is defined in such a way to mimic visible events observed during in vivo and in vitro experiments.
Molecules are confined within membranes using the looping and containment operator. However, such a confinement is logical rather than spatial. Chemical reactions are modelled by rules with no visual effect and then linked to the visual level by \textit{vertical rules}, which
control the transition of stage in the system evolution, by evaluating conditions at molecular level and checking that the current stage has been visualised.
\section{Case Study: Cell Cycle}\label{Cell Cycle}
In this section we illustrate our approach using the cell cycle of budding yeast as a case study.
\subsection{Visual Level}\label{visual}
The cell cycle consists of four phases: $G_1$--$S$--$G_2$--$M$.
However, at cellular level, only phase $S$ fully characterises a visual stage, that is the stage where chromosomes inside the nucleus are replicated. Phase $G_1$ incorporates different steps of cell growth, phase $G_2$ does not have any visual counterpart and phase $M$ includes one visual stage corresponding to nucleus division.
In our model we consider only two steps in cell growth: the cell size before and after the growth. Therefore we define 4 visual stages:
\begin{enumerate}
\item small cell before growth (beginning of phase $G_1$);
\item big cell after growth (end of phase $G_1$);
\item chromosomes inside the nucleus (end of phase $S$);
\item cell with two nuclei (phase $M$ before cytokinesis).
\end{enumerate}
Based on the above explanation, we define three variables identifying visual stages:\begin{itemize}
\item cell radius;
\item number of nuclei in a cell;
\item number of chromosomes in a cell nucleus.
\end{itemize}
Table \ref{tab:visualstages} shows the values of these variables in each visual stage.
\begin{table*}
\centering
\begin{tabular}{||c|c|c|c|c||} \hline
Stage & cell radius & \# of nuclei & \# of chromosomes & avg. time (min) \\ \hline
1 & $3r/4$ & single & single & 40 \\ \hline
2 & $r$ & single & single & 30 \\ \hline
3 & $r$ & single & double & 25 \\ \hline
4 & $r$ & double & double & 5 \\ \hline
\end{tabular}
\caption{The four visual stages of cell-cycle}\label{tab:visualstages}
\end{table*}
State transitions are defined using Spatial CLS rewrite rules.
We adopt an example similar to the one presented by Barbuti, Maggiolo-Schettini, Milazzo and Pardini~\cite{SpatialCLS}.
We define four rewrite rules to model the cell cycle. Barbuti, Maggiolo-Schettini, Milazzo and Pardini consider the 24-hour mammalian cell cycle~\cite{SpatialCLS}. In this paper, we model the budding yeast cell cycle, whose duration is only about 100 minutes~\cite{Chenetal04}. The initial state of the system is defined by the following term:
\[(b)^L_{.,R}\ \rfloor\ (m)^L_{[(0,0,0),f],\frac{3r}{4}}\ \rfloor\ ((n)^L\ \rfloor\ (cr . gN2 . gB5\,|\, cr . gB2 . gC20) | stage_1) \]
The above term represents a sphere with radius $R$, which contains a cell positioned in its centre. The cell contains one nucleus, with 2 chromosomes inside. Each chromosome contains 2 genes, whose function will be explained in Section \ref{molecular}. The cell is initially in stage 1 (phase $G_1$). At this level the cell cycle is defined by the following rules:
\small
\begin{flushleft}
\begin{description}
\item $R_1 : \Loop{m}_{[p,f],\frac{3r}{4}} \ensuremath{\;\rfloor\;}{(X\ \ensuremath{\;|\;}\ stage_1)} \stackrel{0.025}{\longmapsto} \Loop{m}_{[p,f],r} \ensuremath{\;\rfloor\;}{(X\ensuremath{\;|\;} stage_1\ensuremath{\;|\;} visualised_1)}$
\item $R_2 : \Loop{m}_{[p,f],r} \ensuremath{\;\rfloor\;}{((n)^L_u\ \rfloor\ (cr . \tilde{x}\, |\, cr . \tilde{y})\ \ensuremath{\;|\;}\ stage_2)} \stackrel{0.033}{\longmapsto} (m)^L_{[p,f],r}\ \rfloor\ ((n)^L_u\ \rfloor\ (2cr . \tilde{x}\ |\ 2cr . \tilde{y})\ \ensuremath{\;|\;}\ stage_2\ \ensuremath{\;|\;} visualised_2)$
\item $R_3 : \Loop{n}_{[(0,0,0),f],\frac{2r}{5}}\ \rfloor\ (2cr . \tilde{x}\ |\ 2cr . \tilde{y})\ |\ stage_3 \stackrel{0.04}{\longmapsto} (n)^L_{[(-\frac{r}{2},0,0),f],\frac{2r}{5}}\ \rfloor (cr . \tilde{x}\ |\ cr . \tilde{y})\ |\ \Loop{n}_{[(\frac{r}{2},0,0),f],\frac{2r}{5}}\ \rfloor\ (cr . \tilde{x}\ |\ cr . \tilde{y})\ |\ stage_3\ |\ visualised_3$
\item $R_4 : \Loop{m}_{[p,f],r}\ \rfloor\ (\Loop{n}_u\ \rfloor\ X\ |\ \Loop{n}_v\ \rfloor\ Y\ensuremath{\;|\;}\ stage_4) \stackrel{0.2}{\longmapsto} \Loop{m}_{[p,f],\frac{3r}{4}}\ \rfloor\ (\Loop{n}_u\ \rfloor\ X\ |\ stage_4\ \ensuremath{\;|\;} visualised_4)\ | \Loop{m}_{[get\! pos,f],\frac{3r}{4}}\ \rfloor\ (\Loop{n}_u\ \rfloor\ Y\ \ensuremath{\;|\;}\ stage_4\ \ensuremath{\;|\;} visualised_4)$
\end{description}
\end{flushleft}
\normalsize
In the above rules we only model objects without autonomous movement. The function $f$ in their spatial information maps each position $p$ to the same position $p$.
Every cell is assumed to double its volume during the cell cycle. This is shown by rule $R_1$, which represents the growing process in phase $G_1$. By changing the cell radius from $\frac{3r}{4}$ to $r$, the volume is nearly doubled. Rule $R_2$ represents the chromosome replication inside the nucleus. It is represented by modifying the symbol $cr$ that precedes each chromosome to $2cr$. Rule $R_3$ represents the nucleus division, where the only nucleus inside a cell is duplicated into two identical nuclei. To avoid collisions, the two nuclei are moved in opposite directions. Finally rule $R_4$ represents the cytokinesis, which divides the cell and all its contents into two daughter cells with the same size and content. This is represented by (1) removing the mother cell, whose radius is $r$ and which contains two nuclei, (2) putting a daughter cell with radius $\frac{3r}{4}$ at the mother cell's position, and (3) putting a daughter cell with radius $\frac{3r}{4}$ at a new position determined by $get\! pos()$.
Symbols $stage_1, stage_2, stage_3, stage_4$ are used to model the current visual stage of a cell. Symbols $visualised_1, visualised_2, visualised_3, visualised_4$ decompose each visual stage into two visual states, which model whether the current stage has been visualised or not. Therefore, at visual level, rewrite rule $R_i$ models the transition between the two visual states that correspond to visual stage $i$, that is from the non-visualised to the visualised state of stage $i$.
Although in general rewrite rules at visual level modify spatial information, this is not the case for rule $R_2$, because chromosomes are logically positioned within the nucleus but do not have any quantitative spatial information associated with them. In fact rule $R_2$ just doubles the number of chromosomes; choices about where to visualise the duplicated chromosomes are purely aesthetic and are left to the implementation of the visualisation.
The constant associated with each rule defines the rate with which that rule is applied. The rate for rule $R_i$ is calculated as the inverse of the average duration for visual stage $i$. This ensures that the time needed for each stage to be visualised mimics the actual time observed in nature. The last column of Table~\ref{tab:visualstages} shows the average durations of the four stages.
\subsection{Molecular Level and Vertical Rules}\label{molecular}
To model the cell cycle at molecular level, we adopt the model of the budding yeast cell cycle introduced by Chen et al.~\cite{Chenetal04}. It is a very detailed model based on differential equations. There is also a variant of this model for the eukaryotic cell cycle~\cite{CsiNag06}. Li et al.~developed a simpler model of the budding yeast cell cycle using a boolean network~\cite{Li04}.
The cell cycle is controlled by complexes of cyclins and cyclin-dependent kinases. There are 4 kinds of cyclins involved in the cell cycle:\begin{itemize}
\item \emph{Cln3}, which starts phase $G_1$;
\item \emph{Cln2}, which induces the $G_1$/$S$ transition;
\item \emph{Clb5}, which controls the phase $S$;
\item \emph{Clb2}, which controls the phase $M$.
\end{itemize}
These four cyclins form complexes with \emph{Cdc28}.
We define rules that control the state of the system at molecular level. To define a link between the visual state (which is controlled by the horizontal rules at visual level) and the biochemical state at molecular level, we define 4 vertical rules that cause a transition to the next visual stage when a specific condition at molecular level is verified. Since these rules do not correspond to any time-consuming biochemical process but operate at a meta-level by providing a link between distinct representation levels, we define them as instantaneous by using $\infty$ as the value for their rates.
Let $cond_i$ be the condition at molecular level that triggers a transition from visual stage $i$ to the next visual stage. We define the vertical rule
\[ T_i : visualised_i | cond_i | stage_i \stackrel{\infty}{\longmapsto} cond_i | stage_{next(i)}\]
where \[ next(i) = \left\{
\begin{array}{ll}
1 & \mbox{if $i = 4$} \\
i+1 & \mbox{if $0 < i < 4$}
\end{array}
\right. \]
Symbol $visualised_i$ is introduced by the application of rule $R_i$, which marks the completion of the visualisation of stage $i$. The completion of the visualisation of the current stage is obviously a precondition for the transition to the next visual stage. Therefore, symbol $visualised_i$ appears as a precondition in vertical rule $T_i$, which defines the transition to the next visual stage $next(i)$. The removal of $visualised_i$ by rule $T_i$ enables rule $R_{next(i)}$ to be applied.
Vertical rules also allow the introduction and removal of biochemical signals whose accumulation or degradation is not sufficiently understood to be dealt with at molecular level. The forms of the vertical rules that deal with the introduction and removal of biochemical signals are as follows:
\[ \bullet\ \mbox{introduction,}\ T_i : visualised_i | cond_i | stage_i \stackrel{\infty}{\longmapsto} cond_i | stage_{next(i)} | signal\]
\[ \bullet\ \mbox{removal,}\ T_i : signal | visualised_i | cond_i | stage_i \stackrel{\infty}{\longmapsto} cond_i | stage_{next(i)}\]
We use the following convention for naming objects. Molecule names start with capital letters, except in some cases where the molecules could be in two different statuses. For example, we prefix the molecule names with $i$ to indicate that this molecule is in inactive status. We also prefix the molecule names with $p$ when the molecule is in phosphorylated status. Names starting with $g$ are used for genes. We use '-' to concatenate two names of molecules, indicating a complex formed by binding two molecules.
Cell cycle starts when a cell grows in phase $G_1$. This is triggered by a growth factor, which is present in the environment and binds with its receptor in the cell membrane. The resultant complex then triggers the production of cyclin \emph{Cln3}. Cyclin \emph{Cln3} (after binding with its kinase partner \emph{Cdc28}) activates \emph{SBF} and \emph{MBF}, the transcription factors for cyclins \emph{Cln2} and \emph{Clb5}. Genes $gN2$ and $gB5$ control the expression of cyclins \textit{Cln2} and \textit{Clb5}. Cyclin \emph{Cln2} controls the transition between phases $G_1$ and $S$. Cyclin \emph{Clb5} controls phase \emph{S}. At this point some molecules that are not needed in this phase, but will be needed in later phases, are temporarily deactivated. In phase $G_1$, \emph{Clb5} is deactivated by \emph{Sic1}. Later \emph{Cln2} forms a complex with \emph{Cdc28} and phosphorylates \emph{Sic1}, releasing \emph{Clb5}. \emph{Clb2}, which is a cyclin needed in mitosis, is bound with \emph{Sic1} in phase $G_1$. \emph{Cdc14}, which is needed in mitosis exit, is bound with \emph{Net1} in phase $G_1$. \emph{Cdc14}, which is abundant in this phase, also activates phosphorylated \emph{Sic1}, the \emph{Clb5} inhibitor.
Based on their duration, we classify reactions into four categories: very fast, fast, slow and very slow. We define four numerical values to characterise the reaction rates of these categories: 20, 5, 1 and 0.25. The higher the rate, the faster the reaction. Obviously, reaction times are many orders of magnitude smaller than the durations of the visual stages defined in Table~\ref{tab:visualstages}. Therefore, we introduce a speeding factor $s$, which defines the ratio between these magnitudes for the specific visual representation we are modelling. The actual rate of a reaction is then given by the product between the speeding factor and the numerical value corresponding to the category of that reaction.
To simplify the model, we do not define rules for the complexation of cyclins with their kinase partner (\emph{Cdc28}). Phase $G_1$ is therefore modelled as follows:
\small
$
\begin{array}{ll}
S_1 : & GF\, |\, (GFR\, |\, Y)^L\rfloor X \stackrel{20\cdot s}{\longmapsto} (iGFR\, |\, Y)^L\rfloor X\ |\ Cln3 \\
S_2 : & Cln3\,|\, iSBF\, |\, iMBF \stackrel{1\cdot s}{\longmapsto} Cln3\, |\, SBF\, |\, MBF \\
S_3 : & SBF\, |\, (n)^L\rfloor (\tilde{y}.gN2.\tilde{x}\, |\, Y) \stackrel{0.25\cdot s}{\longmapsto} SBF\, |\, (n)^L\rfloor (\tilde{y}.gN2.\tilde{x}\, |\, Y)\ |\ Cln2 \\
S_4 : & MBF\, |\, (n)^L\rfloor (\tilde{y}.gB5.\tilde{x}\, |\, Y) \stackrel{0.25\cdot s}{\longmapsto} MBF\, |\, (n)^L\rfloor (\tilde{y}.gB5.\tilde{x}\, |\, Y)\ |\ Clb5 \\
S_5 : & Sic1\ |\ Clb5 \stackrel{5\cdot s}{\longmapsto} Sic1-Clb5 \\
S_6 : & Net1\ |\ Cdc14 \stackrel{5\cdot s}{\longmapsto} Net1-Cdc14 \\
S_7 : & pSic1\ |\ Cdc14 \stackrel{20\cdot s}{\longmapsto} Sic1\, |\, Cdc14 \\
S_8 : & Sic1\, |\, Clb2 \stackrel{5\cdot s}{\longmapsto} Sic1-Clb2 \\
S_9 : & Cln2\ |\ Sic1-Clb5 \stackrel{5\cdot s}{\longmapsto} pSic1 \ |\ Clb5\ |\ Cln2 \\
\end{array}
$
\normalsize
The accumulation of \emph{Cln2} triggers the transition from phase $G_1$ to $S$~\cite{Chenetal04}. We introduce the following vertical rule:
\small
\[T_1 : visualised_1 | Cln2^{mc(Cln2,2)} | stage_1 \stackrel{\infty}{\longmapsto} Cln2^{mc(Cln2,2)} | stage_2 \]
\normalsize
to instantaneously perform the transition from stage 1 to stage 2, after rule $R_1$ has visualised the cell growth.
The function $mc(r,i)$ represents the minimum concentration of $r$ that is needed to trigger the transition to stage $i$.
Cyclin \emph{Clb5} forms a complex with its kinase partner \emph{Cdc28}, and triggers the duplication of chromosomes. Meanwhile \emph{Cln2} is not needed anymore, so it is degraded by \emph{SCF}. Protein \emph{SCF} also degrades phosphorylated \emph{Sic1}, enabling \emph{Clb5} to work optimally. The cyclin-dependent kinases (\emph{Cln2-Cdc28} and \emph{Clb5-Cdc28}) activate \emph{Mcm1}, which is the transcription factor for \emph{Clb2}. Expression of \emph{Clb2} is controlled by gene $gB2$. Cyclin \emph{Clb2} controls phase $M$. Active \emph{Cdh1}, whose function is to degrade \emph{Clb2}, is deactivated by the cyclin-dependent kinases. In this way the degradation of \emph{Clb2} is postponed until the end of phase $M$. Finally \emph{Clb2} is transcribed and then binds with \emph{Cdc28}.
Phase $S$ is therefore modelled as follows:
\small
$
\begin{array}{ll}
S_{10} : & pSic1\ |\ SCF \stackrel{1\cdot s}{\longmapsto} SCF \\
S_{11} : & Cln2\ |\ SCF \stackrel{1\cdot s}{\longmapsto} SCF\ \\
S_{12} : & Cln2\ |\ Cdh1 \stackrel{20\cdot s}{\longmapsto} Cln2\ |\ iCdh1 \\
S_{13} : & Clb5\ |\ Cdh1 \stackrel{20\cdot s}{\longmapsto} iCdh1 \ |\ Clb5 \\
S_{14} : & Clb5\ |\ iMcm1 \stackrel{0.25\cdot s}{\longmapsto} Mcm1 \ |\ Clb5 \\
S_{15} : & Mcm1\, |\, (n)^L\rfloor (\tilde{y}.gB2.\tilde{x}\, |\, Y) \stackrel{1\cdot s}{\longmapsto} iMcm1\, |\,
(n)^L\rfloor (\tilde{y}.gB2.\tilde{x}\, |\, Y)\ |\ Clb2 \\
\end{array}
$
\normalsize
The accumulation of $Clb5$ is the event that triggers the transition from phase $S$ to phase $G_2$. We introduce the following vertical rule:
\small
\[T_2 : visualised_2 | Clb5^{mc(Clb5,3)} | stage_2 \stackrel{\infty}{\longmapsto} Clb5^{mc(Clb5,3)} | stage_3 | SPN \]
\normalsize
to instantaneously perform the transition from stage 2 to stage 3, after rule $R_2$ has visualised chromosome duplication.
Besides changing the visual stage, the above rule also sends a signal (\emph{SPN}) to start the metaphase spindle. This signal is needed to activate \emph{Cdc15} in mitosis. Since it is unknown how to relate the accumulation of this signal to biochemical reactions at molecular level, we assume that this signal is available from the beginning of phase $G_2$.
The cyclin-dependent kinase (CDK) \emph{Clb2-Cdc28} is the main controller of phases $G_2$ and $M$. This CDK activates \emph{Mcm1}, allowing \emph{Clb2} to accumulate. It also degrades \emph{MBF} and \emph{SBF}, stopping the transcription of \emph{Cln2} and \emph{Clb5}. Cyclin \emph{Cln2} is then degraded by \emph{SCF}, while the degradation of \emph{Clb5} is regulated by \emph{Cdc20} and \emph{APC} (\emph{Anaphase Promoting Complex}). \emph{Mcm1} stimulates gene \textit{gC20} to produce \emph{Cdc20}, and \emph{APC} must be phosphorylated by \emph{Clb2-Cdc28} before it can bind with \emph{Cdc20}. During metaphase, the \emph{SPN} signal activates \emph{Cdc15}. Protein \emph{Cdc15} then phosphorylates \emph{Net1}, releasing \emph{Cdc14}. Protein \emph{Cdc14} is needed later in mitosis exit and also activates \emph{Cdh1}. Tumor suppressor \emph{Cdh1} is needed for \emph{Clb2} degradation.
\small
$
\begin{array}{ll}
S_{16} : & Clb2\, |\, iMcm1 \stackrel{20\cdot s}{\longmapsto} Mcm1 \, |\, Clb2 \\
S_{17} : & Clb2\, |\, MBF \stackrel{1\cdot s}{\longmapsto} Clb2\, |\, iMBF \\
S_{18} : & Clb2\, |\, SBF \stackrel{1\cdot s}{\longmapsto} Clb2\, |\, iSBF \\
S_{19} : & Mcm1\, |\, (n)^L\rfloor (\tilde{y}.gC20.\tilde{x}\, |\, Y) \stackrel{20\cdot s}{\longmapsto} iMcm1\, |\,
(n)^L\rfloor (\tilde{y}.gC20.\tilde{x}\, |\, Y)\ |\ Cdc20 \\
S_{20} : & Clb2\, |\, APC \stackrel{20\cdot s}{\longmapsto} APC-P \, |\, Clb2 \\
S_{21} : & APC-P\, |\, Cdc20 \stackrel{1\cdot s}{\longmapsto} APC-Cdc20 \\
S_{22} : & SPN\, |\, iCdc15 \stackrel{0.25\cdot s}{\longmapsto} SPN \, |\, Cdc15 \\
S_{23} : & Cdc15\, |\, Net1-Cdc14 \stackrel{0.25\cdot s}{\longmapsto} Net1 \, |\, Cdc14 \\
S_{24} : & APC-Cdc20\, |\, Clb5 \stackrel{1\cdot s}{\longmapsto} APC \\
S_{25} : & iCdh1 \, |\, Cdc14 \stackrel{0.25\cdot s}{\longmapsto} Cdh1\, |\, Cdc14 \\
\end{array}
$
\normalsize
The accumulation of $APC-Cdc20$ is the event that triggers the beginning of cytokinesis. We introduce the following vertical rule:
\small
\[T_3 : SPN | visualised_3 | APC-Cdc20^{mc(APC-Cdc20,4)} | stage_3 \stackrel{\infty}{\longmapsto} APC-Cdc20^{mc(APC-Cdc20,4)} | stage_4 \]
\normalsize
to instantaneously perform the transition from stage 3 to stage 4, after rule $R_3$ has visualised the nucleus division. It also removes the signal \emph{SPN}, which is not needed anymore in cytokinesis, but whose degradation is not clearly understood in terms of biochemical reactions.
The main activity in stage 4 is the degradation of \emph{Clb2} by \emph{APC} (with the help of \emph{Cdc20} or \emph{Cdh1}). Protein \emph{Cdc14} is also active in this stage, producing \emph{Sic1}, which is needed to return to phase $G_1$.
\small
$
\begin{array}{ll}
S_{26} : & Cdc14 \stackrel{5\cdot s}{\longmapsto} Sic1\, |\, Cdc14 \\
S_{27} : & APC-Cdc20\, |\, Clb2 \stackrel{1\cdot s}{\longmapsto} APC \\
S_{28} : & APC\, |\, Cdh1\, |\, Clb2 \stackrel{0.25\cdot s}{\longmapsto} APC\, |\, Cdh1
\end{array}
$
\normalsize
Finally, the transition from cytokinesis to phase $G_1$ is triggered by the accumulation of $Sic1$. We introduce the following vertical rule:
\small
\[T_4 : visualised_4 | Sic1^{mc(Sic1,1)} | stage_4 \stackrel{\infty}{\longmapsto} Sic1^{mc(Sic1,1)} | stage_1 \]
\normalsize
to instantaneously perform the transition from stage 4 back to stage 1, after rule $R_4$ has visualised cell division.
\subsection{Algorithms for Simulation and Visualisation}\label{algo}
The model of cell cycle defined using the formal approach
introduced in Section~\ref{visual} and Section~\ref{molecular}
is used as the input for a variant of Gillespie's simulation algorithm
which extends the variant introduced by
Basuki, Cerone and Milazzo~\cite{SCLSMaude}.
The extension consists of the selection of the cell
in which the current reaction has to occur
and the control of the visualisation.
\begin{description}
\item[Step 0] Input $M$ reactions $R_1,\ldots , R_M$, and $N$ values
$X_1,\ldots , X_N$ representing the initial numbers of each
of $N$ kinds $B_1,\ldots , B_N$ of molecules.
Initialise time variable $t$ to 0.
Calculate propensities $a_j$, for $j=1,...,M$.
Calculate $\sum_{j=1}^{M} a_j$.
\item[Step 1] If $visualised_l$ is present in the term
then visualise stage $l$ and execute vertical rule $T_l$.
\item[Step 2] If the space is fully occupied then stop simulation.
Otherwise generate $r_1$ and calculate $\tau$. Increment $t$ by $\tau$.
\item[Step 3] Generate $r_2$ and calculate $(\mu,\sigma)$.
\item[Step 4] Execute $R_\mu$ inside cell $\sigma$.
Update $X_1,\ldots , X_N$ and $a_1,\ldots , a_M$ according to the
execution of $R_\mu$.
\item[Step 5] Calculate $\sum_{j=1}^{M} a_j$.
Return to \textbf{Step 1}.
\end{description}
Gillespie defines the probability that reaction $R_j$ occurs as
proportional to $a_j$, the \textit{propensity} of reaction $R_j$.
The propensity of reaction $R_j$ is the product of its rate $c_j$ and
the number $h_j$ of distinct combinations of reacting molecules.
Since reactions are confined within cells, we need to extend Gillespie's
algorithm to choose in which cell the chosen reaction $R_{\mu}$ should occur.
Let $C$ be the number of cells and $X_k^i$ be the number of molecules of kind
$B_k$ in the $i$-th cell.
We define $X_k = \sum_{i=1}^C X_k^i$.
Let $a_j^i$ be the propensity of reaction $R_j$ occurring inside the $i$-th cell.
Then $a_j^i$ is defined as the product of $c_j$ by the number $h_j^i$ of distinct
combinations of reacting molecules of $R_j$ within the $i$-th cell.
We define $a_j = \sum_{i=1}^C a_j^i$.
If $t$ is the current simulation time, then $t + \tau$ represents the time
at which the next reaction occurs, with $\tau$ exponentially distributed with
parameter $a_0=\sum_{j=1}^{M}a_j$.
Time increment $\tau$,
the index $\mu$ of the reaction that occurs at time $t + \tau$ and
the index $\sigma$ of the cell in which such reaction occurs
are calculated as follows.
\begin{gather}
\tau = \frac{1}{a_0}\ln\left(\frac{1}{r_1}\right)
\label{taucalc}\\
(\mu,\sigma) = \text{the integers for which}\;\; \sum_{j=1}^{\mu} \sum_{i=1}^{\sigma-1} a_j^i <
r_2a_0 \leq \sum_{j=1}^{\mu} \sum_{i=1}^{\sigma} a_j^i \label{muchoice}
\end{gather}
where $r_1$ and $r_2$ are two random real numbers which are uniformly distributed
over interval [0,1].
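The following Python sketch shows how Equation~\ref{taucalc} and the pair selection of Equation~\ref{muchoice} can be realised with a flat cumulative scan over reaction-cell pairs; the nested-list encoding of the per-cell propensities $a_j^i$ is our own choice.
\begin{verbatim}
import math, random

def next_event(a):
    # a[j][i] = propensity of reaction j in cell i
    a0 = sum(sum(row) for row in a)
    r1, r2 = random.random(), random.random()
    tau = (1.0 / a0) * math.log(1.0 / r1)   # exponential time increment
    target, acc = r2 * a0, 0.0
    for j, row in enumerate(a):             # cumulative scan over pairs
        for i, aji in enumerate(row):
            acc += aji
            if acc >= target:
                return tau, j, i            # (tau, mu, sigma), 0-based
    return tau, len(a) - 1, len(a[-1]) - 1

random.seed(1)
print(next_event([[0.5, 0.1], [0.2, 0.2]]))
\end{verbatim}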
We assume the existence of a function, called $get\! pos()$, that
is responsible for finding the correct position for a newborn cell and for resolving the spatial conflicts that arise between cells. Naturally, the newborn cell should be attached to its parent cell. If we model the system in $n$-dimensional space, there are $2n$ candidate positions for this newborn cell. Function $get\! pos()$ will look for an empty position among these $2n$ positions. If it cannot find an empty position, it will choose one position and then push the other cells in that direction forward by one position. Since the space is limited by a sphere, if the last cell is adjacent to the sphere boundary it cannot be pushed forward. In this case $get\! pos()$ will search for any empty position for this cell. If no empty position can be found, then the simulation must stop.
Figure \ref{fig:algogrid} shows an example in 2-dimensional space. In the left picture, cell number 1 is about to divide, but all neighbouring positions are occupied by other cells. In the right picture, $get\! pos()$ chooses the position of cell number 2 as its target and divides cell number 1 into two newborn cells (grey colour). It pushes cell number 2 toward cell number 3, but cannot push cell number 3 forward because of the boundary. Cell number 3 is therefore moved to an adjacent empty position, along a different direction.
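As an illustration, the following Python sketch implements the neighbour search at the core of $get\! pos()$ on integer cube indices; the push-forward and global-search fallbacks described above are omitted, and all names are ours.
\begin{verbatim}
def free_neighbour(parent, occupied, n_dims=3):
    # try the 2n axis-aligned neighbours of the parent cube and
    # return the first free one, or None if a push is needed instead
    for axis in range(n_dims):
        for step in (1, -1):
            cand = list(parent)
            cand[axis] += step
            if tuple(cand) not in occupied:
                return tuple(cand)
    return None

print(free_neighbour((0, 0, 0), {(1, 0, 0), (-1, 0, 0)}))  # (0, 1, 0)
\end{verbatim}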
\begin{figure}[!htb]
\begin{center}
\includegraphics[width = 7 cm]{picture}
\caption{Application of $get\! pos()$}
\label{fig:algogrid}
\end{center}
\end{figure}
After executing reaction $R_{\mu}$ (step 4 of the algorithm), we need to update the molecular populations and propensity functions that are affected by application of $R_{\mu}$. Gibson and Bruck \cite{gibbru00} define a data structure to support an extension of Gillespie's First Reaction Method. We use their notion of dependency graph in order to simplify the process of updating molecular populations and propensity functions. In this way we need to update them only if they are affected by the application of reaction $R_{\mu}$.
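A minimal sketch of such a dependency-graph-driven update, assuming the graph has been precomputed as a mapping from each reaction to the reactions whose propensities it affects (all names here are ours):
\begin{verbatim}
def update_propensities(mu, depends, recompute, a):
    # after firing reaction mu, refresh only the affected propensities
    for j in depends[mu]:
        a[j] = recompute(j)
    return a

# Toy usage: firing reaction 0 affects the propensities of 0 and 1 only.
a = {0: 0.5, 1: 0.2, 2: 0.7}
print(update_propensities(0, {0: [0, 1]}, lambda j: a[j] * 0.9, a))
# {0: 0.45, 1: 0.18, 2: 0.7}
\end{verbatim}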
\subsection{Visualisation of Cell Cycle}
Figure~\ref{fig:visual} shows the visualisation of the cell cycle in our tool. The picture on the top left shows the initial stage of the system, in which the sphere that limits the proliferation space contains only one small cell. This cell grows (top centre) and duplicates its chromosomes (top right).
Nucleus division (bottom left) and cell division (bottom centre) complete the cell cycle. All newly born cells concurrently repeat the cell cycle and finally the simulation stops because the space is full of cells (bottom right).
\begin{figure}[!htb]
\begin{center}
\includegraphics[width = 12 cm]{prol}
\caption{Visualisation of cell cycle by our tool}
\label{fig:visual}
\end{center}
\end{figure}
The tool also allows us to model the existence of a virus in the environment where cells proliferate. The virus can infect a cell through its membrane and duplicate itself inside the cell. We want to relate the infection of a cell to the cell's capability to proliferate. We model a virus that synthesises a protein able to degrade the growth factor receptor, which is needed in the growing phase of a cell. The virus can then move from the cell through the cell membrane back into the environment. From there it can then spread to another cell. We can classify the infection level of a cell based on the number of viruses inside it. We define a threshold $virusTH$ and use it to classify the level of infection of a cell into three classes as follows.
\begin{itemize}
\item healthy cell, if no virus is inside the cell;
\item lightly infected cell, if the number of viruses inside it is less than $virusTH$;
\item severely infected cell, if the number of viruses inside it is greater than or equal to $virusTH$.
\end{itemize}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width = 12 cm]{virus}
\caption{Visualisation of virus attack}
\label{fig:virus}
\end{center}
\end{figure}
To represent the infection level of a cell, we use colours in our visualisation. We colour healthy cells orange, lightly infected cells green and severely infected cells blue. Figure~\ref{fig:virus} shows the visualisation of the virus attack in our tool. The tool also supports the generation of a report file, describing the details of the reactions occurring in the system. By combining our observations from the visualisation and the report, we can gain insight into the behaviour of the model. For instance, in the visualisation we may observe that a severely infected cell can no longer proliferate, while other infected cells can still proliferate. We can later analyse the report generated by the tool and find out that the observed situation is related to the stage of the cell at the moment when it is infected by the virus. If the cell is infected while it is still growing, this may cause the cell to lose its growing capability, which means that the cell can no longer proliferate. In contrast, if the cell is infected after the growing phase is already over, it can still proliferate.
\section{Conclusions and Future Work}
We have defined an approach to model biological systems at different
levels of representation: a molecular level in which rewrite rules are
used to model biochemical reactions among molecules, and one or more visual
levels, in which rewrite rules define the dynamics of a higher level of
organisation of the biological system under analysis.
We have chosen Spatial CLS to model all representation levels in order to
formally describe, for every visual level, the spatial information needed
to quantitatively define all visual details that are believed to be essential
for an effective visualisation at that level.
Visual details whose quantitative definition is a purely aesthetic matter
are not included in the formal model and their quantitative definition is
left to the implementation.
Each molecular or visual level is defined by an initial term and a set of
rewrite rules, which we called horizontal rules since they only
refer to that level.
Each visual level is linked to the molecular level by a meta-level
of vertical rules, which also cater for the lack of sufficient knowledge at
molecular level.
We have then presented the budding yeast cell cycle as a case study to
illustrate our approach, choosing the cellular level as the single visual
level.
Those visual details for which a quantitative characterisation is essential
for visualisation, such as the position and size of cells, have been formally
modelled, whereas irrelevant quantitative details, such as the position
within the nucleus of duplicated chromosomes, are not.
We have defined a variant of Gillespie's algorithm that exploits spatial
information to choose the cell in which the next reaction has to occur, according to the exponential distribution of reaction times,
and implemented the algorithm in a tool to illustrate the budding yeast cell
cycle case study.
In addition to the normal cell cycle, the tool supports the injection of a
virus in the environment where the cells proliferate.
The virus can enter a cell through its membrane, duplicate itself inside the
cell, and cause the degradation of the growth factor receptor.
The visualisation shows that some small infected cells can no longer proliferate
while big infected cells can still proliferate.
Although this specific situation is an obvious consequence of the degradation
effect the virus has on the growth factor receptor of the cell, it actually
illustrates that, in general, the visualisation of higher levels of organisation
has the potential to highlight important behaviours that would not
be captured through a formal analysis conducted at the molecular level.
As future work, we plan to extend the work by Basuki, Cerone and
Milazzo~\cite{SCLSMaude}, by implementing Spatial CLS within the MAUDE tool and parsing
the output generated by MAUDE to provide the appropriate input to our tool.
In this way, simulations performed using MAUDE could be visualised using the
tool.
Moreover, counterexamples generated by the model-checker associated with MAUDE
could be also visualised.
Finally we plan several extensions to the tool implementation, such as
\begin{itemize}
\item the definition of several visual levels, corresponding to different
magnifications of the biological system, with zooming capabilities to move
between consecutive visual levels;
\item the capabilities to save simulations and to re-visualise them at a later stage;
\item the structuring and visualisation of the textual information, currently
provided as a single report file, by attaching it to the visual objects for which
it is relevant.
\end{itemize}
\section{Introduction}
Neural-network-derived word embeddings are dense numerical representations of words that are able to capture semantic and syntactic information\cite{mikolov2013distributed}. Word embedding models are trained by capturing word relatedness\cite{hou2018enhanced} in a corpus, as derived from contextual co-occurrences. They have proven to be a powerful tool and have attracted the attention of many researchers over the last few years. The usage of word embeddings has improved various natural language processing (NLP) areas including named entity recognition\cite{do2018evaluating}, part-of-speech tagging\cite{santos2014learning}, and semantic role labelling\cite{he2018jointly, luong2013better}. Word embeddings have also given promising results on machine translation\cite{gouws2015bilbowa}, search\cite{ganguly2015word} and recommendation\cite{musto2016learning, ozsoy2016word}.
\\Similarly, there are many potential applications of embeddings in the academic domain, such as improving search engines, enhancing NLP tasks for academic texts, or journal recommendations for manuscripts. Published studies have mostly focused on generic text like Wikipedia\cite{levy2014neural,park2018conceptvector}, or informal text like reviews\cite{dos2014deep, lauren2018generating} and tweets\cite{masino2018detecting,yang2018using}. We aim to validate word embedding models for academic texts containing technical, scientific or domain-specific nuances such as exact definitions, abbreviations, or chemical/mathematical formulas. We evaluate the embeddings by matching articles to their journals. To quantify the match, we use the ranks derived by sorting the similarity of embeddings between each article and all journals. Furthermore, we plot the journal embeddings as a 2-dimensional scatter plot that visualizes journal relatedness\cite{dai2015document, hinton2003stochastic}.
\section{Data and environment}
In this study, we compare content models based on TFIDF, embeddings, and various combinations of both. This section describes the training environment and parameters, as well as the other model specifications used to create our content models.
\subsection{Dataset}
Previous studies have highlighted the benefits of learning embeddings in a context similar to the one in which they are later used\cite{lai2016generate, Truong2017Thesis}. Thus, we trained our models on the titles and abstracts of approximately 70 million scientific articles from 30 thousand distinct sources, such as journals and conferences. All articles are derived from the Scopus abstract and citation database\cite{url_scopus}. After tokenizing, removal of stopwords and stemming, the dataset contains a total of ca.~5.6 billion tokens (ca.~0.64 million unique tokens). The word occurrences in this training set follow a Pareto-like distribution as described by Wiegand et al\cite{wiegand2018word}. This distribution indicates that our original data has similar properties to standard English texts.
\subsection{TFIDF}
We used 3 TFIDF alternatives, all created by the TFIDF and the hasher from the pySpark mllib\cite{url_spark}. We controlled the TFIDF alternatives in two ways: (a) adjusting the vocabulary size and (b) adjusting the number of hash buckets. We label the TFIDF alternatives as ``vocabulary-size / number-of-hash-buckets''. Thus, we label the TFIDF configuration that has a vocabulary size of 10,000 and 10,000 hash buckets as TFIDF 10K/10K. To select the TFIDF sets, we measured the memory footprint of multiple TFIDF configurations against our accuracy metric (see Section 3 for the detailed definition). As seen in Table~\ref{table:tfidfperformance}, the performance on both title and abstract stagnates; the same is true for the memory usage. Given these results, we selected the 10K/10K, 5K/10K and 5K/5K configurations for our research as reasonable compromises between memory footprint and accuracy. We also do not expect significantly better performance for higher vocabulary sizes such as 20K.
\newcolumntype{L}[1]{>{\raggedright\arraybackslash}p{#1}}
\newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}}
\newcolumntype{R}[1]{>{\raggedleft\arraybackslash}p{#1}}
\begin{table}
\caption{TFIDF accuracy and memory usage vs variable hash-bucket and vocabulary size}
\centering
\begin{tabular}{L{8cm}|R{1.5cm}||C{1.2cm}|C{1.2cm}}
& \textbf{Memory} & \multicolumn{2}{c}{\textbf{Median Rank}} \\
\textbf{Vocabulary Size and Number of Hash Buckets} &\multicolumn{1}{c||}{(GB)} & Title& Abstract\\
\bottomrule
1k (1k/1k)& 5.13 & 183 & 44\\
4k (4k/4k)& 9.29 & 59 & 20\\
7k (7k/7k)& 10.85 & 42 & 16\\
10k (10k/10k)& 11.61 & 35 & 14\\
\end{tabular}
\label{table:tfidfperformance}
\end{table}
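For illustration, the following is a minimal sketch of how such TFIDF vectors can be produced with Spark; we use the DataFrame-based pyspark.ml API here, and the column names and toy data are our own, so the actual pipeline of the study may differ in its details.
\begin{verbatim}
from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, HashingTF, IDF

spark = SparkSession.builder.appName("tfidf-sketch").getOrCreate()
docs = spark.createDataFrame(
    [(0, "cell cycle of budding yeast"),
     (1, "word embeddings for academic text")], ["id", "text"])

tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
tf = HashingTF(inputCol="words", outputCol="tf",
               numFeatures=10000).transform(tokens)  # 10K hash buckets
tfidf = IDF(inputCol="tf", outputCol="tfidf").fit(tf).transform(tf)
tfidf.select("id", "tfidf").show(truncate=False)
\end{verbatim}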
\subsection{Embeddings}\label{embeddings}
Our word embeddings are obtained using a Spark implementation\cite{url_spark} of the word2vec skip-gram model with hierarchical softmax, as introduced by Mikolov et al\cite{mikolov2013efficient}. In this shallow neural network architecture, the word representations are the weights learned during a simple prediction task: given a word, the training objective is to maximize the mean log-likelihood of its context. We have optimized the model parameters by means of a word similarity task using external evaluation sets\cite{Finkelstein2001PlacingSI, bruni2014multimodal, Luong2013BetterWR} and consequently used the best-performing model (see Section~\ref{modelopt}) as the reference embedding model throughout this article (referred to as \textit{embedding}). Additionally, we created 4 variants of TFIDF combined with embedding. All embedding models are listed below:
\begin{table}
\centering
\begin{tabular}{L{3cm} L{9cm}}
\textit{- embedding:} & unweighted mean embedding of all tokens \\
\textit{- TFIDF\_embedding:} & tfidf-weighted mean embedding of all tokens \\
\textit{- 10K\_embedding:} & tfidf-weighted mean of the top 10,000 most occurring tokens\\
\textit{- 5K\_embedding:} & tfidf-weighted mean embedding of the top 5,000 most occurring tokens\\
\textit{- 1K\_6K\_embedding}: & tfidf-weighted mean embedding of the top 6,000 most occurring tokens excluding the 1,000 most occurring tokens.
\end{tabular}
\label{table:modelvariants}
\end{table}
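For illustration, the following sketch shows skip-gram training with Spark's Word2Vec using the final parameter choices reported in Section~\ref{modelopt}; the toy corpus and column names are ours, and minCount is lowered to 1 only so that the toy corpus is not filtered out (the study used 25).
\begin{verbatim}
from pyspark.sql import SparkSession
from pyspark.ml.feature import Word2Vec

spark = SparkSession.builder.appName("w2v-sketch").getOrCreate()
corpus = spark.createDataFrame(
    [(["protein", "kinase", "binds", "cyclin"],),
     (["embedding", "models", "capture", "relatedness"],)], ["tokens"])

w2v = Word2Vec(vectorSize=300, windowSize=5, minCount=1, maxIter=1,
               numPartitions=160, stepSize=0.025,
               inputCol="tokens", outputCol="vector")
model = w2v.fit(corpus)
model.getVectors().show(truncate=False)  # one 300-d vector per token
\end{verbatim}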
\section{Methodology}
To measure the quality of the embeddings, we calculate a ranking between each article and its corresponding journal. This ranking, calculated by comparing the embedding of the article with the embeddings of all journals, reflects the performance of the embeddings in a categorization task. Articles in 2017 are split into 80\%-20\% training and test sets. Within the training set, we average the embeddings per journal and define the result as the embedding of that journal. This study is limited to journals with at least 150 publications in 2017 and those which had papers in both test and training sets (roughly 3700 journals and 1.3 million articles). We calculate the similarity of embeddings between each article in the test set and all journals in the training set. We order the similarity scores such that rank one corresponds to the journal with the most similar embedding. We record the rank of the source journal of each article for evaluation. We do this for both title and abstract separately. We calculate the performance per set by combining the ranking results of all articles for that set into one score. We use the median and average for that: the average rank takes the total average of all ranks, while the median is the point at which 50\% of all ranks are higher and 50\% are lower. We keep track of the following results when ranking: the source journal rank, and the score as well as the name of the best matching journal, for both abstract and title. We furthermore monitor the memory usage and computation time. To plot the journal embeddings, we use PCA (Principal Component Analysis)-based tSNE. tSNE (t-Stochastic Neighbor Embedding) is a vector reduction method introduced by Maaten et al\cite{maaten2008visualizing}.
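As an illustration of this ranking procedure, the following Python sketch computes the rank of the true source journal for one article via cosine similarity; the random matrices merely stand in for the real article and journal embeddings. The median rank of a test set is then simply the median of these per-article ranks.
\begin{verbatim}
import numpy as np

def source_rank(article_vec, journal_mat, true_idx):
    # cosine similarity of one article against all journal embeddings
    sims = journal_mat @ article_vec / (
        np.linalg.norm(journal_mat, axis=1) * np.linalg.norm(article_vec))
    order = np.argsort(-sims)              # rank 1 = most similar journal
    return int(np.where(order == true_idx)[0][0]) + 1

rng = np.random.default_rng(0)
journals = rng.normal(size=(3700, 300))    # one mean embedding per journal
article = journals[42] + 0.1 * rng.normal(size=300)
print(source_rank(article, journals, true_idx=42))  # likely rank 1
\end{verbatim}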
\section{Results}
In this section, the results of our research are presented; the detailed discussion on the meaning and implications of these results are presented in section 5, Discussion.
\subsection{Model Optimization} \label{modelopt}
During optimization, we tested the effect of several learning parameters on training time and quality using three benchmark sets for evaluating word relatedness, i.e.~the WordSimilarity-353 \cite{Finkelstein2001PlacingSI}, MEN \cite{bruni2014multimodal} and the Rare Word \cite{luong2013better} test collections, which contain generic and rare English word pairs along with human-assigned similarity judgments. Only a few parameters, i.e.~the number of iterations, the vector size and the minimum word count for a token to be included in the vocabulary, had a significant effect on the quality. The learning rate was 0.025 and the minimum word count was 25. Our scores were close to the external benchmarks from the above studies. We manually investigated the differences; they were mostly due to word choice differences between academic and non-academic contexts. Indeed, the biggest difference was for the television--show pair (because in an academic context ``show'' would rarely relate to television). Table~\ref{table:embeddings} contains the average scores and training times when tuning each parameter while fixing the remaining ones. Our final and reference model is based on 300-dimensional vectors, a context window of 5 tokens, 1 iteration and 160 partitions.
\begin{table}
\caption{Average accuracy scores and computation times during training.}
\centering
\begin{threeparttable}
\begin{tabular}{L{4cm}|C{1.5cm}|C{2.5cm}|C{2.5cm}}
& \textbf{value} & \textbf{average score} & \textbf{training time}\\
\bottomrule
vector size & 100 & 0.447 & 3.2 h\\
& 200 & 0.46 & -\tnote{a} \\
& 300 & 0.51 & -\tnote{a} \\
\hline
no. of iteration & 1 & 0.446 & 2.94 h\\
& 3 & 0.457 & 4.48 h \\
& 6 & 0.46 & 7.1 h \\
\hline
min. word count & 15 & 0.467 & -\tnote{b} \\
& 25 & 0.473 & -\tnote{b} \\
& 50 & 0.447 & -\tnote{b} \\
\end{tabular}
\begin{tablenotes}
\item[a] ran in different cluster due to memory issues.
\item[b] not significant.
\end{tablenotes}
\end{threeparttable}
\label{table:embeddings}
\end{table}
\subsection{Ranking}
Figures~\ref{figure:titleRanks} and \ref{figure:abstractRanks} show the results of the categorization task via ranking measures for titles and abstracts. The rank indicates the position of the correct journal in the sorted list of all journals. These graphs show both the average and the median ranks, based on the cosine similarity between the article and journal embeddings. These embedding vectors, whether calculated by word2vec, TFIDF or their combinations, can be considered as the feature vectors used elsewhere for machine-learning tasks.
\begin{figure}
\includegraphics[width=4.5in]{Plots/Title_ranks}
\caption{Median and average title rankings}\label{figure:titleRanks}
\includegraphics[width=4.5in]{Plots/Abstract_ranks}
\caption{Median and average abstract rankings}\label{figure:abstractRanks}
\end{figure}
\begin{figure}
\includegraphics[width=4.5in]{Plots/Rank_distributions}
\caption{Rank distribution for the title and the abstract: Y-axis shows the fraction of articles.}\label{figure:distributions}
\end{figure}
\subsection{Rank Distribution}
Figure~\ref{figure:distributions} shows the distributions of the ranks for our default embeddings, TFIDF weighted embedding and the 10K/10K TFIDF. The figure plots the cumulative percentage of articles as a function of rank. The plot gives a detailed view of the ranks presented in Figures~\ref{figure:titleRanks} and~\ref{figure:abstractRanks}.\\
\subsection{Memory Usage and Computation Time}
Table~\ref{table:memoryUsage} shows the total memory usage of each test set in gigabytes. Moreover, it provides the absolute hit percentages for titles and abstracts, i.e.~the percentage of articles that have their source journal as the first result in the ranking. The table furthermore lists the median title rank and the median abstract rank, as visualized in Figures~\ref{figure:titleRanks} and~\ref{figure:abstractRanks}. Thus, this table gives an overview of the memory usage of the sets, combined with their accuracy on the ranking task.\\
We furthermore investigated the computational efficiency of the different content models. To simulate what can happen during an online application, we selected 1000 random articles and measured the time needed for the dot products between all pairs. The recorded time excluded input/output time, and all calculations started from cached data with optimized numpy matrix/vector calculations. Table~\ref{table:computationTimes} shows the computation times in seconds as well as the ratios. Dot products are generally faster for dense vectors than for sparse vectors; TFIDF vectors are typically stored as sparse vectors, while embeddings are dense vectors. Hence, we also created a dense-vector version of the TFIDF sets to isolate the effect of the sparse vs dense representation.
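The following Python sketch mirrors this timing experiment on random data, comparing 300-dimensional dense vectors against 10000-dimensional sparse TFIDF-like vectors; the chosen density and sizes are our own assumptions.
\begin{verbatim}
import time
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
dense = rng.normal(size=(1000, 300))           # embedding-like vectors
sparse = sp.random(1000, 10000, density=0.01,  # TFIDF-like, ~100 nnz/row
                   format="csr", random_state=0)

t0 = time.perf_counter()
dense @ dense.T                                # all pairwise dot products
t1 = time.perf_counter()
(sparse @ sparse.T).toarray()
t2 = time.perf_counter()
print(f"dense: {t1 - t0:.3f}s  sparse: {t2 - t1:.3f}s")
\end{verbatim}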
\begin{table}
\caption{Memory usage and performance for various content models}
\centering
\begin{tabular}{L{4cm}|C{1.5cm}||C{1.5cm}|C{1.5cm}||C{1.5cm}|C{1.5cm}}
& \textbf{Memory} & \multicolumn{2}{c||}{\textbf{Absolute Hit}} & \multicolumn{2}{c}{\textbf{Median Rank}} \\
&\multicolumn{1}{c||}{(GB)}&Title & Abstract& Title& Abstract\\
\bottomrule
tfidf 5K/5K & 9.82 & 5.42\% & 10.18\% & 50 & 27\\
tfidf 5K/10K & 11.47 & 6.49\% & 11.08\% & 38 & 15\\
\textbf{tfidf 10K/10K} & 11.61& 6.79\% & \textbf{11.32\%}& 35 & \textbf{14}\\
\textbf{embedding} & 3.13 & \textbf{7.92\%} & 9.24\% & \textbf{27} & 23\\
5K embedding & 3.13 & 6.34\% & 8.36\% & 42 & 27\\
10K embedding & 3.13 & 7.03\% & 8.76\% & 34 & 25\\
\textbf{tfidf embedding} & 3.13 & 7.89\% & 9.33\% & \textbf{27} & 22\\
1K 6K embedding & \textbf{3.06} & 5.16\% & 7.86\% & 64 & 31 \\
\end{tabular}
\label{table:memoryUsage}
\end{table}
\begin{table}
\caption{Comparing computation times between embeddings and TFIDF models}
\centering
\begin{tabular}{L{4cm}|R{1.5cm}|R{1.5cm}||R{1.5cm}|R{1.5cm}}
& \multicolumn{2}{c|}{\textbf{Seconds}} & \multicolumn{2}{c}{\textbf{Ratio vs Embedding}} \\
& Title & Abstract & Title & Abstract \\
\bottomrule
TFIDF (sparse vector) & 154.95 & 169.89 & 231.25 & 184.36\\
TFIDF (dense vector) & 35.67 & 35.18 & 53.23 & 39.59 \\
\textbf{Embedding} & \textbf{0.67} & \textbf{0.89} & \textbf{1} & \textbf{1}\\
\end{tabular}
\label{table:computationTimes}
\end{table}
\subsection{Journal Plot}
Figure~\ref{figure:abstractPlotNormal} shows the 2-dimensional visualization of the (default) journal embeddings based on the abstracts. This plot is color coded to visualize publishers. Some journal names have been added to the plot to indicate research areas.
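For illustration, the following is a minimal sketch of how such a map can be produced with scikit-learn, assuming a matrix of journal embeddings (random here): PCA first reduces the dimensionality, then tSNE projects the result down to two dimensions.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
journal_vecs = rng.normal(size=(3700, 300))  # mean embedding per journal

reduced = PCA(n_components=50).fit_transform(journal_vecs)
coords = TSNE(n_components=2, random_state=0).fit_transform(reduced)

plt.scatter(coords[:, 0], coords[:, 1], s=2)
plt.title("Journal embeddings after tSNE")
plt.savefig("journal_map.png")
\end{verbatim}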
\begin{landscape}
\begin{figure}
\begin{center}
\includegraphics[height=4.34in]{Plots/Abstract_normal}
\end{center}
\caption{Journal plot of abstract embeddings after tSNE transformation. Red, green, blue, and gray represent respectively Wiley, Elsevier, Springer-Nature, and other/unknown publishers. }\label{figure:abstractPlotNormal}
\end{figure}
\end{landscape}
\section{Discussion}
\subsection{Results Analysis}
\subsubsection{Highest Accuracy}
The data, as presented in Figures~\ref{figure:titleRanks} and~\ref{figure:abstractRanks}, shows that the 10k/10k set performs better than all other TFIDF sets, although the difference with the 5k/10k set is low (a median rank difference of 1 on the abstracts and 3 on the titles). For the embeddings, the TFIDF-weighted embedding outperforms the other embedding models by a narrow margin: 1 median rank higher on the abstracts, and equal on the titles.
\subsubsection{Dataset and Model Optimization}
The choice of the final model parameters was constrained by computational costs. Hence, even if increasing the number of iterations could have led to better-performing word embeddings, we chose 1 training iteration due to the increased training time. Similarly, we decided to stem tokens prior to training in order to decrease the vocabulary size. This might have led to a loss of syntactical information or caused ambiguous tokens.
\subsubsection{TFIDF}
The TFIDF feature vectors outperform the embeddings on abstracts, while the embeddings outperform TFIDF on titles. The main difference between abstract and title is the number of tokens. Hence the embeddings, which enhance tokens with their semantic meaning, outperform TFIDF on the title. On the other hand, the TFIDF model outperforms on the abstract, likely because the additional tokens provide more specific information. In other words, longer text provides a better context and hence requires a less accurate semantic model for individual tokens.
\\Furthermore, none of our vocabulary size cut-offs improved the TFIDF ranks; indeed, increasing the vocabulary size monotonically increased the performance of TFIDF. In other words, we could not find a cut-off strategy that reduces noise and enhances the TFIDF results. It could still be possible that at even higher vocabulary sizes a cut-off would result in a sharper signal. However, since we noticed performance stagnation, we did not investigate vocabulary sizes beyond 10k (presented in Table~\ref{table:tfidfperformance}).
\subsubsection{Combination of TFIDF and embeddings}
The limited TFIDF embeddings all fall short of the full TFIDF embedding. We did not find a vocabulary size cut-off strategy that increases accuracy by reducing noise from rare or highly frequent words or their combinations. In other words, it is best not to miss any word. This is in line with what we found for the TFIDF results: larger vocabulary sizes enhance the models.
\\\textit{Rank distribution}: although the limited TFIDF embeddings underperform, we found that their rank distribution differs from that of the other embeddings. It shows the following pattern: high to average performance on the top rankings, below-average performance on the middle rankings, and an increased fraction of articles with worsened (numerically higher) ranks.
The rank distribution seems to indicate that the cut-offs marginalize ranks: they moved the ``middle-ranked'' articles to either the higher end or the lower end, with a net effect of deteriorating the median ranks. However, the articles that matched with the limited TFIDF embedding had higher accuracy scores. The reduction in vocabulary size did not reduce the storage size for the embeddings, except in the 1K-6K case. This indicates that only the 1K-6K cuts removed all tokens from some abstracts and titles, resulting in null records and hence lower memory usage.
\subsubsection{TFIDF \& embeddings}
Our hypothesis on the difference between the TFIDF and the standard embedding is as follows:\\
The embeddings seem to outperform the TFIDF feature vectors in situations where there is little information available (titles). This indicates that the embeddings store some word meaning that enables them to perform relatively well on the titles. The abstracts, on the other hand, contain much more information. Our data seems to indicate that the amount of information available in the abstracts enables the TFIDF to cope with the lack of an explicit semantic model.
If this is the case, we could expect little performance increase on the title when we compare the embeddings to the TFIDF-weighted embeddings, because the TFIDF lacks the information to perform well. This can be seen in our data: only the average rank increased, by 3, indicating a difference between the two embeddings, but not a significant one. On the abstract, conversely, we would expect an increase in performance, since the TFIDF has more information in this context. We would expect that the importance weighting applied by the TFIDF model improves the performance of the embedding. Our data shows a minor improvement in performance: 1 median rank and 10 average ranks. While these improvements cannot be seen as significant, our data at least indicates that weighting the embeddings with TFIDF values has a positive effect on the embeddings.
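As an illustration, the TFIDF-weighted embedding of a document can be computed as a weighted average of its word vectors (a minimal sketch assuming a gensim-style word2vec interface; the function and variable names are illustrative, not those of our pipeline):
\begin{verbatim}
import numpy as np

def document_embedding(tokens, model, tfidf=None):
    """Average the word vectors of `tokens`; if a dict of
    TFIDF values is given, use it as importance weights."""
    vectors, weights = [], []
    for token in tokens:
        if token in model.wv:
            vectors.append(model.wv[token])
            # Tokens absent from the TFIDF vocabulary get weight 0.
            weights.append(tfidf.get(token, 0.0) if tfidf else 1.0)
    if not vectors or sum(weights) == 0.0:
        return np.zeros(model.vector_size)
    return np.average(np.array(vectors), axis=0, weights=weights)
\end{verbatim}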
\subsubsection{Memory usage \& Calculation time}
TFIDF outperforms the embeddings on the abstracts, but requires more memory. The embedding uses 3.13 GB of RAM, while the top-performing TFIDF, 10K/10K, uses 11.61 GB (a 3.7 times larger RAM footprint). This indicates that the embeddings are able to store the relatedness information more densely than the TFIDF. The embeddings furthermore need less calculation time for online calculations, as shown in Table~\ref{table:computationTimes}. On average, embeddings are 200 times faster than the sparse TFIDF. When the vectors are transformed to dense vectors, this is reduced to 46 times. The difference between the sparse and dense vectors arises because dense vectors are processed more efficiently by low-level vector operations. The difference between the embedding and the dense TFIDF vectors is mainly due to the vector size: embeddings use a 300-dimensional vector, while TFIDF uses a 10000-dimensional vector. Hence a time ratio of about 33 is expected, and indeed close to the measured values of 40 and 53 in Table~\ref{table:computationTimes}. Note that even though the dense representation is roughly 4-5 times faster, it requires 33 times more RAM, which can be prohibitive.
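Since the dominant cost of an online similarity query over dense vectors is a dot product, which scales linearly with the vector dimension, the expected time ratio between the dense TFIDF and the embedding follows directly from the dimensions:
\[
\frac{d_{\rm TFIDF}}{d_{\rm emb}} = \frac{10000}{300} \approx 33 ,
\]
in line with the measured factors of 40 and 53.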
\subsection{Improvements}
This research demonstrates that even though the embeddings can capture and preserve relatedness, TFIDF is able to outperform the embeddings on the abstracts. We used basic word2vec, but earlier research already shows additional improvement potential for word2vec. Dai et al.\cite{dai2015document} showed that using paragraph vectors improves the accuracy of word embeddings by 4.4\% on triplet creation with the Wikipedia corpus, and by 3.9\% on the same task based on the arXiv articles. Furthermore, Le et al.\cite{le2014distributed} showed that the usage of paragraph vectors decreases the error rate by 7.7\% compared to averaging the word embeddings when categorizing text as either positive or negative. While the improvement looks promising, we have to keep in mind that our task differs from earlier research: we do not categorize into two categories but into about 3700 journals. Since the classification task is fundamentally the same, we would still expect an improvement from using paragraph vectors. However, the larger scale complicates the task due to the ``grey areas'' between categories. These are the areas in which the classification algorithm is ``in doubt'' and could reasonably assign the article to both journals. There are many similar journals, and hence we cannot expect a rank of 1 for most articles; indeed, our classes are not exactly mutually exclusive, and in general the number of these grey areas increases with the target class size.\\ Pennington et al.\cite{pennington2014glove} showed that the GloVe model outperforms the continuous-bag-of-words (CBOW) model, which is used in this research, on a word analogy task. Wang et al.\cite{wang2016linked} introduced the linked document embedding (LDE) method, which makes use of additional information about a document, such as citations. Their research, specifically focused on categorizing documents, showed a 5.89\% increase of the micro-F1 score for LDE compared to CBOW, and a 9.11\% increase of the macro-F1 score. We would expect that applying this technique to our dataset would improve our scores, given earlier results on comparable tasks. Although our results seem to indicate that the embeddings work for academic texts, Schnabel et al.\cite{schnabel2015evaluation} found that the quality of the embeddings depends on the validation task. Therefore, conservatively, we can only state that our research shows that embeddings work on academic texts for journal classification.\\
Despite the extensive existing research, we have not been able to find published results that are directly comparable to ours. This is due to our large target class size (3700 journals), which requires a ranking measure. Earlier research limited itself to a small number of groups, such as binary classes or 3 classes \cite{shen2018baseline}. We have opted for the median rank as our key measure, but like existing research we have also reported the absolute hit \cite{wang2016linked}. Our conclusions were indifferent to the exact metric used (median vs.\ average rank vs.\ absolute hit).
\section{Conclusion}
Our research, based on an academic corpus, indicates that embeddings provide a better content model for shorter texts such as titles, and fall short of TFIDF for larger texts such as abstracts. The higher accuracy of TFIDF may not be worth it, as it requires 3.7 times more RAM and is roughly 200 times slower for online applications.
The performance of the embeddings has been improved by weighting them with the TFIDF values at the word level, although this improvement cannot be seen as significant on our dataset. The visualization of the journal embedding shows that similar journals are grouped together, indicating a preservation of relatedness between the journal embeddings.
\section{Future work}
\subsection{Intelligent cutting}
A better way of cutting could improve the quality of the embeddings. This improvement might be achieved by cutting out the center of the vector space before normalization. All generic words lie at the center of the space; removing them prevents the larger texts from being pulled towards the middle of the vector space, where they lose the parts of their meaning that set them apart from the other texts. We expect that this way of cutting, instead of word-occurrence cutting, can enhance embeddings especially for longer texts.
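A minimal sketch of this cutting strategy (illustrative names and threshold; \texttt{vectors} is assumed to hold the token-by-dimension matrix of word vectors):
\begin{verbatim}
import numpy as np

def cut_center(vectors, fraction=0.1):
    """Drop the given fraction of tokens that lie closest to
    the centroid of the vector space, i.e., the most generic,
    least discriminative words, before normalization."""
    centroid = vectors.mean(axis=0)
    distances = np.linalg.norm(vectors - centroid, axis=1)
    threshold = np.quantile(distances, fraction)
    keep = distances > threshold
    return vectors[keep], keep
\end{verbatim}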
\subsection{TFIDF's performance point}
In our research, TFIDF performed better on the abstracts than on the titles, which we think is caused by the difference in text size. Consequently, there could be a critical text length at which the best-performing model switches from embedding to TFIDF. If this length were found, one could skip the TFIDF calculations in certain situations, and skip the embedding training in others, reducing costs.
\subsection{Reversed word pairs}
At this point, there are no domain-specific word pair sets available. However, as we demonstrated, we can still test the quality of word embeddings. Once one has established that the word vectors are of high quality, could one create word pairs from these embeddings? If so, we could create word pair sets using the embeddings and thus reverse-engineer domain-specific word pairs for future use.
\section{Acknowledgement}
We would like to thank Bob JA Schijvenaars for his support, advice and comments during this project.
\bibliographystyle{splncs04}
\section{Introduction}
Observational studies have shown that magnetic fields
are present at all scales where star formation processes are at work, permeating molecular clouds, prestellar cores and protostellar envelopes
\citep[e.g.,][]{Girart2006, Alves2014, Zhang2014, Soler2019}. While magnetized models of star formation have been developed early on, it is only recently that complex physics is included in numerical magnetohydrodynamic (MHD) models, and that the predictions of dedicated models are directly compared to observations of star-forming regions. One of the major achievements of magnetized models is the prediction of small sizes for protostellar disks, regulated by magnetic braking \citep{Dapp2012, Hennebelle2019, Zhao2020a}. Recent observations have indeed shown that disks are compact, almost an order of magnitude smaller than those produced in purely hydrodynamical models \citep{Maury2019, Lebreuilly2021}.
In magnetized collapse models, non-ideal MHD effects play a major role in the regulation of the magnetic flux during the early stages of protostellar formation \citep{Machida2011, Li2011, Marchand2020}. Hence, these processes indirectly limit the angular momentum transported by the magnetic field from large envelope scales to the inner envelope, and set the resulting disk size. However, these resistive processes depend heavily on the local physical conditions, such as dust grain properties (electric charge and size), gas density and temperature, density of ions and electrons, and the cosmic-ray (CR) ionization rate, $\zeta$ \citep[namely the number of molecular hydrogen ionizations per unit time, see e.g.,][for a review]{Zhao2020a}. Observationally, only very few measurements of these quantities have been obtained toward solar-type protostars: the effective coupling of
magnetic fields
to the infalling-rotating material, at scales where the circumstellar material feeds the growth in mass of the protostellar embryo and its disk, is thus still largely unknown.
B335, located at a distance of 164.5 pc \citep{Watson2020}, is an isolated Bok globule which contains an embedded Class 0 protostar \citep{Keene1983}. The core is associated with an east-west outflow that is prominently detected in CO, with an inclination of 10\degr\ on the plane of the sky and an opening angle of 45\degr\ \citep{Hirano1988, Hirano1992, Stutz2008, Yen2010}. Its isolation and relative proximity have made B335\ an ideal object to test models of star formation. Asymmetric double-peaked line profiles observed in the molecular emission of the gas toward the envelope have been interpreted as optically thick lines tracing infalling motions, and have been extensively used to derive mass infall rates from 10$^{-7}$ to $\sim$3$\times$10$^{-6}$ \msun\ yr$^{-1}$\ at radii of 100-2000 au \citep{Zhou1993,Yen2010,Evans2015}. However, new observations of optically thin emission from less abundant molecules revealed the presence of two velocity components tracing non-symmetric motions at these envelope scales, which could contribute significantly to the double-peaked line profiles \citep{Cabedo2021b}. These results have suggested that simple spherically symmetric infall models might not be adequate to describe the collapse of the B335\ envelope, and tentatively unveiled the existence of preferential accretion streamlines along the outflow cavity walls.
B335\ is also an excellent prototype for the study of magnetized star formation models, since it has been proposed as an example of a protostar where the disk size is set by a magnetically regulated collapse \citep[][hereafter Paper I]{Maury2018}. This hypothesis stems mainly from two observations. First, the rotation of the gas observed at large envelope radii is not found at small envelope radii ($<$ 1000 au), and no kinematic signature of a disk was reported down to $\sim$ 10 au \citep{Kurono2013,Yen2015b}. Second, observations of polarized dust emission have revealed an ``hourglass'' magnetic field morphology (Paper I) in the inner envelope: their comparison to MHD models of protostellar formation \citep[see e.g.,][for a whole description of the models]{Masson2016, Hennebelle2020} suggests that the B335\ envelope is threaded by a rather strong magnetic field which is well coupled to the infalling-rotating gas, preventing the disk from growing to large radii. The purpose of the analysis presented here is to characterize the level of ionization and its origin in order to test the scenario proposed in Paper I, and to put observational constraints on the efficiency of the coupling of the magnetic field to the star-forming gas in the inner envelope of B335.
In Sec. \ref{sec:Observations} we present the ALMA observations used to constrain physical and chemical properties of the gas at envelope radii $\lesssim$ 1000 au. In Sec.~\ref{sec:deutFrac} we derive the deuteration fraction from DCO$^{+}$\ (J=3-2) and H$^{13}$CO$^+$\ (J=3-2). In Sec. \ref{sec:IonFrac} we compute the ionization fraction and the CR ionization rate. Finally, in Sec. \ref{sec:Discussion}, we discuss our results concerning deuteration processes, the ionization, the possible influence of a local source of radiation and its effect on the coupling between the gas and the magnetic field.
\section{Observations and data reduction} \label{sec:Observations}
Observations of the Class 0 protostellar object B335 were carried out with the ALMA interferometer during the Cycle 4 observation period, from October 2016 to September 2017, as part of the 2016.1.01552.S\ project. The centroid position of B335\ is assumed to be $\alpha = $ 19:37:00.9\ and $\delta = $ +07:34:09.6\ (in J2000 coordinates), corresponding to the dust continuum peak obtained by \citet{Maury2018}.
We used DCO$^{+}$\ (J=3-2) and H$^{13}$CO$^+$\ (J=3-2) to obtain the deuteration fraction and
the ionization fraction. We used H$^{13}$CO$^+$\ (J=1-0) and C$^{17}$O\ (J=1-0) to derive the hydrogenation fraction. Additionally, we observed the dust continuum at 110 GHz\ to derive the estimated line opacities, CO depletion factor and ionization rate. We also targeted $^{12}$CO\ (J=2-1) and N$_{2}$D$^{+}$\ (J=3-2) for comparison purposes. All lines were targeted using a combination of ALMA configurations to recover the largest lengthscale range possible. Since our data for H$^{13}$CO$^+$\ (J=3-2) only included observations with the Atacama Compact Array (ACA), we used additional observations of this molecular line at smaller scales from the ALMA project 2015.1.01188.S. Technical details of the ALMA observations are shown in Appendix \ref{sec:ap_ObsDetails}.
Calibration of the raw data was done using the standard script for Cycle 4 ALMA data using the Common Astronomy Software Applications (CASA), version 5.6.1-8. The continuum emission was self-calibrated with CASA. Line emission from $^{12}$CO\ (J=2-1) and N$_{2}$D$^{+}$\ (J=3-2) was additionally calibrated using the self-calibrated continuum model at the appropriate frequency (231 GHz, not shown in this work).
Final images of the data were generated from the calibrated visibilities using the tCLEAN algorithm within CASA, using Briggs weighting with a robust parameter of 1 for all the tracers. Since we want to compare our data to the C$^{17}$O\ (J=1-0) emission presented in \citet{Cabedo2021b}, we adjusted our imaging parameters to obtain matching-beam maps with similar angular and spectral resolution. For DCO$^{+}$\ (J=3-2) and H$^{13}$CO$^+$\ (J=3-2) we restricted the baselines to a common \textit{u,v}-range, between 9 and 140 $k\lambda$, and finally smoothed them to the same angular resolution. A preliminary analysis showed that the emission of both C$^{17}$O\ (J=1-0) and H$^{13}$CO$^+$\ (J=1-0) is barely detected in the most extended configurations. We therefore applied a common \textit{uv-tapering} of 1.5\arcsec\ to both lines. This procedure weights down the longest baselines, giving more weight to the shorter baselines and allowing us to obtain a better S/N. Furthermore, we smoothed the data to obtain exactly the same angular resolution. Additionally, we smoothed C$^{17}$O\ (J=1-0) and the dust continuum at 110 GHz\ to maps matching the DCO$^{+}$\ beam in order to compute the depletion factor. The imaging parameters used to obtain all the spectral cubes and their final characteristics are shown in Table \ref{table:ImageChar}. Even though the characteristics and properties of the C$^{17}$O\ (J=1-0) maps are slightly different from the ones presented in \citet{Cabedo2021b}, owing to the differences in the imaging process, the line profiles show the same characteristics, i.e., they are double-peaked and present the same velocity pattern at different offsets from the center of the source, confirming that the imaging process has no large effect on the shape of the line profiles and that the two velocity components can still be observed.
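For concreteness, the imaging of each cube followed the schematic CASA calls below (a minimal sketch: the file names, \texttt{niter}, and \texttt{threshold} values are illustrative placeholders, while the per-line cell sizes, beams, and \textit{u,v} restrictions are those listed in Table~\ref{table:ImageChar}):
\begin{verbatim}
# Schematic CASA (5.6) imaging of one line, e.g. DCO+ (J=3-2)
tclean(vis='B335_DCOp_3-2.ms', imagename='B335_DCOp_3-2',
       specmode='cube', width='0.2km/s',
       weighting='briggs', robust=1.0,
       uvrange='9~140klambda',   # common u,v-range
       # uvtaper=['1.5arcsec'] was used instead for the
       # H13CO+ (J=1-0) and C17O (J=1-0) cubes
       cell='0.5arcsec', niter=10000, threshold='22mJy')
# Smooth to the common angular resolution afterwards:
imsmooth(imagename='B335_DCOp_3-2.image', kernel='gauss',
         major='1.5arcsec', minor='1.5arcsec', pa='0deg',
         targetres=True, outfile='B335_DCOp_3-2.smo')
\end{verbatim}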
\begin{table*}[!ht]
\centering
\caption{Imaging parameters and final maps characteristics.}
\begin{tabular}{l c c c c c c c c c}
\toprule\toprule
& DCO$^{+}$\ & H$^{13}$CO$^+$\ & $^{12}$CO\ & N$_{2}$D$^{+}$\ & H$^{13}$CO$^+$\ & C$^{17}$O** & cont. \\
& (J=3-2) & (J=3-2) & (J=2-1) & (J=3-2) & (J=1-0) & (J=1-0) & \\
\midrule
Rest. Freq. (GHz) & 216.112 & 260.255 & 230.538 & 231.321 & 86.754 & 112.359 & 110 \\
$\Theta_{\rm LRS}$* (arcsec) & 11.3 & 16.0 & 10.6 & 10.6 & & & 22.3 \\
Pixel size (arcsec) & 0.5 & 0.5 & 0.5 & 0.5 & 0.25 & 0.25 & 0.3 \\
$\Theta_{\rm maj}$ (arcsec) & 1.5 & 1.5 & 1.5 & 1.5 & 2.6 & 2.6 & 0.8 \\
$\Theta_{\rm min}$ (arcsec) & 1.5 & 1.5 & 1.5 & 1.5 & 2.6 & 2.6 & 0.7 \\
P.A. (\degr) & 0 & 0 & 0 & 0 & 0 & 0 & $-61.5$ \\
Spectral res. (km s$^{-1}$) & 0.2 & 0.2 & 0.2 & 0.2 & 0.15 & 0.15 & - \\
rms (mJy beam$^{-1}$) & 22.37 & 53.15 & 143.5 & 6.00 & 18.58 & 10.57 & 0.065 \\
vel. range (km s$^{-1}$) & 7.8 - 8.9 & 7.5 - 8.9 & 7.6 - 9.4 & 7.7 - 8.9 & 7.4-9.2 & 4.7-6.5 & - \\
& & & & & & 7.7-9.3 \\
rms (mJy beam$^{-1}$ km s$^{-1}$) & 11.17 & 21.28 & 634.6 & 5.23 & 17.17 & 17.58 & - \\
\bottomrule
\end{tabular}
\begin{list}{}{}
\item * Largest recoverable scale, computed as $\Theta_{\rm LRS} = 206265(0.6\lambda/b_{\rm min})$ in arcsec, where $\lambda$ is the rest wavelength of the line, and $b_{\rm min}$ is the minimum baseline of the configuration, both in m (\citealt{ALMAc4}).
\item ** The two velocity ranges correspond to the two resolved hyperfine components.
\end{list}
\label{table:ImageChar}
\end{table*}
\subsection{Data products} \label{sec:Results}
From the data cubes, we obtained spectral maps that present the spectrum (in velocity units) at each pixel of a given region around the center of the source. These maps allow us to evaluate the spectra at different offsets from the center of the envelope, where the dust continuum emission peak localizes the protostar, and to detect any distinct line profile or velocity pattern. The obtained spectral maps are discussed in Appendix \ref{sec:ap_spectralMaps}.
We derived the integrated intensity maps by integrating the molecular line emission over the velocity range in which it emits (the velocity range used for each molecule is shown in Table \ref{table:ImageChar}). Figure \ref{fig:intensity_maps} shows the dust continuum emission map at 110 GHz\ and the integrated intensity contours of C$^{17}$O\ (J=1-0), DCO$^{+}$\ (J=3-2), H$^{13}$CO$^+$\ (J=1-0) and H$^{13}$CO$^+$\ (J=3-2). The H$^{13}$CO$^+$\ (J=1-0) emission clearly appears more spatially extended than the other lines, roughly following most of the dust emission. The emission from this line extends further than the dust toward the south-east, along the outflow cavity wall direction. The C$^{17}$O\ (J=1-0), DCO$^{+}$\ (J=3-2) and H$^{13}$CO$^+$\ (J=3-2) emission presents a similar, relatively compact morphology, centered around the dust peak and elongated along the north-south equatorial plane. However, the spatial extent along the equatorial plane is slightly more compact for the H$^{13}$CO$^+$\ emission ($\sim$2100~au) than for the DCO$^{+}$\ emission ($\sim$2500~au), and both are more extended than C$^{17}$O\ ($\sim$1700~au). The DCO$^{+}$\ intensity peak is clearly more displaced from the continuum peak than the H$^{13}$CO$^+$\ (J=3-2) peak; this could be a local effect around the protostar due to the high temperature and irradiation conditions, which destroy DCO$^{+}$. The C$^{17}$O\ (J=1-0) emission appears to be slightly extended along the outflow cavities.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/IntensityImages/c17o_dco_h13co_cont110_titles2.pdf}
\caption{Map of the dust continuum emission at 110 GHz\ clipped for values smaller than 3$\sigma$ (see Table \ref{table:ImageChar}) and
integrated emission for the four molecular lines as indicated in the upper left corner (red contours). Contours are $-2$, 3, 5, 8, 11, 14 and 17$\sigma$ (the individual $\sigma$ values are indicated in Table \ref{table:ImageChar}). The velocity ranges used to obtain the integrated intensity maps are shown in Table \ref{table:ImageChar}. In the bottom-left of each plot is shown the beam size for each molecular line (in red) and for the dust continuum emission (in black). The physical scale is shown in the top-right corner of the figure.
}
\label{fig:intensity_maps}
\end{figure*}
Since all the molecules present a double-peaked profile with similar velocity patterns as the ones observed in \citet{Cabedo2021b}, namely two different velocity components with varying intensity across the source spatial extent, we use two independent velocity components to fit simultaneously the line profiles. Following \citet{Cabedo2021b}, we modeled the spectrum at each pixel by using the \textit{HfS}\ fitting program \citep{Estalella2017}. We obtained maps of peak velocity and velocity dispersion for each velocity component, deriving the corresponding uncertainties from the $\chi^2$ goodness of fit \citep[see][for a thorough description of the derivation of uncertainties]{Estalella2017}. A more detailed discussion of these maps is presented in Appendix \ref{sec:ap_LineMod}. In addition, an estimate of line opacities is given in Appendix \ref{ssec:Opacity}. In the following sections, the statistical uncertainties derived for deuteration fraction, ionization fraction, and ionization rate (shown in Figs.~\ref{fig:deutFrac_maps}, \ref{fig:ionFrac_maps}, and \ref{fig:ionRate_maps}, respectively)
are computed using standard error propagation analysis from the uncertainties of each parameter obtained with the line modeling
(peak velocity, velocity dispersion, and opacity). Other systematic uncertainties are discussed separately in the corresponding sections.
\section{Deuteration fraction} \label{sec:deutFrac}
The process of deuteration consists of an enrichment of the amount of deuterium with respect to hydrogen in molecular species. The deuteration fraction,
$R_{\rm D}$\ = [D]/[H], in molecular ions, in particular HCO$^{+}$, has been extensively used as an estimator of the degree of ionization in molecular gas \citep{Caselli1998, Fontani2017}. We apply here this method to obtain maps of $R_{\rm D}$, computed as the column density ratio of DCO$^{+}$\ (J=3-2) and its non-deuterated counterpart, H$^{13}$CO$^+$\ (J=3-2):
%
\begin{equation}
R_{\rm D} = \frac{1}{f_{\rm ^{12/13}C}}
\frac{N_{\rm DCO^{+}}}{N_{\rm H^{13}CO^{+}}}\,,
\label{eq:deutFrac_colDens}
\end{equation}
%
where $N_{i}$ is the column density of each species and $f_{^{12/13}C}$ is the abundance ratio of $^{12}$C to $^{13}$C. The column density
of both species is computed as:
\begin{equation}
\begin{split}
N_i = &\frac{8 \pi}{\lambda^{3}A} \frac{1}{J_{\nu}(T_{\rm ex}) - J_{\nu}(T_{\rm bg})}\times \\ & \frac{1}{1-\exp(-h\nu/k_{\rm B}T_{\rm ex})} \frac{Q_{\rm rot}}{g_{u}\exp(-E_{l}/k_{\rm B}T_{\rm ex})}\times \\ & \int{I_{\rm 0} {\rm d} v},
\end{split}
\label{eq:colDens}
\end{equation}
%
where $\lambda$ is the wavelength of the transition, $A$ is the Einstein coefficient, $J_{\nu}(T)$ is the Planck function at the background temperature (2.7~K) and at the excitation temperature of the line (assumed to be 25~K), $k_{\rm B}$ is the Boltzmann constant, $g_{u}$ is the upper state degeneracy,
$Q_{\rm rot}$ is the partition function at 25~K, $E_{l}$ is the energy of the lower level, and $\int{I_{0} {\rm d} v }$ is the integrated intensity. Values of these parameters for each molecule are listed in Table \ref{tab:colDens_pars}.
\begin{table}
\centering
\caption{Parameters for the computation of the column density.}
\begin{tabular}{l c c c}
\toprule\toprule
Transition & DCO$^{+}$\ (J=3-2) & H$^{13}$CO$^+$\ (J=3-2) & C$^{17}$O\ (J=1-0) \\
\midrule
$\log~(A/{\rm s}^{-1})$ & $-3.12$ & $-2.87$ & $-7.17$ \\
$g_{u}$ & 7 & 7 & 3\\
$Q_{\rm rot}$ & 25.22 & 22.91 & 25 \\
$E_{l}$ (cm$^{-1}$) & 7.21 & 8.68 & 0.00 \\
$N_{i}^{\rm mean}$ (cm$^{-2}$) & 3$\times$10$^{11}$ & 7$\times$10$^{11}$ & 3$\times$10$^{13}$ \\
\bottomrule
\end{tabular}
\label{tab:colDens_pars}
\end{table}
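For reference, Eqs.~(\ref{eq:deutFrac_colDens}) and (\ref{eq:colDens}) translate into the following sketch (CGS units; the integrated intensities are illustrative placeholders, and the $^{12}$C/$^{13}$C ratio of 68 is the commonly adopted local value, quoted here only as an assumption):
\begin{verbatim}
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10   # CGS constants

def J(T, nu):
    """Equivalent temperature of a blackbody at T."""
    return (h * nu / k) / (np.exp(h * nu / (k * T)) - 1.0)

def column_density(nu, logA, g_u, Q_rot, E_l_cm, W_Kkms,
                   T_ex=25.0, T_bg=2.7):
    """eq:colDens; W_Kkms = integrated intensity in K km/s,
    E_l_cm = lower-level energy in cm^-1 (parameter table)."""
    A, lam = 10.0 ** logA, c / nu
    E_l = h * c * E_l_cm                       # erg
    return (8 * np.pi / (lam ** 3 * A)
            / (J(T_ex, nu) - J(T_bg, nu))
            / (1.0 - np.exp(-h * nu / (k * T_ex)))
            * Q_rot / (g_u * np.exp(-E_l / (k * T_ex)))
            * W_Kkms * 1.0e5)                  # K km/s -> K cm/s

# Illustrative integrated intensities of 0.3 K km/s:
N_dco   = column_density(216.112e9, -3.12, 7, 25.22, 7.21, 0.3)
N_h13co = column_density(260.255e9, -2.87, 7, 22.91, 8.68, 0.3)
R_D = (1.0 / 68.0) * N_dco / N_h13co   # eq:deutFrac_colDens, assumed 12C/13C = 68
\end{verbatim}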
The top panels of Fig.~\ref{fig:deutFrac_maps} show the $R_{\rm D}$\ maps of B335\ for the blue- and red-shifted velocity components (left and right column, respectively). The mean values of the deuteration fraction are $\sim$1-2\%, being higher in the outer region of the envelope, where the gas is expected to be colder, and decreasing toward the center, where the protostar is located and the temperature is expected to rise. The bottom panels show that the statistical uncertainties are roughly one tenth of the deuteration values. We note that the highest values of $R_{\rm D}$, as high as 3.5\%, are found for the blue-shifted component. However, these values have the largest associated errors, due to the presence of a third velocity component in the line profiles of H$^{13}$CO$^+$\ (J=3-2), and should be treated with caution (see Appendix \ref{sec:ap_spectralMaps}). Finally, we note that while the average $R_{\rm D}$\ values for both velocity components are similar, the more widespread blue-shifted gas component shows a larger $R_{\rm D}$\ dispersion, with values from 0.06 to 2\%, while the more localized red-shifted gas exhibits more uniform values, between 0.01 and 1\%.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/DeutFrac/DeutFrac_DCO_H13CO_err_cont110_scale2.pdf}
\caption{
Dust continuum emission at 110 GHz\ for $-2$, 3, 5, 7, 10, 30, 50$\sigma$ (black contours) superimposed on the
deuteration fraction maps (top panels) and on the corresponding statistical uncertainties (lower panels) for the blue- and red-shifted velocity components (left and right column, respectively). The spatial scale is shown in the top-right corner of the upper panels.}
\label{fig:deutFrac_maps}
\end{figure*}
One of the main uncertainties of the deuteration analysis is the assumption that both DCO$^{+}$\ (J=3-2) and H$^{13}$CO$^+$\ (J=3-2) have the same $T_{\rm ex}$. This should be accurate enough, since both molecules present a similar geometry and dipolar moment and are assumed to be spatially coexistent. However, to check the effect that a variation of $T_{\rm ex}$ would have on our values, we computed the column densities of both
species at two additional temperatures, 20 and 30~K.
While the DCO$^{+}$\ column density does not change significantly at these two temperatures with respect to the value adopted (25~K),
the H$^{13}$CO$^+$\ column density changes by about $\sim$15\%.
Nevertheless, the new values are still in agreement with the derived values of $R_{\rm D}$\ from DCO$^{+}$\ reported in the literature.
\section{Ionization fraction and cosmic-ray ionization rate} \label{sec:IonFrac}
We followed the method presented in \citet{Caselli1998} to compute the ionization fraction, $\chi_{\rm e}$, and the CR ionization rate, $\zeta$, from $R_{\rm D}$. The two quantities are given by
%
\begin{equation}
\chi_{\rm e} = \frac{2.7\times10^{-8}}{R_{\rm D}} - \frac{1.2\times10^{-6}}{f_{\rm D}}\,,
\label{eq:ionization_frac}
\end{equation}
%
and
%
\begin{equation}
\zeta = \left[ 7.5\times10^{-4} \chi_{{\rm e}} + \frac{4.6\times10^{-10}}{f_{{\rm D}}} \right] \chi_{{\rm e}} n_{{\rm H_{2}}} R_{{\rm H}}\,,
\label{eq:ionization_rate}
\end{equation}
%
where $f_{\rm D}$ is the depletion fraction of C and O, $n_{\rm H_{2}}$ is the H$_{2}$\ volume density and $R_{\rm H}$\ is the hydrogenation fraction,
$R_{\rm H}$\ = [HCO$^{+}$]/[CO].
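For reference, Eqs.~(\ref{eq:ionization_frac}) and (\ref{eq:ionization_rate}) translate directly into code (a minimal sketch; the input values below are merely illustrative of the ranges derived for B335):
\begin{verbatim}
def ionization_fraction(R_D, f_D):
    """eq:ionization_frac -- electron fraction from the
    deuteration fraction R_D and depletion factor f_D."""
    return 2.7e-8 / R_D - 1.2e-6 / f_D

def cr_ionization_rate(chi_e, f_D, n_H2, R_H):
    """eq:ionization_rate -- CR ionization rate in s^-1,
    with the H2 volume density n_H2 in cm^-3."""
    return (7.5e-4 * chi_e + 4.6e-10 / f_D) * chi_e * n_H2 * R_H

chi_e = ionization_fraction(R_D=0.01, f_D=40.0)      # ~2.7e-6
zeta = cr_ionization_rate(chi_e, f_D=40.0,
                          n_H2=1.0e6, R_H=2.5e-7)    # ~1.3e-15 s^-1
\end{verbatim}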
\subsection{Estimate of the CO depletion in B335} \label{sec:depletion}
Since the deuteration fraction, and therefore the ionization fraction, depend on the level of C and O depletion from the gas phase, we made the hypothesis that the CO abundance is proportional to the CO column density and estimated $f_{\rm D}$ as the ratio between the `expected', $N_{\rm CO}^{\rm exp}$, and the `observed' CO\ column density, $N_{\rm CO}^{\rm obs}$:
%
\begin{equation}
f_{\rm D} = \frac{N_{\rm CO}^{\rm exp}}{N_{\rm CO}^{\rm obs}}\,.
\label{eq:depletionFrac}
\end{equation}
Here, $N_{\rm CO}^{\rm exp}$ is computed as the product of the H$_{2}$\ column density, $N_{\rm H_2}$, and the expected CO\ to H$_{2}$\ abundance ratio, $X_{\rm CO}$ = [CO]/[H$_{2}$] = 10$^{-4}$ \citep{WilsonRodd1994,Gerner2014}, and $N_{\rm CO}^{\rm obs}$ is derived as the product between the C$^{17}$O\ column density, $N_{\rm C^{17}O}$, and the C$^{17}$O\ to CO\ abundance ratio, $f_{\rm C^{17}O}$ = [CO]/[C$^{17}$O] = 2317 \citep{Wouterloot2008}. Then, the depletion factor is given by
%
\begin{equation}
f_{\rm D} = \frac{N_{\rm H_2} X_{\rm CO}}{N_{\rm C^{17}O} f_{\rm C^{17}O}}\,.
\label{eq:depletion}
\end{equation}
%
The H$_{2}$\ column density has been estimated from the dust thermal emission at 110 GHz\ (see Fig.~\ref{fig:intensity_maps}) corrected for the primary beam attenuation. Indeed, $N_{\rm H_2}$ depends directly on the flux density measured on the continuum map, the beam solid angle $\Omega_{\rm beam}$, the Planck function $B_{\nu}(T_{\rm d})$ at the dust temperature, and the dust mass opacity $\kappa_{\nu}$ (in units of cm$^2$ g$^{-1}$) at the frequency the dust thermal emission was observed:
%
\begin{equation}
\label{eq:column-density}
N_{\rm H_2} = -\frac{1}{\mu_{\rm H_2} m_{\rm H} \kappa_{\nu}} \ln \left[ 1 - \frac{S_{\nu}^{\rm beam}}{\Omega_{\rm beam} B_{\nu}(T_d)} \right]\,,
\end{equation}
where $\mu_{\rm H_2}=2.8$, $m_{\rm H}$ is the atomic hydrogen mass, and for the dust opacity we used a power-law fit given by
%
\begin{equation}
\label{eq:kappa}
\kappa_{\nu} = \frac{\kappa_0}{\chi_{\rm d}} \left( \frac{\nu}{\nu_0} \right) ^{\beta}.
\end{equation}
%
We adopt a dust mass absorption coefficient $\kappa_0 = 1.6$~cm$^2$~g$^{-1}$ at $\lambda_0 = 1.3$~mm, following \citet{Ossenkopf1994}. We assume a standard gas-to-dust ratio $\chi_{\rm d}$ = 100, a dust emissivity exponent $\beta=0.76$ \citep{Galametz2019}, and a dust temperature $T_{\rm d}=25$~K. The resulting $N_{\rm H_2}$ column densities probed in the ALMA dust continuum emission map range from $\sim 10^{20}$ cm$^{-2}$ at radii $\sim 1600$ au up to a few $10^{22}$ cm$^{-2}$ at the protostar position. We note that the values of column density suffer from a systematic error due to assumptions on the parameters used to derive them (e.g. dust opacity, dust temperature). We estimate that the systematic error is about a factor of 3 to 5, being higher toward the peak position where standard dust properties and conditions of emission may not be met. We use Eq.~\ref{eq:colDens} to estimate the observed CO column density $N_{\rm CO} = N_{\rm C^{17}O} f_{\rm C^{17}O}$ from the C$^{17}$O emission map shown in Fig.~\ref{fig:intensity_maps}, assuming an excitation temperature equal to the value used for the dust temperature, $T_{\rm ex}=T_{\rm d}=25$~K. The CO column density values are subject to systematic uncertainties due to the adopted gas temperature, as it is not well constrained and expected to vary across the envelope. Altogether, the systematic uncertainty is about a factor of 2, also higher toward the peak position where different gas temperatures are expected to be present along the line of sight.
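For reference, Eqs.~(\ref{eq:column-density}) and (\ref{eq:kappa}) correspond to the following sketch (CGS units; the flux density and beam solid angle are the inputs read off the primary-beam-corrected continuum map):
\begin{verbatim}
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10   # CGS constants
m_H = 1.674e-24                            # g

def B_nu(nu, T):
    """Planck function (erg s^-1 cm^-2 Hz^-1 sr^-1)."""
    return 2 * h * nu**3 / c**2 / (np.exp(h * nu / (k * T)) - 1.0)

def N_H2(S_beam, omega_beam, nu=110.0e9, T_d=25.0,
         kappa0=1.6, lam0=0.13, beta=0.76, chi_d=100.0):
    """eq:column-density -- H2 column density (cm^-2) from the
    flux density per beam S_beam (erg s^-1 cm^-2 Hz^-1) and the
    beam solid angle omega_beam (sr)."""
    kappa = (kappa0 / chi_d) * (nu * lam0 / c) ** beta   # eq:kappa
    tau = -np.log(1.0 - S_beam / (omega_beam * B_nu(nu, T_d)))
    return tau / (2.8 * m_H * kappa)
\end{verbatim}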
Using Eq.~\ref{eq:depletion}, we obtain the CO depletion fraction map, shown in Fig.~\ref{fig:depletionFrac}. The depletion factor ranges from $\sim 20$ in the outer regions to $\sim 70$ in the center of the object. Propagating the errors from the column density maps from which it stems, we expect these depletion values to be uncertain by a factor of 2 to 3, as the main source of error, due to the assumed gas and dust temperatures, is lessened by the ratio. The CO depletion appears highly asymmetric around the protostar, being lowest in the south-eastern quadrant, while high values are associated with the protostar position and the northern region. We note that the regions with high $f_{\rm D}$\ coincide with the regions of low deuteration, shown in Fig.~\ref{fig:deutFrac_maps}.
\begin{figure}
\centering
\includegraphics[width=9cm]{Images/DepletionFactor/fD_h2_c17o_scale2.pdf}
\caption{Map of the CO depletion factor and contours of dust continuum emission at 110 GHz, for emission over 3$\sigma$. The spatial scale is shown in the top-right corner of the figure.}
\label{fig:depletionFrac}
\end{figure}
\subsection{Ionization fraction of the gas} \label{sec:IonFrac_derivation}
We obtained the ionization fraction of the gas using our deuteration fraction maps, shown in Sect.~\ref{sec:deutFrac}, and the CO depletion map, presented in Fig.~\ref{fig:depletionFrac}. The top panels of Fig.~\ref{fig:ionFrac_maps} show the derived ionization fraction and the bottom panels the associated uncertainties, for the blue- and red-shifted velocity components (left and right column, respectively). We obtained a mean value of $\chi_{\rm e}$\ = $2\times10^{-6}$ for both components. Generally, these values are subject to statistical uncertainties of $\sim1.5\times10^{-6}$. The range of values found for $R_{\rm D}$\ and $f_{\rm D}$\ indicates that $\chi_{\rm e}$\ depends mainly on the level of deuteration; hence the systematic error on the ionization fraction of the gas estimated from Eq.~\ref{eq:ionization_frac} is typically $<30\%$, and statistical uncertainties dominate in this case. Since the deuteration fraction decreases toward the center of the source, we find that the ionization increases toward the center.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/IonFrac/IonFrac_fDMaps_def2.pdf}
\caption{
Dust continuum emission at 110 GHz\ for $-2$, 3, 5, 7, 10, 30, 50$\sigma$ (black contours) superimposed on
the ionization fraction maps (top panels) and on the corresponding statistical uncertainties (lower panels) for the blue- and red-shifted velocity components (left and right column, respectively). The spatial scale is shown in the top-right corner of the upper panels.}
\label{fig:ionFrac_maps}
\end{figure*}
\subsection{Derivation of the CR ionization rate} \label{sec:IonRate}
We produced ionization rate maps using Eq.~\ref{eq:ionization_rate}. We obtained $n_{\rm H_{2}}$ as $N_{\rm H_{2}}/L$, using the $N_{\rm H_{2}}$ map obtained in Sect.~\ref{sec:depletion} and considering a core diameter, $L$, of 1720~au, derived from the C$^{17}$O\ (J=1-0) emission \citep{Cabedo2021b}. With this approximation, we find values of the volume density ranging from $\sim 10^{4}$~cm$^{-3}$\ in the outer regions to $\sim 10^{6}$~cm$^{-3}$\ at the protostar position. We obtained $R_{\rm H}$\ for both velocity components as the column density ratio of H$^{13}$CO$^+$\ (J=1-0) to C$^{17}$O\ (J=1-0), accounting for $f_{^{12/13}C}$ and $f_{\rm C^{17}O}$:
%
\begin{equation}
R_{\rm H} = \frac{N_{\rm H^{13}CO^{+}} f_{^{12/13}C}}{N_{\rm C^{17}O} f_{\rm C^{17}O}}\,.
\end{equation}
Values of $R_{\rm H}$\ are relatively uniform across the source, with mean values between 2 and $3\times 10^{-7}$ for the two velocity components. The statistical uncertainties of these values are generally one order of magnitude smaller than $R_{\rm H}$.
The two terms within the brackets in Eq.~\ref{eq:ionization_rate} have values of the order of $\sim10^{-9}$ and $\sim10^{-11}$, respectively. Therefore, the derivation of the CR ionization rate is very sensitive to the ionization fraction but not to the CO depletion factor. Figure \ref{fig:ionRate_maps} shows in the top panels the derived CR ionization rate and in the bottom panels the associated statistical uncertainties, for the blue- and red-shifted velocity components (left and right column, respectively). As expected from the values of the ionization fraction, $\zeta$\ increases toward the center, reaching values of 7$\times$10$^{-14}$~s$^{-1}$ at the peak of the dust continuum. The uncertainties on $\zeta$\ are large, due to all the errors propagated from the modeling of the different molecules. Considering the systematic uncertainties on $n_{\rm H_{2}}$ and $R_{\rm H}$, we estimate that the global uncertainty on the CR ionization rate values we find in B335\ is of the same order of magnitude as the $\zeta$\ values themselves.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Images/IonRate/IonRate_fDMaps_def2.pdf}
\caption{
Dust continuum emission at 110 GHz\ for $-2$, 3, 5, 7, 10, 30, 50$\sigma$ (black contours) superimposed on
the ionization rate maps (top panels) and on the corresponding statistical uncertainties (lower panels) for the blue- and red-shifted velocity components (left and right column, respectively). The scale is shown in the top-right corner of the upper panels.}
\label{fig:ionRate_maps}
\end{figure*}
To interpret the trend of the ionization rate as a function of the distance from the density peak in an unbiased way, we considered profiles of $\zeta$ along directions from the density peak outwards denoted by the position angle $\vartheta$, with $0\le\vartheta\le\pi$ (see the lower right panel of Fig.~\ref{fig:zeta_vs_radii}). The envelope of the profiles is represented by the orange filled region in the upper left panel of Fig.~\ref{fig:zeta_vs_radii}. Since the $\zeta(r,\vartheta)$ distributions between 40 and 700 au are skewed, for each radius $\bar{r}$ we considered the median value of $\zeta(\bar{r},\vartheta)$ and estimated the errors by using the first and third quartiles. We found that the trend of the ionization rate can be parameterised by two independent power-law profiles, $\zeta(r)\propto r^{s}$, with $s=-0.96\pm0.04$ and $-3.77\pm0.30$ for radii smaller and larger than 270~au, respectively. The ionization rate decreases with radius following two very different trends. At $r<270$~au, the slope is compatible with the diffusive regime ($r^{-1}$), then $\zeta$ drops abruptly.
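The two slopes can be recovered from the median profile with a simple piecewise log-log fit (a minimal sketch on a synthetic profile mimicking the measured behaviour; in practice the fit is applied to the median of $\zeta(\bar{r},\vartheta)$ extracted from the map):
\begin{verbatim}
import numpy as np

def slope(r, zeta):
    """Least-squares slope s of zeta ~ r**s in log-log space."""
    s, _ = np.polyfit(np.log10(r), np.log10(zeta), 1)
    return s

# Synthetic broken power-law profile (40-700 au):
r = np.linspace(40.0, 700.0, 30)
zeta = np.where(r < 270.0,
                1e-14 * (r / 40.0) ** -0.96,
                1e-14 * (270.0 / 40.0) ** -0.96
                      * (r / 270.0) ** -3.77)
inner = r < 270.0
s1 = slope(r[inner], zeta[inner])      # ~ -0.96
s2 = slope(r[~inner], zeta[~inner])    # ~ -3.77
\end{verbatim}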
\begin{figure*}
\centering
\resizebox{.85\hsize}{!}{
\includegraphics[]{Images/IonRate/zetaNeff_vs_r_Q1medQ3_final_paper.png}}
\caption{{\em Upper left panel}: ionization rate, $\zeta$, as a function of the radius, $r$. The two dotted black lines at 270 and 500~au identify the radius ranges used to compute the slopes ($s_{1}$ and $s_{2}$, respectively). The grey shaded region shows the expected X-ray ionization in the outflow cavity for a typical and a flaring T Tauri star \citep{Rab2017}. {\em Lower left panel}: Effective column density, $N_{\rm eff}$, accumulated by CRs and X-rays from the source centre outwards as a function of the radius. {\em Upper right panel}: ionization rate as a function of the effective column density. The upper x-axis shows the minimum energy that CR protons must have in order not to be thermalised \citep{Padovani2018}. The orange, magenta, and green filled regions in the three above panels represent the envelope of the $\zeta$ and $N_{\rm eff}$ profiles (see Sect.~5.2), while the red, purple, and dark green solid lines show their median value. {\em Lower right panel}: ionization rate map of the blue-shifted component (zoom of the upper left panel of Fig.~5) in logarithmic scale (coloured map). The purple hatched region shows a circle of radius 270~au, where CRs propagate according to the diffusive regime. For illustration purposes, the black arrow at the position angle $\vartheta$ shows a direction used to extract the $\zeta$ profile from the map. The black dashed lines identify the radius as well as the corresponding values of $N_{\rm eff}$ and $\zeta$, where the slope of the ionization rate changes from $s_{1}$ to $s_{2}$.
}
\label{fig:zeta_vs_radii}
\end{figure*}
\section{Discussion} \label{sec:Discussion}
\subsection{Local destruction of deuterated molecules}
The deuteration fractions observed in Class 0 protostars range between 1 and 10\%, with a large scatter from source to source and between the different protostellar scales \citep{Caselli2002,Roberts2002,Jorgensen2004}. The [DCO$^{+}$]/[H$^{13}$CO$^+$] values we find in B335\ are thus in broad agreement with the typical range reported in the literature. However, \citet{Butner1995} found a [DCO$^{+}$]/[HCO$^{+}$] of $\simeq$3\% in B335 at scales of $\sim$10000~au. This value is larger than the range of values we have found with ALMA at smaller scales, which indicates a decrease of the deuteration fraction toward the center of the envelope. Indeed, the deuteration fraction decreases down to $<1$\% toward the center and the northern region of the source. This decrease could be attributed to local radiation processes occurring during the protostellar phase, after the protostar has formed. An increase in the local radiation could promote processes that lead to the destruction of deuterated molecules, through the evaporation of neutrals to the gas phase \citep{Caselli1998, Jorgensen2004} or the increased abundance of ionized molecules like HCO$^{+}$\ \citep{Gaches2019}.
The decreasing trend in the deuteration fraction is further confirmed by our observations of N$_{2}$D$^{+}$\ (J=3-2). Figure \ref{fig:deutFrac_n2d_maps} shows the deuteration map overplotted with the N$_{2}$D$^{+}$\ (J=3-2) integrated emission. A lack of N$_{2}$D$^{+}$\ is observed in regions where the deuteration fraction is lower, suggesting that the physical processes lowering the abundance of O-bearing deuterated molecules also lower the abundance of N-bearing deuterated species, around the protostar.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{Images/DeutFrac/H13CO_DCO_N2Dcont_colDens_def2.pdf}
\caption{N$_{2}$D$^{+}$\ (J=3-2) emission at 3, 5, 10, 15 and 20$\sigma$, where $\sigma$ = 5.23 mJy beam$^{-1}$ (black contours), and deuteration fraction maps
for the blue- and red-shifted velocity components (left and right column, respectively). The spatial scale is shown in the top-right corners of each panel.}
\label{fig:deutFrac_n2d_maps}
\end{figure*}
\subsection{High C and O depletion values in the source} \label{sec:depletion_discussion}
At core scales, observations of the depletion in B335\ have been rather inconclusive: some studies suggested very little CO depletion (JCMT 20-arcsec beam probing 3000 au, \citealt{Murphy1997}), while others suggest a CO depletion of one order of magnitude, $f_D \sim 10$, at similarly large scales \citep{Walmsley1987}. Our spatially resolved observations reveal large CO depletion values of the gas in the inner envelope of B335, at radii $<1000$ au. As shown in Fig.~\ref{fig:depletionFrac}, we find depletion factors ranging from 20 to 40 at radii 1000 to 100 au, except at the core center where the depletion increases up to 80. Such CO depletion at protostellar radii of $100-1000$~au is somewhat unexpected, if one assumes depletion is due to freeze-out onto dust grains. Indeed, gas and dust at such radii should have temperatures beyond $\sim20-30$~K,
where the evaporation of CO from the surface of the dust should replenish it in the gas phase \citep{Anderl2016}. The high depletion values found at the protostar position are highly uncertain due to unknown optical depth, and unconstrained dust and gas temperatures. We stress that the possible depletion increase toward the center in B335 should be confirmed with further observations of higher J transitions of CO.
Several observational studies of protostellar cores have found depletion factors $f_D>10$, at scales $\sim 5000$ au, with some spatially resolved studies pointing to values up to $\sim 20$ \citep{Alonso-Albi2010,Christie2012,Yildiz2012,Fuente2012}. Models of protostellar evolution with short warm-up timescales in the envelope \citep[see e.g.,][]{Aikawa2012} show that the CO depletion factor may remain high even inside the sublimation radius, especially at early stages, because of the conversion of CO into CH$_{3}$OH and CH$_{4}$ on the grain surfaces. Photodissociation due to the protostellar UV radiation could contribute to lower the global CO abundance in the innermost layers of protostellar envelopes \citep{Visser2009b}. Additionally, other sources of ionization, such as CRs, could decrease the abundance of CO\ in the gas phase by promoting its reaction for the formation of HCO$^{+}$\ \citep{Gaches2019}. This decrease in CO by transformation into HCO$^{+}$\ could also explain the observed coincidence between regions with large CO depletion and low $R_{\rm D}$.
Another possible cause for the observed depletion could be the presence of relatively large dust grains, as observed in several protostars at similar scales by \citet{Galametz2019}. Indeed, larger grains are less efficiently warmed up \citep{Hiqbal2018}, limiting the desorption of CO ices to the gas phase. Note that the presence of large dust grains in the inner envelope of B335\ could also favor enhanced ionization of the gas \citep{Walmsley2004}, and thus also be consistent with our findings regarding ionization.
\subsection{Origin of the ionization}
From the single-dish observations of \citet{Butner1995}, \citet{Caselli1998} find ionization fractions of protostellar gas
at core scales of the order of 10$^{-8}$--10$^{-6}$. In the B335 outer envelope, where $n(\rm {H_2}) \sim 10^{4}$ cm$^{-3}$, they estimate $\chi_{\rm e}$\ of a few $10^{-6}$. In the inner envelope, where $n(\rm {H_2}) \sim 10^6$ cm$^{-3}$, we measure an ionization fraction of the gas $\chi_{\rm e}$\ around 2$\times$10$^{-6}$, with the largest values being $\sim$9$\times$10$^{-6}$. These large $\chi_{\rm e}$\ values could be produced by local UV and X-ray radiation generated in outflow shocks, altering the chemistry of the molecular content around the shocks \citep[e.g.,][]{Viti1999, Girart2002, Girart2005}. However, ionization by far-UV (FUV) photons can be ruled out for two reasons. The first is that the maximum energy of FUV photons is around 13~eV, thus below the threshold for ionization of molecular hydrogen (15.44~eV).
The second argument is related to the fact that the extinction cross section at FUV wavelengths ($\simeq0.1~\mu$m) is $\sigma_{\rm UV}\simeq2\times10^{-21}$~cm$^2$ per hydrogen atom \citep{Draine2003}. Thus, FUV photons are rapidly absorbed in a thin layer of column density equal to $(2\sigma_{\rm UV})^{-1}\simeq3\times10^{20}$~cm$^{-2}$.
The absorption cross section of X-ray photons, $\sigma_{\rm X}$, in the range 1$-$10~keV is much smaller than at FUV frequencies \citep{Bethell2011}. For example, at 1 keV and 10 keV, $\sigma_{\rm X}\simeq2\times10^{-22}$~cm$^2$ and $\simeq8\times10^{-25}$~cm$^2$
per H atom, respectively. While at 1~keV the corresponding absorption column density is still small, $(2\sigma_{\rm X})^{-1}\simeq2\times10^{21}$~cm$^{-2}$, at 10~keV it reaches about $6\times10^{23}$~cm$^{-2}$, much larger than the maximum effective column density, $N_{\rm eff}$, we found. The latter has been computed
by integrating the dust continuum at 110~GHz along directions identified by the position angle $\vartheta$ (see lower right panel of Fig.~\ref{fig:zeta_vs_radii}) and is representative of the
column density in the outflow cavity. We thus estimated the X-ray ionization rate, $\zeta_{\rm X}$, using the spectra described by \citet{Rab2017} for a typical and a flaring T Tauri star, assuming a
stellar radius of $2R_\odot$. Results for $\zeta_{\rm X}$ are shown in the upper left panel of Fig.~\ref{fig:zeta_vs_radii}. It is evident that X-ray ionization cannot explain the values of $\zeta$ estimated from observations. In addition, these values of $\zeta_{\rm X}$ must be considered as an upper limit, since the exponential attenuation was taken into account assuming $N_{\rm eff}$ accumulated in the outflow cavity. However, a fraction of these X-ray photons ends up in the disk, which has much higher densities than the cavity, so $N_{\rm eff}$ can easily reach values greater than $10^{24}$~cm$^{-2}$, as found by \citet{Grosso2020} for a Class 0 protostar. At such high $N_{\rm eff}$, $\zeta_{\rm X}$ decreases dramatically.
Once the high energy radiation field is excluded as the main source of the ionization rate estimated from observations, the only alternative left is CRs (see Sect.~\ref{sec:IonRate}). However, the range of derived values is much higher than those expected from Galactic CRs
\citep[see Appendix C in][for an updated compilation of observational estimates of $\zeta$]{Padovani2022}. Following \citet{McKee1989}, $\chi_{\rm e}$\ $=1.3 \times10^{-5} n(\rm H_2)^{-1/2}$, and for the volume densities derived in Sect.~\ref{sec:IonRate} we should expect $\chi_{\rm e}$\ values of $10^{-7}$ to $10^{-8}$, which are between one and two orders of magnitude lower than the values observed in B335. Since we also observe a significant increase of $\chi_{\rm e}$, and especially of $\zeta$, toward the center, these high values must have a local origin. In particular, we are most likely witnessing the local acceleration of CRs in shocks located in B335, as predicted by theoretical models \citep{Padovani2015, Padovani2016, Padovani2020}. The values of $\zeta$\ found in B335, between about $10^{-16}$ and $10^{-14}$~s$^{-1}$, are among the highest values reported in the literature toward star-forming regions \citep[e.g.][]{Maret2007,Ceccarelli2014b,Fuente2016,Fontani2017,Favre2017,Bialy2022}. We stress that this is the first time that the CR ionization rate is measured at such small scales in a solar-type protostar and can be
attributed to locally accelerated CRs, as well as the first time
that a map of $\zeta$ is obtained.
There are two possible origins for the production of local CRs close to a protostar. One is in strong magnetized shocks along the outflow
\citep{Padovani2015,Padovani2016,Fitz21,Padovani2021}. Indeed, synchrotron radiation, which is the signature of the presence of relativistic electrons, has been detected in some outflows \citep{Carrasco10, Ainsworth14, RodriguezK16, RodriguezK19, Osorio17}. The second possibility is in accretion shocks near the stellar surface (\citealt{Padovani2016,Gaches2018}). Both mechanisms are expected to generate low-energy CRs through the first-order Fermi acceleration mechanism with a rate of up to
$\sim10^{-13}$~s$^{-1}$.
The ionization rate slope found in B335\ for $r<270$~au is close to $-1$ and
compatible with a diffusive regime, in agreement with predictions of theoretical models. Surprisingly, at radii larger than $\sim$270 au, the slope decreases below $-2$, thus beyond the geometrical dilution regime. We speculate on two possible explanations. On the one hand, local CRs may have accumulated enough column density to start being thermalized. On the other hand, at radii above about 250~au, both the dust emission and the ionization rate maps gradually lose their central symmetry (see Figs.~\ref{fig:intensity_maps} and~\ref{fig:ionRate_maps}, respectively). For example, the continuum map shows two higher density arms toward the northeast and southeast. Therefore, the slope, calculated by averaging over the position angle distribution $\vartheta$, could fall below $-2$. The loss of symmetry at larger radii in the ionization maps might also be additional evidence of the importance of non-symmetric motions during the protostellar collapse (\citealt{Cabedo2021b}). However, we note that these values are also affected by our limited angular resolution of 1.5 arcsec\ ($\sim$ 250 au), implying that only 3 or 4 points in the plot are completely independent, and that observations at higher angular resolution are needed to confirm the observed trend.
Previous observations indicate that B335\ hosts a powerful and variable jet which might explain a large production of local CRs \citep{Galfalk2007, Yen2010}. Nevertheless, the exact origin of the local CR source is difficult to pinpoint due to our limited angular resolution. The effective column density calculated from the dust continuum map at 110~GHz (see lower left panel of Fig.~\ref{fig:zeta_vs_radii}) is consistent with that expected in the outflow cavity. To calculate the minimum energy that protons must have in order to pass through a given column density without being thermalized, we can make use of the stopping range function (see Fig.~2 in \citealt{Padovani2018}). At $N_{\rm eff}\simeq10^{22}$~cm$^{-2}$, protons with energies of about 10~MeV are thermalized and lose their ionizing power (see also the upper x-axis of Fig.~\ref{fig:zeta_vs_radii}).
As discussed above, if the shock is in the vicinity of the protostellar surface, an effective column density of the order of $10^{24}-10^{25}$~cm$^{-2}$ can easily be accumulated \citep{Grosso2020}. In this case, the minimum proton energy required to prevent thermalization is 100~MeV and 400~MeV at $10^{24}$ and $10^{25}$~cm$^{-2}$, respectively. Models of local CR acceleration in low-mass protostars predict that accelerated protons can reach maximum energies of the order of 100 MeV and 10 GeV if the shock in which they are accelerated is located in the jet or close to the protostellar surface, respectively \citep{Padovani2015,Padovani2016}. Thus, although the position of the shock that accelerates these local CRs in B335\ is unknown, models suggest that the maximum energies of the protons are sufficiently high to explain the observed ionization.
Finally, we note that B335\ has been observed to exhibit an organized magnetic field at scales similar to the ones probed here \citep{Maury2018}. This could also create favorable conditions for enhanced CR production.
\subsection{Magnetic field coupling and non-ideal MHD effects}
The ionization fraction of the gas in a young accreting protostar is not only important to understand the early chemistry around solar-type stars, feeding the scales where disks and planets will form. It is also the critical parameter that determines $(i)$
the coupling between the magnetic field and the infalling-rotating gas in the envelope, and $(ii)$ the role of diffusive processes, such as ambipolar diffusion, which counteract the outward transport of angular momentum due to magnetic fields (a process known as magnetic braking). The higher the gas ionization, the more efficient the coupling,
and the braking.
The large fraction of ionized gas unveiled by our observations in the inner envelope of B335\ should lead to almost perfect coupling of the gas to the local magnetic field lines, generating a drastic braking of rotational motions. We note that these new results lend extra support to other observational evidence of a very efficient magnetic braking at work in B335: the highly pinched magnetic field lines observed at similar scales by \citet{Maury2018}, and the failure to detect any disk larger than $\sim10$ au in this object \citep{Yen2015b}, although a new kinematic analysis may be motivated after the detection of several velocity components in the accreting gas at scales of $\sim500$~au \citep{Cabedo2021b}.
At the end of the pre-stellar stage, the fraction of ionized gas toward the central part of the core is expected to be very low as Galactic CRs do not penetrate deeply and there is no local ionization source \citep{Padovani2013,Ceccarelli2014,Silsbee2019}. Hence, the initial stages of protostellar evolution proceed under low ionization conditions at typical $\zeta$\ $<10^{-16}\,\rm{s}^{-1}$ \citep{Padovani2018,Ivlev2019}, which enhance the importance of non-ideal MHD processes \citep{Padovani2014}, with efficient diffusion of magnetic flux outwards during the very first phases of the collapse,
and reduced magnetic braking. If the local ionization processes we observe in B335\ are prototypical of solar-type Class 0 protostars, then CRs accelerated in the proximity of the protostar could be responsible for changing the ionization fraction of the gas at disk-forming scales once the protostar is formed, setting quasi-ideal MHD conditions in the inner envelope. The timescale and magnitude of this transition from non-ideal MHD conditions to quasi-ideal MHD conditions may depend on the protostellar properties: more detailed modeling and observations toward other protostars are required to address this question. Moreover, we note that the observed large ionization fraction of the gas is not in agreement with the values used to calculate the simplified chemical networks in non-ideal MHD models of protostellar formation and evolution \citep{Marchand2016,Zhao2020a}, as gas ionization is usually predicted from the gas density following \citet{McKee1989}. Thus, our observations suggest that a careful treatment of ionizing processes in magnetized models may be crucial to properly describe the gas-magnetic field coupling at different scales in embedded protostars.
In this scenario, the properties of protostellar disks could be tightly related to the local acceleration of low-energy CRs, and the development of a highly ionized region around the protostar. Thus, we would not expect the ionization fraction of the gas present at large-scale in the surrounding cloud to be a key factor in setting the disk properties, as proposed for example by \citet{Kuffmeier2020}. The scenario we propose is also in agreement with recent ALMA observations of the Class II disks in Orion that do not find supporting evidence of local cloud properties affecting the disk properties \citep{vanTerwisga2022}.
\section{Conclusions and summary} \label{sec:Conclusions}
This work provides new insights into the physico-chemical conditions of the gas in the young Class 0 protostar B335. For the first time, we derived a map of the gas ionization fraction and of the CR ionization rate at envelope scales $<1000$ au in a Class 0 protostar. This allowed us to discuss the interplay between physical processes responsible for gas ionization at disk-forming scales, and its consequences on magnetized models of solar-type star formation. We summarize here the main results of our analysis:
\begin{itemize}
\item We used ALMA to obtain molecular line emission maps of B335\ to characterize the ionization of the gas at envelope radii $\lesssim$ 1000 au, and found large fractions of ionized gas, $\chi_{\rm e}$, between $1\times10^{-6}$ and $8\times10^{-6}$.
These values are remarkably higher than the ones usually measured at core scales.
\item We produced for the first time a map of the CR ionization rate, $\zeta$. Our map reveals very high values of $\zeta$, between $10^{-16}$ and $10^{-14}$~s$^{-1}$, increasing at small envelope radii, toward the central protostellar embryo. This suggests that local acceleration of CRs, and not the penetration of interstellar CRs, may be responsible for the gas ionization in the inner envelope, potentially down to disk forming scales.
\item The large fraction of ionized gas we find suggests an efficient coupling between the magnetic field and the gas in the inner envelope of B335. This interpretation is also supported by the observations of highly organized magnetic field lines, and no detection of a large rotationally-supported disk in B335.
\end{itemize}
If our findings prove to be prototypical of the low-mass star formation process, they might imply that the collapse at scales $<1000$ au transitions from non-ideal to quasi-ideal MHD once the central protostar starts ionizing its surrounding gas, and very efficient magnetic braking of the rotating-infalling protostellar gas might then take place. Protostellar disk properties may thus be determined by local processes setting the magnetic field coupling, and not only by the amount of angular momentum available at large envelope scales and by the magnetic field strength in protostellar cores. We stress that the gas ionization we find in B335\ significantly stands out from the typical values routinely used in state-of-the-art models of protostellar formation and evolution. Our observations suggest that a careful treatment of ionizing processes in these magnetized models may be crucial to properly describe the processes responsible for disk formation and early evolution. We also note that more observations of B335\ at higher spatial resolution, and of other Class 0 protostars, are crucial to confirm our results.
\section*{Acknowledgments}
This project has received funding from the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (MagneticYSOs project, grant agreement N$\degr$ 679937). This work was also partially supported by the program Unidad de Excelencia María de Maeztu CEX2020-001058-M. JMG also acknowledges support by the grant PID2020-117710GB-I00 (MCI-AEI-FEDER,UE).
Additionally, this paper makes use of the following ALMA data: 2015.1.01188.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\bibliographystyle{aa}
\label{sec:introduction}
The ability to compute observables at arbitrary values of coupling constant(s) is crucial to understand and test a model. In the context of string theory and AdS/CFT this is very hard for backgrounds that are supported by Ramond-Ramond (RR) fluxes. In some cases, however, it is possible to relate the spectral problem of a given (RR) background to the determination of the finite-volume spectrum of an integrable two-dimensional quantum field theory (IQFT). This approach has led to spectacular results for the study of type IIB strings on $AdS_5\times S^5$ and their holographic dual, $\mathcal{N}=4$ supersymmetric Yang-Mills theory (SYM) in the planar limit~\cite{'tHooft:1973alw}. The two-dimensional IQFT here arises on the string worldsheet in light-cone gauge, and it is a rather intricate non-relativistic theory, see~\cite{Arutyunov:2009ga,Beisert:2010jr} for reviews.
More recently, a similar program has been undertaken for backgrounds of the $AdS_3$ type, see~\cite{Sfondrini:2014via} for a review of early efforts in this direction. The first step to determine the finite-volume quantum spectrum of an IQFT is to have a firm handle on its S~matrix in infinite volume. For a string theory in light-cone gauge the S~matrix should feature 8 Bosons and 8 Fermions, plus possibly bound states thereof. Hence, we are dealing with at least a $256\times 256$ matrix. The study of this object can be broken down into two parts: first, we may use the symmetries of the theory to constrain as many entries as possible. This was done for the $AdS_3\times S^3\times S^3\times S^1$ background in~\cite{Borsato:2012ud,Borsato:2015mma} and for $AdS_3\times S^3\times T^4$ in~\cite{Borsato:2013qpa, Borsato:2014exa}. The S~matrix of the latter background turns out to be simpler than that of the former, and it will be the focus of this paper. In $AdS_3\times S^3\times T^4$ it was found~\cite{Borsato:2014exa} that symmetries determine the S~matrix up to ten dressing factors --- compared to the single dressing factor of $AdS_5\times S^5$. Moreover, ``braiding'' unitarity (a symmetry expected in IQFTs, see \textit{e.g.}~\cite{Arutyunov:2009ga} for a review) and parity can be used to further reduce the number of independent dressing factors to five~\cite{Borsato:2014hja,upcoming:massless}. These dressing factors are not arbitrary. Rather, they have to satisfy some constraints due to (``physical'' or ``generalised'') unitarity as well as to a non-relativistic analogue of crossing symmetry~\cite{Janik:2006dc}. These constraints have been derived in~\cite{Borsato:2014hja,upcoming:massless} for the $AdS_3\times S^3\times T^4$ S~matrix.
The crossing equations typically force the dressing factors to have a very non-trivial structure, see \textit{e.g.}~\cite{Bombardelli:2016rwb}. For instance in relativistic theories it is convenient to introduce a rapidity variable~$\theta$ satisfying
\begin{equation}
p_i = M\,\sinh\theta_i\,,\qquad E_i =M\,\cosh\theta_i\,.
\end{equation}
As long as we disregard the dressing factor, the entries of the S~matrix are $2\pi i$-periodic functions of~$(\theta_1-\theta_2)$. However, the dressing factors are typically non-periodic functions on the~$(\theta_1-\theta_2)$ plane. Equivalently, they encode the information on all sheets of the Mandelstam plane. Still, the crossing and unitarity conditions do not have a unique solution; there are non-trivial solutions of the associated \textit{homogeneous} equations, the so-called Castillejo-Dalitz-Dyson (CDD) factors~\cite{Castillejo:1955ed}, by which we can multiply any particular candidate dressing factor to obtain a new one. In relativistic theories, however, it is usually possible to fix a solution based on its expected asymptotic behaviour and poles in $(\theta_1-\theta_2)$. Unfortunately, things are substantially harder for non-relativistic models like the one emerging from the worldsheet of $AdS$ strings.
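Before turning to the non-relativistic case, let us illustrate this ambiguity in the familiar relativistic setting, where the prototypical CDD factors take the form
\begin{equation}
S_{\text{\tiny CDD}}(\theta)=\prod_{k}\frac{\sinh\theta+i\sin\pi\alpha_k}{\sinh\theta-i\sin\pi\alpha_k}\,,
\end{equation}
with the~$\alpha_k$ free parameters. Each such factor manifestly satisfies the homogeneous unitarity condition $S_{\text{\tiny CDD}}(\theta)\,S_{\text{\tiny CDD}}(-\theta)=1$, as well as crossing invariance, since $\sinh(i\pi-\theta)=\sinh\theta$.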
In these non-relativistic models, the S~matrix depends on two distinct rapidities, one for each particle, and it is not a meromorphic function of such variables. Rather, it has infinitely many pairs of branch points. Quite remarkably, the solution to the $AdS_5\times S^5$ crossing equation was found by Beisert, Eden and Staudacher (BES)~\cite{Beisert:2006ez}, who also relied on intuition coming from perturbative computations in the dual $\mathcal{N}=4$ SYM. That dressing factor turns out to have remarkable properties~\cite{Dorey:2007xn,Arutyunov:2009kf}, which are essential to define a ``mirror'' theory through analytic continuation --- a necessary step to eventually describe the finite-volume spectrum of the theory~\cite{Ambjorn:2005wa,Arutyunov:2007tc}.
For $AdS_3\times S^3\times T^4$ things are more complicated, at least in general.%
\footnote{There is one exception: the $AdS_3\times S^3\times T^4$ background can be realised without any RR flux. That case can be understood as a Wess-Zumino-Witten model~\cite{Maldacena:2000hw} and its S~matrix, dressing factor and spectrum can also be easily obtained by integrability~\cite{Baggio:2018gct, Dei:2018mfl}.}
In fact, the analytic structure of the worldsheet theory is further complicated by the presence of \textit{massless} excitations~\cite{Borsato:2014exa,Borsato:2014hja,upcoming:massless}, whose non-relativistic dispersion relation has no mass gap. Besides, very little is known about the dual theory.%
\footnote{%
Once again, with the exception of backgrounds that can be realised as WZW models, see~\cite{Giribet:2018ada,Eberhardt:2018ouy,Eberhardt:2021vsx}.}
Notwithstanding these difficulties, the dressing factors for $AdS_3\times S^3\times T^4$ were proposed in the case where the background is supported by RR flux only (this is the case most similar to $AdS_5\times S^5$) in~\cite{Borsato:2013hoa,Borsato:2016xns}.
While that proposal does solve the crossing equations and unitarity conditions, some of its analytic properties are troublesome, especially for what concerns the scattering processes involving massless modes. Qualitatively, when studying the proposal of~\cite{Borsato:2013hoa,Borsato:2016xns}, we encounter the following stumbling blocks:
\begin{enumerate}
\item The dressing factors scattering massive particles violate the parity invariance of the model. This fact was not appreciated in the existing literature, but can be easily checked as we do in appendix~\ref{app:monodromy}.
\item The dressing factors involving massless particles contain the Arutyunov-Frolov-Staudacher (AFS) dressing factor~\cite{Arutyunov:2004vx}, which is the leading-order term of the asymptotic expansion of the BES phase. However, unlike the BES factor, the AFS one has a rather complicated analytic structure that makes it hard to analytically continue.
\item The massless-massive dressing factor has additional apparent square-root branch points whose position depends on the relative value of the momenta of the two scattered particles.
\item The form of the dressing factors makes it hard to ``fuse'' them, \textit{i.e.}\ to use them as building blocks of the bound-state S matrices.
\item It is not clear how to analytically continue the dressing factors to the ``mirror'' region.
\item The dressing factors are not compatible with some of the perturbative computations in the literature (though this may be due to subtleties in the perturbative computations related to infrared divergences).
\end{enumerate}
In our effort to resolve these issues, we have found a different solution of the crossing equations, which resolves problems 1--4. To do this, we have reformulated the crossing equations in terms of rapidities following~\cite{Beisert:2006ib,Fontanella:2019baq} so that part of the solution is of difference-type. Moreover, a careful analysis of the analytic properties of massless particles suggests that the path used for analytic continuation in the crossing transformation in~\cite{Borsato:2016xns} is not correct. As a consequence, the proposal of~\cite{Borsato:2016xns} could not be correct. Our proposal passes several nontrivial consistency checks, including having good properties in the ``mirror'' region (point 5 above) and under ``fusion''. As for point 6 of the list above, we will still encounter discrepancies in the one-loop expansion of the dressing factors, which may indeed be explained by infrared issues in the perturbative computations.
This article is structured as follows. We start in section~\ref{sec:smatrix} by fixing our convention for the normalisation of the S~matrix and listing the constraints on the dressing factors; in section~\ref{sec:smatrix:BESnormalisation} we rewrite those conditions when a BES factor is stripped out of each dressing factor, which will facilitate our subsequent analysis.
In section~\ref{sec:rapidities} we discuss the analytic properties of massive and massless particles in the string region, mirror region, and anti-string (or crossed) region; in particular, we introduce rapidity variables by means of which we can write the crossing equations in difference form, which we do in section~\ref{sec:rapidities:crossing}. In section~\ref{sec:buildingblocks} we introduce the building blocks that we will need to construct our solutions, starting from the BES one in section~\ref{sec:buildingblocks:BES}, which we extend to the massive-massless and massless-massless kinematics; we then introduce three additional functions which we will use to solve the difference-form part of the crossing equations. Based on that, in section~\ref{sec:proposal} we present our proposal for the dressing factors, and discuss the analytic properties and perturbative expansion of the various functions. We also compare our proposal with the existing one, in section~\ref{sec:proposal:relations}, and discuss how it can be adapted to backgrounds with RR and NSNS fluxes too in section~\ref{sec:proposal:mixedflux}. Finally, in section~\ref{sec:BYE} we write the appropriately-normalised Bethe-Yang equations, which serve as a summary of our proposal as well as the starting point for the study of the spectrum of the theory, and we present our conclusions in section~\ref{sec:conclusions}.
The appendices contain a number of slightly more technical discussions and derivations. The properties of the BES factor, especially in the mirror region, as well as its perturbative expansion, are discussed in appendices~\ref{app:BES} and~\ref{app:BESexpansion}, respectively.
In appendix~\ref{app:Fourier} we derive one of the new functions which we need for our solution.
In appendix~\ref{app:lcgauge} we discuss how to normalise the S-matrix elements in order to compare with perturbative results. In appendix~\ref{app:Zhukovsky} and~\ref{app:BMNexpansion} we discuss the perturbative expansion of rapidities and dressing factors. In appendix~\ref{app:monodromy} we show that the existing proposal for the massive dressing factors violates parity invariance of the~model.
\section{The S matrix and crossing equations}
\label{sec:smatrix}
The S~matrix for fundamental particles in $AdS_3\times S^3\times T^4$ is composed of several blocks, corresponding to different irreducible representations of the light-cone symmetry algebra. In terms of the Cartan elements of $\mathfrak{psu}(1,1|2)^{\oplus2}$ we consider in particular
\begin{equation}
\gen{E}=\gen{L}_0-\gen{J}^3 +\tilde{\gen{L}}_0-\tilde{\gen{J}}^3\,,\qquad
\gen{M}=\gen{L}_0-\gen{J}^3 -\tilde{\gen{L}}_0+\tilde{\gen{J}}^3\,.
\end{equation}
While $\gen{E}$ is the light-cone Hamiltonian, $\gen{M}$ is a combination of spin, which is quantised for the pure-RR backgrounds.%
\footnote{When NSNS fluxes are present, the quantisation of $\gen{M}$ only holds for states that satisfy the level-matching condition~\cite{Hoare:2013lja,Lloyd:2014bsa}.}
In fact, for fundamental particles the eigenvalue $M$ of $\gen{M}$ is
\begin{equation}
M = \begin{cases}
+1& \text{``left'':}\quad (Y,\psi^{\alpha},Z)\\
-1& \text{``right'':}\quad (\bar{Z},\bar{\psi}^{\alpha},\bar{Y})\\
0& \text{``massless'':}\quad (\chi^{\dot{\alpha}},T^{\alpha\dot{\alpha}},\tilde{\chi}^{\dot{\alpha}})
\end{cases}
\end{equation}
It is possible also to define bound states, which have $M\in\mathbb{Z}$~\cite{Borsato:2013hoa}.
Particles transform in short representations, and the shortening condition allows to write down a dispersion relation
\begin{equation}
E(p) =\sqrt{M^2+4h^2\sin^2\Big(\frac{p}{2}\Big)}\,,
\end{equation}
where $h$ acts as a coupling constant. The dispersion is $2\pi$-periodic, and real particles have momentum $-\pi< p<\pi$. Note that for $M=0$ the dispersion is not gapped, and it is non-analytic. To properly account for this fact, it is necessary to split massless excitations into anti-chiral ($-\pi< p <0$) and chiral ($0<p<\pi$) ones, and to distinguish the limits $p\to0^\pm$~\cite{upcoming:massless}.
The worldsheet theory is weakly coupled when $h\gg1$ and the momentum is rescaled as $p/h$~\cite{Berenstein:2002jq}. In that case, the dispersion becomes relativistic with massive and massless particles.
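More explicitly, denoting the rescaled momentum by $\mathrm{p}$ and expanding at large $h$ with $\mathrm{p}$ fixed, the dispersion relation above becomes
\begin{equation}
E=\sqrt{M^2+4h^2\sin^2\Big(\frac{\mathrm{p}}{2h}\Big)}=\sqrt{M^2+\mathrm{p}^2}+O(h^{-2})\,,
\end{equation}
\textit{i.e.}\ the relativistic dispersion of a particle of mass $M$ and momentum $\mathrm{p}$, with massless particles corresponding to $M=0$.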
By virtue of a discrete symmetry under flipping the sign of $M$ for all particles involved in a given process, we have only five distinct blocks in the S~matrix of fundamental particles:
\begin{enumerate}
\item left-left scattering, or equivalently right-right scattering.
\item left-right scattering, or equivalently right-left scattering.
\item left-massless scattering, or equivalently right-massless scattering.
\item massless-left scattering, or equivalently massless-right scattering.
\item massless-massless scattering.
\end{enumerate}
However, every time we encounter a massless block we have two options: the particle can be chiral (positive velocity) or anti-chiral (negative velocity). Effectively, this doubles the number of blocks for points 3.~and 4., and it quadruples it for point 5., at least in principle. However, as we will review, there are a number of discrete symmetries that relate several of these blocks to each other~\cite{upcoming:massless}.
In order to describe the S~matrix for the various blocks it is convenient to introduce the Zhukovsky variables, defined as
\begin{equation}
\label{eq:Zhukovsky}
x^\pm(p,|M|)=e^{\pm i p/2}\frac{|M|+\sqrt{M^2+4h^2\sin^2(p/2)}}{2h\sin(p/2)}\,.
\end{equation}
Note that for $M=0$ we have
\begin{equation}
x^+(p,0)\,x^-(p,0)=1\,.
\end{equation}
As a result, we introduce two distinct notations for $|M|=1$ and $M=0$, namely
\begin{equation}
x_p^\pm = x^\pm(p,1)\,,\qquad x_p = x^+(p,0)=\frac{1}{x^-(p,0)}\,.
\end{equation}
In terms of these expressions we have, for massive particles,
\begin{equation}
e^{ip}=\frac{x^+_p}{x^-_p}\,,\qquad
E=\frac{h}{2i}\left(
x^+_p-\frac{1}{x^+_p}-x^-_p+\frac{1}{x^-_p}\right)\,,
\end{equation}
which for massless particles takes the slightly simpler form
\begin{equation}
e^{ip}=(x_p)^2\,,\qquad
E=\frac{h}{i}\left(
x_p-\frac{1}{x_p}\right)\,.
\end{equation}
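In fact, the massless expressions follow from the massive ones by formally setting $x^+_p=x_p$ and $x^-_p=1/x_p$: for instance,
\begin{equation}
E=\frac{h}{2i}\left(x_p-\frac{1}{x_p}-\frac{1}{x_p}+x_p\right)=\frac{h}{i}\left(
x_p-\frac{1}{x_p}\right)\,,
\end{equation}
and similarly $e^{ip}=x^+_p/x^-_p=(x_p)^2$.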
\subsection{S-matrix normalisation}
\label{sec:smatrix:normalisation}
Symmetries fix the scattering matrix up to a dressing factor for each block. In order to define in an unambiguous way our choice of normalisation, we fix the following scattering processes among highest-weight states in each representation, in the notation of~\cite{Borsato:2014hja}. For the massive states,
\begin{equation}
\label{eq:massivenorm}
\begin{aligned}
\mathbf{S}\,\big|Y_{p_1}Y_{p_2}\big\rangle&=
e^{+i p_1}e^{-i p_2}
\frac{x^-_1-x^+_2}{x^+_1-x^-_2}
\frac{1-\frac{1}{x^-_1x^+_2}}{1-\frac{1}{x^+_1x^-_2}}\big(\sigma^{\bullet\bullet}_{12}\big)^{-2}\,
\big|Y_{p_1}Y_{p_2}\big\rangle,\\
%
\mathbf{S}\,\big|Y_{p_1}\bar{Z}_{p_2}\big\rangle&=
e^{-i p_2}
\frac{1-\frac{1}{x^-_1x^-_2}}{1-\frac{1}{x^+_1x^+_2}}
\frac{1-\frac{1}{x^-_1x^+_2}}{1-\frac{1}{x^+_1x^-_2}}\big(\widetilde{\sigma}^{\bullet\bullet}_{12}\big)^{-2}\,
\big|Y_{p_1}\bar{Z}_{p_2}\big\rangle,\\
%
\mathbf{S}\,\big|\bar{Z}_{p_1}Y_{p_2}\big\rangle&=
e^{+ip_1}
\frac{1-\frac{1}{x^+_1x^+_2}}{1-\frac{1}{x^-_1x^-_2}}
\frac{1-\frac{1}{x^-_1x^+_2}}{1-\frac{1}{x^+_1x^-_2}}\big(\widetilde{\sigma}^{\bullet\bullet}_{12}\big)^{-2}\,
\big|\bar{Z}_{p_1}Y_{p_2}\big\rangle,\\
%
\mathbf{S}\,\big|\bar{Z}_{p_1}\bar{Z}_{p_2}\big\rangle&=
\frac{x^+_1-x^-_2}{x^-_1-x^+_2}
\frac{1-\frac{1}{x^-_1x^+_2}}{1-\frac{1}{x^+_1x^-_2}}\big(\sigma^{\bullet\bullet}_{12}\big)^{-2}\,
\big|\bar{Z}_{p_1}\bar{Z}_{p_2}\big\rangle,
\end{aligned}
\end{equation}
where the latter two equations are related to the former two by left-right symmetry~\cite{Borsato:2012ud} (the middle two equations are also related to each other by braiding unitarity).
In the mixed-mass sector we have to pick a chirality for the massless particle. We will assume that the first particle always has positive velocity and the second one always has negative velocity. Hence we have
\begin{equation}
\label{eq:mixednormalisation}
\begin{aligned}
\mathbf{S}\,\big|Y_{p_1}\chi^{\dot{\alpha}}_{p_2}\big\rangle&=
e^{+\frac{i}{2} p_1}e^{-i p_2}
\frac{x^-_1-x_2}{1-x^+_1x_2}\big(\sigma^{\bullet-}_{12}\big)^{-2}\,
\big|Y_{p_1}\chi^{\dot{\alpha}}_{p_2}\big\rangle,\\
%
\mathbf{S}\,\big|\bar{Z}_{p_1}\chi^{\dot{\alpha}}_{p_2}\big\rangle&=
e^{-\frac{i}{2} p_1}e^{-i p_2}
\frac{1-x^+_1x_2}{x^-_1-x_2}\big(\sigma^{\bullet-}_{12}\big)^{-2}\,
\big|\bar{Z}_{p_1}\chi^{\dot{\alpha}}_{p_2}\big\rangle,\\
%
\mathbf{S}\,\big|\chi^{\dot{\alpha}}_{p_1}Y_{p_2}\big\rangle&=
e^{+i p_1}e^{-\frac{i}{2} p_2}
\frac{1-x_1x^+_2}{x_1-x^-_2}\big(\sigma^{+\bullet}_{12}\big)^{-2}\,
\big|\chi^{\dot{\alpha}}_{p_1}Y_{p_2}\big\rangle,\\
%
\mathbf{S}\,\big|\chi^{\dot{\alpha}}_{p_1}\bar{Z}_{p_2}\big\rangle&=
e^{+i p_1}e^{+\frac{i}{2} p_2}
\frac{x_1-x^-_2}{1-x_1x^+_2}\big(\sigma^{+\bullet}_{12}\big)^{-2}\,
\big|\chi^{\dot{\alpha}}_{p_1}\bar{Z}_{p_2}\big\rangle,
\end{aligned}
\end{equation}
where the first and second lines, as well as the third and fourth, are related by left-right symmetry, while the former two lines and the latter two are related by a combination of braiding unitarity and parity. As we will see the two remaining choices for the chiralities of the massive particle, which involve $\sigma^{-\bullet}_{12}$ and $\sigma^{\bullet+}_{12}$, are related to the ones above by braiding unitarity. Effectively, we will be dealing with a single dressing factor in the mixed-mass sector.
Finally, for massless modes in the kinematics regime where the first particle is chiral and the second particle is antichiral, we have simply
\begin{equation}
\label{eq:masslessnorm}
\mathbf{S}\,\big|\chi^{\dot{\alpha}}_{p_1}\chi^{\dot{\beta}}_{p_2}\big\rangle=
\big(\sigma^{+-}_{12}\big)^{-2}\,
\big|\chi^{\dot{\alpha}}_{p_1}\chi^{\dot{\beta}}_{p_2}\big\rangle\,.
\end{equation}
There are three more choices for the velocities of the massless particles. One, which involves $\sigma^{-+}_{12}$, is related to the above by braiding unitarity as well as by parity. The remaining two involve $\sigma^{\pm\pm}_{12}$. They are related to each other by parity but are in principle independent of $\sigma^{\pm\mp}_{12}$. Hence, for completeness, let us re-write the normalisation for the same-chirality scattering, which is
\begin{equation}
\mathbf{S}\,\big|\chi^{\dot{\alpha}}_{p_1}\chi^{\dot{\beta}}_{p_2}\big\rangle=
\big(\sigma^{++}_{12}\big)^{-2}\,
\big|\chi^{\dot{\alpha}}_{p_1}\chi^{\dot{\beta}}_{p_2}\big\rangle\,.
\end{equation}
Of course there is quite some freedom in the choice of the above normalisations. To justify them somewhat, let us briefly look at the behaviour which we expect from these dressing factors in the BMN limit~\cite{Berenstein:2002jq}, where the coupling is large, $h\gg1$, and momenta are small, scaling like $1/h$. We will later more systematically consider a near-BMN expansion, but for now let us just take $h\to\infty$ and keep only the leading term. From the definition of the Zhukovsky variables~\eqref{eq:Zhukovsky} (see also appendix~\ref{app:Zhukovsky}) we see that in this limit $p\to0$, $x^\pm\to\infty$ and $x\to\pm1$, where the sign depends on whether the momentum goes to $0^\pm$, respectively. The theory is meant to become free, so that all of the diagonal S-matrix elements introduced above should go to one. As a consequence, we see that by our definition all of the dressing factors introduced above should also go to one when the theory becomes free.
It is worth remarking that they differ from what appeared in the early literature for the normalisation of the mixed-mass dressing factor, see~\cite{Borsato:2014hja,Borsato:2016xns}. Those expressions involved a term of the form
\begin{equation}
\sqrt{\frac{x_1^--\frac{1}{x_2}}{x_1^+-x_2}\,\frac{x_1^--x_2}{x_1^+-\frac{1}{x_2}}}\,,
\end{equation}
which manifestly features branch points as $x^\pm(p_1)$ approaches $x(p_2)$ or $1/x(p_2)$. One of our initial motivations was investigating whether it is possible to self-consistently define the S~matrix so that such branch points are manifestly absent.
Given the normalisations above, the rest of the S~matrix elements are fixed by the theory's symmetries in light-cone gauge~\cite{Borsato:2012ud,Borsato:2014hja,Borsato:2016xns}. Note that, throughout the cited literature, quite a variety of conventions have unfortunately been used. The complete expression for the S-matrix elements has been recently collected in~\cite{Eden:2021xhe}.
Finally, the functions denoted by $\sigma$'s are the various dressing factors, whose discussion is the purpose of this work.
\subsection{Symmetries and constraints on the dressing factors}
\label{sec:smatrix:symmetries}
There are several discrete symmetries that may be used to relate the dressing factors to each other or to themselves. As discussed elsewhere~\cite{upcoming:massless} the most important such symmetries are parity, braiding unitarity, and physical unitarity, as well as crossing symmetry.
\subsubsection{Parity}
The parity transformation allows us to get constraints on the dressing factors under $(p_1,p_2)\to(-p_1,-p_2)$. In $AdS_3\times S^3\times T^4$ this is particularly interesting because it allows us to relate the chiral and anti-chiral kinematics for massless particles.
Let us start from the massive sectors, where we get the simple conditions on the dressing factors%
\footnote{Notice that all the constraints will hold, strictly speaking, for the square of the dressing factors, as that is what appears in the S-matrix elements.}
\begin{equation}
\sigma^{\bullet\bullet}(-p_1,-p_2)^2\,\sigma^{\bullet\bullet}(p_1,p_2)^2=1\,,\qquad
\widetilde{\sigma}^{\bullet\bullet}(-p_1,-p_2)^2\,\widetilde{\sigma}^{\bullet\bullet}(p_1,p_2)^2=1\,.
\end{equation}
For mixed-mass scattering we can write an expression valid for arbitrary chirality,
\begin{equation}
\begin{aligned}
\mathbf{S}\,\big|Y_{p_1}\chi^{\dot{\alpha}}_{p_2}\big\rangle&=
e^{+\frac{i}{2} p_1}e^{-i p_2}
\frac{x^-_1-x_2}{1-x^+_1x_2}\big(\sigma^{\bullet\circ}_{12}\big)^{-2}\,
\big|Y_{p_1}\chi^{\dot{\alpha}}_{p_2}\big\rangle,\\
%
\mathbf{S}\,\big|\bar{Z}_{p_1}\chi^{\dot{\alpha}}_{p_2}\big\rangle&=
e^{-\frac{i}{2} p_1}e^{-i p_2}
\frac{1-x^+_1x_2}{x^-_1-x_2}\big(\sigma^{\bullet\circ}_{12}\big)^{-2}\,
\big|\bar{Z}_{p_1}\chi^{\dot{\alpha}}_{p_2}\big\rangle,\\
%
\mathbf{S}\,\big|\chi^{\dot{\alpha}}_{p_1}Y_{p_2}\big\rangle&=
e^{+i p_1}e^{-\frac{i}{2} p_2}
\frac{1-x_1x^+_2}{x^-_2-x_1}\big(\sigma^{\circ\bullet}_{12}\big)^{-2}\,
\big|\chi^{\dot{\alpha}}_{p_1}Y_{p_2}\big\rangle,\\
%
\mathbf{S}\,\big|\chi^{\dot{\alpha}}_{p_1}\bar{Z}_{p_2}\big\rangle&=
e^{+i p_1}e^{+\frac{i}{2} p_2}
\frac{x^-_2-x_1}{1-x_1x^+_2}\big(\sigma^{\circ\bullet}_{12}\big)^{-2}\,
\big|\chi^{\dot{\alpha}}_{p_1}\bar{Z}_{p_2}\big\rangle,
\end{aligned}
\end{equation}
which reduces to the expressions given in~\eqref{eq:mixednormalisation} by choosing the appropriate kinematics. Parity imposes a constraint on the dressing factors that we just introduced, namely
\begin{equation}
\sigma^{\circ\bullet}(p_1,p_2)^2\,\sigma^{\circ\bullet}(-p_1,-p_2)^2=-1\,,\qquad
\sigma^{\bullet\circ}(p_1,p_2)^2\,\sigma^{\bullet\circ}(-p_1,-p_2)^2=-1\,.
\end{equation}
It is also worth looking at the behaviour which we expect from these dressing factors in the BMN limit~\cite{Berenstein:2002jq}. When $h\to\infty$ the theory must become free, which forces for $p_2<0$
\begin{equation}
\big(\sigma^{\bullet\circ}_{12}\big)^{-2}=\big(\sigma^{\bullet-}_{12}\big)^{-2}\approx +1\,,
\end{equation}
while for $p_1>0$
\begin{equation}
\big(\sigma^{\circ\bullet}_{12}\big)^{-2}=-\big(\sigma^{+\bullet}_{12}\big)^{-2}\approx-1\,,
\end{equation}
where we need to be careful to specify physical regions for the momenta, see~\cite{upcoming:massless}.
Let us come now to massless--massless scattering. Here we get
\begin{equation}
\sigma^{+-}(p_1,p_2)^2\,\sigma^{-+}(-p_1,-p_2)^2=1\,,\qquad
\sigma^{++}(p_1,p_2)^2\,\sigma^{--}(-p_1,-p_2)^2=1\,.
\end{equation}
We can think of this as a way of defining $\sigma^{-+}(p_1,p_2)$ and $\sigma^{--}(p_1,p_2)$ in terms of $\sigma^{+-}(p_1,p_2)$ and $\sigma^{++}(p_1,p_2)$, respectively. As a consequence, we can replace the four massless phases with \textit{two phases}, one for opposite-chirality and one for same-chirality. We set
\begin{equation}
\sigma^{\circ\circ}(p_1,p_2)=\sigma^{++}(p_1,p_2)\,,\qquad
\widetilde{\sigma}^{\circ\circ}(p_1,p_2)=\sigma^{+-}(p_1,p_2)\,.
\end{equation}
We summarise the independent dressing factors in table~\ref{table:dressing}.
Despite the fact that this is not required by any physical symmetry, we will eventually be looking for a solution which may be expressed in terms of \textit{a single function} for all massless dressing factors, $\widetilde{\sigma}^{\circ\circ}=\sigma^{\circ\circ}$. We will return to this point in section~\ref{sec:proposal:massless}.
\begin{table}[t]
\centering
\begin{tabular}{|l | l|}
\hline
Dressing factor & Particles scattered \\
\hline
$\sigma^{\bullet\bullet}(p_1,p_2)$ & Two massive particles with $M_1=M_2=\pm1$.\\
$\widetilde{\sigma}^{\bullet\bullet}(p_1,p_2)$ & Two massive particles with $M_1=-M_2=\pm1$.\\
$\sigma^{\circ\bullet}(p_1,p_2)$ & One massive and one massless particle, $M_1=0$, $|M_2|=1$.\\
$\sigma^{\bullet\circ}(p_1,p_2)$ & One massless and one massive particle, $|M_1|=1$, $M_2=0$.\\
$\sigma^{\circ\circ}(p_1,p_2)$ & Two massless particles, $M_1=M_2=0$, same chirality.\\
$\widetilde{\sigma}^{\circ\circ}(p_1,p_2)$ & Two massless particles, $M_1=M_2=0$, opposite chirality.\\
\hline
\end{tabular}
\caption{A summary of the independent dressing factors once parity has been imposed.}
\label{table:dressing}
\end{table}
\subsubsection{Braiding unitarity}
Braiding unitarity is a consistency condition of the Zamolodchikov-Faddeev algebra (see \textit{e.g.}~\cite{Arutyunov:2009ga} for a review). On the massive dressing factors it imposes
\begin{equation}
\sigma^{\bullet\bullet}(p_1,p_2)^2\,\sigma^{\bullet\bullet}(p_2,p_1)^2=1\,,\qquad
\widetilde{\sigma}^{\bullet\bullet}(p_1,p_2)^2\,\widetilde{\sigma}^{\bullet\bullet}(p_2,p_1)^2=1\,.
\end{equation}
For the mixed-mass factors we get
\begin{equation}\label{eq:braidunit}
\sigma^{\bullet\circ}(p_1,p_2)^2\,\sigma^{\circ\bullet}(p_2,p_1)^2=1\,,
\end{equation}
which we may think of as the definition of $\sigma^{\circ\bullet}(p_2,p_1)$ in terms of $\sigma^{\bullet\circ}(p_1,p_2)$. Finally, for massless-massless dressing factors, we distinguish the case of opposite-chirality scattering and the case of same-chirality scattering. In the former we have
\begin{equation}
\sigma^{+-}(p_1,p_2)^2\,\sigma^{-+}(p_2,p_1)^2=1\,,
\end{equation}
which, in terms of the opposite-chirality phase $\widetilde{\sigma}^{\circ\circ}$, gives the constraint
\begin{equation}
\widetilde{\sigma}^{\circ\circ}(p_1,p_2)^2=\widetilde{\sigma}^{\circ\circ}(-p_2,-p_1)^2\,.
\end{equation}
Instead, for the same-chirality scattering we get two constraints
\begin{equation}
\sigma^{++}(p_1,p_2)^2\,\sigma^{++}(p_2,p_1)^2=1\,,\qquad
\sigma^{--}(p_1,p_2)^2\,\sigma^{--}(p_2,p_1)^2=1\,,
\end{equation}
which boil down to a constraint on $\sigma^{\circ\circ}$,
\begin{equation}
\sigma^{\circ\circ}(p_1,p_2)^2\,\sigma^{\circ\circ}(p_2,p_1)^2=1\,.
\end{equation}
We observe that, while for opposite-chirality scattering we can rescale the phase by a constant $\sigma^{\pm\mp}\to e^{\pm i \xi}\sigma^{\pm\mp}$ without spoiling braiding unitarity, this is not possible for same-chirality scattering.
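Indeed, under a constant rescaling $\sigma^{++}\to e^{i\xi}\,\sigma^{++}$ the left-hand side of the same-chirality condition becomes
\begin{equation}
\big(e^{i\xi}\sigma^{++}(p_1,p_2)\big)^2\big(e^{i\xi}\sigma^{++}(p_2,p_1)\big)^2=e^{4i\xi}\,\sigma^{++}(p_1,p_2)^2\,\sigma^{++}(p_2,p_1)^2\,,
\end{equation}
so that only the discrete choices $\xi\in\tfrac{\pi}{2}\mathbb{Z}$ are allowed, whereas for opposite chirality the factors $e^{\pm i\xi}$ cancel between $\sigma^{+-}(p_1,p_2)$ and $\sigma^{-+}(p_2,p_1)$.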
\subsubsection{Physical unitarity}
The weakest unitarity requirement that we may impose is that, when we are scattering particles with real energy and we are considering a physical kinematics regime, the S~matrix is a unitary matrix. In formulae,
\begin{equation}
\gen{S}(p_1,p_2)^\dagger\,\gen{S}(p_1,p_2)=\gen{1}\,,\qquad
p_1,p_2\in\mathbb{R}\,,\quad v_1>v_2\,.
\end{equation}
The last condition, on the velocity of the particles, ensures that scattering can happen. For the various dressing factors this means simply that, when the velocities are as above,
\begin{equation}
|\sigma^{\bullet\bullet}|=|\widetilde{\sigma}^{\bullet\bullet}|=|\sigma^{\bullet\circ}|=|\sigma^{\circ\bullet}|=|\sigma^{\circ\circ}|=|\widetilde{\sigma}^{\circ\circ}|=1\qquad\text{(physical scattering)}.
\end{equation}
In general, and in particular for particles that may form bound states, we also want to consider complex values of momenta. In that case we can impose a stronger condition, generalised unitarity, which reads
\begin{equation}
\begin{aligned}
\Big(\sigma^{\bullet\bullet}(p_1^*,p_2^*)^2\Big)^*\sigma^{\bullet\bullet}(p_1,p_2)^2=
\Big(\widetilde{\sigma}^{\bullet\bullet}(p_1^*,p_2^*)^2\Big)^*\widetilde{\sigma}^{\bullet\bullet}(p_1,p_2)^2=
1\,,\\
\Big(\sigma^{\bullet\circ}(p_1^*,p_2^*)^2\Big)^*\sigma^{\bullet\circ}(p_1,p_2)^2=
\Big(\sigma^{\circ\bullet}(p_1^*,p_2^*)^2\Big)^*\sigma^{\circ\bullet}(p_1,p_2)^2=
1\,,\\
\Big(\sigma^{\circ\circ}(p_1^*,p_2^*)^2\Big)^*\sigma^{\circ\circ}(p_1,p_2)^2=
\Big(\widetilde{\sigma}^{\circ\circ}(p_1^*,p_2^*)^2\Big)^*\widetilde{\sigma}^{\circ\circ}(p_1,p_2)^2=
1\,.
\end{aligned}
\end{equation}
It is not completely clear, a priori, whether this strong condition should also hold for massless modes.
\subsubsection{Crossing symmetry}
The dressing factors must satisfy another highly nontrivial constraint --- the crossing equations, which were given originally in~\cite{Borsato:2014hja}, see in particular appendix~P there for their normalisation-independent form. The crossing transformation flips the sign of energy and momentum $(E,p)\to(-E, -p)$ and it can be realised by analytically continuing the S~matrix in an appropriate variable. As it turns out, the S~matrix is not a meromorphic function of the Zhukovsky variables (even before considering the dressing factors). Fortunately, there are other parametrisations that may be used to resolve this issue. For massive particles, in analogy with the case of $AdS_{5}\times S^5$, see \textit{e.g.}~\cite{Arutyunov:2009ga}, we may express the equations on the $z$-torus, where crossing amounts to a shift of the $z$-variable by~$\omega_2$, half of the imaginary period of the torus. We will return in the next section to a more detailed discussion of the $z$-torus parametrisation. Similarly, for massless particles, it is possible to introduce a rapidity that greatly simplifies the form of the S~matrix~\cite{Fontanella:2019baq}. However, as described in detail in~\cite{upcoming:massless}, the S~matrix splits in different blocks depending on the chirality of the massless particles. As also reviewed above, by using braiding unitarity and parity we just have to distinguish two cases (in the massless-massless kinematics): particles of the same chirality \textit{versus} particles of opposite chirality.
Postponing for a moment a detailed discussion of the various rapidity parametrisations and the explicit form of the crossing transformation, we introduce the crossed variables
\begin{equation}
\bar{x}^\pm\quad \text{(massive)}\,,\qquad
\bar{x}\quad \text{(massless)}\,.
\end{equation}
We have, for the dressing factors involving massive particles,
\begin{equation}
\begin{aligned}
\left(\sigma^{\bullet\bullet} (x_1^\pm,x_2^\pm)\right)^2\left(\tilde\sigma^{\bullet\bullet} (\bar{x}_1^\pm,x_2^\pm)\right)^2&=
\left(\frac{x_2^-}{x_2^+}\right)^2
\frac{(x_1^- - x_2^+)^2}{(x_1^- - x_2^-)(x_1^+ - x_2^+)}\frac{1-\frac{1}{x_1^-x_2^+}}{1-\frac{1}{x_1^+x_2^-}}\,,\\
\left(\sigma^{\bullet\bullet} (\bar{x}_1^\pm,x_2^\pm)\right)^2\left(\tilde\sigma^{\bullet\bullet} (x_1^\pm,x_2^\pm)\right)^2&=
\left(\frac{x_2^-}{x_2^+}\right)^2
\frac{\left(1-\frac{1}{x^+_1x^+_2}\right)\left(1-\frac{1}{x^-_1x^-_2}\right)}{\left(1-\frac{1}{x^+_1x^-_2}\right)^2}\frac{x_1^--x_2^+}{x_1^+-x_2^-}\,.
\end{aligned}
\end{equation}
For the dressing factors involving one massive and one massless particle we have%
\footnote{While writing this article, we noticed a misprint in the mixed-mass crossing equations of ref.~\cite{Borsato:2016xns}.}
\begin{equation}
\begin{aligned}
\left(\sigma^{\bullet\circ} (x_1^\pm,x_2)\right)^2\left(\sigma^{\bullet\circ} (\bar{x}_1^\pm,x_2)\right)^2&=
\frac{1}{(x_2)^4}\frac{f(x_1^+,x_2)}{f(x_1^-,x_2)}\,,\\
\left(\sigma^{\circ\bullet} (x_1,x_2^\pm)\right)^2\left(\sigma^{\circ\bullet} (\bar{x}_1,x_2^\pm)\right)^2&=\frac{f(x_1,x_2^+)}{f(x_1,x_2^-)}\,,
\end{aligned}
\end{equation}
where $\bar{x}$ denotes crossing in the massless variable (which we will discuss later), and
\begin{equation}
\label{eq:ffunction}
f(x,y)=i\,\frac{1-xy}{x-y}\,.
\end{equation}
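It is worth noting some simple properties of this function, which can be checked directly from its definition:
\begin{equation}
f(x,y)=-f(y,x)\,,\qquad
f\Big(\frac{1}{x},y\Big)=-\frac{1}{f(x,y)}\,,\qquad
f\Big(\frac{1}{x},\frac{1}{y}\Big)=f(x,y)\,.
\end{equation}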
Finally, for massless particles we have
\begin{equation}
\begin{aligned}
\big(\sigma^{\circ\circ} (x_1,x_2)\big)^2\big(\sigma^{\circ\circ} (\bar{x}_1,x_2)\big)^2&=-f(x_1,x_2)^2\,,\\
\big(\widetilde{\sigma}^{\circ\circ} (x_1,x_2)\big)^2\big(\widetilde{\sigma}^{\circ\circ} (\bar{x}_1,x_2)\big)^2&=-f(x_1,x_2)^2\,,
\end{aligned}
\end{equation}
for same-chirality scattering and opposite-chirality scattering, respectively.
\subsection{Stripping out a BES factor}
\label{sec:smatrix:BESnormalisation}
We now redefine the dressing factors by stripping away the Beisert-Eden-Staudacher (BES) factor~\cite{Beisert:2006ez}. This is quite natural at least in the massive sector, where the BES structure is crucial to reproduce the analytic features related to the scattering of bound states~\cite{Dorey:2007xn, Arutyunov:2009kf, OhlssonSax:2019nlj}. It is possible to extend the BES factor to the case where one or both of the particles are in the massless kinematics, and to strip out a BES-like factor from the corresponding dressing factors. This is done simply by analytically continuing the massive BES phase to the massless kinematics.
Postponing a more complete discussion of the BES dressing factor and of its generalisations to section~\ref{sec:buildingblocks:BES} and appendix~\ref{app:BES}, for now we note its crossing equations. For massive particles we have the celebrated crossing equation by Janik~\cite{Janik:2006dc,Beisert:2006ez, Arutyunov:2009kf}
\begin{equation}
\sigma_{\text{\tiny BES}} (x_1^\pm,x_2^\pm)\sigma_{\text{\tiny BES}} (\bar{x}_1^\pm,x_2^\pm)
=\frac{x_2^-}{x_2^+}g(x_1^\pm,x_2^\pm)\,,
\end{equation}
where
\begin{equation}
\label{eq:gfunction}
g(x_1^\pm,x_2^\pm)
=\frac{x_1^--x_2^+}{x_1^--x_2^-}\frac{1-\frac{1}{x_1^+x_2^+}}{1-\frac{1}{x_1^+x_2^-}}=\frac{x_1^--x_2^+}{x_1^+-x_2^+}\frac{1-\frac{1}{x_1^-x_2^-}}{1-\frac{1}{x_1^+x_2^-}}\,.
\end{equation}
Denoting the massless variable by $x$ rather than $x^\pm$ we can also write
\begin{equation}
\sigma_{\text{\tiny BES}}(\bar{x}_1,x_2^\pm)\,\sigma_{\text{\tiny BES}}(x_1,x_2^\pm)
=1\,,\qquad
\sigma_{\text{\tiny BES}}(\bar{x}_1^\pm,x_2)\,\sigma_{\text{\tiny BES}}(x_1^\pm,x_2)
=\frac{1}{x_2^2} \frac{f(x_1^+,x_2)}{f(x_1^-,x_2)}\,,
\end{equation}
where $f(x,y)$ is given by eq.~\eqref{eq:ffunction}, and
\begin{equation}
\sigma_{\text{\tiny BES}}(\bar{x}_1,x_2)\,\sigma_{\text{\tiny BES}}(x_1,x_2)=1\,.
\end{equation}
\subsubsection{Massive-massive kinematics}
We want to define new factors by ``stripping out'' the BES factor. Hence we set
\begin{equation}
\begin{aligned}
\varsigma^{\bullet\bullet}(x_1^\pm,x_2^\pm) = \frac{\sigma^{\bullet\bullet}(x_1^\pm,x_2^\pm)}{\sigma_{\text{\tiny BES}}(x_1^\pm,x_2^\pm)}\,,&&\qquad
\tilde{\varsigma}^{\bullet\bullet}(x_1^\pm,x_2^\pm) = \frac{\tilde{\sigma}^{\bullet\bullet}(x_1^\pm,x_2^\pm)}{\sigma_{\text{\tiny BES}}(x_1^\pm,x_2^\pm)}\,,\\
\varsigma^{\bullet\circ}(x_1^\pm,x_2) = \frac{\sigma^{\bullet\circ}(x_1^\pm,x_2)}{\sigma_{\text{\tiny BES}}(x_1^\pm,x_2)}\,,&&\qquad
\varsigma^{\circ\bullet}(x_1,x_2^\pm) = \frac{\sigma^{\circ\bullet}(x_1,x_2^\pm)}{\sigma_{\text{\tiny BES}}(x_1,x_2^\pm)}\,,\\
\varsigma^{\circ\circ}(x_1,x_2) = \frac{\sigma^{\circ\circ}(x_1,x_2)}{\sigma_{\text{\tiny BES}}(x_1,x_2)}\,,&&\qquad
\tilde{\varsigma}^{\circ\circ}(x_1,x_2) = \frac{\widetilde{\sigma}^{\circ\circ}(x_1,x_2)}{\sigma_{\text{\tiny BES}}(x_1,x_2)}\,.
\end{aligned}
\end{equation}
Note that the function $\sigma_{\text{\tiny BES}}(x_1^\pm,x_2^\pm)$ indicates the BES factor in the massive-massive kinematics, whereas \textit{e.g.}\ $\sigma_{\text{\tiny BES}}(x_1^\pm,x_2)$ indicates the BES factor in the massive-massless kinematics, see appendix~\ref{app:BES} for details. For the massive-massive kinematics, it is also convenient to define
\begin{equation}
\varsigma^{\boldsymbol{+}}(x_1^\pm,x_2^\pm) = \varsigma^{\bullet\bullet}(x_1^\pm,x_2^\pm)\tilde{\varsigma}^{\bullet\bullet}(x_1^\pm,x_2^\pm)\,,\qquad
\varsigma^{\boldsymbol{-}}(x_1^\pm,x_2^\pm) = \frac{\varsigma^{\bullet\bullet}(x_1^\pm,x_2^\pm)}{\tilde{\varsigma}^{\bullet\bullet}(x_1^\pm,x_2^\pm)}\,.
\end{equation}
After this redefinition, the massive-massive factors satisfy
\begin{equation}
\begin{aligned}
\Big(\varsigma^{\bullet\bullet}(x_1^\pm,x_2^\pm)\Big)^{-2}\Big(\tilde{\varsigma}^{\bullet\bullet}(\bar{x}_1^\pm,x_2^\pm)\Big)^{-2}&=\frac{1-x_1^+x_2^+}{1-x_1^-x_2^+}\frac{1-x_1^-x_2^-}{1-x_1^+x_2^-}\,,\\
\Big(\varsigma^{\bullet\bullet}(\bar{x}_1^\pm,x_2^\pm)\Big)^{-2}\Big(\tilde{\varsigma}^{\bullet\bullet}(x_1^\pm,x_2^\pm)\Big)^{-2}&=\frac{x_1^+ - x_2^-}{x_1^+ - x_2^+}\frac{x_1^- - x_2^+}{x_1^- - x_2^-}\,,
\end{aligned}
\end{equation}
which we may also rewrite as one crossing equation for the product of the dressing factors
\begin{equation}
\Big(\varsigma^{\boldsymbol{+}}(\bar{x}_1^\pm,x_2^\pm)\Big)^{-2}
\Big(\varsigma^{\boldsymbol{+}}(x_1^\pm,x_2^\pm)\Big)^{-2}
=
\frac{f(x_1^+,x_2^+)\,f(x_1^-,x_2^-)}{f(x_1^+,x_2^-)\,f(x_1^-,x_2^+)}\,,
\end{equation}
and a monodromy equation for the ratio of the factors,
\begin{equation}
\frac{\Big(\varsigma^{\boldsymbol{-}}(\bar{x}_1^\pm,x_2^\pm)\Big)^{-2}}{\Big(\varsigma^{\boldsymbol{-}}(x_1^\pm,x_2^\pm)\Big)^{-2}}
= \frac{(u_1-u_2+\frac{i}{h})(u_1-u_2-\frac{i}{h})}{(u_1-u_2)^2}\,,
\end{equation}
which we have rewritten in terms of the rapidity $u$ which obeys
\begin{equation}
u = x^++\frac{1}{x^+}-\frac{i}{h} = x^-+\frac{1}{x^-}+\frac{i}{h}\,.
\end{equation}
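Note that the two expressions for $u$ agree precisely by virtue of the shortening condition for fundamental particles with $|M|=1$,
\begin{equation}
x^++\frac{1}{x^+}-x^--\frac{1}{x^-}=\frac{2i}{h}\,,
\end{equation}
which follows directly from the definition~\eqref{eq:Zhukovsky} of the Zhukovsky variables.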
\subsubsection{Mixed-mass kinematics}
The crossing equations in the mixed-mass kinematics are
\begin{equation}
\begin{aligned}
\Big(\varsigma^{\bullet\circ}(\bar{x}_1^\pm,x_2)\Big)^{-2} \Big(\varsigma^{\bullet\circ}(x_1^\pm,x_2)\Big)^{-2}&=
\frac{f(x_1^+,x_2)}{f(x_1^-,x_2)}\,,
\\
\Big(\varsigma^{\circ\bullet}(\bar{x}_1,x_2^\pm)\Big)^{-2} \Big(\varsigma^{\circ\bullet}(x_1,x_2^\pm)\Big)^{-2}&=
\frac{f(x_1,x_2^-)}{f(x_1,x_2^+)}\,.
\end{aligned}
\end{equation}
As a consequence of braiding unitarity we have
\begin{equation}
\begin{aligned}
\Big(\varsigma^{\circ\bullet}(x_1,\bar{x}_2^\pm)\Big)^{-2} \Big(\varsigma^{\circ\bullet}(x_1,x_2^\pm)\Big)^{-2}&=
\frac{f(x_1,x_2^-)}{f(x_1,x_2^+)},\\
\Big(\varsigma^{\bullet\circ}(x_1^\pm,\bar{x}_2)\Big)^{-2} \Big(\varsigma^{\bullet\circ}(x_1^\pm,x_2)\Big)^{-2}&=
\frac{f(x_1^+,x_2)}{f(x_1^-,x_2)}\,,
\end{aligned}
\end{equation}
where we recall that $\bar{x}^\pm(z)=x^\pm(z+\omega_2)$. This is unlike how the crossing equation is normally written in the second variable: one would need to shift $z\to z-\omega_2$ to recast the last two equations in a more familiar form.
\subsubsection{Massless-massless kinematics}
For the massless-massless dressing factor we have
\begin{equation}
\begin{aligned}
\Big(\varsigma^{\circ\circ}(\bar{x}_1,x_2)\Big)^{-2} \Big(\varsigma^{\circ\circ}(x_1,x_2)\Big)^{-2}
&=
-\frac{1}{f(x_1,x_2)^2}\,,\\
\Big(\tilde{\varsigma}^{\circ\circ}(\bar{x}_1,x_2)\Big)^{-2} \Big(\tilde{\varsigma}^{\circ\circ}(x_1,x_2)\Big)^{-2}
&=
-\frac{1}{f(x_1,x_2)^2}\,.
\end{equation}
\section{Rapidity variables}
\label{sec:rapidities}
There are a number of useful ways to parametrise the S~matrix. It is particularly convenient to introduce a set of rapidity variables following Fontanella and Torrielli~\cite{Fontanella:2019baq} (see also Beisert, Hernandez and Lopez~\cite{Beisert:2006ib}). It will turn out that the equations for the ``stripped out'' factors admit a simple difference-form solution when expressed in terms of these variables.
\subsection{Massive variables}
\label{sec:rapidities:massive}
First of all, let us recall that it is possible to parametrise $x^\pm$ in terms of Jacobi elliptic functions of~$z$ in such a way that the matrix part of the S~matrix is meromorphic as a function of $(z_1,z_2)$.
Let us briefly review the construction~\cite{Arutyunov:2007tc}, introducing
\begin{equation}
p=2\,\text{am}(z,k)\,,\qquad E=|M|\,\text{dn}(z,k)\,,\qquad
\sin\frac{p}{2}=\text{sn}(z,k)\,,
\end{equation}
so that
\begin{equation}
x^\pm =\frac{|M|}{2h}\left(\frac{\text{cn}(z,k)}{\text{sn}(z,k)}\pm i\right)\big(1+\text{dn}(z,k)\big)\,,
\end{equation}
where the elliptic modulus~$k$ and the periods of the torus are given by
\begin{equation}
k=-\frac{4h^2}{M^2}\,,\qquad
\omega_1=2K(k)\,,\qquad \omega_2=2iK(1-k)-2K(k)\,,
\end{equation}
in terms of the complete elliptic integral of the first kind~$K(k)$. Clearly these formulae make sense for $|M|>0$, though we will be mostly interested in the case $|M|=1$. For that case, we can introduce a new set of variables
\begin{equation}
\label{eq:gammapm}
x^+= \frac{i - e^{\gamma^+}}{i + e^{\gamma^+}}\,,\qquad x^-= \frac{i + e^{\gamma^-}}{i - e^{\gamma^-}}\,,
\end{equation}
which can be thought of as functions of~$z$.
The inverse of this parametrisation requires choosing a branch,
\begin{equation}
\gamma^+ = \log \left(-i \frac{x^+ - 1}{x^++1}\right)\quad \text{mod}\ 2\pi i\,,
\qquad
\gamma^- = \log\left(i\frac{ x^- - 1}{ x^-+1}\right)\quad\text{mod}\ 2\pi i\,,
\end{equation}
and we will discuss below how this should be done in various kinematics regimes.
\subsubsection{Physical region and crossing}
In the string region, physical values of $x^+$ satisfy $|x^+|>1$, which is the case if
\begin{equation}
\label{eq:physicalgammap}
-\pi+2k\pi<\mathfrak{I}[\gamma^+]<2k\pi\,,\qquad
k\in\mathbb{Z}\,.
\end{equation}
In what follows, we will choose as physical region the one given by $k=0$. Similarly, for $x^-$ the physical region is $|x^-|>1$, which gives
\begin{equation}
\label{eq:physicalgammam}
2k\pi<\mathfrak{I}[\gamma^-]<\pi+2k\pi\,,\qquad k\in\mathbb{Z}\,.
\end{equation}
Again, we pick the region with $k=0$, see figure~\ref{fig:stringregion}. Then we have%
\footnote{We choose the branches of the logarithms to avoid any cuts in the path that we will use to define the crossing transformation.}
\begin{equation}
\label{eq:stringgammaofx}
\gamma^+ = \log \left(-i \frac{x^+ - 1}{x^++1}\right),
\quad
\gamma^- = \log\left(i\frac{ x^- - 1}{ x^-+1}\right),\qquad\text{(string region)}.
\end{equation}
\begin{figure}
\centering
\begin{tikzpicture}
\fill[pattern=horizontal lines, pattern color=red] (-2cm,-2cm) rectangle (2cm,2cm);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (1.6cm,1.6cm) {$x^+$};
\draw[thick, fill=white] (0,0) circle (1cm);
\draw[->, thick] (-2cm,0)--(2cm,0);
\draw[->, thick] (0,-2cm)--(0,2cm);
\end{tikzpicture}
\begin{tikzpicture}
\fill[pattern=vertical lines, pattern color=blue] (-2cm,-2cm) rectangle (2cm,2cm);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (1.6cm,1.6cm) {$x^-$};
\draw[thick, fill=white] (0,0) circle (1cm);
\draw[->, thick] (-2cm,0)--(2cm,0);
\draw[->, thick] (0,-2cm)--(0,2cm);
\end{tikzpicture}\ \
\begin{tikzpicture}
\fill[pattern=vertical lines, pattern color=blue] (-3cm,0cm) rectangle (3cm,1cm);
\fill[pattern=horizontal lines, pattern color=red] (-3cm,-1cm) rectangle (3cm,0cm);
\draw[->, thick] (-3cm,0)--(3cm,0);
\draw[->, thick] (0,-2cm)--(0,2cm);
\draw[densely dashed,draw=gray] (-3cm,1cm)--(3cm,1cm);
\draw[densely dashed,draw=gray] (-3cm,-1cm)--(3cm,-1cm);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (2.6cm,1.6cm) {$\gamma^\pm$};
\node[anchor=south west,rounded corners, fill opacity=0.7, text opacity=1] at (0,1cm) {$\scriptsize{\color{gray}+i\pi}$};
\node[anchor=north west,rounded corners, fill opacity=0.7, text opacity=1] at (0,-1cm) {$\scriptsize{\color{gray}-i\pi}$};
\end{tikzpicture}
\caption{The physical string region. In the leftmost panel, the physical string region for $x^+$ is $|x^+|>1$; in the middle panel, for $x^-$ it is $|x^-|>1$; in the rightmost panel, we draw the corresponding regions in the $\gamma^+$ and $\gamma^-$ plane, where they take the form of the strips between $-i\pi$ and $0$, and between $0$ and $+i\pi$, respectively.}
\label{fig:stringregion}
\end{figure}
In the string region, the energy is
\begin{equation}
\label{eq:gammaenergy}
E= \frac{ih}{2}\left(x^- - \frac{1}{x^-}-x^++\frac{1}{x^+}\right)=h\left(\frac{1}{\cosh\gamma^+}+\frac{1}{\cosh\gamma^-}\right)\,,
\end{equation}
the momentum is given by
\begin{equation}
e^{i p }=\frac{i - e^{\gamma^+}}{i + e^{\gamma^+}}\frac{i - e^{\gamma^-}}{i + e^{\gamma^-}}\,,
\end{equation}
while the shortening condition reads
\begin{equation}
\tanh\gamma^- - \tanh\gamma^+ = \frac{i}{h}\,.
\end{equation}
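The expressions for the energy and for the shortening condition follow from the identities
\begin{equation}
x^\pm+\frac{1}{x^\pm}=-2\tanh\gamma^\pm\,,\qquad
x^+-\frac{1}{x^+}=\frac{2i}{\cosh\gamma^+}\,,\qquad
x^--\frac{1}{x^-}=-\frac{2i}{\cosh\gamma^-}\,,
\end{equation}
which can be verified directly from the parametrisation~\eqref{eq:gammapm}.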
It is worth emphasising that the energy is real and positive only when
\begin{equation}
\mathfrak{I}[x^+]>0,\qquad |x^+|>1\,,\qquad
x^- = (x^+)^*\,,
\end{equation}
which corresponds to
\begin{equation}
-\frac{1}{2}\pi< \mathfrak{I}[\gamma^+]<0\,,\qquad
\gamma^- = (\gamma^+)^*\,,
\end{equation}
which is the region of \textit{real-momentum} physical particles. More generally, for complex values of the momentum (or of the torus rapidity~$z$) we have
\begin{equation}
\big(x^+(z)\big)^* = x^-(z^*)\,,\qquad
\big(\gamma^+(z)\big)^* = \gamma^-(z^*)\,.
\end{equation}
Finally, we note that in the small-momentum limit
\begin{equation}
\label{eq:zeromomentummassive}
\lim_{p\to0}\gamma^{\pm}(p)=\mp \frac{i\pi}{2}\,.
\end{equation}
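This follows from the fact that $x^\pm\to\infty$ as $p\to0$, so that, with the branch choice of eq.~\eqref{eq:stringgammaofx},
\begin{equation}
\gamma^+\to\log(-i)=-\frac{i\pi}{2}\,,\qquad
\gamma^-\to\log(+i)=+\frac{i\pi}{2}\,.
\end{equation}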
\subsubsection{Mirror kinematics}
The mirror theory is defined through a double Wick rotation, as discussed in detail in~\cite{Arutyunov:2007tc}. Starting from a particle of the string-worldsheet model with real momentum $p$ and positive energy $E\geq0$, we get mirror energies and momentum (denoted by tildes)
\begin{equation}
\tilde{E}=-i\,p\,,\qquad \tilde{p}=-i\,E\,.
\end{equation}
Now for a real mirror particle $\tilde{p}$ is real and $\tilde{E}\geq0$. Indeed, by this definition we have the mirror dispersion relation
\begin{equation}
\label{eq:mirrordispersion}
\tilde{E}(\tilde{p}) = 2\text{arcsinh}\left(\frac{\sqrt{M^2+\tilde{p}^2}}{2h}\right)\,.
\end{equation}
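Indeed, substituting $E=i\tilde{p}$ and $p=i\tilde{E}$ in the string dispersion relation gives
\begin{equation}
-\tilde{p}^2=M^2-4h^2\sinh^2\Big(\frac{\tilde{E}}{2}\Big)\,,
\end{equation}
whose positive-energy solution is precisely eq.~\eqref{eq:mirrordispersion}.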
For massive particles it is understood~\cite{Arutyunov:2007tc} that the mirror region may be reached from the string region by analytic continuation on the rapidity torus.
It is well-known that we may obtain the mirror dispersion by shifting $z$ by half the imaginary period of the torus~$\omega_2$~\cite{Arutyunov:2007tc,Arutyunov:2009ga}, setting
\begin{equation}
z= \tilde{z}+\frac{\omega_2}{2}\,.
\end{equation}
Using explicit formulae for the energy and momentum in terms of Jacobi elliptic functions it is easy to verify that indeed
\begin{equation}
\tilde{E}(\tilde{z})=-i\,p(\tilde{z}+\tfrac{1}{2}\omega_2)\,,\qquad
\tilde{p}(\tilde{z})=-i\,E(\tilde{z}+\tfrac{1}{2}\omega_2)\,,
\end{equation}
are positive-semidefinite and real, respectively, for real rapidity~$\tilde{z}$ satisfying the condition $-\omega_1/2<\tilde z <\omega_1/2$. Moreover, we can explicitly evaluate
\begin{equation}
\tilde{p}(\tilde{z})=\sqrt{M^2+4h^2}\,\frac{\text{sn}(\tilde{z},k)}{\text{cn}(\tilde{z},k)}\,,
\end{equation}
as well as%
\footnote{%
It is worth noting that $\tilde{x}^\pm$ does not have a simple form in terms of $x^\pm$. In particular, it is not true that $\tilde{x}^+=1/x^+$ and $\tilde{x}^-=x^-$. This would be the case under the transformation $z\to-z+\tfrac{\omega_2}{2}+\tfrac{\omega_1}{2}$~\cite{Arutyunov:2009ga}.}
\begin{equation}
\tilde{x}^\pm(\tilde{z})=x^\pm(\tilde{z}+\tfrac{\omega_2}{2})=
-i\frac{\sqrt{1-k}\mp \text{dn}(\tilde{z},k)}{\sqrt{-k}\text{dn}(\tilde{z},k)}\left(
1+i\sqrt{1-k}\frac{\text{sn}(\tilde{z},k)}{\text{cn}(\tilde{z},k)}
\right)\,.
\end{equation}
From this we can find the relation valid in the mirror region
\begin{equation}
\tilde{x}^\pm(\tilde{p}) ={1\over 2h} \left(\sqrt{1+\frac{4h^2}{M^2+\tilde{p}^2}}\mp 1\right)\big(\tilde{p}-i |M|\big)\,.
\end{equation}
It is interesting to note that our expression for $\tilde{x}^\pm(\tilde{p})$ happens to be regular as $M\to0$.
From this formula we can find how mirror particles behave under complex conjugation. In fact, even if our discussion was mostly focused around real particles (with real $\tilde{p}$) we can analytically continue the above expression to complex mirror momentum to find that
\begin{equation}
\big(\tilde{x}^\pm(\tilde{p})\big)^* = \frac{1}{\tilde{x}^{\mp}(\tilde{p}^*)}\,,
\end{equation}
that is a different reality condition from the string model.
For massive particles it is necessary to consider such complex momenta. Indeed, there exist mirror bound-states, whose complex momenta live in a ``strip'' of sorts in the $z$-torus~\cite{Arutyunov:2007tc}.
More precisely, the mirror physical region is defined by having $\mathfrak{I}[x^\pm]<0$. This region therefore partially overlaps with the string physical region $|x^\pm|>1$.
\begin{figure}
\centering
\begin{tikzpicture}
\fill[pattern=horizontal lines, pattern color=red] (-2cm,-2cm) rectangle (2cm,0);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (1.6cm,1.6cm) {$\tilde{x}^+$};
\draw[thick] (0,0) circle (1cm);
\draw[->, thick] (-2cm,0)--(2cm,0);
\draw[->, thick] (0,-2cm)--(0,2cm);
\end{tikzpicture}
\begin{tikzpicture}
\fill[pattern=vertical lines, pattern color=blue] (-2cm,-2cm) rectangle (2cm,0);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (1.6cm,1.6cm) {$\tilde{x}^-$};
\draw[thick] (0,0) circle (1cm);
\draw[->, thick] (-2cm,0)--(2cm,0);
\draw[->, thick] (0,-2cm)--(0,2cm);
\end{tikzpicture}\ \
\begin{tikzpicture}
\fill[pattern=vertical lines, pattern color=blue] (-3cm,-0.5cm) rectangle (3cm,0.5cm);
\fill[pattern=horizontal lines, pattern color=red] (-3cm,-1.5cm) rectangle (3cm,-0.5cm);
\draw[->, thick] (-3cm,0)--(3cm,0);
\draw[->, thick] (0,-2cm)--(0,2cm);
\draw[densely dashed,draw=gray] (-3cm,1cm)--(3cm,1cm);
\draw[densely dashed,draw=gray] (-3cm,-1cm)--(3cm,-1cm);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (2.6cm,1.6cm) {$\tilde{\gamma}^\pm$};
\node[anchor=south west,rounded corners, fill opacity=0.7, text opacity=1] at (0,1cm) {$\scriptsize{\color{gray}+i\pi}$};
\node[anchor=north west,rounded corners, fill=white, fill opacity=0.6, text opacity=1] at (0,-1cm) {$\scriptsize{\color{gray}-i\pi}$};
\end{tikzpicture}
\caption{The mirror region. In the leftmost panel, the mirror region for $x^+$ is $\mathfrak{I}[x^+]<0$; in the middle panel, for $x^-$ it is $\mathfrak{I}[x^-]<0$; in the rightmost panel, we draw it in the $\gamma^+$ and $\gamma^-$ planes, where it takes the form of the strips between $-\tfrac{3}{2}i\pi$ and $-\tfrac{1}{2}i\pi$, and between $-\tfrac{1}{2}i\pi$ and $+\tfrac{1}{2}i\pi$, respectively.}
\label{fig:mirrorregion}
\end{figure}
We now want to see how to parametrise the mirror kinematics in terms of the $\gamma^\pm$ variables, or of suitable mirror counterparts~$\tilde{\gamma}^\pm$. We begin by observing that $\mathfrak{I}[x^\pm]<0$ occurs when the rapidities~$\gamma^\pm$ live in the strips
\begin{equation}
-\frac{3}{2}\pi+2k\pi<\mathfrak{I}[\gamma^+]<-\frac{1}{2}\pi+2k\pi\,,\qquad
-\frac{1}{2}\pi+2k\pi<\mathfrak{I}[\gamma^-]<\frac{1}{2}\pi+2k\pi\,,\qquad
k\in\mathbb{Z}\,.
\end{equation}
Once again, we choose the regions given by $k=0$, see figure~\ref{fig:mirrorregion}. We see that the mirror regions are shifted by $-\tfrac{i}{2}\pi$ with respect to the string physical regions.
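This can be verified directly. Writing the map between $x^\pm$ and $\gamma^\pm$ as $\gamma^\pm=\log\frac{x^\pm-1}{x^\pm+1}\mp\frac{i\pi}{2}$ (which follows from~\eqref{eq:gammas} below upon using $u+\tfrac{i}{h}=x^++\tfrac{1}{x^+}$ and $u-\tfrac{i}{h}=x^-+\tfrac{1}{x^-}$), the lower-half $x$-plane is mapped precisely onto the $k=0$ strips above. A small numerical sketch of this check (plain Python; the sampling is arbitrary):
\begin{verbatim}
import cmath, random

def gamma_pm(x, sign):
    # gamma^{pm} = log[(x-1)/(x+1)] -/+ i*pi/2; sign = +1 for "+", -1 for "-"
    return cmath.log((x - 1)/(x + 1)) - sign*1j*cmath.pi/2

random.seed(0)
for _ in range(1000):
    # random point with Im[x] < 0 (mirror region)
    x = complex(random.uniform(-5, 5), random.uniform(-5, -0.01))
    assert -1.5*cmath.pi < gamma_pm(x, +1).imag < -0.5*cmath.pi
    assert -0.5*cmath.pi < gamma_pm(x, -1).imag <  0.5*cmath.pi
print("k = 0 strips confirmed")
\end{verbatim}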
To find $\tilde{\gamma}^\pm$ in the mirror theory let us once again start from real-momentum particles. We take $\gamma^\pm(z)$ defined by~\eqref{eq:stringgammaofx}, with the torus rapidity $z$ on the real-string line, and use it to define $\tilde{\gamma}^\pm(\tilde{z})=\gamma^\pm(\tilde z+\tfrac{1}{2}\omega_2)$. It should be noted that the effect of the $\tfrac{1}{2}\omega_2$-shift on the $\gamma^\pm$ variables is not particularly simple (just like it was not for~$x^\pm$). More specifically, it results in a $z$-dependent imaginary shift of $\gamma^\pm$. We find that
\begin{equation}
\begin{aligned}
&\tilde{\gamma}^+(\tilde{z})=\gamma^+(\tilde z+\tfrac{1}{2}\omega_2)=
\log \left(i \frac{\tilde{x}^+ - 1}{\tilde{x}^++1}\right)-i \pi,\\
&\tilde{\gamma}^-(\tilde{z})=
\gamma^-(\tilde{z}+\tfrac{1}{2}\omega_2) = \log\left(i\frac{ \tilde{x}^- - 1}{ \tilde{x}^-+1}\right),
\end{aligned}
\end{equation}
where $\tilde{x}^\pm=x^\pm(\tilde{z}+\tfrac{1}{2}\omega_2)$.
Using this definition it is also possible to verify that, under complex conjugation
\begin{equation}
\label{eq:massivemirrorconj}
\left(\tilde{\gamma}^\pm(z)\right)^*
=\tilde{\gamma}^\mp(z^*)+i\pi\,.
\end{equation}
While below we shall use this definition of the mirror rapidities~$\tilde{\gamma}^\pm$, in the mirror region it can be convenient to redefine $\widetilde{\gamma}^\pm= \tilde{\gamma}^\pm(\tilde{z})+ i\pi /2$, in such a way that~$(\widetilde{\gamma}^\pm)^* = \widetilde{\gamma}^\mp$. This makes the discussion of the mirror dressing factors quite a bit simpler, and in fact we will employ this redefinition when discussing the mirror thermodynamic Bethe ansatz~\cite{upcoming:mirror}.
\subsubsection{Crossing transformation}
In a similar spirit, we can describe the crossing transformation which takes us to the ``anti-string'' region and flips the sign of energy and momentum. We denote this transformation with a bar:
\begin{equation}
\bar{E} = -E\,,\qquad
\bar{p}=-p\,.
\end{equation}
As remarked, it is not sufficient to describe crossing in terms of $x^\pm$, as the S~matrix has cuts in the $x^\pm$ plane.
Fortunately, the above transformations may be easily realised on the $z$-torus by shifting $z$ by $\omega_2$ --- consistently with the relativistic intuition that the mirror transformation is ``half-crossing''. Hence we have%
\footnote{%
In analogy with what we have done for the mirror model, we may introduce the variable~$\bar{z}$; we avoid doing so because, unlike the mirror theory, the crossed theory is essentially identical to the original theory, \textit{i.e.}\ it has the same kinematics. Moreover, strictly speaking we may have wanted to define $\bar{E}(z)=-E(z+\omega_2)$ so that it is positive; we prefer to make the energy of the anti-string region negative to emphasise that it is not a physical region.}
\begin{equation}
\bar{E}(z)=E(z+\omega_2)=-E\,,\qquad
\bar{p}(z)=p(z+\omega_2)=-p\,,
\end{equation}
as well as
\begin{equation}
\bar{x}^\pm(z)=x^\pm(z+\omega_2)=\frac{1}{x^\pm(z)}\,.
\end{equation}
The same prescription, in terms of the $\gamma^\pm$ variables, gives
\begin{equation}
\bar{\gamma}^\pm(z)=\gamma^\pm(z+\omega_2)=\gamma^\pm(z)-i\pi\,.
\end{equation}
Interestingly, while the mirror transformation did not have a simple action on the $\gamma^\pm$, the crossing transformation simply amounts to a shift of $\gamma^\pm$ by $-i\pi$. This fact will play a crucial role in what follows.
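In fact, this can be seen in one line: under crossing $\bar{x}^\pm=1/x^\pm$, and
\begin{equation}
\frac{\frac{1}{x^\pm}-1}{\frac{1}{x^\pm}+1}=-\frac{x^\pm-1}{x^\pm+1}\,,
\end{equation}
so the argument of the logarithm in $\gamma^\pm=\log\frac{x^\pm-1}{x^\pm+1}\mp\frac{i\pi}{2}$ (equivalently, in~\eqref{eq:gammas}) only picks up a factor $-1=e^{-i\pi}$, which on the branch connecting the string and anti-string regions is precisely the shift of $\gamma^\pm$ by $-i\pi$.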
We conclude this section by noting the zeros of some useful expressions involving two sets of Zhukovsky variables, written in terms of the $\gamma^\pm$ rapidities. They are reported in table~\ref{table:zeros}.
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|l|lr|}
\hline
$x_1^\pm = x_2^\pm\quad$ & $\gamma_1^\pm=\gamma_2^\pm\quad$ &$\text{mod}\ 2\pi i$ \\
$x_1^\pm = \displaystyle\frac{1}{x_2^\pm}$ & $\gamma_1^\pm=\gamma_2^\pm+i\pi\quad$& $\text{mod}\ 2\pi i$ \\
$x_1^+ = x_2^-$ & $\gamma_1^+=\gamma_2^-+i\pi\quad$& $\text{mod}\ 2\pi i$ \\
$x_1^+ = \displaystyle\frac{1}{x_2^-}$ & $\gamma_1^+=\gamma_2^-$& $\text{mod}\ 2\pi i$ \\[0.3cm]
\hline
\end{tabular}
\caption{Some notable zeros of expressions involving $x^\pm_1$ and $x^\pm_2$, and their equivalent expressions in terms of the $\gamma^\pm_1$ and $\gamma^\pm_2$ rapidities. The physical interpretation of such zeros will depend on whether the correspondent rapidities fall in the physical region for string or mirror particles. We will return to this point in section~\ref{sec:proposal}.}
\label{table:zeros}
\end{table}
\subsection{Massless variables}
\label{sec:rapidities:massless}
We first observe that for massless particles the dispersion relation is not analytic,
\begin{equation}
E=2h\,\left|\sin\Big(\frac{p}{2}\Big)\right|\,,
\end{equation}
and similarly the relation between the massless Zhukovsky variable and the momentum is not analytic,
\begin{equation}
x_p =\text{sgn}\Big(\sin\tfrac{p}{2}\Big)\, e^{i p /2}\,,
\end{equation}
so that strictly speaking we need to treat separately the case of anti-chiral and chiral particles ($-\pi<p<0$ and $0<p<\pi$), see ref.~\cite{upcoming:massless} for a detailed discussion of this point. As we have summarised above, this means that the dressing factors which we will discuss correspond to a definite choice of chirality, and that the remaining ones arise by using parity and braiding unitarity.
Bearing all this in mind, in the massless case we set~\cite{Fontanella:2019baq}
\begin{equation}
\label{eq:gamma}
x=\frac{i-e^{\gamma}}{i+e^{\gamma}}\,.
\end{equation}
Notice that for real momenta $x$ is on the upper half of the unit circle, which corresponds to $\gamma$ being real.
The relation between $x_p$ and the energy is analytic, so that in terms of $\gamma$ we have simply
\begin{equation}
E(\gamma) = \frac{2h}{\cosh\gamma}\,,
\end{equation}
which is indeed positive semi-definite on the real line. Notice that the region where the momentum and the energy are small is mapped to (plus and minus) infinity in terms of $\gamma$. The map between the rapidity $\gamma$ and the momentum is not analytic (as expected), and it reads
\begin{equation}
\gamma =\log\left(-\cot\frac{p}{4}\right)\,,\quad
p=-2i\log\left(-\frac{i-e^{\gamma}}{i+e^{\gamma}}\right)\,,\qquad
\text{for}\quad\gamma\geq0\,,\quad -\pi\leq p \leq 0\,,
\end{equation}
for anti-chiral particles, whereas for chiral particles
\begin{equation}
\gamma =\log\left(\tan\frac{p}{4}\right)\,,\quad
p=-2i\log\left(+\frac{i-e^{\gamma}}{i+e^{\gamma}}\right)\,,\qquad
\text{for}\quad \gamma\leq0\,,\quad 0\leq p \leq \pi\,.
\end{equation}
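The two branches can be checked against the dispersion relation, and against the claim that $x$ lies on the upper half of the unit circle for real momenta. A minimal numerical sketch (plain Python; the value of $h$ and the sampling are arbitrary):
\begin{verbatim}
import cmath, math

h = 1.3
def x_of_gamma(g):
    # x = (i - e^gamma)/(i + e^gamma)
    return (1j - cmath.exp(g))/(1j + cmath.exp(g))

for p in [k*math.pi/40 for k in range(-39, 40) if k != 0]:
    g = math.log(math.tan(p/4)) if p > 0 else math.log(-1/math.tan(p/4))
    x = x_of_gamma(g)
    assert abs(abs(x) - 1) < 1e-12 and x.imag > 0   # upper unit circle
    assert abs(2*h/math.cosh(g) - 2*h*abs(math.sin(p/2))) < 1e-12
print("massless kinematics consistent on both branches")
\end{verbatim}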
\subsubsection{String region, mirror region, and crossing}
The discussion now is similar in spirit to what we have done for massive particles, with an important difference: massless particles cannot create bound states. As a consequence, it does not make sense to talk about a ``physical region'' for complex momenta. In fact, the physical region is just the line where momentum is real and the energy is positive. This occurs when $\gamma$ is on the real line of the string region. Despite this important difference, it still makes sense to require that we can analytically continue the rapidity to complex values and reach, for instance, the mirror or the anti-string (\textit{i.e.}, crossed) region. In fact, we have implicitly used the existence of such an analytic continuation when writing the crossing equation, while the possibility of analytically continuing the S~matrix to the mirror region is crucial to obtain the thermodynamic Bethe ansatz equations for excited states~\cite{Dorey:1996re}.
Let us now discuss the mirror ``region''. Once again, we expect this region to consist only of one line --- where the mirror energy~$\tilde{E}$ is positive semi-definite and the mirror momentum~$\tilde{p}$ is real.
In the massless sector we do not have access to the $z$-torus parametrisation. Still, since we can uniformise the dispersion in terms of a single real variable, we can use any such parametrisation (\textit{e.g.}, in terms of $x$ or in terms of~$\gamma$) to define the mirror energy. Working in terms of $\gamma$, we find
\begin{equation}
\tilde{E}(\tilde{\gamma}) = - i\,p(\tilde{\gamma}+\tfrac{i}{2}\pi)\,,\qquad
\tilde{p}(\tilde{\gamma}) = - i\,E(\tilde{\gamma}+\tfrac{i}{2}\pi)\,,
\end{equation}
so that, bearing in mind that the two branches of $p$ result in two branches of~$\tilde{E}$,
\begin{equation}
\tilde{E}(\tilde{\gamma}) = -2\log\Big|\tanh\frac{\tilde{\gamma}}{2} \Big|\,,\qquad \tilde{p}=-\frac{2h}{\sinh\tilde{\gamma}}\,.
\end{equation}
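Eliminating $\tilde{\gamma}$ between these two expressions, using $\big|\tanh\tfrac{\tilde{\gamma}}{2}\big|=\big(\sqrt{1+\sinh^2\tilde{\gamma}}-1\big)/|\sinh\tilde{\gamma}|$, one finds the familiar $M=0$ mirror dispersion relation (\textit{cf.}~\eqref{eq:mirrordispersion}),
\begin{equation}
\tilde{E}(\tilde{p})=2\,\text{arcsinh}\Big(\frac{|\tilde{p}|}{2h}\Big)\,.
\end{equation}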
It is also worth working out the velocity of a mirror particle, which is
\begin{equation}
\tilde{v}=\frac{\partial\tilde{E}}{\partial\tilde{p}} = \frac{\pm 2}{\sqrt{4h^2+\tilde{p}^2}}\,,
\end{equation}
where the sign is negative for anti-chiral particles, and positive for chiral ones. This is compatible with the $M=0$ expression in~\eqref{eq:mirrordispersion}.
\begin{figure}
\centering
\begin{tikzpicture}
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (1.6cm,1.6cm) {$x$};
\draw[->, thick] (-2.5cm,0)--(2.5cm,0);
\draw[->, thick] (0,-2cm)--(0,2cm);
\draw[ultra thick,OliveGreen,domain=0:180] plot ({1.4*cos(\x)}, {1.4*sin(\x)});
\draw[ultra thick, dashed,Plum,domain=-1:1] plot ({1.4*\x}, {0});
\draw[ultra thick,Orange,domain=180:360] plot ({1.4*cos(\x)}, {1.4*sin(\x)});
\end{tikzpicture}\hspace{1cm}
\begin{tikzpicture}
\draw[->, thick] (-3cm,0)--(3cm,0);
\draw[->, thick] (0,-2cm)--(0,2cm);
\draw[ultra thick,OliveGreen] (-2.8cm,0)--(2.8cm,0);
\draw[ultra thick,dashed,Plum] (-2.8cm,0.7cm)--(2.8cm,0.7cm);
\draw[ultra thick,Orange] (-2.8cm,1.4cm)--(2.8cm,1.4cm);
\draw[densely dashed,draw=gray] (-3cm,1.4cm)--(3cm,1.4cm);
\draw[densely dashed,draw=gray] (-3cm,-1.4cm)--(3cm,-1.4cm);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (2.6cm,1.7cm) {$\gamma$};
\node[anchor=south west,rounded corners, fill opacity=0.7, text opacity=1] at (0,1.4cm) {$\scriptsize{\color{gray}+i\pi}$};
\node[anchor=north west,rounded corners, fill=white, fill opacity=0.6, text opacity=1] at (0,-1.4cm) {$\scriptsize{\color{gray}-i\pi}$};
\end{tikzpicture}
\caption{The string, mirror and anti-string region in the massless kinematics. In all three cases, the ``region'' is actually a line, corresponding to real momentum particles. We denote the string region by a solid green line (upper-half-circle in the $x$-plane), the mirror region by a dashed purple line (real segment in the $x$-plane), and the anti-string region by a solid orange line (lower-half-circle in the $x$-plane).}
\label{fig:masslessgammax}
\end{figure}
Some remarks are in order. In the mirror region, the momentum is no longer periodic; rather, it takes any real value. This is also what happens for massive particles. We see that large (positive or negative) values of the mirror momentum are mapped to (negative or positive) values of~$\tilde{\gamma}$ in the vicinity of zero. Conversely, when the mirror rapidity is large in modulus, the momentum of the particles is close to zero; here, the mirror energy vanishes, while the velocity is in modulus as large as it can be, $\tilde{v}=\pm 1/h$. Finally, we remark once again that the transformation that takes us from the real to the mirror line should be defined separately for chiral and anti-chiral particles.
Let us now comment on the qualitative behaviour of the Zhukovsky map under crossing. We have
\begin{equation}
\tilde{x}(\tilde{\gamma})=x(\tilde{\gamma}+\tfrac{i}{2}\pi)=-\tanh{\frac{\tilde{\gamma}}{2}}\,.
\end{equation}
For real~$\tilde{\gamma}$, this takes values on the real line with~$-1<\tilde{x}<1$. In particular, the points $\tilde{x}=\pm1$ correspond to zero mirror momentum and mirror energy. In terms of the $x$ plane, we may think of taking the Zhukovsky variable from the upper-half circle straight down to the mirror line, through a suitable path inside the unit disk.
In a similar way to the mirror transformation, we may now define the crossing transformation to the anti-string region,
\begin{equation}
\bar{E}(\gamma) = E(\gamma+i\pi)= - E(\gamma)\,,\qquad
\bar{p}(\gamma) = p(\gamma+i\pi)=-p(\gamma)\,,
\end{equation}
under which we flip the sign of energy and momentum. In terms of the Zhukovsky variable $x$ this corresponds to taking
\begin{equation}
\bar{x} = \frac{1}{x}\,,
\end{equation}
where now real values of the momentum correspond to the lower-half-circle. The transformation that takes us from the string region to the anti-string (crossed) region therefore takes us inside the unit disk, through the real line between $(-1,1)$, and down to the lower-half-circle.%
\footnote{%
Note that it is different from the transformation used in ref.~\cite{Borsato:2016xns}, which went outside the unit disk. That choice was taken in analogy with the path of $x^+(z)$ in the massive case, but upon closer inspection we see that such a path does not go through the mirror region in the $M\to0$ limit.
}
We represent the $x$- and $\gamma$-plane in figure~\ref{fig:masslessgammax}.
\begin{figure}
\centering
\begin{tikzpicture}
\draw[->, thick] (-3cm,0)--(3cm,0);
\draw[->, thick] (0,-2cm)--(0,2cm);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (2.8cm,1.8cm) {$\gamma^\pm(u)$};
\node[anchor=south west,rounded corners, fill opacity=0.7, text opacity=1] at (0,1cm) {$\scriptsize{\color{gray}+\tfrac{i}{h}}$};
\node[anchor=north west,rounded corners, fill=white, fill opacity=0.6, text opacity=1] at (0,-1cm) {$\scriptsize{\color{gray}-\tfrac{i}{h}}$};
\draw[loosely dashed,draw=gray] (-3cm,1cm)--(3cm,1cm);
\draw[loosely dashed,draw=gray] (-3cm,-1cm)--(3cm,-1cm);
\draw[blue, snake, ultra thick] (-1.5cm,1cm) -- (1.5cm,1cm);
\draw[red, snake, ultra thick] (-1.5cm,-1cm) -- (1.5cm,-1cm);
\fill[fill=black] (-1.5cm,1cm) circle (0.05cm);
\fill[fill=black] (+1.5cm,1cm) circle (0.05cm);
\fill[fill=black] (-1.5cm,-1cm) circle (0.05cm);
\fill[fill=black] (+1.5cm,-1cm) circle (0.05cm);
\end{tikzpicture} \hspace{1.5cm}
\begin{tikzpicture}
\draw[->, thick] (-3cm,0)--(3cm,0);
\draw[->, thick] (0,-2cm)--(0,2cm);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (2.8cm,1.8cm) {$\tilde{\gamma}^\pm(u)$};
\node[anchor=south west,rounded corners, fill opacity=0.7, text opacity=1] at (0,1cm) {$\scriptsize{\color{gray}+\tfrac{i}{h}}$};
\node[anchor=north west,rounded corners, fill=white, fill opacity=0.6, text opacity=1] at (0,-1cm) {$\scriptsize{\color{gray}-\tfrac{i}{h}}$};
\draw[loosely dashed,draw=gray] (-3cm,1cm)--(3cm,1cm);
\draw[loosely dashed,draw=gray] (-3cm,-1cm)--(3cm,-1cm);
\draw[blue, snake, ultra thick] (-1.5cm,1cm) -- (-3cm,1cm);
\draw[red, snake, ultra thick] (-1.5cm,-1cm) -- (-3cm,-1cm);
\draw[blue, snake, ultra thick] (1.5cm,1cm) -- (3cm,1cm);
\draw[red, snake, ultra thick] (1.5cm,-1cm) -- (3cm,-1cm);
\fill[fill=black] (-1.5cm,1cm) circle (0.05cm);
\fill[fill=black] (+1.5cm,1cm) circle (0.05cm);
\fill[fill=black] (-1.5cm,-1cm) circle (0.05cm);
\fill[fill=black] (+1.5cm,-1cm) circle (0.05cm);
\end{tikzpicture}
\caption{The $u$ plane. In the left panel, the function $\gamma^+(u)$ has branch points at $u=\pm 2-\tfrac{i}{h}$ and the branch cut runs on the red wavy line; the function $\gamma^-(u)$ has branch points at $u=\pm 2+\tfrac{i}{h}$ and the branch cut runs on the blue wavy line. In the right panel, we make a similar drawing for $\tilde{\gamma}^\pm(u)$; this time the branch cuts are ``long'', \textit{i.e.}\ they go to infinity.}
\label{fig:uplane}
\end{figure}
\subsection{The $u$-plane}
\label{sec:rapidities:uplane}
It can be useful to introduce the rapidity $u$, which for massive particles takes the form
\begin{equation}
u= x^++\frac{1}{x^+}-\frac{i}{h}=x^-+\frac{1}{x^-}+\frac{i}{h}\,.
\end{equation}
We can therefore parametrise $\gamma^\pm(u)$ on the $u$-plane with cuts, for particles in the string or mirror theory. We get for the string $u$-plane variable
\begin{equation}
\label{eq:gammas}
\gamma^+(u)=\frac{1}{2} \log \left(\frac{u+\frac{i}{h}-2}{u+\frac{i}{h}+2}\right)- \frac{i \pi}{2} \,,\qquad \gamma^-(u)=\frac{1}{2} \log \left(\frac{u-\frac{i}{h}-2}{u-\frac{i}{h}+2}\right)+\frac{i \pi}{2}\,.
\end{equation}
For the mirror $u$-plane variable, we have instead
\begin{equation}
\label{eq:gammaum}
\tilde{\gamma}^+(u)=\frac{1}{2} \log \left(-\frac{u+\frac{i}{h}-2}{u+\frac{i}{h}+2}\right) -i\pi\,,\qquad \tilde{\gamma}^-(u)=\frac{1}{2} \log \left(-\frac{u-\frac{i}{h}-2}{u-\frac{i}{h}+2}\right)\,.
\end{equation}
As a consequence, the cuts in the string kinematics are \textit{short}, while in the mirror kinematics they are \textit{long} (\textit{i.e.}, they go through infinity, see figure~\ref{fig:uplane}). This is very reminiscent of what happens in $AdS_5\times S^5$. Here the branch points are logarithmic: analytically continuing $\gamma^\pm(u)$ through a cut from below, back to the same point $u$ on the next sheet, decreases $\gamma^\pm(u)$ by $i\pi$.
We see that in the string region physical particles have real rapidity $u$. The same is true for mirror particles. In terms of $u$ the complex-conjugation rule reads
\begin{equation}
(\gamma^+(u))^*= \gamma^-(u^*)\,,\qquad
(\tilde{\gamma}^+(u))^*= \tilde{\gamma}^-(u^*)+i\pi\,.
\end{equation}
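Before moving on, let us record a quick numerical cross-check of~\eqref{eq:gammas} against the Zhukovsky parametrisation: for a point in the string region, $\gamma^+(u)$ with $u=x^++\tfrac{1}{x^+}-\tfrac{i}{h}$ reduces to $\log\tfrac{x^+-1}{x^++1}-\tfrac{i\pi}{2}$, and the conjugation rule above holds. A minimal sketch (plain Python; the sample point is arbitrary, and away from the cuts no branch subtleties arise):
\begin{verbatim}
import cmath

h = 1.7
def gamma_u(u, sign):
    # gamma^{pm}(u); sign = +1 for "+", -1 for "-"
    s = sign*1j/h
    return 0.5*cmath.log((u + s - 2)/(u + s + 2)) - sign*1j*cmath.pi/2

xp = 1.4 + 0.8j                      # string region: |x| > 1
u  = xp + 1/xp - 1j/h
print(abs(gamma_u(u, +1) - (cmath.log((xp - 1)/(xp + 1)) - 1j*cmath.pi/2)))
# conjugation rule: (gamma^+(u))^* = gamma^-(u^*)
print(abs(gamma_u(u, +1).conjugate() - gamma_u(u.conjugate(), -1)))
\end{verbatim}
Both printed numbers vanish to machine precision.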
In a similar way, we may define the $u$-rapidity for massless particles,
\begin{equation}
u = x+\frac{1}{x}\,.
\end{equation}
In analogy with the massive case, we set
\begin{equation}
\gamma(u)=\frac{1}{2} \log \left(\frac{u-2}{u+2}\right)- \frac{i \pi}{2} \,,\qquad
\tilde{\gamma}(u)=\frac{1}{2} \log \left(-\frac{u-2}{u+2}\right)+ \frac{i \pi}{2}\,,
\end{equation}
so that in the string region $\gamma(u)$ has a short cut, and $\tilde{\gamma}(u)$ has a long cut. In the string region, the cut runs from $-2$ to $+2$, and real values of $\gamma(u)$ occur just above the cut. In the mirror region, the cut runs between minus infinity and $-2$, and between $2$ and plus infinity. Real values of $\tilde{\gamma}$ occur just above the cut.
The two functions are related as follows
\begin{equation}
\tilde{\gamma}(u+i0) = \gamma(u+i0)+\frac{i}{2}\pi \,,
\end{equation}
and either of them can be used in both regions.
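This relation is easy to verify numerically just above the real line, both on and off the short cut. A minimal sketch (plain Python):
\begin{verbatim}
import cmath

def gam(u):    # string rapidity: short cut
    return 0.5*cmath.log((u - 2)/(u + 2)) - 1j*cmath.pi/2
def gamt(u):   # mirror rapidity: long cut
    return 0.5*cmath.log(-(u - 2)/(u + 2)) + 1j*cmath.pi/2

for u0 in (-5.0, -1.0, 0.5, 3.0):    # points on and off the short cut
    u = u0 + 1e-12j                  # approach the real line from above
    print(u0, gamt(u) - gam(u))      # ~ i*pi/2 in all cases
\end{verbatim}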
\begin{figure}
\centering
\begin{tikzpicture}
\draw[->, thick] (-3cm,0)--(3cm,0);
\draw[->, thick] (0,-1cm)--(0,1cm);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (2.8cm,0.8cm) {$\gamma(u)$};
\draw[OliveGreen, snake, ultra thick] (-1.5cm,0cm) -- (1.5cm,0cm);
\draw[OliveGreen, ultra thick] (-1.5cm,0.1cm) -- (1.5cm,0.1cm);
\fill[fill=black] (-1.5cm,0cm) circle (0.07cm);
\fill[fill=black] (+1.5cm,0cm) circle (0.07cm);
\end{tikzpicture} \hspace{1.5cm}
\begin{tikzpicture}
\draw[->, thick] (-3cm,0)--(3cm,0);
\draw[->, thick] (0,-1cm)--(0,1cm);
\node[rounded corners, draw=black, fill=white, fill opacity=0.7, text opacity=1] at (2.8cm,0.8cm) {$\tilde{\gamma}(u)$};
\draw[Plum, snake, ultra thick] (-3cm,0cm) -- (-1.5cm,0cm);
\draw[Plum, snake, ultra thick] (1.5cm,0cm) -- (3cm,0cm);
\draw[Plum, ultra thick] (-3cm,0.1cm) -- (-1.5cm,0.1cm);
\draw[Plum, ultra thick] (1.5cm,0.1cm) -- (3cm,0.1cm);
\fill[fill=black] (-1.5cm,0cm) circle (0.07cm);
\fill[fill=black] (+1.5cm,0cm) circle (0.07cm);
\end{tikzpicture}
\caption{The $u$ plane for massless particles. In the left panel, the function $\gamma(u)$ has branch points at $u=\pm 2$ and the branch cut runs on the green wavy line; real momentum and positive energy corresponds to $\gamma(u)$ just above the cut. In the right panel, we make a similar drawing for $\tilde{\gamma}(u)$; this time the branch cuts are ``long'', \textit{i.e.}\ they go to infinity, and real values of the mirror momentum as well as positive values of the mirror energy correspond to $\tilde{\gamma}(u)$ just above the cut.}
\label{fig:uplanemassless}
\end{figure}
\subsection{Crossing equations and rapidity-difference solutions}
\label{sec:rapidities:crossing}
It is convenient to rewrite the crossing equations in terms of these rapidity variables. We will see that all equations can be formally solved in terms of two functions which depend only on the \textit{difference} of the rapidities.
To write more compact formulae, let us introduce the short-hand notation for the difference of two rapidities
\begin{equation}
\label{eq:gammashorthand}
\gamma^{ab}_{12}=\gamma_1^a-\gamma_2^b\,,\qquad a=\pm,\quad b=\pm\,,
\end{equation}
as well as
\begin{equation}
\gamma^{\pm\circ}_{12}=\gamma_1^\pm-\gamma_2\,, \qquad
\gamma^{\circ\pm}_{12}=\gamma_1-\gamma_2^\pm\,,\qquad
\gamma^{\circ\circ}_{12}=\gamma_1-\gamma_2\,.
\end{equation}
\subsubsection{Massive sector}
To begin with, we have
\begin{equation}
\Big(\varsigma^{\boldsymbol{+}}(\bar{\gamma}_1^\pm,\gamma_2^\pm)\Big)^{-2}\Big(\varsigma^{\boldsymbol{+}}(\gamma_1^\pm,\gamma_2^\pm)\Big)^{-2}
=
\coth\frac{\gamma_{12}^{++}}{2}
\coth\frac{\gamma_{12}^{+-}}{2}
\coth\frac{\gamma_{12}^{-+}}{2}
\coth\frac{\gamma_{12}^{--}}{2}\,.
\end{equation}
Similarly, we have for the monodromy equation
\begin{equation}
\frac{\Big(\varsigma^{\boldsymbol{-}}(\bar{\gamma}_1^\pm,\gamma_2^\pm)\Big)^{-2}}{\Big(\varsigma^{\boldsymbol{-}}(\gamma_1^\pm,\gamma_2^\pm)\Big)^{-2}}=
\frac{\sinh\gamma_{12}^{+-}\ \sinh\gamma_{12}^{-+}}{\sinh\gamma_{12}^{++}\ \sinh\gamma_{12}^{--}}\,.
\end{equation}
We can solve these equations \textit{in terms of an expression of difference form} in $\gamma$ by requiring that
\begin{equation}
\Big(\varsigma^{\boldsymbol{+}}(\gamma_1^\pm,\gamma_2^\pm)\Big)^{-2}=F^+_{\text{CDD}}(\gamma_1^\pm,\gamma_2^\pm)\,\varPhi(\gamma^{++}_{12})\,\varPhi(\gamma^{+-}_{12})\,\varPhi(\gamma^{-+}_{12})\,\varPhi(\gamma^{--}_{12})\,,
\end{equation}
with
\begin{equation}
\label{eq:varPhicrossing}
\varPhi(\gamma-i \pi) \varPhi(\gamma)=i \coth\frac{\gamma}{2}\,,\qquad
\varPhi(\gamma) \varPhi(\gamma+i\pi)=i \tanh\frac{\gamma}{2}\,,
\end{equation}
and where $F^+_{\text{CDD}}(\gamma_1^\pm,\gamma_2^\pm)$ is an undetermined CDD factor~\cite{Castillejo:1955ed}, \textit{i.e.}\ a solution of the homogeneous (trivial) crossing equation and of the unitarity conditions (which could also be of difference form).
Furthermore, to satisfy the unitarity conditions we must have
\begin{equation}
\varPhi(\gamma)\,\varPhi(-\gamma)=1\,,\qquad
\varPhi(\gamma)^*=\frac{1}{\varPhi(\gamma^*)}\,,
\end{equation}
with similar unitarity conditions holding separately for $F^+_{\text{CDD}}(\gamma_1^\pm,\gamma_2^\pm)$ too.
Similarly, the monodromy equation can be solved in terms of
\begin{equation}
\Big(\varsigma^{\boldsymbol{-}}(\gamma_1^\pm,\gamma_2^\pm)\Big)^{-2}=
F^-_{\text{CDD}}(\gamma_1^\pm,\gamma_2^\pm)\,
\frac{\widehat{\varPhi}(\gamma_{12}^{++})\,\widehat{\varPhi}(\gamma_{12}^{--})}{\widehat{\varPhi}(\gamma_{12}^{+-})\,\widehat{\varPhi}(\gamma_{12}^{-+})}\,,
\end{equation}
provided that it satisfies
\begin{equation}
\label{eq:varPsimonodromy}
\frac{\widehat{\varPhi}(\gamma+i \pi)}{\widehat{\varPhi}(\gamma)}=2i\,\sinh\gamma\,,\qquad
\frac{\widehat{\varPhi}(\gamma-i \pi)}{\widehat{\varPhi}(\gamma)}=\frac{i}{2\sinh\gamma}\,,
\end{equation}
as well as
\begin{equation}
\widehat{\varPhi}(\gamma)\,\widehat{\varPhi}(-\gamma)=1\,,\qquad
\widehat{\varPhi}(\gamma)^*=\frac{1}{\widehat{\varPhi}(\gamma^*)}\,,
\end{equation}
and $F^-_{\text{CDD}}(\gamma_1^\pm,\gamma_2^\pm)$ is another yet-to-be-determined CDD factor.
\subsubsection{Mixed-mass sector}
The mixed-mass crossing equations take the form
\begin{equation}
\label{eq:crossingmixedgamma}
\begin{aligned}
\Big(\varsigma^{\bullet\circ}(\bar{\gamma}_1^\pm,\gamma_2)\Big)^{-2} \Big(\varsigma^{\bullet\circ}(\gamma_1^\pm,\gamma_2)\Big)^{-2}&=
\coth\frac{\gamma_{12}^{+\circ}}{2}\coth\frac{\gamma_{12}^{-\circ}}{2}\,,
\\
\Big(\varsigma^{\circ\bullet}(\bar{\gamma}_1,\gamma_2^\pm)\Big)^{-2} \Big(\varsigma^{\circ\bullet}(\gamma_1,\gamma_2^\pm)\Big)^{-2}&=
\tanh\frac{\gamma_{12}^{\circ+}}{2}\tanh\frac{\gamma_{12}^{\circ-}}{2}\,,\\
\Big(\varsigma^{\circ\bullet}(\gamma_1,\bar{\gamma}_2^\pm)\Big)^{-2} \Big(\varsigma^{\circ\bullet}(\gamma_1,\gamma_2^\pm)\Big)^{-2}&=
\tanh\frac{\gamma_{12}^{\circ+}}{2}\tanh\frac{\gamma_{12}^{\circ-}}{2},\\
\Big(\varsigma^{\bullet\circ}(\gamma_1^\pm,\bar{\gamma}_2)\Big)^{-2} \Big(\varsigma^{\bullet\circ}(\gamma_1^\pm,\gamma_2)\Big)^{-2}&=
\coth\frac{\gamma_{12}^{+\circ}}{2}\coth\frac{\gamma_{12}^{-\circ}}{2}\,.
\end{aligned}
\end{equation}
We start by assuming that we can solve the first equation by setting
\begin{equation}
\Big(\varsigma^{\bullet\circ}(\gamma_1^\pm,\gamma_2)\Big)^{-2}=F_{\text{CDD}}(\gamma_{12}^{\pm\circ})\,\varPhi(\gamma_{12}^{+\circ})\,\varPhi(\gamma_{12}^{-\circ})\,,
\end{equation}
where, as before, $\varPhi(\gamma)$ satisfies eq.~\eqref{eq:varPhicrossing}. By braiding unitarity we necessarily have that
\begin{equation}
\Big(\varsigma^{\circ\bullet}(\gamma_1,\gamma_2^\pm)\Big)^{-2}=\frac{1}{F_{\text{CDD}}(\gamma_{21}^{\pm\circ})}\,\varPhi(\gamma_{12}^{\circ+})\,\varPhi(\gamma_{12}^{\circ-})\,,
\end{equation}
which solves the relevant crossing equation consistently with our prescription
\begin{equation}
\bar{\gamma}=\gamma+i\pi\,,
\end{equation}
in contrast with what we did for the massive case. The last two equations in~\eqref{eq:crossingmixedgamma} are then automatically satisfied.
\subsubsection{Massless sector}
Finally, for the massless-massless kinematics we have
\begin{equation}
\label{eq:masslesscrossingrapid}
\begin{aligned}
&\Big(\varsigma^{\circ\circ}(\bar{\gamma}_1,\gamma_2)\Big)^{-2} \Big(\varsigma^{\circ\circ}(\gamma_1,\gamma_2)\Big)^{-2}
=
\tanh^2\frac{\gamma_{12}^{\circ\circ}}{2}\,,\\
&\Big(\tilde{\varsigma}^{\circ\circ}(\bar{\gamma}_1,\gamma_2)\Big)^{-2} \Big(\tilde{\varsigma}^{\circ\circ}(\gamma_1,\gamma_2)\Big)^{-2}
=
\tanh^2\frac{\gamma_{12}^{\circ\circ}}{2}\,.
\end{aligned}
\end{equation}
These equations look identical but there is an important difference in how we may solve them. In the case of opposite-chirality scattering we can simply set
\begin{equation}
\Big(\tilde{\varsigma}^{\circ\circ}(\gamma_1,\gamma_2)\Big)^{-2} = i\,\Big(\varPhi(\gamma_{12}^{\circ\circ})\Big)^2\,,
\end{equation}
as usual up to a CDD factor.
However, this is not possible for same-chirality scattering. Indeed in that case braiding unitarity forbids such a solution. Instead we need to set
\begin{equation}
\label{eq:masslesssamechirsol}
\Big(\varsigma^{\circ\circ}(\gamma_1,\gamma_2)\Big)^{-2} =
a(\gamma_{12}^{\circ\circ})\,\Big(\varPhi(\gamma_{12}^{\circ\circ})\Big)^2\,,
\end{equation}
where we have
\begin{equation}
a(\gamma)\,a(\gamma+i \pi) = -1\,,\qquad a(\gamma)\,a(-\gamma)=1\,,\qquad
a(\gamma)^*=\frac{1}{a(\gamma^*)}\,.
\end{equation}
It is possible, and perhaps advisable, to use a solution of the form~\eqref{eq:masslesssamechirsol} for the opposite-chirality scattering too.
In~\cite{Fontanella:2019ury}, the massless-massless crossing equation was already rewritten and solved in the difference-form parametrisation of~\cite{Fontanella:2019baq}. However, their result apparently did not feature any nontrivial function~$a(\gamma)$ (nor a factor of $\pm i$).
Yet, such a factor seems unavoidable for same-chirality scattering. Besides, the S~matrix should take specific values when one of the momenta is zero, which also constrains the normalisation~\cite{upcoming:massless}.
We shall return to this point in section~\ref{sec:proposal:massless} when discussing the analytic properties of our proposed solutions.
\section{Building blocks of the dressing factors}
\label{sec:buildingblocks}
We have assumed that the crossing equations may be solved in terms of the BES phase (appropriately generalised to the massless kinematics where needed) together with two functions $\varPhi(\gamma)$ and $\widehat{\varPhi}(\gamma)$ whose argument will be the difference of the rapidities introduced in section~\ref{sec:rapidities:massive}. Here we present the main features of these functions, relegating some details of their derivation to the appendices.
\subsection{The Beisert-Eden-Staudacher factor}
\label{sec:buildingblocks:BES}
The BES dressing factor for massive particles can be represented as
\begin{equation}
\sigma_{\text{\tiny BES}}(x^\pm_1,x^\pm_2) = e^{i\theta(x^\pm_1,x^\pm_2)}\,,
\end{equation}
where
\begin{equation}
\label{eq:thetaBES}
\theta(x_1^+,x_1^-,x_2^+,x_2^-) =\chi(x_1^+,x_2^+)
-\chi(x_1^+,x_2^-)-\chi(x_1^-,x_2^+)+\chi(x_1^-,x_2^-)\,.
\end{equation}
For $|x_1|>1$ and $|x_2|>1$ the function $\chi(x_1,x_2)$ is given by the integral representation~\cite{Dorey:2007xn}
\begin{equation}
\label{eq:chiBES}
\begin{aligned}
&\chi(x_1,x_2)=\Phi(x_1,x_2)\,,
\qquad|x_1|>1\,,\quad|x_2|>1\,,\\
&\Phi(x_1,x_2)=i\oint\frac{{\rm d}w_1}{2\pi i}\oint \frac{{\rm
d}w_2}{2\pi i}\frac{1}{(w_1-x_1)(w_2-x_2)} \log\frac{\Gamma\big[1+\tfrac{ih}{
2}\big(w_1+\tfrac{1}{w_1}-w_2-\tfrac{1}{w_2}\big)\big]}{
\Gamma\big[1-\tfrac{ih}{
2}\big(w_1+\tfrac{1}{w_1}-w_2-\tfrac{1}{w_2}\big)\big]}\, ,
\end{aligned}
\end{equation}
which is skew-symmetric. As we mentioned, the BES factor satisfies the crossing equation
\begin{equation}
\label{eq:crossingBES}
\sigma_{\text{\tiny BES}} (x^\pm_1,x^\pm_2)\sigma_{\text{\tiny BES}} (\bar{x}^\pm_1,x^\pm_2)=\frac{x_2^-}{x_2^+}g(x_1^\pm,x_2^\pm)\,,
\end{equation}
see eq.~\eqref{eq:gfunction}. It is worth emphasising that the BES factor is regular as long as $|x_i|>1$. Moreover, the function~\eqref{eq:chiBES} can be continued inside the unit circle, even though in that region there are singularities whose position depends on the coupling~$h$. Still, no singularities occur outside a disk of radius $(\sqrt{1+(2h)^{-2}}-(2h)^{-1})$, so that the continuation is straightforward in an annulus around the unit circle. See~\cite{Arutyunov:2009kf} for a discussion of the analytic properties of $\chi(x_1,x_2)$.
It is also worth recalling that the BES factor admits a well-known large-$h$ asymptotic expansion.%
\footnote{The BES factor also admits a rather well-behaved small-$h$ expansion, which is very important in AdS$_5$/CFT$_4$ but which will be less useful for us at least presently, as the CFT$_2$ dual of our model is unknown.}
Specifically, the function $\Phi(x_1,x_2)$ can be expanded in terms of the AFS phase~\cite{Arutyunov:2004vx} and of the Hernandez-Lopez (HL) one~\cite{Hernandez:2006tk},
\begin{equation}
\Phi(x_1,x_2) = \Phi_{\text{AFS}}(x_1,x_2) + \Phi_{\text{HL}}(x_1,x_2) +\cdots\,.
\end{equation}
The leading term scales with~$h$ and, when $|x_1|>1$, $|x_2|>1$, it takes the form
\begin{equation}
\label{eq:AFS}
\Phi_{\text{AFS}}(x_1,x_2)
=\frac{h}{2} \left[\frac{1}{x_1}-\frac{1}{x_2}+\big(-x_1+x_2+\frac{1}{x_2}-\frac{1}{x_1}\big) \log \big(1-\frac{1}{x_1 x_2}\big)\right]\,,
\end{equation}
while the sub-leading one takes the form
\begin{equation}
\label{eq:HL}
\Phi_{\text{HL}}(x_1,x_2) =- \frac{\pi}{2}\oint\frac{{\rm d}w_1}{2\pi i}\oint \frac{{\rm
d}w_2}{2\pi i}\frac{\text{sgn}\big(w_1+\frac{1}{w_1}-w_2-\frac{1}{w_2}\big)}{(w_1-x_1)(w_2-x_2)}\,,
\end{equation}
which can be computed explicitly in terms of dilogarithms~\cite{Arutyunov:2006iu,Beisert:2006ib}.
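An elementary property that is easy to verify on these explicit expressions is skew-symmetry, which the AFS term inherits order by order from that of $\Phi(x_1,x_2)$. A minimal numerical sketch (plain Python; the sample points, with $|x_i|>1$, are arbitrary):
\begin{verbatim}
import cmath

h = 2.0
def phi_afs(x1, x2):
    # the AFS term, for |x1|, |x2| > 1
    L = cmath.log(1 - 1/(x1*x2))
    return h/2*((1/x1 - 1/x2) + (-x1 + x2 + 1/x2 - 1/x1)*L)

x1, x2 = 1.8 + 0.9j, -1.1 + 1.4j
print(phi_afs(x1, x2) + phi_afs(x2, x1))   # ~ 0, i.e. skew-symmetric
\end{verbatim}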
Finally, it is worth recalling that the BES factor does not satisfy physical unitarity in the mirror region. Rather, it obeys~\cite{Arutyunov:2007tc, Arutyunov:2009kf}
\begin{equation}
\big(\sigma_{\text{\tiny BES}}(x_{1,m}^\pm,x_{2,m}^\pm)\big)^{-2}
\Big(\big(\sigma_{\text{\tiny BES}}(x_{1,m}^\pm,x_{2,m}^\pm)\big)^{-2}\Big)^*
=
e^{-2ip_1}e^{+2ip_2}\,.
\end{equation}
This factor precisely compensates the one arising from the normalisation~\eqref{eq:massivenorm}. In fact, this can be extended to the case of the dressing factor describing the scattering of two mirror bound states with bound state numbers $Q_1\geq1$ and $Q_2\geq1$, in which case~\cite{Arutyunov:2009kf}
\begin{equation}
\label{eq:boundstatemirrorBES}
\big(\sigma_{\text{\tiny BES}}(\tilde{x}_{1}^\pm,\tilde{x}_{2}^\pm)\big)^{-2}
\Big(\big(\sigma_{\text{\tiny BES}}(\tilde{x}_{1}^\pm,\tilde{x}_{2}^\pm)\big)^{-2}\Big)^*
=
e^{-2iQ_2p_1}e^{+2iQ_1p_2}\,,
\end{equation}
where $\tilde{x}_i^\pm$ correspond to mirror particles with real mirror momenta.
More generally, when considering the mirror region, it is often useful to introduce an ``improved'' BES phase~\cite{Arutyunov:2009kf}, which is unitary in the mirror region and has simple properties when considering bound states of the mirror theory (it behaves nicely under the fusion procedure for such bound states). While this improved phase has good properties in the mirror kinematics, it is not convenient to study the original (string-theory) kinematics. In this work we are mainly interested in the string-theory kinematics, hence we postpone most of the discussion of the mirror dressing factors to future work~\cite{upcoming:mirror}.
\subsubsection{Mixed-mass kinematics}
We want to define the BES factor for the case where one particle, \textit{e.g.}\ the first one, is massless, that is when
\begin{equation}
|x_1^+|=1\,,\qquad \mathfrak{I}[x^+_1]\geq0\,,\qquad
x_1^-\,x_1^+=1\,.
\end{equation}
For finite $h$ we can just use the fact that if one of the circles in~\eqref{eq:chiBES} is of unit radius then the radius of the second
circle can be reduced up to $(\sqrt{1+(2h)^{-2}}-(2h)^{-1})$. Then we can place $x_1^\pm$ in \eqref{eq:chiBES} on the unit circle, and get a representation for the mixed-mass BES phase for real momentum.
Consider now the crossing equation \textit{when we cross the massive particle}. In this case, the massless rapidity is a spectator, and we can continue it to take values on the unit circle without encountering any obstruction. This may be done in each step of the derivation of the crossing equation of ref.~\cite{Arutyunov:2009kf}, finding
\begin{equation}
\sigma(\bar{x}^\pm_1,x_2)\,\sigma(x^\pm_1,x_2)=\frac{1}{x_2^2}\frac{f(x_1^+,x_2)}{f(x_1^-,x_2)}\,,
\end{equation}
where the right-hand side is the limit of the right-hand-side of eq.~\eqref{eq:crossingBES} when $x_2^\pm\to (x_2)^{\pm1}$.
Things are not so straightforward when considering the crossing transformation for the \textit{massless} variable, because it is a priori not obvious which path we should follow. Let us recall that, for massive particles, the analytic continuation that yields crossing takes a path inside the unit circle of the $x$ plane. In that region, the BES phase has cuts~\cite{Arutyunov:2009kf}. Depending on the mass $M$ of the particle which we are considering, we need to follow different paths to perform the crossing transformation. More precisely, for $M$-particle bound states, the crossing equation is reproduced when crossing \textit{exactly $M$} branch cuts~\cite{Arutyunov:2009kf} --- and for a fundamental particle with $M=1$, it is reproduced when crossing exactly one cut. Postponing a more detailed discussion of the analytic properties of the mixed-mass and massless-massless BES phase to appendix~\ref{app:BES}, in this case we find that, as long as $h$ is finite, it is possible to perform the crossing transformation \textit{without crossing any of the cuts inside the unit disk}. This is consistent with the mass-$M$ particle crossing for the case $M=0$. In this way, we find that the crossing equation becomes trivial,
\begin{equation}
\label{eq:mixedmasstrivialcrossing}
\sigma_{\text{\tiny BES}}(\bar{x}_1,x_2^\pm)\,\sigma_{\text{\tiny BES}}(x_1,x_2^\pm)=1\,.
\end{equation}
While the preceding discussion is perfectly fine for finite $h$, it is not well-suited for the asymptotic large-$h$ expansion. To have a representation well-defined for any $h$ we can analytically continue the BES phase to complex momentum keeping the relation $x_1^-=1/x_1^+$. An advantage of this approach is that for complex $p$ both contours can be taken to be unit circles, and the large-$h$ limit can be taken as for massive particles. The only difference is that if, for example, $|x^+_1|<1$ then $|x^-_1|>1$, which means that the integral representation of $\theta(x_1^\pm,x_2^\pm)$ in terms of $\Phi(x_1,x_2)$ --- \textit{cf.}\ eqs.~\eqref{eq:thetaBES} and \eqref{eq:chiBES} --- needs to be amended. By analytic continuation we find that in this region
\begin{equation}
\label{eq:massless-massive-BES}
\begin{aligned}
\theta(x_1,x_2^\pm) &=\Phi(x_1,x_2^+)
-\Phi(x_1,x_2^-)-\Phi\big(\frac{1}{x_1},x_2^+\big)+\Phi\big(\frac{1}{x_1},x_2^-\big)
\\
&+\Psi\big(\frac{1}{x_1},x_2^+\big)-\Psi\big(\frac{1}{x_1},x_2^-\big)\,
\\&=2\Phi(x_1,x_2^+)
-2\Phi(x_1,x_2^-)-\Phi(0,x_2^+)+\Phi(0,x_2^-)
\\
&+\Psi({ x_1},x_2^+)-\Psi({ x_1},x_2^-)\,,
\end{aligned}
\end{equation}
where
\begin{equation}
\Psi(x_1,x_2)=i\oint\frac{{\rm d
}w}{2\pi i} \frac{1}{w-x_2}\log\frac{\Gamma\big[1+\frac{ih}{2}\big(x_1+\frac{1}{x_1}-w-\frac{1}{ w}\big)\big]}
{\Gamma\big[1-\frac{ih}{2}\big(x_1+\frac{1}{x_1}-w-\frac{1}{ w}\big)\big]}\,,
\end{equation}
and we used the identity $\Phi(x_1,x_2)+\Phi(\tfrac{1}{x_1},x_2)=\Phi(0,x_2)$ and the properties of the $\Psi$ function to write the result in a slightly simpler way in the last equality of~\eqref{eq:massless-massive-BES}.
This formula will allow us to expand the phase at strong coupling, $h\gg1$.
In appendix~\ref{app:BESexpansion} we discuss how to derive the asymptotic expansion for~\eqref{eq:massless-massive-BES} when $|x_1|=1$.
We will also need to consider the dressing factor in the mirror kinematics. Following the logic of~\cite{Arutyunov:2009kf}, this can be obtained by analytic continuation from the string region to the mirror region (where for massless particles these ``regions'' are lines). To perform this continuation in the massless variable we need to give a prescription, because we may potentially encounter branch points. In analogy with what was done for the crossing transformation, we perform the continuation by avoiding all cuts. The resulting expression is not much different from the one for the BES (massive) factor in the mirror-mirror region, and we present it in appendix~\ref{app:BES}. In the same appendix, using that computation we establish the behaviour of the mirror-mirror mixed-mass phase under complex conjugation, which turns out to be
\begin{equation}
\label{eq:mixedmirrorBES}
\big(\sigma_{\text{\tiny BES}}(\tilde{x}_{1},\tilde{x}_{2}^\pm)\big)^{-2}
\Big(\big(\sigma_{\text{\tiny BES}}(\tilde{x}_{1},\tilde{x}_{2}^\pm)\big)^{-2}\Big)^*
=
e^{-2ip_1}\,,
\end{equation}
where $\tilde{x}_1$ and $\tilde{x}_{2}^\pm$ are in the mirror kinematics (with real mirror momentum; we may also generalise this to complex momenta).
Interestingly, this is equivalent to the ``mass-to-zero'' limit of the mirror bound-state relation~\eqref{eq:boundstatemirrorBES}, obtained by formally taking $Q_1\to0$ while keeping $Q_2=1$.
\subsubsection{Massless kinematics}
In a similar way, we want to define the BES factor when \textit{both} particles are massless. By the same logic as above, for finite $h$ we can deform both integration contours in~\eqref{eq:chiBES}, as long as they lie outside a disk of radius $(\sqrt{1+(2h)^{-2}}-(2h)^{-1})$. It is therefore straightforward to derive the crossing equation in this kinematics. We start from the crossing equation for the mixed-mass kinematics~\eqref{eq:mixedmasstrivialcrossing}, and continue analytically $x_2^\pm\to (x_2)^{\pm1}$ where $|x_2|=1$. As we argued before, since $x_2$ is a spectator in the equation, we can straightforwardly take the limit in the right-hand side, which in this case happens to be trivial. Hence
\begin{equation}
\sigma_{\text{\tiny BES}}(\bar{x}_1,x_2)\,\sigma_{\text{\tiny BES}}(x_1,x_2)=1\,.
\end{equation}
Once again, for the purpose of a large-$h$ expansion this representation is not convenient. Instead, we repeat what we did for the mixed-mass sector. Starting now from eq.~\eqref{eq:massless-massive-BES} we take $x_2^\pm$ to satisfy $x_2^+=1/x_2^-$, approaching the unit circle with $|x_2^+|>1$. Therefore, we need to analytically continue~\eqref{eq:massless-massive-BES} because $x_2^-$ is inside the circle. We have then, for $|x_1|=|x_2|=1$
\begin{equation}
\label{eq:masslessBES}
\theta(x_1,x_2)
=2\Phi(x_1,x_2) -2\Phi\big(x_1,\frac{1}{x_2}\big)-\Phi(0,x_2)+\Phi\big(0,\frac{1}{x_2}\big)
+\Psi(x_1,x_2)-\Psi\big(x_1,\frac{1}{x_2}\big)\,,
\end{equation}
where we used the identity $\Phi(x_1,x_2)+\Phi(\tfrac{1}{x_1},x_2)=\Phi(0,x_2)$ to somewhat simplify the expression for $\theta(x_1,x_2)$.
We can then proceed to expand the various terms asymptotically at large~$h$, which again yields an AFS-like leading order, and an HL-like sub-leading order. We refer the reader to appendix~\ref{app:BESexpansion} for the details.
As for the behaviour in the mirror region, similarly to what we did above we can obtain it by analytic continuation from the real string line to the real mirror line (avoiding the cuts). Once again, we report the expression in appendix~\ref{app:BES:massless}, where we also check the behaviour under complex conjugation of the massless-massless BES factor in the mirror region. We find that
\begin{equation}
\label{eq:masslessmirrorBES}
\big(\sigma_{\text{\tiny BES}}(\tilde{x}_{1},\tilde{x}_{2})\big)^{-2}
\Big(\big(\sigma_{\text{\tiny BES}}(\tilde{x}_{1},\tilde{x}_{2})\big)^{-2}\Big)^*
=
1\,,
\end{equation}
which is compatible with formally taking $Q_1\to0$ and $Q_2\to0$ in~\eqref{eq:boundstatemirrorBES}.
\subsection{The Sine-Gordon factor}
\label{sec:buildingblocks:SG}
A natural candidate for $\varPhi(\gamma)$ is the Sine-Gordon dressing factor,
\begin{equation}
\label{eq:SineGordonfactor}
\varPhi(\gamma) = \prod_{\ell=1}^\infty R\big(\ell,\gamma\big)\,,\qquad R(\ell,\gamma) =\frac{\Gamma^2(\ell -\tfrac{\gamma}{2\pi i}) \Gamma(\tfrac{1}{2}+\ell +\tfrac{\gamma}{2\pi i})\Gamma(-\tfrac{1}{2}+\ell +\tfrac{\gamma}{2\pi i})}{\Gamma^2(\ell +\tfrac{\gamma}{2\pi i})\Gamma(\tfrac{1}{2}+\ell -\tfrac{\gamma}{2\pi i})\Gamma(-\tfrac{1}{2}+\ell -\tfrac{\gamma}{2\pi i})}\,.
\end{equation}
The function so defined satisfies
\begin{equation}
\varPhi(\gamma)\varPhi(-\gamma)=1\,,\qquad
\varPhi(\gamma)^*=\frac{1}{\varPhi(\gamma^*)}\,,\qquad
\varPhi(\gamma)\varPhi(\gamma+i\pi)=i\tanh\frac{\gamma}{2}\,,
\end{equation}
as it is possible to verify from~\eqref{eq:SineGordonfactor}.
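For instance, the last relation can be checked numerically by truncating the product at some large $\ell_{\text{max}}$; since $R(\ell,\gamma)=1+O(\ell^{-2})$, the truncation error in the phase is $O(1/\ell_{\text{max}})$. A minimal sketch (assuming the \texttt{mpmath} Python library; the value of $\gamma$ is arbitrary):
\begin{verbatim}
from mpmath import mp, mpc, mpf, gamma, tanh, pi

mp.dps = 15
half = mpf(1)/2

def R(l, g):
    c = g/(2j*pi)
    return (gamma(l - c)**2*gamma(l + half + c)*gamma(l - half + c)) \
         / (gamma(l + c)**2*gamma(l + half - c)*gamma(l - half - c))

def Phi(g, lmax=2000):
    out = mpc(1)
    for l in range(1, lmax + 1):
        out *= R(l, g)
    return out

g = mpc('0.6', '0.2')
print(Phi(g)*Phi(g + 1j*pi))   # ~ i*tanh(g/2), up to O(1/lmax)
print(1j*tanh(g/2))
\end{verbatim}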
It should be stressed that this is by no means \emph{the only} solution of the above equations. Indeed, we can multiply this solution by any solution~$F(\gamma)$ of the homogeneous equation
\begin{equation}
F(\gamma)F(-\gamma)=1\,,\qquad
F(\gamma)^*=\frac{1}{F(\gamma^*)}\,,\qquad
F(\gamma)F(\gamma+i\pi)=1\,,
\end{equation}
and thus obtain a new solution $F(\gamma)\varPhi(\gamma)$. We will discuss later the choice of this CDD factor.
It is convenient to consider the logarithmic derivative $\varphi{}'(\gamma)$ with
\begin{equation}
\varphi(\gamma) = \log\varPhi(\gamma)\,,
\end{equation}
which can be expressed as an integral by means of the integral representation of the Digamma function,
\begin{equation}
\psi(z)\equiv\frac{d}{dz}\log\Gamma(z) = \int\limits_{0}^{+\infty}\left(\frac{e^{-t}}{t}-\frac{e^{-z\,t}}{1-e^{-t}}\right)dt\,,\qquad\mathfrak{R}[z]>0\,.
\end{equation}
Then we have that~$\varphi{}'(\gamma)$ is simply
\begin{equation}
\varphi{}'(\gamma)=\frac{i}{4\pi}\int\limits_0^{+\infty}\frac{\cos({\gamma t}/{2\pi})}{\cosh^2(t/{4})}dt = \frac{i\gamma}{\pi\sinh\gamma}\,,\qquad -\pi<\mathfrak{I}[\gamma]<\pi\,.
\end{equation}
From this we can derive the integral representation for $\varphi(\gamma)$
\begin{equation}
\varphi(\gamma)=\frac{i}{2}\int\limits_0^{+\infty}\frac{\sin({\gamma t}/{2\pi})}{t\cosh^2(t/4)}dt\,,\qquad-\pi<\mathfrak{I}[\gamma]<\pi\,,
\end{equation}
as well as the explicit expression
\begin{equation}
\label{eq:varphiexplicit}
\begin{aligned}
\varphi(\gamma)&=
\frac{i}{\pi}\text{Li}_2(-e^{-\gamma})-\frac{i}{\pi}\text{Li}_2(e^{-\gamma})+\frac{i\gamma}{\pi}\log(1-e^{-\gamma})-\frac{i\gamma}{\pi}\log(1+e^{-\gamma})+\frac{i\pi}{4}\,
\end{aligned}
\end{equation}
again valid in the region $-\pi<\mathfrak{I}[\gamma]<\pi$. For real $\gamma$, $\varphi(\gamma)$ is purely imaginary. Three notable values of $\varphi(\gamma)$ are
\begin{equation}
\varphi(\pm\infty) = \pm \frac{i\pi}{4}\,,\qquad
\varphi(0) = 0\,.
\end{equation}
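The integral representation and the explicit dilogarithm expression can be cross-checked against each other numerically. A minimal sketch (assuming \texttt{mpmath}; the value of $\gamma$ on the real line is arbitrary):
\begin{verbatim}
from mpmath import mp, quad, sin, cosh, log, exp, polylog, pi, inf, mpf

mp.dps = 20
g = mpf('0.8')

# integral representation of varphi
int_rep = 1j/2*quad(lambda t: sin(g*t/(2*pi))/(t*cosh(t/4)**2), [0, inf])

# explicit dilogarithm expression
expl = (1j/pi)*(polylog(2, -exp(-g)) - polylog(2, exp(-g))) \
     + (1j*g/pi)*(log(1 - exp(-g)) - log(1 + exp(-g))) + 1j*pi/4

print(int_rep, expl)    # the two agree
\end{verbatim}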
\subsection{The monodromy factor}
\label{sec:buildingblocks:monodromy}
We may define a function
\begin{equation}
\label{eq:phihatproduct}
\widehat{\varPhi}(\gamma) = e^{\frac{\gamma}{i\pi}} \prod_{\ell=1}^\infty \frac{\Gamma(\ell +\tfrac{\gamma}{i\pi})}{\Gamma(\ell -\tfrac{\gamma}{i\pi})} \,e^{\frac{2i}{\pi}\,\psi(\ell)\,\gamma } \,,
\end{equation}
where $\psi(z)$ is the Digamma function.%
\footnote{The factor involving the Digamma function is needed to make the product convergent, while the exponential term in front of the product is a convenient normalisation.}
By construction, this satisfies the desired relations
\begin{equation}
\frac{\widehat{\varPhi}(\gamma\pm i\pi)}{\widehat{\varPhi}(\gamma)}=i(2\sinh\gamma)^{\pm1}\,,\qquad
\widehat{\varPhi}(\gamma)\,\widehat{\varPhi}(-\gamma)=1\,,\qquad
\widehat{\varPhi}(\gamma)^*=\frac{1}{\widehat{\varPhi}(\gamma^*)}\,.
\end{equation}
Once again, this is not \textit{the only} solution to the above equations.
Defining $\widehat{\varphi}(\gamma)=\log\widehat{\varPhi}(\gamma)$, we have that
\begin{equation}
\widehat{\varphi}{}'(\gamma)=\frac{\gamma\coth\gamma}{i\pi}\,,
\end{equation}
and
\begin{equation}
\label{eq:varphihatexplicit}
\widehat{\varphi}(\gamma) =
\frac{i}{2 \pi }\text{Li}_2\left(e^{-2\gamma }\right)-\frac{i}{2 \pi}\gamma^2 -\frac{i}{\pi}\,\gamma \log
\left(1-e^{-2\gamma }\right)-\frac{i\pi }{12}\,.
\end{equation}
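As a cross-check of this expression, for real $\gamma>0$ one finds $\widehat{\varphi}(\gamma+i\pi)-\widehat{\varphi}(\gamma)=\gamma+\log\left(1-e^{-2\gamma}\right)+\tfrac{i\pi}{2}=\log(2i\sinh\gamma)$, as required. A minimal numerical sketch (assuming \texttt{mpmath}):
\begin{verbatim}
from mpmath import mp, polylog, log, exp, sinh, pi, mpf

mp.dps = 20
def phihat(g):
    # explicit dilogarithm expression for varphi-hat
    return 1j/(2*pi)*polylog(2, exp(-2*g)) - 1j/(2*pi)*g**2 \
         - 1j/pi*g*log(1 - exp(-2*g)) - 1j*pi/12

g = mpf('0.9')
print(phihat(g + 1j*pi) - phihat(g))   # ~ log(2i*sinh(g))
print(log(2j*sinh(g)))
\end{verbatim}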
This solution is ``minimal'' in the sense that it may be derived, by means of the Fourier transform, under the assumption that it has no singularities in the strip between zero and $i\pi$; see appendix~\ref{app:Fourier}. Let us also introduce
\begin{equation}
\widehat{R}(\ell,\gamma) =\frac{\Gamma({\ell} +\tfrac{\gamma}{2\pi i})^2\, \Gamma({\ell} +\tfrac{1}{2}+\tfrac{\gamma}{2\pi i})\, \Gamma({\ell} -\tfrac{1}{2}+\tfrac{\gamma}{2\pi i})}
{\Gamma({\ell} -\tfrac{\gamma}{2\pi i})^2\, \Gamma({\ell} +\tfrac{1}{2}-\tfrac{\gamma}{2\pi i}) \,
\Gamma({\ell} -\tfrac{1}{2}-\tfrac{\gamma}{2\pi i}) } \,,
\end{equation}
and observe that we may write
\begin{equation}
\label{eq:Rhatrepresentation}
\frac{\widehat{\varPhi}(\gamma^{++}_{12})\,\widehat{\varPhi}(\gamma^{--}_{12})}{\widehat{\varPhi}(\gamma^{+-}_{12})\,\widehat{\varPhi}(\gamma^{-+}_{12})}
=
\prod_{\ell=1}^\infty\frac{ \widehat{R}(\ell,\gamma^{++}_{12})\, \widehat{R}(\ell,\gamma^{--}_{12})}{ \widehat{R}(\ell,\gamma^{+-}_{12})\, \widehat{R}(\ell,\gamma^{-+}_{12})}\,.
\end{equation}
Note that the infinite product over a single factor, $\prod_{\ell=1}^\infty \widehat{R}(\ell,\gamma)$, does not converge, while the expression~\eqref{eq:Rhatrepresentation} does.
\subsection{The auxiliary function \texorpdfstring{$a(\gamma)$}{a(gamma)}}
\label{sec:buildingblocks:a}
To solve the massless scattering we introduced an auxiliary function $a(\gamma)$ that must satisfy
\begin{equation}
a(\gamma)\,a(\gamma+i \pi) = -1\,,\qquad a(\gamma)\,a(-\gamma)=1\,,\qquad
a(\gamma)^*=\frac{1}{a(\gamma^*)}\,.
\end{equation}
One such function is
\begin{equation}
a(\gamma) = -i\,\tanh\left(\frac{\gamma}{2}-\frac{i\pi}{4}\right)\,,
\end{equation}
and we note for later convenience that
\begin{equation}
a(\mp\infty)=\pm i\,,\qquad
a(0)=-1\,.
\end{equation}
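All three conditions are easy to verify, either through elementary hyperbolic identities or numerically; a minimal sketch (plain Python; the sample points are arbitrary):
\begin{verbatim}
import cmath

def a(g):
    return -1j*cmath.tanh(g/2 - 1j*cmath.pi/4)

for g in (0.3 + 0.1j, -1.2 + 0.4j, 2.0 - 0.7j):
    assert abs(a(g)*a(g + 1j*cmath.pi) + 1) < 1e-12
    assert abs(a(g)*a(-g) - 1) < 1e-12
    assert abs(a(g).conjugate()*a(g.conjugate()) - 1) < 1e-12
print("all conditions on a(gamma) satisfied")
\end{verbatim}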
It is worth noting that this function has appeared before in the context of massless integrable quantum field theories, see for instance~\cite{Zamolodchikov:1991vx}.
\section{Proposal for the dressing factors}
\label{sec:proposal}
Using the functions introduced above, we can write down solutions of the crossing equations for the various dressing factors.
\subsection{Massive sector}
\label{sec:proposal:massive}
Here the building blocks are the BES phase~$\sigma_{\text{\tiny BES}}(x_1^\pm,x_2^\pm)$ and the functions $\varPhi(\gamma)$ and $\widehat{\varPhi}(\gamma)$, which appear in the equations for the product and ratio of the dressing factors, respectively.
Moreover, it turns out that we need to include a particular solution of the homogeneous equations, which will provide us with the correct pole structure and match the expected perturbative result. We set
\begin{equation}
\begin{aligned}
&\left(\varsigma^{\boldsymbol{+}} (x_1^\pm,x_2^\pm)\right)^{-2} =-\frac{\tanh\frac{\gamma^{-+}_{12}}{2}}{\tanh\frac{\gamma^{+-}_{12}}{2}}\, \varPhi(\gamma_{12}^{--})\varPhi(\gamma_{12}^{++}) \varPhi(\gamma_{12}^{-+})\varPhi(\gamma_{12}^{+-})\,,\\
&\left(\varsigma^{\boldsymbol{-}} (x_1^\pm,x_2^\pm)\right)^{-2} =-\frac{\sinh \gamma^{-+}_{12}}{\sinh\gamma^{+-}_{12}}\, \frac{\widehat{\varPhi}(\gamma^{++}_{12})\,\widehat{\varPhi}(\gamma^{--}_{12})}{\widehat{\varPhi}(\gamma^{+-}_{12})\,\widehat{\varPhi}(\gamma^{-+}_{12})}\,.
\end{aligned}
\end{equation}
By manipulating the product representation for these functions we may then introduce
\begin{equation}
\label{eq:Rplusminus}
R_+(\ell,\gamma)= \frac{ \Gamma(\ell +\tfrac{1}{2}+\tfrac{\gamma}{2\pi i})\, \Gamma(\ell -\tfrac{1}{2}+\tfrac{\gamma}{2\pi i})}
{ \Gamma(\ell +\tfrac{1}{2}-\tfrac{\gamma}{2\pi i})\, \Gamma(\ell -\tfrac{1}{2}-\tfrac{\gamma}{2\pi i})}\,,\qquad
R_-(\ell,\gamma)=\frac{\Gamma^2(\ell-\frac{\gamma}{2\pi i})}{\Gamma^2(\ell+\frac{\gamma}{2\pi i})}\,,
\end{equation}
which satisfy
\begin{equation}
R_+(\ell,\gamma)\,R_-(\ell,\gamma)=
R(\ell,\gamma)\,,\qquad
\frac{R_+(\ell,\gamma)}{R_-(\ell,\gamma)}=
\widehat{R}(\ell,\gamma)\,.
\end{equation}
Additionally, these factors satisfy a crossing equation of sorts,
\begin{equation}
\label{eq:crossingregularisation}
\begin{aligned}
\prod_{\ell=1}^\infty R_{+}(\ell,\gamma)R_{-}(\ell,\gamma+i\pi)=\frac{1}{\cosh\tfrac{\gamma}{2}}\,,\qquad
\prod_{\ell=1}^\infty R_{+}(\ell,\gamma)R_{-}(\ell,\gamma-i\pi)= \cosh\tfrac{\gamma}{2}\,,\\
%
\prod_{\ell=1}^\infty R_{-}(\ell,\gamma)R_{+}(\ell,\gamma+i\pi)=i\sinh\tfrac{\gamma}{2}\,,\qquad
\prod_{\ell=1}^\infty R_{-}(\ell,\gamma)R_{+}(\ell,\gamma-i\pi)= \frac{i}{\sinh\tfrac{\gamma}{2}}\,,
\end{aligned}
\end{equation}
where strictly speaking the left-hand side does not converge and the right-hand side is the result of a regularisation.%
\footnote{Namely, we consider the series for the logarithmic derivative of the left-hand-side, which is convergent. This does leave an ambiguity in fixing an overall multiplicative factor in the crossing equations.
We could also amend the definition of $R_\pm(\ell,\gamma)$ by a Digamma term similar to~\eqref{eq:phihatproduct} to make the products over~$\ell$ convergent. This would modify the crossing equations~\eqref{eq:crossingregularisation} by constant factors which drop out when considering the full dressing factors.
}
We then write
\begin{equation}
\label{eq:massiveproductrepr}
\begin{aligned}
&\left(\varsigma^{\bullet\bullet}(x^\pm_1,x^\pm_2)\right)^{-2} =
-\frac{\sinh \frac{\gamma_{12}^{-+}}{2}}{\sinh \frac{\gamma_{12}^{+-}}{2}}\,
\prod_{\ell=1}^\infty { R_+(\ell,\gamma_{12}^{--})R_+(\ell,\gamma_{12}^{++}) R_-(\ell,\gamma_{12}^{-+})R_-(\ell,\gamma_{12}^{+-})}\,,\\
&\left(\tilde{\varsigma}^{\bullet\bullet}(x^\pm_1,x^\pm_2)\right)^{-2} =
+\frac{\cosh \frac{\gamma_{12}^{+-}}{2}}{ \cosh \frac{\gamma_{12}^{-+}}{2}}\,
\prod_{\ell=1}^\infty { R_-(\ell,\gamma_{12}^{--})R_-(\ell,\gamma_{12}^{++}) R_+(\ell,\gamma_{12}^{-+})R_+(\ell,\gamma_{12}^{+-})}\,.
\end{aligned}
\end{equation}
The products have the following representations
\begin{equation}
\begin{aligned}
&\prod_{\ell=1}^\infty { R_+(\ell,\gamma_{12}^{--})R_+(\ell,\gamma_{12}^{++}) R_-(\ell,\gamma_{12}^{-+})R_-(\ell,\gamma_{12}^{+-})} =
e^{\varphi^{\bullet\bullet}(\gamma_1^\pm,\gamma_2^\pm)}\,,\\
&\prod_{\ell=1}^\infty { R_-(\ell,\gamma_{12}^{--})R_-(\ell,\gamma_{12}^{++}) R_+(\ell,\gamma_{12}^{-+})R_+(\ell,\gamma_{12}^{+-})} =
e^{\tilde\varphi^{\bullet\bullet}(\gamma_1^\pm,\gamma_2^\pm)}\,,
\end{aligned}
\end{equation}
with
\begin{equation}
\begin{aligned}
&\varphi^{\bullet\bullet}(\gamma_1^\pm,\gamma_2^\pm)
=\varphi_+^{\bullet\bullet}(\gamma_{12}^{--}) +\varphi_+^{\bullet\bullet}(\gamma_{12}^{++})+\varphi_-^{\bullet\bullet}(\gamma_{12}^{-+})+\varphi_-^{\bullet\bullet}(\gamma_{12}^{+-})\,,
\\
&\tilde\varphi^{\bullet\bullet}(\gamma_1^\pm,\gamma_2^\pm)
=\varphi_-^{\bullet\bullet}(\gamma_{12}^{--}) +\varphi_-^{\bullet\bullet}(\gamma_{12}^{++})+\varphi_+^{\bullet\bullet}(\gamma_{12}^{-+})+\varphi_+^{\bullet\bullet}(\gamma_{12}^{+-})\,.
\end{aligned}
\end{equation}
The right-hand side can be evaluated explicitly in terms of
\begin{equation}
\label{eq:massivesoldilog}
\begin{aligned}
&\varphi_-^{\bullet\bullet}(\gamma) =+ \frac{ i}{\pi} \text{Li}_2\left(+e^{\gamma}\right)- \frac{i}{4\pi} \gamma^2+\frac{i}{\pi} \gamma\, \log
\left(1-e^{\gamma }\right)-\frac{i \pi }{6}\,,
\\
&\varphi_+^{\bullet\bullet}(\gamma) =-\frac{ i}{\pi} \text{Li}_2\left(-e^{\gamma}\right)+ \frac{i}{4\pi} \gamma^2-\frac{i}{\pi}\gamma \, \log
\left(1+e^{\gamma}\right)-\frac{i \pi }{12}\,.
\end{aligned}
\end{equation}
In conclusion, we can write the full dressing factors $\sigma^{\bullet\bullet}(x_1^\pm,x_2^\pm)$ and $\widetilde{\sigma}^{\bullet\bullet}(x_1^\pm,x_2^\pm)$ in terms of the BES phase $\sigma_{\text{\tiny BES}}(x_1^\pm,x_2^\pm)$ as
\begin{equation}
\begin{aligned}&\left(\sigma^{\bullet\bullet}(x^\pm_1,x^\pm_2)\right)^{-2} =
-\frac{\sinh \frac{\gamma_{12}^{-+}}{2}}{ \sinh \frac{\gamma_{12}^{+-}}{2}}\,
e^{\tilde\varphi^{\bullet\bullet}(\gamma_1^\pm,\gamma_2^\pm)}\,\left(\sigma_{\text{\tiny BES}}(x_1^\pm,x_2^\pm)\right)^{-2},\\
&\left(\widetilde{\sigma}^{\bullet\bullet}(x^\pm_1,x^\pm_2)\right)^{-2} =
+\frac{\cosh \frac{\gamma_{12}^{+-}}{2}}{\cosh \frac{\gamma_{12}^{-+}}{2}}\,
e^{\varphi^{\bullet\bullet}(\gamma_1^\pm,\gamma_2^\pm)}\,\left(\sigma_{\text{\tiny BES}}(x_1^\pm,x_2^\pm)\right)^{-2}.
\end{aligned}
\end{equation}
\subsubsection{Properties}
We start by discussing the singularity structure of these dressing factors in the physical region for string theory and for the mirror theory. The location of these regions in the $\gamma^\pm$ parametrisation was described in section~\ref{sec:rapidities:massive}. Since we have stripped off the poles expected for the bound states of the theory in the normalisation~\eqref{eq:massivenorm}, we do not expect any additional poles to appear.
\paragraph{Singularities in the string region.}
In principle, we may expect singularities at any of the points $\gamma_{1}^\pm=\gamma^\pm_2$ mod$\,i\pi$ and $\gamma_{1}^\pm=\gamma^\mp_2$ mod$\,i\pi$, \textit{cf.}\ table~\ref{table:zeros}. Most of these configurations are not in the physical region. Indeed, the only physical configurations are $\gamma_1^+=\gamma_2^- -i\pi$, $\gamma_1^-=\gamma_2^+ +i\pi$, and $\gamma_1^\pm=\gamma_2^\pm$. Let us start from $\gamma_1^+=\gamma_2^- -i\pi$, and look at the product representation~\eqref{eq:massiveproductrepr} (we do not need to worry about the BES factor, which is regular in this region). For $\varsigma^{\bullet\bullet}(x_1^\pm,x_2^\pm)^{-2}$, the factor $\sinh\tfrac{\gamma^{+-}_{12}}{2}=-i$ is regular. Possible singularities might in principle come from the terms $R_-(\ell,\gamma^{-+}_{12})$, but recalling eq.~\eqref{eq:Rplusminus} we see that this never happens. For $\tilde{\varsigma}^{\bullet\bullet}(x_1^\pm,x_2^\pm)^{-2}$ we get a simple zero from $\cosh\tfrac{\gamma^{+-}_{12}}{2}=0$. However, we also have
\begin{equation}
R_{+}(\ell,\gamma^{+-}_{12})\Big|_{\gamma^{+-}_{12}=-i\pi}=\frac{1}{(\ell-1)\ell}\,,\qquad \ell=1,2,\dots\,,
\end{equation}
so that the pole for $\ell=1$ precisely compensates the zero.
The discussion for $\gamma_1^-=\gamma_2^+ +i\pi$ is similar, with the only caveat that for $\tilde{\varsigma}^{\bullet\bullet}(x_1^\pm,x_2^\pm)^{-2}$ now $\cosh\tfrac{\gamma^{-+}_{12}}{2}=0$ appears in the denominator, yielding a pole, which is compensated by a zero of $R_{+}(\ell,\gamma^{-+}_{12})\big|_{\gamma^{-+}_{12}=i\pi}=(\ell-1)\ell$ at $\ell=1$.
Finally, for $\gamma_1^+=\gamma_2^+$ or $\gamma_1^-=\gamma_2^-$ no singularity occurs because $R_\pm(\ell,0)=1$, \textit{cf.}~\eqref{eq:Rplusminus}.
\paragraph{Singularities in the mirror region.}
Even though the string and mirror regions lie in different strips of the $\gamma^\pm$ plane, the differences $\gamma_1^\pm-\gamma_2^\mp$ are still physical only if $\gamma_1^+=\gamma_2^- -i\pi$ or $\gamma_1^-=\gamma_2^+ +i\pi$. Additionally, we always have the possibility $\gamma_1^\pm=\gamma_2^\pm$. These are precisely the same configurations discussed for the string region. It follows from the above discussion that there are no poles in the physical mirror region.
\paragraph{String bound states and fusion.}
Two massive ``left'' particles, or two massive ``right'' particles, may form a supersymmetric bound state~\cite{Borsato:2013hoa}. Compatibly with the pole in~\eqref{eq:massivenorm}, this happens when
\begin{equation}
\label{eq:stringboundstate}
x_1^+ =x_2^- \qquad \Leftrightarrow\qquad \gamma_1^+=\gamma_2^- - i\pi\,.
\end{equation}
Given such a bound state of particles with rapidities $\gamma_1^\pm$ and $\gamma_2^\pm$, we may ask what the S-matrix elements are when scattering a particle of rapidity $\gamma_3^\pm$. For the matrix part of the S~matrix, this is fixed by representation theory. For the dressing factor, we expect the result to depend on $x_1^-$ and $x_2^+$ only---in this case we say that the dressing factor \textit{fuses}, and the argument can be iterated to study the scattering of $M$-particle bound states. This is what happens for the BES phase~\cite{Chen:2006gq}.
We now want to check that this is the case for $\varsigma^{\bullet\bullet}(\gamma_1^\pm,\gamma_2^\pm)^{-2}$. We have, schematically,%
\footnote{To prove this identity we impose the condition~\eqref{eq:stringboundstate} and may use the regularised expressions~\eqref{eq:crossingregularisation}.
Alternatively, we may work with the square of the equation and use the crossing equations, which gives the same result up to a possible sign ambiguity.}
\begin{equation}
\begin{aligned}
\left(\varsigma^{\bullet\bullet}_{13}\right)^{-2}\left(\varsigma^{\bullet\bullet}_{23}\right)^{-2}&=-\frac{\sinh\tfrac{\gamma^{-+}_{13}}{2}}{\sinh\tfrac{\gamma^{+-}_{13}}{2}}
\prod_{\ell=1}^\infty R_+(\ell,\gamma^{++}_{13})R_+(\ell,\gamma^{--}_{13})
R_-(\ell,\gamma^{+-}_{13})R_+(\ell,\gamma^{-+}_{13})\\
&\times\left(-\frac{\sinh\tfrac{\gamma^{-+}_{23}}{2}}{\sinh\tfrac{\gamma^{+-}_{23}}{2}}
\prod_{\ell=1}^\infty R_+(\ell,\gamma^{++}_{23})R_+(\ell,\gamma^{--}_{23})
R_-(\ell,\gamma^{+-}_{23})R_+(\ell,\gamma^{-+}_{23})\right)\\
&=-\frac{\sinh\tfrac{\gamma^{-+}_{13}}{2}}{\sinh\tfrac{\gamma^{+-}_{23}}{2}}
\prod_{\ell=1}^\infty R_+(\ell,\gamma^{++}_{23})R_+(\ell,\gamma^{--}_{13})
R_-(\ell,\gamma^{+-}_{23})R_+(\ell,\gamma^{-+}_{13})\,.
\end{aligned}
\end{equation}
The result depends on the constituents only through $\gamma_1^-$ and $\gamma_2^+$, as required for fusion. In a similar way we find
\begin{equation}
\left(\tilde{\varsigma}^{\bullet\bullet}_{13}\right)^{-2}\left(\tilde{\varsigma}^{\bullet\bullet}_{23}\right)^{-2}=
\frac{\cosh \frac{\gamma_{23}^{+-}}{2}}{\cosh \frac{\gamma_{13}^{-+}}{2}}\,
\prod_{\ell=1}^\infty { R_-(\ell,\gamma_{13}^{--})R_-(\ell,\gamma_{23}^{++}) R_+(\ell,\gamma_{13}^{-+})R_+(\ell,\gamma_{23}^{+-})}\,.
\end{equation}
\paragraph{Mirror bound states and fusion.}
Next, we look at fusion for mirror bound states, and consider
\begin{equation}
x_1^- = x_2^+
\qquad \Leftrightarrow\qquad
\gamma_1^- =\gamma_2^+ +i\pi\,.
\end{equation}
Now we expect the fused result to depend only on $\gamma_1^+$ and $\gamma_2^-$. Indeed we find
\begin{equation}
\left(\varsigma^{\bullet\bullet}_{13}\right)^{-2}\left(\varsigma^{\bullet\bullet}_{23}\right)^{-2}=
-\frac{\sinh \frac{\gamma_{23}^{-+}}{2}}{ \sinh \frac{\gamma_{13}^{+-}}{2}}\,
\prod_{\ell=1}^\infty { R_+(\ell,\gamma_{23}^{--})R_+(\ell,\gamma_{13}^{++}) R_-(\ell,\gamma_{23}^{-+})R_-(\ell,\gamma_{13}^{+-})}\,,
\end{equation}
and
\begin{equation}
\left(\tilde{\varsigma}^{\bullet\bullet}_{13}\right)^{-2}\left(\tilde{\varsigma}^{\bullet\bullet}_{23}\right)^{-2}=
\frac{\cosh \frac{\gamma_{13}^{+-}}{2}}{\cosh \frac{\gamma_{23}^{-+}}{2}}\,
\prod_{\ell=1}^\infty { R_-(\ell,\gamma_{23}^{--})R_-(\ell,\gamma_{13}^{++}) R_+(\ell,\gamma_{23}^{-+})R_+(\ell,\gamma_{13}^{+-})}\,.
\end{equation}
\paragraph{Physical unitarity in the string and mirror regions.}
We need to check that the dressing factors have modulus one for real values of the momenta. For string particles, recall that we have
\begin{equation}
\label{eq:conjstring}
(x^\pm)^* = x^\mp\,,\qquad
(\gamma^\pm)^* = \gamma^\mp\,,
\end{equation}
for real momenta, while for mirror particles recalling~\eqref{eq:massivemirrorconj} we have
\begin{equation}
\label{eq:conjmirror}
(\tilde{x}^\pm)^*=\frac{1}{\tilde{x}^\mp}\,,\qquad
(\tilde{\gamma}^-)^*=\tilde{\gamma}^++i\pi\,,\quad (\tilde{\gamma}^+)^*=\tilde{\gamma}^-+i\pi\,.
\end{equation}
Let us start from the rational prefactors and the BES term. In the string region, they are both unitary. In the mirror region, instead, the rational prefactor in~\eqref{eq:massivenorm} is not unitary by itself; the offending term is the $e^{i (p_1-p_2)}$ factor in front of it. That term precisely compensates the non-unitarity of the BES factor in the mirror region, see eq.~\eqref{eq:boundstatemirrorBES}. As for the rest of the dressing factors, physical unitarity in the string region follows from the behaviour of $\varPhi(\gamma)$ and $\widehat{\varPhi}(\gamma)$ under complex conjugation. In the mirror region the conclusion is the same, given that the mirror rapidities undergo the same constant shift in~\eqref{eq:conjmirror} and $\varPhi(\gamma),\widehat{\varPhi}(\gamma)$ depend only on the difference of rapidities.
\paragraph{Zero-momentum limit.}
We expect the S~matrix to simplify when the momentum of either particle is zero. Indeed, given a state represented in the Bethe-Yang equations by a set of $N$ momenta, we should generally be able to construct \textit{a new state}, with one more momentum $p_{N+1}=0$, which still solves the Bethe-Yang equations. Such a state is then a symmetry descendant of the original one. We therefore expect that for any dressing factor $\sigma(p_1,p_2)$, $\sigma(p_1,0)=1$, possibly up to an integer power of $\exp(\tfrac{i}{2}p_1)$.
This is the case for the rational prefactor in~\eqref{eq:massivenorm} and for the BES factor. Let us now inspect the new ingredients in our construction. Recalling eq.~\eqref{eq:zeromomentummassive}, we set
\begin{equation}
\gamma_2^\pm=\mp\frac{i\pi}{2}\,.
\end{equation}
Using this we compute
\begin{equation}
\begin{aligned}
(\widetilde{\sigma}^{\bullet\bullet}_{12})^{-2}&=\frac{\cosh\tfrac{1}{2}(\gamma_1^+-\tfrac{i\pi}{2})}{\cosh\tfrac{1}{2}(\gamma_1^-+\tfrac{i\pi}{2})}
\prod_{\ell=1}^\infty [R_-(\ell,\gamma^{--}_{12})R_+(\ell,\gamma^{-+}_{12})][R_-(\ell,\gamma^{++}_{12})R_+(\ell,\gamma^{+-}_{12})]\\
&=\frac{\cosh\tfrac{1}{2}(\gamma_1^+-\tfrac{i\pi}{2})}{\cosh\tfrac{1}{2}(\gamma_1^-+\tfrac{i\pi}{2})} \frac{\cosh\tfrac{1}{2}\gamma_{12}^{-+}}{\cosh\tfrac{1}{2}\gamma_{12}^{+-}}=1\,,
\end{aligned}
\end{equation}
where in the second equality we used~\eqref{eq:crossingregularisation} for the terms in the square brackets.
In a similar way we find
\begin{equation}
(\sigma^{\bullet\bullet}_{12})^{-2}
=
-\frac{\sinh\tfrac{1}{2}(\gamma_1^- +\tfrac{i\pi}{2})}{\sinh\tfrac{1}{2}(\gamma_1^+ -\tfrac{i\pi}{2})}
\frac{\cosh\tfrac{1}{2}(\gamma_1^+ +\tfrac{i\pi}{2})}{\cosh\tfrac{1}{2}(\gamma_1^- -\tfrac{i\pi}{2})} =1\,,
\end{equation}
as it should be.
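The last simplification follows from the elementary identities
\begin{equation}
\sinh\Big(z+\frac{i\pi}{4}\Big)=i\,\cosh\Big(z-\frac{i\pi}{4}\Big)\,,
\qquad
\cosh\Big(z+\frac{i\pi}{4}\Big)=i\,\sinh\Big(z-\frac{i\pi}{4}\Big)\,,
\end{equation}
applied at $z=\tfrac{1}{2}\gamma_1^-$ and $z=\tfrac{1}{2}\gamma_1^+$: each of the two resulting ratios contributes a factor of~$i$, which compensates the overall minus sign.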
\subsubsection{Perturbative expansion}
It is useful to work out the large-$h$ expansion of the dressing factors, which can in principle be used to compare with string-NLSM computations. For the BES phase~$\sigma(x_1^\pm,x_2^\pm)$, this expansion is well-known, being given at leading order by the AFS phase~\cite{Arutyunov:2004vx} and at next-to-leading-order by the HL phase~\cite{Hernandez:2006tk} of eqs.~\eqref{eq:AFS} and~\eqref{eq:HL}, respectively.
The other pieces of our proposal can be expanded quite straightforwardly by starting from their representation in terms of polylogarithms, see eqs.~\eqref{eq:varphiexplicit} and~\eqref{eq:varphihatexplicit}. In this way we can find the near-BMN~\cite{Berenstein:2002jq} expansion of the massive dressing factors, see appendix~\ref{app:BESexpansion}. This should be compared with the perturbative expansion of the existing proposal~\cite{Borsato:2013hoa}, as well as with the perturbative computations of refs.~\cite{Rughoonauth:2012qd,Sundin:2013ypa,Engelund:2013fja,Roiban:2014cia,Bianchi:2014rfa,Sundin:2016gqe}.
The perturbative computations for the $AdS_3\times S^3\times T^4$ background have been performed by different techniques and in different kinematic regimes. The tree-level S~matrix was first worked out in~\cite{Rughoonauth:2012qd}. In ref.~\cite{Sundin:2013ypa} the dressing factors were computed at one loop in the near-flat-space limit~\cite{Maldacena:2006rv}. In~\cite{Engelund:2013fja}, the cut-constructible part of the one- and two-loop dressing factors was worked out by unitarity. Later, the complete one-loop dressing factors were considered in~\cite{Roiban:2014cia} (see also~\cite{Sundin:2016gqe}) by evaluating the Feynman graphs by means of integral identities and dimensional regularisation. The same calculation was also done in~\cite{Bianchi:2014rfa} by exploiting unitarity as well as symmetry and the regularisation of certain divergent integrals. All these results are compatible with the proposal of~\cite{Borsato:2013hoa}.
Comparing our proposal with these perturbative results, we find the following:
\begin{enumerate}
\item At tree level, our proposal agrees with~\cite{Rughoonauth:2012qd} and subsequent computations.
\item At one-loop in the near-flat-space kinematics, our proposal agrees with the perturbative results~\cite{Sundin:2013ypa}. Note that the near-flat-space kinematics is more restricted than the near-BMN, so that the near-flat-space expansion contains less information than the near-BMN one.
\item At one- and two-loops, our proposal agrees with the cut-constructible part of the dressing factors~\cite{Engelund:2013fja}.
\item At one-loop in the near-BMN limit, our proposal \textit{does not agree} with~\cite{Roiban:2014cia,Bianchi:2014rfa}. In particular, while the sum of the phases $\varsigma^{\boldsymbol{+}}_{12}$ agrees, we find a mismatch for the difference~$\varsigma^{\boldsymbol{-}}_{12}$, namely
\begin{equation}
\label{eq:massivediscrepacy}
\log (\varsigma^{\boldsymbol{-}}_{12})^{-2}\Big|_{\text{ours}}-
\log (\varsigma^{\boldsymbol{-}}_{12})^{-2}\Big|_{\text{theirs}}=
\frac{i}{2\pi h^2}\left(\omega_1p_2-\omega_2p_1\right)p_1p_2+O(h^{-3})\,.
\end{equation}
Compatibly with the above observation, this term is not cut-constructible and it is zero in the near-flat-space limit. It is also interesting to note that this term could be interpreted as arising from a local counterterm.
\end{enumerate}
This could look troublesome, especially in view of the fact that the original proposal of~\cite{Borsato:2013hoa} matches all known perturbative computations. However, a closer inspection of the monodromy factor of~\cite{Borsato:2013hoa} shows that it violates parity invariance of the model, see appendix~\ref{app:monodromy}. Hence, it cannot be the correct solution.
Having made these observations, we postpone their discussion to section~\ref{sec:proposal:relations} and to the conclusions.
\subsection{Mixed-mass sector}
\label{sec:proposal:mixed}
In the mixed-mass sector we define
\begin{equation}
\begin{aligned}
&\big(\varsigma^{\bullet\circ}(\gamma_1^\pm,\gamma_2)\big)^{-2} =
+i\, \frac{\tanh\tfrac{\gamma_{12}^{-\circ}}{2}}{\tanh\tfrac{\gamma_{12}^{+\circ}}{2}}\,
\varPhi(\gamma_{12}^{+\circ})\,\varPhi(\gamma_{12}^{-\circ})\,,\\ &\big(\varsigma^{\circ\bullet}(\gamma_1,\gamma_2^\pm)\big)^{-2} =
-i\, \frac{\tanh\tfrac{\gamma_{12}^{\circ+}}{2}}{\tanh\tfrac{\gamma_{12}^{\circ-}}{2}}\,
\varPhi(\gamma_{12}^{\circ+})\,\varPhi(\gamma_{12}^{\circ-})\,.
\end{aligned}
\end{equation}
In most of what follows, it will be sufficient to focus our attention on one of the two phases (say, $\varsigma^{\bullet\circ}_{12}$) since they are related by braiding unitarity~\eqref{eq:braidunit}.
To complete our definition we need the BES dressing factor in the mixed-mass sector, $\sigma(x_1^\pm,x_2)$, which we discussed above, see eq.~\eqref{eq:massless-massive-BES}. Using this, we finally have
\begin{equation}
\big(\sigma^{\bullet\circ}(x_1^\pm,x_2)\big)^{-2} =
+i\, \frac{\tanh\tfrac{\gamma_{12}^{-\circ}}{2}}{\tanh\tfrac{\gamma_{12}^{+\circ}}{2}}\,
\varPhi(\gamma_{12}^{+\circ})\,\varPhi(\gamma_{12}^{-\circ})\big(\sigma(x_1^\pm,x_2)\big)^{-2}\,,
\end{equation}
and we recall that the whole S-matrix elements are given in eq.~\eqref{eq:mixednormalisation}.
\subsubsection{Properties}
Let us analyze some properties of the dressing factor that we just introduced. Unlike what we did for the massive sector, we will not restrict our discussion to the $\varsigma^{\bullet\circ}(p_1,p_2)$ piece: here the properties of the (mixed-mass) BES factor are less obvious.
\paragraph{Poles and (absence of) branch points.}
The dressing factors, taken together with their rational prefactors~\eqref{eq:mixednormalisation}, show a number of apparent poles at
\begin{equation}
\label{eq:mixedpoles}
x_1^+=x_2\,,\quad x_1^+=\frac{1}{x_2}\,,\quad
x_1^-=x_2\,,\quad
x_1^-=\frac{1}{x_2}\,.
\end{equation}
Some of these poles, in particular those appearing at $x_1^\pm=(x_2)^{\mp1}$, appear also in the matrix part of the S~matrix and as such cannot be removed by CDD factors; such poles were also present in the proposal of~\cite{Borsato:2016xns}. As we discussed, none of the four conditions~\eqref{eq:mixedpoles} may be used to describe a bound state with real momentum and real (and positive) energy, neither in the string region nor in the mirror one. Therefore, we shall not consider them.
Unlike the proposal of~\cite{Borsato:2016xns}, our solution is manifestly free from square-root branch points at the positions~\eqref{eq:mixedpoles}. This simplifies considerably the discussion of the analytic properties of the dressing factor. We will see in section~\ref{sec:proposal:relations} that, as a matter of fact, the proposal of~\cite{Borsato:2016xns} was actually free from branch-points at~\eqref{eq:mixedpoles} due to a quite non-trivial cancellation between the dressing factor and its normalisation. To the best of our knowledge, this fact was not previously appreciated.%
\footnote{Some analytic properties of the proposal~\cite{Borsato:2016xns} have been discussed in ref.~\cite{OhlssonSax:2019nlj}, but unfortunately the branch cuts of the mixed-mass factors have not been discussed there.}
Nonetheless, the proposal of~\cite{Borsato:2016xns} manifestly features the AFS singularities, see~\eqref{eq:AFS}, which make its analytic continuation rather subtle, if not ill-defined.
\paragraph{String bound states and fusion.}
Let us stress that, even if we are considering the mixed-mass scattering, when discussing fusion we refer to \textit{the fusion of two massive particles} parametrised by $x_1^\pm$ and $x_2^\pm$ (say, both of charge $M=1$) to create a bound state (say, of mass $M=2$); this bound state may then scatter with a massless particle parametrised by~$x_3$. This setup should not be confused with an attempt to construct bound states of one massive and one massless particle, which as recalled above does not seem to yield a physical (massless) excitation.
Like before, the condition for bound states in the string region is
\begin{equation}
\label{eq:fusionstring}
x_1^+ ={ x_2^-} \qquad \Leftrightarrow\qquad \gamma_1^+=\gamma_2^- -i\pi\,.
\end{equation}
We now need to analyze the various contributions to fusion. We start by considering $\varsigma^{\bullet\circ}(x_1^\pm,x_3)$ and $\varsigma^{\bullet\circ}(x_2^\pm,x_3)$; using~\eqref{eq:varPhicrossing} we find
\begin{equation}
\begin{aligned}
(\varsigma^{\bullet\circ}_{13})^{-2}(\varsigma^{\bullet\circ}_{23})^{-2} &=-
\frac{\tanh \frac{\gamma_{13}^{-\circ}}{2}}{\tanh \frac{\gamma_{13}^{+\circ}}{2}}
\frac{\tanh \frac{\gamma_{23}^{-\circ}}{2}}{\tanh \frac{\gamma_{23}^{+\circ}}{2}}\,
\varPhi(\gamma_{13}^{-\circ})\,\varPhi(\gamma_{13}^{+\circ})\,\varPhi(\gamma_{23}^{-\circ})\,\varPhi(\gamma_{23}^{+\circ})
\\
&=-
\frac{\tanh \frac{\gamma_{13}^{-\circ}}{2}}{\tanh \frac{\gamma_{13}^{+\circ}}{2}}
\frac{\coth \frac{\gamma_{13}^{+\circ}}{2}}{\tanh \frac{\gamma_{23}^{+\circ}}{2}}\,
\,i\tanh \frac{\gamma_{13}^{+\circ}}{2}\,\varPhi(\gamma_{13}^{-\circ})\, \varPhi(\gamma_{23}^{+\circ})
\\
&=-i\coth\frac{\gamma_{13}^{+\circ}}{2}\,\frac{\tanh \frac{\gamma_{13}^{-\circ}}{2}}{\tanh \frac{\gamma_{23}^{+\circ}}{2}}
\,\varPhi(\gamma_{13}^{-\circ})\, \varPhi(\gamma_{23}^{+\circ})\,.
\end{aligned}
\end{equation}
We see that we are left with a rather unpleasant dependence on $\gamma_1^+$ through $\coth \frac{\gamma_{13}^{+\circ}}{2}$.
We also have to consider the BES factor and the rational prefactor. The BES factor $\sigma(x_1^\pm, x_2)$ has the same fusion properties as its massive progenitor $\sigma(x_1^\pm, x_2^\pm)$, \textit{i.e.}\ it can be fused over the massive particles without problems, following ref.~\cite{Arutyunov:2009kf}. For the prefactor, we should consider carefully which process to pick for the normalisation. The simplest picture, since the condition~\eqref{eq:fusionstring} identifies bound states in the symmetric representation~\cite{Borsato:2013hoa} (whose highest-weight state is $Y(p_1)Y(p_2)$), arises when we are dealing with left particles ($M_1=M_2=+1$). As a result, we should consider the normalisation appearing in the first line of~\eqref{eq:mixednormalisation}, \textit{i.e.}\ the prefactor
\begin{equation}
\sqrt{\frac{x_1^+}{x_1^-}}e^{-i p_3} \frac{x^-_1-x_3}{1-x^+_1x_3}
\sqrt{\frac{x_2^+}{x_2^-}}e^{-i p_3} \frac{x^-_2-x_3}{1-x^+_2x_3}
=
-\sqrt{\frac{x_2^+}{x_1^-}}e^{-2i p_3}\frac{x^-_1-x_3}{1-x^+_2x_3}\,\tanh\frac{\gamma^{+\circ}_{13}}{2}\,.
\end{equation}
We see that the last term cancels the $\coth \frac{\gamma_{13}^{+\circ}}{2}$ contribution as we wanted, leaving
\begin{equation}
\label{eq:leftmassivefusion}
+i\,
\sqrt{\frac{x_2^+}{x_1^-}}e^{-2i p_3}\frac{x^-_1-x_3}{1-x^+_2x_3}\,
\frac{\tanh \frac{\gamma_{13}^{-\circ}}{2}}{\tanh \frac{\gamma_{23}^{+\circ}}{2}}
\,\varPhi(\gamma_{13}^{-\circ})\, \varPhi(\gamma_{23}^{+\circ})\,
(\sigma_{13})^{-2}(\sigma_{23})^{-2}.
\end{equation}
We could also have fused two \textit{right massive particles} ($M_1=M_2=-1$), for which the highest-weight state is $\bar{Z}(p_{1,2})$. Much of the discussion goes through in the same way, except that at the very end we encounter a different rational prefactor. If we look at $\bar{Z}(p_{1,2})$, the normalisation is fixed by the second line of~\eqref{eq:mixednormalisation}. The overall difference with the case we just considered is a rational term (whose form follows from representation theory and left-right symmetry~\cite{Borsato:2014hja}). The new term, which multiplies the fused S~matrix~\eqref{eq:leftmassivefusion}, is
\begin{equation}
\frac{x_1^-}{x_1^+}\frac{(1-x_1^+x_3)^2}{(x_1^- - x_3)^2}
\frac{x_2^-}{x_2^+}\frac{(1-x_2^+x_3)^2}{(x_2^- - x_3)^2}
=\frac{x_1^-}{x_2^+}\frac{(1-x_2^+x_3)^2}{(x_1^- - x_3)^2}
\frac{(1-x_1^+x_3)^2}{(x_1^+ - x_3)^2}\,.
\end{equation}
We see that the new prefactor does not fuse nicely (due to the last term in the last equation). This is not surprising because we are looking at a symmetric-representation bound state in the antisymmetric sector. To fuse this properly we would need (in the language of the Bethe-Yang equations) to also consider auxiliary Bethe roots which would remove this additional contribution.
\paragraph{Mirror bound states and fusion.}
Let us now consider massive bound states of the mirror theory, which will satisfy
\begin{equation}
x_1^-=x_2^+\,,\qquad \gamma_1^- = \gamma_2^+ +i\pi\,,
\end{equation}
and transform in the antisymmetric representation. We start by looking at the fusion properties of $\varsigma^{\bullet\circ}(\gamma_1^\pm,\gamma_2)^{-2}$. We find
\begin{equation}
\begin{aligned}
(\varsigma^{\bullet\circ}_{13})^{-2}(\varsigma^{\bullet\circ}_{23})^{-2} &=-
\frac{\tanh \frac{\gamma_{13}^{-\circ}}{2}}{\tanh \frac{\gamma_{13}^{+\circ}}{2}}
\frac{\tanh \frac{\gamma_{23}^{-\circ}}{2}}{\tanh \frac{\gamma_{23}^{+\circ}}{2}}\,
\varPhi(\gamma_{13}^{-\circ})\,\varPhi(\gamma_{13}^{+\circ})\,\varPhi(\gamma_{23}^{-\circ})\,\varPhi(\gamma_{23}^{+\circ})
\\
&=-
\frac{\coth \frac{\gamma_{23}^{+\circ}}{2}}{\tanh \frac{\gamma_{13}^{+\circ}}{2}}
\frac{\tanh \frac{\gamma_{23}^{-\circ}}{2}}{\tanh \frac{\gamma_{23}^{+\circ}}{2}}\,
\,i\tanh \frac{\gamma_{23}^{+\circ}}{2}\,\varPhi(\gamma_{13}^{+\circ})\, \varPhi(\gamma_{23}^{-\circ})
\\
&=-i\coth\frac{\gamma_{23}^{+\circ}}{2}\,\frac{\tanh \frac{\gamma_{13}^{+\circ}}{2}}{\tanh \frac{\gamma_{23}^{-\circ}}{2}}
\,\varPhi(\gamma_{13}^{+\circ})\, \varPhi(\gamma_{23}^{-\circ})\,.
\end{aligned}
\end{equation}
Let us now look at the rational prefactor for the scattering of $\bar{Z}$ particles, which is what should give a simple result for the antisymmetric bound state. We get an additional factor of
\begin{equation}
\sqrt{\frac{x_1^-}{x_1^+}\frac{x_2^-}{x_2^+}}e^{-2i p_3}\frac{1- x_1^+ x_3}{x_1^--x_3}\frac{1- x_2^+ x_3}{x_2^--x_3}=\sqrt{\frac{x_1^-}{x_1^+}\frac{x_2^-}{x_2^+}}\frac{1- x_1^+ x_3}{x_2^--x_3}e^{-2i p_3} \coth\frac{\gamma_{23}^{+\circ}}{2}\,,
\end{equation}
which does not cancel the unwanted term --- rather, it produces its square. The term $\coth^2\tfrac{\gamma^{+\circ}_{23}}{2}$ is instead cancelled by terms coming from the fusion of the BES factor which, much like in the ordinary (massive) case, does not fuse well by itself. It can actually be convenient to define an ``improved'' dressing factor for the mirror region (which fuses well there, but not in the string region). In any case, taking all this into account, we find that the $\bar{Z}\bar{Z}$ scattering fuses well in the mirror region. The same is not true for the scattering of $YY$, which fuses only up to terms involving the auxiliary Bethe-Yang roots.
\paragraph{Physical unitarity in the string and mirror regions.}
Let us consider the dressing factor~$\varsigma^{\bullet\circ} (\gamma_1^\pm,\gamma_2)$ for particles with real momenta in the string region, where we have $(x^\pm)^*=x^\mp$ and $x^*=1/x$, as well as $(\gamma^\pm)^*=\gamma^\mp$ and $\gamma^*=\gamma$. We find
\begin{equation}
(\varsigma^{\bullet\circ}_{12})^{-2} \left((\varsigma^{\bullet\circ}_{12})^{-2}\right)^{*}= {\tanh {\gamma_{12}^{-\circ}\ov2}\over \tanh {\gamma_{12}^{+\circ}\ov2}}
\varPhi(\gamma_{12}^{-\circ}) \varPhi(\gamma_{12}^{+\circ})
{\tanh {\gamma_{12}^{+\circ}\ov2}\over \tanh{\gamma_{12}^{-\circ}\ov2}}
{1\over\varPhi(\gamma_{12}^{+\circ}) \varPhi(\gamma_{12}^{-\circ})}
=1\,.
\end{equation}
It can be checked that the prefactors in~\eqref{eq:mixednormalisation}, as well as the BES factor, are also unitary in the string region.
Coming now to the mirror region, we have that the dressing factor now takes the form
\begin{equation}
\widetilde{(\varsigma^{\bullet\circ}_{12})}^{-2}=i
\frac{\tanh\frac{\tilde{\gamma}_{12}^{-\circ}-i\pi/2}{2}}{\tanh\frac{\tilde{\gamma}_{12}^{+\circ}-i\pi/2}{2}}\,\varPhi(\tilde{\gamma}^{+\circ}_{12}-\tfrac{i}{2}\pi)\,\varPhi(\tilde{\gamma}^{-\circ}_{12}-\tfrac{i}{2}\pi)\,,
\end{equation}
where the wide tilde denotes that the whole expression has been analytically continued to the mirror-mirror region.
Recall that for real mirror momenta $(\tilde{x}^\pm)^*=1/\tilde{x}^\mp$ and $\tilde{x}^*=\tilde{x}$. In terms of the rapidities, this gives $(\tilde{\gamma}^\pm)^*=\tilde{\gamma}^\mp+i\pi$ and $\tilde{\gamma}^*=\tilde{\gamma}$. Thus,
\begin{equation}
\begin{aligned}
\widetilde{(\varsigma^{\bullet\circ}_{12})}^{-2}\Big(\widetilde{(\varsigma^{\bullet\circ}_{12})}^{-2}\Big)^*&=
\frac{\tanh\frac{\tilde{\gamma}_{12}^{-\circ}-i\pi/2}{2}}{\tanh\frac{\tilde{\gamma}_{12}^{+\circ}-i\pi/2}{2}}\frac{\tanh\frac{\tilde{\gamma}_{12}^{+\circ}+3i\pi/2}{2}}{\tanh\frac{\tilde{\gamma}_{12}^{-\circ}+3i\pi/2}{2}}
\,\frac{\varPhi(\tilde{\gamma}^{+\circ}_{12}-\frac{i}{2}\pi)\,\varPhi(\tilde{\gamma}^{-\circ}_{12}-\frac{i}{2}\pi)}{\varPhi(\tilde{\gamma}^{-\circ}_{12}+\frac{3i}{2}\pi)\,\varPhi(\tilde{\gamma}^{+\circ}_{12}+\frac{3i}{2}\pi)}\,,\\
&=\tanh^2\left(\tfrac{\tilde{\gamma}^{+\circ}_{12}-\frac{i}{2}\pi}{2}\right)\,\tanh^2\left(\tfrac{\tilde{\gamma}^{-\circ}_{12}-\frac{i}{2}\pi}{2}\right)\\
&=\left(\frac{\tilde{x}^+_1 - \tilde{x}_2}{1-\tilde{x}^+_1\tilde{x}_2}\right)^2\left(\frac{1-\tilde{x}^-_1\tilde{x}_2}{\tilde{x}^-_1 - \tilde{x}_2}\right)^2.
\end{aligned}
\end{equation}
In the second equality we have used the periodicity of the hyperbolic tangent and the $2\pi i$-monodromy of the Sine-Gordon factor, which follows from the crossing relation~\eqref{eq:varPhicrossing} in the form $\varPhi(z)\,\varPhi(z+i\pi)=i\tanh\tfrac{z}{2}$, together with $\tanh\tfrac{z+i\pi}{2}=\coth\tfrac{z}{2}$:
\begin{equation}
\frac{\varPhi(z)}{\varPhi(z+2\pi i)}=
\frac{\varPhi(z)\,\varPhi(z+i\pi)}{\varPhi(z+i\pi)\,\varPhi(z+2\pi i)}=\tanh^2\Big(\frac{z}{2}\Big)\,.
\end{equation}
The rational prefactor is also not unitary by itself,
\begin{equation}
\sqrt{\frac{\tilde{x}^+_1}{\tilde{x}^-_1}}\frac{1}{\tilde{x}_2}\frac{\tilde{x}^-_1-\tilde{x}_2}{1-\tilde{x}^+_1\tilde{x}_2}
\Bigg(\sqrt{\frac{\tilde{x}^+_1}{\tilde{x}^-_1}}\frac{1}{\tilde{x}_2}\frac{\tilde{x}^-_1-\tilde{x}_2}{1-\tilde{x}^+_1\tilde{x}_2}\Bigg)^*=\frac{1}{(\tilde{x}_2)^4}\,.
\end{equation}
It remains to check the contribution of the BES dressing factor in the mirror-mirror region. As we discuss in appendix~\ref{app:BES:mixed}, we have
\begin{equation}
\big(\sigma_{\text{\tiny BES}}^{\bullet\circ}(\tilde{x}_1^\pm,\tilde{x}_2)\big)^{-2}
\Big(\big(\sigma_{\text{\tiny BES}}^{\bullet\circ}(\tilde{x}_1^\pm,\tilde{x}_2)\big)^{-2}\Big)^*=(\tilde{x}_2)^4
\Big(\frac{1-\tilde{x}_1^+\tilde{x}_2}{\tilde{x}^+_1-\tilde{x}_2}\Big)^2\Big(\frac{\tilde{x}_1^--\tilde{x}_2}{1-\tilde{x}^-_1\tilde{x}_2}\Big)^2\,.
\end{equation}
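Putting together the Sine-Gordon, prefactor and BES contributions, the full S-matrix element is then unitary,
\begin{equation}
\left(\frac{\tilde{x}^+_1 - \tilde{x}_2}{1-\tilde{x}^+_1\tilde{x}_2}\right)^2\left(\frac{1-\tilde{x}^-_1\tilde{x}_2}{\tilde{x}^-_1 - \tilde{x}_2}\right)^2\,
\frac{1}{(\tilde{x}_2)^4}\,\,
(\tilde{x}_2)^4\Big(\frac{1-\tilde{x}_1^+\tilde{x}_2}{\tilde{x}^+_1-\tilde{x}_2}\Big)^2\Big(\frac{\tilde{x}_1^--\tilde{x}_2}{1-\tilde{x}^-_1\tilde{x}_2}\Big)^2=1\,.
\end{equation}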
We see a very non-trivial cancellation between the contributions of the Sine-Gordon factor and of the mixed-mass BES factor.
\paragraph{Zero-momentum limit.}
We may consider two distinct zero-momentum limits. One, which is conceptually simpler, is when we add a massive mode with zero momentum in the Bethe-Yang equations. This should represent a supersymmetry descendant. In that case the Zhukovsky variables~$x^\pm(p_1)$ both go either to plus infinity or minus infinity, and we have
\begin{equation}
\gamma^{\pm}_1=\mp \frac{i}{2}\pi\,.
\end{equation}
Hence we have
\begin{equation}
(\varsigma^{\bullet\circ}_{12})^{-2}=i\frac{\tanh\tfrac{+\frac{i}{2}\pi-\gamma_2}{2}}{\tanh\tfrac{-\frac{i}{2}\pi-\gamma_2}{2}}\,\varPhi(-\tfrac{i}{2}\pi-\gamma_2)\,\varPhi(+\tfrac{i}{2}\pi-\gamma_2)
=
\tanh\big(\frac{\gamma_2}{2}-\frac{i\pi}{4}\big)= - x_2\,.
\end{equation}
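The second equality follows from the crossing relation~\eqref{eq:varPhicrossing} in the form
\begin{equation}
\varPhi\big(-\tfrac{i}{2}\pi-\gamma_2\big)\,\varPhi\big(+\tfrac{i}{2}\pi-\gamma_2\big)=i\,\tanh\Big(\!-\frac{i\pi}{4}-\frac{\gamma_2}{2}\Big)\,,
\end{equation}
while the last one is the relation $x_2=-\tanh\big(\tfrac{\gamma_2}{2}-\tfrac{i\pi}{4}\big)$ between the massless rapidity and the Zhukovsky variable.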
Observing that the BES phase does not contribute in this limit, this gives the scattering elements
\begin{equation}
\gen{S}|Y_0\chi_{p_2}\rangle= -e^{-\tfrac{3}{2}ip_2}x_2|\chi_{p_2}Y_0\rangle\,,\qquad
\gen{S}|\bar{Z}_0\chi_{p_2}\rangle= -e^{-\tfrac{1}{2}ip_2}x_2|\chi_{p_2}\bar{Z}_0\rangle\,,
\end{equation}
which can be further simplified depending on the chirality of $p_2$. In the physical region where $p_2$ is anti-chiral, $x_2=-e^{\frac{i}{2}p_2}$, so that $-e^{-\frac{3}{2}ip_2}x_2=e^{-ip_2}$ and $-e^{-\frac{1}{2}ip_2}x_2=1$: the scattering reduces to $e^{-i J_1p_2}$, where $J_1$ is the R-charge of the first particle.
We may also consider the case where the massless particle has zero momentum, which means $\gamma=\pm\infty$. Recall that $\varphi(\pm\infty)=\pm i \pi/4$. As a consequence, when considering the dressing factor in the presence of a zero-momentum massless particle in the physical region we have
\begin{equation}
\big(\varsigma^{\bullet\circ}(-\infty)\big)^{-2} = i\, \varPhi(-\infty)^{2}=1\,.
\end{equation}
While the BES phase does not contribute in this limit, it is interesting to observe that the rational prefactor in the highest-weight scattering elements is not trivial; rather, it takes the form \textit{e.g.}
\begin{equation}
e^{i p_1/2}\frac{1+x^-_1}{1+x^+_1}\,,
\end{equation}
which can be compensated by adding an auxiliary root in the Bethe-Yang equations (see section~\ref{sec:BYE}).
\subsubsection{Perturbative expansion}
Given that there is no standard definition of the normalisation of the dressing factors in the mixed-mass sector, we shall consider a whole S-matrix element in the $a=0$ uniform light-cone gauge~\cite{Arutyunov:2005hd}, see appendix~\ref{app:lcgauge}. Focusing on the scattering of $Y(p_1)$ and $\chi(p_2)$ in the kinematics region where $p_2<0$ we have, in the near-BMN expansion,
\begin{equation}
\label{eq:mixedmassexpansion}
\begin{aligned}
\log\langle Y_1\chi^{\dot{\alpha}}_2|\mathbf{S}|Y_1\chi^{\dot{\alpha}}_2\rangle
&=
-\frac{i}{2h}\big(p_1(\omega_1-p_1)+2p_2\omega_1\big)\\
&\quad-
\frac{ip_1^2}{2\pi h^2}
(\omega_1-p_1)p_2\,\log\Big(\tfrac{-(\omega_1-p_1)p_2}{4h}\Big)
+O(h^{-3})
\,,
\end{aligned}
\end{equation}
which is worked out in appendix~\ref{app:BMNexpansion}.
A first observation is that this justifies the normalisation of $\varsigma^{\bullet\circ}(x_1^\pm,x_2)$ by an explicit factor of~$i$. In fact, this diagonal S-matrix element is correctly normalised so that $\langle Y_1\chi^{\dot{\alpha}}_2|\mathbf{S}|Y_1\chi^{\dot{\alpha}}_2\rangle=1+O(h^{-1})$, without any spurious signs.
This result should be compared with the existing perturbative computations. Unfortunately, for processes involving massive and massless external legs, not many results are known. Results have been computed up to one loop in ref.~\cite{Sundin:2016gqe}, and we report them here, translated to the $a=0$ uniform light-cone gauge:
\begin{equation}
\begin{aligned}
\log\langle Y_1\chi^{\dot{\alpha}}_2|\mathbf{S}_{\text{SW}}|Y_1\chi^{\dot{\alpha}}_2\rangle
&=
-\frac{i}{2h}\big(p_1(\omega_1-p_1)+2p_2\omega_1\big)\\
&\quad-
\frac{ip_1^2}{2\pi h^2}
(\omega_1-p_1)p_2\,\left[1+\log\Big(\tfrac{\omega_1-p_1}{-2p_2}\Big)
\right]+O(h^{-3})
\,.
\end{aligned}
\end{equation}
We observe the following matches and mismatches:
\begin{enumerate}
\item The tree-level result matches.
\item The rational part of the one-loop result does not match. The difference is given by
\begin{equation}
\frac{i}{2\pi h^2}(\omega_1-p_1)p_1^2p_2\,,
\end{equation}
which is actually quite reminiscent of~\eqref{eq:massivediscrepacy}. Like that term, this could be interpreted as arising from a local counterterm; the full one-loop difference is collected below, after this list.
\item The logarithmic piece is actually quite different. In fact, a discrepancy in the logarithmic part was also noticed when comparing the proposal of~\cite{Borsato:2016xns} with ref.~\cite{Sundin:2016gqe}. The expansion of~\cite{Borsato:2016xns}, as well as ours, naturally produces a one-loop result where the argument of the logarithm is of the form~\eqref{eq:mixedmassexpansion} and depends on~$h$. As far as we can see, the reason for the disagreement lies in the order of limits. In~\cite{Sundin:2016gqe} the UV regulator was removed first, and only then was the IR regulator, chosen to be a small mass for the particle, taken to zero. The correct order is in fact the opposite; moreover, one does not have to remove UV-divergent terms, because a natural UV regularisation for the model is a lattice one, with the propagator replaced by $1/(m^2 + 4h^2\sin^2\tfrac{p}{2h})$, so that the UV regulator --- the inverse lattice step --- is identified with the coupling constant~$h$. This follows from the form of the exact dispersion relation and, at one-loop order, naturally leads to the appearance of a $\log h$ term.
\end{enumerate}
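For completeness, subtracting the two expansions gives
\begin{equation}
\log\langle Y_1\chi^{\dot{\alpha}}_2|\mathbf{S}|Y_1\chi^{\dot{\alpha}}_2\rangle-
\log\langle Y_1\chi^{\dot{\alpha}}_2|\mathbf{S}_{\text{SW}}|Y_1\chi^{\dot{\alpha}}_2\rangle=
-\frac{ip_1^2}{2\pi h^2}(\omega_1-p_1)p_2\left[\log\Big(\frac{p_2^2}{2h}\Big)-1\right]+O(h^{-3})\,,
\end{equation}
which collects in a single expression the rational mismatch of point~2 and the $h$-dependent logarithm of point~3.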
\subsection{Massless sector}
\label{sec:proposal:massless}
We now come to the massless dressing factors, starting from the one for the scattering of particles of the same chirality, see table~\ref{table:dressing}. Here we set
\begin{equation}
\big(\varsigma^{\circ\circ}(\gamma_1,\gamma_2)\big)^{-2}
=a(\gamma_{12})\, \big(\varPhi(\gamma_{12})\big)^2\,.
\end{equation}
As emphasised in section~\ref{sec:smatrix:symmetries} we need $a(\gamma)$, which satisfies the crossing equation $a(\gamma)a(\bar{\gamma})=-1$, in order to solve eq.~\eqref{eq:masslesscrossingrapid}. It is not unusual to encounter this function in the context of relativistic massless integrable QFTs, see \textit{e.g.}~\cite{Zamolodchikov:1991vx}. We stress once more that we could not have obtained the minus sign by multiplying the square of the Sine-Gordon dressing factor $\varPhi^2(\gamma)$ by $\pm i$, as that would have spoiled braiding unitarity.%
\footnote{%
The pre-factor $a(\gamma)$ was not considered in ref.~\cite{Fontanella:2019ury} where the connection between the Sine-Gordon dressing factor and $AdS_3\times S^3\times T^4$ was first noticed.
}
We now consider the case of opposite-chirality scattering. As we have commented in section~\ref{sec:smatrix:normalisation}, here \textit{we could} obtain the correct sign in the crossing equations~\eqref{eq:masslesscrossingrapid} by multiplying the dressing factor by $\pm i$. This is because braiding unitarity is not a constraint on a single dressing factor in the case of opposite chirality, but rather a relation between two distinct functions that have some freedom in their overall normalisation. Regardless, \textit{we shall assume} that the solution for opposite chirality scattering is the same as above, namely
\begin{equation}
\label{eq:oppositechiralitysol}
\big(\tilde{\varsigma}^{\circ\circ}(\gamma_1,\gamma_2)\big)^{-2}
=a(\gamma_{12})\, \big(\varPhi(\gamma_{12})\big)^2\,.
\end{equation}
The main reason for doing so is that it seems more natural not to have the dressing factor ``jump'' abruptly when changing the chirality of one of the two particles. We could always remove $a(\gamma)$ by multiplying eq.~\eqref{eq:oppositechiralitysol} by the CDD factor $i/a(\gamma)$. Ultimately, perturbation theory will tell us which is the correct choice (see below for the perturbative expansion of the dressing factors).
In conclusion we propose that all massless-massless dressing factors are given in terms of a single expression,
\begin{equation}
\big(\widetilde{\sigma}^{\circ\circ}(\gamma_1,\gamma_2)\big)^{-2}
=\big(\sigma^{\circ\circ}(\gamma_1,\gamma_2)\big)^{-2}= a(\gamma_{12})\, \big(\varPhi(\gamma_{12})\big)^2\,\big(\sigma_{\text{\tiny BES}}(x_1,x_2)\big)^{-2}\,.
\end{equation}
It is straightforward to verify that this satisfies the parity constraints of section~\ref{sec:smatrix:symmetries}.
\subsubsection{Properties}
Let us now analyze the main features of our proposal. Because the massless kinematics is quite restrictive --- the only physical values of the momenta, both in the mirror and in the string theory, are the real ones --- and because massless particles do not form bound states, the discussion will be a bit simpler than in the cases above.
\paragraph{Poles and branch points.}
The discussion of poles here is quite simple because the physical ``region'' is just given by the real~$\gamma$ line (either in the string or in the mirror theory). It is however worth remarking that the massless limit of the BES dressing factors has branch points inside and outside the unit disk, see section~\ref{sec:buildingblocks:BES}. While these points are outside of the physical region, we have to be careful when analytically continuing the dressing factor from the real string line (the upper-half circle in the $x$-plane) to the real mirror line (the segment $-1<x<1$). We will take any path that does not cross the cuts (such as a path that runs close to the upper-half circle).
\paragraph{Physical unitarity in the string and mirror regions.}
The physical unitarity of $\varsigma^{\circ\circ}(\gamma_{12})$ in the string region follows straightforwardly from the properties of $a(\gamma)$ and $\varPhi(\gamma)$. Moreover, this also applies to the mirror-mirror region because, since $\varsigma^{\circ\circ}(\gamma_{12})$ depends on the difference of two massless rapidities, it does not change when both are taken to the mirror region. As for the massless-massless BES phase, by the results of appendix~\ref{app:BES:massless}, it is unitary by itself.
\paragraph{Zero-momentum limit.}
Let us now consider the case where either particle has zero momentum. We shall always order the particles so that a particle with momentum $0^+$, whose velocity is positive (and maximal), is the first particle; vice versa, a particle with momentum $0^-$, whose velocity is negative (and minimal), must be the second particle. This is the physical setup, and any other configuration can be related to this one by braiding unitarity. Hence, either $p_1=0^+$ and $\gamma_1=-\infty$, or $p_2=0^-$ and $\gamma_2=+\infty$. Either way, $\gamma_{12}=-\infty$. We can immediately verify that
\begin{equation}
\big(\varsigma^{\circ\circ}(-\infty)\big)^{-2}=1\,,
\end{equation}
where the contribution of $a(-\infty)=i$ is important.
It remains to compute the contribution of the BES phase. From~\eqref{eq:masslessBES} for instance, it is manifest that the phase vanishes if $x_2=\pm1$; though that expression is not manifestly antisymmetric, the phase is antisymmetric by construction, so that it must also vanish if $x_1=\pm1$. In conclusion,
\begin{equation}
\big(\sigma^{\circ\circ}(0^+,p)\big)^{-2}=
\big(\sigma^{\circ\circ}(p,0^-)\big)^{-2}=1\,.
\end{equation}
It is also easy to check the form of the dressing factor for coincident momenta, as the BES and Sine-Gordon pieces are (separately) trivial there. The only non-trivial contribution comes from $a(0)=-1$, so that
\begin{equation}
\big(\sigma^{\circ\circ}(p,p)\big)^{-2}=-1\,.
\end{equation}
This matches what is expected from general considerations in ref.~\cite{upcoming:massless}.
\subsubsection{Perturbative expansion}
The near-BMN expansion of the massless-massless dressing factor is reported in appendix~\ref{app:BMNexpansion}. Working in the perturbative regime where $p_1>0$ and $p_2<0$ we find
\begin{equation}
\label{eq:masslessmasslessexp}
\log\Big(\sigma^{\circ\circ}(p_1,p_2)\Big)^{-2}
=
-\frac{i}{h}p_1p_2-\frac{i}{h^2}\frac{p_1p_2}{8}
-i\frac{p_1p_2}{4\pi h^2} \left(\log\frac{-p_1p_2}{16h^2}-1\right)+O(h^{-3})\,.
\end{equation}
This result should be compared with the massless-massless perturbative computation of Sundin and Wulff~\cite{Sundin:2016gqe}, which gives
\begin{equation}
\log\langle \chi^{\dot{\alpha}}_1\chi^{\dot{\beta}}_2|\mathbf{S}_{\text{SW}}|\chi^{\dot{\alpha}}_1\chi^{\dot{\beta}}_2\rangle
=-\frac{i}{h}p_1p_2
+
i\frac{p_1p_2}{4\pi h^2}
\Big(\log(-4p_1p_2)-1\Big)+O(h^{-3})
\,.
\end{equation}
We observe the following matches and mismatches:
\begin{enumerate}
\item At tree-level, our result matches with the perturbative computation of~\cite{Sundin:2016gqe}. In particular, the relevant term arises from the AFS order of the BES phase.
\item At one-loop, the coefficient of the logarithm does not match; the sign is opposite. This is perhaps not entirely surprising in light of what we have encountered in the mixed-mass region.
\item The argument of the logarithm is sensitive to the UV cutoff, which in our case is provided by the coupling constant~$h$ and in the case of Sundin and Wulff has been removed. As such, the finite pieces of the one-loop result can be removed by a change in the UV cutoff.
\item It is interesting to notice that the one-loop result of Sundin and Wulff comes with a $1/\pi$ coefficient, while ours also involves a rational term. This discrepancy is reminiscent of the two-loop mismatch in the dispersion relation for massless modes, which in that case involves a relative factor of~$\pi^2$~\cite{Sundin:2015uva}. Nonetheless, it is interesting to note that the first $O(h^{-2})$ term in~\eqref{eq:masslessmasslessexp} is precisely due to $-i a(\gamma)$, which we may eliminate in the opposite-chirality scattering (at the price of having different dressing factors for same-chirality and opposite-chirality scattering).
\end{enumerate}
\subsection{Relations with earlier proposals}
\label{sec:proposal:relations}
We are now able to comment in more detail on the similarities and differences between our proposal and the solutions of the crossing equations found in~\cite{Borsato:2013hoa} for massive-massive scattering and in~\cite{Borsato:2016xns} for mixed-mass and massless-massless scattering. We will see that such a comparison results in some precise relations between pieces of the two proposals.
In the massive sector, both our solution and that of ref.~\cite{Borsato:2013hoa} featured the BES phase, supplemented by an additional function at HL order, \textit{i.e.}\ at order $O(h^0)$. Hence it is sufficient to compare those additional functions, which we shall do below. For mixed-mass and massless-massless scattering, instead, the difference between our proposal and that of~\cite{Borsato:2016xns} is much more fundamental: in the latter, the BES phase did not appear at all, but only its AFS and HL orders did. One concern is that, at finite~$h$, the analytic properties of individual orders of the BES phase are less transparent than those of the whole function. In particular, the dressing factors of~\cite{Borsato:2016xns} feature the AFS phase, whose branch points depend on the relative positions of the two particles' rapidities (at $x_1=1/x_2^\pm$, and so on), see eq.~\eqref{eq:AFS}. This makes it hard to perform the analytic continuation necessary, for instance, to define the dressing factors in the mirror theory. Physically, it is not hard to explain why (a limit of) the BES phase might appear in the mixed-mass and massless-massless scattering. In fact, massive particles can and do appear in the internal lines of mixed-mass and massless processes; the BES phase may be the result of all such processes at all-loop order.
Let us now consider in more detail the $O(h^0)$ terms appearing in the various dressing factors.
We start by recalling that, by an asymptotic expansion of $\Phi(x,y)$ in $h\gg1$, see~\eqref{eq:chiBES}, we obtain at $O(h^0)$ order the HL integral $\Phi_{\text{HL}}$, see~\eqref{eq:HL}.
We can then define $\chi_{\text{HL}}(x_1,x_2) = \Phi_{\text{HL}}(x_1,x_2)$ in the region where $|x_1|>1$ and $|x_2|>1$. By means of this we may define the massive-massive HL phase
\begin{equation}
\theta_{\text{HL}}(x^\pm_1,x^\pm_2)=
\chi_{\text{HL}}(x_1^+,x_2^+)- \chi_{\text{HL}}(x_1^+,x_2^-)- \chi_{\text{HL}}(x_1^-,x_2^+)+ \chi_{\text{HL}}(x_1^-,x_2^-)\,.
\end{equation}
The mixed-mass limit of this expression, which we will denote simply by $\theta_{\text{HL}}(x^\pm_1,x_2)$, is defined by taking $x^+_2\to x_2$ on the upper-half circle and $x_2^-\to 1/x_2$~\cite{Borsato:2016xns}. The massless-massless limit is obtained by doing the same for $x_1^\pm$, and it will be denoted by $\theta_{\text{HL}}(x_1,x_2)$.
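Explicitly, this yields
\begin{equation}
\theta_{\text{HL}}(x^\pm_1,x_2)=
\chi_{\text{HL}}(x_1^+,x_2)- \chi_{\text{HL}}(x_1^+,1/x_2)- \chi_{\text{HL}}(x_1^-,x_2)+ \chi_{\text{HL}}(x_1^-,1/x_2)\,.
\end{equation}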
\paragraph{Sum of the massive dressing factors.}
In this case we have
\begin{equation}
\label{eq:massiveHLmatch}
\Big(\varsigma^{\boldsymbol{+}}(x_1^\pm,x_2^\pm)\Big)^{-2}=
-{\tanh{\gamma_{12}^{-+}\ov2}\over \tanh{\gamma_{12}^{+-}\ov2}}\,{ \varPhi(\gamma_{12}^{--})\varPhi(\gamma_{12}^{++}) \varPhi(\gamma_{12}^{-+})\varPhi(\gamma_{12}^{+-})}=e^{2i\theta_{\text{HL}}(\gamma_1^\pm,\gamma_2^\pm)}\,,
\end{equation}
where the last equation has been verified numerically.
This means that, as far as the sum of the phases is concerned, our solution matches that of~\cite{Borsato:2013hoa}. Quite conveniently, the expression in terms of the $\gamma^\pm$'s is of difference form which, as we have seen in the sections above, makes it much easier to understand its analytic properties and to consider bound states and fusion.
\paragraph{Difference of the massive dressing factors.}
In this case, we should not be comparing with the HL phase but rather with the difference phase of~\cite{Borsato:2013hoa}. We have already seen in the near-BMN limit that our phase $\varsigma^{\boldsymbol{-}}(x_1,x_2)$ is genuinely different from the previous proposal. While $\varsigma^{\boldsymbol{-}}(x_1,x_2)$ is designed to take a simple (difference) form in terms of the $\gamma^\pm$ rapidities, the proposal of~\cite{Borsato:2013hoa} takes a simple form in the $x$-plane, or more precisely, on the $u$-plane.
However, by looking more closely at the proposal of~\cite{Borsato:2013hoa}, see appendix~\ref{app:monodromy}, we find that it is not invariant under the parity transformation, which indicates that, at least as defined, it cannot be correct.
\paragraph{Mixed-mass dressing factor.}
By taking the massless limit of~\eqref{eq:massiveHLmatch} we can easily obtain an identity that involves the mixed-mass dressing factor, namely
\begin{equation}
-{\tanh{\gamma_{12}^{-\circ}\ov2}\over \tanh{\gamma_{12}^{+\circ}\ov2}}\, \varPhi(\gamma_{12}^{-\circ})^2\varPhi(\gamma_{12}^{+\circ})^2=e^{2i\theta_{\text{HL}}(x_1^\pm,x_2)}\,.
\end{equation}
In terms of our dressing factor~$\varsigma^{\bullet\circ}(x_1^\pm,x_2)$ this means
\begin{equation}
\frac{\tanh\frac{\gamma_{12}^{+\circ}}{2}}{\tanh\frac{\gamma_{12}^{-\circ}}{2}}\Big(\varsigma^{\bullet\circ}(x_1^\pm,x_2)\Big)^{-4}=
e^{2i\theta_{\text{HL}}(x_1^\pm,x_2)}\,.
\end{equation}
If we were to consider a single power of $e^{i\theta_{\text{HL}}(x_1^\pm,x_2)}$, as was done in~\cite{Borsato:2016xns}, this would yield a function with square-root branch points at $x_1^\pm=x_2$ and $x_1^\pm=1/x_2$, which would then recombine with the explicit square-root term in the normalisation of the S-matrix elements in~\cite{Borsato:2016xns}. It is immediate to see that this removes all putative square-root branch points. Still, we remark that our proposal differs from theirs by the presence of the BES factor. The argument used in~\cite{Borsato:2016xns} to rule out the presence of the BES factor was the fact that, if one first performs the $h\gg1$ expansion of the massive factor and then takes the limit to the massless kinematics, only the AFS and HL orders survive. In hindsight, the problem with that argument is that taking the $h\gg1$ expansion and going to the massless kinematics are non-commuting operations. Moreover, when performing them in the order used in~\cite{Borsato:2016xns} (by expanding at $h\gg1$ under the integral sign), one finds that the resulting integrals are divergent in the massless kinematics and hence need to be regularised. We take this as a further indication that it is more appropriate to continue the whole BES factor to the massless kinematics before taking any limit.
\paragraph{Massless dressing factor.}
It is also easy to take one further massless limit, in the above formulae, obtaining
\begin{equation}
-\varPhi(\gamma_{12})^4=e^{2i\theta_{\text{HL}}(x_1,x_2)}\,.
\end{equation}
Taking the square root of this expression gives the HL order of the phase~\cite{Borsato:2016xns}. At this order, our solution differs by the $-a(\gamma_{12})$ factor; as we remarked, this factor is necessary if we insist that we have the same dressing factor both for same-chirality scattering and opposite-chirality scattering.
Once again, at all orders our proposal involves the complete BES factor in the appropriate kinematics, as opposed to the AFS~one.
\subsection{Application to mixed-flux backgrounds}
\label{sec:proposal:mixedflux}
The $AdS_3\times S^3\times T^4$ background can be supported by a mixture of Ramond-Ramond and Neveu-Schwarz-Neveu-Schwarz background fluxes, which does not spoil integrability~\cite{Cagnazzo:2012se}. In fact, the S~matrix for such backgrounds is remarkably similar to that of the pure-RR background~\cite{Hoare:2013pma}, even though the kinematics is rather different~\cite{Hoare:2013lja}. If the parameter~$h$ measures the strength of the RR background flux, and we introduce a new (quantised) parameter $k=1,2,\dots$ to indicate the amount of NSNS flux, the dispersion relation is~\cite{Hoare:2013lja,Lloyd:2014bsa}
\begin{equation}
E(p)=\sqrt{\Big(M+\frac{k}{2\pi}p\Big)^2+4h^2\sin^2\Big(\frac{p}{2}\Big)}\,.
\end{equation}
It is possible to derive the matrix part of the S~matrix~\cite{Lloyd:2014bsa} and express it in terms of the Zhukovsky variables (following the notation of~\cite{Eden:2021xhe})
\begin{equation}
\label{eq:mixedZhukovsky}
\begin{aligned}
x^{\pm}_{\text{\tiny L}}(p) &=
\frac{\big(1+\tfrac{k}{2\pi}p\big)+\sqrt{\big(1+\tfrac{k}{2\pi}p\big)^2+4h^2\sin^2(\tfrac{p}{2})}}{2h\,\sin(\tfrac{p}{2})}\,e^{\pm \frac{i}{2} p},\\
x^{\pm}_{\text{\tiny R}}(p) &=
\frac{\big(1-\tfrac{k}{2\pi}p\big)+\sqrt{\big(1-\tfrac{k}{2\pi}p\big)^2+4h^2\sin^2(\tfrac{p}{2})}}{2h\,\sin(\tfrac{p}{2})}\,e^{\pm \frac{i}{2} p},\\
x^{\pm}_{\text{\tiny 0}}(p) &=
\frac{\big(0+\tfrac{k}{2\pi}p\big)+\sqrt{\big(0+\tfrac{k}{2\pi}p\big)^2+4h^2\sin^2(\tfrac{p}{2})}}{2h\,\sin(\tfrac{p}{2})}\,e^{\pm \frac{i}{2} p}.
\end{aligned}
\end{equation}
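As a quick sanity check of this parametrisation one may run the following minimal Python sketch (illustrative only, not part of the derivation; it assumes the standard relations between energy, mass parameter and Zhukovsky variables, $E=\tfrac{h}{2i}\big(x^+-\tfrac{1}{x^+}-x^-+\tfrac{1}{x^-}\big)$, $\mathcal{M}=\tfrac{h}{2i}\big(x^++\tfrac{1}{x^-}-x^--\tfrac{1}{x^+}\big)$ and $x^+/x^-=e^{ip}$, which are not spelled out in the text but are straightforward to verify from~\eqref{eq:mixedZhukovsky}):
\begin{verbatim}
# Numerical check of the mixed-flux Zhukovsky variables (illustrative only).
import numpy as np

def zhukovsky(p, h, calM):
    # x^{pm}(p) with calM = 1 + kp/2pi (L), 1 - kp/2pi (R), kp/2pi (massless)
    r = (calM + np.sqrt(calM**2 + 4*h**2*np.sin(p/2)**2)) / (2*h*np.sin(p/2))
    return r*np.exp(0.5j*p), r*np.exp(-0.5j*p)

p, h, k = 0.7, 1.3, 2
for calM in (1 + k*p/(2*np.pi), 1 - k*p/(2*np.pi), k*p/(2*np.pi)):
    xp, xm = zhukovsky(p, h, calM)
    E = np.sqrt(calM**2 + 4*h**2*np.sin(p/2)**2)   # dispersion relation above
    assert abs(h/2j*(xp - 1/xp - xm + 1/xm) - E) < 1e-12
    assert abs(h/2j*(xp + 1/xm - xm - 1/xp) - calM) < 1e-12
    assert abs(xp/xm - np.exp(1j*p)) < 1e-12
\end{verbatim}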
Up to taking care of distinguishing ``left'' and ``right'' Zhukovsky variables, the S-matrix elements are essentially the same as in the pure-RR case, so that the same is also true for the crossing equations. In particular, for the massive dressing factors we have~\cite{Lloyd:2014bsa}
\begin{equation}
\begin{aligned}
\sigma^{\bullet\bullet}_{\text{\tiny LL}} (x_{\text{\tiny L}1}^\pm,x_{\text{\tiny L}2}^\pm)^{2}\tilde\sigma^{\bullet\bullet}_{\text{\tiny RL}} (\bar{x}_{\text{\tiny R}1}^\pm,x_{\text{\tiny L}2}^\pm)^{2}&=
\left(\frac{x_{\text{\tiny L}2}^-}{x_{\text{\tiny L}2}^+}\right)^{2}
\frac{(x_{\text{\tiny L}1}^- - x_{\text{\tiny L}2}^+)^{2}}{(x_{\text{\tiny L}1}^- - x_{\text{\tiny L}2}^-)(x_{\text{\tiny L}1}^+ - x_{\text{\tiny L}2}^+)}
\frac{1-\frac{1}{x_{\text{\tiny L}1}^-x_{\text{\tiny L}2}^+}}{1-\frac{1}{x_{\text{\tiny L}1}^+x_{\text{\tiny L}2}^-}},\\
\sigma^{\bullet\bullet}_{\text{\tiny RR}} (\bar{x}_{\text{\tiny R}1}^\pm, x_{\text{\tiny R}2}^\pm)^{2}\tilde\sigma^{\bullet\bullet}_{\text{\tiny LR}} (x_{\text{\tiny L}1}^\pm,x_{\text{\tiny R}2}^\pm)^{2}&=
\left(\frac{x_{\text{\tiny R}2}^-}{x_{\text{\tiny R}2}^+}\right)^{2}
\frac{\big(1-\frac{1}{x^+_{\text{\tiny L}1}x^+_{\text{\tiny R}2}}\big)\big(1-\frac{1}{x^-_{\text{\tiny L}1}x^-_{\text{\tiny R}2}}\big)}{\big(1-\frac{1}{x^+_{\text{\tiny L}1}x^-_{\text{\tiny R}2}}\big)^{2}}
\frac{x_{\text{\tiny L}1}^--x_{\text{\tiny R}2}^+}{x_{\text{\tiny L}1}^+-x_{\text{\tiny R}2}^-}.
\end{aligned}
\end{equation}
Despite this apparent simplicity, no solution to these crossing equations is known, due to their unusual underlying kinematics. The solution of the mixed-mass and massless crossing equations is also unknown when $k\neq0$.
We will see in a moment that the approach we used above allows us to find a solution, at least formally, in terms of the very same functions used for the pure-RR case. First, we introduce the BES factor as a function of the Zhukovsky variables of~\eqref{eq:mixedZhukovsky}. Formally, it satisfies the same crossing equations when these are expressed in terms of the Zhukovsky variables. As we did above, we can strip off the BES factor to get a simpler set of equations,
\begin{equation}
\begin{aligned}
\Big(\varsigma^{\bullet\bullet}_{\text{\tiny LL}}(x_{\text{\tiny L}1}^\pm,x_{\text{\tiny L}2}^\pm)\Big)^{-2}\Big(\tilde{\varsigma}^{\bullet\bullet}_{\text{\tiny RL}}(\bar{x}_{\text{\tiny R}1}^\pm,x_{\text{\tiny L}2}^\pm)\Big)^{-2}&=\frac{1-x_{\text{\tiny L}1}^+x_{\text{\tiny L}2}^+}{1-x_{\text{\tiny L}1}^-x_{\text{\tiny L}2}^+}\frac{1-x_{\text{\tiny L}1}^-x_{\text{\tiny L}2}^-}{1-x_{\text{\tiny L}1}^+x_{\text{\tiny L}2}^-}\,,\\
\Big(\varsigma^{\bullet\bullet}_{\text{\tiny RR}}(\bar{x}_{\text{\tiny R}1}^\pm,x_{\text{\tiny R}2}^\pm)\Big)^{-2}\Big(\tilde{\varsigma}^{\bullet\bullet}_{\text{\tiny LR}}(x_{\text{\tiny L}1}^\pm,x_{\text{\tiny R}2}^\pm)\Big)^{-2}&=\frac{x_{\text{\tiny L}1}^+ - x_{\text{\tiny R}2}^-}{x_{\text{\tiny L}1}^+ - x_{\text{\tiny R}2}^+}\frac{x_{\text{\tiny L}1}^- - x_{\text{\tiny R}2}^+}{x_{\text{\tiny L}1}^- - x_{\text{\tiny R}2}^-}\,,
\end{aligned}
\end{equation}
which can be straightforwardly expressed in terms of rapidities, giving
\begin{equation}
\begin{aligned}
\Big(\varsigma^{\bullet\bullet}_{\text{\tiny LL}}(x_{\text{\tiny L}1}^\pm,x_{\text{\tiny L}2}^\pm)\Big)^{-2}\Big(\tilde{\varsigma}^{\bullet\bullet}_{\text{\tiny RL}}(\bar{x}_{\text{\tiny R}1}^\pm,x_{\text{\tiny L}2}^\pm)\Big)^{-2}&
=
\frac{
\cosh(\frac{1}{2}\gamma^{++}_{\text{\tiny LL},12})\,\cosh(\frac{1}{2}\gamma^{--}_{\text{\tiny LL},12})
}{
\sinh(\frac{1}{2}\gamma^{+-}_{\text{\tiny LL},12})\,\sinh(\frac{1}{2}\gamma^{-+}_{\text{\tiny LL},12})
}
\,,\\
\Big(\varsigma^{\bullet\bullet}_{\text{\tiny RR}}(\bar{x}_{\text{\tiny R}1}^\pm,x_{\text{\tiny R}2}^\pm)\Big)^{-2}\Big(\tilde{\varsigma}^{\bullet\bullet}_{\text{\tiny LR}}(x_{\text{\tiny L}1}^\pm,x_{\text{\tiny R}2}^\pm)\Big)^{-2}&
=
\frac{
\cosh(\frac{1}{2}\gamma^{+-}_{\text{\tiny LR},12})\,\cosh(\frac{1}{2}\gamma^{-+}_{\text{\tiny LR},12})
}{
\sinh(\frac{1}{2}\gamma^{++}_{\text{\tiny LR},12})\,\sinh(\frac{1}{2}\gamma^{--}_{\text{\tiny LR},12})
}
\,,
\end{aligned}
\end{equation}
which can be solved in terms of the functions~$\varphi^{\bullet\bullet}$ and $\tilde{\varphi}^{\bullet\bullet}$ introduced in section~\ref{sec:proposal:massive}, exploiting in particular the crossing relation~\eqref{eq:crossingregularisation}. To this end it is sufficient to set
\begin{equation}
\begin{aligned}
&\left(\sigma_{\text{\tiny LL}}^{\bullet\bullet}(x^\pm_{\text{\tiny L}1},x^\pm_{\text{\tiny L}2})\right)^{-2} =
-
\frac{
\sinh(\frac{1}{2}\gamma^{-+}_{\text{\tiny LL},12})
}{
\sinh(\frac{1}{2}\gamma^{+-}_{\text{\tiny LL},12})
}\,
e^{\varphi^{\bullet\bullet}(\gamma_{\text{L}1}^\pm,\gamma_{\text{L}2}^\pm)}\,\left(\sigma_{\text{\tiny BES}}(x_{\text{\tiny L}1}^\pm,x_{\text{\tiny L}2}^\pm)\right)^{-2},\\
&\left(\sigma_{\text{\tiny RR}}^{\bullet\bullet}(x^\pm_{\text{\tiny R}1},x^\pm_{\text{\tiny R}2})\right)^{-2} =
-
\frac{
\sinh(\frac{1}{2}\gamma^{-+}_{\text{\tiny RR},12})
}{
\sinh(\frac{1}{2}\gamma^{+-}_{\text{\tiny RR},12})
}\,
e^{\varphi^{\bullet\bullet}(\gamma_{\text{R}1}^\pm,\gamma_{\text{R}2}^\pm)}\,\left(\sigma_{\text{\tiny BES}}(x_{\text{\tiny R}1}^\pm,x_{\text{\tiny R}2}^\pm)\right)^{-2},
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
&\left(\widetilde{\sigma}_{\text{\tiny LR}}^{\bullet\bullet}(x^\pm_{\text{\tiny L}1},x^\pm_{\text{\tiny R}2})\right)^{-2} =
+
\frac{
\cosh(\frac{1}{2}\gamma^{+-}_{\text{\tiny LR},12})
}{
\cosh(\frac{1}{2}\gamma^{-+}_{\text{\tiny LR},12})
}\,
e^{\tilde{\varphi}^{\bullet\bullet}(\gamma_{\text{L}1}^\pm,\gamma_{\text{R}2}^\pm)}\,\left(\sigma_{\text{\tiny BES}}(x_{\text{\tiny L}1}^\pm,x_{\text{\tiny R}2}^\pm)\right)^{-2},\\
&\left(\widetilde{\sigma}_{\text{\tiny RL}}^{\bullet\bullet}(x^\pm_{\text{\tiny R}1},x^\pm_{\text{\tiny L}2})\right)^{-2} =
+
\frac{
\cosh(\frac{1}{2}\gamma^{+-}_{\text{\tiny RL},12})
}{
\cosh(\frac{1}{2}\gamma^{-+}_{\text{\tiny RL},12})
}\,
e^{\tilde{\varphi}^{\bullet\bullet}(\gamma_{\text{R}1}^\pm,\gamma_{\text{L}2}^\pm)}\,\left(\sigma_{\text{\tiny BES}}(x_{\text{\tiny R}1}^\pm,x_{\text{\tiny L}2}^\pm)\right)^{-2},
\end{aligned}
\end{equation}
which is consistent with the left-right symmetry of the model~\cite{Lloyd:2014bsa}.
While a detailed analysis of these dressing factors, as well as of those involving massless particles, requires a thorough understanding of the analytic properties of the mixed-flux $x$-, $z$- and $\gamma$-planes, it is very encouraging that the factorisation structure appears to be quite robust. We believe that this approach can be used to resolve the outstanding question of determining the mixed-flux dressing factors, and we plan to present those results elsewhere~\cite{upcoming:mixed}.
\section{Bethe-Yang equations}
\label{sec:BYE}
To conclude, we write the full Bethe-Yang equations using the normalisation as well as the dressing factors constructed above. The auxiliary equations, which were derived in~\cite{Borsato:2016xns}, will be unchanged.
Let us start by summarising the types of excitations. First of all, we have $N_1$ massive ``left'' particles with $M=+1$; the highest-weight state of their representation is $Y(p)$. Then, we have $N_{\bar{1}}$ ``right'' particles with $M=-1$ (highest-weight state $\bar{Z}(p)$). Finally, we have $N_0^{(\dot{\alpha})}$ massless excitations, with $\dot{\alpha}=1,2$ labelling the representation with highest-weight state $\chi^{\dot{\alpha}}(p)$.
Note that the S~matrix (and hence the Bethe equations) is blind to $\dot{\alpha}=1,2$. Nonetheless, the split between $N_0^{(1)}$ and $N_0^{(2)}$ is important to reproduce the correct degeneracy of the states.
From these highest-weight states we can create descendants by acting with the lowering operators of $psu(1|1)^{\oplus 4}$ centrally extended. There are four such lowering operators, two of which are associated to the ``left'' part of the algebra (and we associate to them $N_y^{(1)}$ and $N_y^{(2)}$ auxiliary roots) and two of which are associated to the ``right'' part of the algebra ($N_y^{(\bar 1)}$ and $N_y^{(\bar 2)}$). However, due to the central extension relating the left- and right-algebras, these sets are equivalent (acting with a right charge is tantamount to acting with a left one, at generic values of momentum and coupling). Hence, for regular roots we can treat $N_y^{(1)}$ and $N_y^{(\bar 1)}$ as a single family, and similarly $N_y^{(2)}$ and $N_y^{(\bar 2)}$.%
\footnote{At $h\ll1$ some of the auxiliary roots will go to infinity, and others will go to zero. This reproduces the split between left- and right-supercharges, and makes it manifest that they can be associated to two copies of $psu(1,1|2)$~\cite{Borsato:2013qpa}.}
We collect in table~\ref{table:roots} the notation for the roots.
\begin{table}[t]
\centering
\begin{tabular}{|l | l|}
\hline
Excitation numbers & Particles \\
\hline
$N_1$& Left momentum-carrying mode ($Y(p)$).\\
$N_{\bar{1}}$& Right momentum-carrying mode ($\bar{Z}(p)$).\\
$N_0^{(1)}$& Massless momentum-carrying mode, flavour $\dot{\alpha}=1$ ($\chi^1(p)$).\\
$N_0^{(2)}$& Massless momentum-carrying mode, flavour $\dot{\alpha}=2$ ($\chi^2(p)$).\\
$N_y^{(1)}$& Auxiliary root with $\alpha=1$ (lowering operator $\gen{Q}^{1}$ or $\widetilde{\gen{S}}^1$).\\
$N_y^{(2)}$& Auxiliary root with $\alpha=2$ (lowering operator $\gen{Q}^{2}$ or $\widetilde{\gen{S}}^2$).\\
\hline
\end{tabular}
\caption{A summary of the excitation numbers appearing in the Bethe-Yang equations. We refer to the discussion of the $psu(1|1)^{\oplus4}$ centrally extended algebra as presented for instance in~\cite{upcoming:massless}.}
\label{table:roots}
\end{table}
We begin with the equation for a ``left'' magnon (with $M=+1$), of momentum $p_k$ ($k=1,\dots N_1$) which reads
\begin{equation}
\begin{aligned}
&1=
e^{ip_kL}\prod_{\substack{j=1\\j\neq k}}^{N_1}
e^{+i p_k}e^{-i p_j}
\frac{x^-_k-x^+_j}{x^+_k-x^-_j}
\frac{1-\frac{1}{x^-_kx^+_j}}{1-\frac{1}{x^+_kx^-_j}}\big(\sigma^{\bullet\bullet}_{kj}\big)^{-2}
\prod_{j=1}^{N_{\bar{1}}}e^{-i p_j}
\frac{1-\frac{1}{x^-_kx^-_j}}{1-\frac{1}{x^+_kx^+_j}}
\frac{1-\frac{1}{x^-_kx^+_j}}{1-\frac{1}{x^+_kx^-_j}}\big(\widetilde{\sigma}^{\bullet\bullet}_{kj}\big)^{-2}
\\
&\qquad
\times
\prod_{\dot{\alpha}=1,2}
\prod_{j=1}^{N_0^{(\dot{\alpha})}}e^{+\frac{i}{2} p_k}e^{-i p_j}
\frac{x^-_k-x_j}{1-x^+_kx_j}\big(\sigma^{\bullet\circ}_{kj}\big)^{-2}
\prod_{\alpha=1,2}
\prod_{j=1}^{N_y^{(\alpha)}}e^{-\tfrac{i}{2}p_k}\frac{x^+_k-y_{j}^{(\alpha)}}{x^-_k-y_{j}^{(\alpha)}}\,,
\end{aligned}
\end{equation}
To keep the notation light, we omitted the index $\dot{\alpha}$ from the rapidities of the massless modes.
For a ``right'' magnon ($M=-1$) with momentum $p_k$ ($k=1,\dots N_{\bar{1}}$) we have
\begin{equation}
\begin{aligned}
&1=
e^{ip_kL}\prod_{\substack{j=1\\j\neq k}}^{N_{\bar{1}}}\frac{x^+_k-x^-_j}{x^-_k-x^+_j}
\frac{1-\frac{1}{x^-_kx^+_j}}{1-\frac{1}{x^+_kx^-_j}}\big(\sigma^{\bullet\bullet}_{kj}\big)^{-2} \prod_{j=1}^{N_1}e^{ip_k}
\frac{1-\frac{1}{x^+_kx^+_j}}{1-\frac{1}{x^-_kx^-_j}}
\frac{1-\frac{1}{x^-_kx^+_j}}{1-\frac{1}{x^+_kx^-_j}}\big(\widetilde{\sigma}^{\bullet\bullet}_{kj}\big)^{-2}
\\
&\qquad\times
\prod_{\dot{\alpha}=1,2}
\prod_{j=1}^{N_0^{(\dot{\alpha})}}e^{-\frac{i}{2} p_k}e^{-i p_j}
\frac{1-x^+_kx_j}{x^-_k-x_j}\big(\sigma^{\bullet\circ}_{kj}\big)^{-2}
\prod_{\alpha=1,2}
\prod_{j=1}^{N_y^{(\alpha)}}e^{-\tfrac{i}{2}p_k}\frac{1-\frac{1}{x^-_ky_{j}^{(\alpha)}}}{1-\frac{1}{x^+_ky_{j}^{(\alpha)}}}\,.
\end{aligned}
\end{equation}
A massless magnon of momentum $p_k$ could belong to the representation with highest-weight state $\chi^1(p)$ or to the one with highest-weight state $\chi^2(p)$. The equation for a particle of the former type is
\begin{equation}
\begin{aligned}
&1=
e^{ip_kL}
\prod_{\substack{j=1\\j\neq k}}^{N_{0}^{(1)}}\big(\sigma^{\circ\circ}_{kj}\big)^{-2}
\prod_{j=1}^{N_{0}^{(2)}}\big(\sigma^{\circ\circ}_{kj}\big)^{-2}
\prod_{j=1}^{N_1}e^{+i p_k}e^{-\frac{i}{2} p_j}
\frac{x_kx^+_j-1}{x_k-x^-_j}\big(\sigma^{\circ\bullet}_{kj}\big)^{-2}
\\
&\qquad\times
\prod_{j=1}^{N_{\bar{1}}}e^{+i p_k}e^{+\frac{i}{2} p_j}
\frac{x_k-x^-_j}{x_kx^+_j-1}\big(\sigma^{\circ\bullet}_{kj}\big)^{-2}
\prod_{\alpha=1,2}
\prod_{j=1}^{N_y^{(\alpha)}}e^{-\tfrac{i}{2}p_k}\frac{x_k-y_{j}^{(\alpha)}}{\frac{1}{x_k}-y_{j}^{(\alpha)}}
\,.
\end{aligned}
\end{equation}
The equation for a particle of the latter type is identical, since the S~matrix is blind to $\dot{\alpha}$. Finally, the auxiliary Bethe equations read, for $\alpha =1,2$,
\begin{equation}
1=\prod_{j=1}^{N_1}\frac{y_{k}^{(\alpha)}-x^+_j}{y_{k}^{(\alpha)}-x^-_j}e^{-ip_j/2}
\prod_{j=1}^{N_{\bar{1}}}\frac{1-\frac{1}{y_{k}^{(\alpha)}x^-_j}}{1-\frac{1}{y_{k}^{(\alpha)}x^+_j}}e^{-ip_j/2}
\prod_{\dot{\alpha}=1,2}
\prod_{j=1}^{N_0^{(\dot{\alpha})}}\frac{y_{k}^{(\alpha)}-x_j}{y_{k}^{(\alpha)}-\frac{1}{x_j}}e^{-ip_j/2}\,,
\end{equation}
where $k=1,\dots N_y^{(\alpha)}$.
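As a concrete illustration, the auxiliary equation is easy to evaluate numerically once the momentum-carrying roots are given. The following minimal Python sketch is a numerical convenience only: the Zhukovsky variables $x^\pm_j$ would come from solving the dispersion relation and are taken as given here, and only the simplest case where left massive roots alone are excited is covered.
\begin{verbatim}
import numpy as np

def aux_rhs(y, xp, xm, p):
    """Right-hand side of the auxiliary Bethe equation when only
    left massive roots are present (N_1 > 0, all other excitation
    numbers zero). xp, xm, p hold x^+_j, x^-_j and the momenta p_j."""
    xp, xm, p = map(np.asarray, (xp, xm, p))
    return np.prod((y - xp) / (y - xm) * np.exp(-1j * p / 2))

# y is a solution of the auxiliary equation when aux_rhs(...) equals 1
print(aux_rhs(2.0 + 0.0j, [1.5 + 0.1j], [1.5 - 0.1j], [0.3]))
\end{verbatim}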
Let us also summarise the various dressing factors that enter in the Bethe-Yang equations. For massive excitations we have
\begin{equation}
\begin{aligned}
\big(\sigma^{\bullet\bullet}_{12}\big)^{-2}=&-\frac{\sinh\tfrac{\gamma^{-+}_{12}}{2}}{\sinh\tfrac{\gamma^{+-}_{12}}{2}}e^{\varphi^{\bullet\bullet}(\gamma^\pm_1,\gamma^\pm_2)}\ \sigma_{\text{\tiny BES}}^{-2}(x_1^\pm,x_2^\pm)\,,\\
%
\big(\widetilde{\sigma}^{\bullet\bullet}_{12}\big)^{-2}=&+\frac{\cosh\tfrac{\gamma^{+-}_{12}}{2}}{\cosh\tfrac{\gamma^{-+}_{12}}{2}}e^{\tilde\varphi^{\bullet\bullet}(\gamma^\pm_1,\gamma^\pm_2)}\ \sigma_{\text{\tiny BES}}^{-2}(x_1^\pm,x_2^\pm)\,.
\end{aligned}
\end{equation}
For mixed-mass scattering we have
\begin{equation}
\begin{aligned}
%
\big(\sigma^{\bullet\circ}_{12}\big)^{-2}=&\ i\,\frac{\tanh\tfrac{\gamma^{-\circ}_{12}}{2}}{\tanh\tfrac{\gamma^{+\circ}_{12}}{2}}e^{\tfrac{1}{2}(\varphi^{\bullet\bullet}(\gamma^\pm_1,\gamma_2)+\tilde{\varphi}^{\bullet\bullet}(\gamma^\pm_1,\gamma_2))}\ \sigma_{\text{\tiny BES}}^{-2}(x_1^\pm,x_2)\,\\
=&\ i\,\frac{\tanh\tfrac{\gamma^{-\circ}_{12}}{2}}{\tanh\tfrac{\gamma^{+\circ}_{12}}{2}}\varPhi(\gamma_{12}^{+\circ})\varPhi(\gamma_{12}^{-\circ})\,\sigma_{\text{\tiny BES}}^{-2}(x_1^\pm,x_2)\,.
\end{aligned}
\end{equation}
Finally, for massless scattering we are picking the same solution regardless of the chirality of the scattered particle, and we have
\begin{equation}
\big(\sigma^{\circ\circ}_{12}\big)^{-2}=\ \big(\widetilde{\sigma}^{\circ\circ}_{12}\big)^{-2}=a(\gamma_{12})\,\varPhi(\gamma_{12}^{\circ\circ})^2\ \sigma_{\text{\tiny BES}}^{-2}(x_1,x_2)\,.
\end{equation}
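For orientation, the rapidity-difference prefactors of the massive factors above can be evaluated with a few lines of Python. This is only a bookkeeping sketch: the BES factor and the functions $\varphi^{\bullet\bullet}$, $\tilde{\varphi}^{\bullet\bullet}$ are not reproduced, and the difference combinations $\gamma^{\pm\mp}_{12}=\gamma^{\pm}_{1}-\gamma^{\mp}_{2}$ are assumed.
\begin{verbatim}
import cmath

def massive_prefactors(g1p, g1m, g2p, g2m):
    """Rapidity-difference prefactors of (sigma_12)^{-2} (same
    chirality) and (tilde-sigma_12)^{-2} (opposite chirality)."""
    gpm = g1p - g2m          # gamma^{+-}_{12}, assumed difference form
    gmp = g1m - g2p          # gamma^{-+}_{12}
    same = -cmath.sinh(gmp / 2) / cmath.sinh(gpm / 2)
    opp  = +cmath.cosh(gpm / 2) / cmath.cosh(gmp / 2)
    return same, opp

print(massive_prefactors(0.5 + 0.1j, 0.3 - 0.1j, 0.2 + 0.2j, 0.1 - 0.2j))
\end{verbatim}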
\section{Conclusions}
\label{sec:conclusions}
We have presented a new solution to the crossing equations for $AdS_3\times S^3\times T^4$. The general structure of our solution is such that all of the dressing factors include a BES factor (in the appropriate kinematics) times a piece which depends on the difference of rapidities which we introduced following~\cite{Beisert:2006ib,Fontanella:2019baq}.
In some ways, our solution is similar to that of~\cite{Borsato:2013hoa,Borsato:2016xns}. Indeed, for the product of the massive dressing factors, which we called~$\varsigma^{\boldsymbol{+}}(x_1^\pm,x_2^\pm)$, we find that our solution coincides with the HL phase used in~\cite{Borsato:2013hoa}; as a byproduct of our work, we find a difference-form representation of the HL phase which is very convenient both for its analytic continuation and for fusion.
However, already for the difference of the massive phases~$\varsigma^{\boldsymbol{-}}(x_1^\pm,x_2^\pm)$ we find something fundamentally different from~\cite{Borsato:2013hoa}. Our solution is minimal when formulated in the $\gamma^\pm$-plane (see appendix~\ref{app:Fourier}) whereas the one of~\cite{Borsato:2013hoa} appears perhaps more natural on the $u$-plane. However, as we argue in appendix~\ref{app:monodromy}, the previous proposal is incompatible with parity invariance. This is a strong indication that it needs to be modified.
In any case, it would be nice to carefully compute the one-loop dressing factor $\varsigma^{\boldsymbol{-}}(x_1^\pm,x_2^\pm)$, to see whether it agrees with our proposal. It is also intriguing that, as we discussed in section~\ref{sec:proposal:massive}, the difference between our proposal and the existing perturbative computation is quite small, and could be due to a local counterterm. Such counterterms are known to be sometimes necessary in the renormalisation of integrable models, see \textit{e.g.}~\cite{deVega:1981ka,Bonneau:1984pj}.
When considering the dressing factors that involve massless modes, the differences with~\cite{Borsato:2016xns} are rather substantial. The first key difference is that the path for the crossing transformation which we introduce in section~\ref{sec:rapidities:massless} is different from that used in~\cite{Borsato:2016xns}. Our choice is dictated by the compatibility with the mirror transformation, which was not analysed in the literature thus far. The main difference in the functional form of the solutions which we found is that they depend on the BES phase, rather than on its leading and sub-leading order pieces (the AFS and HL phase). One argument made in~\cite{Borsato:2016xns} to justify the appearance of the AFS and HL orders only is that the remaining pieces of the phase would go to zero. We find that, when $h$ is finite, this is not the case --- there is a substantial difference between the AFS and HL orders of the phase, and the whole BES phase. This highlights a difference between asymptotically expanding the BES phase, and going to the massless kinematics (as it was done in~\cite{Borsato:2016xns}), \textit{versus} going to the massless kinematics for the finite-coupling phase. The latter procedure, which we followed here, is most natural when considering the finite-coupling and finite-volume spectrum of the theory.
Additionally, it is relatively straightforward to analytically continue our proposal to other kinematic regimes (such as the mirror one), while this is harder for the proposal of~\cite{Borsato:2016xns} due to the appearance of the AFS phase.
Indeed, our proposal for the dressing factors and its nice properties in the mirror kinematics give us the necessary tools to study the finite-volume (and finite-coupling) spectrum of the theory by means of the mirror thermodynamic Bethe ansatz. We plan to return to this question in a forthcoming publication~\cite{upcoming:mirror}.
It would also be interesting to see if this approach, based on splitting off a BES factor from a rapidity-difference part of the crossing equations, could lead to solutions for other $AdS_3$ worldsheet S~matrices. A natural candidate is the pure-RR $AdS_3\times S^3\times S^3\times S^1$ background, whose S~matrix and crossing equations were found in~\cite{Borsato:2012ud,Borsato:2015mma}. Another interesting setup is the one where the background is supported both by RR fluxes and Neveu-Schwarz-Neveu-Schwarz ones. In this case we also know the S~matrix and crossing equations~\cite{Hoare:2013pma,Lloyd:2014bsa}, but the kinematics is more intricate~\cite{Hoare:2013lja}. Intriguingly, we have seen that our approach formally works also for this more complicated setup in section~\ref{sec:proposal:mixedflux}, see also~\cite{upcoming:mixed}.
This is particularly exciting in view of the obvious physical significance of the setup: it interpolates between the pure-RR case which we studied here (and which is reminiscent of $AdS_5\times S^5$) and the pure-NSNS case which can be described as a Wess-Zumino-Witten model~\cite{Maldacena:2000hw} and studied in great detail, both by integrability~\cite{Baggio:2018gct, Dei:2018mfl} and by worldsheet CFT techniques~\cite{Giribet:2018ada,Eberhardt:2018ouy,Eberhardt:2021vsx}.
Finally, it would be interesting to see how this proposal for the dressing factors would amend the current understanding of the hexagon formalism~\cite{Basso:2015zoa,Eden:2016xvg,Fleury:2016ykk} for $AdS_3\times S^3\times T^4$, whose study has been recently initiated~\cite{Eden:2021xhe}.
\section*{Acknowledgements}
AS thanks Juan Maldacena for interesting related discussions. AS gratefully acknowledges support from the IBM Einstein Fellowship.
\section{Introduction}
Upon deformation, amorphous materials behave as solids when the applied shear stress is lower than the yield stress and start to flow when this threshold is exceeded.
In the limit of very slow shear rate and at low temperature, the stress response becomes very jerky as seen for instance in bulk metallic glasses \cite{hufnagel2016deformation,zhang2017expe}, foams \cite{Cantat2006}, granular matter \cite{Dahmen2011} or porous silica \cite{Baro2013}. The sudden stress drops, or avalanches, originate in localized microscopic plastic rearrangements, or shear transformations (STs), involving a small number of particles \cite{Argon1979,spaepen1977}, which have been observed both in atomistic simulations \cite{FalkLanger1998} and in colloidal glasses with confocal microscopy \cite{Schall2007}. Avalanches of size $S$ are expected to be scale free in the thermodynamic limit, following the distribution $P(S) \sim S^{-\tau}$ where $\tau$ is the avalanche exponent. However, for finite systems of linear dimension $L$, they present an upper cutoff $S_c \sim L^{d_f}$, where $d_f$ is the fractal dimension that characterises the geometry of the failure event.
\begin{figure}
\centering
\includegraphics{pass_06_prx/avseries128_2.eps}
\caption{Segment of a stress-strain curve obtained from the elastoplastic model with $L=128$. Stress drops correspond to plastic rearrangements composed of shear transformations. Insets: Plastic activity in the model for different sizes of stress drop. Larger avalanches tend to form line-like events. \label{fig:epm_behaviour} }
\end{figure}
\red{Our present understanding of the yielding phenomenon in these materials is built upon the notion of anisotropic elastic interactions produced by individual STs in the surrounding medium \cite{Eshelby1957}. Since these interactions are long-ranged with an alternating sign, they stabilize or destabilize distant sites, and therefore act as mechanical noise. As the load $\Sigma$ approaches the yield (or critical) stress $\Sigma_Y$, however, plastic events become collective and strongly correlated. The presence of such correlations is signaled, for instance, by anomalous scaling of the fluctuations of the stress, $\delta \Sigma \sim L^{-\phi}$, where $L$ is the linear size of a system of dimension $d$ \cite{Karmakar2010,SalernoRobbins,LinWyart2014}. Multiple studies have consistently found $\phi < d/2$ at criticality, i.e.~stresses do not add up independently but are correlated.
The mechanical noise bath experienced by a distant site therefore not only contains contributions from single, point-like STs, but also from extended, line-like plastic events. While the stress change due to a distant point-like event is of order $L^{-d}$, the typical one induced by an extended avalanche is of order $S_c/L^d\sim L^{-(d-d_f)}$, where $d_f\le d$ in amorphous materials.}
In this work, we first show that the stress fluctuations produced by these spatially extended plastic events have distinctly different statistical properties than those coming from individual STs. We then proceed by showing that the presence of the collective avalanches has a profound and so far overlooked effect on the stability of very slowly deformed amorphous solids. In the limit when thermal effects play no role, often referred to as the athermal quasistatic regime (AQS), the statistical properties of the macroscopic failure events (avalanches) are closely linked to the distribution of residual stresses, i.e.~how far a given local region finds itself from instability. This distribution is very sensitive to the underlying mechanical noise. Indeed, we will show that there exists a transition that arises due to the mechanical noise associated with spatially extended events, and that this transition occurs above the average value of the weakest site.
As a consequence, a central result of our work is that the spatial shape of the plastic events, reflected in their fractal dimension $d_f$, enters the system size scaling of the typical value of the weakest site, which in turn controls the mean stress drop caused by avalanches under steady state flow conditions. We then propose a new scaling law linking the exponents characterizing the avalanche size and residual stress distributions. While a numerical demonstration of these concepts can only be performed for finite system sizes, we argue that the influence of the spatial extent of the avalanches remains in the thermodynamic limit.
\section{Elastoplastic Model}
\red{We use a coarse-grained, elastoplastic mesoscopic model (EPM) \cite{nicolas2018deformation}, which contains $N=L^2$ sites on a two-dimensional square lattice, that is ideally suited to study the statistics of avalanches and residual stresses. In our finite element implementation \cite{budrikis2017universal}, the Eshelby stress propagator $\mathcal{G}(r) \sim \cos(4\theta)/r^d$ \cite{Eshelby1957,Picard2004} for STs emerges naturally. Sites yield when the local stress exceeds their local yield threshold $|\sigma_{xy}| > \sigma_Y$. Upon yielding, the sites reduce their local stress to zero by accumulating plastic strain. The yield threshold $\sigma_Y$ for that site is then redrawn from a Weibull distribution with shape parameter $k=2$, the same distribution that is used to initialize the yield thresholds. }
\red{We implement a strain-controlled deformation protocol using extremal dynamics \cite{Talamali2011}, in which avalanches are initiated by uniformly loading the system in a simple shear configuration until the weakest site fails, and fixing the strain until the avalanche ends (cf. Fig.~\ref{fig:epm_behaviour}). Single STs have the characteristic quadrupolar form, while large avalanches show a line-like structure (cf. inset of Fig~\ref{fig:epm_behaviour}). Loading is done by specifying the displacement at the boundary vertices of the system. We thus use surfaces instead of periodic boundary conditions \cite{sandfeld2015avalanches}. Previous studies indicate that the universal critical behavior is insensitive to the detailed form of the loading conditions \cite{Budrikis2017}. From a stable configuration, the system is loaded until one site exactly meets its yield stress: $|\sigma_{xy}| = \sigma_Y$. The yielding site initiates an avalanche. After a site yields, all sites are checked for stability. The most unstable site then yields next, until the system returns to stability. Throughout the avalanche, the displacements at the boundaries of the system remain fixed. All data reported in this paper was taken after a steady-state flow state was reached (initial transients were discarded).}
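For readers who wish to experiment, a minimal, self-contained sketch of such a model with extremal dynamics is given below (in Python). It is an illustration of the protocol only: it uses a periodic quadrupolar kernel rather than the finite-element propagator with fixed boundaries employed here, tracks yielding in a single stress direction, and makes no attempt at efficiency.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, k = 32, 2.0                        # linear size, Weibull shape

# quadrupolar cos(4*theta)/r^2 kernel on a periodic grid
x = np.fft.fftfreq(L) * L
X, Y = np.meshgrid(x, x, indexing="ij")
theta, r2 = np.arctan2(Y, X), X**2 + Y**2
G = np.where(r2 > 0, np.cos(4 * theta) / np.where(r2 > 0, r2, 1), 0.0)
G[0, 0] = -G.sum()                    # crude zero-mode regularisation

sigma = np.zeros((L, L))              # local stresses
sigY = rng.weibull(k, (L, L))         # local yield thresholds

def avalanche(sigma, sigY, max_events=10**5):
    """Relax unstable sites, most unstable first; return size S."""
    S = 0
    for _ in range(max_events):
        res = sigY - sigma            # residual stresses x_i
        i, j = np.unravel_index(np.argmin(res), res.shape)
        if res[i, j] > 0:             # system is stable again
            break
        drop = sigma[i, j]
        sigma += drop * np.roll(np.roll(G, i, 0), j, 1)  # redistribute
        sigma[i, j] = 0.0             # local stress reset to zero
        sigY[i, j] = rng.weibull(k)   # redraw the threshold
        S += 1
    return S

sizes = []
for step in range(500):
    sigma += (sigY - sigma).min()     # load until the weakest site fails
    sizes.append(avalanche(sigma, sigY))
\end{verbatim}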
\begin{figure*}[t]
\centering
\includegraphics{pass_06_prx/px_rescales.eps}
\caption{(a) Probability distribution function of the residual stresses $P(x)$ for different system sizes $L$. In panels (b) and (c) $x$ has been rescaled by $x_p\sim L^{-2.0}$ and by $x_c \sim L^{-1.15}$ (see text and Appendix Fig.~\ref{fig:fss}), respectively. Filled circles indicate the location of the lower cutoff of the mechanical noise $\delta x_c\sim L^{-2.0}$ and filled diamonds indicate the mean values of the weakest site $\xm$.
\label{fig1}}
\end{figure*}
\section{Distribution of residual stresses}
The probability distribution function $P(x)$ of residual stresses $x=\sigma_Y-\sigma_{xy}$ is shown in Fig \ref{fig1}(a) for different system sizes and exhibits three distinct regimes. For larger values of $x$ we observe that $P(x) \sim x^{\theta}$ with a pseudogap exponent $\theta \approx 0.5$, in good agreement with the literature \cite{LinWyart2014,LiuBarrat2016,Budrikis2017}. \red{Several recent studies have investigated the behavior of $P(x)$ for smaller values of $x$ in more detail, and reported that the power law regime gives way to a finite plateau value as $x\rightarrow 0$ \cite{Tyukodi2019,Ferrero2019,Ruscher2020}. Our results here confirm a departure from the pseudogap regime, but reveal that the situation is more complex: Below a crossover value $x_c$, $P(x)$ in fact develops an intermediate power law regime $P(x) \sim x^{\tilde{\theta}}$ with $\tilde{\theta}<\theta$, before finally saturating in a system size dependent plateau value $P_0$ for $x \rightarrow 0$. }
\subsection{Origin of the terminal plateau}
To gain more insight into what happens for $x<x_c$, we first investigate the origin of the plateau region. As already observed in ref.~\cite{Tyukodi2019}, the plateau depends on the system size as $P_0 \sim L^{-p}$. We find that $p \approx 0.61$ (Appendix Fig.~\ref{fig:fss}) as also reported in \cite{Tyukodi2019, Ferrero2019}. In ref.~\cite{Ruscher2020} it was suggested that the emergence of the plateau is related to the discreteness of the underlying mechanical noise arising from the stress redistribution during avalanches. For a given site, one defines the mechanical noise $\delta x_i = x_i(n+1)-x_i(n)$ where $n$ indexes the plastic events in chronological order. Due to the long-range nature of the elastic interaction described by the stress propagator $\mathcal{G}(r) \sim r^{-s}$, this noise is broadly distributed \cite{Lemaitre2007} and can be expected to follow a L\'evy distribution $P (\delta x) \sim |\delta x|^{-\mu -1}$ \cite{LinWyartPRX}. The exponents are related by $s=d/\mu$, and when $\mu=1$, $\mathcal{G}(r)$ corresponds to the Eshelby propagator for STs. Assuming the different rearrangements are independent and correspond to single STs, one expects a lower cutoff $\delta x_c \sim L^{-d/\mu}$ for the noise distribution. The definition of the lower cutoff is not unique \cite{Ferrero2020, ParleySollich2020} and depends on the nature of the plastic objects considered (single STs vs. large avalanches). This aspect will be discussed in more detail below. The noise distribution also presents an upper cutoff $\delta x_{uc} \sim 1$ coming from the yield events occurring at neighboring sites.
In Fig.~\ref{fig2} we show the distributions of stress changes $\delta x$, also called ``kicks'', for plastic activity measured in our EPM for different system sizes. These distributions collapse when the stress change is rescaled by \bnew{$L^{-1.9}$} and the distribution scales as $P(\delta x) \sim \delta x^{-2}$, \bnew{which is very nearly the same as for the single ST Eshelby interactions $(\mu=1)$ we show in the inset.} We conclude that the overall mechanical noise is largely dominated by point-like events.
\bnew{These single STs set the smallest scale in the system, with a lower cutoff kick-size $\delta x_c$.} To highlight the role of the lower cutoff in the emergence of the plateau in $P(x)$, we use $x_p = \delta x_c$ and consider a rescaled distribution $P(x)L^{p}$ vs. $x/\delta x_c$ in Fig.~\ref{fig1}(b). A good finite-size collapse is obtained for $x$ below and in the vicinity of $\delta x_c$, showing that this region is dominated by the influence of the lower cutoff of the mechanical noise $\delta x_c$.
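The role of the lower cutoff can be made concrete by sampling such a broadly distributed noise directly. The short sketch below is illustrative only, for the $\mu=1$ case relevant to point-like STs: it draws kick magnitudes from $P(\delta x)\sim \delta x^{-2}$ between the cutoffs $\delta x_c \sim L^{-d/\mu}$ and $\delta x_{uc}\sim 1$ by inverse-transform sampling.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_kicks(L, n=100_000, mu=1.0, d=2):
    """Kick magnitudes with P(dx) ~ dx^{-mu-1} for mu = 1, between
    the lower cutoff a = L^{-d/mu} and the upper cutoff b = 1."""
    a, b = L ** (-d / mu), 1.0
    u = rng.random(n)
    return 1.0 / (1.0 / a - u * (1.0 / a - 1.0 / b))  # inverse CDF

for L in (32, 64, 128):
    dx = sample_kicks(L)
    # for mu = 1 the mean is set by the lower cutoff up to a log
    print(L, dx.min(), dx.mean())
\end{verbatim}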
\subsection{Origin of the intermediate regime}
However, this rescaling fails to collapse points at or above $\xm$, the average value of the weakest site and the second smallest characteristic scale, indicated by diamonds in Fig.~\ref{fig1}. We notice that the values of $\xm$ are systematically located below the crossover $x_c$ that demarcates the departure from the pseudogap regime $P(x)\sim x^\theta$ and is defined as the intersection between the two power-law regimes. This crossover scales as $x_c \sim L^{-c}$ with $c \approx 1.15$ (Appendix Fig.~\ref{fig:fss}). In Fig.~\ref{fig1}(c), we show $P(x)L^{0.61}$ vs $x/x_c$ and find a good collapse of the upper power-law regime in the region $x > x_c$ and of the plateau region. However, in the intermediate region $\delta x_c < x< x_c$ the collapse fails. The phenomenology below and above $x_c$ is thus different, as suggested by the different scaling with system size.
To shed more light on the origin of this intermediate power law regime, we focus our attention on $\xm$. It is intrinsically related to the macroscopic flow, in particular the average value of stress drops $\langle |\Delta \sigma| \rangle \sim \xm \sim L^{-\alpha}$, where $\alpha=1.4$ (Appendix Fig.~\ref{fig:fss}). One can envision $P(x)$ as the survival probability of a random walker performing a L\'evy flight near an absorbing boundary \cite{LinWyartPRX}. It is thus interesting to investigate $P(x)$ subject to the condition of absorption or survival after the occurrence of an avalanche. Results are shown in Fig \ref{fig3}(a), where we see first that, as mentioned above, no site survives below $\delta x_c$ and most of the sites in the plateau region are likely to be absorbed after an avalanche. We then observe that $P(x)$ conditioned on the surviving sites departs from the pseudogap regime when $x \approx \xm $, where both failing and surviving distributions are equal. Therefore, $\xm$ marks the onset of a transition; for $x > \xm$ sites are more likely to survive while below $\xm$, absorption dominates.
Moreover, the extremal dynamics protocol induces a global shift $-x_{min}$ of residual stresses to initiate each avalanche. This invites the introduction of a drift velocity: $ v =N x_{min} / \ell$ over the course of an avalanche with $\ell$ plastic events. \bnew{ An interesting question is then: what drift do sites experience before landing at a particular value of $x$? We check this in Fig.~\ref{fig3}b, and find a strong size-dependent enhancement in the drift velocity.}
For a L\'evy flight of index $\mu=1$ with a drift $v$, the persistence exponent, i.e. the pseudogap exponent in the context of sheared amorphous solids, can be expressed as a function of $v$ and the amplitude of the mechanical noise $A$ as $\theta = \arctan(A\pi/v)/\pi$ \cite{Doussal2009, LinWyartPRX}. Assuming $A$ to be constant, one expects a decrease of $\theta$ with increasing $v$, which we compute in the regions $x > \xm$ and $x < \xm$ in Fig.~\ref{fig3}(b). The inset reveals that, although the maximum drift scales as $\xm$, the drift enhancement begins at $x \sim L^{-1} \approx x_c$. This observation is consistent with the increase in probability of failing shown above.
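As a quick numerical illustration of this dependence (with the amplitude $A$ set to an arbitrary constant):
\begin{verbatim}
import numpy as np

A = 1.0                       # noise amplitude, assumed constant
for v in (0.1, 1.0, 10.0):
    theta = np.arctan(A * np.pi / v) / np.pi
    print(f"v = {v:5.1f}  ->  theta = {theta:.3f}")
# theta decreases monotonically as the drift v grows
\end{verbatim}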
\begin{figure}[t]
\centering
\includegraphics[]{pass_06_prx/pkick_all.eps}
\caption{Finite-size scaling of the mechanical kick distributions experienced by sites over the course of plastic rearrangements. Inset: Kick distributions from plastic events with a single ST, with the onset of the plateau defining $\delta x_c$.
\label{fig2}}
\end{figure}
The drift also reveals a change in the nature of avalanches above and below $\xm$. Indeed $\tilde{v} > v$ implies that $\langle\tilde{\ell}\rangle < \langle \ell \rangle$. The typical avalanches are larger above $\xm$, which thus marks a transition between collective and individual rearrangements. To illustrate this difference\bnew{, we study which avalanches lead to sites much less or much more stable than $ \xm$. This is accomplished by, for each avalanche of size $S$ and for each site $l$ with stability $x_l$, constructing the pairs $(x_l, S)$. For those pairs with $x_l \ll \xm$ (or $\gg \xm$), we compute the distribution of $S$ as $P(S | x \ll \xm)$ (or $P(S|x \gg \xm)$)} which are shown in Fig \ref{fig4}(a). We emphasize that even if we focus on $ x \ll \xm$, we observe large values of $S$ as they can lead to values of $x < \xm$. We notice that the avalanche exponent $\tau$ differs substantially. Indeed, for sites well above $\xm$, $\tau \approx 1.37$ corresponds to the avalanche exponent measured for the whole distribution while for $x \ll \xm$, $\tau \approx 1.5$ as expected in a mean-field picture of plasticity emerging from rearrangements of independent STs. Interestingly, the same value of $\tau$ has been measured for small avalanches in atomistic simulations \cite{Oyama2020}. Our interpretation is also consistent with the findings of Karimi et al. \cite{karimi2017}, who reported much narrower distributions of inertial (extended) avalanches than for overdamped (localized) ones.
The decrease of $\tilde{\theta}$ with increasing system size $L$ (see also Appendix Fig.~\ref{fig:fss}) comes from the joint effect of the decrease in the density of sites below $\xm$ with system size, as the cumulative distribution function $\int_0^{\xm} P(x)dx \sim L^{-d}$ \cite{Karmakar2010}, and of an increase of the drift $\tilde{v}$ with $L$, as observed in Fig.~\ref{fig3}(b). Assuming that the average number of failing sites per avalanche is $\langle \tilde{\ell} \rangle \approx 1$, the drift is $\tilde{v} \sim \langle S \rangle \sim L^{d-\alpha}$, and therefore the larger $L$, the larger the drift in the region where $x < \xm$. More and more sites are brought to the verge of instability, and in the thermodynamic limit, when $L\rightarrow \infty$, one expects the drift to become infinite and the density of sites below $\xm$ to vanish. This implies $\tilde{\theta}(L) \rightarrow 0$, and thus $P(x)$ should plateau for $x \le \xm$.
\begin{figure}[t]
\centering
\includegraphics[]{pass_05_pnas/px_live_vx.eps}
\caption{(a) Representation of the distribution $P(x)$ rescaled by $\xm$ conditioned on survival and absorption in the next avalanche. (b) Finite-size evolution of the drift $v$ experienced by sites with respect to the value of residual stresses. Inset: $\langle v \rangle(x)$, rescaled by $x\cdot L$, showing that the onset of drift enhancement occurs with a different scaling than the maximum drift enhancement.}
\label{fig3}
\end{figure}
From the discussion so far, we hypothesise that the probability of residual stresses can be written as
\begin{equation}
P(x;L)= \left\lbrace
\begin{array}{ll}
& P_0(L) \hspace{1.0cm} \forall \, \, x \le x_p \\
& c_1(L) x^{\tilde{\theta}(L)} \hspace{0.3cm} \forall \, \, x_p \le x \le x_c \\
& c_2 x^{\theta} \hspace{1.2cm} \forall \, \, x \ge x_c
\label{pdex_eq}
\end{array}
\right.
\end{equation}
Continuity at $x=x_p$ implies $c_1(L) \sim L^{2\tilde{\theta}-p}$. In the thermodynamic limit, $P(x)$ below $x_c$ reduces to $P(x) = \tilde{P_0}$ where $\tilde{P_0} \sim L^{-p}$. Moreover, the continuity at $x=x_c$ and the relation $x_c \sim L^{-c}$ allow us to establish an expression for the plateau exponent,
\begin{align}
p=c \theta + \tilde{\theta}(L)(2 -c) \stackrel{L \rightarrow \infty}{=} c \theta
\label{eq_for_p}
\end{align}
In the thermodynamic limit, when $\tilde{\theta}$ vanishes, with $c \approx 1.15$ and $\theta\approx 0.52$, this relation predicts $p = 0.60$, in good agreement with the value we find, meaning that the intermediate power law has no strong influence on $p$ for the range of sizes investigated.
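The approach of eq.~(\ref{eq_for_p}) to its thermodynamic-limit value is easy to check numerically:
\begin{verbatim}
c, theta = 1.15, 0.52
for theta_t in (0.3, 0.2, 0.1, 0.0):   # tilde-theta shrinking with L
    p = c * theta + theta_t * (2 - c)
    print(f"tilde-theta = {theta_t:.1f}  ->  p = {p:.3f}")
# p -> c * theta = 0.598 as tilde-theta -> 0
\end{verbatim}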
\begin{figure}[t]
\centering
\includegraphics[]{pass_05_pnas/ps_x_reverse.eps}
\caption{(a) Avalanche size distribution for $L=512$ conditioned on $x < \xm$ and $x > \xm$. (b) Unconditioned avalanche distributions $P(S,L)$. Colors indicate system sizes as in Fig.~\ref{fig1}.
}
\label{fig4}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[]{pass_06_prx/px_s.eps}
\caption{ Solid line is the $P(x)$ distribution for $L=256$. Dashed lines are $P(x)$ immediately after large avalanches of increasing size.
}
\label{fig:px_s}
\end{figure}
\section{Influence of the spatial extent of avalanches}
\subsection{Origin of the crossover $P(x)$}
For now we have linked the emergence of the plateau in $P(x)$ to the lower cutoff of mechanical noise and showed that $\xm$ sets the scale \red{at which the distributions $P(x)$ conditioned on failing vs. surviving sites become equal. However, it is evident from Fig.~\ref{fig3}(b) that the drift enhancement begins already for $x>\xm$, which is also signaled by a sharp increase in $P(x|{\rm fail})$ in the same region. These observations point to the presence of an additional scale.}
The question of the origin of the crossover in $P(x)$ remains as well. What is the physical meaning of $x_c$? To tackle this question, we investigate the role of the spatial extent of avalanches on the distributions of residual stresses and mechanical noise. In practice, we adopt a top to bottom approach by considering only specific sizes of avalanches and by conditioning the probability distributions on these specific values.
As a first step, we compute the unconditional distribution of avalanches $P(S)$ and find in Fig \ref{fig4}(b) an avalanche exponent $\tau \approx 1.37$ and a fractal dimension $d_f \approx 0.95$ consistent with results from atomistic simulations \cite{LiuBarrat2016, SalernoRobbins} but slightly lower than results from other EPM implementations \cite{LinWyart2014,Ferrero2019}. \bnew{In analogy to Fig.~\ref{fig4}(a), we consider the distribution of residual stresses in the system, conditioned on the preceding avalanche and find in Fig.~\mbox{\ref{fig:px_s}} a plateau that depends on the size of the avalanche. Since $P(x) = \int \mathrm{d} s P(x|s)P(s)$, the plateau with the earliest (i.e. largest $x$) onset in $P(x|s)$ must therefore result in the deflection from the power-law.}
\bnew{The typical stress release of the largest avalanches is set by $S_c$, corresponding to system-spanning events. Avalanches of this size would correspond to the scale of the earliest deviation from $P(x)\sim x^\theta$ at $x_c$. Fig.~\mbox{\ref{fig5}} tests this hypothesis by considering the distribution of stress kicks $\Delta x$ on sites from plastic events of size $S\approx S_c$.
As can be seen in Fig.~\mbox{\ref{fig5}}a), this changes the power law regime $P(\delta x)\sim \delta x^{-2.0}$ found for all kicks to $P(\Delta x) \sim \Delta x^{-2.2}$ and implies an apparent value of $\mu=1.2$. The typical elastic interaction can thus be seen as more long-ranged \cite{ParleySollich2020} with an effective kernel decaying as $\mathcal{G}(r) \sim r^{-1.8}$. Moreover, the lower cutoff of the kick distribution now scales as $\Delta x_c \sim L^{-1.13}$ (Fig.~\mbox{\ref{fig5}}a), which has a finite-size scaling in near perfect agreement with $x_c \sim L^{-1.15}$ found from direct analysis of $P(x)$ (Fig.~\mbox{\ref{fig1}}c and Appendix Fig.~\mbox{\ref{fig:fss}}). This behavior reflects collective instabilities, where one unstable site triggers further rearrangements and so on. A distant site not only feels one rearrangement but an apparent kick coming from the accumulation of noise from consecutive single rearrangements. Conditioning $P(x)$ on avalanches $S\approx S_c$ (Fig.~\mbox{\ref{fig5}}b) all but eliminates the intermediate power law ($\tilde{\theta} = 0$), and the pseudogap region immediately gives way to a plateau, whose scaling verifies equation (\ref{eq_for_p}). The onset of the plateau in $P(x \vert S=S_c)$ has the same finite size scaling and therefore corresponds to the plateau in the avalanche kick-distribution $\Delta x_c$. We thus conclude that the first deviation from the pseudogap behavior of $P(x)$ occurs at $x_c\sim \Delta x_c$. }
\bnew{Another scale that could be relevant is the average avalanche size $\langle S \rangle$. This scale plays a central role in the scaling relations for the yielding transition, as it represents the typical stress release occurring during flow, which must (in steady-state flow) be equal to the typical loading. Since the loading between avalanches is controlled by $\xm$, this connects the macroscopic flow to the microscopic description. In the Appendix, we show that this connection manifests in the finite-size scaling of a plateau in the kick-distribution from avalanches of the mean size $S \approx \langle S \rangle$ (Fig.~\ref{fig:si_savg_scaling}(a)). This plateau coincides with the plateau of $P(x)$ after such avalanches (Fig.~\ref{fig:si_savg_scaling}(b)). However, the rescaling of $P(x)L^p$ vs. $x/\xm$ is not sufficient to obtain a good collapse (i.e.~$x_c \not \sim \xm$) in the pseudogap regime contrary to what has been suggested recently in ref.~\mbox{\cite{Ferrero2020}}.}
\begin{figure}[t]
\centering
\includegraphics[]{pass_06_prx/residual_stress_sc.eps}
\caption{(a) Kick distribution from avalanches $S\approx S_c$, with finite-size scaling applied. (b) Residual stress distribution after avalanches with size $S\approx S_c$, collapsed with the same finite-size scaling. }
\label{fig5}
\end{figure}
\begin{table*}[t]
\centering
\caption{Summary of critical exponents and scaling laws. Error ranges are plausible values given the data. \label{tab:crit_exponents} }
\begin{tabular}{lrrrc}
\hline
Exponent & Defining Eq. & Scaling relation & Measured value & Predicted value\\
\hline
$\theta$ & $P(x>x_c) \sim x^\theta$ & - & $0.52\pm 0.05$ &\\
$c$ & $x_c \sim L^{-c}$ & - & $1.15 \pm 0.05$ &\\
$p$ & $P(x\ll1)\sim L^{-p} $& $p = c\theta$ & 0.61$\pm 0.02$ & 0.57\\
$\tau$ & $P(S) \sim S^{-\tau} G(S/S_c)$ & $2 - \frac{c\theta}{d_f}$ & 1.37$\pm 0.05$ & 1.40\\
$d_f$ & $S_c \sim L^{d_f}$ & $d_f=d-c$ & $0.95\pm 0.1$ & 0.85\\
$\alpha$ & $\langle x_{min}\rangle \sim L^{-\alpha}$ & $\alpha = d- p$ & $1.4\pm0.02$ & 1.40\\
$\phi$ & $P(\Sigma;L) \sim L^{-\phi} H(\Sigma / L^{-\phi}) $ &- & 0.88$\pm 0.03$ &\\
\hline
\end{tabular}
\end{table*}
\subsection{Interpretation in a random walk picture}
The above analysis suggests that the crossover from the pseudogap regime is associated with the stress drop of the largest avalanches that release a plastic stress (or strain) precisely of the order $\sim L^{-(d-d_f)}$ \cite{SalernoRobbins,Tyukodi2019}.
In the following, we develop a physical interpretation based on the picture of an elastoplastic block performing a random walk in residual stress space in the presence of an absorbing boundary at $x=0$ (local yielding) \cite{LinWyartPRX}.
A stable walker located in the pseudogap region feels the stress redistribution originating from the rearrangement of the weakest site, destabilized through extremal dynamics. If a plastic rearrangement occurs at a nearby site, the kick felt by the walker is of the order of $\delta x_{uc} \sim 1$, and if the kick is destabilizing, the walker has a high chance of being absorbed. This situation can occur whatever the size of the avalanche. If, however, the rearrangement happens very far from the walker, the latter experiences only a far-field effect that is very sensitive to the size of the avalanche: it feels a kick proportional to the lower cutoff of the mechanical noise. In that case, the size of the avalanche matters, but as long as the walker is located at $x>x_c$, it is not absorbed and its exploration can continue. This scenario, i.e.~absorption triggered mainly by the near field, is valid up to the moment when the walker reaches $x \sim \Delta x_c \sim x_c$. At this value of $x$, the largest avalanches induce far-field kicks that can trigger the absorption of the stable site. Consequently, for all stable walkers located around and below $x_c$, large avalanches increase the probability of being absorbed. Large avalanches are not predominant, and smaller avalanches are more likely. This makes it possible for the walker to explore the region below $x_c$, but now the mechanical noise associated with avalanches of smaller size can also trigger absorption. The closer the walker comes to the absorbing boundary, the higher the chances of absorption, as almost all types of avalanches (extended or not) can induce a far-field kick that triggers absorption. In fact, the walker can explore residual stress values down to $x \sim \delta x_c$, the lower cutoff of the stress kicks from the smallest events. There, surviving is only possible if the kick is stabilizing.
The enhanced absorption for $x \le x_c$ manifests as an increase of the drift in that region and is associated with an increase of destabilizing kicks related to far-field rearrangements. The number of large avalanches increases with the system size, and so does the probability of being absorbed below $x_c$. This explains why $\tilde{\theta}$ decreases with $L$ and should completely vanish in the thermodynamic limit.
We expect that the departure from the pseudogap regime at $x_c \sim L^{-(d-d_f)}$ is shared by every EPM, provided that small enough $x$ values are investigated. Although other recent works showing the departure from the pseudogap regime do not envision this scenario, they report values of crossover exponents $c$ compatible with $c=d-d_f$ both in 2d \cite{Tyukodi2019,Ferrero2020} and in 3d \cite{Ferrero2020}.
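The qualitative picture above can be caricatured by a one-dimensional ensemble of walkers with broadly distributed kicks, a drift, and an absorbing boundary at $x=0$. The sketch below is a toy model only, not the full EPM dynamics: kicks are drawn independently (the $\mu=1$ case), and absorbed walkers are simply re-injected uniformly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def kicks(n, a=1e-3, b=1.0):
    """Symmetric kicks with P(|dx|) ~ |dx|^{-2} between cutoffs a, b."""
    u = rng.random(n)
    mag = 1.0 / (1.0 / a - u * (1.0 / a - 1.0 / b))
    return mag * rng.choice([-1.0, 1.0], n)

x = rng.random(200_000)               # ensemble of residual stresses
v = 5e-3                              # drift per step, toward x = 0
for _ in range(500):
    x = x - v + kicks(x.size)
    dead = x <= 0.0
    x[dead] = rng.random(dead.sum())  # re-inject absorbed walkers

hist, edges = np.histogram(x[x < 1], bins=np.logspace(-3, 0, 30))
# the small-x slope of hist estimates the pseudogap exponent
\end{verbatim}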
\subsection{Scaling relations} An important consequence of our findings pertains to the finite-size scaling of the weakest sites in the system. Assuming that residual stresses can be seen as independent random variables, extreme value statistics dictates that for $P(x)\sim x^{\theta}$ one has $\langle x_{min}\rangle\sim L^{-\alpha}$ with $\alpha=d/(1+\theta)$. Setting $\langle x_{min}\rangle\sim \langle S \rangle L^{-d}$ leads to an important scaling law linking the pseudogap exponent to the avalanche statistics, $\tau=2 - \theta \alpha / d_f = 2 - \theta d/(d_f(\theta +1))$ \cite{LinWyart2014}.
As shown in the Appendix, this relation continues to hold even in the presence of an intermediate power law or plateau in $P(x)$ but only if the departure vanishes at least as fast as $\xm$ with increasing system size \cite{Ferrero2020}.
However, the fluctuations of total stress observed in our data indicate that correlations among the random variables play a significant role. We also show in the Appendix that when $P(x)$ gives way to a plateau below $x_c\sim \Delta x_c$, $\alpha = d-p$ in the thermodynamic limit, and thus with $c=d-d_f$
\begin{align}
\tau = 2 - \frac{(d-d_f)\theta}{d_f}\, .
\end{align}
With our measured value of $p=0.61$, $\theta = 0.52$, and $d_f = 0.95$ in the accessible range of system sizes, we find that $\alpha=1.39$ and $\tau=1.43$. These new scaling relations for $\alpha$ and $\tau$ are therefore in better agreement with our values $\alpha=1.4$ and $\tau = 1.37$ than the previously derived scaling laws $\alpha=d/(1+\theta)$ and $\tau = 2-\theta d /(d_f(1+\theta))$ \cite{LinWyart2015} which would predict $\alpha = 1.32$ and $\tau = 1.28$.
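The arithmetic behind this comparison is summarised in the following snippet:
\begin{verbatim}
d, d_f, theta, p = 2, 0.95, 0.52, 0.61

alpha_new = d - p                                # 1.39
tau_new   = 2 - (d - d_f) * theta / d_f          # ~1.43
alpha_old = d / (1 + theta)                      # ~1.32
tau_old   = 2 - theta * d / (d_f * (1 + theta))  # ~1.28
print(alpha_new, tau_new, alpha_old, tau_old)
\end{verbatim}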
Table~\ref{tab:crit_exponents} summarizes all critical exponents and scaling laws proposed in the present work.
\section{Conclusion}
\red{In an amorphous solid sheared under athermal quasistatic conditions, the stress of a local region evolves due to both individual and collective plastic events elsewhere. Despite the presence of large avalanches, the unconditioned distribution of mechanical noise in our EPM implementation is still mostly dominated by small plastic events coming from individual STs, whose kick sizes are bounded from below by a system-size-dependent cutoff. This first characteristic scale manifests as a terminal plateau in the distribution of residual stresses that vanishes as $\sim L^{-d/\mu}$ with increasing system size and becomes irrelevant in the thermodynamic limit. This plateau corresponds to the one discussed in previous works \cite{Ferrero2020}, where it was attributed to the scale set by the typical or mean size of the stress kicks. Our interpretation is compatible with this picture because for $1\le \mu <2$, the mean of the kick distribution is indeed proportional to the lower cutoff, which sets the smallest physical scale.}
Nevertheless, the mechanical noise from extended plastic events has dramatic implications. The stress change due to the largest avalanches sets a second, larger characteristic scale for the magnitude of the mechanical noise that causes a departure from the pure power law form for larger residual stresses. Below the crossover $x_c$, large avalanches become rare as a consequence of the enhanced absorption related to large events.
\red{For any finite system size $L$, our calculations reveal a previously unnoticed intermediate power-law regime in the distribution $P(x)$ below $x_c$ that is characterized by a system-size dependent exponent $\tilde{\theta}(L)$. The average size $\langle S \rangle$ of the plastic events increases with system size, implying an increase of the drift and an enhanced reduction in the density of individual rearrangements, leading to the plateauing of the intermediate power-law in the thermodynamic limit. Since the crossover is set by large avalanches, we expect $\xm$ to belong to an extended plateau region. Indeed $\xm$ vanishes more quickly with increasing system size than $x_c$. As explained above, this implies $\xm \sim L^{-\alpha}$ with $\alpha=d-\theta(d-d_f)$ at all system sizes and determines the average value of weakest site which in turn controls the mean avalanche size. In our new scaling relation, the avalanche shape as described by the fractal dimension $d_f$ enters explicitly. This insight is the central result of our work. }
Our results offer possible new routes of interpretation for the yielding transition. In particular, it would be interesting to see how the phenomenology observed here in two dimensions would manifest in three dimensions, where the geometry of avalanches encoded in $d_f$ would change. Moreover, one might wonder whether the picture of correlated events inducing the departure from the pseudogap regime is still valid in the transient regime, for which recent atomistic simulations reported the appearance of a plateau in $P(x)$ after only a few percent of deformation \cite{Ruscher2020} and an increase in the fractal dimension \cite{Oyama2020} with respect to the elastic regime \cite{shang2020elastic}.
\begin{acknowledgements}
We thank Peter Sollich and Jack Parley for a critical reading of our manuscript. This research was undertaken thanks, in part, to funding from the Canada First Research Excellence Fund, Quantum Materials and Future Technologies Program. High performance computing resources were provided by ComputeCanada. C.R acknowledges financial support from the ANR LatexDry project, grant ANR-18-CE06-0001 of the French Agence Nationale de la Recherche.
\end{acknowledgements}
\section*{Computational Details}
Geometry optimizations of the anion rings were performed
using the program package Gaussian16\cite{g16short} with the
PBE0\cite{adamo1999toward} density functional theory method
and a def2-TZVP basis set. After eliminating
unconverged, non-planar and acyclic structures,
the resulting 109 rings were combined with monovalent cations to construct
unit cell geometries for periodic calculations.
An initial lattice constant of minimum $c=5.0$ \AA{}---with the
metal placed at $c/2$ \AA{} from the ring---and
{\bf a} and {\bf b} vectors of length $50$ \AA~ were used to emulate vacuum.
Full relaxation of RMQ1Ds were performed with
the all-electron, numeric atom-centered orbital code
FHI-aims\cite{blum2009ab} with the PBE\cite{perdew1996generalized} functional. In all calculations, we used
1x1x64 $k$-grids, and
tight, tier-1 basis set for all atoms.
Lattice vectors and atomic coordinates were fully relaxed
with electron density converged to $10^{-6}$ $e/$\AA$^3$,
\INBLUE{analytic forces to $5\times10^{-4}$ eV/\AA},
and the maximum force
component \INBLUE{for lattice relaxation} to $5\times10^{-3}$ eV/\AA.
Phonon spectra were obtained \INBLUE{with a 1x1x3 supercell} using
finite-derivatives of analytic forces
with an atomic displacement of 0.005 \AA{}
and tighter thresholds \INBLUE{($10^{-7}$ $e/$\AA$^3$ for density;
$10^{-6}$ eV/\AA{} for analytic forces)}
using Phonopy\cite{togo2015first}
interfaced with FHI-aims. \INBLUE{These control settings yield
converged results as shown in Table~S2 and Table~S3 in the SI.
Table~S1 compares the performance of the PBE functional
with other semilocal and hybrid ones. Since the packing interaction in the
RMQ1D materials is essentially ionic in nature, van der Waals
correction to PBE influences the
geometries and phonons negligibly. The lack of critical long-range
interactions in the RMQ1D materials is also indicated by the fact that
the PBE results are comparable to that of the hybrid
methods HSE03 or PBE0.
Treatments at the latter level are necessary for
accurate modeling of Peierls phases with significant
long-range interaction within the unit cell as noted for polyacetylenes\cite{dumont2010peierls}.}
Single point calculations were
performed with the PBE0 functional
for accurate estimations of total energies and band gaps.
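For readers wishing to reproduce the setup, the periodic unit cells are straightforward to construct, for instance with ASE. The sketch below is an illustration only: the C$_5$H$_5^-$/Li$^+$ choice, the ring radii and the metal height are placeholders rather than our actual inputs.
\begin{verbatim}
import numpy as np
from ase import Atoms
from ase.io import write

n, rC, rH = 5, 1.20, 2.27             # ring size, approximate radii (A)
phi = 2 * np.pi * np.arange(n) / n
pos  = [(rC * np.cos(t), rC * np.sin(t), 0.0) for t in phi]  # carbons
pos += [(rH * np.cos(t), rH * np.sin(t), 0.0) for t in phi]  # hydrogens
pos += [(0.0, 0.0, 2.5)]              # metal at c/2 for c = 5.0 A

cell = Atoms("C5H5Li", positions=pos,
             cell=[50.0, 50.0, 5.0], pbc=True)
write("geometry.in", cell, format="aims")  # FHI-aims geometry file
\end{verbatim}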
\section*{Supplementary Material}
The supplementary material contains:
(i) additional tables
with results benchmarking the performance of
DFT methods and control parameters
for selected RMQ1D materials,
(ii) reference chemical potential for calculating formation energies, and
(iii) screenshots for data-mining (see Data Availability).
\section*{Acknowledgments}
RR gratefully acknowledges Prof. Matthias Scheffler for providing a license to the FHI-AIMS program.
The authors thank Salini Senthil for setting up the data-mining framework.
We acknowledge support of the Department of Atomic Energy, Government
of India, under Project Identification No.~RTI~4007.
All calculations have been performed using the Helios computer cluster, which is an integral part of the MolDis Big Data facility, TIFR Hyderabad \href{https://moldis.tifrh.res.in}{(https://moldis.tifrh.res.in)}.
\section*{Data Availability}
The data that support the findings of this study are openly available
at \href{http://moldis.tifrh.res.in/data/rmq1d}{(https://moldis.tifrh.res.in/data/rmq1d)}.
\INBLUE{
Input and output files of corresponding calculations are deposited in the NOMAD repository \href{https://nomad-lab.eu/}{(https://nomad-lab.eu/)}.}
\section{Introduction}
With the growing amount of biomedical information available in textual form, there has been considerable interest in applying natural language processing (NLP) techniques and machine learning (ML) methods to the biomedical literature~\cite{huang2015community,leaman2016taggerone,singhal2016text,peng2016improving}. One of the most important tasks is to extract protein-protein interaction relations~\cite{krallinger2008overview}.
Protein-protein interaction (PPI) extraction is a task to identify interaction relations between protein entities mentioned within a document. While PPI relations can span over sentences and even cross documents, current works mostly focus on PPI in individual sentences~\cite{pyysalo2008comparative,tikk2010comprehensive}. For example, ``ARFTS'' and ``XIAP-BIR3'' are in a PPI relation in the sentence ``ARFTS$_{\text{PROT1}}$ specifically binds to a distinct domain in XIAP-BIR3$_{\text{PROT2}}$''.
Recently, deep learning methods have achieved notable results in various NLP tasks~\cite{manning2015computational}. For PPI extraction, convolutional neural networks (CNN) have been adopted and applied effectively~\cite{zeng2014relation, quan2016multichannel, hua2016shortest}. Compared with traditional supervised ML methods, the CNN model is more generalizable and does not require tedious feature engineering efforts. However, how to incorporate linguistic and semantic information into the CNN model remains an open question. Thus previous CNN-based methods have not achieved state-of-the-art performance in the PPI task~\cite{zhao2016protein}.
In this paper, we propose a multichannel dependency-based convolutional neural network, McDepCNN\xspace, to provide a new way to model the syntactic sentence structure in CNN models. Compared with the widely-used one-hot CNN model (e.g., the shortest-path information is first transformed into a binary vector, which is zero in all positions except at this shortest-path's index, and then fed to the CNN), McDepCNN\xspace utilizes a separate channel to capture the dependencies of the sentence syntactic structure.
To assess McDepCNN\xspace, we evaluated our model on two benchmarking PPI corpora, AIMed~\cite{bunescu2005comparative} and BioInfer~\cite{pyysalo2007bioinfer}. Our results show that McDepCNN\xspace performs better than the state-of-the-art feature- and kernel-based methods.
We further examined McDepCNN\xspace in two experimental settings: a cross-corpus evaluation and an evaluation on a subset of ``difficult'' PPI instances previously reported~\cite{tikk2013detailed}. Our results suggest that McDepCNN\xspace is more generalizable and capable of capturing long distance information than kernel methods.
The rest of the manuscript is organized as follows. We first present related work. Then, we describe our model in Section~\ref{sec:cnn}, followed by an extensive evaluation and discussion in Section~\ref{sec:results}. We conclude in the last section.
\section{Related work}
From the ML perspective, we formulate the PPI task as a binary classification problem where discriminative classifiers are trained with a set of positive and negative relation instances. In the last decade, ML-based methods for the PPI task have been dominated by two main types: feature-based and kernel-based methods. The common characteristic of these methods is to transform relation instances into a set of features or rich structural representations like trees or graphs, by leveraging linguistic analysis and knowledge resources. Then a discriminative classifier is used, such as support vector machines~\cite{vapnik1995nature} or conditional random fields~\cite{lafferty2001conditional}.
While these methods allow the relation extraction systems to inherit the knowledge discovered by the NLP community for the pre-processing tasks, they are highly dependent on feature engineering~\cite{fundel2007relex, vanlandeghem2008extracting, miwa2009rich, bui2011ahybrid}. The difficulty with feature-based methods is that data cannot always be easily represented by explicit feature vectors.
Since natural language processing applications involve structured representations of the input data, deriving good features is difficult, time-consuming, and requires expert knowledge. Kernel-based methods attempt to solve this problem by implicitly calculating dot products for every pair of examples~\cite{erkan2007semi, airola2008all, miwa2009protein, kim2010walk, chowdhury2011astudy}. Instead of extracting feature vectors from examples, they apply a similarity function between examples and use a discriminative method to label new examples~\cite{tikk2010comprehensive}. However, this method also requires manual effort to design a similarity function which can not only encode linguistic and semantic information in the complex structures but also successfully discriminate between examples. Kernel-based methods are also criticized for having higher computational complexity~\cite{collins2002new}.
Convolutional neural networks (CNN) have recently achieved promising results in the PPI task~\cite{zeng2014relation, hua2016shortest}. CNNs are a type of feed-forward artificial neural network whose layers are formed by a convolution operation followed by a pooling operation~\cite{lecun1998gradient}. Unlike feature- and kernel-based methods which have been well studied for decades, few studies investigated how to incorporate syntactic and semantic information into the CNN model. To this end, we propose a neural network model that makes use of automatically learned features (by different CNN layers) together with manually crafted ones (via domain knowledge), such as words, part-of-speech tags, chunks, named entities, and dependency graph of sentences. Such a combination in feature engineering has been shown to be effective in other NLP tasks also (e.g.~\cite{shimaoka2017neural}).
Furthermore, we propose a multichannel CNN, a model that was suggested to capture different ``views'' of input data. In the image processing, \cite{krizhevsky2012imagenet} applied different RGB (red, green, blue) channels to color images. In NLP research, such models often use separate channels for different word embeddings~\cite{yin2015multichannel, shi2016multichannel}. For example, one could have separate channels for different word embeddings~\cite{quan2016multichannel}, or have one channel that is kept static throughout training and the other that is fine-tuned via backpropagation~\cite{kim2014convolutional}. Unlike these studies, we utilize the head of the word in a sentence as a separate channel.
\section{CNN Model}
\label{sec:cnn}
\subsection{Model Architecture Overview}
Figure~\ref{fig:overview} illustrates the overview of our model, which takes a complete sentence with mentioned entities as input and outputs a probability vector (two elements) corresponding to whether there is a relation between the two entities. Our model mainly consists of three layers: a multichannel embedding layer, a convolution layer, and a fully-connected layer.
\begin{figure*}
\centering
\includegraphics[width=.98\textwidth,trim={.5cm 4cm 22cm 0}]{images/v4-figure.pdf}
\caption{Overview of the CNN model.\label{fig:overview}}
\end{figure*}
\subsection{Embedding Layer}
In our model, as shown in Figure~\ref{fig:overview}, each word in a sentence is represented by concatenating its word embedding, part-of-speech, chunk, named entity, dependency, and position features.
\subsubsection{Word embedding}
Word embedding is a language modeling technique where words from the vocabulary are mapped to vectors of real numbers. It has been shown to boost the performance in NLP tasks. In this paper, we used pre-trained word embedding vectors~\cite{pyysalo2013distributional} learned on PubMed articles using the word2vec tool~\cite{mikolov2013distributed}. The dimensionality of word vectors is 200.
\subsubsection{Part-of-speech}
We used the part-of-speech (POS) feature to extend the word embedding. Similar to~\cite{zhao2016drug}, we divided POS into eight groups. Then each group is mapped to an eight-bit binary vector. In this way, the dimensionality of the POS feature is 8.
\subsubsection{Chunk}
We used the chunk tags obtained from Genia Tagger for each word~\cite{tsuruoka2005bidirectional}. We encoded the chunk features using a one-hot scheme. The dimensionality of chunk tags is 18.
\subsubsection{Named entity}
To generalize the model, we used four types of named entity encodings for each word. The named entities were provided as input by the task data. In one PPI instance, the types of two proteins of interest are PROT1 and PROT2 respectively. The type of other proteins is PROT, and the type of other words is O. If a protein mention spans multiple words, we marked each word with the same type (we did not use a scheme such as IOB). The dimensionality of named entity is thus 4.
\subsubsection{Dependency}
To add the dependency information of each word, we used the label of the ``incoming'' edge of that word in the dependency graph. Taking the sentence in Figure~\ref{fig:dg} as an example, the dependency of ``ARFTS'' is ``nsubj'' and the dependency of ``binds'' is ``ROOT''. We encoded the dependency features using a one-hot scheme; their dimensionality is 101.
\begin{figure}
\centering
\includegraphics[width=.49\textwidth]{images/figure2.pdf}
\caption{Dependency graph.\label{fig:dg}}
\end{figure}
\subsubsection{Position feature}
In this work, we consider the relationship of two protein mentions in a sentence. Thus, we used the position feature proposed in~\cite{sahu2016relation}, which consists of two relative distances, $d1$ and $d2$, representing the distances of the current word to PROT1 and PROT2, respectively. For example, in Figure~\ref{fig:dg}, the relative distances of the word ``binds'' to PROT1 (``ARFTS'') and PROT2 (``XIAP-BIR3'') are 2 and -6, respectively. Same as in Table S4 of~\cite{zhao2016drug}, both $d1$ and $d2$ are non-linearly mapped to a ten-bit binary vector, where the first bit stands for the sign and the remaining bits for the distance.
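As an illustration, the snippet below sketches such an encoding in Python. The bin boundaries are our own assumptions for illustration; the exact non-linear mapping of Table S4 in~\cite{zhao2016drug} is not reproduced here.
\begin{verbatim}
# Hedged sketch of the position feature: a signed relative distance
# is mapped to a ten-bit vector (1 sign bit + 9 distance bits).
# The bin boundaries below are illustrative assumptions.
def encode_distance(d, bins=(1, 2, 3, 4, 5, 8, 16, 32)):
    vec = [0] * 10
    vec[0] = 1 if d < 0 else 0          # sign bit
    mag = abs(d)
    for i, b in enumerate(bins):
        if mag <= b:
            vec[1 + i] = 1              # one-hot distance bin
            return vec
    vec[9] = 1                          # overflow bin for |d| > 32
    return vec

# e.g. "binds": d1 = 2 (to PROT1), d2 = -6 (to PROT2)
print(encode_distance(2), encode_distance(-6))
\end{verbatim}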
\subsection{Multichannel Embedding Layer}
A novel aspect of McDepCNN\xspace is to add the ``head'' word representation of each word as the second channel of the embedding layer. For example, the second channel of the sentence in Figure~\ref{fig:dg} is ``binds binds ROOT binds domain domain binds domain'' as shown in Figure~\ref{fig:overview}. There are several advantages of using the ``head'' of a word as a separate channel.
First, it intuitively incorporates the dependency graph structure into the CNN model. Compared with~\cite{hua2016shortest} which used the shortest path between two entities as the sole input for CNN, our model does not discard information outside the scope of two entities. Such information was reported to be useful~\cite{zhou2007tree}. Compared with~\cite{zhao2016drug} which used the shortest path as a bag-of-word sparse 0-1 vector, our model intuitively reflects the syntactic structure of the dependencies of the input sentence.
Second, together with convolution, our model can better capture longer distance dependencies than the sliding window size. As shown in Figure~\ref{fig:dg}, the second channel of McDepCNN\xspace breaks the dependency graph structure into structural $<$head word, child word$>$ pairs where each word is a modifier of its previous word. In this way, it reflects the skeleton of a constituent where the second channel shadows the detailed information of all sub-constituents in the first channel. From the perspective of the sentence string, the second channel is similar to a gapped $n$-gram or a skipped $n$-gram where the skipped words are based on the structure of the sentence.
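To make the construction of the second channel concrete, the following is a minimal Python sketch. The token list and head indices are hypothetical; in practice they come from the dependency parse.
\begin{verbatim}
# Build the "head" channel from a dependency parse. heads[i] is
# assumed to hold the index of the syntactic head of token i, with
# the root token pointing to itself.
def head_channel(tokens, heads, root_label="ROOT"):
    return [root_label if h == i else tokens[h]
            for i, h in enumerate(heads)]

# hypothetical tokens and head indices for illustration
tokens = ["ARFTS", "binds", "to", "a", "distinct", "domain"]
heads = [1, 1, 1, 5, 5, 1]
print(" ".join(head_channel(tokens, heads)))
# -> binds ROOT binds domain domain binds
\end{verbatim}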
\subsection{Convolution}
We applied convolution to input sentences to combine two channels and get local features~\cite{collobert2011natural}. Consider $x_1,\dotsc,x_n$ to be the sequence of word representations in a sentence where
\begin{equation}
x_i=E_{word}\oplus\dotsb\oplus E_{position}, i=1,\dotsc,n
\end{equation}
Here $\oplus$ is the concatenation operation, so $x_i \in \mathbb{R}^d$ is the embedding vector for the $i$th word with dimensionality $d$. Let $x_{i:i+k-1}^{c}$ represent a window of size $k$ in the sentence for channel $c$. Then the output sequence of the convolution layer is
\begin{equation}
con_i=f(\sum_{c}{w_k^{c} x_{i:i+k-1}^{c}}+b_k)
\end{equation}
where $f$ is the rectified linear unit (ReLU) function and $b_k$ is the bias term. Both $w_k^{c}$ and $b_k$ are learned parameters.
1-max pooling was then performed over each map, i.e., the largest number from each feature map was recorded. In this way, we obtained fixed length global features for the whole sentence. The underlying intuition is to consider only the most useful feature from the entire sentence.
\begin{equation}
m_k = \max_{1\leq i \leq n-k+1}{(con_i)}
\end{equation}
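For clarity, the following NumPy sketch implements the convolution and 1-max pooling equations above for a single filter of window size $k$: the window responses of both channels are summed, passed through ReLU, and reduced by 1-max pooling. Shapes and values are illustrative.
\begin{verbatim}
import numpy as np

# One filter of window size k over a two-channel sentence matrix.
# x: (channels, n, d); w: (channels, k, d); returns the pooled scalar.
def conv_1max(x, w, b, k):
    C, n, d = x.shape
    outs = []
    for i in range(n - k + 1):
        window = x[:, i:i + k, :]                  # same window, each channel
        s = sum(np.sum(w[c] * window[c]) for c in range(C)) + b
        outs.append(max(s, 0.0))                   # ReLU
    return max(outs)                               # 1-max pooling

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 10, 8))                    # toy input, d = 8
w = rng.normal(size=(2, 3, 8))                     # filter for k = 3
print(conv_1max(x, w, 0.1, 3))
\end{verbatim}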
\subsection{Fully Connected Layer with Softmax}
To make a classifier over the extracted global features, we first applied a fully connected layer to the multichannel feature vectors obtained above.
\begin{equation}
O = w_o (m_3 \oplus m_5 \oplus m_7) + b_o
\end{equation}
The final softmax then receives this vector $O$ as input and uses it to classify the PPI; here we assume binary classification for the PPI task and hence depict two possible output states.
\begin{equation}
p(ppi|x,\theta)=\frac{e^{O_{ppi}}}{e^{O_{ppi}}+e^{O_{other}}}
\end{equation}
Here, $\theta$ is the vector of trainable parameters of the model, i.e., $w_k^c$, $b_k$, $w_o$, and $b_o$. Further, we used the dropout technique on the output of the max pooling layer for regularization~\cite{srivastava2014dropout}. This prevents our method from overfitting by randomly ``dropping'' neurons with probability $(1-p)$ during each forward/backward pass while training.
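A minimal sketch of this output stage is given below, assuming the common ``inverted dropout'' rescaling by $1/p$ at training time (the rescaling convention is our assumption, not stated in the text).
\begin{verbatim}
import numpy as np

# Output layer sketch: dropout on the pooled features, then a fully
# connected layer and a numerically stable two-class softmax.
def output_layer(m_cat, w_o, b_o, p=0.5, training=True,
                 rng=np.random.default_rng(0)):
    if training:
        mask = rng.random(m_cat.shape) < p         # keep with probability p
        m_cat = m_cat * mask / p                   # inverted-dropout rescaling
    O = w_o @ m_cat + b_o                          # logits: (ppi, other)
    e = np.exp(O - O.max())
    return e / e.sum()                             # p(ppi), p(other)
\end{verbatim}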
\subsection{Training}
To train the parameters, we maximized the log-likelihood over mini-batch training with a batch size of $m$. We used the Adam algorithm to optimize this objective~\cite{kingma2015adam}.
\begin{equation}
J(\theta) = \sum_{m}{\log p(ppi^{(m)}|x^{(m)},\theta)}
\end{equation}
\subsection{Experimental setup}
For our experiments, we used the Genia Tagger to obtain the part-of-speech, chunk tags, and named entities of each word~\cite{tsuruoka2005bidirectional}. We parsed each sentence using the Bllip parser with the biomedical model~\cite{charniak2000maximum, mcclosky2009any}. The universal dependencies were then obtained by applying the Stanford dependencies converter on the parse tree with the \textit{CCProcessed} and \textit{Universal} options~\cite{de2014universal}.
We implemented the model using TensorFlow~\cite{abadi2016tensorflow}. All trainable variables were initialized using the Xavier algorithm~\cite{glorot2010understanding}. We set the maximum sentence length to 160. That is, longer sentences were pruned, and shorter sentences were padded with zeros. We set the learning rate to be 0.0007 and the dropping probability 0.5. During the training, we ran 250 epochs of all the training examples. For each epoch, we randomized the training examples and conducted a mini-batch training with a batch size of 128 ($m=128$).
In this paper, we experimented with three window sizes: 3, 5, and 7, each of which has 400 filters. Every filter performs convolution on the sentence matrix and generates variable-length feature maps. We got the best results using the single window of size 3 (see Section~\ref{sec:results and discussion}).
\section{Results and Discussion}
\label{sec:results}
\subsection{Data}
We evaluated McDepCNN\xspace on two benchmarking PPI corpora, AIMed~\cite{bunescu2005comparative} and BioInfer~\cite{pyysalo2007bioinfer}. These two corpora have different sizes (Table~\ref{tab:corpus}) and vary slightly in their definition of PPI~\cite{pyysalo2008comparative}.
\begin{table}
\caption{Statistics of the corpora.\label{tab:corpus}}
\centering
\begin{tabularx}{.48\textwidth}{Xrrr}
\hline
Corpus & Sentences & \# Positives & \# Negatives\\
\hline
AIMed & 1,955 & 1,000 & 4,834\\
BioInfer & 1,100 & 2,534 & 7,132\\
\hline
\end{tabularx}
\end{table}
\citet{tikk2010comprehensive} conducted a comparison of a variety of PPI extraction systems on these two corpora\footnote{\url{http://mars.cs.utu.fi/PPICorpora}}. In order to compare, we followed their experimental setup to evaluate our methods: self-interactions were excluded from the corpora and 10-fold cross-validation (CV) was performed.
\subsection{Results and discussion}
\label{sec:results and discussion}
Our system performance, as measured by Precision, Recall, and F1-score, is shown in Table~\ref{tab:evaluation}. To compare, we also include the results published in~\cite{tikk2010comprehensive, peng2015extended, vanlandeghem2008extracting, fundel2007relex}. Row 2 reports the results of the previous best deep learning system on these two corpora. Rows 3 and 4 report the results of two previous best single kernel-based methods, an APG kernel~\cite{airola2008all, tikk2010comprehensive} and an edit kernel~\cite{peng2015extended}. Rows 5-6 report the results of two rule-based systems.
As can be seen, McDepCNN\xspace achieved the highest results in both precision and overall F1-score on both datasets.
\begin{table*}
\newcommand{\rowno}[1]{$_\mathit{#1}$}
\caption{Evaluation results. Performance is reported in terms of Precision, Recall, and F1-score.\label{tab:evaluation}}
\centering
\begin{tabularx}{\textwidth}{lXrrrrrrr}
\hline
& & \multicolumn{3}{c}{AIMed} && \multicolumn{3}{c}{BioInfer}\\
\cline{3-5}\cline{7-9}
\multicolumn{2}{l}{Method} & P & R & F && P & R & F\\
\hline
\rowno{1} & McDepCNN\xspace & 67.3 & 60.1 & \textbf{63.5} && 62.7 & 68.2 & \textbf{65.3}\\
\rowno{2} & Deep neural network~\cite{zhao2016protein} & 51.5 & 63.4 & 56.1 && 53.9 & 72.9 & 61.6\\
\rowno{3} & All-path graph kernel~\cite{tikk2010comprehensive} & 49.2 & 64.6 & 55.3 && 53.3 & 70.1 & 60.0\\
\rowno{4} & Edit kernel~\cite{peng2015extended} & 65.3 & 57.3 & 61.1 && 59.9 & 57.6 & 58.7\\
\rowno{5} & Rich-feature~\cite{vanlandeghem2008extracting} & 49.0 & 44.0 & 46.0 && -- & -- & --\\
\rowno{6} & RelEx~\cite{fundel2007relex} & 40.0 & 50.0 & 44.0 && 39.0 & 45.0 & 41.0\\
\hline
\end{tabularx}
\end{table*}
Note that we did not compare our results with two recent deep-learning approaches~\cite{hua2016shortest, quan2016multichannel}. This is because unlike other previous studies, they artificially removed sentences that cannot be parsed and discarded pairs which are in a coordinate structure. Thus, our results are not directly comparable with theirs.
Neither did we compare our method with~\cite{miwa2009rich} because they combined, in a rich vector, analysis from different parsers and the output of multiple kernels.
To further test the generalizability of our method, we conducted the cross-corpus experiments where we trained the model on one corpus and tested it on the other (Table~\ref{tab:cc}). Here we compared our results with the shallow linguistic model which is reported as the best kernel-based method in~\cite{tikk2013detailed}.
The cross-corpus results show that McDepCNN\xspace achieved 24.4\% improvement in F-score when trained on BioInfer and tested on AIMed, and 18.2\% vice versa.
\begin{table*}
\caption{Cross-corpus results. Performance is reported in terms of Precision, Recall, and F1-score.\label{tab:cc}}
\centering
\begin{tabularx}{\textwidth}{lXrrrrrrr}
\hline
& & \multicolumn{3}{c}{AIMed} && \multicolumn{3}{c}{BioInfer}\\
\cline{3-5}\cline{7-9}
Method & Training corpus & P & R & F && P & R & F\\
\hline
McDepCNN\xspace & AIMed & -- & -- & -- && 39.5 & 61.4 & \textbf{48.0}\\
& BioInfer & 40.1 & 65.9 & \textbf{49.9} && -- & -- & --\\
Shallow linguistic~\cite{tikk2010comprehensive} & AIMed & -- & -- & -- && 29.2 & 66.8 & 40.6\\
& BioInfer & 76.8 & 27.2 & 41.5 && -- & -- & --\\
\hline
\end{tabularx}
\end{table*}
To better understand the advantages of McDepCNN\xspace over kernel-based methods, we followed the lead of~\cite{tikk2013detailed} to compare method performance on some known ``difficult'' instances in AIMed and BioInfer. This subset of difficult instances is defined as the ${\sim}10\%$ of all pairs that the fewest of the 14 kernels were able to classify correctly (Table~\ref{tab:difficult}).
\begin{table}
\caption{Instances that are the most difficult to classify correctly by the collection of kernels using cross-validation~\cite{tikk2013detailed}.\label{tab:difficult}}
\centering
\begin{tabularx}{.48\textwidth}{Xrr}
\hline
Corpus & Positive difficult & Negative difficult\\
\hline
AIMed & 61 & 184\\
BioInfer & 111 & 295\\
\hline
\end{tabularx}
\end{table}
Table~\ref{tab:comparisons difficult} shows the comparisons between McDepCNN\xspace and kernel-based methods on difficult instances. The results of McDepCNN\xspace were obtained from the difficult instances combined from AIMed and BioInfer (172 positives and 479 negatives), while the results of APG, Edit, and SL were obtained from AIMed, BioInfer, HPRD50, IEPA, and LLL (190 positives and 521 negatives)~\cite{tikk2013detailed}. While the input datasets are different, our outcomes are markedly higher than those of the prior studies. The results show that McDepCNN\xspace achieves 17.3\% in F1-score on difficult instances, which is more than three times better than the other kernels. Since every difficult instance could be classified correctly by at least one of the 14 kernel methods, below we only list some examples that McDepCNN\xspace can classify correctly.
\begin{enumerate}
\item Immunoprecipitation experiments further reveal that the fully assembled receptor complex is composed of two \textbf{IL-6}$_{\text{PROT1}}$, two \textbf{IL-6R alpha}$_{\text{PROT2}}$, and two gp130 molecules.
\item The phagocyte NADPH oxidase is a complex of membrane cytochrome b558 (comprised of subunits p22-phox and gp91-phox) and three cytosol proteins (\textbf{p47-phox}$_{\text{PROT1}}$, p67-phox, and p21rac) that translocate to membrane and bind to \textbf{cytochrome b558}$_{\text{PROT2}}$.
\end{enumerate}
Together with the conclusions in~\cite{tikk2013detailed}, ``positive pairs are more difficult to classify in longer sentences'' and ``most of the analyzed classifiers fail to capture the characteristics of rare positive pairs in longer sentences'', this comparison suggests that McDepCNN\xspace is probably capable of better capturing long-distance features from the sentence and is more generalizable than kernel methods.
\begin{table}
\caption{Comparisons on the difficult instances with CV evaluation. Performance is reported in terms of Precision, Recall, and F1-score$^*$.\label{tab:comparisons difficult}}
\centering
\begin{tabularx}{.48\textwidth}{Xrrr}
\hline
Method & P & R & F\\
\hline
McDepCNN\xspace& 14.0& 22.7& \textbf{17.3}\\
All-path graph kernel& 4.3& 7.9& 5.5\\
Edit kernel& 4.8& 5.8& 5.3\\
Shallow linguistic & 3.6& 7.9&4.9\\
\hline
\end{tabularx}
\raggedright
\small $^*$~The results of McDepCNN were obtained on the difficult instances combined from AIMed and BioInfer (172 positives and 479 negatives). The results of others~\cite{tikk2013detailed} were obtained from AIMed, BioInfer, HPRD50, IEPA, and LLL (190 positives and 521 negatives).
\end{table}
Finally, Table~\ref{tab:comparisons} compares the effects of different parts of McDepCNN\xspace. Here we tested McDepCNN\xspace using 10-fold CV on AIMed. Row 1 used a single window of length 3, row 2 used two windows, and row 3 used three windows. The reduced performance indicates that adding more windows did not improve the model. This is partially because the multichannel design in McDepCNN\xspace has already captured good context features for PPI. Next, we used a single channel and retrained the model with window size 3. The performance then dropped 1.1\%. These results underscore the effectiveness of using the head word as a separate channel in CNN.
\begin{table}
\caption{Contributions of different parts in McDepCNN\xspace. Performance is reported in terms of Precision, Recall, and F1-score.\label{tab:comparisons}}
\centering
\begin{tabularx}{.48\textwidth}{Xrrrr}
\hline
Method & P & R & F & $\Delta$\\
\hline
window = 3 & 67.3 & 60.1 & 63.5 & \\
window = [3,5] & 60.9 & 62.4 & 61.6 & (1.9)\\
window = [3,5,7] & 61.7 & 61.9 & 61.8 & (1.7)\\
Single channel & 62.8 & 62.3 & 62.6 & (1.1)\\
\hline
\end{tabularx}
\end{table}
\section{Conclusion}
In this paper, we describe a multichannel dependency-based convolutional neural network for the sentence-based PPI task. Experiments on two benchmarking corpora demonstrate that the proposed model outperformed the current deep learning model and single feature-based or kernel-based models. Further analysis suggests that our model is substantially more generalizable across different datasets. Utilizing the dependency structure of sentences as a separate channel also enables the model to capture global information more effectively.
In the future, we would like to investigate how to assemble different resources into our model, similar to what has been done to rich-feature-based methods~\cite{miwa2009rich} where the current best performance was reported (F-score of 64.0\% (AIMed) and 66.7\% (BioInfer)). We are also interested in extending the method to PPIs beyond the sentence boundary. Finally, we would like to test and generalize this approach to other biomedical relations such as chemical-disease relations~\cite{wei2016assessing}.
\section*{Acknowledgments}
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of Medicine. We are also grateful to Robert Leaman for the helpful discussion.
\section*{Acknowledgments}
The authors are grateful to S\~{a}o Paulo Research Foundation - FAPESP (\#2013/07375-0, \#2014/12236-1, \#2017/25908-6, and \#2019/07825-1), Brazilian National Council for Scientific and Technological Development - CNPq (\#307066/2017-7 and \#427968/2018-6), and Petrobras (\#2017/00285-6).
\bibliographystyle{IEEEtran}
\section{Discussion}
\label{s.discuss}
Unfortunately, the approaches employed for comparison purposes did not release their training evolution, so a direct comparison in Section~\ref{ss.training} is not possible. Nevertheless, it is possible to observe that all models performed very well on the image classification task. In Table~\ref{tbl.regs}, MaxDropout shows a result as good as Cutout on the CIFAR-100 dataset, demonstrating that it performs as expected in improving the baseline models' results. It did not perform as well on the CIFAR-10 dataset, but it still improves the baseline results by almost $0.5\%$.
Results from Table~\ref{tbl.cutmax} show that MaxDropout still provides improvements when another regularizer is used alongside it. Although Cutout has been used to demonstrate the proposed approach's effectiveness, one can consider other similar regularizers. The most interesting results can be found in Table~\ref{tbl.wrn}, where MaxDropout is directly compared to the standard Dropout. It shows relevant gains over the baseline model, and it performs a little better than Dropout using the same drop rate, indicating that it may be worth searching for the best drop rates for MaxDropout, which can be data- or model-dependent.
\section{Conclusions and Future Works}
\label{s.conclusion}
In this paper, we introduced MaxDropout, an improved version of the original Dropout method. Experiments show that it can be incorporated into existing models, working along with other regularizers, such as Cutout, and can replace the standard Dropout with some accuracy improvement.
With relevant results, we intend to conduct a more in-depth investigation to figure out the best drop rates depending on the model and the training data. Moreover, the next step is to re-implement MaxDropout and make it available in other frameworks, like TensorFlow and MXNet, and test in other tasks, such as object detection and image segmentation.
Nonetheless, we showed that MaxDropout works very well for image classification tasks. For future works, we intended to perform evaluations in other different tasks such as natural language processing and automatic speech recognition.
\section{Experimental Results}
\label{s.results}
This section is divided into four main parts. First, we provide a convergence study during training for all experiments. Second, we compare the results of MaxDropout with other regularizers. Third, we show that, when combined with other regularizers, MaxDropout can lead to even better performance than their original versions. Finally, we make a direct comparison between the proposed approach and the standard Dropout by replacing the equivalent layer with MaxDropout in the Wide-ResNet.
\subsection{Training Evolution}
\label{ss.training}
Figures~\ref{f.plot_cifar10} and~\ref{f.plot_cifar100} depict the mean accuracies on the test set over the $5$ runs during the training phase. Since we are dealing with regularizers, it makes sense to analyze their behavior during training by computing, for each epoch, the accuracy over the test set. One can notice that the proposed approach improves the results even when the model is close to overfitting.
\begin{figure}[htb!]
\centering
\includegraphics[width=\columnwidth]{figs/plot_cifar10.png}
\caption{Convergence over CIFAR-10 test set.}
\label{f.plot_cifar10}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\columnwidth]{figs/plot_cifar100.png}
\caption{Convergence over CIFAR-100 test set.}
\label{f.plot_cifar100} %
\end{figure}
\subsection{Comparison Against Other Regularizers}
\label{ss.max}
As aforementioned, we compared against the baselines over five runs and report the mean accuracies and standard deviations in Table~\ref{tbl.regs}. Such results evidence the robustness of the proposed approach against two other well-known regularizers, i.e., Cutout and RandomErasing.
\begin{table}[htb!]
\centering
\begin{tabular}{@{} l *4c @{}}
\toprule
\multicolumn{1}{c}{Approach} & CIFAR-100 & CIFAR-10 \\
\midrule
ResNet18~\cite{he2016identity,zhong2020random} & ${24.50 \pm 0.19}$ & ${5.17 \pm 0.18}$ & \\
ResNet18+RandomErasing~\cite{zhong2020random} & ${24.03 \pm 0.19}$ & ${4.31 \pm 0.07}$ & \\
ResNet18+Cutout~\cite{devries2017cutout} & ${21.96 \pm 0.24}$ & \bm{${3.99 \pm 0.13}$} & \\
ResNet18+MaxDropout & \bm{${21.93 \pm 0.07}$} & ${4.66 \pm 0.14}$ & \\ \bottomrule
\end{tabular}
\caption{Results of MaxDropout and other regularizers.}
\label{tbl.regs}
\end{table}
From Table~\ref{tbl.regs}, one can notice that when MaxDropout is incorporated within ResNet18 blocks, it allows the model to accomplish relevant and better results. Regarding the CIFAR-10 dataset, the model that uses MaxDropout achieved a reduction of around $0.5\%$ in the error rate when compared to ResNet18. On the CIFAR-100 dataset, the model achieved over $2\%$ less error than the same baseline, besides being statistically similar to Cutout.
\subsection{Working Along with Other Regularizers}
\label{ss.other_reg}
Since MaxDropout works inside the neural network by changing the hidden layers' values, it can be used concomitantly with methods that change information in the input, such as Cutout. Table~\ref{tbl.cutmax} portrays the results of each stand-alone approach and their combination. From these results, one can notice a slight improvement in performance on the CIFAR-100 dataset and a relevant gain on the CIFAR-10 dataset, reaching the best results so far.
\begin{table}[htb!]
\centering
\begin{tabular}{@{} l *4c @{}}
\toprule
\multicolumn{1}{c}{Regularizer} & CIFAR-100 & CIFAR-10 \\
\midrule
Cutout~\cite{devries2017cutout} & ${21.96 \pm 0.24}$ & ${3.99 \pm 0.13}$ & \\
MaxDropout & ${21.93 \pm 0.07}$ & ${4.66 \pm 0.14}$ & \\
MaxDropout + Cutout & \bm{${21.82 \pm 0.13}$} & \bm{${3.76 \pm 0.08}$} & \\ \bottomrule
\end{tabular}
\caption{Results of the MaxDropout combined with Cutout.}
\label{tbl.cutmax}
\end{table}
\subsection{MaxDropout x Dropout}
\label{ss.maxvsdrop}
One interesting question this work addresses is the following: is MaxDropout comparable to the standard Dropout~\cite{srivastava2014dropout}? To answer this question, we compared the proposed approach against the standard Dropout by replacing the latter with MaxDropout in the Wide Residual Network (WRN).
From Table~\ref{tbl.wrn}, one can observe that the model using MaxDropout works slightly better than the one using standard Dropout, reducing the error rate on the CIFAR-100 and CIFAR-10 datasets by $0.04\%$ and $0.05\%$, respectively. Although it may not look like an impressive improvement, we showed that the proposed approach has a margin to improve the overall results, mainly when the threshold of MaxDropout is taken into account (i.e., ablation studies)\footnote{We did not show the standard deviation since the original study did not present such information either.}.
\begin{table}[htb!]
\centering
\begin{tabular}{@{} l *4c @{}}
\toprule
\multicolumn{1}{c}{Model} & CIFAR-100 & CIFAR-10 \\
\midrule
WRN~\cite{zagoruyko2016wide} & ${19.25}$ & ${4.00}$ & \\
WRN + Dropout~\cite{zagoruyko2016wide} & ${18.85}$ & ${3.89}$ & \\
WRN + MaxDropout & \textbf{18.81} & \textbf{3.84} & \\ \bottomrule
\end{tabular}
\caption{Results of Dropout and MaxDropout over the WRN.}
\label{tbl.wrn}
\end{table}
\section{Introduction}
\label{s.introduction}
Following the advent of deeply connected systems and the new era of information, tons of data are generated every moment by different devices, such as smartphones or notebooks. A significant portion of the data can be collected from images or videos, which are usually encoded in a high-dimensional domain. Deep Learning (DL) techniques have been broadly employed in different knowledge fields, mainly due to their ability to create authentic representations of the real world, even for multimodal information. Recently, DL has emerged as a prominent area in Machine Learning, since its techniques have achieved outstanding results and established several hallmarks in a wide range of applications, such as motion tracking~\cite{doulamis2018}, action recognition~\cite{cao2016}, and human pose estimation~\cite{Toshev2014DeepPose,chen2014}, to cite a few.
Deep learning architectures such as Convolutional Neural Networks (CNNs), Deep Autoencoders, and Long Short-Term Memory Networks are powerful tools that deal with different image variations such as rotation or noise. However, their performance is highly data-dependent, which can cause problems during training and hinder generalization to unseen examples. One common problem is overfitting, where the technique memorizes the data, either due to the lack of information or because of overly complex neural network architectures.
Such a problem is commonly handled with regularization methods, which represent a wide area of study in the scientific community. The employment of one or more of such techniques provides useful improvements in different applications. Among them, two well-known methods can be mentioned: (i) the so-called ``Batch Normalization'' and (ii) ``Dropout''. The former was introduced by Ioffe et al.~\cite{ioffe2015batch} and performs data normalization in the output of each layer. The latter was introduced by Srivastava et al.~\cite{srivastava2014dropout} and randomly deactivates some neurons present in each layer, thus forcing the model to be sparse.
However, dropping neurons out at random may slow down convergence during learning. To cope with this issue, we introduced an improved approach for regularizing deeper neural networks, hereinafter called ``MaxDropout''~\footnote{https://github.com/cfsantos/MaxDropout-torch}, which shuts off neurons based on their maximum activation values, i.e., the method drops the most active neurons to encourage the network to learn better and more informative features. Such an approach achieved remarkable results for the image classification task, concerning two important well-established datasets.
The remainder of this paper is presented as follows: Section~\ref{s.related} introduces the correlated works, while Section~\ref{s.proposed} presents the proposed approach. Further, Section~\ref{s.experiments} describes the methodology and datasets employed in this work. Finally, Sections~\ref{s.results} and \ref{s.conclusion} provide the experimental results and conclusions, respectively.
\section{Proposed Approach}
\label{s.proposed}
The proposed approach aims at shutting off the most activated neurons, which induces sparsity in the model while encouraging the hidden neurons to learn more informative features and extract useful information that positively impacts the network's generalization ability.
For the sake of visualization, Figures~\ref{f.simulation}a-c show the differences between the proposed approach and the standard Dropout, in which Figure~\ref{f.simulation}a stands for the original grayscale image and Figures~\ref{f.simulation}b and~\ref{f.simulation}c denote their corresponding outcomes after Dropout and MaxDropout. It is important to highlight that Dropout removes any pixel of the image randomly, while MaxDropout tends to inactivate the lighter pixels.
\begin{figure}[!ht]
\centerline{\begin{tabular}{cc}
\includegraphics[width=0.47\columnwidth]{figs/original.png} &
\includegraphics[width=0.47\columnwidth]{figs/droped.png} \\
(a) & (b)\\
\includegraphics[width=0.47\columnwidth]{figs/maxdroped.png} &
\includegraphics[width=0.47\columnwidth]{figs/colored.png} \\
(c) & (d)\\
\includegraphics[width=0.47\columnwidth]{figs/colored_drop.png} &
\includegraphics[width=0.47\columnwidth]{figs/maxdroped_colored.png} \\
(e) & (f)\\
\end{tabular}}
\caption{Simulation using grayscale (a)-(c) and colored images (d)-(f): (a) original grayscale image and its outcomes after (b) Dropout and (c) MaxDropout transformations, respectively, and (d) original colored image and its outcomes after (e) Dropout and (f) MaxDropout transformations, respectively. In all cases, the drop rate is $50\%$.}
\label{f.simulation}
\end{figure}
The rationale behind the proposed approach can be better visualized in a tensor-like data. Considering the colored image showed in Figure~\ref{f.simulation}d, one can observe its outcome after Dropout and MaxDropout transformations in Figures~\ref{f.simulation}e and~\ref{f.simulation}f, respectively. Regarding standard Dropout, the image looks like a colored uniform noise, while MaxDropout could remove entire regions composed of bright pixels (i.e., pixels with high activation values, as expected).
For the sake of clarity, Algorithm~\ref{maxdropout-algorithm} implements the proposed MaxDropout\footnote{The pseudocode uses Keras syntax.}: the main loop in Lines $1-9$ is in charge of the training procedure, and the inner loop in Lines $2-8$ is executed for each hidden layer. Line $3$ computes a uniformly distributed random value that works as the dropout rate $r$. The output of each layer produces an $x\times y\times z$ tensor, where $x$ and $y$ stand for the image's size, and $z$ denotes the number of feature maps produced by each convolutional kernel. Line $4$ creates a copy of the original tensor and uses an $L_{2}$ normalization to produce an output between $0$ and $1$.
\begin{algorithm}[h]
\caption{Pseudocode for MaxDropout training algorithm.}
\label{maxdropout-algorithm}
\begin{algorithmic}[1]
\While {$training$}
\For{each layer}
\State $rate\gets U(0, r)$
\State $normTensor\gets L2Normalize(Tensor)$
\State $max\gets Max(normTensor)$
\State $keptIdx\gets IdxOf(normTensor, (1-rate)*max)$
\State $returnTensor\gets Tensor * KeptIdx$
\EndFor
\EndWhile
\end{algorithmic}
\end{algorithm}
Later, Line $5$ finds the biggest value in the normalized tensor, once it may not be equal to one\footnote{Depending on the floating-point precision, the maximum value can be extremely close but not equal to one.}. Line $6$ creates another tensor with the same shape as the input one and assigns $1$ to each position whose normalized value is smaller than the threshold $(1 - rate) \times max$; otherwise it sets such a position to $0$. Finally, Line $7$ creates the tensor to be used in the training phase, where each position of the original tensor is multiplied by the value in the respective position of the tensor created in the line before. Therefore, such a procedure guarantees that only values below the threshold derived from the rate drawn in Line $3$ go further on.
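For readers who prefer working code over pseudocode, the following NumPy sketch mirrors Algorithm~\ref{maxdropout-algorithm}. It is our own reading of the algorithm, not the authors' reference implementation (which is available in PyTorch at the repository linked in Section~\ref{s.introduction}); in particular, the use of the tensor's global $L_2$ norm and of absolute values in Line 4 are assumptions.
\begin{verbatim}
import numpy as np

# NumPy sketch of the MaxDropout forward pass (training time only).
def max_dropout(tensor, r=0.3, rng=np.random.default_rng(0)):
    rate = rng.uniform(0.0, r)                      # Line 3: rate ~ U(0, r)
    norm = np.abs(tensor) / np.linalg.norm(tensor)  # Line 4 (assumption)
    m = norm.max()                                  # Line 5
    kept = norm < (1.0 - rate) * m                  # Line 6: drop most active
    return tensor * kept                            # Line 7
\end{verbatim}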
\section{Related Works}
\label{s.related}
Regularization methods are widely used by several deep neural networks (DNNs) and with different architectures. The main idea is to help the system to prevent the overfitting problem, which causes the data memorization instead of generalization, also allowing DNNs to achieve better results. A well-known regularization method is Batch Normalization (BN), which works by normalizing the output of a giving layer at each iteration. The original work~\cite{ioffe2015batch} showed that such a process speeds up convergence for image classification tasks. Since then, several other works~\cite{zhang2017beyond,simon2016imagenet,wang2017gated}, including the current state-of-the-art on image classification~\cite{tan2019efficientnet}, also highlighted its importance.
As previously mentioned, Dropout is one of the most employed regularization methods for DNNs. Such an approach was developed between 2012 and 2014~\cite{srivastava2014dropout}, showing significant improvements in neural network performance for various tasks, ranging from image classification to speech recognition and sentiment analysis. The standard Dropout works by creating, during training time, a mask that directly multiplies all values of a given tensor. The values of such a mask follow the Bernoulli distribution, being $0$ with a probability $p$ and $1$ with a probability $1 - p$ (according to the original work~\cite{srivastava2014dropout}, the best value of $p$ for hidden layers is $0.5$). During training, some values will be kept while others will be changed to $0$. Visually, it means that some neurons will be deactivated while others will work normally.
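For comparison, a Bernoulli-mask Dropout can be sketched in a few lines of NumPy; inference-time rescaling is omitted for brevity.
\begin{verbatim}
import numpy as np

# Standard Dropout sketch: mask entries are 0 with probability p and
# 1 with probability 1 - p, as described in the text.
def dropout(tensor, p=0.5, rng=np.random.default_rng(0)):
    mask = rng.random(tensor.shape) >= p       # Bernoulli(1 - p)
    return tensor * mask
\end{verbatim}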
After the initial development of the standard Dropout, Wang and Manning~\cite{wang2013fast} explored different strategies for sampling, since at each mini-batch a subset of input features is turned off. Such a fact highlights an interesting Dropout feature, since it represents an approximation by a Markov chain executed several times during training. Since the Bernoulli distribution tends to a Normal distribution when the dimensional space is high enough, such an approximation allows Dropout to perform at its best without sampling.
In 2015, Kingma et al.~\cite{kingma2015variational} proposed the Variational Dropout, a generalization of Gaussian Dropout in which the dropout rates are learned instead of randomly attributed. They investigated a local reparameterization approach to reduce the variance of stochastic gradients in variational Bayesian inference of a posterior over the model parameters, thus retaining parallelizability. On the other hand, in 2017, Gal et al.~\cite{gal2017concrete} proposed a new Dropout variant to reinforcement learning models. Such a method aims to improve the performance and better calibration of uncertainties once it is an intrinsic property of the Dropout. In such a field, the proposed approach allows the agent to adapt its uncertainty dynamically as more data is seen.
Later on, Molchanov et al.~\cite{molchanov2017variational} explored the Variational Dropout proposed by Kingma et al.~\cite{kingma2015variational}. The authors extended it to situations when dropout rates are unbounded, leading to very sparse solutions in fully-connected and convolutional layers. Moreover, they achieved a reduction in the number of parameters up to $280$ times on LeNet architectures, and up to $68$ times on VGG-like networks with a small decrease in accuracy. Such a fact points out the importance of sparsity for parameter reduction and performance overall improvement.
In parallel, other regularization methods have emerged, such as the ones that change the input of the neural network. For instance, Cutout~\cite{devries2017cutout} works by literally cutting off a region of the image (by setting the values of a random region to $0$). This simple approach shows relevant results on several datasets. Another similar regularizer is RandomErasing~\cite{zhong2020random}, which works in the same manner, but instead of setting the values of the region to $0$, it changes these pixels to random values.
Bringing together the concepts mentioned above and the works closest to the proposed approach, one can point out that MaxDropout is similar to the standard Dropout; however, instead of randomly dropping out neurons, our approach follows a policy of shutting off the most active cells, targeting neurons that may overfit the data or discourage the less active ones from extracting useful information.
\section{Experiments}
\label{s.experiments}
In this section, we describe the methodology employed to validate the robustness of the proposed approach. The hardware used for the paper is an Intel Xeon Bronze\textsuperscript{\textregistered} $3104$ CPU with $6$ cores ($12$ threads), $1.70$GHz, $96$GB RAM with $2666$Mhz, and a GPU Nvidia Tesla P$4$ with $8$GB. Since most of the regularization methods aim to improve image classification tasks, we decided to follow the same protocol and approaches for a fair comparison.
\subsection{Neural Network Structure}
\label{ss.nnstructure}
Regarding the neural network structure, we evaluated the proposed approach in two different settings. In the former experiments, regularization layers were added to a neural network that does not originally employ any dropout between layers. In the latter experiments, the standard Dropout~\cite{srivastava2014dropout} layers were replaced by MaxDropout ones to compare results.
For the first experiment, ResNet18~\cite{he2016identity} was chosen because such an architecture has been used in several works for comparison purposes when it comes to new regularization techniques. ResNet18 is composed of a sequence of convolutional residual blocks, followed by the well-known BatchNormalization~\cite{ioffe2015batch}. As such, a MaxDropout layer was added between these blocks, changing the basic structure during training but keeping it intact for inference purposes.
In the second experiment, a slightly different approach has been performed. Here, a neural network that already has the Dropout regularization in its composition was considered for direct comparison among methods. The WideResNet~\cite{zagoruyko2016wide} uses Dropout layers in its blocks with outstanding results on image classification tasks, thus becoming a good choice.
\subsection{Training Protocol}
\label{ss.training}
In this work, we considered a direct comparison with other regularization algorithms. To be consistent with the literature, we provided the error rate instead of the accuracy itself~\cite{devries2017cutout, zagoruyko2016wide,zhong2020random}. Nonetheless, to ensure that the only difference between the proposed approach and the baselines used for comparison purposes concerns the MaxDropout layer, we strictly followed the protocols according to the original works.
To compare MaxDropout with other regularizers, we followed the protocol proposed by DeVries and Taylor~\cite{devries2017cutout}, in which each experiment is repeated five times, and the mean and the standard deviation are used for comparison purposes. For these experiments, the images from the datasets were normalized per-channel using the mean and standard deviation.
During the training procedure, the images were shifted four pixels in every direction and then cropped into $32$x$32$ pixels. Besides, the images were horizontally mirrored with a $50\%$ probability. In such a case, two comparisons were provided. In the first case, besides the data augmentation already described, only the MaxDropout was included in the ResNet18 structure, directly comparing to the other methods. Regarding the second case, the Cutout data augmentation was included, providing a direct comparison of the results, showing that the proposed approach can work nicely.
As previously mentioned, to evaluate the MaxDropout against the standard Dropout, we choose the Wide Residual Network~\cite{zagoruyko2016wide}, and the same training protocol and parameters were employed to make sure the only difference concerns the type of neuron dropping.
\subsection{Datasets}
\label{ss.datasets}
In this work, two well-established datasets in the literature were employed, i.e., CIFAR-10~\cite{Krizhevsky09learningmultiple} and its enhanced version CIFAR-100~\cite{Krizhevsky09learningmultiple}. Using such datasets allows us to compare the proposed approach toward important baseline methods, such as the standard Dropout~\cite{srivastava2014dropout} and CutOut~\cite{devries2017cutout}. Figure~\ref{f.datasets} portrays random samples extracted from the datasets mentioned above.
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=4cm]{figs/cifar10.png} &
\includegraphics[width=4cm]{figs/cifar100.png} \\
(a) & (b)
\end{tabular}
\caption{Random training samples from: (a) CIFAR-10 and (b) CIFAR-100 datasets.}
\label{f.datasets}
\end{figure}
The CIFAR-10 dataset comprises $10$ classes equally distributed in $60,000$ colored image samples, with a dimension of $32$x$32$ pixels. The entire dataset is partitioned into $50,000$ training images and $10,000$ test images. On the other hand, the CIFAR-100 dataset shares the main aspects of its smaller version, but now with $100$ classes equally distributed in $60,000$ colored image samples, i.e., 600 image samples per class. Nonetheless, the higher number of classes and the low number of samples per class make image classification significantly harder in this case.
\section{Introduction}
The theoretical basis of NPRG was formulated by K.G.~Wilson in the 1970's.\cite{wk} After that, several types of `exact' renormalization group equations were derived and have been applied to various quantum systems. Although those equations are exact, we cannot solve them without any approximation in practice. Therefore it is not trivial that such NPRG analysis can take account of the effects caused by non-perturbative dynamics even qualitatively. \par
Generally, there are two types of non-perturbative quantities. One
corresponds to summation of all orders of the perturbative series,
which might be related to the Borel resummation.\cite{gz} The other is the
essential singularity with respect to coupling constant $\lambda$, which has a structure
like $e^{-\frac{1}{\lambda}}$.\cite{co} We are not able to expand this singular
contribution around the origin of $\lambda$. This singularity is
essential in the case of quantum tunnelling. For example, in the
symmetric double well system, there are two degenerate energy levels,
one localized at each minimum, which mix through tunnelling to generate an energy gap $\Delta E \sim e^{-\frac{1}{\lambda}}$. The
exponential factor comes from the free energy of topological
configurations, the instantons. Can NPRG evaluate these non-perturbative
effects in a good manner? The main purpose of our work
is to check
this not only qualitatively but also quantitatively. We carry it out
in quantum mechanical systems by comparing our results with the exact
values given by numerical analysis of
the Schr\"odinger equation, with the perturbative series, and with the
instanton method. The instanton method is a unique analysis of quantum
tunnelling leading to the exact essential singularity, which is,
however, valid only in a very
weak coupling region. It will turn out
that the instanton method and LPA W-H eqn. are somehow
complementary to each other. \par
Though quantum tunnelling is one of the most striking consequences
of quantum theories, there has been no general-purpose tool to analyze
it. So if NPRG treats non-perturbative dynamics well, it can be a
powerful new tool for the analysis of tunnelling, and this work is a first
step towards attacking more complex systems with quantum tunnelling by NPRG.
\section{The NPRG study of quantum mechanics}
As a primary study, we would like to restrict ourselves to the
effective potential. We start with the LPA W-H eqn. for scalar
theories, where we ignore the corrections to the derivative interactions,\cite{wh,ap}
\begin{eqnarray}
\pa{\hat{V}_{\rm
eff}}{\tau}=\left[D-d_{{\varphi}}\hat{\varphi}\pa{}{\hat{\varphi}}\right]\hat{V}_{\rm eff}+{A_D \over 2}
\log\ska{1+\papa{\hat{V}_{\rm eff}}{\hat{\varphi}}{\hat{\varphi}}} .
\end{eqnarray}
Each hatted($~\hat{}~$) variable represents a dimensionless quantity
with a unit defined by the momentum cut off $\Lambda(\tau)=\rm e^{-\tau}\Lambda_{\rm 0}$, $D$ is the space-time
dimension,
$d_{\varphi}=\frac{D-2}{2}$ is the canonical dimension of scalar
field $\varphi$, and $A_D=(2\pi)^{-D}\int d\Omega _{D}$. This is a partial differential equation of
the dimensionless effective potential $\hat{V}_{\rm eff}$ with respect
to $\hat{\varphi}$ and scale parameter $\tau$. \par
Furthermore, we expand $V_{\rm eff}$ as power series of $\varphi$,
\begin{eqnarray}
V_{\rm eff} \left( \varphi ;\tau \right) &=&
\sum_{n=0}^N\frac{a_n(\tau)}{n!}\varphi ^n ,
\end{eqnarray}
which is called the operator expansion. If the results converge as the
order of truncation $N$
becomes large, we regard them as the solutions of LPA W-H eqn. The partial differential equation is reduced to a set of ordinary differential
equations for dimensionless couplings $\{\hat{a}_n\}$,
\begin{eqnarray}
\de{\hat{a}_0}{\tau}&=&~~~~~~~~D\hat{a}_0+\frac{A_D}{2}\log(1+\hat{a}_2) ,\nonumber \\
\de{\hat{a}_1}{\tau}&=&~~\frac{D+2}{2}\hat{a}_1+\frac{A_D}{2}\left[\hat{a}_3 \over
1+\hat{a}_2\right] , \nonumber \\
\de{\hat{a}_2}{\tau}&=&~~~~~~~~~2\hat{a}_2+\frac{A_D}{2}\bka{{\hat{a}_4 \over 1+\hat{a}_2}-
{\hat{a}_3^2 \over \ska{1+\hat{a}_2}^2}} , \nonumber \\
\de{\hat{a}_3}{\tau}&=&~~\frac{6-D}{2}\hat{a}_3+\frac{A_D}{2}\bka{{\hat{a}_5 \over 1+\hat{a}_2}-{3\hat{a}_4\hat{a}_3
\over \ska{1+\hat{a}_2}^2}+{2\hat{a}_3^3 \over
\ska{1+\hat{a}_2}^3}} , \nonumber \\
\de{\hat{a}_4}{\tau}&=&(4-D)\hat{a}_4+\frac{A_D}{2}\bka{{\hat{a}_6 \over 1+\hat{a}_2}-{4\hat{a}_5\hat{a}_3
\over \ska{1+\hat{a}_2}^2}+ {12\hat{a}_4\hat{a}_3^2 \over \ska{1+\hat{a}_2}^3}-{3\hat{a}_4^2
\over \ska{1+\hat{a}_2}^2}- {6\hat{a}_3^4 \over
\ska{1+\hat{a}_2}^4}} . \nonumber \\
&& \vspace{3cm} \vdots
\end{eqnarray}
In each $\beta$-function (the right-hand side of each equation), the first term represents the canonical scaling and the second term
represents one-loop quantum corrections, respectively. The common
denominator $\frac{1}{1+\hat{a}_2}$ corresponds to the
`propagator'. The constant part of $V_{\rm eff}$, $a_0$, is given by the vacuum
bubble diagrams and is usually ignored. However, we keep it here,
since it plays a crucial role in supersymmetric theories. \par
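As an illustration of how such a truncated system can be solved in practice, the sketch below integrates the $D=1$ flow for a $Z_2$-symmetric potential (odd couplings vanish) truncated at $N=4$, with $A_1/2=1/(2\pi)$. This truncation is much cruder than the $N$ up to 16 used in our actual analysis, so the numbers it produces are only indicative.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

A = 1.0 / (2.0 * np.pi)            # A_D/2 for D = 1

def beta(tau, a):                  # Z2-symmetric truncation at N = 4
    a0, a2, a4 = a                 # dimensionless couplings
    p = 1.0 + a2                   # the "propagator" denominator
    return [a0 + A * np.log(p),
            2.0 * a2 + A * a4 / p,
            3.0 * a4 - 3.0 * A * a4 ** 2 / p ** 2]

Lam0, Lamf = 100.0, 1.0e-3         # UV/IR cutoffs in units of the mass
lam0 = 0.1                         # V_0(x) = x^2/2 + lam0 * x^4
y0 = [0.0, 1.0 / Lam0 ** 2, 24.0 * lam0 / Lam0 ** 3]
sol = solve_ivp(beta, (0.0, np.log(Lam0 / Lamf)), y0, rtol=1e-8)
E0 = Lamf * sol.y[0, -1]           # vacuum energy from the constant term
print(E0)
\end{verbatim}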
Making use of these equations, we can analyze quantum mechanics,
which is $D$=1 real scalar theory with a dynamical variable $x(t)$. We
now evaluate two physical quantities, the vacuum energy $E_0$ and the energy
gap $\Delta E=E_1 -E_0$. The vacuum energy is given by,
\begin{eqnarray}
E_0=\bra{\Omega}\hat{H}\ket{\Omega} = V_{\rm
eff} \left. \right|_{x =<x>} .
\end{eqnarray}
Namely, the minimum value of
$V_{\rm eff}$ gives us the ground energy of the system. The energy gap
is obtained through the two point correlation
function,
\begin{eqnarray}
\mathop{\rm lim}_{t\to\infty}\bra{\Omega}{\rm T}\hat{x}(t)\hat{x}(0)\ket{\Omega}\propto e^{-(E_1-E_0)t} ,
\end{eqnarray}
while it is evaluated in the LPA as follows,
\begin{eqnarray}
\bra{\Omega}{\rm T}\hat{x}(t)\hat{x}(0)\ket{\Omega}
&\stackrel{\rm LPA}{=}&
\int
\frac{dE}{2\pi}e^{iEt}\frac{1}{E^2+m^2_{\rm eff}}\propto
e^{-m_{\rm eff}t} ,
\end{eqnarray}
where the squared effective mass $m_{\rm eff}^2$ is the curvature of the potential at its minimum. Comparing the damping factors as $t$ goes to
infinity, the relation $E_1-E_0=m_{\rm eff}$ follows and we use,
\begin{eqnarray}
E_{1}&=&\left.V_{\rm eff}\right|_{x=<x>}+\sqrt{\left.\frac{\partial^2V_{\rm eff}}{\partial x^2}\right|_{x=<x>}}.
\end{eqnarray}
Thus we obtain the energy spectrum from the effective potential alone.\par
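In practice, once the final couplings are known, $E_0$ and $E_1$ follow from a one-dimensional minimization; a minimal sketch (with a toy quadratic potential) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# coeffs[k] = a_k / k!, the power-series coefficients of V_eff
def spectrum(coeffs):
    V = np.polynomial.Polynomial(coeffs)
    xmin = minimize_scalar(V, bracket=(-2.0, 0.0, 2.0)).x
    E0 = V(xmin)
    E1 = E0 + np.sqrt(V.deriv(2)(xmin))  # E1 = E0 + m_eff
    return E0, E1

print(spectrum([0.5, 0.0, 0.5]))         # V = 1/2 + x^2/2 -> (0.5, 1.5)
\end{verbatim}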
To show the fundamental procedure of
analysis, we consider the case of
the harmonic oscillator. We evaluate the effective potential $V_{\rm eff}$ by solving the differential
equations for dimensionless couplings as follows,
\newpage
\vspace{5mm}
\fbox{\parbox{4cm}{initial potential \\ $V_0(x)=a_0+\frac{1}{2}a_2 x^2$}}
\hspace{1mm}$\Lambda(0)=\Lambda_{\rm 0}$
\vspace{-15mm}
\begin{flushright}
$\left( a_0~ ,~a_2\right)\longrightarrow \left( \hat{a}_0=\frac{a_0}{\Lambda_0}~ ,~\hat{a}_2=\frac{a_2}{\Lambda_0^2}\right)$
$\Downarrow \tau =0$\\
{\begin{tabular}{|l|} \hline
LPA W-H eqn.\\
$ \de{\hat{a}_0}{\tau}=\hat{a}_0+\frac{1}{2\pi}\log(1+\hat{a}_2)$ \\
$ \de{\hat{a}_2}{\tau}=2\hat{a}_2 $ \\
\hline
\end{tabular}}
$\Downarrow \tau =\tau _{\rm f}$\\
$\left( a_{0\rm f}=\Lambda_{\rm f} \hat{a}_{0f}~,~a_{2\rm f}=\Lambda^2
_{\rm f} \hat{a}_{2\rm f}\right)\longleftarrow \left
( \hat{a}_{0\rm f}~,~\hat{a}_{2\rm f}\right)$
\end{flushright}
\vspace{-15mm}
\hspace{5mm}
\fbox{\parbox{4cm}{final potential \\ $V_{\rm eff}(x)=a_{0\rm
f}+\frac{1}{2}a_{2\rm f} x^2$}}
\hspace{1mm}$\Lambda_{\rm f}=\rm e^{-\tau_{\rm f}}\Lambda_{\rm 0}$\\
\par
\vspace{5mm}
\hspace{-7mm}
In this case, we can carry out the above procedure analytically,
\begin{eqnarray}
a_0 (\Lambda _{\rm f})\!\!&=&\!\!a_0(\Lambda_0)+
\frac{\sqrt{a_2(\Lambda_0)}}{2\pi}\left[\hat{p}\log \frac{1+\hat{p}^2}{\hat{p}^2}+2\tan^{-1}\hat{p}
\right]^{\hat{p}=\frac{\Lambda_0}{\sqrt{a_2(\Lambda_0)}}}_{\hat{p}=\frac{\Lambda_{\rm f}}{\sqrt{a_2(\Lambda_0)}}} , \\
a_2(\Lambda _{\rm f})\!\!&=&\!\!a_2(\Lambda_0) .
\end{eqnarray}
If we take initial conditions
$(a_0(\Lambda_0),a_2(\Lambda_0))=(0,m^2)$, then we get
$(a_0(\Lambda _{\rm f}),a_2(\Lambda _{\rm f}))=(\frac{m}{2},m^2)$
in the limit $\Lambda_0 \to \infty $ , $\Lambda_{\rm f} \to 0$.
That is, we can evaluate the zero-point energy $\frac{m}{2}$ as a result
of running of $a_0$. The evolution of $a_0$ freezes where the cut off scale becomes less than
the mass scale.
\par
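This limit is easy to verify numerically; the bracket in the formula above tends to $\pi$ as $\Lambda_0\to\infty$ and $\Lambda_{\rm f}\to 0$:
\begin{verbatim}
import numpy as np

# Check of the analytic a0 running: the bracket approaches pi,
# so a0 -> m/2 (the zero-point energy).
def bracket(p):
    return p * np.log((1.0 + p ** 2) / p ** 2) + 2.0 * np.arctan(p)

m = 1.0
for Lam0, Lamf in [(1e2, 1e-2), (1e4, 1e-4), (1e6, 1e-6)]:
    a0 = m / (2.0 * np.pi) * (bracket(Lam0 / m) - bracket(Lamf / m))
    print(Lam0, a0)                # approaches m/2 = 0.5
\end{verbatim}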
\begin{figure}[htb]
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{hrun.eps}
\vspace{-10mm}
\caption{Running of vacuum energy}
\label{fig:hrun}
}
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{hflow.eps}
\vspace{-10mm}
\caption{Flow diagram}
\label{fig:hflow}
}
\end{figure}
The renormalization group flows are plotted in
Fig.\ref{fig:hrun} and Fig.\ref{fig:hflow}. We see that the momentum region where the quantum corrections are
effective is finite and depends on the mass. This is the decoupling
property, which enables us to obtain effective couplings as physical
quantities even by numerical calculation within a finite momentum
region.
\section{Analysis of anharmonic oscillators}
Now we proceed to analyze quantum mechanics of anharmonic
oscillators. At first, we consider a symmetric single-well potential,
\begin{eqnarray}
V_0(x) =~~\lambda_0 x^4+\frac{1}{2} x^2 .
\end{eqnarray}
Of course, there is no tunnelling, so our interest is to compare our
NPRG results with the perturbative
series. The corrected $V_{\rm eff}$ is shown in Fig.\ref{fig:spote} and we obtain
the energy spectrum as in Fig.\ref{fig:sspe}.
\par
\begin{figure}[htb]
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{spote.eps}
\vspace{-10mm}
\caption{Change of potential}
\label{fig:spote}
}
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{sspe.eps}
\vspace{-10mm}
\caption{Energy spectrum}
\label{fig:sspe}
}
\end{figure}
\par
\hspace{-7mm}
The perturbative series of $E_n$ is an asymptotic series,
\begin{eqnarray}
E_n\!=\![n+\frac{1}{2}]+\frac{3}{4}\lambda_0 [2n^2+2n+1]
-\frac{1}{8}\lambda_0 ^2 [34n^3+51n^2+59n+21]+\cdots ,
\end{eqnarray}
and shows a diverging nature even in the weak coupling region.
Note that the Borel resummation of the perturbative series works fine in this case and gives quantitatively good
values. On the other
hand, even in the lowest order approximation, the W-H equation can
evaluate the energy spectrum almost perfectly. Therefore, NPRG could treat all orders of the perturbative
series in a correct manner. \par
Next, we consider a symmetric double-well potential,
\begin{eqnarray}
V_0(x) =~~\lambda_0 x^4-\frac{1}{2} x^2 .
\end{eqnarray}
In this case, there is no well-defined perturbation theory. A standard
technique to get the energy gap $\Delta E$ is the dilute gas instanton
calculation which gives a result with the essential singularity,
\begin{eqnarray}
\Delta E=2\sqrt{\frac{2\sqrt{2}}{\pi \lambda_0}}e^{-\frac{1}{3\sqrt{2}\lambda_0}}.
\end{eqnarray}
In NPRG evolution of the effective potential, the initial double-well
potential finally becomes a single well and the energy gap (mass)
arises (Fig.\ref{fig:dpote}).
This evolution is readily understandable considering that in one
space-time dimension
the $Z_2$ symmetry does not break down due to the barrier
penetration, i.e. the quantum tunnelling.
\par
\begin{figure}[htb]
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{dpote.eps}
\vspace{-10mm}
\caption{Change of potential}
\label{fig:dpote}
}
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{dspe.eps}
\vspace{-10mm}
\caption{Energy gap}
\label{fig:dspe}
}
\end{figure}
\hspace{-6mm}The NPRG results are very good in the strong coupling region, where we
have no other method to compete with (Fig.\ref{fig:dspe}). Perturbation theory cannot be applied
to this double-well system, and the dilute gas instanton, which is
valid only in the very weak coupling region, does not work at all
here. Therefore the NPRG method can be a
powerful tool for the analysis of tunnelling, at least in such a
region.
However, our NPRG results deviate from the exact values as $\lambda _0 \to
0$, which corresponds to a very deep well. Because the $\beta$-function
becomes singular in this region, the NPRG results become unreliable. We
consider that the difficulty comes from the
approximation scheme that we now adopt. After all, the coupling regions
where LPA W-H eqn. and the dilute gas instanton are respectively valid are
completely separated; there is only a crossover
region. In this sense, these two methods are complementary to
each other. \par
Concerning the reliability of the results, we have to check the truncation dependence of physical quantities. In
the single-well case (Fig.\ref{fig:strun}), the results converge extremely well,
but in the double-well case, as $\lambda _0$ becomes smaller (Fig.\ref{fig:dtrun}), the convergence becomes unclear. So in the small
$\lambda _0$ region, the effective couplings
even at $N=16$ are not reliable as physical quantities. We should always pay attention to
the convergence with respect to $N$.\cite{ka}
\par
\begin{figure}[htb]
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{strun.eps}
\vspace{-10mm}
\caption{Single well}
\label{fig:strun}
}
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{dtrun.eps}
\vspace{-10mm}
\caption{Double well}
\label{fig:dtrun}
}
\end{figure}
\section{Supersymmetric quantum mechanics}
Finally we analyze the supersymmetric theory, where we can see the non-perturbative dynamics of the system
more clearly. We consider Witten's toy model for dynamical SUSY
breaking\cite{wi}, whose Hamiltonian is represented as follows,
\begin{eqnarray}
\hat{H}=\frac12\left[\hat{P}^2+\hat{W}^2(\Phi)
+\sigma_3\frac{d\hat{W}(\Phi)}{d\Phi}\right]=\left(\matrix{\!\!\!\frac{1}{2}\hat{P}^2+\hat{V}_{+}(\Phi)~~~~~~~0\cr ~~0~~~~~~~~~~\frac{1}{2}\hat{P}^2+\hat{V}_{-}(\Phi)}\right) ,
\end{eqnarray}
where $\hat{V}_{\pm}(\Phi)=\frac{1}{2}\hat{W}^2(\Phi)\pm
\frac{1}{2}\frac{d\hat{W}(\Phi)}{d\Phi}$ and $\hat{W}(\Phi)$ is called
SUSY potential. We define super charges
$\hat{Q}_1=\frac12(\sigma_1\hat{P}+\sigma_2\hat{W}(\Phi))$,
$\hat{Q}_2=\frac12(\sigma_2\hat{P}-\sigma_1\hat{W}(\Phi))$ and the
Hamiltonian is written as
$\hat{H}=\{\hat{Q}_1,\hat{Q}_1\}=\{\hat{Q}_2,\hat{Q}_2\}$. This
assures that the vacuum energy is always non-negative,
$E_0=\langle\Omega|\hat{H}|\Omega\rangle
=2\left\Vert\hat{Q}_1|\Omega\rangle\right\Vert ^{2}
=2\left\Vert\hat{Q}_2|\Omega\rangle\right\Vert ^{2}\geq0$, and we have
the criterion of SUSY `breaking',
\begin{eqnarray}
E_0=0 \quad \Rightarrow
&&\hat{Q}_1|\Omega\rangle=0 , \hat{Q}_2|\Omega\rangle=0 \qquad
{\rm SUSY\quad unbroken} ,\\
E_0>0 \quad \Rightarrow
&&\hat{Q}_1|\Omega\rangle\ne0 , \hat{Q}_2|\Omega\rangle\ne0
\qquad {\rm SUSY\quad breaking} .
\end{eqnarray}
That is, the vacuum energy $E_0$ is the order parameter of SUSY
`breaking'. Furthermore, the perturbative corrections to $E_0$ vanish at every order
of perturbation, which is known as the non-renormalization
theorem. Actually under the SUSY potential $W(\Phi)=g\Phi^2-\Phi$,
$V_{+}(\Phi)$ becomes,
\begin{eqnarray}
V_+(\Phi) =~~\frac{1}{2}g^2\Phi^4-g\Phi^3+\frac{1}{2}\Phi^2+g\Phi-\frac{1}{2},
\end{eqnarray}
and the perturbative corrections to energy spectrum are calculated as follows,
\begin{eqnarray}
E_n=n+\frac38g^2[2n^2+2n+1]-\frac38g^2[10n^2+2n+1]
+\cdots .
\end{eqnarray}
These corrections to $E_0$ cancel out at each order
of $g$; thus there are no perturbative corrections. Namely, a non-vanishing $E_0$ is realized only by
non-perturbative effects caused by the essential singularity. \par
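The positivity of $E_0$ at non-zero $g$ can also be checked directly, independently of both perturbation theory and the NPRG. The following script is a minimal numerical sketch (the grid parameters and the value $g=0.24$ are illustrative choices, not part of our analysis); it diagonalizes $\frac{1}{2}\hat{P}^2+\hat{V}_{+}(\Phi)$ discretized with central finite differences. For broken SUSY the spectra of the two sectors coincide, so the $+$ sector alone suffices.
\begin{verbatim}
import numpy as np

g = 0.24                             # illustrative coupling
n, L = 2000, 40.0                    # grid points, box size
phi = np.linspace(-L/2, L/2, n)
h = phi[1] - phi[0]

W  = g*phi**2 - phi                  # SUSY potential W(Phi)
Vp = 0.5*W**2 + 0.5*(2*g*phi - 1.0)  # V_+ = (W^2 + W')/2

# H_+ = -(1/2) d^2/dphi^2 + V_+ as a tridiagonal matrix
main = 1.0/h**2 + Vp
off  = -0.5/h**2*np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E0 = np.linalg.eigvalsh(H)[0]
print("E_0 =", E0)  # small and positive: SUSY dynamically broken
\end{verbatim}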
\begin{figure}[htb]
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{bpote.eps}
\vspace{-10mm}
\caption{Bare potentials}
\label{fig:bpote}
}
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{epote.eps}
\vspace{-10mm}
\caption{Change of potential}
\label{fig:epote}
}
\end{figure}
Now we analyze this system by our NPRG method, calculating the effective potentials for a wide range of the parameter
$g$. The case of vanishing $g$ corresponds to the harmonic oscillator, and SUSY
is not broken there. On the other hand, SUSY is dynamically broken for any
non-vanishing $g$. Note that at small $g$ the bare potential is an asymmetric
double well, while for $g > \sqrt[4]{\frac{1}{108}}\simeq 0.31$ it is a single well and quantum
tunnelling is irrelevant there (Fig.\ref{fig:bpote}). Figure
\ref{fig:epote} shows the result for
$g$=0.24, where the effective potential evolves into a convex one and
its minimum turns out to be positive. That is, our NPRG method
gives a positive $E_0$ correctly and realizes the dynamical SUSY breaking.
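This boundary is easy to verify numerically: the bare potential is a double well exactly when $V_+'(\Phi)=2g^2\Phi^3-3g\Phi^2+\Phi+g$ has three real roots, and the discriminant of this cubic, $g^2(1-108g^4)$, changes sign at $g=(1/108)^{1/4}$. A minimal check (illustrative only) reads:
\begin{verbatim}
import numpy as np

def n_stationary_points(g):
    # real roots of V_+'(phi) = 2 g^2 phi^3 - 3 g phi^2 + phi + g
    r = np.roots([2.0*g**2, -3.0*g, 1.0, g])
    return int(np.sum(np.abs(r.imag) < 1e-6))

g_crit = (1.0/108.0)**0.25           # ~ 0.3102
print("g_crit =", round(g_crit, 4))
for g in (0.10, 0.24, 0.35, 0.50):
    print(g, n_stationary_points(g)) # 3 below g_crit, 1 above
\end{verbatim}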
\begin{figure}[htb]
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{rspe.eps}
\vspace{-10mm}
\caption{Energy spectrum}
\label{fig:rspe}
}
\hspace{8mm}
\parbox{70mm}{
\epsfxsize=70mm
\epsfysize=70mm
\leavevmode
\epsfbox{vspe.eps}
\vspace{-10mm}
\caption{RG and valley instanton }
\label{fig:vspe}
}
\end{figure}
As shown in Fig.\ref{fig:rspe}, the NPRG results
are excellent in the strong coupling region, but not in the region where the
bare double-well potential becomes deep. In this region ($ 0.1<g<0.2
$), we cannot show any result because of large numerical errors,
while the valley instanton method works very well, as shown in
Fig.\ref{fig:vspe}. The valley instanton is a generalization of the instanton method based on
the valley structure of the configuration space.\cite{ao1,ao2} Again, the two methods are
complementary to each other.
\section{Discussions}
The NPRG method, even in the LPA, can quantitatively evaluate
non-perturbative quantities corresponding to the summation of all orders of
the perturbative series. As for the
non-perturbative quantities characterized by the essential
singularity, the LPA W-H eqn. also works very well in the region where
the instanton and the perturbation break down, i.e. the strong
coupling region. However, the NPRG is not so effective in the weak coupling
region because of large numerical errors. To summarize, the LPA W-H
eqn. and the (valley) instanton
play complementary roles. We do not yet know the precise origin of the difficulty
encountered in our NPRG analysis; we suspect that the
derivative expansion does not fit such a parameter
region. In any case, we have to search for `better' approximations. \par
On the other hand, from a practical point of view, the NPRG method can
be a good new tool for the analysis of quantum tunnelling,
at least in some parameter regions. For this purpose, we also need to study in detail
how to extract tunnelling physics from the effective potential and
the effective action.
These techniques may be applied to models in quantum field
theories\cite{st} and to more complex systems. In particular, quantum tunnelling
with multiple degrees of freedom, as represented by dissipation\cite{cj}, is a very interesting subject to be attacked
with the NPRG method.
\section*{Acknowledgments}
K.-I.~Aoki and H.~Terao are partially supported by the Grant-in-Aid for
Scientific Research (\#09874061, \#09226212, \#09246212, \#08640361) from the
Ministry of Education, Science and Culture.
\section{Introduction}
Double stars are those stars which, seen through the telescope,
present themselves as two points of light. Some of these are
physically associated with each other and are true bona fide
{\it binary stars}, while others are chance alignments. While these
$``$optical doubles" may prove troublesome as stray light
complicates both photometry and astrometry, they are astrophysically
inconsequential. The true binary nature of double stars can be
detected through a variety of means, from wide systems found via
common proper motion (CPM) to orbit pairs to the even closer systems,
found through periodic variations in radial velocity or photometry. For
generations, painstaking measurements of double stars have been collected in catalogs
such as the Washington Double Star Catalog
(hereafter WDS; Mason et al.\ 2001). The organization of significant
data sets of multiple stars is critical to understanding the
outcomes of the star formation process as well as key to identifying
which systems promise fundamental astrophysical parameters, e.g.,
masses.
Red dwarfs, specifically, M dwarfs, are the most common stellar
constituent of the Milky Way, accounting for three of every four
stars (Henry et al.\ 2006). However, their binary fraction is quite
low in comparison to other stars ($\sim$27\%; Winters et al.\ 2015).
The other end of the Main Sequence, the O stars, have a very
high binary fraction (43/59/75\% for Runaway/Field/Cluster samples;
Mason et al.\ 2009). Possible companions to an O star may include
stars from the entire spectral sequence, while the only possible
stellar companions to an M dwarf are lower mass M dwarfs, brown
dwarfs or fainter evolved objects. Because mass determinations of M dwarfs
are poorly constrained\footnote{Although,
thanks to work such as Benedict et al.\ (2016), the situation is improving
at the low-mass end.}, observations of M dwarfs for binary
detection, orbit determination, and eventual mass determination are
of paramount importance. To improve the statistical basis for
investigations of the nearest M dwarfs and to pinpoint systems
worthy of detailed studies, in this paper, we report high resolution
optical speckle observations of 336 M dwarfs. We report 113 resolved
measurements of 80 systems, nineteen of these have their first
measure reported here, although all but two of those have their
first published measure elsewhere.
\section{Instrumentation and Calibration}
Observing runs for this program are provided in Table 1,
which includes the dates, telescopes and observers (a
subset of the authors of this paper). The observing runs included many different projects, since
speckle interferometry is a fast observing technique, with up to 20
objects per hour observed and nightly totals of 120-220 stars
depending on hours of dark time. Most data not specific to this M dwarf
program were of massive stars (Mason et al.\ 2009) or exoplanet hosts
(Mason et al.\ 2011). Other data are presented in Appendix A.
The instrument used for these observations was the USNO speckle
interferometer, which is described in detail
in Mason et al.\ (2009, 2011). Briefly, the camera consists of two
different microscope objectives giving different scales, interference
filters of varying FWHM to allow fainter objects to be observed, Risley
prisms which correct for atmospheric dispersion and finally a Gen IIIc ICCD
capable of very short exposures necessary to take advantage of the
$``$speckling" generated by atmospheric turbulence. Each observation
represents the directed vector autocorrelation (Bagnuolo et al.\ 1992) of
2000$+$ individual exposures, each $1-15msec$ long, depending on an
object's brightness and the filter in use. As the speckles are an
atmospheric effect independent of the telescope, a larger telescope sees more
turbulence cells and, therefore, more speckles. While a larger telescope
can produce more correlations and a higher SNR, it does not significantly
change the magnitude limit. Brighter primary stars
with $V~<11.5$ were observed with a {\it Str\"{o}mgren y} filter (FWHM
$25nm$ centered on $550nm$). Stars fainter than this were observed with
a {\it Johnson V} filter (FWHM $70nm$ centered on $550nm$). The resolution
limit with the 4m telescope employed in these observations is $30mas$;
however, when the wider filter was used, the resolution capability
is degraded to $50mas$ due to the greater atmospheric dispersion. The
field of view is 1\farcs8 centered on the target. The camera is
capable of multiple observing modes, where wider pairs, if seen in
the field, can be observed and measured using 2$\times$2 or
4$\times$4 binning\footnote{Increasing the field-of-view to
3\farcs6 or 7\farcs2 in the horizontal or vertical and even larger
by $\cos\theta$ along diagonals.}. However, this is only when the
companion is seen or known {\it a priori}. In terms of the search
for new companions the field-of-view is characterized as
1\farcs8$\times$1\farcs8.
For calibration, a double-slit mask was placed over the ``stove pipe"
of the KPNO Mayall Reflector, and a known single star was observed.
This application of the well known experiment of Young allowed for
the determination of scale without relying on binaries themselves to
determine calibration parameters. The slit-mask, at the start of the
optical path, generates peaks based upon the slit separation and the
wavelength of observation. These peaks can be measured using the same
methodology as a double star measure and, thus, generates a very precise
scale for the CCD. See McAlister et al.\ (1987) \S 4 and Figure 4 for further
details. Multiple observations through the
slit mask yield an error in the position angle zero point of 0\fdg20
and a scale error of 0.357\%. These ``internal errors" are
undoubtedly underestimates of the true errors of these observations.
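As an illustration of the arithmetic behind this calibration (a sketch only; the slit separation below is a hypothetical value, since the actual mask dimensions are not quoted here), the fringes of a double slit of separation $d$ at wavelength $\lambda$ have angular spacing $\lambda/d$:
\begin{verbatim}
import math

lam = 550e-9    # observing wavelength [m]
d   = 1.0       # assumed slit separation [m] (hypothetical)

theta_mas = math.degrees(lam/d)*3.6e6
print("fringe spacing = %.1f mas" % theta_mas)  # ~113 mas here
# Measuring this wavelength-defined spacing in pixels gives the
# detector scale in mas/pixel without reference to any binary.
\end{verbatim}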
While this produces excellent calibration for the Mayall Reflector,
small differences between it and the CTIO Blanco Reflector meant that the double
slit-mask could not be placed on the CTIO 4m $``$stove pipe".
Because this option was not available,
a large number of well-known equatorial binaries with very
accurate orbits were observed with both
telescopes to allow for the determination of more realistic global
errors. Given the long time between some of these observations,
slowly orbiting, well-characterized wider pairs, as well as linear
pairs, were also observed with other telescopes. This
process prevented excessive extrapolation when measuring the
scale of the observed field.
Speckle Interferometry is a technique that is sensitive to
changes in observing conditions, particularly coherence length
($\rho_0$) and time ($\tau_0$). These typically manifest as a
degradation of detection capability close to the telescope
resolution limit or at larger magnitude differences between
components. To ensure we reached our desired detection
thresholds, a variety of systems with well-determined and
characterized morphologies and magnitude differences were observed
throughout each observing night. In all cases, results for these
test systems indicated that our observing met or exceeded the desired
separation and magnitude difference goals. Most, but not all, of the
systems observed for characterizing errors or investigating detection space
were presented in Mason et al.\ (2011). Others are presented in Appendix A
below. Overall, our speckle observations are generally
able to detect companions to M dwarfs from $30mas~<~\rho~<1\farcs8$
if the $\Delta$m$_{\rm v}~<~2$ for M dwarfs brighter than $V=11.5$.
If fainter than this, the resolution of close pairs is degraded such
that the effectively searched region is $50mas~<~\rho~<1\farcs8$.
Some observations and measurements were obtained during times of
compromised observing conditions. Non-detections made at this time are not
considered definitive and are not tabulated below.
\section{Results}
Table 2 lists the astrometric measurements (T, $\theta$, and
$\rho$) of the observed red dwarf stars. The first two columns identify the
system by providing the WDS designation (based on epoch-2000
coordinates) and discovery designation. Columns three through five
give the epoch of observation (expressed as a fractional Julian
year), the position angle (in degrees), and the separation (in
seconds of arc). Colons indicate measures with reduced accuracy due
to observing conditions. Note that the position angle has not been
corrected for precession, and thus, is based on the equinox for the
epoch of observation. The sixth column indicates the number of
observations contained in the mean position. Columns seven and eight
list position angle and separation residuals (in degrees and
arcseconds, respectively) to the orbit or rectilinear fit referenced
in Column nine. Finally, the last column is reserved for notes for
these systems.
While some published orbits may be premature and some linear
determinations may reflect relative motion of an edge-on and/or
long-period eccentric binary, these are nominally used to
characterize each pair as physical and optical, respectively. Other
pairs, as indicated in the notes to Table 2, are further classified
as physical or optical based on the relative motion of the pair through
inspection of their double star measures compared with the proper motion.
The proper motions of these M dwarfs are typically large; therefore, double star
measures at approximately the same position over a time base of many
years establish the pair as physical through common proper motion. This
assessment depends on the magnitude of the proper motion, the change in
relative position, and the time between observations. This sort of analysis
cannot be made for unconfirmed pairs.
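The logic of this common proper motion test is simple enough to state algorithmically. The sketch below (the tolerance and the example measures are assumptions for illustration, not values used in this work) compares the observed change in relative position with the drift expected if the companion were a stationary, unrelated star:
\begin{verbatim}
import math

def rel_xy(rho, theta_deg):
    th = math.radians(theta_deg)
    return rho*math.cos(th), rho*math.sin(th)

def cpm_physical(m1, m2, mu_mas_yr, tol=0.25):
    # m = (epoch [yr], theta [deg], rho [arcsec]); tol is assumed
    (t1, th1, r1), (t2, th2, r2) = m1, m2
    x1, y1 = rel_xy(r1, th1)
    x2, y2 = rel_xy(r2, th2)
    shift = math.hypot(x2 - x1, y2 - y1)      # observed [arcsec]
    drift = mu_mas_yr*(t2 - t1)/1000.0        # optical [arcsec]
    return shift < tol*drift

# hypothetical measures of a mu = 500 mas/yr star:
print(cpm_physical((2004.1, 80.0, 0.45),
                   (2014.3, 95.0, 0.47), 500.0))   # True
\end{verbatim}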
For twenty-one of the pairs in Table 2 this represents the earliest
measure. While the data presented in Table 2 have not been published
before, the results had been shared with collaborators (Hartkopf et al.\
2012, Tokovinin et al.\ 2010, 2014, 2015, 2016, 2018, 2019). In addition,
independent initiatives of others (Benedict et al.\ 2016, Henry et al.\
1999, Horch et al.\ 2010, 2011, 2012, Janson et al.\ 2012, 2014a, 2014b,
Jodar et al.\ 2013, Riedel et al.\ 2014, Ward-Duong et al.\ 2015, Winters
et al.\ 2011, 2017) have further enhanced the capability to assess the
physicality of these pairs and have enabled many of the orbits and linear
solutions presented below.
Overall, 336 M dwarfs were observed. From these observations, we
completed 113 measures of position angle and separation for 80 different
pairs.
\section{Analysis of Resolved Doubles}
\subsection{New Orbital Solutions}
All orbits were computed using the ``grid search" routine described
in Hartkopf et al.\ (1989); weights are applied based on the methods
described by Hartkopf et al.\ (2001a). Briefly, weights of the
individual observations are evaluated based on the separation relative to the
resolution capability of the telescope (larger telescopes produce more
accurate data), the method of observation (e.g., micrometry, photography,
interferometry, etc.), whether the published measure is a mean of multiple
nights, and if the measurer made any notes regarding the quality of the
observation. Elements for these systems
are given in Table 3, where columns (1), (2) and (3) give the WDS
and discovery designations, followed by an alternate designation;
columns (4) -- (10) list the seven Campbell elements: $P$ (period,
in years), $a$ (semi-major axis, in arcseconds), $i$ (inclination,
in degrees), $\Omega$ (longitude of node, equinox 2000.0, in
degrees), $T_0$ (epoch of periastron passage, in fractional
Julian year), $e$ (eccentricity), and $\omega$ (longitude of
periastron, in degrees). Formal errors are listed with each element.
Columns (11) and (12) provide the orbit grade (see Hartkopf et al.\
2001a) and the reference for a previous orbit determination, if one
exists. Orbit grades are on a $1-5$ scale. In the case of the orbits
presented here, a grade of $3$ indicates the orbit is $``$reliable,"
$4$ is $``$preliminary" and $``$5" is $``$indeterminate." In all
cases here, the grades reflect the small number of
observations and incomplete phase coverage.
Figure 1 illustrates the new orbital solutions for the six systems whose
orbits are presented here, plotted together
with all published data in the WDS database as well as the
previously unpublished data from Table 2. In each of these plots,
micrometric observations are indicated by plus signs, and
photographic measures by asterisks; Hipparcos measures are indicated
by the letter `H', conventional CCD measures by triangles,
interferometric measures by filled circles, and the new measures
presented in Table 2 are indicated with stars. ``$O-C$" lines
connect each measure to its predicted position along the new orbit
(shown as a thick solid line). Dashed ``$O-C$" lines indicate
measures given zero weight in the final solution. A dot-dash line
indicates the line of nodes, and a curved arrow in the lower right
corner of each figure indicates the direction of orbital motion. The
scale, in arcseconds, is indicated on the left and bottom of each
plot. Finally, if there is a previously published orbit it is shown
as a dashed ellipse. The sources of those orbits are listed in the
final column of Table 3.
\begin{figure}[p]
\begin{center}
{\epsfxsize 2.5in \epsffile{wds07549-2920j.eps} \epsfxsize 2.5in \epsffile{wds09156-1036j.eps}}
{\epsfxsize 2.5in \epsffile{wds14540+2335k.eps} \epsfxsize 2.5in \epsffile{wds17077+0722j.eps}}
{\epsfxsize 2.5in \epsffile{wds17119-0151k.eps} \epsfxsize 2.5in \epsffile{wds19449-2338j.eps}}
\end{center}
\caption{\small New orbits for the systems listed in Table 3 and all
data in the WDS database and Table 2. Micrometric observations are
indicated by plus signs, and photographic measures by asterisks;
Hipparcos measures are indicated by the letter `H', conventional CCD
measures by triangles, interferometric measures by filled circles,
and the new measures presented in Table 2 are indicated with stars.
``$O-C$" lines connect each measure to its predicted position along
the new orbit (shown as a thick solid line). Dashed ``$O-C$" lines
indicate measures given zero weight in the final solution. A
dot-dash line indicates the line of nodes, and a curved arrow in the
lower right corner of each figure indicates the direction of orbital
motion. The scale, in arcseconds, is indicated on the left and
bottom of each plot. Finally, if there is a previously published
orbit, it is shown as a dashed ellipse.}
\end{figure}
The orbital periods of all six pairs (three of which have very high
eccentricities; $> 0.7$) are all quite short, from 5$-$38y, and have
small semi-major axes (0\farcs2-0\farcs9). The potential for
improvement of the orbits and precise mass determinations for these
pairs, all with large parallaxes, is excellent, especially for
precise high angular resolution work with large aperture
instruments. The errors of some of the earlier micrometry measures
are quite high (e.g.\ WDS14540$+$2335), and are given quite low
weight in the orbit. However, these historic observations can be
quite helpful, especially in determining the orbital period.
The most interesting of these six pairs is discussed in detail
below while the remaining five are noted in \S6.
\subsubsection{G 161-7}
The M dwarf star G 161-7 (alternatively known as LHS 6167 or NLTT
21329) was first resolved as a double with adaptive optics by
Montagnier et al.\ (2006), who resolved the pair on two occasions.
If the resolved optical companion of G 161-7 were simply a chance
alignment with small proper motion, then the high proper motion of G
161-7 would result in a relative shift of 1\farcs6 between the two
components. However, the companion continues to stay quite close,
making this a very likely physical pair. While maintaining their
proximity, large changes in the position angle of the companion
demonstrated that the orbital period was short. Observed by this
effort in 2010 (Table 2) the measures were also supplemented by
Janson et al.\ (2014a) who observed it with $``$lucky imaging" and
were able to split the pair as well as determine a mass ratio:
0.57$\pm$0.05. Lately, it has been regularly observed by the
SOAR-Speckle program (Tokovinin et al.\ 2015, 2016, 2018, 2019).
Bartlett et al.\ (2017) measured the parallax ($103.33\pm1.00mas$)
to this nearby pair and also made an estimate of $\sim$4y for the
orbital period. Taking the available relative astrometry, an orbital
solution with a period just over 5y quickly converged (see Table 3 and
Figure 1). With the parallax, a mass sum of 0.273$\pm$0.018\msun ~is
determined, and with the mass ratio, individual masses of 0.156$\pm$0.011
and 0.1175$\pm$0.0079\msun ~are derived for A and B, respectively.
While the Gaia parallax should be quite precise for this pair, the errors
of the orbit, already under 2\%, can be improved by accumulating
more data to fill in unobserved regions of the orbit. With this, the
orbital elements and, hence, the mass errors will improve. This pair is
the best example of what we hope this effort will ultimately achieve.
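The arithmetic behind such mass determinations is Kepler's third law with the semi-major axis in arcseconds, the period in years, and the parallax in arcseconds. The sketch below reproduces the G 161-7 numbers; the semi-major axis and period are illustrative values chosen to match the quoted mass sum (Table 3 is not repeated here), and the 0.57 ratio is interpreted as the fractional primary mass, as implied by the individual masses above:
\begin{verbatim}
def mass_sum(a_arcsec, P_yr, plx_arcsec):
    # Kepler III: a in arcsec, P in years, parallax in arcsec
    return (a_arcsec/plx_arcsec)**3 / P_yr**2

plx  = 0.10333        # Bartlett et al. (2017)
a, P = 0.199, 5.1     # assumed elements (not from Table 3)

M = mass_sum(a, P, plx)
f = 0.57              # fractional mass M_A/(M_A+M_B) (assumed)
print("M_tot = %.3f, M_A = %.3f, M_B = %.3f Msun"
      % (M, f*M, (1.0 - f)*M))
\end{verbatim}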
\subsection{New Linear Solutions}
Inspection of all observed pairs with either a 30$^{\circ}$ change
in their relative position angles or a 30\% change in separations
since the first observation cataloged in the WDS revealed six pairs
whose motion seemed linear. These apparent linear relative motions
suggest that these pairs are either composed of physically unrelated
stars or have very long orbital periods. Linear elements to these
doubles are given in Table 4, where Columns one and two give the WDS
and discoverer designations and Columns three to nine list the seven
linear elements: x$_{0}$ (zero point in x, in arcseconds), a$_{x}$
(slope in x, in $''$/yr), y$_{0}$ (zero point in y, in arcseconds),
a$_{y}$ (slope in y, in $''$/yr), T$_{0}$ (time of closest apparent
separation, in years), $\rho_{0}$ (closest apparent separation, in
arcseconds), and $\theta_{0}$ (position angle at T$_{0}$, in
degrees). See Hartkopf \& Mason (2015) for a description of all
terms.
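The quantities in Table 4 follow from an ordinary least-squares line through the Cartesian measures $(x,y)=(\rho\cos\theta,\rho\sin\theta)$; $T_0$ then minimizes the apparent separation along the fitted line. A self-contained sketch (with synthetic measures, since the underlying data are tabulated elsewhere) is:
\begin{verbatim}
import numpy as np

def linear_elements(t, theta_deg, rho):
    th = np.radians(theta_deg)
    x, y = rho*np.cos(th), rho*np.sin(th)   # north, east
    A = np.vstack([np.ones_like(t), t]).T
    px, vx = np.linalg.lstsq(A, x, rcond=None)[0]
    py, vy = np.linalg.lstsq(A, y, rcond=None)[0]
    T0 = -(px*vx + py*vy)/(vx**2 + vy**2)   # closest approach
    x0, y0 = px + vx*T0, py + vy*T0
    rho0   = np.hypot(x0, y0)
    theta0 = np.degrees(np.arctan2(y0, x0)) % 360.0
    return T0, rho0, theta0

# synthetic pair drifting linearly past the primary:
t  = np.array([1991.2, 2004.1, 2010.2, 2015.5, 2019.3])
x  = 0.10 + 0.020*(t - 2005.0)
y  = 0.40 - 0.015*(t - 2005.0)
th = np.degrees(np.arctan2(y, x)) % 360.0
print(linear_elements(t, th, np.hypot(x, y)))
# -> (2011.4, 0.380, 53.1): T0, rho0 ["], theta0 [deg]
\end{verbatim}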
Figure 2 illustrates these new linear solutions, plotted together
with all published data in the WDS database, as well as the
previously unpublished data from Table 2. Symbols are the same as in
Figure 1. In the case of linear plots, the dashed line indicates the
time of closest apparent separation. As in Figure 1, the direction of
motion is indicated at lower right of each figure. As the plots and solutions
are all relative, the proper motion ($\mu$) difference is assumed to
be zero.
\begin{figure}[p]
~\vskip -1.8in
\begin{center}
{\epsfxsize 2.8in \epsffile{wds04073-2429Q.eps} \epsfxsize 2.8in \epsffile{wds05101-2341Q.eps}}
\vskip 0.05in
{\epsfxsize 2.8in \epsffile{wds06300-1924Q.eps} \epsfxsize 2.8in \epsffile{wds11105-3732Q.eps}}
\vskip 0.05in
{\epsfxsize 2.8in \epsffile{wds13422-1600Q.eps} \epsfxsize 2.8in \epsffile{wds21492-4133Q.eps}}
\end{center}
\vskip -0.3in
\caption{\small New linear fits for the systems listed in Table 4
and all data in the WDS database and Table 2. Symbols are the same as
Figure 1. ``$O-C$" lines connect each measure to its predicted position along
the linear solution (shown as a thick solid line). An arrow in the lower
right corner of each figure indicates the direction of motion. The scale, in
arcseconds, is indicated on the left and bottom of each plot.}
\end{figure}
\vskip 0.1in
Table 5 gives ephemerides for each orbit or linear solution over the
years 2018 through 2023, in annual increments. Columns (1) and (2)
are the same identifiers as in the previous tables, while columns
(3+4), (5+6), ... (13+14) give predicted values of $\theta$ and
$\rho$, respectively, for the years 2018.0, 2019.0, etc., through
2023.0. All the orbit pairs are relatively fast moving, with mean
motions of more than 6$^{\circ}$/yr. Notes to individual systems are
given in \S6.
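The ephemerides follow from the Campbell elements through the standard Thiele--Innes construction: Kepler's equation is solved for the eccentric anomaly, and the orbital-plane coordinates are projected onto the sky. A self-contained sketch (the element values in the example are placeholders, not the solutions of Table 3) is:
\begin{verbatim}
import numpy as np

def ephem(P, a, i, Om, T0, e, om, t):
    # returns (theta [deg], rho [arcsec]) at epoch t [yr]
    i, Om, om = np.radians([i, Om, om])
    M = 2.0*np.pi*((t - T0) % P)/P
    E = M
    for _ in range(50):              # Kepler's equation (Newton)
        E -= (E - e*np.sin(E) - M)/(1.0 - e*np.cos(E))
    X = np.cos(E) - e                # orbital-plane coords (in a)
    Y = np.sqrt(1.0 - e*e)*np.sin(E)
    A = a*( np.cos(om)*np.cos(Om)
           -np.sin(om)*np.sin(Om)*np.cos(i))
    B = a*( np.cos(om)*np.sin(Om)
           +np.sin(om)*np.cos(Om)*np.cos(i))
    F = a*(-np.sin(om)*np.cos(Om)
           -np.cos(om)*np.sin(Om)*np.cos(i))
    G = a*(-np.sin(om)*np.sin(Om)
           +np.cos(om)*np.cos(Om)*np.cos(i))
    north, east = A*X + F*Y, B*X + G*Y
    theta = np.degrees(np.arctan2(east, north)) % 360.0
    return theta, np.hypot(north, east)

# placeholder elements, annual ephemeris as in Table 5:
for yr in range(2018, 2024):
    th, rho = ephem(5.1, 0.199, 50.0, 120.0, 2015.0, 0.3, 80.0, yr)
    print(yr, round(float(th), 1), round(float(rho), 3))
\end{verbatim}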
\section{M dwarfs with no companion detected}
The selection of systems for this project was not blind and
preference was given to systems previously known as double or having
parallax data from the CTIOPI program (Jao et al.\ 2005) that seemed
to indicate duplicity. Therefore, any duplicity rate we determine
would be enriched and not representative of stars of this type.
Despite this preselection, there were a large number of targets
observed for which we did not detect a companion.
Table 6 provides the complete list of unresolved red dwarfs obtained
on these observing runs. In some cases, known companions
are not detectable due to the separation being wider than the field
of view of 1\farcs8, or the magnitude difference being larger than
detectable by the optical speckle camera. Due to the faintness of
the primary targets, the companion must have $\Delta$m$~<~2$mag and
$30mas~<~\rho~<~$1\farcs8. In this case, the upper limit is set by
the minimum field of view when the object is centered for detection
of unknown companions. As seen in Table 2, wider systems can be
measured with {\it a priori} knowledge of the system or if they are
seen while pointing the telescope. The usual procedure after moving
the telescope to the approximate field was to step through larger
fields of view obtained through 4$\times$4 or 2$\times$2 binning en
route to a final un-binned field of about $6mas/pixel$. Data could be
taken in these binned fields to obtain measures of wider pairs. In
some cases, pairs were too widely separated to be measured; often
for these both components were observed separately. Finally, as some
of these targets are rather faint, an interference filter with a
significantly larger FWHM ({\it Johnson V} as opposed to {\it Str\"{o}mgren y})
was used to allow enough photons to permit detection. However, use
of this filter compromises the detection of the closest pairs. For
these we set a lower separation limit of $50mas$. The cases where
this filter was used are noted in Table 6.
All individual observations, including a complete listing of each
measure identifying the date of observation, resolution limit,
filter and telescope, are given in the Catalog of Interferometric
Measurements of Binary Stars\footnote{See Hartkopf et al.\ (2001b).
The online version ({\tt http://ad.usno.navy.mil/wds/int4.html})
is updated frequently.}. Notes to individual systems reported here
are provided in \S6.
\section{Notes to Individual Systems}
{\bf WDS04073$-$2429 = BEU\phm{888}5 = LHS 1630} (resolved, linear,
in WDS) : The proper motion (UCAC5; Zacharias et al.\ 2017) is
$673.1mas/yr$, which seems to indicate the components are moving
together with small changes in relative position, so the pair is
classified as physical. However, their relative motion can be fit by
a line (see \S4.2, Table 4 and Figure 2). More data obtained over several
years may determine if we have a companion which is optical, or
if we happen to be catching the orbit on a long near-linear segment.
{\bf WDS05000$-$0333 = JNN\phm{88}29 = SCR\phn J0459$-$0333}
(unresolved, in WDS) : The companion has been measured multiple
times, but only through red filters (Janson et al.\ 2012, 2014b). It
may be too faint in {\it Johnson V}.
{\bf WDS05174$-$3522 = TSN\phm{888}1 = L\phn449-001} (unresolved, in
WDS) : The companion has only been measured with HST-FGS once at
$47mas$ (Riedel et al.\ 2014), closer than our limit here with the
{\it Johnson V} filter. This known pair is worth additional observations with
large aperture high angular resolution techniques.
{\bf WDS06523$-$0510 = GJ\phn250} (resolved, in WDS) : The wide CPM
pair, WNO\phn\phn17AB has many measures. Two
unconfirmed companions to B have been measured. WSI\phn125Ba,Bb
measured only in Table 2 and the much wider IR companion
TNN\phm{888}6BC measured in Tanner et al.\ (2010). It is unknown if
either of these are physical. We crudely estimate the $\Delta$m in V
as 0.5 for the Ba,Bb pair.
{\bf WDS07549-2920 = KUI\phm{88}32 = LHS\phn1955} (resolved, orbit,
in WDS) : This is the first orbit of this pair. Based on these elements and
the parallax ($74.36\pm1.13mas$; Winters et al.\ 2015), the
resulting mass sum of 1.54$\pm$0.37\msun ~is suspiciously large (see
\S 4.1, Table 3 and Figure 1). It is possible that these preliminary
orbital elements may aid future determinations and the planning of
observations.
{\bf WDS08272$-$4459 = JOD\phm{888}5 = LHS\phn2010} (unresolved, in
WDS) : The companion has only been measured once in the red ($914mas$
in 2008; Jodar et al.\ 2013). The companion is either too faint in
the {\it Johnson V} observation or the companion is optical and has moved
to a separation too wide for detection.
{\bf WDS08317$+$1924 = BEU\phm{88}12Aa,Ab = GJ\phm{8}2069}
(unresolved, in WDS) : This pair of the multiple system has only
been measured in the red or infrared. The companion is likely too
faint in this {\it Johnson V} measurement for detection. The Ba,Bb pair is
resolved in Table 3. AB is CPM but is too wide for measurement here.
{\bf WDS10121$-$0241 = DEL\phm{888}3 = GJ\phm{88}381} (unresolved,
in WDS) : The companion has only been measured in the red or
infrared. It is likely too faint in this {\it Johnson V} measurement for
detection.
{\bf WDS10430$-$0913 = WSI\phn112 = WT\phn1827} (resolved, in WDS) :
The companion is measured only in Table 2. It is unknown if it is
physical. We crudely estimate the $\Delta$m in V as 1.7.
{\bf WDS11105$-$3732 = REP\phm{88}21 = TWA\phm{888}3} (resolved,
orbit or linear, in WDS) : The proper motion (UCAC4; Zacharias et
al.\ 2013) is $107.3mas/yr$. While orbits with periods ranging from
236-800y have been determined, $``$the $\chi^2$ from the orbit fit
was indistinguishable from the linear fit" (Kellogg et al.\ 2017).
The solution presented in \S4.2, Table 4 and Figure 2 is a linear
fit to the data. Only time will tell if we have a companion which is
optical or we happen to be catching the orbit on a long near-linear
segment.
{\bf 11354$-$3232 = GJ 433} (unresolved, not in WDS) : Detected as a
500d pair by Hipparcos (ESA 1997). However, according to Delfosse et
al.\ (2013), radial velocity coverage eliminates the Hipparcos
result and the system just has one short-period planet.
{\bf WDS13422$-$1600 = WSI\phn114 = LHS 2783} (resolved, linear, in
WDS) : Given the high proper motion of the PPMXL ($508.6mas/yr$;
Roeser et al.\ 2010) and that from the CTIO-PI ($503.6mas/yr$;
Bartlett et al.\ 2017), it would indicate the stars are moving
together. The measures can be fit by a line (see \S4.2, Table 4 and
Figure 2), and thus far do not seem to support the estimated period
of 52y from Bartlett et al.\ (2017). However, based on this orbital
period, the parallax, and an assumed total mass of 0.5\msun, a$''$
would be 0\farcs28, not too different from our measures (Table 2) of
about 0\farcs5. This tends to support the supposition that we are
looking at a physical pair observed when the relative motion only
appears to be linear. The pair should be monitored for variation
from linearity.
{\bf WDS14540$+$2335 = REU\phm{888}2 = GJ 568} (resolved, orbit, in
WDS) : The orbit of Heintz (1990) is improved here. Based on these
elements and the parallax ($98.40\pm4.42mas$; van Leeuwen 2007) the
resulting mass sum is 0.261$\pm$0.083\msun. See \S 4.1, Table 3 and
Figure 1.
{\bf 15301$-$0752 = G 152-31} (unresolved, not in WDS) : This 5.96y
pair of Harrington \& Dahn (1988) should be resolvable (a$''$ =
$496mas$ assuming $\Sigma~\cal{M}$ = 0.5\msun); therefore, it is
assumed the $\Delta$m is higher than 2.5 and observation with a
technique with a greater $\Delta$m sensitivity, such as adaptive
optics, is appropriate.
{\bf WDS16240$+$4822 = HEN\phm{888}1Aa,Ab = GJ\phm{88}623}
(unresolved, in WDS) : The companion has only been measured in the
infrared or with HST-FGS. It likely has too large a $\Delta$m for V
band detection here.
{\bf WDS17077$+$0722 = YSC\phm{88}62 = GJ 1210} (resolved, orbit, in
WDS) : This is the first orbit for this pair, whose first published
measure (Horch et al.\ 2010) was made two years after that presented
in Table 2. Based on these elements and the parallax ($78.0\pm5.3mas$;
van Altena et al.\ 1995) the resulting mass sum is
0.280$\pm$0.067\msun. ~See \S 4.1, Table 3 and Figure 1.
{\bf WDS17119$-$0151 = LPM\phn629 = GJ 660} (resolved, orbit, in
WDS) : The orbit of S\"{o}derhjelm (1999) is improved here. Based on
these elements and the parallax ($98.19\pm12.09mas$; van Leeuwen
2007) the resulting mass sum is 0.40$\pm$0.16\msun. See \S 4.1,
Table 3 and Figure 1.
{\bf WDS18387$-$1429 = HDS2641 = GJ\phn2138} (unresolved, in WDS) :
The companion was measured by Hipparcos (ESA 1997) at $107mas$ and
$\Delta$H$_p$ = 0.41. It would be expected to be resolved in our
observation if near this location. Because it is not, the pair either
has closed to under $50mas$, is optical, or was a false detection.
{\bf WDS19449$-$2338 = MTG\phm{888}4 = LP 869-26} (resolved, orbit,
in WDS) : This is the first orbit for this pair. Based on these
elements and the parallax ($67.87\pm1.1mas$; Bartlett et al.\ 2017)
the resulting mass sum is 0.283$\pm$0.086\msun. See \S 4.1, Table 3
and Figure 1.
{\bf 23018$-$0351 = GJ 886} (unresolved, not in WDS) : The 468.1d
pair of Jancart et al.\ (2005) may have a separation close to our
resolution limit, or slightly under it (a$''$ = $50mas$ assuming
$\Sigma~\cal{M}$ = 0.5\msun). The $\Delta$m is unknown and may also
be too high for our detection. This pair is worthy of additional
observation.
\section{Conclusions}
In this paper, we report high resolution optical speckle
observations of 336 M dwarfs that resulted in 113 resolved
measurements of 80 systems and 256 other stars that gave no
indication of duplicity within the detection limits of the
telescope/system. We calculate orbits for six systems, two of which are
revised and four of which are first-time orbits. All have short periods, 5-38y,
and these data may eventually assist in determining accurate masses.
\acknowledgements
The USNO speckle interferometry program has been supported by NASA
and the SIM preparatory science program through NRA 98-OSS-007. This
research has made use of the SIMBAD database, operated at CDS,
Strasbourg, France and NASA's Astrophysics Data System. Thanks are also
provided to the U.S.\ Naval Observatory for their continued support of the
Double Star Program. The telescope operators and observing support personnel
of KPNO and CTIO continue to provide exceptional support for visiting
astronomers. Thanks to Claudio Aguilero, Alberto Alvarez, Skip Andree, Bill
Binkert, Gale Brehmer, Ed Eastburn, Angel Guerra, Hal Halbedal, Humberto
Orrero, David Rojas, Hernan Tirado, Patricio Ugarte, Ricard Venegas, George
Will, and the rest of the KPNO and CTIO staff. Members of the RECONS team
(JPS \& TJH) have been supported by NSF grants AST 05-07711, 09-08402 and
14-12026. We would also like to thank Andrei Tokovinin for helpful comments.
\section{Introduction}\label{sec:intro}
High-energy particle collisions such as those produced in the Large Hadron Collider (LHC) at CERN can result in the production of massive particles (e.g.\ $W$/$Z$/$H$ bosons and top quarks) with large Lorentz boosts. When such particles decay, their decay products become collimated, or `boosted', in the direction of the progenitor particle. For massive particles that are sufficiently boosted, it is advantageous to reconstruct their hadronic decay products as a single large-radius (\largeR) jet. Such \largeR jets capture a characteristic, multi-pronged jet substructure from the two-body or three-body decays of hadronically decaying $W$, $Z$ and $H$ bosons and top quarks, which is distinct from the radiation pattern of a light-quark- or gluon-initiated jet.
The substructure of boosted particle decays~\cite{Asquith:2018igt,Larkoski:2017jix} allows powerful new approaches to be utilised in searches for physics beyond the Standard Model (BSM)~\cite{EXOT-2015-04,SUSY-2016-10,EXOT-2016-25,SUSY-2017-01,HDBS-2018-31,CMS-B2G-18-002,CMS-B2G-17-019,CMS-B2G-17-018,CMS-EXO-18-012,CMS-HIG-19-003} at high energy scales, and has enabled novel measurements of Standard Model processes~\cite{STDM-2017-04, STDM-2017-33, STDM-2017-14, Aad:2020zcn,CMS-TOP-19-005,CMS-SMP-16-010,CMS-TOP-17-013,Adam:2020kug,Aaij:2017fak,Aaij:2019ctd,Acharya:2019djg,Zardoshti:2020cwl}.
The reconstruction of boosted hadronic systems is complicated by the presence of soft radiation from several sources, which degrades performance when reconstructing jet substructure observables. In particular, soft radiation from the underlying event and uncorrelated radiation from additional $pp$ interactions concurrent with the hard-scattering event of interest (pile-up interactions) can degrade the jet mass resolution and other jet substructure quantities, which are critical to boosted object identification. These effects are amplified by the use of a large radius for jet reconstruction~\cite{Dasgupta:2007wa,Cacciari:2008gn,Cacciari:2010te,Soyez:2012hv}, which incorporates more uncorrelated energy. During Run~1, the average number of pile-up interactions per LHC bunch crossing was roughly 20. This number increased to $\sim34$ in the Run~2 dataset, although some events during this period were recorded with up to 70 pile-up interactions. The average number of pile-up collisions is expected to increase further during Run~3 and will reach $\sim200$ pile-up interactions during high-luminosity LHC operations~\cite{ApollinariG.:2017ojx}. As experimental conditions become more challenging, the choices made when reconstructing \largeR jets will need to evolve to maintain optimal performance.
There is no single way to reconstruct a jet, and several choices must be made at the level of a physics analysis to define the jets which will be used. Jets at the LHC are typically reconstructed from some set of input objects (`jet inputs', or simply `inputs' throughout) using a sequential recombination algorithm with a user-specified radius parameter ($R$). Once a jet input type is chosen, it may be preprocessed before jet reconstruction, for example, to mitigate the effects of pile-up. After jet reconstruction, a grooming algorithm may be applied to the jets which preferentially removes soft and/or wide-angled radiation from the reconstructed jet, to further suppress contributions from pile-up and the underlying event and to enhance the resolution of the jet mass and other substructure observables.
\LargeR jets are typically reconstructed by ATLAS using the \akt algorithm~\cite{Cacciari:2008gp} and a radius parameter $R=1.0$. The choice of recombination scheme and radius parameter has been studied previously~\cite{PERF-2012-02}, and is not revisited in these studies. ATLAS \largeR jet reconstruction has so far been based on topological cluster inputs reconstructed only using calorimeter-based energy measurements. These clusters provide excellent energy resolution, but do not accurately represent the positions of individual particles within jets with large transverse momentum (\pt), particularly in areas where the energy density is large or the calorimeter granularity is coarse. This can result in degraded performance when the resolution of individual particles becomes relevant, for instance, when reconstructing the mass of showers which are so collimated that they are not spatially resolved by the ATLAS calorimeter's granularity. In order to better reconstruct the angular distributions of charged particles within jets, several particle-flow (PFlow) algorithms which were developed and commissioned by ATLAS during Run~2 are considered. These include a PFlow implementation designed to improve $R=0.4$ jet performance at low \pt~\cite{PERF-2015-09}, and a variant designed to reconstruct jet substructure at the highest transverse momenta, called Track-CaloClusters (TCCs)~\cite{ATL-PHYS-PUB-2017-015,HDBS-2018-31}. In this work, a union of PFlow and TCCs called `Unified Flow Objects' (UFOs) is established to provide optimal performance across a wider kinematic range than is possible with either particle-flow objects (PFOs) or TCCs alone, which are each found to perform well in distinct kinematic regions. Jet inputs may also be preprocessed using one or several of the many input-object-level pile-up mitigation techniques which have been developed, such as constituent subtraction~\cite{Berta:2014eza, Berta:2019hnj}, Voronoi subtraction~\cite{Soyez:2018opl}, SoftKiller~\cite{Cacciari:2014gra}, and pile-up per particle identification (PUPPI)~\cite{Bertolini:2014bba}. Various input types and pile-up mitigation algorithms can be combined to create pile-up-robust inputs to jet reconstruction, adding additional complexity to the search for optimal performance.
Grooming algorithms are another tool which may be used to remove undesirable radiation from jets after they have been reconstructed. The performance of several grooming algorithms was studied by ATLAS in detail using Run~1 data~\cite{PERF-2015-03} and during preparations for Run~2~\cite{ATL-PHYS-PUB-2014-004}, including the jet trimming~\cite{Krohn:2009th}, pruning~\cite{Ellis:2009me}, and mass drop filtering~\cite{Butterworth:2008iy} algorithms. Based on these studies, \largeR jets groomed with the trimming algorithm using parameter choices of $\Rsub=0.2$ and $\fcut=5$\% were found to be optimal for ATLAS with Run~2 conditions. Since the completion of these studies, several additional jet grooming algorithms have been proposed, including the modified mass drop (mMDT)~\cite{Dasgupta:2013ihk} and soft-drop (SD)~\cite{Larkoski:2014wba} algorithms, and their recent extensions: bottom-up soft-drop (BUSD) and recursive soft-drop (RSD)~\cite{Dreyer:2018tjj}.
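To make the declustering logic of the soft-drop family concrete, the following is a minimal sketch of the $z > \zcut\,(\Delta R/R_0)^\beta$ condition applied while walking down a Cambridge/Aachen splitting tree (a toy binary-tree representation in plain Python; this is not the ATLAS implementation):
\begin{verbatim}
import math

def dphi(a, b):
    d = abs(a - b) % (2.0*math.pi)
    return min(d, 2.0*math.pi - d)

def soft_drop(node, zcut=0.1, beta=0.0, R0=1.0):
    # node: (pt, eta, phi) leaf or (pt, eta, phi, child1, child2)
    while len(node) == 5:
        _, _, _, j1, j2 = node
        z  = min(j1[0], j2[0])/(j1[0] + j2[0])
        dR = math.hypot(j1[1] - j2[1], dphi(j1[2], j2[2]))
        if z > zcut*(dR/R0)**beta:
            return node              # splitting passes: stop
        node = j1 if j1[0] >= j2[0] else j2  # drop softer branch
    return node

# toy tree: hard two-prong core plus one soft wide-angle branch
core = (480.0, 0.05, 0.10,
        (300.0, 0.04, 0.08), (180.0, 0.07, 0.13))
jet  = (500.0, 0.10, 0.15, core, (20.0, 0.8, 0.9))
print(soft_drop(jet, zcut=0.1, beta=0.0))   # returns the core
\end{verbatim}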
The development of new input objects, pile-up mitigation techniques and jet grooming algorithms by the experimental and phenomenological communities motivates a thorough reoptimisation of the \largeR jet definition used by ATLAS. In this paper, the jet tagging and substructure performance of 171 distinct combinations of the different jet inputs, pile-up mitigation techniques and grooming algorithms is evaluated using Run~2 conditions. The performance of different jet definitions is compared in the context of several metrics, which quantify their tagging performance, their pile-up stability, and the sensitivity of their mass response to different jet substructure topologies. The performance in data is also studied to ensure the validity of the conclusions from the Monte Carlo studies.
The remaining sections of this document are structured as follows. The ATLAS detector is described in Section~\ref{sec:detector}, along with aspects of the 2017 $pp$ dataset and details of the simulated events used to perform these studies. An overview of the jet reconstruction techniques surveyed by these studies is provided in Section~\ref{sec:objects}. Several metrics are used to determine the optimal jet definition, as well as to understand the behaviour of individual algorithms. Due to the large number of possible \largeR jet definitions, a two-stage optimisation is performed to determine which of these exhibit the best performance. In the first stage, presented in Section~\ref{sec:nocalib}, the metrics which will be used to evaluate the relative performance of all jet definitions are established by studying the performance of a limited set of jet definitions. The observations made from these comparisons motivate a union of the existing particle-flow and TCC input objects; this new input object type is presented in Section~\ref{sec:ufos}. The results of the complete survey of jet definitions are presented in Section~\ref{sec:results}. UFO-based definitions which perform consistently well are selected for further study. This smaller list of jet definitions, each of which improves on the current ATLAS baseline \largeR jet definition, is calibrated using simulated events, and a more detailed comparison of their performance in terms of their tagging performance and jet \pt and mass resolutions as well as their performance in data is made in Section~\ref{sec:calib}. In an appendix, more details of the interaction between pile-up interactions and topological cluster formation are provided.
\section{The ATLAS detector, data and simulated events}\label{sec:detector}
The ATLAS detector~\cite{PERF-2007-01,ATLAS-TDR-2010-19,PIX-2018-001} consists of three principal subsystems\footnote{ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the centre of the LHC ring, and the $y$-axis points upward. Cylindrical coordinates $(r,\phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle around the $z$-axis. The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta=-\ln\tan(\theta/2)$.}. The inner detector (ID) provides tracking of charged particles within $|\eta|<2.5$ using silicon pixel and microstrip detectors, as well as a transition radiation tracker which provides a large number of hits in the ID's outermost layers in addition to particle identification information. This subsystem is immersed in an axial magnetic field generated by a 2~T solenoid. A sampling calorimeter surrounds the ID and barrel solenoid, providing energy measurements of electromagnetically and hadronically interacting particles within $|\eta| < 4.9$, and is followed by a muon spectrometer.
The electromagnetic showers of electrons and photons are measured with a high-granularity liquid argon (LAr) calorimeter, consisting of a barrel module within $|\eta|<1.475$ and two endcaps covering $1.365 < |\eta| < 3.2$. Hadronic showers are measured using a steel/scintillator tile calorimeter within $|\eta|<1.7$ and with a pair of LAr/copper endcaps within $1.5 < |\eta| < 3.2$. In the forward region, a LAr/copper and LAr/tungsten forward calorimeter measures showers of both kinds within $3.2 < |\eta| < 4.9$.
The muon spectrometer is based on one barrel and two endcap superconducting toroidal magnets. Precision chambers provide measurements for all muons within $|\eta|<2.7$, and separate trigger chambers allow the online selection of events with muons within $|\eta| < 2.4$.
As writing events to disk at the nominal LHC collision rate of 40~MHz is currently unfeasible, a two-level trigger system is used to select events for analysis. The hardware-based Level-1 trigger accepts events at a rate of $\sim$100~kHz using a subset of available detector information. The software-based High-Level Trigger then reduces the event rate to $\sim$1~kHz, which is retained for further analysis.
Studies presented in this paper utilise a dataset of proton--proton collisions delivered by the LHC in 2017 with centre-of-mass-energy $\sqrt{s}=13$~\TeV\ and collected with the ATLAS detector. Data containing high-\pt dijet events were selected using a single-jet trigger, and the leading \akt $R=1.0$ jet is required to have \pt above 600~\GeV. All data are required to meet standard ATLAS quality criteria~\cite{DAPR-2018-01}; data taken during periods when detector subsystems were not functional, which contain significant contamination from detector noise, or where there were detector read-out problems are discarded. The resulting dataset has an integrated luminosity of 44.3~\ifb\ and an associated luminosity uncertainty of 2.4\%~\cite{ATLAS-CONF-2019-021}, obtained using the LUCID-2 detector~\cite{LUCID2} for the primary luminosity measurements.
The simulated event samples used to perform these studies were generated using \PYTHIA~8.186~\cite{Sjostrand:2006za,Sjostrand:2007gs} with the NNPDF2.3 LO~\cite{Ball:2012cx} set of parton distribution functions (PDF), a \pt-ordered parton shower, Lund string hadronisation~\cite{Andersson:1983ia,Sjostrand:1984ic}, and the A14 set of tuned parameters (tune)~\cite{ATL-PHYS-PUB-2014-021}. These samples provide `background' jets which originate from high-energy quark and gluon scattering (using a \TwoToTwo matrix element), and `signal' jets originating from high-\pt $W$ boson and top quark decays across a wide kinematic range. The signal $W$ jets were produced using a BSM spin-1 $W' \rightarrow WZ \rightarrow qqqq$ model including only hadronic $W$ and $Z$ decays. The signal top quark jets are taken from a BSM $Z' \rightarrow tt$ model, where the top quarks may decay either hadronically or semileptonically. In order to remove dependence on the specific BSM physics models used to generate these jets, the \pt spectrum of signal jets is always reweighted to match that of background jets~\cite{PERF-2015-04}. Straightforward particle-level containment definitions are used to ensure that the signal jets provide samples of two- and three-pronged jet topologies: the decay partons of the $W$ boson or top quark are required to be within $\Delta R = 0.75$ of the particle-level jet axis. Top jets containing leptonic $W$ boson decays are rejected using particle-level information.
All simulated events were passed through the complete ATLAS detector simulation~\cite{SOFT-2010-01} based on $\GEANT4$~\cite{Agostinelli:2002hh} using the FTFP\_BERT\_ATL model~\cite{SOFT-2010-01}. The effect of pile-up was modelled by overlaying the hard-scatter event with minimum-bias $pp$ collisions generated by \PYTHIA~8.210 with the A3 tune~\cite{ATL-PHYS-PUB-2016-017} and the NNPDF2.3 LO PDF set. The number of pile-up vertices was reweighted to match the data events, which have an average of $38$ simultaneous interactions per bunch crossing in the 2017 dataset. Pile-up events are overlaid such that each subdetector reconstructs the effect of signals from adjacent bunch crossings (`out-of-time' pile-up) as well as those from the same bunch crossing as the hard-scatter event (`in-time' pile-up)~\cite{PERF-2014-03}.
\section{Objects and algorithms}\label{sec:objects}
This section provides a brief overview of different jet input object, pile-up mitigation and grooming options. All jets discussed in these studies are reconstructed using the \akt algorithm as implemented in \Fastjet~\cite{Cacciari:2011ma} with radius parameter $R=1.0$. All jets used in these results are required to have a minimum \pt of 300~\GeV, and to be within $|\eta| < 1.2$.
The complete set of jet input object types, pile-up mitigation and grooming algorithms surveyed is summarised in Table~\ref{tab:summary:algos}. In some cases, additional algorithms or settings were studied but were not found to produce results which differed significantly from those presented here. Notes have been made in Section~\ref{sec:nocalib} when appropriate regarding these omitted jet definitions, and they are indicated in Table~\ref{tab:summary:algos} by an asterisk (*).
\begin{table}[ht]
\centering
\caption{Summary of pile-up mitigation algorithms, jet inputs, and grooming algorithms, the abbreviated names used throughout this work, and the relevant parameters tested for each algorithm. UFOs are introduced in Section~\ref{sec:ufos}. Algorithms marked with an asterisk (*) were studied, but were not found to produce results significantly different from other configurations. Such results are not presented in these studies.}
\begin{tabular}{|c||c|c|c|}
\hline
& \textbf{Algorithm} & \textbf{Abbreviation} & \textbf{Settings} \\ \hline \hline
\multirow{8}{*}{\textbf{Jet input objects}} & \multirow{2}{*}{Topological Clusters} & \multirow{2}{*}{Topoclusters} & \multirow{2}{*}{N/A} \\
& & & \\ \cline{2-4}
& \multirow{2}{*}{Particle-Flow} & \multirow{2}{*}{PFlow} & \multirow{2}{*}{N/A} \\
& & & \\ \cline{2-4}
& \multirow{2}{*}{Track-CaloClusters} & \multirow{2}{*}{TCCs} & \multirow{2}{*}{N/A} \\
& & & \\ \cline{2-4}
& \multirow{2}{*}{Unified Flow Objects} & \multirow{2}{*}{UFOs} & \multirow{2}{*}{N/A} \\
& & & \\
\hline \hline
\multirow{9}{*}{\textbf{Pile-up mitigation algorithms}} & \multirow{3}{*}{Constituent Subtraction} & \multirow{3}{*}{CS}
& $A_g = 0.01$ \\
& & & $\DeltaR_\text{max} = 0.25$ \\
& & & $\alpha = 0$ \\ \cline{2-4}
& Voronoi Subtraction (*)& VS
& N/A \\ \cline{2-4}
& SoftKiller & SK
& $\ell = 0.6$ \\ \cline{2-4}
& \multirow{4}{*}{Pile-up Per Particle Identification} & \multirow{4}{*}{PUPPI}
& $R_\text{min} = 0.001$ \\
& & & $R_0 = 0.3$ \\
& & & $a = 200~\MeV$ \\
& & & $b = 14~\MeV$ \\
\hline \hline
\multirow{11}{*}{\textbf{Jet grooming algorithms}}
& \multirow{2}{*}{Soft-Drop} & \multirow{2}{*}{SD}
& $\zcut$ = 0.1 \\
& & & $\beta$ = 0, 1, 2(*) \\ \cline{2-4}
& \multirow{2}{*}{Bottom-up Soft-Drop} & \multirow{2}{*}{BUSD}
& $\zcut$ = 0.05, 0.1 \\
& & & $\beta$ = 0, 1, 2(*) \\ \cline{2-4}
& \multirow{3}{*}{Recursive Soft-Drop} & \multirow{3}{*}{RSD}
& $\zcut$ = 0.05, 0.1 \\
& & & $\beta$ = 0, 1, 2(*) \\
& & & $N$ = 3, 5(*), $\infty$ \\ \cline{2-4}
& \multirow{2}{*}{Pruning} & \multirow{2}{*}{N/A}
& \zcut = 0.15 \\
& & & \Rcut = 0.25 \\ \cline{2-4}
& \multirow{2}{*}{Trimming} & \multirow{2}{*}{N/A}
& \fcut = 5\%, 9\% \\
& & & \Rsub = 0.1, 0.2 \\
\hline
\end{tabular}
\label{tab:summary:algos}
\end{table}
\subsection{Jet input objects}
\subsubsection{Stable generator-level particles}\label{sec:objects:truth}
Particle-level jets, or `truth jets', are reconstructed in simulated events at generator level. All detector-stable particles from the hard-scattering process with a lifetime $\tau$ in the laboratory frame such that $c\tau > 10$ mm are used. Particles that are expected to leave only negligible energy depositions in the calorimeter, i.e.\ muons and neutrinos, are excluded.
Ungroomed particle-level jets are used as the reference objects for selections throughout these studies in order to ensure that the same set of reconstructed jets are selected for comparison, regardless of the jet input objects used in reconstruction or grooming algorithm applied. In studies of simulated jets, unless otherwise specified, ungroomed particle-level jets are geometrically matched ($\DeltaR$ < 0.75) to ungroomed reconstructed jets, and kinematic selections are applied to the ungroomed particle-level jet four-vector.
Particle-level jets are also taken as the reference for simulation-based ATLAS jet calibrations, and for studies of the jet energy and mass resolution. In this circumstance, they are groomed using the same algorithm and parameters as the reconstructed jets to which they are being compared (Section~\ref{sec:calib}).
\subsubsection{Inner detector tracks}
Tracks are reconstructed from charged-particle hits in the inner detector. In order to ensure that only well-reconstructed tracks from the hard scattering are used, track quality criteria are applied. The `loose' quality working point is used, which places requirements on the number of silicon hits in each subdetector~\cite{ATL-PHYS-PUB-2015-051}. Tracks are associated to the primary vertex (PV) of the hard interaction by placing a requirement on the track distance of closest approach to the PV along the $z$ axis, $|z_{0}\sin{\theta}| < 2.0~\text{mm}$. The PV is selected as the vertex with the highest scalar $\pt^2$ sum of tracks associated with it using transverse and longitudinal impact parameter requirements. In addition, tracks are required to have \pt > 500~\MeV\ and to be within the tracking volume ($|\eta| < 2.5$).
\subsubsection{Topological clusters}
Jets reconstructed from ATLAS calorimeter information are built from `topoclusters'~\cite{PERF-2014-07}, which are three-dimensional groupings of topologically connected calorimeter cells. Topoclusters are formed using iterated `seed' and `collect' steps based on the absolute value of the signal significance in a cell relative to the expected noise, $\sigma_\text{noise}$, which considers both electronic noise and stochastic noise from pile-up interactions. Cells with signal significance over $4\sigma_\text{noise}$ in an event are allowed to seed topocluster formation, and their neighbouring cells with significance over $2\sigma_\text{noise}$ are subsequently included. This step is repeated until all adjacent cells have a significance below $2\sigma_\text{noise}$, at which point all neighbouring cells are added to the cluster ($0\sigma_\text{noise}$). If this process results in a cluster with two or more local energy maxima, a splitting algorithm is used to separate the showers. The energies of the resulting set of clusters are calibrated at the electromagnetic (EM) scale, and all clusters are taken to be massless.
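The seeding and growth logic can be sketched compactly on a two-dimensional grid of cell significances (a schematic illustration only: the real algorithm operates on the three-dimensional calorimeter cell geometry, treats shared perimeter cells more carefully, and includes the splitting step described above):
\begin{verbatim}
import numpy as np
from collections import deque

def topocluster(s, seed=4.0, grow=2.0):
    # s: 2D array of |E_cell|/sigma_noise; returns a label map
    lab, nclus = np.zeros(s.shape, int), 0
    for flat in np.argsort(s, axis=None)[::-1]:
        ij = np.unravel_index(flat, s.shape)
        if s[ij] < seed:
            break                    # sorted: no seeds remain
        if lab[ij]:
            continue
        nclus += 1
        lab[ij] = nclus
        q = deque([ij])
        while q:
            i, j = q.popleft()
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < s.shape[0]
                            and 0 <= nj < s.shape[1]
                            and not lab[ni, nj]):
                        lab[ni, nj] = nclus    # 0-sigma collect
                        if s[ni, nj] >= grow:
                            q.append((ni, nj)) # 2-sigma growth
    return lab

rng = np.random.default_rng(1)
s = np.abs(rng.normal(size=(12, 12)))          # noise-only cells
s[5, 5], s[5, 6], s[5, 7] = 6.0, 3.0, 2.5      # injected shower
print(topocluster(s)[3:8, 3:10])
\end{verbatim}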
An additional calibration using the local cell weighting (LCW) scheme is applied to form clusters whose energy is calibrated at the correct particle-level scale~\cite{PERF-2014-07}. This weighting scheme classifies energy depositions as either electromagnetic- or hadronic-like using a variety of cluster moments, and accounts for the non-compensating response of the calorimeter, out-of-cluster energy, and for energy deposited in the dead material within the detector.
Finally, the angular coordinates ($\eta$ and $\phi$) of topoclusters are recalculated relative to the primary vertex of the event, instead of the geometric centre of the ATLAS detector. A detailed description of topocluster reconstruction and calibration is provided in Ref.~\cite{PERF-2014-07}.
\subsubsection{Particle-flow objects (PFOs)}\label{sec:objects:pfo}
Particle-flow (PFlow) reconstruction combines track- and calorimeter-based measurements and results in improved jet energy and mass resolution, and improved pile-up stability relative to jets reconstructed from topoclusters alone~\cite{PERF-2015-09,Aad:2020flx}. Double-counting of contributions from the momentum measurement of charged particles in the inner detector and their energy measurement from the calorimeters is removed using a cell-based energy subtraction.
The PFlow algorithm first attempts to match each selected track to a single topocluster in the calorimeter, using topoclusters calibrated to the EM scale, and tracks selected using the `tight' quality working point~\cite{ATL-PHYS-PUB-2015-051}. The track momentum and the topocluster position are used to compute the expected energy deposition in the calorimeter by the particle that created the track. It is not uncommon for a single particle to deposit energy in multiple topoclusters. For each track/topocluster system, the PFlow algorithm evaluates the probability that the particle's energy was deposited in more than one topocluster, and may include additional topoclusters in the track/topocluster system if they are necessary to reconstruct the full shower energy. The expected energy deposited in the calorimeter by the particle that produced the track is subtracted, cell-by-cell, from the associated topoclusters. If the associated calorimeter energy following this subtraction is consistent with the expected shower fluctuations of a single particle, the remaining calorimeter energy is removed.
Topoclusters which are not matched to any tracks are assumed to contain energy deposited by neutral particles and are left unmodified. In the cores of jets, particles are often produced at higher energies and in dense environments, decreasing the advantages of using inner-detector-based measurements of charged particles. To account for this degradation of inner tracker performance, the shower subtraction is gradually disabled for tracks with momenta below $100~\GeV$ if the energy $E_\text{clus}$ deposited in the calorimeter in a cone of size $\DeltaR=0.15$ around the extrapolated track trajectory satisfies
\[
\frac{E_{\text{clus}}-\langle E_{\textrm{dep}} \rangle}{\sigma(E_{\textrm{dep}})}>33.2\times\log_{10}(40~\GeV/\pt^{\text{trk}})\,,
\]
where $E_{\textrm{dep}}$ is the expected energy deposition from a charged pion. The subtraction is completely disabled for tracks with $\pt>100$~\GeV\ when this condition is satisfied.
After the PFlow algorithm has run to completion, the collection of particle-flow objects (PFOs) consists of tracks, and both modified and unmodified topoclusters. Charged PFOs which are not matched to the PV are removed in order to reduce the contribution from pile-up; this procedure is referred to as `Charged Hadron Subtraction' (CHS)~\cite{CMS-JME-18-001,CMS-PRF-14-001}.
\subsubsection{Track-CaloClusters (TCCs)}\label{sec:objects:tccs}
Track-CaloClusters (TCCs)~\cite{ATL-PHYS-PUB-2017-015} were developed in the context of searches for massive BSM diboson resonances~\cite{HDBS-2018-31}. These constituents combine calorimeter- and inner-detector-based measurements in a manner which is optimised for jet substructure reconstruction performance in the highest-\pt jets. Unlike PFlow, which uses the expected energy depositions of single particles to determine the contributions of individual tracks to clusters, the TCCs use the energy information from topoclusters and angular information from tracks.
The TCC algorithm starts by attempting to match each `loose' track in the event (from both the hard-scatter and pile-up vertices) to topoclusters calibrated to the local hadronic scale in the calorimeter. In the case where one track matches one topocluster, the $\pt$ of the TCC object is taken from the topocluster, while its $\eta$ and $\phi$ coordinates are taken from the track. In more complex situations where multiple tracks are matched to multiple topoclusters, several TCC objects are created (where the TCC multiplicity is equal to the track multiplicity): each TCC object is given some fraction of the momentum of the topocluster, where that fraction is determined from the ratios of momenta of the matched tracks. TCC angular properties ($\eta$, $\phi$) are taken directly from the unmodified inner detector tracks, and their mass is set to zero.
As in PFlow reconstruction, unmatched topoclusters are included in the TCC objects as unmodified neutral objects.
\subsection{Jet-input-level pile-up mitigation algorithms}
Prior to jet reconstruction, the set of input objects may be preprocessed by one or by a combination of several input-level pile-up mitigation algorithms. When reconstructing jets from topoclusters, these algorithms are applied to the entire set of inputs. When incorporating tracking information, the PV provides an additional, powerful method to reject charged particles from pile-up interactions. In this case, these additional pile-up mitigation algorithms are applied only to the neutral PFOs or TCCs in an event before jet finding.
\subsubsection{Constituent Subtraction (CS)}
Constituent Subtraction~\cite{Berta:2014eza} is a per-particle method of performing area subtraction~\cite{Soyez:2009cw} on jet input objects. The catchment area~\cite{Cacciari:2008gn} of each input object is defined using ghost association: massless particles called `ghosts' are overlaid on the event uniformly, with \pt satisfying
\begin{equation*}
\pt^{\text{g}}= A_{\text{g}} \times \rho,
\end{equation*}
where $A_{\text{g}}$, the area of the ghosts, is set to 0.01 and $\pt^{\text{g}}$ corresponds to the expected contribution from pile-up radiation in a small $\Delta \eta$--$\Delta\phi$ area of $0.1\times0.1$. For each event, the pile-up energy density $\rho$ is estimated as the median of the $\pt/A$ distribution of the $R=0.4$ \kt~\cite{Ellis:1993tq} jets in the event. These jets are reconstructed without a \pt requirement, but are required to be within $|\eta| < 2.0$. The total \pt of all of the ghosts is equal to the expected average pile-up contribution, based on the estimated value of $\rho$.
After the ghosts have been added, the distance $\Delta R_{i,k}$ between each cluster $i$ and ghost $k$ is given by\footnote{In the original formulation, there is also the option to make a $\pt^\alpha$-dependent distance metric. Only values of $\alpha=0$ were considered in Ref.~\cite{Berta:2014eza}, and so only this configuration is considered in these studies.}
\begin{equation*}
\Delta R_{i,k} = \sqrt{(\eta_i - \eta_k)^2+(\phi_i - \phi_k)^2}.
\end{equation*}
The cluster--ghost pairs are then sorted in order of ascending $\Delta R_{i,k}$, and the algorithm proceeds iteratively through each $(i,k)$ pair,
modifying the \pt of each cluster and ghost by
\begin{center}
\begin{tabular}{c l}
If $p_{\text{T,}i} \geq p_{\text{T},k}$: & $p_{\text{T,}i}\longrightarrow p_{\text{T,}i} - p_{\text{T,}k}$, \\
& $p_{\text{T,}k}\longrightarrow 0$; \\
otherwise: & $p_{\text{T,}k} \longrightarrow p_{\text{T,}k} - p_{\text{T,}i}$, \\
& $p_{\text{T,}i}\longrightarrow 0$.
\end{tabular}
\end{center}
until $\Delta R_{i,k} > \Delta R_{\text{max}}$, where $\Delta R_{\text{max}}$ is a free parameter of the algorithm taken to be 0.25 in this study, based on studies of $R=0.4$ jet performance~\cite{ATLAS-CONF-2017-065}. Any ghosts remaining after the subtraction are eliminated.
In the authors' description of this algorithm, a correction is also applied for the mass of the input object. Since all neutral ATLAS jet inputs are defined to be massless, this correction is unnecessary in the ATLAS implementation.
\subsubsection{SoftKiller (SK)}
The SoftKiller (SK)~\cite{Cacciari:2014gra} algorithm applies a \pt cut to input objects. This cut is chosen on an event-by-event basis such that the value of $\rho$ after the selection is approximately zero. To achieve this, the event is divided into an $\eta$--$\phi$ grid of user-specified length scale, chosen to be $\ell=0.6$ based on studies of $R=0.4$ jet performance~\cite{ATLAS-CONF-2017-065}. The \pt cut is chosen as the smallest value which leaves half of the grid cells empty, and is then applied to the input objects in all grid cells, not only those which become empty.
To account for detector-level effects, where input objects may not consist purely of hard-scatter or pile-up contributions (see appendix), the best performance is achieved by applying some form of area subtraction to input objects before applying SK. In these studies, SK is always applied to inputs after the CS algorithm; this combination is indicated as `CS+SK'.
An alternative approach to assigning areas to jet input objects is based on Voronoi tesselation~\cite{Soyez:2018opl} and was studied both in isolation and in conjunction with the SoftKiller algorithm. Both variants of this alternative were found to perform similarly to the CS+SK results presented here.
\subsubsection{Pile-up Per Particle Identification (PUPPI)}
`Pile-up per particle identification', or PUPPI~\cite{Bertolini:2014bba}, is a pile-up-mitigation algorithm which assigns each input object $i$ a likelihood to have originated from a pile-up interaction based on its kinematic properties and proximity to charged hard-scatter particles matched to the event's PV. This likelihood is given by
\[
\alpha_{i} = \log\left(\sum_{j} \frac{\pt^{j}}{\Delta R_{ij}} \times \Theta\left(R_\mathrm{min} \le \Delta R_{ij} \le R_{0}\right)\right),
\]
where the index $j$ runs over the charged inputs matched to the PV, $R_{0}$ is the maximum radial distance at which inputs may be matched to each other, $R_\mathrm{min}$ is the minimum radial distance of matching, $\Delta R_{ij}$ is the angular distance between input object $i$ and charged hard-scatter particle $j$, and $\Theta$ is the Heaviside step function. The value of $R_\mathrm{min}$ is generally taken to be very small, and is chosen to be $0.001$ in these studies. The value of $R_{0}$ is chosen to be $0.3$.
Once $\alpha$ has been calculated for all input objects, then the following quantity is determined:
\[
\chi_{i}^2 = \Theta\left(\alpha_i - \bar{\alpha}_\mathrm{PU}\right) \times \frac{\left(\alpha_i - \bar{\alpha}_\mathrm{PU}\right)^2 }{\sigma_\mathrm{PU}^2},
\]
where $\bar{\alpha}_\mathrm{PU}$ is the mean value of $\alpha$ for all charged pile-up input objects in the event, and $\sigma_\mathrm{PU}$ is the RMS of that same distribution. The four-momentum of each neutral input $i$ is then weighted by
\[
w_{i} = F_{\chi^{2},\mathrm{NDF}=1}\left(\chi_{i}^{2}\right),
\]
where $F_{\chi^{2},\mathrm{NDF}=1}$ is the cumulative distribution function of the $\chi^2$ distribution with one degree of freedom. Since $\chi^{2}_{i}=0$ for any input with $\alpha_{i}\le\bar{\alpha}_\mathrm{PU}$, this weighting eliminates all neutral inputs $i$ whose calculated value of $\alpha_{i}$ is less than $\bar{\alpha}_\mathrm{PU}$.
In order to suppress additional noise, a $\pt$ cut is applied to the remaining input objects after they have been reweighted. This cut is dependent on the number of reconstructed primary vertices (\Npv), and is determined by
\[
p_\mathrm{T,cut} = a + b \times \Npv,
\]
where the parameters $a$ and $b$ are user-specified. For these studies, the parameters are chosen to be $a = 200$~\MeV\ and $b = 14$~\MeV, based on studies of the $R=0.4$ PFlow jet energy resolution.
While PUPPI could technically be applied to topoclusters, the principles of the algorithm depend strongly on the matching of neutral input objects
to nearby charged particles from the hard-scatter event. It is therefore more effective for particle-flow-type algorithms. Due to the large number of free parameters,
and since it has only been optimised for ATLAS PFlow jets with $R=0.4$, PUPPI is only applied to PFlow jets.
\subsection{Grooming algorithms}
\subsubsection{Trimming}
Trimming~\cite{Krohn:2009th} was designed to remove contamination from soft radiation in the jet by excluding regions of the jet where the energy flow originates mainly from the underlying event, pile-up, or initial-state radiation (ISR), in order to improve the resolution of the jet energy and mass measurements. In Run~1~\cite{PERF-2012-02}, it was also found to be effective in mitigating the effects of pile-up on \largeR jets. To trim a \largeR jet, the jet constituents are reclustered into subjets of a user-specified radius \Rsub using the \kt algorithm. Subjets with \pt less than some user-specified fraction \fcut of the \pt of the original ungroomed jet are discarded: their constituents are removed from the final groomed jet.
\subsubsection{Pruning}
Pruning~\cite{Ellis:2009me} proposes a modification of the jet clustering sequence, which removes splittings that are assessed as likely to pull in soft radiation from pile-up interactions and the underlying event. This is achieved by determining a `pruning radius' such that hard prongs fall into separate subjets, while discarding softer radiation outside of these prongs. The constituents of the \largeR jet are reclustered using the Cambridge--Aachen (C/A) algorithm~\cite{Dokshitzer:1997in,Wobisch:1998wt} to form an angle-ordered cluster sequence. At each step of the clustering sequence, the softer subjet is discarded if the splitting is both sufficiently soft and sufficiently wide-angled, as enforced by requiring
\[\Delta R_{12} \ge \Rcut \times 2\frac{M_{12}}{p_{\text{T,}12}},\]
\[z \le \zcut,\]
where $\Delta R_{12}$, $M_{12}$, and $p_{\text{T,}12}$ are respectively the angular distance, the mass, and the transverse momentum of the subjet pair at a given step in the clustering sequence, and $z = \text{min} \left( p_{\text{T,1}}, p_{\text{T,2}} \right) / \left(p_{\text{T,1}}+p_{\text{T,2} }\right)$. The parameters $\Rcut$ and $\zcut$ are user-defined, and respectively control the amount of wide-angled and soft radiation which is removed by the pruning algorithm.
\subsubsection{Soft-Drop (SD)}
Soft-drop~\cite{Larkoski:2014wba} is a technique for removing soft and wide-angle radiation from a jet. In this algorithm, the constituents of the \largeR jet are reclustered using the C/A algorithm, creating an angle-ordered jet clustering history. Then, the clustering sequence is traversed in reverse (starting from the widest-angled radiation and iterating towards the jet core). At each step in the clustering sequence, the kinematics of the splitting are tested with the condition
\[\frac{\min(p_{\text{T,}1}, p_{\text{T,}2})}{p_{\text{T,}1} + p_{\text{T,}2}} < \zcut \Big( \frac{\Delta R_{12}}{R} \Big)^{\beta}\,,\]
where the subscripts $1$ and $2$ respectively denote the harder and softer branches of the splitting, and the parameters $\zcut$ and $\beta$ dictate the amount of soft and wide-angled radiation which is removed. If the splitting fails this condition, the lower-\pt branch of the clustering history is removed, and the declustering process is repeated on the higher-\pt branch. If the condition is satisfied, the process terminates and the remaining constituents form the groomed jet.
If $\beta=0$, SD suppresses radiation purely based on the \pt of the splitting, while larger values of $\beta$ allow more soft radiation to remain within the groomed jet when it is sufficiently collinear. SD with $\beta=0$ is equivalent to the modified Mass-Drop Tagger (mMDT) algorithm~\cite{Marzani:2017mva,PERF-2012-02}. SD grooming has an intrinsic quality which is not shared by the trimming or pruning algorithms: certain jet substructure observables are calculable beyond leading-logarithm accuracy following the application of SD~\cite{Marzani:2017mva,Marzani:2017kqd,Frye:2016aiz,Frye:2016okc,Kang:2018vgn,Kang:2018jwa,Hoang:2017kmk}.
\subsubsection{Recursive Soft-Drop (RSD) and Bottom-Up Soft-Drop (BUSD)}
The standard soft-drop algorithm aims to find the first hard splitting in the jet clustering history in order to define a groomed jet. In the case of a multi-pronged decay, this treatment may not be sufficient to remove enough soft radiation from the jet, since the SD condition may be satisfied before removing all of this energy. A recursive extension of the SD algorithm (`recursive soft-drop', or RSD) has been proposed~\cite{Dreyer:2018tjj}, in which the algorithm continues recursively along the harder branch of the C/A clustering sequence until $N$ hard splittings have been found. The case of $N=1$ is equivalent to the standard SD algorithm, while for larger values of $N$, a larger fraction of the jet may be traversed by the grooming algorithm. When $N=\infty$, the entire C/A sequence is traversed by the grooming algorithm regardless of the number of hard splittings found.
Bottom-up soft-drop (BUSD)~\cite{Dreyer:2018tjj} instead incorporates the SD criteria within the jet clustering algorithm, similar to pruning. In these studies, the `local' version of BUSD is implemented, which is applied after initial jet reconstruction. Using this approach, jets are reconstructed with the \antikt algorithm, and then reclustered using a modified version of the C/A algorithm, where particles $i$ and $j$ with the smallest distance $d_{ij} = \Delta R_{ij} / R_{0}$ are combined to create a new pseudojet given by
\[
p_{ij} =
\begin{cases}
\max(p_{i}, p_{j}) & \text{if the soft-drop condition fails,} \\
p_{i} + p_{j} & \text{otherwise.}
\end{cases}
\]
The results of applying local BUSD are expected to be similar to those of RSD with $N=\infty$, since both algorithms begin with the same set of constituents per jet and groom the entire C/A clustering sequence.
Other configurations for the SD family of algorithms were studied, including $\beta=2$ grooming, but were not found to give results significantly different from those reported in detail.
\section{Performance metrics}\label{sec:nocalib}
In order to survey the relative performance of all considered \largeR jet definitions, several metrics must be established which probe relevant aspects of their behaviour in the context of \largeR jet reconstruction and calibration by ATLAS. It is not feasible to calibrate each of the definitions studied (even with a simulation-based approach, as in Section~\ref{sec:calib}), and so these metrics have been chosen in order to be robust against differences caused by calibration. The metrics selected include the tagging performance of high-\pt $W$ bosons and top quarks, the stability of the jets in the presence of pile-up interactions, and the degree to which a jet definition's mass scale depends on the signal- or background-like substructure of the jet.
In this section, the behaviour of each metric is illustrated using a reduced list of jet definitions that have been selected to highlight the interplay between different aspects of jet reconstruction. For each metric, jets reconstructed from topological clusters, particle-flow and track-calocluster input objects are compared, with and without pile-up mitigation. Two grooming algorithms are also compared for each jet input: trimming with $\Rsub=0.2$ and $\fcut=0.05$, and soft-drop with $\beta=1.0$ and $\zcut = 0.1$. The trimming algorithm is chosen because it is the current baseline definition used by ATLAS. The soft-drop algorithm is chosen as an alternative which has demonstrated good performance, as is shown in Section~\ref{sec:results}.
Results of the complete survey of all jet definitions summarised in Table~\ref{tab:summary:algos} are provided in Section~\ref{sec:results}.
\subsection{Tagging performance}\label{sec:nocalib:tagging}
Many analyses using \largeR jets rely on a tagger to distinguish between different types of jets, such as distinguishing between the decay of a high-\pt, hadronically decaying top quark and a jet originating from a high-energy quark or gluon. Such boosted-particle taggers range in complexity from simple mass cuts to complex machine-learning algorithms~\cite{JETM-2018-03,CMS-JME-13-006,CMS-JME-18-002}. While the complete optimisation of a jet tagger is outside the scope of this work, it is important to compare the tagging performance of different jet definitions in terms of their background rejection (defined as the reciprocal of the background-jet tagging efficiency) at fixed signal-jet tagging efficiency. This may be done using a simple tagger based on the jet mass and a jet substructure (JSS) observable. In order to study the tagging performance for different jet topologies, taggers are created for high-\pt $W$ bosons and top quarks by combining the jet mass with another jet substructure observable which is sensitive to either two- or three-pronged signal jet topologies.
The jet mass, as defined by
\[
\massjet = \sqrt{\left(\sum_{i \in \text{jet}} E_i\right)^2-\left(\sum_{i \in \text{jet}} \vec{p}_i\right)^2},
\]
where the sum runs over the constituents $i$ of the jet, is typically one of the most powerful variables that can be used to discriminate between different types of jets.
To tag boosted $W$ decays, which have a two-pronged structure, the $\DTwo$ observable~\cite{Larkoski:2013eya,Larkoski:2014gra,Larkoski:2015kga} is used with a choice of angular exponent $\beta=1.0$.
This observable is a ratio of three-point to two-point energy--energy correlation functions which has been used by ATLAS in $W$ taggers since Run~1~\cite{PERF-2015-03,JETM-2018-03}.
For boosted top quark decays, which have a three-pronged structure, \tauthrtwo with the winner-take-all axis configuration~\cite{Thaler_2011,Thaler_2012} is used. This observable is a ratio of two $N$-subjettiness variables, which tests the compatibility of a jet's substructure with a particular $N$-pronged hypothesis.
ATLAS has incorporated \tauthrtwo into its top taggers, whether simple or complex, since Run~1~\cite{PERF-2015-04,JETM-2018-03}.
Unlike a mass-only tagger, where more aggressive grooming can improve the jet mass resolution at the cost of grooming away additional information contained within a jet's soft radiation, a mass+JSS tagger relies on such soft radiation to achieve better background rejection. Such taggers are a more realistic approximation to the expected future tagging performance of any given jet definition (which will use more sophisticated techniques), and are amenable to this survey of many jet definitions.
For both the $W$ and top taggers, the tagging algorithm proceeds similarly: first, a fixed signal-efficiency ($\epsilon_\text{sig}$) mass window is selected, where the window is defined to be the minimum mass range which contains 68\% of the signal mass distribution. This window should select the signal jet mass peak. A one-sided cut is then applied to \DTwo or \tauthrtwo, and background rejection ($1/\epsilon_\text{bkg}$) is compared at a fixed signal efficiency taken to be $\epsilon_\text{sig}=50\%$. This signal efficiency working point is representative of taggers used by ATLAS in physics analysis, and the results were not found to depend strongly on the working point which was selected. The relative performance of various jet definitions in terms of their background rejection at a fixed signal efficiency point was noted to typically provide a consistent ordering of jet definitions before and after applying a simulation-based calibration, and so this metric was selected instead of possible alternatives such as the Receiver Operating Characteristic (ROC) curve integral.
The background rejection for the boosted $W$ boson tagger is shown as a function of signal tagging efficiency in Figure~\ref{nocalib:tag:w} for two \pt bins: a low-\pt bin ($300~\GeV < \pt^\text{true, ungroomed} < 500~\GeV$) and a high-\pt bin ($1000~\GeV < \pt^\text{true, ungroomed} < 1500~\GeV$), where kinematic requirements are placed on the \pt of the ungroomed particle-level jet which is associated with the detector-level jet under study (Section~\ref{sec:objects:truth}). The low-\pt bin represents the regime where the $W$ decay products are boosted just enough to be contained within a single \largeR jet, while the high-\pt bin represents the regime where the decay products are more collimated and may begin to merge. The performance in these two regions is expected to differ due to detector effects and algorithmic differences. Similarly, the background rejection of the top tagger is shown in Figure~\ref{nocalib:tag:top}, except that the lower \pt bin is chosen to be $500~\GeV < \pt^\text{true, ungroomed} < 1000~\GeV$, since the larger mass of the top quark results in less collimation of its decay products.
Better alternatives to the baseline topocluster jet definition are clearly visible. At low \pt, PFlow reconstruction results in the best performance for $W$ boson and top tagging, while TCCs have a lower background rejection than topocluster jets. At high \pt, TCCs provide a significantly better background rejection than the other options, although PFlow still provides an improvement over topocluster reconstruction.
The application of CS+SK pile-up mitigation has very little effect for the high-\pt jets, but for the low-\pt $W$ tagger, it significantly improves the background rejection for soft-drop jets, which are more susceptible to pile-up than trimmed jets. This effect is seen for all three jet input types, but it is pronounced for topocluster inputs, which do not use tracking information to remove pile-up. Top tagging performance benefits more from adopting soft-drop grooming than $W$ tagging: background rejection increases when tagging top quarks regardless of the input object type or \pt bin when soft-drop is chosen.
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_01a.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_01b.pdf}}\\
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_01c.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_01d.pdf}}\\
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_01e.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_01f.pdf}}
\end{center}
\caption{Background rejection as a function of signal efficiency for a tagger using the jet mass and \DTwo for $W$ boson jets at (a,c,e) low \pt, and (b,d,f) high \pt. Several different jet input object types are shown: (a,b) topoclusters, (c,d) particle-flow objects and (e,f) track-caloclusters. Jet \pt and $\eta$ cuts before tagging are made using the ungroomed particle-level \largeR jet matched to each of the groomed reconstructed \largeR jets. Jets groomed with the trimming ($\Rsub=0.2$, $\fcut=0.05$) and soft-drop ($\beta=1.0$, $\zcut = 0.1$) algorithms are shown. The background rejection factor of the baseline topocluster-based trimmed collection at a fixed signal tagging efficiency of 50\% is indicated with a $\star$.}
\label{nocalib:tag:w}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_02a.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_02b.pdf}}\\
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_02c.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_02d.pdf}}\\
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_02e.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_02f.pdf}}
\end{center}
\caption{Background rejection as a function of signal efficiency for a tagger using the jet mass and \tauthrtwo for top quark jets at (a,c,e) low \pt, and (b,d,f) high \pt. Several different jet input object types are shown: (a,b) topoclusters, (c,d) particle-flow objects and (e,f) track-caloclusters. Jet \pt and $\eta$ cuts before tagging are made using the ungroomed particle-level \largeR jet matched to each of the groomed reconstructed \largeR jets. Jets groomed with the trimming ($\Rsub=0.2$, $\fcut=0.05$) and soft-drop ($\beta=1.0$, $\zcut = 0.1$) algorithms are shown. The background rejection factor of the baseline topocluster-based trimmed collection at a fixed signal tagging efficiency of 50\% is indicated with a $\star$.}
\label{nocalib:tag:top}
\end{figure}
\FloatBarrier
\subsection{Pile-up stability}
Two metrics are used to study the pile-up stability of jet definitions in order to determine which definitions are sufficiently insensitive to pile-up. The first quantifies the effect on the jet mass scale by studying how the $W$ boson mass peak position changes as a function of pile-up, and provides a handle with which to assess the impact of pile-up on a jet's hard structure. The second quantifies the impact on substructure observables by studying the pile-up dependence of $W$ boson tagging efficiency, in order to quantify how pile-up contributions alter the soft radiation patterns within jets.
A related study of the effects of pile-up on topocluster reconstruction is presented in an appendix of this publication, utilising a new technique which propagates particle-level information about hard-scatter and pile-up energy depositions through the ATLAS reconstruction procedure.
\FloatBarrier
\subsubsection{Pile-up stability of the $W$ boson jet mass peak position}\label{sec:nocalib:pumass}
Jet substructure observables such as the jet mass are particularly sensitive to pile-up; the contribution of pile-up to the jet mass scales approximately with the jet radius cubed~\cite{STDM-2011-19}. Figure~\ref{nocalib:pileup:wmass} shows a subset of the trimmed mass distribution of $W$ jets in bins of \Npv for various jet input object types, demonstrating that pile-up can visibly alter the average value and width of the jet mass distribution. This effect is quantified using a simple metric. In bins of \Npv, the core of the $W$ mass peak is iteratively fit with a Gaussian distribution. The trend of the fitted peak position versus \Npv is then fit with a line. The slope of this line is a measure of the sensitivity of the jet mass to pile-up: a larger magnitude indicates larger pile-up sensitivity. The position of the $W$ jet mass peak was found to be a more resilient metric when studying the performance of uncalibrated jet definitions than other possible choices, such as properties of the jet mass response.
The results of this fitting procedure are provided in Figure~\ref{nocalib:pileup:wmass:fits} for the reduced set of jet definitions. The application of CS+SK pile-up mitigation is shown to stabilise trends in topocluster and PFlow jets, even for jet grooming algorithms which are most sensitive to the effects of pile-up such as soft-drop with topocluster jets. The fitted value of the $W$ boson mass peak position decreases as a function of \Npv for TCCs. This is related to TCC cluster splitting: as the number of pile-up interactions increases, the number of pile-up tracks also increases. Since these tracks are included in the energy-sharing step of the TCC algorithm, topoclusters are divided into more parts, and more energy is removed. Unlike PFlow and topocluster jet reconstruction, the pile-up stability of TCCs deteriorates after the application of CS+SK. Uncorrected PFlow and TCC jet reconstruction are less sensitive to pile-up than topocluster inputs, since they are able to remove the charged pile-up component via CHS.
\begin{figure}[htbp]
\begin{center}
\subfigure[]{
\includegraphics[width=0.45\textwidth]{fig_03a.pdf}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{fig_03b.pdf}}\\
\subfigure[]{\includegraphics[width=0.45\textwidth]{fig_03c.pdf}}\\
\end{center}
\caption{Pile-up dependence of the $W$ boson jet mass reconstructed using (a) topoclusters, (b) particle-flow objects and (c) track-caloclusters. Distributions are shown for the trimming grooming algorithm ($\Rsub=0.2$, $\fcut=0.05$), with unmodified jet input objects. Jet \pt and $\eta$ cuts before tagging are made using the ungroomed particle-level \largeR jet matched to each of the groomed reconstructed \largeR jets.}
\label{nocalib:pileup:wmass}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_04a.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_04b.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_04c.pdf}}
\end{center}
\caption{The value of the fitted $W$ boson mass peak as a function of the number of primary vertices, \Npv. Several different jet input object types are shown: (a) topoclusters, (b) particle-flow objects and (c) track-caloclusters. Jet \pt and $\eta$ cuts before tagging are made using the ungroomed particle-level \largeR jet matched to each of the groomed reconstructed \largeR jets. Jets groomed with the trimming ($\Rsub=0.2$, $\fcut=0.05$) and soft-drop ($\beta=1.0$, $\zcut = 0.1$) algorithms are shown.}
\label{nocalib:pileup:wmass:fits}
\end{figure}
\subsubsection{Pile-up stability of a simple tagger}
The second metric of pile-up stability quantifies the effect of pile-up on the tagging efficiency, which is impacted more by contributions from soft radiation to the tails of jet substructure observables. The $\DTwo$ variable is particularly sensitive to soft radiation, and so a $W$ tagger is defined using the jet mass and $\DTwo$ (Section~\ref{sec:nocalib:tagging}). For a sample of events with $\Npv < 15$, a mass cut which results in a 68\% signal efficiency is found, and then the $\DTwo$ cut that results in an overall signal efficiency of 50\% is determined. Then, in bins of \Npv, the signal efficiency of applying these cuts is evaluated. These signal efficiencies are plotted as a function of \Npv and the trend is fit with a line. The slope of this line is indicative of pile-up sensitivity in the soft jet substructure of the jet definition. These slopes are shown for the reduced set of jet definitions in Figure~\ref{nocalib:pileup:d2}.
As pile-up levels increase, the signal efficiency of the $W$ tagger tends to decrease, although the opposite behaviour is often observed for TCC jets. Similarly to what was found when studying the $W$ mass peak position metric (Section~\ref{sec:nocalib:pumass}), topocluster inputs are the least stable. After pile-up mitigation, the pile-up stability of all inputs, including TCCs, improves. The trends in stability as a function of grooming algorithm are the same as for the $W$ mass peak position.
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_05a.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_05b.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_05c.pdf}}
\end{center}
\caption{The signal efficiency of a $W$ boson tagger as a function of the number of primary vertices, \Npv. Several different jet input object types are shown: (a) topoclusters, (b) particle-flow objects and (c) track-caloclusters. Jet \pt and $\eta$ cuts before tagging are made using the ungroomed particle-level \largeR jet matched to each of the groomed reconstructed \largeR jets. Jets groomed with the trimming ($\Rsub=0.2$, $\fcut=0.05$) and soft-drop ($\beta=1.0$, $\zcut = 0.1$) algorithms are shown.}
\label{nocalib:pileup:d2}
\end{figure}
\FloatBarrier
\subsection{Topological sensitivity}\label{sec:nocalib:topology}
ATLAS calibrates \largeR jets using a procedure which involves simulation-based and \emph{in situ} methods~\cite{JETM-2018-02}. For the simulation-based calibration, the average jet energy and mass scale in reconstructed jets are calibrated to the average scale of jets at particle level, using a sample of jets originating from light quarks and gluons (Section~\ref{sec:mcjesjms}). These light-quark- and gluon-derived calibrations are also currently applied to all jets, including to signal jets (e.g.\ $W$/$Z$/$H$/$t$ jets). Dependence of the jet energy and mass scale on the progenitor of the jet is undesirable: if the jet mass scale for signal and background jets with similar kinematics is different, then the signal jets will receive an incorrect calibration factor.
In order to examine the topology dependence of the jet mass scale for different jet definitions, the ratio of the mean value of the uncalibrated jet mass response, $R_{m}=m^{\text{reco}}/m^{\text{true}},$ for signal $W$ jets to that of background jets is constructed within a bin of \largeR jet \pt, $\eta$ and mass. Deviations from unity will result in non-closure in the mass response for signal jets following calibration (Section~\ref{sec:mcjesjms}). This effect is relevant at low \pt, where $W$ jets may be contained within an $R=1.0$ jet, but top quarks are not; therefore, only $W$ jets and background jets are considered in this context. The baseline topocluster-based trimmed \largeR jet definition used by the ATLAS experiment exhibits a difference for signal jets of 4\% by this metric; therefore, deviations from unity of 4\% or less have not been found to be problematic at later stages of the calibration workflow~\cite{JETM-2018-02}, given the current level of calibration precision.
Figure~\ref{fig:nocalib:topology} shows the jet mass response for signal and background jets built from topological clusters and groomed with either the trimming or soft-drop grooming algorithms. The low-\pt bin, where this topological effect is most pronounced, is shown. A larger sensitivity to the signal- or background-like nature of the jet is observed for soft-drop grooming, which retains more soft radiation. The application of pile-up mitigation can exacerbate topological differences in the jet mass scale by altering the distribution of soft jet constituents differently depending on the jet's signal- or background-like topology.
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_06a.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_06b.pdf}} \\
\end{center}
\caption{Distribution of the jet mass response in $W$ jets and $q/g$ jets reconstructed from topoclusters. The mass response is constructed following application of the (a) trimming ($\Rsub=0.2$, $\fcut=0.05$) or (b) soft-drop ($\beta=1.0$, $\zcut = 0.1$) grooming algorithms at both truth and detector level. Jet \pt and $\eta$ selections are made using the ungroomed particle-level \largeR jet matched to each of the groomed detector-level \largeR jets. The uncertainties from the fits are typically less than 0.005. A particle-level mass-window cut with 68\% signal efficiency is applied to both the groomed signal and background jets.}
\label{fig:nocalib:topology}
\end{figure}
\FloatBarrier
\section{Unified Flow Objects (UFOs)}\label{sec:ufos}
After observing the behaviour of the jet input objects currently used by ATLAS in physics analyses (topoclusters, PFOs and TCCs), it is clear even from the reduced set of jet definitions (Section~\ref{sec:nocalib}) that no single jet definition is optimal according to all metrics. While TCCs significantly improve tagging performance at high \pt, their performance is typically worse than the baseline topocluster-based trimmed jet definition at low \pt, and they are more sensitive to pile-up than other definitions. Jets reconstructed from PFOs can improve on the baseline definition for the entire \pt range, but their tagging performance is significantly worse than that of TCC jets at high \pt when given the same grooming algorithm.
The relative performance of these jet definitions can be understood by reflecting on how the different inputs are reconstructed. For low-\pt particles, PFOs are designed to improve the correspondence between particles and reconstructed objects. However, as the particle \pt increases or the environment around the particle becomes dense, the inner detector's momentum resolution deteriorates, and so the PFlow subtraction algorithm is gradually disabled in order to avoid degradation of the jet energy resolution.
The cluster splitting scheme used for TCCs does not utilise a detailed understanding of the correlation between tracks and clusters, and instead is designed to resolve many (charged) particles without double counting their energy. When splitting low-energy topoclusters, this can result in an incorrect redistribution of the cluster's energy, while for high-energy clusters, the ability to resolve many particles increases the relative tagging performance of TCCs over other definitions. TCCs exhibit pile-up instabilities at low \pt, where the mass scale decreases as the number of pile-up interactions increases. This trend is the opposite of what is observed for jets reconstructed from topoclusters and PFOs, and occurs because the TCC algorithm splits clusters into more components when additional tracks from pile-up interactions are present in the reconstruction procedure.
These observations motivate the development of a new jet input object, which combines desirable aspects of PFO and TCC reconstruction in order to achieve optimal overall performance across the full kinematic range. These new inputs are called Unified Flow Objects (UFOs).
The UFO reconstruction algorithm is illustrated in Figure~\ref{fig:ufos:flowchart}. The process begins by applying the standard ATLAS PFlow algorithm (Section~\ref{sec:objects:pfo}). Charged PFOs which are matched to pile-up vertices are removed. The remaining PFOs are classified into different categories: neutral PFOs, charged PFOs which were used to subtract energy from a topocluster, and charged PFOs for which no subtraction was performed due to their high momentum or being located in a dense environment. Jet-input-level pile-up mitigation algorithms may now be applied to the neutral PFOs if desired. A modified version of the TCC splitting algorithm is then applied to the remaining PFOs: only tracks from the hard-scatter vertex are used as input to the splitting algorithm, in order to avoid pile-up instabilities. Any tracks which have been used for PFlow subtraction are not considered, as they have already been well-matched and their expected contributions have been subtracted from the energy in the calorimeter. The TCC algorithm then proceeds as described in Section~\ref{sec:objects:tccs}, using the modified collection of tracks to split neutral and unsubtracted charged PFOs instead of topoclusters. This approach provides the maximum benefit of PFlow subtraction at lower particle \pt, and cluster splitting where the benefit is maximal at high particle \pt.
The performance of UFOs is illustrated in Figures~\ref{fig:nocalib:tag:ufos} and~\ref{fig:nocalib:pileup:wmass:ufos} according to the same metrics as for other jet input objects in Section~\ref{sec:nocalib}. The increased tagging performance of UFOs is demonstrated across both the low and high \pt ranges in Figure~\ref{fig:nocalib:tag:ufos}, where their performance is superior to that of TCC jets at high \pt, and becomes similar to that of PFlow jets as \pt decreases.
UFOs are naturally pile-up-stable due to the inclusion of only charged-particle tracks matched to the primary vertex, similar to the ATLAS PFlow algorithm. Figure~\ref{fig:nocalib:pileup:wmass:ufos} demonstrates the additional stability that an input-level pile-up mitigation algorithm such as CS+SK can offer when it is applied to neutral particles (calorimeter deposits), especially at low \pt.
The topological dependence of UFOs is not enhanced relative to the other jet definitions previously studied, and options exist with sensitivity equal to or below that of the baseline topocluster-based trimmed definition which improve on other aspects of jet performance.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.95\textwidth]{fig_07.pdf}
\end{center}
\caption{An illustration of the unified flow object reconstruction algorithm.}
\label{fig:ufos:flowchart}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_08a.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_08b.pdf}}\\
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_08c.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_08d.pdf}}\\
\end{center}
\caption{Background rejection as a function of signal efficiency for a tagger using (top row) the jet mass and \DTwo for $W$ boson jets, or (bottom row) the jet mass and \tauthrtwo for top quark jets. These results are shown in (left) low-\pt and (right) high-\pt bins, and include a comparison of different jet input object types, including topoclusters, particle-flow objects, track-caloclusters and unified flow objects. The \largeR jets are groomed using the trimming algorithm ($\Rsub=0.2$, $\fcut=0.05$). The background rejection factor of the baseline topocluster-based trimmed collection at a fixed signal tagging efficiency of 50\% is indicated with a $\star$.}
\label{fig:nocalib:tag:ufos}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_09a.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_09b.pdf}} \\
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_09c.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{fig_09d.pdf}}
\end{center}
\caption{(top row) The value of the fitted $W$ boson mass peak, and (bottom row) the signal efficiency of a $W$ boson tagger as a function of the number of primary vertices, \Npv. These results are shown for \largeR jets groomed with the (left) trimming ($\Rsub=0.2$, $\fcut=0.05$) or (right) soft-drop ($\beta=1.0$, $\zcut = 0.1$) algorithms. A comparison of different jet input object types is made, including topoclusters, particle-flow objects, track-caloclusters and unified flow objects.}
\label{fig:nocalib:pileup:wmass:ufos}
\end{figure}
\FloatBarrier
\section{Performance survey}\label{sec:results}
The metrics described in Section~\ref{sec:nocalib} are used to study the performance of all jet definitions listed in Table~\ref{tab:summary:algos}, with the addition of UFOs. This provides a more complete understanding of the interplay between the different aspects of jet reconstruction. The results are summarised in Figures~\ref{fig:heatmap:w}--\ref{fig:heatmap:topology}.
\FloatBarrier
\subsection{Tagging performance}
A comparison of the background rejection of the $W$ tagger at the 50\% signal tagging efficiency working point is shown in Figure~\ref{fig:heatmap:w} for two \pt bins: a low-\pt bin ($300~\GeV < \pt^\text{true, ungroomed} < 500~\GeV$) and a high-\pt bin ($1000~\GeV < \pt^\text{true, ungroomed} < 1500~\GeV$).
Several trends are apparent from the performance of the taggers. As seen in Section~\ref{sec:nocalib}, for a fixed grooming algorithm, PFO reconstruction improves on topocluster reconstruction in both \pt bins, while TCCs improve background rejection even further at high \pt. In both cases, UFO reconstruction is able to match or improve on the performance of other jet inputs in both \pt bins. In general, pile-up mitigation improves $W$ tagging performance for all input types. The effects of pile-up mitigation are more apparent at low \pt, where soft pile-up radiation has a larger impact on the reconstruction of \DTwo. At high \pt, pile-up mitigation significantly improves the performance of TCC jets. This is because pile-up mitigation affects the background mass distribution of TCC jets more strongly than the signal distribution, which increases the background rejection.
The tagging performance varies significantly among the different grooming algorithms and parameter choices. For the trimming algorithm, smaller values of \Rsub or larger values of \fcut result in reduced tagging performance, regardless of the jet input type. These parameter choices correspond to more aggressive grooming, indicating that some of the softer radiation is important for effectively tagging different types of jets. An analogous observation is made for SD jets, where small values of $\beta$ or large values of \zcut generally result in degraded tagging performance.
A similar set of results is seen for the top tagger in Figure~\ref{fig:heatmap:top}. In the low-\pt bin, PFlow jets typically outperform both topocluster and TCC jets, while TCC jets outperform the other input object types at high \pt. Again, UFO jets are able to match or improve the performance compared to the other jet input types in both \pt bins. Pile-up mitigation tends to improve results, particularly at low \pt, as observed for $W$ taggers, although in a few cases the background rejection deteriorates. The baseline trimming algorithm works well for all input object types, but at low \pt, the background rejection may be improved by 50\% by instead using an SD algorithm with lighter grooming. The standard SD algorithm with $\beta=1$ and $z_{\mathrm{cut}}=0.1$ works particularly well, although recursive and bottom-up variants can also provide comparable performance.
In general, the tagging performance of jets constructed out of UFOs matches or exceeds that of jets reconstructed out of any other input type.
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.95\textwidth]{fig_10a.pdf}} \\
\subfigure[]{\includegraphics[width=0.95\textwidth]{fig_10b.pdf}}
\end{center}
\caption{Background rejection at 50\% signal efficiency for a tagger using the jet mass and \DTwo for $W$ boson jets at (a) low \pt, and (b) high \pt. Jet \pt and $\eta$ cuts before tagging are made using the ungroomed particle-level \largeR jet matched to each of the groomed reconstructed \largeR jets. The current baseline topocluster-based trimmed collection is indicated with a $\star$.}
\label{fig:heatmap:w}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.95\textwidth]{fig_11a.pdf}} \\
\subfigure[]{\includegraphics[width=0.95\textwidth]{fig_11b.pdf}}
\end{center}
\caption{Background rejection at 50\% signal efficiency for a tagger using the jet mass and \tauthrtwo for top quark jets at (a) low \pt, and (b) high \pt. Jet \pt and $\eta$ cuts before tagging are made using the ungroomed particle-level \largeR jet matched to each of the groomed reconstructed \largeR jets. The current baseline topocluster-based trimmed collection is indicated with a $\star$.}
\label{fig:heatmap:top}
\end{figure}
\FloatBarrier
\subsection{Pile-up stability}
The slopes of the fitted average $W$ boson jet mass as a function of \Npv are shown in Figure~\ref{results:pileup:wmass} for each of the surveyed jet definitions. The uncertainties in the fitted slope values tend to be negligible compared to the differences between reported values. Among jet input types, PFOs and UFOs are the most pile-up-stable. PFOs, TCCs, and UFOs are all more pile-up-stable than topoclusters, due to the ability to easily remove charged particles from pile-up vertices. As discussed in Section~\ref{sec:nocalib}, the fitted value of the TCC $W$ mass peak position decreases as a function of \Npv for most grooming algorithms, although for lighter grooming algorithms which are more affected by pile-up, the slope is sometimes positive. This effect is exacerbated by the use of CS+SK, and for CS+SK TCCs, all of the studied trends are negative.
There are significant differences in the pile-up stability of different jet grooming algorithms. In general, all studied configurations of trimming are stable. For SD, RSD and BUSD, stability depends on the parameter choice. Larger values of $\beta$, where more soft and wide-angled radiation is retained, have a larger pile-up dependence. As expected, for the same value of \zcut, RSD and BUSD are more stable than the standard SD definition.
For all input types, with the exception of TCCs, jet-input-level pile-up mitigation techniques improve the pile-up stability of the jet definitions. Since too much energy is already subtracted for TCCs because of the inclusion of pile-up tracks in their reconstruction, any additional subtraction further degrades performance. For other jet inputs, the use of pile-up mitigation reduces the pile-up sensitivity so that it is better than or equivalent to the pile-up sensitivity from the baseline trimmed topocluster jet definition. This is true even for lightly groomed algorithms (e.g.\ RSD with $\zcut=0.05$, $\beta=1$, $N=3$), where CS+SK improves stability by a factor of 20. While PUPPI improves the pile-up stability of PFOs, the performance of CS+SK PFOs is better overall, sometimes by more than a factor of two. This improvement is seen for nearly all grooming algorithms. The pile-up stability of UFOs is similar to that of PFOs, which is expected since the modified TCC splitting step does not remove pile-up particles.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.95\textwidth]{fig_12.pdf}
\end{center}
\caption{Pile-up dependence of the value of the fitted $W$ boson mass peak at low \pt. Jet \pt and $\eta$ cuts before tagging are made using the ungroomed particle-level \largeR jet matched to each of the groomed reconstructed \largeR jets. The current baseline topocluster-based trimmed collection is indicated with a $\star$. The $z$-axis colour range is based on the difference of the baseline collection from a slope of 0. This makes differences between definitions more discernible than those between very unstable collections, which may have values beyond the axis range.}
\label{results:pileup:wmass}
\end{figure}
The change in signal efficiency of the \DTwo tagger as a function of \Npv is shown in Figure~\ref{results:pileup:d2}. Uncertainties in the reported values from the fitting procedure tend to be negligible (sub-percent level). As pile-up levels increase, the signal efficiency of the $W$ tagger tends to decrease. As observed when studying the $W$ mass peak position metric, topocluster inputs are the least stable. After pile-up mitigation, the pile-up stability of all inputs, including TCCs, improves by this metric. The trends in stability as a function of grooming algorithm are the same as for the $W$ mass position. While CS+SK is typically still more performant than PUPPI, the degree of improvement is not as large as that observed when studying the pile-up stability of the $W$ jet mass peak-position.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.95\textwidth]{fig_13.pdf}
\end{center}
\caption{Pile-up dependence of a \DTwo cut on the $W$ boson jet selection efficiency at low \pt. Jet \pt and $\eta$ cuts before tagging are made using the ungroomed particle-level \largeR jet matched to each of the groomed reconstructed \largeR jets. The current baseline topocluster-based trimmed collection is indicated with a $\star$. The $z$-axis colour range is based on the difference of the baseline collection from a slope of 0. This makes differences between definitions more discernible than those between very unstable collections, which may have values beyond the axis range.}
\label{results:pileup:d2}
\end{figure}
\FloatBarrier
\subsection{Topological sensitivity}
In order to examine the topology dependence of the jet energy and mass scale for different jet definitions, the ratio of the mean value of the uncalibrated jet mass response for $W$ jets to that of background jets is constructed. These values can be significantly different, as seen in Section~\ref{sec:nocalib}. Deviations from unity will result in non-closure in the mass response following calibration. This effect is largest at low \pt, where the reconstruction of $W$ jets is relevant. As seen in Figure~\ref{fig:heatmap:topology}, the baseline topocluster-based trimmed \largeR jet definition used by the ATLAS experiment exhibits a deviation of around 4\% in this metric, and so small deviations from unity are not problematic.
The topology dependence is increased by the application of jet-input-level pile-up mitigation algorithms. In general, TCCs show the most sensitivity, which can reach 20\% after pile-up mitigation algorithms are applied. The topological sensitivity is increased for all inputs after the application of CS+SK, regardless of the grooming algorithm applied. This effect is generally lower for UFOs than for other jet inputs, even after pile-up mitigation algorithms are applied; the behaviour of PFlow jets is similar.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.95\textwidth]{fig_14.pdf}
\end{center}
\caption{Ratio of the mean value of mass response in $W$ jets to that in $q/g$ jets at low \pt. Kinematic selections before tagging are made using the ungroomed particle-level \largeR jet matched to each of the groomed reconstructed \largeR jets. The current baseline topocluster-based trimmed collection is indicated with a $\star$.}
\label{fig:heatmap:topology}
\end{figure}
\clearpage
\FloatBarrier
\section{Comparison of calibrated jet definitions}\label{sec:calib}
The tagging performance of a jet definition will have the largest impact on the sensitivity of searches for new physics performed by ATLAS, and so it is the primary metric used to determine which definitions are important for further study. The pile-up stability and topological sensitivity of the jet mass scale are also important, but since the performance of the baseline topocluster-based trimmed jet definition is still adequate, they are primarily used to distinguish between otherwise similar jet definitions. The primary motivation for choosing UFO-based definitions for further study is their $W$ boson and top quark tagging performance.
Based on their optimal tagging performance over the entire kinematic range of interest, in addition to the increased pile-up stability achieved by utilising tracking information in the jet definition, only jets reconstructed from UFOs are considered further. Several grooming algorithms are promising: soft-drop ($\beta=1.0, \zcut=0.1$) jets perform well when tagging high-\pt top quarks, while the RSD ($\beta=1.0, \zcut=0.05, N=\infty$) and BUSD ($\beta=1.0, \zcut=0.05$) extensions provide further improvements for high-\pt $W$ bosons. Trimmed UFO jets ($\fcut=0.05, \Rsub=0.2$) also provide competitive performance in certain regions. These four UFO jet definitions were selected for calibration and further study, as summarised in Table~\ref{tab:calib:jetchoice} in the category `studied definitions.'
\begin{table}[htbp]
\centering
\caption{Summary of the jet reconstruction algorithms, jet-input-level pile-up mitigation algorithms, and grooming algorithms which were determined to merit calibration and further study. Several promising UFO-based definitions are calibrated, as well as other definitions which enable comparisons of the impact of varying different aspects of jet definitions.}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Category} & \textbf{Input Objects} & \textbf{Grooming Algorithm} & \textbf{Configuration} \\
\hline \hline
\textbf{Baseline} & LCW Topoclusters & Trimmed & $\Rsub=0.2$, $\fcut=0.05$ \\
\textbf{definitions}& TCCs & Trimmed & $\Rsub=0.2$, $\fcut=0.05$ \\
\hline\hline
& CS+SK UFOs & Trimmed & $\Rsub=0.2$, $\fcut=0.05$ \\
\textbf{Studied} & CS+SK UFOs & SD & $\zcut=0.1$, $\beta=1.0$ \\
\textbf{definitions} & CS+SK UFOs & RSD & $\zcut=0.05$, $\beta=1.0$, $N=\infty$ \\
& CS+SK UFOs & BUSD & $\zcut=0.05$, $\beta=1.0$ \\
\hline \hline
\textbf{Additional}& UFOs & Trimmed & $\Rsub=0.2$, $\fcut=0.05$ \\
\textbf{definitions}& PFOs & Trimmed & $\Rsub=0.2$, $\fcut=0.05$ \\
& UFOs & SD & $\zcut=0.1$, $\beta=1.0$ \\
\hline
\end{tabular}
\label{tab:calib:jetchoice}
\end{table}
\subsection{Simulation-based jet energy and mass scale calibrations}\label{sec:mcjesjms}
A simulation-based calibration is derived using \PYTHIA dijet events for each of the UFO collections which were selected for further study, as well as for additional \largeR jet definitions which will permit comparisons of each aspect of the jet definition which is studied. These jet definitions are listed in Table~\ref{tab:calib:jetchoice}. This calibration follows the methodology in Ref.~\cite{JETM-2018-02}, and restores the average reconstructed jet \pt and mass scales (JES, JMS) to those of the particle-level references. For each jet definition, a reference set of particle-level jets are reconstructed as described in Section~\ref{sec:objects:truth}, and the same grooming algorithm is applied as that used for the detector-level jet definition.
Detector-level jets are matched to particle-level jets using a procedure which minimises the distance $\DeltaR = \sqrt{(\Delta\phi)^2+(\Delta\eta)^2}$. The \pt and mass responses are defined respectively as $R_{\pt} = \langle \pt^{\text{reco}}/\pt^{\text{true}} \rangle$ and $R_{m} = \langle m^{\text{reco}}/m^{\text{true}} \rangle$, where the `reco' quantities correspond to the value of the jet energy or mass before any calibration has been applied. The truth quantities are defined using particle-level jets, reconstructed following the procedure described in Section~\ref{sec:objects:truth}. The average response is determined using a Gaussian fit to the core of each response distribution.
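As a schematic illustration of this matching and response definition, the following Python sketch (NumPy; the array layout and the maximum matching distance are illustrative assumptions, not the exact ATLAS implementation) shows the idea:
\begin{verbatim}
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance between two jets, with phi wrapped to [-pi, pi]."""
    dphi = np.mod(phi1 - phi2 + np.pi, 2 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)

def match_and_response(reco_jets, truth, max_dr=0.75):
    """Match each detector-level jet to its closest particle-level jet
    and collect the per-jet pT and mass responses."""
    responses = []
    for jet in reco_jets:                      # list of dicts per reco jet
        dr = delta_r(jet["eta"], jet["phi"], truth["eta"], truth["phi"])
        i = np.argmin(dr)                      # closest particle-level jet
        if dr[i] < max_dr:
            responses.append((jet["pt"] / truth["pt"][i],
                              jet["m"]  / truth["m"][i]))
    return np.array(responses)                 # columns: R_pT, R_m
\end{verbatim}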
For the JES calibration, these fits are performed in bins of jet energy and detector pseudorapidity $\eta^{\mathrm{det}}$, defined as the jet pseudorapidity calculated relative to the geometrical centre of the ATLAS detector. This parameterisation yields a more accurate representation of the active calorimeter cells than that obtained when using the pseudorapidity calculated relative to the PV, and results in an improved evaluation of the calorimeter response. The JES correction factor, $c_\text{JES} = 1/R_{\pt}$ is smoothed in energy and $\eta^\text{det}$, and is applied to the four-momentum of the reconstructed jet as a multiplicative scale factor. A correction to the jet $\eta$ (`$\Delta\eta$' below) is also applied to correct for biases with respect to the particle-level reference in certain detector regions~\cite{ATLAS-CONF-2015-037}. The JES correction is similar for each of the four CS+SK UFO jet definitions which are calibrated, regardless of the grooming algorithm which is applied.
After the JES correction has been applied, the jet mass scale calibration is derived using the same procedure in bins of $E_{\text{reco}}$, $\eta^{\text{det}}$, and log($m_{\text{reco}}/E_{\text{reco}}$). The jet mass calibration factor $c_\text{JMS} = 1/R_{m}$ is applied only to the mass of the jet, keeping the jet energy fixed and thus allowing the $\pt$ to vary. This factor is also a smooth function of the \largeR jet kinematics. The reconstructed \largeR jet kinematics are thus given by:
\begin{eqnarray*}
E_{\mathrm{reco}} = c_\text{JES}\,E_0,~~~m_{\mathrm{reco}} = c_\text{JES}\,c_\text{JMS}\,m_0,~~~\eta_{\mathrm{reco}} = \eta_0+\Delta\eta,~~~\pt^{\mathrm{reco}} = c_\text{JES} \frac{\sqrt{E_0^2-c^2_\text{JMS}\,m_0^2}}{\cosh{(\eta_0+\Delta\eta)}},
\end{eqnarray*}
where the quantities $E_0$, $m_0$ and $\eta_0$ refer to the jet properties prior to any calibration, but following the jet grooming procedure. The JMS correction is mostly similar for each of the four CS+SK UFO jet definitions which are studied, but differences in the size of the correction become largest for massive jets at high \pt. Figure~\ref{fig:calib:JMS:W} presents the average jet mass response $R_{m}$ for jets with a particle-level jet mass equal to that of the $W$ boson, for the four CS+SK UFO jet definitions which are calibrated. The response for \largeR jets with this mass is obtained by directly taking a profile through the smoothed response maps. High-\pt trimmed jets require a smaller calibration factor than jets which are groomed using the SD, RSD or BUSD algorithms. This indicates that there are differences in the high-\pt behaviour of grooming algorithms: trimming removes more pile-up from jets at high \pt, bringing the average JMS of these jets closer to particle level before calibration.
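For illustration, applying these relations to an uncalibrated jet four-momentum can be sketched as follows (Python; in practice the correction factors are interpolated from the smoothed calibration maps rather than passed as constants):
\begin{verbatim}
import math

def calibrate_jet(E0, m0, eta0, c_jes, c_jms, delta_eta):
    """Apply the JES and JMS scale factors following the relations above.
    Assumes E0**2 >= (c_jms * m0)**2, i.e. a physical calibrated jet."""
    E   = c_jes * E0
    m   = c_jes * c_jms * m0
    eta = eta0 + delta_eta
    pt  = c_jes * math.sqrt(E0**2 - (c_jms * m0)**2) / math.cosh(eta)
    return E, m, eta, pt
\end{verbatim}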
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_15a.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_15b.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_15c.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_15d.pdf}}
\end{center}
\caption{The jet mass response for UFO CS+SK \largeR jets which have been groomed with (a) trimming, (b) soft-drop, (c) recursive soft-drop and (d) bottom-up soft-drop. The jet mass response is presented as a function of jet pseudorapidity for several values of the jet transverse momentum from 200~\gev{} to 2~\tev{}, for jets with a particle-level mass equal to the $W$ boson mass. The mass responses for \largeR jets with this mass are obtained by directly taking a profile through the smoothed response maps.}
\label{fig:calib:JMS:W}
\end{figure}
All figures where JES+JMS calibrations have been applied to the \largeR jet four-vector are labelled `JES+JMS'.
\subsection{Comparison of calibrated jet definition performance}
\subsubsection{Jet mass and \pt resolution}
The expected \largeR jet mass resolution, defined to be the 68\% interquantile range divided by twice the median of the distribution, is shown in Figure~\ref{fig:calib:w:mResponse} for samples of signal jets. For these studies (as for all studies in this document), the baseline trimmed topocluster mass is used directly, rather than the combined mass~\cite{JETM-2018-02} (which incorporates additional measurements from the inner tracking detector), allowing a direct comparison of the unmodified performance of the different jet definitions. In Figures~\ref{fig:calib:w:mResponse:ufosGoodWs} and~\ref{fig:calib:w:mResponse:ufosGoodTops}, the resolution for all UFO jet definitions is shown to be better than for the baseline trimmed topocluster definition, particularly at high \pt. The expected mass resolution of UFO jets is stable across the entire \pt spectrum. In the low-\pt region the mass resolution of UFO jets is typically similar to that of topocluster jets, while in the high-\pt region, it more closely follows the behaviour of TCC jets. For hadronically decaying high-\pt top quarks, UFOs improve the jet mass resolution relative to topocluster-based jets by 26\%, and by 40\% for high-\pt hadronically decaying $W$ bosons.
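For reference, this interquantile-range definition of the resolution can be computed as in the following sketch (Python/NumPy; \texttt{response} is an array of per-jet mass response values):
\begin{verbatim}
import numpy as np

def iqr_resolution(response):
    """68% interquantile range of the jet mass response distribution,
    divided by twice its median."""
    q16, q50, q84 = np.percentile(response, [16, 50, 84])
    return (q84 - q16) / (2.0 * q50)
\end{verbatim}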
In order to help factorise the performance gains from various sources, comparisons of the jet mass resolution are also provided for several other calibrated jet definitions. Figures~\ref{fig:calib:w:mResponse:trimmedWs} and~\ref{fig:calib:w:mResponse:trimmedTops} show a comparison of the four unmodified input object types using the trimming algorithm. In general, at high-\pt the mass resolution of top quarks is better than that of $W$ bosons due to the fact that $W$ bosons are lighter, and their decay products are typically more collimated, making the calorimeter granularity relevant at lower values of \pt.
UFO jets outperform topocluster and TCC jets for both $W$ boson and top quark jets. PFlow jets are also found to be more performant than topocluster and TCC jets for top quark jets, although their performance deteriorates for highly boosted $W$ bosons. The trimming and soft-drop algorithms are compared for UFO jets with and without CS+SK pile-up mitigation in Figures~\ref{fig:calib:w:mResponse:CSSKWs} and~\ref{fig:calib:w:mResponse:CSSKTops}. The application of CS+SK does not significantly alter the mass resolution of trimmed UFO jets; however, it is found to improve the mass resolution for soft-drop jets at low \pt by nearly 40\%.
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_16a.pdf}\label{fig:calib:w:mResponse:ufosGoodWs}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_16b.pdf}\label{fig:calib:w:mResponse:ufosGoodTops}} \\
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_16c.pdf}\label{fig:calib:w:mResponse:trimmedWs}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_16d.pdf}\label{fig:calib:w:mResponse:trimmedTops}} \\
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_16e.pdf}\label{fig:calib:w:mResponse:CSSKWs}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_16f.pdf}\label{fig:calib:w:mResponse:CSSKTops}} \\
\end{center}
\caption{The jet mass resolution for (a,c,e) $W$ boson jets, and (b,d,f) top quark jets as a function of \pt. In (a,b) the relative performance of the studied UFO definitions is compared with the current ATLAS baseline topocluster and TCC jets, while in (c,d) only jet input object types are compared, and in (e,f) the impact of pile-up mitigation is highlighted.}
\label{fig:calib:w:mResponse}
\end{figure}
The \largeR jet \pt resolution for background jets is shown in Figure~\ref{fig:calib:qcd:ptResolution}, determined as the one-standard-deviation width of Gaussian fits to the $R_{\pt}$ distributions divided by their fitted mean. The \pt resolution of trimmed topocluster jets is superior to that of either TCC trimmed jets or any of the UFO jet definitions studied. UFO jets do not use the LC correction because PFOs are reconstructed using topoclusters at the EM scale, which results in a degraded correlation between the particle-level and detector-level \largeR jet \pt. While TCC jets take topoclusters calibrated to the LC scale as input, the energy resolution of TCC trimmed jets is worse than for topocluster trimmed jets, while the UFO trimmed jet resolution is almost identical to the resolution of PFlow trimmed jets. This indicates that the energy resolution degradation of TCC is due to the inclusion of pile-up tracks in the energy sharing, since these are not included in the UFO implementation.
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.48\textwidth]{fig_17a.pdf}\label{fig:calib:w:ptResponse:shortlist}}
\subfigure[]{\includegraphics[width=0.48\textwidth]{fig_17b.pdf}\label{fig:calib:w:ptResponse:trimmed}}\\
\subfigure[]{\includegraphics[width=0.48\textwidth]{fig_17c.pdf}\label{fig:calib:w:ptResponse:cssk}}
\end{center}
\caption{The jet \pt resolution in dijet events. In (a) the relative performance of the studied UFO definitions is compared with the current ATLAS baseline topocluster and TCC jets, while in (b) only jet input object types are compared, and in (c) the impact of pile-up mitigation is highlighted.}
\label{fig:calib:qcd:ptResolution}
\end{figure}
\subsubsection{Jet mass+JSS tagging performance}
In this section, a comparison of the tagging performance of the calibrated jet definitions is reported. Instead of considering a single efficiency working point (Section~\ref{sec:nocalib:tagging}), the tagging performance is studied using ROC curves. Figures~\ref{fig:calib:w:2var} and~\ref{fig:calib:top:2var} show the tagger background rejection as a function of the tagger signal efficiency, using the same jet mass $+$ jet substructure taggers discussed in Section~\ref{sec:nocalib:tagging}: a fixed mass-window cut with 68\% signal efficiency is applied, and then a one-sided \DTwo or \tauthrtwo cut is made to obtain the desired signal efficiency.
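A minimal sketch of how such a two-variable ROC curve can be constructed (Python/NumPy; the mass window boundaries and the range of the \DTwo scan are illustrative assumptions) is:
\begin{verbatim}
import numpy as np

def roc_points(sig_m, sig_d2, bkg_m, bkg_d2, m_lo=65.0, m_hi=95.0):
    """Apply a fixed jet-mass window, then scan a one-sided D2 cut and
    return (signal efficiency, background rejection) pairs."""
    sig_in = (sig_m > m_lo) & (sig_m < m_hi)
    bkg_in = (bkg_m > m_lo) & (bkg_m < m_hi)
    points = []
    for cut in np.linspace(0.5, 5.0, 100):
        eff_s = np.mean(sig_in & (sig_d2 < cut))
        eff_b = np.mean(bkg_in & (bkg_d2 < cut))
        if eff_b > 0:
            points.append((eff_s, 1.0 / eff_b))   # rejection = 1 / eff_b
    return points
\end{verbatim}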
When tagging high-\pt, hadronically decaying $W$ bosons (Figure~\ref{fig:calib:w:2var}), the considered UFO definitions bring significant improvement over the LCTopo and TCC definitions. At high \pt, UFOs outperform the baseline topocluster-based jet definition in terms of their background rejection by about 120\% at a fixed signal-tagging efficiency of 50\%. For high-\pt, hadronically decaying top quarks (Figure~\ref{fig:calib:top:2var}), UFO definitions outperform all other choices, improving the background rejection by 135\% when compared with the baseline topocluster-based jet definition at a fixed signal-tagging efficiency of 50\%. Use of the recursive or bottom-up soft-drop grooming algorithm is noted to further improve performance over the trimmed UFO definition by an additional 10\% for a signal efficiency of 50\%, and the application of CS+SK pile-up mitigation is also found to increase performance by roughly 10\% when it is applied in conjunction with the soft-drop grooming algorithm.
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_18a.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_18b.pdf}}\\
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_18c.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_18d.pdf}} \\
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_18e.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_18f.pdf}} \\
\end{center}
\caption{Background rejection as a function of signal efficiency for a tagger using the jet mass and \DTwo for $W$ boson jets at (left) low \pt, and (right) high \pt. In (a,b) the relative performance of the studied UFO definitions is compared with the current ATLAS baseline topocluster and TCC jets, while in (c,d) only jet input object types are compared, and in (e,f) the impact of pile-up mitigation is highlighted. The background rejection factor of the baseline topocluster-based trimmed collection at a fixed signal tagging efficiency of 50\% is indicated with a $\star$.}
\label{fig:calib:w:2var}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_19a.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_19b.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_19c.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_19d.pdf}}\\
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_19e.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_19f.pdf}}\\
\end{center}
\caption{Background rejection as a function of signal efficiency for a tagger using the jet mass and \tauthrtwo for top quark jets at (left) low \pt, and (right) high \pt. In (a,b) the relative performance of the studied UFO definitions is compared with the current ATLAS baseline topocluster and TCC jets, while in (c,d) only jet input object types are compared, and in (e,f) the impact of pile-up mitigation is highlighted.}
\label{fig:calib:top:2var}
\end{figure}
\subsection{Data-to-simulation comparisons}
Robust modelling of jet substructure is crucial to reduce uncertainties related to Monte Carlo modelling of parton showers in physics analyses that rely on jet-substructure-based techniques. To verify the accuracy of the simulation, predictions were generated at the detector level for several jet substructure observables in high-\pt dijet events using \PYTHIA and reconstructed using the full ATLAS detector simulation~\cite{SOFT-2010-01} based on $\GEANT4$~\cite{Agostinelli:2002hh}. The results are compared with the distributions observed in data collected during 2017. Events are selected using the lowest unprescaled single \largeR jet trigger. This trigger is fully efficient for ungroomed \largeR jets with $\pt > 600$~\GeV. Data are required to pass a series of quality requirements and cleaning cuts. In addition, overlap removal and pile-up reweighting are applied. Events are required to have at least one jet with a groomed jet \pt above 600~\GeV, and all jets are required to have $\pt > 600$~\GeV\ and $|\eta| < 1.2$. When studying the behaviour of $\tau_{32}$ and $D_{2}$, the jet mass is required to be greater than 40~\GeV. Data and simulated events are required to pass the same event selection.
The observed data are compared with simulated dijet events in Figure~\ref{fig:dataStudies}. The jet mass, number of jet constituents, \DTwo, and \tauthrtwo are studied. Only statistical uncertainties are displayed, and the statistical uncertainty of the simulation is negligible compared to that of the data. In general, the level of agreement between data and simulation for the UFO jets is similar to that of topocluster trimmed jets, indicating that this level of agreement is tolerable for general use on ATLAS. The exception to this is the number of constituents, which is known to be modelled poorly~\cite{PERF-2014-07}. The modelling is improved for UFO jets relative to topocluster-based trimmed jets, particularly at large constituent multiplicities.
\begin{figure}[htbp]
\centering
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_20a.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_20b.pdf}}\\
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_20c.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_20d.pdf}}
\caption{Data-to-simulation comparisons of (a) the groomed jet mass, (b) the number of constituents, (c) the groomed jet $D_{2}$ and (d) the groomed jet $\tau_{32}$. Only statistical uncertainties are displayed, and the statistical uncertainty of the simulation is negligible compared to that of the data. The ratio of simulation to data is provided in the lower panel of each figure.
}\label{fig:dataStudies}
\end{figure}
The background rejection for the mass+JSS taggers described in Section~\ref{sec:results} is shown in Figure~\ref{fig:dataMC:backgroundRejection} as a function of the \largeR jet \pt, where taggers are created for each \pt bin, using the 50\% signal efficiency working point. For the $W$ tagger, agreement between data and simulation is similar for all jet definitions, while for the top taggers, agreement is slightly worse for UFO jets than for the topocluster trimmed definition.
\begin{figure}[htbp]
\centering
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_21a.pdf}}
\subfigure[]{\includegraphics[width=0.47\textwidth]{fig_21b.pdf}}
\caption{Data-to-simulation comparisons of the background rejection for groomed jets for (a) the mass+\DTwo $W$ tagger, and (b) the mass+\tauthrtwo top tagger.}
\label{fig:dataMC:backgroundRejection}
\end{figure}
\FloatBarrier
\section{Concluding remarks}\label{sec:conclusion}
The development of jet substructure techniques has enabled new searches and measurements, boosting the sensitivity of the Large Hadron Collider experiments to the physics of and beyond the Standard Model. This paper has presented a set of performance comparisons in order to determine the most promising \largeR jet definitions for use in future analyses, with a focus on comparing different jet input objects, pile-up mitigation algorithms and jet grooming algorithms.
A new type of jet input, called a Unified Flow Object, has been proposed which incorporates tracking information into jet substructure reconstruction by combining particle-flow reconstruction for low-\pt particles and cluster splitting for particles at high \pt and in dense environments. These UFO inputs can increase the background rejection of jet taggers across a wide kinematic range by up to 120\% for a simple $W$ tagger at 50\% signal efficiency, and up to 135\% for a simple top tagger at 50\% signal efficiency when compared with the current baseline trimmed topocluster \largeR jet definition. While the \pt resolution of these jets is degraded relative to the baseline LCW topocluster-based ATLAS \largeR jet definition due to the different topocluster energy scales used as input objects, UFO jets provide an improved jet mass resolution, with up to a 45\% improvement at high \pt for signal jets when compared with existing ATLAS \largeR jet definitions.
The application of CS+SK pile-up mitigation has been shown to stabilise and augment performance as a function of the number of pile-up interactions, which will be crucial in the face of the difficult experimental conditions to come during future LHC data-taking periods. Pile-up mitigation increases the number of experimentally viable grooming configurations to include options which do not groom soft radiation aggressively enough to be considered with unmodified jet inputs.
Several promising grooming algorithms were compared using \largeR CS+SK UFO jets. Definitions incorporating soft-drop grooming and its extensions, recursive soft-drop and bottom-up soft-drop, all outperform the baseline ATLAS trimming configuration in terms of high-\pt $W$ and top quark tagging using simple taggers. These collections are viable for general-purpose use in the challenging experimental conditions of the LHC only due to the improvements in jet inputs and pile-up mitigation algorithms. The soft-drop definition using $\zcut=0.1$ and angular exponent $\beta=1.0$ outperforms all other candidates when identifying high-\pt top quarks, and is competitive to within 5--10\% of the considered RSD and BUSD options when tagging boosted $W$ bosons. These jets also exhibit good pile-up stability and a tolerable sensitivity to topological effects, according to the metrics studied. This definition provides superior jet mass resolution for low-\pt $W$ jets when compared with RSD and BUSD options. Due to its wide range of applicability, it is concluded that the CS+SK UFO soft-drop ($\beta=1.0$, $\zcut=0.1$) \largeR jet definition provides the best performance for use as a general-purpose jet definition in ATLAS physics analyses.
\clearpage
\FloatBarrier
\section*{Acknowledgements}
We thank CERN for the very successful operation of the LHC, as well as the
support staff from our institutions without whom ATLAS could not be
operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; ANID, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS and CEA-DRF/IRFU, France; SRNSFG, Georgia; BMBF, HGF and MPG, Germany; GSRT, Greece; RGC and Hong Kong SAR, China; ISF and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; NWO, Netherlands; RCN, Norway; MNiSW and NCN, Poland; FCT, Portugal; MNE/IFA, Romania; JINR; MES of Russia and NRC KI, Russian Federation; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa; MICINN, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TAEK, Turkey; STFC, United Kingdom; DOE and NSF, United States of America. In addition, individual groups and members have received support from BCKDF, CANARIE, Compute Canada, CRC and IVADO, Canada; Beijing Municipal Science \& Technology Commission, China; COST, ERC, ERDF, Horizon 2020 and Marie Sk{\l}odowska-Curie Actions, European Union; Investissements d'Avenir Labex, Investissements d'Avenir Idex and ANR, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF, Greece; BSF-NSF and GIF, Israel; La Caixa Banking Foundation, CERCA Programme Generalitat de Catalunya and PROMETEO and GenT Programmes Generalitat Valenciana, Spain; G\"{o}ran Gustafssons Stiftelse, Sweden; The Royal Society and Leverhulme Trust, United Kingdom.
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref.~\cite{ATL-SOFT-PUB-2020-001}.
\printbibliography
\clearpage \input{atlas_authlist}
\end{document}
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction of road users in the near future is a crucial task in intelligent transportation systems (ITS) \cite{goldhammer2013early,hashimoto2015probabilistic,koehler2013stationary}, autonomous driving \cite{franke1998autonomous,ferguson2008detection,luo2018porca}, mobile robot applications \cite{ziebart2009planning,du2011robot,mohanan2018survey}, etc. This task enables an intelligent system to foresee the behavior of road users and make reasonable and safe decisions/strategies for its next operation, especially in urban mixed-traffic zones (a.k.a. shared spaces \cite{reid2009dft}).
Trajectory prediction is generally defined as predicting the plausible and socially acceptable positions of target agents at each time step within a predefined future time interval by observing their history trajectories \cite{alahi2016social,lee2017desire,gupta2018social,sadeghian2018sophie,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,al2018move,zhang2019sr,cheng2020mcenet,johora2020agent,giuliari2020transformer}. The target agent is defined as the dynamic object for which the actual prediction is made, mainly pedestrians, vehicles, cyclists and other road users \cite{rudenko2019human}.
A typical prediction process of mixed traffic is exemplified in Figure~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.2in 0.6in 0.6in, width=1\textwidth]{fig/first_fig.pdf}
\caption{Predicting the plausible and social-acceptable positions of agents (e.\,g.,~target agent in black) at each time step within a predefined future time interval by observing their history trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
However, how to effectively and accurately predict the trajectories of mixed agents remains an open problem in many research communities. The challenges mainly stem from three aspects: 1) the uncertain moving intent of each agent, 2) the complex interactions between agents, and 3) the existence of more than one socially plausible path that an agent could follow in the future. In a crowded scene, the moving direction and speed of different agents change dynamically because of their freewheeling intent and the interactions with surrounding agents.
There exists a large body of literature that focuses on addressing part or all of the aforementioned challenges in order to make accurate predictions of future trajectories.
The traditional methods model the interactions based on hand-crafted features \cite{helbing1995social,yi2015understanding,yi2016pedestrian,zhou2012understanding,antonini2006discrete,tay2008modelling,yamaguchi2011you}. However, their performance is crucially affected by the quality of the manually designed features, and they lack generalizability \cite{cheng2020trajectory}.
Recently, boosted by the development of deep learning technologies \cite{lecun2015deep}, data-driven methods
keep reporting new state-of-the-art performances on benchmarks \cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,cheng2020mcenet}.
For instance, Social LSTM \cite{alahi2016social} models the interactions between the target pedestrian and its close neighbors and predicts the future positions in sequence. Many later works follow this pioneering work and treat the trajectory prediction problem as a sequence prediction problem based on Recurrent Neural Networks (RNNs) \cite{wu2017modeling,xue2018ss,bartoli2018context,fernando2018soft}.
However, these works design a discriminative model and produce a deterministic outcome for each agent. Therefore, the models tend to predict the ``average'' trajectories, because the commonly used objective function minimizes the Euclidean distance between the ground truth and the predicted outputs.
To predict multiple socially acceptable trajectories for each target agent, different generative models have been proposed. Social GAN \cite{gupta2018social} designs a framework based on Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} that trains an RNN encoder-decoder as the generator and an RNN-based encoder as the discriminator. DESIRE \cite{lee2017desire} proposes to use a conditional variational auto-encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning} to learn the latent space of future trajectories and predicts multiple possible trajectories by repeatedly sampling from the learned latent space.
Previous methods have achieved great success in this domain. However, most of these methods are designed for a specific type of agent: the pedestrian. In reality, vehicles, pedestrians and cyclists are the three main types of agents, and their behaviors affect each other. To make precise trajectory predictions, their interactions should be considered jointly. Second, the interactions between the target agent and the others are treated equally. However, different agents affect how the target agent moves in the near future to different degrees. For instance, nearer agents should affect the target agent more strongly than distant ones, and a target vehicle is affected more by pedestrians who tend to cross the road than by vehicles that are behind it. Last but not least, the robustness of the models is not fully tested in real-world outdoor mixed-traffic environments (e.\,g.,~roundabouts, intersections) with various unseen traffic situations. For example, can a model trained on certain spaces predict accurate trajectories in other, unseen spaces?
To address the aforementioned limitations, we propose a model named \emph{Attentive Maps Encoder Network} (AMENet) that leverages the ability of generative models to generate diverse patterns of future trajectories and models the interactions between the target agent and the others as attentive dynamic maps. An overview of our proposed framework is depicted in Fig.~\ref{fig:framework}. (1) Two encoders with identical structure are designed for learning representations of the observed trajectories (X-Encoder) and the future trajectories (Y-Encoder), respectively. Taking the X-Encoder as an example, the encoder first extracts the motion information of the target agent (coordinate offsets in sequential time steps) and the interaction information with the other agents, respectively. Particularly, to explore the dynamic interactions, the motion information of each agent is characterized by its orientation, speed and position at each time step. Then a self-attention mechanism is utilized over all agents to extract the dynamic interaction maps. This is where the name \emph{Attentive Maps Encoder} comes from. The motion and interaction information along the observed time interval are collected by two independent Long Short-Term Memories (LSTMs) and then fused together. (2) The output of the Y-Encoder is supplied to a variational auto-encoder to learn the latent space of the future trajectory distribution, which is assumed to be a Gaussian distribution. (3) The output of the variational auto-encoder module (achieved by re-parameterization of the encoded features during the training phase and by resampling from the learned latent space during the inference phase) is fed to the following decoder, together with the output of the X-Encoder as the condition, to forecast the future trajectory, in the same way as a conditional variational auto-encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning}.
The main contributions are summarised as follows:
\begin{itemize}
\item[1] We propose a generative framework based on a variational auto-encoder, which is trained to encode the motion and interaction information for predicting multiple plausible future trajectories of target agents.
\item[2] We design an innovative module, the \emph{attentive maps encoder}, which learns spatio-temporal interconnections among agents based on dynamic maps using a self-attention mechanism. Global interactions are considered rather than only local ones.
\item[3] Our model is able to predict trajectories for heterogeneous road users, i.\,e.,~pedestrians, vehicles and cyclists, rather than focusing only on pedestrians, in various unseen real-world environments.
\end{itemize}
The efficacy of the proposed method has been validated on the most challenging benchmark \emph{Trajnet} \cite{sadeghiankosaraju2018trajnet}, which contains numerous datasets collected in various environments. Our method reports a new state-of-the-art performance and wins the first place on the leaderboard for trajectory prediction.
Its performance for predicting longer-term trajectories (up to 32 time-step positions, i.\,e.,~12.8 seconds) is also investigated on the benchmark inD \cite{inDdataset}, which contains mixed traffic at different intersections.
Each component of the proposed model is validated via a series of ablative studies, and the code is publicly released.
\section{Related Work}
Our work focuses on predicting the trajectories of mixed road agents.
In this section we discuss the recent related works mainly in the following aspects: modeling this task as sequence prediction, modeling the interactions between agents for precise path prediction, modeling with attention mechanisms, and utilizing generative models to predict multiple plausible trajectories. Our work concentrates on modeling the dynamic interactions between agents and training a generative model to predict multiple plausible trajectories for each target agent.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling trajectory prediction as a sequence prediction task is the most popular approach. The 2D/3D position of a target agent is predicted step by step along the time axis.
The widely applied models include, but are not limited to, linear regression and Kalman filters \cite{harvey1990forecasting}, Gaussian processes \cite{tay2008modelling} and Markov decision processes \cite{kitani2012activity}.
However, these traditional methods rely heavily on the quality of manually designed features and are unable to tackle large-scale data.
Recently, data-driven deep learning technologies, especially RNN models and their variants, e.\,g.,~Long Short-Term Memory (LSTM) \cite{hochreiter1997long} and Gated Recurrent Units (GRU) \cite{cho2014learning}, have demonstrated a powerful ability to extract representations from massive data automatically and are used to learn the complex patterns of trajectories.
In recent years, RNN-based models have kept pushing the accuracy of pedestrian trajectory prediction \cite{alahi2016social,xu2018encoding,bhattacharyya2018long,gupta2018social,sadeghian2018sophie,zhang2019sr,liang2019peeking}, as well as that of other types of road users \cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
Other deep learning technologies, such as Convolutional Neural Networks (CNN) and graph-based neural networks, are also used for trajectory prediction and report good performance \cite{bai2018empirical,chandra2019forecasting,mohamed2020social,gao2020vectornet}.
In this work, we also utilize LSTM to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of an agent is decided not only by its own intent but also, crucially, by the interactions between it and the other agents. Therefore, effectively modeling the social interactions among agents is important for accurate trajectory prediction.
One of the most influential approaches for modeling interaction is Social Force Model \cite{helbing1995social}, which models the repulsive force for collision avoidance and the attractive force for social connections. Game Theory is utilized to simulate the negotiation between different road users \cite{schonauer2012modeling}.
Such rule-based interaction modelings have been incorporated into deep learning models. Social LSTM proposes an occupancy grid to locate the positions of close neighboring agents and uses a social pooling layer to encode the interaction information for trajectory prediction \cite{alahi2016social}. Many following works design their specific ``occupancy'' grid for interaction modeling \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interactions between individual agents and groups of agents and report better performance.
Meanwhile, different pooling mechanisms are proposed for interaction modeling. For example, Social GAN \cite{gupta2018social} embeds relative positions between the target and all the other agents with each agent's motion hidden state and uses an element-wise pooling to extract the interaction between all the pairs of agents, not only the close neighboring agents;
Similarly, all the agents are considered in SR-LSTM \cite{zhang2019sr}. It proposes a state refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework. A motion gate and agent-wise attention are used to select the most important information from neighboring agents.
Most of the aforementioned models extract interaction information based on the relative positions of the neighboring agents in relation to the target agent.
The dynamics of the interactions are not fully captured in both the spatial and temporal domains.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Recently, different attention mechanisms \cite{bahdanau2014neural,xu2015show,luong2015effective,vaswani2017attention,wang2018non} are incorporated in neural networks for learning complex spatio-temporal interconnections.
Particularly, their effectiveness has been proven in learning powerful representations from sequence information in different domains, and they have been widely utilized \cite{vaswani2017attention,anderson2018bottom,giuliari2020transformer,he2020image}.
Some of the recent state-of-the-art methods also have adapted attention mechanisms for sequence modeling and interaction modeling to predict trajectories.
For example, a soft attention mechanism \cite{xu2015show} is incorporated in LSTM to learn the spatio-temporal patterns from the position coordinates \cite{varshneya2017human}. Similarly, SoPhie \cite{sadeghian2018sophie} applies two separate soft attention modules, one called physical attention for learning the salient features between agent and scene and the other called social attention for modeling agent to agent interactions. In the MAP model \cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work Ind-TF \cite{giuliari2020transformer} replaces RNNs with the Transformer \cite{vaswani2017attention} for modeling trajectory sequences.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism \cite{vaswani2017attention}.
\subsection{Generative Models}
\label{sec:rel-generative}
To date, VAE \cite{kingma2013auto} and GAN \cite{goodfellow2014generative}, together with their variants, are the most popular generative models in the era of deep learning.
They are both able to generate diverse outputs by sampling from noise. The essential difference is that GAN trains a generator to generate a sample from noise and a discriminator to decide whether the generated sample is real enough; the generator and discriminator enhance each other mutually during training.
In contrast, VAE is trained by maximizing the lower bound of training data likelihood for learning a latent space that approximates the distribution of the training data.
Generative models have shown promising performance in different tasks, e.\,g.,~super resolution, image to image translation, image generation, as well as trajectories prediction \cite{lee2017desire,gupta2018social,cheng2020mcenet}.
Predicting one single trajectory may not be sufficient due to the uncertainties of road users' behavior.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performance of the two modules is mutually enhanced and the generator is able to generate trajectories that are as precise as the real ones. Similarly, Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
Lee~et al.~\cite{lee2017desire} propose a CVAE model to predict multiple plausible trajectories.
Cheng~et al.~\cite{cheng2020mcenet} propose a CVAE-like model named MCENet to predict multiple plausible trajectories conditioned on the scene context and the history information of trajectories.
In this work, we incorporate a VAE module to learn a latent space of possible future paths for predicting multiple plausible future trajectories conditioned on the observed past trajectories.
Our work essentially differs from the above generative models. It is worth noting that our method does not explore information from images, i.\,e.,~visual information is not used.
Future trajectories are predicted only based on the map data (i.\,e.,~position coordinates).
Therefore, it is computationally more efficient than the methods that require additional information from images, such as DESIRE \cite{lee2017desire}. In addition, our model is trained on some available spaces but is validated on other unseen spaces. Relying on visual information may limit a model's robustness due to over-trained visual features \cite{cheng2020mcenet}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0, width=1\textwidth]{fig/model_framework3.pdf}
\caption{An overview of the proposed framework. It consists of four modules: the X-Encoder and Y-Encoder are used for encoding the observed and the future trajectories, respectively, and have the same structure. The Sample Generator produces diverse samples for future generation. The Decoder module is used to decode the features from the sample produced in the last step and predict the future trajectory sequentially.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet in detail (Fig.~\ref{fig:framework}) in the following structure: a brief review of \emph{CVAE} (Sec.~\ref{subsec:cvae}), \emph{Problem Definition} (Sec.~\ref{subsec:definition}), \emph{Motion Input} (Sec.~\ref{subsec:input}), \emph{Dynamic Maps} (Sec.~\ref{subsec:dynamic}), \emph{Diverse Sampling} (Sec.~\ref{subsec:sample}) and \emph{Trajectory Ranking} (Sec.~\ref{subsec:ranking}).
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
In tasks like trajectory prediction, we are interested in modeling the conditional distribution $P(Y_n|X)$, where $X$ is the history trajectory information and $Y_n$ is one of the possible future trajectories.
In order to generate controllable and diverse samples of future trajectories based on history trajectories, a deep generative model, the conditional variational auto-encoder (CVAE), is adopted within our framework.
CVAE is an extension of the generative model VAE \cite{kingma2013auto} obtained by introducing a condition to control the output \cite{kingma2014semi}.
Concretely, it is able to learn the stochastic latent variable $z$ that characterizes the distribution $P(Y_i|X_i)$ of $Y_i$ conditioned on the input $X_i$, where $i$ is the index of the sample.
The objective function of training CVAE is formally defined as:
\begin{equation}
\label{eq:CVAE}
\log{P(Y_i|X_i)} \geq - D_{KL}(Q(z_i|Y_i, X_i)||P(z_i)) + \E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}].
\end{equation}
where $Y$ and $X$ stand for the future and past trajectories in our task, respectively, and $z_i$ for the latent variable. The objective is to maximize the conditional probability $\log{P(Y_i|X_i)}$, which is equivalent to minimizing $\ell (\hat{Y_i}, Y_i)$ and minimizing the Kullback-Leibler divergence $D_{KL}(\cdot)$ in parallel. This can be achieved by reaching an approximate lower bound, since the Kullback-Leibler divergence is always non-negative.
In order to enable back-propagation for stochastic gradient descent in $\E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}]$, a re-parameterization trick \cite{rezende2014stochastic} is applied to $z_i$, where $z_i$ can be re-parameterized as $z_i = \mu_i + \sigma_i \odot \epsilon_i$. Here, $z$ is assumed to have a Gaussian distribution $z_i\sim Q(z_i|Y_i, X_i)=\mathcal{N}(\mu_i, \sigma_i)$. $\epsilon_i$ is sampled from noise that follows a normal Gaussian distribution, and the mean $\mu_i$ and the standard deviation $\sigma_i$ over $z_i$ are produced by two side-by-side \textit{fc} layers, respectively (as shown in Fig.~\ref{fig:framework}). In this way, the differentiation of the sampling process $Q(z_i|Y_i, X_i)$ is transferred to deriving the sampled result $z_i$ w.\,r.\,t.~$\mu_i$ and $\sigma_i$. Then, back-propagation for stochastic gradient descent can be utilized to optimize the networks that produce $\mu_i$ and $\sigma_i$.
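A minimal PyTorch-style sketch of this re-parameterization step is given below; note that producing the log-variance instead of $\sigma_i$ directly is a common numerical variant and an assumption here, as are the layer sizes:
\begin{verbatim}
import torch
import torch.nn as nn

class LatentHead(nn.Module):
    """Produces mu and sigma from the fused encoder features and
    re-parameterizes z = mu + sigma * eps with eps ~ N(0, I)."""
    def __init__(self, in_dim=128, z_dim=32):
        super().__init__()
        self.fc_mu     = nn.Linear(in_dim, z_dim)
        self.fc_logvar = nn.Linear(in_dim, z_dim)  # log-variance variant

    def forward(self, h):
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        sigma = torch.exp(0.5 * logvar)
        eps = torch.randn_like(sigma)          # noise from N(0, I)
        z = mu + sigma * eps                   # differentiable in mu, sigma
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        return z, kl
\end{verbatim}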
\subsection{Problem Definition}
\label{subsec:definition}
The multi-path trajectory prediction problem is defined as follows: for an agent $i$, given as input its observed trajectory $\mathbf{X}_i=\{X_i^1,\cdots,X_i^T\}$, predict its $n$-th plausible future trajectory $\hat{\mathbf{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,n}^{T'}\}$. $T$ and $T'$ denote the sequence lengths of the past and the predicted trajectories, respectively. The trajectory position of $i$ at time step $t$ is characterized by the coordinates $X_i^t=({x_i}^t, {y_i}^t)$ (3D coordinates are also possible, but in this work only 2D coordinates are considered), and likewise $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^{t'}, \hat{y}_{i,n}^{t'})$.
For simplicity, we omit the notation of time steps if it is explicit in the following parts of the paper.
The objective is to predict multiple plausible future trajectories $\hat{\mathbf{Y}}_i = \hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}$ that are as accurate as possible with respect to the ground truth $\mathbf{Y}_i$. This task is formally defined as $\hat{\mathbf{Y}}_{i,n} = f(\mathbf{X}_i, \text{Map}_i), ~n \in \{1,\cdots,N\}$. $N$ denotes the total number of predicted trajectories and $\text{Map}_i$ denotes the dynamic maps centered on the target agent for mapping the interactions with its neighboring agents over the time steps. More details of the dynamic maps are given in Sec.~\ref{subsec:dynamic}.
\subsection{Motion Input}
\label{subsec:input}
Specifically, we use the offsets $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of the trajectory positions between two consecutive time steps as the motion information instead of the coordinates in a Cartesian space, which has been widely applied in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}. Compared with coordinates, the offset is independent of the given space and less prone to overfitting a model to a particular space or travel direction.
The offset can be interpreted as speed over time steps that are defined with a constant duration.
As long as the original position is known, the absolute coordinates at each position can be calculated by cumulatively summing the sequence offsets.
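This conversion between coordinates and offsets is a simple differencing/cumulative-summing pair, e.g. (NumPy sketch; the toy trajectory is illustrative):
\begin{verbatim}
import numpy as np

def to_offsets(traj):
    """traj: (T, 2) array of (x, y) positions -> (T-1, 2) step offsets."""
    return np.diff(traj, axis=0)

def to_positions(origin, offsets):
    """Recover absolute positions by cumulatively summing the offsets."""
    return origin + np.cumsum(offsets, axis=0)

traj = np.array([[0.0, 0.0], [0.5, 0.1], [1.1, 0.3]])
off = to_offsets(traj)
assert np.allclose(to_positions(traj[0], off), traj[1:])
\end{verbatim}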
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=1\textwidth]{fig/encoder.pdf}
\caption{Structure of the X-Encoder. The encoder has two branches: the top one is used to extract the motion information of the target agent (i.\,e.,~movement in the $x$- and $y$-axis directions in a Cartesian space), and the bottom one is used to learn the interaction information among the neighboring road users from the dynamic maps over time. Each dynamic map consists of three layers that represent the orientation, travel speed and relative position of the neighboring road users, centered on the target road user. The motion information and the interaction information are encoded by their own LSTMs sequentially. The last outputs of the two LSTMs are concatenated and forwarded to a \textit{fc} layer to get the final output of the X-Encoder. The Y-Encoder has the same structure as the X-Encoder, but it is used for extracting features from the future trajectories and is only used in the training phase.}
\label{fig:encoder}
\end{figure}
\subsection{Dynamic Maps}
\label{subsec:dynamic}
Different from the recent works that parse the interactions between the target and neighboring agents using an occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, we propose an innovative and straightforward method---attentive dynamic maps---to learn the interaction information among agents.
As demonstrated in Figure~\ref{fig:framework}, a dynamic map consists of three layers that interpret the information of \emph{orientation}, \emph{speed} and \emph{position}, respectively. Each layer is centered on the target agent's position and divided into uniform grid cells. The layers are divided into grids because: (1) compared with representing the information at the pixel level, the grid level is computationally more efficient; (2) the size and moving speed of an agent are not fixed, and an agent occupies a local region of pixels of arbitrary form, so the spatio-temporal information of individual pixels differs even when they belong to the same agent. Therefore, we represent the spatio-temporal information as an average value within a grid cell. We calculate the value of each grid cell on the different layers as follows:
the neighboring agents are assigned to the corresponding grid cells according to their relative position to the target agent, and likewise according to their relative offset (speed) with respect to the target agent at each time step in the $x$- and $y$-axis directions.
Eq.~\eqref{eq:map} denotes the mapping mechanism for target user $i$ considering the orientation $O$, speed $S$ and position $P$ of all the neighboring agents $j \in \mathcal{N}(i)$ that coexist with the target agent $i$ at each time step.
\begin{equation}
\label{eq:map}
\text{Map}_i^t = \sum_{j \in \mathcal{N}(i)}(O, S, P) | (x_j^t-x_i^t, ~y_j^t-y_i^t, ~\Delta{x}_j^t-\Delta{x}_i^t, ~\Delta{y}_j^t-\Delta{y}_i^t).
\end{equation}
The \emph{orientation} layer $O$ represents the heading directions of the neighboring agents. The orientation value is given in \emph{degrees} $[0, 360]$ and is then normalized into $[0, 1]$. The value of each grid cell is the mean of the orientations of all the agents within the cell.
The \emph{speed} layer $S$ represents the neighboring agents' travel speeds. Locally, the speed in each grid cell is the average speed of all the agents within the cell. Globally, across all the cells, the speed values are normalized by the Min-Max normalization scheme into $[0, 1]$.
The \emph{position} layer $P$ stores the positions of all the neighboring agents in the grid cells calculated by Eq.~\eqref{eq:map}. The value of the corresponding cell is the number of individual neighboring road users within the cell, normalized by the total number of neighboring road users at that time step, which can be interpreted as the cell's occupancy density.
Each time step has its own dynamic map, and therefore the spatio-temporal interaction information among agents is interpreted dynamically over time.
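A simplified sketch of how one such map can be rasterized at a single time step is given below (Python/NumPy; the grid size, spatial extent and the exact normalization constants are illustrative choices, not the values used in our experiments):
\begin{verbatim}
import numpy as np

def dynamic_map(target, neighbors, grid=32, extent=16.0):
    """Build the (3, grid, grid) orientation/speed/position layers
    centred on the target agent at one time step.
    target: (x, y, dx, dy); neighbors: (N, 4) array of the same columns."""
    layers = np.zeros((3, grid, grid))
    counts = np.zeros((grid, grid))
    cell = 2.0 * extent / grid
    for x, y, dx, dy in neighbors:
        gx = int((x - target[0] + extent) // cell)
        gy = int((y - target[1] + extent) // cell)
        if 0 <= gx < grid and 0 <= gy < grid:
            heading = (np.degrees(np.arctan2(dy, dx)) % 360.0) / 360.0
            layers[0, gy, gx] += heading           # orientation, averaged below
            layers[1, gy, gx] += np.hypot(dx, dy)  # speed, averaged below
            counts[gy, gx] += 1
    occ = counts > 0
    layers[0][occ] /= counts[occ]                  # mean orientation per cell
    layers[1][occ] /= counts[occ]                  # mean speed per cell
    if layers[1].max() > 0:
        layers[1] /= layers[1].max()               # simplified Min-Max scaling
    layers[2] = counts / max(len(neighbors), 1)    # occupancy density
    return layers
\end{verbatim}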
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.7\textwidth]{fig/dynamic_maps_nexus_0.pdf}
\caption{The map information accumulated over all time steps, visualized for \textit{nexus-0}.}
\label{fig:dynamic_maps}
\end{figure}
To show the dynamic map information more intuitively, we gather all the agents over all the time steps and visualize them in Figure~\ref{fig:dynamic_maps}, exemplified by \textit{nexus-0} (see Sec.~\ref{subsec:benchmark} for more information on the benchmark datasets). The visualization demonstrates certain motion patterns of the agents, including the distribution of orientations, speeds and positions over the grid cells of the maps. For example, all the agents move in a certain direction with similar speed in a particular area of the maps, and some areas are much denser than others.
\subsubsection{Attentive Maps Encoder}
\label{subsubsec:AMENet}
As discussed above, each time step has a dynamic map which summarizes the orientation, speed and position information of all the neighboring agents. To capture the spatio-temporal interconnections from the dynamic maps for the following modules, we propose the \emph{Attentive Maps Encoder} module.
The X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and the dynamic map information for the interactions (lower branch).
The upper branch takes as motion input the offsets $\sum_t^T({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ for each target agent over the observed time steps. The motion information is first passed to a 1D convolutional layer (Conv1D) with one-step stride along the time axis to learn motion features one time step after another. Then it is passed to a fully connected (\textit{fc}) layer. The output of the \textit{fc} layer is passed to an LSTM module for encoding the temporal features along the trajectory sequence of the target agent into a hidden state, which contains all the motion information.
The lower branch takes the dynamic maps $\sum_t^T\text{Map}_i^t$ as input.
The interaction information at each time step is passed through a 2D convolutional layer (Conv2D) with ReLU activation and a Maximum Pooling layer (MaxPool) to learn the spatial features among all the agents. The output of MaxPool at each time step is flattened and concatenated along the time axis to form a time-distributed feature vector. Then, the feature vector is fed to a self-attention module to learn the interaction information with an attention mechanism. Here, we adopt the multi-head attention method from the Transformer \cite{vaswani2017attention}.
The attention function is described as mapping a query and a set of key-value pairs to an output. The query ($Q$), keys ($K$) and values ($V$) are transformed from the spatial features, which are encoded in the above step, by linear transformations:
\begin{align*}
Q =& \pi(\text{Map})W_Q, ~W_Q \in \mathbb{R}^{D\times D_q}\\
K =& \pi(\text{Map})W_K, ~W_K \in \mathbb{R}^{D\times D_k}\\
V =& \pi(\text{Map})W_V, ~W_V \in \mathbb{R}^{D\times D_v}
\end{align*}
where $W_Q, W_K$ and $W_V$ are the trainable parameters and $\pi(\cdot)$ indicates the encoding function of the dynamic maps. $D_q, D_k$ and $D_v$ are the dimensions of the query, key and value vectors (they are the same in the implementation).
Then the self-attended features are calculated as:
\begin{equation}
\label{eq:attention}
\text{Attention}(Q, K, V) = \text{softmax}(\frac{QK^T}{\sqrt{d_k}})V
\end{equation}
This operation is also called \emph{scaled dot-product attention}.
To improve the performance of the attention layer, \emph{multi-head attention} is applied:
\begin{align}
\label{eq:multihead}
\begin{split}
\text{MultiHead}(Q, K, V) &= \text{ConCat}(\text{head}_1,...,\text{head}_h)W_O \\
\text{head}_i &= \text{Attention}(QW_{Qi}, KW_{Ki}, VW_{Vi})
\end{split}
\end{align}
where $W_{Qi}\in \mathbb{R}^{D\times D_{qi}}$ denotes the linear transformation parameters for the query in the $i$-th self-attention head and $D_{qi} = \frac{D_{q}}{\#\text{head}}$. Note that $\#\text{head}$ must divide $D_{q}$ evenly. The same holds for $W_{Ki}$ and $W_{Vi}$. The outputs of all heads are concatenated and linearly transformed with the parameter $W_O$ to form the output of the multi-head attention.
The output of the multi-head attention is passed to an LSTM which encodes the dynamic interconnections along the time sequence.
The hidden states (the last outputs) of the motion LSTM and the interaction LSTM are concatenated and passed to a \textit{fc} layer for feature fusion; the complete output of the X-Encoder is denoted as $\Phi_X$.
The Y-Encoder has the same structure as the X-Encoder and is used to encode both the target agent's motion and interaction information from the ground truth during training. Its output is denoted as $\Phi_Y$.
\subsection{Diverse Sample Generation}
\label{subsec:sample}
In the training phase, $\Phi_X$ and $\Phi_Y$ are concatenated and forwarded to two successive \textit{fc} layers followed by ReLU activation, and then passed to two parallel \textit{fc} layers to produce the mean and standard deviation of the distribution, which are used to re-parameterize $z$ as discussed in Sec.~\ref{subsec:cvae}.
Then, $\Phi_X$ is concatenated with $z$ and fed to the following decoder (based on LSTM) to reconstruct $\mathbf{Y}$ sequentially. It is worth noting that $\Phi_X$ serves as the condition for reconstructing $\mathbf{Y}$ here.
The MSE loss ${\ell}_2 (\mathbf{\hat{Y}}, \mathbf{Y})$ (reconstruction loss) and the $\text{KL}(Q(z|\mathbf{Y}, \mathbf{X})||P(z))$ loss are used to train our model.
The MSE loss forces the reconstructed results to be as close as possible to the ground truth, while the KL-divergence loss forces the set of latent variables $z$ towards a Gaussian distribution.
During inference, the Y-Encoder is removed and the X-Encoder works in the same way as in the training phase to extract information from the observed trajectories. To generate a future prediction sample, a latent variable $z$ is sampled from $\mathcal{N}(\mathbf{0}, ~I)$ and concatenated with $\Phi_X$ (as the condition) as the input of the decoder.
To generate diverse samples, this step is repeated $N$ times to generate $N$ samples of future predictions conditioned on $\Phi_X$.
The overall pipeline of the Attentive Maps Encoder Network (AMENet) consists of four modules, namely X-Encoder, Y-Encoder, Z-Space and Decoder. Each module uses a different type of neural network to process the motion information and the dynamic maps information for multi-path trajectory prediction. Fig~\ref{fig:framework} depicts the pipeline of the framework.
\subsection{Trajectories Ranking}
\label{subsec:ranking}
A bivariate Gaussian distribution is used to rank the multiple predicted trajectories $\hat{Y}^1,\cdots,\hat{Y}^N$ of each agent. At each time step $t'\in\{1,\dots,T'\}$, the predicted positions $({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})$ of agent $i$ over all $n\in\{1,\dots,N\}$ are used to fit a bivariate Gaussian distribution $\mathcal{N}({\mu}_{xy},\,\sigma^{2}_{xy}, \,\rho)^{t'}$. The predicted trajectories are ranked by their joint probability densities $p(\cdot)$ over the time axis using Eqs.~\eqref{eq:pdf} and \eqref{eq:sort}. $\widehat{Y}^\ast$ denotes the most-likely prediction out of the $N$ predictions.
\begin{align}
\label{eq:pdf}
P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'}) \approx p[({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})|\mathcal{N}({\mu}_{xy},\sigma^{2}_{xy},\rho)^{t'}]\\
\label{eq:sort}
\widehat{Y}^\ast = \underset{n\in\{1,\dots,N\}}{\text{arg\,max}}\sum_{t'=1}^{T'}{\log}P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})
\end{align}
\section{Experiments}
\label{sec:experiments}
In this section, we introduce the benchmark dataset used to evaluate our method, the evaluation metrics, and the comparison of the results from our method with those from recent state-of-the-art methods. To further justify how each proposed module in our framework impacts the overall performance, we design a series of ablation studies and discuss the results in detail.
\subsection{Trajnet Benchmark challenge datasets}
\label{subsec:benchmark}
We verify the performance of the proposed model on the most challenging benchmark Trajnet\footnote{http://trajnet.stanford.edu/}. It is the most commonly used large-scale trajectory-based activity benchmark, which provides a consistent evaluation system for fair comparison between the results submitted by different methods \cite{sadeghiankosaraju2018trajnet}.
The benchmark not only covers a wide range of datasets, but also includes various types of road users, from pedestrians to bikers, skateboarders, cars, buses, and golf cars, that navigate in a real world outdoor mixed traffic environment.
Trajnet provides trajectory data collected from 38 scenes with ground truth for training, and data collected from another 20 scenes without ground truth for the challenge competition. Each scene presents a different traffic density and spatial layout for mixed traffic, which makes the prediction task more difficult than training and testing in the same space and places high demands on the generalizability of a model. Trajectories are given as $x$ and $y$ coordinates in meters or pixels projected on a Cartesian space, with 8 time steps for observation and 12 time steps for prediction. Each time step lasts 0.4 seconds.
However, the pixel coordinates are not on the same scale across all the datasets, including the challenge datasets. Without standardizing the pixels to the same scale, it is extremely difficult to train a model on one pixel scale and test it on another. Hence, we follow all the other models and only use the coordinates in meters.
In order to train and evaluate the proposed model, as well as ablative models (see Sec~\ref{sec:ablativemodels}) with ground truth information, 6 datasets from different scenes with mixed traffic are selected from the 38 training datasets. Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}. Figure~\ref{fig:trajectories} shows the visualization of the trajectories in each dataset.
\begin{figure}[bpht!]
\centering
\hspace{-2cm}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{fig/trajectories_bookstore_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{fig/trajectories_coupa_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.4\textwidth]{fig/trajectories_deathCircle_0.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.35\textwidth]{fig/trajectories_gates_1.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.63\textwidth]{fig/trajectories_hyang_6.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.35\textwidth]{fig/trajectories_nexus_0.pdf}
\caption{Selected datasets for evaluating the proposed model, as well as the ablative models. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:trajectories}
\end{figure}
\subsection{Evaluation metrics}
The mean average displacement error (MAD) and the final displacement error (FDE) are the two most commonly applied metrics to measure performance in trajectory prediction~\cite{alahi2016social,gupta2018social,sadeghian2018sophie}. In addition, we count a predicted trajectory as invalid if it collides with another trajectory.
\begin{itemize}
\item MAD is the aligned L2 distance from $Y$ (ground truth) to the corresponding prediction $\hat{Y}$ averaged over all time steps. We report the mean value for all the trajectories.
\item FDE is the L2 distance of the last position from $Y$ to the corresponding $\hat{Y}$. It measures a model's ability to predict the destination and is more challenging as errors accumulate over time.
\item Count of collisions with linear interpolation. Since each discrete time step lasts \SI{0.4}{seconds}, similar to \citep{sadeghian2018sophie}, an intermediate position is inserted using linear interpolation to increase the granularity of the time steps. If one agent coexists with another agent and they come closer than \SI{0.1}{meter} to each other at any of these time steps, the encounter is counted as a collision. Once the predicted trajectory collides with another one, the prediction is invalid (a sketch of this check follows the list).
\end{itemize}
We evaluate the most-likely prediction and the best prediction $@top10$ for the multi-path trajectory prediction, respectively.
The most-likely prediction is selected by the trajectory ranking mechanism (see Sec~\ref{subsec:ranking}).
Best prediction $@top10$ means that among the 10 predicted trajectories with the highest confidence, the one with the smallest MAD and FDE compared with the ground truth is selected as the best. When the ground truth is not available, i.\,e.,~for the Trajnet benchmark test datasets (see Sec~\ref{subsec:benchmark}), only the evaluation of the most-likely prediction is reported.
\subsection{Recent State-of-the-Art Methods}
\label{sec:stoamodels}
We compare the proposed model with the most influential recent state-of-the-art models published on the benchmark challenge for trajectory prediction on a unified evaluation system up to 28/05/2020, in order to guarantee a fair comparison. It is worth mentioning that, given the large number of submissions, only the top methods with published papers are listed here\footnote{More details of the ranking can be found at \url{http://trajnet.stanford.edu/result.php?cid=1&page=2&offset=10}}.
\begin{itemize}
\item Social LSTM~\cite{alahi2016social}, proposes a social pooling layer in which a rectangular occupancy grid is used to pool the existence of the neighbors at each time step. Many later works \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} adopt this social pooling layer for the task, as does the ablative model AOENet (see Sec~\ref{sec:ablativemodels}).
\item Social GAN~\cite{gupta2018social}, applies a generative adversarial network for generating multiple future trajectories, which is essentially different from previous works. It takes the interactions of all agents into account.
\item MX-LSTM~\cite{hasan2018mx}, takes position information and head pose estimates as input and proposes a new pooling mechanism that considers the target agent's view frustum of attention for modeling the interactions with neighboring agents.
\item Social Force~\cite{helbing1995social}, is one of the most well-known approaches for pedestrian simulation in crowded spaces. It uses different forces on the basis of classical mechanics to mimic human behavior: a repulsive force prevents the target agent from colliding with others or obstacles, and an attractive force drives the agent towards its destination or companions.
\item SR-LSTM~\cite{zhang2019sr}, proposes an LSTM-based model with a states refinement module to align all the coexisting agents together and adaptively refine the state of each participant through a message passing framework. A social-aware information selection mechanism with an element-wise gate and an agent-wise attention layer is used to extract the social effects between the target agent and its neighboring agents.
\item RED~\cite{becker2018evaluation}, uses a Recurrent Neural Network (RNN) Encoder with Multilayer Perceptron (MLP) for trajectory prediction. The input of RED is the offsets of the positions for each trajectory.
\item Ind-TF~\cite{giuliari2020transformer}, applies the Transformer network \cite{vaswani2017attention} for trajectory prediction. One big difference from the aforementioned models is that the prediction only depends on the attention mechanism applied to the target agent's motion and considers no social interactions with other neighboring agents.
\end{itemize}
\subsection{Ablative Models}
\label{sec:ablativemodels}
In order to analyze the impact of each component, i.\,e.,~dynamic maps, self-attention, and the extended structure of the CVAE, three ablative models are evaluated in comparison with the proposed model.
\begin{itemize}
\item AMENet, uses dynamic maps in both X-Encoder (observation time) and Y-Encoder (prediction time). This is the proposed model.
\item AOENet, substitutes the dynamic maps with an occupancy grid \citep{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both X-Encoder and Y-Encoder. This comparison is used to validate the contribution of the dynamic maps.
\item MENet, removes self-attention for the dynamic maps. This comparison is used to validate the contribution of the self-attention mechanism for the dynamic maps along the time axis.
\item ACVAE, only uses dynamic maps in the X-Encoder. It is equivalent to a CVAE~\citep{kingma2013auto,kingma2014semi} with self-attention. This comparison is used to validate the contribution of the extended structure for processing the dynamic maps information in the \textbf{Y-Encoder}.
\end{itemize}
\section{Results}
In this section, we will discuss the results of the proposed model in comparison with several recent state-of-the-art models published on the benchmark challenge, as well as the ablative models. We will also discuss the performance of multi-path trajectory prediction with the latent space.
\subsection{Results on Benchmark Challenge}
\label{sec:benchmarkresults}
\begin{table}[t!]
\centering
\caption{Comparison between the proposed model and the state-of-the-art models. Best values are highlighted in boldface.}
\begin{tabular}{lllll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & MAD [m]$\downarrow$ & Year\\
\hline
Social LSTM~\cite{alahi2016social} & 1.3865 & 3.098 & 0.675 & 2018\\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 & 2018\\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 & 2018\\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 & 1995\\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 & 2019\\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 & 2018\\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} & 2020\\
Ours\tablefootnote{The name of the proposed model AMENet was called \textit{ikg\_tnt} by the abbreviation of our institutes on the ranking list at \url{http://trajnet.stanford.edu/result.php?cid=1}.} & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} & 2020 \\
\hline
\end{tabular}
\label{tb:results}
\end{table}
Table~\ref{tb:results} lists the top performances published on the benchmark challenge measured by MAD, FDE and overall average $(\text{MAD} + \text{FDE})/2$. AMENet wins the first position and surpasses the models published before 2020 significantly.
Compared with the most recent model, Transformer Networks Ind-TF \citep{giuliari2020transformer}, our model achieves the same state-of-the-art performance measured by MAD. Moreover, our model achieves the lowest FDE, reducing the error from 1.197 to 1.183 meters. This demonstrates the model's ability to predict the most accurate destination over 12 time steps.
It is worth mentioning that even though our model predicts multi-path trajectories for each agent, the performance achieved on the benchmark challenge is from the most-likely prediction obtained by ranking the multi-path trajectories with the proposed ranking mechanism. The benchmark evaluation metrics do not provide information about collisions of the predicted trajectories; this is reported in the ablative studies in the following sub-section.
\subsection{Results for Ablative Studies}
\label{sec:ablativestudies}
In this sub-section, the contribution of each component of AMENet is discussed via the ablative studies. More details of the dedicated structure of each ablative model can be found in Sec~\ref{sec:ablativemodels}. Table~\ref{tb:resultsablativemodels} shows the quantitative evaluation of the dynamic maps, self-attention, and the extended structure of the CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE/\#collisions on the most-likely prediction.
The comparison between AOENet and AMENet shows that when we replace the dynamic maps with the occupancy grid, the errors measured by MAD and FDE increase by a remarkable margin across all the datasets. The number of invalid trajectories with detected collisions also increases when the dynamic maps are substituted by the occupancy grid. This comparison proves that the dynamic maps with the neighboring agents' motion information, namely, orientation, travel speed and position relative to the target agent, can capture more detailed and accurate interaction information.
The comparison between MENet and AMENet shows that when we remove the self-attention mechanism, the errors measured by MAD and FDE also increase by a remarkable margin across all the datasets, and the number of collisions slightly increases. Without self-attention, the model may have difficulty in learning how the target agent interacts with its neighboring agents from one time step to another. This proves the assumption that self-attention enables the model to learn the global dependency over different time steps.
The comparison between ACVAE and AMENet shows that when we remove the extended structure in the Y-Encoder for the dynamic maps, the errors measured by MAD, and especially FDE, increase significantly across all the datasets, as does the number of collisions. The extended structure provides the model with the ability to process the interaction information also in the prediction horizon during training. It improves the model's performance, especially for predicting a more accurate destination. This improvement has also been confirmed by the benchmark challenge (see Table~\ref{tb:results}). One interesting observation from comparing ACVAE with AOENet/MENet is that ACVAE performs much better than AOENet and MENet measured by MAD and FDE. This observation further proves that, even without the extended structure in the Y-Encoder, the dynamic maps with self-attention are very beneficial for interpreting the interactions between a target agent and its neighboring agents. Their robustness has been demonstrated by the ablative studies across various datasets.
\begin{table}[hbpt!]
\setlength{\tabcolsep}{3pt}
\centering
\small
\caption{Evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE/\#collisions on the most-likely prediction. Best values are highlighted in bold face.}
\begin{tabular}{lllll}
\\ \hline
Dataset & AMENet & AOENet & MENet & ACVAE \\ \hline
bookstore3 & \textbf{0.486}/\textbf{0.979}/\textbf{0} & 0.574/1.144/\textbf{0} & 0.576/1.139/\textbf{0} & 0.509/1.030/2 \\
coupa3 & \textbf{0.226}/\textbf{0.442}/6 & 0.260/0.509/8 & 0.294/0.572/2 & 0.237/0.464/\textbf{0} \\
deathCircle0 & \textbf{0.659}/\textbf{1.297}/\textbf{2} & 0.726/1.437/6 & 0.725/1.419/6 & 0.698/1.378/10 \\
gates1 & \textbf{0.797}/\textbf{1.692}/\textbf{0} & 0.878/1.819/\textbf{0} & 0.941/1.928/2 & 0.861/1.823/\textbf{0} \\
hyang6 & \textbf{0.542}/\textbf{1.094}/\textbf{0} & 0.619/1.244/2 & 0.657/1.292/\textbf{0} & 0.566/1.140/\textbf{0} \\
nexus0 & \textbf{0.559}/\textbf{1.109}/\textbf{0} & 0.752/1.489/\textbf{0} & 0.705/1.140/\textbf{0} & 0.595/1.181/\textbf{0} \\
Average & \textbf{0.545}/\textbf{1.102}/\textbf{1.3} & 0.635/1.274/2.7 & 0.650/1.283/1.7 & 0.578/1.169/2.0 \\ \hline
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
Figure~\ref{fig:abl_qualitative_results} shows the qualitative results of the proposed model AMENet in comparison with the ablative models across the datasets.
In general, all the models can predict realistic trajectories in different scenes, e.\,g.,~intersections and roundabouts, with various traffic densities and motion patterns, e.\,g.,~standing still or moving fast. After a short observation time, i.\,e.,~8 time steps, all the models can capture the general speed and heading direction of agents located in different areas of the space.
From a close observation we can see that AMENet generates more accurate trajectories than the other models; its predictions are very close to, or even completely overlap with, the corresponding ground truth trajectories. Compared with the ablative models, AMENet predicts more accurate destinations, which is in line with the quantitative results shown in Table~\ref{tb:results}. One very clear example in \textit{hyang6} (left figure in the third row) shows that, when the fast-moving agent changes its motion, AOENet and MENet struggle to predict its travel speed and ACVAE struggles to predict its destination. On the other hand, the prediction from AMENet is very close to the ground truth.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{scenarios/bookstore_3290.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{scenarios/coupa_3327.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.31\textwidth]{scenarios/deathCircle_0000.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.28\textwidth]{scenarios/gates_1001.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.52\textwidth]{scenarios/hyang_6209.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.27\textwidth]{scenarios/nexus_0067.pdf}
\caption{Trajectories predicted by AMENet (AME), AOENet (AOE), MENet (ME), ACVAE (CVAE) and the corresponding ground truth (GT) trajectories in different scenes. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:abl_qualitative_results}
\end{figure}
\subsection{Results for Multi-Path Prediction}
\label{sec:multipath-selection}
In this sub-section, we will discuss the performance of multi-path prediction with the latent space.
Instead of generating a single prediction, AMENet generates multiple feasible trajectories by sampling the latent variable $z$ multiple times (see Sec~\ref{subsec:sample}). During training, the motion and interaction information of both the observation and the ground truth are encoded into the so-called Z-Space. The KL-divergence loss forces $z$ towards a Gaussian distribution. Figure~\ref{fig:z_space} shows the visualization of a two-dimensional Z-Space, with $\mu$ visualized on the left and $\log\sigma$ on the right.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=.6\textwidth]{fig/z_space.pdf}
\caption{Z-Space of two dimensions with $\mu$ visualized on the left and $\log\sigma$ visualized on the right. It is trained to follow the $\mathcal{N}(0, 1)$ distribution. The variance is visualized in logarithmic space and is very close to zero.}
\label{fig:z_space}
\end{figure}
Table~\ref{tb:multipath} shows the quantitative results for multi-path trajectory prediction. Predicted trajectories are ranked by $\text{top}@10$ when the corresponding ground truth is available as prior knowledge, and by the most-likely ranking otherwise. Compared with the most-likely prediction, the $\text{top}@10$ prediction yields similar but slightly better performance. It indicates that generating multiple trajectories, e.\,g.,~ten trajectories, increases the chance of narrowing down the error between the prediction and the ground truth. Meanwhile, the ranking mechanism (see Sec~\ref{subsec:ranking}) guarantees the quality of the selected one.
Figure~\ref{fig:multi-path} demonstrates the effectiveness of multi-path trajectory prediction. We can see that in roundabouts, the interactions between different agents are full of uncertainty and each agent has several possible future paths. Even though our method is able to predict the trajectory correctly, the predicted trajectories diverge more widely at further time steps. This also shows that the ability to predict multiple plausible trajectories is important for this task, because the uncertainty of future movements increases for longer-term prediction.
\begin{table}[hbpt!]
\centering
\small
\caption{Evaluation of multi-path trajectory prediction using AMENet. Predicted trajectories are ranked by $\text{top}@10$ and most-likely, and errors are measured by MAD/FDE/\#collisions.}
\begin{tabular}{lll}
\\ \hline
Dataset & Top@10 & Most-likely \\ \hline
bookstore3 & 0.477/0.961/0 & 0.486/0.979/0 \\
coupa3 & 0.221/0.432/0 & 0.226/0.442/6 \\
deathCircle0 & 0.650/1.280/6 & 0.659/1.297/2 \\
gates1 & 0.784/1.663/2 & 0.797/1.692/0 \\
hyang6 & 0.534/1.076/0 & 0.542/1.094/0 \\
nexus0 & 0.642/1.073/0 & 0.559/1.109/0 \\
Average & 0.535/1.081/1.3 & 0.545/1.102/1.3 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.514\textwidth]{multi_preds/deathCircle_0240.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.476\textwidth]{multi_preds/gates_1001.pdf}
\caption{Multi-path predictions from AMENet}
\label{fig:multi-path}
\end{figure}
\pagebreak
\section{Studies on Longer-term Trajectory Prediction}
\label{sec:longterm}
In this section, we investigate the model's performance on predicting longer term trajectories in real-world mixed traffic situations in different intersections.
Since the Trajnet benchmark (see Sec~\ref{subsec:benchmark}) only provides trajectories of 8 time steps for observation and 12 time steps for prediction, we instead use the newly published large-scale open-source dataset InD\footnote{\url{https://www.ind-dataset.com/}} for this task. InD was collected using drones at four different intersections in Germany for mixed traffic in 2019 by Bock et al. \cite{inDdataset}. In total, there are 33 datasets: 7, 11, 12 and 3 recordings from the four intersections, respectively. We follow the same processing format as the Trajnet benchmark and downsample the time steps of InD from the video frame rate of \SI{25}{fps} to \SI{2.5}{fps}, i.\,e.,~0.4 seconds per time step. We obtain the same sequence length (8 time steps) of each trajectory for observation and up to 32 time steps for prediction. One third of the datasets from each intersection are selected for testing the performance of AMENet on longer-term trajectory prediction; namely, these are the last 2, 4, 5 and 1 recordings from the four intersections, respectively. The other datasets are used for training the models for predicting trajectories with sequence lengths gradually increased from 12 and 16 up to 32 time steps.
Figure~\ref{fig:AMENet_MAD} shows the trend of the errors measured by MAD, FDE and the number of collisions in relation to the number of predicted time steps. The performance of AMENet at time step 12 is comparable with its performance on the Trajnet datasets for both $\text{top}@10$ and most-likely prediction. On the other hand, the errors measured by MAD and FDE increase with the number of time steps: behaviors of road users become more unpredictable, and predicting longer-term trajectories is more challenging than short-term ones based only on a short observation. One interesting observation concerns the number of collisions, i.\,e.,~invalid predicted trajectories. Overall, the number of collisions is relatively small and increases with the number of time steps for the $\text{top}@10$ prediction. However, the most-likely prediction leads to fewer collisions and shows no consistent ascending trend over the time steps. One possible explanation is that the $\text{top}@10$ prediction is selected by comparison with the corresponding ground truth, without any consideration of collisions. The most-likely prediction, on the other hand, selects the prediction with the highest joint density out of multiple predictions for each trajectory using a bivariate Gaussian distribution (see Sec~\ref{subsec:ranking}). This majority-voting-like mechanism yields better results regarding safety than merely selecting the best prediction based on the distance to the ground truth.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_MAD.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_FDE.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_collision.pdf}
\caption{AMENet tested on InD for different predicted sequence lengths measured by MAD, FDE and number of collisions, respectively.}
\label{fig:AMENet_MAD}
\end{figure}
Figure~\ref{fig:qualitativeresults} shows the qualitative performance of AMENet for longer-term trajectory prediction at the large intersection with relaxed traffic regulations in InD. From Scenario-A in the left column we can see that AMENet generates very accurate predictions for 12 and 16 time steps (visualized in the first two rows) for the two pedestrians. However, when they encounter each other at 20 time steps (third row), the model predicts a near-collision situation. With a further increase of time steps, the prediction becomes less accurate regarding travel speed and heading direction. Scenario-B in the right column shows similar performance. The model has limited performance for fast-moving agents, i.\,e.,~the vehicle in the middle of the street.
\begin{figure} [bpht!]
\centering
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_12.pdf}
\label{subfig:s-a-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_12.pdf}
\label{subfig:s-b-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_16.pdf}
\label{subfig:s-a-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_16.pdf}
\label{subfig:s-b-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_20.pdf}
\label{subfig:s-a-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_20.pdf}
\label{subfig:s-b-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_24.pdf}
\label{subfig:s-a-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_24.pdf}
\label{subfig:s-b-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_28.pdf}
\label{subfig:s-a-28}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_28.pdf}
\label{subfig:s-b-28}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/29_Trajectories052_32.pdf}
\caption{\small{Scenario-A 12 to 32 steps}}
\label{subfig:s-a-32}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/27_Trajectories046_32.pdf}
\caption{\small{Scenario-B 12 to 32 steps}}
\label{subfig:s-b-32}
\end{subfigure}
\caption{\small{Examples for predicting different sequence lengths in Scenario-A (left column) and Scenario-B (right column). From top to bottom rows the prediction lengths are 12, 16, 20, 24, 28 and 32 time steps. The observation sequence lengths are 8 time steps.}}
\label{fig:qualitativeresults}
\end{figure}
To summarize, longer-term trajectory prediction based on a short observation is extremely challenging. The behaviors of different road users become more unpredictable with increasing time. In future work, in order to push the time horizon beyond 12 steps or 4.8 seconds, extra information may be required to update the observation and improve the performance of the model.
\section{Conclusions}
In this paper, we present a generative model called Attentive Maps Encoder Networks (AMENet) that uses motion information and interaction information for multi-path trajectory prediction of mixed traffic in various real-world environments.
The latent space learnt by the X-Encoder and Y-Encoder for both sources of information enables the model to capture the stochastic properties of motion behaviors for predicting multiple plausible trajectories after a short observation time.
We propose an innovative way---dynamic maps---to learn the social effects between agents during interaction. The dynamic maps capture accurate interaction information by encoding the neighboring agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of interaction over different time steps.
The efficacy of the model has been validated on the most challenging benchmark Trajnet that contains various datasets. Our model not only achieves the state-of-the-art performance, but also wins the first place on the leaderboard for predicting 12 time-step positions of 4.8 seconds.
Each component of AMENet is validated via a series of ablative studies.
We run the model on another newly published open-source dataset of mixed traffic at different intersections to investigate its performance for longer-term (up to 32 time-step positions of 12.8 seconds) trajectory prediction.
\section*{References}
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction of road agents in the near future is a crucial task in intelligent transportation systems (ITS) \cite{goldhammer2013early,hashimoto2015probabilistic,koehler2013stationary}, autonomous driving \cite{franke1998autonomous,ferguson2008detection,luo2018porca}, mobile robot applications \cite{ziebart2009planning,du2011robot,mohanan2018survey}, etc. This task enables an intelligent system to foresee the behavior of road agents and make reasonable and safe decisions for its next operation, especially in urban mixed-traffic zones (a.k.a. shared spaces \cite{reid2009dft}).
Trajectory prediction is generally defined as predicting the plausible and socially acceptable positions of target agents at each time step within a predefined future time interval by observing their history trajectories \cite{alahi2016social,lee2017desire,gupta2018social,sadeghian2018sophie,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,al2018move,zhang2019sr,cheng2020mcenet,johora2020agent,giuliari2020transformer}, as shown in Figure~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.2in 0.6in 0.6in, width=1\textwidth]{fig/first_fig.pdf}
\caption{Predicting the plausible and socially acceptable positions of agents (e.\,g.,~the target agent in black) at each time step within a predefined future time interval by observing their history trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
However, how to effectively and accurately predict the trajectories of mixed road agents remains an open problem in many research communities. The challenges mainly stem from three aspects: 1) the uncertain moving intent of each agent, 2) the complex interactions between agents, and 3) the existence of more than one socially plausible path that a road agent could take in the future. In a crowded scene, the moving direction and speed of different agents change dynamically because of their freewheeling intents and the interactions with surrounding agents.
There exists a large body of literature that focuses on addressing some or all of the aforementioned challenges in order to make accurate future predictions.
Traditional methods model the interactions based on hand-crafted features \cite{helbing1995social,yi2015understanding,yi2016pedestrian,zhou2012understanding,antonini2006discrete,tay2008modelling,yamaguchi2011you}. However, their performance is crucially affected by the quality of the manually designed features, and they lack generalizability \cite{cheng2020trajectory}.
Recently, boosted by the development of deep learning technologies \cite{lecun2015deep}, data-driven methods
keep reporting new state-of-the-art performances on benchmarks \cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,cheng2020mcenet}.
For instance, Social LSTM \cite{alahi2016social} models the interactions between the target pedestrian and its close neighbors and predicts the future positions in sequence. Many later works follow this pioneering work and treat the trajectory prediction problem as a sequence prediction problem based on Recurrent Neural Networks (RNNs) \cite{wu2017modeling,xue2018ss,bartoli2018context,fernando2018soft}.
However, these works design discriminative models and produce a deterministic outcome for each agent. The models therefore tend to predict the ``average'' trajectories, because the commonly used objective function minimizes the Euclidean distance between the ground truth and the predicted outputs.
To predict multiple socially acceptable trajectories for each target agent, different generative models have been proposed. Social GAN \cite{gupta2018social} designs a Generative Adversarial Network (GAN) \cite{goodfellow2014generative} based framework that trains an RNN encoder-decoder as the generator and an RNN-based encoder as the discriminator. DESIRE \cite{lee2017desire} proposes to use a conditional variational auto-encoder (CVAE) \cite{sohn2015learning} to learn the latent space of future trajectories and predicts multiple possible trajectories by repeatedly sampling from the learned latent space.
Previous methods have achieved great success in this domain. However, most of them are designed for a specific type of road agent: pedestrians. In reality, vehicles, pedestrians and cyclists are the three main types of road agents and their behaviors affect each other. To make precise trajectory predictions, their interactions should be considered jointly. Second, the interactions between the target agent and the others are treated equally. However, different agents affect how the target agent moves in the near future to different degrees. For instance, nearer agents should affect the target agent more strongly than more distant ones, and a target vehicle is affected more by pedestrians who tend to cross the road than by the vehicles behind it. Last but not least, the robustness of the models is not fully tested in real-world outdoor mixed-traffic environments (e.\,g.,~roundabouts, intersections) with various unseen traffic situations. For example, can a model trained on some spaces predict accurate trajectories in other, unseen spaces?
To address the aforementioned limitations, we propose a model named \emph{Attentive Maps Encoder Network} (AMENet) that leverages the ability of generative models to generate diverse patterns of future trajectories and models the interactions between the target agent and the others as dynamic attention maps. An overview of our proposed framework is depicted in Fig.~\ref{fig:framework}. (1) Two encoders with identical structure are designed for learning representations of the observed trajectories (X-Encoder) and the future trajectories (Y-Encoder), respectively. Taking the X-Encoder as an example, the encoder first extracts the motion information of the target agent (coordinate offsets in sequential time steps) and the interaction information with the other agents, respectively. In particular, to explore the dynamic interactions, the motion of each agent is characterized by its orientation, speed and position at each time step. A self-attention mechanism is then applied over all agents to extract the dynamic interaction maps. This is where the name \emph{Attentive Maps Encoder} comes from. The motion and interaction information along the observed time interval are collected by two independent LSTMs and then fused together. (2) The output of the Y-Encoder is supplied to a variational auto-encoder to learn the latent space of the future trajectory distribution, which is assumed to be Gaussian. (3) The output of the variational auto-encoder module (obtained by re-parameterization of the encoded features during the training phase, and by re-sampling from the learned latent space during the inference phase) is fed forward to the following decoder, associated with the output of the X-Encoder as the condition, to forecast the future trajectory; this works in the same way as a conditional variational auto-encoder (CVAE) \cite{sohn2015learning}.
The main contributions are summarized as follows:
\begin{itemize}
\item[1] We propose a generative framework based on variational auto-encoder which is trained to encode the motion and interaction information for predicting multiple plausible future trajectories of target agents.
\item[2] We design an innovative module, the \emph{attentive maps encoder}, that learns spatio-temporal interconnections among agents based on dynamic maps using a self-attention mechanism. Global interactions are considered rather than only local ones.
\item[3] Our model is able to predict trajectories of heterogeneous road users, i.\,e.,~pedestrians, vehicles and cyclists, rather than only focusing on pedestrians, in various unseen real-world environments.
\end{itemize}
The efficacy of the proposed method has been validated on the most challenging benchmark \emph{Trajnet} \cite{sadeghiankosaraju2018trajnet}, which contains various datasets from various environments. Our method reports the new state-of-the-art performance and wins the first place on the leaderboard for trajectory prediction.
Its performance for predicting longer-term trajectories (up to 32 time-step positions of 12.8 seconds) is also investigated on the benchmark inD \cite{inDdataset}, which contains mixed traffic at different intersections.
Each component of the proposed model is validated via a series of ablative studies.
\section{Related Work}
Our work focuses on predicting the trajectories of mixed road agents.
In this section we discuss recent related work mainly in the following aspects: modeling this task as a sequence prediction problem, modeling the interactions between agents for precise path prediction, modeling with attention mechanisms, and utilizing generative models to predict multiple plausible trajectories. Our work concentrates on modeling the dynamic interactions between agents and training a generative model to predict multiple plausible trajectories for each target agent.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling trajectory prediction as a sequence prediction task is the most popular approach: the 2D/3D position of a target agent is predicted step by step along the time axis.
The widely applied models include, but are not limited to, linear regression and Kalman filters \cite{harvey1990forecasting}, Gaussian processes \cite{tay2008modelling} and Markov decision processes \cite{kitani2012activity}.
However, these traditional methods rely heavily on the quality of manually designed features and are unable to tackle large-scale data.
Recently, data-driven deep learning technologies, especially RNN models and their variants, e.\,g.,~Long Short-Term Memories (LSTMs) \cite{hochreiter1997long} and Gated Recurrent Units (GRUs) \cite{cho2014learning}, have demonstrated a powerful ability to extract representations from massive data automatically and are used to learn the complex patterns of trajectories.
In recent years, RNN-based models keep pushing the edge of accuracy of predicting pedestrian trajectory \cite{alahi2016social,xu2018encoding,bhattacharyya2018long,gupta2018social,sadeghian2018sophie,zhang2019sr,liang2019peeking}, as well as other types of road users \cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
Other deep learning technologies, such as Convolutional Neural Networks (CNN) and Graph-based neural networks are also used for trajectory prediction \cite{bai2018empirical,chandra2019forecasting,mohamed2020social,gao2020vectornet} and report good performances.
In this work, we also utilize LSTMs to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of a road agent is not only decided by its own will but is also crucially affected by the interactions between it and the other agents. Therefore, effectively modeling the social interactions among agents is important for accurate trajectory prediction.
One of the most influential approaches for modeling interactions is the Social Force Model \cite{helbing1995social}, which models a repulsive force for collision avoidance and an attractive force for social connections. Game theory has been utilized to simulate the negotiation between different road users \cite{schonauer2012modeling}.
Such rule-based interaction modeling has been incorporated into deep learning models. Social LSTM proposes an occupancy grid to locate the positions of close neighboring agents and uses a social pooling layer to encode the interaction information for trajectory prediction \cite{alahi2016social}. Many later works design their own specific ``occupancy'' grids for interaction modeling \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interaction between individual agent and group agents and report better performance.
Meanwhile, different pooling mechanisms have been proposed for interaction modeling. For example, Social GAN \cite{gupta2018social} embeds the relative positions between the target and all the other agents together with each agent's motion hidden state and uses element-wise pooling to extract the interactions between all pairs of agents, not only the close neighboring ones.
Similarly, all the agents are considered in SR-LSTM \cite{zhang2019sr}. It proposes a states refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework. A motion gate and agent-wise attention are used to select the most important information from neighboring agents.
Most of the aforementioned models extract interaction information based on the relative positions of the neighboring agents in relation to the target agent.
The dynamics of the interactions are not fully captured in both the spatial and temporal domains.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Recently, different attention mechanisms \cite{bahdanau2014neural,xu2015show,luong2015effective,vaswani2017attention,wang2018non} are incorporated in neural networks for learning complex spatio-temporal interconnections.
Particularly, their effectiveness has been proven in learning powerful representations from sequence information in different domains and they have been widely utilized \cite{vaswani2017attention,anderson2018bottom,giuliari2020transformer,he2020image}.
Some of the recent state-of-the-art methods also have adapted attention mechanisms for sequence modeling and interaction modeling to predict trajectories.
For example, a soft attention mechanism \cite{xu2015show} is incorporated in LSTM to learn the spatio-temporal patterns from the position coordinates \cite{varshneya2017human}. Similarly, SoPhie \cite{sadeghian2018sophie} applies two separate soft attention modules, one called physical attention for learning the salient features between agent and scene and the other called social attention for modeling agent to agent interactions. In the MAP model \cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work, Ind-TF \cite{giuliari2020transformer}, replaces the RNN with a Transformer \cite{vaswani2017attention} for modeling trajectory sequences.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism \cite{vaswani2017attention}.
\subsection{Generative Models}
\label{sec:rel-generative}
To date, VAE \cite{kingma2013auto} and GAN \cite{goodfellow2014generative} and their variants are the most popular generative models in the era of deep learning.
They are both able to generate diverse outputs by sampling from noise. The essential difference is that GAN trains a generator to generate samples from noise and a discriminator to decide whether the generated samples are real enough; the generator and the discriminator enhance each other during training.
In contrast, VAE is trained by maximizing the lower bound of training data likelihood for learning a latent space that approximates the distribution of the training data.
Generative models have shown promising performance in different tasks, e.\,g.,~super resolution, image to image translation, image generation, as well as trajectories prediction \cite{lee2017desire,gupta2018social,cheng2020mcenet}.
Predicting one single trajectory may not be sufficient due to the uncertainty of road users' behavior.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performance of the two modules is enhanced mutually and the generator is able to generate trajectories that are as precise as the real ones. Similarly, Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
Lee~et al.~\cite{lee2017desire} propose a CVAE model to predict multiple plausible trajectories.
Cheng~et al.~\cite{cheng2020mcenet} propose a CVAE like model named MCENet to predict multiple plausible trajectories conditioned on the scene context and history information of trajectories.
In this work, we incorporate a VAE module to learn a latent space of possible future paths for predicting multiple plausible future trajectories conditioned on the observed past trajectories.
Our work essentially differs from the above generative models. It is worth noting that our method does not explore information from images, i.\,e.,~visual information is not used.
Future trajectories are predicted only based on the trajectory data (i.\,e.,~position coordinates).
Therefore, it is computationally more efficient than methods that require additional information from images, such as DESIRE \cite{lee2017desire}. In addition, our model is trained on some available spaces but validated on other, unseen spaces. Visual information may limit a model's robustness due to over-trained visual features \cite{cheng2020mcenet}.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0.5cm 3cm 8cm 0, width=1\textwidth]{fig/model_framework2.pdf}
\caption{The framework of the proposed model. The target road user's motion information (i.\,e.,~movement along the $x$- and $y$-axis of a Cartesian space) and the interaction information with the neighboring road users (their orientation, travel speed and relative position to the target road user) over time, encoded as dynamic maps centered on the target road user, are the input of the proposed model for accurate and realistic trajectory prediction. In the training phase, both the target user's motion information and the interaction information with neighboring road users in both observation and prediction time are encoded by the X-Encoder and the Y-Encoder, respectively. The outputs of the X-Encoder and the Y-Encoder are concatenated and used to learn a set of latent variables $z$ in a space called Z-Space to capture the stochastic properties of the movement of the target road user, in consideration of the target road user's motion and interactions with the neighboring road users. The set of latent variables $z$ is pushed towards a Gaussian distribution by minimizing a Kullback-Leibler divergence. Then the output of the X-Encoder is concatenated with $z$ for reconstructing the target road user's future trajectory via the Decoder. The Decoder is trained by minimizing the mean square error between the ground truth $Y$ and the reconstructed trajectory $\hat{Y}$.
In the inference phase, the model only has access to the target user's motion information and the interaction information with the neighboring road users from a short observation time. Both sources of information are encoded by the trained X-Encoder and concatenated with $z$ sampled from the Z-Space. The decoder uses the concatenated information as the input for predicting the target road user's future trajectory.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet in detail (Fig.~\ref{fig:framework}) in the following structure: a brief review of CVAE (Sec.~\ref{subsec:cvae}), \emph{Problem Definition} (Sec.~\ref{subsec:definition}), \emph{Input} (Sec.~\ref{subsec:input}), \emph{Dynamic Maps} (Sec.~\ref{subsec:dynamic}), \emph{Diverse Sampling} (Sec.~\ref{subsec:sample}) and \emph{Trajectory Ranking} (Sec.~\ref{subsec:ranking}).
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
In tasks like trajectory prediction, we are interested in modeling the conditional distribution $P(Y_n|X)$, where $X$ is the observed trajectory information and $Y_n$ is one of the possible future trajectories.
In order to generate controllable, diverse samples of future trajectories conditioned on the observed trajectories, a deep generative model, the conditional variational auto-encoder (CVAE), is adopted in our framework.
CVAE is an extension of the generative model VAE \cite{kingma2013auto} that introduces a condition to control the output \cite{kingma2014semi}.
Concretely, it is able to learn the stochastic latent variable $z$ that characterizes the distribution $P(Y_i|X_i)$ of $Y_i$ conditioned on the input $X_i$, where $i$ is the index of the sample.
The objective function of training CVAE is formally defined as:
\begin{equation}
\label{eq:CVAE}
\log{P(Y_i|X_i)} \geq - D_{KL}(Q(z_i|Y_i, X_i)||P(z_i)) + \E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}].
\end{equation}
where $Y$ and $X$ stand for the future and past trajectories in our task, respectively, and $z_i$ for the latent variable. The objective is to maximize the conditional probability $\log{P(Y_i|X_i)}$, which is equivalent to minimizing the reconstruction error $\ell (\hat{Y_i}, Y_i)$ and the Kullback-Leibler divergence $D_{KL}(\cdot)$ in parallel. Since the Kullback-Leibler divergence is always non-negative, this amounts to maximizing a lower bound of the conditional log-likelihood.
In order to enable back propagation for stochastic gradient descent through $\E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}]$, the re-parameterization trick \cite{rezende2014stochastic} is applied to $z_i$: $z_i$ is re-parameterized as $z_i = \mu_i + \sigma_i \odot \epsilon_i$. Here, $z_i$ is assumed to follow a Gaussian distribution $z_i\sim Q(z_i|Y_i, X_i)=\mathcal{N}(\mu_i, \sigma_i)$, $\epsilon_i$ is sampled from a standard Gaussian distribution, and the mean $\mu_i$ and the standard deviation $\sigma_i$ of $z_i$ are produced by two side-by-side \textit{fc} layers, respectively (as shown in Fig.~\ref{fig:framework}). In this way, differentiating through the sampling process $Q(z_i|Y_i, X_i)$ reduces to differentiating the sampled $z_i$ w.\,r.\,t.~$\mu_i$ and $\sigma_i$, so that back propagation with stochastic gradient descent can be used to optimize the networks that produce $\mu_i$ and $\sigma_i$.
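To make the re-parameterization step concrete, the following is a minimal PyTorch sketch; the module and variable names (\texttt{LatentSampler}, \texttt{feat\_dim}, \texttt{z\_dim}) are our own illustrative assumptions, not part of the original implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class LatentSampler(nn.Module):
    # Sketch of the re-parameterization trick: two side-by-side fc
    # layers produce the mean and log-variance of z, so that gradients
    # can flow through the sampling step z = mu + sigma * eps.
    def __init__(self, feat_dim: int, z_dim: int):
        super().__init__()
        self.fc_mu = nn.Linear(feat_dim, z_dim)      # produces mu_i
        self.fc_logvar = nn.Linear(feat_dim, z_dim)  # produces log(sigma_i^2)

    def forward(self, features):
        mu = self.fc_mu(features)
        logvar = self.fc_logvar(features)
        sigma = torch.exp(0.5 * logvar)
        eps = torch.randn_like(sigma)                # eps ~ N(0, I)
        z = mu + sigma * eps                         # differentiable in mu, sigma
        return z, mu, logvar
\end{verbatim}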
\subsection{Problem Definition}
\label{subsec:definition}
The multi-path trajectory prediction problem is defined as follows: for an agent $i$, take as input its observed trajectory $\mathbf{X}_i=\{X_i^1,\cdots,X_i^T\}$ and predict its $n$-th plausible future trajectory $\hat{\mathbf{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,n}^{T'}\}$. $T$ and $T'$ denote the sequence lengths of the observed and the predicted trajectory, respectively. The position of agent $i$ at time step $t$ is characterized by the coordinates $X_i^t=({x_i}^t, {y_i}^t)$ (3D coordinates are also possible, but in this work only 2D coordinates are considered), and likewise $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^{t'}, \hat{y}_{i,n}^{t'})$.
For simplicity, we omit the notation of time steps in the following parts of the paper.
The objective is to predict multiple plausible future trajectories $\hat{\mathbf{Y}}_i = \hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}$ that are as close as possible to the ground truth $\mathbf{Y}_i$. This task is formally defined as $\hat{\mathbf{Y}}_{i,n} = f(\mathbf{X}_i, \text{Map}_i), ~n \in \{1,\cdots,N\}$, where $N$ denotes the total number of predicted trajectories and $\text{Map}_i$ denotes the dynamic maps centered on the target agent that capture the interactions with its neighborhood agents over the time steps. More details of the dynamic maps are given in the following section.
\subsection{Input of Model}
\label{subsec:input}
Specifically, we use the offsets $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of the trajectory positions between two consecutive time steps as the motion information instead of the coordinates in a Cartesian space, a choice that has been widely applied in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}. Compared with coordinates, offsets are independent of the given space and less prone to overfitting a model to a particular space or travel direction.
The offset can be interpreted as speed over time steps that are defined with a constant duration.
As long as the original position is known, the absolute coordinates at each position can be calculated by cumulatively summing the sequence offsets.
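As a small illustration of this offset representation, the following Python sketch (with a hypothetical four-step trajectory) computes the offsets and recovers the absolute coordinates by a cumulative sum:
\begin{verbatim}
import numpy as np

# Hypothetical 2D trajectory of four time steps (coordinates in meters).
positions = np.array([[0.0, 0.0], [0.4, 0.1], [0.9, 0.3], [1.5, 0.6]])

# Offsets between consecutive time steps serve as the motion input.
offsets = np.diff(positions, axis=0)      # shape (T-1, 2)

# Given the first position, the absolute coordinates are recovered
# by a cumulative sum over the offsets.
recovered = np.vstack([positions[:1],
                       positions[0] + np.cumsum(offsets, axis=0)])
assert np.allclose(recovered, positions)
\end{verbatim}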
\subsection{Dynamic Maps}
\label{subsec:dynamic}
Different from the recent works of parsing the interactions between the target and neighborhood agents using occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, we propose an innovative and straightforward method---attentive dynamic maps---to learn interaction information among agents.
As demonstrated in Figure~\ref{fig:framework}, a dynamic map consists of three layers that represent \emph{orientation}, \emph{speed} and \emph{position} information, respectively. Each layer is centered on the target agent's position and divided into uniform grid cells. The layers are discretized into grids for two reasons: (1) compared with representing information at the pixel level, the grid level is more computationally efficient; (2) the size and moving speed of an agent are not fixed and it occupies a local region of pixels of arbitrary shape, so the spatio-temporal information of the individual pixels differs even when they belong to the same agent. Therefore, we represent the spatio-temporal information as an average value within a grid cell. We calculate the value of each cell on the different layers as follows:
the neighborhood agents are assigned to the corresponding grid cells by their relative position to the target agent, as well as by their relative offset (speed) to the target agent at each time step in the $x$- and $y$-axis directions.
Eq.~\eqref{eq:map} denotes the mapping mechanism for target user $i$ considering the orientation $O$, speed $S$ and position $P$ of all the neighborhood agents $j \in K$ that coexist with the target agents at each time step.
\begin{equation}
\label{eq:map}
\begin{split}
\text{Map}_i^t = \sum_j^K(O, S, P) | (x_j^t-x_i^t, ~y_j^t-y_i^t, ~\Delta{x}_j^t-\Delta{x}_i^t, ~\Delta{y}_j^t-\Delta{y}_i^t), \\
\text{where}~i, j \in K ~\text{and} ~i\neq j.
\end{split}
\end{equation}
The \emph{orientation layer $O$} represents the heading direction of the neighborhood agents. The orientation value is given in \emph{degrees} $[0, 360]$ and then normalized into $[0, 1]$. The value of each cell is the mean of the orientations of all the agents located within the cell.
The \emph{speed layer $S$} represents the neighborhood agents' travel speed. Locally, the speed in each cell is the average speed of all the agents within that cell. Globally, across all the cells, the speed values are normalized into $[0, 1]$ by Min-Max normalization.
The \emph{position layer $P$} stores the positions of the cells that are occupied by the neighborhood agents, calculated by Eq.~\eqref{eq:map}. The value of the corresponding cell is the number of individual neighborhood road users within the cell normalized by the total number of neighborhood road users at that time step, which can be interpreted as the cell's occupancy density.
Each time step has its own dynamic map, and therefore the spatio-temporal interaction information among agents is captured dynamically over time.
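The following Python sketch shows how the three layers of one dynamic map at a single time step might be filled. The grid size, cell resolution and the use of relative offsets for orientation and speed reflect our reading of Eq.~\eqref{eq:map}; they are assumptions for illustration, not the exact original implementation.
\begin{verbatim}
import numpy as np

def dynamic_map(target_pos, target_off, neigh_pos, neigh_off,
                grid_size=32, cell=0.5):
    # Returns three grid_size x grid_size layers: orientation O,
    # speed S and occupancy-density P, centered on the target agent.
    O = np.zeros((grid_size, grid_size))
    S = np.zeros((grid_size, grid_size))
    counts = np.zeros((grid_size, grid_size))
    half = grid_size // 2
    for pos, off in zip(neigh_pos, neigh_off):
        rel = (pos - target_pos) / cell          # relative position in cells
        col, row = int(rel[0]) + half, int(rel[1]) + half
        if not (0 <= row < grid_size and 0 <= col < grid_size):
            continue                             # neighbor outside the map
        rel_off = off - target_off               # relative offset (speed)
        heading = np.degrees(np.arctan2(rel_off[1], rel_off[0])) % 360.0
        O[row, col] += heading / 360.0           # orientation, scaled to [0, 1]
        S[row, col] += np.linalg.norm(rel_off)
        counts[row, col] += 1
    occupied = counts > 0
    O[occupied] /= counts[occupied]              # mean orientation per cell
    S[occupied] /= counts[occupied]              # mean speed per cell
    if S.max() > S.min():
        S = (S - S.min()) / (S.max() - S.min())  # Min-Max normalization
    P = counts / max(counts.sum(), 1.0)          # occupancy density
    return np.stack([O, S, P])
\end{verbatim}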
\subsubsection{Attentive Maps Encoder}
\label{subsubsec:AMENet}
As discussed above, each time step has a dynamic map which summarizes the orientation, speed and position information of all the neighborhood agents. To capture the spatio-temporal interconnections from the dynamic maps for the subsequent modules, we propose the \emph{Attentive Maps Encoder} module.
X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and dynamic maps information for interaction (lower branch).
The upper branch takes the offsets $\sum_t^T({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of each target agent over the observed time steps as motion input. The motion information is first passed to a 1D convolutional layer (Conv1D) with one-step stride along the time axis to learn motion features one time step after another. Then it is passed to a fully connected (\textit{fc}) layer. The output of the \textit{fc} layer is passed to an LSTM module for encoding the temporal features along the trajectory sequence of the target agent into a hidden state, which contains all the motion information.
The lower branch takes the dynamic maps $\sum_t^T\text{Map}_i^t$ as input.
The interaction information at each time step is passed through a 2D convolutional layer (Conv2D) with ReLU activation and a maximum pooling layer (MaxPool) to learn the spatial features among all the agents. The outputs of MaxPool at each time step are flattened and concatenated along the time axis to form a time-distributed feature vector. Then, the feature vector is fed to a self-attention module to learn the interaction information with an attention mechanism. Here, we adopt the multi-head attention method from Transformer \cite{vaswani2017attention}.
The attention function is described as mapping a query and a set of key-value pairs to an output. The query ($Q$), keys ($K$) and values ($V$) are transformed from the spatial features, which are encoded in the above step, by linear transformations:
\begin{align*}
Q =& \pi(\text{Map}W_Q), W_Q \in \mathbb{R}^{D\times D_q}\\
K =& \pi(\text{Map}W_K), W_K \in \mathbb{R}^{D\times D_k}\\
V =& \pi(\text{Map}W_V), W_V \in \mathbb{R}^{D\times D_v}
\end{align*}
where $W_Q, W_K$ and $W_V$ are the trainable parameters and $\pi(\cdot)$ indicates the encoding function of the dynamic maps. $D_q, D_k$ and $D_v$ are the dimensions of the query, key and value vectors (they are the same in our implementation).
Then the self-attended features are calculated as:
\begin{equation}
\label{eq:attention}
\text{Attention}(Q, K, V) = \text{softmax}(\frac{QK^T}{\sqrt{D_k}})V
\end{equation}
This operation is also called \emph{scaled dot-product attention}.
To improve the performance of the attention layer, \emph{multi-head attention} is applied:
\begin{align}
\label{eq:multihead}
\begin{split}
\text{MultiHead}(Q, K, V) &= \text{ConCat}(\text{head}_1,...,\text{head}_h)W_O \\
\text{head}_i &= \text{Attention}(QW_{Qi}, KW_{Ki}, VW_{Vi})
\end{split}
\end{align}
where $W_{Qi}\in \mathbb{R}^{D\times D_{qi}}$ indicates the linear transformation parameters for the query in the $i$-th self-attention head and $D_{qi} = \frac{D_{q}}{\#head}$. Note that $\#head$ must be a divisor of $D_{q}$. The same holds for $W_{Ki}$ and $W_{Vi}$. The output of the multi-head attention is obtained via a linear transformation with parameter $W_O$ applied to the concatenation of the outputs of all heads.
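As a compact illustration, the self-attention over the per-time-step map features can be sketched with PyTorch's built-in multi-head attention; the dimensions below are illustrative, not the settings used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

T, D, n_heads = 8, 64, 4             # observed steps, feature dim, #heads
map_features = torch.randn(1, T, D)  # flattened Conv2D/MaxPool output per step

# Query, key and value are all projections of the same sequence,
# i.e., self-attention across the T time steps.
attention = nn.MultiheadAttention(embed_dim=D, num_heads=n_heads,
                                  batch_first=True)
attended, weights = attention(map_features, map_features, map_features)

# The attended features are then encoded along the time axis by an LSTM.
lstm = nn.LSTM(input_size=D, hidden_size=D, batch_first=True)
_, (h_interaction, _) = lstm(attended)
\end{verbatim}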
The output of multi-head attention is passed to an LSTM which is used to encode the dynamic interconnection in time sequence.
Both the hidden states (the last output) from the motion LSTM and the interaction LSTM are concatenated and passed to a \textit{fc} layer for feature fusion, as the complete output of the X-Encoder, which is denoted as $\Phi_X$.
Y-Encoder has the same structure as X-Encoder, which is used to encode both target agent's motion and interaction information from the ground truth during the training time. The output of the Y-Encoder is denoted as $\Phi_Y$.
\subsection{Diverse Sample Generation}
\label{subsec:sample}
In the training phase, $\Phi_X$ and $\Phi_Y$ are concatenated and forwarded to two successive \textit{fc} layers followed by ReLU activation, and then passed to two parallel \textit{fc} layers to produce the mean and standard deviation of the distribution, which are used to re-parameterize $z$ as discussed in Sec.~\ref{subsec:cvae}.
Then, $\Phi_X$ is concatenated with $z$ as condition and fed to the following decoder (based on LSTM) to reconstruct $\mathbf{Y}$ sequentially.
The MSE loss ${\ell}_2 (\mathbf{\hat{Y}}, \mathbf{Y})$ (reconstruction loss) and the $\text{KL}(Q(z|\mathbf{Y}, \mathbf{X})||P(z))$ loss are used to train our model.
The MSE loss forces the reconstructed results to be as close as possible to the ground truth, while the KL-divergence loss pushes the set of latent variables $z$ towards a Gaussian distribution.
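A minimal sketch of this combined objective is given below; the weighting factor \texttt{beta} is our assumption, as the relative weighting of the two terms is an implementation detail.
\begin{verbatim}
import torch
import torch.nn.functional as F

def cvae_loss(y_hat, y, mu, logvar, beta=1.0):
    # Reconstruction term: MSE between prediction and ground truth.
    recon = F.mse_loss(y_hat, y)
    # Closed-form KL divergence between N(mu, sigma^2) and N(0, I).
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
\end{verbatim}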
During inference, Y-Encoder is removed and X-Encoder works in the same way as in the training stage to extract information from the observed trajectories. To generate a future prediction sample, a latent variable $z$ is sampled from $\mathcal{N}(\mathbf{0}, ~I)$ and concatenated with $\Phi_X$ (as condition) as the input of the decoder.
To generate diverse samples, this step is repeated $N$ times to generate $N$ predictions conditioned on $\Phi_X$.
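The sampling loop at inference time can be sketched as follows; \texttt{decoder} and \texttt{z\_dim} stand for the trained decoder and the latent dimension and are placeholders here.
\begin{verbatim}
import torch

def predict_multipath(decoder, phi_x, n_samples=10, z_dim=2):
    # phi_x: output of the trained X-Encoder for a batch of agents.
    predictions = []
    for _ in range(n_samples):
        z = torch.randn(phi_x.size(0), z_dim)   # z ~ N(0, I)
        predictions.append(decoder(torch.cat([phi_x, z], dim=-1)))
    return predictions                          # N plausible trajectories
\end{verbatim}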
The overall pipeline of the Attentive Maps Encoder Network (AMENet) consists of four modules, namely X-Encoder, Y-Encoder, Z-Space and Decoder.
Each of the modules uses a different type of neural network to process the motion information and dynamic maps information for multi-path trajectory prediction. Fig.~\ref{fig:framework} depicts the pipeline of the framework.
\subsection{Trajectory Ranking}
\label{subsec:ranking}
A bivariate Gaussian distribution is used to rank the multiple predicted trajectories $\hat{Y}^1,\cdots,\hat{Y}^N$ for each agent. At each time step $t'\in T'$, the predicted positions $({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})$, $n{\in}N$, of agent $i$ are used to fit a bivariate Gaussian distribution $\mathcal{N}({\mu}_{xy},\,\sigma^{2}_{xy}, \,\rho)^{t'}$. The predicted trajectories are sorted by their joint probability densities $p(\cdot)$ over the time axis using Eqs.~\eqref{eq:pdf} and \eqref{eq:sort}. $\widehat{Y}^\ast$ denotes the most-likely prediction out of the $N$ predictions.
\begin{align}
\label{eq:pdf}
P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'}) \approx p[({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})|\mathcal{N}({\mu}_{xy},\sigma^{2}_{xy},\rho)^{t'}]\\
\label{eq:sort}
\widehat{Y}^\ast = \underset{n}{\text{arg\,max}}\sum_{t'=1}^{T'}{\log}P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})
\end{align}
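A possible implementation of this ranking step, under the assumption that the Gaussian parameters at each time step are estimated from the $N$ predicted positions, is sketched below.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def rank_most_likely(samples):
    # samples: array of shape (N, T', 2) holding N predicted trajectories.
    N, T, _ = samples.shape
    log_p = np.zeros(N)
    for t in range(T):
        pts = samples[:, t, :]
        mu = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(2)  # regularized fit
        # Sum the log-densities over the prediction horizon.
        log_p += multivariate_normal.logpdf(pts, mean=mu, cov=cov)
    return samples[np.argmax(log_p)]            # the most-likely prediction
\end{verbatim}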
\section{Evaluation Metrics and Experiments}
\label{sec:experimentsandmetrcs}
In this section, we introduce the benchmark challenge, the evaluation metrics and the recent state-of-the-art methods. To analyze how each module in our framework impacts the overall performance, we also design a series of ablation studies.
\subsection{Trajnet Benchmark challenge datasets}
\label{sec:benchmark}
We verify the performance of the proposed model on the most challenging benchmark Trajnet\footnote{http://trajnet.stanford.edu/}. It is the most commonly used large-scale trajectory-based activity benchmark, which provides a unified evaluation system to test gathered state-of-the-art methods on various trajectory-based activity forecasting datasets \cite{sadeghiankosaraju2018trajnet}. The benchmark not only covers a wide range of datasets, but also includes various types of road users, from pedestrians to bikers, skateboarders, cars, buses, and golf cars, that navigate in a real world outdoor mixed traffic environment.
Trajnet provides 38 datasets with ground truth for training models and 20 datasets without ground truth for the challenge competition. Each dataset presents various traffic densities in a different space layout for mixed traffic, which makes the prediction task more difficult than training and testing on the same space. It requires a model not to be overfitted to a particular type of road user or space. Trajectories are given as $x$ and $y$ coordinates in meters or pixels projected on a Cartesian space, with 8 time steps for observation and 12 time steps for prediction. Each time step lasts 0.4 seconds.
However, the pixel coordinates are not on the same scale across all the datasets, including the challenge datasets. Without standardizing the pixels to the same scale, it is extremely difficult to train a model on one pixel scale and test it on other pixel scales. Hence, we follow all the other models and only use the coordinates in meters.
In order to train and evaluate the proposed model, as well as ablative models (see Sec~\ref{sec:ablativemodels}) with ground truth information, 6 datasets from different scenes with mixed traffic are selected from the 38 training datasets. Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}. Figure~\ref{fig:trajectories} shows the visualization of the trajectories in each dataset.
\begin{figure}[bpht!]
\centering
\hspace{-2cm}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{fig/trajectories_bookstore_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{fig/trajectories_coupa_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.4\textwidth]{fig/trajectories_deathCircle_0.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.35\textwidth]{fig/trajectories_gates_1.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.63\textwidth]{fig/trajectories_hyang_6.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.35\textwidth]{fig/trajectories_nexus_0.pdf}
\caption{Selected datasets for evaluating the proposed model, as well as the ablative models. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:trajectories}
\end{figure}
\subsection{Evaluation metrics}
The mean average displacement error (MAD) and the final displacement error (FDE) are the two most commonly applied metrics to measure the performance in terms of trajectory prediction~\cite{alahi2016social,gupta2018social,sadeghian2018sophie}. In addition, we also count a predicted trajectory as invalid if it collides with another trajectory.
\begin{itemize}
\item MAD is the aligned L2 distance from $Y$ (ground truth) to the corresponding prediction $\hat{Y}$ averaged over all time steps. We report the mean value for all the trajectories.
\item FDE is the L2 distance of the last position from $Y$ to the corresponding $\hat{Y}$. It measures a model's ability for predicting the destination and is more challenging as errors accumulate in time.
\item Count of collisions with linear interpolation. Since each discrete time step lasts \SI{0.4}{seconds}, similar to \citep{sadeghian2018sophie}, an intermediate position is inserted using linear interpolation to increase the granularity of the time steps. If one agent coexists with another agent and they come closer than \SI{0.1}{meter} to each other at any given time step, the encounter is counted as a collision. Once a predicted trajectory collides with another one, the prediction is invalid (a sketch of these metrics is given after this list).
\end{itemize}
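The following Python sketch illustrates how these metrics might be computed for a single pair of trajectories; the interpolation inserts one midpoint between consecutive steps, which corresponds to halving the 0.4-second step.
\begin{verbatim}
import numpy as np

def mad_fde(y_hat, y):
    # Per-step L2 distances between prediction and ground truth.
    dists = np.linalg.norm(y_hat - y, axis=-1)
    return dists.mean(), dists[-1]              # MAD, FDE

def collides(traj_a, traj_b, threshold=0.1):
    # Insert one midpoint between consecutive steps (linear interpolation).
    def interpolate(t):
        mids = (t[:-1] + t[1:]) / 2.0
        out = np.empty((2 * len(t) - 1, 2))
        out[0::2], out[1::2] = t, mids
        return out
    a, b = interpolate(traj_a), interpolate(traj_b)
    # A collision occurs if the two agents come closer than the threshold.
    return bool(np.any(np.linalg.norm(a - b, axis=-1) < threshold))
\end{verbatim}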
We evaluate the most-likely prediction and the best prediction $\text{top}@10$ for multi-path trajectory prediction, respectively.
The most-likely prediction is selected by the trajectory ranking mechanism, see Sec.~\ref{subsec:ranking}.
Best prediction $\text{top}@10$ means that, among the 10 predicted trajectories with the highest confidence, the one with the smallest MAD and FDE compared with the ground truth is selected as the best. When the ground truth is not available, i.\,e.,~for the Trajnet benchmark test datasets (see Sec.~\ref{sec:benchmark}), only the evaluation on the most-likely prediction is reported.
\subsection{Recent State-of-the-Art Methods}
\label{sec:stoamodels}
We compare the proposed model with the most influential recent state-of-the-art models published on the benchmark challenge for trajectory prediction on a unified evaluation system up to 28/05/2020, in order to guarantee a fair comparison. It is worth mentioning that, given the large number of submissions, only the top methods with published papers are listed here. More details of the ranking can be found at \url{http://trajnet.stanford.edu/result.php?cid=1&page=2&offset=10}.
\begin{itemize}
\item Social LSTM~\cite{alahi2016social}, proposes a social pooling layer in which a rectangular occupancy grid is used to pool the existence of the neighbors at each time step. Many later works \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} adopt this social pooling layer for the task, as does the ablative model AOENet (see Sec.~\ref{sec:ablativemodels}).
\item Social GAN~\cite{gupta2018social}, applies generative adversarial network for multiple future trajectories generation which is essentially different from previous works. It takes the interactions of all agents into account.
\item MX-LSTM~\cite{hasan2018mx}, takes the position information and head pose estimates as the input and proposes a new pooling mechanism that considers the target agent's view frustum of attention for modeling the interactions with other neighborhood agents.
\item Social Force~\cite{helbing1995social}, is one of the most well-known approaches for simulating pedestrian agents in crowded spaces. It uses different forces on the basis of classical mechanics to mimic human behavior. The repulsive force prevents the target agent from colliding with others or obstacles, and the attractive force drives the agent towards its destination or companions.
\item SR-LSTM~\cite{zhang2019sr}, proposes an LSTM based model with a states refinement module to align all the coexisting agents together and adaptively refine the state of each participant through a message passing framework. A social-aware information selection mechanism with element-wise gate and agent-wise attention layer is used to extract the social effects between the target agent and its neighborhood agents.
\item RED~\cite{becker2018evaluation}, uses a Recurrent Neural Network (RNN) Encoder with Multilayer Perceptron (MLP) for trajectory prediction. The input of RED is the offsets of the positions for each trajectory.
\item Ind-TF~\cite{giuliari2020transformer}, applies the Transformer network \cite{vaswani2017attention} for trajectory prediction. One big difference from the aforementioned models is that the prediction only depends on the attention mechanism for the target agent's motion and considers no social interactions with other neighborhood agents.
\end{itemize}
\subsection{Ablative Models}
\label{sec:ablativemodels}
In order to analyze the impact of each component, i.\,e.,~dynamic maps, self-attention, and the extended structure of CVAE, three ablative models are compared with the proposed model.
\begin{itemize}
\item AMENet, uses dynamic maps in both X-Encoder (observation time) and Y-Encoder (prediction time). This is the proposed model.
\item AOENet, substitutes dynamic maps with occupancy grid \citep{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both X-Encoder and Y-Encoder. This comparison is used to validate the contribution from the dynamic maps.
\item MENet, removes self-attention for the dynamic maps. This comparison is used to validate the contribution of the self-attention mechanism for the dynamic maps along the time axis.
\item ACVAE, only uses dynamic maps in X-Encoder. It is equivalent to CVAE~\citep{kingma2013auto,kingma2014semi} with self-attention. This comparison is used to validate the contribution of the extended structure for processing the dynamic maps information in \textbf{Y-Encoder}.
\end{itemize}
\section{Results}
In this section, we will discuss the results of the proposed model in comparison with several recent state-of-the-art models published on the benchmark challenge, as well as the ablative models. We will also discuss the performance of multi-path trajectory prediction with the latent space.
\subsection{Results on Benchmark Challenge}
\label{sec:benchmarkresults}
\begin{table}[t!]
\centering
\caption{Comparison between the proposed model and the state-of-the-art models. Best values are highlighted in boldface.}
\begin{tabular}{lllll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & MAD [m]$\downarrow$ & Year\\
\hline
Social LSTM~\cite{alahi2016social} & 1.3865 & 3.098 & 0.675 & 2018\\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 & 2018\\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 & 2018\\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 & 1995\\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 & 2019\\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 & 2018\\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} & 2020\\
Ours\tablefootnote{The name of the proposed model AMENet was called \textit{ikg\_tnt} by the abbreviation of our institutes on the ranking list at \url{http://trajnet.stanford.edu/result.php?cid=1}.} & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} & 2020 \\
\hline
\end{tabular}
\label{tb:results}
\end{table}
Table~\ref{tb:results} lists the top performances published on the benchmark challenge measured by MAD, FDE and the overall average $(\text{MAD} + \text{FDE})/2$. AMENet ranks first and significantly surpasses the models published before 2020.
Compared with the most recent model, the Transformer network Ind-TF \citep{giuliari2020transformer}, our model matches the state-of-the-art performance measured by MAD and achieves the lowest FDE, reducing the error from 1.197 to 1.183 meters. This demonstrates the model's ability to predict the most accurate destination over 12 time steps.
It is worth mentioning that, even though our model predicts multi-path trajectories for each agent, the performance reported on the benchmark challenge is that of the most-likely prediction obtained by ranking the multi-path trajectories with the proposed ranking mechanism. The benchmark evaluation metrics do not provide information about collisions among predicted trajectories; this is reported in the ablative studies in the following sub-section.
\subsection{Results for Ablative Studies}
\label{sec:ablativestudies}
In this sub-section, the contribution of each component in AMENet is discussed via the ablative studies. More details of the dedicated structure of each ablative model can be found in Sec.~\ref{sec:ablativemodels}. Table~\ref{tb:resultsablativemodels} shows the quantitative evaluation of the dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE/\#collisions on the most-likely prediction.
The comparison between AOENet and AMENet shows that when we replace the dynamic maps with the occupancy grid, the errors measured by MAD and FDE increase by a remarkable margin across all the datasets. The number of invalid trajectories with detected collisions also increases when the dynamic maps are substituted by the occupancy grid. This comparison proves that the dynamic maps with the neighborhood agents' motion information, namely, orientation, travel speed and position relative to the target agent, can capture more detailed and accurate interaction information.
The comparison between MENet and AMENet shows that when we remove the self-attention mechanism, the errors measured by MAD and FDE also increase by a remarkable margin across all the datasets, and the number of collisions slightly increases. Without self-attention, the model may have difficulty in learning how the target agent interacts with its neighborhood agents from one time step to another. This supports the assumption that self-attention enables the model to learn the global dependency over different time steps.
The comparison between ACVAE and AMENet shows that when we remove the extended structure in Y-Encoder for the dynamic maps, the errors measured by MAD, and especially FDE, increase significantly across all the datasets, as does the number of collisions. The extended structure enables the model to process the interaction information of the prediction time during training. It improves the model's performance, especially for predicting a more accurate destination. This improvement has also been confirmed by the benchmark challenge (see Table~\ref{tb:results}). One interesting observation from the comparison of ACVAE with AOENet and MENet is that ACVAE performs much better than both, measured by MAD and FDE. This further proves that, even without the extended structure in Y-Encoder, the dynamic maps with self-attention are very beneficial for interpreting the interactions between a target agent and its neighborhood agents. Their robustness has been demonstrated by the ablative studies across various datasets.
\begin{table}[hbpt!]
\setlength{\tabcolsep}{3pt}
\centering
\small
\caption{Evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE/\#collisions on the most-likely prediction. Best values are highlighted in bold face.}
\begin{tabular}{lllll}
\\ \hline
Dataset & AMENet & AOENet & MENet & ACVAE \\ \hline
bookstore3 & \textbf{0.486}/\textbf{0.979}/\textbf{0} & 0.574/1.144/\textbf{0} & 0.576/1.139/\textbf{0} & 0.509/1.030/2 \\
coupa3 & \textbf{0.226}/\textbf{0.442}/6 & 0.260/0.509/8 & 0.294/0.572/2 & 0.237/0.464/\textbf{0} \\
deathCircle0 & \textbf{0.659}/\textbf{1.297}/\textbf{2} & 0.726/1.437/6 & 0.725/1.419/6 & 0.698/1.378/10 \\
gates1 & \textbf{0.797}/\textbf{1.692}/\textbf{0} & 0.878/1.819/\textbf{0} & 0.941/1.928/2 & 0.861/1.823/\textbf{0} \\
hyang6 & \textbf{0.542}/\textbf{1.094}/\textbf{0} & 0.619/1.244/2 & 0.657/1.292/\textbf{0} & 0.566/1.140/\textbf{0} \\
nexus0 & \textbf{0.559}/\textbf{1.109}/\textbf{0} & 0.752/1.489/\textbf{0} & 0.705/1.140/\textbf{0} & 0.595/1.181/\textbf{0} \\
Average & \textbf{0.545}/\textbf{1.102}/\textbf{1.3} & 0.635/1.274/2.7 & 0.650/1.283/1.7 & 0.578/1.169/2.0 \\ \hline
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
Figure~\ref{fig:abl_qualitative_results} shows the qualitative results of the proposed model AMENet in comparison with the ablative models across the datasets.
In general, all the models can predict realistic trajectories in different scenes, e.\,g.,~intersections and roundabouts, of various traffic densities and motion patterns, e.\,g.,~standing still or moving fast. After a short observation time, i.\,e.,~8 time steps, all the models can capture the general speed and heading direction of agents located in different areas of the space.
From a closer look we can see that AMENet generates more accurate trajectories than the other models, which are very close to or even completely overlap with the corresponding ground truth trajectories. Compared with the ablative models, AMENet predicts more accurate destinations, which is in line with the quantitative results shown in Table~\ref{tb:results}. One very clear example in \textit{hyang6} (left figure in the third row) shows that, when the fast-moving agent changes its motion, AOENet and MENet have limited ability to predict its travel speed and ACVAE has limited ability to predict its destination. On the other hand, the prediction from AMENet is very close to the ground truth.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{scenarios/bookstore_3290.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{scenarios/coupa_3327.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.31\textwidth]{scenarios/deathCircle_0000.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.28\textwidth]{scenarios/gates_1001.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.52\textwidth]{scenarios/hyang_6209.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.27\textwidth]{scenarios/nexus_0067.pdf}
\caption{Trajectories predicted by AMENet (AME), AOENet (AOE), MENet (ME), ACVAE (CVAE) and the corresponding ground truth (GT) trajectories in different scenes. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:abl_qualitative_results}
\end{figure}
\subsection{Results for Multi-Path Prediction}
\label{sec:multipath-selection}
In this sub-section, we will discuss the performance of multi-path prediction with the latent space.
Instead of generating a single prediction, AMENet generates multiple feasible trajectories by sampling the latent variable $z$ multiple times (see Fig.~\ref{fig:framework}). During training, the motion information and interaction information of the observation and the ground truth are encoded into the so-called Z-Space. The KL-divergence loss forces $z$ into a Gaussian distribution. Figure~\ref{fig:z_space} shows the visualization of the Z-Space in two dimensions, with $\mu$ visualized on the left and $\log\sigma$ visualized on the right.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=.6\textwidth]{fig/z_space.pdf}
\caption{Z-Space of two dimensions with $\mu$ visualized on the left and $\log\sigma$ visualized on the right. It is trained to follow a $\mathcal{N}(0, 1)$ distribution. The variance is visualized in logarithmic space and is very close to zero.}
\label{fig:z_space}
\end{figure}
Table~\ref{tb:multipath} shows the quantitative results for multi-path trajectory prediction. Predicted trajectories are ranked by $\text{top}@10$ when the corresponding ground truth is available as prior knowledge, and by the most-likely ranking otherwise. Compared with the most-likely prediction, the $\text{top}@10$ prediction yields similar but slightly better performance. This indicates that generating multiple trajectories, e.\,g.,~ten trajectories, increases the chance to narrow down the errors between the prediction and the ground truth. Meanwhile, the ranking mechanism (see Sec.~\ref{subsec:ranking}) guarantees the quality of the selected one.
Figure~\ref{fig:multi-path} demonstrates the effectiveness of multi-path trajectory prediction. We can see that in roundabouts the interactions between different agents are full of uncertainty and each agent has several possibilities for its future path. Even though our method is able to predict the trajectory correctly, the predicted trajectories diverge more widely at later time steps. This also shows that the ability to predict multiple plausible trajectories is important in this task, because the uncertainty of future movements increases with longer prediction horizons.
\begin{table}[hbpt!]
\centering
\small
\caption{Evaluation of multi-path trajectory prediction using AMENet. Predicted trajectories are ranked by $\text{top}@10$ and most-likely, and errors are measured by MAD/FDE/\#collisions.}
\begin{tabular}{lll}
\\ \hline
Dataset & Top@10 & Most-likely \\ \hline
bookstore3 & 0.477/0.961/0 & 0.486/0.979/0 \\
coupa3 & 0.221/0.432/0 & 0.226/0.442/6 \\
deathCircle0 & 0.650/1.280/6 & 0.659/1.297/2 \\
gates1 & 0.784/1.663/2 & 0.797/1.692/0 \\
hyang6 & 0.534/1.076/0 & 0.542/1.094/0 \\
nexus0 & 0.642/1.073/0 & 0.559/1.109/0 \\
Average & 0.535/1.081/1.3 & 0.545/1.102/1.3 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.514\textwidth]{multi_preds/deathCircle_0240.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.476\textwidth]{multi_preds/gates_1001.pdf}
\caption{Multi-path predictions from AMENet.}
\label{fig:multi-path}
\end{figure}
\pagebreak
\section{Studies on Longer-term Trajectory Prediction}
\label{sec:longterm}
In this section, we investigate the model's performance on predicting longer term trajectories in real-world mixed traffic situations in different intersections.
Since the Trajnet benchmark (see Sec.~\ref{sec:benchmark}) only provides trajectories of 8 time steps for observation and 12 time steps for prediction, we instead use the newly published large-scale open-source dataset InD\footnote{\url{https://www.ind-dataset.com/}} for this task. InD was collected using drones at four different intersections in Germany for mixed traffic in 2019 by Bock et al. \cite{inDdataset}. In total, there are 33 datasets, with 7, 11, 12 and 3 recorded at the respective intersections. We follow the same processing format as the Trajnet benchmark and downsample the time steps of InD from the video frame rate of \SI{25}{fps} to \SI{2.5}{fps}, i.\,e.,~0.4 seconds per time step. We obtain the same sequence length (8 time steps) for observation and up to 32 time steps for prediction. One third of the datasets from each intersection are selected for testing the performance of AMENet on longer term trajectory prediction; namely, the last 2, 4, 5, and 1 datasets from each intersection. The other datasets are used for training models for predicting trajectories with sequence lengths gradually increased from 12 and 16 up to 32 time steps.
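The downsampling step can be sketched as follows; the raw track below is a hypothetical placeholder.
\begin{verbatim}
import numpy as np

fps_raw, fps_target = 25, 2.5
stride = int(fps_raw / fps_target)      # = 10, i.e., 0.4 s per kept frame
track_25fps = np.random.rand(250, 2)    # hypothetical raw positions at 25 fps
track_2_5fps = track_25fps[::stride]    # one position every 0.4 seconds
\end{verbatim}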
Figure~\ref{fig:AMENet_MAD} shows the trend of the errors measured by MAD, FDE and number of collisions in relation to the number of time steps. The performance of AMENet at time step 12 is comparable with its performance on the Trajnet datasets for both the $\text{top}@10$ and most-likely prediction. On the other hand, the errors measured by MAD and FDE increase with the number of time steps: behaviors of road users become less predictable, and predicting longer term trajectories from a short observation is more challenging than predicting short term ones. One interesting observation concerns the number of collisions, i.\,e.,~invalid predicted trajectories. Overall, the number of collisions is relatively small and increases with the number of time steps for the $\text{top}@10$ prediction. However, the most-likely prediction leads to fewer collisions and shows no consistent ascending trend with respect to the time steps. One possible explanation is that the $\text{top}@10$ prediction is selected by comparison with the corresponding ground truth, without any consideration of collisions. The most-likely prediction, on the other hand, selects the prediction with the highest joint density under the bivariate Gaussian distributions fitted over the multiple predictions of each trajectory (see Sec.~\ref{subsec:ranking}). This majority-voting-like mechanism yields better results regarding safety than merely selecting the best prediction based on the distance to the ground truth.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_MAD.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_FDE.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_collision.pdf}
\caption{AMENet tested on InD for different predicted sequence lengths measured by MAD, FDE and number of collisions, respectively.}
\label{fig:AMENet_MAD}
\end{figure}
Figure~\ref{fig:qualitativeresults} shows the qualitative performance of AMENet for longer term trajectories in the big intersection with weakened traffic regulations in InD. From Scenario-A in the left column we can see that AMENet generates very accurate predictions for 12 and 16 time steps (visualized in the first two rows) for the two pedestrians. However, when they encounter each other at 20 time steps (third row), the model predicts a near-collision situation. With a further increase of the time steps, the prediction becomes less accurate regarding travel speed and heading direction. Scenario-B in the right column shows similar performance. The model has limited performance for fast-moving agents, i.\,e.,~the vehicle in the middle of the street.
\begin{figure} [bpht!]
\centering
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_12.pdf}
\label{subfig:s-a-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_12.pdf}
\label{subfig:s-b-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_16.pdf}
\label{subfig:s-a-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_16.pdf}
\label{subfig:s-b-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_20.pdf}
\label{subfig:s-a-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_20.pdf}
\label{subfig:s-b-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_24.pdf}
\label{subfig:s-a-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_24.pdf}
\label{subfig:s-b-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_28.pdf}
\label{subfig:s-a-28}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_28.pdf}
\label{subfig:s-b-28}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/29_Trajectories052_32.pdf}
\caption{\small{Scenario-A 12 to 32 steps}}
\label{subfig:s-a-32}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/27_Trajectories046_32.pdf}
\caption{\small{Scenario-B 12 to 32 steps}}
\label{subfig:s-b-32}
\end{subfigure}
\caption{\small{Examples for predicting different sequence lengths in Scenario-A (left column) and Scenario-B (right column). From top to bottom rows the prediction lengths are 12, 16, 20, 24, 28 and 32 time steps. The observation sequence lengths are 8 time steps.}}
\label{fig:qualitativeresults}
\end{figure}
To summarize, longer term trajectory prediction based on a short observation is extremely challenging. The behaviors of different road users become more unpredictable as time goes on. In future work, in order to push the prediction horizon beyond 12 steps or 4.8 seconds, extra information may be required to update the observation and improve the performance of the model.
\section{Conclusions}
In this paper, we present a generative model called Attentive Maps Encoder Networks (AMENet) that uses motion information and interaction information for multi-path trajectory prediction of mixed traffic in various real-world environments.
The latent space learnt by the X-Encoder and Y-Encoder for both sources of information enables the model to capture the stochastic properties of motion behaviors for predicting multiple plausible trajectories after a short observation time.
We propose an innovative way---dynamic maps---to learn the social effects between agents during interaction. The dynamic maps capture accurate interaction information by encoding the neighborhood agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of interaction over different time steps.
The efficacy of the model has been validated on the most challenging benchmark Trajnet that contains various datasets. Our model not only achieves the state-of-the-art performance, but also wins the first place on the leaderboard for predicting 12 time-step positions of 4.8 seconds.
Each component of AMENet is validated via a series of ablative studies.
We also run the model on another newly published open-source dataset of mixed traffic at different intersections to investigate its performance for longer term (up to 32 time-step positions of 12.8 seconds) trajectory prediction.
\section*{Appendix A}
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 2cm 0cm 0cm, width=0.6\textwidth]{fig/dynamic_maps_bookstore_3.pdf}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=0.6\textwidth]{fig/dynamic_maps_coupa_3.pdf}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=0.6\textwidth]{fig/dynamic_maps_deathCircle_0.pdf}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=0.6\textwidth]{fig/dynamic_maps_gates_1.pdf}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=0.6\textwidth]{fig/dynamic_maps_hyang_6.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.6\textwidth]{fig/dynamic_maps_nexus_0.pdf}
\caption{Maps information of each dataset. From top to the bottom rows are the visualization of the dynamic maps for \textit{bookstore-3}, \textit{coupa-3}, \textit{deathCircle-0}, \textit{gates-1}, \textit{hyang-6} and \textit{nexus-0} with accumulated time steps.}
\label{fig:dynamic_maps}
\end{figure}
To show the dynamic maps information more intuitively, we gather all the agents over all the time steps and visualize them in Figure~\ref{fig:dynamic_maps} for the different sub-sets extracted from the benchmark datasets (see Sec.~\ref{sec:benchmark} for more information on the benchmark datasets).
The visualization demonstrates certain motion patterns of the agents from different scenes, including the distribution of orientation, speed and positions over the grids of the maps. For example, all the agents move in a certain direction with similar speed on a particular area of the maps. Some areas are much denser than the others.
\pagebreak
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction of road users in the near future is a crucial task in intelligent transportation systems (ITS) \cite{goldhammer2013early,hashimoto2015probabilistic,koehler2013stationary}, autonomous driving \cite{franke1998autonomous,ferguson2008detection,luo2018porca}, mobile robot applications \cite{ziebart2009planning,du2011robot,mohanan2018survey}, \textit{etc.} This task enables an intelligent system to foresee the behavior of road users and make reasonable and safe decisions for its next operation, especially in urban mixed-traffic zones (a.k.a. shared spaces \cite{reid2009dft}).
The trajectory of a non-erratic agent is often referred to as a sequence of plausible (e.\,g.,~collision-free and energy-efficient) and socially-acceptable (e.\,g.,~considering social relations, rules and norms between agents) positions in 2D or 3D, aligned on a time axis.
Trajectory prediction is generally defined as to predict the plausible and socially-acceptable positions of target agents at each time step within a predefined future time interval relying on observed partial trajectories over a certain period of time \cite{alahi2016social,lee2017desire,gupta2018social,sadeghian2018sophie,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,al2018move,zhang2019sr,cheng2020mcenet,johora2020agent,giuliari2020transformer}.
The target agent is defined as the dynamic object for which the actual prediction is made, mainly pedestrians, vehicles, cyclists and other road users \cite{rudenko2019human}.
The prediction task can be generalized as short-term or long-term trajectory prediction depending on the prediction time horizons. In this study, under 5 seconds is categorized as short term, otherwise as long term. A typical prediction process of mixed traffic is exemplified in Fig.~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.8in 2.6in 0.6in, width=1\textwidth]{fig/first_fig.pdf}
\caption{Predicting plausible and socially-acceptable positions of agents (e.\,g.,~target agent in black) at each time step within a predefined future time interval by observing their past trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
However, how to effectively and accurately predict trajectories of mixed agents remains an unsolved problem in many research communities. The challenges come mainly from three aspects: 1) the complex behavior and uncertain moving intent of each agent, 2) the presence of and the interactions between the target agent and its neighboring agents, and 3) the multi-modality of paths: there is usually more than one socially-acceptable path that an agent could take in the future. In a crowded scene, the moving direction and speed of different agents change dynamically because of their freewheeling intents and the interactions with neighboring agents.
There exists a large body of literature that focuses on addressing part or all of the aforementioned challenges in order to make accurate predictions of future trajectories.
Traditional methods model the interactions based on hand-crafted features, such as force-based rules and constant velocity assumptions \cite{helbing1995social,best1997new,yi2015understanding,zhou2012understanding,antonini2006discrete,tay2008modelling,yamaguchi2011you}. Their performance is crucially affected by the quality of the manually designed features, and they lack generalizability \cite{cheng2020trajectory}.
Recently, boosted by the development of deep learning technologies \cite{lecun2015deep}, data-driven methods
keep reporting new state-of-the-art performance on benchmarks \cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,cheng2020mcenet}.
For instance, Social LSTM \cite{alahi2016social} models the interactions between the target pedestrian and its close neighbors and predicts the future positions in sequence. Many later works follow this pioneering work that treats the trajectory prediction problem as a sequence prediction problem based on Recurrent Neural Networks (RNNs) \cite{wu2017modeling,xue2018ss,bartoli2018context,fernando2018soft}.
However, these works design a discriminative model and produce a deterministic outcome for each agent. Hence, such models tend to predict ``average'' trajectories, because the commonly used objective function minimizes the Euclidean distance between the ground truth and the predicted outputs.
To predict multiple socially-acceptable trajectories for each target agent, different generative models have been proposed. Social GAN \cite{gupta2018social} designs a framework based on Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} that trains an RNN encoder-decoder as generator and an RNN-based encoder as discriminator. DESIRE \cite{lee2017desire} proposes to use a conditional variational auto-encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning} to learn the latent space of future trajectories and predicts multiple possible trajectories by repeatedly sampling from the learned latent space.
Previous methods have achieved great success in this domain. However, most of them are designed for one specific type of agent: pedestrians. In reality, vehicles, pedestrians and cyclists are the three main types of agents and their behaviors affect each other. To make precise trajectory predictions, their interactions should be considered jointly. Second, the interactions between the target agent and the others are treated equally. But different agents may not affect the target agent's future movement equally. For instance, nearer agents should affect the target agent more strongly than more distant ones, and a target vehicle is affected more by pedestrians who tend to cross the road than by vehicles nearly behind it. Last but not least, the robustness of the models is not fully tested in real-world outdoor mixed-traffic environments (e.\,g.,~roundabouts, intersections) with various unseen traffic situations. For example, can a model trained on some spaces predict accurate trajectories in other, unseen spaces?
To address the aforementioned limitations, we propose this work, named \emph{Attentive Maps Encoder Network} (AMENet), which leverages the ability of generative models to generate diverse patterns of future trajectories and models the interactions between the target agent and the others as attentive dynamic maps.
The dynamic maps aggregate the information extracted from the neighboring agents' orientation, speed and position in relation to the target agent at each time step for interaction modeling, and the attention mechanism enables the model to automatically focus on the salient features extracted over different time steps.
An overview of our proposed framework is depicted in Fig.~\ref{fig:framework}. (1) Two encoders of identical structure are designed for learning representations of the observed trajectories (X-Encoder) and the future trajectories (Y-Encoder), respectively. Taking X-Encoder as an example (see Fig.~\ref{fig:encoder}), the encoder first extracts the motion information of the target agent (coordinate offsets in sequential time steps) and the interaction information with the other agents, respectively. In particular, to explore the dynamic interactions, the motion information of each agent is characterized by its orientation, speed and position at each time step. Then a self-attention mechanism is applied over all agents to extract the dynamic interaction maps. This is where the name \emph{Attentive Maps Encoder} comes from. The motion and interaction information along the observed time interval are collected by two independent Long Short-Term Memories (LSTMs) and then fused together. (2) The output of the Y-Encoder is supplied to a variational auto-encoder to learn the latent space of the future trajectory distribution, which is assumed to be Gaussian. (3) The output of the variational auto-encoder module (obtained by re-parameterization of the encoded features during the training phase and by re-sampling from the learned latent space during the inference phase) is fed to the following decoder, together with the output of X-Encoder as condition, to forecast the future trajectory, in the same way as a conditional variational auto-encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning}.
The main contributions are summarised as follows:
\begin{itemize}
\item[1] We propose a generative framework based on a special CVAE module which is trained to encode both the motion and interaction information for predicting multiple plausible future trajectories of target agents.
Our CVAE module differs from \cite{kingma2013auto,kingma2014semi,sohn2015learning} by extending the dynamic maps into the Y-Encoder for extracting the interaction information at future time steps during the training phase, whereas the conventional CVAE module relies on an auto-encoder structure with a consistent input and output structure (e.\,g.,~the input and output are both trajectories in the same structure \cite{lee2017desire}).
\item[2] We design an innovative module, \emph{attentive maps encoder} that learns spatio-temporal interconnections among agents based on dynamic maps using a self-attention mechanism.
Unlike RNN-based structures that propagate the information along the symbol positions of the input and output sequences, which leads to increasing difficulties for information propagation in long sequences, the self-attention mechanism relates different positions of a single sequence in order to compute a representation of the entire sequence. It models the dependencies without regard to the distance between two positions \cite{vaswani2017attention}, and global interactions are considered rather than only local ones \cite{wang2018non}.
\item[3] Our model is able to predict trajectories for heterogeneous road users, i.\,e.,~pedestrians, vehicles and cyclists, rather than focusing only on pedestrians, in various unseen real-world environments.
\end{itemize}
The efficacy of the proposed method has been validated on the most challenging benchmark \emph{Trajnet} \cite{sadeghiankosaraju2018trajnet}, which contains various datasets in various environments for short-term trajectory prediction. Our method achieves new state-of-the-art performance and ranks first on the leaderboard.
Its performance for predicting long-term trajectories (up to 32 time steps, i.\,e.,~12.8 seconds) is also investigated on the benchmark inD \cite{inDdataset}, which contains mixed traffic at different intersections.
Each component of the proposed model is validated via a series of ablative models. The code is released.
\section{Related Work}
Our work focuses on predicting trajectories of mixed road agents.
In this section we discuss the recent related work mainly in the following aspects: modeling this task as sequence prediction, modeling the interactions between agents for precise path prediction, modeling with attention mechanisms, and utilizing generative models to predict multiple plausible trajectories. Our work concentrates on modeling the dynamic interactions between agents and training a generative model to predict multiple plausible trajectories for each target agent.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling the trajectory prediction as a sequence prediction task is the most popular approach. The 2D/3D position of a target agent is predicted step by step along the time axis.
The widely applied models include, but are not limited to, linear regression and the Kalman filter \cite{harvey1990forecasting}, Gaussian processes \cite{tay2008modelling} and Markov decision processes \cite{kitani2012activity}.
However, these traditional methods largely rely on the quality of manually designed features and are unable to tackle large-scale data.
Recently, data-driven deep learning technologies, especially RNN-based models and their variants, e.\,g.,~Long Short-Term Memories (LSTMs) \cite{hochreiter1997long} and Gated Recurrent Units (GRUs) \cite{cho2014learning}, have demonstrated a powerful ability to automatically extract representations from massive data and are used to learn the complex patterns of trajectories.
In recent years, RNN-based models keep pushing the edge of accuracy of predicting pedestrian trajectory \cite{alahi2016social,xu2018encoding,bhattacharyya2018long,gupta2018social,sadeghian2018sophie,zhang2019sr,liang2019peeking}, as well as other types of road users \cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
Other deep learning technologies, such as Convolutional Neural Networks (CNN) and Graph-based neural networks are also used for trajectory prediction and report good performances \cite{bai2018empirical,chandra2019forecasting,mohamed2020social,gao2020vectornet}.
In this work, we also utilize LSTM to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of an agent is not only determined by its own will but also crucially affected by its interactions with the other agents. Therefore, effectively modeling the social interactions among agents is important for accurate trajectory prediction.
One of the most influential approaches for modeling interaction is Social Force Model \cite{helbing1995social}, which models the repulsive force for collision avoidance and the attractive force for social connections. Game Theory is utilized to simulate the negotiation between different road users \cite{schonauer2012modeling}.
Such rule-based interaction modelings have been incorporated into deep learning models. Social LSTM proposes an occupancy grid to locate the positions of close neighboring agents and uses a social pooling layer to encode the interaction information for trajectory prediction \cite{alahi2016social}. Many following works design their specific ``occupancy'' grid for interaction modeling \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interaction between individual agent and group agents with social connections and report better performance.
Meanwhile, different pooling mechanisms are proposed for interaction modeling. For example, Social GAN \cite{gupta2018social} embeds relative positions between the target and all the other agents with each agent's motion hidden state and uses an element-wise pooling to extract the interaction between all the pairs of agents, not only the close neighboring agents;
Similarly, all the agents are considered in SR-LSTM \cite{zhang2019sr}. It proposes a states refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework. The motion gate and agent-wise attention are used to select the most important information from neighboring agents.
Most of the aforementioned models extract interaction information based on the relative position of the neighboring agents in relation to the target agent.
Hence, the dynamics of the interactions are not fully captured in both the spatial and temporal domains.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Recently, different attention mechanisms \cite{bahdanau2014neural,xu2015show,luong2015effective,vaswani2017attention,wang2018non} are incorporated in neural networks for learning complex spatio-temporal interconnections.
Particularly, their effectiveness has been proven in learning powerful representations from sequential information in tasks such as neural machine translation \cite{bahdanau2014neural,luong2015effective,vaswani2017attention} and image caption generation \cite{xu2015show}, and they have been widely utilized in other domains \cite{anderson2018bottom,giuliari2020transformer,he2020image}.
Some of the recent state-of-the-art methods have also adopted attention mechanisms for sequence modeling and interaction modeling to predict trajectories.
For example, a soft attention mechanism \cite{xu2015show} is incorporated in LSTM to learn the spatio-temporal patterns from the position coordinates \cite{varshneya2017human}. Similarly, SoPhie \cite{sadeghian2018sophie} applies two separate soft attention modules, one called physical attention for learning the salient features between agent and scene and the other called social attention for modeling agent to agent interactions. In the MAP model \cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work Ind-TF \cite{giuliari2020transformer} replaces the RNN with a Transformer \cite{vaswani2017attention} for modeling trajectory sequences.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism \cite{vaswani2017attention} along the time axis.
The self-attention mechanism is defined as mapping a query and a set of key-value pairs to an output. First, the similarity between the query and each key is computed to obtain a weight. The weights associated with all the keys are then normalized, e.\,g.,~via a softmax function, and applied to weigh the corresponding values for obtaining the final attention. Thanks to this mapping mechanism, the dependency between the input and output is not restricted to their positional distance.
\subsection{Generative Models}
\label{sec:rel-generative}
To date, VAE \cite{kingma2013auto} and GAN \cite{goodfellow2014generative}, as well as their variants (e.\,g.,~conditional VAE \cite{kingma2014semi,sohn2015learning} and conditional GAN \cite{mirza2014conditional}), are the most popular generative models in the era of deep learning.
They are both able to generate diverse outputs by sampling from noise. The essential difference is that, GAN trains a generator to generate a sample from noise and a discriminator to decide whether the generated sample is real enough. The generator and discriminator enhance mutually during training.
In contrast, VAE is trained by maximizing the lower bound of training data likelihood for learning a latent space that approximates the distribution of the training data.
Generative models have shown promising performance in different tasks, e.\,g.,~super resolution, image-to-image translation and image generation, as well as trajectory prediction \cite{lee2017desire,gupta2018social,cheng2020mcenet}.
Predicting one single trajectory may not be sufficient due to the uncertainties of road users' behavior.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performance of the two modules are enhanced mutually and the generator is able to generate trajectories that are as precise as the real ones. Similarly, Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
Lee~et al.~\cite{lee2017desire} propose a CVAE model to predict multiple plausible trajectories.
Cheng~et al.~\cite{cheng2020mcenet} propose a CVAE like model named MCENet to predict multiple plausible trajectories conditioned on the scene context and previous information of trajectories.
In this work, we incorporate a CVAE module to learn a latent space of possible future paths for predicting multiple plausible future trajectories conditioned on the observed past trajectories.
Our work essentially distinguishes itself from the above generative models in the following points: (1) We insert not only the ground truth trajectory, but also the dynamic maps associated with the ground truth trajectory into the CVAE module during training, which is different from the conventional CVAE that follows a consistent input and output structure (e.\,g.,~the input and output are both trajectories in the same structure \cite{lee2017desire}).
(2) Our method does not exploit information from images, i.\,e.,~visual information is not used and future trajectories are predicted based only on the position coordinates.
Therefore, it is computationally more efficient than the methods that need information from images.
In addition, our model is trained on some available spaces but validated on other unseen spaces. The visual information, such as vegetation, curbside and buildings, is very different from one space to another. Overfitted visual features may jeopardize the model's robustness and lead to poor performance in a new space with a totally different environment \cite{cheng2020mcenet}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 3.5in 0in 0.5in, width=1\textwidth]{fig/model_framework3.pdf}
\caption{An overview of the proposed framework. It consists of four modules: the X-Encoder and Y-Encoder are used for encoding the observed and the future trajectories, respectively, and have the same structure. The Sample Generator produces diverse samples of future trajectories. The Decoder module decodes the features from the produced samples in the last step and predicts the future trajectory sequentially. The specific structure of the X-Encoder/Y-Encoder is given in Fig.~\ref{fig:encoder}.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet in details (Fig.~\ref{fig:framework}) in the following structure: a brief review on \emph{CVAE} (Sec.~\ref{subsec:cvae}), \emph{Problem Definition} (Sec.~\ref{subsec:definition}), \emph{Motion Input} (Sec.~\ref{subsec:input}), \emph{Dynamic Maps} (Sec.~\ref{subsec:dynamic}), \emph{Diverse Sampling} (Sec.~\ref{subsec:sample}) and \emph{Trajectory Ranking} (Sec.~\ref{subsec:ranking}).
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
In tasks like trajectory prediction, we are interested in modeling a conditional distribution $P(Y_n|X)$, where $X$ is the previous trajectory information and $Y_n$ is one of the possible future trajectories.
In order to realise this goal that generates controllable diverse samples of future trajectories based on past trajectories, a deep generative model, a conditional variational auto-encoder (CVAE), is adopted inside our framework.
CVAE is an extension of the generative model VAE \cite{kingma2013auto} by introducing a condition to control the output \cite{kingma2014semi}.
Concretely, it is able to learn the stochastic latent variable $z$ that characterizes the distribution $P(Y_i|X_i)$ of $Y_i$ conditioned on the input $X_i$, where $i$ is the index of sample.
The objective function of training CVAE is formally defined as:
\begin{equation}
\label{eq:CVAE}
\log{P(Y_i|X_i)} \geq - D_{KL}(Q(z_i|Y_i, X_i)||P(z_i)) + \E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}].
\end{equation}
where $Y_i$ and $X_i$ stand for the future and the past trajectory of sample $i$ in our task, respectively, and $z_i$ for the latent variable. The objective is to maximize the conditional log-likelihood $\log{P(Y_i|X_i)}$ by maximizing its lower bound, which amounts to minimizing the reconstruction error $\ell (\hat{Y_i}, Y_i)$ and the Kullback-Leibler divergence $D_{KL}(\cdot)$ in parallel; the bound holds since the Kullback-Leibler divergence is always non-negative.
In order to enable back-propagation for stochastic gradient descent through $\E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}]$, the re-parameterization trick \cite{rezende2014stochastic} is applied to $z_i$: $z_i = \mu_i + \sigma_i \odot \epsilon_i$. Here, $z_i$ is assumed to follow a Gaussian distribution $z_i\sim Q(z_i|Y_i, X_i)=\mathcal{N}(\mu_i, \sigma_i)$, $\epsilon_i$ is sampled from a standard Gaussian distribution, and the mean $\mu_i$ and the standard deviation $\sigma_i$ of $z_i$ are produced by two side-by-side \textit{fc} layers, respectively (as shown in Fig.~\ref{fig:framework}). In this way, the non-differentiable sampling process $Q(z_i|Y_i, X_i)$ is turned into differentiating the sampled result $z_i$ w.\,r.\,t.~$\mu_i$ and $\sigma_i$, so that back-propagation with stochastic gradient descent can be used to optimize the networks that produce $\mu_i$ and $\sigma_i$.
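For illustration, a minimal sketch of the re-parameterization trick and the resulting training objective of Eq.~\eqref{eq:CVAE} is given below (PyTorch-style Python; the function names and the mean reduction over the batch are our own assumptions, not the exact implementation):
\begin{verbatim}
import torch

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); differentiable w.r.t.
    # mu and log_var, so gradients can flow through the sampling step
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def cvae_loss(y_hat, y, mu, log_var):
    # Reconstruction term: distance between prediction and ground truth
    rec = torch.mean((y_hat - y) ** 2)
    # KL(N(mu, sigma^2) || N(0, I)) in closed form (diagonal Gaussian)
    kl = -0.5 * torch.mean(1 + log_var - mu ** 2 - torch.exp(log_var))
    return rec + kl
\end{verbatim}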
\subsection{Problem Definition}
\label{subsec:definition}
The multi-path trajectory prediction problem is defined as follows: for an agent $i$, receive as input its observed trajectory $\mathbf{X}_i=\{X_i^1,\cdots,X_i^T\}$ and predict its $n$-th plausible future trajectory $\hat{\mathbf{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,n}^{T'}\}$. $T$ and $T'$ denote the sequence lengths of the observed and the predicted trajectory, respectively. The position of agent $i$ at time step $t$ is characterized by the coordinates $X_i^t=({x_i}^t, {y_i}^t)$ (3D coordinates are also possible, but in this work only 2D coordinates are considered), and analogously $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^{t'}, \hat{y}_{i,n}^{t'})$.
For simplicity, we omit the notation of time steps if it is explicit in the following parts of the paper.
The objective is to predict multiple plausible future trajectories $\hat{\mathbf{Y}}_i = \{\hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}\}$ that are as close as possible to the ground truth $\mathbf{Y}_i$. This task is formally defined as $\hat{\mathbf{Y}}_{i,n} = f(\mathbf{X}_i, \text{Map}_i), ~n \in \{1,\dots,N\}$, where $N$ denotes the total number of predicted trajectories and $\text{Map}_i$ denotes the dynamic maps centralized on the target agent for mapping the interactions with its neighboring agents over the time steps. More details of the dynamic maps are given in Sec.~\ref{subsec:dynamic}.
\subsection{Motion Input}
\label{subsec:input}
Specifically, we use the offsets $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of the trajectory positions between two consecutive time steps as the motion information instead of the coordinates in a Cartesian space, which has been widely applied in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}. Compared with coordinates, the offsets are independent of the given space and less prone to overfitting a model to a particular space or travel direction.
The offset can be interpreted as speed over time steps that are defined with a constant duration.
As long as the original position is known, the absolute coordinates at each position can be calculated by cumulatively summing the sequence offsets.
We also apply an augmentation technique that rotates the trajectories to prevent the model from only learning certain travel directions. In order to preserve the relative positions and angles between agents, the trajectories of all the agents that coexist in a given period are rotated by the same angle.
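A minimal sketch of the offset representation and the rotation augmentation described above (plain NumPy; the function names are our own):
\begin{verbatim}
import numpy as np

def to_offsets(traj):
    # traj: (T, 2) array of (x, y) positions -> (T-1, 2) per-step offsets
    return np.diff(traj, axis=0)

def to_positions(offsets, origin):
    # Recover absolute coordinates by cumulatively summing the offsets
    return origin + np.cumsum(offsets, axis=0)

def rotate_scene(trajs, angle):
    # Rotate all coexisting trajectories by the same angle (radians),
    # so relative positions and angles between agents are preserved
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return [traj @ R.T for traj in trajs]
\end{verbatim}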
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 2.2in 3.6in 0.5in, width=1\textwidth]{fig/encoder.pdf}
\caption{Structure of the X-Encoder. The encoder has two branches: the upper one is used to extract the motion information of the target agent (i.\,e.,~movement in the $x$- and $y$-axis of a Cartesian space), and the lower one is used to learn the interaction information among the neighboring road users from dynamic maps over time. Each dynamic map consists of three layers that represent orientation, travel speed and relative position, respectively, centralized on the target road user. The motion information and the interaction information are encoded by their own LSTMs sequentially. The last outputs of the two LSTMs are concatenated and forwarded to a \textit{fc} layer to get the final output of the X-Encoder. The Y-Encoder has the same structure as the X-Encoder, but it extracts features from the future trajectories and is only used in the training phase.}
\label{fig:encoder}
\end{figure}
\subsection{Dynamic Maps}
\label{subsec:dynamic}
Different from the recent works that parse the interactions between the target and neighboring agents using an occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, we propose an innovative and straightforward method---attentive dynamic maps---to learn the interaction information among agents.
As demonstrated in Fig.~\ref{fig:encoder}, a dynamic map at a given time step consists of three layers that interpret the information of \emph{orientation}, \emph{speed} and \emph{position}, respectively, which is derived from the trajectories. Each layer is centralized on the target agent's position and divided into uniform grid cells. The layers are divided into grids because: (1) compared with representing the information at the pixel level, a grid-level representation is computationally more efficient; (2) the size and moving speed of an agent are not fixed and it occupies a local region of pixels of arbitrary form, so the spatio-temporal information of individual pixels differs even when they belong to the same agent. Therefore, we represent the spatio-temporal information as an average value within a grid cell. We calculate the value of each grid cell on the different layers as follows:
the neighboring agents are mapped into the corresponding grid cells by their position relative to the target agent, as well as by their relative offset (speed) with respect to the target agent at each time step in the $x$- and $y$-axis directions.
Eq.~\eqref{eq:map} denotes the mapping mechanism for target user $i$ considering the orientation $O$, speed $S$ and position $P$ of all the neighboring agents $j \in \mathcal{N}(i)$ that coexist with the target agent $i$ at each time step.
\begin{equation}
\label{eq:map}
\text{Map}_i^t = \sum_{j \in \mathcal{N}(i)}(O, S, P) | (x_j^t-x_i^t, ~y_j^t-y_i^t, ~\Delta{x}_j^t-\Delta{x}_i^t, ~\Delta{y}_j^t-\Delta{y}_i^t).
\end{equation}
The \emph{orientation} layer $O$ represents the heading direction of the neighboring agents. The orientation value is given in \emph{degrees} within $[0, 360]$ and is then normalized into $[0, 1]$. The value of each grid cell is the mean orientation of all the agents existing within the cell.
The \emph{speed} layer $S$ represents all the neighboring agents' travel speed. Locally, the speed in each grid is the average speed of all the agents within a grid. Globally, across all the grids, the value of speed is normalized by the Min-Max normalization scheme into $[0, 1]$.
The \emph{position} layer $P$ stores the positions of all the neighboring agents in the grids calculated by Eq.~\eqref{eq:map}. The value of the corresponding grid is the number of individual neighboring road users existing in the grid normalized by the total number of all of the neighboring road users at that time step, which can be interpreted as the grid's occupancy density.
Each time step has a dynamic map, and therefore the spatio-temporal interaction information among the agents is captured dynamically over time.
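The rasterization of one dynamic map at a single time step can be sketched as follows (NumPy; the default cell size and radius, the simplified Min-Max scaling over one map, and the use of absolute rather than relative offsets for the speed layer are our own simplifications):
\begin{verbatim}
import numpy as np

def dynamic_map(target_xy, neighbors, cell=1.0, radius=16.0):
    # target_xy: (x, y) of the target agent; neighbors: list of
    # (x, y, dx, dy) tuples at one time step; returns a (3, H, W)
    # map with the layers orientation (O), speed (S) and position (P)
    n = int(2 * radius / cell)
    maps = np.zeros((3, n, n))
    counts = np.zeros((n, n))
    for (x, y, dx, dy) in neighbors:
        # grid cell of the neighbor relative to the target agent
        col = int((x - target_xy[0] + radius) / cell)
        row = int((y - target_xy[1] + radius) / cell)
        if not (0 <= row < n and 0 <= col < n):
            continue
        heading = np.degrees(np.arctan2(dy, dx)) % 360.0
        maps[0, row, col] += heading / 360.0     # orientation in [0, 1]
        maps[1, row, col] += np.hypot(dx, dy)    # travel speed
        maps[2, row, col] += 1.0                 # occupancy count
        counts[row, col] += 1.0
    nz = counts > 0
    maps[0][nz] /= counts[nz]                    # mean orientation per cell
    maps[1][nz] /= counts[nz]                    # mean speed per cell
    if maps[1].max() > 0:
        maps[1] /= maps[1].max()                 # simplified Min-Max scaling
    if neighbors:
        maps[2] /= len(neighbors)                # occupancy density
    return maps
\end{verbatim}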
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.7\textwidth]{fig/dynamic_maps_nexus_0.pdf}
\caption{The maps information with accumulated time steps for the dataset \textit{nexus-0}.}
\label{fig:dynamic_maps}
\end{figure}
To more intuitively show the dynamic maps information, we gather all the agents over all the time steps and visualize them in Fig.~\ref{fig:dynamic_maps} as an example showcased by the dataset \textit{nexus-0} (see more information on the benchmark datasets in Sec~\ref{subsec:benchmark}).
Each rectangular grid cell is \SI{1}{meter} in both width and height, and the region of interest extends up to \SI{16}{meters} in each direction centralized on the target agent, in order to include not only close but also distant neighboring agents.
The visualization demonstrates certain motion patterns of the agents, including the distribution of orientation, speed and positions over the grids of the map. For example, all the agents move in a certain direction with similar speed on a particular area of the map, and some areas are much denser than the others.
\subsubsection{Attentive Maps Encoder}
\label{subsubsec:AMENet}
As discussed above, each time step has a dynamic map which summarizes the orientation, speed and position information of all the neighboring agents. To capture the spatio-temporal interconnections from the dynamic maps for the following modules, we propose the \emph{Attentive Maps Encoder} module.
The X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and dynamic maps information for interaction (lower branch).
The upper branch takes as input the motion information, i.\,e.,~the offsets $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of each target agent over the observed time steps $t=1,\dots,T$. The motion information is first passed to a 1D convolutional layer (Conv1D) with one-step stride along the time axis to learn motion features one time step after another. Then it is passed to a fully connected (\textit{fc}) layer. The output of the \textit{fc} layer is passed to an LSTM module for encoding the temporal features along the trajectory sequence of the target agent into a hidden state, which contains all the motion information.
The lower branch takes the sequence of dynamic maps $\text{Map}_i^t$, $t=1,\dots,T$, as input.
The interaction information at each time step is passed through a 2D convolutional layer (Conv2D) with ReLU activation and a maximum pooling layer (MaxPool) to learn the spatial features among all the agents. The output of MaxPool at each time step is flattened and concatenated along the time axis to form a time-distributed feature vector. Then, the feature vector is fed forward to a self-attention module to learn the interaction information with the attention mechanism. Here, we adopt the multi-head attention method from the Transformer, which computes multiple self-attention operations in parallel and concatenates their outputs via a linear projection~\cite{vaswani2017attention}.
The attention function is described as mapping a query and a set of key-value pairs to an output. The query ($Q$), keys ($K$) and values ($V$) are transformed from the spatial features, which are encoded in the above step, by linear transformations:
\begin{align*}
Q =& \pi(\text{Map})W_Q, ~W_Q \in \mathbb{R}^{D\times D_q}\\
K =& \pi(\text{Map})W_K, ~W_K \in \mathbb{R}^{D\times D_k}\\
V =& \pi(\text{Map})W_V, ~W_V \in \mathbb{R}^{D\times D_v}
\end{align*}
where $W_Q, W_K$ and $W_V$ are trainable parameters and $\pi(\cdot)$ denotes the encoding function of the dynamic maps. $D_q, D_k$ and $D_v$ are the dimensions of the query, key and value vectors, respectively (they are the same in our implementation).
Then the self-attended features are calculated as:
\begin{equation}
\label{eq:attention}
\text{Attention}(Q, K, V) = \text{softmax}(\frac{QK^T}{\sqrt{D_k}})V
\end{equation}
This operation is also called \emph{scaled dot-product attention}.
To improve the performance of the attention layer, \emph{multi-head attention} is applied:
\begin{align}
\label{eq:multihead}
\begin{split}
\text{MultiHead}(Q, K, V) &= \text{ConCat}(\text{head}_1,...,\text{head}_h)W_O \\
\text{head}_i &= \text{Attention}(QW_{Qi}, KW_{Ki}, VW_{Vi})
\end{split}
\end{align}
where $W_{Qi}\in \mathbb{R}^{D\times D_{qi}}$ denotes the linear transformation parameters for the query in the $i$-th self-attention head and $D_{qi} = \frac{D_{q}}{\#head}$. Note that $\#head$ must divide $D_{q}$ evenly. The same holds for $W_{Ki}$ and $W_{Vi}$. The output of the multi-head attention is obtained via a linear transformation, parameterized by $W_O$, of the concatenated outputs of all heads.
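A compact sketch reproducing Eq.~\eqref{eq:attention} and Eq.~\eqref{eq:multihead} is given below (NumPy; single-sequence shapes without batching, and biases are omitted for brevity):
\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention
    return softmax(Q @ K.swapaxes(-2, -1) / np.sqrt(K.shape[-1])) @ V

def multi_head(x, W_Q, W_K, W_V, W_O, n_heads):
    # Project, split into heads, attend in parallel, concatenate, project
    Q, K, V = x @ W_Q, x @ W_K, x @ W_V
    T, D = Q.shape                    # n_heads must divide D evenly
    split = lambda M: M.reshape(T, n_heads, D // n_heads).swapaxes(0, 1)
    heads = attention(split(Q), split(K), split(V))  # (n_heads, T, D/h)
    return heads.swapaxes(0, 1).reshape(T, D) @ W_O
\end{verbatim}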
The output of the multi-head attention is passed to an LSTM which encodes the dynamic interconnections along the time sequence.
Both the hidden states (the last output) from the motion LSTM and the interaction LSTM are concatenated and passed to a \textit{fc} layer for feature fusion, as the complete output of the X-Encoder, which is denoted as $\Phi_X$.
The Y-Encoder has the same structure as the X-Encoder and is used to encode both the target agent's motion and interaction information from the ground truth during training. The output of the Y-Encoder is denoted as $\Phi_Y$.
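To make the data flow concrete, a rough structural sketch of the X-Encoder is given below (PyTorch-style; all layer sizes, the $32\times32$ map resolution and the omission of the self-attention block are illustrative assumptions, not the exact configuration):
\begin{verbatim}
import torch
import torch.nn as nn

class XEncoderSketch(nn.Module):
    # Two branches: motion (Conv1D -> fc -> LSTM) and interaction
    # (Conv2D -> MaxPool -> flatten -> [self-attention] -> LSTM)
    def __init__(self, d=64):
        super().__init__()
        self.motion_conv = nn.Conv1d(2, d, kernel_size=3, padding=1)
        self.motion_fc = nn.Linear(d, d)
        self.motion_lstm = nn.LSTM(d, d, batch_first=True)
        self.map_conv = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(), nn.MaxPool2d(2))
        self.map_lstm = nn.LSTM(8 * 16 * 16, d, batch_first=True)
        self.fuse = nn.Linear(2 * d, d)

    def forward(self, offsets, maps):
        # offsets: (B, T, 2); maps: (B, T, 3, 32, 32)
        m = self.motion_conv(offsets.transpose(1, 2)).transpose(1, 2)
        m = torch.relu(self.motion_fc(m))
        _, (h_m, _) = self.motion_lstm(m)
        B, T = maps.shape[:2]
        f = self.map_conv(maps.flatten(0, 1)).flatten(1).view(B, T, -1)
        # the multi-head self-attention over the T steps is omitted here
        _, (h_i, _) = self.map_lstm(f)
        return self.fuse(torch.cat([h_m[-1], h_i[-1]], dim=-1))  # Phi_X
\end{verbatim}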
\subsection{Diverse Sample Generation}
\label{subsec:sample}
In the training phase, $\Phi_X$ and $\Phi_Y$ are concatenated and forwarded to two successive \textit{fc} layers followed by ReLU activation, and then passed to two parallel \textit{fc} layers to produce the mean and standard deviation of the distribution, which are used to re-parameterize $z$ as discussed in Sec.~\ref{subsec:cvae}.
Then, $\Phi_X$ is concatenated with $z$ as condition and fed to the following LSTM-based decoder to reconstruct $\mathbf{Y}$ sequentially.
The MSE loss ${\ell}_2 (\mathbf{\hat{Y}}, \mathbf{Y})$ (reconstruction loss) and the $\text{KL}(Q(z|\mathbf{Y}, \mathbf{X})||P(z))$ loss are used to train our model.
The MSE loss forces the reconstructed results to be as close as possible to the ground truth, and the KL-divergence loss forces the distribution of the latent variable $z$ towards a Gaussian distribution.
During inference, Y-Encoder is removed and the X-Encoder works in the same way as in the training phase to extract information from observed trajectories. To generate a future prediction sample, a latent variable $z$ is sampled from $\mathcal{N}(\mathbf{0}, ~I)$ and concatenated with $\Phi_X$ (as condition) as the input of the decoder.
To generate diverse samples, this step is repeated $N$ times to generate $N$ samples of future prediction conditioned on $\Phi_X$.
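The inference-time sampling loop can be sketched as follows (a sketch only; \texttt{decoder} stands for any trained decoder that maps the condition to a future trajectory of shape $(T', 2)$, and the latent dimension is an assumption):
\begin{verbatim}
import torch

def sample_trajectories(decoder, phi_x, n_samples=10, z_dim=32):
    # Y-Encoder is removed at inference; draw z ~ N(0, I) repeatedly
    # and decode each sample conditioned on the X-Encoder output phi_x
    preds = []
    for _ in range(n_samples):
        z = torch.randn(phi_x.size(0), z_dim)
        preds.append(decoder(torch.cat([phi_x, z], dim=-1)))
    return torch.stack(preds, dim=1)   # (B, n_samples, T', 2)
\end{verbatim}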
To summarize, the overall pipeline of the Attentive Maps Encoder Network (AMENet) consists of four modules, namely, X-Encoder, Y-Encoder, Z-Space and Decoder.
Each of the modules uses a different type of neural network to process the motion information and the dynamic maps information for multi-path trajectory prediction. Fig.~\ref{fig:framework} depicts the pipeline of the framework.
\subsection{Trajectory Ranking}
\label{subsec:ranking}
A bivariate Gaussian distribution is used to rank the multiple predicted trajectories $\hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}$ for each agent $i$. At each time step $t'\in T'$, the predicted positions $({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})$, $n \in \{1,\dots,N\}$, are used to fit a bivariate Gaussian distribution $\mathcal{N}({\mu}_{xy},\,\sigma^{2}_{xy}, \,\rho)^{t'}$. The predicted trajectories are then sorted by their joint probability density $p(\cdot)$ over the time axis using Eqs.~\eqref{eq:pdf} and \eqref{eq:sort}, where $\widehat{Y}^\ast$ denotes the most-likely prediction out of the $N$ predictions.
\begin{align}
\label{eq:pdf}
P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'}) \approx p[({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})|\mathcal{N}({\mu}_{xy},\sigma^{2}_{xy},\rho)^{t'}]\\
\label{eq:sort}
\widehat{Y}^\ast = \underset{n}{\text{arg\,max}}\sum_{t'=1}^{T'}{\log}P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})
\end{align}
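A sketch of this ranking computation (NumPy/SciPy; the small regularizer added to the covariance is our own addition for numerical stability):
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def rank_predictions(preds):
    # preds: (N, T', 2) candidate trajectories for one agent
    N, T_pred, _ = preds.shape
    scores = np.zeros(N)
    for t in range(T_pred):
        pts = preds[:, t, :]            # N predicted positions at step t
        mu = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(2)
        scores += multivariate_normal(mu, cov).logpdf(pts)
    order = np.argsort(-scores)         # descending joint log-density
    return preds[order[0]], order       # most-likely prediction first
\end{verbatim}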
\section{Experiments}
\label{sec:experiments}
In this section, we introduce the benchmark used to evaluate our method, the evaluation metrics and the comparison of results from our method with those of the recent state-of-the-art methods. To further justify how each proposed module in our framework impacts the overall performance, we design a series of ablative models and discuss the results in detail.
\subsection{Trajnet Benchmark challenge datasets}
\label{subsec:benchmark}
We verify the performance of the proposed model on the most challenging benchmark Trajnet\footnote{http://trajnet.stanford.edu/}. It is the most commonly used large-scale trajectory-based activity benchmark,
which provides a consistent evaluation system for fair comparison between submitted results from different proposed methods \cite{sadeghiankosaraju2018trajnet}.
The benchmark not only covers a wide range of datasets, but also includes various types of road users, from pedestrians to bikers, skateboarders, cars, buses and golf carts, that navigate in real-world outdoor mixed traffic environments.
Trajnet provides trajectory data collected from 38 scenes with ground truth for training and data collected from another 20 scenes without ground truth for the challenge competition. Each scene presents a different traffic density and space layout for mixed traffic, which makes the prediction task more difficult than training and testing on the same space.
This requires a model to manifest strong generalizability in order to obtain an overall good performance.
Trajectories are in $x$ and $y$ coordinates in meters or pixels projected on a Cartesian space with 8 time steps for observation and 12 time steps for prediction. Each time step lasts 0.4 seconds.
However, the pixel coordinates are not on the same scale across all the datasets, including the challenge datasets. Without standardizing the pixels to the same scale, it is extremely difficult to train a model in one pixel scale and test it in other pixel scales. Hence, following all the other models, we only use the coordinates in meters.
In order to train and evaluate the proposed model, as well as the ablative models (see Sec~\ref{sec:ablativemodels}) with ground truth information, 6 datasets from different scenes with mixed traffic are selected as test datasets from the 38 training datasets. Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}. Please note that, the selected test datasets are also different from the remaining 32 training datasets, so we can monitor the training and conduct the evaluation locally in a similar manner as the 20 challenge datasets on the remote evaluation system.
Fig.~\ref{fig:trajectories} shows the visualization of the trajectories in each dataset.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{fig/trajectories_bookstore_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{fig/trajectories_coupa_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.31\textwidth]{fig/trajectories_deathCircle_0.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.28\textwidth]{fig/trajectories_gates_1.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.52\textwidth]{fig/trajectories_hyang_6.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.27\textwidth]{fig/trajectories_nexus_0.pdf}
\caption{Selected datasets for evaluating the proposed model, as well as the ablative models. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:trajectories}
\end{figure}
\subsection{Evaluation metrics}
The mean average displacement error (MAD) and the final displacement error (FDE) are the two most commonly applied metrics for measuring the performance of trajectory prediction~\cite{pellegrini2009you,yamaguchi2011you,alahi2016social,gupta2018social,sadeghian2018sophie}. In addition, we count a predicted trajectory as invalid if it collides with another trajectory.
\begin{itemize}
\item MAD is the aligned L2 distance from $Y$ (ground truth) to the corresponding prediction $\hat{Y}$ averaged over all time steps. We report the mean value for all the trajectories.
\item FDE is the L2 distance of the last position from $Y$ to the corresponding $\hat{Y}$. It measures a model's ability for predicting the destination and is more challenging as errors accumulate in time.
\item Count of collisions with linear interpolation. Since each discrete time step lasts \SI{0.4}{seconds}, similar to \citep{sadeghian2018sophie}, an intermediate position is inserted using linear interpolation to increase the granularity of the time steps. If one agent coexists with another agent and they come within \SI{0.1}{meter} of each other at any time step, the encounter is counted as a collision. Once the predicted trajectory collides with another one, the prediction is invalid (see the sketch after this list).
\end{itemize}
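The three metrics can be sketched as follows (NumPy; the interpolation granularity and the threshold follow the description above):
\begin{verbatim}
import numpy as np

def mad_fde(y_hat, y):
    # y_hat, y: (T', 2) predicted and ground-truth positions
    d = np.linalg.norm(y_hat - y, axis=-1)
    return d.mean(), d[-1]              # MAD and FDE in meters

def collides(traj_a, traj_b, threshold=0.1):
    # Insert midpoints (linear interpolation) to increase granularity,
    # then flag a collision if the agents come within `threshold` meters
    def densify(t):
        mid = (t[:-1] + t[1:]) / 2.0
        out = np.empty((2 * len(t) - 1, 2))
        out[0::2], out[1::2] = t, mid
        return out
    d = np.linalg.norm(densify(traj_a) - densify(traj_b), axis=-1)
    return bool((d < threshold).any())
\end{verbatim}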
We evaluate the most-likely prediction and the best prediction $@top10$ for the multi-path trajectory prediction, respectively.
Most-likely prediction is selected by the trajectories ranking mechanism, see Sec~\ref{subsec:ranking}.
Best prediction $@top10$ means that, among the 10 predicted trajectories with the highest confidence, the one with the smallest MAD and FDE compared with the ground truth is selected as the best. When the ground truth is not available, i.\,e.,~for the Trajnet benchmark test datasets (see Sec~\ref{subsec:benchmark}), only the evaluation of the most-likely prediction is reported.
\subsection{Recent State-of-the-Art Methods}
\label{sec:stoamodels}
We compare the proposed model with the most influential recent state-of-the-art models published on the benchmark challenge for trajectory prediction on a consistent evaluation system up to 28/05/2020, in order to guarantee the fairness of comparison. It is worth mentioning that given the large number of submissions, only the very top methods with published papers are listed here\footnote{More details of the ranking can be found at \url{http://trajnet.stanford.edu/result.php?cid=1&page=2&offset=10}}.
\begin{itemize}
\item Social LSTM~\cite{alahi2016social}, proposes a social pooling layer in which a rectangular occupancy grid is used to pool the existence of the neighbors at each time step. After that, many following works \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} adopt their social pooling layer for this task, including the ablative model AOENet (see Sec~\ref{sec:ablativemodels}).
\item Social GAN~\cite{gupta2018social}, applies a generative adversarial network for generating multiple future trajectories, which is essentially different from the previous works. It takes the interactions of all agents into account.
\item MX-LSTM~\cite{hasan2018mx}, takes the position information and the visibility attentional area driven by the head pose as the input and proposes a new pooling mechanism that considers the target agent's view frustum of attention for modeling the interactions with all the other neighboring agents.
\item Social Force~\cite{helbing1995social}, is one of the most well-known approaches for pedestrian agent simulation in crowded spaces. It uses different forces on the basis of classical mechanics to mimic human behavior. The repulsive force prevents the target agent from colliding with others or obstacles, and the attractive force drives the agent close to its destination or companions.
\item SR-LSTM~\cite{zhang2019sr}, proposes an LSTM based model with a states refinement module to align all the coexisting agents together and adaptively refine the state of each participant through a message passing framework. A social-aware information selection mechanism with element-wise gate and agent-wise attention layer is used to extract the social effects between the target agent and its neighboring agents.
\item RED~\cite{becker2018evaluation}, uses a Recurrent Neural Network (RNN) Encoder with Multilayer Perceptron (MLP) for trajectory prediction. The input of RED is the offsets of the positions for each trajectory.
\item Ind-TF~\cite{giuliari2020transformer}, applies the Transformer network \cite{vaswani2017attention} for trajectory prediction. One big difference from the aforementioned models is that the prediction only depends on the attention mechanism for the target agent's motion and considers no social interactions with other neighboring agents.
\end{itemize}
\subsection{Ablative Models}
\label{sec:ablativemodels}
In order to analyze the impact of each component, i.\,e.,~the dynamic maps, the self-attention mechanism, and the extended structure of the CVAE, three ablative models are evaluated in comparison with the proposed model.
\begin{itemize}
\item AMENet, uses dynamic maps in both X-Encoder (observation time) and Y-Encoder (prediction time). This is the proposed model.
\item AOENet, substitutes dynamic maps with occupancy grid \citep{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both X-Encoder and Y-Encoder. This comparison is used to validate the contribution from the dynamic maps.
\item MENet, removes the self-attention for the dynamic maps. This comparison is used to validate the contribution of the self-attention mechanism for the dynamic maps along the time axis.
\item ACVAE, only uses dynamic maps in the X-Encoder. It is equivalent to a CVAE~\citep{kingma2013auto,kingma2014semi} with self-attention. This comparison is used to validate the contribution of the extended structure for processing the dynamic maps information in the Y-Encoder.
\end{itemize}
\section{Results}
In this section, we will discuss the results of the proposed model in comparison with several recent state-of-the-art models published on the benchmark challenge, as well as the ablative models. We will also discuss the performance of multi-path trajectory prediction with the latent space.
\subsection{Results on Benchmark Challenge}
\label{sec:benchmarkresults}
Table~\ref{tb:results} lists the top performances published on the benchmark challenge measured by MAD, FDE and overall average $(\text{MAD} + \text{FDE})/2$. AMENet wins the first position and surpasses the models published before 2020 significantly.
Compared with the most recent model, Transformer Networks Ind-TF \citep{giuliari2020transformer}, our model matches its performance measured by MAD and achieves the lowest FDE, reducing the error from 1.197 to 1.183 meters. This demonstrates the model's ability to predict the most accurate destination in 12 time steps.
It is worth mentioning that even though our model predicts multi-path trajectories for each agent, the performance achieved on the benchmark challenge is based on the most-likely prediction obtained by ranking the multi-path trajectories using the proposed ranking mechanism. The benchmark evaluation metrics do not provide information about collisions of the predicted trajectories. This is reported for the ablative models in the following sub-section.
\begin{table}[t!]
\centering
\caption{Comparison between the proposed model and the state-of-the-art models. Best values are highlighted in boldface.}
\begin{tabular}{lllll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & MAD [m]$\downarrow$ &Year\\
\hline
Social LSTM~\cite{alahi2016social} & 1.3865 & 3.098 & 0.675 & 2016\\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 & 2018\\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 & 2018\\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 & 1995\\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 & 2019\\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 & 2018\\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} & 2020\\
Ours (AMENet)\tablefootnote{The name of the proposed model AMENet was called \textit{ikg\_tnt} by the abbreviation of our institutes on the ranking list at \url{http://trajnet.stanford.edu/result.php?cid=1}.} & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} & 2020 \\
\hline
\end{tabular}
\label{tb:results}
\end{table}
\subsection{Results for ablative models}
\label{sec:ablativestudies}
In this sub-section, the contribution of each component in AMENet will be discussed via the ablative models. More details of the dedicated structure of each ablative model can be found in Sec~\ref{sec:ablativemodels}. Table~\ref{tb:resultsablativemodels} shows the quantitative evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE/\#collisions on most-likely prediction.
The comparison between AOENet and AMENet shows that when we replace the dynamic maps with the occupancy grid, the errors measured by MAD and FDE increase by a remarkable margin across all the datasets. The number of invalid trajectories with detected collisions also increases when the dynamic maps are substituted by the occupancy grid. This comparison proves that the dynamic maps with the neighboring agents' motion information, namely, orientation, travel speed and position relative to the target agent, can capture more detailed and accurate interaction information.
The comparison between MENet and AMENet shows that when we remove the self-attention mechanism, the errors measured by MAD and FDE also increase by a remarkable margin across all the datasets, and the number of collisions slightly increases. Without self-attention, the model may have difficulty in learning the behavior patterns of how the target agent interacts with its neighboring agents from one time step to other time steps. This proves the assumption that, the self-attention enables the model to learn the global dependency over different time steps.
The comparison between ACVAE and AMENet shows that when we remove the extended structure in the Y-Encoder for the dynamic maps, the errors measured by MAD, and especially FDE, increase significantly across all the datasets, as does the number of collisions. The extended structure gives the model the ability to process the interaction information of the prediction horizon during training. It improves the model's performance, especially for predicting more accurate destinations. This improvement has also been confirmed by the benchmark challenge (see Table~\ref{tb:results}). One interesting observation from the comparison of ACVAE with AOENet and MENet is that ACVAE performs much better than both, measured by MAD and FDE. This further proves that, even without the extended structure in the Y-Encoder, the dynamic maps with self-attention are very beneficial for interpreting the interactions between a target agent and its neighboring agents. Their robustness has been demonstrated by the ablative models across various datasets.
\begin{table}[hbpt!]
\setlength{\tabcolsep}{3pt}
\centering
\small
\caption{Evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE/\#collisions on the most-likely prediction. Best values are highlighted in bold face.}
\begin{tabular}{lllll}
\\ \hline
Dataset & AMENet & AOENet & MENet & ACVAE \\ \hline
bookstore3 & \textbf{0.486}/\textbf{0.979}/\textbf{0} & 0.574/1.144/\textbf{0} & 0.576/1.139/\textbf{0} & 0.509/1.030/2 \\
coupa3 & \textbf{0.226}/\textbf{0.442}/6 & 0.260/0.509/8 & 0.294/0.572/2 & 0.237/0.464/\textbf{0} \\
deathCircle0 & \textbf{0.659}/\textbf{1.297}/\textbf{2} & 0.726/1.437/6 & 0.725/1.419/6 & 0.698/1.378/10 \\
gates1 & \textbf{0.797}/\textbf{1.692}/\textbf{0} & 0.878/1.819/\textbf{0} & 0.941/1.928/2 & 0.861/1.823/\textbf{0} \\
hyang6 & \textbf{0.542}/\textbf{1.094}/\textbf{0} & 0.619/1.244/2 & 0.657/1.292/\textbf{0} & 0.566/1.140/\textbf{0} \\
nexus0 & \textbf{0.559}/\textbf{1.109}/\textbf{0} & 0.752/1.489/\textbf{0} & 0.705/1.140/\textbf{0} & 0.595/1.181/\textbf{0} \\
Average & \textbf{0.545}/\textbf{1.102}/\textbf{1.3} & 0.635/1.274/2.7 & 0.650/1.283/1.7 & 0.578/1.169/2.0 \\ \hline
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
Fig.~\ref{fig:abl_qualitative_results} shows the qualitative results of the proposed model AMENet in comparison with the ablative models across the datasets.
In general, all the models can predict realistic trajectories in different scenes, e.\,g.,~intersections and roundabouts, with various traffic densities and motion patterns, e.\,g.,~standing still or moving fast. After a short observation time, i.\,e.,~8 time steps, all the models can capture the general speed and heading direction of agents located in different areas of the space.
From a closer look we can see that AMENet generates more accurate trajectories than the other models, which are very close to, or even completely overlap with, the corresponding ground truth trajectories. Compared with the ablative models, AMENet predicts more accurate destinations, which is in line with the quantitative results shown in Table~\ref{tb:resultsablativemodels}. One very clear example in \textit{hyang6} (left figure in Fig.~\ref{fig:abl_qualitative_results}, third row) shows that, when the fast-moving agent changes its motion, AOENet and MENet have limited ability to predict its travel speed and ACVAE has limited ability to predict its destination. On the other hand, the prediction from AMENet is very close to the ground truth.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{scenarios/bookstore_3290.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{scenarios/coupa_3327.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.31\textwidth]{scenarios/deathCircle_0000.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.28\textwidth]{scenarios/gates_1001.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.52\textwidth]{scenarios/hyang_6209.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.27\textwidth]{scenarios/nexus_0038.pdf}
\caption{Trajectories predicted by AMENet (AME), AOENet (AOE), MENet (ME), ACVAE (CVAE) and the corresponding ground truth (GT) trajectories in different scenes. From top left to bottom right: \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:abl_qualitative_results}
\end{figure}
\subsection{Results for Multi-Path Prediction}
\label{sec:multipath-selection}
In this sub-section, we will discuss the performance of multi-path prediction with the latent space.
Instead of generating a single prediction, AMENet generates multiple feasible trajectories by sampling the latent variable $z$ multiple times (see Sec~\ref{subsec:cvae}). During training, the motion information and interaction information in observation and ground truth are encoded into the so-called Z-Space (see Fig.~\ref{fig:framework}). The KL-divergence loss forces $z$ into a Gaussian distribution. Fig.~\ref{fig:z_space} shows the visualization of the Z-Space in two dimensions, with $\mu$ visualized on the left and $\log\sigma$ visualized on the right.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=.6\textwidth]{fig/z_space.pdf}
\caption{Z-Space of two dimensions, with $\mu$ visualized on the left and $\log\sigma$ visualized on the right. It is trained to follow a $\mathcal{N}(0, 1)$ distribution. The variance is visualized in logarithmic space and is very close to zero.}
\label{fig:z_space}
\end{figure}
From the figure we can see that the training phase successfully re-parameterized the latent space into a Gaussian distribution that captures the stochastic properties of the agents' behaviors. When the Y-Encoder is not available at inference time, the well-trained Z-Space, in turn, enables us to randomly sample a latent variable $z$ from the Gaussian distribution multiple times for generating more than one plausible future trajectory.
Table~\ref{tb:multipath} shows the quantitative results for multi-path trajectory prediction. Predicted trajectories are ranked as $\text{top}@10$ using the prior knowledge of the corresponding ground truth, or by the most-likely ranking when the ground truth is not available. Compared with the most-likely prediction, the $\text{top}@10$ prediction yields similar but better performance. This indicates that generating multiple trajectories, e.\,g.,~ten trajectories, increases the chance to narrow down the errors from the prediction to the ground truth. Meanwhile, the ranking mechanism (see Sec~\ref{subsec:ranking}) guarantees the quality of the selected one.
Fig.~\ref{fig:multi-path} demonstrates the effectiveness of multi-path trajectory prediction. We can see that in roundabouts the interactions between different agents are full of uncertainties and each agent has many possible future paths. Even though our method is able to predict the trajectory correctly, the predicted trajectories diverge more widely at further time steps. This also shows that the ability to predict multiple plausible trajectories is important for the task of trajectory prediction, because the uncertainty of future movements increases for long-term prediction.
\begin{table}[hbpt!]
\centering
\small
\caption{Evaluation of multi-path trajectory prediction using AMENet. Predicted trajectories are ranked by $\text{top}@10$ and most-likely, and errors are measured by MAD/FDE/\#collisions.}
\begin{tabular}{lll}
\\ \hline
Dataset & Top@10 & Most-likely \\ \hline
bookstore3 & 0.477/0.961/0 & 0.486/0.979/0 \\
coupa3 & 0.221/0.432/0 & 0.226/0.442/6 \\
deathCircle0 & 0.650/1.280/6 & 0.659/1.297/2 \\
gates1 & 0.784/1.663/2 & 0.797/1.692/0 \\
hyang6 & 0.534/1.076/0 & 0.542/1.094/0 \\
nexus0 & 0.642/1.073/0 & 0.559/1.109/0 \\
Average & 0.535/1.081/1.3 & 0.545/1.102/1.3 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.514\textwidth]{multi_preds/deathCircle_0240.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.476\textwidth]{multi_preds/gates_1001.pdf}
\caption{Multi-path predictions from AMENet.}
\label{fig:multi-path}
\end{figure}
\section{Studies on Long-Term Trajectory Prediction}
\label{sec:longterm}
In this section, we investigate the model's performance on predicting long-term trajectories in real-world mixed traffic situations in different intersections.
Since the Trajnet benchmark (see Sec~\ref{subsec:benchmark}) only provides trajectories of 8 time steps for observation and 12 time steps for prediction, we instead use the newly published large-scale open-source dataset inD\footnote{\url{https://www.ind-dataset.com/}} for this task. inD was collected by drones at four different intersections in Germany for mixed traffic in 2019 by Bock et al. \cite{inDdataset}. In total, there are 33 datasets from the different intersections.
We follow the same processing format as the Trajnet benchmark and downsample the time steps of inD from the video frame rate of \SI{25}{fps} to \SI{2.5}{fps}, i.\,e.,~0.4 seconds per time step. We obtain the same sequence length (8 time steps) of each trajectory for observation and up to 32 time steps for prediction. One third of the datasets from each intersection are selected for testing the performance of AMENet on long-term trajectory prediction and the remaining datasets are used for training.
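The temporal down-sampling itself is straightforward (a sketch; the array layout of a trajectory is an assumption):
\begin{verbatim}
import numpy as np

def downsample(traj_25fps):
    # Keep every 10th frame: 25 fps -> 2.5 fps, i.e. 0.4 s per time step
    return np.asarray(traj_25fps)[::10]
\end{verbatim}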
Fig.~\ref{fig:AMENet_MAD} shows the trend of errors measured by MAD, FDE and number of collisions in relation to time steps. The performance of AMENet at time step 12 is comparable with its performance on the Trajnet datasets for both the $\text{top}@10$ and most-likely prediction. On the other hand, the errors measured by MAD and FDE grow with the number of time steps: behaviors of road users become more unpredictable, and predicting long-term trajectories from a short observation is more challenging than predicting short-term ones. One interesting observation concerns the number of collisions, i.\,e.,~invalid predicted trajectories. Overall, the number of collisions is relatively small and increases with the number of time steps for the $\text{top}@10$ prediction. However, the most-likely prediction leads to fewer collisions and shows no consistent ascending trend over the time steps. One possible explanation is that the $\text{top}@10$ prediction is selected by comparison with the corresponding ground truth, without any consideration of collisions. The most-likely prediction, on the other hand, selects the average prediction out of multiple predictions for each trajectory using a bivariate Gaussian distribution (see Sec~\ref{sec:multipath-selection}). This majority-voting mechanism yields better results regarding safety than merely selecting the best prediction based on the distance to the ground truth.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_MAD.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_FDE.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_collision.pdf}
\caption{AMENet tested on InD for different predicted sequence lengths measured by MAD, FDE and number of collisions, respectively.}
\label{fig:AMENet_MAD}
\end{figure}
Fig.~\ref{fig:qualitativeresults} shows the qualitative performance of AMENet for long-term trajectories in the big intersection with weakened traffic regulations in InD. From Scenario-A in the left column we can see that AMENet generates very accurate predictions for 12 and 16 time steps (visualized in the first two rows) for the two pedestrians. When they encounter each other at 20 time steps (third row), the model correctly predicts that the left pedestrian yields, but the predicted trajectories slightly deviate from the ground truth and lead to a very close interaction. With a further increase of time steps, the prediction becomes less accurate regarding travel speed and heading direction. Scenario-B in the right column shows similar performance. The model has limited performance for fast-moving agents, i.\,e.,~the vehicle in the middle of the street.
\begin{figure} [bpht!]
\centering
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_12.pdf}
\label{subfig:s-a-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_12.pdf}
\label{subfig:s-b-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_16.pdf}
\label{subfig:s-a-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_16.pdf}
\label{subfig:s-b-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_20.pdf}
\label{subfig:s-a-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_20.pdf}
\label{subfig:s-b-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_24.pdf}
\label{subfig:s-a-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_24.pdf}
\label{subfig:s-b-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_28.pdf}
\label{subfig:s-a-28}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_28.pdf}
\label{subfig:s-b-28}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/29_Trajectories052_32.pdf}
\caption{\small{Scenario-A 12 to 32 steps}}
\label{subfig:s-a-32}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/27_Trajectories046_32.pdf}
\caption{\small{Scenario-B 12 to 32 steps}}
\label{subfig:s-b-32}
\end{subfigure}
\caption{\small{Examples for predicting different sequence lengths in Scenario-A (left column) and Scenario-B (right column). From top to bottom rows the prediction lengths are 12, 16, 20, 24, 28 and 32 time steps. The observation sequence lengths are 8 time steps.}}
\label{fig:qualitativeresults}
\end{figure}
To summarize, long-term trajectory prediction based on a short observation is extremely challenging, as the behaviors of different road users become more unpredictable with increasing time. In future work, in order to push the time horizon beyond 12 steps or 4.8 seconds, extra information, e.\,g.,~updated observations along the prediction horizon, may be required to maintain the performance of the model.
\section{Conclusions}
In this paper, we present a generative model called Attentive Maps Encoder Networks (AMENet) that uses motion information and interaction information for multi-path trajectory prediction of mixed traffic in various real-world environments.
The latent space learnt by the X-Encoder and Y-Encoder for both sources of information enables the model to capture the stochastic properties of motion behaviors for predicting multiple plausible trajectories after a short observation time.
We propose an innovative way---dynamic maps---to learn the social effects between agents during interaction. The dynamic maps capture accurate interaction information by encoding the neighboring agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of interaction over different time steps.
The efficacy of the model has been validated on the most challenging benchmark Trajnet that contains various datasets. Our model not only achieves the state-of-the-art performance, but also wins the first place on the leaderboard for predicting 12 time-step positions of 4.8 seconds.
Each component of AMENet is validated via a series of ablative models.
In order to further investigate the model's performance on long-term trajectory prediction, the newly published benchmark InD is utilized. The performance of AMENet on InD is on a par with its performance on Trajnet for 12 time steps, but gradually degrades up to 32 time-step positions of 12.8 seconds.
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction of road users in the near future is a crucial task in intelligent transportation systems (ITS) \cite{goldhammer2013early,hashimoto2015probabilistic,koehler2013stationary}, autonomous driving \cite{franke1998autonomous,ferguson2008detection,luo2018porca}, mobile robot applications \cite{ziebart2009planning,du2011robot,mohanan2018survey}, \textit{etc}. This task enables an intelligent system to foresee the behavior of road users and make a reasonable and safe decision for its next operation, especially in urban mixed-traffic zones (a.k.a. shared spaces \cite{reid2009dft}).
The trajectory of a non-erratic agent is a sequence of plausible (e.\,g.,~collision-free and energy-efficient) and socially-acceptable (e.\,g.,~respecting social relations, rules and norms between agents) positions in 2D or 3D, aligned along a time axis.
Trajectory prediction is generally defined as predicting the plausible and socially-acceptable positions of target agents at each time step within a predefined future time interval, relying on observed partial trajectories over a certain period of time \cite{alahi2016social,lee2017desire,gupta2018social,sadeghian2018sophie,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,al2018move,zhang2019sr,cheng2020mcenet,johora2020agent,giuliari2020transformer}.
The target agent is defined as the dynamic object for which the actual prediction is made, mainly pedestrians, vehicles, cyclists and other road users \cite{rudenko2019human}.
The prediction task can be generalized as short-term or long-term trajectory prediction depending on the prediction time horizon. In this study, horizons of up to 5 seconds are categorized as short-term, longer ones as long-term. A typical prediction process in mixed traffic is exemplified in Fig.~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.8in 2.6in 0.6in, width=1\textwidth]{fig/first_fig.pdf}
\caption{Predicting plausible and socially-acceptable positions of agents (e.\,g.,~target agent in black) at each time step within a predefined future time interval by observing their past trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
However, how to effectively and accurately predict trajectories of mixed agents remains an unsolved problem in many research communities. The challenges mainly come from three aspects: 1) the complex behavior and uncertain moving intent of each agent, 2) the presence of, and interactions with, neighboring agents and 3) the multi-modality of paths: there is usually more than one socially-acceptable path that an agent could take in the future. In a crowded scene, the moving direction and speed of different agents change dynamically because of their freewheeling intents and the interactions with neighboring agents.
There exists a large body of literature that focuses on addressing part or all of the aforementioned challenges in order to make accurate future prediction.
The traditional methods model the interactions based on hand-crafted features, such as force-based rules or constant velocity \cite{helbing1995social,best1997new,yi2015understanding,zhou2012understanding,antonini2006discrete,tay2008modelling,yamaguchi2011you}. Their performance is crucially affected by the quality of the manually designed features, and they lack generalizability \cite{cheng2020trajectory}.
Recently, boosted by the development of deep learning technologies \cite{lecun2015deep}, data-driven methods
keep reporting new state-of-the-art performance on benchmarks \cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,cheng2020mcenet}.
For instance, Social LSTM \cite{alahi2016social} models the interactions between the target pedestrian and its close neighbors and predicts the future positions in sequence. Many later works follow this pioneering work that treats the trajectory prediction problem as a sequence prediction problem based on Recurrent Neural Networks (RNNs) \cite{wu2017modeling,xue2018ss,bartoli2018context,fernando2018soft}.
However, these works design a discriminative model and produce a deterministic outcome for each agent. Hence, the models tend to predict ``average'' trajectories, because the commonly used objective function minimizes the Euclidean distance between the ground truth and the predicted outputs.
To predict multiple socially-acceptable trajectories for each target agent, different generative models are proposed. Social GAN \cite{gupta2018social} designs a Generative Adversarial Network (GAN) \cite{goodfellow2014generative} based framework that trains an RNN encoder-decoder as generator and an RNN-based encoder as discriminator. DESIRE \cite{lee2017desire} proposes to use a conditional variational auto-encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning} to learn the latent space of future trajectories and predicts multiple possible trajectories by repeatedly sampling from the learned latent space.
Previous methods have achieved great success in this domain. However, most of them are designed for a specific type of agent: the pedestrian. In reality, vehicles, pedestrians and cyclists are the three main types of agents and their behaviors affect each other. To make precise trajectory predictions, their interactions should be considered jointly. Second, the interactions between the target agent and the others are treated equally. But different agents may not affect the target agent equally on how to move in the near future. For instance, nearer agents should affect the target agent more strongly than distant ones, and a target vehicle is affected more by pedestrians who tend to cross the road than by vehicles behind it. Last but not least, the robustness of the models is not fully tested in real-world outdoor mixed traffic environments (e.\,g.,~roundabouts, intersections) with various unseen traffic situations. For example, can a model trained on some spaces predict accurate trajectories in other unseen spaces?
To address the aforementioned limitations, we propose \emph{Attentive Maps Encoder Network} (AMENet), which leverages the ability of generative models to generate diverse patterns of future trajectories and models the interactions between the target agent and the others as attentive dynamic maps.
The dynamic maps encode the neighboring agents' orientation, speed and position in relation to the target agent at each time step for interaction modeling, and the attention mechanism enables the model to automatically focus on the salient features extracted over different time steps.
An overview of our proposed framework is depicted in Fig.~\ref{fig:framework}. (1) Two encoders with identical structure are designed for learning representations of the observed trajectories (X-Encoder) and the future trajectories (Y-Encoder), respectively. Taking the X-Encoder as an example (see Fig.~\ref{fig:encoder}), the encoder first extracts the motion information of the target agent (coordinate offsets in sequential time steps) and the interaction information with the other agents, respectively. In particular, to explore the dynamic interactions, the motion information of each agent is characterized by its orientation, speed and position at each time step. Then a self-attention mechanism is utilized over all agents to extract the dynamic interaction maps. This is where the name \emph{Attentive Maps Encoder} comes from. The motion and interaction information along the observed time interval are collected by two independent Long Short-Term Memories (LSTMs) and then fused together. (2) The output of the Y-Encoder is fed into a variational auto-encoder to learn the latent space of the future trajectory distribution, which is assumed to be Gaussian. (3) The output of the variational auto-encoder module (achieved by reparameterization of the encoded features during the training phase and by resampling from the learned latent space during the inference phase) is fed forward to the following decoder, associated with the output of the X-Encoder as condition, to forecast the future trajectory. This works in the same way as a conditional variational auto-encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning}.
The main contributions are summarised as follows:
\begin{itemize}
\item[1] We propose a generative framework based on a special CVAE module which is trained to encode both the motion and interaction information for predicting multiple plausible future trajectories of target agents.
Our CVAE module differs from \cite{kingma2013auto,kingma2014semi,sohn2015learning} by extending the dynamic maps into the Y-Encoder for extracting interaction information at future time steps during the training phase, while the conventional CVAE module relies on an auto-encoder structure with a consistent input and output structure (e.\,g.,~the input and output are both trajectories in the same structure \cite{lee2017desire}).
\item[2] We design a novel module, \emph{attentive maps encoder}, that learns spatio-temporal interconnections among agents based on dynamic maps using a self-attention mechanism.
Unlike RNN-based structures, which propagate information along the symbol positions of the input and output sequences and therefore face increasing difficulties for information propagation in long sequences, the self-attention mechanism relates different positions of a single sequence in order to compute a representation of the entire sequence. It models dependencies without regard to the distance between two positions \cite{vaswani2017attention}, and global interactions are considered rather than only local ones \cite{wang2018non}.
\item[3] Our model is able to predict heterogeneous road users, i.\,e.,~pedestrian, vehicle and cyclist rather than only focusing on pedestrian, in various unseen real-world environments.
\end{itemize}
The efficacy of the proposed method has been validated on the most challenging benchmark \emph{Trajnet} \cite{sadeghiankosaraju2018trajnet}, which contains various datasets in various environments for short-term trajectory prediction. Our method reports the new state-of-the-art performance and wins the first place on the leaderboard.
Its performance for predicting long term (up to 32 time-step positions of 12.8 seconds) trajectory is also investigated on the benchmark inD \cite{inDdataset} that contains mixed traffic in different intersections.
Each component of the proposed model is validated via a series of ablative models.
\section{Related Work}
Our work focuses on predicting trajectories of mixed road agents.
In this section we discuss recent related work on the following aspects: modeling the task as sequence prediction, modeling the interactions between agents for precise path prediction, modeling with attention mechanisms, and utilizing generative models to predict multiple plausible trajectories. Our work concentrates on modeling the dynamic interactions between agents and training a generative model to predict multiple plausible trajectories for each target agent.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling the trajectory prediction as a sequence prediction task is the most popular approach. The 2D/3D position of a target agent is predicted step by step along the time axis.
The widely applied models include, but are not limited to, linear regression and Kalman filter \cite{harvey1990forecasting}, Gaussian processes \cite{tay2008modelling} and Markov decision processes \cite{kitani2012activity}.
However, these traditional methods largely rely on the quality of manually designed features and are unable to tackle large-scale data.
Recently, data-driven deep learning technologies, especially RNN-based models and the variants, e.\,g.,~Long Short-Term Memories (LSTMs) \cite{hochreiter1997long} and Gated Recurrent Units (GRU) \cite{cho2014learning} have demonstrated the powerful ability in extracting representations from massive data automatically and are used to learn the complex patterns of trajectories.
In recent years, RNN-based models keep pushing the edge of accuracy of predicting pedestrian trajectory \cite{alahi2016social,xu2018encoding,bhattacharyya2018long,gupta2018social,sadeghian2018sophie,zhang2019sr,liang2019peeking}, as well as other types of road users \cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
Other deep learning technologies, such as Convolutional Neural Networks (CNN) and Graph-based neural networks are also used for trajectory prediction and report good performances \cite{bai2018empirical,chandra2019forecasting,mohamed2020social,gao2020vectornet,liang2019garden}.
In this work, we also utilize LSTMs to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of an agent is not only decided by its own will but is also crucially affected by its interactions with other agents. Therefore, effectively modeling the social interactions among agents is important for accurate trajectory prediction.
One of the most influential approaches for modeling interaction is the Social Force Model \cite{helbing1995social}, which models the repulsive force for collision avoidance and the attractive force for social connections. Game Theory is utilized to simulate the negotiation between different road users \cite{schonauer2012modeling}.
Such rule-based interaction modelings have been incorporated into deep learning models. Social LSTM proposes an occupancy grid to locate the positions of close neighboring agents and uses a social pooling layer to encode the interaction information for trajectory prediction \cite{alahi2016social}. Many following works design their specific ``occupancy'' grid for interaction modeling \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interaction between individual agent and group agents with social connections and report better performance.
Meanwhile, different pooling mechanisms are proposed for interaction modeling. For example, Social GAN \cite{gupta2018social} embeds the relative positions between the target and all the other agents together with each agent's motion hidden state and uses element-wise pooling to extract the interactions between all pairs of agents, not only the close neighbors.
Similarly, all the agents are considered in SR-LSTM \cite{zhang2019sr}. It proposes a states refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework. The motion gate and agent-wise attention are used to select the most important information from neighboring agents.
Most of the aforementioned models extract interaction information based on the relative position of the neighboring agents in relation to the target agent.
The dynamics of interactions are not fully captured in both the spatial and the temporal domain.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Recently, different attention mechanisms \cite{bahdanau2014neural,xu2015show,luong2015effective,vaswani2017attention,wang2018non} are incorporated in neural networks for learning complex spatio-temporal interconnections.
In particular, their effectiveness has been proven in learning powerful representations from sequential information in tasks such as neural machine translation \cite{bahdanau2014neural,luong2015effective,vaswani2017attention} and image caption generation \cite{xu2015show}, and they have been widely utilized in other domains \cite{anderson2018bottom,giuliari2020transformer,he2020image}.
Some of the recent state-of-the-art methods also have adapted attention mechanisms for sequence modeling and interaction modeling to predict trajectories.
For example, a soft attention mechanism \cite{xu2015show} is incorporated in LSTM to learn the spatio-temporal patterns from the position coordinates \cite{varshneya2017human}. Similarly, SoPhie \cite{sadeghian2018sophie} applies two separate soft attention modules, one called physical attention for learning the salient features between agent and scene and the other called social attention for modeling agent to agent interactions. In the MAP model \cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work Ind-TF \cite{giuliari2020transformer} replaces the RNN with a Transformer \cite{vaswani2017attention} for modeling trajectory sequences.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism \cite{vaswani2017attention} along the time axis.
The self-attention mechanism maps a query and a set of key-value pairs to an output. First, the similarity between the query and each key is computed to obtain a weight. The weights associated with all the keys are then normalized, e.\,g.,~via a softmax function, and applied to weigh the corresponding values for obtaining the final attention. Thanks to this mapping mechanism, and unlike in recurrent structures (e.\,g.,~RNNs), the dependency between input and output is not restricted by their positional distance.
\subsection{Generative Models}
\label{sec:rel-generative}
To date, VAE \cite{kingma2013auto} and GAN \cite{goodfellow2014generative} and their variants (e.\,g.,~Conditional VAE \cite{kingma2014semi,sohn2015learning} and Conditional GAN \cite{mirza2014conditional}) are the most popular generative models in the era of deep learning.
They are both able to generate diverse outputs by sampling from noise. The essential difference is that, GAN trains a generator to generate a sample from noise and a discriminator to decide whether the generated sample is real enough. The generator and discriminator enhance mutually during training.
In contrast, VAE is trained by maximizing the lower bound of training data likelihood for learning a latent space that approximates the distribution of the training data.
Generative models have shown promising performance in different tasks, e.\,g.,~super resolution, image-to-image translation and image generation, as well as trajectory prediction \cite{lee2017desire,gupta2018social,cheng2020mcenet}.
Predicting one single trajectory may not be sufficient due to the uncertainty of road users' behavior.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performance of the two modules are enhanced mutually and the generator is able to generate trajectories that are as precise as the real ones. Similarly, Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
Lee~et al.~\cite{lee2017desire} propose a CVAE model to predict multiple plausible trajectories.
Cheng~et al.~\cite{cheng2020mcenet} propose a CVAE like model named MCENet to predict multiple plausible trajectories conditioned on the scene context and previous information of trajectories.
In this work, we incorporate a CVAE module to learn a latent space of possible future paths for predicting multiple plausible future trajectories conditioned on the observed past trajectories.
Our work essentially distinguishes itself from the above generative models in the following points: (1) We insert not only the ground truth trajectory, but also the dynamic maps associated with the ground truth trajectory into the CVAE module during training, which differs from the conventional CVAE that follows a consistent input and output structure (e.\,g.,~the input and output are both trajectories in the same structure \cite{lee2017desire}).
(2) Our method does not explore information from images, i.\,e.,~visual information is not used and future trajectories are predicted based only on the trajectory data (i.\,e.,~position coordinates).
Therefore, it is more computationally efficient than the methods that need information from images.
In addition, our model is trained on some available spaces but validated on other unseen spaces. Visual information, such as vegetation, curbsides and buildings, differs considerably from one space to another. Over-fitted visual features may jeopardize the model's robustness and lead to poor performance in a new space with a totally different environment \cite{cheng2020mcenet}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 3.5in 0in 0.5in, width=1\textwidth]{fig/model_framework3.pdf}
\caption{An overview of the proposed framework. It consists of four modules: the X-Encoder and Y-Encoder are used for encoding the observed and the future trajectories, respectively, and have the same structure. The Sample Generator produces diverse samples of future trajectories. The Decoder module decodes the features from the produced samples and predicts future trajectories sequentially. The specific structure of the X-Encoder/Y-Encoder is given in Fig.~\ref{fig:encoder}.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet in details (Fig.~\ref{fig:framework}) in the following structure: a brief review on \emph{CVAE} (Sec.~\ref{subsec:cvae}), \emph{Problem Definition} (Sec.~\ref{subsec:definition}), \emph{Motion Input} (Sec.~\ref{subsec:input}), \emph{Dynamic Maps} (Sec.~\ref{subsec:dynamic}), \emph{Diverse Sampling} (Sec.~\ref{subsec:sample}) and \emph{Trajectory Ranking} (Sec.~\ref{subsec:ranking}).
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
In tasks like trajectory prediction, we are interested in modeling a conditional distribution $P(Y_n|X)$, where $X$ is the previous trajectory information and $Y_n$ is one of the possible future trajectories.
To realize the goal of generating controllable, diverse samples of future trajectories based on past trajectories, a deep generative model, the conditional variational auto-encoder (CVAE), is adopted inside our framework.
CVAE is an extension of the generative model VAE \cite{kingma2013auto} by introducing a condition to control the output \cite{kingma2014semi}.
Concretely, it is able to learn the stochastic latent variable $z$ that characterizes the distribution $P(Y_i|X_i)$ of $Y_i$ conditioned on the input $X_i$, where $i$ is the index of sample.
The objective function of training CVAE is formally defined as:
\begin{equation}
\label{eq:CVAE}
\log{P(Y_i|X_i)} \geq - D_{KL}(Q(z_i|Y_i, X_i)||P(z_i)) + \E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}].
\end{equation}
where $Y_i$ and $X_i$ stand for the future and the past trajectories in our task, respectively, and $z_i$ for the latent variable. The objective is to maximize the conditional log-likelihood $\log{P(Y_i|X_i)}$, which is equivalent to minimizing the reconstruction error $\ell (\hat{Y_i}, Y_i)$ and minimizing the Kullback-Leibler divergence $D_{KL}(\cdot)$ in parallel. Since the Kullback-Leibler divergence is always non-negative, the right-hand side of Eq.~\eqref{eq:CVAE} is an approximate lower bound that can be maximized instead.
In order to enable back propagation for stochastic gradient descent in $\E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}]$, the re-parameterization trick \cite{rezende2014stochastic} is applied to $z_i$: $z_i = \mu_i + \sigma_i \odot \epsilon_i$. Here, $z_i$ is assumed to follow a Gaussian distribution $z_i\sim Q(z_i|Y_i, X_i)=\mathcal{N}(\mu_i, \sigma_i)$, $\epsilon_i$ is sampled from a standard Gaussian distribution, and the mean $\mu_i$ and the standard deviation $\sigma_i$ of $z_i$ are produced by two side-by-side \textit{fc} layers (as shown in Fig.~\ref{fig:framework}). In this way, differentiating through the sampling process $Q(z_i|Y_i, X_i)$ is turned into differentiating the sampled result $z_i$ w.\,r.\,t.~$\mu_i$ and $\sigma_i$, so that back propagation with stochastic gradient descent can be used to optimize the networks that produce $\mu_i$ and $\sigma_i$.
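The re-parameterization and the two terms of the training objective can be sketched in a few lines of Python; a minimal illustration using PyTorch, where tensor shapes, reduction choices and names are our assumptions, not the published implementation:
\begin{verbatim}
import torch

def reparameterize(mu, log_var):
    """z = mu + sigma * eps, eps ~ N(0, I): keeps the sampling step
    differentiable w.r.t. mu and sigma."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def cvae_loss(y_hat, y, mu, log_var):
    """Negative ELBO: MSE reconstruction term plus the closed-form
    KL divergence between a diagonal Gaussian and N(0, I)."""
    rec = torch.nn.functional.mse_loss(y_hat, y, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return rec + kl
\end{verbatim}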
\subsection{Problem Definition}
\label{subsec:definition}
The multi-path trajectory prediction problem is defined as follows: for an agent $i$, given its observed trajectory $\mathbf{X}_i=\{X_i^1,\cdots,X_i^T\}$ as input, predict its $n$-th plausible future trajectory $\hat{\mathbf{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,n}^{T'}\}$. $T$ and $T'$ denote the sequence lengths of the past and the predicted trajectory, respectively. The trajectory position of $i$ at time step $t$ is characterized by the coordinates $X_i^t=({x_i}^t, {y_i}^t)$ (3D coordinates are also possible, but in this work only 2D coordinates are considered), and likewise $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^{t'}, \hat{y}_{i,n}^{t'})$.
For simplicity, we omit the time-step notation in the remainder of the paper when it is clear from context.
The objective is to predict multiple plausible future trajectories $\hat{\mathbf{Y}}_i = \hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}$ that are as close as possible to the ground truth $\mathbf{Y}_i$. This task is formally defined as $\hat{\mathbf{Y}}_{i,n} = f(\mathbf{X}_i, \text{Map}_i), ~n \in \{1,\cdots,N\}$, where $N$ denotes the total number of predicted trajectories and $\text{Map}_i$ denotes the dynamic maps centralized on the target agent that capture the interactions with its neighboring agents over the time steps. More details of the dynamic maps are given in Sec.~\ref{subsec:dynamic}.
\subsection{Motion Input}
\label{subsec:input}
Specifically, we use the offsets $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of the trajectory positions between two consecutive time steps as the motion information instead of the coordinates in a Cartesian space, which has been widely applied in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}. Compared with coordinates, the offsets are independent of the given space and less prone to overfitting a model to a particular space or travel direction.
The offset can be interpreted as speed over time steps that are defined with a constant duration.
As long as the original position is known, the absolute coordinates at each position can be calculated by cumulatively summing the sequence offsets.
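A minimal sketch of this offset representation and its inversion (plain NumPy; names are illustrative):
\begin{verbatim}
import numpy as np

def to_offsets(coords):
    """Absolute positions (T, 2) -> per-step offsets (T-1, 2)."""
    return np.diff(coords, axis=0)

def to_coords(offsets, origin):
    """Recover positions by cumulatively summing the offsets."""
    return origin + np.cumsum(offsets, axis=0)

coords = np.array([[0.0, 0.0], [0.4, 0.1], [0.9, 0.3]])
assert np.allclose(to_coords(to_offsets(coords), coords[0]),
                   coords[1:])
\end{verbatim}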
We also apply an augmentation technique that rotates the trajectories, to prevent the system from learning only certain travel directions. In order to preserve the relative positions and angles between agents, the trajectories of all the agents that coexist in a given period are rotated by the same angle.
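A minimal sketch of this rotation augmentation (plain NumPy; the same rotation matrix is applied to all coexisting agents; names and shapes are illustrative):
\begin{verbatim}
import numpy as np

def rotate_scene(trajectories, angle_rad):
    """Rotate all coexisting trajectories (num_agents, T, 2) by the
    same angle, preserving relative positions and angles."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    return trajectories @ rot.T

augmented = rotate_scene(np.random.rand(5, 8, 2), np.deg2rad(90))
\end{verbatim}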
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 2.2in 3.6in 0.5in, width=1\textwidth]{fig/encoder.pdf}
\caption{Structure of the X-Encoder. The encoder has two branches: the upper one extracts the motion information of the target agent (i.\,e.,~movement in $x$- and $y$-axis in a Cartesian space), and the lower one learns the interaction information among the neighboring road users from dynamic maps over time. Each dynamic map consists of three layers that represent orientation, travel speed and relative position, respectively, centralized on the target road user. The motion information and the interaction information are each encoded sequentially by their own LSTM. The last outputs of the two LSTMs are concatenated and forwarded to a \textit{fc} layer to obtain the final output of the X-Encoder. The Y-Encoder has the same structure as the X-Encoder, but it extracts features from the future trajectories and is only used in the training phase.}
\label{fig:encoder}
\end{figure}
\subsection{Dynamic Maps}
\label{subsec:dynamic}
Different from the recent works of parsing the interactions between the target and neighboring agents using an occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, we propose an innovative and straightforward method---attentive dynamic maps---to learn interaction information among agents.
As demonstrated in Fig.~\ref{fig:encoder}, a dynamic map at a given time step consists of three layers that interpret the information of \emph{orientation}, \emph{speed} and \emph{position}, respectively, derived from the trajectories of the involved agents. Each layer is centralized on the target agent's position and divided into uniform grid cells. The layers are divided into grids because: (1) compared with representing information at pixel level, a grid representation is computationally more efficient; (2) the size and moving speed of an agent are not fixed and it occupies a local region of pixels of arbitrary form, so the spatio-temporal information of each pixel differs even among pixels that belong to the same agent. Therefore, we represent the spatio-temporal information as an average value within each grid cell. We calculate the value of each grid cell on the different layers as follows:
the neighboring agents are assigned to the corresponding grid cells by their relative position to the target agent, as well as by their relative offset (speed) to the target agent at each time step in the $x$- and $y$-axis directions.
Eq.~\eqref{eq:map} denotes the mapping mechanism for target user $i$ considering the orientation $O$, speed $S$ and position $P$ of all the neighboring agents $j \in \mathcal{N}(i)$ that coexist with the target agent $i$ at each time step.
\begin{equation}
\label{eq:map}
\text{Map}_i^t = \sum_{j \in \mathcal{N}(i)}(O, S, P) | (x_j^t-x_i^t, ~y_j^t-y_i^t, ~\Delta{x}_j^t-\Delta{x}_i^t, ~\Delta{y}_j^t-\Delta{y}_i^t).
\end{equation}
The \emph{orientation} layer $O$ represents the heading direction of the neighboring agents. The orientation value is given in \emph{degrees} $[0, 360]$ and then normalized into $[0, 1]$. The value of each grid cell is the mean orientation of all the agents within the cell.
The \emph{speed} layer $S$ represents the neighboring agents' travel speed. Locally, the speed in each grid cell is the average speed of all the agents within the cell. Globally, across all the cells, the speed values are normalized into $[0, 1]$ by Min-Max normalization.
The \emph{position} layer $P$ stores the positions of all the neighboring agents in the grid cells calculated by Eq.~\eqref{eq:map}. The value of a cell is the number of neighboring road users within the cell, normalized by the total number of neighboring road users at that time step, which can be interpreted as the cell's occupancy density.
Each time step has a dynamic map and therefore the spatio-temporal interaction information among agents are interpreted dynamically over time.
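A minimal sketch of how one such three-layer map could be rasterized (plain NumPy; the grid extent, cell size and normalization details here are placeholders for the values described below, and taking orientation and speed relative to the target agent follows Eq.~\eqref{eq:map} as we read it):
\begin{verbatim}
import numpy as np

def dynamic_map(target_pos, target_off, nbr_pos, nbr_off,
                radius=16.0, cell=1.0):
    """Rasterize one (3, H, W) map centered on the target agent:
    orientation, speed and occupancy-density layers."""
    size = int(2 * radius / cell)
    o_sum = np.zeros((size, size))
    s_sum = np.zeros((size, size))
    count = np.zeros((size, size))
    for p, off in zip(nbr_pos, nbr_off):
        rel = p - target_pos
        gx = int((rel[0] + radius) / cell)
        gy = int((rel[1] + radius) / cell)
        if not (0 <= gx < size and 0 <= gy < size):
            continue  # neighbor outside the region of interest
        d_off = off - target_off
        # heading in degrees, normalized into [0, 1]
        heading = (np.degrees(np.arctan2(d_off[1], d_off[0])) % 360) / 360.0
        o_sum[gy, gx] += heading
        s_sum[gy, gx] += np.linalg.norm(d_off)
        count[gy, gx] += 1
    nonzero = np.maximum(count, 1)
    orient = o_sum / nonzero               # mean orientation per cell
    speed = s_sum / nonzero
    if speed.max() > 0:                    # global Min-Max scaling
        speed = speed / speed.max()
    pos = count / max(len(nbr_pos), 1)     # occupancy density
    return np.stack([orient, speed, pos])

# Toy usage: one target at the origin, three neighbors.
m = dynamic_map(np.zeros(2), np.zeros(2),
                np.random.uniform(-10, 10, (3, 2)),
                np.random.uniform(-1, 1, (3, 2)))
\end{verbatim}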
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.7\textwidth]{fig/dynamic_maps_nexus_0.pdf}
\caption{The maps information with accumulated time steps for the dataset \textit{nexus-0}.}
\label{fig:dynamic_maps}
\end{figure}
To more intuitively show the dynamic maps information, we gather all the agents over all the time steps and visualize them in Fig.~\ref{fig:dynamic_maps} as an example showcased by the dataset \textit{nexus-0} (see more information on the benchmark datasets in Sec~\ref{subsec:benchmark}).
Each grid cell is \SI{1}{meter} wide and \SI{1}{meter} high, and the region of interest extends up to \SI{16}{meters} in each direction, centralized on the target agent, in order to include not only close but also distant neighboring agents.
The visualization demonstrates certain motion patterns of the agents, including the distribution of orientation, speed and positions over the grids of the map. For example, all the agents move in a certain direction with similar speed on a particular area of the map, and some areas are much denser than the others.
\subsubsection{Attentive Maps Encoder}
\label{subsubsec:AMENet}
As discussed above, each time step has a dynamic map which summarizes the orientation, speed and position information of all the neighboring agents. To capture the spatio-temporal interconnections from the dynamic maps for the following modules, we propose the \emph{Attentive Maps Encoder} module.
The X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and dynamic maps information for interaction (lower branch).
The upper branch takes as input the motion information, i.\,e.,~the offsets $\{({\Delta{x}_i}^t, {\Delta{y}_i}^t)\}_{t=1}^{T}$ of each target agent over the observed time steps. The motion information is first passed to a 1D convolutional layer (Conv1D) with one-step stride along the time axis to learn motion features one time step after another, and then to a fully connected (\textit{fc}) layer. The output of the \textit{fc} layer is passed to an LSTM module for encoding the temporal features along the trajectory sequence of the target agent into a hidden state, which contains all the motion information.
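A minimal sketch of this motion branch (PyTorch; the layer sizes and kernel width are illustrative, not the published configuration):
\begin{verbatim}
import torch
import torch.nn as nn

class MotionBranch(nn.Module):
    """Conv1D over the offset sequence, a fc layer, then an LSTM
    whose last hidden state summarizes the motion."""
    def __init__(self, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(2, 32, kernel_size=3, stride=1, padding=1)
        self.fc = nn.Linear(32, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, offsets):                     # (B, T, 2)
        x = self.conv(offsets.transpose(1, 2))      # (B, 32, T)
        x = torch.relu(self.fc(x.transpose(1, 2)))  # (B, T, hidden)
        _, (h_n, _) = self.lstm(x)
        return h_n[-1]                              # (B, hidden)

h = MotionBranch()(torch.randn(4, 8, 2))  # 4 agents, 8 observed steps
\end{verbatim}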
The lower branch takes the dynamic maps $\{\text{Map}_i^t\}_{t=1}^{T}$ as input.
The interaction information at each time step is passed through a 2D convolutional layer (Conv2D) with ReLU activation and a maximum pooling layer (MaxPool) to learn the spatial features among all the agents. The output of MaxPool at each time step is flattened and concatenated along the time axis to form a time-distributed feature vector. Then, the feature vector is fed forward to a self-attention module to learn the interaction information with an attention mechanism. Here, we adopt the multi-head attention method from the Transformer \cite{vaswani2017attention}, which linearly projects the input multiple times, runs several self-attention operations in parallel and concatenates their outputs.
The attention function is described as mapping a query and a set of key-value pairs to an output. The query ($Q$), keys ($K$) and values ($V$) are transformed from the spatial features, which are encoded in the above step, by linear transformations:
\begin{align*}
Q =& \pi(\text{Map})W_Q, ~W_Q \in \mathbb{R}^{D\times D_q}\\
K =& \pi(\text{Map})W_K, ~W_K \in \mathbb{R}^{D\times D_k}\\
V =& \pi(\text{Map})W_V, ~W_V \in \mathbb{R}^{D\times D_v}
\end{align*}
where $W_Q, W_K$ and $W_V$ are trainable parameters and $\pi(\cdot)$ denotes the encoding function of the dynamic maps. $D_q, D_k$ and $D_v$ are the dimensions of the query, key and value vectors (they are the same in our implementation).
Then the self-attended features are calculated as:
\begin{equation}
\label{eq:attention}
\text{Attention}(Q, K, V) = \text{softmax}(\frac{QK^T}{\sqrt{D_k}})V
\end{equation}
This operation is also called \emph{scaled dot-product attention}.
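A minimal sketch of Eq.~\eqref{eq:attention} applied to the encoded map features (PyTorch; dimensions are illustrative):
\begin{verbatim}
import torch

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(D_k)) V along the time axis."""
    scores = Q @ K.transpose(-2, -1) / K.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1) @ V

T, D = 8, 64                   # observed steps, feature dimension
feats = torch.randn(T, D)      # encoded dynamic-map features
W_q, W_k, W_v = (torch.randn(D, D) for _ in range(3))
out = attention(feats @ W_q, feats @ W_k, feats @ W_v)  # (T, D)
\end{verbatim}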
To improve the performance of the attention layer, \emph{multi-head attention} is applied:
\begin{align}
\label{eq:multihead}
\begin{split}
\text{MultiHead}(Q, K, V) &= \text{ConCat}(\text{head}_1,...,\text{head}_h)W_O \\
\text{head}_i &= \text{Attention}(QW_{Qi}, KW_{Ki}, VW_{Vi})
\end{split}
\end{align}
where $W_{Qi}\in \mathbb{R}^{D\times D_{qi}}$ denotes the linear transformation parameters for the query in the $i$-th self-attention head and $D_{qi} = \frac{D_{q}}{\#head}$. Note that $\#head$ must be a divisor of $D_{q}$. The same holds for $W_{Ki}$ and $W_{Vi}$. The output of the multi-head attention is obtained via a linear transformation with parameter $W_O$ applied to the concatenation of the outputs from all heads.
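A minimal sketch of the multi-head step using PyTorch's built-in module (the actual head count and dimensions of our implementation are not reproduced here):
\begin{verbatim}
import torch
import torch.nn as nn

D, heads = 64, 4               # D must be divisible by #heads
mha = nn.MultiheadAttention(embed_dim=D, num_heads=heads,
                            batch_first=True)
feats = torch.randn(1, 8, D)   # (batch, T, D) map features
attended, _ = mha(feats, feats, feats)  # self-attention: Q = K = V
\end{verbatim}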
The output of multi-head attention is passed to an LSTM which is used to encode the dynamic interconnection in time sequence.
Both the hidden states (the last output) from the motion LSTM and the interaction LSTM are concatenated and passed to a \textit{fc} layer for feature fusion, as the complete output of the X-Encoder, which is denoted as $\Phi_X$.
The Y-Encoder has the same structure as the X-Encoder and is used to encode both the target agent's motion and interaction information from the ground truth during training. The output of the Y-Encoder is denoted as $\Phi_Y$. It is worth mentioning that the dynamic maps are also leveraged in the Y-Encoder; however, they are not reconstructed by the Decoder (only the future trajectories are reconstructed). This extended structure distinguishes our model from the conventional CVAE structure \cite{kingma2013auto,kingma2014semi,sohn2015learning} and from the work of \cite{lee2017desire}, where input and output maintain the same form.
\subsection{Diverse Sample Generation}
\label{subsec:sample}
In the training phase, $\Phi_X$ and $\Phi_Y$ are concatenated and forwarded to two successive \textit{fc} layers followed by ReLU activation, and then passed to two parallel \textit{fc} layers to produce the mean and standard deviation of the distribution, which are used to re-parameterize $z$ as discussed in Sec.~\ref{subsec:cvae}.
Then, $\Phi_X$ is concatenated with $z$ as condition and fed to the following LSTM-based decoder to reconstruct $\mathbf{Y}$ sequentially.
The MSE loss ${\ell}_2 (\mathbf{\hat{Y}}, \mathbf{Y})$ (reconstruction loss) and the KL-divergence loss $\text{KL}(Q(z|\mathbf{Y}, \mathbf{X})||P(z))$ are used to train our model.
The MSE loss forces the reconstructed results to be as close as possible to the ground truth, and the KL-divergence loss forces the set of latent variables $z$ towards a Gaussian distribution.
During inference, Y-Encoder is removed and the X-Encoder works in the same way as in the training phase to extract information from observed trajectories. To generate a future prediction sample, a latent variable $z$ is sampled from $\mathcal{N}(\mathbf{0}, ~I)$ and concatenated with $\Phi_X$ (as condition) as the input of the decoder.
To generate diverse samples, this step is repeated $N$ times to generate $N$ samples of future prediction conditioned on $\Phi_X$.
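A minimal sketch of this inference-time sampling (PyTorch; the decoder argument, the latent dimension and the toy decoder in the usage line are placeholders, not the trained modules):
\begin{verbatim}
import torch

def predict_multipath(decoder, phi_x, n_samples=10, z_dim=32):
    """Draw z ~ N(0, I) n_samples times and decode each sample,
    conditioned on the X-Encoder output phi_x of shape (B, D)."""
    predictions = []
    for _ in range(n_samples):
        z = torch.randn(phi_x.size(0), z_dim)
        predictions.append(decoder(torch.cat([phi_x, z], dim=-1)))
    return predictions

# Toy usage with a stand-in decoder mapping the condition to 12 steps.
toy_decoder = lambda c: c[:, :24].view(-1, 12, 2)
preds = predict_multipath(toy_decoder, torch.randn(1, 64))
\end{verbatim}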
To summarize, the overall pipeline of Attentive Maps Encoder Network (AMENet) consists of four modules, namely, X-Encoder, Y-Encoder, Z-Space and Decoder.
Each of the modules uses different types of neural networks to process the motion information and dynamic maps information for multi-path trajectory prediction. Fig.~\ref{fig:framework} depicts the pipeline of the framework.
\subsection{Trajectory Ranking}
\label{subsec:ranking}
A bivariate Gaussian distribution is used to rank the multiple predicted trajectories $\hat{Y}^1,\cdots,\hat{Y}^N$ of each agent. At each time step $t'\in T'$, the predicted positions $({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})$, $n{\in}N$, of agent $i$ are used to fit a bivariate Gaussian distribution $\mathcal{N}({\mu}_{xy},\,\sigma^{2}_{xy}, \,\rho)^{t'}$. The predicted trajectories are sorted by their joint probability densities $p(\cdot)$ over the time axis using Eqs.~\eqref{eq:pdf} and \eqref{eq:sort}, where $\widehat{Y}^\ast$ denotes the most-likely prediction out of the $N$ predictions (a minimal sketch follows the equations).
\begin{align}
\label{eq:pdf}
P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'}) \approx p[({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})|\mathcal{N}({\mu}_{xy},\sigma^{2}_{xy},\rho)^{t'}]\\
\label{eq:sort}
\widehat{Y}^\ast = \underset{n}{\text{arg\,max}}\sum_{t'=1}^{T'}{\log}P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})
\end{align}
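A minimal sketch of this ranking (NumPy/SciPy; the small diagonal term for numerical stability is our addition):
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def most_likely(predictions):
    """Rank N predicted trajectories of shape (N, T, 2): fit a
    bivariate Gaussian over the N positions at each time step and
    return the trajectory with the highest summed log-density."""
    N, T, _ = predictions.shape
    log_p = np.zeros(N)
    for t in range(T):
        pts = predictions[:, t, :]
        mu = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(2)
        log_p += multivariate_normal.logpdf(pts, mean=mu, cov=cov)
    return predictions[np.argmax(log_p)]

best = most_likely(np.random.rand(10, 12, 2))  # 10 samples, 12 steps
\end{verbatim}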
\section{Experiments}
\label{sec:experiments}
In this section, we will introduce the benchmark which is used to evaluate our method, the evaluation metrics and the comparison of results from our method with the ones from the recent state-of-the-art methods. To further justify how each proposed module in our framework impacts the overall performance, we design a series of ablation models and discuss the results in details.
\subsection{Trajnet Benchmark Challenge Datasets}
\label{subsec:benchmark}
We verify the performance of the proposed model on the most challenging benchmark Trajnet\footnote{\url{http://trajnet.stanford.edu/}}. It is the most commonly used large-scale trajectory-based activity benchmark,
which provides a consistent evaluation system for fair comparison between submitted results from different proposed methods \cite{sadeghiankosaraju2018trajnet}.
The benchmark not only covers a wide range of datasets, but also includes various types of road users, from pedestrians to bikers, skateboarders, cars, buses, and golf cars, that navigate in a real world outdoor mixed traffic environment.
Trajnet provides trajectory data collected from 38 scenes with ground truth for training and data collected from another 20 scenes without ground truth for the challenge competition. Each scene presents a different traffic density and space layout for mixed traffic, which makes the prediction task more difficult than training and testing on the same space.
This strongly requires a model to generalize well in order to obtain an overall good performance.
Trajectories are in $x$ and $y$ coordinates in meters or pixels projected on a Cartesian space with 8 time steps for observation and 12 time steps for prediction. Each time step lasts 0.4 seconds.
However, the pixel coordinates are not on the same scale across all the datasets, including the challenge datasets. Without standardizing the pixels to the same scale, it is extremely difficult to train a model on one pixel scale and test it on others. Hence, we follow all the other models and only use the coordinates in meters.
In order to train and evaluate the proposed model, as well as the ablative models (see Sec~\ref{sec:ablativemodels}) with ground truth information, 6 datasets from different scenes with mixed traffic are selected as test datasets from the 38 training datasets. Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}. Please note that, the selected test datasets are also different from the remaining 32 training datasets, so we can monitor the training and conduct the evaluation locally in a similar manner as the 20 challenge datasets on the remote evaluation system.
Fig.~\ref{fig:trajectories} shows the visualization of the trajectories in each dataset.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{fig/trajectories_bookstore_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{fig/trajectories_coupa_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.31\textwidth]{fig/trajectories_deathCircle_0.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.28\textwidth]{fig/trajectories_gates_1.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.52\textwidth]{fig/trajectories_hyang_6.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.27\textwidth]{fig/trajectories_nexus_0.pdf}
\caption{Selected datasets for evaluating the proposed model, as well as the ablative models. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:trajectories}
\end{figure}
\subsection{Evaluation Metrics}
The mean average displacement error (MAD) and the final displacement error (FDE) are the two most commonly applied metrics to measure the performance in terms of trajectory prediction~\cite{pellegrini2009you,yamaguchi2011you,alahi2016social,gupta2018social,sadeghian2018sophie}. In addition, we also count a predicted trajectory as invalid if it collides with another trajectory.
\begin{itemize}
\item MAD is the aligned L2 distance from $Y$ (ground truth) to the corresponding prediction $\hat{Y}$ averaged over all time steps. We report the mean value for all the trajectories.
\item FDE is the L2 distance of the last position from $Y$ to the corresponding $\hat{Y}$. It measures a model's ability for predicting the destination and is more challenging as errors accumulate in time.
\item Count of collisions with linear interpolation. Since each discrete time step lasts \SI{0.4}{seconds}, similar to \citep{sadeghian2018sophie}, an intermediate position is inserted using linear interpolation to increase the granularity of the time steps. If one agent coexists with another agent and their distance at any given time step is less than \SI{0.1}{meter}, the encounter is counted as a collision. Once the predicted trajectory collides with another one, the prediction is invalid (see the sketch after this list).
\end{itemize}
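For concreteness, the three metrics can be sketched as follows in Python with NumPy (a minimal illustration of the definitions above; array shapes and helper names are our own and not part of the benchmark's reference implementation):
\begin{verbatim}
import numpy as np

def mad(y_true, y_pred):
    # y_true, y_pred: arrays of shape (T', 2) with (x, y) per time step
    return np.linalg.norm(y_true - y_pred, axis=1).mean()

def fde(y_true, y_pred):
    # L2 distance between the two final positions
    return np.linalg.norm(y_true[-1] - y_pred[-1])

def collides(pred_a, pred_b, threshold=0.1):
    # Insert one midpoint between consecutive steps (linear interpolation)
    # to double the temporal granularity, then flag encounters < 0.1 m.
    def densify(p):
        mid = (p[:-1] + p[1:]) / 2.0
        out = np.empty((2 * len(p) - 1, 2))
        out[0::2], out[1::2] = p, mid
        return out
    a, b = densify(pred_a), densify(pred_b)
    return bool(np.any(np.linalg.norm(a - b, axis=1) < threshold))
\end{verbatim}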
We evaluate the most-likely prediction and the best prediction $\text{top}@10$ for the multi-path trajectory prediction, respectively.
The most-likely prediction is selected by the trajectory ranking mechanism, see Sec~\ref{subsec:ranking}.
Best prediction $\text{top}@10$ means that, among the 10 predicted trajectories with the highest confidence, the one with the smallest MAD and FDE compared with the ground truth is selected as the best. When the ground truth is not available, i.\,e.,~on the Trajnet benchmark test datasets (see Sec~\ref{subsec:benchmark}), only the evaluation of the most-likely prediction is reported.
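The best-of-10 selection can then be sketched as follows (again our own illustration; in particular, combining MAD and FDE by their sum to pick a single sample is an assumption, since only the errors of the sample closest to the ground truth need to be reported):
\begin{verbatim}
import numpy as np

def best_of_n(y_true, y_preds):
    # y_true: (T', 2) ground truth; y_preds: (N, T', 2), e.g. N = 10
    mads = np.linalg.norm(y_preds - y_true, axis=2).mean(axis=1)
    fdes = np.linalg.norm(y_preds[:, -1] - y_true[-1], axis=1)
    best = int(np.argmin(mads + fdes))  # sample closest to ground truth
    return y_preds[best], mads[best], fdes[best]
\end{verbatim}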
\subsection{Recent State-of-the-Art Methods}
\label{sec:stoamodels}
We compare the proposed model with the most influential recent state-of-the-art models published on the benchmark challenge for trajectory prediction on a consistent evaluation system up to 05/06/2020, in order to guarantee the fairness of comparison. It is worth mentioning that given the large number of submissions, only the very top methods with published papers are listed here\footnote{More details of the ranking can be found at \url{http://trajnet.stanford.edu/result.php?cid=1&page=2&offset=10}}.
\begin{itemize}
\item Social LSTM~\cite{alahi2016social}, proposes a social pooling layer in which a rectangular occupancy grid is used to pool the existence of the neighbors at each time step. After that, many following works \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} adopt their social pooling layer for this task, including the ablative model AOENet (see Sec~\ref{sec:ablativemodels}).
\item Social GAN~\cite{gupta2018social}, applies generative adversarial network for multiple future trajectories generation which is essentially different from previous works. It takes the interactions of all agents into account.
\item MX-LSTM~\cite{hasan2018mx}, takes the position information and the visibility attentional area driven by the head pose (i.\,e.,~the direction a person's head is facing) as input and proposes a new pooling mechanism that considers the target agent's view frustum of attention for modeling the interactions with all the other neighboring agents.
\item Social Force~\cite{helbing1995social}, is one of the most well-known approaches for pedestrian agent simulation in crowded space. It uses different forces on the basis of classical mechanics to mimic human behavior. The repulsive force prevents the target agent from colliding with others or obstacles, and the attractive force drives the agent towards its destination or companions.
\item SR-LSTM~\cite{zhang2019sr}, proposes an LSTM based model with a states refinement module to align all the coexisting agents together and adaptively refine the state of each participant through a message passing framework. A social-aware information selection mechanism with element-wise gate and agent-wise attention layer is used to extract the social effects between the target agent and its neighboring agents.
\item RED~\cite{becker2018evaluation}, uses a Recurrent Neural Network (RNN) Encoder with Multilayer Perceptron (MLP) for trajectory prediction. The input of RED is the offsets of the positions for each trajectory.
\item Ind-TF~\cite{giuliari2020transformer}, applies the Transformer network \cite{vaswani2017attention} for trajectory prediction. One big difference from the aforementioned models is that the prediction only depends on the attention mechanism for the target agent's motion and considers no social interactions with other neighboring agents.
\end{itemize}
\subsection{Ablative Models}
\label{sec:ablativemodels}
In order to analyze the impact of each component, i.\,e.,~dynamic maps, self-attention, and the extended structure of CVAE, three ablative models are carried out in comparison with the proposed model.
\begin{itemize}
\item AMENet, uses dynamic maps in both X-Encoder (observation time) and Y-Encoder (prediction time). This is the proposed model.
\item AOENet, substitutes dynamic maps with occupancy grid \citep{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both X-Encoder and Y-Encoder. This comparison is used to validate the contribution from the dynamic maps.
\item MENet, removes self-attention for the dynamic maps. This comparison is used to validate the contribution of the self-attention mechanism for the dynamic maps along the time axis.
\item ACVAE, only uses dynamic maps in X-Encoder. It is equivalent to CVAE ~\citep{kingma2013auto,kingma2014semi,sohn2015learning} with self-attention. This comparison is used to validate the contribution of the extended structure for processing the dynamic maps information in Y-Encoder.
\end{itemize}
\section{Results}
In this section, we will discuss the results of the proposed model in comparison with several recent state-of-the-art models published on the benchmark challenge, as well as the ablative models. We will also discuss the performance of multi-path trajectory prediction with the latent space.
\subsection{Results on Benchmark Challenge}
\label{sec:benchmarkresults}
Table~\ref{tb:results} lists the top performances published on the benchmark challenge measured by MAD, FDE and overall average $(\text{MAD} + \text{FDE})/2$. AMENet wins the first position and surpasses the models published before 2020 significantly.
Compared with the most recent model Ind-TF \citep{giuliari2020transformer}, which is based on Transformer networks, our model achieves the same state-of-the-art MAD of 0.356 meters. Moreover, our model achieves the lowest FDE, reducing the error from 1.197 to 1.183 meters. This demonstrates the model's ability to predict the most accurate destination in 12 time steps.
It is worth mentioning that even though our model predicts multi-path trajectories for each agent, the performance achieved on the benchmark challenge is from the most-likely prediction by ranking the multi-path trajectories using the proposed ranking mechanism. The benchmark evaluation metrics do not provide the information about the collisions of trajectories predicted. It will be reported in the ablative models in the following sub-section.
\begin{table}[t!]
\centering
\caption{Comparison between the proposed model and the state-of-the-art models. Best values are highlighted in boldface.}
\begin{tabular}{lllll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & MAD [m]$\downarrow$ &Year\\
\hline
Social LSTM~\cite{alahi2016social} & 1.3865 & 3.098 & 0.675 & 2016\\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 & 2018\\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 & 2018\\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 & 1995\\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 & 2019\\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 & 2018\\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} & 2020\\
Ours (AMENet)\tablefootnote{The name of the proposed model AMENet was called \textit{ikg\_tnt} by the abbreviation of our institutes on the ranking list at \url{http://trajnet.stanford.edu/result.php?cid=1}.} & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} & 2020 \\
\hline
\end{tabular}
\label{tb:results}
\end{table}
\subsection{Results for Ablative Models}
\label{sec:ablativestudies}
In this sub-section, the contribution of each component in AMENet will be discussed via the ablative models. More details of the dedicated structure of each ablative model can be found in Sec~\ref{sec:ablativemodels}. Table~\ref{tb:resultsablativemodels} shows the quantitative evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE/\#collisions on most-likely prediction.
The comparison between AOENet and AMENet shows that when we replace the dynamic maps with the occupancy grid, the errors measured by MAD and FDE increase by a remarkable margin across all the datasets. The number of invalid trajectories with detected collisions also increases when the dynamic maps are substituted by the occupancy grid. This comparison proves that the dynamic maps with the neighboring agents' motion information, namely, orientation, travel speed and position relative to the target agent, can capture more detailed and accurate interaction information.
The comparison between MENet and AMENet shows that when we remove the self-attention mechanism, the errors measured by MAD and FDE also increase by a remarkable margin across all the datasets, and the number of collisions slightly increases. Without self-attention, the model may have difficulty in learning the behavior patterns of how the target agent interacts with its neighboring agents from one time step to other time steps. This proves the assumption that, the self-attention enables the model to learn the global dependency over different time steps.
The comparison between ACVAE and AMENet shows that when we remove the extended structure in Y-Encoder for dynamic maps, the errors measured by MAD and especially FDE increase significantly across all the datasets, as does the number of collisions. The extended structure provides the model with the ability to process the interaction information even in prediction time during training. It improves the model's performance, especially for predicting more accurate destinations. This improvement has also been confirmed by the benchmark challenge (see Table~\ref{tb:results}). One interesting observation from comparing ACVAE with AOENet and MENet is that ACVAE performs much better than both, measured by MAD and FDE. This observation further proves that, even without the extended structure in Y-Encoder, the dynamic maps with self-attention are very beneficial for interpreting the interactions between a target agent and its neighboring agents. Their robustness has been demonstrated by the ablative models across various datasets.
\begin{table}[hbpt!]
\setlength{\tabcolsep}{3pt}
\centering
\small
\caption{Evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE/\#collisions on the most-likely prediction. Best values are highlighted in bold face.}
\begin{tabular}{lllll}
\\ \hline
Dataset & AMENet & AOENet & MENet & ACVAE \\ \hline
bookstore3 & \textbf{0.486}/\textbf{0.979}/\textbf{0} & 0.574/1.144/\textbf{0} & 0.576/1.139/\textbf{0} & 0.509/1.030/2 \\
coupa3 & \textbf{0.226}/\textbf{0.442}/6 & 0.260/0.509/8 & 0.294/0.572/2 & 0.237/0.464/\textbf{0} \\
deathCircle0 & \textbf{0.659}/\textbf{1.297}/\textbf{2} & 0.726/1.437/6 & 0.725/1.419/6 & 0.698/1.378/10 \\
gates1 & \textbf{0.797}/\textbf{1.692}/\textbf{0} & 0.878/1.819/\textbf{0} & 0.941/1.928/2 & 0.861/1.823/\textbf{0} \\
hyang6 & \textbf{0.542}/\textbf{1.094}/\textbf{0} & 0.619/1.244/2 & 0.657/1.292/\textbf{0} & 0.566/1.140/\textbf{0} \\
nexus0 & \textbf{0.559}/\textbf{1.109}/\textbf{0} & 0.752/1.489/\textbf{0} & 0.705/1.140/\textbf{0} & 0.595/1.181/\textbf{0} \\
Average & \textbf{0.545}/\textbf{1.102}/\textbf{1.3} & 0.635/1.274/2.7 & 0.650/1.283/1.7 & 0.578/1.169/2.0 \\ \hline
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
Fig.~\ref{fig:abl_qualitative_results} shows the qualitative results of the proposed model AMENet in comparison with the ablative models across the datasets.
In general, all the models can predict realistic trajectories in different scenes, e.\,g.,~intersections and roundabouts, of various traffic density and motion patterns, e.\,g.,~standing still or moving fast. After a short observation time, i.\,e.,~ 8 time steps, all the models can capture the general speed and heading direction for agents located in different areas in the space.
From a closer look we can see that AMENet generates more accurate trajectories than the other models, which are very close to or even completely overlap with the corresponding ground truth trajectories. Compared with the ablative models, AMENet predicts more accurate destinations, which is in line with the quantitative results shown in Table~\ref{tb:resultsablativemodels}. One very clear example in \textit{hyang6} (left figure in Fig.~\ref{fig:abl_qualitative_results}, in the third row) shows that when the fast-moving agent changes its motion, AOENet and MENet struggle to predict its travel speed and ACVAE struggles to predict its destination. On the other hand, the prediction from AMENet is very close to the ground truth.
Nevertheless, our models have limited performance in predicting abnormal trajectories, such as suddenly turning around or changing speed drastically. Such scenarios can be found in the lower right corner of \textit{gates1} (right figure in Fig.~\ref{fig:abl_qualitative_results}, in the second row). Sudden maneuvers of agents are very difficult to forecast, even for human observers.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{scenarios/bookstore_3290.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{scenarios/coupa_3327.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.31\textwidth]{scenarios/deathCircle_0000.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.28\textwidth]{scenarios/gates_1001.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.52\textwidth]{scenarios/hyang_6209.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.27\textwidth]{scenarios/nexus_0038.pdf}
\caption{Trajectories predicted by AMENet (AME), AOENet (AOE), MENet (ME), ACVAE (CVAE) and the corresponding ground truth (GT) trajectories in different scenes. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:abl_qualitative_results}
\end{figure}
\subsection{Results for Multi-Path Prediction}
\label{sec:multipath-selection}
In this sub-section, we will discuss the performance of multi-path prediction with the latent space.
Instead of generating a single prediction, AMENet generates multiple plausible trajectories by sampling the latent variable $z$ multiple times (see Sec~\ref{subsec:cvae}). During training, the motion information and interaction information in observation and ground truth are encoded into the so-called Z-Space (see Fig.~\ref{fig:framework}). The KL-divergence loss forces $z$ into a Gaussian distribution. Fig.~\ref{fig:z_space} shows the visualization of the Z-Space in two dimensions, with $\mu$ visualized on the left and $\log\sigma$ visualized on the right. From the figure we can see that the training phase successfully re-parameterized the latent space into a Gaussian distribution that captures the stochastic properties of the agents' behaviors. When the Y-Encoder is not available at inference time, the well-trained Z-Space, in turn, enables us to randomly sample a latent variable $z$ from the Gaussian distribution multiple times for generating more than one plausible future trajectory.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=.6\textwidth]{fig/z_space.pdf}
\caption{Z-Space of two dimensions with $\mu$ visualized on the left and $\log\sigma$ visualized on the right. It is trained to follow a $\mathcal{N}(0, 1)$ distribution. The variance is visualized in logarithmic space and is very close to zero.}
\label{fig:z_space}
\end{figure}
Table~\ref{tb:multipath} shows the quantitative results for multi-path trajectory prediction. Predicted trajectories are evaluated as $\text{top}@10$ with the prior knowledge of the corresponding ground truth, and as most-likely when the ground truth is not available. Compared with the most-likely prediction, the $\text{top}@10$ prediction yields similar but better performance. This indicates that generating multiple trajectories, e.\,g.,~ten trajectories, increases the chance to narrow down the errors from the prediction to the ground truth. Meanwhile, the ranking mechanism (see Sec~\ref{subsec:ranking}) guarantees the quality of the selected one.
Fig.~\ref{fig:multi-path} demonstrates the effectiveness of multi-path trajectory prediction. We can see that in roundabouts, the interactions between different agents are full of uncertainties and each agent has multiple possibilities for its future path. Even though our method predicts the trajectory correctly, the predicted trajectories diverge more widely at later time steps. This also shows that the ability to predict multiple plausible trajectories is important in the task of trajectory prediction, because of the increasing uncertainty of future movements.
\begin{table}[hbpt!]
\centering
\small
\caption{Evaluation of multi-path trajectory prediction using AMENet. Predicted trajectories are ranked by $\text{top}@10$ and most-likely, and errors are measured by MAD/FDE/\#collisions.}
\begin{tabular}{lll}
\\ \hline
Dataset & Top@10 & Most-likely \\ \hline
bookstore3 & 0.477/0.961/0 & 0.486/0.979/0 \\
coupa3 & 0.221/0.432/0 & 0.226/0.442/6 \\
deathCircle0 & 0.650/1.280/6 & 0.659/1.297/2 \\
gates1 & 0.784/1.663/2 & 0.797/1.692/0 \\
hyang6 & 0.534/1.076/0 & 0.542/1.094/0 \\
nexus0 & 0.642/1.073/0 & 0.559/1.109/0 \\
Average & 0.535/1.081/1.3 & 0.545/1.102/1.3 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.514\textwidth]{multi_preds/deathCircle_0240.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.476\textwidth]{multi_preds/gates_1001.pdf}
\caption{Multi-path predictions from AMENet in \textit{deathCircle0} (left) and \textit{gates1} (right).}
\label{fig:multi-path}
\end{figure}
\section{Studies on Long-Term Trajectory Prediction}
\label{sec:longterm}
In this section, we investigate the model's performance on predicting long-term trajectories in real-world mixed traffic situations in different intersections.
Since the Trajnet benchmark (see Sec~\ref{subsec:benchmark}) only provides trajectories of 8 time steps for observation and 12 time steps for prediction, we instead use the newly published large-scale open-source dataset inD\footnote{\url{https://www.ind-dataset.com/}} for this task. inD was collected using drones at four different intersections in Germany for mixed traffic in 2019 by Bock et al. \cite{inDdataset}. In total, there are 33 datasets from different intersections.
We follow the same processing format as the Trajnet benchmark and downsample the time steps of inD from the video frame rate of \SI{25}{fps} to \SI{2.5}{fps}, i.\,e.,~0.4 seconds per time step. We obtain the same sequence length (8 time steps) of each trajectory for observation and up to 32 time steps for prediction. One third of the datasets from each intersection is selected for testing the performance of AMENet on long-term trajectory prediction and the remaining datasets are used for training.
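The down-sampling step itself is straightforward; the following sketch (our own illustration, assuming frame-indexed positions stored in a NumPy array) keeps every tenth frame:
\begin{verbatim}
import numpy as np

def downsample(track, src_fps=25, dst_fps=2.5):
    # track: (num_frames, 2) positions recorded at src_fps;
    # 25 / 2.5 = 10 frames correspond to one 0.4 s time step
    step = int(src_fps / dst_fps)
    return track[::step]

# 8 observed and up to 32 predicted time steps are then sliced
# from the down-sampled sequence.
\end{verbatim}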
Fig.~\ref{fig:AMENet_MAD} shows the trend of errors measured by MAD, FDE and the number of collisions in relation to the number of time steps. The performance of AMENet at time step 12 is comparable with the performance on the Trajnet datasets for both the $\text{top}@10$ and the most-likely prediction. On the other hand, the errors measured by MAD and FDE increase as the number of time steps grows. Behaviors of road users become more unpredictable over time, and predicting long-term trajectories based only on a short observation is more challenging than short-term prediction. One interesting observation concerns the performance measured by the number of collisions, or invalid predicted trajectories. Overall, the number of collisions is relatively small and increases with the number of time steps for the $\text{top}@10$ prediction. However, the most-likely prediction leads to fewer collisions and shows no consistent ascending trend over the time steps. One possible explanation could be that the $\text{top}@10$ prediction is selected by comparison with the corresponding ground truth, without any consideration of collisions. The most-likely prediction, on the other hand, selects the average prediction out of multiple predictions for each trajectory using a bivariate Gaussian distribution (see Sec~\ref{subsec:ranking}). It yields better results regarding safety due to this majority-voting mechanism than merely selecting the best prediction based on the distance to the ground truth.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_MAD.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_FDE.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_collision.pdf}
\caption{AMENet tested on InD for different predicted sequence lengths measured by MAD, FDE and number of collisions, respectively.}
\label{fig:AMENet_MAD}
\end{figure}
Fig.~\ref{fig:qualitativeresults} shows the qualitative performance of AMENet for long-term trajectory prediction in the big intersection with weakened traffic regulations in inD. From Scenario-A in the left column we can see that AMENet generates accurate predictions for 12 and 16 time steps (visualized in the first two rows) for two pedestrians. When they encounter each other at 20 time steps (third row), the model correctly predicts that the left pedestrian yields. But the predicted trajectories deviate slightly from the ground truth and lead to a very close interaction. With a further increase of time steps, the prediction becomes less accurate regarding travel speed and heading direction. Scenario-B in the right column shows similar performance. The model has limited performance for the fast-moving agent, i.\,e.,~the vehicle in the middle of the street.
\begin{figure} [bpht!]
\centering
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_12.pdf}
\label{subfig:s-a-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_12.pdf}
\label{subfig:s-b-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_16.pdf}
\label{subfig:s-a-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_16.pdf}
\label{subfig:s-b-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_20.pdf}
\label{subfig:s-a-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_20.pdf}
\label{subfig:s-b-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_24.pdf}
\label{subfig:s-a-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_24.pdf}
\label{subfig:s-b-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_28.pdf}
\label{subfig:s-a-28}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_28.pdf}
\label{subfig:s-b-28}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/29_Trajectories052_32.pdf}
\caption{\small{Scenario-A 12 to 32 steps}}
\label{subfig:s-a-32}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/27_Trajectories046_32.pdf}
\caption{\small{Scenario-B 12 to 32 steps}}
\label{subfig:s-b-32}
\end{subfigure}
\caption{\small{Examples for predicting different sequence lengths in Scenario-A (left column) and Scenario-B (right column). From top to bottom rows, the prediction lengths are 12, 16, 20, 24, 28 and 32 time steps. The observation sequence length is 8 time steps.}}
\label{fig:qualitativeresults}
\end{figure}
Note that the predictions for longer horizons are not simple extensions of the shorter ones: since the ground truth is fed into the Y-Encoder during training, each prediction horizon corresponds to a different ground-truth input, so the models trained for different horizons may generate noticeably different trajectories.
To summarize, long-term trajectory prediction based on a short observation is extremely challenging. The behaviors of different road users become more unpredictable as time progresses. In future work, in order to push the time horizon from 12 steps (4.8 seconds) to longer intervals, extra information may be required to update the observation and improve the performance of the model. For example, if new positions of the target agents are acquired at later time steps, the observation time horizon can be shifted accordingly; this is similar to the mechanism in the Kalman filter \cite{kalman1960new}, where the prediction is calibrated with incoming measurements to improve long-term performance.
\section{Conclusions}
In this paper, we present a generative model called Attentive Maps Encoder Networks (AMENet) that uses motion information and interaction information for multi-path trajectory prediction of mixed traffic in various real-world environments.
The latent space learnt by the X-Encoder and Y-Encoder for both sources of information enables the model to capture the stochastic properties of motion behaviors for predicting multiple plausible trajectories after a short observation time.
We propose an innovative way---dynamic maps---to learn the social effects between agents during interaction. The dynamic maps capture accurate interaction information by encoding the neighboring agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of interaction over different time steps.
The efficacy of the model has been validated on the most challenging benchmark Trajnet that contains various datasets. Our model not only achieves the state-of-the-art performance, but also wins the first place on the leaderboard for predicting 12 time-step positions of 4.8 seconds.
Each component of AMENet is validated via a series of ablative models.
In order to further investigate the model's performance on long-term trajectory prediction, the newly published benchmark inD is utilized. The performance of AMENet is on a par with the one on Trajnet for 12 time steps, but gradually degrades as the prediction horizon extends to 32 time-step positions of 12.8 seconds.
In future work, we will extend our prediction model towards safety applications, for example, using the predicted trajectories to calculate time-to-collision \cite{perkins1968traffic} and detecting abnormal trajectories by comparing the anticipated/predicted trajectories with the actual ones.
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction of road users in the near future is a crucial task in intelligent transportation systems (ITS) \cite{morris2008survey},
autonomous driving \cite{rudenko2019human},
mobile robot applications \cite{mohanan2018survey},
etc.
This task enables an intelligent system to foresee the behaviors of road users and make a reasonable and safe decision for its next operation, especially in urban mixed-traffic zones (a.k.a. shared spaces \cite{reid2009dft}).
The trajectory of a non-erratic agent is often referred to as a plausible (e.\,g.,~collision-free and energy-efficient) and socially-acceptable (e.\,g.,~considering social relations, rules and norms between agents) sequence of positions in 2D or 3D, aligned along a timing law.
Trajectory prediction is generally defined as to predict the plausible and socially-acceptable positions of target agents at each time step within a predefined future time interval relying on observed partial trajectories over a certain period of time \cite{alahi2016social,lee2017desire,gupta2018social,sadeghian2018sophie,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,al2018move,zhang2019sr,cheng2020mcenet,johora2020agent,giuliari2020transformer}.
The target agent is defined as the dynamic object for which the actual prediction is made, mainly pedestrian, cyclist, vehicle and other road users \cite{rudenko2019human}.
The prediction task can be categorized as short-term or long-term trajectory prediction depending on the prediction time horizon. In this study, horizons under five seconds are categorized as short-term, otherwise as long-term.
A typical prediction process of mixed traffic is exemplified in Fig.~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.8in 2.6in 0.6in, width=1\textwidth]{fig/first_fig.pdf}
\caption{Predicting plausible and socially-acceptable positions of agents (e.\,g.,~target agent in black) at each time step within a predefined future time interval by observing their past trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
How to effectively and accurately predict trajectory of mixed agents remains an unsolved problem in many research communities. The challenges are mainly from three aspects: 1) the complex behavior and uncertain moving intent of each agent, 2) the presence and interactions between the target agent and its neighboring agents and 3) the multi-modality of paths: there are usually more than one socially-acceptable paths that an agent could move in the future.
There exists a large body of literature that focuses on addressing part or all of the aforementioned challenges in order to make accurate trajectory prediction.
The traditional methods model the interactions based on hand-crafted features, such as force-based rules \cite{helbing1995social}, Game Theory \cite{johora2020agent}, or constant velocity \cite{best1997new}.
The performance is crucially affected by the quality of manually designed features and they lack generalizability \cite{cheng2020trajectory}.
Recently, boosted by the development of deep learning technologies \cite{lecun2015deep}, data-driven methods
keep reporting new state-of-the-art performance on benchmarks \cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,cheng2020mcenet}.
For instance, Recurrent Neural Networks (RNNs) based models are used to model the interactions between agents and predict the future positions in sequence \cite{alahi2016social,wu2017modeling,xue2018ss,bartoli2018context,fernando2018soft}.
However, these works design a discriminative model and produce a deterministic outcome for each agent. The models tend to predict the ``average" trajectories because the commonly used objective function minimizes the Euclidean distance between the ground truth and the predicted outputs.
To predict multiple socially-acceptable trajectories for each target agent, different generative models are proposed, such as Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} based framework Social GAN \cite{gupta2018social} and Conditional Variational Auto-Encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning} based framework DESIRE \cite{lee2017desire}.
In spite of the great success in this domain, most of these methods are designed for a specific type of agent: the pedestrian.
In reality, pedestrians, cyclists and vehicles are the three major types of agents, and their behaviors affect each other. To make precise trajectory predictions, their interactions should be considered jointly. Besides, the interactions between the target agent and the others are often treated equally. But different agents may not equally affect how the target agent moves in the near future. For instance, nearer agents should affect the target agent more strongly than more distant ones, and a target vehicle is affected more by pedestrians who tend to cross the road than by vehicles nearly behind it. Last but not least, the robustness of the models is not fully tested in real-world outdoor mixed-traffic environments (e.\,g.,~roundabouts, intersections) with various unseen traffic situations: can a model trained on some spaces predict accurate trajectories in other, unseen spaces?
To address the aforementioned limitations, we propose this work, named \emph{Attentive Maps Encoder Network} (AMENet), which leverages the ability of generative models to generate diverse patterns of future trajectories and models the interactions between the target agent and the others as attentive dynamic maps.
The dynamic map manipulates the information extracted from the neighboring agents' orientation, speed and position in relation to the target agent at each time step for interaction modeling and the attention mechanism enables the model to automatically focus on the salient features extracted over different time steps.
An overview of our proposed framework is depicted in Fig.~\ref{fig:framework}. (1) Two encoders are designed for learning representations of the observed trajectories (X-Encoder) and the future trajectories (Y-Encoder), respectively, and they have an identical structure. Taking the X-Encoder as an example (see Fig.~\ref{fig:encoder}), the encoder first extracts the motion information from the target agent (coordinate offsets in sequential time steps) and the interaction information with the other agents, respectively. Particularly, to explore the dynamic interactions, the motion information of each agent is characterized by its orientation, speed and position at each time step. Then a self-attention mechanism is utilized over all agents to extract the dynamic interaction maps. This is where the name \emph{Attentive Maps Encoder} comes from. The motion and interaction information along the observed time interval are collected by two independent Long Short-Term Memories (LSTMs) and then fused together. (2) The output of the Y-Encoder is fed into a variational auto-encoder to learn the latent space of the future trajectory distribution, which is assumed to be a Gaussian distribution. (3) The output of the variational auto-encoder module (obtained by re-parameterization of the encoded features during the training phase and by re-sampling from the learned latent space during the inference phase) is fed forward to the following decoder, together with the output of the X-Encoder as condition, to forecast the future trajectory, which works in the same way as a conditional variational auto-encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning}.
The main contributions are summarised as follows:
\begin{itemize}
\item[1] We propose a generative framework AMENet for multi-path trajectory prediction.
AMENet extends a CVAE module that is trained to learn a latent space for encoding the motion and interaction information in both observation and future time. It predicts multiple plausible future trajectories conditioned on the observed information concatenated with a stochastic variable repeatedly sampled from the latent space.
\item[2] We design a novel module, \emph{attentive maps encoder} that learns spatio-temporal interconnections among agents based on dynamic maps using a self-attention mechanism.
The attention mechanism models the dependency without regard to the distance between two positions \cite{vaswani2017attention} and the global interactions are considered rather than the local ones \cite{wang2018non}.
\item[3] Our model is able to predict trajectories of heterogeneous road users, i.\,e.,~pedestrians, cyclists and vehicles, rather than only focusing on pedestrians, in various unseen real-world environments.
\end{itemize}
The efficacy of the proposed method has been validated on the most challenging benchmark \emph{Trajnet} \cite{sadeghiankosaraju2018trajnet} that contains various datasets in various environments for short-term trajectory prediction. Our method reports the new state-of-the-art performance and wins the first place on the leader board.
Its performance for predicting long-term trajectories (up to 32 time-step positions of 12.8 seconds) is also investigated on the benchmark inD \cite{inDdataset} that contains mixed traffic at different intersections.
Each component of the proposed model is validated via a series of ablative models.
\section{Related Work}
Our work focuses on predicting trajectories of mixed road agents.
In this section we discuss the recent related works mainly in the following aspects: modeling this task as a sequence prediction, modeling the interactions between agents for precise path prediction, modeling with attention mechanisms, and utilizing generative models to predict multiple plausible trajectories. Our work concentrates on modeling the dynamic interactions between agents and training a generative model to predict multiple plausible trajectories for each target agent.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling the trajectory prediction as a sequence prediction task is the most popular approach. The 2D/3D position of a target agent is predicted step by step along the time axis.
The widely applied models include, but are not limited to, linear regression and Kalman filters \cite{harvey1990forecasting}, Gaussian processes \cite{tay2008modelling} and Markov decision processes \cite{kitani2012activity}.
However, these traditional methods largely rely on the quality of manually designed features and are unable to tackle large-scale data.
Recently, data-driven deep learning technologies, especially RNN-based models and their variants, e.\,g.,~Long Short-Term Memories (LSTMs) \cite{hochreiter1997long} and Gated Recurrent Units (GRUs) \cite{cho2014learning}, have demonstrated a powerful ability to automatically extract representations from massive data and are used to learn the complex patterns of trajectories.
In recent years, RNN-based models keep pushing the edge of accuracy of predicting pedestrian trajectory \cite{alahi2016social,xu2018encoding,bhattacharyya2018long,gupta2018social,sadeghian2018sophie,zhang2019sr,liang2019peeking}, as well as other types of road users \cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
In this work, we also utilize LSTMs to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of an agent is not only decided by its own willing but also crucially affected by the interactions between it and the other agents. Therefore, effectively modeling the social interactions among agents is important for accurate trajectory prediction.
One of the most influential approaches for modeling interactions is the Social Force Model \cite{helbing1995social}, which models the repulsive force for collision avoidance and the attractive force for social connections. Game Theory is utilized to simulate the negotiation between different road users \cite{johora2020agent}.
Such rule-based interaction modelings have been incorporated into deep learning models. Social LSTM proposes an occupancy grid to locate the positions of close neighboring agents and uses a social pooling layer to encode the interaction information for trajectory prediction \cite{alahi2016social}. Many following works design their specific ``occupancy'' grid for interaction modeling \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interaction between individual agent and group agents with social connections and report better performance.
Meanwhile, different pooling mechanisms are proposed for interaction modeling. For example, Social GAN \cite{gupta2018social} embeds relative positions between the target and all the other agents with each agent's motion hidden state and uses an element-wise pooling to extract the interaction between all the pairs of agents, not only the close neighboring agents;
Similarly, all the agents are considered in SR-LSTM \cite{zhang2019sr}. It proposes a states refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework. The motion gate and agent-wise attention are used to select the most important information from neighboring agents.
Most of the aforementioned models extract interaction information based on the relative position of the neighboring agents in relation to the target agent.
The dynamics of interactions are not fully captured both in spatial and temporal domains.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Recently, different attention mechanisms \cite{bahdanau2014neural,xu2015show,vaswani2017attention,wang2018non} are incorporated in neural networks for learning complex spatio-temporal interconnections.
Particularly, their effectiveness has been proven in learning powerful representations from sequence information in tasks such as neural machine translation \cite{bahdanau2014neural,vaswani2017attention} and image caption generation \cite{xu2015show}, and they have been widely utilized in other domains \cite{anderson2018bottom,giuliari2020transformer,he2020image}.
Some of the recent state-of-the-art methods also have adapted attention mechanisms for sequence modeling and interaction modeling to predict trajectories.
For example, a soft attention mechanism \cite{xu2015show} is incorporated in LSTM to learn the spatio-temporal patterns from the position coordinates \cite{varshneya2017human}. Similarly, SoPhie \cite{sadeghian2018sophie} applies two separate soft attention modules, one called physical attention for learning the salient features between agent and scene and the other called social attention for modeling agent to agent interactions. In the MAP model \cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work Ind-TF \cite{giuliari2020transformer} replaces the RNN with a Transformer \cite{vaswani2017attention} for modeling trajectory sequences.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism \cite{vaswani2017attention} along the time axis.
The self-attention mechanism is defined as mapping a query and a set of key-value pairs to an output. First, the similarity between the query and each key is computed to obtain a weight. The weights associated with all the keys are then normalized via, e.\,g.,~a softmax function and are applied to weigh the corresponding values for obtaining the final attention.
Unlike RNN-based structures that propagate information along the symbol positions of the input and output sequences, which leads to increasing difficulties of information propagation in long sequences,
the self-attention mechanism relates different positions of a single sequence in order to compute a representation of the entire sequence. The dependency between the input and output is not restricted by their positional distance.
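For reference, a single-head scaled dot-product self-attention over a sequence can be sketched as follows (our own NumPy illustration of the mechanism described above, not the exact implementation used in our model; the projection matrices are assumed to be learned elsewhere):
\begin{verbatim}
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (T, d) input sequence; Wq, Wk, Wv: (d, d_k) learned projections
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax per query
    return weights @ V                         # weighted sum of values
\end{verbatim}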
\subsection{Generative Models}
\label{sec:rel-generative}
To date, VAE \cite{kingma2013auto} and GAN \cite{goodfellow2014generative} and their variants (e.\,g.,~conditional VAE \cite{kingma2014semi,sohn2015learning})
are the most popular generative models in the era of deep learning.
They are both able to generate diverse outputs by sampling from noise. The essential difference is that, GAN trains a generator to generate a sample from noise and a discriminator to decide whether the generated sample is real enough. The generator and discriminator enhance mutually during training.
In contrast, VAE is trained by maximizing the lower bound of training data likelihood for learning a latent space that approximates the distribution of the training data.
Generative models have shown promising performance in different tasks, e.\,g.,~super resolution, image-to-image translation and image generation, as well as trajectory prediction \cite{lee2017desire,gupta2018social,cheng2020mcenet}.
Predicting a single trajectory may not be sufficient due to the uncertainties of road users' behavior.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performance of the two modules are enhanced mutually and the generator is able to generate trajectories that are as precise as the real ones. Similarly, Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
Lee~et al.~\cite{lee2017desire} propose a CVAE model to predict multiple plausible trajectories.
Cheng~et al.~\cite{cheng2020mcenet} propose a CVAE like model named MCENet to predict multiple plausible trajectories conditioned on the scene context and previous information of trajectories.
In this work, we incorporate a CVAE module to learn a latent space of possible future paths for predicting multiple plausible future trajectories conditioned on the observed past trajectories.
Our work essentially distinguishes itself from the above generative models in the following points: (1) We insert not only the ground truth trajectory, but also the dynamic maps associated with the ground truth trajectory into the CVAE module during training, which differs from the conventional CVAE that follows a consistent input and output structure (e.\,g.,~the input and output are both trajectories in the same structure \cite{lee2017desire}).
(2) Our method does not explore information from images, i.\,e.,~visual information is not used and future trajectories are predicted based only on the map data (i.\,e.,~position coordinates).
Therefore, it is computationally more efficient than methods that require information from images.
In addition, our model is trained on some available spaces but is validated on other unseen spaces. The visual information, such as vegetation, curbside and buildings, is very different from one space to another. Over-trained visual features, on the other hand, may jeopardize the model's robustness and lead to poor performance in an unseen space with a totally different environment \cite{cheng2020mcenet}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 3.5in 0in 0.5in, width=1\textwidth]{fig/model_framework3.pdf}
\caption{An overview of the proposed framework. It consists of four modules: X-Encoder and Y-Encoder are used for encoding the observed and the future trajectories, respectively. They have the same structure. The Sample Generator produces diverse samples of future generations. The Decoder module is used to decode the features from the produced samples in the last step and predicts the future trajectory sequentially. The specific structure of X-Encoder/Y-Encoder is given by Fig.~\ref{fig:encoder}.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet in details (Fig.~\ref{fig:framework}) in the following structure: a brief review on \emph{CVAE} (Sec.~\ref{subsec:cvae}), \emph{Problem Definition} (Sec.~\ref{subsec:definition}), \emph{Motion Input} (Sec.~\ref{subsec:input}), \emph{Dynamic Maps} (Sec.~\ref{subsec:dynamic}), \emph{Diverse Sampling} (Sec.~\ref{subsec:sample}) and \emph{Trajectory Ranking} (Sec.~\ref{subsec:ranking}).
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
In tasks like trajectory prediction, we are interested in modeling a conditional distribution $P(Y_n|X)$, where $X$ is the previous trajectory information and $Y_n$ is one of the possible future trajectories.
In order to realize this goal, i.\,e.,~to generate controllable diverse samples of future trajectories based on past trajectories, a deep generative model, the conditional variational auto-encoder (CVAE), is adopted inside our framework.
CVAE is an extension of the generative model VAE \cite{kingma2013auto} by introducing a condition to control the output \cite{kingma2014semi}.
Concretely, it is able to learn the stochastic latent variable $z$ that characterizes the distribution $P(Y_i|X_i)$ of $Y_i$ conditioned on the input $X_i$, where $i$ is the index of sample.
The objective function of training CVAE is formally defined as:
\begin{equation}
\label{eq:CVAE}
\log{P(Y_i|X_i)} \geq - D_{KL}(Q(z_i|Y_i, X_i)||P(z_i)) + \E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}],
\end{equation}
where $Y$ and $X$ stand for the future and past trajectories in our task, respectively, and $z_i$ for the latent variable. The objective is to maximize the conditional log-likelihood $\log{P(Y_i|X_i)}$, which is equivalent to minimizing the reconstruction error $\ell (\hat{Y_i}, Y_i)$ and minimizing the Kullback-Leibler divergence $D_{KL}(\cdot)$ in parallel.
In order to enable back-propagation for stochastic gradient descent in $\E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}]$, the re-parameterization trick \cite{rezende2014stochastic} is applied to $z_i$: $z_i = \mu_i + \sigma_i \odot \epsilon_i$. Here, $z_i$ is assumed to follow a Gaussian distribution $z_i\sim Q(z_i|Y_i, X_i)=\mathcal{N}(\mu_i, \sigma_i)$, $\epsilon_i$ is sampled from a standard Gaussian distribution, and the mean $\mu_i$ and the standard deviation $\sigma_i$ of $z_i$ are produced by two side-by-side \textit{fc} layers, respectively (as shown in Fig.~\ref{fig:framework}). In this way, the non-differentiable sampling process $Q(z_i|Y_i, X_i)$ is turned into a differentiable function of $\mu_i$ and $\sigma_i$, so that back-propagation for stochastic gradient descent can be used to optimize the networks that produce $\mu_i$ and $\sigma_i$.
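The re-parameterization and the resulting training objective can be sketched as follows (our own PyTorch-style illustration; the choice of the mean squared error as reconstruction loss $\ell$ is an assumption made for the sketch):
\begin{verbatim}
import torch

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); gradients flow
    # through mu and sigma, not through the random sampling
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def cvae_loss(y_hat, y, mu, log_var):
    # reconstruction error + KL divergence to the standard Gaussian
    recon = torch.nn.functional.mse_loss(y_hat, y, reduction='sum')
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
\end{verbatim}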
\subsection{Problem Definition}
\label{subsec:definition}
The multi-path trajectory prediction problem is defined as follows: for an agent $i$, given as input its observed trajectory $\mathbf{X}_i=\{X_i^1,\cdots,X_i^T\}$, predict its $n$-th plausible future trajectory $\hat{\mathbf{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,n}^{T'}\}$. $T$ and $T'$ denote the sequence lengths of the past and the predicted trajectory, respectively. The trajectory position of $i$ at time step $t$ is characterized by the coordinates $X_i^t=({x_i}^t, {y_i}^t)$ (3D coordinates are also possible, but in this work only 2D coordinates are considered) and analogously $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^{t'}, \hat{y}_{i,n}^{t'})$.
For simplicity, we omit the notation of time steps if it is explicit in the following parts of the paper.
The objective is to predict multiple plausible future trajectories $\hat{\mathbf{Y}}_i = \{\hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}\}$ that are as close as possible to the ground truth $\mathbf{Y}_i$. This task is formally defined as $\hat{\mathbf{Y}}_{i,n} = f(\mathbf{X}_i, \text{Map}_i), ~n \in \{1,\dots,N\}$, where $N$ denotes the total number of predicted trajectories and $\text{Map}_i$ denotes the dynamic maps centralized on the target agent for mapping the interactions with its neighboring agents over the time steps. More details of the dynamic maps are given in Sec.~\ref{subsec:dynamic}.
\subsection{Motion Input}
\label{subsec:input}
Specifically, we use the offset $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of the trajectory positions between two consecutive time steps as the motion information instead of the coordinates in a Cartesian space, which has been widely applied in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}. Compared with coordinates, the offset is independent from the given space and less sensitive in the regard of overfitting a model to a particular space or travel directions.
The offset can be interpreted as speed over time steps that are defined with a constant duration.
As long as the original position is known, the absolute coordinates at each position can be calculated by cumulatively summing the sequence offsets.
We apply data augmentation by randomly rotating the trajectories to prevent the model from only learning certain directions. In order to maintain the relative positions and angles between agents, the trajectories of all the agents coexisting in a given period are rotated by the same angle.
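As a concrete illustration, the sketch below shows how offsets can be derived from positions, how positions are recovered by cumulative summation, and how a whole scene is rotated by one shared angle. The function names are ours and only illustrative.
\begin{verbatim}
import numpy as np

def to_offsets(traj):
    # traj: (T, 2) array of (x, y) positions -> (T-1, 2) per-step offsets
    return np.diff(traj, axis=0)

def to_positions(offsets, origin):
    # Recover absolute positions by cumulatively summing the offsets.
    return origin + np.cumsum(offsets, axis=0)

def rotate_scene(trajs, theta):
    # Rotate all trajectories of a scene by the same angle theta (radians)
    # so relative positions and angles between agents are preserved.
    # trajs: (num_agents, T, 2)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return trajs @ rot.T
\end{verbatim}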
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 2.2in 3.6in 0.5in, width=1\textwidth]{fig/encoder.pdf}
\caption{Structure of the X-Encoder. The encoder has two branches: the upper one is used to extract motion information of target agents (i.\,e.,~movement in $x$- and $y$-axis in a Cartesian space), and the lower one is used to learn the interaction information among the neighboring road users from dynamic maps over time. Each dynamic map consists of three layers that represent the orientation, travel speed and relative position of the neighboring road users, centralized on the target road user. The motion information and the interaction information are each encoded by their own LSTM sequentially. The last outputs of the two LSTMs are concatenated and forwarded to a \textit{fc} layer to get the final output of the X-Encoder. The Y-Encoder has the same structure as the X-Encoder, but it is used for extracting features from the future trajectories and is only used in the training phase.}
\label{fig:encoder}
\end{figure}
\subsection{Dynamic Maps}
\label{subsec:dynamic}
Different from the recent works of parsing the interactions between the target and neighboring agents using an occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, we propose a novel and straightforward method---attentive dynamic maps---to learn interaction information among agents.
As demonstrated in Fig.~\ref{fig:encoder}, a dynamic map at a given time step consists of three layers that interpret the information of \emph{orientation}, \emph{speed} and \emph{position}, respectively, which is derived from the trajectories of the involved agents. Each layer is centralized on the target agent's position and divided into uniform grid cells. The layers are divided into grid cells for two reasons: (1) aggregating information at grid level is computationally more efficient than representing it at pixel level; (2) the size and moving speed of an agent are not fixed and it occupies a local region of pixels in arbitrary form, so the spatio-temporal information differs from pixel to pixel even within the same agent. Therefore, we represent the spatio-temporal information as an average value within a grid cell. We calculate the value of each grid cell in the different layers as follows:
the neighboring agents are assigned to the corresponding grid cells according to their relative position to the target agent, and the relative offset (speed) between each neighboring agent and the target agent at each time step in the $x$- and $y$-axis directions is aggregated in those cells.
Eq.~\eqref{eq:map} denotes the mapping mechanism for target user $i$ considering the orientation $O$, speed $S$ and position $P$ of all the neighboring agents $j \in \mathcal{N}(i)$ that coexist with the target agent $i$ at each time step.
\begin{equation}
\label{eq:map}
\text{Map}_i^t = \sum_{j \in \mathcal{N}(i)}(O, S, P) | (x_j^t-x_i^t, ~y_j^t-y_i^t, ~\Delta{x}_j^t-\Delta{x}_i^t, ~\Delta{y}_j^t-\Delta{y}_i^t).
\end{equation}
The \emph{orientation} layer $O$ represents the heading direction of the neighboring agents. The orientation is measured in \emph{degrees} within $[0, 360]$ and then normalized into $[0, 1]$. The value of each grid cell is the mean of the orientations of all the agents existing within the cell.
The \emph{speed} layer $S$ represents all the neighboring agents' travel speed. Locally, the speed in each grid is the average speed of all the agents within a grid. Globally, across all the grids, the value of speed is normalized by the Min-Max normalization scheme into $[0, 1]$.
The \emph{position} layer $P$ stores the positions of all the neighboring agents in the grids calculated by Eq.~\eqref{eq:map}. The value of the corresponding grid is the number of individual neighboring road users existing in the grid normalized by the total number of all of the neighboring road users at that time step, which can be interpreted as the grid's occupancy density.
Each time step has a dynamic map and therefore the spatio-temporal interaction information among agents are interpreted dynamically over time.
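To make the construction concrete, the following sketch builds one such three-layer map for a single time step. It is a minimal illustration under stated assumptions: the heading is derived from the relative offsets of Eq.~\eqref{eq:map} (deriving it from the agents' absolute offsets is an equally plausible reading), and all function and variable names are ours.
\begin{verbatim}
import numpy as np

def dynamic_map(target_pos, target_off, nbr_pos, nbr_off,
                radius=16.0, cell=1.0):
    # Build one 3-layer dynamic map (orientation, speed, position)
    # centralized on the target agent; a sketch of Eq. (eq:map).
    # target_pos/off: (2,), nbr_pos/off: (num_nbrs, 2)
    size = int(2 * radius / cell)              # e.g. a 32 x 32 grid
    layers = np.zeros((3, size, size))
    counts = np.zeros((size, size))
    rel_pos = nbr_pos - target_pos             # relative position
    rel_off = nbr_off - target_off             # relative offset (speed)
    for (dx, dy), (ox, oy) in zip(rel_pos, rel_off):
        col = int((dx + radius) / cell)
        row = int((dy + radius) / cell)
        if not (0 <= row < size and 0 <= col < size):
            continue                           # outside region of interest
        heading = np.degrees(np.arctan2(oy, ox)) % 360.0
        layers[0, row, col] += heading / 360.0 # orientation in [0, 1]
        layers[1, row, col] += np.hypot(ox, oy)  # travel speed
        layers[2, row, col] += 1.0               # occupancy count
        counts[row, col] += 1.0
    occupied = counts > 0
    layers[0][occupied] /= counts[occupied]    # mean orientation per cell
    layers[1][occupied] /= counts[occupied]    # mean speed per cell
    if layers[1].max() > layers[1].min():      # Min-Max normalize speed
        layers[1] = (layers[1] - layers[1].min()) \
                    / (layers[1].max() - layers[1].min())
    if len(nbr_pos) > 0:
        layers[2] /= len(nbr_pos)              # occupancy density
    return layers
\end{verbatim}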
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.7\textwidth]{fig/dynamic_maps_nexus_0.pdf}
\caption{The maps information with accumulated time steps for the dataset \textit{nexus-0}.}
\label{fig:dynamic_maps}
\end{figure}
To more intuitively show the dynamic maps information, we gather all the agents over all the time steps and visualize them in Fig.~\ref{fig:dynamic_maps} as an example showcased by the dataset \textit{nexus-0} (see more information on the benchmark datasets in Sec~\ref{subsec:benchmark}).
Each rectangular grid cell is \SI{1}{meter} in both width and height, and the region of interest extends up to \SI{16}{meters} in each direction, centralized on the target agent, in order to include not only close but also distant neighboring agents.
The visualization demonstrates certain motion patterns of the agents, including the distribution of orientation, speed and position over the grids of the maps. For example, all the agents move in a certain direction with similar speed on a particular area of the maps, and some areas are much denser than the others.
\subsubsection{Attentive Maps Encoder}
\label{subsubsec:AMENet}
As discussed above, each time step has a dynamic map which summarizes the orientation, speed and position information of all the neighboring agents. To capture the spatio-temporal interconnections from the dynamic maps for the following modules, we propose the \emph{Attentive Maps Encoder} module.
The X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and dynamic maps information for interaction (lower branch).
The upper branch takes as motion input the offsets $\{({\Delta{x}_i}^t, {\Delta{y}_i}^t)\}_{t=1}^{T}$ of each target agent over the observed time steps. The motion information is first passed to a 1D convolutional layer (Conv1D) with one-step stride along the time axis to learn motion features one time step after another. Then it is passed to a fully connected (\textit{fc}) layer. The output of the \textit{fc} layer is passed to an LSTM module for encoding the temporal features along the trajectory sequence of the target agent into a hidden state, which contains all the motion information.
The lower branch takes the dynamic maps $\{\text{Map}_i^t\}_{t=1}^{T}$ as input.
The interaction information at each time step is passed through a 2D convolutional layer (Conv2D) with ReLU activation and a Maximum Pooling layer (MaxPool) to learn the spatial features among all the agents. The output of MaxPool at each time step is flattened and concatenated along the time axis to form a timely distributed feature vector. Then, the feature vector is fed forward to a self-attention module to learn the interaction information with an attention mechanism. Here, we adopt the multi-head attention method from Transformer, which runs multiple self-attention operations in parallel via linear projections and concatenates their outputs~\cite{vaswani2017attention}.
The attention function is described as mapping a query and a set of key-value pairs to an output. The query ($Q$), keys ($K$) and values ($V$) are transformed from the spatial features, which are encoded in the above step, by linear transformations:
\begin{align*}
Q =& \pi(\text{Map})W_Q, ~W_Q \in \mathbb{R}^{D\times D_q},\\
K =& \pi(\text{Map})W_K, ~W_K \in \mathbb{R}^{D\times D_k},\\
V =& \pi(\text{Map})W_V, ~W_V \in \mathbb{R}^{D\times D_v},
\end{align*}
where $W_Q, W_K$ and $W_V$ are the trainable parameters and $\pi(\cdot)$ indicates the encoding function of the dynamic maps. $D_q, D_k$ and $D_v$ are the dimensions of the query, key and value vectors (they are the same in the implementation).
Then the self-attended features are calculated as:
\begin{equation}
\label{eq:attention}
\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{D_k}}\right)V.
\end{equation}
This operation is also called \emph{scaled dot-product attention}.
To improve the performance of the attention layer, \emph{multi-head attention} is applied:
\begin{align}
\label{eq:multihead}
\begin{split}
\text{MultiHead}(Q, K, V) &= \text{ConCat}(\text{head}_1,...,\text{head}_h)W_O \\
\text{head}_i &= \text{Attention}(QW_{Qi}, KW_{Ki}, VW_{Vi})
\end{split}
\end{align}
where $W_{Qi}\in \mathbb{R}^{D\times D_{qi}}$ indicates the linear transformation parameters for the query in the $i$-th self-attention head and $D_{qi} = \frac{D_{q}}{\#head}$. Note that $D_{q}$ must be divisible by the number of heads $\#head$. The same holds for $W_{Ki}$ and $W_{Vi}$. The output of the multi-head attention is obtained by concatenating the outputs of all heads and applying a linear transformation with parameter $W_O$.
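The sketch below spells out Eqs.~\eqref{eq:attention} and \eqref{eq:multihead} in PyTorch. For compactness it stores all per-head projections $W_{Qi}, W_{Ki}, W_{Vi}$ in single $D\times D$ matrices that are sliced into heads, which is mathematically equivalent; this packing is an implementation assumption.
\begin{verbatim}
import torch
import torch.nn.functional as F

def multi_head_attention(feats, W_Q, W_K, W_V, W_O, num_heads):
    # feats: (T, D) timely distributed map features pi(Map).
    # W_Q, W_K, W_V: (D, D) projections; W_O: (D, D) output projection.
    T, D = feats.shape
    d_head = D // num_heads            # D_q must be divisible by #head
    Q = (feats @ W_Q).view(T, num_heads, d_head).transpose(0, 1)
    K = (feats @ W_K).view(T, num_heads, d_head).transpose(0, 1)
    V = (feats @ W_V).view(T, num_heads, d_head).transpose(0, 1)
    scores = Q @ K.transpose(-2, -1) / d_head ** 0.5  # scaled dot-product
    heads = F.softmax(scores, dim=-1) @ V   # (num_heads, T, d_head)
    concat = heads.transpose(0, 1).reshape(T, D)  # ConCat(head_1..head_h)
    return concat @ W_O                 # linear transformation with W_O
\end{verbatim}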
The output of the multi-head attention is passed to an LSTM which is used to encode the dynamic interconnection in time sequence.
Both the hidden states (the last output) from the motion LSTM and the interaction LSTM are concatenated and passed to a \textit{fc} layer for feature fusion, as the complete output of the X-Encoder, which is denoted as $\Phi_X$.
The Y-Encoder has the same structure as the X-Encoder and is used to encode both the target agent's motion and interaction information from the ground truth during training. The output of the Y-Encoder is denoted as $\Phi_Y$. The dynamic maps are also leveraged in the Y-Encoder; however, they are not reconstructed by the Decoder (only the future trajectories are reconstructed). This extended structure distinguishes our model from the conventional CVAE structure \cite{kingma2013auto,kingma2014semi,sohn2015learning} and the work of \cite{lee2017desire}, in which input and output maintain the same form.
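Putting the pieces together, a skeleton of the X-Encoder's two branches might look as follows. The layer widths, kernel sizes and number of attention heads are illustrative assumptions, not the exact configuration used in the experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class XEncoder(nn.Module):
    def __init__(self, hidden=64, map_size=32):
        super().__init__()
        # upper branch: motion information (offsets)
        self.conv1d = nn.Conv1d(2, hidden, kernel_size=3,
                                stride=1, padding=1)
        self.fc_motion = nn.Linear(hidden, hidden)
        self.lstm_motion = nn.LSTM(hidden, hidden, batch_first=True)
        # lower branch: dynamic maps for interaction
        self.conv2d = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2))
        flat = 8 * (map_size // 2) ** 2
        self.attn = nn.MultiheadAttention(flat, num_heads=4,
                                          batch_first=True)
        self.lstm_map = nn.LSTM(flat, hidden, batch_first=True)
        self.fc_fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, offsets, maps):
        # offsets: (B, T, 2); maps: (B, T, 3, H, W)
        m = self.conv1d(offsets.transpose(1, 2)).transpose(1, 2)
        m = torch.relu(self.fc_motion(m))
        _, (h_motion, _) = self.lstm_motion(m)
        b, t = maps.shape[:2]
        s = self.conv2d(maps.flatten(0, 1)).flatten(1)
        s = s.view(b, t, -1)            # timely distributed features
        s, _ = self.attn(s, s, s)       # multi-head self-attention
        _, (h_map, _) = self.lstm_map(s)
        fused = torch.cat([h_motion[-1], h_map[-1]], dim=-1)
        return self.fc_fuse(fused)      # Phi_X
\end{verbatim}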
\subsection{Diverse Sample Generation}
\label{subsec:sample}
In the training phase, $\Phi_X$ and $\Phi_Y$ are concatenated and forwarded to two successive \textit{fc} layers followed by the ReLU activation, and then passed to two parallel \textit{fc} layers to produce the mean and standard deviation of the distribution, which are used to re-parameterize $z$ as discussed in Sec.~\ref{subsec:cvae}.
Then, $z$ is concatenated with $\Phi_X$, which serves as the condition, and fed to the following LSTM-based decoder to reconstruct $\mathbf{Y}$ sequentially.
The MSE loss ${\ell}_2 (\mathbf{\hat{Y}}, \mathbf{Y})$ (reconstruction loss) and the $\text{KL}(Q(z|\mathbf{Y}, \mathbf{X})||P(z))$ loss are used to train our model.
The MSE loss pushes the reconstructed results as close as possible to the ground truth, while the KL-divergence loss forces the set of latent variables $z$ towards a Gaussian distribution.
During inference, Y-Encoder is removed and the X-Encoder works in the same way as in the training phase to extract information from observed trajectories. To generate a future prediction sample, a latent variable $z$ is sampled from $\mathcal{N}(\mathbf{0}, ~I)$ and concatenated with $\Phi_X$ (as condition) as the input of the decoder.
To generate diverse samples, this step is repeated $N$ times to generate $N$ samples of future prediction conditioned on $\Phi_X$.
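A compact sketch of this training objective and of the repeated sampling at inference time is given below. The equal weighting of the two loss terms, the latent dimension and the \texttt{decoder} callable are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def cvae_loss(y_hat, y, mu, log_var):
    # Reconstruction (MSE) plus KL divergence to N(0, I).
    rec = F.mse_loss(y_hat, y)
    kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
    return rec + kl

@torch.no_grad()
def predict_multi(decoder, phi_x, n_samples=10, z_dim=32):
    # Draw z ~ N(0, I) repeatedly and decode, conditioned on Phi_X.
    # `decoder` is a hypothetical callable standing in for the
    # LSTM-based decoder described above.
    preds = []
    for _ in range(n_samples):
        z = torch.randn(phi_x.size(0), z_dim)
        preds.append(decoder(torch.cat([phi_x, z], dim=-1)))
    return preds
\end{verbatim}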
To summarize, the overall pipeline of Attentive Maps Encoder Network (AMENet) consists of four modules, namely, X-Encoder, Y-Encoder, Z-Space and Decoder.
Each of the modules uses different types of neural networks to process the motion information and dynamic maps information for multi-path trajectory prediction. Fig~\ref{fig:framework} depicts the pipeline of the framework.
\subsection{Trajectories Ranking}
\label{subsec:ranking}
A bivariate Gaussian distribution is used to rank the multiple predicted trajectories $\hat{Y}^1,\cdots,\hat{Y}^N$ for each agent. At each time step, the predicted positions $({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})$, where $n \in \{1,\cdots,N\}$ and $t'\in \{1,\cdots,T'\}$ for agent $i$, are used to fit a bivariate Gaussian distribution $\mathcal{N}({\mu}_{xy},\,\sigma^{2}_{xy}, \,\rho)^{t'}$. The predicted trajectories are sorted by their joint probability density functions $p(\cdot)$ over the time axis using Eqs.~\eqref{eq:pdf} and \eqref{eq:sort}. $\widehat{Y}^\ast$ denotes the most-likely prediction out of the $N$ predictions.
\begin{align}
\label{eq:pdf}
P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'}) \approx p[({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})|\mathcal{N}({\mu}_{xy},\sigma^{2}_{xy},\rho)^{t'}]\\
\label{eq:sort}
\widehat{Y}^\ast = \text{arg\,max}\sum_{n=1}^{N}\sum_{t'=1}^{T'}{\log}P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})
\end{align}
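A minimal NumPy/SciPy sketch of this ranking mechanism is given below. It assumes the $N$ samples at each time step are described by a full covariance matrix (which encodes $\sigma^{2}_{xy}$ and $\rho$); the small diagonal term is only for numerical stability, and the function name is ours.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def rank_most_likely(preds):
    # preds: (N, T', 2) predicted positions for one agent.
    # Fit a bivariate Gaussian per time step over the N samples and
    # keep the sample with the highest summed log-density.
    N, T, _ = preds.shape
    log_p = np.zeros(N)
    for t in range(T):
        pts = preds[:, t, :]                # N predicted positions
        mu = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False)     # captures sigma and rho
        cov += 1e-6 * np.eye(2)             # numerical stability
        log_p += multivariate_normal.logpdf(pts, mean=mu, cov=cov)
    return preds[np.argmax(log_p)]          # most-likely prediction
\end{verbatim}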
\section{Experiments}
\label{sec:experiments}
In this section, we introduce the benchmark which is used to evaluate our method, the evaluation metrics, and the comparison of results from our method with those from the recent state-of-the-art methods. To further justify how each proposed module in our framework impacts the overall performance, we design a series of ablation studies and discuss the results in detail.
\subsection{Trajnet Benchmark Challenge Datasets}
\label{subsec:benchmark}
We verify the performance of the proposed method on the most challenging benchmark Trajnet\footnote{http://trajnet.stanford.edu/}. It is the most popular large-scale trajectory-based activity benchmark in this domain and provides a unified test system for fair comparison among the submitted methods \cite{sadeghiankosaraju2018trajnet}.
Trajnet covers a wide range of datasets and includes various types of road users (pedestrians, bikers, skateboarders, cars, buses, and golf cars) that navigate in a real world outdoor mixed traffic environment.
The data are collected from 38 scenes with ground truth for training and from another 20 scenes without ground truth for testing (i.\,e.,~the open challenge competition). Each scene presents a different traffic density and space layout for mixed traffic, which makes the prediction task more difficult than training and testing in the same space.
It demands a model with strong generalizability in order to obtain a good overall performance.
Trajectories are in $x$ and $y$ coordinates in meters or pixels projected on a Cartesian space with 8 time steps for observation and 12 time steps for prediction. Each time step lasts 0.4 seconds.
However, the pixel coordinates are not in the same scale across all the datasets, including the challenge datasets. Without standardizing the pixels into same scale, it is extremely difficult to train a model in one pixel scale and test in other pixel scales. Hence, we follow all the other models to only use the coordinates in meters.
In order to train and evaluate the proposed model, as well as the ablative models (see Sec~\ref{sec:ablativemodels}) with ground truth information, 6 datasets from different scenes with mixed traffic are selected as test datasets from the 38 training datasets. Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}. The selected test datasets are also different from the remaining 32 training datasets, so we can monitor the training and conduct the evaluation locally in a similar manner as the 20 challenge datasets on the remote evaluation system.
Fig.~\ref{fig:trajectories} shows the visualization of the trajectories in each dataset.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{fig/trajectories_bookstore_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{fig/trajectories_coupa_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.31\textwidth]{fig/trajectories_deathCircle_0.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.28\textwidth]{fig/trajectories_gates_1.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.52\textwidth]{fig/trajectories_hyang_6.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.27\textwidth]{fig/trajectories_nexus_0.pdf}
\caption{Selected datasets for evaluating the proposed model, as well as the ablative models. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:trajectories}
\end{figure}
\subsection{Evaluation Metrics}
The mean average displacement error (MAD) and the final displacement error (FDE) are the two most commonly applied metrics to measure the performance in terms of trajectory prediction~\cite{alahi2016social,gupta2018social,sadeghian2018sophie}.
In addition, we count a predicted trajectory as invalid if it collides with another trajectory.
\begin{itemize}
\item MAD is the aligned L2 distance from $Y$ (ground truth) to the corresponding prediction $\hat{Y}$ averaged over all time steps. We report the mean value for all the trajectories.
\item FDE is the L2 distance of the last position from $Y$ to the corresponding $\hat{Y}$. It measures a model's ability for predicting the destination and is more challenging as errors accumulate in time.
\item Count of collisions with linear interpolation. Since each discrete time step lasts \SI{0.4}{seconds}, similar to \citep{sadeghian2018sophie}, an intermediate position is inserted using linear interpolation to increase the granularity of the time steps. If one agent coexists with another agent and they come within \SI{0.1}{meter} of each other at any given time step, the encounter is counted as a collision. Once a predicted trajectory collides with another one, the prediction is counted as invalid (a minimal sketch of these metrics is given after this list).
\end{itemize}
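The sketch below restates the three metrics in NumPy. It assumes the two trajectories passed to the collision check have equal length, and the function names are ours.
\begin{verbatim}
import numpy as np

def mad_fde(y_hat, y):
    # y_hat, y: (T', 2). MAD averages the L2 error over all time
    # steps; FDE is the L2 error of the last position.
    err = np.linalg.norm(y_hat - y, axis=-1)
    return err.mean(), err[-1]

def collides(traj_a, traj_b, threshold=0.1):
    # Insert one midpoint between consecutive steps (linear
    # interpolation) and flag a collision if the two agents come
    # within `threshold` meters of each other.
    def densify(t):
        mid = (t[:-1] + t[1:]) / 2.0
        out = np.empty((2 * len(t) - 1, 2))
        out[0::2], out[1::2] = t, mid
        return out
    d = np.linalg.norm(densify(traj_a) - densify(traj_b), axis=-1)
    return bool((d < threshold).any())
\end{verbatim}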
We evaluate the most-likely prediction and the best prediction $\text{top}@10$ for the multi-path trajectory prediction, respectively.
Most-likely prediction is selected by the trajectories ranking mechanism, see Sec~\ref{subsec:ranking}.
Best prediction $\text{top}@10$ means that, among the 10 predicted trajectories with the highest confidence, the one with the smallest MAD and FDE compared with the ground truth is selected as the best. When the ground truth is not available, i.\,e.,~on the Trajnet benchmark test datasets (see Sec~\ref{subsec:benchmark}), only the evaluation on the most-likely prediction is reported.
\subsection{Recent State-of-the-Art Methods}
\label{sec:stoamodels}
We compare the proposed model with the most influential recent state-of-the-art models published on the benchmark challenge for trajectory prediction on a consistent evaluation system up to 05/06/2020, in order to guarantee a fair comparison.
These models are the rule-based model Social Force~\cite{helbing1995social};
Social LSTM~\cite{alahi2016social}, which applies social pooling with a rectangular occupancy grid for close neighboring agents; MX-LSTM~\cite{hasan2018mx}, which pools agents within a visibility attentional area;
the RNN-encoder-based model RED~\cite{becker2018evaluation};
the generative model Social GAN~\cite{gupta2018social};
the attention-based model SR-LSTM~\cite{zhang2019sr}; and
the Transformer-based model Ind-TF~\cite{giuliari2020transformer}.
It is worth mentioning that given the large number of submissions, only the very top methods with published papers are listed here\footnote{More details of the ranking can be found at \url{http://trajnet.stanford.edu/result.php?cid=1&page=2&offset=10}}.
\subsection{Ablative Models}
\label{sec:ablativemodels}
In order to analyze the impact of each component, i.\,e.,~dynamic maps, self-attention, and the extended structure of CVAE, three ablative models are evaluated in comparison with the proposed model.
\begin{itemize}
\item AMENet, uses dynamic maps in both X-Encoder (observation time) and Y-Encoder (prediction time). This is the proposed model.
\item AOENet, substitutes dynamic maps with occupancy grid \citep{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both X-Encoder and Y-Encoder. This comparison is used to validate the contribution from the dynamic maps.
\item MENet, removes self-attention for the dynamic maps. This comparison is used to validate the contribution of the self-attention mechanism for the dynamic maps along the time axis.
\item ACVAE, only uses dynamic maps in X-Encoder. It is equivalent to CVAE ~\citep{kingma2013auto,kingma2014semi,sohn2015learning} with self-attention. This comparison is used to validate the contribution of the extended structure for processing the dynamic maps information in Y-Encoder.
\end{itemize}
\subsection{Results}
In this sub-section, we will discuss the results of the proposed model in comparison with several recent state-of-the-art models published on the benchmark challenge, as well as the ablative models. We will also discuss the performance of multi-path trajectory prediction with the latent space.
\subsubsection{Results on Benchmark Challenge}
\label{sec:benchmarkresults}
Table~\ref{tb:results} lists the top performances published on the benchmark challenge measured by MAD, FDE and overall average $(\text{MAD} + \text{FDE})/2$. AMENet wins the first position and surpasses the models published before 2020 significantly.
Compared with the most recent model Ind-TF \citep{giuliari2020transformer}, our model matches the state-of-the-art performance measured by MAD and achieves the lowest FDE, reducing the error from 1.197 to 1.183 meters. This demonstrates the model's ability to predict the most accurate destination over 12 time steps.
It is worth mentioning that even though our model predicts multi-path trajectories for each agent, the performance achieved on the benchmark challenge is from the most-likely prediction obtained by ranking the multi-path trajectories with the proposed ranking mechanism. The benchmark evaluation metrics do not provide information about collisions of the predicted trajectories; these are reported in the following ablative studies.
\begin{table}[t!]
\centering
\caption{Comparison between the proposed model and the state-of-the-art models. Best values are highlighted in boldface.}
\begin{tabular}{lllll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & MAD [m]$\downarrow$ &Year\\
\hline
Social LSTM~\cite{alahi2016social} & 1.3865 & 3.098 & 0.675 & 2018\\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 & 2018\\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 & 2018\\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 & 1995\\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 & 2019\\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 & 2018\\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} & 2020\\
Ours (AMENet)\tablefootnote{The name of the proposed model AMENet was called \textit{ikg\_tnt} by the abbreviation of our institutes on the ranking list at \url{http://trajnet.stanford.edu/result.php?cid=1}.} & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} & 2020 \\
\hline
\end{tabular}
\label{tb:results}
\end{table}
\subsubsection{Results for Ablative Models}
\label{sec:ablativestudies}
Here, the contribution of each component in AMENet is discussed via the ablative models. More details of the dedicated structure of each ablative model can be found in Sec~\ref{sec:ablativemodels}. Table~\ref{tb:resultsablativemodels} shows the quantitative evaluation of the dynamic maps, the self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE/\#collisions on the most-likely prediction.
The comparison between AOENet and AMENet shows that when we replace the dynamic maps with the occupancy grid, the errors measured by MAD and FDE increase by a remarkable margin across all the datasets. The number of invalid trajectories with detected collisions also increases when the dynamic maps are substituted by the occupancy grid. This comparison proves that the dynamic maps with the neighboring agents' motion information, namely, orientation, travel speed and position relative to the target agent, can capture more detailed and accurate interaction information.
The comparison between MENet and AMENet shows that when we remove the self-attention mechanism, the errors measured by MAD and FDE also increase by a remarkable margin across all the datasets, and the number of collisions slightly increases. Without self-attention, the model may have difficulty in learning the behavior patterns of how the target agent interacts with its neighboring agents from one time step to other time steps. This proves the assumption that, the self-attention enables the model to learn the global dependency over different time steps.
The comparison between ACVAE and AMENet shows that when we remove the extended structure in the Y-Encoder for the dynamic maps, the errors measured by MAD, and especially FDE, increase significantly across all the datasets, as does the number of collisions. The extended structure gives the model the ability to process the interaction information also for the prediction time during training. It improves the model's performance, especially for predicting more accurate destinations. This improvement has also been confirmed by the benchmark challenge (see Table~\ref{tb:results}). One interesting observation from the comparison between ACVAE and AOENet/MENet is that ACVAE performs much better than AOENet and MENet measured by MAD and FDE. This further proves that, even without the extended structure in the Y-Encoder, the dynamic maps with self-attention are very beneficial for interpreting the interactions between a target agent and its neighboring agents. Their robustness has been demonstrated by the ablative models across various datasets.
\begin{table}[hbpt!]
\setlength{\tabcolsep}{3pt}
\centering
\small
\caption{Evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE/\#collisions on the most-likely prediction. Best values are highlighted in bold face.}
\begin{tabular}{lllll}
\\ \hline
Dataset & AMENet & AOENet & MENet & ACVAE \\ \hline
bookstore3 & \textbf{0.486}/\textbf{0.979}/\textbf{0} & 0.574/1.144/\textbf{0} & 0.576/1.139/\textbf{0} & 0.509/1.030/2 \\
coupa3 & \textbf{0.226}/\textbf{0.442}/6 & 0.260/0.509/8 & 0.294/0.572/2 & 0.237/0.464/\textbf{0} \\
deathCircle0 & \textbf{0.659}/\textbf{1.297}/\textbf{2} & 0.726/1.437/6 & 0.725/1.419/6 & 0.698/1.378/10 \\
gates1 & \textbf{0.797}/\textbf{1.692}/\textbf{0} & 0.878/1.819/\textbf{0} & 0.941/1.928/2 & 0.861/1.823/\textbf{0} \\
hyang6 & \textbf{0.542}/\textbf{1.094}/\textbf{0} & 0.619/1.244/2 & 0.657/1.292/\textbf{0} & 0.566/1.140/\textbf{0} \\
nexus0 & \textbf{0.559}/\textbf{1.109}/\textbf{0} & 0.752/1.489/\textbf{0} & 0.705/1.140/\textbf{0} & 0.595/1.181/\textbf{0} \\
Average & \textbf{0.545}/\textbf{1.102}/\textbf{1.3} & 0.635/1.274/2.7 & 0.650/1.283/1.7 & 0.578/1.169/2.0 \\ \hline
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
Fig.~\ref{fig:abl_qualitative_results} shows the qualitative results of the proposed model AMENet in comparison with the ablative models across the datasets.
In general, all the models can predict realistic trajectories in different scenes, e.\,g.,~intersections and roundabouts, of various traffic density and motion patterns, e.\,g.,~standing still or moving fast. After a short observation time, i.\,e.,~ 8 time steps, all the models can capture the general speed and heading direction for agents located in different areas in the space.
From a close observation we can see that AMENet generates more accurate trajectories than the other models, which are very close to or even completely overlap with the corresponding ground truth trajectories. Compared with the ablative models, AMENet predicts more accurate destinations, which is in line with the quantitative results shown in Table~\ref{tb:results}. One very clear example in \textit{hyang6} (left figure in Fig.~\ref{fig:abl_qualitative_results}, in the third row) shows that, when the fast-moving agent changes its motion, AOENet and MENet have limited performance in predicting its travel speed and ACVAE has limited performance in predicting its destination. On the other hand, the prediction from AMENet is very close to the ground truth.
Nevertheless, our models have limited performance in predicting abnormal trajectories, such as suddenly turning around or changing speed drastically. Such scenarios can be found in the lower right corner of \textit{gates1} (right figure in Fig.~\ref{fig:abl_qualitative_results}, in the second row). Such sudden maneuvers are very difficult to forecast, even for human observers.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{scenarios/bookstore_3290.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{scenarios/coupa_3327.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.31\textwidth]{scenarios/deathCircle_0000.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.28\textwidth]{scenarios/gates_1001.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.52\textwidth]{scenarios/hyang_6209.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.27\textwidth]{scenarios/nexus_0038.pdf}
\caption{Trajectories predicted by AMENet (AME), AOENet (AOE), MENet (ME), ACVAE (CVAE) and the corresponding ground truth (GT) trajectories in different scenes. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:abl_qualitative_results}
\end{figure}
\subsubsection{Results for Multi-Path Prediction}
\label{sec:multipath-selection}
Here, we will discuss the performance of multi-path prediction with the latent space.
Instead of generating a single prediction, AMENet generates multiple plausible trajectories by sampling the latent variable $z$ multiple times (see Sec~\ref{subsec:cvae}). During training, the motion information and interaction information in observation and ground truth are encoded into the so-called Z-Space (see Fig.~\ref{fig:framework}). The KL-divergence loss forces $z$ into a Gaussian distribution. Fig.~\ref{fig:z_space} shows the visualization of the Z-Space in two dimensions, with $\mu$ visualized on the left and $\log\sigma$ visualized on the right. From the figure we can see that the training phase successfully re-parameterizes $z$ into a Gaussian distribution that captures the stochastic properties of the agents' behaviors. When the Y-Encoder is not available at inference time, the well-trained Z-Space, in turn, enables us to randomly sample a latent variable $z$ from the Gaussian distribution multiple times for generating more than one plausible future trajectory.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=.6\textwidth]{fig/z_space.pdf}
\caption{Z-Space of two dimensions with $\mu$ visualized on the left and $\log\sigma$ visualized on the right. It is trained to follow the $\mathcal{N}(0, 1)$ distribution. The variance is visualized in logarithm space and is very close to zero.}
\label{fig:z_space}
\end{figure}
Table~\ref{tb:multipath} shows the quantitative results for multi-path trajectory prediction. Predicted trajectories are ranked by $\text{top}@10$ with the prior knowledge of the corresponding ground truth, and by most-likely ranking if the ground truth is not available. Compared with the most-likely prediction, the $\text{top}@10$ prediction yields similar but better performance. It indicates that generating multiple trajectories, e.\,g.,~10 trajectories, increases the chance to narrow down the errors from the prediction to the ground truth. Meanwhile, the ranking mechanism (see Sec~\ref{subsec:ranking}) guarantees the quality of the selected one.
Fig.~\ref{fig:multi-path} demonstrates the effectiveness of multi-path trajectory prediction. We can see that in roundabouts the interactions between different agents are full of uncertainties and each agent has more possibilities for its future paths. Even though our method is able to predict the trajectory correctly, the predicted trajectories diverge more widely at further time steps. This also shows that the ability to predict multiple plausible trajectories is important in the task of trajectory prediction, because of the increasing uncertainty of the future movements.
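For reference, the $\text{top}@10$ selection used above can be sketched in a few lines: among the $N$ generated samples, it simply keeps the one with the smallest MAD against the ground truth (the variable and function names are ours).
\begin{verbatim}
import numpy as np

def best_of_n(preds, gt):
    # preds: (N, T', 2) samples; gt: (T', 2) ground truth.
    # Keep the sample with the smallest mean displacement (MAD).
    errs = np.linalg.norm(preds - gt, axis=-1).mean(axis=-1)
    return preds[np.argmin(errs)]
\end{verbatim}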
\begin{table}[hbpt!]
\centering
\small
\caption{Evaluation of multi-path trajectory prediction using AMENet. Predicted trajectories are ranked by $\text{top}@10$ and most-likely and errors are measured by MAD/FDE/\#collisions}
\begin{tabular}{lll}
\\ \hline
Dataset & Top@10 & Most-likely \\ \hline
bookstore3 & 0.477/0.961/0 & 0.486/0.979/0 \\
coupa3 & 0.221/0.432/0 & 0.226/0.442/6 \\
deathCircle0 & 0.650/1.280/6 & 0.659/1.297/2 \\
gates1 & 0.784/1.663/2 & 0.797/1.692/0 \\
hyang6 & 0.534/1.076/0 & 0.542/1.094/0 \\
nexus0 & 0.642/1.073/0 & 0.559/1.109/0 \\
Average & 0.535/1.081/1.3 & 0.545/1.102/1.3 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.514\textwidth]{multi_preds/deathCircle_0240.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.476\textwidth]{multi_preds/gates_1001.pdf}
\caption{Multi-path predictions from AMENet}
\label{fig:multi-path}
\end{figure}
\section{Studies on Long-Term Trajectory Prediction}
\label{sec:longterm}
In this section, we investigate the model's performance on predicting long-term trajectories in real-world mixed traffic situations in different intersections.
Since the Trajnet benchmark (see Sec~\ref{subsec:benchmark}) only provides trajectories of 8 time steps for observation and 12 time steps for prediction, we instead use the newly published large-scale open-source dataset inD\footnote{\url{https://www.ind-dataset.com/}} for this task. inD was collected using drones above four different intersections in Germany for mixed traffic in 2019 by Bock et al. \cite{inDdataset}. In total, there are 33 datasets from the different intersections.
We follow the same processing format as the Trajnet benchmark and downsample the time steps of inD from the video frame rate of \SI{25}{fps} to \SI{2.5}{fps}, i.\,e.,~0.4 seconds per time step. We obtain the same sequence length (8 time steps) of each trajectory for observation and up to 32 time steps for prediction. One third of all the datasets from each intersection are selected for testing the performance of AMENet on long-term trajectory prediction and the remaining datasets are used for training.
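This temporal downsampling amounts to keeping every tenth frame, as in the following sketch (the frame container is assumed to be indexable).
\begin{verbatim}
def downsample(frames, src_fps=25, dst_fps=2.5):
    # Keep every (src_fps / dst_fps)-th frame,
    # i.e. one frame per 0.4 seconds.
    step = int(src_fps / dst_fps)  # 10 for inD
    return frames[::step]
\end{verbatim}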
Fig.~\ref{fig:AMENet_MAD} shows the trend of errors measured by MAD, FDE and the number of collisions in relation to the time steps. The performance of AMENet at time step 12 is comparable with the performance on the Trajnet datasets for both the $\text{top}@10$ and the most-likely prediction. On the other hand, the errors measured by MAD and FDE increase with the number of time steps. Behaviors of road users become more unpredictable, and predicting long-term trajectories is more challenging than predicting short-term ones based only on a short observation. One interesting observation concerns the number of collisions, i.\,e.,~invalid predicted trajectories. Overall, the number of collisions is relatively small and increases with the number of time steps for the $\text{top}@10$ prediction. However, the most-likely prediction leads to fewer collisions and demonstrates no consistent ascending trend over the time steps. One possible explanation could be that the $\text{top}@10$ prediction is selected by comparison with the corresponding ground truth, without any consideration of collisions. The most-likely prediction, on the other hand, selects the prediction with the highest joint probability density out of multiple predictions for each trajectory using a bivariate Gaussian distribution (see Sec~\ref{subsec:ranking}). This majority-voting-like mechanism yields better results regarding safety than merely selecting the best prediction based on the distance to the ground truth.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_MAD.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_FDE.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_collision.pdf}
\caption{AMENet tested on InD for different predicted sequence lengths measured by MAD, FDE and number of collisions, respectively.}
\label{fig:AMENet_MAD}
\end{figure}
Fig.~\ref{fig:qualitativeresults} shows the qualitative performance of AMENet for predicting long-term trajectories
in the big intersection with weakened traffic regulations in inD. From Scenario-A in the left column we can see that AMENet generates accurate predictions for 12 and 16 time steps (visualized in the first two rows) for two pedestrians. When they encounter each other at 20 time steps (third row), the model correctly predicts that the left pedestrian yields. However, the predicted trajectories slightly deviate from the ground truth and lead to a very close interaction. With further increasing time steps, the prediction becomes less accurate regarding travel speed and heading direction. Scenario-B in the right column shows similar performance. The model has limited performance for the fast-moving agent, i.\,e.,~the vehicle in the middle of the street.
\begin{figure} [bpht!]
\centering
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_12.pdf}
\label{subfig:s-a-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_12.pdf}
\label{subfig:s-b-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_16.pdf}
\label{subfig:s-a-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_16.pdf}
\label{subfig:s-b-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_20.pdf}
\label{subfig:s-a-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_20.pdf}
\label{subfig:s-b-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_24.pdf}
\label{subfig:s-a-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_24.pdf}
\label{subfig:s-b-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_28.pdf}
\label{subfig:s-a-28}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_28.pdf}
\label{subfig:s-b-28}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/29_Trajectories052_32.pdf}
\caption{\small{Scenario-A 12 to 32 steps}}
\label{subfig:s-a-32}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/27_Trajectories046_32.pdf}
\caption{\small{Scenario-B 12 to 32 steps}}
\label{subfig:s-b-32}
\end{subfigure}
\caption{\small{Examples for predicting different sequence lengths in Scenario-A (left column) and Scenario-B (right column). From top to bottom rows the prediction lengths are 12, 16, 20, 24, 28 and 32 time steps. The observation sequence lengths are 8 time steps.}}
\label{fig:qualitativeresults}
\end{figure}
To summarize, long-term trajectory prediction based on a short observation is extremely challenging. The behaviors of different road users become more unpredictable with the increase of time. In future work, in order to push the time horizon beyond 12 steps or 4.8 seconds, extra information may be required to update the observation and improve the performance of the model. For example, if the new positions of the agents are acquired at later time steps, the observation time horizon can be shifted accordingly, similar to the mechanism in the Kalman Filter \cite{kalman1960new}, where the prediction is calibrated by the newly available observation to improve performance for long-term trajectory prediction.
\section{Conclusions}
In this paper, we present a generative model called Attentive Maps Encoder Networks (AMENet) that uses motion information and interaction information for multi-path trajectory prediction of mixed traffic in various real-world environments.
The latent space learnt by the X-Encoder and Y-Encoder for both sources of information enables the model to capture the stochastic properties of motion behaviors for predicting multiple plausible trajectories after a short observation time.
We propose a novel method---dynamic maps---to extract the social effects between agents during interactions. The dynamic maps capture accurate interaction information by encoding the neighboring agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of the interactions over different time steps.
The efficacy of the model has been validated on the most challenging benchmark Trajnet that contains various datasets in various real-world environments. Our model not only achieves the state-of-the-art performance, but also wins the first place on the leader board for predicting 12 time-step positions of 4.8 seconds.
Each component of AMENet is validated via a series of ablative studies.
In order to further investigate the model's performance on long-term trajectory prediction, the newly published benchmark inD is utilized. The performance of AMENet is on a par with that on Trajnet for 12 time steps, but gradually degrades as the horizon extends to 32 time steps (12.8 seconds).
In the future work, we will extend our prediction model for safety prediction, for example, using the predicted trajectories to calculate time-to-collision \cite{perkins1968traffic} and detecting abnormal trajectories by comparing the anticipated/predicted trajectories with the actual ones.
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction of road users is a crucial task in different communities, such as photogrammetry \cite{schindler2010automatic,klinger2015probabilistic,klinger2017probabilistic,cheng2018mixed}, intelligent transportation systems (ITS) \cite{morris2008survey,cheng2018modeling,cheng2020mcenet}, computer vision \cite{alahi2016social,mohajerin2019multi},
mobile robot applications \cite{mohanan2018survey},
\textit{etc}.~
This task enables an intelligent system to foresee the behaviors of road users and make a reasonable and safe decision for its next operation, especially in urban mixed-traffic zones (a.k.a. shared spaces \cite{reid2009dft}).
Trajectory prediction is generally defined as predicting the plausible (e.\,g.,~collision-free and energy-efficient) and socially-acceptable (e.\,g.,~considering social relations, social rules and norms between agents) positions in 2D or 3D of non-erratic target agents at each time step within a predefined future time interval, relying on observed partial trajectories over a certain period of time \cite{helbing1995social,alahi2016social}.
The target agent is defined as the dynamic object for which the actual prediction is made, mainly pedestrian, cyclist, vehicle and other road users \cite{rudenko2019human}.
A typical prediction process of mixed traffic is exemplified in Fig.~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.8in 2.6in 0.6in, width=1\textwidth]{fig/first_fig.pdf}
\caption{Predicting plausible and socially-acceptable positions of agents (e.\,g.,~target agent in black) at each time step within a predefined future time interval by observing their past trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
How to effectively and accurately predict trajectories of mixed agents is still an unsolved problem in many research communities. The challenges are mainly from three aspects: 1) the complex behavior and uncertain moving intent of each agent, 2) the presence and interactions between the target agent and its neighboring agents and 3) the multi-modality of paths: there are usually more than one socially-acceptable paths that an agent could use in the future.
There exists a large body of literature that focuses on addressing part or all of the aforementioned challenges in order to make accurate trajectory prediction.
The traditional methods model the interactions based on hand-crafted features, such as force-based rules \cite{helbing1995social}, Game Theory \cite{johora2020agent}, or constant velocity \cite{best1997new}.
The performance is crucially affected by the quality of manually designed features and they lack generalizability \cite{cheng2020trajectory}.
Recently, boosted by the development of deep learning technologies \cite{lecun2015deep}, data-driven methods
keep reporting new state-of-the-art performance on benchmarks \cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,cheng2020mcenet}.
For instance, Recurrent Neural Networks (RNNs) based models are used to model the interactions between agents and predict the future positions in sequence \cite{alahi2016social,xue2018ss}.
However, these works design a discriminative model and produce a deterministic outcome for each agent. The models tend to predict the ``average" trajectories because the commonly used objective function minimizes the Euclidean distance between the ground truth and the predicted outputs.
To predict multiple socially-acceptable trajectories for each target agent, different generative models are proposed, such as Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} based framework Social GAN \cite{gupta2018social} and Conditional Variational Auto-Encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning} based framework DESIRE \cite{lee2017desire}.
In spite of the great success in this domain, most of these methods are designed for a specific agent type: pedestrians.
In reality, pedestrians, cyclists and vehicles are the three major types of agents and their behaviors affect each other. To make precise trajectory predictions, their interactions should be considered jointly. Besides, in most existing models the interactions between the target agent and the others are treated equally. But different agents may not affect the target agent equally with regard to how it moves in the near future. For instance, closer agents should affect the target agent more strongly than distant ones, and a target vehicle is affected more by pedestrians who tend to cross the road than by the vehicles behind it. Last but not least, the robustness of the models is not fully tested in real-world outdoor mixed traffic environments (e.\,g.,~roundabouts, intersections) with various unseen traffic situations. Can a model trained on some spaces predict accurate trajectories in other, unseen spaces?
To address the aforementioned limitations, we propose this work, named \emph{Attentive Maps Encoder Network} (AMENet), which leverages the ability of generative models to generate diverse patterns of future trajectories and models the interactions between the target agent and the others with attentive dynamic maps.
The dynamic map manipulates the information extracted from the neighboring agents' orientation, speed and position in relation to the target agent at each time step for interaction modeling and the attention mechanism enables the model to automatically focus on the salient features extracted over different time steps.
An overview of our proposed framework is depicted in Fig.~\ref{fig:framework}. (1) Two encoders are designed for learning representations of the observed trajectories (X-Encoder) and the future trajectories (Y-Encoder), respectively, and they have an identical structure. Taking the X-Encoder as an example (see Fig.~\ref{fig:encoder}), the encoder first extracts the motion information from the target agent (coordinate offsets in sequential time steps) and the interaction information with the other agents, respectively. Particularly, to explore the dynamic interactions, the motion information of each agent is characterised by its orientation, speed and position at each time step. Then a self-attention mechanism is utilized over all agents to extract the dynamic interaction maps. This is where the name \emph{Attentive Maps Encoder} comes from. The motion and interaction information along the observed time interval are collected by two independent Long Short-Term Memories (LSTMs) and then fused together. (2) The output of the Y-Encoder is supplied to a variational auto-encoder to learn the latent space of the future trajectory distribution, which is assumed to be a Gaussian distribution. (3) The output of the variational auto-encoder module (obtained by re-parameterization of the encoded features during the training phase and by resampling from the learned latent space during the inference phase) is fed forward to the following decoder, associated with the output of the X-Encoder as condition, to forecast the future trajectory, which works in the same way as a conditional variational auto-encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning}.
The main contributions are summarised as follows:
\begin{itemize}
\item[1] We propose a generative framework Attentive Maps Encoder Network (AMENet) for multi-path trajectory prediction.
AMENet inserts a generative module that is trained to learn a latent space for encoding the motion and interaction information in both observation and future, and predicts multiple feasible future trajectories conditioned on the observed information.
\item[2] We design a novel module, \emph{attentive maps encoder} that learns spatio-temporal interconnections among agents based on dynamic maps using a self-attention mechanism.
\item[3] Our model is able to predict trajectories of heterogeneous road users, i.\,e.,~pedestrians, cyclists and vehicles, rather than only focusing on pedestrians, in various unseen real-world environments, which distinguishes our work from most of the previous ones that only predict pedestrian trajectories.
\end{itemize}
The efficacy of the proposed method has been validated on the recent benchmark \emph{Trajnet} \cite{sadeghiankosaraju2018trajnet} that contains numerous datasets collected in various environments for trajectory prediction. Our method reports the new state-of-the-art performance and wins the first place on the leader board.
Its performance is further validated on the benchmark inD \cite{inDdataset} that contains mixed traffic in different vehicle dominated intersections.
Each component of the proposed model is validated via a series of ablative studies.
\section{Related Work}
Our work focuses on predicting trajectories of heterogeneous road agents.
In this section we discuss the recent related works mainly in the following aspects: modeling this task as a sequence prediction, modeling the interactions between agents for precise path prediction, modeling with attention mechanisms, and utilizing generative models to predict multiple plausible trajectories. Our work focuses on modeling the dynamic interactions between agents and training a generative model to predict multiple plausible trajectories for each target agent.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling the trajectory prediction as a sequence prediction task is the most popular approach. The 2D/3D position of a target agent is predicted step by step along the time axis.
The widely applied models include linear regression and Kalman filters \cite{harvey1990forecasting}, Gaussian processes \cite{tay2008modelling} and Markov decision processes \cite{kitani2012activity}.
However, these traditional methods largely rely on the quality of manually designed features and are unable to tackle large-scale data.
Recently, data-driven deep learning technologies, especially RNN-based models and their variants, e.\,g.,~Long Short-Term Memories (LSTMs) \cite{hochreiter1997long}, have demonstrated a powerful ability to extract representations from massive data automatically and are used to learn the complex patterns of trajectories.
In recent years, RNN-based models have kept pushing the state of the art in predicting pedestrian trajectories \cite{alahi2016social,gupta2018social,sadeghian2018sophie,zhang2019sr}, as well as the trajectories of other types of road users \cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
In this work, we also utilize LSTMs to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of an agent is not only determined by its own intention but also crucially affected by its interactions with the other agents. Therefore, effectively modeling the social interactions among agents is important for accurate trajectory prediction.
One of the most influential approaches for modeling interactions is the Social Force Model \cite{helbing1995social}, which models the repulsive force for collision avoidance and the attractive force for social connections. Game Theory is utilized to simulate the negotiation between different road users \cite{johora2020agent}.
Such rule-based interaction models have been incorporated into deep learning models. Social LSTM proposes an occupancy grid to locate the positions of close neighboring agents and uses a social pooling layer to encode the interaction information for trajectory prediction \cite{alahi2016social}. Many works design their specific ``occupancy'' grid for interaction modeling \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interactions between individual agents and group agents with social connections and report better performance.
Meanwhile, different pooling mechanisms are proposed for interaction modeling. For example, Social GAN \cite{gupta2018social} embeds relative positions between the target and all the other agents with each agent's motion hidden state and uses an element-wise pooling to extract the interaction between all the pairs of agents, not only the close neighboring agents;
Similarly, all the agents are considered in SR-LSTM \cite{zhang2019sr}. It proposes a states refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework. The motion gate and agent-wise attention are used to select the most important information from neighboring agents.
Most of the aforementioned models extract interaction information based on the relative position of the neighboring agents in relation to the target agent.
As a consequence, the dynamics of interactions are not fully captured in both the spatial and temporal domains.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Recently, different attention mechanisms \cite{bahdanau2014neural,xu2015show,vaswani2017attention} have been incorporated in neural networks for learning complex spatio-temporal interconnections.
Particularly, their effectiveness has been proven in learning powerful representations from sequence information in tasks such as neural machine translation \cite{bahdanau2014neural,vaswani2017attention} and image caption generation \cite{xu2015show,anderson2018bottom,he2020image}.
Some of the recent state-of-the-art methods have also adopted attention mechanisms for sequence modeling and interaction modeling to predict trajectories.
For example, a soft attention mechanism \cite{xu2015show} is incorporated in LSTM to learn the spatio-temporal patterns from the position coordinates \cite{varshneya2017human}. Similarly, SoPhie \cite{sadeghian2018sophie} applies two separate soft attention modules, one called physical attention for learning the salient features between agent and scene and the other called social attention for modeling agent to agent interactions. In the MAP model \cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work Ind-TF \cite{giuliari2020transformer} replaces the RNN with a Transformer \cite{vaswani2017attention} for modeling trajectory sequences.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism \cite{vaswani2017attention} along the time axis.
The self-attention mechanism is defined as mapping a query and a set of key-value pairs to an output. First, the similarity between the query and each key is computed to obtain a weight. The weights associated with all the keys are then normalized via, e.\,g.,~a softmax function and are applied to weigh the corresponding values for obtaining the final attention.
Unlike RNN-based structures, which propagate information along the symbol positions of the input and output sequences and therefore struggle to propagate information across long sequences,
the self-attention mechanism relates different positions of a single sequence in order to compute a representation of the entire sequence. The dependency between the input and output is not restricted by the distance between their positions.
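For concreteness, the core of this mechanism can be sketched in a few lines of Python/NumPy; the function and variable names below are ours and purely illustrative, not the implementation of any specific model:
\begin{verbatim}
import numpy as np

def self_attention(Q, K, V):
    # Q, K, V: (T, d) queries, keys and values for T positions.
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # query-key similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)        # softmax-normalized weights
    return w @ V                              # weighted sum of the values
\end{verbatim}
Each output position is a weighted sum over all values, so any two positions interact directly regardless of their distance in the sequence.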
\subsection{Generative Models}
\label{sec:rel-generative}
To date, VAE \cite{kingma2013auto} and GAN \cite{goodfellow2014generative} and their variants (e.\,g.,~Conditional VAE \cite{kingma2014semi,sohn2015learning})
are the most popular generative models in the era of deep learning.
They are both able to generate diverse outputs by sampling from noise. The essential difference is that GAN trains a generator to generate a sample from noise and a discriminator to decide whether the generated sample is real enough; the generator and the discriminator enhance each other mutually during training.
In contrast, VAE is trained by maximizing the lower bound of training data likelihood for learning a latent space that approximates the distribution of the training data.
Generative models have shown promising performance in different tasks, e.\,g.,~super resolution, image to image translation, image generation, as well as trajectory prediction \cite{lee2017desire,gupta2018social,cheng2020mcenet}.
Predicting one single trajectory may not be sufficient due to the uncertainties of road users' behavior.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performance of the two modules are enhanced mutually and the generator is able to generate trajectories that are as precise as the real ones. Similarly, Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
Lee~et al.~\cite{lee2017desire} propose a CVAE model to predict multiple plausible trajectories.
Cheng~et al.~\cite{cheng2020mcenet} propose a CVAE like model named MCENet to predict multiple plausible trajectories conditioned on the scene context and previous information of trajectories.
In this work, we incorporate a CVAE module to learn a latent space of possible future paths for predicting multiple plausible future trajectories conditioned on the observed past trajectories.
Our work essentially distinguishes itself from the above generative models in the following points: (1) We insert not only the ground truth trajectory, but also the dynamic maps associated with the ground truth trajectory into the CVAE module during training, which is different from the conventional CVAE that follows a consistent input and output structure (e.\,g.,~the input and output are both trajectories in the same structure \cite{lee2017desire}).
(2) Our method does not explore information from images, i.\,e.,~visual information is not used and
future trajectories are predicted only based on the map data (i.\,e.,~position coordinates).
Our model is trained on some available spaces but is validated on other unseen spaces. The visual information, such as vegetation, curbside and buildings, is very different from one space to another. Overfitting to visual features may jeopardize the model's robustness and lead to bad performance in an unseen space of a totally different environment \cite{cheng2020mcenet}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 3.5in 0in 0.5in, width=1\textwidth]{fig/model_framework3.pdf}
\caption{An overview of the proposed framework. It consists of four modules: X-Encoder and Y-Encoder are used for encoding the observed and the future trajectories, respectively. They have the same structure. The Sample Generator produces diverse samples of future generations. The Decoder module is used to decode the features from the produced samples in the last step and predicts the future trajectory sequentially. The specific structure of X-Encoder/Y-Encoder is given by Fig.~\ref{fig:encoder}.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet in details (Fig.~\ref{fig:framework}) in the following structure: a brief review on \emph{CVAE} (Sec.~\ref{subsec:cvae}), \emph{Problem Definition} (Sec.~\ref{subsec:definition}), \emph{Motion Input} (Sec.~\ref{subsec:input}), \emph{Dynamic Maps} (Sec.~\ref{subsec:dynamic}), \emph{Diverse Sampling} (Sec.~\ref{subsec:sample}) and \emph{Trajectory Ranking} (Sec.~\ref{subsec:ranking}).
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
In tasks like trajectory prediction, we are interested in modeling a conditional distribution $P(Y_n|X)$, where $X$ is the previous trajectory information and $Y_n$ is one of the possible future trajectories.
In order to generate controllable, diverse samples of future trajectories based on past trajectories, a deep generative model, a conditional variational auto-encoder (CVAE), is adopted inside our framework.
CVAE is an extension of the generative model VAE \cite{kingma2013auto} by introducing a condition to control the output \cite{kingma2014semi}.
Concretely, it is able to learn the stochastic latent variable $z$ that characterizes the distribution $P(Y_i|X_i)$ of $Y_i$ conditioned on the input $X_i$, where $i$ is the index of the sample.
The objective function of training CVAE is formally defined as:
\begin{equation}
\label{eq:CVAE}
\log{P(Y_i|X_i)} \geq - D_{KL}(Q(z_i|Y_i, X_i)||P(z_i)) + \E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}],
\end{equation}
where $Y$ and $X$ stand for the future and past trajectories in our task, respectively, and $z_i$ denotes the latent variable. The objective is to maximize the conditional probability $\log{P(Y_i|X_i)}$, which is equivalent to minimizing the reconstruction error $\ell (\hat{Y_i}, Y_i)$ and the Kullback-Leibler divergence $D_{KL}(\cdot)$ in parallel.
In order to enable back propagation for stochastic gradient descent in $\E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}]$, a re-parameterization trick \cite{rezende2014stochastic} is applied to $z_i$: $z_i$ is re-parameterized as $z_i = \mu_i + \sigma_i \odot \epsilon_i$. Here, $z$ is assumed to have a Gaussian distribution $z_i\sim Q(z_i|Y_i, X_i)=\mathcal{N}(\mu_i, \sigma_i)$, $\epsilon_i$ is sampled from noise that follows a standard Gaussian distribution, and the mean $\mu_i$ and the standard deviation $\sigma_i$ of $z_i$ are produced by two side-by-side \textit{fc} layers, respectively (as shown in Fig.~\ref{fig:framework}). In this way, the differentiation problem of the sampling process $Q(z_i|Y_i, X_i)$ is turned into differentiating the sampled results $z_i$ w.\,r.\,t.~$\mu_i$ and $\sigma_i$. Then, back propagation with stochastic gradient descent can be used to optimize the networks that produce $\mu_i$ and $\sigma_i$.
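A minimal NumPy sketch of the re-parameterization and of the closed-form KL divergence between a diagonal Gaussian and $\mathcal{N}(0, I)$ is given below; in the actual model, $\mu_i$ and $\log\sigma_i$ are produced by the two \textit{fc} layers, and all names here are illustrative:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_sigma):
    # z = mu + sigma * eps with eps ~ N(0, I); the gradient flows
    # through mu and sigma instead of the random sampling step.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

def kl_to_standard_normal(mu, log_sigma):
    # Closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian.
    return 0.5 * np.sum(np.exp(2 * log_sigma) + mu**2 - 1 - 2 * log_sigma)
\end{verbatim}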
\subsection{Problem Definition}
\label{subsec:definition}
The multi-path trajectory prediction problem is defined as follows: agent $i$ receives as input its observed trajectory $\mathbf{X}_i=\{X_i^1,\cdots,X_i^T\}$ and predicts its $n$-th plausible future trajectory $\hat{\mathbf{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,n}^{T'}\}$. $T$ and $T'$ denote the sequence lengths of the observed and the predicted trajectories, respectively. The trajectory position of $i$ at time step $t$ is characterized by the coordinates $X_i^t=({x_i}^t, {y_i}^t)$ (3D coordinates are also possible, but in this work only 2D coordinates are considered) and likewise $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^{t'}, \hat{y}_{i,n}^{t'})$.
The objective is to predict multiple plausible future trajectories $\hat{\mathbf{Y}}_i = \{\hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}\}$ that are as close as possible to the ground truth $\mathbf{Y}_i$. This task is formally defined as $\hat{\mathbf{Y}}_{i,n} = f(\mathbf{X}_i, \text{Map}_i), ~n \in \{1,\cdots,N\}$. Here, the total number of predicted trajectories is denoted as $N$ and $\text{Map}_i$ denotes the dynamic maps centralized on the target agent for mapping the interactions with its neighboring agents over the time steps. More details of the dynamic maps will be given in Sec.~\ref{subsec:dynamic}.
\subsection{Motion Input}
\label{subsec:input}
Specifically, we use the offset $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of the trajectory positions between two consecutive time steps as the motion information instead of the coordinates in a Cartesian space, which has been widely applied in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}. Compared to coordinates, the offset is independent of the given space and less prone to overfitting a model to a particular space or travel direction.
The offset can be interpreted as speed over time steps that are defined with a constant duration.
As long as the original position is known, the absolute coordinates at each position can be calculated by cumulatively summing the sequence offsets.
As an augmentation technique, we randomly rotate the trajectories to prevent the model from only learning certain directions. In order to maintain the relative positions and angles between agents, the trajectories of all the agents coexisting in a given period are rotated by the same angle.
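To make the motion representation and the rotation augmentation concrete, the following Python/NumPy sketch (our own illustrative code, not the exact implementation) converts a trajectory to offsets, recovers the absolute coordinates, and rotates all coexisting trajectories by the same angle:
\begin{verbatim}
import numpy as np

def to_offsets(traj):
    # traj: (T, 2) absolute xy coordinates -> (T-1, 2) per-step offsets.
    return np.diff(traj, axis=0)

def to_coordinates(offsets, origin):
    # Recover absolute positions by cumulatively summing the offsets.
    return origin + np.cumsum(offsets, axis=0)

def rotate_scene(trajs, angle):
    # Rotate all coexisting trajectories by the same angle (radians),
    # preserving the relative positions and angles between agents.
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return [traj @ R.T for traj in trajs]
\end{verbatim}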
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 2.2in 3.6in 0.5in, width=1\textwidth]{fig/encoder.pdf}
\caption{Structure of the X-Encoder. The encoder has two branches: the upper one is used to extract motion information of target agents (i.\,e.,~movement in $x$- and $y$-axis in a Cartesian space), and the lower one is used to learn the interaction information among the neighboring road users from dynamic maps over time. Each dynamic map consists of three layers that represents orientation, travel speed and relative position, which are centralized on the target road user respectively. The motion information and the interaction information are encoded by their own LSTM sequentially. The last outputs of the two LSTMs are concatenated and forwarded to a \textit{fc} layer to get the final output of the X-Encoder. The Y-Encoder has the same structure as the X-Encoder but it is used for extracting features from the future trajectories and only used in the training phase.}
\label{fig:encoder}
\end{figure}
\subsection{Dynamic Maps}
\label{subsec:dynamic}
Different from the recent works of parsing the interactions between the target and neighboring agents using an occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, we propose a novel and straightforward method---attentive dynamic maps---to learn interaction information among agents.
As demonstrated in Fig.~\ref{fig:encoder}, a dynamic map at a given time step consists of three layers that interpret the information of \emph{orientation}, \emph{speed} and \emph{position}, respectively, which is derived from the trajectories of the involved agents. Each layer is centralized on the target agent's position and divided into uniform grid cells. The layers are divided into grids because: (1) compared with representing the information at the pixel level, the grid level is computationally more efficient; (2) the size and moving speed of an agent are not fixed, so an agent occupies a local region of pixels of arbitrary shape and the spatio-temporal information differs from pixel to pixel even within the same agent. Therefore, we represent the spatio-temporal information as an average value within a grid cell. We calculate the value of each grid cell in the different layers as follows:
the neighboring agents are assigned to the corresponding grid cells by their relative position to the target agent, and they are also assigned to grid cells by their relative offset (speed) to the target agent at each time step in the $x$- and $y$-axis directions.
Eq.~\eqref{eq:map} denotes the mapping mechanism for target user $i$ considering the orientation $O$, speed $S$ and position $P$ of all the neighboring agents $j \in \mathcal{N}(i)$ that coexist with the target agent $i$ at each time step.
\begin{equation}
\label{eq:map}
\text{Map}_i^t = \sum_{j \in \mathcal{N}(i)}(O, S, P) | (x_j^t-x_i^t, ~y_j^t-y_i^t, ~\Delta{x}_j^t-\Delta{x}_i^t, ~\Delta{y}_j^t-\Delta{y}_i^t).
\end{equation}
The \emph{orientation} layer $O$ represents the heading direction of the neighboring agents. The orientation value is in \emph{degrees} $[0, 360]$ and is then mapped to $[0, 1]$. The value of each grid cell is the mean orientation of all the agents existing within the cell.
The \emph{speed} layer $S$ represents the neighboring agents' travel speed. Locally, the speed in each grid cell is the average speed of all the agents within the cell. Globally, across all the cells, the speed values are normalized into $[0, 1]$ by the Min-Max normalization scheme.
The \emph{position} layer $P$ stores the positions of all the neighboring agents in the grid cells calculated by Eq.~\eqref{eq:map}. The value of the corresponding cell is the number of neighboring agents within the cell normalized by the total number of neighboring agents at that time step, which can be interpreted as the cell's occupancy density.
Each time step has a dynamic map and therefore the spatio-temporal interaction information among agents is interpreted dynamically over time.
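The following simplified Python/NumPy sketch illustrates how one such map could be built at a single time step. The dictionary-based agent representation and all names are our illustrative assumptions; the actual implementation also assigns agents by their relative offsets, as in Eq.~\eqref{eq:map}:
\begin{verbatim}
import numpy as np

def dynamic_map(target, neighbors, cell=1.0, radius=16.0):
    # target, neighbors: dicts with 'pos' (x, y), 'speed' and
    # 'heading' (in degrees). Returns a (3, G, G) array with the
    # orientation, speed and position layers.
    G = int(2 * radius / cell)
    omap, smap, pmap = (np.zeros((G, G)) for _ in range(3))
    counts = np.zeros((G, G))
    for nb in neighbors:
        dx = nb['pos'][0] - target['pos'][0]
        dy = nb['pos'][1] - target['pos'][1]
        if abs(dx) >= radius or abs(dy) >= radius:
            continue                            # outside region of interest
        gx, gy = int((dx + radius) / cell), int((dy + radius) / cell)
        omap[gx, gy] += nb['heading'] / 360.0   # orientation scaled to [0, 1]
        smap[gx, gy] += nb['speed']
        pmap[gx, gy] += 1.0
        counts[gx, gy] += 1.0
    nz = counts > 0
    omap[nz] /= counts[nz]                      # mean orientation per cell
    smap[nz] /= counts[nz]                      # mean speed per cell
    if smap.max() > smap.min():
        smap = (smap - smap.min()) / (smap.max() - smap.min())  # Min-Max
    if neighbors:
        pmap /= len(neighbors)                  # occupancy density
    return np.stack([omap, smap, pmap])
\end{verbatim}
Calling this function once per time step yields the sequence of maps that is fed to the lower branch of the encoder.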
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.7\textwidth]{fig/dynamic_maps_nexus_0.pdf}
\caption{The maps information with accumulated time steps for the dataset \textit{nexus-0}.}
\label{fig:dynamic_maps}
\end{figure}
To more intuitively show the dynamic maps information, we gather all the agents over all the time steps and visualize them in Fig.~\ref{fig:dynamic_maps} as an example showcased by the dataset \textit{nexus-0} (see more information on the benchmark datasets in Sec~\ref{subsec:benchmark}).
Each rectangular grid cell is \SI{1}{meter} in both width and height, and the region of interest extends up to \SI{16}{meters} in each direction, centralized on the target agent, in order to include not only close but also distant neighboring agents.
The visualization demonstrates certain motion patterns of the agents, including the distribution of orientation, speed and position over the grids of the maps. For example, all the agents move in a certain direction with similar speed on a particular area of the maps, and some areas are much denser than the others.
\subsubsection{Attentive Maps Encoder}
\label{subsubsec:AMENet}
As discussed above, each time step has a dynamic map which summarizes the orientation, speed and position information of all the neighboring agents. To capture the spatio-temporal interconnections from the dynamic maps for the following modules, we propose the \emph{Attentive Maps Encoder} module.
The X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and dynamic maps information for interaction (lower branch).
The upper branch takes as motion input the offsets $\sum_t^T({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of each target agent over the observed time steps. The motion information is first passed to a 1D convolutional layer (Conv1D) with one-step stride along the time axis to learn motion features one time step after another. Then it is passed to a fully connected (\textit{fc}) layer. The output of the \textit{fc} layer is passed to an LSTM module for encoding the temporal features along the trajectory sequence of the target agent into a hidden state, which contains all the motion information.
The lower branch takes the dynamic maps $\sum_t^T\text{Map}_i^t$ as input.
The interaction information at each time step is passed through a 2D convolutional layer (Conv2D) with ReLU activation and a Max Pooling layer (MaxPool) to learn the spatial features among all the agents. The output of MaxPool at each time step is flattened and concatenated along the time axis to form a time-distributed feature vector. Then, the feature vector is fed forward to a self-attention module to learn the interaction information with an attention mechanism. Here, we adopt the multi-head attention method from the Transformer, which linearly projects multiple self-attention operations in parallel and concatenates their outputs~\cite{vaswani2017attention}.
The attention function is described as mapping a query and a set of key-value pairs to an output. The query ($Q$), keys ($K$) and values ($V$) are transformed from the spatial features, which are encoded in the above step, by linear transformations:
\begin{align*}
Q =& \pi(\text{Map})W_Q, ~W_Q \in \mathbb{R}^{D\times D_q},\\
K =& \pi(\text{Map})W_K, ~W_K \in \mathbb{R}^{D\times D_k},\\
V =& \pi(\text{Map})W_V, ~W_V \in \mathbb{R}^{D\times D_v},
\end{align*}
where $W_Q, W_K$ and $W_V$ are trainable parameters and $\pi(\cdot)$ indicates the encoding function of the dynamic maps. $D_q, D_k$ and $D_v$ are the dimensions of the query, key and value vectors (they are the same in our implementation).
Then the self-attended features are calculated as:
\begin{equation}
\label{eq:attention}
\text{Attention}(Q, K, V) = \text{softmax}(\frac{QK^T}{\sqrt{d_k}})V
\end{equation}
This operation is also called \emph{scaled dot-product attention} \cite{vaswani2017attention}.
To improve the performance of the attention layer, \emph{multi-head attention} is applied:
\begin{align}
\label{eq:multihead}
\begin{split}
\text{MultiHead}(Q, K, V) &= \text{ConCat}(\text{head}_1,...,\text{head}_h)W_O \\
\text{head}_i &= \text{Attention}(QW_{Qi}, KW_{Ki}, VW_{Vi})
\end{split}
\end{align}
where $W_{Qi}\in \mathbb{R}^{D\times D_{qi}}$ indicates the linear transformation parameters for the query in the $i$-th self-attention head and $D_{qi} = \frac{D_{q}}{\#head}$. The same holds for $W_{Ki}$ and $W_{Vi}$. Note that $\#head$ is the total number of attention heads and it must divide $D_{q}$ evenly. The outputs of all heads are concatenated and passed through a linear transformation with parameters $W_O$.
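The following NumPy sketch illustrates Eqs.~\eqref{eq:attention} and \eqref{eq:multihead}; it is illustrative only, and in the model the projection matrices are trained jointly with the rest of the network:
\begin{verbatim}
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention, Eq. (eq:attention).
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)          # softmax over the keys
    return w @ V

def multi_head(x, proj, W_O):
    # x: (T, D) encoded map features; proj: list of per-head
    # (W_Qi, W_Ki, W_Vi) triples, each of shape (D, D_q // #heads);
    # W_O: (D_q, D_q) output projection, Eq. (eq:multihead).
    heads = [attention(x @ Wq, x @ Wk, x @ Wv) for Wq, Wk, Wv in proj]
    return np.concatenate(heads, axis=-1) @ W_O
\end{verbatim}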
The output of the multi-head attention is passed to an LSTM which is used to encode the dynamic interconnection in time sequence.
Both the hidden states (the last output) from the motion LSTM and the interaction LSTM are concatenated and passed to a \textit{fc} layer for feature fusion, as the complete output of the X-Encoder, which is denoted as $\Phi_X$.
The Y-Encoder has the same structure as the X-Encoder and is used to encode both the target agent's motion and interaction information from the ground truth during training. The output of the Y-Encoder is denoted as $\Phi_Y$. The dynamic maps are also leveraged in the Y-Encoder; however, they are not reconstructed by the Decoder (only the future trajectories are reconstructed). This extended structure distinguishes our model from the conventional CVAE structure \cite{kingma2013auto,kingma2014semi,sohn2015learning} and the work of \cite{lee2017desire}, in which the input and output maintain the same form.
\subsection{Diverse Sample Generation}
\label{subsec:sample}
In the training phase, $\Phi_X$ and $\Phi_Y$ are concatenated and forwarded to two successive \textit{fc} layers followed by the ReLU activation, and then passed to two parallel \textit{fc} layers to produce the mean and the standard deviation of the distribution, which are used to re-parameterize $z$ as discussed in Sec.~\ref{subsec:cvae}.
Then, $\Phi_X$ is concatenated with $z$ as condition and fed to the following decoder (based on LSTM) to reconstruct $Y$ sequentially. It is worth noting that $\Phi_X$ is used as condition to help reconstruct $\mathbf{Y}$ here.
The MSE loss ${\ell}_2 (\mathbf{\hat{Y}}, \mathbf{Y})$ (reconstruction loss) and the $\text{KL}(Q(z|\mathbf{Y}, \mathbf{X})||P(z))$ loss are used to train our model.
The MSE loss forces the reconstructed results to be as close as possible to the ground truth, and the KL-divergence loss forces the set of latent variables $z$ towards a Gaussian distribution.
During inference at test time, the Y-Encoder is removed and the X-Encoder works in the same way as in the training phase to extract information from observed trajectories. To generate a future prediction sample, a latent variable $z$ is sampled from $\mathcal{N}(\mathbf{0}, ~I)$ and concatenated with $\Phi_X$ (as condition) as the input of the decoder.
To generate diverse samples, this step is repeated $N$ times to generate $N$ samples of future prediction conditioned on $\Phi_X$.
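The inference-time sampling loop can be sketched as follows; the \texttt{decoder} callable and all names are illustrative placeholders standing in for the trained modules:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def predict_multi(decoder, phi_x, n_samples=10, z_dim=2):
    # decoder: callable mapping the concatenated condition [phi_x, z]
    # to one future trajectory; phi_x: output of the X-Encoder.
    predictions = []
    for _ in range(n_samples):
        z = rng.standard_normal(z_dim)        # z ~ N(0, I)
        predictions.append(decoder(np.concatenate([phi_x, z])))
    return predictions
\end{verbatim}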
To summarize, the overall pipeline of Attentive Maps Encoder Network (AMENet) consists of four modules, namely, X-Encoder, Y-Encoder, Z-Space and Decoder.
Each of the modules uses a different type of neural network to process the motion information and the dynamic maps information for multi-path trajectory prediction. Fig.~\ref{fig:framework} depicts the pipeline of the framework.
\subsection{Trajectory Ranking}
\label{subsec:ranking}
A bivariate Gaussian distribution is used to rank the multiple predicted trajectories $\hat{Y}^1,\cdots,\hat{Y}^N$ for each agent. At each time step $t'\in T'$, the predicted positions $({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})$, $n{\in}N$, of agent $i$ are used to fit a bivariate Gaussian distribution $\mathcal{N}({\mu}_{xy},\,\sigma^{2}_{xy}, \,\rho)^{t'}$. The predicted trajectories are sorted by their joint probability density functions $p(\cdot)$ over the time axis using Eqs.~\eqref{eq:pdf} and \eqref{eq:sort}. $\widehat{Y}^\ast$ denotes the most-likely prediction out of the $N$ predictions.
\begin{align}
\label{eq:pdf}
P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'}) \approx p[({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})|\mathcal{N}({\mu}_{xy},\sigma^{2}_{xy},\rho)^{t'}]\\
\label{eq:sort}
\widehat{Y}^\ast = \underset{n}{\text{arg\,max}}\sum_{t'=1}^{T'}{\log}P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})
\end{align}
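A minimal sketch of this ranking procedure, assuming SciPy is available (the function name is ours), fits a bivariate Gaussian at each time step and selects the trajectory with the highest summed log-density:
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def rank_most_likely(preds):
    # preds: (N, T', 2) array holding N predicted trajectories.
    N, T, _ = preds.shape
    log_p = np.zeros(N)
    for t in range(T):
        pts = preds[:, t, :]
        mu = pts.mean(axis=0)                     # fitted mean
        cov = np.cov(pts.T) + 1e-6 * np.eye(2)    # fitted covariance
        log_p += multivariate_normal(mu, cov).logpdf(pts)
    return int(np.argmax(log_p))                  # most-likely index
\end{verbatim}
The small diagonal term added to the covariance is a numerical-stability safeguard of this sketch, not part of the formal definition above.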
\section{Experiments}
\label{sec:experiments}
In this section, we will introduce the benchmark which is used to evaluate our method, the evaluation metrics and the comparison of results from our method with the ones from the recent state-of-the-art methods. To further justify how each proposed module in our framework impacts the overall performance, we design a series of ablation studies and discuss the results in detail.
\subsection{Trajnet Benchmark Challenge Datasets}
\label{subsec:benchmark}
We verify the performance of the proposed method on the most challenging benchmark Trajnet datasets \cite{sadeghiankosaraju2018trajnet}. It is the most popular large-scale trajectory-based activity benchmark in this domain and provides a uniform evaluation system for fair comparison among different submitted methods.
Trajnet covers a wide range of datasets and includes various types of road users (pedestrians, bikers, skateboarders, cars, buses, and golf cars) that navigate in a real world outdoor mixed traffic environment.
The data were collected from 38 scenes with ground truth for training, and the data from the other 20 scenes without ground truth are used for testing (i.\,e.,~the open challenge competition). The most popular pedestrian scenes, ETH \cite{pellegrini2009you} and UCY \cite{lerner2007crowds}, are also included in the benchmark. Each scene presents a different traffic density in a different space layout, which makes the prediction task challenging.
It requires a model to generalize, in order to adapt to the various complex scenes.
Trajectories are recorded as the $xy$ coordinates in meters or pixels projected on a Cartesian space. Each trajectory provides 8 steps for observation and the following 12 steps for prediction. The duration between two successive steps is 0.4 seconds.
However, the pixel coordinates are not on the same scale across the whole benchmark. Without unifying the pixels into the same scale, it is extremely difficult to train a general model for the whole dataset. Hence, we follow all the previous works \cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,gupta2018social,giuliari2020transformer} and use the coordinates in meters.
In order to train and evaluate the proposed method, as well as to conduct the ablative studies, 6 different scenes are selected from the 38 scenes in the training set as an offline test set.
Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.
The best trained model is selected based on its performance on the offline test set and is then used for the online evaluation.
Fig.~\ref{fig:trajectories} shows the visualization of the trajectories in each scene.
\begin{figure}[bpht!]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_bookstore_3.pdf}
\label{trajectories_bookstore_3}
\caption{\small bookstore3}
\end{subfigure}
\begin{subfigure}{0.54\textwidth}
\includegraphics[trim=0cm 2cm 0cm -1.47cm, width=1\textwidth]{fig/trajectories_coupa_3.pdf}
\label{trajectories_coupa_3}
\caption{\small coupa3}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
\includegraphics[trim=0cm 2cm 0cm -0.47cm, width=1\textwidth]{fig/trajectories_deathCircle_0.pdf}
\label{trajectories_deathCircle_0}
\caption{\small deathCircle0}
\end{subfigure}
\begin{subfigure}{0.28\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_gates_1.pdf}
\label{trajectories_gates_1}
\caption{\small gates1}
\end{subfigure}
\begin{subfigure}{0.52\textwidth}
\includegraphics[trim=0cm 2cm 0cm -1.64cm, width=1\textwidth]{fig/trajectories_hyang_6.pdf}
\label{trajectories_hyang_6}
\caption{\small hyang6}
\end{subfigure}
\begin{subfigure}{0.27\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_nexus_0.pdf}
\label{trajectories_nexus_0}
\caption{\small nexus0}
\end{subfigure}
\caption{Visualization of each scene of the offline test set. Sub-figures are not resized for alignment, in order to demonstrate the very different space size and layout.}
\label{fig:trajectories}
\end{figure}
\subsection{Evaluation Metrics}
The mean average displacement error (MAD) and the final displacement error (FDE) are the two most commonly applied metrics to measure the performance in terms of trajectory prediction~\cite{alahi2016social,gupta2018social,sadeghian2018sophie}.
\begin{itemize}
\item MAD is the aligned L2 distance from the ground truth $Y$ to its prediction $\hat{Y}$ averaged over all time steps. We report the mean value over all trajectories.
\item FDE is the L2 distance of the last position from $Y$ to the corresponding $\hat{Y}$. It measures a model's ability to predict the destination and is more challenging as errors accumulate over time. A minimal sketch of both metrics is given after this list.
\end{itemize}
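Both metrics reduce to a few lines of NumPy, sketched below for a single trajectory pair (function name ours, purely illustrative):
\begin{verbatim}
import numpy as np

def mad_fde(y_true, y_pred):
    # y_true, y_pred: (T', 2) ground-truth and predicted positions.
    dists = np.linalg.norm(y_true - y_pred, axis=-1)  # L2 per step
    return dists.mean(), dists[-1]                    # (MAD, FDE)
\end{verbatim}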
We evaluate both the most-likely prediction and the best prediction $@top10$ for multi-path trajectory prediction.
The most-likely prediction is selected by the trajectory ranking mechanism, see Sec~\ref{subsec:ranking}.
$@top10$ prediction reports, out of the 10 predicted trajectories with the highest confidence, the one with the smallest MAD and FDE compared with the ground truth. When the ground truth is not available (for the online test), only the most-likely prediction is selected. The task then reduces to the single trajectory prediction problem, as in most of the previous works \cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,giuliari2020transformer}.
\subsection{Quantitative Results and Comparison}
\label{subsec:results}
We compare the performance of our method with the most influential previous works and the recent state-of-the-art works published on the Trajnet challenge (up to 10/06/2020) to ensure a fair comparison.
The compared works include: the rule-based model \emph{Social Force}~\cite{helbing1995social};
\emph{Social LSTM}~\cite{alahi2016social} that proposes social pooling with a rectangular occupancy grid for close neighboring agents which is widely adopted thereafter in this domain~\cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}; the state refinement LSTM~\emph{SR-LSTM}~\cite{zhang2019sr}; the RNN Encoder-based model \emph{RED}~\cite{becker2018evaluation}; \emph{MX-LSTM}~\cite{hasan2018mx} exploits the head pose information of agents to help analyze its moving intention;
\emph{Social GAN}~\cite{gupta2018social}
proposes to utilize the generative model GAN for multi-path trajectory prediction, which is one of the closest works to ours (the other one is \emph{DESIRE} \cite{lee2017desire}; however, neither online test results nor code were reported, hence we do not compare with \emph{DESIRE} for fairness);
\emph{Ind-TF}~\cite{giuliari2020transformer} which utilizes transformer network for this task.
Table~\ref{tb:results} lists the performances of the above methods and ours on the Trajnet test set measured by MAD, FDE and the overall average $(\text{MAD} + \text{FDE})/2$. The data are originally reported on the Trajnet challenge leader board\footnote{http://trajnet.stanford.edu/result.php?cid=1}. We can see that our method (AMENet) outperforms the other methods significantly and wins the first place on all metrics.
Even compared with the most recent model, the Transformer network Ind-TF \citep{giuliari2020transformer} (under review), our method performs better. Particularly, our method improves the FDE performance, reducing the error from 1.197 to 1.183 meters.
Note that our model predicts multiple trajectories by sampling from the learned Z-Space repeatedly (see Sec.~\ref{subsec:sample}). We select the most-likely prediction using the proposed ranking method discussed in Sec.~\ref{subsec:ranking}. The outstanding performance of our method also demonstrates that our ranking method is effective.
\begin{table}[t!]
\centering
\caption{Comparison between our method and the state-of-the-art works. Smaller values are better. Best values are highlighted in boldface.}
\begin{tabular}{llll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & MAD [m]$\downarrow$ \\
\hline
Social LSTM~\cite{alahi2016social} & 1.3865 & 3.098 & 0.675 \\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 \\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 \\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 \\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 \\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 \\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} \\
Ours (AMENet)\tablefootnote{Our method is named as \textit{ikg\_tnt} on the leader board.} & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} \\
\hline
\end{tabular}
\label{tb:results}
\end{table}
\subsection{Results for Multi-Path Prediction}
\label{subsec:multipath-selection}
Multi-path trajectory prediction is one of the main contributions of this work and essentially distinguishes our work from most of the existing works.
Here, we discuss its performance w.\,r.\,t.~multi-path prediction with the latent space.
Instead of generating a single prediction, AMENet generates multiple feasible trajectories by sampling the latent variable $z$ multiple times (see Sec~\ref{subsec:cvae}).
During training, the motion information and interaction information from observation and ground truth are encoded into the so-called Z-Space (see Fig.~\ref{fig:framework}). The KL-divergence loss forces $z$ to be a normal Gaussian distribution.
Fig.~\ref{fig:z_space} shows the visualization of the Z-Space in two dimensions, with $\mu$ visualized on the left and $\log\sigma$ on the right. From the figure we can see that the training phase successfully re-parameterizes the variable $z$ into a Gaussian distribution that captures the stochastic properties of agents' behaviors. When the Y-Encoder is not available at inference time, the well-trained Z-Space, in turn, enables us to randomly sample a latent variable $z$ from the Gaussian distribution multiple times for generating more than one feasible future trajectory conditioned on the observation.
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=.6\textwidth]{fig/z_space.pdf}
\caption{Z-Space of two dimensions with $\mu$ visualized on the left and $\sigma$ visualized on the right. It is trained to follow $\mathcal{N}(0, 1)$ distribution. The variance is visualized in logarithm space, which is very close to zero.}
\label{fig:z_space}
\end{figure}
Table~\ref{tb:multipath} shows the quantitative results for multi-path trajectory prediction. Predicted trajectories are ranked by $\text{top}@10$ with prior knowledge of the corresponding ground truth, and by the most-likely ranking if the ground truth is not available. Compared to the most-likely prediction, the $\text{top}@10$ prediction yields similar but better performance. This indicates that: 1) the generated multiple trajectories increase the chance to narrow down the errors from the prediction to the ground truth, and 2) the predicted trajectories are feasible (if not, the bad predictions would deteriorate the overall performance and lead to worse results than the most-likely prediction).
Fig.~\ref{fig:multi-path} showcases some qualitative examples of multi-path trajectory prediction from our model. We can see that in roundabouts, the interactions between different agents are full of uncertainties and each agent has more possibilities for choosing its future path. We also notice that the predicted trajectories diverge more widely at later time steps. This is reasonable because the uncertainty about an agent's intention increases the further we look into the future. It also shows that the ability to predict multiple plausible trajectories is important for analyzing the movements of road users, given the increasing uncertainty of future movements. A single prediction provides limited information for such an analysis and is likely to lead to false conclusions if the prediction is imprecise at an early step.
\begin{table}[t!]
\centering
\small
\caption{Evaluation of multi-path trajectory prediction using AMENet on the offline test set of Trajnet. Predicted trajectories are ranked by $\text{top}@10$ and by the most-likely selection, and are measured by MAD/FDE.}
\begin{tabular}{lll}
\\ \hline
Dataset & Top@10 & Most-likely \\ \hline
bookstore3 & 0.477/0.961 & 0.486/0.979 \\
coupa3 & 0.221/0.432 & 0.226/0.442 \\
deathCircle0 & 0.650/1.280 & 0.659/1.297 \\
gates1 & 0.784/1.663 & 0.797/1.692 \\
hyang6 & 0.534/1.076 & 0.542/1.094 \\
nexus0 & 0.642/1.073 & 0.559/1.109 \\
Average & 0.535/1.081 & 0.545/1.102 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.514\textwidth]{multi_preds/deathCircle_0240.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.476\textwidth]{multi_preds/gates_1001.pdf}
\caption{Multi-path predictions from AMENet}
\label{fig:multi-path}
\end{figure}
\subsection{Ablation Studies}
\label{sec:ablativemodels}
In order to analyze the impact of each proposed module in our framework, i.\,e.,~dynamic maps, self-attention, and the extended structure of the CVAE, experiments with three ablative models are carried out.
\begin{itemize}
\item AMENet, the full model of our framework.
\item AOENet, substitutes dynamic maps with occupancy grid \citep{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both the X-Encoder and Y-Encoder. This setting is used to validate the contribution from the dynamic maps.
\item MENet, removes the self-attention module in the dynamic maps branch. This setting is used to validate the effectiveness of the self-attention module that learns the spatial interactions among agents along the time axis.
\item ACVAE, only uses dynamic maps in the X-Encoder. It is equivalent to a CVAE~\citep{kingma2013auto,kingma2014semi,sohn2015learning} with self-attention. This setting is used to validate the contribution of the extended structure for processing the dynamic maps information in the Y-Encoder.
\end{itemize}
Table~\ref{tb:resultsablativemodels} shows the quantitative results from the above ablative models.
Errors are measured by MAD/FDE on the most-likely prediction.
By comparing AOENet and AMENet, we can see that when we replace the dynamic maps with the occupancy grid, both MAD and FDE increase by a remarkable margin across all the datasets.
It demonstrates that our proposed dynamic maps are more helpful for exploring the interaction information among agents than the occupancy grid.
We can also see that if the self-attention module is removed (MENet) the performance decreases by a remarkable margin across all the datasets.
This phenomenon indicates that the self-attention module is effective in learning the interactions among agents from the dynamic maps.
The comparison between ACVAE and AMENet shows that when we remove the extended structure for dynamic maps in the Y-Encoder, the performance, especially FDE, degrades significantly across all the datasets. The extended structure provides the model with the ability to process the interaction information even for prediction. It improves the model's performance, especially for predicting more accurate destinations. This improvement has also been confirmed by the benchmark challenge (see Table~\ref{tb:results}). One interesting observation from the comparison between ACVAE and AOENet/MENet is that ACVAE performs much better than AOENet and MENet measured by MAD and FDE. This observation further proves that, even without the extended structure in the Y-Encoder, the dynamic maps with self-attention are very beneficial for interpreting the interactions between a target agent and its neighboring agents. Their robustness has been demonstrated by the ablative models across the various datasets.
\begin{table}[hbpt!]
\setlength{\tabcolsep}{3pt}
\centering
\small
\caption{Evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE on the most-likely prediction. Best values are highlighted in bold face.}
\begin{tabular}{lllll}
\\ \hline
Dataset & AMENet & AOENet & MENet & ACVAE \\ \hline
bookstore3 & \textbf{0.486}/\textbf{0.979} & 0.574/1.144 & 0.576/1.139 & 0.509/1.030 \\
coupa3 & \textbf{0.226}/\textbf{0.442} & 0.260/0.509 & 0.294/0.572 & 0.237/0.464 \\
deathCircle0 & \textbf{0.659}/\textbf{1.297} & 0.726/1.437 & 0.725/1.419 & 0.698/1.378 \\
gates1 & \textbf{0.797}/\textbf{1.692} & 0.878/1.819 & 0.941/1.928 & 0.861/1.823 \\
hyang6 & \textbf{0.542}/\textbf{1.094} & 0.619/1.244 & 0.657/1.292 & 0.566/1.140 \\
nexus0 & \textbf{0.559}/\textbf{1.109} & 0.752/1.489 & 0.705/1.140 & 0.595/1.181 \\
Average & \textbf{0.545}/\textbf{1.102} & 0.635/1.274 & 0.650/1.283 & 0.578/1.169 \\ \hline
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
Fig.~\ref{fig:abl_qualitative_results} showcases some examples of the qualitative results of the full AMENet in comparison to the ablative models in different scenes.
In general, all the models are able to predict trajectories in different scenes, e.\,g.,~intersections and roundabouts, with various traffic densities and motion patterns, e.\,g.,~standing still or moving fast. Given a short observation of trajectories, i.\,e.,~8 time steps, all the models are able to capture the general speed and heading direction of agents located in different areas of the space.
AMENet predicts the most accurate trajectories which are very close or even completely overlap with the corresponding ground truth trajectories.
Compared with the ablative models, AMENet predicts more accurate destinations (the last position of the predicted trajectories), which is in line with the quantitative results shown in Table~\ref{tb:results}.
One very clear example in \textit{hyang6} (Fig.~\ref{hyang_6209}) shows that when the fast-moving agent changes its motion, AOENet and MENet struggle to predict its travel speed and ACVAE struggles to predict its destination. On the other hand, the prediction from AMENet is very close to the ground truth.
Nevertheless, our models have limited performance in predicting abnormal trajectories, such as suddenly turning around or changing speed drastically. Such scenarios can be found in the lower right corner of \textit{gates1} (Fig.~\ref{gates_1001}). Sudden maneuvers of agents are very difficult to forecast, even for human observers.
\begin{figure}[t!]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{scenarios/bookstore_3290.pdf}
\caption{\small bookstore3}
\label{bookstore_3290}
\end{subfigure}
\begin{subfigure}{0.54\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -1.47cm, width=1\textwidth]{scenarios/coupa_3327.pdf}
\caption{\small coupa3}
\label{coupa_3327}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -0.47cm, width=1\textwidth]{scenarios/deathCircle_0000.pdf}
\caption{\small deathCircle0}
\label{deathCircle_0000}
\end{subfigure}
\begin{subfigure}{0.28\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{scenarios/gates_1001.pdf}
\caption{\small gates1}
\label{gates_1001}
\end{subfigure}
\begin{subfigure}{0.52\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -1.64cm, width=1\textwidth]{scenarios/hyang_6209.pdf}
\caption{\small hyang6}
\label{hyang_6209}
\end{subfigure}
\begin{subfigure}{0.27\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{scenarios/nexus_0038.pdf}
\caption{\small nexus0}
\label{nexus_0038}
\end{subfigure}
\caption{Trajectories predicted by AMENet (AME), AOENet (AOE), MENet (ME), ACVAE (CVAE) and the corresponding ground truth (GT) in different scenes. Sub-figures are not resized for alignment, in order to demonstrate the very different space sizes and layouts.}
\label{fig:abl_qualitative_results}
\end{figure}
\subsection{Extensive Studies on Benchmark InD}
\label{subsec:InD}
To further investigate the performance of our methods, we carry out extensive experiments on a newly published large-scale benchmark InD\footnote{\url{https://www.ind-dataset.com/}}.
It consists of 33 datasets and was collected using drones on four very busy intersections (as shown in Fig.~\ref{fig:qualitativeresultsInD}) in Germany in 2019 by Bock et al. \cite{inDdataset}.
Different from Trajnet, where most of the environments (i.\,e.,~shared spaces \cite{reid2009dft,robicquet2016learning}) are pedestrian friendly, the intersections in InD are designed primarily for vehicles. This makes the prediction task more challenging due to the very different travel speeds of pedestrians and vehicles, and their direct interactions.
We follow the same processing format as the Trajnet benchmark and downsample the time steps of InD from the video frame rate of \SI{25}{fps} to \SI{2.5}{fps}, i.\,e.,~0.4 seconds per time step. We obtain the same sequence length (8 time steps) of each trajectory for observation and 12 time steps for prediction. One third of the datasets from each intersection are selected for testing the performance of AMENet and the remaining datasets are used for training.
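The temporal downsampling amounts to keeping every tenth frame, as in the following sketch (the function name is ours and assumes frames are stored in temporal order):
\begin{verbatim}
def downsample(frames, factor=10):
    # Keep every 10th frame: 25 fps -> 2.5 fps (0.4 s per step).
    return frames[::factor]
\end{verbatim}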
Table~\ref{tb:resultsInD} lists the performance of our method measured by MAD and FDE for each intersection and the overall average errors. We can see that our method is still able to generate feasible trajectories and reports good results: the average errors (0.731/1.587) are only slightly inferior to the ones obtained on Trajnet (0.545/1.102).
\begin{table}[t]
\centering
\small
\caption{Quantitative results of AMENet on InD measured by MAD/FDE, and the average performance across all datasets.}
\begin{tabular}{lll}
\hline
InD & Top@10 & Most-likely \\ \hline
Intersection-A & 0.952/1.938 & 1.070/2.216 \\
Intersection-B & 0.585/1.289 & 0.653/1.458 \\
Intersection-C & 0.737/1.636 & 0.827/1.868 \\
Intersection-D & 0.279/0.588 & 0.374/0.804 \\
Average & 0.638/1.363 & 0.731/1.587 \\ \hline
\end{tabular}
\label{tb:resultsInD}
\end{table}
\begin{figure} [bpht!]
\centering
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/06_Trajectories020_12.pdf}
\caption{\small{Intersection-A}}
\label{subfig:Intersection-A}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/14_Trajectories030_12.pdf}
\caption{\small{Intersection-B}}
\label{subfig:Intersection-B}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/27_Trajectories011_12.pdf}
\caption{\small{Intersection-C}}
\label{subfig:Intersection-C}
\end{subfigure}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/32_Trajectories019_12.pdf}
\caption{\small{Intersection-D}}
\label{subfig:Intersection-D}
\end{subfigure}\hfill
\caption{\small{Benchmark InD: examples for predicting trajectories of mixed traffic in different intersections.}}
\label{fig:qualitativeresultsInD}
\end{figure}
\section{Conclusions}
In this paper, we present a generative model called Attentive Maps Encoder Network (AMENet) that uses motion information and interaction information for multi-path trajectory prediction of mixed traffic in various real-world environments.
The latent space learnt by the X-Encoder and Y-Encoder for both sources of information enables the model to capture the stochastic properties of motion behaviors for predicting multiple plausible trajectories after a short observation time.
We propose a novel way---dynamic maps---to extract the social effects between agents during interactions. The dynamic maps capture accurate interaction information by encoding the neighboring agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of interaction over different time steps.
The efficacy of the model has been validated on the most challenging benchmark, Trajnet, which contains numerous datasets collected in various real-world environments. Our model not only achieves state-of-the-art performance, but also wins first place on the leaderboard for predicting 12 time-step positions of 4.8 seconds.
Each component of AMENet is validated via a series of ablative studies.
In order to further investigate the model's performance, the newly published benchmark InD is utilized. The performance of AMENet tested on InD is comparable with the one on Trajnet, and the model generates feasible trajectories even in vehicle-dominated environments.
In the future work, we will extend our prediction model for safety prediction, for example, using the predicted trajectories to calculate time-to-collision \cite{perkins1968traffic} and detecting abnormal trajectories by comparing the anticipated/predicted trajectories with the actual ones.
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction of road users is a crucial task in different communities, such as intelligent transportation systems (ITS) \cite{morris2008survey,cheng2018modeling,cheng2020mcenet}, photogrammetry
\cite{schindler2010automatic,klinger2017probabilistic,cheng2018mixed}, computer vision \cite{alahi2016social,mohajerin2019multi},
and mobile robot applications \cite{mohanan2018survey}.
This task enables an intelligent system to foresee the behaviors of road users and make a reasonable and safe decision for its next operation, especially in urban mixed-traffic zones (a.k.a. shared spaces \cite{reid2009dft}).
Trajectory prediction is generally defined as predicting the plausible (e.\,g.,~collision free and energy efficient) and socially-acceptable (e.\,g.,~considering social relations, rules and norms between agents) positions in 2D or 3D of non-erratic target agents at each time step within a predefined future time interval, relying on observed partial trajectories over a certain period of time \cite{helbing1995social,alahi2016social}.
The target agent is defined as the dynamic object for which the actual prediction is made, mainly pedestrian, cyclist, vehicle and other road users \cite{rudenko2019human}.
A typical prediction process of mixed traffic is exemplified in Fig.~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.8in 2.6in 0.6in, width=1\textwidth]{fig/first_fig.pdf}
\caption{Predicting plausible and socially-acceptable positions of agents (e.\,g.,~target agent in black) at each time step within a predefined future time interval by observing their past trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
How to effectively and accurately predict trajectories of mixed agents is still an unsolved problem. The challenges mainly come from three aspects: 1) the complex behavior and uncertain moving intention of each agent, 2) the presence of and interactions with neighboring agents, and 3) the multi-modality of paths: there is usually more than one socially-acceptable path that an agent could take in the future.
There exists a large body of literature that focuses on addressing parts or all of the aforementioned challenges in order to make accurate trajectory prediction.
The traditional methods model the interactions based on hand-crafted features, such as force-based rules \cite{helbing1995social}, Game Theory \cite{johora2020agent}, or a constant velocity assumption \cite{best1997new}.
Their performance is crucially affected by the quality of manually designed features and they lack generalizability \cite{cheng2020trajectory}.
Recently, boosted by the development of deep learning technologies \cite{lecun2015deep}, data-driven methods
keep reporting new state-of-the-art performance on benchmarks \cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,cheng2020mcenet}.
For instance, Recurrent Neural Networks (RNNs) based models are used to model the interactions between agents and predict the future positions in sequence \cite{alahi2016social,xue2018ss}.
However, these works design a discriminative model and produce a deterministic outcome for each agent. The models tend to predict the ``average'' trajectories because the commonly used objective function minimizes the Euclidean distance between the ground truth and the predicted outputs.
To predict multiple socially-acceptable trajectories for each target agent, different generative models are proposed, such as Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} based framework Social GAN \cite{gupta2018social} and Conditional Variational Auto-Encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning} based framework DESIRE \cite{lee2017desire}.
In spite of the great success in this domain, most of these methods are designed for a specific agent type: pedestrians.
In reality, pedestrians, cyclists and vehicles are the three major types of agents and their behaviors affect each other. To make precise trajectory prediction, their interactions should be considered jointly. Besides, the interactions between the target agent and the others are treated equally. But different agents may not affect the target agent equally on how to move in the near future. For instance, closer agents should affect the target agent more strongly than distant ones, and a target vehicle is affected more by pedestrians who tend to cross the road than by the vehicles behind it. Last but not least, the robustness of the models is not fully tested in real-world outdoor mixed traffic environments (e.\,g.,~roundabouts, intersections) with various unseen traffic situations. Hence, an important research question is: can a model trained in some spaces predict accurate trajectories in other, unseen spaces?
To address the aforementioned limitations, we propose \emph{Attentive Maps Encoder Network} (AMENet), which leverages the ability of generative models to generate diverse patterns of future trajectories and models the interactions between the target agent and the others as attentive dynamic maps.
The dynamic map manipulates the information extracted from the neighboring agents' orientation, speed and position in relation to the target agent at each time step for interaction modeling and the attention mechanism enables the model to automatically focus on the salient features extracted over different time steps.
An overview of our proposed framework is depicted in Fig.~\ref{fig:framework}. It has an encoding-decoding structure.
\emph{Encoding.} Two encoders are designed for learning representations of the observed trajectories (X-Encoder) and the future trajectories (Y-Encoder), respectively, and they have a similar structure. Taking the X-Encoder as an example (see Fig.~\ref{fig:encoder}), the encoder first extracts the motion information of the target agent (coordinate offsets over sequential time steps) and the interaction information with the other agents, respectively. Particularly, to explore the dynamic interactions, the motion information of each agent is characterized by its orientation, speed and position at each time step. Then a self-attention mechanism is utilized over all agents to extract the dynamic interaction maps. This is where the name \emph{Attentive Maps Encoder} comes from. The motion and interaction information along the observed time interval are collected by two independent Long Short-Term Memories (LSTMs) and then fused together. The output of the Y-Encoder is supplied to a variational auto-encoder to learn the latent space of the future trajectory distribution, which is assumed to be Gaussian.
\emph{Decoding.} The output of the variational auto-encoder module (obtained by re-parameterization of the encoded features during the training phase and by resampling from the learned latent space during the inference phase) is fed to the following decoder, with the output of the X-Encoder as condition, to forecast the future trajectory; this works in the same way as a conditional variational auto-encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning}.
The main contributions are summarised as follows:
\begin{itemize}
\item[1] We propose a generative framework Attentive Maps Encoder Network (AMENet) for multi-path trajectory prediction.
AMENet inserts a generative module that is trained to learn a latent space for encoding the motion and interaction information in both observation and future, and predicts multiple feasible future trajectories conditioned on the observed information.
\item[2] We design a novel module, \emph{attentive maps encoder} that learns spatio-temporal interconnections among agents based on dynamic maps using a self-attention mechanism.
\item[3] Our model is able to predict trajectories of heterogeneous road users, i.\,e.,~pedestrians, cyclists and vehicles, rather than only focusing on pedestrians, in various unseen real-world environments, which distinguishes our work from most of the previous ones that only predict pedestrian trajectories.
\end{itemize}
The efficacy of the proposed method has been validated on the recent benchmark \emph{Trajnet} \cite{sadeghiankosaraju2018trajnet}, which contains numerous datasets collected in various environments for trajectory prediction. Our method reports the new state-of-the-art performance and wins first place on the leaderboard.
Each component of the proposed model is validated via a series of ablative studies.
\section{Related Work}
Our work focuses on predicting trajectories of heterogeneous road agents by modeling the dynamic interactions between agents and training a generative model to predict multiple plausible trajectories for each target agent.
In this section we discuss the recent related works mainly in the following aspects: modeling this task as sequence prediction, modeling the interactions between agents for precise path prediction, modeling with attention mechanisms, and utilizing generative models to predict multiple plausible trajectories.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling the trajectory prediction as a sequence prediction task is the most popular approach. The 2D/3D position of a target agent is predicted step by step along the time axis.
The widely applied models include linear regression and Kalman filter \cite{harvey1990forecasting}, Gaussian processes \cite{tay2008modelling} and Markov decision processing \cite{kitani2012activity}.
However, these traditional methods largely rely on the quality of manually designed features and are unable to tackle large-scale data.
Recently, data-driven deep learning technologies, especially RNN-based models and the variants, e.\,g.,~Long Short-Term Memories (LSTMs) \cite{hochreiter1997long}, have demonstrated the powerful ability in extracting representations from massive data automatically and are used to learn the complex patterns of trajectories.
In recent years, RNN-based models keep pushing the edge of accuracy of predicting pedestrian trajectory \cite{alahi2016social,gupta2018social,sadeghian2018sophie,zhang2019sr}, as well as other types of road users \cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
In this work, we also utilize LSTMs to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of an agent is not only decided by its own will but also crucially affected by the interactions between it and the other agents. Therefore, effectively modeling the social interactions among agents is important for accurate trajectory prediction.
One of the most influential approaches for modeling interactions is the Social Force Model \cite{helbing1995social}, which models the repulsive force for collision avoidance and the attractive force for social connections. Game Theory is utilized to simulate the negotiation between different road users \cite{johora2020agent}.
Such rule-based interaction models have been incorporated into deep learning models. Social LSTM proposes an occupancy grid to locate the positions of close neighboring agents and uses a social pooling layer to encode the interaction information for trajectory prediction \cite{alahi2016social}. Many works design their specific ``occupancy'' grid for interaction modeling \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interaction between individual agent and group agents with social connections and report better performance.
Meanwhile, different pooling mechanisms are proposed for interaction modeling. For example, Social GAN \cite{gupta2018social} embeds the relative positions between the target and all other agents together with each agent's motion hidden state, and uses element-wise pooling to extract the interactions between all pairs of agents, not only the close neighbors.
Similarly, all the agents are considered in SR-LSTM \cite{zhang2019sr}. It proposes a states refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework. The motion gate and agent-wise attention are used to select the most important information from neighboring agents.
Most of the aforementioned models extract interaction information based on the relative positions of the neighboring agents in relation to the target agent.
The dynamics of interactions are not fully captured in either the spatial or the temporal domain.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Recently, different attention mechanisms \cite{bahdanau2014neural,xu2015show,vaswani2017attention} are incorporated in neural networks for learning complex spatio-temporal interconnections.
Particularly, their effectiveness has been proven in learning powerful representations from sequence information in, e.\,g.,~neural machine translation \cite{bahdanau2014neural,vaswani2017attention} and image caption generation \cite{xu2015show,anderson2018bottom,he2020image}.
Some of the recent state-of-the-art methods also have adapted attention mechanisms for sequence modeling and interaction modeling to predict trajectories.
For example, a soft attention mechanism \cite{xu2015show} is incorporated in LSTM to learn the spatio-temporal patterns from the position coordinates \cite{varshneya2017human}. Similarly, SoPhie \cite{sadeghian2018sophie} applies two separate soft attention modules, one called physical attention for learning the salient features between agent and scene and the other called social attention for modeling agent to agent interactions. In the MAP model \cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work Ind-TF \cite{giuliari2020transformer} replaces RNN with Transformer \cite{vaswani2017attention} for modeling trajectory sequences.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism \cite{vaswani2017attention} along the time axis.
The self-attention mechanism is defined as mapping a query and a set of key-value pairs to an output. First, the similarity between the query and each key is computed to obtain a weight. The weights associated with all the keys are then normalized via, e.\,g.,~a softmax function and are applied to weigh the corresponding values for obtaining the final attention.
Unlike RNN-based structures, which propagate the information along the symbol positions of the input and output sequences and therefore face increasing difficulties in propagating information across long sequences, the self-attention mechanism relates different positions of a single sequence in order to compute a representation of the entire sequence. The dependency between the input and output is not restricted by the distance between their positions.
\subsection{Generative Models}
\label{sec:rel-generative}
To date, VAE \cite{kingma2013auto} and GAN \cite{goodfellow2014generative} and their variants (e.\,g.,~Conditional VAE \cite{kingma2014semi,sohn2015learning})
are the most popular generative models in the era of deep learning.
They are both able to generate diverse outputs by sampling from noise. The essential difference is that, GAN trains a generator to generate a sample from noise and a discriminator to decide whether the generated sample is real enough. The generator and discriminator enhance mutually during training.
In contrast, VAE is trained by maximizing the lower bound of training data likelihood for learning a latent space that approximates the distribution of the training data.
Generative models have shown promising performance in different tasks, e.\,g.,~super resolution, image to image translation, image generation, as well as trajectory prediction \cite{lee2017desire,gupta2018social,cheng2020mcenet}.
Predicting one single trajectory may not be sufficient due to the uncertainty of road users' behavior.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performance of the two modules are enhanced mutually and the generator is able to generate trajectories that are as precise as the real ones. Similarly, Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
Lee~et al.~\cite{lee2017desire} propose a CVAE model to predict multiple plausible trajectories.
Cheng~et al.~\cite{cheng2020mcenet} propose a CVAE like model named MCENet to predict multiple plausible trajectories conditioned on the scene context and previous information of trajectories.
In this work, we incorporate a CVAE module to learn a latent space of possible future paths for predicting multiple plausible future trajectories conditioned on the observed past trajectories.
Our work essentially distinguishes itself from the above generative models in the following two points: (1) We insert not only the ground truth trajectory, but also the dynamic maps associated with the ground truth trajectory into the CVAE module during training, which is different from the conventional CVAE that follows a consistent input and output structure (e.\,g.,~the input and output are both trajectories of the same structure \cite{lee2017desire}).
(2) Our method does not explore information from images, i.\,e.,~visual information is not used and future trajectories are predicted based only on the map data (i.\,e.,~position coordinates).
Visual information, such as vegetation, curbsides and buildings, differs greatly from one space to another. Our model is trained on some available spaces but validated on other, unseen spaces. Over-fitted visual features may jeopardise the model's robustness and lead to poor performance in an unseen space with a totally different environment \cite{cheng2020mcenet}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 3.5in 0in 0.5in, width=1\textwidth]{fig/model_framework3.pdf}
\caption{An overview of the proposed framework. It consists of four modules: X-Encoder and Y-Encoder are used for encoding the observed and the future trajectories, respectively. They have the same structure. The Sample Generator produces diverse samples of future generations. The Decoder module is used to decode the features from the produced samples in the last step and predicts the future trajectory sequentially. The specific structure of X-Encoder/Y-Encoder is given by Fig.~\ref{fig:encoder}.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet in details (Fig.~\ref{fig:framework}) in the following structure: a brief review on \emph{CVAE} (Sec.~\ref{subsec:cvae}), \emph{Problem Definition} (Sec.~\ref{subsec:definition}), \emph{Motion Input} (Sec.~\ref{subsec:input}), \emph{Dynamic Maps} (Sec.~\ref{subsec:dynamic}), \emph{Diverse Sampling} (Sec.~\ref{subsec:sample}) and \emph{Trajectory Ranking} (Sec.~\ref{subsec:ranking}).
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
In tasks like trajectory prediction, we are interested in modeling a conditional distribution $P(Y_n|X)$, where $X$ is the previous trajectory information and $Y_n$ is one of the possible future trajectories.
In order to generate controllable, diverse samples of future trajectories based on past trajectories, a deep generative model, the conditional variational auto-encoder (CVAE), is adopted in our framework.
CVAE is an extension of the generative model VAE \cite{kingma2013auto} by introducing a condition to control the output \cite{kingma2014semi}.
Concretely, it is able to learn the stochastic latent variable $z$ that characterizes the distribution $P(Y_i|X_i)$ of $Y_i$ conditioned on the input $X_i$, where $i$ is the index of sample.
The objective function of training CVAE is formally defined as:
\begin{equation}
\label{eq:CVAE}
\log{P(Y_i|X_i)} \geq - D_{KL}(Q(z_i|Y_i, X_i)||P(z_i)) + \E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}],
\end{equation}
where $Y_i$ and $X_i$ stand for the future and past trajectories in our task, respectively, and $z_i$ is the learned latent variable of $Y_i$. The objective is to maximize the conditional log-likelihood $\log{P(Y_i|X_i)}$, which is equivalent to minimizing the reconstruction loss $\ell (\hat{Y_i}, Y_i)$ and the Kullback-Leibler divergence $D_{KL}(\cdot)$ in parallel.
In order to enable back propagation for stochastic gradient descent through $\E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}]$, a re-parameterization trick \cite{rezende2014stochastic} is applied: $z_i = \mu_i + \sigma_i \odot \epsilon_i$. Here, $z_i$ is assumed to have a Gaussian distribution $z_i\sim Q(z_i|Y_i, X_i)=\mathcal{N}(\mu_i, \sigma_i)$. $\epsilon_i$ is sampled from noise that follows a standard Gaussian distribution, and the mean $\mu_i$ and the standard deviation $\sigma_i$ of $z_i$ are produced by two side-by-side \textit{fc} layers, respectively (as shown in Fig.~\ref{fig:framework}). In this way, the problem of differentiating through the sampling process $Q(z_i|Y_i, X_i)$ is turned into differentiating the sampled result $z_i$ w.\,r.\,t.~$\mu_i$ and $\sigma_i$, so that back propagation with stochastic gradient descent can be used to optimize the networks that produce $\mu_i$ and $\sigma_i$.
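As a concrete illustration, the snippet below sketches this re-parameterization in PyTorch; the module name and layer sizes are our own placeholders, since the text does not prescribe a framework or exact dimensions.
\begin{verbatim}
import torch
import torch.nn as nn

class LatentSampler(nn.Module):
    """Re-parameterization: z = mu + sigma * eps with eps ~ N(0, I).
    `feat` stands for the fused encoding of (X, Y); sizes are
    illustrative only."""
    def __init__(self, feat_dim=128, z_dim=32):
        super().__init__()
        self.fc_mu = nn.Linear(feat_dim, z_dim)      # produces mu_i
        self.fc_logvar = nn.Linear(feat_dim, z_dim)  # produces log(sigma_i^2)

    def forward(self, feat):
        mu, logvar = self.fc_mu(feat), self.fc_logvar(feat)
        eps = torch.randn_like(mu)                   # noise ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * eps       # differentiable sample
        # closed-form KL(Q(z|Y,X) || N(0, I)) per batch element
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        return z, kl
\end{verbatim}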
\subsection{Problem Definition}
\label{subsec:definition}
The multi-path trajectory prediction problem is defined as follows: agent $i$ receives as input its observed trajectory $\mathbf{X}_i=\{X_i^1,\cdots,X_i^T\}$ and predicts its $n$-th plausible future trajectory $\hat{\mathbf{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,n}^{T'}\}$. $T$ and $T'$ denote the sequence lengths of the past and the predicted trajectory, respectively. The position of agent $i$ at time step $t$ is given by the coordinates $X_i^t=({x_i}^t, {y_i}^t)$ (3D coordinates are also possible, but in this work only 2D coordinates are considered), and likewise $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^{t'}, \hat{y}_{i,n}^{t'})$.
The objective is to predict multiple plausible future trajectories $\hat{\mathbf{Y}}_i = \{\hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}\}$ that are as close as possible to the ground truth $\mathbf{Y}_i$. This task is formally defined as $\hat{\mathbf{Y}}_{i,n} = f(\mathbf{X}_i, \text{Map}_i), ~n \in \{1,\dots,N\}$, where $N$ denotes the total number of predicted trajectories and $\text{Map}_i$ denotes the dynamic maps centered on the target agent, which map the interactions with its neighboring agents over the time steps. More details of the dynamic maps are given in Sec.~\ref{subsec:dynamic}.
\subsection{Motion Input}
\label{subsec:input}
The motion information for each agent is captured by the position coordinates at each time step.
Specifically, we use the offsets $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of the trajectory positions between two consecutive time steps as the motion information instead of the coordinates in a Cartesian space, which has been widely applied in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}. Compared to coordinates, the offsets are independent of the given space and less prone to overfitting a model to a particular space or travel direction.
The offset can be interpreted as speed over time steps that are defined with a constant duration.
As long as the original position is known, the absolute coordinates at each position can be calculated by cumulatively summing the sequence offsets.
As an augmentation technique, we randomly rotate the trajectories to prevent the model from only learning certain directions. In order to maintain the relative positions and angles between agents, the trajectories of all the agents coexisting in a given period are rotated by the same angle.
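Both the offset representation and the shared-angle rotation are simple to implement; the following numpy sketch (with our own function names) shows one plausible realization:
\begin{verbatim}
import numpy as np

def to_offsets(traj):
    """traj: (T, 2) absolute xy positions -> (T-1, 2) per-step offsets."""
    return np.diff(traj, axis=0)

def to_positions(offsets, origin):
    """Recover absolute coordinates by cumulatively summing offsets."""
    return origin + np.cumsum(offsets, axis=0)

def rotate_scene(trajs, angle):
    """Rotate all coexisting trajectories by the same angle (radians),
    preserving relative positions and angles between the agents.
    trajs: (n_agents, T, 2)."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return trajs @ R.T
\end{verbatim}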
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 2.2in 3.6in 0.5in, width=1\textwidth]{fig/encoder.pdf}
\caption{Structure of the X-Encoder. The encoder has two branches: the upper one is used to extract motion information of target agents (i.\,e.,~movement in $x$- and $y$-axis in a Cartesian space), and the lower one is used to learn the interaction information among the neighboring road users from dynamic maps over time. Each dynamic map consists of three layers that represents orientation, travel speed and relative position, which are centralized on the target road user respectively. The motion information and the interaction information are encoded by their own LSTM sequentially. The last outputs of the two LSTMs are concatenated and forwarded to a \textit{fc} layer to get the final output of the X-Encoder. The Y-Encoder has the same structure as the X-Encoder but it is used for extracting features from the future trajectories and only used in the training phase.}
\label{fig:encoder}
\end{figure}
\subsection{Dynamic Maps}
\label{subsec:dynamic}
Different from the recent works of parsing the interactions between the target and neighboring agents using an occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, we propose a novel and straightforward method---attentive dynamic maps---to learn interaction information among agents.
As demonstrated in Fig.~\ref{fig:encoder}, a dynamic map at a given time step consists of three layers that interpret the information of \emph{orientation}, \emph{speed} and \emph{position}, respectively, which are derived from the trajectories of the involved agents. Each layer is centered on the target agent's position and divided into uniform grid cells. The layers are divided into grid cells for two reasons: (1) compared with representing the information at pixel level, the grid level is computationally more efficient; (2) the size and moving speed of an agent are not fixed, and an agent occupies a local region of pixels of arbitrary form, so the spatio-temporal information differs from pixel to pixel even within the same agent. Therefore, we represent the spatio-temporal information as an average value within a grid cell. We calculate the value of each grid cell in the different layers as follows:
the neighboring agents are mapped into the corresponding grid cells by their relative position to the target agent, and likewise by their relative offset (speed) to the target agent, at each time step in the $x$- and $y$-axis directions.
Eq.~\eqref{eq:map} denotes the mapping mechanism for target user $i$ considering the orientation $O$, speed $S$ and position $P$ of all the neighboring agents $j \in \mathcal{N}(i)$ that coexist with the target agent $i$ at each time step.
\begin{equation}
\label{eq:map}
\text{Map}_i^t = \sum_{j \in \mathcal{N}(i)}(O, S, P) | (x_j^t-x_i^t, ~y_j^t-y_i^t, ~\Delta{x}_j^t-\Delta{x}_i^t, ~\Delta{y}_j^t-\Delta{y}_i^t).
\end{equation}
The \emph{orientation} layer $O$ represents the heading direction of the neighboring agents. The orientation value is in degrees $[0, 360]$ and is then mapped to $[0, 1]$. The value of each grid cell is the mean orientation of all the agents within the cell.
The \emph{speed} layer $S$ represents all the neighboring agents' travel speed. Locally, the speed in each grid is the average speed of all the agents within a grid. Globally, across all the grids, the value of speed is normalized by the Min-Max normalization scheme into $[0, 1]$.
The \emph{position} layer $P$ stores the positions of all the neighboring agents in the grid cells calculated by Eq.~\eqref{eq:map}. The value of the corresponding grid cell is the number of neighboring agents within the cell, normalized by the total number of neighboring agents at that time step, which can be interpreted as the cell's occupancy density.
Each time step has its own dynamic map, and therefore the spatio-temporal interaction information among agents is interpreted dynamically over time.
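A simplified numpy sketch of building one time step's map is given below. For readability it places neighbors by their relative position only, whereas Eq.~\eqref{eq:map} additionally uses the relative offsets; all names and the normalization details are our own assumptions.
\begin{verbatim}
import numpy as np

def dynamic_map(target_pos, nb_pos, nb_off, radius=16.0, cell=1.0):
    """Orientation/speed/position layers centered on the target agent.
    target_pos: (2,); nb_pos, nb_off: (n_neighbors, 2).
    Simplified: neighbors are placed by relative position only."""
    n = int(2 * radius / cell)
    O, S, P = np.zeros((n, n)), np.zeros((n, n)), np.zeros((n, n))
    count = np.zeros((n, n))
    for (dx, dy), off in zip(nb_pos - target_pos, nb_off):
        if abs(dx) >= radius or abs(dy) >= radius:
            continue                         # outside region of interest
        gx, gy = int((dx + radius) / cell), int((dy + radius) / cell)
        heading = (np.degrees(np.arctan2(off[1], off[0])) % 360) / 360.0
        O[gx, gy] += heading                 # averaged below
        S[gx, gy] += np.hypot(off[0], off[1])
        P[gx, gy] += 1.0                     # occupancy count
        count[gx, gy] += 1.0
    nz = count > 0
    O[nz] /= count[nz]                       # mean orientation per cell
    S[nz] /= count[nz]                       # mean speed per cell
    if S.max() > S.min():                    # Min-Max normalize speed
        S = (S - S.min()) / (S.max() - S.min())
    if len(nb_pos) > 0:                      # occupancy density
        P /= len(nb_pos)
    return np.stack([O, S, P])               # (3, n, n)
\end{verbatim}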
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.7\textwidth]{fig/dynamic_maps_nexus_0.pdf}
\caption{The maps information with accumulated time steps for the dataset \textit{nexus-0}.}
\label{fig:dynamic_maps}
\end{figure}
To more intuitively show the dynamic maps information, we gather all the agents over all the time steps and visualize them in Fig.~\ref{fig:dynamic_maps} as an example showcased by the dataset \textit{nexus-0} (see more information on the benchmark datasets in Sec~\ref{subsec:benchmark}).
Each grid cell is \SI{1}{meter} in both width and height, and the region of interest extends up to \SI{16}{meters} in each direction, centered on the target agent, in order to include not only close but also distant neighboring agents.
The visualization demonstrates certain motion patterns of the agents, including the distribution of orientation, speed and position over the grids of the maps. For example, all the agents move in a certain direction with similar speed on a particular area of the maps, and some areas are much denser than the others.
\subsubsection{Attentive Maps Encoder}
\label{subsubsec:AMENet}
As discussed above, each time step has a dynamic map which summarizes the orientation, speed and position information of all the neighboring agents. To capture the spatio-temporal interconnections from the dynamic maps for the following modules, we propose the \emph{Attentive Maps Encoder} module.
The X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and dynamic maps information for interaction (lower branch).
The upper branch takes as motion input the offsets $\sum_t^T({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of each target agent over the observed time steps. The motion information is first passed to a 1D convolutional layer (Conv1D) with one-step stride along the time axis to learn motion features one time step after another. Then it is passed to a fully connected (\textit{fc}) layer. The output of the \textit{fc} layer is passed to an LSTM module for encoding the temporal features along the trajectory sequence of the target agent into a hidden state, which contains all the motion information.
The lower branch takes the dynamic maps $\sum_t^T\text{Map}_i^t$ as input.
The interaction information at each time step is passed through a 2D convolutional layer (Conv2D) with ReLU activation and a Max Pooling layer (MaxPool) to learn the spatial features among all the agents. The output of MaxPool at each time step is flattened and concatenated along the time axis to form a time-distributed feature vector. Then, the feature vector is fed forward to a self-attention module to learn the interaction information with an attention mechanism. Here, we adopt the multi-head attention method from the Transformer, which linearly projects multiple self-attention operations in parallel and concatenates their outputs~\cite{vaswani2017attention}.
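A minimal PyTorch sketch of this per-time-step spatial encoding is given below; the channel count and kernel size are our own assumptions rather than reported hyper-parameters.
\begin{verbatim}
import torch
import torch.nn as nn

class MapStepEncoder(nn.Module):
    """Encode dynamic maps (3 layers: orientation, speed, position)
    into time-distributed flat feature vectors. Sizes illustrative."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, maps):           # maps: (B, T, 3, grid, grid)
        B, T = maps.shape[:2]
        x = maps.flatten(0, 1)         # merge batch and time axes
        feat = self.net(x).flatten(1)  # flatten spatial features
        return feat.view(B, T, -1)     # (B, T, D), fed to self-attention
\end{verbatim}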
The attention function is described as mapping a query and a set of key-value pairs to an output. The query ($Q$), keys ($K$) and values ($V$) are transformed from the spatial features, which are encoded in the above step, by linear transformations:
\begin{align*}
Q =& \pi(\text{Map})W_Q, ~W_Q \in \mathbb{R}^{D\times D_q},\\
K =& \pi(\text{Map})W_K, ~W_K \in \mathbb{R}^{D\times D_k},\\
V =& \pi(\text{Map})W_V, ~W_V \in \mathbb{R}^{D\times D_v},
\end{align*}
where $W_Q, W_K$ and $W_V$ are the trainable parameters and $\pi(\cdot)$ indicates the encoding function of the dynamic maps. $D_q, D_k$ and $D_v$ are the dimensions of the query, key and value vectors (they are the same in the implementation).
Then the self-attended features are calculated as:
\begin{equation}
\label{eq:attention}
\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{D_k}}\right)V.
\end{equation}
This operation is also called \emph{scaled dot-product attention} \cite{vaswani2017attention}.
To improve the performance of the attention layer, \emph{multi-head attention} is applied:
\begin{align}
\label{eq:multihead}
\begin{split}
\text{MultiHead}(Q, K, V) &= \text{Concat}(\text{head}_1,...,\text{head}_h)W_O \\
\text{head}_i &= \text{Attention}(QW_{Qi}, KW_{Ki}, VW_{Vi})
\end{split}
\end{align}
where $W_{Qi}\in \mathbb{R}^{D\times D_{qi}}$ indicates the linear transformation parameters for the query in the $i$-th self-attention head and $D_{qi} = \frac{D_{q}}{\#head}$. The same holds for $W_{Ki}$ and $W_{Vi}$. Note that $\#head$ is the total number of attention heads and must divide $D_{q}$ evenly. The outputs of all heads are concatenated and passed to a linear transformation with parameter $W_O$.
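In code, Eqs.~\eqref{eq:attention} and \eqref{eq:multihead} reduce to a few tensor operations. The sketch below implements the scaled dot-product attention of Eq.~\eqref{eq:attention} directly; in practice an off-the-shelf layer such as \texttt{torch.nn.MultiheadAttention} can equally be used for the multi-head variant.
\begin{verbatim}
import torch

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (B, T, D). Returns the attended values, (B, T, D)."""
    d_k = K.shape[-1]
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # (B, T, T)
    weights = torch.softmax(scores, dim=-1)        # normalize over keys
    return weights @ V
\end{verbatim}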
The output of the multi-head attention is passed to an LSTM which is used to encode the dynamic interconnection in time sequence.
Both the hidden states (the last output) from the motion LSTM and the interaction LSTM are concatenated and passed to a \textit{fc} layer for feature fusion, as the complete output of the X-Encoder, which is denoted as $\Phi_X$.
The Y-Encoder has the same structure as the X-Encoder and is used to encode both the target agent's motion and interaction information from the ground truth during training. The output of the Y-Encoder is denoted as $\Phi_Y$. The dynamic maps are also leveraged in the Y-Encoder; however, they are not reconstructed by the Decoder (only the future trajectories are reconstructed). This extended structure distinguishes our model from the conventional CVAE structure \cite{kingma2013auto,kingma2014semi,sohn2015learning} and from the work of \cite{lee2017desire}, in which the input and output maintain the same form.
\subsection{Diverse Sample Generation}
\label{subsec:sample}
In the training phase, $\Phi_X$ and $\Phi_Y$ are concatenated and forwarded to two successive \textit{fc} layers followed by ReLU activation, and then passed to two parallel \textit{fc} layers that produce the mean and standard deviation of the distribution, which are used to re-parameterize $z$ as discussed in Sec.~\ref{subsec:cvae}.
Then, $\Phi_X$ is concatenated with $z$ as condition and fed to the following LSTM-based decoder to reconstruct $\mathbf{Y}$ sequentially.
The MSE loss ${\ell}_2 (\mathbf{\hat{Y}}, \mathbf{Y})$ (reconstruction loss) and the $\text{KL}(Q(z|\mathbf{Y}, \mathbf{X})||P(z))$ loss are used to train our model.
The MSE loss forces the reconstructed results as close as possible to the ground truth and the KL-divergence loss will force the set of latent variables $z$ to be a Gaussian distribution.
During inference at test time, the Y-Encoder is removed and the X-Encoder works in the same way as in the training phase to extract information from observed trajectories. To generate a future prediction sample, a latent variable $z$ is sampled from $\mathcal{N}(\mathbf{0}, ~I)$ and concatenated with $\Phi_X$ (as condition) as the input of the decoder.
To generate diverse samples, this step is repeated $N$ times, yielding $N$ future predictions conditioned on $\Phi_X$.
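A pseudocode-level sketch of this inference loop is shown below; \texttt{x\_encoder} and \texttt{decoder} stand for the trained modules, and all names are our own.
\begin{verbatim}
import torch

def predict_multi_path(x_encoder, decoder, obs_motion, obs_maps,
                       n_samples=10, z_dim=32):
    """Generate N plausible future trajectories for one observation."""
    phi_x = x_encoder(obs_motion, obs_maps)      # condition, (B, D)
    predictions = []
    for _ in range(n_samples):
        z = torch.randn(phi_x.shape[0], z_dim)   # z ~ N(0, I)
        y_hat = decoder(torch.cat([phi_x, z], dim=-1))
        predictions.append(y_hat)                # each (B, T', 2)
    return predictions
\end{verbatim}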
To summarize, the overall pipeline of Attentive Maps Encoder Network (AMENet) consists of four modules, namely, X-Encoder, Y-Encoder, Z-Space and Decoder.
Each of the modules uses different types of neural networks to process the motion information and dynamic maps information for multi-path trajectory prediction. Fig.~\ref{fig:framework} depicts the pipeline of the framework.
\subsection{Trajectory Ranking}
\label{subsec:ranking}
A bivariate Gaussian distribution is used to rank the multiple predicted trajectories $\hat{Y}^1,\cdots,\hat{Y}^N$ of each agent. At each time step $t'\in T'$, the predicted positions $({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})$, $n{\in}N$, of agent $i$ are used to fit a bivariate Gaussian distribution $\mathcal{N}({\mu}_{xy},\,\sigma^{2}_{xy}, \,\rho)^{t'}$. The predicted trajectories are then sorted by their joint probability density over the time axis using Eqs.~\eqref{eq:pdf} and \eqref{eq:sort}, where $\widehat{Y}^\ast$ denotes the most-likely prediction out of the $N$ predictions.
\begin{align}
\label{eq:pdf}
P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'}) \approx p[({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})|\mathcal{N}({\mu}_{xy},\sigma^{2}_{xy},\rho)^{t'}]\\
\label{eq:sort}
\widehat{Y}^\ast = \underset{n}{\text{arg\,max}}\sum_{t'=1}^{T'}{\log}P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})
\end{align}
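The ranking step can be sketched with \texttt{scipy}: at each time step a bivariate Gaussian is fitted to the $N$ predicted positions, and the trajectory with the largest summed log-density is returned, mirroring Eqs.~\eqref{eq:pdf} and \eqref{eq:sort} (the small covariance regularization is our own addition for numerical stability):
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def most_likely(preds):
    """preds: (N, T', 2) predicted trajectories of one agent."""
    N, T, _ = preds.shape
    log_p = np.zeros(N)
    for t in range(T):
        pts = preds[:, t, :]               # N predicted positions at t
        mu = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(2)
        log_p += multivariate_normal.logpdf(pts, mean=mu, cov=cov)
    return preds[np.argmax(log_p)]         # the most-likely prediction
\end{verbatim}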
\section{Experiments}
\label{sec:experiments}
In this section, we will introduce the benchmark which is used to evaluate our method, the evaluation metrics and the comparison of results from our method with the ones from the recent state-of-the-art methods. To further justify how each proposed module in our framework impacts the overall performance, we design a series of ablation studies and discuss the results in detail.
\subsection{Trajnet Benchmark Challenge Datasets}
\label{subsec:benchmark}
We verify the performance of the proposed method on the most challenging benchmark, the Trajnet datasets \cite{sadeghiankosaraju2018trajnet}. It is the most popular large-scale trajectory-based activity benchmark in this domain and provides a uniform evaluation system for fair comparison among the submitted methods.
Trajnet covers a wide range of datasets and includes various types of road users (pedestrians, bikers, skateboarders, cars, buses, and golf cars) that navigate in a real world outdoor mixed traffic environment.
The data were collected from 38 scenes with ground truth for training, and from another 20 scenes without ground truth for testing (i.\,e.,~the open challenge competition). The most popular pedestrian scenes, ETH \cite{pellegrini2009you} and UCY \cite{lerner2007crowds}, are also included in the benchmark. Each scene presents various traffic densities in different space layouts, which makes the prediction task challenging.
It requires a model to generalize, in order to adapt to the various complex scenes.
Trajectories are recorded as the $xy$ coordinates in meters or pixels projected on a Cartesian space. Each trajectory provides 8 steps for observation and the following 12 steps for prediction. The duration between two successive steps is 0.4 seconds.
However, the pixel coordinates are not in the same scale across the whole benchmark. Without uniforming the pixels into the same scale, it is extremely difficult to train a general model for the whole dataset. Hence, we follow all the previous works \cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,gupta2018social,giuliari2020transformer} that use the coordinates in meters.
In order to train and evaluate the proposed method, as well as for the ablative studies, 6 different scenes are selected from the 38 training scenes as the offline test set.
Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.
The best trained model is selected based on the evaluation performance on the offline test set and is then used for the online evaluation.
Fig.~\ref{fig:trajectories} shows the visualization of the trajectories in each scene.
\begin{figure}[bpht!]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_bookstore_3.pdf}
\label{trajectories_bookstore_3}
\caption{\small bookstore3}
\end{subfigure}
\begin{subfigure}{0.54\textwidth}
\includegraphics[trim=0cm 2cm 0cm -1.47cm, width=1\textwidth]{fig/trajectories_coupa_3.pdf}
\label{trajectories_coupa_3}
\caption{\small coupa3}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
\includegraphics[trim=0cm 2cm 0cm -0.47cm, width=1\textwidth]{fig/trajectories_deathCircle_0.pdf}
\label{trajectories_deathCircle_0}
\caption{\small deathCircle0}
\end{subfigure}
\begin{subfigure}{0.28\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_gates_1.pdf}
\label{trajectories_gates_1}
\caption{\small gates1}
\end{subfigure}
\begin{subfigure}{0.52\textwidth}
\includegraphics[trim=0cm 2cm 0cm -1.64cm, width=1\textwidth]{fig/trajectories_hyang_6.pdf}
\label{trajectories_hyang_6}
\caption{\small hyang6}
\end{subfigure}
\begin{subfigure}{0.27\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_nexus_0.pdf}
\label{trajectories_nexus_0}
\caption{\small nexus0}
\end{subfigure}
\caption{Visualization of each scene of the offline test set. Sub-figures are not resized to the same scale, in order to demonstrate the very different space sizes and layouts.}
\label{fig:trajectories}
\end{figure}
\subsection{Evaluation Metrics}
The mean average displacement error (MAD) and the final displacement error (FDE) are the two most commonly applied metrics to measure the performance in terms of trajectory prediction~\cite{alahi2016social,gupta2018social,sadeghian2018sophie}.
\begin{itemize}
\item MAD is the aligned L2 distance from $Y$ (ground truth) to its prediction $\hat{Y}$ averaged over all time steps. We report the mean value over all trajectories.
\item FDE is the L2 distance of the last position from $Y$ to the corresponding $\hat{Y}$. It measures a model's ability to predict the destination and is more challenging, as errors accumulate over time.
\end{itemize}
We evaluate both the most-likely prediction and the best prediction $\text{top}@10$ for multi-path trajectory prediction.
The most-likely prediction is selected by the trajectory-ranking mechanism, see Sec~\ref{subsec:ranking}.
The $\text{top}@10$ prediction is, among the 10 predicted trajectories with the highest confidence, the one with the smallest MAD and FDE compared with the ground truth. When the ground truth is not available (for the online test), only the most-likely prediction is selected. The task then reduces to single-trajectory prediction, as in most of the previous works \cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,giuliari2020transformer}.
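For concreteness, both metrics and the $\text{top}@10$ selection can be written as the short sketch below; the variable names are illustrative and the trajectories are assumed to be arrays of shape $(T', 2)$ in meters.
\begin{verbatim}
import numpy as np

def mad(y_true, y_pred):
    # L2 error per time step, averaged over all predicted steps
    return np.linalg.norm(y_true - y_pred, axis=-1).mean()

def fde(y_true, y_pred):
    # L2 error at the last predicted position only
    return np.linalg.norm(y_true[-1] - y_pred[-1])

def best_of_n(y_true, y_preds):
    # top@N selection: the candidate closest to the ground truth by MAD
    errors = [mad(y_true, p) for p in y_preds]
    return y_preds[int(np.argmin(errors))]
\end{verbatim}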
\subsection{Quantitative Results and Comparison}
\label{subsec:results}
We compare the performance of our method with the most influential previous works and the recent state-of-the-art works published on the Trajnet challenge (up to 14/06/2020) to ensure a fair comparison.
The compared works include the following models.
\begin{itemize}
\item\emph{Social Force}~\cite{helbing1995social} is a rule-based model with the repulsive force for collision avoidance and the attractive force for social connections;
\item\emph{Social LSTM}~\cite{alahi2016social} proposes a social pooling with a rectangular occupancy grid for close neighboring agents which is widely adopted thereafter in this domain~\cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent};
\item\emph{SR-LSTM}~\cite{zhang2019sr} uses a states refinement module to extract the social effects between the target agent and its neighboring agents;
\item\emph{RED}~\cite{becker2018evaluation} uses an RNN-based encoder with a Multilayer Perceptron (MLP) for trajectory prediction;
\item\emph{MX-LSTM}~\cite{hasan2018mx} exploits the head pose information of agents to help analyze their moving intention;
\item\emph{Social GAN}~\cite{gupta2018social} proposes to utilize the generative model GAN for multi-path trajectory prediction, which is one of the works closest to ours (the other one is \emph{DESIRE} \cite{lee2017desire}; however, neither online test results nor code were reported, hence we do not compare with \emph{DESIRE} for fairness);
\item\emph{Ind-TF}~\cite{giuliari2020transformer} only utilizes the Transformer network \cite{vaswani2017attention} for sequence modeling, without considering social interactions between agents.
\end{itemize}
Table~\ref{tb:results} lists the performances of the above methods and ours on the Trajnet test set measured by MAD, FDE and the overall average $(\text{MAD} + \text{FDE})/2$. The data are originally reported on the Trajnet challenge leader board\footnote{http://trajnet.stanford.edu/result.php?cid=1}. We can see that our method (AMENet) outperforms the other methods significantly and ranks first on all metrics.
Even compared with the most recent Transformer-based model Ind-TF \citep{giuliari2020transformer} (under review), our method performs better. Particularly, our method reduces the FDE error from 1.197 to 1.183 meters.
Note that our model predicts multiple trajectories conditioned on the observed trajectories, with the stochastic variable sampled from a Gaussian distribution repeatedly (see Sec.~\ref{subsec:sample}). We select the most-likely prediction using the proposed ranking method as discussed in Sec.~\ref{subsec:ranking}. The outstanding performance of our method also demonstrates that our ranking method is effective.
\begin{table}[t!]
\centering
\caption{Comparison between our method and the state-of-the-art works. Smaller values are better. Best values are highlighted in boldface.}
\begin{tabular}{llll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & MAD [m]$\downarrow$ \\
\hline
Social LSTM~\cite{alahi2016social} & 1.3865 & 3.098 & 0.675 \\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 \\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 \\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 \\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 \\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 \\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} \\
Ours (AMENet)\tablefootnote{Our method is named as \textit{ikg\_tnt} on the leaderboard.} & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} \\
\hline
\end{tabular}
\label{tb:results}
\end{table}
\subsection{Results for Multi-Path Prediction}
\label{subsec:multipath-selection}
Multi-path trajectory prediction is one of the main contributions of this work and essentially distinguishes it from most of the existing works.
Here, we discuss the model's performance for multi-path prediction based on the learned latent space.
Instead of generating a single prediction, AMENet generates multiple feasible trajectories by sampling the latent variable $z$ multiple times (see Sec~\ref{subsec:cvae}).
During training, the motion information and interaction information from the observation and the ground truth are encoded into the so-called Z-Space (see Fig.~\ref{fig:framework}). The KL-divergence loss forces the distribution of $z$ to approximate a standard Gaussian distribution.
Fig.~\ref{fig:z_space} shows the visualization of the Z-Space in two dimensions, with $\mu$ visualized on the left and $\log\sigma$ on the right. From the figure we can see that the training phase successfully re-parameterizes the variable $z$ into a Gaussian distribution that captures the stochastic properties of agents' behaviors. When the Y-Encoder is not available at inference time, the well-trained Z-Space, in turn, enables us to randomly sample a latent variable $z$ from the Gaussian distribution multiple times for generating multiple feasible future trajectories conditioned on the observation.
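At inference time, this procedure reduces to repeatedly drawing $z$ and decoding. A minimal sketch is given below; \texttt{decode} and \texttt{x\_feature} are placeholders for the trained decoder and the X-Encoder output, and the latent size of 2 only mirrors the visualization in Fig.~\ref{fig:z_space}.
\begin{verbatim}
import numpy as np

def sample_multi_path(decode, x_feature, n_paths=10, z_dim=2):
    # Draw z ~ N(0, I) repeatedly; each sample is decoded into one
    # feasible future trajectory conditioned on the observation.
    return [decode(x_feature, np.random.standard_normal(z_dim))
            for _ in range(n_paths)]
\end{verbatim}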
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=.6\textwidth]{fig/z_space.pdf}
\caption{Z-Space of two dimensions with $\mu$ visualized on the left and $\log\sigma$ on the right. It is trained to follow the $\mathcal{N}(0, 1)$ distribution. The variance is visualized in logarithm space and is very close to zero.}
\label{fig:z_space}
\end{figure}
Table~\ref{tb:multipath} shows the quantitative results for multi-path trajectory prediction. Predicted trajectories are selected as $\text{top}@10$ with the prior knowledge of the corresponding ground truth, or by the most-likely ranking if the ground truth is not available. Compared to the most-likely prediction, the $\text{top}@10$ prediction yields similar but better performance. This indicates that: 1) the generated multiple trajectories increase the chance to narrow down the errors from the prediction to the ground truth, and 2) the predicted trajectories are feasible (if not, bad predictions would deteriorate the overall performance and lead to worse results than the most-likely prediction).
Fig.~\ref{fig:multi-path} showcases some qualitative examples of multi-path trajectory prediction from our model. We can see that in roundabouts, the interactions between different agents are full of uncertainties and each agent has more possibilities for choosing its future path. We also notice that the predicted trajectories diverge more widely at further time steps. This is reasonable, because the uncertainty of an agent's intention grows with the prediction horizon. It also shows that the ability to predict multiple plausible trajectories is important for analyzing the movements of road users under this increasing uncertainty. A single prediction provides limited information in this case and is likely to lead to false conclusions if the prediction is imprecise in the early steps.
\begin{table}[t!]
\centering
\small
\caption{Evaluation of multi-path trajectory prediction using AMENet on the offline test set of Trajnet. Predicted trajectories are selected by $\text{top}@10$ and most-likely ranking, and errors are measured by MAD/FDE.}
\begin{tabular}{lll}
\\ \hline
Dataset & Top@10 & Most-likely \\ \hline
bookstore3 & 0.477/0.961 & 0.486/0.979 \\
coupa3 & 0.221/0.432 & 0.226/0.442 \\
deathCircle0 & 0.650/1.280 & 0.659/1.297 \\
gates1 & 0.784/1.663 & 0.797/1.692 \\
hyang6 & 0.534/1.076 & 0.542/1.094 \\
nexus0 & 0.642/1.073 & 0.559/1.109 \\
Average & 0.535/1.081 & 0.545/1.102 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.514\textwidth]{multi_preds/deathCircle_0240.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.476\textwidth]{multi_preds/gates_1001.pdf}
\caption{Multi-path predictions from AMENet in \textit{deathCircle0} (left) and \textit{gates1} (right).}
\label{fig:multi-path}
\end{figure}
\subsection{Ablation Studies}
\label{sec:ablativemodels}
In order to analyze the impact of each proposed module in our framework, i.\,e.,~dynamic maps, self-attention, and the extended structure of CVAE, three ablative models are evaluated against the full model.
\begin{itemize}
\item AMENet, the full model of our framework.
\item AOENet, substitutes the dynamic maps with the occupancy grid \citep{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both the X-Encoder and the Y-Encoder. This setting is used to validate the contribution of the dynamic maps.
\item MENet, removes the self-attention module from the dynamic maps branch. This setting is used to validate the effectiveness of the self-attention module that learns the spatial interactions among agents along the time axis.
\item ACVAE, only uses dynamic maps in the X-Encoder. It is equivalent to CVAE~\citep{kingma2013auto,kingma2014semi,sohn2015learning} with self-attention. This setting is used to validate the contribution of the extended structure for processing the dynamic maps information in the Y-Encoder.
\end{itemize}
Table~\ref{tb:resultsablativemodels} shows the quantitative results from the above ablative models.
Errors are measured by MAD/FDE on the most-likely prediction.
By comparing AOENet with AMENet we can see that, when we replace the dynamic maps with the occupancy grid, both MAD and FDE increase by a remarkable margin across all the datasets.
This demonstrates that our proposed dynamic maps are more helpful for exploring the interaction information among agents than the occupancy grid.
We can also see that if the self-attention module is removed (MENet), the errors increase by a remarkable margin across all the datasets.
This phenomenon indicates that the self-attention module is effective in learning the interactions among agents from the dynamic maps.
The comparison between ACVAE and AMENet shows that, when we remove the extended structure in the Y-Encoder for the dynamic maps, the performance, especially in FDE, degrades significantly across all the datasets. The extended structure enables the model to process the interaction information also during prediction. It improves the model's performance, especially for predicting more accurate destinations. This improvement has also been confirmed by the benchmark challenge (see Table~\ref{tb:results}). One interesting observation from the comparison between ACVAE and AOENet/MENet is that ACVAE performs much better than AOENet and MENet measured by MAD and FDE. This observation further proves that, even without the extended structure in the Y-Encoder, the dynamic maps with self-attention are very beneficial for interpreting the interactions between a target agent and its neighboring agents. Their robustness has been demonstrated by the ablative models across various datasets.
\begin{table}[hbpt!]
\setlength{\tabcolsep}{3pt}
\centering
\small
\caption{Evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE on the most-likely prediction. Best values are highlighted in bold face.}
\begin{tabular}{lllll}
\\ \hline
Dataset & AMENet & AOENet & MENet & ACVAE \\ \hline
bookstore3 & \textbf{0.486}/\textbf{0.979} & 0.574/1.144 & 0.576/1.139 & 0.509/1.030 \\
coupa3 & \textbf{0.226}/\textbf{0.442} & 0.260/0.509 & 0.294/0.572 & 0.237/0.464 \\
deathCircle0 & \textbf{0.659}/\textbf{1.297} & 0.726/1.437 & 0.725/1.419 & 0.698/1.378 \\
gates1 & \textbf{0.797}/\textbf{1.692} & 0.878/1.819 & 0.941/1.928 & 0.861/1.823 \\
hyang6 & \textbf{0.542}/\textbf{1.094} & 0.619/1.244 & 0.657/1.292 & 0.566/1.140 \\
nexus0 & \textbf{0.559}/\textbf{1.109} & 0.752/1.489 & 0.705/1.140 & 0.595/1.181 \\
Average & \textbf{0.545}/\textbf{1.102} & 0.635/1.274 & 0.650/1.283 & 0.578/1.169 \\ \hline
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
Fig.~\ref{fig:abl_qualitative_results} showcases some examples of the qualitative results of the full AMENet in comparison to the ablative models in different scenes.
In general, all the models are able to predict trajectories in different scenes, e.\,g.,~intersections and roundabouts, of various traffic densities and motion patterns, e.\,g.,~standing still or moving fast. Given a short observation of trajectories, i.\,e.,~8 time steps, all the models are able to capture the general speed and heading direction of agents located in different areas of the space.
AMENet predicts the most accurate trajectories which are very close or even completely overlap with the corresponding ground truth trajectories.
Compared with the ablative models, AMENet predicts more accurate destinations (the last position of the predicted trajectories), which is in line with the quantitative results shown in Table~\ref{tb:resultsablativemodels}.
One very clear example in \textit{hyang6} (Fig.~\ref{hyang_6209}) shows that, when the fast-moving agent changes its motion, AOENet and MENet have limited ability to predict its travel speed and ACVAE has limited ability to predict its destination. On the other hand, the prediction from AMENet is very close to the ground truth.
Nevertheless, our models have limited performance in predicting abnormal trajectories, such as suddenly turning around or changing speed drastically. Such scenarios can be found in the lower right corner of \textit{gates1} (Fig.~\ref{gates_1001}). Sudden maneuvers of agents are very difficult to forecast, even for human observers.
\begin{figure}[t!]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{scenarios/bookstore_3290.pdf}
\caption{\small bookstore3}
\label{bookstore_3290}
\end{subfigure}
\begin{subfigure}{0.54\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -1.47cm, width=1\textwidth]{scenarios/coupa_3327.pdf}
\caption{\small coupa3}
\label{coupa_3327}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -0.47cm, width=1\textwidth]{scenarios/deathCircle_0000.pdf}
\caption{\small deathCircle0}
\label{deathCircle_0000}
\end{subfigure}
\begin{subfigure}{0.28\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{scenarios/gates_1001.pdf}
\caption{\small gates1}
\label{gates_1001}
\end{subfigure}
\begin{subfigure}{0.52\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -1.64cm, width=1\textwidth]{scenarios/hyang_6209.pdf}
\caption{\small hyang6}
\label{hyang_6209}
\end{subfigure}
\begin{subfigure}{0.27\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{scenarios/nexus_0038.pdf}
\caption{\small nexus0}
\label{nexus_0038}
\end{subfigure}
\caption{Trajectories predicted by AMENet (AME), AOENet (AOE), MENet (ME), ACVAE (CVAE) and the corresponding ground truth (GT) in different scenes. Sub-figures are not scaled to the same size, in order to demonstrate the very different space sizes and layouts.}
\label{fig:abl_qualitative_results}
\end{figure}
\begin{comment}
\subsection{Extensive Studies on Benchmark InD}
\label{subsec:InD}
To further investigate the performance of our methods, we carry out extensive experiments on a newly published large-scale benchmark InD\footnote{\url{https://www.ind-dataset.com/}}.
It consists of 33 datasets and was collected using drones on four very busy intersections (as shown in Fig.~\ref{fig:qualitativeresultsInD}) in Germany in 2019 by Bock et al. \cite{inDdataset}.
Different from Trajnet that most of the environments (i.\,e.,~shared spaces \cite{reid2009dft,robicquet2016learning}) are pedestrian friendly, the interactions in InD are more designed for vehicles. This makes the prediction task more changing due to the very different travel speed between pedestrians and vehicles, and their direct interactions.
We follow the same processing format as the Trajnet benchmark to down sample the time steps of InD from video frame rate \SI{25}{fps} to \SI{2.5}{fps}, or 0.4 seconds for each time step. We obtain the same sequence length (8 time steps) of each trajectory for observation and 12 time steps for prediction. One third of all the datasets from each intersection are selected for testing the performance of AMENet and the remaining datasets are used for training.
Table~\ref{tb:resultsInD} lists the performance of our method measured by MAD and FDE for each intersection and the overall average errors. We can see that our method is still able to generate feasible trajectories and reports good results (average errors (0.731/1.587) which are lightly inferior than the ones tested on Trajnet (0.545/1.102).
\begin{table}[t]
\centering
\small
\caption{Quantitative Results of AMENet on InD measured by MAD/FDE, and average performance across the whole datasets.}
\begin{tabular}{lll}
\\ \hline
InD & Top@10 & Most-likely \\ \hline
Intersection-A & 0.952/1.938 & 1.070/2.216 \\
Intersection-B & 0.585/1.289 & 0.653/1.458 \\
Intersection-C & 0.737/1.636 & 0.827/1.868 \\
Intersection-D & 0.279/0.588 & 0.374/0.804 \\
Average & 0.638/1.363 & 0.731/1.587 \\ \hline
\end{tabular}
\label{tb:resultsInD}
\end{table}
\begin{figure} [bpht!]
\centering
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/06_Trajectories020_12.pdf}
\caption{\small{Intersection-A}}
\label{subfig:Intersection-A}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/14_Trajectories030_12.pdf}
\caption{\small{Intersection-B}}
\label{subfig:Intersection-B}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/27_Trajectories011_12.pdf}
\caption{\small{Intersection-C}}
\label{subfig:Intersection-C}
\end{subfigure}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/32_Trajectories019_12.pdf}
\caption{\small{Intersection-D}}
\label{subfig:Intersection-D}
\end{subfigure}\hfill
\caption{\small{Benchmark InD: examples for predicting trajectories of mixed traffic in different intersections.}}
\label{fig:qualitativeresultsInD}
\end{figure}
\end{comment}
\begin{comment}
\subsection{Studies on Long-Term Trajectory Prediction}
\label{subsec:longterm}
In this section, we investigate the model's performance on predicting long-term trajectories in real-world mixed traffic situations in different intersections.
Since the Trajnet benchmark (see Sec~\ref{subsec:benchmark}) only provides trajectories of 8 time steps for observation and 12 time steps for prediction, we instead use the newly published large-scale open-source datasets InD\footnote{\url{https://www.ind-dataset.com/}} for this task. InD was collected using drones on four different intersections in Germany for mixed traffic in 2019 by Bock et al. \cite{inDdataset}. In total, there are 33 datasets from different intersections.
We follow the same processing format as the Trajnet benchmark to down sample the time steps of InD from video frame rate \SI{25}{fps} to \SI{2.5}{fps}, or 0.4 seconds for each time step. We obtain the same sequence length (8 time steps) of each trajectory for observation and up to 32 time steps for prediction. One third of all the datasets from each intersection are selected for testing the performance of AMENet on long-term trajectory prediction and the remaining datasets are used for training.
Fig.~\ref{fig:AMENet_MAD} shows the trend of errors measured by MAD, FDE and the number of collisions in relation to time steps. The performance of AMENet at time step 12 is comparable with the performance on Trajnet datasets for both $\text{top}@10$ and most-likely prediction. On the other hand, the performance measured by MAD and FDE increase with the increment of time steps. Behaviors of road users are more unpredictable and predicting long-term trajectories are more challenging than short term trajectories only based on a short observation. One interesting observation is the performance measured by the number of collisions or invalid predicted trajectories. Overall, the number of collisions is relatively small and increases with the increment of time steps for the $\text{top}@10$ prediction. However, the most-likely prediction leads to fewer collisions and demonstrates no consistent ascending trend regarding the time steps. One possible explanation could be that the $\text{top}@10$ prediction is selected by comparing with the corresponding ground truth. There is no consideration of collisions. On the other hand, the most-likely prediction selects the average prediction out of multiple predictions for each trajectory using a bivariate Gaussian distribution (see Sec~\ref{sec:multipath-selection}). It yields better results regarding safety from the majority voting mechanism than merely selecting the best prediction based on the distance to the ground truth.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_MAD.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_FDE.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_collision.pdf}
\caption{AMENet tested on InD for different predicted sequence lengths measured by MAD, FDE and number of collisions, respectively.}
\label{fig:AMENet_MAD}
\end{figure}
Fig.~\ref{fig:qualitativeresults} shows the qualitative performance of AMENet for predicting long-term trajectories
in the big intersection with weakened traffic regulations in InD. From Scenario-A in the left column we can see that AMENet generates accurate predictions for 12 and 16 time steps (visualized in first two rows) for two pedestrians. When they encounter each other at 20 time steps (third row), the model correctly predicts that the left pedestrian yields. But the predicted trajectories slightly deviate from the ground truth and lead to a very close interaction. With further increment of time steps, the prediction is less accurate regarding travel speed and heading direction. From Scenario-B in the right column we can see similar performance. The model has limited performance for fast-moving agent, i.\,e.,~the vehicle in the middle of the street.
\begin{figure} [bpht!]
\centering
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_12.pdf}
\label{subfig:s-a-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_12.pdf}
\label{subfig:s-b-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_16.pdf}
\label{subfig:s-a-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_16.pdf}
\label{subfig:s-b-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_20.pdf}
\label{subfig:s-a-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_20.pdf}
\label{subfig:s-b-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_24.pdf}
\label{subfig:s-a-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_24.pdf}
\label{subfig:s-b-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_28.pdf}
\label{subfig:s-a-28}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_28.pdf}
\label{subfig:s-b-28}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/29_Trajectories052_32.pdf}
\caption{\small{Scenario-A 12 to 32 steps}}
\label{subfig:s-a-32}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/27_Trajectories046_32.pdf}
\caption{\small{Scenario-B 12 to 32 steps}}
\label{subfig:s-b-32}
\end{subfigure}
\caption{\small{Examples for predicting different sequence lengths in Scenario-A (left column) and Scenario-B (right column). From top to bottom rows the prediction lengths are 12, 16, 20, 24, 28 and 32 time steps. The observation sequence lengths are 8 time steps.}}
\label{fig:qualitativeresults}
\end{figure}
To summarize, long-term trajectory prediction based on a short observation is extremely challenging. The behaviors of different road users are more unpredictable with the increase of time. In the future work, in order to push the time horizon from 12 steps or 4.8 seconds to longer time, it may require extra information to update the observation and improve the performance of the model. For example, if the new positions of the agents are acquired in later time steps, the observation time horizon can be shifted accordingly, which is similar to the mechanism in Kalman Filter \cite{kalman1960new} that the prediction can be calibrated by the newly available observation to improve performance for long-term trajectory prediction.
\end{comment}
\section{Conclusions}
In this paper, we present a generative model called Attentive Maps Encoder Network (AMENet) that uses motion information and interaction information for multi-path trajectory prediction of mixed traffic in various real-world environments.
The latent space learnt by the X-Encoder and Y-Encoder for both sources of information enables the model to capture the stochastic properties of motion behaviors for predicting multiple plausible trajectories after a short observation time.
We propose a novel way---dynamic maps---to extract the social effects between agents during interactions. The dynamic maps capture accurate interaction information by encoding the neighboring agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of interaction over different time steps.
The efficacy of the model has been validated on the most challenging benchmark Trajnet, which contains numerous datasets collected in various real-world environments. Our model not only achieves state-of-the-art performance, but also ranks first on the leader board for predicting 12 time-step positions over 4.8 seconds.
Each component of AMENet is validated via a series of ablative studies.
In future work, we will extend our prediction model for safety prediction, for example, using the predicted trajectories to calculate time-to-collision \cite{perkins1968traffic} and to detect abnormal trajectories by comparing the anticipated/predicted trajectories with the actual ones.
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction of road users is a crucial task in different communities, such as intelligent transportation systems (ITS)~\cite{morris2008survey,cheng2018modeling,cheng2020mcenet},
photogrammetry~\cite{schindler2010automatic,klinger2017probabilistic,cheng2018mixed},
computer vision~\cite{alahi2016social,mohajerin2019multi},
and mobile robot applications~\cite{mohanan2018survey}.
This task enables an intelligent system to foresee the behaviors of road users and make a reasonable and safe decision for its next operation, especially in urban mixed-traffic zones (a.k.a. shared spaces~\cite{reid2009dft}).
Trajectory prediction is generally defined as predicting the plausible (e.\,g.,~collision-free and energy-efficient) and socially-acceptable (e.\,g.,~considering social relations, rules and norms between agents) positions in 2D or 3D of non-erratic target agents at each time step within a predefined future time interval, relying on observed partial trajectories over a certain period of time~\cite{helbing1995social,alahi2016social}.
The target agent is defined as the dynamic object for which the actual prediction is made, mainly pedestrians, cyclists, vehicles and other road users~\cite{rudenko2019human}.
A typical prediction process of mixed traffic is exemplified in Fig.~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.8in 2.6in 0.6in, width=1\textwidth]{fig/first_fig.pdf}
\caption{Predicting plausible and socially-acceptable positions of agents (e.\,g.,~target agent in black) at each time step within a predefined future time interval by observing their past trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
How to effectively and accurately predict the trajectories of mixed agents is still an unsolved problem. The challenges mainly stem from three aspects: 1) the complex behavior and uncertain moving intention of each agent, 2) the presence of and interactions between the target agent and its neighboring agents, and 3) the multi-modality of paths: there is usually more than one socially-acceptable path that an agent could take in the future.
There exists a large body of literature that focuses on addressing parts or all of the aforementioned challenges in order to make accurate trajectory prediction.
The traditional methods model the interactions based on hand-crafted features, such as force-based rules~\cite{helbing1995social}, Game Theory~\cite{johora2020agent}, or a constant velocity assumption~\cite{best1997new}.
Their performance is crucially affected by the quality of manually designed features and they lack generalizability~\cite{cheng2020trajectory}.
Recently, boosted by the development of deep learning technologies~\cite{lecun2015deep}, data-driven methods keep reporting new state-of-the-art performance on benchmarks~\cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,cheng2020mcenet}.
For instance, Recurrent Neural Networks (RNNs) based models are used to model the interactions between agents and predict the future positions in sequence~\cite{alahi2016social,xue2018ss}.
However, these works design a discriminative model and produce a deterministic outcome for each agent. Such models tend to predict the ``average'' trajectories, because the commonly used objective function minimizes the Euclidean distance between the ground truth and the predicted outputs.
To predict multiple socially-acceptable trajectories for each target agent, different generative models are proposed, such as Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative} based framework Social GAN~\cite{gupta2018social} and Conditional Variational Auto-Encoder (CVAE)~\cite{kingma2013auto,kingma2014semi,sohn2015learning} based framework DESIRE~\cite{lee2017desire}.
In spite of the great success in this domain, most of these methods are designed for a specific agent type: pedestrians.
In reality, pedestrians, cyclists and vehicles are the three major types of agents and their behaviors affect each other. To make precise trajectory predictions, their interactions should be considered jointly. Besides, the interactions between the target agent and the others are often treated equally. However, different agents may not affect the target agent equally regarding how it moves in the near future. For instance, closer agents should affect the target agent more strongly than distant ones, and a target vehicle is affected more by the pedestrians who tend to cross the road than by the vehicles behind it. Last but not least, the robustness of the models is not fully tested in real-world outdoor mixed-traffic environments (e.\,g.,~roundabouts, intersections) with various unseen traffic situations. Hence, an important research question is: can a model trained on some spaces predict accurate trajectories in other, unseen spaces?
To address the aforementioned limitations, we propose \emph{Attentive Maps Encoder Network} (AMENet), which leverages the ability of generative models to generate diverse patterns of future trajectories and models the interactions between the target agent and the others with attentive dynamic maps.
The dynamic maps aggregate the information extracted from the neighboring agents' orientation, speed and position in relation to the target agent at each time step for interaction modeling, and the attention mechanism enables the model to automatically focus on the salient features extracted over different time steps.
An overview of our proposed framework is depicted in Fig.~\ref{fig:framework}. It has an encoding-decoding structure.
\emph{Encoding.} Two encoders are designed for learning representations of the observed trajectories (X-Encoder) and the future trajectories (Y-Encoder), respectively, and they have a similar structure. Taking the X-Encoder as an example (see Fig.~\ref{fig:encoder}), the encoder first extracts the motion information of the target agent (coordinate offsets in sequential time steps) and the interaction information with the other agents, respectively. Particularly, to explore the dynamic interactions, the motion information of each agent is characterized by its orientation, speed and position at each time step. Then a self-attention mechanism is utilized over all agents to extract the dynamic interaction maps. This is where the name \emph{Attentive Maps Encoder} comes from. The motion and interaction information along the observed time interval are collected by two independent Long Short-Term Memories (LSTMs) and then fused together. The output of the Y-Encoder is fed to a variational auto-encoder to learn the latent space of the future trajectory distribution, which is assumed to be a Gaussian distribution.
\emph{Decoding.} The output of the variational auto-encoder module (obtained by re-parameterization of the encoded features during the training phase and by resampling from the learned latent space during the inference phase) is fed forward to the following decoder, associated with the output of the X-Encoder as the condition, to forecast the future trajectory, which works in the same way as a conditional variational auto-encoder (CVAE)~\cite{kingma2013auto,kingma2014semi,sohn2015learning}.
The main contributions are summarized as follows:
\begin{itemize}
\item[1] We propose a generative framework Attentive Maps Encoder Network (AMENet) for multi-path trajectory prediction.
AMENet inserts a generative module that is trained to learn a latent space for encoding the motion and interaction information in both observation and future, and predicts multiple feasible future trajectories conditioned on the observed information.
\item[2] We design a novel module, \emph{attentive maps encoder} that learns spatio-temporal interconnections among agents based on dynamic maps using a self-attention mechanism.
\item[3] Our model is able to predict trajectories of heterogeneous road users, i.\,e.,~pedestrians, cyclists and vehicles, rather than only focusing on pedestrians, in various unseen real-world environments, which makes our work different from most of the previous ones that only predict pedestrian trajectories.
\end{itemize}
The efficacy of the proposed method has been validated on the recent benchmark \emph{Trajnet}~\cite{sadeghiankosaraju2018trajnet} that contains numerous datasets in various environments for trajectory prediction. Our method reports new state-of-the-art performance and ranks first on the leader board.
Each component of the proposed model is validated via a series of ablative studies.
\section{Related Work}
Our work focuses on modeling the dynamic interactions between heterogeneous road agents and training a generative model to predict multiple plausible trajectories for each target agent.
In this section we discuss the recent related works mainly in the following aspects: modeling this task as sequence prediction, modeling the interactions between agents for precise path prediction, modeling with attention mechanisms, and utilizing generative models to predict multiple plausible trajectories.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling the trajectory prediction as a sequence prediction task is the most popular approach. The 2D/3D position of a target agent is predicted step by step along the time axis.
The widely applied models include linear regression and the Kalman filter~\cite{harvey1990forecasting}, Gaussian processes~\cite{tay2008modelling} and Markov decision processes~\cite{kitani2012activity}.
However, these traditional methods largely rely on the quality of manually designed features and are unable to tackle large-scale data.
Recently, data-driven deep learning technologies, especially RNN-based models and their variants, e.\,g.,~Long Short-Term Memories (LSTMs)~\cite{hochreiter1997long}, have demonstrated a powerful ability to automatically extract representations from massive data and are used to learn the complex patterns of trajectories.
In recent years, RNN-based models keep pushing the edge of accuracy of predicting pedestrian trajectory~\cite{alahi2016social,gupta2018social,sadeghian2018sophie,zhang2019sr}, as well as other types of road users~\cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
In this work, we also utilize LSTMs to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of an agent is not only decided by its own will but also crucially affected by the interactions between it and the other agents. Therefore, effectively modeling the social interactions among agents is important for accurate trajectory prediction.
One of the most influential approaches for modeling interactions is the Social Force Model~\cite{helbing1995social}, which uses the repulsive force for collision avoidance and the attractive force for social connections.
Game Theory is utilized to simulate the negotiation between different road users~\cite{johora2020agent}.
Such rule-based interaction modelings have been incorporated into deep learning models. Social LSTM proposes an occupancy grid to locate the positions of close neighboring agents and uses a social pooling layer to encode the interaction information for trajectory prediction~\cite{alahi2016social}. Many works design their specific ``occupancy'' grid for interaction modeling~\cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interaction between individual agent and group agents with social connections and report better performance.
Meanwhile, different pooling mechanisms are proposed for interaction modeling. For example, Social GAN~\cite{gupta2018social} embeds the relative positions between the target agent and all the other agents together with each agent's motion hidden state and uses an element-wise pooling to extract the interactions between all pairs of agents, not only the close neighboring agents.
Similarly, all the agents are considered in SR-LSTM~\cite{zhang2019sr}. It proposes a states refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework. The motion gate and agent-wise attention are used to select the most important information from the neighboring agents.
Most of the aforementioned models extract interaction information based on the relative position of the neighboring agents in relation to the target agent.
The dynamics of interactions are not fully captured both in spatial and temporal domains.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Recently, different attention mechanisms~\cite{bahdanau2014neural,xu2015show,vaswani2017attention} are incorporated in neural networks for learning complex spatio-temporal interconnections.
Particularly, their effectiveness has been proven in learning powerful representations from sequence information in, e.\,g.,~neural machine translation~\cite{bahdanau2014neural,vaswani2017attention} and image caption generation~\cite{xu2015show,anderson2018bottom,he2020image}.
Some of the recent state-of-the-art methods have also adopted attention mechanisms for sequence modeling and interaction modeling to predict trajectories.
For example, a soft attention mechanism~\cite{xu2015show} is incorporated in LSTMs to learn the spatio-temporal patterns from the position coordinates~\cite{varshneya2017human}. Similarly, SoPhie~\cite{sadeghian2018sophie} applies two separate soft attention modules, one called physical attention for learning the salient features between agent and scene and the other called social attention for modeling agent-to-agent interactions. In the MAP model~\cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work Ind-TF~\cite{giuliari2020transformer} replaces the RNN with a Transformer~\cite{vaswani2017attention} for modeling trajectory sequences.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism~\cite{vaswani2017attention} along the time axis.
The self-attention mechanism is defined as mapping a query and a set of key-value pairs to an output. First, the similarity between the query and each key is computed to obtain a weight. The weights associated with all the keys are then normalized via, e.\,g.,~a softmax function and are applied to weigh the corresponding values for obtaining the final attention.
Unlike RNN-based structures, which propagate information along the symbol positions of the input and output sequences and therefore face increasing difficulties for information propagation in long sequences,
the self-attention mechanism relates different positions of a single sequence in order to compute a representation of the entire sequence~\cite{vaswani2017attention}. The dependency between the input and output is not restricted by the distance between positions.
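As a reference, the generic scaled dot-product form of self-attention~\cite{vaswani2017attention} can be sketched as follows (single head, no masking; the projection matrices are learned parameters, and the sketch is illustrative rather than our exact implementation).
\begin{verbatim}
import numpy as np

def self_attention(x, wq, wk, wv):
    # x: sequence of shape (T, d); wq, wk, wv: learned projections
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v                        # weighted sum of the values
\end{verbatim}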
\subsection{Generative Models}
\label{sec:rel-generative}
To date, VAE~\cite{kingma2013auto} and GAN~\cite{goodfellow2014generative} and their variants (e.\,g.,~Conditional VAE~\cite{kingma2014semi,sohn2015learning}) are the most popular generative models in the era of deep learning.
They are both able to generate diverse outputs by sampling from noise. The essential difference is that GAN trains a generator to generate a sample from noise and a discriminator to decide whether the generated sample is real enough. The generator and the discriminator mutually enhance each other during training.
In contrast, VAE is trained by maximizing the lower bound of training data likelihood for learning a latent space that approximates the distribution of the training data.
Generative models have shown promising performance in different tasks, e.\,g.,~super resolution, image to image translation, image generation, as well as trajectory prediction~\cite{lee2017desire,gupta2018social,cheng2020mcenet}.
Predicting one single trajectory may not be sufficient due to the uncertainties of road users' behavior.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performance of the two modules is mutually enhanced and the generator is able to generate trajectories that are as precise as the real ones. Similarly, Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
Lee~et al.~\cite{lee2017desire} propose a CVAE model to predict multiple plausible trajectories.
Cheng~et al.~\cite{cheng2020mcenet} propose a CVAE like model named MCENet to predict multiple plausible trajectories conditioned on the scene context and previous information of trajectories.
In this work, we incorporate a CVAE module to learn a latent space of possible future paths for predicting multiple plausible future trajectories conditioned on the observed past trajectories.
Our work essentially differs from the above generative models in the following two points: (1) We insert not only the ground truth trajectory, but also the dynamic maps associated with the ground truth trajectory into the CVAE module during training, which is different from the conventional CVAE that follows a consistent input and output structure (e.\,g.,~the input and output are both trajectories of the same structure \cite{lee2017desire}).
(2) Our method does not explore information from images, i.\,e.,~visual information is not used and future trajectories are predicted only based on the map data (i.\,e.,~position coordinates).
Visual information, such as vegetation, curbside and buildings, is very different from one space to another. Our model is trained on some available spaces but is validated on other unseen spaces. Over-trained visual features, on the other hand, may jeopardise the model's robustness and lead to bad performance in an unseen space with a totally different environment~\cite{cheng2020mcenet}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 3.5in 0in 0.5in, width=1\textwidth]{fig/model_framework3.pdf}
\caption{An overview of the proposed framework. It consists of four modules: the X-Encoder and Y-Encoder are used for encoding the observed and the future trajectories, respectively. They have a similar structure. The Sample Generator produces diverse samples of future trajectories. The Decoder module is used to decode the features from the produced samples in the last step and predicts the future trajectory sequentially. The specific structure of the X-Encoder/Y-Encoder is given in Fig.~\ref{fig:encoder}.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet in detail (Fig.~\ref{fig:framework}) in the following structure: a brief review of \emph{CVAE} (Sec.~\ref{subsec:cvae}), \emph{Problem Definition} (Sec.~\ref{subsec:definition}), \emph{Motion Input} (Sec.~\ref{subsec:input}), \emph{Dynamic Maps} (Sec.~\ref{subsec:dynamic}), \emph{Diverse Sampling} (Sec.~\ref{subsec:sample}) and \emph{Trajectory Ranking} (Sec.~\ref{subsec:ranking}).
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
In tasks like trajectory prediction, we are interested in modeling a conditional distribution $P(Y_n|X)$, where $X$ is the previous trajectory information and $Y_n$ is one of the possible future trajectories.
In order to generate controllable, diverse samples of future trajectories based on past trajectories, a deep generative model, the conditional variational auto-encoder (CVAE), is adopted inside our framework.
CVAE is an extension of the generative model VAE \cite{kingma2013auto} that introduces a condition to control the output \cite{kingma2014semi}.
Concretely, it is able to learn the stochastic latent variable $z$ that characterizes the distribution $P(Y_i|X_i)$ of $Y_i$ conditioned on the input $X_i$, where $i$ is the index of the sample.
The objective function of training CVAE is formally defined as:
\begin{equation}
\label{eq:CVAE}
\log{P(Y_i|X_i)} \geq - D_{KL}(Q(z_i|Y_i, X_i)||P(z_i)) + \E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}],
\end{equation}
where $Y_i$ and $X_i$ stand for the future and past trajectories in our task, respectively, and $z_i$ is the learned latent variable of $Y_i$. The objective is to maximize the conditional probability $\log{P(Y_i|X_i)}$, which is achieved by minimizing the reconstruction error $\ell (\hat{Y_i}, Y_i)$ and the Kullback-Leibler divergence $D_{KL}(\cdot)$ in parallel.
In order to enable back propagation for stochastic gradient descent in $\E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}]$, a re-parameterization trick \cite{rezende2014stochastic} is applied: $z_i = \mu_i + \sigma_i \odot \epsilon_i$. Here, $z_i$ is assumed to have a Gaussian distribution $z_i\sim Q(z_i|Y_i, X_i)=\mathcal{N}(\mu_i, \sigma_i)$. $\epsilon_i$ is sampled from noise that follows a normal Gaussian distribution, and the mean $\mu_i$ and the standard deviation $\sigma_i$ of $z_i$ are produced by two side-by-side \textit{fc} layers, respectively (as shown in Fig.~\ref{fig:framework}). In this way, the non-differentiable sampling process $Q(z_i|Y_i, X_i)$ is turned into differentiating the sampled $z_i$ w.\,r.\,t.~$\mu_i $ and $\sigma_i$. Back propagation with stochastic gradient descent can then be utilized to optimize the networks that produce $\mu_i $ and $\sigma_i$.
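Under these Gaussian assumptions, the training objective and the re-parameterization trick take the following generic form, shown here as a PyTorch-style sketch with an L2 reconstruction term; the exact loss weighting of our implementation may differ.
\begin{verbatim}
import torch

def reparameterize(mu, log_var):
    # z = mu + sigma * eps, eps ~ N(0, I); keeps gradients w.r.t. mu, sigma
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def cvae_loss(y, y_hat, mu, log_var):
    # negative ELBO: reconstruction error + KL(Q(z|Y,X) || N(0, I))
    rec = torch.nn.functional.mse_loss(y_hat, y, reduction='sum')
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return rec + kl
\end{verbatim}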
\subsection{Problem Definition}
\label{subsec:definition}
The multi-path trajectory prediction problem is defined as follows: agent $i$ receives as input its observed trajectory $\mathbf{X}_i=\{X_i^1,\cdots,X_i^T\}$ and predicts its $n$-th plausible future trajectory $\hat{\mathbf{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,n}^{T'}\}$. $T$ and $T'$ denote the sequence lengths of the past and the predicted trajectory, respectively. The trajectory position of $i$ at time step $t$ is characterized by the coordinates $X_i^t=({x_i}^t, {y_i}^t)$ (3D coordinates are also possible, but in this work only 2D coordinates are considered) and likewise $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^{t'}, \hat{y}_{i,n}^{t'})$.
The objective is to predict multiple plausible future trajectories $\hat{\mathbf{Y}}_i = \{\hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}\}$ that are as accurate as possible with respect to the ground truth $\mathbf{Y}_i$. This task is formally defined as $\hat{\mathbf{Y}}_{i,n} = f(\mathbf{X}_i, \text{Map}_i), ~n \in \{1,\cdots,N\}$. Here, $N$ denotes the total number of predicted trajectories and $\text{Map}_i$ denotes the dynamic maps centralized on the target agent for mapping the interactions with its neighboring agents over the time steps. More details of the dynamic maps are given in Sec.~\ref{subsec:dynamic}.
\subsection{Motion Input}
\label{subsec:input}
The motion information for each agent is captured by the position coordinates at each time step.
Specifically, we use the offset $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of the trajectory positions between two consecutive time steps as the motion information instead of the coordinates in a Cartesian space, which has been widely applied in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}. Compared to coordinates, the offset is independent of the given space and less prone to overfitting a model to a particular space or travel direction.
The offset can be interpreted as the speed over time steps that are defined with a constant duration.
As long as the original position is known, the absolute coordinates at each position can be recovered by cumulatively summing the sequence of offsets.
As an augmentation technique, we randomly rotate the trajectories to prevent the model from only learning certain directions. In order to maintain the relative positions and angles between agents, the trajectories of all the agents coexisting in a given period are rotated by the same angle.
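The offset representation, its inversion by cumulative summation, and the scene-consistent rotation can be summarized by the following sketch; the function names and the radian-based angle are our own illustration.
\begin{verbatim}
import numpy as np

def to_offsets(track):
    # per-step offsets (speed) from absolute positions: (T, 2) -> (T-1, 2)
    return np.diff(track, axis=0)

def to_positions(offsets, origin):
    # recover absolute coordinates by cumulative summation from a known origin
    return origin + np.cumsum(offsets, axis=0)

def rotate_scene(tracks, angle):
    # rotate all coexisting trajectories by the same angle (radians),
    # preserving the relative positions and angles between agents
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    return [track @ rot.T for track in tracks]
\end{verbatim}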
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 2.2in 3.6in 0.5in, width=1\textwidth]{fig/encoder.pdf}
\caption{Structure of the X-Encoder. The encoder has two branches: the upper one is used to extract the motion information of target agents (i.\,e.,~movement in $x$- and $y$-axis in a Cartesian space), and the lower one is used to learn the interaction information among the neighboring road users from dynamic maps over time. Each dynamic map consists of three layers that represent orientation, travel speed and relative position, each centralized on the target road user. The motion information and the interaction information are encoded by their own LSTMs sequentially. The last outputs of the two LSTMs are concatenated and forwarded to a \textit{fc} layer to get the final output of the X-Encoder. The Y-Encoder has the same structure as the X-Encoder, but it is used for extracting features from the future trajectories and is only used in the training phase.}
\label{fig:encoder}
\end{figure}
\subsection{Dynamic Maps}
\label{subsec:dynamic}
Different from the recent works of parsing the interactions between the target and neighboring agents using an occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, we propose a novel and straightforward method---attentive dynamic maps---to learn interaction information among agents.
As demonstrated in Fig.~\ref{fig:encoder}, a dynamic map at a given time step consists of three layers that interpret the information of \emph{orientation}, \emph{speed} and \emph{position}, respectively, which is derived from the trajectories of the involved agents. Each layer is centralized on the target agent's position and divided into uniform grid cells. The layers are divided into grids for two reasons: (1) compared with representing the information at the pixel level, the grid level is computationally more efficient; (2) the size and moving speed of an agent are not fixed and an agent occupies a local region of pixels of arbitrary shape, so the spatio-temporal information of individual pixels differs even when they belong to the same agent. Therefore, we represent the spatio-temporal information as an average value within a grid cell. We calculate the value of each grid cell in the different layers as follows:
the neighboring agents are assigned to the corresponding grid cells according to their relative position to the target agent, and their relative offset (speed) to the target agent is recorded at each time step in the $x$- and $y$-axis direction.
Eq.~\eqref{eq:map} denotes the mapping mechanism for target user $i$ considering the orientation $O$, speed $S$ and position $P$ of all the neighboring agents $j \in \mathcal{N}(i)$ that coexist with the target agent $i$ at each time step.
\begin{equation}
\label{eq:map}
\text{Map}_i^t = \sum_{j \in \mathcal{N}(i)}(O, S, P) | (x_j^t-x_i^t, ~y_j^t-y_i^t, ~\Delta{x}_j^t-\Delta{x}_i^t, ~\Delta{y}_j^t-\Delta{y}_i^t).
\end{equation}
The \emph{orientation} layer $O$ represents the heading direction of the neighboring agents. The orientation value is given in \emph{degrees} $[0, 360)$ and is then mapped to $[0, 1]$. The value of each grid cell is the mean of the orientations of all the agents within the cell.
The \emph{speed} layer $S$ represents all the neighboring agents' travel speed. Locally, the speed in each grid is the mean speed of all the agents within a grid. Globally, across all the grids, the value of speed is normalized by the Min-Max normalization scheme into $[0, 1]$.
The \emph{position} layer $P$ stores the positions of all the neighboring agents in the grids calculated by Eq.~\eqref{eq:map}. The value of the corresponding grid is the number of individual neighboring agents existing in the grid normalized by the total number of all of the neighboring agents at that time step, which can be interpreted as the grid's occupancy density.
Each time step has a dynamic map and therefore the spatio-temporal interaction information among agents are interpreted dynamically over time.
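To make the construction concrete, the following sketch (Python/NumPy) builds one three-layer map for a single time step, using the \SI{1}{meter} cell size and \SI{16}{meter} extent given below. It is an illustration under our own assumptions, not the authors' code; in particular, we assume the heading in the orientation layer is derived from the relative offsets, and all names are hypothetical.
\begin{verbatim}
import numpy as np

def dynamic_map(target_pos, target_off, nbr_pos, nbr_off,
                cell=1.0, extent=16.0):
    # nbr_pos, nbr_off: (M, 2) neighbor positions and step offsets.
    # Returns (3, G, G): orientation, speed and position layers.
    G = int(2 * extent / cell)
    O, S, P = np.zeros((G, G)), np.zeros((G, G)), np.zeros((G, G))
    cnt = np.zeros((G, G))
    rel = nbr_pos - target_pos                 # relative positions
    for k in range(len(nbr_pos)):
        gx, gy = ((rel[k] + extent) // cell).astype(int)
        if not (0 <= gx < G and 0 <= gy < G):
            continue                           # outside region of interest
        dvx, dvy = nbr_off[k] - target_off     # relative offset (speed)
        heading = np.degrees(np.arctan2(dvy, dvx)) % 360.0
        O[gx, gy] += heading / 360.0           # orientation mapped to [0, 1]
        S[gx, gy] += np.hypot(dvx, dvy)        # travel speed
        P[gx, gy] += 1.0                       # occupancy count
        cnt[gx, gy] += 1.0
    nz = cnt > 0
    O[nz] /= cnt[nz]                           # per-cell mean orientation
    S[nz] /= cnt[nz]                           # per-cell mean speed
    if S.max() > S.min():                      # global min-max normalization
        S = (S - S.min()) / (S.max() - S.min())
    if len(nbr_pos) > 0:
        P /= len(nbr_pos)                      # occupancy density
    return np.stack([O, S, P])
\end{verbatim}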
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.7\textwidth]{fig/dynamic_maps_nexus_0.pdf}
\caption{The maps information with accumulated time steps for the dataset \textit{nexus-0}.}
\label{fig:dynamic_maps}
\end{figure}
To show the dynamic maps information more intuitively, we gather all the agents over all the time steps and visualize them in Fig.~\ref{fig:dynamic_maps} as an example showcased by the dataset \textit{nexus-0} (see more information on the benchmark datasets in Sec.~\ref{subsec:benchmark}).
Each grid cell is \SI{1}{meter} in both width and height, and the region of interest extends up to \SI{16}{meters} in each direction from the target agent, in order to include not only close but also distant neighboring agents.
The visualization demonstrates certain motion patterns of the agents, including the distribution of orientation, speed and position over the grids of the maps. For example, all the agents move in a certain direction with similar speed on a particular area of the maps, and some areas are much denser than the others.
\subsubsection{Attentive Maps Encoder}
\label{subsubsec:AMENet}
As discussed above, each time step has a dynamic map which summarizes the orientation, speed and position information of all the neighboring agents. To capture the spatio-temporal interconnections from the dynamic maps for the following modules, we propose the \emph{Attentive Maps Encoder} module.
The X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and dynamic maps information for interaction (lower branch).
The upper branch takes as motion input the offsets $\{({\Delta{x}_i}^t, {\Delta{y}_i}^t)\}_{t=1}^{T}$ for each target agent over the observed time steps. The motion information is first passed to a 1D convolutional layer (Conv1D) with one-step stride along the time axis to learn motion features one time step after another. Then it is passed to a fully connected (\textit{fc}) layer. The output of the \textit{fc} layer is passed to an LSTM module for encoding the temporal features along the trajectory sequence of the target agent into a hidden state, which contains all the motion information. A sketch of this branch is given below.
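The following PyTorch sketch illustrates this branch; the hidden size and the kernel width of the convolution are assumptions for illustration, as they are not fixed in the text.
\begin{verbatim}
import torch
import torch.nn as nn

class MotionBranch(nn.Module):
    # Upper branch of the X-Encoder: Conv1D -> fc -> LSTM (sketch).
    def __init__(self, hidden=64):
        super().__init__()
        # one-step stride along the time axis; kernel width assumed
        self.conv = nn.Conv1d(2, hidden, kernel_size=1, stride=1)
        self.fc = nn.Linear(hidden, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, offsets):                 # offsets: (B, T, 2)
        x = self.conv(offsets.transpose(1, 2))  # (B, hidden, T)
        x = torch.relu(self.fc(x.transpose(1, 2)))
        _, (h, _) = self.lstm(x)                # last hidden state
        return h[-1]                            # motion encoding (B, hidden)
\end{verbatim}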
The lower branch takes the dynamic maps $\{\text{Map}_i^t\}_{t=1}^{T}$ as input.
The interaction information at each time step is passed through a 2D convolutional layer (Conv2D) with ReLU activation and a Max Pooling layer (MaxPool) to learn the spatial features among all the agents. The output of MaxPool at each time step is flattened and concatenated along the time axis to form a time-distributed feature vector. Then, the feature vector is fed forward to a self-attention module to learn the interaction information with an attention mechanism. Here, we adopt the multi-head attention method from Transformer, which linearly projects multiple self-attention operations in parallel and concatenates their outputs~\cite{vaswani2017attention}.
The attention function is described as mapping a query and a set of key-value pairs to an output. The query ($Q$), keys ($K$) and values ($V$) are transformed from the spatial features, which are encoded in the above step, by linear transformations:
\begin{align*}
Q =& \pi(\text{Map})W_Q, ~W_Q \in \mathbb{R}^{D\times D_q},\\
K =& \pi(\text{Map})W_K, ~W_K \in \mathbb{R}^{D\times D_k},\\
V =& \pi(\text{Map})W_V, ~W_V \in \mathbb{R}^{D\times D_v},
\end{align*}
where $W_Q, W_K$ and $W_V$ are the trainable parameters and $\pi(\cdot)$ indicates the encoding function of the dynamic maps. $D_q, D_k$ and $D_v$ are the dimensions of the query, key and value vectors (they are the same in our implementation).
Then the self-attended features are calculated as:
\begin{equation}
\label{eq:attention}
\text{Attention}(Q, K, V) = \text{softmax}(\frac{QK^T}{\sqrt{D_k}})V
\end{equation}
This operation is also called \emph{scaled dot-product attention}~\cite{vaswani2017attention}.
To improve the performance of the attention layer, \emph{multi-head attention} is applied:
\begin{align}
\label{eq:multihead}
\begin{split}
\text{MultiHead}(Q, K, V) &= \text{ConCat}(\text{head}_1,...,\text{head}_h)W_O, \\
\text{head}_i &= \text{Attention}(QW_{Qi}, KW_{Ki}, VW_{Vi}),
\end{split}
\end{align}
where $W_{Qi}\in \mathbb{R}^{D\times D_{qi}}$ indicates the linear transformation parameters for the query in the $i$-th self-attention head and $D_{qi} = \frac{D_{q}}{\#head}$. The same holds for $W_{Ki}$ and $W_{Vi}$. Note that $\#head$ is the total number of attention heads and it must divide $D_{q}$ evenly. The outputs of all heads are concatenated and passed through a linear transformation with parameter $W_O$, as sketched below.
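The following NumPy sketch instantiates Eqs.~\eqref{eq:attention} and \eqref{eq:multihead} for a single sequence of time-distributed map features; weight shapes follow the notation above, and the function names are ours.
\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(D_k)) V -- scaled dot-product attention
    return softmax(Q @ K.swapaxes(-2, -1) / np.sqrt(K.shape[-1])) @ V

def multi_head(x, Wq, Wk, Wv, Wo, h):
    # x: (T, D) features; Wq, Wk, Wv, Wo: (D, D); h: number of heads
    T, D = x.shape
    d = D // h                                  # D_q / #head per head
    split = lambda M: M.reshape(T, h, d).transpose(1, 0, 2)
    heads = attention(split(x @ Wq), split(x @ Wk), split(x @ Wv))
    concat = heads.transpose(1, 0, 2).reshape(T, D)
    return concat @ Wo                          # ConCat(head_1..head_h) W_O
\end{verbatim}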
The output of the multi-head attention is passed to an LSTM which is used to encode the dynamic interconnection in time sequence.
Both the hidden states (the last output) from the motion LSTM and the interaction LSTM are concatenated and passed to a \textit{fc} layer for feature fusion, as the complete output of the X-Encoder, which is denoted as $\Phi_X$.
The Y-Encoder has the same structure as the X-Encoder and is used to encode both the target agent's motion and interaction information from the ground truth during training. The output of the Y-Encoder is denoted as $\Phi_Y$. The dynamic maps are also leveraged in the Y-Encoder; however, they are not reconstructed by the Decoder (only the future trajectories are reconstructed). This extended structure distinguishes our model from the conventional CVAE structure~\cite{kingma2013auto,kingma2014semi,sohn2015learning} and from the work of~\cite{lee2017desire}, in which the input and output maintain the same form.
\subsection{Diverse Sample Generation}
\label{subsec:sample}
In the training phase, $\Phi_X$ and $\Phi_Y$ are concatenated and forwarded to two successive \textit{fc} layers followed by ReLU activation, and then passed to two parallel \textit{fc} layers to produce the mean and standard deviation of the distribution, which are used to re-parameterize $z$ as discussed in Sec.~\ref{subsec:cvae}.
Then, $\Phi_X$ is concatenated with $z$ as condition and fed to the following decoder (based on LSTM) to reconstruct $Y$ sequentially. It is worth noting that $\Phi_X$ is used as condition to help reconstruct $\mathbf{Y}$ here.
The MSE loss ${\ell}_2 (\mathbf{\hat{Y}}, \mathbf{Y})$ (reconstruction loss) and the $\text{KL}(Q(z|\mathbf{Y}, \mathbf{X})||P(z))$ loss are used to train our model.
The MSE loss forces the reconstructed results to be as close as possible to the ground truth, and the KL-divergence loss forces the set of latent variables $z$ towards a standard Gaussian distribution.
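For a diagonal Gaussian posterior, the KL term has a closed form, so the total loss can be sketched as below (Python/NumPy). The equal weighting of the two terms is our assumption; the exact weighting is not stated here.
\begin{verbatim}
import numpy as np

def cvae_loss(y_pred, y_true, mu, log_var, beta=1.0):
    # reconstruction (MSE) + KL(N(mu, sigma^2) || N(0, I));
    # closed form: -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)
    mse = np.mean((y_pred - y_true) ** 2)
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    return mse + beta * kl    # beta = 1 assumed
\end{verbatim}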
During inference at test time, the Y-Encoder is removed and the X-Encoder works in the same way as in the training phase to extract information from observed trajectories. To generate a future prediction sample, a latent variable $z$ is sampled from $\mathcal{N}(\mathbf{0}, ~I)$ and concatenated with $\Phi_X$ (as condition) as the input of the decoder.
To generate diverse samples, this step is repeated $N$ times to generate $N$ samples of future prediction conditioned on $\Phi_X$.
To summarize, the overall pipeline of Attentive Maps Encoder Network (AMENet) consists of four modules, namely, X-Encoder, Y-Encoder, Z-Space and Decoder.
Each of the modules uses different types of neural networks to process the motion information and dynamic maps information for multi-path trajectory prediction. Fig~\ref{fig:framework} depicts the pipeline of the framework.
\subsection{Trajectories Ranking}
\label{subsec:ranking}
A bivariate Gaussian distribution is used to rank the multiple predicted trajectories $\hat{Y}^1,\cdots,\hat{Y}^N$ for each agent. At each time step, the predicted positions $({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})$, where $n{\in}\{1,\cdots,N\}$ and $t'\in T'$ for agent $i$, are used to fit a bivariate Gaussian distribution $\mathcal{N}({\mu}_{xy},\,\sigma^{2}_{xy}, \,\rho)^{t'}$. The predicted trajectories are sorted by their joint probability density functions $p(\cdot)$ over the time axis using Eqs.~\eqref{eq:pdf} and \eqref{eq:sort}. $\widehat{Y}^\ast$ denotes the most-likely prediction out of the $N$ predictions.
\begin{align}
\label{eq:pdf}
P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'}) \approx p[({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})|\mathcal{N}({\mu}_{xy},\sigma^{2}_{xy},\rho)^{t'}]\\
\label{eq:sort}
\widehat{Y}^\ast = \underset{n}{\text{arg\,max}}\sum_{t'=1}^{T'}{\log}P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})
\end{align}
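A sketch of this ranking procedure (Python, using SciPy to evaluate the bivariate Gaussian density) is given below; the small covariance regularization is our addition to keep the fit numerically stable.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def rank_predictions(preds):
    # preds: (N, T', 2) candidate trajectories for one agent.
    N, Tp, _ = preds.shape
    scores = np.zeros(N)
    for t in range(Tp):
        pts = preds[:, t, :]                   # N predicted positions
        mu = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(2)
        scores += multivariate_normal(mu, cov).logpdf(pts)
    return preds[np.argmax(scores)]            # most-likely prediction
\end{verbatim}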
\section{Experiments}
\label{sec:experiments}
In this section, we will introduce the benchmark which is used to evaluate our method, the evaluation metrics and the comparison of results from our method with the ones from the recent state-of-the-art methods. To further justify how each proposed module in our framework impacts the overall performance, we design a series of ablation studies and discuss the results in detail.
\subsection{Trajnet Benchmark Challenge Datasets}
\label{subsec:benchmark}
We verify the performance of the proposed method on the most challenging benchmark, the Trajnet datasets~\cite{sadeghiankosaraju2018trajnet}. It is the most popular large-scale trajectory-based activity benchmark in this domain and provides a uniform evaluation system for fair comparison among the submitted methods.
Trajnet covers a wide range of datasets and includes various types of road users (pedestrians, bikers, skateboarders, cars, buses, and golf cars) that navigate in a real world outdoor mixed traffic environment.
The data were collected from 38 scenes with ground truth for training and from another 20 scenes without ground truth for testing (i.\,e.,~the open challenge competition). The most popular pedestrian scenes, ETH~\cite{pellegrini2009you} and UCY~\cite{lerner2007crowds}, are also included in the benchmark. Each scene presents a different traffic density in a different space layout, which makes the prediction task challenging.
It requires a model to generalize, in order to adapt to the various complex scenes.
Trajectories are recorded as the $xy$ coordinates in meters or pixels projected on a Cartesian space. Each trajectory provides 8 steps for observation and the following 12 steps for prediction. The duration between two successive steps is 0.4 seconds.
However, the pixel coordinates are not in the same scale across the whole benchmark. Without unifying the pixels into the same scale, it is extremely difficult to train a general model for the whole dataset. Hence, we follow all the previous works~\cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,gupta2018social,giuliari2020transformer} that use the coordinates in meters.
In order to train and evaluate the proposed method, as well as the ablative studies, 6 different scenes are selected as offline test set from the 38 scenes in the training set.
Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.
The best trained model is selected based on its evaluation performance on the offline test set and is then used for the online evaluation.
Fig.~\ref{fig:trajectories} shows the visualization of the trajectories in each scene.
\begin{figure}[bpht!]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_bookstore_3.pdf}
\label{trajectories_bookstore_3}
\caption{\small bookstore3}
\end{subfigure}
\begin{subfigure}{0.54\textwidth}
\includegraphics[trim=0cm 2cm 0cm -1.47cm, width=1\textwidth]{fig/trajectories_coupa_3.pdf}
\label{trajectories_coupa_3}
\caption{\small coupa3}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
\includegraphics[trim=0cm 2cm 0cm -0.47cm, width=1\textwidth]{fig/trajectories_deathCircle_0.pdf}
\label{trajectories_deathCircle_0}
\caption{\small deathCircle0}
\end{subfigure}
\begin{subfigure}{0.28\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_gates_1.pdf}
\label{trajectories_gates_1}
\caption{\small gates1}
\end{subfigure}
\begin{subfigure}{0.52\textwidth}
\includegraphics[trim=0cm 2cm 0cm -1.64cm, width=1\textwidth]{fig/trajectories_hyang_6.pdf}
\label{trajectories_hyang_6}
\caption{\small hyang6}
\end{subfigure}
\begin{subfigure}{0.27\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_nexus_0.pdf}
\label{trajectories_nexus_0}
\caption{\small nexus0}
\end{subfigure}
\caption{Visualization of each scene of the offline test set. Sub-figures are not shown at the same scale, in order to demonstrate the very different sizes and layouts of the spaces.}
\label{fig:trajectories}
\end{figure}
\subsection{Evaluation Metrics}
The mean average displacement error (MAD) and the final displacement error (FDE) are the two most commonly applied metrics to measure the performance in terms of trajectory prediction~\cite{alahi2016social,gupta2018social,sadeghian2018sophie}.
\begin{itemize}
\item MAD is the aligned L2 distance from the ground truth $Y$ to its prediction $\hat{Y}$, averaged over all time steps. We report the mean value over all trajectories.
\item FDE is the L2 distance between the last position of $Y$ and the corresponding last position of $\hat{Y}$. It measures a model's ability to predict the destination and is more challenging, as errors accumulate over time. (A computational sketch of both metrics is given below.)
\end{itemize}
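The following sketch (Python/NumPy, hypothetical function name) computes both metrics for a single trajectory:
\begin{verbatim}
import numpy as np

def mad_fde(y_pred, y_true):
    # y_pred, y_true: (T', 2) predicted / ground-truth positions
    d = np.linalg.norm(y_pred - y_true, axis=-1)  # per-step L2 distance
    return d.mean(), d[-1]                        # (MAD, FDE)
\end{verbatim}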
We evaluate both the most-likely prediction and the best prediction $@top10$ for the multi-path trajectory prediction.
The most-likely prediction is selected by the trajectories ranking mechanism, see Sec.~\ref{subsec:ranking}.
$@top10$ prediction means that, among the 10 predicted trajectories with the highest confidence, the one with the smallest MAD and FDE compared with the ground truth is reported. When the ground truth is not available (for the online test), only the most-likely prediction is selected. The task then reduces to the single trajectory prediction problem, as addressed in most of the previous works~\cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,giuliari2020transformer}.
\subsection{Quantitative Results and Comparison}
\label{subsec:results}
We compare the performance of our method for trajectory prediction with the most influential previous works and the recent state-of-the-art works published on the Trajnet challenge (up to 14/06/2020) to ensure a fair comparison.
The compared works include the following models.
\begin{itemize}
\item\emph{Social Force}~\cite{helbing1995social} is a rule-based model with the repulsive force for collision avoidance and the attractive force for social connections;
\item\emph{Social LSTM}~\cite{alahi2016social} proposes a social pooling with a rectangular occupancy grid for close neighboring agents which is widely adopted thereafter in this domain~\cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent};
\item\emph{SR-LSTM}~\cite{zhang2019sr} uses a states refinement module to extract social effects between the target agent and its neighboring agents;
\item\emph{RED}~\cite{becker2018evaluation} uses RNN-based Encoder with Multilayer Perceptron (MLP) for trajectory prediction;
\item\emph{MX-LSTM}~\cite{hasan2018mx} exploits the head pose information of agents to help analyze its moving intention;
\item\emph{Social GAN}~\cite{gupta2018social} proposes to utilize the generative model GAN for multi-path trajectory prediction, which is one of the closest works to ours (the other one is \emph{DESIRE}~\cite{lee2017desire}; however, neither online test results nor code were reported, hence we do not compare with \emph{DESIRE} for fairness);
\item\emph{Ind-TF}~\cite{giuliari2020transformer} only utilizes the Transformer network~\cite{vaswani2017attention} for sequence modeling with no consideration for social interactions between agents.
\end{itemize}
Table~\ref{tb:results} lists the performances of the above methods and ours on the Trajnet test set, measured by MAD, FDE and the overall average $(\text{MAD} + \text{FDE})/2$. The data are originally reported on the Trajnet challenge leader board\footnote{http://trajnet.stanford.edu/result.php?cid=1}. We can see that our method (AMENet) outperforms the other methods and wins the first place on all metrics.
Even compared with the most recent model, the Transformer network Ind-TF~\cite{giuliari2020transformer} (under review), our method performs better. In particular, our method further reduces the FDE from 1.197 to 1.183 meters.
Note that, our model predicts multiple trajectories conditioned on the observed trajectories with the stochastic variable sampled from a Gaussian distribution repeatedly (see Sec.~\ref{subsec:sample}). We select the most-likely prediction using the proposed ranking method as discussed in Sec. \ref{subsec:ranking}. The outstanding performances from our method also demonstrate that our ranking method is effective.
\begin{table}[t!]
\centering
\caption{Comparison between our method and the state-of-the-art works. Smaller values are better. Best values are highlighted in boldface.}
\begin{tabular}{llll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & MAD [m]$\downarrow$ \\
\hline
Social LSTM~\cite{alahi2016social} & 1.3865 & 3.098 & 0.675 \\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 \\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 \\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 \\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 \\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 \\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} \\
Ours (AMENet)\tablefootnote{Our method is named \textit{ikg\_tnt} on the leader board.} & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} \\
\hline
\end{tabular}
\label{tb:results}
\end{table}
\subsection{Results for Multi-Path Prediction}
\label{subsec:multipath-selection}
Multi-path trajectory prediction is one of the main contributions of this work and essentially distinguishes it from most of the existing works.
Here, we discuss its performance w.\,r.\,t.~multi-path prediction with the latent space.
Instead of generating a single prediction, AMENet generates multiple feasible trajectories by sampling the latent variable $z$ multiple times (see Sec.~\ref{subsec:cvae}).
During training, the motion information and interaction information from observation and ground truth are encoded into the so-called Z-Space (see Fig.~\ref{fig:framework}). The KL-divergence loss forces $z$ to be a normal Gaussian distribution.
Fig.~\ref{fig:z_space} shows the visualization of the Z-Space in two dimensions, with $\mu$ visualized on the left and $\log\sigma$ visualized on the right. From the figure we can see that the training phase successfully re-parameterizes the variable $z$ into a Gaussian distribution that captures the stochastic properties of agents' behaviors. When the Y-Encoder is not available at inference time, the well-trained Z-Space, in turn, enables us to randomly sample a latent variable $z$ from the Gaussian distribution multiple times for generating more than one feasible future trajectory conditioned on the observation.
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=.6\textwidth]{fig/z_space.pdf}
\caption{Z-Space of two dimensions with $\mu$ visualized on the left and $\log\sigma$ visualized on the right. It is trained to follow a $\mathcal{N}(0, 1)$ distribution. The variance is visualized in logarithm space and is very close to zero.}
\label{fig:z_space}
\end{figure}
Table~\ref{tb:multipath} shows the quantitative results for multi-path trajectory prediction. Predicted trajectories are ranked by $\text{top}@10$ with the prior knowledge of the corresponding ground truth, and by most-likely ranking if the ground truth is not available. Compared to the most-likely prediction, the $\text{top}@10$ prediction yields similar but slightly better performance. This indicates that: 1) the generated multiple trajectories increase the chance to narrow down the errors from the prediction to the ground truth, and 2) the predicted trajectories are feasible (if not, the bad predictions would deteriorate the overall performance and lead to worse results than the most-likely prediction).
Fig.~\ref{fig:multi-path} showcases some qualitative examples of multi-path trajectory prediction from our model. We can see that in roundabouts the interactions between different agents are full of uncertainties and each agent has more possibilities to choose its future path. We also notice that the predicted trajectories diverge more widely at further time steps. This is reasonable because the further into the future, the higher the uncertainty of an agent's intention. It also shows that the ability to predict multiple plausible trajectories is important for analyzing the movements of road users under this increasing uncertainty of future movements. A single prediction provides limited information for such an analysis and is likely to lead to false conclusions if the prediction is not correct/precise in the early steps.
\begin{table}[t!]
\centering
\small
\caption{Evaluation of multi-path trajectory prediction using AMENet on the offline test set of Trajnet. Predicted trajectories are ranked by $\text{top}@10$ and most-likely, and are measured by MAD/FDE.}
\begin{tabular}{lll}
\\ \hline
Dataset & Top@10 & Most-likely \\ \hline
bookstore3 & 0.477/0.961 & 0.486/0.979 \\
coupa3 & 0.221/0.432 & 0.226/0.442 \\
deathCircle0 & 0.650/1.280 & 0.659/1.297 \\
gates1 & 0.784/1.663 & 0.797/1.692 \\
hyang6 & 0.534/1.076 & 0.542/1.094 \\
nexus0 & 0.642/1.073 & 0.559/1.109 \\
Average & 0.535/1.081 & 0.545/1.102 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.514\textwidth]{multi_preds/deathCircle_0240.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.476\textwidth]{multi_preds/gates_1001.pdf}
\caption{Multi-path predictions from AMENet}
\label{fig:multi-path}
\end{figure}
\subsection{Ablation Studies}
\label{sec:ablativemodels}
In order to analyze the impact of each proposed module in our framework, i.\,e.,~dynamic maps, self-attention, and the extended structure of CVAE, three ablative models are evaluated against the full model.
\begin{itemize}
\item AMENet, the full model of our framework.
\item AOENet, substitutes dynamic maps with occupancy grid~\cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both the X-Encoder and Y-Encoder. This setting is used to validate the contribution from the dynamic maps.
\item MENet, removes the self-attention module in the dynamic maps branch. This setting is used to validate the effectiveness of the self-attention module that learns the spatial interactions among agents along the time axis.
\item ACVAE, only uses dynamic maps in X-Encoder. It is equivalent to CVAE~\cite{kingma2013auto,kingma2014semi,sohn2015learning} with self-attention. This setting is used to validate the contribution of the extended structure for processing the dynamic maps information in the Y-Encoder.
\end{itemize}
Table~\ref{tb:resultsablativemodels} shows the quantitative results from the above ablative models.
Errors are measured by MAD/FDE on the most-likely prediction.
By comparison between AOENet and AMENet we can see that when we replace the dynamic maps with the occupancy grid, both MAD and FDE increase by a remarkable margin across all the datasets.
It demonstrates that our proposed dynamic maps are more helpful for exploring the interaction information among agents than the occupancy grid.
We can also see that if the self-attention module is removed (MENet) the performance decreases by a remarkable margin across all the datasets.
This phenomenon indicates that the self-attention module is effective in learning the interactions among agents from the dynamic maps.
The comparison between ACVAE and AMENet shows that when we remove the extended structure in the Y-Encoder for the dynamic maps, the performance, especially in terms of FDE, degrades significantly across all the datasets. The extended structure gives the model the ability to process the interaction information even during prediction. It improves the model's performance, especially for predicting more accurate destinations. This improvement has also been confirmed by the benchmark challenge (see Table~\ref{tb:results}). One interesting observation from the comparison of ACVAE with AOENet and MENet is that ACVAE performs much better than both, measured by MAD and FDE. This observation further proves that, even without the extended structure in the Y-Encoder, the dynamic maps with self-attention are very beneficial for interpreting the interactions between a target agent and its neighboring agents. Their robustness has been demonstrated by the ablative models across various datasets.
\begin{table}[t!]
\setlength{\tabcolsep}{3pt}
\centering
\small
\caption{Evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE on the most-likely prediction. Best values are highlighted in bold face.}
\begin{tabular}{lllll}
\\ \hline
Dataset & AMENet & AOENet & MENet & ACVAE \\ \hline
bookstore3 & \textbf{0.486}/\textbf{0.979} & 0.574/1.144 & 0.576/1.139 & 0.509/1.030 \\
coupa3 & \textbf{0.226}/\textbf{0.442} & 0.260/0.509 & 0.294/0.572 & 0.237/0.464 \\
deathCircle0 & \textbf{0.659}/\textbf{1.297} & 0.726/1.437 & 0.725/1.419 & 0.698/1.378 \\
gates1 & \textbf{0.797}/\textbf{1.692} & 0.878/1.819 & 0.941/1.928 & 0.861/1.823 \\
hyang6 & \textbf{0.542}/\textbf{1.094} & 0.619/1.244 & 0.657/1.292 & 0.566/1.140 \\
nexus0 & \textbf{0.559}/\textbf{1.109} & 0.752/1.489 & 0.705/1.140 & 0.595/1.181 \\
Average & \textbf{0.545}/\textbf{1.102} & 0.635/1.274 & 0.650/1.283 & 0.578/1.169 \\ \hline
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
Fig.~\ref{fig:abl_qualitative_results} showcases some examples of the qualitative results of the full AMENet in comparison to the ablative models in different scenes.
In general, all the models are able to predict trajectories in different scenes, e.\,g.,~intersections and roundabouts, of various traffic density and motion patterns, e.\,g.,~standing still or moving fast. Given a short observation of trajectories, i.\,e.,~ 8 time steps, all the models are able to capture the general speed and heading direction for agents located in different areas in the space.
AMENet predicts the most accurate trajectories which are very close or even completely overlap with the corresponding ground truth trajectories.
Compared with the ablative models, AMENet predicts more accurate destinations (the last position of the predicted trajectories), which is in line with the quantitative results shown in Table~\ref{tb:results}.
One very clear example in \textit{hyang6} (Fig.~\ref{hyang_6209}) shows that, when the fast-moving agent changes its motion, AOENet and MENet have limited ability to predict its travel speed and ACVAE has limited ability to predict its destination. On the other hand, the prediction from AMENet is very close to the ground truth.
Nevertheless, our models have limited performance in predicting abnormal trajectories, such as suddenly turning around or changing speed drastically. Such scenarios can be found in the lower right corner in \textit{gates1} (Fig.~\ref{gates_1001}). Sudden maneuvers of agents are very difficult to forecast, even for human observers.
\begin{figure}[t!]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{scenarios/bookstore_3290.pdf}
\caption{\small bookstore3}
\label{bookstore_3290}
\end{subfigure}
\begin{subfigure}{0.54\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -1.47cm, width=1\textwidth]{scenarios/coupa_3327.pdf}
\caption{\small coupa3}
\label{coupa_3327}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -0.47cm, width=1\textwidth]{scenarios/deathCircle_0000.pdf}
\caption{\small deathCircle0}
\label{deathCircle_0000}
\end{subfigure}
\begin{subfigure}{0.28\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{scenarios/gates_1001.pdf}
\caption{\small gates1}
\label{gates_1001}
\end{subfigure}
\begin{subfigure}{0.52\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -1.64cm, width=1\textwidth]{scenarios/hyang_6209.pdf}
\caption{\small hyang6}
\label{hyang_6209}
\end{subfigure}
\begin{subfigure}{0.27\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{scenarios/nexus_0038.pdf}
\caption{\small nexus0}
\label{nexus_0038}
\end{subfigure}
\caption{Trajectories predicted by AMENet (AME), AOENet (AOE), MENet (ME), ACVAE (CVAE) and the corresponding ground truth (GT) in different scenes. Sub-figures are not shown at the same scale, in order to demonstrate the very different sizes and layouts of the spaces.}
\label{fig:abl_qualitative_results}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
In this paper, we present a generative model called Attentive Maps Encoder Network (AMENet) that uses motion information and interaction information for multi-path trajectory prediction of mixed traffic in various real-world environments.
The latent space learnt by the X-Encoder and Y-Encoder for both sources of information enables the model to capture the stochastic properties of motion behaviors for predicting multiple plausible trajectories after a short observation time.
We propose a novel way---dynamic maps---to extract the social effects between agents during interactions. The dynamic maps capture accurate interaction information by encoding the neighboring agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of interaction over different time steps.
The efficacy of the model has been validated on the most challenging benchmark Trajnet that contains various datasets in various real-world environments. Our model not only achieves the state-of-the-art performance, but also wins the first place on the leader board for predicting 12 time-step positions of 4.8 seconds.
Each component of AMENet is validated via a series of ablative studies.
In the future work, we will extend our prediction model for safety prediction, for example, using the predicted trajectories to calculate time-to-collision \cite{perkins1968traffic} and detecting abnormal trajectories by comparing the anticipated/predicted trajectories with the actual ones.
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction of road users is a crucial task in different communities, such as intelligent transportation systems (ITS)~\cite{morris2008survey,cheng2018modeling,cheng2020mcenet},
photogrammetry~\cite{schindler2010automatic,klinger2017probabilistic,cheng2018mixed},
computer vision~\cite{alahi2016social,mohajerin2019multi},
mobile robot applications \cite{mohanan2018survey}.
This task enables an intelligent system to foresee the behaviors of road users and make a reasonable and safe decision for its next operation, especially in urban mixed-traffic zones (a.k.a. shared spaces~\cite{reid2009dft}).
Trajectory prediction is generally defined as to predict the plausible (e.\,g.,~collision free and energy efficient) and socially-acceptable (e.\,g.,~considering social relations, social rules and norms between agents) positions in 2D or 3D of non-erratic target agents at each time step within a predefined future time interval relying on observed partial trajectories over a certain period of time~\cite{helbing1995social,alahi2016social}.
The target agent is defined as the dynamic object for which the actual prediction is made, mainly pedestrian, cyclist, vehicle and other road users~\cite{rudenko2019human}.
A typical prediction process of mixed traffic is exemplified in Fig.~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.8in 2.6in 0.6in, width=1\textwidth]{fig/model_example.pdf}
\caption{Predicting plausible and socially-acceptable positions of agents (e.\,g.,~target agent in black) at each time step within a predefined future time interval by observing their past trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
How to effectively and accurately predict trajectories of mixed agents is still an unsolved problem. The challenges are mainly from three aspects: 1) the complex behavior and uncertain moving intention of each agent, 2) the presence and interactions between the target agent and its neighboring agents and 3) the multi-modality of paths: there are usually more than one socially-acceptable paths that an agent could use in the future.
There exists a large body of literature that focuses on addressing parts or all of the aforementioned challenges in order to make accurate trajectory prediction.
The traditional methods model the interactions based on hand-crafted features, such as force-based rules~\cite{helbing1995social}, Game Theory~\cite{johora2020agent}, or a constant velocity assumption~\cite{best1997new}.
Their performance is crucially affected by the quality of manually designed features and they lack generalizability~\cite{cheng2020trajectory}.
Recently, boosted by the development of deep learning technologies~\cite{lecun2015deep}, data-driven methods keep reporting new state-of-the-art performance on benchmarks~\cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,cheng2020mcenet}.
For instance, Recurrent Neural Networks (RNNs) based models are used to model the interactions between agents and predict the future positions in sequence~\cite{alahi2016social,xue2018ss}.
However, these works design a discriminative model and produce a deterministic outcome for each agent. The models tend to predict the ``average" trajectories because the commonly used objective function minimizes the Euclidean distance between the ground truth and the predicted outputs.
To predict multiple socially-acceptable trajectories for each target agent, different generative models are proposed, such as Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative} based framework Social GAN~\cite{gupta2018social} and Conditional Variational Auto-Encoder (CVAE)~\cite{kingma2013auto,kingma2014semi,sohn2015learning} based framework DESIRE~\cite{lee2017desire}.
In spite of the great success in this domain, most of these methods are designed for a specific agent type: pedestrians.
In reality, pedestrians, cyclists and vehicles are the three major types of agents and their behaviors affect each other. To make precise trajectory prediction, their interactions should be considered jointly. Besides, the interactions between the target agent and the others are equally treated. But different agents may not equally affect the target agent on how to move in the near future. For instance, the closer agents should affect the target agent stronger than the more distant ones, and the target vehicle is affected more by the pedestrians who tend to cross the road than the vehicles which are behind it. Last but not least, the robustness of the models are not fully tested in real world outdoor mixed traffic environments (e.\,g.,~roundabouts, intersections) with various unseen traffic situations. So an important research question is: Can a model trained in some spaces to predict accurate trajectories in other unseen spaces?
To address the aforementioned limitations, we propose this work named \emph{Attentive Maps Encoder Network} (AMENet) leveraging the ability of generative models that generate diverse patterns of future trajectories and modeling the interactions between target agents with the others as attentive dynamic maps.
The dynamic map manipulates the information extracted from the neighboring agents' orientation, speed and position in relation to the target agent at each time step for interaction modeling and the attention mechanism enables the model to automatically focus on the salient features extracted over different time steps.
An overview of our proposed framework is depicted in Fig.~\ref{fig:framework}. It has an encoding-decoding structure.
\emph{Encoding.} Two encoders are designed for learning representations of the observed trajectories (X-Encoder) and the future trajectories (Y-Encoder) respectively and they have a similar structure. Taking the X-Encoder as an example (see Fig.~\ref{fig:encoder}), the encoder first extracts the motion information from the target agent (coordinate offset in sequential time steps) and the interaction information with the other agents respectively. Particularly, to explore the dynamic interactions, the motion information of each agent is characterised by its orientation, speed and position at each time step. Then a self-attention mechanism is utilized over all agents to extract the dynamic interaction maps. This is where the name \emph{Attentive Maps Encoder} comes from. The motion and interaction information along the observed time interval are collected by two independent Long Short-Term Memories (LSTMs) and then fused together. The output of the Y-Encoder is supplement to a variational auto-encoder to learn the latent space of future trajectories distribution, which is assumed as a Gaussian distribution.
\emph{Decoding.} The output of the variational auto-encoder module (it is achieved by re-parameterization of encoded features during training phase and resampling from the learned latent space during the inference phase) is fed forward to the following decoder associated with the output of the X-Encoder as condition to forecast the future trajectory, which works in the same way as a conditional variational auto-encoder (CVAE)~\cite{kingma2013auto,kingma2014semi,sohn2015learning}.
The main contributions are summarized as follows:
\begin{itemize}
\item[1] We propose a generative framework Attentive Maps Encoder Network (AMENet) for multi-path trajectory prediction.
AMENet inserts a generative module that is trained to learn a latent space for encoding the motion and interaction information in both observation and future, and predicts multiple feasible future trajectories conditioned on the observed information.
\item[2] We design a novel module, \emph{attentive maps encoder} that learns spatio-temporal interconnections among agents based on dynamic maps using a self-attention mechanism.
\item[3] Our model is able to predict heterogeneous road users, i.\,e.,~pedestrians, cyclists and vehicles rather than only focusing on pedestrians, in various unseen real-world environments, which makes our work different from most of the previous ones that only predict pedestrian trajectories.
\end{itemize}
The efficacy of the proposed method has been validated on the recent benchmark \emph{Trajnet}~\cite{sadeghiankosaraju2018trajnet} that contains various datasets in various environments for trajectory prediction. Our method reports the new state-of-the-art performance and wins the first place on the leader board.
Each component of the proposed model is validated via a series of ablative studies.
\section{Related Work}
Our work focuses on predicting trajectories of heterogeneous road agents.
In this section we discuss the recent related works mainly in the following aspects: modeling this task as a sequence prediction, modeling the interactions between agents for precise path prediction, modeling with attention mechanisms, and utilizing generative models to predict multiple plausible trajectories. Our work focuses on modeling the dynamic interactions between agents and training a generative model to predict multiple plausible trajectories for each target agent.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling the trajectory prediction as a sequence prediction task is the most popular approach. The 2D/3D position of a target agent is predicted step by step along the time axis.
The widely applied models include linear regression and Kalman filter~\cite{harvey1990forecasting}, Gaussian processes~\cite{tay2008modelling} and Markov decision processing~\cite{kitani2012activity}.
However, these traditional methods largely rely on the quality of manually designed features and are unable to tackle large-scale data.
Recently, data-driven deep learning technologies, especially RNN-based models and the variants, e.\,g.,~Long Short-Term Memories (LSTMs)~\cite{hochreiter1997long}, have demonstrated the powerful ability in extracting representations from massive data automatically and are used to learn the complex patterns of trajectories.
In recent years, RNN-based models keep pushing the edge of accuracy of predicting pedestrian trajectory~\cite{alahi2016social,gupta2018social,sadeghian2018sophie,zhang2019sr}, as well as other types of road users~\cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
In this work, we also utilize LSTMs to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of an agent is not only decided by its own will but also crucially affected by the interactions between it and the other agents. Therefore, effectively modeling the social interactions among agents is important for accurate trajectory prediction.
One of the most influential approaches for modeling interactions is the Social Force Model~\cite{helbing1995social}, which uses the repulsive force for collision avoidance and the attractive force for social connections.
Game Theory is utilized to simulate the negotiation between different road users~\cite{johora2020agent}.
Such rule-based interaction modelings have been incorporated into deep learning models. Social LSTM proposes an occupancy grid to locate the positions of close neighboring agents and uses a social pooling layer to encode the interaction information for trajectory prediction~\cite{alahi2016social}. Many works design their specific ``occupancy'' grid for interaction modeling~\cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interaction between individual agent and group agents with social connections and report better performance.
Meanwhile, different pooling mechanisms are proposed for interaction modeling. For example, Social GAN~\cite{gupta2018social} embeds relative positions between the target and all the other agents with each agent's motion hidden state and uses an element-wise pooling to extract the interaction between all the pairs of agents, not only the close neighboring agents;
Similarly, all the agents are considered in SR-LSTM~\cite{zhang2019sr}. It proposes a states refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework. The motion gate and agent-wise attention are used to select the most important information from neighboring agents.
Most of the aforementioned models extract interaction information based on the relative position of the neighboring agents in relation to the target agent.
The dynamics of interactions are not fully captured both in spatial and temporal domains.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Recently, different attention mechanisms~\cite{bahdanau2014neural,xu2015show,vaswani2017attention} are incorporated in neural networks for learning complex spatio-temporal interconnections.
Particularly, their effectiveness have been proven in learning powerful representations from sequence information in, e.\,g.,~neural machine translation~\cite{bahdanau2014neural,vaswani2017attention} and image caption generation~\cite{xu2015show,anderson2018bottom,he2020image}.
Some of the recent state-of-the-art methods also have adapted attention mechanisms for sequence modeling and interaction modeling to predict trajectories.
For example, a soft attention mechanism~\cite{xu2015show} is incorporated in LSTMs to learn the spatio-temporal patterns from the position coordinates~\cite{varshneya2017human}. Similarly, SoPhie~\cite{sadeghian2018sophie} applies two separate soft attention modules, one called physical attention for learning the salient features between agent and scene and the other called social attention for modeling agent to agent interactions. In the MAP model~\cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work Ind-TF~\cite{giuliari2020transformer} replaces RNN with Transformer~\cite{varshneya2017human} for modeling trajectory sequences.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism~\cite{varshneya2017human} along the time axis.
The self-attention mechanism is defined as mapping a query and a set of key-value pairs to an output. First, the similarity between the query and each key is computed to obtain a weight. The weights associated with all the keys are then normalized via, e.\,g.,~a softmax function and are applied to weigh the corresponding values for obtaining the final attention.
Unlikely RNN based structures that propagate the information along the symbol positions of the input and output sequence, which leads to increasing difficulties for information propagation in long sequences,
the self-attention mechanism relates different positions of a single sequence in order to compute a representation of the entire sequence~\cite{varshneya2017human}. The dependency between the input and output is not restricted to their distance of positions.
\subsection{Generative Models}
\label{sec:rel-generative}
Up to date, VAE~\cite{kingma2013auto} and GAN~\cite{goodfellow2014generative} and their variants (e.\,g.,~Conditional VAE~\cite{kingma2014semi,sohn2015learning}) are the most popular generative models in the era of deep learning.
They are both able to generate diverse outputs by sampling from noise. The essential difference is that, GAN trains a generator to generate a sample from noise and a discriminator to decide whether the generated sample is real enough. The generator and discriminator enhance mutually during training.
In contrast, VAE is trained by maximizing the lower bound of training data likelihood for learning a latent space that approximates the distribution of the training data.
Generative models have shown promising performance in different tasks, e.\,g.,~super resolution, image to image translation, image generation, as well as trajectory prediction~\cite{lee2017desire,gupta2018social,cheng2020mcenet}.
Predicting one single trajectory may not sufficient due to the uncertainties of road users' behavior.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performance of the two modules are enhanced mutually and the generator is able to generate trajectories that are as precise as the real ones. Similarly, Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
Lee~et al.~\cite{lee2017desire} propose a CVAE model to predict multiple plausible trajectories.
Cheng~et al.~\cite{cheng2020mcenet} propose a CVAE like model named MCENet to predict multiple plausible trajectories conditioned on the scene context and previous information of trajectories.
In this work, we incorporate a CVAE module to learn a latent space of possible future paths for predicting multiple plausible future trajectories conditioned on the observed past trajectories.
Our work essentially distinguishes from the above generative models in the following two points: (1) We insert not only ground truth trajectory, but also the dynamic maps associated with the ground truth trajectory into the CVAE module during training, which is different from the conventional CVAE that follows a consistent input and output structure (e.\,g.,~the input and output are both trajectories in the same structure \cite{lee2017desire}.)
(2) Our method does not explore information from images, i.\,e.,~visual information is not used and future trajectories are predicted only based on the map data (i.\,e.,~position coordinate).
The visual information, such as vegetation, curbside and buildings, are very different from one space to another. Our model is trained on some available spaces but is validated on other unseen spaces. The over-trained visual features, on the other hand, may jeopardise the model's robustness and lead to a bad performance in an unseen space of totally different environment~\cite{cheng2020mcenet}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 3.5in 0in 0.5in, width=1\textwidth]{fig/model_framework.pdf}
\caption{An overview of the proposed framework. It consists of four modules: the X-Encoder and Y-Encoder are used for encoding the observed and the future trajectories, respectively. They have a similar structure. The Sample Generator produces diverse samples of future generations. The Decoder module is used to decode the features from the produced samples in the last step and predicts the future trajectory sequentially. The specific structure of the X-Encoder/Y-Encoder is given by Fig.~\ref{fig:encoder}.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet in details (Fig.~\ref{fig:framework}) in the following structure: a brief review on \emph{CVAE} (Sec.~\ref{subsec:cvae}), \emph{Problem Definition} (Sec.~\ref{subsec:definition}), \emph{Motion Input} (Sec.~\ref{subsec:input}), \emph{Dynamic Maps} (Sec.~\ref{subsec:dynamic}), \emph{Diverse Sampling} (Sec.~\ref{subsec:sample}) and \emph{Trajectory Ranking} (Sec.~\ref{subsec:ranking}).
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
In tasks like trajectory prediction, we are interested in modeling a conditional distribution $P(Y_n|X)$, where $X$ is the previous trajectory information and $Y_n$ is one of the possible future trajectories.
In order to realise this goal that generates controllable diverse samples of future trajectories based on past trajectories, a deep generative model, a conditional variational auto-encoder (CVAE), is adopted inside our framework.
CVAE is an extension of the generative model VAE \cite{kingma2013auto} by introducing a condition to control the output \cite{kingma2014semi}.
Concretely, it is able to learn the stochastic latent variable $z$ that characterizes the distribution $P(Y_i|X_i)$ of $Y_i$ conditioned on the input $X_i$, where $i$ is the index of sample.
The objective function of training CVAE is formally defined as:
\begin{equation}
\label{eq:CVAE}
\log{P(Y_i|X_i)} \geq - D_{KL}(Q(z_i|Y_i, X_i)||P(z_i)) + \E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}],
\end{equation}
\ch{Explain Eq.~\ref{eq:CVAE} in detail in appendix}
where $Y_i$ and $X_i$ stand for the future and past trajectories in our task, respectively. $z_i$ is the learned latent variable of $Y_i$. The objective is to maximize the conditional probability $\log{P(Y_i|X_i)}$, which is equivalent to minimize $\ell (\hat{Y_i}, Y_i)$ and minimize the Kullback-Leibler divergence $D_{KL}(\cdot)$ in parallel.
In order to enable the back propagation for stochastic gradient descent in $\E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}]$, a re-parameterization trick \cite{rezende2014stochastic} is applied: $z_i = \mu_i + \sigma_i \odot \epsilon_i$. Here, $z_i$ is assumed to have a Gaussian distribution $z_i\sim Q(z_i|Y_i, X_i)=\mathcal{N}(\mu_i, \sigma_i)$. $\epsilon_i$ is sampled from noise that follows a normal Gaussian distribution, and the mean $\mu_i$ and the standard deviation $\sigma_i$ over $z_i$ are produced by two side-by-side \textit{fc} layers, respectively (as shown in Fig.~\ref{fig:framework}). In this way, the derivation problem of the sampling process $Q(z_i|Y_i, X_i)$ is turned into deriving the sample results $z_i$ w.\,r.\,t.~$\mu_i $ and $\sigma_i$. Then, the back propagation for stochastic gradient descent can be utilized to optimize the networks, which produce $\mu_i $ and $\sigma_i$.
\subsection{Problem Definition}
\label{subsec:definition}
The multi-path trajectory prediction problem is defined as follows: agent $i$, receives as input its observed trajectories $\mathbf{X}_i=\{X_i^1,\cdots,X_i^T\}$ and predict its $n$-th plausible future trajectory $\hat{\mathbf{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,n}^{T'}\}$. $T$ and $T'$ denote the sequence length of the past and being predicted trajectory, respectively. The trajectory position of $i$ at time step $t$ is characterized by the coordinate as $X_i^t=({x_i}^t, {y_i}^t)$ (3D coordinates are also possible, but in this work only 2D coordinates are considered) and so as $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^t, \hat{y}_{i,n}^t)$.
The objective is to predict its multiple plausible future trajectories $\hat{\mathbf{Y}}_i = \hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}$ that are as accurate as possible to the ground truth $\mathbf{Y}_i$. This task is formally defined as $\hat{\mathbf{Y}}_i^n = f(\mathbf{X}_i, \text{Map}_i), ~n \in N$. Here, the total number of predicted trajectories is denoted as $N$ and $\text{Map}_i$ denotes the dynamic maps centralized on the target agent for mapping the interactions with its neighboring agents over the time steps. More details of the dynamic maps will be given in Sec.~\ref{subsec:dynamic}.
\subsection{Motion Input}
\label{subsec:input}
The motion information for each agent is captured by the position coordinates at each time step.
Specifically, we use the offsets $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of the trajectory positions between two consecutive time steps as the motion information instead of the coordinates in a Cartesian space, as widely done in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}. Compared to absolute coordinates, the offsets are independent of the given space and less prone to overfitting a model to a particular space or travel direction.
The offset can be interpreted as speed over time steps that are defined with a constant duration.
As long as the original position is known, the absolute coordinates at each position can be calculated by cumulatively summing the sequence offsets.
As a data augmentation technique, we randomly rotate the trajectories to prevent the model from only learning certain travel directions. In order to maintain the relative positions and angles between agents, the trajectories of all the agents coexisting in a given period are rotated by the same angle.
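The offset extraction, the recovery of absolute coordinates by cumulative summation, and the shared random rotation can be sketched as follows (in NumPy; function names and array shapes are illustrative):
\begin{verbatim}
import numpy as np

def to_offsets(traj):
    # traj: (T, 2) array of xy positions -> (T-1, 2) step offsets.
    return np.diff(traj, axis=0)

def to_positions(offsets, origin):
    # Recover absolute coordinates given the first position.
    return np.vstack([origin, origin + np.cumsum(offsets, axis=0)])

def rotate_scene(trajs, rng):
    # Rotate all coexisting trajectories by the same random angle,
    # preserving the relative positions and angles between agents.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return [traj @ R.T for traj in trajs]
\end{verbatim}
As long as the first position is kept, \texttt{to\_positions} exactly inverts \texttt{to\_offsets}.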
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 2.2in 3.6in 0.5in, width=1\textwidth]{fig/model_encoder.pdf}
\caption{Structure of the X-Encoder. The encoder has two branches: the upper one extracts the motion information of the target agent (i.\,e.,~movement in $x$- and $y$-axis in a Cartesian space), and the lower one learns the interaction information among the neighboring road users from dynamic maps over time. Each dynamic map consists of three layers that represent orientation, travel speed and relative position, respectively, centralized on the target road user. The motion information and the interaction information are each encoded by their own LSTM sequentially. The last outputs of the two LSTMs are concatenated and forwarded to a \textit{fc} layer to obtain the final output of the X-Encoder. The Y-Encoder has the same structure as the X-Encoder, but it extracts features from the future trajectories and is only used in the training phase.}
\label{fig:encoder}
\end{figure}
\subsection{Dynamic Maps}
\label{subsec:dynamic}
Different from the recent works of parsing the interactions between the target and neighboring agents using an occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, we propose a novel and straightforward method---attentive dynamic maps---to learn interaction information among agents.
As demonstrated in Fig.~\ref{fig:encoder}, a dynamic map at a given time step consists of three layers that interpret the information of \emph{orientation}, \emph{speed} and \emph{position}, respectively, derived from the trajectories of the involved agents. Each layer is centralized on the target agent's position and divided into uniform grid cells. The layers are divided into grids for two reasons: (1) representing the information at grid level is computationally more efficient than at pixel level; (2) the size and moving speed of an agent are not fixed, and an agent occupies a local region of pixels of arbitrary form, so the spatio-temporal information differs from pixel to pixel even within the same agent. Therefore, we represent the spatio-temporal information as an average value within a grid. The value of each grid cell in the different layers is calculated as follows:
the neighboring agents are assigned to the corresponding grid cells according to their relative position to the target agent, as well as their relative offset (speed) to the target agent at each time step in the $x$- and $y$-axis directions.
Eq.~\eqref{eq:map} denotes the mapping mechanism for target agent $i$, considering the orientation $O$, speed $S$ and position $P$ of all the neighboring agents $j \in \mathcal{N}(i)$ that coexist with the target agent $i$ at time step $t$:
\begin{equation}
\label{eq:map}
\text{Map}_i^t = \left\{\left(O_j^t, ~S_j^t, ~P_j^t\right) \,\middle|\, \left(x_j^t-x_i^t, ~y_j^t-y_i^t, ~\Delta{x}_j^t-\Delta{x}_i^t, ~\Delta{y}_j^t-\Delta{y}_i^t\right), ~j \in \mathcal{N}(i)\right\},
\end{equation}
i.\,e.,~each neighboring agent $j$ contributes its orientation, speed and position values to the grid cell indexed by its relative position and relative offset (speed) with respect to the target agent.
The \emph{orientation} layer $O$ represents the heading directions of the neighboring agents. The orientation values are in \emph{degrees} $[0, 360]$ and then mapped to $[0, 1]$. The value of each grid cell is the mean orientation of all the agents within the cell.
The \emph{speed} layer $S$ represents the neighboring agents' travel speed. Locally, the speed in each grid cell is the mean speed of all the agents within the cell. Globally, across all the cells, the speed values are normalized into $[0, 1]$ by the Min-Max normalization scheme.
The \emph{position} layer $P$ stores the positions of all the neighboring agents in the grid cells as calculated by Eq.~\eqref{eq:map}. The value of each cell is the number of neighboring agents within the cell, normalized by the total number of neighboring agents at that time step, which can be interpreted as the cell's occupancy density.
Each time step has its own dynamic map, and therefore the spatio-temporal interaction information among agents is captured dynamically over time.
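For illustration, the rasterization of a dynamic map at one time step can be sketched as follows (in NumPy; the cell size of \SI{1}{meter} and radius of \SI{16}{meters} follow the setting described below, while all function and variable names are our own illustration):
\begin{verbatim}
import numpy as np

def dynamic_map(target, neighbors, cell=1.0, radius=16.0):
    # target: dict with 'pos' (2,) and 'vel' (2,); neighbors: list of
    # such dicts. Returns a (3, G, G) orientation/speed/position map.
    G = int(2 * radius / cell)
    omap = np.zeros((G, G)); smap = np.zeros((G, G))
    pmap = np.zeros((G, G)); cnt = np.zeros((G, G))
    for nb in neighbors:
        rel = nb['pos'] - target['pos']            # relative position
        if np.any(np.abs(rel) >= radius):
            continue                               # outside region of interest
        gx, gy = ((rel + radius) // cell).astype(int)
        speed = np.linalg.norm(nb['vel'])
        heading = np.degrees(np.arctan2(nb['vel'][1], nb['vel'][0])) % 360.0
        omap[gx, gy] += heading / 360.0            # orientation mapped to [0, 1]
        smap[gx, gy] += speed
        pmap[gx, gy] += 1.0
        cnt[gx, gy] += 1.0
    occupied = cnt > 0
    omap[occupied] /= cnt[occupied]                # mean orientation per cell
    smap[occupied] /= cnt[occupied]                # mean speed per cell
    if smap.max() > smap.min():                    # global Min-Max normalization
        smap = (smap - smap.min()) / (smap.max() - smap.min())
    if len(neighbors) > 0:
        pmap /= len(neighbors)                     # occupancy density
    return np.stack([omap, smap, pmap])
\end{verbatim}
Stacking such maps over the observed time steps yields the input of the lower branch of the X-Encoder (Sec.~\ref{subsubsec:AMENet}).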
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.7\textwidth]{fig/dynamic_maps_nexus_0.pdf}
\caption{The maps information with accumulated time steps for the dataset \textit{nexus-0}.}
\label{fig:dynamic_maps}
\end{figure}
To more intuitively show the dynamic maps information, we gather all the agents over all the time steps and visualize them in Fig.~\ref{fig:dynamic_maps} as an example showcased by the dataset \textit{nexus-0} (see more information on the benchmark datasets in Sec~\ref{subsec:benchmark}).
Each grid cell is \SI{1}{meter} in both width and height, and the region of interest extends up to \SI{16}{meters} in each direction, centralized on the target agent, in order to include not only close but also distant neighboring agents.
The visualization demonstrates certain motion patterns of the agents, including the distribution of orientation, speed and position over the grids of the maps. For example, all the agents move in a certain direction with similar speed on a particular area of the maps, and some areas are much denser than the others.
\subsubsection{Attentive Maps Encoder}
\label{subsubsec:AMENet}
As discussed above, each time step has a dynamic map which summarizes the orientation, speed and position information of all the neighboring agents. To capture the spatio-temporal interconnections from the dynamic maps for the following modules, we propose the \emph{Attentive Maps Encoder} module.
The X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and dynamic maps information for interaction (lower branch).
The upper branch takes as input the motion information, i.\,e.,~the offsets $\{({\Delta{x}_i}^t, {\Delta{y}_i}^t)\}_{t=1}^{T}$ of the target agent over the observed time steps. The motion information is first passed to a 1D convolutional layer (Conv1D) with one-step stride along the time axis to learn motion features one time step after another. Then it is passed to a fully connected (\textit{fc}) layer. The output of the \textit{fc} layer is passed to an LSTM module for encoding the temporal features along the trajectory sequence of the target agent into a hidden state, which contains all the motion information.
The lower branch takes the dynamic maps $\{\text{Map}_i^t\}_{t=1}^{T}$ as input.
The interaction information at each time step is passed through a 2D convolutional layer (Conv2D) with ReLU activation and a Max Pooling layer (MaxPool) to learn the spatial features among all the agents. The output of MaxPool at each time step is flattened and concatenated along the time axis to form a time-distributed feature vector. Then, the feature vector is fed forward to a self-attention module to learn the interaction information with an attention mechanism. Here, we adopt the multi-head attention method from Transformer, which linearly projects multiple self-attention operations in parallel and concatenates their outputs~\cite{vaswani2017attention}.
The attention function is described as mapping a query and a set of key-value pairs to an output. The query ($Q$), keys ($K$) and values ($V$) are transformed from the spatial features, which are encoded in the above step, by linear transformations:
\begin{align*}
Q =& \pi(\text{Map})W_Q, ~W_Q \in \mathbb{R}^{D\times D_q},\\
K =& \pi(\text{Map})W_K, ~W_K \in \mathbb{R}^{D\times D_k},\\
V =& \pi(\text{Map})W_V, ~W_V \in \mathbb{R}^{D\times D_v},
\end{align*}
where $W_Q, W_K$ and $W_V$ are the trainable parameters and $\pi(\cdot)$ indicates the encoding function of the dynamic maps. $D_q, D_k$ and $D_v$ are the dimension of the vector of query, key, and value (they are the same in the implementation).
Then the self-attended features are calculated as:
\begin{equation}
\label{eq:attention}
\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{D_k}}\right)V,
\end{equation}
where the scaling factor $\sqrt{D_k}$ counteracts the growth of the dot products' magnitude with the key dimension $D_k$, which would otherwise push the softmax into regions of extremely small gradients~\cite{vaswani2017attention}.
This operation is also called \emph{scaled dot-product attention}~\cite{vaswani2017attention}.
To improve the performance of the attention layer, \emph{multi-head attention} is applied:
\begin{align}
\label{eq:multihead}
\begin{split}
\text{MultiHead}(Q, K, V) &= \text{ConCat}(\text{head}_1,...,\text{head}_h)W_O, \\
\text{head}_i &= \text{Attention}(QW_{Qi}, KW_{Ki}, VW_{Vi}),
\end{split}
\end{align}
where $W_{Qi}\in \mathbb{R}^{D\times D_{qi}}$ indicates the linear transformation parameters for the query in the $i$-th self-attention head and $D_{qi} = \frac{D_{q}}{\#head}$; the same holds for $W_{Ki}$ and $W_{Vi}$. Note that $\#head$ is the total number of attention heads and it must divide $D_{q}$ evenly. The outputs of all heads are concatenated and passed through a linear transformation with parameters $W_O$.
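For illustration, scaled dot-product attention with multiple heads can be sketched as follows (in NumPy; slicing one joint projection into heads is mathematically equivalent to per-head projection matrices, and all names and dimensions are illustrative):
\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, W_q, W_k, W_v, W_o, n_heads):
    # x: (T, D) time-distributed map features; W_q/W_k/W_v/W_o: (D, D).
    D = x.shape[1]
    d = D // n_heads                             # per-head dimension D_q / #head
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    heads = []
    for i in range(n_heads):
        q = Q[:, i*d:(i+1)*d]
        k = K[:, i*d:(i+1)*d]
        v = V[:, i*d:(i+1)*d]
        att = softmax(q @ k.T / np.sqrt(d))      # scaled dot-product attention
        heads.append(att @ v)
    return np.concatenate(heads, axis=-1) @ W_o  # concatenate and project
\end{verbatim}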
The output of the multi-head attention is passed to an LSTM which is used to encode the dynamic interconnection in time sequence.
Both the hidden states (the last output) from the motion LSTM and the interaction LSTM are concatenated and passed to a \textit{fc} layer for feature fusion, as the complete output of the X-Encoder, which is denoted as $\Phi_X$.
The Y-Encoder has the same structure as the X-Encoder and is used to encode both the target agent's motion and interaction information from the ground truth during training. The output of the Y-Encoder is denoted as $\Phi_Y$. The dynamic maps are also leveraged in the Y-Encoder; however, they are not reconstructed by the Decoder (only the future trajectories are reconstructed). This extended structure distinguishes our model from the conventional CVAE structure~\cite{kingma2013auto,kingma2014semi,sohn2015learning} and from the work in~\cite{lee2017desire}, in which input and output maintain the same form.
\subsection{Diverse Sample Generation}
\label{subsec:sample}
In the training phase, $\Phi_X$ and $\Phi_Y$ are concatenated and forwarded to two successive \textit{fc} layers followed by ReLU activation, and then passed to two parallel \textit{fc} layers that produce the mean and standard deviation of the distribution, which are used to re-parameterize $z$ as discussed in Sec.~\ref{subsec:cvae}.
Then, $z$ is concatenated with $\Phi_X$, which serves as the condition, and fed to the following LSTM-based decoder to reconstruct $\mathbf{Y}$ sequentially.
The MSE loss ${\ell}_2 (\mathbf{\hat{Y}}, \mathbf{Y})$ (reconstruction loss) and the $\text{KL}(Q(z|\mathbf{Y}, \mathbf{X})||P(z))$ loss are used to train our model.
The MSE loss forces the reconstructed results to be as close as possible to the ground truth, and the KL-divergence loss forces the set of latent variables $z$ towards a Gaussian distribution.
During inference at test time, the Y-Encoder is removed and the X-Encoder works in the same way as in the training phase to extract information from observed trajectories. To generate a future prediction sample, a latent variable $z$ is sampled from $\mathcal{N}(\mathbf{0}, ~I)$ and concatenated with $\Phi_X$ (as condition) as the input of the decoder.
To generate diverse samples, this step is repeated $N$ times to generate $N$ samples of future prediction conditioned on $\Phi_X$.
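As an illustration, this sampling loop can be sketched as follows (in PyTorch; \texttt{decoder} and \texttt{phi\_x} are placeholders for the trained decoder and the X-Encoder output rather than the exact interfaces of our implementation):
\begin{verbatim}
import torch

def predict_multi_path(decoder, phi_x, n_samples=10, z_dim=2):
    # phi_x: (B, D) condition from the X-Encoder; returns n_samples
    # predicted trajectories, each of shape (B, T', 2).
    predictions = []
    with torch.no_grad():
        for _ in range(n_samples):
            z = torch.randn(phi_x.size(0), z_dim)           # z ~ N(0, I)
            y_hat = decoder(torch.cat([phi_x, z], dim=-1))  # conditioned on phi_x
            predictions.append(y_hat)
    return predictions
\end{verbatim}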
To summarize, the overall pipeline of Attentive Maps Encoder Network (AMENet) consists of four modules, namely, X-Encoder, Y-Encoder, Z-Space and Decoder.
Each of the modules uses different types of neural networks to process the motion information and dynamic maps information for multi-path trajectory prediction. Fig~\ref{fig:framework} depicts the pipeline of the framework.
\subsection{Trajectories Ranking}
\label{subsec:ranking}
A bivariate Gaussian distribution is used to rank the multiple predicted trajectories $\hat{Y}^1,\cdots,\hat{Y}^N$ for each agent. At each time step $t'$, the predicted positions $({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})$ of agent $i$ over all $n \in \{1,\cdots,N\}$ are used to fit a bivariate Gaussian distribution $\mathcal{N}({\mu}_{xy},\,\sigma^{2}_{xy}, \,\rho)^{t'}$. The predicted trajectories are then sorted by their joint probability density $p(\cdot)$ accumulated over the time axis using Eqs.~\eqref{eq:pdf} and~\eqref{eq:sort}, where $\widehat{Y}^\ast$ denotes the most-likely prediction out of the $N$ predictions.
\begin{align}
\label{eq:pdf}
P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'}) \approx p[({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})|\mathcal{N}({\mu}_{xy},\sigma^{2}_{xy},\rho)^{t'}]\\
\label{eq:sort}
\widehat{Y}^\ast = \underset{n}{\text{arg\,max}}\sum_{t'=1}^{T'}{\log}P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})
\end{align}
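One straightforward way to instantiate Eqs.~\eqref{eq:pdf} and~\eqref{eq:sort} is sketched below (in Python with SciPy; fitting the mean and covariance empirically per time step is an illustrative choice):
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def rank_predictions(preds):
    # preds: (N, T', 2) array of N predicted trajectories for one agent.
    N, T_pred, _ = preds.shape
    log_p = np.zeros(N)
    for t in range(T_pred):
        pts = preds[:, t, :]                    # all N positions at step t
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(2)  # regularized covariance
        log_p += multivariate_normal.logpdf(pts, mu, cov)
    return preds[np.argmax(log_p)]              # most-likely prediction
\end{verbatim}
Fitting the Gaussian per time step across the $N$ samples favors predictions that stay close to the consensus of all generated samples.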
\section{Experiments}
\label{sec:experiments}
In this section, we will introduce the benchmark which is used to evaluate our method, the evaluation metrics and the comparison of results from our method with the ones from the recent state-of-the-art methods. To further justify how each proposed module in our framework impacts the overall performance, we design a series of ablation studies and discuss the results in detail.
\subsection{Trajnet Benchmark Challenge Datasets}
\label{subsec:benchmark}
We verify the performance of the proposed method on the most challenging benchmark Trajnet~\cite{sadeghiankosaraju2018trajnet}. It is the most popular large-scale trajectory-based activity benchmark in this domain and provides a uniform evaluation system for fair comparison among different submitted methods.
Trajnet covers a wide range of datasets and includes various types of road users (pedestrians, bikers, skateboarders, cars, buses, and golf cars) that navigate in a real world outdoor mixed traffic environment.
The data were collected from 38 scenes with ground truth for training, while the data from another 20 scenes are provided without ground truth for testing (i.\,e.,~the open challenge competition). The most popular pedestrian scenes ETH~\cite{pellegrini2009you} and UCY~\cite{lerner2007crowds} are also included in the benchmark. Each scene presents a different traffic density in a different space layout, which makes the prediction task challenging.
It requires a model to generalize, in order to adapt to the various complex scenes.
Trajectories are recorded as the $xy$ coordinates in meters or pixels projected on a Cartesian space. Each trajectory provides 8 steps for observation and the following 12 steps for prediction. The duration between two successive steps is 0.4 seconds.
However, the pixel coordinates are not in the same scale across the whole benchmark. Without unifying the pixel coordinates into the same scale, it is extremely difficult to train a general model for the whole set of datasets. Hence, we follow all the previous works~\cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,gupta2018social,giuliari2020transformer} that use the coordinates in meters.
In order to train and evaluate the proposed method, as well as the ablative studies, 6 different scenes are selected as offline test set from the 38 scenes in the training set.
Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}. These scenes differ in space layout (e.\,g.,~parking lot, corridor, roundabout and intersection) and traffic density (see Table~\ref{tb:multipath}).
The best-trained model is selected based on its evaluation performance on the offline test set and is then used for the online evaluation.
Fig.~\ref{fig:trajectories} shows the visualization of the trajectories in each scene.
\begin{figure}[bpht!]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_bookstore_3.pdf}
\label{trajectories_bookstore_3}
\caption{\small bookstore3}
\end{subfigure}
\begin{subfigure}{0.54\textwidth}
\includegraphics[trim=0cm 2cm 0cm -1.47cm, width=1\textwidth]{fig/trajectories_coupa_3.pdf}
\label{trajectories_coupa_3}
\caption{\small coupa3}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
\includegraphics[trim=0cm 2cm 0cm -0.47cm, width=1\textwidth]{fig/trajectories_deathCircle_0.pdf}
\label{trajectories_deathCircle_0}
\caption{\small deathCircle0}
\end{subfigure}
\begin{subfigure}{0.28\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_gates_1.pdf}
\label{trajectories_gates_1}
\caption{\small gates1}
\end{subfigure}
\begin{subfigure}{0.52\textwidth}
\includegraphics[trim=0cm 2cm 0cm -1.64cm, width=1\textwidth]{fig/trajectories_hyang_6.pdf}
\label{trajectories_hyang_6}
\caption{\small hyang6}
\end{subfigure}
\begin{subfigure}{0.27\textwidth}
\includegraphics[trim=0cm 2cm 0cm 0cm, width=1\textwidth]{fig/trajectories_nexus_0.pdf}
\label{trajectories_nexus_0}
\caption{\small nexus0}
\end{subfigure}
\caption{Visualization of each scene of the offline test set. Sub-figures are not aligned in the same size, in order to demonstrate the very different space size and layout.}
\label{fig:trajectories}
\end{figure}
\subsection{Evaluation Metrics}
The mean average displacement error (ADE) and the final displacement error (FDE) are the two most commonly applied metrics to measure the performance in terms of trajectory prediction~\cite{alahi2016social,gupta2018social,sadeghian2018sophie}.
\begin{itemize}
\item ADE is the aligned L2 distance from the ground truth $Y$ to its prediction $\hat{Y}$, averaged over all time steps. We report the mean value over all trajectories.
\item FDE is the L2 distance of the last position from $Y$ to the corresponding $\hat{Y}$. It measures a model's ability to predict the destination and is more challenging, as errors accumulate over time. A minimal sketch of both metrics follows this list.
\end{itemize}
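Both metrics can be sketched as follows (in NumPy; array shapes are illustrative assumptions):
\begin{verbatim}
import numpy as np

def ade(y_true, y_pred):
    # (..., T', 2) arrays; mean L2 distance over all time steps.
    return np.linalg.norm(y_true - y_pred, axis=-1).mean()

def fde(y_true, y_pred):
    # L2 distance at the last position, averaged over trajectories.
    return np.linalg.norm(y_true[..., -1, :] - y_pred[..., -1, :],
                          axis=-1).mean()
\end{verbatim}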
We evaluate both the most-likely prediction and the best prediction $@top10$ for the multi-path trajectory prediction.
Most-likely prediction is selected by the trajectories ranking mechanism, see Sec~\ref{subsec:ranking}.
$@top10$ prediction denotes the best one out of the ten predicted trajectories with the highest confidence, i.\,e.,~the one with the smallest ADE and FDE compared with the ground truth. When the ground truth is not available (for the online test), only the most-likely prediction is selected. The task then reduces to the single-trajectory prediction problem, as in most of the previous works~\cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,giuliari2020transformer}.
\subsection{Quantitative Results and Comparison}
\label{subsec:results}
We compare the performance of our method with the most influential previous works and the recent state-of-the-art works published on the Trajnet challenge (up to 14/06/2020) to ensure a fair comparison.
The compared works include the following models.
\begin{itemize}
\item\emph{Social Force}~\cite{helbing1995social} is a rule-based model with the repulsive force for collision avoidance and the attractive force for social connections;
\item\emph{Social LSTM}~\cite{alahi2016social} proposes a social pooling with a rectangular occupancy grid for close neighboring agents which is widely adopted thereafter in this domain~\cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent};
\item\emph{SR-LSTM}~\cite{zhang2019sr} uses a states refinement module to extract social effects between the target agent and its neighboring agents;
\item\emph{RED}~\cite{becker2018evaluation} uses an RNN-based encoder with a Multilayer Perceptron (MLP) for trajectory prediction;
\item\emph{MX-LSTM}~\cite{hasan2018mx} exploits the head pose information of agents to help analyze their moving intention;
\item\emph{Social GAN}~\cite{gupta2018social} utilizes the generative model GAN for multi-path trajectory prediction and is one of the closest works to ours (the other one is \emph{DESIRE}~\cite{lee2017desire}; however, neither online test results nor code were reported for it, hence for fairness we do not compare with \emph{DESIRE});
\item\emph{Ind-TF}~\cite{giuliari2020transformer} utilizes only the Transformer network~\cite{vaswani2017attention} for sequence modeling, without considering social interactions between agents.
\end{itemize}
Table~\ref{tb:results} lists the performances from above methods and ours on the Trajnet test set measured by ADE, FDE and overall average $(\text{ADE} + \text{FDE})/2$. The data are originally reported on the Trajnet challenge leader board\footnote{http://trajnet.stanford.edu/result.php?cid=1}. We can see that, our method (AMENet) outperforms the other methods significantly and wins the first place on all metrics.
Even compared with the most recent model Ind-TF~\cite{giuliari2020transformer} (under review), our method performs better. In particular, our method improves the FDE performance, reducing the error from 1.197 to 1.183 meters.
Note that, our model predicts multiple trajectories conditioned on the observed trajectories with the stochastic variable sampled from a Gaussian distribution repeatedly (see Sec.~\ref{subsec:sample}). We select the most-likely prediction using the proposed ranking method as discussed in Sec. \ref{subsec:ranking}. The outstanding performances from our method also demonstrate that our ranking method is effective.
\begin{table}[t!]
\centering
\caption{Comparison between our method and the state-of-the-art works. Lower values are better. Best values are highlighted in boldface.}
\begin{tabular}{llll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & ADE [m]$\downarrow$ \\
\hline
Social LSTM~\cite{alahi2016social} & 1.3865 & 3.098 & 0.675 \\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 \\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 \\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 \\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 \\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 \\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} \\
Ours (AMENet)$^{*}$ & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} \\
\hline
\end{tabular}
\begin{tabular}{@{}c@{}}
\multicolumn{1}{p{\textwidth}}{ $^{*}$ Our method is named \textit{ikg\_tnt} on the leaderboard.}
\end{tabular}
\label{tb:results}
\end{table}
\subsection{Results for Multi-Path Prediction}
\label{subsec:multipath-selection}
Multi-path trajectory prediction is one of the main contributions of this work and essentially distinguishes it from most of the existing works.
Here, we discuss its performance w.\,r.\,t.~multi-path prediction with the latent space.
Instead of generating a single prediction, AMENet generates multiple feasible trajectories by sampling the latent variable $z$ multiple times (see Sec~\ref{subsec:cvae}).
During training, the motion information and interaction information from observation and ground truth are encoded into the so-called Z-Space (see Fig.~\ref{fig:framework}). The KL-divergence loss forces $z$ to be a normal Gaussian distribution.
Fig.~\ref{fig:z_space} shows the visualization of $z$ in two dimensions, with $\mu$ visualized on the left and $\log\sigma$ on the right. From the figure we can see that the training phase successfully re-parameterizes the variable $z$ into a Gaussian distribution that captures the stochastic properties of agents' behaviors. While the Y-Encoder is not available at inference time, the well-trained Z-Space, in turn, enables us to randomly sample a latent variable $z$ from the Gaussian distribution multiple times for generating more than one feasible future trajectory conditioned on the observation.
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=.6\textwidth]{fig/z_space.pdf}
\caption{The latent variable $z$ of two dimensions with $\mu$ visualized on the left and $\sigma$ visualized on the right. It is trained to follow a $\mathcal{N}(0, 1)$ distribution. The variance is visualized in logarithm space, which is very close to zero.}
\label{fig:z_space}
\end{figure}
Table~\ref{tb:multipath} shows the quantitative results for multi-path trajectory prediction. Predicted trajectories are ranked by $\text{top}@10$ with the prior knowledge of the corresponding ground truth, and by most-likely ranking if the ground truth is not available. Compared to the most-likely prediction, $\text{top}@10$ prediction yields similar but better performance. This indicates that: 1) the generated multiple trajectories increase the chance to narrow down the errors from the prediction to the ground truth, and 2) the predicted trajectories are feasible (if not, the bad predictions would deteriorate the overall performance and lead to worse results than the most-likely prediction).
Fig.~\ref{fig:multi-path} showcases some qualitative examples of multi-path trajectory prediction from our model. We can see that in roundabouts the interactions between different agents are full of uncertainties, and each agent has multiple possibilities for choosing its future path. We also notice that the predicted trajectories diverge more widely at later time steps. This is reasonable, because the further into the future, the higher the uncertainty of an agent's intention. It also shows that the ability to predict multiple plausible trajectories is important for analyzing the movements of road users under the increasing uncertainty of future movements. A single prediction provides limited information for such an analysis and is likely to lead to false conclusions if the prediction is not correct/precise in the early steps.
\begin{table}[t!]
\centering
\small
\caption{Evaluation of multi-path trajectory prediction using AMENet on the offline test set of Trajnet. Predicted trajectories are ranked by $\text{top}@10$ (former) and most-likely (latter), and measured by ADE/FDE.}
\begin{tabular}{llllll}
\\ \hline
Dataset & Layout & \#Trajs & \begin{tabular}[c]{@{}l@{}}Non-linear \\ traj rate\end{tabular}
& Top@10 & Most-likely \\ \hline
bookstore3 & parking & 429 & 0.71 & 0.477/0.961 & 0.486/0.979 \\
coupa3 & corridor & 639 & 0.31 & 0.221/0.432 & 0.226/0.442 \\
deathCircle0 & roundabout & 648 & 0.89 & 0.650/1.280 & 0.659/1.297 \\
gates1 & roundabout & 268 & 0.87 & 0.784/1.663 & 0.797/1.692 \\
hyang6 & intersection & 327 & 0.79 & 0.534/1.076 & 0.542/1.094 \\
nexus0 & corridor & 131 & 0.88 & 0.542/1.073 & 0.559/1.109 \\
Average & - & 406 & 0.74 & 0.535/1.081 & 0.545/1.102 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.514\textwidth]{fig/multi_deathCircle_0.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.476\textwidth]{fig/multi_gates_1.pdf}
\caption{Multi-path predictions from AMENet}
\label{fig:multi-path}
\end{figure}
\subsection{Ablation Studies}
\label{sec:ablativemodels}
In order to analyze the impact of each proposed module in our framework, i.\,e.,~dynamic maps, self-attention, and the extended structure of CVAE, a series of ablation studies is carried out with the following ablative models (see Table~\ref{tb:resultsablativemodels}).
\begin{itemize}
\item AMENet, the full model of our framework.
\item AOENet, substitutes dynamic maps with occupancy grid~\cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both the X-Encoder and Y-Encoder. This setting is used to validate the contribution from the dynamic maps.
\item MENet, removes the self-attention module in the dynamic maps branch. This setting is used to validate the effectiveness of the self-attention module that learns the spatial interactions among agents along the time axis.
\item ACVAE, only uses dynamic maps in X-Encoder. It is equivalent to CVAE~\cite{kingma2013auto,kingma2014semi,sohn2015learning} with self-attention. This setting is used to validate the contribution of the extended structure for processing the dynamic maps information in the Y-Encoder.
\end{itemize}
Table~\ref{tb:resultsablativemodels} shows the quantitative results from the above ablative models.
Errors are measured by ADE/FDE on the most-likely prediction.
By comparison between AOENet and AMENet we can see that when we replace the dynamic maps with the occupancy grid, both ADE and FDE increase by a remarkable margin across all the datasets.
It demonstrates that our proposed dynamic maps are more helpful for exploring the interaction information among agents than the occupancy grid.
We can also see that if the self-attention module is removed (MENet) the performance decreases by a remarkable margin across all the datasets.
This phenomena indicates that the self-attention module is effective in learning the interaction among agents from the dynamic maps.
The comparison between ACVAE and AMENet shows that when we remove the extended structure in the Y-Encoder for the dynamic maps, the performance, especially FDE, decreases significantly across all the datasets. The extended structure enables the model to process the interaction information even during prediction, which improves the performance, especially for predicting more accurate destinations. This improvement has also been confirmed by the benchmark challenge (see Table~\ref{tb:results}). One interesting observation from the comparison between ACVAE and AOENet/MENet is that ACVAE performs much better than AOENet and MENet measured by ADE and FDE. This further proves that, even without the extended structure in the Y-Encoder, the dynamic maps with self-attention are very beneficial for interpreting the interactions between a target agent and its neighboring agents. Their robustness has been demonstrated by the ablative models across various datasets.
\begin{table}[hbpt!]
\setlength{\tabcolsep}{3pt}
\centering
\small
\caption{Evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by ADE/FDE on the most-likely prediction. Best values are highlighted in bold face.}
\begin{tabular}{lllllll}
\\ \hline
Dataset & ENet & OEnet & AOENet & MENet & ACVAE & AMENet \\ \hline
bookstore3 & 0.532/1.080 & 0.601/1.166 & 0.574/1.144 & 0.576/1.139 & 0.509/1.030 & \textbf{0.486}/\textbf{0.979} \\
coupa3 & 0.241/0.474 & 0.342/0.656 & 0.260/0.509 & 0.294/0.572 & 0.237/0.464 & \textbf{0.226}/\textbf{0.442} \\
deathCircle0 & 0.681/1.353 & 0.741/1.429 & 0.726/1.437 & 0.725/1.419 & 0.698/1.378 & \textbf{0.659}/\textbf{1.297} \\
gates1 & 0.876/1.848 & 0.938/1.921 & 0.878/1.819 & 0.941/1.928 & 0.861/1.823 & \textbf{0.797}/\textbf{1.692} \\
hyang6 & 0.598/1.202 & 0.661/1.296 & 0.619/1.244 & 0.657/1.292 & 0.566/1.140 & \textbf{0.542}/\textbf{1.094} \\
nexus0 & 0.684/1.387 & 0.695/1.314 & 0.752/1.489 & 0.705/1.140 & 0.595/1.181 & \textbf{0.559}/\textbf{1.109} \\
Average & 0.602/1.224 & 0.663/1.297 & 0.635/1.274 & 0.650/1.283 & 0.578/1.169 & \textbf{0.545}/\textbf{1.102} \\ \hline
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
Following \cite{gupta2018social}, a trajectory is categorized as linear if it can be fitted by a second-degree polynomial with a sum of squared residuals of the least-squares fit below 0.02; otherwise it is categorized as non-linear. Fig.~\ref{fig:non-linear} compares the prediction errors for linear, non-linear and all trajectories across the ablative models.
\begin{figure}[t!]
\centering
\begin{subfigure}{0.75\textwidth}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=1\textwidth]{fig/ADE.png}
\label{subfig:non-linear:ADE}
\caption{\small ADE}
\end{subfigure}
\begin{subfigure}{0.75\textwidth}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=1\textwidth]{fig/FDE.png}
\label{subfig:non-linear:FDE}
\caption{\small FDE}
\end{subfigure}
\caption{The prediction errors for linear, non-linear and all trajectories measured by (a) ADE and (b) FDE for all the ablative models, as well as the proposed model.}
\label{fig:non-linear}
\end{figure}
Fig.~\ref{fig:abl_qualitative_results} showcases some examples of the qualitative results of the full AMENet in comparison to the ablative models in different scenes.
In general, all the models are able to predict trajectories in different scenes, e.\,g.,~intersections and roundabouts, of various traffic density and motion patterns, e.\,g.,~standing still or moving fast. Given a short observation of trajectories, i.\,e.,~ 8 time steps, all the models are able to capture the general speed and heading direction for agents located in different areas in the space.
AMENet predicts the most accurate trajectories which are very close or even completely overlap with the corresponding ground truth trajectories.
Compared with the ablative models, AMENet predicts more accurate destinations (the last position of the predicted trajectories), which is in line with the quantitative results shown in Table~\ref{tb:results}.
One very clear example in \textit{hyang6} (Fig.~\ref{hyang_6209}) shows that, when the fast-moving agent changes its motion, AOENet and MENet have limited ability to predict its travel speed, and ACVAE has limited ability to predict its destination. On the other hand, the prediction from AMENet is very close to the ground truth.
Nevertheless, our models have limited performance in predicting abnormal trajectories, such as sudden turns or drastic changes of speed. Such scenarios can be found in the lower right corner of \textit{gates1} (Fig.~\ref{gates_1001}). The sudden maneuvers of agents are very difficult to forecast, even for human observers, since they are barely foreshadowed by the observed motion.
\begin{figure}[t!]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{fig/scenario_bookstore_3.pdf}
\caption{\small bookstore3}
\label{bookstore_3290}
\end{subfigure}
\begin{subfigure}{0.54\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -1.47cm, width=1\textwidth]{fig/scenario_coupa_3.pdf}
\caption{\small coupa3}
\label{coupa_3327}
\end{subfigure}
\begin{subfigure}{0.31\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -0.47cm, width=1\textwidth]{fig/scenario_deathCircle_0.pdf}
\caption{\small deathCircle0}
\label{deathCircle_0000}
\end{subfigure}
\begin{subfigure}{0.28\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{fig/scenario_gates_1.pdf}
\caption{\small gates1}
\label{gates_1001}
\end{subfigure}
\begin{subfigure}{0.52\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm -1.64cm, width=1\textwidth]{fig/scenario_hyang_6.pdf}
\caption{\small hyang6}
\label{hyang_6209}
\end{subfigure}
\begin{subfigure}{0.27\textwidth}
\includegraphics[trim=0cm 0.5cm 0cm 0cm, width=1\textwidth]{fig/scenario_nexus_0.pdf}
\caption{\small nexus0}
\label{nexus_0038}
\end{subfigure}
\caption{Trajectories predicted by AMENet (AME), AOENet (AOE), MENet (ME), ACVAE (CVAE) and the corresponding ground truth (GT) in different scenes. Sub-figures are not aligned in the same size, in order to demonstrate the very different space sizes and layouts.}
\label{fig:abl_qualitative_results}
\end{figure}
\subsection{Extensive Studies on Benchmark InD}
\label{subsec:InD}
To further investigate the performance of our method, we carry out extensive experiments on the newly published large-scale benchmark InD\footnote{\url{https://www.ind-dataset.com/}}.
It consists of 33 datasets and was collected using drones on four very busy intersections (as shown in Fig.~\ref{fig:qualitativeresultsInD}) in Germany in 2019 by Bock et al. \cite{inDdataset}.
Different from Trajnet, where most of the environments (i.\,e.,~shared spaces \cite{reid2009dft,robicquet2016learning}) are pedestrian-friendly, the interactions in InD are more vehicle-oriented. This makes the prediction task more challenging due to the very different travel speeds of pedestrians and vehicles, and their direct interactions.
We follow the same processing format as the Trajnet benchmark and downsample the time steps of InD from the video frame rate of \SI{25}{fps} to \SI{2.5}{fps}, i.\,e.,~0.4 seconds per time step. We obtain the same sequence length of 8 time steps for observation and 12 time steps for prediction. One third of the datasets from each intersection are selected for testing the performance of AMENet, and the remaining datasets are used for training.
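The temporal downsampling amounts to keeping every tenth frame (a minimal sketch; the array layout is an assumption):
\begin{verbatim}
import numpy as np

def downsample(traj_25fps, factor=10):
    # traj_25fps: (T, 2) positions at 25 fps; keeping every 10th frame
    # yields 2.5 fps, i.e., 0.4 seconds per time step.
    return np.asarray(traj_25fps)[::factor]
\end{verbatim}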
Table~\ref{tb:resultsInD} lists the performance of our method measured by ADE and FDE for each intersection, together with the overall average errors. We can see that our method is still able to generate feasible trajectories and reports good results; the average errors (0.73/1.59) are only slightly inferior to the ones obtained on Trajnet (0.545/1.102).
\begin{table}[t!]
\centering
\small
\caption{Quantitative results of AMENet and the baseline models V-LSTM and Social GAN (SGAN)~\cite{gupta2018social} on InD, measured by ADE/FDE for each intersection and averaged across all the datasets.}
\begin{tabular}{lllllll}
\hline
Model & V-LSTM & SGAN & AMENET & V-LSTM & SGAN & AMENET \\ \hline
InD type & \multicolumn{3}{c}{@top 10} & \multicolumn{3}{c}{Most-likely} \\ \hline
Int. A & 2.04/4.61 & 2.84/4.91 & \textbf{0.95/1.94} & 2.29/5.33 & 3.02/5.30 & \textbf{1.07/2.22} \\
Int. B & 1.21/2.99 & 1.47/3.04 & \textbf{0.59/1.29} & 1.28/3.19 & 1.55/3.23 & \textbf{0.65/1.46} \\
Int. C & 1.66/3.89 & 2.05/4.04 & \textbf{0.74/1.64} & 1.78/4.24 & 2.22/4.45 & \textbf{0.83/1.87} \\
Int. D & 2.04/4.80 & 2.52/5.15 & \textbf{0.28/0.60} & 2.17/5.11 & 2.71/5.64 & \textbf{0.37/0.80} \\
Avg. & 1.74/4.07 & 2.22/4.29 & \textbf{0.64/1.37} & 1.88/4.47 & 2.38/4.66 & \textbf{0.73/1.59} \\ \hline
\end{tabular}
\label{tb:resultsInD}
\end{table}
\begin{figure} [bpht!]
\centering
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/06_Trajectories020_12.pdf}
\caption{\small{Intersection-A}}
\label{subfig:Intersection-A}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/14_Trajectories030_12.pdf}
\caption{\small{Intersection-B}}
\label{subfig:Intersection-B}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/27_Trajectories011_12.pdf}
\caption{\small{Intersection-C}}
\label{subfig:Intersection-C}
\end{subfigure}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/32_Trajectories019_12.pdf}
\caption{\small{Intersection-D}}
\label{subfig:Intersection-D}
\end{subfigure}\hfill
\caption{\small{Benchmark InD: examples for predicting trajectories of mixed traffic in different intersections.}}
\label{fig:qualitativeresultsInD}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
In this paper, we present a generative model called Attentive Maps Encoder Networks (AMENet) that uses motion information and interaction information for multi-path trajectory prediction of mixed traffic in various real-world environments.
The latent space learnt by the X-Encoder and Y-Encoder for both sources of information enables the model to capture the stochastic properties of motion behaviors for predicting multiple plausible trajectories after a short observation time.
We propose a novel way---dynamic maps---to extract the social effects between agents during interactions. The dynamic maps capture accurate interaction information by encoding the neighboring agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of interaction over different time steps.
The efficacy of the model has been validated on the most challenging benchmark Trajnet, which contains numerous datasets recorded in different real-world environments. Our model not only achieves state-of-the-art performance, but also wins first place on the leader board for predicting 12 time-step positions over 4.8 seconds.
Each component of AMENet is validated via a series of ablative studies.
In future work, we will extend our prediction model to safety assessment, for example, using the predicted trajectories to calculate time-to-collision \cite{perkins1968traffic} and to detect abnormal trajectories by comparing the anticipated/predicted trajectories with the actual ones.
\section*{Acknowledgements}
This work is supported by the German Research Foundation (DFG) through the Research Training Group SocialCars (GRK 1931).
\section*{References}
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5cm 3cm 8cm 0, width=1\textwidth]{fig/model_framework2.pdf}
\caption{The framework of the proposed model. The target road user's motion information (i.\,e.,~movement in $x$- and $y$-axis in a Cartesian space) and the interaction information with the neighboring road users (their orientation, travel speed and relative position to the target road user) at each time step, encoded as dynamic maps centralized on the target road user, are the input of the proposed model for accurate and realistic trajectory prediction. In the training phase, both the target user's motion information and the interaction information with the neighboring road users in observation and prediction time are encoded by the X-Encoder and Y-Encoder, respectively. The outputs of the X-Encoder and Y-Encoder are concatenated and used to learn a latent variable $Z$ in a space called Z-Space, which captures the stochastic properties of the movement of the target road user in consideration of its motion and interactions with the neighboring road users. The latent variable $Z$ is pushed towards a Gaussian distribution by minimizing a Kullback-Leibler divergence. Then the output of the X-Encoder is concatenated with the $Z$ variable for reconstructing the target road user's future trajectory via the Decoder. The Decoder is trained by minimizing the mean square error between the ground truth $Y$ and the reconstructed trajectory $\hat{Y}$.
At inference time, the model only has access to the target user's motion information and the interaction information with the neighboring road users from a short observation time. Both sources of information are encoded by the trained X-Encoder and concatenated with a $Z$ variable sampled from the Z-Space. The Decoder uses the concatenated information as input for predicting the target road user's future trajectory.}
\label{fig:framework}
\end{figure}
\section{Introduction}
Correctly understanding the motion behavior of road agents in the near future is crucial for many intelligent transportation systems (ITS), such as intent detection \cite{goldhammer2013early,hashimoto2015probabilistic,koehler2013stationary}, trajectory prediction \cite{alahi2016social,gupta2018social,vemula2018social} and autonomous driving \cite{franke1998autonomous}, especially in urban mixed-traffic zones (a.k.a. shared spaces \cite{reid2009dft}).
\section{Results}
\begin{table}[bpht!]
\centering
\caption{Comparison between the proposed model and the state-of-the-art models}
\begin{tabular}{llll}
\hline
Model & Overall avg$\downarrow$ & MAD$\downarrow$ & FDE$\downarrow$ \\
\hline
Social LSTM\cite{alahi2016social} & 1.3865 & 3.098 & 0.675\\
Social GAN\cite{gupta2018social} & 1.334 & 2.107 & 0.561\\
MX-LSTM\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399\\
Social Force\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 \\
Linear off5\cite{becker2018evaluation} & 0.8185 & 1.266 & 0.371\\
SR-LSTM\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370\\
Physical Computation\cite{becker2018evaluation} & 0.793 & 1.226 & 0.360 \\
RED\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359\\
Ind-TF\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356}\\
Ours & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} \\
\hline
\end{tabular}
\label{tb:results}
\end{table}
\section{Conclusions}
In this paper, we present
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction of road users in the near future is a crucial task in different communities, such as photogrammetry \cite{schindler2010automatic,klinger2015probabilistic,klinger2017probabilistic,cheng2018mixed}, intelligent transportation systems (ITS) \cite{morris2008survey,cheng2018modeling,cheng2020mcenet}, computer vision \cite{alahi2016social,mohajerin2019multi},
mobile robot applications \cite{mohanan2018survey},
\textit{etc}..
This task enables an intelligent system to foresee the behaviors of road users and make a reasonable and safe decision for its next operation, especially in urban mixed-traffic zones (a.k.a. shared spaces \cite{reid2009dft}).
The trajectory of a non-erratic agent is often referred as plausible (e.\,g.,~collision free and energy efficient) and socially-acceptable (e.\,g.,~considering social relations, social rules and norms between agents) positions in 2D or 3D aligned in a timing law.
Trajectory prediction is generally defined as to predict the plausible and socially-acceptable positions of target agents at each time step within a predefined future time interval relying on observed partial trajectories over a certain period of time \cite{helbing1995social,alahi2016social}.
The target agent is defined as the dynamic object for which the actual prediction is made, mainly pedestrian, cyclist, vehicle and other road users.
The prediction task can be generalized as short-term or long-term trajectory prediction depending on the prediction time horizons. In this study, under five seconds is categorized as short term, otherwise as long term.
A typical prediction process of mixed traffic is exemplified in Fig.~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.8in 2.6in 0.6in, width=1\textwidth]{fig/first_fig.pdf}
\caption{Predicting plausible and socially-acceptable positions of agents (e.\,g.,~target agent in black) at each time step within a predefined future time interval by observing their past trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
How to effectively and accurately predict trajectory of mixed agents remains an unsolved problem in many research communities. The challenges are mainly from three aspects: 1) the complex behavior and uncertain moving intent of each agent, 2) the presence and interactions between the target agent and its neighboring agents and 3) the multi-modality of paths: there are usually more than one socially-acceptable paths that an agent could move in the future.
\my{Too long related works here, you also have sec.2 on related work.}
There exists a large body of literature that focuses on addressing part or all of the aforementioned challenges in order to make accurate trajectory prediction.
The traditional methods model the interactions based on hand-crafted features, such as force-based rules \cite{helbing1995social}, Game Theory \cite{johora2020agent}, or constant velocity \cite{best1997new}.
The performance is crucially affected by the quality of manually designed features and they lack generalizability \cite{cheng2020trajectory}.
Recently, boosted by the development of deep learning technologies \cite{lecun2015deep}, data-driven methods
keep reporting new state-of-the-art performance on benchmarks \cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,cheng2020mcenet}.
For instance, Recurrent Neural Networks (RNNs) based models are used to model the interactions between agents and predict the future positions in sequence \cite{alahi2016social,xue2018ss}.
However, these works design a discriminative model and produce a deterministic outcome for each agent. The models tend to predict the ``average" trajectories because the commonly used objective function minimizes the Euclidean distance between the ground truth and the predicted outputs.
To predict multiple socially-acceptable trajectories for each target agent, different generative models are proposed, such as Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} based framework Social GAN \cite{gupta2018social} and Conditional Variational Auto-Encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning} based framework DESIRE \cite{lee2017desire}.
In spite of the great success in this domain, most of these methods are designed for a specific type of agent: the pedestrian.
In reality, pedestrians, cyclists and vehicles are the three major types of agents and their behaviors affect each other. To make precise trajectory predictions, their interactions should be considered jointly. Besides, in these works the interactions between the target agent and the others are treated equally. But different agents may not affect the target agent equally on how to move in the near future. For instance, nearer agents should affect the target agent more strongly than more distant ones, and a target vehicle is affected more by pedestrians who tend to cross the road than by vehicles nearly behind it. Last but not least, the robustness of the models is not fully tested in real-world outdoor mixed-traffic environments (e.\,g.,~roundabouts, intersections) with various unseen traffic situations. Can a model trained in some spaces predict accurate trajectories in other unseen spaces?
To address the aforementioned limitations, we propose this work named \emph{Attentive Maps Encoder Network} (AMENet), which leverages the ability of generative models to generate diverse patterns of future trajectories and models the interactions between a target agent and the others as attentive dynamic maps.
The dynamic map manipulates the information extracted from the neighboring agents' orientation, speed and position in relation to the target agent at each time step for interaction modeling, and the attention mechanism enables the model to automatically focus on the salient features extracted over different time steps.
An overview of our proposed framework is depicted in Fig.~\ref{fig:framework}. (1) Two encoders are designed for learning representations of the observed trajectories (X-Encoder) and the future trajectories (Y-Encoder) respectively, and they have an identical structure. Taking the X-Encoder as an example (see Fig.~\ref{fig:encoder}), the encoder first extracts the motion information from the target agent (coordinate offset in sequential time steps) and the interaction information with the other agents respectively. Particularly, to explore the dynamic interactions, the motion information of each agent is characterized by its orientation, speed and position at each time step. Then a self-attention mechanism is utilized over all agents to extract the dynamic interaction maps. This is where the name \emph{Attentive Maps Encoder} comes from. The motion and interaction information along the observed time interval are collected by two independent Long Short-Term Memories (LSTMs) and then fused together. (2) The output of the Y-Encoder is supplied to a variational auto-encoder to learn the latent space of the future trajectory distribution, which is assumed to be a Gaussian distribution. (3) The output of the variational auto-encoder module (obtained by re-parameterization of the encoded features during the training phase and by re-sampling from the learned latent space during the inference phase) is fed forward to the following decoder, associated with the output of the X-Encoder as condition, to forecast the future trajectory, which works in the same way as a conditional variational auto-encoder (CVAE) \cite{kingma2013auto,kingma2014semi,sohn2015learning}.
The main contributions are summarised as follows:
\begin{itemize}
\item[1] We propose a generative framework, Attentive Maps Encoder Network (AMENet), for multi-path trajectory prediction.
AMENet inserts a generative module that is trained to learn a latent space for encoding the motion and interaction information in both observation and future, and predicts multiple feasible future trajectories conditioned on the observed information.
\item[2] We design a novel module, \emph{attentive maps encoder} that learns spatio-temporal interconnections among agents based on dynamic maps using a self-attention mechanism.
\item[3] Our model is able to predict trajectories for heterogeneous road users, i.\,e.,~pedestrians, cyclists and vehicles, rather than focusing only on pedestrians, in various unseen real-world environments, which makes our work different from most of the previous ones that only predict pedestrian trajectories.
\end{itemize}
The efficacy of the proposed method has been validated on the most challenging benchmark \emph{Trajnet} \cite{sadeghiankosaraju2018trajnet}, which contains numerous datasets collected in various environments for short-term trajectory prediction. Our method reports the new state-of-the-art performance and wins the first place on the leader board.
Its performance for predicting long term trajectory (up to 32 time-step positions of 12.8 seconds) is also investigated on the benchmark inD \cite{inDdataset} that contains mixed traffic in different intersections.
Each component of the proposed model is validated via a series of ablative models.
\section{Related Work}
Our work focuses on predicting the trajectories of mixed road agents.
In this section we discuss the recent related works mainly in the following aspects: modeling this task as a sequence prediction, modeling the interactions between agents for precise path prediction, modeling with attention mechanisms, and utilizing generative models to predict multiple plausible trajectories. Our work concentrates on modeling the dynamic interactions between agents and training a generative model to predict multiple plausible trajectories for each target agent.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling the trajectory prediction as a sequence prediction task is the most popular approach. The 2D/3D position of a target agent is predicted step by step along the time axis.
The widely applied models include, but are not limited to, linear regression and Kalman filter \cite{harvey1990forecasting}, Gaussian processes \cite{tay2008modelling} and Markov decision processes \cite{kitani2012activity}.
However, these traditional methods largely rely on the quality of manually designed features and are unable to tackle large-scale data.
Recently, data-driven deep learning technologies, especially RNN-based models and the variants, e.\,g.,~Long Short-Term Memories (LSTMs) \cite{hochreiter1997long}, have demonstrated the powerful ability in extracting representations from massive data automatically and are used to learn the complex patterns of trajectories.
In recent years, RNN-based models keep pushing the edge of accuracy of predicting pedestrian trajectory \cite{alahi2016social,gupta2018social,sadeghian2018sophie,zhang2019sr}, as well as other types of road users \cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
In this work, we also utilize LSTMs to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of an agent is not only decided by its own willing but also crucially affected by the interactions between it and the other agents. Therefore, effectively modeling the social interactions among agents is important for accurate trajectory prediction.
One of the most influential approaches for modeling interactions is the Social Force Model \cite{helbing1995social}, which models the repulsive force for collision avoidance and the attractive force for social connections. Game Theory is utilized to simulate the negotiation between different road users \cite{johora2020agent}.
Such rule-based interaction modelings have been incorporated into deep learning models. Social LSTM proposes an occupancy grid to locate the positions of close neighboring agents and uses a social pooling layer to encode the interaction information for trajectory prediction \cite{alahi2016social}. Many following works design their specific ``occupancy'' grid for interaction modeling \cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interaction between individual agent and group agents with social connections and report better performance.
Meanwhile, different pooling mechanisms are proposed for interaction modeling. For example, Social GAN \cite{gupta2018social} embeds relative positions between the target and all the other agents with each agent's motion hidden state and uses an element-wise pooling to extract the interaction between all the pairs of agents, not only the close neighboring agents;
Similarly, all the agents are considered in SR-LSTM \cite{zhang2019sr}. It proposes a states refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework. The motion gate and agent-wise attention are used to select the most important information from neighboring agents.
Most of the aforementioned models extract interaction information based on the relative position of the neighboring agents in relation to the target agent.
The dynamics of interactions are thus not fully captured in both the spatial and temporal domains.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Recently, different attention mechanisms \cite{bahdanau2014neural,xu2015show,vaswani2017attention} are incorporated in neural networks for learning complex spatio-temporal interconnections.
Particularly, their effectiveness has been proven in learning powerful representations from sequence information, such as in neural machine translation \cite{bahdanau2014neural,vaswani2017attention} and image caption generation \cite{xu2015show,anderson2018bottom,he2020image}.
Some of the recent state-of-the-art methods also have adapted attention mechanisms for sequence modeling and interaction modeling to predict trajectories.
For example, a soft attention mechanism \cite{xu2015show} is incorporated in LSTM to learn the spatio-temporal patterns from the position coordinates \cite{varshneya2017human}. Similarly, SoPhie \cite{sadeghian2018sophie} applies two separate soft attention modules, one called physical attention for learning the salient features between agent and scene and the other called social attention for modeling agent to agent interactions. In the MAP model \cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work Ind-TF \cite{giuliari2020transformer} replaces RNNs with the Transformer \cite{vaswani2017attention} for modeling trajectory sequences.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism \cite{vaswani2017attention} along the time axis.
The self-attention mechanism is defined as mapping a query and a set of key-value pairs to an output. First, the similarity between the query and each key is computed to obtain a weight. The weights associated with all the keys are then normalized via, e.\,g.,~a softmax function and are applied to weigh the corresponding values for obtaining the final attention.
Unlike RNN-based structures that propagate the information along the symbol positions of the input and output sequences, which makes information propagation increasingly difficult for long sequences,
the self-attention mechanism relates different positions of a single sequence in order to compute a representation of the entire sequence, so that the dependency between the input and output is not restricted by their positional distance.
\subsection{Generative Models}
\label{sec:rel-generative}
To date, VAE \cite{kingma2013auto}, GAN \cite{goodfellow2014generative} and their variants (e.\,g.,~Conditional VAE \cite{kingma2014semi,sohn2015learning})
are the most popular generative models in the era of deep learning.
They are both able to generate diverse outputs by sampling from noise. The essential difference is that GAN trains a generator to generate a sample from noise and a discriminator to decide whether the generated sample is real enough. The generator and the discriminator enhance each other during training.
In contrast, VAE is trained by maximizing the lower bound of training data likelihood for learning a latent space that approximates the distribution of the training data.
Generative models have shown promising performance in different tasks, e.\,g.,~super resolution, image-to-image translation and image generation, as well as trajectory prediction \cite{lee2017desire,gupta2018social,cheng2020mcenet}.
Predicting one single trajectory may not be sufficient due to the uncertainties of road users' behavior.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performance of the two modules are enhanced mutually and the generator is able to generate trajectories that are as precise as the real ones. Similarly, Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
Lee~et al.~\cite{lee2017desire} propose a CVAE model to predict multiple plausible trajectories.
Cheng~et al.~\cite{cheng2020mcenet} propose a CVAE like model named MCENet to predict multiple plausible trajectories conditioned on the scene context and previous information of trajectories.
In this work, we incorporate a CVAE module to learn a latent space of possible future paths for predicting multiple plausible future trajectories conditioned on the observed past trajectories.
Our work essentially distinguishes itself from the above generative models in the following points: (1) We feed not only the ground-truth trajectory, but also the dynamic maps associated with the ground-truth trajectory into the CVAE module during training, which is different from the conventional CVAE that follows a consistent input and output structure (e.\,g.,~the input and output are both trajectories in the same structure \cite{lee2017desire}).
(2) Our method does not explore information from images, i.\,e.,~visual information is not used and
future trajectories are predicted only based on the map data (i.\,e.,~position coordinate).
Therefore, it is computationally more efficient than the methods that require information from images.
In addition, our model is trained on some available spaces but is validated on other unseen spaces. The visual information, such as vegetation, curbside and buildings, are very different from one space to another. The over-trained visual features, on the other hand, may jeopardise the model's robustness and lead to a bad performance in an unseen space of totally different environment \cite{cheng2020mcenet}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 3.5in 0in 0.5in, width=1\textwidth]{fig/model_framework3.pdf}
\caption{An overview of the proposed framework. It consists of four modules: the X-Encoder and Y-Encoder are used for encoding the observed and the future trajectories, respectively, and have the same structure. The Sample Generator produces diverse samples of future trajectories. The Decoder module decodes the features from the produced samples and predicts the future trajectory sequentially. The specific structure of the X-Encoder/Y-Encoder is given in Fig.~\ref{fig:encoder}.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet in details (Fig.~\ref{fig:framework}) in the following structure: a brief review on \emph{CVAE} (Sec.~\ref{subsec:cvae}), \emph{Problem Definition} (Sec.~\ref{subsec:definition}), \emph{Motion Input} (Sec.~\ref{subsec:input}), \emph{Dynamic Maps} (Sec.~\ref{subsec:dynamic}), \emph{Diverse Sampling} (Sec.~\ref{subsec:sample}) and \emph{Trajectory Ranking} (Sec.~\ref{subsec:ranking}).
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
In tasks like trajectory prediction, we are interested in modeling a conditional distribution $P(Y_n|X)$, where $X$ is the previous trajectory information and $Y_n$ is one of the possible future trajectories.
In order to generate controllable and diverse samples of future trajectories based on past trajectories, a deep generative model, the conditional variational auto-encoder (CVAE), is adopted in our framework.
CVAE is an extension of the generative model VAE \cite{kingma2013auto} by introducing a condition to control the output \cite{kingma2014semi}.
Concretely, it is able to learn the stochastic latent variable $z$ that characterizes the distribution $P(Y_i|X_i)$ of $Y_i$ conditioned on the input $X_i$, where $i$ is the index of sample.
The objective function of training CVAE is formally defined as:
\begin{equation}
\label{eq:CVAE}
\log{P(Y_i|X_i)} \geq - D_{KL}(Q(z_i|Y_i, X_i)||P(z_i)) + \E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}],
\end{equation}
where $Y$ and $X$ stand for the future and past trajectories in our task, respectively, and $z_i$ for the latent variable. The objective is to maximize the conditional probability $\log{P(Y_i|X_i)}$, which is equivalent to minimizing the reconstruction error $\ell (\hat{Y_i}, Y_i)$ and minimizing the Kullback-Leibler divergence $D_{KL}(\cdot)$ in parallel.
In order to enable back propagation for stochastic gradient descent in $\E_{Q(z_i|Y_i, X_i)}[\log{P(Y_i|z_i, X_i)}]$, a re-parameterization trick \cite{rezende2014stochastic} is applied to $z_i$, where $z_i$ can be re-parameterized as $z_i = \mu_i + \sigma_i \odot \epsilon_i$. Here, $z$ is assumed to have a Gaussian distribution $z_i\sim Q(z_i|Y_i, X_i)=\mathcal{N}(\mu_i, \sigma_i)$. $\epsilon_i$ is sampled from noise that follows a normal Gaussian distribution, and the mean $\mu_i$ and the standard deviation $\sigma_i$ of $z_i$ are produced by two side-by-side \textit{fc} layers, respectively (as shown in Fig.~\ref{fig:framework}). In this way, the differentiation problem of the sampling process $Q(z_i|Y_i, X_i)$ is turned into differentiating the sampled results $z_i$ w.\,r.\,t.~$\mu_i$ and $\sigma_i$. Then, back propagation for stochastic gradient descent can be utilized to optimize the networks that produce $\mu_i$ and $\sigma_i$.
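To make the sampling step concrete, the following minimal NumPy sketch illustrates the re-parameterization trick described above; the batch size and latent dimension are illustrative assumptions, not values from our implementation.
\begin{verbatim}
import numpy as np

def reparameterize(mu, sigma, rng=np.random.default_rng()):
    """Re-parameterization trick: z = mu + sigma * epsilon.

    mu and sigma are the outputs of the two side-by-side fc layers.
    Because the randomness is isolated in epsilon ~ N(0, I), gradients
    can be back-propagated through mu and sigma to the encoders.
    """
    epsilon = rng.standard_normal(mu.shape)
    return mu + sigma * epsilon

# Example: one latent sample for a batch of 4 agents, latent dim 16
mu = np.zeros((4, 16))
sigma = np.ones((4, 16))
z = reparameterize(mu, sigma)   # z has shape (4, 16)
\end{verbatim}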
\subsection{Problem Definition}
\label{subsec:definition}
The multi-path trajectory prediction problem is defined as follows: for an agent $i$, receive as input its observed trajectory $\mathbf{X}_i=\{X_i^1,\cdots,X_i^T\}$ and predict its $n$-th plausible future trajectory $\hat{\mathbf{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,n}^{T'}\}$. $T$ and $T'$ denote the sequence lengths of the past and the predicted trajectory, respectively. The trajectory position of $i$ at time step $t$ is characterized by the coordinates $X_i^t=({x_i}^t, {y_i}^t)$ (3D coordinates are also possible, but in this work only 2D coordinates are considered) and likewise $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^{t'}, \hat{y}_{i,n}^{t'})$.
For simplicity, we omit the notation of time steps in the following parts of the paper when it is clear from context.
The objective is to predict multiple plausible future trajectories $\hat{\mathbf{Y}}_i = \{\hat{\mathbf{Y}}_{i,1},\cdots,\hat{\mathbf{Y}}_{i,N}\}$ that are as close as possible to the ground truth $\mathbf{Y}_i$. This task is formally defined as $\hat{\mathbf{Y}}_{i,n} = f(\mathbf{X}_i, \text{Map}_i), ~n \in \{1,\cdots,N\}$. $N$ denotes the total number of predicted trajectories and $\text{Map}_i$ denotes the dynamic maps centralized on the target agent for mapping the interactions with its neighboring agents over the time steps. More details of the dynamic maps will be given in Sec.~\ref{subsec:dynamic}.
\subsection{Motion Input}
\label{subsec:input}
Specifically, we use the offsets $({\Delta{x}_i}^t, {\Delta{y}_i}^t)$ of the trajectory positions between two consecutive time steps as the motion information instead of the coordinates in a Cartesian space, which has been widely applied in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}. Compared with coordinates, the offset is independent of the given space and less sensitive to overfitting a model to a particular space or travel direction.
The offset can be interpreted as speed over time steps that are defined with a constant duration.
As long as the original position is known, the absolute coordinates at each position can be calculated by cumulatively summing the sequence offsets.
We apply the augmentation technique to randomly rotate the trajectories to prevent the system from only learning certain directions. In order to maintain the relative positions and angles between agents, the trajectories of all the agents coexisting in a given period are rotated by the same angle.
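As a side remark, the offset representation and the scene-level rotation augmentation can be sketched in a few lines of NumPy; the function names below are illustrative and not part of our implementation.
\begin{verbatim}
import numpy as np

def to_offsets(trajectory):
    """Convert absolute positions (T, 2) to per-step offsets (T-1, 2)."""
    return np.diff(trajectory, axis=0)

def to_positions(offsets, origin):
    """Recover absolute positions by cumulatively summing the offsets."""
    return origin + np.cumsum(offsets, axis=0)

def rotate_scene(trajectories, angle_rad):
    """Rotate all coexisting trajectories (N, T, 2) by the same angle,
    preserving the relative positions and angles between agents."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    return trajectories @ rot.T

traj = np.array([[0.0, 0.0], [0.4, 0.1], [0.9, 0.3]])
offsets = to_offsets(traj)
assert np.allclose(to_positions(offsets, traj[0]), traj[1:])
\end{verbatim}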
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 2.2in 3.6in 0.5in, width=1\textwidth]{fig/encoder.pdf}
\caption{Structure of the X-Encoder. The encoder has two branches: the upper one is used to extract the motion information of the target agent (i.\,e.,~movement in $x$- and $y$-axis in a Cartesian space), and the lower one is used to learn the interaction information among the neighboring road users from dynamic maps over time. Each dynamic map consists of three layers that represent orientation, travel speed and relative position, respectively, and is centralized on the target road user. The motion and interaction information are encoded by their own LSTMs sequentially. The last outputs of the two LSTMs are concatenated and forwarded to a \textit{fc} layer to get the final output of the X-Encoder. The Y-Encoder has the same structure as the X-Encoder but is used for extracting features from the future trajectories and is only used in the training phase.}
\label{fig:encoder}
\end{figure}
\subsection{Dynamic Maps}
\label{subsec:dynamic}
Different from the recent works of parsing the interactions between the target and neighboring agents using an occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, we propose a novel and straightforward method---attentive dynamic maps---to learn interaction information among agents.
As demonstrated in Fig.~\ref{fig:encoder}, a dynamic map at a given time step consists of three layers that interpret the information of \emph{orientation}, \emph{speed} and \emph{position}, respectively, which is derived from the trajectories of the involved agents. Each layer is centralized on the target agent's position and divided into uniform grid cells. The layers are divided into grids because: (1) representing the information at grid level is computationally more efficient than at pixel level; (2) the size and moving speed of an agent are not fixed and it occupies a local region of pixels of arbitrary form, so the spatio-temporal information of the pixels differs even within the same agent. Therefore, we represent the spatio-temporal information as an average value within a grid. We calculate the value of each grid cell in the different layers as follows:
the neighboring agents are located into the corresponding grid cells by their relative position to the target agent, as well as by their relative offset (speed) to the target agent at each time step in the $x$- and $y$-axis directions.
Eq.~\eqref{eq:map} denotes the mapping mechanism for target user $i$ considering the orientation $O$, speed $S$ and position $P$ of all the neighboring agents $j \in \mathcal{N}(i)$ that coexist with the target agent $i$ at each time step.
\begin{equation}
\label{eq:map}
\text{Map}_i^t = \sum_{j \in \mathcal{N}(i)}(O, S, P) | (x_j^t-x_i^t, ~y_j^t-y_i^t, ~\Delta{x}_j^t-\Delta{x}_i^t, ~\Delta{y}_j^t-\Delta{y}_i^t).
\end{equation}
The \emph{orientation} layer $O$ represents the heading direction of the neighboring agents. The orientation value is measured in \emph{degrees} $[0, 360]$ and then normalized into $[0, 1]$. The value of each grid cell is the mean orientation of all the agents existing within the cell.
The \emph{speed} layer $S$ represents the neighboring agents' travel speed. Locally, the speed in each grid cell is the average speed of all the agents within the cell. Globally, across all the cells, the speed values are normalized into $[0, 1]$ by the Min-Max normalization scheme.
The \emph{position} layer $P$ stores the positions of all the neighboring agents in the grid cells calculated by Eq.~\eqref{eq:map}. The value of the corresponding cell is the number of individual neighboring road users in the cell normalized by the total number of neighboring road users at that time step, which can be interpreted as the cell's occupancy density.
Each time step has a dynamic map and therefore the spatio-temporal interaction information among agents are interpreted dynamically over time.
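To make the layer construction concrete, the NumPy sketch below builds one dynamic map for a single time step with \SI{1}{meter} cells and a \SI{16}{meter} range (the setting used later for Fig.~\ref{fig:dynamic_maps}); the cell placement here uses only the relative positions, a simplification of Eq.~\eqref{eq:map}, and the normalization details are assumptions for illustration.
\begin{verbatim}
import numpy as np

def dynamic_map(target_pos, neighbor_pos, neighbor_off,
                cell=1.0, radius=16.0):
    """Build a 3-layer map (orientation, speed, position) of shape
    (3, G, G), centered on the target agent, G = 2 * radius / cell."""
    G = int(2 * radius / cell)
    layers = np.zeros((3, G, G))
    counts = np.zeros((G, G))
    rel = neighbor_pos - target_pos                # relative positions
    for (dx, dy), (vx, vy) in zip(rel, neighbor_off):
        gx = int((dx + radius) // cell)
        gy = int((dy + radius) // cell)
        if 0 <= gx < G and 0 <= gy < G:
            heading = (np.degrees(np.arctan2(vy, vx)) % 360.0) / 360.0
            layers[0, gy, gx] += heading           # accumulate orientation
            layers[1, gy, gx] += np.hypot(vx, vy)  # accumulate speed
            counts[gy, gx] += 1
    occupied = counts > 0
    layers[0][occupied] /= counts[occupied]        # mean orientation
    layers[1][occupied] /= counts[occupied]        # mean speed
    if layers[1].max() > 0:
        layers[1] /= layers[1].max()               # Min-Max normalization
    if counts.sum() > 0:
        layers[2] = counts / counts.sum()          # occupancy density
    return layers
\end{verbatim}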
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.7\textwidth]{fig/dynamic_maps_nexus_0.pdf}
\caption{The dynamic maps information accumulated over all time steps for the dataset \textit{nexus-0}.}
\label{fig:dynamic_maps}
\end{figure}
To more intuitively show the dynamic maps information, we gather all the agents over all the time steps and visualize them in Fig.~\ref{fig:dynamic_maps} as an example showcased by the dataset \textit{nexus-0} (see more information on the benchmark datasets in Sec~\ref{subsec:benchmark}).
Each rectangular grid cell is \SI{1}{meter} in both width and height, and the region of interest extends up to \SI{16}{meters} in each direction centered on the target agent, in order to include not only close but also distant neighboring agents.
The visualization demonstrates certain motion patterns of the agents, including the distribution of orientation, speed and position over the grids of the maps. For example, all the agents move in a certain direction with similar speed on a particular area of the maps, and some areas are much denser than the others.
\subsubsection{Attentive Maps Encoder}
\label{subsubsec:AMENet}
As discussed above, each time step has a dynamic map which summaries the orientation, speed and position information of all the neighboring agents. To capture the spatio-temporal interconnections from the dynamic maps for the following modules, we propose the \emph{Attentive Maps Encoder} module.
The X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and dynamic maps information for interaction (lower branch).
The upper branch takes as input the motion information, i.\,e.,~the offsets $\sum_t^T({\Delta{x}_i}^t, {\Delta{y}_i}^t)$, for each target agent over the observed time steps. The motion information is first passed to a 1D convolutional layer (Conv1D) with one-step stride along the time axis to learn motion features one time step after another. Then it is passed to a fully connected (\textit{fc}) layer. The output of the \textit{fc} layer is passed to an LSTM module for encoding the temporal features along the trajectory sequence of the target agent into a hidden state, which contains all the motion information.
The lower branch takes the dynamic maps $\sum_t^T\text{Map}_i^t$ as input.
The interaction information at each time step is passed through a 2D convolutional layer (Conv2D) with ReLU activation and a Maximum Pooling layer (MaxPool) to learn the spatial features among all the agents. The output of the MaxPool layer at each time step is flattened and concatenated along the time axis to form a time-distributed feature vector. Then the feature vector is fed forward to a self-attention module to learn the interaction information with an attention mechanism. Here, we adopt the multi-head attention method from the Transformer, which linearly projects multiple self-attention operations in parallel and concatenates them together~\cite{vaswani2017attention}.
The attention function is described as mapping a query and a set of key-value pairs to an output. The query ($Q$), keys ($K$) and values ($V$) are transformed from the spatial features, which are encoded in the above step, by linear transformations:
\begin{align*}
Q =& \pi(\text{Map})W_Q, ~W_Q \in \mathbb{R}^{D\times D_q},\\
K =& \pi(\text{Map})W_K, ~W_K \in \mathbb{R}^{D\times D_k},\\
V =& \pi(\text{Map})W_V, ~W_V \in \mathbb{R}^{D\times D_v},
\end{align*}
where $W_Q, W_K$ and $W_V$ are the trainable parameters and $\pi(\cdot)$ indicates the encoding function of the dynamic maps. $D_q, D_k$ and $D_v$ are the dimensions of the query, key and value vectors (they are identical in our implementation).
Then the self-attended features are calculated as:
\begin{equation}
\label{eq:attention}
\text{Attention}(Q, K, V) = \text{softmax}(\frac{QK^T}{\sqrt{d_k}})V
\end{equation}
This operation is also called \emph{scaled dot-product attention}.
To improve the performance of the attention layer, \emph{multi-head attention} is applied:
\begin{align}
\label{eq:multihead}
\begin{split}
\text{MultiHead}(Q, K, V) &= \text{ConCat}(\text{head}_1,...,\text{head}_h)W_O \\
\text{head}_i &= \text{Attention}(QW_{Qi}, KW_{Ki}, VW_{Vi})
\end{split}
\end{align}
where $W_{Qi}\in \mathbb{R}^{D\times D_{qi}}$ indicates the linear transformation parameters for the query in the $i$-th self-attention head and $D_{qi} = \frac{D_{q}}{\#head}$. The same holds for $W_{Ki}$ and $W_{Vi}$. Note that $\#head$ is the total number of attention heads and it must be a divisor of $D_{q}$. The outputs of all heads are concatenated and passed through a linear transformation with parameter $W_O$.
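For concreteness, a minimal NumPy sketch of the scaled dot-product attention in Eq.~\eqref{eq:attention} and the multi-head variant in Eq.~\eqref{eq:multihead} is given below; the feature dimensions are illustrative and bias terms are omitted.
\begin{verbatim}
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, cf. Eq. (3)."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # row-wise softmax
    return w @ V

def multi_head(x, W_Q, W_K, W_V, W_O, n_heads):
    """x: (T, D) time-distributed map features; per-head dim D/n_heads."""
    T, D = x.shape
    d_h = D // n_heads                             # n_heads must divide D
    Q, K, V = x @ W_Q, x @ W_K, x @ W_V            # linear projections
    heads = [attention(Q[:, i*d_h:(i+1)*d_h],
                       K[:, i*d_h:(i+1)*d_h],
                       V[:, i*d_h:(i+1)*d_h]) for i in range(n_heads)]
    return np.concatenate(heads, axis=-1) @ W_O

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))                   # 8 time steps, D = 64
W = [0.1 * rng.standard_normal((64, 64)) for _ in range(4)]
out = multi_head(x, *W, n_heads=4)                 # shape (8, 64)
\end{verbatim}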
The output of the multi-head attention is passed to an LSTM which is used to encode the dynamic interconnection in time sequence.
Both hidden states (the last outputs) of the motion LSTM and the interaction LSTM are concatenated and passed to a \textit{fc} layer for feature fusion, forming the complete output of the X-Encoder, which is denoted as $\Phi_X$.
The Y-Encoder has the same structure as the X-Encoder and is used to encode both the target agent's motion and interaction information from the ground truth during training. The output of the Y-Encoder is denoted as $\Phi_Y$. The dynamic maps are also leveraged in the Y-Encoder; however, they are not reconstructed by the Decoder (only the future trajectories are reconstructed). This extended structure distinguishes our model from the conventional CVAE structure \cite{kingma2013auto,kingma2014semi,sohn2015learning} and the work of \cite{lee2017desire}, where the input and output maintain the same form.
\subsection{Diverse Sample Generation}
\label{subsec:sample}
In the training phase, $\Phi_X$ and $\Phi_Y$ are concatenated and forwarded to two successive \textit{fc} layers followed by the ReLU activation, and then passed to two parallel \textit{fc} layers to produce the mean and standard deviation of the distribution, which are used to re-parameterize $z$ as discussed in Sec.~\ref{subsec:cvae}.
Then, $\Phi_X$ is concatenated with $z$ and fed to the following LSTM-based decoder, which reconstructs $\mathbf{Y}$ sequentially conditioned on $\Phi_X$.
The MSE loss ${\ell}_2 (\mathbf{\hat{Y}}, \mathbf{Y})$ (reconstruction loss) and the $\text{KL}(Q(z|\mathbf{Y}, \mathbf{X})||P(z))$ loss are used to train our model.
The MSE loss forces the reconstructed results to be as close as possible to the ground truth, and the KL-divergence loss forces the set of latent variables $z$ towards a Gaussian distribution.
During inference, Y-Encoder is removed and the X-Encoder works in the same way as in the training phase to extract information from observed trajectories. To generate a future prediction sample, a latent variable $z$ is sampled from $\mathcal{N}(\mathbf{0}, ~I)$ and concatenated with $\Phi_X$ (as condition) as the input of the decoder.
To generate diverse samples, this step is repeated $N$ times to generate $N$ samples of future prediction conditioned on $\Phi_X$.
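The inference-time sampling loop can be sketched as follows; the stub decoder stands in for the trained LSTM decoder and only illustrates the repeated sampling of $z$.
\begin{verbatim}
import numpy as np

def predict_multi_path(phi_x, decoder, n_samples=10, latent_dim=16,
                       rng=np.random.default_rng()):
    """Draw z ~ N(0, I) n_samples times and decode each sample
    conditioned on the X-Encoder output phi_x."""
    preds = []
    for _ in range(n_samples):
        z = rng.standard_normal(latent_dim)
        preds.append(decoder(np.concatenate([phi_x, z])))
    return preds

# Hypothetical usage: a stub decoder returning a 12-step trajectory
phi_x = np.zeros(64)
stub_decoder = lambda cond: np.tile(cond[:2], (12, 1))
trajectories = predict_multi_path(phi_x, stub_decoder)  # 10 samples
\end{verbatim}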
To summarize, the overall pipeline of Attentive Maps Encoder Network (AMENet) consists of four modules, namely, X-Encoder, Y-Encoder, Z-Space and Decoder.
Each of the modules uses different types of neural networks to process the motion information and the dynamic maps information for multi-path trajectory prediction. Fig.~\ref{fig:framework} depicts the pipeline of the framework.
\subsection{Trajectory Ranking}
\label{subsec:ranking}
A bivariate Gaussian distribution is used to rank the multiple predicted trajectories $\hat{Y}^1,\cdots,\hat{Y}^N$ for each agent. At each time step, the predicted positions $({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})$, where $n{\in}N$ and $t'\in T'$ for agent $i$, are used to fit a bivariate Gaussian distribution $\mathcal{N}({\mu}_{xy},\,\sigma^{2}_{xy}, \,\rho)^{t'}$. The predicted trajectories are sorted by their joint probability density functions $p(\cdot)$ over the time axis using Eqs.~\eqref{eq:pdf} and \eqref{eq:sort}. $\widehat{Y}^\ast$ denotes the most-likely prediction out of the $N$ predictions.
\begin{align}
\label{eq:pdf}
P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'}) \approx p[({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})|\mathcal{N}({\mu}_{xy},\sigma^{2}_{xy},\rho)^{t'}]\\
\label{eq:sort}
\widehat{Y}^\ast = \text{arg\,max}\sum_{n=1}^{N}\sum_{t'=1}^{T'}{\log}P({\hat{x}_{i,n}}^{t'}, {\hat{y}_{i,n}}^{t'})
\end{align}
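A minimal sketch of this ranking procedure, assuming SciPy is available for the bivariate Gaussian density, is given below; the small covariance regularization is an implementation assumption.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def rank_trajectories(preds):
    """preds: (N, T', 2) predicted trajectories of one agent.
    Fits a bivariate Gaussian over the N predicted positions at each
    time step and returns the index of the most-likely trajectory,
    cf. Eqs. (4) and (5)."""
    N, T, _ = preds.shape
    log_p = np.zeros(N)
    for t in range(T):
        pts = preds[:, t, :]                       # N positions at step t
        mu = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(2)
        log_p += multivariate_normal(mu, cov).logpdf(pts)
    return int(np.argmax(log_p))
\end{verbatim}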
\section{Experiments}
\label{sec:experiments}
In this section, we will introduce the benchmark which is used to evaluate our method, the evaluation metrics and the comparison of results from our method with the ones from the recent state-of-the-art methods. To further justify how each proposed module in our framework impacts the overall performance, we design a series of ablation studies and discuss the results in details.
\subsection{Trajnet Benchmark Challenge Datasets}
\label{subsec:benchmark}
We verify the performance of the proposed method on the most challenging benchmark, the Trajnet datasets \cite{sadeghiankosaraju2018trajnet}. It is the most popular large-scale trajectory-based activity benchmark in this domain and provides a unified online test system for fair comparison among different submitted methods.
Trajnet covers a wide range of datasets and includes various types of road users (pedestrians, bikers, skateboarders, cars, buses, and golf cars) that navigate in a real world outdoor mixed traffic environment.
The data are collected from 38 scenes with ground truth for training and the ones from the other 20 scenes without ground truth for test (i.\,e., open challenge competition). Each scene presents various traffic density in different space layout for mixed traffic, which makes the prediction task challenging.
This requires a model to have good generalizability in order to adapt to various complex scenes.
Trajectories are recorded as the $xy$ coordinates in meters or pixels projected on a Cartesian space. Each trajectory provides 8 steps for observation and the following 12 steps for prediction. The duration between two successive steps is 0.4 seconds.
However, the pixel coordinates are not in the same scale across the whole dataset. Without uniforming the pixels into the same scale, it is extremely difficult to train a general model for the whole dataset. Hence, we follow all the previous works \cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,gupta2018social,giuliari2020transformer} that use the coordinates in meters.
In order to train and evaluate the proposed method, as well as the ablative studies, 6 different scenes are selected as offline test set from the 38 scenes in the training set.
Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.
The best trained model is selected based on the evaluation performance on the offline test set and then is used for the online evaluation.
Fig.~\ref{fig:trajectories} shows the visualization of the trajectories in each scene.
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{fig/trajectories_bookstore_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{fig/trajectories_coupa_3.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.31\textwidth]{fig/trajectories_deathCircle_0.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.28\textwidth]{fig/trajectories_gates_1.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.52\textwidth]{fig/trajectories_hyang_6.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.27\textwidth]{fig/trajectories_nexus_0.pdf}
\caption{Visualization of each scene of the offline test set. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:trajectories}
\end{figure}
\subsection{Evaluation Metrics}
The mean average displacement error (MAD) and the final displacement error (FDE) are the two most commonly applied metrics to measure the performance in terms of trajectory prediction~\cite{alahi2016social,gupta2018social,sadeghian2018sophie}.
\begin{itemize}
\item MAD is the aligned L2 distance from $Y$ (ground truth) to its prediction $\hat{Y}$ averaged over all time steps. We report the mean value for all the trajectories.
\item FDE is the L2 distance of the last position from $Y$ to the corresponding $\hat{Y}$. It measures a model's ability for predicting the destination and is more challenging as errors accumulate in time.
\end{itemize}
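Both metrics can be computed in a few lines; the sketch below assumes the positions are given in meters.
\begin{verbatim}
import numpy as np

def mad_fde(y_true, y_pred):
    """y_true, y_pred: (T', 2) ground-truth and predicted positions.
    Returns (MAD, FDE): the L2 error averaged over all time steps and
    the L2 error of the last (destination) position."""
    dists = np.linalg.norm(y_true - y_pred, axis=-1)
    return dists.mean(), dists[-1]

t = np.linspace(0.0, 1.0, 12)[:, None]
y_true = np.hstack([t, t])                         # 12-step ground truth
y_pred = y_true + 0.1                              # 0.1 m offset per axis
mad, fde = mad_fde(y_true, y_pred)                 # both ~ 0.141 m
\end{verbatim}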
We evaluate both the most-likely prediction and the best prediction $@top10$ for the multi-path trajectory prediction.
Most-likely prediction is selected by the trajectories ranking mechanism, see Sec~\ref{subsec:ranking}.
$@top10$ prediction means that, out of the 10 predicted trajectories with the highest confidence, the one with the smallest MAD and FDE compared with the ground truth is reported. When the ground truth is not available (for the online test), only the most-likely prediction is selected. Then the task reduces to the single-trajectory prediction problem, as in most of the previous works \cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,giuliari2020transformer}.
\subsection{Quantitative Results and Comparison}
\label{subsec:results}
We compare the performance of our method with the most influential previous work and the recent state-of-the-art works published on the Trajnet challenge (up to 05/06/2020) for trajectory prediction to ensure the fair comparison.
The compared works include: the rule-based model \emph{Social Force}~\cite{helbing1995social};
\emph{Social LSTM}~\cite{alahi2016social} that proposes social pooling with a rectangular occupancy grid for close neighboring agents which is widely adopted thereafter in this domain, e.\,g., state refinement LSTM~\emph{SR-LSTM}~\cite{zhang2019sr} and RNN Encoder-based model \emph{RED}~\cite{becker2018evaluation}; \emph{MX-LSTM}~\cite{hasan2018mx} exploits the head pose information of agents to help analyze its moving intention;
\emph{Social GAN}~\cite{gupta2018social} utilizes the generative model GAN for multi-path trajectory prediction and is one of the works closest to ours (the other one is \emph{DESIRE} \cite{lee2017desire}; however, DESIRE neither reports the online test performance nor releases code, so we do not compare with it for fairness);
\emph{Ind-TF}~\cite{giuliari2020transformer} which utilizes transformer network for this task.
Table~\ref{tb:results} lists the performances from above methods and ours on the Trajnet test set measured by MAD, FDE and overall average $(\text{MAD} + \text{FDE})/2$. The data are originally reported on the Trajnet challenge leader board\footnote{http://trajnet.stanford.edu/result.php?cid=1}. We can see that, our method (AMENet) outperforms the other methods significantly and wins the first place on all metrics.
Even compared with the most recent model Ind-TF \citep{giuliari2020transformer} (under review), our method performs better; in particular, it further reduces the FDE error from 1.197 to 1.183 meters.
Note that our model predicts multiple trajectories by sampling from the learned $z$-space repeatedly, and we select the most-likely prediction using the proposed ranking method discussed in Sec.~\ref{subsec:ranking}. The outstanding performance of our method therefore also demonstrates that the ranking method is effective.
\begin{table}[t!]
\centering
\caption{Comparison between our method and the state-of-the-art works. Smaller values are better. Best values are highlighted in boldface.}
\begin{tabular}{lllll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & MAD [m]$\downarrow$ &Year\\
\hline
Social LSTM~\cite{alahi2016social} & 1.3865 & 3.098 & 0.675 & 2018\\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 & 2018\\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 & 2018\\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 & 1995\\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 & 2019\\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 & 2018\\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} & 2020\\
Ours (AMENet)\tablefootnote{Our method is named \textit{ikg\_tnt} on the leader board.} & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} & 2020 \\
\hline
\end{tabular}
\label{tb:results}
\end{table}
\subsection{Results for Multi-Path Prediction}
\label{subsec:multipath-selection}
Multi-path trajectory prediction is one of the main contributions of this work and essentially distinguishes it from most of the existing works.
Here, we discuss its performance w.\,r.\,t.~multi-path prediction with the latent space.
Instead of generating a single prediction, AMENet generates multiple feasible trajectories by sampling the latent variable $z$ multiple times (see Sec~\ref{subsec:cvae}).
During training, the motion information and interaction information from observation and ground truth are encoded into the so-called Z-Space (see Fig.~\ref{fig:framework}). The KL-divergence loss forces $z$ to be a normal Gaussian distribution.
Fig.~\ref{fig:z_space} visualizes the Z-Space in two dimensions, with $\mu$ on the left and $\log\sigma$ on the right. From the figure we can see that the training phase successfully re-parameterized $z$ into a Gaussian distribution that captures the stochastic properties of agents' behaviors. When the Y-Encoder is not available at inference time, the well-trained Z-Space, in turn, enables us to randomly sample a latent variable $z$ from the Gaussian distribution multiple times for generating more than one feasible future trajectory conditioned on the observation.
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=.6\textwidth]{fig/z_space.pdf}
\caption{Z-Space of two dimensions with $\mu$ visualized on the left and $\log\sigma$ visualized on the right. It is trained to follow a $\mathcal{N}(0, 1)$ distribution. The variance is visualized in logarithmic space and is very close to zero.}
\label{fig:z_space}
\end{figure}
Table~\ref{tb:multipath} shows the quantitative results for multi-path trajectory prediction. Predicted trajectories are ranked by $\text{top}@10$ with the prior knowledge of the corresponding ground truth, and by most-likely ranking if the ground truth is not available. Compared with the most-likely prediction, the $\text{top}@10$ prediction yields similar but better performance. It indicates that: 1) the generated multiple trajectories increase the chance to narrow down the errors from the prediction to the ground truth, and 2) the predicted trajectories are feasible (if not, the bad predictions would deteriorate the overall performance and lead to worse results than the most-likely prediction).
Fig.~\ref{fig:multi-path} showcases some qualitative examples of multi-path trajectory prediction from our model. We can see that in roundabouts, the interactions between different agents are full of uncertainties and each agent has more possibilities to choose its future path. We also notice that the predicted trajectories diverge more widely at further time steps. This is reasonable because the further into the future, the higher the uncertainty of an agent's intention. It also shows that the ability to predict multiple plausible trajectories is important for analyzing the movements of road users because of the increasing uncertainty of the future movements. A single prediction provides limited information for the analysis in this case and is likely to lead to a false conclusion if the prediction is not correct/precise at an early step.
\begin{table}[t!]
\centering
\small
\caption{Evaluation of multi-path trajectory prediction using AMENet on the offline test set of Trajnet. Predicted trajectories are ranked by $\text{top}@10$ (left) and most-likely (right) and are measured by MAD/FDE.}
\begin{tabular}{lll}
\\ \hline
Dataset & Top@10 & Most-likely \\ \hline
bookstore3 & 0.477/0.961 & 0.486/0.979 \\
coupa3 & 0.221/0.432 & 0.226/0.442 \\
deathCircle0 & 0.650/1.280 & 0.659/1.297 \\
gates1 & 0.784/1.663 & 0.797/1.692 \\
hyang6 & 0.534/1.076 & 0.542/1.094 \\
nexus0 & 0.642/1.073 & 0.559/1.109 \\
Average & 0.535/1.081 & 0.545/1.102 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.514\textwidth]{multi_preds/deathCircle_0240.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.476\textwidth]{multi_preds/gates_1001.pdf}
\caption{Multi-path predictions from AMENet.}
\label{fig:multi-path}
\end{figure}
\subsection{Ablation Studies}
\label{sec:ablativemodels}
In order to analyze the impact of each proposed module in our framework, i.\,e.,~dynamic maps, self-attention, and the extended structure of CVAE, three ablative models are carried out.
\begin{itemize}
\item AMENet, the full model of our framework.
\item AOENet, substitutes dynamic maps with occupancy grid \citep{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both X-Encoder and Y-Encoder. This setting is used to validate the contribution from the dynamic maps.
\item MENet, removes the self-attention module in the dynamic maps branch. This setting is used to validate the effectiveness of the self-attention module that learns the spatial interactions among agents along the time axis.
\item ACVAE, only uses dynamic maps in the X-Encoder. It is equivalent to a CVAE~\citep{kingma2013auto,kingma2014semi,sohn2015learning} with self-attention. This setting is used to validate the contribution of the extended structure for processing the dynamic maps information in the Y-Encoder.
\end{itemize}
Table~\ref{tb:resultsablativemodels} shows the quantitative results from the above ablative models.
Errors are measured by MAD/FDE on the most-likely prediction.
By comparing AOENet with AMENet we can see that when we replace the dynamic maps with the occupancy grid, both MAD and FDE increase by a remarkable margin across all the datasets.
It demonstrates that our proposed dynamic maps are more helpful for exploring the interaction information among agents than the occupancy grid.
We can also see that if the self-attention module is removed (MENet), the performance decreases by a remarkable margin across all the datasets.
This phenomenon indicates that the self-attention module is effective in learning the interactions among agents from the dynamic maps.
The comparison between ACVAE and AMENet shows that when we remove the extended structure in the Y-Encoder for the dynamic maps, the performance, especially FDE, decreases significantly across all the datasets.
The extended structure enables the model to process the interaction information also for the future time horizon during training. It improves the model's performance, especially for predicting more accurate destinations. This improvement has also been confirmed by the benchmark challenge (see Table~\ref{tb:results}). One interesting observation from the comparison of ACVAE with AOENet/MENet is that ACVAE performs much better than AOENet and MENet measured by MAD and FDE. This observation further proves that, even without the extended structure in the Y-Encoder, the dynamic maps with self-attention are very beneficial for interpreting the interactions between a target agent and its neighboring agents. Their robustness has been demonstrated by the ablative models across various datasets.
\begin{table}[hbpt!]
\setlength{\tabcolsep}{3pt}
\centering
\small
\caption{Evaluation of dynamic maps, self-attention, and the extended structure of CVAE via AOENet, MENet and ACVAE, respectively, in comparison with the proposed model AMENet. Errors are measured by MAD/FDE on the most-likely prediction. Best values are highlighted in bold face.}
\begin{tabular}{lllll}
\\ \hline
Dataset & AMENet & AOENet & MENet & ACVAE \\ \hline
bookstore3 & \textbf{0.486}/\textbf{0.979} & 0.574/1.144 & 0.576/1.139 & 0.509/1.030 \\
coupa3 & \textbf{0.226}/\textbf{0.442} & 0.260/0.509 & 0.294/0.572 & 0.237/0.464 \\
deathCircle0 & \textbf{0.659}/\textbf{1.297} & 0.726/1.437 & 0.725/1.419 & 0.698/1.378 \\
gates1 & \textbf{0.797}/\textbf{1.692} & 0.878/1.819 & 0.941/1.928 & 0.861/1.823 \\
hyang6 & \textbf{0.542}/\textbf{1.094} & 0.619/1.244 & 0.657/1.292 & 0.566/1.140 \\
nexus6 & \textbf{0.559}/\textbf{1.109} & 0.752/1.489 & 0.705/1.140 & 0.595/1.181 \\
Average & \textbf{0.545}/\textbf{1.102} & 0.635/1.274 & 0.650/1.283 & 0.578/1.169 \\ \hline
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
Fig.~\ref{fig:abl_qualitative_results} showcases some examples of the qualitative results of the full AMENet in comparison with the ablative models in different scenes.
In general, all the models are able to predict trajectories in different scenes, e.\,g.,~intersections and roundabouts, of various traffic density and motion patterns, e.\,g.,~standing still or moving fast. Given a short observation of trajectories, i.\,e.,~8 time steps, all the models are able to capture the general speed and heading direction for agents located in different areas in the space.
AMENet predicts the most accurate trajectories, which are very close to or even completely overlap with the corresponding ground truth trajectories.
Compared with the ablative models, AMENet predicts more accurate destinations (the last position of the predicted trajectories), which is in line with the quantitative results shown in Table~\ref{tb:results}.
One very clear example in \textit{hyang6} (left figure in Fig.~\ref{fig:abl_qualitative_results}, in the third row) shows that, when the fast-moving agent changes its motion, AOENet and MENet have limited performance in predicting its travel speed and ACVAE has limited performance in predicting its destination. On the other hand, the prediction from AMENet is very close to the ground truth.
Nevertheless, our models have limited performance in predicting abnormal trajectories, such as suddenly turning around or changing speed drastically. Such scenarios can be found in the lower right corner in \textit{gates1} (right figure in Fig.~\ref{fig:abl_qualitative_results}, in the second row). Sudden maneuvers of agents are very difficult to forecast even for human observers.
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.45\textwidth]{scenarios/bookstore_3290.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.54\textwidth]{scenarios/coupa_3327.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.31\textwidth]{scenarios/deathCircle_0000.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.28\textwidth]{scenarios/gates_1001.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.52\textwidth]{scenarios/hyang_6209.pdf}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.27\textwidth]{scenarios/nexus_0038.pdf}
\caption{Trajectories predicted by AMENet (AME), AOENet (AOE), MENet (ME), ACVAE (CVAE) and the corresponding ground truth (GT) trajectories in different scenes. From top left to bottom right they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.}
\label{fig:abl_qualitative_results}
\end{figure}
\subsection{Extensive Studies on Benchmark InD}
\label{subsec:InD}
To further investigate the performance of our methods, we carry out extensive experiments on a newly published large-scale benchmark InD\footnote{\url{https://www.ind-dataset.com/}}.
It consists of 33 datasets and was collected using drones at four very busy intersections (as shown in Fig.~\ref{fig:qualitativeresultsInD}) in Germany in 2019 by Bock et al. \cite{inDdataset}.
Different from Trajnet, where most of the environments (i.\,e.,~shared spaces \cite{reid2009dft,robicquet2016learning}) are pedestrian-friendly, the interactions in InD are dominated more by vehicles. This makes the prediction task more challenging due to the very different travel speeds of pedestrians and vehicles, and their direct interactions.
We follow the same processing format as the Trajnet benchmark to down sample the time steps of InD from video frame rate \SI{25}{fps} to \SI{2.5}{fps}, or 0.4 seconds for each time step. We obtain the same sequence length (8 time steps) of each trajectory for observation and 12 time steps for prediction. One third of all the datasets from each intersection are selected for testing the performance of AMENet and the remaining datasets are used for training.
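For illustration, the downsampling step described above can be sketched in a few lines of Python (the function name and array shapes are our own illustrative assumptions, not the released pre-processing code):
\begin{verbatim}
import numpy as np

def downsample_trajectory(traj_25fps, factor=10):
    # Keep every `factor`-th frame: 25 fps -> 2.5 fps, i.e., 0.4 s per step.
    return traj_25fps[::factor]

# Example: 8 seconds recorded at 25 fps yields 200 frames of (x, y) positions;
# after downsampling there are 20 steps: 8 for observation, 12 for prediction.
traj = np.zeros((200, 2))
traj_2p5fps = downsample_trajectory(traj)   # shape (20, 2)
\end{verbatim}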
Table~\ref{tb:resultsInD} lists the performance of our method measured by MAD and FDE for each intersection and the overall average errors. We can see that our method is still able to generate feasible trajectories and achieves good results (average errors of 0.731/1.587, which are only slightly inferior to the ones obtained on Trajnet (0.545/1.102)).
\begin{table}[t]
\centering
\small
\caption{Quantitative results of AMENet on InD measured by MAD/FDE, and the average performance across all datasets.}
\begin{tabular}{lll}
\hline
InD & Top@10 & Most-likely \\ \hline
Intersection-A & 0.952/1.938 & 1.070/2.216 \\
Intersection-B & 0.585/1.289 & 0.653/1.458 \\
Intersection-C & 0.737/1.636 & 0.827/1.868 \\
Intersection-D & 0.279/0.588 & 0.374/0.804 \\
Average & 0.638/1.363 & 0.731/1.587 \\ \hline
\end{tabular}
\label{tb:resultsInD}
\end{table}
\begin{figure} [t]
\centering
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/06_Trajectories020_12.pdf}
\caption{\small{Intersection-A}}
\label{subfig:Intersection-A}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/14_Trajectories030_12.pdf}
\caption{\small{Intersection-B}}
\label{subfig:Intersection-B}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/27_Trajectories011_12.pdf}
\caption{\small{Intersection-C}}
\label{subfig:Intersection-C}
\end{subfigure}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0in 0in 0in 0in, width=1\linewidth]{InD/32_Trajectories019_12.pdf}
\caption{\small{Intersection-D}}
\label{subfig:Intersection-D}
\end{subfigure}\hfill
\caption{\small{Benchmark InD: examples for predicting trajectories of mixed traffic in different intersections.}}
\label{fig:qualitativeresultsInD}
\end{figure}
\subsection{Studies on Long-Term Trajectory Prediction}
\label{subsec:longterm}
In this section, we investigate the model's performance on predicting long-term trajectories in real-world mixed traffic situations in different intersections.
Since the Trajnet benchmark (see Sec~\ref{subsec:benchmark}) only provides trajectories of 8 time steps for observation and 12 time steps for prediction, we instead use the newly published large-scale open-source dataset InD\footnote{\url{https://www.ind-dataset.com/}} for this task. InD was collected using drones at four different intersections in Germany for mixed traffic in 2019 by Bock et al. \cite{inDdataset}. In total, there are 33 datasets from different intersections.
We follow the same processing format as the Trajnet benchmark to down sample the time steps of InD from video frame rate \SI{25}{fps} to \SI{2.5}{fps}, or 0.4 seconds for each time step. We obtain the same sequence length (8 time steps) of each trajectory for observation and up to 32 time steps for prediction. One third of all the datasets from each intersection are selected for testing the performance of AMENet on long-term trajectory prediction and the remaining datasets are used for training.
Fig.~\ref{fig:AMENet_MAD} shows the trend of errors measured by MAD, FDE and the number of collisions in relation to time steps. The performance of AMENet at time step 12 is comparable with the performance on the Trajnet datasets for both $\text{top}@10$ and most-likely prediction. On the other hand, the errors measured by MAD and FDE increase with the number of time steps. Behaviors of road users become more unpredictable over time, and predicting long-term trajectories based only on a short observation is more challenging than predicting short-term ones. One interesting observation is the performance measured by the number of collisions or invalid predicted trajectories. Overall, the number of collisions is relatively small and increases with the number of time steps for the $\text{top}@10$ prediction. However, the most-likely prediction leads to fewer collisions and demonstrates no consistent ascending trend regarding the time steps. One possible explanation could be that the $\text{top}@10$ prediction is selected by comparing with the corresponding ground truth, without any consideration of collisions. On the other hand, the most-likely prediction selects the average prediction out of multiple predictions for each trajectory using a bivariate Gaussian distribution (see Sec~\ref{subsec:multipath-selection}). This majority-voting mechanism yields better results regarding safety than merely selecting the best prediction based on the distance to the ground truth.
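The collision metric is not formally defined in this section; a minimal sketch, under the assumption that a collision is counted whenever two predicted agents come closer than a distance threshold at the same step, could look as follows (the threshold value is an assumption):
\begin{verbatim}
import numpy as np

def count_collisions(pred_trajs, threshold=0.1):
    # pred_trajs: (num_agents, T, 2) predicted positions of coexisting agents.
    # A pair collides if their distance falls below `threshold` (in meters,
    # an assumed value) at any common time step.
    n = pred_trajs.shape[0]
    collisions = 0
    for i in range(n):
        for j in range(i + 1, n):
            dists = np.linalg.norm(pred_trajs[i] - pred_trajs[j], axis=-1)
            collisions += int(np.any(dists < threshold))
    return collisions
\end{verbatim}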
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_MAD.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_FDE.pdf}
\includegraphics[trim=0.5cm 0cm 1cm 0cm, width=0.495\textwidth]{time_axis_1/AMENet_collision.pdf}
\caption{AMENet tested on InD for different predicted sequence lengths measured by MAD, FDE and number of collisions, respectively.}
\label{fig:AMENet_MAD}
\end{figure}
Fig.~\ref{fig:qualitativeresults} shows the qualitative performance of AMENet for predicting long-term trajectories
in the big intersection with weakened traffic regulations in InD. From Scenario-A in the left column we can see that AMENet generates accurate predictions for 12 and 16 time steps (visualized in the first two rows) for the two pedestrians. When they encounter each other at 20 time steps (third row), the model correctly predicts that the left pedestrian yields. But the predicted trajectories slightly deviate from the ground truth and lead to a very close interaction. With a further increment of time steps, the prediction is less accurate regarding travel speed and heading direction. From Scenario-B in the right column we can see similar performance. The model has limited performance for fast-moving agents, i.\,e.,~the vehicle in the middle of the street.
\begin{figure} [bpht!]
\centering
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_12.pdf}
\label{subfig:s-a-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_12.pdf}
\label{subfig:s-b-12}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_16.pdf}
\label{subfig:s-a-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_16.pdf}
\label{subfig:s-b-16}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_20.pdf}
\label{subfig:s-a-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_20.pdf}
\label{subfig:s-b-20}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_24.pdf}
\label{subfig:s-a-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_24.pdf}
\label{subfig:s-b-24}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/29_Trajectories052_28.pdf}
\label{subfig:s-a-28}
\includegraphics[clip=true,trim=0.2in 0.35in 0.1in 0.2in, width=0.495\linewidth]{time_axis_1/27_Trajectories046_28.pdf}
\label{subfig:s-b-28}
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/29_Trajectories052_32.pdf}
\caption{\small{Scenario-A 12 to 32 steps}}
\label{subfig:s-a-32}
\end{subfigure}\hfill
\begin{subfigure}{0.495\textwidth}
\includegraphics[clip=true,trim=0.2in 0.05in 0.1in 0.2in, width=\linewidth]{time_axis_1/27_Trajectories046_32.pdf}
\caption{\small{Scenario-B 12 to 32 steps}}
\label{subfig:s-b-32}
\end{subfigure}
\caption{\small{Examples for predicting different sequence lengths in Scenario-A (left column) and Scenario-B (right column). From top to bottom rows the prediction lengths are 12, 16, 20, 24, 28 and 32 time steps. The observation sequence lengths are 8 time steps.}}
\label{fig:qualitativeresults}
\end{figure}
To summarize, long-term trajectory prediction based on a short observation is extremely challenging. The behaviors of different road users become more unpredictable with the increase of time. In future work, in order to push the time horizon beyond 12 steps or 4.8 seconds, extra information may be required to update the observation and improve the performance of the model. For example, if the new positions of the agents are acquired at later time steps, the observation time horizon can be shifted accordingly. This is similar to the mechanism of the Kalman filter \cite{kalman1960new}, where the prediction is calibrated by the newly available observation to improve performance for long-term trajectory prediction.
\section{Conclusions}
In this paper, we present a generative model called Attentive Maps Encoder Networks (AMENet) that uses motion information and interaction information for multi-path trajectory prediction of mixed traffic in various real-world environments.
The latent space learnt by the X-Encoder and Y-Encoder for both sources of information enables the model to capture the stochastic properties of motion behaviors for predicting multiple plausible trajectories after a short observation time.
We propose a novel way---dynamic maps---to extract the social effects between agents during interactions. The dynamic maps capture accurate interaction information by encoding the neighboring agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of interactions over different time steps.
The efficacy of the model has been validated on the most challenging benchmark Trajnet that contains various datasets in various real-world environments. Our model not only achieves the state-of-the-art performance, but also wins the first place on the leader board for predicting 12 time-step positions of 4.8 seconds.
Each component of AMENet is validated via a series of ablative studies.
In order to further investigate the model's performance on long-term trajectory prediction, the newly published benchmark InD is utilized. The performance of AMENet at 12 time steps is on a par with the one on Trajnet, but slowly degrades up to 32 time-step positions of 12.8 seconds.
In the future work, we will extend our prediction model for safety prediction, for example, using the predicted trajectories to calculate time-to-collision \cite{perkins1968traffic} and detecting abnormal trajectories by comparing the anticipated/predicted trajectories with the actual ones.
\section{Introduction}
\label{sec:introduction}
Accurate trajectory prediction is a crucial task in different communities, such as intelligent transportation systems (ITS) for traffic management and autonomous driving~\cite{morris2008survey,cheng2018modeling,cheng2020mcenet},
photogrammetry mapping and extraction~\cite{schindler2010automatic,klinger2017probabilistic,cheng2018mixed,ma2019deep},
computer vision~\cite{alahi2016social,mohajerin2019multi} and mobile robot applications~\cite{mohanan2018survey}.
It enables an intelligent system to foresee the behaviors of road users and make a reasonable and safe decision for the next operation.
It is defined as the prediction of plausible (e.\,g.,~collision free and energy efficient) and socially-acceptable (e.\,g.,~considering social rules, norms, and relations between agents) positions in 2D/3D of non-erratic target agents (pedestrians, cyclists, vehicles and other types~\cite{rudenko2019human}) at each step within a predefined future time interval relying on observed partial trajectories over a certain period of discrete time steps~\cite{helbing1995social,alahi2016social}.
A prediction process in mixed traffic is exemplified in Fig.~\ref{fig:sketch_map}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.6in 5.8in 2.6in 0.6in, width=\textwidth]{fig/model_example.pdf}
\caption{Predicting future positions of agents (e.\,g.,~target agent in black) at each step within a predefined time interval by observing their past trajectories in mixed traffic situations.}
\label{fig:sketch_map}
\end{figure}
How to effectively predict accurate trajectories for heterogeneous agents still remains challenging due to: 1) the complex behavior and uncertain moving intention of each agent, 2) the presence and interactions between agents, and 3) multi-path choices: there is usually more than one socially-acceptable path that an agent could use in the future.
Boosted by Deep Learning (DL)~\cite{lecun2015deep} technologies and the availability of large-scale real-world datasets and benchmarks, recent methods utilizing Recurrent Neural Networks (RNNs) and/or Convolutional Neural Networks (CNNs) have made significant progress in modeling the interactions between agents and predicting their future trajectories~\cite{alahi2016social,lee2017desire,vemula2018social,gupta2018social,xue2018ss,cheng2020mcenet}.
However, it is difficult for those methods to distinguish the effects of heterogeneous neighboring agents in different situations. For example, the target vehicle is affected more by the pedestrians in front of it tending to cross the road than by the following vehicles.
Besides, minimizing the Euclidean distance between the ground truth and the prediction is commonly used as the objective function in some discriminative models~\cite{vemula2018social,xue2018ss}, which produces a deterministic outcome and is likely to predict the ``average'' trajectories.
In this regard, generative models \cite{goodfellow2014generative,kingma2014auto,kingma2014semi,sohn2015learning} are proposed for predicting multiple socially-acceptable trajectories~\cite{lee2017desire,gupta2018social}.
In spite of the great progress, most of these methods are designed for homogeneous agents (e.\,g.,~only pedestrians).
An important research question remains open: how to predict accurate trajectories in different scenes for all the various types of heterogeneous agents?
To address this problem, we propose a model named \emph{Attentive Maps Encoder Network} (AMENet). It inherits the ability of deep conditional generative models~\cite{sohn2015learning} using Gaussian latent variables for modeling complex future trajectories and learns the interactions between agents by attentive dynamic maps.
The interaction module manipulates the information extracted from the neighboring agents' orientation, speed and position in relation to the target agent at each step and the attention mechanism~\cite{vaswani2017attention} enables the module to automatically focus on the salient features extracted over different steps.
Fig.~\ref{fig:framework} gives an overview of the model.
Two encoders learn the representations of an agent's behavior into a latent space: the X-Encoder learns the information from the observed trajectories, while the Y-Encoder learns the information from the future trajectories of the ground truth and is removed in the inference phase.
The Decoder is trained to predict the future trajectories conditioned on the information learned by the X-Encoder and the representations sampled from the latent space.
The main contributions of this study are summarized as follows:
\begin{itemize}[nosep]
\item[1] The generative framework AMENet encodes uncertainties of an agent's behavior into the latent space and predicts multi-path trajectories.
\item[2] A novel module, \emph{attentive dynamic maps}, learns spatio-temporal interconnections between agents considering their orientation, speed and position.
\item[3] It predicts accurate trajectories for heterogeneous agents in various unseen real-world environments, rather than focusing on homogeneous agents.
\end{itemize}
The efficacy of AMENet is validated using the recent benchmarks \emph{Trajnet}~\cite{sadeghiankosaraju2018trajnet} that contains 20 unseen scenes in various environments and InD~\cite{inDdataset} of four different intersections for trajectory prediction. Each module of AMENet is validated via a series of ablation studies. Its detailed implementation information is given in Appendix~C and the source code is available at \url{https://github.com/haohao11/AMENet}.
\section{Related Work}
Trajectory prediction has been studied for decades and we discuss the most relevant works with respect to sequence prediction, interaction modeling and generative models for multi-path prediction.
\subsection{Sequence Modeling}
\label{sec:rel-seqmodeling}
Modeling trajectories as sequences is one of the most common approaches. The 2D/3D positions of an agent are predicted step by step.
The widely applied models include linear regression and Kalman filter~\cite{harvey1990forecasting}, Gaussian processes~\cite{tay2008modelling} and Markov decision processes~\cite{kitani2012activity}.
These traditional methods largely rely on the quality of manually designed features and have limited performance in tackling large-scale data.
Benefiting from the development of DL technologies \cite{lecun2015deep} in recent years, RNNs and Long Short-Term Memories (LSTMs)~\cite{hochreiter1997long} are inherently designed for sequence prediction tasks and have been successfully applied for predicting trajectories of pedestrians~\cite{alahi2016social,gupta2018social,sadeghian2018sophie,zhang2019sr} and other types of road users~\cite{mohajerin2019multi,chandra2019traphic,tang2019multiple}.
In this work, we use LSTMs to encode the temporal sequential information and decode the learned features to predict trajectories in sequence.
\subsection{Interaction Modeling}
\label{sec:rel-intermodeling}
The behavior of an agent can be crucially affected by others. Therefore, effectively modeling the interactions is important for accurate trajectory prediction.
The negotiation between road agents is simulated by Game Theory~\cite{johora2020agent} or Social Forces~\cite{helbing1995social}, i.\,e.,~the repulsive force for collision avoidance and the attractive force for social connections.
Such rule-based interaction modelings have been incorporated into DL models. Social LSTM~\cite{alahi2016social} proposes an occupancy grid to map the positions of close neighboring agents and uses a Social pooling layer to encode the interaction information for trajectory prediction. Many works design their specific ``occupancy'' grid~\cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}.
Cheng~et al.~\cite{cheng2020mcenet} consider the interactions between individual and group agents with social connections and report better performance.
Meanwhile, different pooling mechanisms are proposed. The generative adversarial network (GAN)~\cite{goodfellow2014generative} based model Social GAN~\cite{gupta2018social} embeds relative positions between the target and all the other agents and then uses an element-wise pooling to extract the interaction between all the pairs of agents;
The SR-LSTM (States Refinement LSTM) model~\cite{zhang2019sr} proposes a states refinement module for aligning all the agents together and adaptively refines the state of each agent through a message passing framework.
However, only the position information is leveraged in most of the above DL models.
The interaction dynamics are not fully captured both in spatial and temporal domains.
\subsection{Modeling with Attention}
\label{sec:rel-attention}
Attention mechanisms~\cite{bahdanau2014neural,xu2015show,vaswani2017attention} have been utilized to extract semantic information for predicting trajectories \cite{varshneya2017human,sadeghian2018sophie,al2018move,giuliari2020transformer}.
A soft attention mechanism~\cite{xu2015show} is incorporated in LSTMs to learn the spatio-temporal patterns from the position coordinates~\cite{varshneya2017human}.
SoPhie~\cite{sadeghian2018sophie} applies two separate soft attention modules: the physical attention learns salient agent-to-scene features and the social attention models agent-to-agent interactions.
In the MAP model (Move, Attend, and Predict)~\cite{al2018move}, an attentive network is implemented to learn the relationships between the location and time information.
The most recent work Ind-TF~\cite{giuliari2020transformer} utilizes the Transformer network~\cite{vaswani2017attention} for modeling trajectory sequences. The Transformer is a neural network structure for modeling sequences that is widely applied in machine translation for sequence prediction.
In this work, we model the dynamic interactions among all road users by utilizing the self-attention mechanism~\cite{vaswani2017attention} along the time axis.
\subsection{Generative Models}
\label{sec:rel-generative}
Nowadays, in the era of DL, GAN~\cite{goodfellow2014generative}, VAE~\cite{kingma2014auto} and the variants such as CVAE~\cite{kingma2014semi,sohn2015learning}, are the most popular generative models.
Gupta~et al.~\cite{gupta2018social} train a generator to generate future trajectories from noise and a discriminator to judge whether the generated ones are fake or not. The performances of the two modules are mutually enhanced and the generator is able to generate trajectories that are as precise as the real ones. Amirian~et al.~\cite{amirian2019social} propose a GAN-based model for generating multiple plausible trajectories for pedestrians.
The CVAE model is used to predict multi-path trajectories conditioned on the observed trajectories~\cite{lee2017desire}, as well as scene context~\cite{cheng2020mcenet}.
Besides the generative models, Makansi et al.~\cite{makansi2019overcoming} treat the multi-path trajectory prediction problem as the estimation of multi-modal distributions. Their method first predicts multi-modal distributions with an evolving strategy by combining a Winner-Takes-All loss~\cite{guzman2012multiple}, and then fits a distribution to the samples from the first stage for trajectory prediction.
In this work, we incorporate a CVAE module to learn a latent space for predicting multiple plausible future trajectories conditioned on the observed trajectories.
Our work essentially differs from the above models in the following ways. Interactions are modeled by the dynamic maps, which 1) consider not only position, but also orientation and speed, and 2) are automatically extracted with the self-attention mechanism; moreover, 3) the interactions associated with the ground truth are also encoded into the latent space, which is different from a conventional CVAE model that only ``auto-encodes'' the ground truth trajectories~\cite{lee2017desire}.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 3.5in 0in 0.5in, width=1\textwidth]{fig/model_framework.pdf}
\caption{An overview of the proposed framework. It consists of four modules: the X-Encoder and Y-Encoder are used for encoding the observed and the future trajectories, respectively. They have an identical structure. The Sample Generator produces diverse samples conditioned on the input of the previous encoders. The Decoder is used to decode the features from the produced samples and predicts the future trajectories sequentially. FC stands for fully connected layer. The specific structure of the X-Encoder/Y-Encoder is given by Fig.~\ref{fig:encoder}.}
\label{fig:framework}
\end{figure}
\section{Methodology}
\label{sec:methodology}
In this section, we introduce the proposed model AMENet (Fig.~\ref{fig:framework}) in detail in the following structure: a brief review on the \emph{CVAE} (Sec.~\ref{subsec:cvae}), the detailed structure of \emph{AMENet} (Sec.~\ref{subsec:AMENet}) and the \emph{Feature Encoding} (Sec.~\ref{subsubsec:featureencoding}) of the motion input and the attentive dynamic maps.
\subsection{Diverse Sample Generation with CVAE}
\label{subsec:cvae}
The CVAE model is an extension of the generative model VAE \cite{kingma2014auto} that introduces a condition to control the output \cite{kingma2014semi}. More details of the theory are provided in Appendix~A. The following describes the basics of the CVAE model.
Given a set of samples $(\boldsymbol{X, Y})=((\boldsymbol{X}_1, \boldsymbol{Y}_1 ),\cdots,(\boldsymbol{X}_N, \boldsymbol{Y}_N))$, it jointly learns a recognition model $q_\phi(\mathbf{z}|\boldsymbol{Y},\,\boldsymbol{X})$ of a variational approximation of the true posterior $p_\theta(\mathbf{z}|\boldsymbol{Y},\,\boldsymbol{X})$ and a generation model $p_\theta(\boldsymbol{Y}|\boldsymbol{X}, \,\boldsymbol{z})$ for predicting the output $\boldsymbol{Y}$ conditioned on the input $\boldsymbol{X}$. $\boldsymbol{z}$ are the stochastic latent variables, $\phi$ and $\theta$ are the respective recognition and generative parameters. The goal is to maximize the \textit{Conditional Log-Likelihood}:
\begin{equation}
\begin{split}
\log{p_\theta(\boldsymbol{Y}|\boldsymbol{X})} &= \log\sum_{\boldsymbol{z}} p_\theta(\boldsymbol{Y}, \boldsymbol{z}|\boldsymbol{X})\\
&= \log{(\sum_{\boldsymbol{z}} q_\phi(\boldsymbol{z}|\boldsymbol{X}, \boldsymbol{Y})\frac{p_\theta(\boldsymbol{Y}|\boldsymbol{X}, \boldsymbol{z})p_\theta(\boldsymbol{z}|\boldsymbol{X})}{q_\phi(\boldsymbol{z}|\boldsymbol{X}, \boldsymbol{Y})})}.
\end{split}
\end{equation}
By means of Jensen's inequality, the evidence lower bound can be obtained:
\begin{equation}
\label{eq:CVAE}
\log{p_\theta(\boldsymbol{Y}|\boldsymbol{X}}) \geq
-D_{KL}(q_\phi(\mathbf{z}|\boldsymbol{X}, \,\boldsymbol{Y})||p_\theta(\mathbf{z}))
+ \mathbb{E}_{q_\phi(\mathbf{z}|\boldsymbol{X}, \,\boldsymbol{Y})}
[\log p_\theta(\boldsymbol{Y}|\boldsymbol{X}, \,\mathbf{z})].
\end{equation}
Here both the approximate posterior $q_\phi(\mathbf{z}|\boldsymbol{X}, \,\boldsymbol{Y})$ and the prior $p_\theta(\mathbf{z})$ are assumed to be Gaussian distributions for an analytical solution \cite{kingma2014auto}. During training, the Kullback-Leibler divergence $D_{KL}(\cdot)$ pushes the approximate posterior to the prior distribution $p_\theta(\mathbf{z})$. The generation error $\mathbb{E}_{q_\phi(\mathbf{z}|\boldsymbol{X}, \,\boldsymbol{Y})}(\cdot)$ measures the distance between the generated output and the ground truth. During inference, for a given observation $\boldsymbol{X}_i$, one latent variable $z$ is drawn from the prior distribution $p_\theta(\mathbf{z})$, and one of the possible outputs $\hat{\boldsymbol{Y}}_i$ is generated from the distribution $p_\theta(\boldsymbol{Y}_i|\boldsymbol{X}_i,\,z)$. The latent variables $\mathbf{z}$ allow for the one-to-many mapping from the condition to the output via multiple sampling.
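As a concrete illustration of Eq.~(\ref{eq:CVAE}), a minimal PyTorch-style sketch of the resulting training loss, assuming diagonal Gaussians and an MSE reconstruction term (the variable names are ours, not those of the released code), is:
\begin{verbatim}
import torch

def cvae_loss(y_pred, y_true, mu, logvar):
    # Negative ELBO: closed-form KL(q(z|X,Y) || N(0, I)) for diagonal
    # Gaussians plus a mean-squared-error reconstruction term.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    rec = torch.sum((y_pred - y_true) ** 2, dim=(-2, -1))
    return (kl + rec).mean()
\end{verbatim}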
\subsection{Attentive Encoder Network for Trajectory Prediction}
\label{subsec:AMENet}
In tasks like trajectory prediction, we are interested in modeling a conditional distribution $p_\theta(\boldsymbol{Y}_n|\boldsymbol{X})$, where $\boldsymbol{X}$ is the past trajectory information and $\boldsymbol{Y}_n$ is one of its possible future trajectories.
In order to realize this goal that generates controllable samples of future trajectories based on past trajectories, a CVAE module is adopted inside our framework.
The multi-path trajectory prediction problem with the consideration of motion and interaction information is defined as follows: agent $i$ receives as input its observed trajectories $\boldsymbol{X}_i=\{X_i^1,\cdots,X_i^T\}$ for predicting its $n$-th plausible future trajectory $\hat{\boldsymbol{Y}}_{i,n}=\{\hat{Y}_{i,n}^1,\cdots,\hat{Y}_{i,\,n}^{T'}\}$. $T$ and $T'$ denote the total number of steps of the past and future trajectories, respectively. The trajectory position of $i$ is characterized by the coordinates as $X_i^t=({x_i}^t, {y_i}^t)$ at step $t$ or as $\hat{Y}_{i,n}^{t'}=(\hat{x}_{i,n}^{t'}, \hat{y}_{i,n}^{t'})$ at step $t'$. 3D coordinates are also possible, but in this work only 2D coordinates are considered. The objective is to predict multiple plausible future trajectories $(\hat{\boldsymbol{Y}}_{i,1},\cdots,\hat{\boldsymbol{Y}}_{i,N})$ that are as close as possible to the ground truth $\boldsymbol{Y}_i$. This task is mathematically defined as $\hat{\boldsymbol{Y}}_{i,\,n} = f(\boldsymbol{X}_i, \text{AMap}_i)$ with $n \leq N$. Here, $N$ denotes the total number of the predicted trajectories and $\text{AMap}_i$ denotes the attentive dynamic maps centralized on the target agent for mapping the interactions with its neighboring agents over the steps. More details of the attentive dynamic maps are given in Sec.~\ref{subsubsec:dynamic}.
We extend the CVAE model as follows to solve this problem:
\begin{equation}
\begin{split}
f(\boldsymbol{X}_i, \text{AMap}_i) &= \log{p_\theta(\boldsymbol{Y}_i|\boldsymbol{X}_i}, \text{AMap}_i),\\
&\geq -D_{KL}(q_\phi(\mathbf{z}|\boldsymbol{X}_i,\,\boldsymbol{Y}_i,\,\text{AMap}_i)||p_\theta(\mathbf{z}))\\
&+ \mathbb{E}_{q_\phi(\mathbf{z}|\boldsymbol{X}_i,\,\boldsymbol{Y}_i,\,\text{AMap}_i)}
[\log p_\theta(\boldsymbol{Y}_i|\boldsymbol{X}_i, \text{AMap}_i,\,\mathbf{z})].
\end{split}
\end{equation}
Note that for simplicity, the notation of steps $T$ and $T'$ is omitted. $q_\phi(\cdot)$ accesses the interactions captured by $(\text{AMap}_i)_{t=1}^{T}$ and $(\text{AMap}_i)_{t'=1}^{T'}$, respectively, from both the observation and the future time, while $p_\theta(\cdot)$ only accesses the interactions captured by $(\text{AMap}_i)_{t=1}^{T}$ from the observation time.
In the training phase, $q_\phi(\cdot)$ and $p_\theta(\cdot)$ are jointly learned. The recognition model is trained via the X-Encoder and Y-Encoder.
The encoded outputs from both encoders are concatenated and then forwarded to two side-by-side fully connected (FC) layers to produce the mean and the standard deviation of the latent variables $\mathbf{z}$. The generation model is trained via the Decoder. It takes the output of the X-Encoder as the condition and the latent variables to generate the future trajectory. We employ an LSTM network in the Decoder for predicting the future trajectory sequentially.
The Mean Squared Error (MSE) between the predicted trajectories and the ground-truth ones is used as the reconstruction loss. During inference, the Y-Encoder is removed and the X-Encoder works in the same way as in the training phase. The Decoder generates a prediction conditioned on the output of the X-Encoder and the sampled latent variable $z$. This step is repeated $N$ times to predict multiple trajectories.
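The sampling step can be sketched as follows (a simplified illustration; \texttt{decoder} and \texttt{x\_feature} stand in for the trained modules and tensors):
\begin{verbatim}
import torch

def sample_latent(mu, logvar):
    # Training: reparameterization trick, z = mu + sigma * eps, eps ~ N(0, I),
    # so that gradients can flow through the sampling operation.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

# Inference: the Y-Encoder is removed and z is drawn from the prior N(0, I);
# decoding is repeated N times to obtain multiple plausible trajectories.
# predictions = [decoder(x_feature, torch.randn(batch, z_dim))
#                for _ in range(N)]
\end{verbatim}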
Fig.~\ref{fig:encoder} shows the detailed structure of the X-Encoder/Y-Encoder, which are designed for learning the information from the motion input and the attentive dynamic maps.
\begin{figure}[t!]
\centering
\includegraphics[trim=0.5in 2.2in 3.6in 0.5in, width=0.75\textwidth]{fig/model_encoder.pdf}
\caption{The structure of the X-Encoder. The upper branch extracts motion information of target agents and the lower one learns the interaction information between neighboring agents from the dynamic maps over time attentively. The motion information and the interaction information are encoded by their respective LSTMs sequentially. The last outputs of the two LSTMs are concatenated and forwarded to a fully connected (FC) layer to get the final output of the X-Encoder. The Y-Encoder has the same structure as the X-Encoder.}
\label{fig:encoder}
\end{figure}
The X-Encoder is used to encode the past information. It has two branches in parallel to process the motion information (upper branch) and dynamic maps information for interaction (lower branch).
The upper branch takes the offsets $({\Delta{x}_i}^t, {\Delta{y}_i}^t)_{t=1}^{T}$ for each target agent over the observed steps. The motion information is first passed to a 1D convolutional layer (Conv1D) with a one-step stride along the time axis to learn motion features one step after another. Then the output is sequentially passed to an FC layer and an LSTM module for encoding the temporal features into a hidden state, which contains all the motion information of the target agent.
The lower branch takes the dynamic maps $({\text{Map}}_i^t)_{t=1}^T$ as input.
The interaction information at each step is passed through a 2D convolutional layer (Conv2D) with the ReLU activation and a Max Pooling layer (MaxPool) for learning the spatial features among all the agents. The output of MaxPool at each step is flattened and concatenated along the time axis to form a time-distributed feature vector. Then, the feature vector is fed forward to the attention layer for learning the interaction information.
The output of the attention layer is passed to an LSTM that encodes the dynamic interconnections in the sequence.
Both the hidden states (the last outputs) of the motion LSTM and the interaction LSTM are concatenated and passed to an FC layer for feature fusion, giving the complete output of the X-Encoder.
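A condensed PyTorch sketch of this two-branch structure is given below; the layer sizes and the single-head attention layer are our own assumptions for illustration and do not reproduce the released configuration:
\begin{verbatim}
import torch
import torch.nn as nn

class XEncoder(nn.Module):
    # Motion branch: Conv1D -> FC -> LSTM.
    # Map branch: Conv2D -> MaxPool -> flatten -> self-attention -> LSTM.
    def __init__(self, hid=64, W=32, H=32):
        super().__init__()
        d_map = 8 * (W // 2) * (H // 2)
        self.conv1d = nn.Conv1d(2, hid, kernel_size=1)
        self.fc_motion = nn.Linear(hid, hid)
        self.lstm_motion = nn.LSTM(hid, hid, batch_first=True)
        self.conv2d = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.attn = nn.MultiheadAttention(d_map, num_heads=1,
                                          batch_first=True)
        self.lstm_map = nn.LSTM(d_map, hid, batch_first=True)
        self.fc_out = nn.Linear(2 * hid, hid)

    def forward(self, offsets, maps):
        # offsets: (B, T, 2); maps: (B, T, 3, W, H)
        m = torch.relu(self.conv1d(offsets.transpose(1, 2))).transpose(1, 2)
        m, _ = self.lstm_motion(torch.relu(self.fc_motion(m)))
        B, T = maps.shape[:2]
        g = torch.relu(self.conv2d(maps.flatten(0, 1)))
        g = self.pool(g).flatten(1).view(B, T, -1)
        g, _ = self.attn(g, g, g)          # attention over time steps
        g, _ = self.lstm_map(g)
        return self.fc_out(torch.cat([m[:, -1], g[:, -1]], dim=-1))
\end{verbatim}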
The Y-Encoder has the same structure as the X-Encoder and is used to encode both the target agent's motion and interaction information from the ground truth during training. The dynamic maps are also leveraged in the Y-Encoder, although they are not reconstructed by the Decoder (only the future trajectories are reconstructed). This extended structure distinguishes our model from the conventional CVAE structure~\cite{kingma2014auto,kingma2014semi,sohn2015learning} and the work from~\cite{lee2017desire}, in which only the ground truth trajectories are inserted for training the recognition model (see Sec.~\ref{subsec:cvae}).
For tasks of single-path prediction, such as the Trajnet challenge or path planning, a ranking strategy is proposed to select the \textit{most-likely} predicted trajectory out of the multiple predictions.
We apply a bivariate Gaussian distribution to rank the predicted trajectories $(\hat{\boldsymbol{Y}}_{i,1},\cdots,\hat{\boldsymbol{Y}}_{i,N})$ for each agent. At step $t'$, all the predicted positions for the agent $i$ are stored in the vector $({\hat{\mathsf{X}}_{i}}, {\hat{\mathsf{Y}}_{i}})^{t'}$. We follow the work of~\cite{graves2013generating} to fit the positions into a bivariate Gaussian distribution:
\begin{equation}
\label{eq:ped}
f(\hat{x}_i, \hat{y}_i)^{t'} = \frac{1}{2\pi\sigma_{\hat{\mathsf{X}}_{i}}\sigma_{\hat{\mathsf{Y}}_{i}}\sqrt{1-\rho^2}}\exp{\frac{-Z}{2(1-\rho^2)}},
\end{equation}
where
\begin{equation}
Z = \frac{(\hat{x}_i-\mu_{\hat{\mathsf{X}}_{i}})^2}{{\sigma_{\hat{\mathsf{X}}_{i}}}^2} + \frac{(\hat{y}_i-\mu_{\hat{\mathsf{Y}}_{i}})^2}{{\sigma_{\hat{\mathsf{Y}}_{i}}}^2} - \frac{2\rho(\hat{x}_i-\mu_{\hat{\mathsf{X}}_{i}})(\hat{y}_i-\mu_{\hat{\mathsf{Y}}_{i}})}{\sigma_{\hat{\mathsf{X}}_{i}}\sigma_{\hat{\mathsf{Y}}_{i}}}.
\end{equation}
$\mu$ denotes the mean and $\sigma$ the standard deviation. $\rho$ is the correlation between $\hat{\mathsf{X}}_{i}$ and $\hat{\mathsf{Y}}_{i}$.
A predicted trajectory is scored as the sum of the relative likelihood of all of its steps:
\begin{equation}
S(\boldsymbol{\hat{Y}}_{i,n}) = \sum_{t'=1}^{T'}f(\hat{x}_i, \hat{y}_i)^{t'}.
\end{equation}
All the predicted trajectories are ranked according to this score. The one with the highest score is selected for the single-path prediction.
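A numerical sketch of this ranking (Eqs.~(\ref{eq:ped}) and following), assuming the per-step Gaussian parameters are estimated from the $N$ predicted positions themselves, is:
\begin{verbatim}
import numpy as np

def rank_predictions(preds):
    # preds: (N, T', 2) candidate trajectories for one agent. Fit a bivariate
    # Gaussian per step over the N candidates and score each trajectory by
    # the summed per-step likelihood; return the index of the best one.
    mu = preds.mean(axis=0)                        # (T', 2)
    sigma = preds.std(axis=0) + 1e-8               # (T', 2)
    x, y = preds[..., 0], preds[..., 1]
    rho = np.array([np.corrcoef(x[:, t], y[:, t])[0, 1]
                    for t in range(preds.shape[1])])
    rho = np.nan_to_num(rho)
    zx = (x - mu[:, 0]) / sigma[:, 0]
    zy = (y - mu[:, 1]) / sigma[:, 1]
    Z = zx**2 + zy**2 - 2 * rho * zx * zy
    f = np.exp(-Z / (2 * (1 - rho**2 + 1e-8))) / (
        2 * np.pi * sigma[:, 0] * sigma[:, 1] * np.sqrt(1 - rho**2 + 1e-8))
    return int(np.argmax(f.sum(axis=1)))
\end{verbatim}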
\subsection{Feature Encoding}
\label{subsubsec:featureencoding}
In this subsection we discuss how to encode the motion and interaction information in detail.
\subsubsection{Motion Input}
\label{subsubsec:input}
The motion information for each agent is captured by the position coordinates at each step.
Specifically, we use the offset of the trajectory positions between two consecutive steps $({\Delta{x}}^t, {\Delta{y}}^t) = (x^{t+1} - x^{t}, ~ y^{t+1} - y^{t}) $ as the motion information, which has been widely applied in this domain \cite{gupta2018social,becker2018evaluation,zhang2019sr,cheng2020mcenet}.
Compared to coordinates, the offset is independent from the given space and less sensitive to overfitting a model to a particular space or travel direction.
It can be interpreted as speed over steps that are defined with a constant duration.
The coordinates at each position are recovered by cumulatively summing the sequence of offsets from the given original position.
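In code, the conversion between positions and offsets is a simple difference and cumulative sum:
\begin{verbatim}
import numpy as np

def to_offsets(traj):
    # traj: (T, 2) absolute positions -> (T-1, 2) step-wise offsets (speeds).
    return np.diff(traj, axis=0)

def to_positions(offsets, origin):
    # Recover coordinates by cumulatively summing offsets from the origin.
    return origin + np.cumsum(offsets, axis=0)
\end{verbatim}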
The class information of an agent's type is useful for analyzing its motion \cite{cheng2020mcenet}. However, the Trajnet benchmark does not provide this information for the trajectories and we do not use it here.
As an augmentation technique, we randomly rotate the trajectories to prevent the model from only learning certain directions. In order to maintain the relative positions and angles between agents, the trajectories of all the agents coexisting in a given period are rotated by the same angle, as sketched below.
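A minimal sketch of this scene-level rotation (all coexisting agents share one random angle) is:
\begin{verbatim}
import numpy as np

def rotate_scene(trajs, theta=None):
    # trajs: (num_agents, T, 2). Rotating every trajectory by the same angle
    # preserves the relative positions and angles between agents.
    theta = np.random.uniform(0.0, 2.0 * np.pi) if theta is None else theta
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return trajs @ R.T
\end{verbatim}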
\subsubsection{Attentive Dynamic Maps}
\label{subsubsec:dynamic}
We propose a novel and straightforward method, attentive dynamic maps, to learn agent-to-agent interaction information. The mapping method is inspired by the recent works of parsing the interactions between agents based on an occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent}, which uses a binary tensor to map the relative positions of the neighboring agents of the target agent \cite{alahi2016social}. The so-called dynamic maps extend this method: the interactions at each step are modeled via three layers dedicated to orientation, speed and position information. Each map changes from one step to the next and therefore the spatio-temporal interaction information between agents is interpreted dynamically over time.
The map is defined as a rectangular area around the target agent, and is divided into grid cells and centralized on the agent's current location, see Fig.~\ref{fig:encoder}. $W$ and $H$ denote the width and height.
First, with reference to the target agent $i$, the neighboring agents $\mathsf{N}(i)$ are mapped into the closest grid $\text{cells}_{w \times h}^t$ according to their relative position, and they are also mapped onto the cells reached by their anticipated relative offset (speed) in the $x$ and $y$ directions:
\begin{equation}
\label{eq:cell}
\begin{split}
&\text{cells}_{w}^t = x_j^t-x_i^t + (\Delta x_j^t - \Delta{x}_i^t), \\
&\text{cells}_{h}^t = y_j^t-y_i^t + (\Delta y_j^t - \Delta{y}_i^t), \\
&\text{where~} w \leq W ,~ h \leq H,~ j \in \mathsf{N}(i)~\text{and}~ j\neq i.
\end{split}
\end{equation}
Second, the orientation, speed and position information is stored in the mapped cells in the respective layer for each neighboring agent. The \emph{orientation} layer $O$ stores the heading direction. For the neighboring agent $j$, its orientation from the current to the next position is the angle $\vartheta_j$ in the Euclidean plane, calculated by $\vartheta_j = \text{arctan2} (\Delta y_j^t,\, \Delta x_j^t)$ and shifted into the interval $[0^{\circ},\,360^{\circ})$.
Similarly, the \emph{speed} layer $S$ stores the travel speed and the \emph{position} layer $P$ stores the positions using a binary flag in the cells mapped above. Last, a Min-Max normalization scheme is applied layer-wise.
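Putting Eq.~(\ref{eq:cell}) and the three layers together, a simplified construction of one dynamic map could look as follows (the grid indexing and the omission of the Min-Max normalization are simplifications on our part):
\begin{verbatim}
import numpy as np

def dynamic_map(target, neighbors, W=32, H=32):
    # target/neighbors rows: (x, y, dx, dy). Returns three W x H layers for
    # orientation (O), speed (S) and position (P), centered on the target.
    O, S, P = np.zeros((W, H)), np.zeros((W, H)), np.zeros((W, H))
    x, y, dx, dy = target
    for xj, yj, dxj, dyj in neighbors:
        # Relative position plus anticipated relative offset (cell mapping).
        w = int(round(xj - x + (dxj - dx))) + W // 2
        h = int(round(yj - y + (dyj - dy))) + H // 2
        if 0 <= w < W and 0 <= h < H:
            O[w, h] = np.degrees(np.arctan2(dyj, dxj)) % 360.0
            S[w, h] = np.hypot(dxj, dyj)
            P[w, h] = 1.0          # binary occupancy flag
    return np.stack([O, S, P])     # (3, W, H); normalization omitted
\end{verbatim}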
The map covers a large vicinity area. Empirically, we found $32\times32\,\text{m}^2$ a proper setting considering both the coverage and the computational cost. There is a trade-off between a high- and a low-resolution map. A cell is filled by at most one agent if its size is small, but a high resolution leads to a very sparse map (most of the cells have no value) and the surrounding areas of a neighboring agent are treated as having no impact on the target agent. On the other hand, multiple agents with very different travel speeds or orientations may overlap in one cell if the cell size is too big. In this study we resolve this trade-off by setting the cell size
to $1\times1\,\text{m}^2$. Based on the distribution of the experimental data, there are only a few cells with overlapping agents, which is also supported by the preservation of personal space~\cite{gerin2005negotiation}. However, since the size of the agents is not given by the experimental data, the approximation of the cell size may not be valid for large agents. In future work, the size of the agents will be considered and an extended margin will be applied to avoid the problem of agents falling out of the grid cell bounds.
Interactions among different agents are dynamic in various situations from one step to another in a sequence. Some steps may impact the agents' behaviors more than other steps.
To explore such varying information, we employ a self-attention mechanism~\cite{vaswani2017attention} to learn the interaction information from the dynamic maps over time and call them \emph{attentive dynamic maps}. The self-attention module takes as input the dynamic maps and attentively learns the interconnections over the steps. The detailed information of this module is given in Appendix~B.
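In its simplest form, the self-attention over the per-step map features can be sketched as follows (identity Q/K/V projections are assumed here for brevity; the learned projections are part of the module described in Appendix~B):
\begin{verbatim}
import numpy as np

def self_attention(F):
    # F: (T, d) flattened map features, one row per time step. Scaled
    # dot-product attention re-weights each step by its affinity to all steps.
    d = F.shape[-1]
    scores = F @ F.T / np.sqrt(d)                    # (T, T)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax over steps
    return w @ F                                     # (T, d)
\end{verbatim}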
\section{Experiments}
\label{sec:experiments}
In this section, the evaluation metrics, benchmarks, experimental settings and the recent state-of-the-art methods for comparison are introduced for evaluating the proposed model. The ablation studies that partially remove the modules in the proposed method are conducted to justify each module's contribution. Finally, the experimental results are analyzed and discussed in detail.
\subsection{Evaluation Metrics}
The average displacement error (ADE) and the final displacement error (FDE) are the most commonly applied metrics to measure the performance of trajectory prediction~\cite{alahi2016social,gupta2018social,sadeghian2018sophie}. ADE is the aligned Euclidean distance from $Y$ (ground truth) to its prediction $\hat{Y}$ averaged over all steps. FDE is the Euclidean distance of the last position from $Y$ to the corresponding $\hat{Y}$. It measures a model's ability to predict the destination and is more challenging as errors accumulate with time. We report the mean values over all the trajectories.
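Both metrics have a direct implementation:
\begin{verbatim}
import numpy as np

def ade_fde(y_true, y_pred):
    # y_true, y_pred: (T', 2). ADE averages the Euclidean distance over all
    # predicted steps; FDE takes the distance at the final step only.
    dists = np.linalg.norm(y_true - y_pred, axis=-1)
    return dists.mean(), dists[-1]
\end{verbatim}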
We evaluate both the most-likely prediction and the best prediction $@top10$ for the multi-path trajectory prediction.
The most-likely prediction is selected by the trajectories ranking as described in Sec~\ref{subsec:AMENet}.
The $@top10$ prediction is the best one out of ten predicted trajectories that has the smallest ADE and FDE compared with the ground truth. When the ground truth is not available (for the online test), only the most-likely prediction is selected. Then the problem reduces to single-trajectory prediction, as in most of the previous works~\cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,giuliari2020transformer}.
\subsection{Trajnet Benchmark Challenge}
\label{subsec:benchmark}
We first verify the performance of the proposed method on Trajnet~\cite{sadeghiankosaraju2018trajnet}. It is the most popular large-scale trajectory-based activity benchmark in this domain and provides a uniform evaluation system for fair comparison among different submitted methods. A wide range of datasets (e.\,g.,~ETH~\cite{pellegrini2009you}, UCY~\cite{lerner2007crowds} and Stanford Drone Dataset~\cite{robicquet2016learning}) for heterogeneous agents (pedestrians, bikers, skateboarders, cars, buses, and golf cars) that navigate in real-world outdoor mixed traffic environments are included.
The data was collected from 38 scenes with ground truth for training and another 20 scenes without ground truth for testing (i.\,e.,~open challenge competition). Each scene presents various traffic densities in different space layouts, which makes the prediction task challenging and requires a model to generalize, in order to adapt to the various complex scenes. Trajectories are provided as the $xy$ coordinates in meters (or pixels) projected on a Cartesian space, with 8 steps for observation and the following 12 steps for prediction.
The duration between two successive steps is 0.4 seconds.
We follow all the previous works~\cite{helbing1995social,alahi2016social,zhang2019sr,becker2018evaluation,hasan2018mx,gupta2018social,giuliari2020transformer} that use the coordinates in meters.
In order to train and evaluate the proposed method, as well as the ablation studies,
6 of the total 38 scenes in the training set are selected as the offline test set. Namely, they are \textit{bookstore3}, \textit{coupa3}, \textit{deathCircle0}, \textit{gates1}, \textit{hyang6}, and \textit{nexus0}.
The selection of the scenes is based on the space layout, data density and percentage of non-linear trajectories, see Table~\ref{tb:multipath}. Fig.~\ref{fig:trajectories} visualizes the trajectories in each scene.
The trained model that has the best performance on the offline test set is selected as our final model and used for the online testing.
\begin{figure}[bpht!]
\centering
\includegraphics[width=\textwidth]{fig/trajectories_update.png}%
\label{trajectories_nexus_0}
\caption{Visualization of each scene of the offline test set.}
\label{fig:trajectories}
\end{figure}
\subsection{Quantitative Results and Comparison}
\label{subsec:results}
We compare the performance of our model with the most influential previous works and the recent state-of-the-art works published on the Trajnet challenge.
\begin{itemize}[leftmargin=*,nosep]
\item\emph{Social Force}~\cite{helbing1995social} is a rule-based model with the repulsive force for collision avoidance and the attractive force for social connections;
\item\emph{Social LSTM}~\cite{alahi2016social} proposes Social pooling with a rectangular occupancy grid for close neighboring agents, which is widely adopted in this domain~\cite{lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent};
\item\emph{SR-LSTM}~\cite{zhang2019sr} uses a states refinement module for extracting social effects between the target agent and its neighboring agents;
\item\emph{RED}~\cite{becker2018evaluation} uses RNN-based Encoder with Multilayer Perceptron (MLP) for trajectory prediction;
\item\emph{MX-LSTM}~\cite{hasan2018mx} exploits the head pose information of agents to help analyze their moving intentions;
\item\emph{Social GAN}~\cite{gupta2018social} proposes to utilize GAN for multi-path trajectory prediction, which is one of the closest works to ours; the other one is DESIRE~\cite{lee2017desire}, for which neither online test results nor code were reported. Hence, we do not compare with DESIRE;
\item\emph{Ind-TF}~\cite{giuliari2020transformer} proposes a novel idea that utilizes the Transformer network~\cite{vaswani2017attention} for sequence prediction. No social interactions between agents are considered in this work.
\end{itemize}
The performances of single trajectory prediction from different methods on the Trajnet challenge are given in Table~\ref{tb:results}.
The results were originally reported on the leader board\footnote{\url{http://trajnet.stanford.edu/result.php?cid=1}} up to 14 June 2020. AMENet outperformed the other models and won the first place as measured by the aforementioned metrics. Compared with the most recent model Ind-TF~\cite{giuliari2020transformer}, AMENet achieved comparable performance in ADE and a slightly better FDE (1.183 vs. 1.197 meters).
The superior performance given by AMENet here also validates the efficacy of the ranking method to select the most-likely prediction from the multiple predicted trajectories, as introduced in Sec.~\ref{subsec:AMENet}.
\begin{table}[t!]
\centering
\caption{Comparison between our method and the state-of-the-art models. Smaller values indicate a better performance and best values are highlighted in boldface.}
\begin{tabular}{llll}
\hline
Model & Avg. [m]$\downarrow$ & FDE [m]$\downarrow$ & ADE [m]$\downarrow$ \\
\hline
Social LSTM~\cite{alahi2016social} & 1.8865 & 3.098 & 0.675 \\
Social GAN~\cite{gupta2018social} & 1.334 & 2.107 & 0.561 \\
MX-LSTM~\cite{hasan2018mx} & 0.8865 & 1.374 & 0.399 \\
Social Force~\cite{helbing1995social} & 0.8185 & 1.266 & 0.371 \\
SR-LSTM~\cite{zhang2019sr} & 0.8155 & 1.261 & 0.370 \\
RED~\cite{becker2018evaluation} & 0.78 & 1.201 & 0.359 \\
Ind-TF~\cite{giuliari2020transformer} & 0.7765 & 1.197 & \textbf{0.356} \\
This work (AMENet)$^{*}$ & \textbf{0.7695} & \textbf{1.183} & \textbf{0.356} \\
\hline
\end{tabular}
\begin{tabular}{@{}c@{}}
\multicolumn{1}{p{\textwidth}}{$^{*}$named as \textit{ikg\_tnt} on the leader board}
\end{tabular}
\label{tb:results}
\end{table}
\subsection{Results for Multi-Path Prediction}
\label{subsec:multipath-selection}
The performance for multi-path prediction is investigated using the offline test set.
Table~\ref{tb:multipath} shows the quantitative results. Compared to the most-likely prediction, as expected the $@\text{top}10$ prediction yields similar but slightly better performance. It indicates that: (1) the generated multiple trajectories increase the chance to narrow down the errors; (2) the ranking method is effective for ordering the multiple predictions and proposing a good one, which is especially important when the prior knowledge of the ground truth is not available.
\begin{table}[t!]
\centering
\caption{Evaluation measured by ADE/FDE for multi-path trajectory prediction using AMENet on the Trajnet offline test set.}
\begin{tabular}{llllll}
\hline
Dataset & Layout & \#Trajs & \begin{tabular}[c]{@{}l@{}}Non-linear \\ traj rate\end{tabular}
& @top10 & Most-likely \\ \hline
bookstore3 & parking & 429 & 0.71 & 0.477/0.961 & 0.486/0.979 \\
coupa3 & corridor & 639 & 0.31 & 0.221/0.432 & 0.226/0.442 \\
deathCircle0 & roundabout & 648 & 0.89 & 0.650/1.280 & 0.659/1.297 \\
gates1 & roundabout & 268 & 0.87 & 0.784/1.663 & 0.797/1.692 \\
hyang6 & intersection & 327 & 0.79 & 0.534/1.076 & 0.542/1.094 \\
nexus6 & corridor & 131 & 0.88 & 0.542/1.073 & 0.559/1.109 \\
Avg. & - & 407 & 0.74 & 0.535/1.081 & 0.545/1.102 \\ \hline
\end{tabular}
\label{tb:multipath}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.475\textwidth]{fig/deathCircle_0240.png}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=0.475\textwidth]{fig/gates_1001.png}
\caption{Multi-path predictions from AMENet.}
\label{fig:multi-path}
\end{figure}
Fig.~\ref{fig:multi-path} showcases some qualitative examples of the multi-path trajectory prediction by AMENet. As shown in the roundabouts deathCircle0 and gates1, each moving agent has more than one possibility (different speeds and orientations) to choose its future path. The predicted trajectories diverge more widely in further steps as the uncertainty about an agent's intention increases with time. Predicting multiple plausible trajectories indicates a larger intended area and raises the chance to cover the path an agent might choose in the future. Also, the ``fan'' of possible trajectories can be interpreted as reflecting the uncertainty of the prediction. Conversely, a single prediction provides limited information for inference and is likely to lead to a false conclusion if the prediction is not correct/precise in the early steps. On the other hand, agents that stand still were correctly predicted by AMENet with high certainty, as shown by the two agents in gates1 in the upper right area. As designed by the model, only interactions between agents lead to adaptations in the predicted path and deviation from linear paths; the scene context, e.\,g.,~road geometry, is not modeled and thus does not affect the prediction.
\subsection{Ablation study}
\label{sec:ablativemodels}
In order to analyze the impact of each module in the proposed framework, i.\,e.,~dynamic maps, self-attention, and the extended structure of the CVAE, several ablative models were investigated.
\begin{itemize}[leftmargin=*,nosep]
\item ENet: (E)ncoder (Net)work, which is only conditioned on the motion information. The interaction information is not leveraged. This model is treated as the baseline model.
\item OENet: (O)ccupancy$+$ENet, where interactions are modeled by the occupancy grid \cite{alahi2016social,lee2017desire,xue2018ss,hasan2018mx,cheng2018modeling,cheng2018mixed,johora2020agent} in both the X-Encoder and the Y-Encoder.
\item AOENet: (A)ttention$+$OENet, where the self-attention mechanism is added.
\item MENet: (M)aps$+$ENet, where interactions are modeled by the proposed dynamic maps in both the X-Encoder and the Y-Encoder.
\item ACVAE: (A)ttention+CVAE, where the dynamic maps are only added in the X-Encoder. It is equivalent to a CVAE model~\cite{kingma2014auto,kingma2014semi,sohn2015learning} with the self-attention mechanism.
\item AMENet: (A)ttention$+$MENet, where the self-attention mechanism is added. It is the full model of the proposed framework.
\end{itemize}
Table~\ref{tb:resultsablativemodels} shows the quantitative results for the ablation studies.
Errors are measured by ADE/FDE on the most-likely prediction. The comparison between OENet and the baseline model ENet shows that extracting the interaction information from the occupancy grid did not contribute to a better performance. Even though the self-attention mechanism was added to the occupancy grid (denoted by AOENet), the slightly enhanced performance still fell behind the baseline model. The comparison indicates that interactions were not effectively learned from the occupancy map with or without the self-attention mechanism across the datasets. The comparison between MENet and ENet shows a similar pattern. Relative to the baseline model, performance using the dynamic maps (MENet) was slightly better than using the occupancy grid (OENet), though still inferior to the baseline. However, profound improvements can be seen after employing the self-attention mechanism. First, the comparison between ACVAE and ENet shows that even without the extended structure in the Y-Encoder, the dynamic maps with the self-attention mechanism in the X-Encoder were very beneficial for modeling interactions. On average, the performance was improved by \SI{4.0}{\percent} and \SI{4.5}{\percent} as measured by ADE and FDE, respectively. Second, the comparison between the proposed model AMENet and ENet shows that after extending the dynamic maps to the Y-Encoder, the errors, especially the absolute values of FDE, further decreased across all the datasets; ADE was reduced by \SI{9.5}{\percent} and FDE was reduced by \SI{10.0}{\percent}. This improvement was also confirmed by the benchmark challenge (see Table~\ref{tb:results}).
The evaluation was decomposed for non-linear and linear trajectories across all of the above models. The linearity of a trajectory not only depends on the continuity of the travel direction, but also on the speed. We use the same scheme as~\cite{gupta2018social} to categorize the linearity of trajectories by a two-degree polynomial fitting. It compares the sum of the squared residuals of the fit with the least-squares error. A trajectory is categorized as linear if it meets the criteria; a sketch of such a test is given below. Fig.~\ref{fig:non-linear} visualizes the values of (a) ADE and (b) FDE averaged over the six scenes in the offline test set. Across the models, the performance for predicting non-linear trajectories demonstrates a similar pattern compared to predicting all the trajectories (linear $+$ non-linear) and AMENet outperformed the other models measured by both metrics. Obviously, predicting the linear trajectories is easier than the non-linear ones. In this regard, all the models performed very well (ADE $\leq 0.2$ m and FDE $\leq 0.4$ m), especially the AMENet and ACVAE models. This observation indicates that if there are other agents interacting with each other, the continuity of their motion is likely to be interrupted,~i.\,e.,~deviating from the free-flow trajectories~\cite{rinke2017multi}. The model has to adapt to this deviation to achieve a good performance. On the other hand, if there is no such reason to disrupt the linearity of the motion, then the model does not generate deviated trajectories.
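A minimal sketch of such a linearity test follows; the exact criterion and threshold used by~\cite{gupta2018social} may differ, so the value of \texttt{eps} and the residual comparison are illustrative assumptions.
\begin{verbatim}
# Sketch: classify a trajectory as linear via polynomial fitting.
import numpy as np

def is_linear(traj, eps=1e-3):
    """traj: (T, 2) positions; True if a linear fit suffices."""
    t = np.arange(len(traj))
    gain = 0.0
    for d in range(traj.shape[1]):
        _, r1, *_ = np.polyfit(t, traj[:, d], 1, full=True)
        _, r2, *_ = np.polyfit(t, traj[:, d], 2, full=True)
        # residual reduction from allowing a quadratic term
        if len(r1) and len(r2):
            gain += float(r1[0] - r2[0])
    return gain < eps
\end{verbatim}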
\begin{table}[t!]
\setlength{\tabcolsep}{2.8pt}
\centering
\small
\caption{Evaluation results measured by ADE/FDE on the most-likely prediction for the ablative models and the proposed model AMENet. Best values are highlighted in boldface.}
\begin{tabular}{lllllll}
\hline
Scene & ENet & OENet & AOENet & MENet & ACVAE & AMENet \\ \hline
B & 0.532/1.080 & 0.601/1.166 & 0.574/1.144 & 0.576/1.139 & 0.509/1.030 & \textbf{0.486}/\textbf{0.979} \\
C & 0.241/0.474 & 0.342/0.656 & 0.260/0.509 & 0.294/0.572 & 0.237/0.464 & \textbf{0.226}/\textbf{0.442} \\
D & 0.681/1.353 & 0.741/1.429 & 0.726/1.437 & 0.725/1.419 & 0.698/1.378 & \textbf{0.659}/\textbf{1.297} \\
G & 0.876/1.848 & 0.938/1.921 & 0.878/1.819 & 0.941/1.928 & 0.861/1.823 & \textbf{0.797}/\textbf{1.692} \\
H & 0.598/1.202 & 0.661/1.296 & 0.619/1.244 & 0.657/1.292 & 0.566/1.140 & \textbf{0.542}/\textbf{1.094} \\
N & 0.684/1.387 & 0.695/1.314 & 0.752/1.489 & 0.705/1.346 & 0.595/1.181 & \textbf{0.559}/\textbf{1.109} \\
Avg. & 0.602/1.224 & 0.663/1.297 & 0.635/1.274 & 0.650/1.283 & 0.577/1.170 & \textbf{0.545}/\textbf{1.102} \\ \hline
\end{tabular}
\begin{tabular}{@{}c@{}}
\multicolumn{1}{p{\textwidth}}{B: bookstore3, C: coupa3, D: deathCircle0, G: gates1, H: hyang6, N: nexus6}
\end{tabular}
\label{tb:resultsablativemodels}
\end{table}
\begin{figure}[t!]
\centering
\begin{subfigure}{0.5\textwidth}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=1\textwidth]{fig/ADE.png}
\label{subfig:non-linear:ADE}
\caption{\small ADE}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[trim=0cm 0cm 0cm 0cm, width=1\textwidth]{fig/FDE.png}
\label{subfig:non-linear:FDE}
\caption{\small FDE}
\end{subfigure}
\caption{The prediction errors for linear, non-linear and all trajectories measured by (a) ADE and (b) FDE for all the ablative models, as well as the proposed model AMENet.}
\label{fig:non-linear}
\end{figure}
Fig.~\ref{fig:abl_qualitative_results_com} showcases some qualitative results by the proposed AMENet model in comparison to the ablative models. In general, AMENet generated accurate predictions and outperformed the other models in all the scenes, which is especially visible in coupa3 (a) and bookstore3.
All the models predicted plausible trajectories for two agents walking in parallel in coupa3 (b) (denoted by the black box), except the baseline model ENet. Without modeling interactions, the ENet model generated two trajectories that intersected with each other. In hyang6, limited performance can be seen for ENet, AOENet, and MENet regarding travel speed, and for OENet and ACVAE regarding the destination of the fast-moving agent. In contrast, AMENet maintained an accurate prediction. In nexus6 (a) and (b), only two agents were present, where all the models performed well. More agents were involved in the roundabout scenes, in which the prediction task was more challenging. AMENet generated accurate predictions for most of the agents. However, its performance is limited for the agents that changed speed or direction rapidly from their past movement. We note one interesting scenario in deathCircle0, where two agents walked towards each other (denoted by the black box). In reality, when the right agent changed its heading towards the left agent, the left agent had to decelerate strongly to yield the way. Capturing this interaction, and in contrast to the other models, AMENet generated conflict-free trajectories.
\begin{figure}[t!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, height=0.88\textheight]{fig/predictions_combined.png}
\caption{Trajectories predicted by ENet, OENet, AOENet, MENet, ACVAE and AMENet in comparison with the ground truth (GT) trajectories on Trajnet~\cite{sadeghiankosaraju2018trajnet}.}
\label{fig:abl_qualitative_results_com}
\end{figure}
\subsection{Trajectory prediction on Benchmark InD}
\label{subsec:InD}
To further investigate the generalization performance of the proposed model, we carried out extensive experiments on a newly published large-scale benchmark InD\footnote{\url{https://www.ind-dataset.com/}}.
It consists of 33 datasets and was collected using drones on four very busy intersections (as shown in Fig.~\ref{fig:qualitativeresultsInD}) in Germany in 2019 by Bock et al.~\cite{inDdataset}.
Different from Trajnet where most of the environments (i.\,e.,~shared spaces \cite{reid2009dft,robicquet2016learning}) are pedestrian friendly, the intersections in InD are dominated by vehicles. This makes the prediction task more challenging due to the very different travel speed between pedestrians and vehicles, as well as the direct interactions. We follow the same format as the Trajnet benchmark for data processing (Sec.~\ref{subsec:benchmark}). The performance of AMENet is compared with Social LSTM~\cite{alahi2016social} and Social GAN~\cite{gupta2018social}, which are the most relevant ones to our models.
As mentioned above, Social LSTM~\cite{alahi2016social} is the first deep learning method that uses an occupancy grid for modeling interactions between agents and Social GAN~\cite{gupta2018social} is the closest deep generative model to ours. It is worth mentioning that we trained and tested all three models using the same data for a fair comparison.
The performance is analyzed quantitatively and qualitatively. Table~\ref{tb:resultsInD} lists the evaluation results measured by ADE/FDE for all the models in each intersection. AMENet predicted more accurate trajectories measured by all the metrics compared with Social LSTM and Social GAN. Fig.~\ref{fig:qualitativeresultsInD} shows one scenario in each of the four intersections. AMENet predicted the deceleration of the car approaching the intersection from the right arm, the trajectory of the cross walking pedestrian and the slowing down of the pedestrian on the sidewalk in intersection A. It correctly predicted two cars slowly approaching the intersection area in intersection B and D, and the waiting scenario for pedestrian cross walking in intersection C.
\begin{table}[t!]
\centering
\small
\caption{Quantitative results of AMENet and the comparative models on InD~\cite{inDdataset} measured by ADE/FDE. Best values are highlighted in boldface.}
\begin{tabular}{lllllll}
\hline
InD & \multicolumn{3}{c}{@top 10} & \multicolumn{3}{c}{Most-likely} \\
Model & S-LSTM & S-GAN & AMENet & S-LSTM & S-GAN & AMENet \\ \hline
Int. A & 2.04/4.61 & 2.84/4.91 & \textbf{0.95/1.94} & 2.29/5.33 & 3.02/5.30 & \textbf{1.07/2.22} \\
Int. B & 1.21/2.99 & 1.47/3.04 & \textbf{0.59/1.29} & 1.28/3.19 & 1.55/3.23 & \textbf{0.65/1.46} \\
Int. C & 1.66/3.89 & 2.05/4.04 & \textbf{0.74/1.64} & 1.78/4.24 & 2.22/4.45 & \textbf{0.83/1.87} \\
Int. D & 2.04/4.80 & 2.52/5.15 & \textbf{0.28/0.60} & 2.17/5.11 & 2.71/5.64 & \textbf{0.37/0.80} \\
Avg. & 1.74/4.07 & 2.22/4.29 & \textbf{0.64/1.37} & 1.88/4.47 & 2.38/4.66 & \textbf{0.73/1.59} \\ \hline
\end{tabular}
\begin{tabular}{@{}c@{}}
\multicolumn{1}{p{\textwidth}}{S-LSTM: Social LSTM~\cite{alahi2016social}, S-GAN: Social GAN~\cite{gupta2018social}}
\end{tabular}
\label{tb:resultsInD}
\end{table}
\begin{figure}[bpht!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, width=\textwidth]{fig/InD_Trajectories_combined_aligned.png}
\caption{\small{Trajectories predicted by AMENet on the InD benchmark~\cite{inDdataset}.}}
\label{fig:qualitativeresultsInD}
\end{figure}
\subsection{Discussion of the Results}
Based on the extensive studies and results, we discuss the advantages and limitations of the AMENet model proposed in this paper.
AMENet demonstrated superior performance over different benchmarks for trajectory prediction. Firstly, the proposed model was able to achieve the state-of-the-art performance on the Trajnet benchmark challenge, which contains various scenes. Secondly, the results of the ablation studies proved that information about the interactions between agents is beneficial for trajectory prediction. However, the performance highly depends on how such information is leveraged. It was difficult for the occupancy grid, which is only based on the positions of the neighboring users, to extract useful information for interaction modeling, because positions change from one step to the next and from one scene to another. Meanwhile, the speed and orientation information is not considered, which may explain why the occupancy grid performed worse than the dynamic maps in the same settings. Thirdly, as interactions change over time, the self-attention mechanism automatically extracted the salient features along the time axis from the dynamic maps.
However, there are several limitations of the model uncovered throughout the experiments. First, the resolution of the map was approximated according to the experimental data and the size of the neighboring agents was not yet considered. This may limit the model for dealing with big-sized agents, such as buses or trucks. We leave this to future work. Second, from the qualitative results we notice that the model had limited performance for predicting the behavior of the agents that drastically change direction and speed, which is in general a very challenging task without extra information from the agents, such as body pose or eye gaze. Last but not least, in this study, scene context information was not included. The lack of this information may lead to a wrong prediction, e.\,g.,~trajectories leading into obstacles or inaccessible areas. Scene context can have a positive effect that a trajectory follows a (curved) path. On the other hand, a strong constraint from the scene context can easily overfit a model for some particular scene layout~\cite{cheng2020mcenet}. Hence, a good mechanism for parsing the scene information is needed to balance the trade-off, especially for a model trained in one scene and applied in another.
\section{Conclusions}
\label{sec:conclusion}
In this paper, we have presented a generative model called Attentive Maps Encoder Network (AMENet) for multi-path trajectory prediction and made the following contributions.
(1) The model captures the stochastic properties of road users' motion behaviors after a short observation time via the latent space learned by the X-Encoder and Y-Encoder that encode motion and interaction information, and predicts multiple plausible trajectories.
(2) We propose a novel concept--attentive dynamic maps--to extract the social effects between agents during interactions. The dynamic maps capture accurate interaction information by encoding the neighboring agents' orientation, travel speed and relative position in relation to the target agent, and the self-attention mechanism enables the model to learn the global dependency of interaction over different steps.
(3) The model targets heterogeneous agents in mixed traffic in various real-world traffic environments.
The efficacy of the model was validated on the Trajnet benchmark that contains various datasets in different real-world environments and the InD benchmark for different intersections. The model not only achieved state-of-the-art performance, but also won first place on the leader board for predicting 12 time-step positions of 4.8 seconds.
Each component of AMENet has been validated via a series of ablation studies.
In future work, we plan to include more information to further improve the prediction accuracy, such as the type and size information of agents and spatial context information. In addition, we will extend our trajectory prediction model for safety analysis, e.\,g.,~using the predicted trajectories to calculate time-to-collision \cite{perkins1968traffic} and to detect abnormal trajectories by comparing the anticipated/predicted trajectories with the actual ones.
\section*{Acknowledgements}
This work is supported by the German Research Foundation (DFG) through the Research Training Group SocialCars (GRK 1931).
We will first define the system model studied throughout this paper.
\begin{definition}[Control System]
$\ $ A state feedback control system $\Psi(X, U, f, \K)$ consists
of a plant and a controller over
$X \subseteq \reals^n$ and $U \subseteq \reals^m$.
\begin{compactenum}
\item $X \subseteq \reals^n$ is the \emph{state space} of the system.
The control inputs belong to a set
$U$ defined as a polyhedron:
\begin{equation}\label{eq:input-sat}
U = \{\vu \ | \ A \vu \geq \vb \} \,.
\end{equation}
\item The plant consists of a vector field defined by a
continuous and differentiable function $f: X \times U \mapsto \reals^n$.
\item The controller measures the state of the plant $\vx \in X$ and
provides feedback $\vu \in U$. The controller is defined by a
feedback function $\K:X \mapsto U$
(Fig.~\ref{fig:system}).
\end{compactenum}
\end{definition}
For now, we assume $\K$ is a smooth (continuous and differentiable) function.
For a given feedback law $\K$, an
execution trace of the system, starting from an initial
state $\vx_0$ is a function:
$\vx: [0,T(\vx_0)) \mapsto X $, which maps time $t \in [0,T(\vx_0))$ to a state $\vx(t)$,
such that
\[ \dot{\vx}(t) = f(\vx(t), \K(\vx(t))) \,, \] where $\dot{\vx}(\cdot)$ is
the right derivative of $\vx(\cdot)$ w.r.t. time over $[0, T(\vx_0))$.
Since $f$ and $\K$ are assumed to be smooth, there exists a unique trajectory
for any $\vx_0$, defined over some time interval $ [0, T(\vx_0))$.
Here $T(\vx_0)$ is $\infty$ if
the trajectory starting from $\vx_0$ exists for all time. Otherwise, $T(\vx_0)$ is finite if the
trajectory ``escapes'' in finite time. For most of the systems we study, the closed loop dynamics
are such that a compact set $S$ will be positive invariant. In fact, this set will be a
sublevel set of a Lyapunov function for the closed loop dynamics. This fact along
with the smoothness of $f, \K$ suffices to establish that $T(\vx_0) = \infty$ for all $\vx_0 \in S$. Unless otherwise noted,
we will consider control laws $\K$ that will guarantee existence of trajectories for all time.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.25\textwidth]{pics/system}
\end{center}
\caption{Closed-loop state feedback system.}\label{fig:system}
\end{figure}
A specification describes the desired behavior of all possible
execution traces $\vx(\cdot)$.
In this article, we study a variety of
specifications, including stability, trajectory tracking, and
safety. For simplicity, we first focus on stability. Extensions to
other specifications are presented in Section~\ref{sec:spec}. Also,
without loss of generality, we assume $\vx = \vzero$ is the desired
equilibrium. Moreover, $f(\vzero, \vzero) = \vzero$.
\begin{problem}[Synthesis for Asymptotic
Stability] \label{prob:stability} Given a plant, the
control synthesis problem is to design a controller (a feedback law $\K$) s.t.
all traces $\vx(\cdot)$ of the closed loop system
$\Psi(X, U, f, \K)$ are asymptotically stable. We require two properties for asymptotic stability.
First, the system is \emph{Lyapunov stable}:
\[ \begin{array}{ll}
(\forall \epsilon > 0) \\
\ \ \; (\exists \delta > 0) \\
\ \ \ \ \ \ \; \left(\begin{array}{c}\forall \vx(\cdot) \\ \vx(0) \in \B_{\delta}(\vzero)\end{array}\right)\ (\forall t \geq 0) \; \vx(t) \in \B_\epsilon(\vzero) \,,\\
\end{array}\]
wherein $\B_\delta(\vx) \subseteq \reals^n$ is the ball of radius
$\delta$ centered at $\vx$. In other words, for any
chosen $\epsilon > 0$, we may ensure that
the trajectories will stay inside a ball of
$\epsilon$ radius by choosing the initial
conditions to lie inside a ball of $\delta$ radius.
Furthermore, all the trajectories converge asymptotically towards the origin:
\[\begin{array}{ll}
(\forall \epsilon > 0)\ \left(\forall \vx(\cdot)\right)\ (\exists T > 0)\ (\forall t \geq T) \ \vx(t) \in \B_{\epsilon}(\vzero) \,.
\end{array}\]
That is, for any chosen $\epsilon > 0$, all trajectories will eventually
reach a ball of radius $\epsilon$ around the origin and stay inside it forever.
\end{problem}
Stability in our method is addressed through Lyapunov analysis. More
specifically, our solution is based on control Lyapunov functions
(CLF). CLFs were first introduced by Sontag~\cite{Sontag/1982/Characterization,sontag1983lyapunov}, and
studied at the same time by Artstein~\cite{artstein1983stabilization}. Sontag's work shows that if a system is asymptotically stabilizable, then there exists a CLF even if the dynamics are not smooth~\cite{sontag1983lyapunov}. Now, let us recall the definitions of positive and
negative definite functions.
\begin{definition}[Positive Definite]
A function $V: \reals^n \mapsto \reals$ is \emph{positive
definite} over a set $X$ containing $\vzero$, iff $V(\vzero) = 0$
and $V(\vx) > 0$ for all $\vx \in X \setminus \{ \vzero\}$.
Likewise, $V$ is \emph{negative definite} iff $-V$ is positive definite.
\end{definition}
\begin{definition}[Control Lyapunov Function (CLF)]
A smooth, radially unbounded function $V$ is a control Lyapunov
function (CLF) over $X$, if the following
conditions hold~\cite{artstein1983stabilization}:
\begin{equation}\label{eq:clf-def}
\begin{array}{l}
V\ \mbox{is positive definite over}\ X \\
\min_{\vu \in U} (\nabla V) \cdot f(\vx, \vu)\ \mbox{is negative definite over}\ X\,,
\end{array}
\end{equation}
where $\nabla V$ is the gradient of $V$. Note that $(\nabla V) \cdot f$ is
the Lie derivative of $V$ according to the vector field $f$.
\end{definition}
Another
way of interpreting the second condition is that for each $\vx \in X$,
a control $\vu \in U$ can be chosen to ensure an \emph{instantaneous
decrease} in the value of $V$, as illustrated in Fig.~\ref{fig:clf}.
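For intuition, consider a minimal illustrative example (not from the original benchmarks): for the scalar system $\dot{x} = x + u$ with $U = \reals$, the function $V(x) = \frac{1}{2}x^2$ is a CLF, since
\[
\min_{u \in U}\ (\nabla V) \cdot f(x, u) \;=\; \min_{u}\ x(x+u)\,,
\]
and the choice $u = -2x$ already gives $(\nabla V)\cdot f = -x^2$, which is negative definite.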
\paragraph{Solving Stabilization using CLFs:}
Finding a CLF $V$ guarantees the existence of a feedback law
that can stabilize all trajectories to the equilibrium~\cite{artstein1983stabilization}. However, constructing such a feedback law is nontrivial and potentially expensive.
Further results can be obtained by restricting the vector field
$f$ to be control affine:
\begin{equation} \label{eq:control-affine}
f(\vx, \vu):\ f_0(\vx) + \sum_{i=1}^m f_i(\vx)u_i \,,
\end{equation}
wherein $f_i : X \mapsto \reals^n$.
Assuming $U: \reals^m$, Sontag provides a method for extracting a feedback law $\K$, for control affine systems from a control Lyapunov
function~\cite{sontag1989universal}. More specifically, if a CLF $V$ is available, the following feedback law stabilizes the system:
\begin{equation}\label{eq:sontag}
\K_i(\vx) = \begin{cases} 0 & \beta(\vx) = 0 \\
-b_i(\vx) \frac{a(\vx) + \sqrt{a(\vx)^2 + \beta(\vx)^2}}{\beta(\vx)} & \beta(\vx) \neq 0\,,
\end{cases}
\end{equation}
where $a(\vx) = \nabla V \cdot f_0(\vx)$, $b_i(\vx) = \nabla V \cdot f_i(\vx)$, and $\beta(\vx) = \sum_{i=1}^m b_i^2(\vx)$.
\begin{remark}
Feedback law $\K$ provided by the Sontag formula is not necessarily continuous at the origin. Nevertheless, such a feedback law still guarantees stabilization. See~\cite{sontag1989universal} for more details.
\end{remark}
The Sontag formula can be extended to systems
with saturated inputs where $U$ is an n-ball~\cite{LIN1991UNIVERSAL} or a polytope~\cite{suarez2001global}. Also switching-based feedback is possible, under some mild assumptions (to avoid Zeno behavior)~\cite{curtis2003clf,Ravanbakhsh-Others/2015/Counter-LMI}.
We assume the dynamics are affine in control and use these results, which reduce Problem~\ref{prob:stability} to finding a control Lyapunov function $V$.
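For illustration, the following is a minimal sketch of the Sontag feedback for the control-affine form above; the function signatures are assumptions for this sketch, and handling input saturation by a simple componentwise clip is a simplification of the cited extensions.
\begin{verbatim}
# Sketch of the Sontag universal formula for xdot = f0(x) + F(x) u.
import numpy as np

def sontag_feedback(x, f0, F, gradV, u_lo, u_hi):
    """F(x): (n, m) matrix with columns f_1(x), ..., f_m(x)."""
    g = gradV(x)
    a = g @ f0(x)              # a(x) = grad V . f0(x)
    b = g @ F(x)               # b_i(x) = grad V . f_i(x)
    beta = float(b @ b)        # beta(x) = sum_i b_i(x)^2
    if beta == 0.0:
        return np.zeros_like(b)
    u = -b * (a + np.hypot(a, beta)) / beta   # hypot: sqrt(a^2+beta^2)
    return np.clip(u, u_lo, u_hi)             # crude saturation handling
\end{verbatim}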
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.3\textwidth]{pics/clf}
\end{center}
\caption{Control Lyapunov Function (CLF): Level-sets of a CLF
$V$ are shown using the green lines. For each state (blue dot), the
vector field $f(\vx, \vu)$ for $\vu = \K(\vx)$ is the blue arrow,
and it points to a direction which decreases $V$.}\label{fig:clf}
\end{figure}
\input{clf-intro}
\subsection{Discovering CLFs}
We briefly summarize approaches for discovering CLFs for a given plant
model in order to stabilize it to a given equilibrium state.
Efficient methods for discovering CLFs are available only for specific
classes of systems such as feedback linearizable systems, or for
so-called strict feedback systems, wherein a procedure called
\emph{backstepping} can be used~\cite{freeman2008robust}. However, finding CLFs for general nonlinear
systems is challenging~\cite{primbs1999nonlinear}.
One class of solutions uses optimal control theory by setting up the
problem of stabilization as one of minimizing a cost function over the
trajectories of the system. If the cost function is set up
appropriately, then the value function for the resulting dynamic
programming problem is a CLF~\cite{primbs1999nonlinear,bertsekas1995dynamic}. To do so,
however, one needs to solve a Hamilton-Jacobi-Bellman (HJB) partial
differential equation to discover the value function, which can be
quite hard in practice~\cite{bryson1975applied}. In fact, rather than
solve HJB equations to obtain CLFs, it is more common to derive a CLF
using a procedure such as backstepping and apply inverse optimality
results to derive cost functions~\cite{freeman2008robust}.
A second class of solutions is based on parameterization. More
specifically, a class of functions $V_{\vc}(\vx)$ is parameterized by a
set of unknown parameters $\vc$. This parameterization is commonly
specified as a linear combination of basis functions of the
form $V_{\vc}(\vx) : \sum c_i g_i(\vx)$. Furthermore, the functions
$g_i$ commonly range over all possible monomials up to some prespecified
degree limit $D$. Next, an instantiation of
the parameters $\vc$ is discovered so that the resulting function $V$
is a CLF. Unfortunately, discovering such parameters requires the
solution to a quantifier elimination problem, in general. This is quite
computationally expensive for nonlinear systems. Previously, authors
proposed a framework which uses sampling to avoid expensive
quantifier elimination~\cite{ravanbakhsh2015counterexample}. Despite
the use of sampling, scalability remains an issue. Another solution is
based on sum-of-squares
relaxations~\cite{Shor/1987/Class,lasserre2001global,Parillo/2003/Semidefinite},
along the lines of approaches used to discover Lyapunov
functions~\cite{Papachristodoulou+Prajna/2002/Construction}. However,
discovering CLFs using this approach entails solving a system of
bilinear matrix
inequalities~\cite{tan2004searching,henrion2005solving}. In contrast
to LMIs, the set of solutions to a BMI forms a nonconvex set, and
solving BMIs is well-known to be computationally expensive, in
practice. Rather than solving a BMI to find a CLF, and then
extracting the feedback law from the CLF, an alternative approach is
to simultaneously search for a Lyapunov function $V$ and an unknown
feedback law at the same
time~\cite{el1994synthesis,tan2004searching,majumdar2013control}. The latter approach also yields bilinear matrix inequalities of
comparable sizes. Rather than seek algorithms that are
guaranteed to solve BMIs, a simpler approach is to attempt to solve the
BMIs using \emph{alternating minimization}: a form of coordinate
descent that fixes one set of variables in BMI, obtaining an LMI over
the remaining variables. However, these approaches usually get stuck at a
local ``saddle point'', and fail as a
result~\cite{Helton+Merino/1997/Coordinate}.
Approaches that parameterize a family of functions $V_{\vc}(\vx)$ face
the issue of choosing a family such that a CLF belonging to that family
is known to exist whenever the system is asymptotically stabilizable in the
first place. There is a rich literature on the existence of CLFs for a given
class of plant models. As mentioned earlier, if a system is
\emph{asymptotically stabilizable}, then there exists a CLF even if the
dynamics are not smooth~\cite{sontag1983lyapunov}. However, the CLF
need not be smooth. Recent results have shed some
light on the existence of polynomial Lyapunov functions for certain
classes of systems. Peet showed that an exponentially stable system
has a polynomial local Lyapunov function over a bounded region~\cite{Peet/2009/Exponentially}. Thus,
if there exists some feedback law that exponentially stabilizes a
given plant, we may conclude the existence of a polynomial
CLF for that system. This was recently extended to rationally stable systems
i.e., the distance to equilibrium decays as $o(t^{-k})$ for trajectories
starting from some set $\Omega$, by Leth et al.~\cite{Leth+Others/2017/Existence}. These results
do not guarantee that a search for a polynomial CLF will be successful
due to the lack of a bound on the degree $D$. This can be addressed by increasing
the degree of the monomials until a CLF is found, but the process can be prohibitively
expensive. Another drawback is that most approaches use SOS relaxations over polynomial systems to check the CLF conditions, although there is no guarantee as yet that polynomial CLFs that are also verifiable through SOS relaxations exist.
Another class of solutions involves approximate dynamic programming to
find approximations to value
functions~\cite{bertsekas2008approximate}. The solutions obtained
through these approaches are not guaranteed to be CLFs and thus may
need to be discarded, if the final result does not satisfy the
conditions for a CLF. Approximate solutions are also investigated
through learning from demonstrations \cite{zhong2013value}.
Khansari-Zadeh et al. learn a CLF from
demonstrations through a combination of sampling states and corresponding
feedback provided by the demonstrator. A likely CLF is learned
through parameterizing a class of functions $V_{\vc}(\vx)$, and finding
conditions on $\vc$ by enforcing the conditions for the
CLFs at the sampled states~\cite{KHANSARIZADEH2014}. The conditions
for being a CLF should be checked on the solution obtained by solving these
constraints.
Compared to the techniques described above, the approach presented in
this paper is based on parameterization by choosing a class of
functions $V_{\vc}(\vx)$ and attempting to find a suitable $\vc \in C$
so that the result is a CLF. Our approach avoids having to solve BMIs
by instead choosing finitely many sample states, and using
demonstrator's feedback to provide corresponding sample controls for
the state samples. However, instead of choosing these samples at
random, we use a verifier to select samples. Furthermore, our approach
can also systematically explore the space of possible parameters $C$
in a manner that guarantees termination in a number of iterations
polynomial in the dimensionality of $C$ and $\vx$. The result upon
termination can be a guaranteed CLF $V$ or failure to find a CLF among
the class of functions provided.
\subsection{Comparison with Other Approaches}
We now compare our method against other techniques used to
automatically construct provably correct controllers.
\paragraph{Comparison with CEGIS:}
We have claimed that the use of a demonstrator
helps our approach deal with a computationally
expensive quantifier alternation in the CLF condition.
To understand the impact of this aspect of our approach,
we first compare
the proposed method with our previous work, namely counterexample
guided inductive synthesis
(CEGIS) that is designed to solve constraints with quantifier alternation,
and applied to the synthesis of CLFs~\cite{ravanbakhsh2015counterexample}. In this framework, the
learning process only relies on counterexamples provided by a verifier component, without involving
demonstrations. Despite a timeout that is set to two hours, our
CEGIS method timed out for \emph{all the problem instances} discussed in
this article, without discovering a CLF. As a result, we exclude this approach from
further comparisons. These results suggest that demonstrations are essential
for fast convergence.
\paragraph{Learning CLFs from Data:} On the other hand,
Khansari-Zadeh et al.~\cite{KHANSARIZADEH2014} learn likely CLFs from
demonstrations over sets of states that are sampled without (a) the use of a verifier to check the result,
and (b) counterexamples as new samples, both of which are features of our approach.
Therefore, the correctness of the controller thus derived is not formally
guaranteed. To this end, we verify if the solution is in fact
a CLF.
The methodology of Khansari-Zadeh et al. is implemented using the following steps:
\begin{compactenum}
\item Choose a parameterization of the desired CLF $V_{\vc}(\vx)$ (identical to our approach).
\item Generate samples in batches, wherein for each batch:
\begin{compactenum}
\item Sample $N_1=50$ states uniformly at random, and for each state $\vx_i$, add the constraint $V_{\vc}(\vx_i) \geq 0 $,
for $i \in [1, N_1]$.
\item Sample $N_2=5$ States at random, and for each state $\vx_j$ ($j \in [1, N_2]$),
simulate the MPC demonstrator for $N_3 = 10$ time steps to obtain state control
samples
\[\{(\vx_{j,1}, \vu_{j,1}),\ldots,(\vx_{j,N_3}, \vu_{j,N_3})\} \,.\]
\item Add the constraints $\grad V_{\vc} \cdot f |_{\vx = \vx_{j,k}, \vu=\vu_{j,k}}< 0$ for $j=1, \ldots, N_2$ and
$k = 1, \ldots, N_3$ to enforce the negative definiteness of the CLF.
\end{compactenum}
\item At the end of batch $k$, solve the system of linear constraints thus far to check if there is a feasible solution.
\item If there is no feasible solution, then \textbf{exit}, since no function in $V_{\vc}(\vx)$ is compatible with the data.
\item If there is a feasible solution, check this solution using the \textsc{verifier}.
\item If the verifier succeeds, then \textbf{exit} successfully with the CLF discovered.
\item Otherwise, continue to generate another batch of samples.
\end{compactenum}
We enforce the constraint $V(\vx) > 0$ and $\grad V \cdot f < 0$ over different sets of samples,
since simulating the demonstrator is much more expensive for each point.
The approach iterates between generating successive batches of data until a preset
timeout of two hours as long as (a) there are CLFs remaining to consider and (b) no
CLF has been discovered thus far. The time taken to learn and verify the solution is not considered
against the total time limit, and also not added to the overall time reported.
Besides stability, the approach is also adapted for other properties, which are used in our benchmarks.
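For concreteness, a minimal sketch of one batch-feasibility step is given below; since $V_{\vc}$ is linear in $\vc$, both conditions become linear constraints, here checked with an LP. The margin \texttt{eps} and the row construction are illustrative assumptions, not the exact implementation of~\cite{KHANSARIZADEH2014}.
\begin{verbatim}
# Sketch: feasibility of linear CLF constraints over sampled data.
import numpy as np
from scipy.optimize import linprog

def clf_feasible(c_dim, pos_rows, dec_rows, eps=1e-6):
    """pos_rows[i] = [g_1(x_i), ..., g_r(x_i)]      (V_c(x_i) >= eps)
       dec_rows[j] = [dg_1.f, ..., dg_r.f]|(x_j,u_j)  (<= -eps)"""
    A = np.vstack([-np.asarray(pos_rows), np.asarray(dec_rows)])
    b = -eps * np.ones(A.shape[0])
    res = linprog(np.zeros(c_dim), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * c_dim)
    return res.x if res.success else None  # None: no compatible V_c
\end{verbatim}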
\begin{table*}[t]
\caption{\small Results for ``demonstration-only" method. \#Sam.: number of samples, \#Dem: number of demonstrations, Case: best-case or worst-case, Time: total computation time (minutes), TO: time out ($>$ 2 hours).}\label{tab:no-CE}
\begin{center}
\begin{tabular}{ ||l||r|r||l|r|r|r|c||r|r|r|| }
\hline
\multicolumn{1}{||c||}{Problem} & \multicolumn{2}{c||}{Stats} &
\multicolumn{5}{c||}{Performance} & \multicolumn{3}{c||}{Proposed Method} \\
\hline
System Name & Succ. \% & TO \% & Case & \#Sam. & \#Dem. & Time & Status & \#Sam. & \#Dem. & Time \\
\hline
\multirow{2}{*}{Unicycle-Segment 2} & \multirow{2}{*}{60} & \multirow{2}{*}{0}
& best & 400 & 40 & 1 & Succ & \multirow{2}{*}{65} & \multirow{2}{*}{2} & \multirow{2}{*}{4} \\
& & & worst & 600 & 72 & 1 & Fail & & &\\
\hline
\multirow{2}{*}{Unicycle-Segment 1} & \multirow{2}{*}{45} & \multirow{2}{*}{0}
& best & 600 & 35 & 2 & Succ & \multirow{2}{*}{79} & \multirow{2}{*}{23} & \multirow{2}{*}{12} \\
& & & worst & 800 & 70 & 3 & Fail & & &\\
\hline
\multirow{2}{*}{TORA} & \multirow{2}{*}{60} & \multirow{2}{*}{30}
& best & 6300 & 535 & 43 & Succ & \multirow{2}{*}{84} & \multirow{2}{*}{19} & \multirow{2}{*}{8} \\
& & & worst & 17100 & 1580 & TO & Fail & & &\\
\hline
\multirow{2}{*}{Inverted Pendulum} & \multirow{2}{*}{30} & \multirow{2}{*}{70}
& best & 2250 & 137 & 84 & Succ & \multirow{2}{*}{58} & \multirow{2}{*}{34} & \multirow{2}{*}{19} \\
& & & worst & 15750 & 300 & TO & Fail & & &\\
\hline
\multirow{2}{*}{Bicycle} & \multirow{2}{*}{100} & \multirow{2}{*}{0}
& best & 2700 & 55 & 2 & Succ & \multirow{2}{*}{33} & \multirow{2}{*}{7} & \multirow{2}{*}{1}\\
& & & worst & 54000 & 1883 & 48 & Succ & & &\\
\hline
\multirow{2}{*}{Bicycle $\times$ 2} & \multirow{2}{*}{0} & \multirow{2}{*}{100}
& best & 81600 & 1736 & TO & Fail & \multirow{2}{*}{89} & \multirow{2}{*}{30} & \multirow{2}{*}{46}\\
& & & worst & - & - & - & - & & &\\
\hline
\multirow{2}{*}{Forward Flight} & \multirow{2}{*}{0} & \multirow{2}{*}{0}
& best & 900 & 35 & 4 & Fail & \multirow{2}{*}{72} & \multirow{2}{*}{4} & \multirow{2}{*}{10}\\
& & & worst & 2700 & 254 & 31 & Fail & & &\\
\hline
\multirow{2}{*}{Hover Flight} & \multirow{2}{*}{0} & \multirow{2}{*}{100}
& best & 7150 & 227 & TO & Fail & \multirow{2}{*}{132} & \multirow{2}{*}{57} & \multirow{2}{*}{47}\\
& & & worst & - & - & - & - & & &\\
\hline
\end{tabular}
\end{center}
\end{table*}
The results are reported in Table~\ref{tab:no-CE}. Since the
generation of random samples is involved, we run the procedure $10$
times on each benchmark, and report the percentage of trials that
succeeded in finding a CLF, the number of timeouts and the number of
trials that ended in an infeasible set of constraints. We note that
the success rate is $100\%$ for just one problem instance. For four
other problem instances, the method is successful for a fraction
of the trials. The remaining benchmarks fail on all trials.
Next, the minimum and maximum number of demonstrations needed in the
trials to find a CLF is reported as the ``best-case'' and ``worst-case''
respectively. We note that our approach requires far fewer demonstrations
even when compared to the best-case scenario. Thus, we conclude from this
data that the
time spent by our approach for finding counterexamples is justified by the
\emph{significant decrease} in the number of demonstrations, and thus, faster
convergence. This is beneficial especially for cases where generating
demonstrations is expensive.
For one of the benchmarks (the forward flight problem of the Caltech ducted fan),
the method stops for all cases because a function compatible with the data does not exist.
As such, this suggests that no CLF compatible with the demonstrator exists.
On the other hand, our approach successfully finds a CLF while considering just
four demonstrations.
Finally, for two of the larger problem instances, we continue to obtain
feasible solutions at the end of the time limit, although the verifier
cannot prove the learned function is a CLF. In other words, there are
values of $\vc$ left that have not been considered by the verifier.
Our approach uses counterexamples, along with a judicious choice of candidate CLFs to eliminate
all but a bounded volume of candidates.
\paragraph{Comparison with Bilinear Solvers:} We now compare our method against approaches based on
bilinear formulations found in related work~\cite{el1994synthesis,majumdar2013control,tan2004searching}.
We wish to find a Lyapunov function $V$ and a corresponding feedback law $K: X \mapsto U$, simultaneously.
Therefore, we assume $K$ is a linear combination of basis functions $K: \sum_{k=1}^{r'} \theta_{k} h_k(\vx)$.
Likewise, we parameterize $V$ as a linear combination of basis functions, as well: $V : \sum_{k=1}^r c_{k} g_k(\vx)$.
Then, we wish to find $\vc$ and $\vth$ that satisfy the constraints corresponding to the
property at hand.
To synthesize a CLF, we wish to find $V_{\vc}, K_{\vth}$, so that $V_{\vc}(\vx)$ and its
Lie derivative under the feedback $u = K_{\vth}(\vx)$ is negative definite. This is relaxed
as an optimization problem:
\[ \begin{array}{rcl}
\min\limits_{\vc,\vth,\gamma} \textcolor{red}{\gamma} &\ \\
\mathsf{s.t.}
& V_{\vc} \mbox{ is positive definite } \\
& (\forall\ \vx \neq \vzero)\ \grad V_{\vc}(\vx) \cdot f(\vx, K_{\vth}(\vx)) \leq {\color{red}\gamma} ||\vx||_2^2 \\
\end{array}\]
The decision variables include $\vc, \vth$ that parameterize $V$ and $K$, respectively.
In fact, if a feasible solution is obtained such that $\gamma < 0$ then we may stop the optimization and
declare that a CLF has been found.
To solve this bilinear problem, we use the alternating minimization approach described below.
First, $V$ is initialized to be a positive definite function (by initializing $\vc$ to some fixed value).
Then, the approach repeatedly alternates between the following steps:
\begin{enumerate}
\item $\vc$ is fixed, and we search for a $\vth$ that minimizes $\gamma$.
\item $\vth$ is fixed, and we search for a $\vc$ that minimizes $\gamma$.
\end{enumerate}
Each of these problems can be relaxed using Sum of Squares (SOS)
programming~\cite{prajna2002introducing}. The approach is iterated and
results in a sequence of values $\gamma_0 \geq \gamma_1 \geq \gamma_2 \geq \cdots \geq \gamma_i$,
wherein $\gamma_i$ is the value of the objective after $i$ optimization instances have
been solved. Since the solution of one optimization instance forms a feasible solution for the
subsequent instance, it follows that the $\gamma_i$ are monotonically nonincreasing. The iterations stop whenever
$\gamma$ does not decrease sufficiently between iterations. After termination,
the approach succeeds in finding $V_{\vc}$, $K_{\vth}$
only if $\gamma < 0$. Otherwise the approach fails.
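Schematically, the alternation can be organized as below; the two inner solvers stand in for the SOS-relaxed SDPs and are hypothetical placeholders, not calls to any particular solver.
\begin{verbatim}
# Skeleton of alternating minimization for the bilinear problem.
def alternating_clf_search(c0, solve_for_theta, solve_for_c,
                           max_iters=50, tol=1e-4):
    c, gamma_prev = c0, float("inf")
    for _ in range(max_iters):
        theta, gamma = solve_for_theta(c)  # fix V_c, optimize K_theta
        c, gamma = solve_for_c(theta)      # fix K_theta, optimize V_c
        if gamma < 0:
            return c, theta                # success: a CLF is found
        if gamma_prev - gamma < tol:
            return None                    # stalled (saddle point)
        gamma_prev = gamma
    return None                            # iteration budget exhausted
\end{verbatim}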
Finding a suitable initial value for $\vc$ is an important
factor for success. As proposed by Majumdar et al., we compute a
linear feedback controller by applying the LQR method to the
linearization of the dynamics~\cite{majumdar2013control}. In this
case, we initialize $V$ using the optimal cost function provided by
the LQR. We also note that the linearized dynamics are not
controllable in all cases, so we cannot always use this
initialization trick.
Additionally, Majumdar et al. (ibid) discuss solutions to handle input saturation, requiring
$K_{\vth}(\vx) \in U$ to avoid input saturation.
Here, we consider two different variations of this method:
(i) inputs are not saturated, (ii) inputs are saturated. We consider variation (ii) only if the method is successful without forcing the input saturation.
For the Lyapunov function $V$ we consider quadratic monomials as our basis functions, and for the feedback law
$K$, we consider both linear and quadratic basis functions as separate problem instances.
Similar to the SDP relaxation considered in this work,
the SOS programming approach uses a degree limit $D$ for the multiplier polynomials used in the
positivstellensatz (cf.~\cite{lasserre2009moments}). The limits used for the bilinear optimization
approach are identical to those used in our method for each benchmark. The bilinear method is adapted to other properties used in our
benchmarks and the results are shown in Table~\ref{tab:bilinear}.
For the first two problem instances, the linearized dynamics are not
controllable, and thus, we can not use the LQR trick for
initialization.
For the remaining instances, we were able to use the LQR trick successfully
to find an initial solution. Starting from this solution, the bilinear approach
is successful on four problem instances, but fails for the hover flight problem. This
suggests that even the LQR trick may not always provide a good
initialization. For two of the larger problem instances,
the bilinear method fails because of numerical errors when dealing
with large SDP problems. While the SOS programming has similar
complexity compared to our method, it encounters numerical problems
when solving large problems. We believe two factors are important
here. First, our method solves different smaller verification problems
and verifies each condition separately, while in an SOS formulation all
conditions on $V$ and $\nabla V$ are formulated into one big SDP
problem. Moreover, in our method when we encounter a numerical error,
we simply use the (potentially wrong) solution as a spurious
counterexample without losing the soundness. Then, using
demonstrations we continue the search. On the other hand, when the
bilinear optimization procedure encounters a numerical error, it
is unable to make further progress towards an optimal solution.
\begin{table}[t]
\caption{\small Results for ``bilinear formulation" method. $K$: basis functions used to parameterize $K$, L: basis functions are monomials with maximum degree $1$ (linear), Q: basis functions are monomials with maximum degree $2$ (quadratic), LQR: if LQR is used for initialization, ST.: saturation type, NP: numerical problem, St.: status.}\label{tab:bilinear}
\begin{center}\scriptsize
\begin{tabular}{ ||l||c|c||c|c|| }
\hline
\multicolumn{1}{||c||}{Problem} & \multicolumn{2}{|c||}{Param.} &
\multicolumn{2}{c||}{Status} \\
\hline
System Name & $K$ & LQR & ST.(i) & ST.(ii) \\
\hline
\multirow{1}{*}{Unicycle-Seg. 2} & L & \crossMark & - & - \\%& - \\
\hline
\multirow{1}{*}{Unicycle-Seg. 1} & L & \crossMark & - & - \\%& - \\
\hline
TORA & L & \tick & \tick & \tick \\%& - \\
\hline
Inverted Pend. & L & \tick & \tick & \tick \\%& - \\
\hline
Bicycle & L & \tick & \tick & \tick \\%& - \\
\hline
Bicycle $\times$ 2 & L & \tick & NP & - \\%& - \\
\hline
Forward Flight & L & \tick & NP & - \\%& - \\
\hline
\multirow{2}{*}{Hover Flight} & L & \tick & \crossMark & - \\%& - \\
& Q & \tick & \crossMark & - \\%& - \\
\hline
\end{tabular}
\end{center}
\end{table}
In conclusion, our method has several benefits when compared to the
bilinear formulation. First, our method does not assume the linearized system is controllable to initialize a solution.
Second, our method uses demonstrations to generate a
candidate instead of a local search, and we provide an upper-bound on
the number of iterations. And finally, our method can sometimes recover from
numerically ill-posed SDPs, and thus scales better as demonstrated through
experiments. On the flip side, unlike the bilinear formulation, our method
relies on a demonstrator that may not be easy to implement.
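To make the role of the demonstrator concrete, the following is a minimal sketch of a finite-horizon MPC-style demonstrator via direct single shooting; the Euler discretization, horizon, cost weights, and solver choice are illustrative assumptions, not the setup used in our experiments.
\begin{verbatim}
# Sketch: an MPC-style demonstrator by direct single shooting.
import numpy as np
from scipy.optimize import minimize

def mpc_demonstrate(x0, f, n_u, u_lo, u_hi, H=10, dt=0.1):
    """Return the first input of a horizon-H plan driving x to 0."""
    def cost(U_flat):
        U = U_flat.reshape(H, n_u)
        x, c = np.asarray(x0, dtype=float), 0.0
        for u in U:
            x = x + dt * f(x, u)        # forward-Euler rollout
            c += x @ x + 0.1 * (u @ u)  # quadratic state/input cost
        return c
    bounds = list(zip(u_lo, u_hi)) * H  # box constraints per step
    res = minimize(cost, np.zeros(H * n_u), bounds=bounds,
                   method="L-BFGS-B")
    return res.x[:n_u]                  # apply only the first input
\end{verbatim}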
\subsection{Case Study I:} This system is a two-wheeled mobile robot
modeled with five states $[x, y, v, \theta, \gamma]$ and two control
inputs~\cite{francis2016models}, where $x$ and $y$ define the position
of the robot, $v$ is its velocity, $\theta$ is the rotational
position and $\gamma$ is the angle between the front and rear
axles. The goal is to stabilize the robot to a target velocity
$v^*=5$, and $\theta^* = \gamma^* = y^* = 0$ as shown in Fig.~\ref{fig:bicycle}. The dynamics of the model are as
follows:
\begin{equation*}\label{ex:bicycle-dyn}
\left[ \begin{array}{l}
\dot{x} \\
\dot{y} \\ \dot{v} \\ \dot{\theta} \\ \dot{\sigma}
\end{array}\right] =
\left[ \begin{array}{l}
v\cos(\theta) \\
v\sin(\theta) \\ u_1 \\ v\sigma \\ u_2
\end{array} \right] \,,
\end{equation*}
where $\sigma = \tan(\gamma)$ (see Fig.~\ref{fig:bicycle}).
Variable $x$ is immaterial in
the stabilization problem and is dropped to obtain a model with
four state variables $[y, v, \theta, \sigma]$. Also, the sine function is approximated with a
polynomial of degree one.
The inputs are saturated over the intervals $U: [-10, 10]\times[-10, 10]$, and
the specification is reach-while-stay, provided by the following sets
\[
\begin{array}{rl}
S: &[-2, 2]\times[3, 7]\times[-1, 1]\times[-1, 1] \\
I: &\B_{0.4}(\vzero) \\
T: &\B_{0.1}(\vzero) \,.
\end{array}
\]
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{pics/bicycle}
\end{center}
\caption{A schematic view of the bicycle model.}\label{fig:bicycle}
\end{figure}
The method finds the following CLF:
\begin{align*}
V =
&0.37 y^2
+ 0.52 y\theta
+ 3.11 \theta^2
+ 0.98 y\sigma
+ 2.23 \sigma\theta +\\
&4.46 \sigma^2
- 0.36 vy
- 0.29 v\theta
+ 0.95 v\sigma
+ 3.86 v^2\,.
\end{align*}
This CLF is used to design a
controller. Fig.~\ref{fig:bicycle-sim} shows the projection of
trajectories on to $x$-$y$ plane for the synthesized controller in red. The
blue trajectories are generated using the MPC controller that served as the
demonstrator. The
behavior of the system for both controllers are similar but not
identical. Notice that the initial state in
Fig.~\ref{fig:bicycle-sim}(c) is not in the region of attraction
(guaranteed region). Nevertheless, the CLF-based controller can
still stabilize the system while keeping the system in the safe
region. On the other hand, the MPC violates the safety constraints
even when the safety constraints are imposed in the MPC
scheme. The safety is violated because, in the beginning, $\theta$
gets larger than $1$ and approaches $\pi/2$ (the robot moves
almost vertically).
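As a usage illustration, the generated CLF can be plugged into the Sontag feedback sketched earlier and simulated in closed loop; the quadratic form below transcribes the CLF above (state measured as the deviation $[y, v - 5, \theta, \sigma]$, with $\sin$ linearized), and the initial state and integration settings are illustrative assumptions.
\begin{verbatim}
# Sketch: closed-loop simulation of the bicycle with the found CLF.
import numpy as np
from scipy.integrate import solve_ivp

# Symmetrized quadratic form: V(x) = x'Px, x = [y, v-5, theta, sigma].
P = np.array([[ 0.37, -0.18,   0.26,  0.49 ],
              [-0.18,  3.86,  -0.145, 0.475],
              [ 0.26, -0.145,  3.11,  1.115],
              [ 0.49,  0.475,  1.115, 4.46 ]])

def f0(x):                      # drift of the reduced model
    y, dv, th, sg = x
    return np.array([(dv + 5.0) * th, 0.0, (dv + 5.0) * sg, 0.0])

F = np.array([[0., 0.], [1., 0.], [0., 0.], [0., 1.]])  # input matrix

def u_sontag(x):
    g = 2.0 * P @ x             # grad V(x)
    a, b = g @ f0(x), g @ F
    beta = float(b @ b)
    u = np.zeros(2) if beta == 0.0 else \
        -b * (a + np.hypot(a, beta)) / beta
    return np.clip(u, -10.0, 10.0)   # U = [-10,10] x [-10,10]

sol = solve_ivp(lambda t, x: f0(x) + F @ u_sontag(x),
                (0.0, 10.0), [0.3, -0.2, 0.2, 0.1], max_step=0.01)
print(sol.y[:, -1])  # expected to approach the origin (i.e., v -> 5)
\end{verbatim}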
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{pics/bicycle-sim}
\end{center}
\caption{Simulation for the bicycle robot - Projected on x-y
plane. Simulation traces are plotted for three different initial states. Blue (red) traces corresponds to trajectories of the system for MPC controller (CLF-based controller).}\label{fig:bicycle-sim}
\end{figure}
\subsection{Case Study II:}
The problem of keeping the inverted pendulum in a vertical
position is considered. This case study has applications in balancing
two-wheeled robots~\cite{CHAN201389}. The system has two degrees
of freedom: the position of the cart $x$, and the angle
of the inverted pendulum $\theta$. The goal is to keep
the pendulum in a vertical position by moving the cart with
input $u$ (Fig.~\ref{fig:inverted-pendulum}).
The system has four state variables
$[x, \dot{x}, \theta, \dot{\theta}]$ with the following
dynamics~\cite{landry2005dynamics}:
\begin{equation*}\label{eq:inverted-pendulum-dyn}
\left[ \begin{array}{l}
\ddot{x} \\ \ddot{\theta}
\end{array}\right] =
\left[ \begin{array}{l}
\frac{4u - 4\epsilon\dot{x} + 4ml\dot{\theta}^2 \sin(\theta) - 3mg\sin(\theta)\cos(\theta)}{4(M+m)-3m\cos^2(\theta)} \\ \frac{ (M+m)g\sin(\theta) - (u - \epsilon\dot{x}) \cos(\theta) - ml\dot{\theta}^2\sin(\theta)\cos(\theta)}{l(\frac{4}{3}(M+m)-m\cos(\theta)^2)}
\end{array} \right] \,,
\end{equation*}
where $m = 0.21$ and $M=0.815$ are masses of the pendulum and the cart
respectively, $g=9.8$ is the gravitational acceleration, and $l=0.305$
is distance of center of mass of the pendulum from the cart. After partial
linearization, the dynamics have the following form:
\begin{equation*}
\left[ \begin{array}{l}
\ddot{x} \\ \ddot{\theta}
\end{array}\right] =
\left[ \begin{array}{l}
4u + \frac{4(M+m)g \tan(\theta) - 3mg\sin(\theta)\cos(\theta)}{4(M+m)-3m\cos^2(\theta)} \\ \frac{- 3 u \cos(\theta)}{l}
\end{array} \right] \,.
\end{equation*}
The trigonometric and rational functions are approximated with polynomials of degree three. The input is saturated $U:[-20, 20]$ and sets for
a safety specification are $S: [-1, 1]^4 , \ I:\B_{0.1}(\vzero)$.
Fig.~\ref{fig:inverted-sim} shows some of the traces of the closed loop system
for the CLF-based controller as well as the MPC controller. Notice that the trajectories of the CLF based
controller are quite distinct from the MPC, especially in regions where the demonstration is not provided during the CLF synthesis process. For example, in Fig.~\ref{fig:inverted-sim}(b), the behaviors of these controllers are similar
outside the initial set $I$. However, inside $I$ (near the equilibrium) the
behavior is different, since the demonstrations are only generated for states outside $I$.
The CLF-based controller is designed using the following CLF generated by the learning framework:
\begin{align*}
V =& 16.37 \dot{\theta}^2 + 50.37 \dot{\theta}\theta
+ 75.16 \theta^2 + 13.51 x\dot{\theta}
+ 43.26 x\theta + \\
& 10.44 x^2 + 23.30 \dot{\theta}\dot{x} + 38.09 \dot{x}\theta
+ 11.13 \dot{x}x + 9.55 \dot{x}^2 \,.
\end{align*}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.3\textwidth]{pics/inverted-pendulum}
\end{center}
\caption{A schematic view of the ``inverted pendulum on a cart".}\label{fig:inverted-pendulum}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1\textwidth]{pics/inverted-sim}
\end{center}
\caption{Simulation for the inverted pendulum system. Simulation traces are plotted
for two initial states. Red (blue) traces show the simulation for the CLF-based (MPC) controller.}\label{fig:inverted-sim}
\end{figure*}
\subsection{Case Study III:}
The Caltech ducted fan has been used to study the aerodynamics of a single
wing of a thrust vectored, fixed wing aircraft~\cite{jadbabaie2002control}.
In this case study, we wish to
design forward flight control in which the angle of attack needs to be
set for a stable forward flight. The model of the system is carefully
calibrated through wind tunnel experiments. The system has four states:
$v$ is the velocity, $\gamma$ is the direction of motion of the ducted fan,
$\theta$ is the rotational position, and $q$ is the angular velocity.
The control inputs are the thrust $u$ and the angle at which the
thrust is applied $\delta_u$ (Fig.~\ref{fig:ducted-fan}).
Also, the inputs are saturated:
$U :[0, 13.5] \times [-0.45, 0.45]$.
The dynamics are:
\begin{equation*}\label{ex:ducted-fan-forward-dyn}
\left[ \begin{array}{l}
m \dot{v} \\ m v \dot{\gamma} \\ \dot{\theta} \\ J \dot{q}
\end{array}\right] =
\left[ \begin{array}{l}
-D(v, \alpha) - W \sin(\gamma) + u \cos(\alpha + \delta_u) \\
L(v, \alpha) - W \cos(\gamma) + u \sin(\alpha + \delta_u) \\
q \\ M(v, \alpha) - u l_T \sin(\delta_u)
\end{array} \right] \,,
\end{equation*}
where the angle of attack $\alpha = \theta - \gamma$, and
$D$, $L$, and $M$ are polynomials in $v$ and $\alpha$.
For the full list of parameters, see~\cite{jadbabaie2002control}.
According to the dynamics, $\vx^*:\ [6, 0, 0.1771, 0]$ is a stable
equilibrium (for $\vu^*:\ [3.2, -0.138]$) where the ducted fan can move forward with velocity $6$.
Thus, the goal is to reach near $\vx^*$.
The system is not affine in control. We replace $u$ and $\delta_u$
with $u_s = u \sin(\delta_u)$ and $u_c = u \cos(\delta_u)$:
\begin{equation*}\label{ex:ducted-fan-forward-dyn}
\left[ \begin{array}{l}
\dot{v} \\ \dot{\gamma} \\ \dot{\theta} \\ \dot{q}
\end{array}\right] =
\left[ \begin{array}{l}
\frac{-D(v, \alpha) - W \sin(\gamma) + u_c \cos(\alpha) - u_s \sin(\alpha)}{m} \\
\frac{L(v, \alpha) - W \cos(\gamma) + u_c \sin(\alpha) + u_s \cos(\alpha)}{mv} \\
q \\ \frac{M(v, \alpha) - l_T u_s}{J}
\end{array} \right] \,.
\end{equation*}
Projecting $U$ into the new coordinates yields a sector of a circle.
Then, set $U$ is safely under-approximated by a polytope $\hat{U}$ as shown in Fig.~\ref{fig:uhat}.
\begin{figure}
\begin{center}
\includegraphics[width=0.3\textwidth]{pics/ducted-fan}
\end{center}
\caption{A schematic view of the Caltech ducted fan.}\label{fig:ducted-fan}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.3\textwidth]{pics/uhat}
\end{center}
\caption{Set of feasible inputs $U$ and its under approximation $\hat{U}$ in the new coordinate for case study III.}\label{fig:uhat}
\end{figure}
Next, we perform a
translation so that $\vx^*$ ($\vu^*$) is the origin of the
state (input) space in the new coordinate system.
In order to obtain a polynomial dynamics, we approximate
$v^{-1}$, $\sin$ and $\cos$ with polynomials of degree one, three and three,
respectively.
These changes yield a polynomial control affine dynamics, which fits the
description of our model.
For the reach-while-stay specification, the sets are defined as the following:
\begin{align*}
S&:[3, 9]\times[-0.75, 0.75]\times[-0.75, 0.75]\times[-2, 2] \\
I&:\{[v, \gamma, \theta, q]^t |(0.4v)^2 + \gamma^2 + \theta^2 + q^2 < 0.4^2 \} \, \\
T&:\{[v, \gamma, \theta, q]^t |(0.4v)^2 + \gamma^2 + \theta^2 + q^2 < 0.05^2 \} \,.
\end{align*}
The projection of some of the traces of the system onto the $x$-$y$ plane is shown in Fig.~\ref{fig:forward-sim}. We set $x_0 = y_0 = 0$ and
\[
\dot{x} = v \cos(\gamma) , \ \dot{y} = v \sin(\gamma)\,.
\]
The CLF-based
controller is designed using the following generated CLF:
\begin{align*}
V =&+ 3.23 q^2 + 2.17 q\theta
+ 3.90 \theta^2 - 0.2 qv
- 0.45 v\theta \\
& + 0.53 v^2 + 1.66 q\gamma - 1.33 \gamma\theta
+ 0.48 v\gamma + 3.90 \gamma^2 \,.
\end{align*}
The traces show that the CLF-based controller stabilizes faster; however, the MPC
controller exploits the aerodynamics to achieve the same goal with better performance.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{pics/forward-sim}
\end{center}
\caption{Simulation for forward flight of
Caltech ducted fan - Projected on x-y
plane.
Blue (red) traces are trajectories of the closed loop system
with the MPC (CLF-based) controller. The rotational position is shown for some of the states (in black for the initial state) for each trajectory. Initial states
are $[2, 0.4, 0.717, 0]$, $[-1, -0.25, -0.133, 0]$, and $[-1, 0.4, 0.177, 0]$
for (a), (b), and (c), respectively.}\label{fig:forward-sim}
\end{figure}
\subsection{Case Study IV:}\label{case:hover}
This case study addresses another problem for the planar Caltech ducted fan~\cite{jadbabaie2002control}.
The goal is to keep the planar ducted fan in a hover mode. The system has three degrees
of freedom, $x$, $y$, and $\theta$, which define the position and orientation of
the ducted fan. There are six state variables $x$, $y$, $\theta$, $\dot{x}$, $\dot{y}$, $\dot{\theta}$ and two control inputs $u_1$, $u_2$ ($U : [-10, 10]\times[0, 10]$).
The dynamics are
\begin{equation*}\label{ex:ducted-fan-hover-dyn}
\left[ \begin{array}{l}
m \ddot{x} \\ m \ddot{y} \\ J \ddot{\theta}
\end{array}\right] =
\left[ \begin{array}{l}
-d_c\dot{x} + u_1 \cos(\theta) - u_2 \sin(\theta) \\
-d_c\dot{y} + u_2 \cos(\theta) + u_1 \sin(\theta) - mg \\
r u_1
\end{array} \right] \,,
\end{equation*}
where $m = 11.2$, $g = 0.28$, $J = 0.0462$, $r = 0.156$ and $d_c = 0.1$.
The system is stable at the origin for $\vu^*: [0, mg]$. Therefore, we set
$\vu^*$ as the origin of the input space.
The specification
is a reach-while-stay property with the following sets:
\begin{align*}
S &: [-1,1]\times[-1,1]\times[-0.7,0.7]\times[-1, 1]^3 \\
I &:\B_{0.25}(\vzero), T:\B_{0.1}(\vzero) \,.
\end{align*}
The trigonometric functions are approximated with degree two
polynomials and the procedure finds a quadratic CLF:
\begin{align*}
V =& 1.64 \dot{\theta}^2 - 0.56 \dot{\theta}\dot{y}
+ 13.53 \dot{y}^2 + 0.07 \dot{\theta}y + 1.15 y\dot{y} +\\
&1.16 y^2 + 1.74 \theta\dot{\theta} + 0.03 \dot{y}\theta - 0.77 y\theta + 4.80 \theta^2 -\\
&4.57 \dot{\theta}\dot{x} + 0.85 \dot{x}\dot{y} + 0.34 y\dot{x} - 8.59 \dot{x}\theta + 12.77 \dot{x}^2 -\\
&0.45 \dot{\theta}x + 0.06 \dot{y}x + 0.51 yx - 3.71 x\theta + 4.12 x\dot{x} + \\
&1.88 x^2 \,.
\end{align*}
Some of the traces are shown in Fig.~\ref{fig:hover-sim}. As the simulations suggest, the MPC controller behaves very differently, and the CLF-based controller yields solutions with more oscillations. The CLF-based controller first stabilizes $x$ and $\theta$, and then the value of $y$ settles. Also, once the trace is inside the target region, the CLF-based controller does not guarantee a decrease in $V$, as is visible in Fig.~\ref{fig:hover-sim}(c).
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{pics/hover-sim}
\end{center}
\caption{Simulation for Case Study IV - Projected on the x-y
plane. The trajectories corresponding to the CLF-based (MPC) controller are shown in red
(blue) lines. The boundary of the target set is
shown in yellow.}\label{fig:hover-sim}
\end{figure}
\subsection{Case Study V:}
In this case study, a unicycle model~\cite{liberzon2012switching} is considered.
It is known that no continuous feedback can stabilize the unicycle, and therefore no
continuous CLF exists. However, considering a reference trajectory for a moving unicycle,
one can keep the system near the reference trajectory, using control funnels.
The unicycle model has the dynamics:
\[
\dot{x} = u_1 \cos(\theta) \ , \ \dot{y} = u_1 \sin(\theta) \ , \ \dot{\theta} = u_2\,.
\]
By a change of basis, a simpler dynamic model is used here (see~\cite{liberzon2012switching}):
\[
\dot{x_1} = u_1, \dot{x_2} = u_2, \dot{x_3} = x_1 u_2 - x_2 u_1 \,.
\]
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\textwidth]{pics/unicycle}
\end{center}
\caption{(a) Trajectory tracking using a control funnel - Projected on the x-y
plane. The reference trajectory, shown with the green line, consists of two segments.
Starting from $R_0$, the state remains in the funnel (blue region)
until it reaches $R_\T$. Boundary of each smaller blue region shows
the boundary of the funnel for a specific time. (b) Simulation traces for
some random initial states.
}\label{fig:unicycle}
\end{figure*}
We consider a planning problem, in which starting
near $[\theta, x, y] = [\frac{\pi}{2}, -1, -1]$, the goal is to reach near $[\theta, x, y] = [0, 2, 0]$.
In the first step, a feasible trajectory $\vx^*(t)$ is generated as
shown in Fig.~\ref{fig:unicycle}(a).
Then $\vx^*(t)$ is approximated with piecewise polynomials.
More precisely, the trajectory consists
of two segments. The first segment brings the car to the
origin and the second segment moves the car to the
destination.
Each segment is approximated using
polynomials in $t$ with degree up to three:
\begin{align*}
&\mbox{seg. 1} : \begin{cases}
\theta^*(t) = \pi - t \\
x^*(t) = -(1-0.64t)(1+0.64t) \\
y^*(t) = -(1-0.64t)(1-0.2t-0.25t^2)
\end{cases} \\
&\mbox{seg. 2} : \begin{cases}
\theta^*(t) = 0 \\
x^*(t) = t \\
y^*(t) = 0 \,.
\end{cases}
\end{align*}
Let $Tr(\theta, x, y)$ represent the transformation of a state in the $(\theta, x, y)$ coordinate system to the $(x_1, x_2, x_3)$ coordinates.
Also, for two sets $A$ and $B$, let $A \oplus B$ be the Minkowski sum of $A$ and $B$.
For example, we write $\{Tr(\theta, x, y)\} \oplus \B_{\delta}(\vzero)$ to denote the ball of radius $\delta$ around a state. Moreover, let $S_1$ ($S_2$) be the minimal box
which contains the trajectory $\vx^*(\cdot)$ for the first (second) segment in the
$(x_1, x_2, x_3)$ coordinates.
For the first segment, the goal is to reach from the initial set
$I:\{Tr(\pi/2, -1, -1)\} \oplus \B_{1}(\vzero)$ to the target set $T:\{Tr(0, 0, 0)\} \oplus \B_{1}(\vzero)$.
Also, the safe set is defined as $S:S_1 \oplus [-1.5,1.5]^3$.
That is, an enlarged box around $S_1$.
In the second segment, the goal is to reach from the initial set $I:\{Tr(0, 0, 0)\} \oplus \B_{1}(\vzero)$
to the target $T:\{Tr(0, 2, 0)\} \oplus \B_{1}(\vzero)$, while staying in $S:S_2 \oplus [-2, 2]^3$.
For each segment, we search for a Lyapunov-like function $V$ as a time-varying function, quadratic in the states.
Applying our method to this problem, we are able to find a strategy that implements the plan with guarantees.
The boundary of the funnels is shown in Fig.~\ref{fig:unicycle}(a). Also, some
simulation traces are shown in Fig.~\ref{fig:unicycle}(b), where the CLF controller
is implemented using the generated funnels. As simulations suggest, the funnels
can effectively stabilize the traces to the trajectory, when the unicycle
is moving forward.
\input{performance}
\input{comparison}
\subsection{Illustrative Example: TORA System}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\textwidth]{pics/tora.pdf}
\end{center}
\caption{TORA System. (a) A schematic diagram of the TORA
system. (b) Execution traces of the system using MPC control
(blue traces) and Lyapunov-based control (red traces) starting
from the same initial point.}
\label{Fig:tora-example}
\end{figure*}
Figure~\ref{Fig:tora-example}(a) shows a mechanical system, called
translational oscillations with a rotational actuator (TORA).
The system consists of
a cart attached to a wall using a spring. Inside the cart, there is
an arm with a weight which can rotate. The cart itself can oscillate freely
and there are no friction forces. The system has two degrees of freedom,
including the position of the cart $x$, and the rotational position of
the arm $\theta$. The controller can rotate the arm through input $u$.
The goal is to stabilize the cart
to $x=0$, with its velocity, angle, and angular velocity
$\dot{x} = \theta=\dot{\theta} = 0$. We refer the reader to Jankovic et
al.~\cite{jankovic1996tora} for a derivation of the dynamics, shown below
in terms of state variables $(x_1,\ldots,x_4)$, collectively written
as a vector $\vx$, and a single control input $(u_1)$, written as a vector $\vu$, after a basis transformation:
\begin{equation}\label{eq:tora-dyn}
\dot{x_1} = x_2,\, \dot{x_2} = -x_1 + \epsilon \sin(x_3),\, \dot{x_3} = x_4,\, \dot{x_4} = u_1\,.
\end{equation}
$\sin(x_3)$ is approximated using a degree-three polynomial, which is quite accurate over the range $x_3 \in [-2,2]$.
The equilibrium $x = \dot{x} = \theta = \dot{\theta} = 0$ now
corresponds to $x_1 = x_2 = x_3 = x_4 = 0$. The system has a single
control input $u_1$ that is bounded $u_1 \in [-1.5, 1.5]$. Further,
we define a ``safe set''
$S: [-1,1] \times [-1,1] \times [-2,2] \times [-1,1]$ and require that if
$\vx(0) \in S$ then $\vx(t) \in S$ for all time $t \geq 0$.
\paragraph{MPC Scheme:} A first approach to solve the problem uses
a nonlinear model-predictive control (MPC) scheme using
a discretization of the system dynamics with time step $\tau = 1$.
The time $t$ belongs to the set $\{0, \tau, 2\tau,\ldots,N\tau = \T\}$ and:
\begin{equation}\label{eq:discretized-dynamics}
\vx(t + \tau) = \vx(t) + \tau f(\vx(t), \vu(t)) \,,
\end{equation}
with $f(\vx,\vu)$ representing the vector field of the ODE in
~\eqref{eq:tora-dyn}. Fixing the time horizon $\T = 30$, we use a
simple cost function
$J(\vx(0), \vu(0), \vu(\tau), \ldots, \vu(\T-\tau))$:
\begin{equation}\label{eq:mpc-formulation}
\sum_{t \in \{0,\tau,...,\T-\tau\}} \left(||\vx(t)||_2^2 +
||\vu(t)||_2^2\right) + N \ ||\vx(\T)||_2^2 \,.
\end{equation}
Here, we
constrain $\vu(t) \in [-1.5, 1.5]$ for all
$t$ and define $\vx(t+\tau)$ in terms of
$\vx(t)$ using the discretization in~\eqref{eq:discretized-dynamics}.
Such a control is implemented using a first/second order numerical
gradient descent method to minimize the cost
function~\cite{Nocedal+Wright/2006/Numerical}. The stabilization of
the system was informally confirmed through hundreds of simulations
from different initial states. However, the MPC scheme is expensive,
requiring repeated solutions to (constrained) nonlinear optimization
problems in real-time. Furthermore, in general, the closed loop lacks
formal guarantees despite the \emph{high confidence} gained from
numerous simulations.
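For concreteness, the following Python sketch implements a single-shooting version of this MPC scheme for the discretized dynamics in Eq.~\eqref{eq:discretized-dynamics}, using an off-the-shelf quasi-Newton solver in place of a hand-written gradient method; the value of $\epsilon$ is an assumed constant for illustration:
\begin{verbatim}
# Single-shooting MPC sketch for the discretized TORA model.
import numpy as np
from scipy.optimize import minimize

eps, tau, N = 0.1, 1.0, 30  # eps is an assumed value

def f(x, u):
    return np.array([x[1], -x[0] + eps * np.sin(x[2]),
                     x[3], u])

def rollout(x0, us):
    xs, x = [x0], x0
    for u in us:
        x = x + tau * f(x, u)   # forward-Euler step
        xs.append(x)
    return xs

def cost(us, x0):
    xs = rollout(x0, us)
    stage = sum(x @ x + u * u for x, u in zip(xs[:-1], us))
    return stage + N * (xs[-1] @ xs[-1])

def mpc_step(x0):
    res = minimize(cost, np.zeros(N), args=(x0,),
                   method="L-BFGS-B",
                   bounds=[(-1.5, 1.5)] * N)
    return res.x[0]             # apply only the first input

print(mpc_step(np.array([0.5, 0.0, 0.5, 0.0])))
\end{verbatim}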
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}
\matrix[every node/.style={rectangle, draw=black, line width=1.5pt}, row sep=20pt, column sep=15pt]{ & \node[fill=blue!20](n0){\begin{tabular}{c}
\textsc{Learner} \end{tabular}}; & \\
\node[fill=green!20](n1){\begin{tabular}{c}
\textsc{Verifier} \end{tabular}}; & & \node[fill=red!20](n2){\begin{tabular}{c}
\textsc{Demonstrator} \end{tabular} }; \\
};
\path[->, line width=2pt] (n0) edge[bend right] node[left]{$V(\vx)?$} (n1)
(n1) edge[bend right] node[below]{\;\;\;\begin{tabular}{c}
Yes or \\
No($\vx_{j+1}$)\end{tabular}} (n0)
(n0) edge [bend left] node[above]{$\vx_j$} (n2)
(n2) edge [bend left] node[above]{$\vu_j$} (n0);
\draw (n0.north)+(0,0.3cm) node {$(\vx_1, \vu_1), \ldots, (\vx_j, \vu_j)$};
\end{tikzpicture}
\end{center}
\caption{Overview of the learning framework for learning a control Lyapunov function.}\label{fig:learning-framework}
\end{figure}
\paragraph{Learning a Control Lyapunov Function:} In this article, we introduce
an approach which uses the MPC scheme as a \textsc{demonstrator},
and attempts to learn a control Lyapunov function. Then, a
control law (in a closed form) is obtained from the CLF. The overall
idea, depicted in Fig.~\ref{fig:learning-framework}, is to pose
queries to the \emph{offline} MPC at finitely many \emph{witness}
states $\{ \vx^{(1)}, \ldots, \vx^{(j)} \}$. Then, for each witness state
$\vx^{(i)}$, the MPC is applied to generate a sequence of control
inputs $\vu^{(i)}(0), \vu^{(i)}(\tau), \cdots, $ $\vu^{(i)}(\T-\tau)$ with $\vx^{(i)}$ as the initial
state, in order to drive the system into the equilibrium starting from
$\vx^{(i)}$. The MPC then retains the first control input
$\vu^{(i)}:\ \vu^{(i)}(0)$, and discards the remaining (as is standard in
MPC). This yields the so-called observation pairs $(\vx^{(i)}, \vu^{(i)})$
that are used by the \textsc{learner}.
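The overall loop can be summarized by the following Python skeleton, where \texttt{demonstrate}, \texttt{learner}, and \texttt{verifier} are placeholders for the offline MPC, the LP-based learner, and the SDP-based verifier described in the sequel:
\begin{verbatim}
# Schematic of the learning loop in the figure above.
def learning_loop(demonstrate, learner, verifier,
                  x_witness, max_iter=200):
    observations = [(x, demonstrate(x)) for x in x_witness]
    for _ in range(max_iter):
        V = learner(observations)   # candidate compatible
        if V is None:               # with all pairs
            return None             # no candidate remains
        cex = verifier(V)           # None if V is a CLF
        if cex is None:
            return V
        observations.append((cex, demonstrate(cex)))
    return None
\end{verbatim}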
The
\textsc{learner} attempts to find a candidate function
$V(\vx)$ that is positive definite and which decreases at each witness
state $\vx^{(i)}$ through the control input $\vu^{(i)}$. This function $V$
is potentially a CLF for the system.
This function is fed to the \textsc{verifier}, which checks whether
$V(\vx)$ is indeed a CLF, or discovers a state $\vx^{(j+1)}$ which
refutes $V$. This new state is added to the witness set and the
process is iterated. The procedure described in this paper synthesizes
the control Lyapunov function $V(\vx)$ below:
\begin{align*}
V =& 1.22 x_2^2 + 0.31 x_2x_3 + 0.44 x_3^2 - 0.28 x_4x_2\\
& + 0.80 x_4x_3 + 1.69 x_4^2 + 0.07 x_1x_2 - 0.66 x_1x_3\\
& - 1.85 x_4x_1 + 1.6 x_1^2\,.
\end{align*}
Next, this function is used to design an associated control law that
guarantees the stabilization of the model described in Eq.~\eqref{eq:tora-dyn}.
Figure~\ref{Fig:tora-example}(b)
shows a closed loop trajectory for this control law vs control law
extracted by the MPC. At each step, given a
current state $\vx$, we compute an input $\vu \in [-1.5, 1.5]$ such
that:
\begin{equation}\label{eq:clf-decrease}
(\nabla V) \cdot f(\vx, \vu) < 0 \,.
\end{equation}
First, the definition of a CLF guarantees that for any
state $\vx \in S$, a control input $\vu \in [-1.5,1.5]$ satisfying
Eq.~\eqref{eq:clf-decrease} exists.
Such a $\vu$ may be chosen directly by means of a formula involving
$\vx$~\cite{LIN1991UNIVERSAL,suarez2001global} unlike the MPC which
solves a nonlinear problem in Eq.~\eqref{eq:mpc-formulation}. Furthermore, the
resulting control law guarantees the stability of the closed loop.
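Since the dynamics are affine in the input, $\nabla V \cdot f(\vx, \vu)$ is affine in $\vu$, and its minimum over an interval is attained at an endpoint. The following Python sketch exploits this observation (a simple alternative to Sontag's closed-form formula) together with the CLF synthesized above; the value of $\epsilon$ is again an assumed constant:
\begin{verbatim}
# Endpoint rule for the TORA CLF: grad(V).f(x,u) = a(x)+b(x)u.
import numpy as np

eps = 0.1  # assumed value

def gradV(x):
    x1, x2, x3, x4 = x
    return np.array([
        3.20*x1 + 0.07*x2 - 0.66*x3 - 1.85*x4,
        0.07*x1 + 2.44*x2 + 0.31*x3 - 0.28*x4,
        -0.66*x1 + 0.31*x2 + 0.88*x3 + 0.80*x4,
        -1.85*x1 - 0.28*x2 + 0.80*x3 + 3.38*x4])

def clf_control(x):
    g = gradV(x)
    drift = np.array([x[1], -x[0] + eps*np.sin(x[2]),
                      x[3], 0.0])
    a, b = g @ drift, g[3]      # u enters only via x4-dot
    u = -1.5 if b > 0 else 1.5  # endpoint minimizing a + b*u
    return u  # the CLF property makes a + b*u < 0 for x in S

print(clf_control(np.array([0.5, 0.0, 0.5, 0.0])))
\end{verbatim}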
\subsection{Termination}
Recall that in the framework, the learner provides a candidate and the
verifier refutes the candidate by a counterexample and a new
observation is generated by the demonstrator. The following
lemma relates the sample $\vc_j \in C_{j-1}$ at the $j^{th}$ iteration
and the set $C_{j}$ in the subsequent iteration.
\begin{lemma}\label{lemma:cj-half-space}
There exists a half-space $H^*_{j}:\ \va^t \vc \geq b$ such that (a) $\vc_j$ lies on the boundary of the half-space $H^*_{j}$, and (b)
$C_{j} \subseteq C_{j-1} \cap H^*_{j}$.
\end{lemma}
\begin{proof}
Recall that we have $\vc_j \in C_{j-1}$ but $\vc_j \not\in C_{j}$ by Theorem~\ref{thm:formal-learning-thm}.
Let $\hat{H}_j: \va^t \vc = \hat{b}$ be a separating hyperplane between the (convex) set $C_{j}$ and the point $\vc_j$, such that $C_{j} \subseteq \{ \vc\ |\ \va^t \vc \geq \hat{b}\}$. By setting the offset $b:\ \va^t \vc_j$,
we note that $b \leq \hat{b}$. Therefore, by defining $H^*_{j}$ as $\va^t \vc \geq b$, we obtain
the required half-space that satisfies conditions (a) and (b).
\end{proof}
While sampling a point from $C_{j-1}$ reduces to solving a linear programming problem, Lemma~\ref{lemma:cj-half-space} suggests that the choice
of $\vc_j$ governs the convergence of the algorithm. Figure~\ref{fig:learning-iteration}
demonstrates the importance of this choice by showing the candidate $\vc_j$, the hyperplanes $H_{j1}$
and $H_{j2}$, and the resulting set $C_{j}$.
For a faster termination, we wish to remove a ``large portion'' of
$C_{j-1}$ to obtain a ``smaller'' $C_{j}$. There are two important
factors which affect this: (i) counterexample $\vx_j$ selection and
(ii) candidate $\vc_j$ selection. The counterexample $\vx_j$ affects
$\vu_j: \D(\vx_j)$, $g(\vx_j)$, and $f(\vx_j, \vu_j)$, and
therefore defines the hyper-planes $H_{j1}$ and $H_{j2}$. On the other
hand, the choice of the candidate matters because $\vc_j \not\in C_{j}$ is guaranteed. We
postpone discussion on the counterexample selection to the next
section, and for the rest of this section we focus on different
techniques to generate a candidate $\vc_j \in C_{j-1}$.
The goal is to find a $\vc_j$ s.t.
\begin{equation} \label{eq:volume-reduction}
\Vol(C_{j}) \leq \alpha \Vol(C_{j-1}) \,,
\end{equation}
for each iteration $j$ and a fixed constant $0 \leq \alpha < 1$, independent of the hyperplanes $H_{j1}$ and $H_{j2}$. Here $\Vol(C_j)$ represents the
volume of the closure of the set $C_j$. Since the closure of $C_j$ is
contained in the compact set $C$, this volume is
always finite. Note that if we can guarantee Eq.~\eqref{eq:volume-reduction},
it immediately follows that $\Vol(C_j) \leq \alpha^j \Vol(C_0)$. This implies
that the volume of the remaining candidates ``vanishes'' rapidly.
\begin{remark} By referring to $\Vol(C_j)$, we are implicitly
assuming that $C_j$ is not embedded inside a subspace of
$\reals^r$, i.e., it is full-dimensional. However, this assumption
is not strictly true. Specifically, $C_0 : C \cap H_0$, where
$H_0$ is a hyper-plane. Thus, strictly speaking, the volume of
$C_0$ in $\reals^r$ is $0$. This issue is easily addressed by
first factoring out the linearity space of $C_0$, i.e., the affine
hull of $C_0$. This is performed by using the equality constraints
that describe the affine hull to eliminate variables from
$C_0$. Subsequently, $C_0$ can be treated as a full dimensional
polytope in $\reals^{r-d_j}$, wherein $d_j$ is the dimension of
its linearity space.
Furthermore, since $C_{j} \subseteq C_0$, we can continue
to express $C_{j} $ inside $\reals^{r-d_j}$ using the same basis
vectors as $C_0$. A further complication arises if $C_{j}$ is
embedded inside a smaller subspace. We do not treat this case in
our analysis. However, note that this can happen for at most $r$
iterations and thus, does not pose a problem for the termination
analysis.
\end{remark}
Intuitively, it is clear from Figure~\ref{fig:learning-iteration} that
a candidate at the \emph{center} of $C_{j-1}$ would be a good
one. We now relate the choice of $\vc_j$ to an appropriate
definition of center, so that Eq.~\eqref{eq:volume-reduction} is satisfied.
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}[scale=0.8]
\draw[draw=black, line width = 1.5pt, fill=green!20](0,0) -- (2,0.5) -- (2,2.5) -- (0,4) -- (-1,1) -- cycle;
\draw[fill=red!20] (0.5,1.45) circle (0.1);
\draw[draw=blue!30, line width=1.5pt, pattern = north west lines, pattern color=black] (0,0) -- (2,0.5) -- (2,1.1) -- (1.2,1.9) -- ( -0.35,0.35) -- cycle;
\draw[draw=black, line width=1.5pt, dashed](-0.7,0.0) -- (2.3, 3.0);
\draw[draw=black, line width=1.5pt, dashed](2.3,0.8) -- (-1, 4.1);
\node at (0.22,1.45) {$\vc_j$};
\node at (0.3,2.3) {$C_{j-1}$};
\node at (0.8,0.6) {$C_{j}$};
\node at (-1.,3.5){$H_{j1}$};
\node at (2.8,2.8){$H_{j2}$};
\end{tikzpicture}
\end{center}
\caption{Search space: the original candidate region $C_{j-1}$ (green) at the start of the
$j^{th}$ iteration, the candidate $\vc_j$, and the new region
$C_{j}$ (hatched region with blue lines).}\label{fig:learning-iteration}
\end{figure}
\paragraph{Center of Maximum Volume Ellipsoid}
Maximum volume ellipsoid (MVE) inscribed inside a polytope is unique with many useful
characteristics.
\begin{theorem}[Tarasov et al.~\cite{tarasov1988method}] \label{thm:mve}
Let $\vc_j$ be chosen as the center of the MVE inscribed in $C_{j-1}$. Then,
\[ \Vol\left(C_{j}\right) \leq \left(1-\frac{1}{r}\right) \Vol\left(C_{j-1}\right) \,.\]
\end{theorem}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.4\textwidth]{pics/ellipsoid}
\end{center}
\caption{Search Space: Original candidate region $C_{j-1}$ ($C_{j}$) is shown in blue (green) polygon.
The maximum volume ellipsoid $E_{j-1}$ ($E_{j}$) is inscribed in $C_{j-1}$ ($C_{j}$) and
its center is the candidate $\vc_j$ ($\vc_{j+1}$).}
\label{fig:ellipsoid}
\end{figure}
Recall here that $r$ is the number of basis functions, so that $C_{j-1} \subseteq \reals^r$. This leads us to a scheme that guarantees termination
of the overall procedure in finitely many steps under
some assumptions. The idea is simple: select the center of the MVE inscribed in $C_{j-1}$ at each iteration (Fig.~\ref{fig:ellipsoid}).
Let $C \subseteq (-\Delta, \Delta)^r$ for $\Delta >
0$. Furthermore, let us additionally terminate the procedure
as having failed whenever the $\Vol(C_j) < (2\delta)^r$ for
some arbitrarily small $\delta > 0$. This additional termination
condition is easily justified when one considers the precision
limits of floating point numbers and sets of small volumes. Clearly,
as the volume of the sets $C_j$ decreases exponentially, each point
inside the set will be quite close to one that is outside, requiring
high precision arithmetic to represent and sample from the sets
$C_j$.
\begin{theorem} \label{thm:termination}
If at each step $\vc_j$ is chosen as the center of the MVE in $C_{j-1}$, the learning loop terminates in at most
\[ \frac{r (\log(\Delta) - \log(\delta))}{- \log\left(1 - \frac{1}{r}\right) } = O(r^2) \ \mbox{iterations}\,.\]
\end{theorem}
\begin{proof}
Initially, $\Vol(C_0) < (2\Delta)^r$. Then by Theorem~\ref{thm:mve}
\begin{align*}
&\Vol(C_j) \leq (1 - \frac{1}{r})^j \ \Vol(C_0) < (1 - \frac{1}{r})^j (2\Delta)^r \\
\implies & \log\left(\frac{\Vol(C_j)}{(2\Delta)^r}\right) < j \ \log(1-\frac{1}{r})\,.
\end{align*}
After $k = \frac{r(\log(\Delta)-\log(\delta))}{-\log(1-\frac{1}{r})}$ iterations:
\begin{equation*}
\log\left(\frac{\Vol(C_k)}{(2\Delta)^r}\right) < \frac{r(\log(\Delta)-\log(\delta))}{-\log(1-\frac{1}{r})} \ \log(1-\frac{1}{r})\,,
\end{equation*}
and
\begin{align*}
\implies & \log\left(\frac{\Vol(C_k)}{(2\Delta)^r}\right) < r \log\left(\frac{\delta}{\Delta}\right) = r \log\left(\frac{2\delta}{2\Delta}\right) \\
\implies & \log(\Vol(C_k)) < \log((2\delta)^r)\,.
\end{align*}
It is concluded that $\Vol(C_k) < (2\delta)^r$, which is the termination condition. Asymptotically, $-\log(1 - \frac{1}{r})$ is $\Omega(\frac{1}{r})$ (shown using a Taylor expansion as $r \rightarrow \infty$), and therefore the maximum number of iterations is $O(r^2)$.
\end{proof}
However, checking the termination condition is computationally
expensive, as calculating the volume of a polytope is
$\sharp P$-hard, i.e., as hard as counting the number of
solutions to a SAT problem. One solution is to first calculate an
upper bound on the number of iterations using
Theorem~\ref{thm:termination}, and stop if the number of iterations
has exceeded the upper-bound.
A better approach is to consider some robustness for the candidate.
\begin{definition}[Robust Compatibility]
A candidate $\vc$ is $\delta$-robust for $\delta > 0$ w.r.t.
observations (demonstrator), iff for each $\hat{\vc} \in \B_{\delta}(\vc)$,
$V_{\hat{\vc}}:\hat{\vc}^t \cdot \vg(\vx)$ is
compatible with observations (demonstrator) as well.
\end{definition}
Let $E_j$ be the MVE inscribed inside $C_j$ (Fig.~\ref{fig:ellipsoid}).
Under the robustness assumption, it is sufficient to terminate
the procedure whenever:
\begin{equation}\label{eq:termin-cond-2}
\Vol(E_j) < \gamma \delta^r\,,
\end{equation}
where $\gamma$ is the volume of the $r$-dimensional unit ball.
\begin{theorem}[\cite{tarasov1988method,khachiyan1990inequality}] \label{thm:mve-2}
Let $\vc_j$ be chosen as the center of $E_{j-1}$. Then,
\[ \Vol(E_{j}) \leq \left(\frac{8}{9}\right) \Vol\left(E_{j-1}\right) \,.\]
\end{theorem}
\begin{theorem} \label{thm:termination2}
If at each step $\vc_j$ is chosen as the center of $E_{j-1}$, the termination condition defined by Eq.~\eqref{eq:termin-cond-2} is met in at most
\[ \frac{r (\log(\Delta) - \log(\delta))}{- \log\left(\frac{8}{9}\right) } = O(r) \ \mbox{iterations}\,.\]
\end{theorem}
\begin{proof}
Initially, $\B_{\Delta}(\vzero)$ is the MVE inside the box $[-\Delta, \Delta]^r$ and therefore, $\Vol(E_0) < \gamma \Delta^r$.
Then by Theorem~\ref{thm:mve-2}
\begin{align*}
&\Vol(E_j) \leq (\frac{8}{9})^j \ \Vol(E_0) < (\frac{8}{9})^j \gamma \Delta^r \\
\implies & \log(\Vol(E_j)) - \log(\gamma \Delta^r) < j \ \log(\frac{8}{9}) \,.
\end{align*}
After $k = \frac{r(\log(\Delta)-\log(\delta))}{-\log(\frac{8}{9})}$ iterations:
\begin{equation*}
\log(\Vol(E_k)) - \log(\gamma \Delta^r) < \frac{r(\log(\Delta)-\log(\delta))}{-\log(\frac{8}{9})} \ \log(\frac{8}{9}) \,,
\end{equation*}
and
\begin{align*}
\implies & \log(\Vol(E_k)) - \log(\gamma \Delta^r) < r(\log(\delta) - \log(\Delta)) \\
\implies & \log(\Vol(E_k)) - \log(\gamma \Delta^r) < \log(\gamma \delta^r) - \log(\gamma \Delta^r) \\
\implies & \log(\Vol(E_k)) < \log(\gamma \delta^r)\,.
\end{align*}
It is concluded that $\Vol(E_k) < \gamma \delta^r$, which is the termination condition. Asymptotically, the maximum number of iterations is $O(r)$.
\end{proof}
The volume of an ellipsoid is effectively computable, and thus such a termination
condition can be checked easily. Also, the bound on the number of iterations is linear in $r$,
as opposed to quadratic in $r$ when robustness is not guaranteed.
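As a quick sanity check on the magnitudes involved, the following Python snippet evaluates both bounds for the values $\Delta = 100$ and $\delta = 10^{-3}$ used later in the experiments; the listed values of $r$ are illustrative:
\begin{verbatim}
# Worked instances of the two iteration bounds.
import math

Delta, delta = 100.0, 1e-3
gap = math.log(Delta) - math.log(delta)

def bound_mve(r):     # O(r^2) bound (no robustness)
    return r * gap / -math.log(1 - 1 / r)

def bound_robust(r):  # O(r) bound (delta-robustness)
    return r * gap / -math.log(8 / 9)

for r in (10, 15, 36):
    print(r, round(bound_mve(r)), round(bound_robust(r)))
\end{verbatim}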
\begin{theorem}\label{thm:clf-or-no-robust-solution}
The learning framework either finds a control Lyapunov function or proves that no linear combination of
the basis functions would yield a function with robust compatibility
with the demonstrator.
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:formal-learning-thm}, if the verifier certifies correctness of
a solution $V$, then $V$ is a CLF. Assume that the framework terminates after
$k$ iterations and no solution is found. Then, by the termination condition in Eq.~\eqref{eq:termin-cond-2},
$\Vol(E_k) < \gamma \delta^r$. This means that a ball with radius $\delta$
would not fit in $C_k$ as $E_k$ is the MVE inscribed inside $C_k$.
In other words
\[
(\forall \vc \in C_k) \ (\exists \hat{\vc} \in \B_{\delta}(\vc)) \ \hat{\vc} \not\in C_k\,.
\]
On the other hand, for all $\vc \not\in C_k$, $V_\vc$ is not compatible
with the observations. Therefore, even if there is a CLF $V_\vc$
s.t. $\vc \in C_k$, the CLF is not robust in its compatibility with the
demonstrator.
\end{proof}
The MVE itself can be computed by solving a convex optimization
problem~\cite{tarasov1988method,vandenberghe1998determinant}.
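In the standard formulation, the MVE $\{B\vy + \vd : \|\vy\|_2 \leq 1\}$ inscribed in $\{\vc : A\vc \leq \vb\}$ maximizes $\log\det B$ subject to $\|B\va_i\|_2 + \va_i^t \vd \leq b_i$ for every row $\va_i$ of $A$. A minimal cvxpy sketch on a toy polytope (not one arising from our framework) follows:
\begin{verbatim}
# MVE center of {c : A c <= b} via a log-det program.
import cvxpy as cp
import numpy as np

A = np.array([[1., 0.], [-1., 0.], [0., 1.],
              [0., -1.], [1., 1.]])
b = np.array([1., 1., 1., 1., 1.5])

r = A.shape[1]
B = cp.Variable((r, r), PSD=True)  # ellipsoid shape
d = cp.Variable(r)                 # center = next candidate
cons = [cp.norm(B @ A[i], 2) + A[i] @ d <= b[i]
        for i in range(A.shape[0])]
cp.Problem(cp.Maximize(cp.log_det(B)), cons).solve()
print("MVE center:", d.value)
\end{verbatim}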
\paragraph{Other Definitions for Center of Polytope:}
Besides the center of the MVE inscribed inside a polytope,
there are other notions for defining the center of a polytope. These include
the center of gravity and the Chebyshev center.
The center of gravity provides the following inequality~\cite{bland1981ellipsoid}
\[ \Vol\left(C_{j}\right) \leq \left(1-\frac{1}{e}\right) \Vol\left(C_{j-1}\right)
< 0.64 \ \Vol(C_{j-1}) \,,\]
meaning that the volume of the candidate set is reduced by at least 36\% at each iteration.
Unfortunately, calculating the center of gravity is computationally very expensive.
The Chebyshev center~\cite{elzinga1975central} of a polytope is the center of the largest Euclidean ball
that lies inside the polytope. Finding a Chebyshev center of a polytope
is equivalent to solving a linear program, and while it yields a good heuristic, it
does not provide an inequality in the form of Eq.~\eqref{eq:volume-reduction}.
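For comparison, the Chebyshev center is obtained from the linear program that maximizes the radius $\rho$ subject to $\va_i^t \vc + \rho \|\va_i\|_2 \leq b_i$; a minimal sketch on the same style of toy polytope:
\begin{verbatim}
# Chebyshev center of {c : A c <= b} via linear programming.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1., 0.], [-1., 0.], [0., 1.],
              [0., -1.], [1., 1.]])
b = np.array([1., 1., 1., 1., 1.5])

norms = np.linalg.norm(A, axis=1)
A_lp = np.hstack([A, norms[:, None]])    # variables (c, rho)
obj = np.r_[np.zeros(A.shape[1]), -1.0]  # maximize rho
res = linprog(obj, A_ub=A_lp, b_ub=b,
              bounds=[(None, None)] * 2 + [(0, None)])
print("center:", res.x[:-1], "radius:", res.x[-1])
\end{verbatim}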
There are also notions for defining center for a set of constraints,
including analytic center, and volumetric center.
Assuming $C : \{ \vc \ | \bigwedge_i \va_i^t . \vc < b_i \}$, then
analytic center for $\bigwedge_i \va_i^t . \vc < b_i$ is defined as
\[
ac(\bigwedge_i \va_i^t . \vc < b_i) = \argmin_{\vc} - \sum_i \log (b_i - \va_i^t . \vc) \,.
\]
Notice that infinitely many inequalities can represent $C$, and any point inside $C$ can be
an analytic center depending on the chosen inequalities.
Atkinson et al.~\cite{atkinson1995cutting} and Vaidya~\cite{vaidya1996new} provide
candidate generation techniques based on these centers, along with appropriate termination conditions and
convergence analysis.
\section{Introduction} \label{sec:intro}
\input{intro}
\section{Background}\label{sec:background}
\input{bg}
\section{Formal Learning Framework}\label{sec:framework}
\input{framework}
\section{Learner}\label{sec:learner}
\input{learner}
\section{Verifier}\label{sec:verifier}
\input{verifier}
\section{Specifications}\label{sec:spec}
\input{spec}
\section{Experiments}\label{sec:expr}
\input{expr}
\section{Related Work}\label{sec:related}
\input{related}
\section{Discussion and Future Work}\label{sec:disscussion}
\input{discussion}
\section{Conclusion}
We have thus proposed an algorithmic learning framework for
synthesizing control Lyapunov-like functions for a variety of
properties, including stability and reach-while-stay.
The framework provides theoretical guarantees of soundness,
i.e., the synthesized controller is guaranteed to be correct by
construction against the given plant model. Furthermore, our approach
uses ideas from convex analysis to provide termination guarantees and
bounds on the number of iterations.
\begin{acknowledgements}
We are grateful to Mr. Sina Aghli, Mr. Souradeep Dutta,
Prof. Christoffer Heckman and Prof. Eduardo Sontag for helpful
discussions. This work was funded in part by NSF under award
numbers SHF 1527075 and CPS 1646556. All opinions expressed are
those of the authors and not necessarily of the NSF.
\end{acknowledgements}
\bibliographystyle{spmpsci}
\subsection{Performance}
As mentioned earlier, the inputs to the learning framework are the
plant, the monomial basis functions, and the demonstrator. The degree of
relaxation $D$ is also considered an input.
At each iteration, first an MVE inscribed inside a polytope is calculated. This
task is performed quite efficiently. The MPC scheme used inside the demonstrator is
an input and we do not consider its performance
here. Nevertheless, MPC is known to be very efficient if it is carefully tuned.
We mention that the MPC parameters used here
are selected by a non-expert, and usually the time step is very small and
the horizon is very long.
Nevertheless, as the MPC is used offline, it is still suitable for
our framework. Also, the cost matrices $Q$, $R$, and $H$ are diagonal:
\[
Q = \mathrm{diag}(Q') \ , \ R = \mathrm{diag}(R') \ , \ H = N \, \mathrm{diag}(Q') \,,
\]
where $Q' \in \reals^n$ and $R' \in \reals^m$. There are two other
important factors that determine the performance of the whole
learning framework: (i) the time taken by the verifier and
(ii) the number of iterations. Table~\ref{tab:result} shows the
results of the learning framework for the set of case studies
described thus far. For each problem instance, the parameters of
the MPC, as well as the degree of relaxation are provided.
Also, the performance of the learning framework is tabulated.
First, the procedure starts from $C : [-\Delta, \Delta]^r$ and terminates
whenever $\Vol(E_j) < \gamma \delta^r$. We set $\Delta = 100$ and
$\delta = 10^{-3}$. The results demonstrate that the method terminates in
a few iterations, even for the cases where a compatible CLF does not
exist.
Notice that the number of demonstrations is different from the number
of iterations. Recall that two separate problems are solved for the
verification. One involves checking the positivity of $V$, and the
other involves checking whether $V$ can be made to
decrease. When a counterexample $\vx_j$ is found for the
former problem, there is no need to check the latter
condition. Furthermore, we do not require a demonstration for such
a scenario. This optimization is added to speed up our overall
procedure by avoiding expensive calls to the MPC. To accommodate
this, our approach calculates $\hat{C}_{j+1}$ (instead of $C_{j+1}$) for such
counterexamples as:
\begin{equation}\label{eq:approx-C_j+1}
\hat{C}_{j+1}:\ \hat{C}_j \cap \left\{ \vc \ |\ V_{\vc}(\vx_{j}) > 0 \right\} \,.
\end{equation}
Otherwise, if
the counterexample violates conditions on $\nabla V$, then
\begin{equation}\label{eq:approx-C_j+1-simple}
\hat{C}_{j+1}:\ \hat{C}_j \cap \left\{ \vc \ |\ \begin{array}{c} V_{\vc}(\vx_{j}) > 0 \\
\nabla V_\vc.f(\vx_j, \vu_j) < 0 \end{array}
\right\} \,.
\end{equation}
However, $\vc_j \not\in \hat{C}_{j+1}$ for both cases and the
convergence guarantees continue to hold.
As Table~\ref{tab:result} shows, using this trick, the number
of demonstrations can be much smaller than the total number of iterations.
At each iteration, several verification problems are solved which
involve solving large SDP problems. While the complexity of solving
SDP is polynomial in the number of variables, they are still hard to
solve. The verification problem is quite expensive when the number of
variables and degree of relaxation are large. Nevertheless, as the SDP
solvers mature further, we believe our method can solve larger problems, since
the verification procedure is currently the computational bottleneck
for the learning framework. We note that using a larger degree of
relaxation does not necessarily lead to a longer learning process
(e.g., the hover flight example). For instance, for the inverted pendulum
example, using degree of relaxation five, the procedure finds a CLF
faster when compared to the case wherein the degree of relaxation is
set to four.
\begin{table*}[t]
\caption{\small Results on the benchmark. $\tau$: MPC time step, $N$: number of horizon steps, $Q'$: defines MPC state cost, $R'$: defines MPC input cost, $D$: SDP relaxation degree bound, \#Dem : number of demonstrations, \#Itr: number of iterations, V. Time: total computation time for verification (minutes), Time: total computation time (minutes)}\label{tab:result}
\begin{center}
\begin{tabular}{ ||l||c|c|c|c||c||c|c|c|c|c|| }
\hline
\multicolumn{1}{||c||}{Problem} & \multicolumn{4}{c||}{Demonstrator} & Verifier &
\multicolumn{5}{c||}{Performance} \\
\hline
System Name & $\tau$ & $N$ & $Q'$ & $R'$ & $D$ & \#Dem & \# Itr & V. Time & Time & Status \\
\hline
\multirow{2}{*}{Unicycle-Segment 2} & \multirow{2}{*}{0.1} & \multirow{2}{*}{10} & \multirow{2}{*}{[1 1 1]} & \multirow{2}{*}{[1 1]} & 3 & 2 & 74 & 3 & 3 & Fail \\
& & & & & 4 & 2 & 57 & 4 & 4 & Succ\\
\hline
\multirow{2}{*}{Unicycle-Segment 1} & \multirow{2}{*}{0.1} & \multirow{2}{*}{20} & \multirow{2}{*}{[1 1 1]} & \multirow{2}{*}{[1 1]} & 3 & 27 & 86 & 9 & 10 & Fail \\
& & & & & 4 & 23 & 71 & 11 & 12 & Succ \\
\hline
\multirow{2}{*}{TORA} & \multirow{2}{*}{1} & \multirow{2}{*}{30} & \multirow{2}{*}{[1 1 1 1]} & \multirow{2}{*}{[1]} & 3 & 52 & 118 & 7 & 14 & Fail \\
& & & & & 4 & 19 & 76 & 5 & 8 & Succ \\
\hline
\multirow{3}{*}{Inverted Pendulum} & \multirow{3}{*}{0.04} & \multirow{3}{*}{50} & \multirow{3}{*}{[10 1 1 1]} & \multirow{3}{*}{[10]} & 3 & 56 & 85 & 7 & 27 & Fail \\
& & & & & 4 & 53 & 69 & 9 & 25 & Succ \\
& & & & & 5 & 34 & 50 & 7 & 19 & Succ \\
\hline
\multirow{2}{*}{Bicycle} & \multirow{2}{*}{0.4} & \multirow{2}{*}{20} & \multirow{2}{*}{[1 1 1 1]} & \multirow{2}{*}{[1 1]} & 2 & 14 & 32 & 2 & 2 & Fail \\
& & & & & 3 & 7 & 25 & 1 & 1 & Succ \\
\hline
\multirow{2}{*}{Bicycle $\times$ 2} & \multirow{2}{*}{0.4} & \multirow{2}{*}{20} & \multirow{2}{*}{[1 1 1 1 1 1 1 1]} & \multirow{2}{*}{[1 1 1 1]} & 2 & 119 & 225 & 77 & 90 & Fail \\
& & & & & 3 & 30 & 81 & 43 & 46 & Succ\\
\hline
\multirow{2}{*}{Forward Flight} & \multirow{2}{*}{0.4} & \multirow{2}{*}{40} & \multirow{2}{*}{[1 1 1 1]} & \multirow{2}{*}{[1 1]} & 4 & 14 & 77 & 16 & 18 & Fail \\
& & & & & 5 & 4 & 64 & 10 & 10 & Succ\\
\hline
\multirow{3}{*}{Hover Flight} & \multirow{3}{*}{0.4} & \multirow{3}{*}{40} & \multirow{3}{*}{[1 1 1 1 1 1]} & \multirow{3}{*}{[1 1]} & 2 & 57 & 147 & 12 & 40 & Fail \\
& & & & & 3 & 57 & 124 & 21 & 47 & Succ \\
& & & & & 4 & 51 & 116 & 30 & 54 & Succ \\
\hline
\end{tabular}
\end{center}
\end{table*}
In previous sections, we discussed that two important factors govern the convergence of
the search process: (i) candidate selection, and (ii) counterexample selection. In order
to study the effect of these choices, we investigate different techniques and evaluate
their performance. For candidate selection, we consider three different methods.
In the first method, a Chebyshev center of $C_j$ is used as a candidate. In the second
method, the analytic center of the constraints defining $C_j$ is the selected candidate, and
redundant constraints are not dropped. Finally, in the last method, the center of the
MVE inscribed in $C_j$ yields the candidate. Also, for each of these methods, we compare
the performance for two different cases: (i) a random counterexample is generated,
(ii) the generated counterexample maximizes constraint violations (see Sec.~\ref{sec:counterexample-selection}). Table~\ref{tab:selection} shows the performance
for each of these cases, applied to the same set of problems.
The results demonstrate that selecting good counterexamples would increase the convergence
rate (fewer iterations). Nevertheless, the time it takes to generate these
counterexamples increases, and therefore, the overall performance degrades. In conclusion, while
generating good counterexamples provides better reduction in the space of candidates, it is computationally expensive, and thus,
it seems to be beneficial to just rely on candidate selection for fast termination.
Table~\ref{tab:selection} also suggests that the Chebyshev center has the worst performance.
Also, the MVE-based method performs better (fewer iterations) compared to the method
which is based on the analytic center.
\begin{table*}[t]
\caption{\small Results on different variations. I: number of iterations, VT: computation time for verification (minutes), T: total computation time (minutes), Simple CE: any counterexample, Max CE: counterexample with maximum violation}\label{tab:selection}
\begin{center}{\scriptsize
\begin{tabular}{ ||l||rrr|rrr||rrr|rrr||rrr|rrr||}
\hline
\multirow{3}{*}{Problem} & \multicolumn{6}{c||}{Chebyshev Center} & \multicolumn{6}{c||}{Analytic Center} & \multicolumn{6}{c||}{MVE Center} \\
\cline{2-19}
& \multicolumn{3}{c|}{Simple CE} & \multicolumn{3}{c||}{Max CE} & \multicolumn{3}{c|}{Simple CE} & \multicolumn{3}{c||}{Max CE} & \multicolumn{3}{c|}{Simple CE} & \multicolumn{3}{c||}{Max CE} \\
\cline{2-19}
& I & VT & T & I & VT & T & I & VT & T & I & VT & T & I & VT & T & I & VT & T \\
\hline
Unicycle - Seg. 2 & 83 & 4 & 4 & 22 & 9 & 9 & 76 & 5 & 6 & 23 & 9 & 10 & 57 & 4 & 4 & 15 & 6 & 6 \\
Unicycle - Seg. 1 & 81 & 6 & 7 & 34 & 17 & 17 & 85 & 10 & 10 & 35 & 15 & 16 & 71 & 11 & 12 & 36 & 18 & 18 \\
TORA & 185 & 7 & 10 & 52 & 12 & 15 & 95 & 5 & 9 & 36 & 9 & 11 & 76 & 5 & 8 & 36 & 12 & 14 \\
Inverted Pend. & 163 & 10 & 23 & 85 & 22 & 30 & 57 & 8 & 20 & 51 & 22 & 32 & 50 & 7 & 19 & 35 & 18 & 25 \\
Bicycle & 99 & 3 & 3 & 40 & 5 & 5 & 31 & 2 & 2 & 20 & 3 & 3 & 25 & 1 & 2 & 15 & 3 & 3 \\
Bicycle $\times$ 2 & 759 & 121 & 127 & 438 & 244 & 246 & 96 & 47 & 50 & 77 & 141 & 143 & 81 & 43 & 46 & 66 & 132 & 133 \\
Forward Flight & 676 & 20 & 21 & 34 & 30 & 31 & 113 & 15 & 16 & 21 & 18 & 19 & 64 & 10 & 10 & 16 & 16 & 16 \\
Hover Flight & 499 & 65 & 90 & 196 & 113 & 127 & 146 & 36 & 67 & 90 & 92 & 109 & 116 & 30 & 54 & 75 & 69 & 82 \\
\hline
\end{tabular}
} \end{center}
\end{table*}
\subsection{Local Lyapunov Function}
Many nonlinear systems are only locally stabilizable, especially
in the presence of input saturation. Therefore, we wish to study
stabilization inside a compact set $S$. Let $int(R)$ denote the interior
of a set $R$. We consider
a compact and connected set $S \subset X$ where the origin
$\vzero \in int(S)$ is the state we seek to stabilize to. Furthermore, we
restrict the set $S$ to be a basic semi-algebraic set defined by a
conjunction of polynomial inequalities:
\[ S: \{ \vx \in \reals^n\ |\ p_{S,1}(\vx) \leq 0, \ldots, p_{S,k}(\vx) \leq 0 \} \,.\]
The stabilization problem can be reduced to the problem of
finding a local CLF $V$ which respects the following constraints:
\begin{equation}\label{eq:local-clf-def}
\begin{array}{rl}
& V(\vzero) = 0 \\
(\forall \vx \in S \setminus \{\vzero\}) \ & V(\vx) > 0 \\
(\forall \vx \in S \setminus \{\vzero\}) \ (\exists \vu \in U)\ & \nabla V \cdot f(\vx, \vu) < 0 \,. \\
\end{array}
\end{equation}
Given a function $V$ and a comparison predicate $\Join \in \{ =, \leq, <, \geq, > \} $, we define $V^{\Join \beta}$ as the set:
\[ V^{\Join \beta} = \{\vx | V(\vx) \Join \beta \} \,. \]
Let $\beta^*$ be the maximum $\beta$ s.t. $V^{\leq \beta} \subseteq S$.
Having such a CLF $V$ guarantees that there is a strategy to keep the state
inside $V^{< \beta^*}$, and stabilize it to the origin (Fig.~\ref{fig:clf}).
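Numerically, $\beta^*$ can be estimated by minimizing $V$ over the boundary of $S$; for a box-shaped $S$ this amounts to scanning the faces. The following Python sketch does this on a coarse grid with a placeholder quadratic $V$ (illustrative only, not a verified computation):
\begin{verbatim}
# Estimate beta* = min of V over the boundary of a box S.
import itertools
import numpy as np

lo = np.array([-1., -1., -2., -1.])  # the TORA safe set S
hi = np.array([ 1.,  1.,  2.,  1.])

def V(x):           # placeholder quadratic candidate
    return x @ x

def beta_star(V, lo, hi, k=9):
    grids = [np.linspace(l, h, k) for l, h in zip(lo, hi)]
    best = np.inf
    for i in range(len(lo)):         # fix coordinate i
        for face in (lo[i], hi[i]):  # at each of its faces
            axes = (grids[:i] + [np.array([face])]
                    + grids[i+1:])
            for pt in itertools.product(*axes):
                best = min(best, V(np.array(pt)))
    return best

print("beta* ~", beta_star(V, lo, hi))
\end{verbatim}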
\begin{theorem}
Given a control affine system $\Psi$, where $U : \reals^m$
and a polynomial control Lyapunov function $V$ satisfying Eq.~\eqref{eq:local-clf-def}, there is a feedback function $\K$ for which if $\vx_0 \in V^{< \beta^*}$, then:
\begin{enumerate}
\item $(\forall t \geq 0) \ \vx(t) \in S$
\item $(\forall \epsilon > 0) \ (\exists T \geq 0) \ \norm{\vx(T) - \vzero} < \epsilon$\,.
\end{enumerate}
\end{theorem}
\begin{proof}
First, using Sontag's result, there exists a feedback function $\K^*$ s.t. while $\vx \in S$,
$\frac{dV}{dt} = \nabla V \cdot f(\vx, \vu) < 0$~\cite{sontag1989universal}. Assuming $\vx(0) = \vx_0 \in V^{<\beta^*} \subset S$, initially $V(\vx(0)) < \beta^*$. Now, assume the state reaches $\partial S$ at time $t_2$. By continuity, there is a time $t_1 \leq t_2$
s.t. $\vx(t_1) \in \partial (V^{<\beta^*})$ and $(\forall t \in [0, t_1]) \ \vx(t) \in S$. Thus, $V(\vx(t_1)) = \beta^*$ and
\[
V(\vx(t_1)) = \left(V(\vx(0)) + \int_{0}^{t_1} \frac{dV}{dt} dt\right) < V(\vx(0)) \,.
\]
This means $V(\vx(t_1)) < \beta^*$, which is a contradiction. Therefore, the state never reaches $\partial S$ and remains in $int(S)$ forever.
$V$ is a Lyapunov function for
the closed loop system when the control unit is replaced with the feedback function $\K^*$, and by standard results in Lyapunov theory,
$(\forall \epsilon > 0) \ (\exists T \geq 0) \ ||\vx(T) - \vzero|| < \epsilon$.
\end{proof}
Finding a local CLF is similar to finding a global one; one only needs to
consider the set $S$ in the formulation. The observation set would consist of
$(\vx_i, \vu_i)_{i=1}^j$, where $\vx_i$ is inside $S$, and the verifier would
check the following conditions:
\begin{align*}
(\exists \vx \neq \vzero)& \bigwedge_{i=1}^k p_{S,i}(\vx) \leq 0 \land V(\vx) \geq 0 \\
(\exists \vx \neq \vzero)& \bigwedge_{i=1}^k p_{S,i}(\vx) \leq 0 \land
(\forall \vu \in U) \ \nabla V \cdot f(\vx, \vu) \geq 0 \,,
\end{align*}
which is as hard as the one solved in Section~\ref{sec:verifier}.
\begin{lemma} \label{lem:completeness}
Assuming that (i) the demonstrator function $\D$ is smooth, and (ii) the closed loop system with feedback law $\D$ is exponentially stable over a bounded region $S$, there exists a local polynomial CLF compatible with $\D$.
\end{lemma}
\begin{proof}
Under assumptions (i) and (ii), one can show that a polynomial local Lyapunov function $V$ (not a control Lyapunov function) exists for the closed loop system $\Psi(X, U, f , \D)$~\cite{peet2008polynomial}:
\[
V(\vzero)=0 \ \land
(\forall \vx \in S \setminus \vzero) \left( \begin{array}{c}
V(\vx) > 0 \\
\nabla V \cdot f(\vx, \D(\vx)) < 0
\end{array} \right) \,.
\]
This means that $V$ is compatible with the demonstrator. $V$ is also a local CLF as it satisfies Eq.~\eqref{eq:local-clf-def}.
\end{proof}
As mentioned, the learning framework fails when the basis functions are not expressive enough to capture a CLF compatible with the demonstrator, and one needs to update the demonstrator and/or the set of basis functions. However, if one believes that the demonstrator satisfies the conditions in Lemma~\ref{lem:completeness}, then success of the learning procedure is guaranteed, provided the set of basis functions is rich enough.
\subsection{Barrier Certificate}
Barrier certificates are used to guarantee safety properties for the
system. More specifically, given compact and connected semi-algebraic
sets $S$ (safe) and $I$ (initial) s.t. $I \subset int(S)$, the
overall goal is to ensure that whenever $\vx(0) \in I$, we have
$\vx(t) \in S$ for all time $t \geq 0$. The sets $S,I$ are expressed
as semi-algebraic sets of the following form:
\begin{align*}
S: \{ \vx \in \reals^n\ |\ p_{S,1}(\vx) \leq 0, \ldots, p_{S,k}(\vx) \leq 0 \}\\
I: \{ \vx \in \reals^n\ |\ p_{I,1}(\vx) \leq 0, \ldots, p_{I,l}(\vx) \leq 0 \}\,.
\end{align*}
The safety problem can be reduced to the problem of
finding a (relaxed~\cite{prajna2004safety}) control barrier certificate $B$ which respects
the following constraints~\cite{WIELAND2007462}:
\begin{equation}\label{eq:barrier-cert-def}
\begin{array}{rl}
(\forall \vx \in I) \ & B(\vx) < 0 \\
(\forall \vx \not\in int(S)) \ & B(\vx) > 0 \\
(\forall \vx \in S \setminus int(I)) \ (\exists \vu \in U)\ & \nabla B \cdot f(\vx, \vu) < 0 \,. \\
\end{array}
\end{equation}
To find such a barrier certificate, one needs to define $B$ as a linear
combination of basis functions and use the framework to find a correct $B$.
The verifier would check the following conditions that negate each of the
conditions in Eq.~\eqref{eq:barrier-cert-def}.
First we check if there is a $\vx \in I$ such that $B(\vx) \geq 0$.
\[
(\exists \vx)\ \ \bigwedge_{j=1}^l p_{I,j}(\vx) \leq 0 \ \land\ B(\vx) \geq 0\,.
\]
Next, we check if there exists a $\vx \not \in int(S)$ such that $B(\vx) \leq 0$. Clearly, if $\vx \not\in int(S)$, we have $p_{S,i}(\vx) \geq 0$ for at least one $i \in \{1,\ldots,k\}$. This yields $k$ conditions of the form:
\[
(\exists \vx) \ p_{S,i}(\vx) \geq 0 \land B(\vx) \leq 0,\ i \in \{ 1, \ldots, k\}\,.
\]
Finally, we ask if $\exists \vx \in S \setminus int(I)$ that violates the decrease condition. Doing so, we obtain $l$ conditions. For each $i \in \{ 1, \ldots, l \}$, we solve
\begin{align*}
(\exists \vx) & \ \underset{\vx \not\in int(I)}{\underbrace{p_{I,i}(\vx) \geq 0}} \land\ \underset{\vx \in S}{\underbrace{\bigwedge_{j=1}^k p_{S,j}(\vx) \leq 0}} \\
&\ \land (\forall \vu \in U) \ \nabla B \cdot f(\vx, \vu) \geq 0 \,.
\end{align*}
Overall, we have $1 + k + l$ different checks. If any of these checks yields
a witness $\vx$, it serves as a counterexample to the conditions for a barrier function
in Eq.~\eqref{eq:barrier-cert-def}.
As before, we choose basis functions $g_1, \ldots, g_r$ for the barrier
candidate $B_\vc: \sum_{k=1}^r c_k g_k(\vx)$.
Given observations set $O_j: \{ (\vx_1, \vu_1), \ldots, (\vx_{j}, \vu_{j})\}$, the corresponding
candidate set $C_j$ of observation compatible barrier functions
is defined as the following:
\[
C_j: \left\{ \vc |\hspace{-0.1cm}
\begin{array}{l}
\bigwedge\limits_{(\vx_i, \vu_i) \in O_j}
\left(\begin{array}{rl}
\vx_i \in I \rightarrow & B_\vc(\vx_i) < 0 \ \land \\
\vx_i \not\in int(S) \rightarrow & B_\vc(\vx_i) > 0 \ \land \\
\vx_i \in S\setminus int(I)& \\ \rightarrow \nabla B_\vc & . f(\vx_i, \vu_i)<0 \end{array}\right)\end{array}\right\}.
\]
The LHS of the implication for each observation $(\vx_i, \vu_i)$ is evaluated
and the RHS constraint is added only when the LHS holds. Nevertheless, $\overline{C_j}$ remains a polytope, similar to Lemma~\ref{lemma:cj-convex}.
\begin{remark}
For the original control barrier certificates, it is sufficient to
check whether $B$ can be decreased on the boundary ($B^{=0}$). The
relaxed version of control barrier certificates is introduced by
Prajna et al.~\cite{prajna2004safety} using sum of squares (SOS)
relaxation. Here we use this relaxation to simplify the candidate
generation process. However, for the verification process this
relaxation is not needed, and one could verify the original conditions
as opposed to the relaxed ones without any complication. This
trick improves the precision of the method.
\end{remark}
\subsection{Reach-While-Stay}
In this problem, the goal is to reach a target set $T$ from an initial
set $I$, while staying in a safe set $S$, wherein
$I \subseteq S$. The set $S$ is assumed to be compact. By
combining the local Lyapunov function and a barrier certificate, one
can define a smooth, Lyapunov-like
function $V$, that satisfies the following conditions
(see~\cite{Ravanbakhsh-Others/2016/Robust}):
\begin{equation}\label{eq:lyapunov-like-def}
\begin{array}{lrl}
C1:&(\forall \vx \in I) & V(\vx) < 0 \\
C2:&(\forall \vx \not\in int(S)) & V(\vx) > 0 \\
C3:&(\forall \vx \in S \setminus int(T)) (\exists \vu \in U)& \nabla V \cdot f(\vx,\vu)\hspace{-0.05cm}<\hspace{-0.05cm}0. \\
\end{array}
\end{equation}
We briefly sketch the argument as to why such a Lyapunov-like
function satisfies the reach-while-stay property, referring the
reader to our earlier work on control certificates for a detailed
proof~\cite{Ravanbakhsh-Others/2016/Robust}. Suppose we have found
a function $V$ satisfying~\eqref{eq:lyapunov-like-def}. $V$ is
strictly negative over the initial set $I$ and strictly positive
outside the safe set $S$. Furthermore, as long as the flow remains
inside the set $S$ without reaching the interior of the target $T$,
there exists a control input at each state to strictly decrease the
value of $V$. Combining these observations, we conclude either (a)
the flow remains forever inside set $S \setminus int(T)$ or (b) must visit the
interior of set $T$ (before possibly leaving $S$). However, option (a) is ruled
out because $S \setminus int(T)$ is a compact set and $V$ is a continuous
function. Therefore, if the flow were to remain within $S \setminus int(T)$ forever
then $V(\vx(t)) \rightarrow -\infty$ as $t \rightarrow \infty$,
which directly contradicts the fact that $V$ must be lower bounded
on a compact set $S \setminus int(T)$. We therefore, conclude that the flow must stay
inside $S$ and eventually visit the interior of the target $T$.
The learning framework extends easily to search for a function $V$
that satisfies the constraints in Eq.~\eqref{eq:lyapunov-like-def}.
\subsection{Finite-time Reachability}
The idea of funnels has been developed to use the Lyapunov argument
for finite-time reachability~\cite{mason1985mechanics}.
Then, following Majumdar et al., a library of
control funnels can provide building blocks for motion
planning~\cite{majumdar2013robust}. Likewise, control funnels are used to
reduce reach-avoid problems to timed automata~\cite{bouyer2017timed}.
In this section, we consider Lyapunov-like functions for establishing
control funnels. Let $I$ be a set of initial states for the plant ($\vx(0) \in I$),
and $T$ be the target set that the system should reach at time $\T > 0$ ($\vx(\T) \in int(T)$).
Let $S$ be the safe set, such that $I, T \subseteq S$
and $\vx(t) \in S$ for time $t \in [0,\T]$. The goal is to find a controller
that guarantees that whenever $\vx(0) \in I$, we have $\vx(t) \in S$ for
all $t \in [0, \T]$ and $\vx(\T) \in int(T)$. To solve this, we search instead
for a control Lyapunov-like function $V(\vx,t)$ that is a function of
the state and time, with the following properties:
\begin{equation}\label{eq:c-funnel-def}
\begin{array}{lrl}
C1:& (\forall \vx \in I) & V(\vx, 0) < 0 \\
C2:& (\forall \vx \not\in int(T)) & V(\vx, \T)\ > \ 0 \\
C3:& \left(\forall \begin{array}{l}t \in [0, \T] \\
\vx \not\in int(S) \end{array}\right) & V(\vx, t) > 0 \\
C4:&\left(\begin{array}{l}\forall t \in [0, \T]\\
\forall \vx \in S\end{array}\right) (\exists \vu \in U)& \dot{V}(t, \vx, \vu) < 0 \,,\\
\end{array}
\end{equation}
where $\dot{V}(t, \vx, \vu) = \frac{\partial V}{\partial t} + \nabla V \cdot f(\vx, \vu)$.
First of all, when initialized to $\vx(0) \in I$, we have $V(\vx,0) < 0$ by condition
C1. Next, the controller's action through condition $C4$ guarantees that $\frac{dV}{dt} < 0$ over the
trajectory for $t \in [0, \T]$, as long as $\vx \in S$. Through $C3$, we can guarantee that $\vx(t) \in S$ for
$t \in [0,\T]$. Finally, it follows that $V(\vx(\T),\T) < 0$. Through $C2$, we conclude that
$\vx(\T) \in int(T)$.
As depicted in Fig.~\ref{fig:funnel}, the set $V^{=0}$ forms a barrier, and set $V^{<0}$
forms the required funnel, while $t \leq \T$.
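The decrease condition C4 is mechanical to set up symbolically. The following sympy sketch computes $\dot{V}(t, \vx, \vu)$ for a time-varying quadratic $V$ around a reference trajectory, using the transformed unicycle dynamics; the matrix $P(t)$ and the reference $\vx^*(t)$ below are placeholders, not the funnels computed by our method:
\begin{verbatim}
# Symbolic Vdot = dV/dt + grad(V) . f(x, u) for a funnel.
import sympy as sp

t, u1, u2 = sp.symbols('t u1 u2')
x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([u1, u2, x1*u2 - x2*u1])  # unicycle dynamics

P = sp.Matrix([[1 + t/10, 0, 0],        # placeholder P(t)
               [0, 1, 0],
               [0, 0, 2]])
xref = sp.Matrix([0, t, 0])             # placeholder x*(t)
e = x - xref
V = (e.T * P * e)[0, 0]

Vdot = sp.diff(V, t) + (sp.Matrix([V]).jacobian(x) * f)[0, 0]
print(sp.simplify(Vdot))
\end{verbatim}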
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{pics/funnel}
\end{center}
\caption{A schematic view of a control funnel. Blue lines show the boundary of
the funnel $V(\vx, t) = 0$. Also, initially $V(\vx_1, 0) < 0$ and at the end of
horizon, $V(\vx_2, \T) > 0$.}\label{fig:funnel}
\end{figure}
\begin{theorem}
Given compact semi-algebraic sets $I$, $S$, $T$, a time horizon $\T$, and a smooth function $V$ satisfying Eq.~\eqref{eq:c-funnel-def}, there exists a control
strategy s.t. for all traces of the closed loop system, if $\vx(0) \in I$, then
\begin{enumerate}
\item $(\forall t \in [0, \T]) \ \vx(t) \in S$
\item $\vx(\T) \in int(T)$.
\end{enumerate}
\end{theorem}
\begin{proof}
Using Sontag's result~\cite{sontag1989universal,WIELAND2007462}, there is a feedback $\K$ which decreases
the value of $V$ while $t \in [0, \T]$ and $\vx \in S$:
\[
(\forall t \in [0, \T], \vx \in S) \ \dot{V}(t, \vx, \K(\vx)) < 0\,.
\]
Now, assume $\vx(0) \in I$.
By the first condition of Eq.~\eqref{eq:c-funnel-def}, $V(\vx(0), 0) < 0$.
Assume there is a time $t \in [0, \T]$ s.t. $\vx(t) \not\in S$. By compactness of $S$
and smoothness of the dynamics, there is a time $t_2$ s.t. $\vx(t_2) \in \partial S$
and for all $t < t_2$, $\vx(t) \in int(S)$. According to the third condition of
Eq.~\eqref{eq:c-funnel-def}, $V(\vx(t_2), t_2) > 0$. Since $V$ is a smooth function,
there is a time $t_1$ ($0 < t_1 < t_2$) s.t. $V(\vx(t_1), t_1) = 0$ and for all
$t < t_1$, $V(\vx(t), t) < 0$. By the fourth condition in Eq.~\eqref{eq:c-funnel-def}:
\begin{align*}
V(\vx(t_1), t_1) &= V(\vx(0), 0) + \int_0^{t_1} \dot{V}(t, \vx(t), \K(\vx(t))) \, dt \\
&< V(\vx(0),0) < 0 \, .
\end{align*}
This is a contradiction and therefore, for all $t \in [0, \T]$, $\vx(t) \in S$.
And similar to the argument above, it is guaranteed that for all $t \in [0, \T]$,
$V(\vx(t),t) < 0$. By the second condition of Eq.~\eqref{eq:c-funnel-def}, it
is guaranteed that if $\vx(\T) \not\in int(T)$, then $V(\vx(\T), \T) > 0$.
Therefore, $\vx(\T) \in int(T)$.
\end{proof}
Using the Lyapunov-like conditions~\eqref{eq:c-funnel-def}, the
problem of finding such control funnels belongs to the class of
problems that can be solved with our method.
\subsection{SDP Relaxation}
Let $\vw:\ [\vx, \vlam]$ collect the state variables $\vx$ and the
dual variables $\vlam$ involved in the conditions stated
in~\eqref{eq:decr-condition}. The core idea behind the SDP relaxation
is to consider a vector collecting all monomials of degree up to $D$:
\[ \vm:\ \left(\begin{array}{c}
1 \\ w_1 \\ w_2 \\ \vdots \\ \vw^{D} \end{array}\right) \,,\]
wherein $D$ is chosen to be at least half
of the maximum degree in $\vx$ among all monomials in $g_j(\vx)$ and
$\nabla g_j \cdot f_i(\vx)$:
\[
D \geq \frac{1}{2} \max\left( \bigcup_{j} \left( \{ \mbox{deg}(g_j) \} \cup \{ \bigcup_i\mbox{deg}(\nabla g_j \cdot f_i ) \} \right) \right).
\]
Let us define $Z(\vw): \vm \vm^t$, which is a symmetric matrix
of monomial terms of degree at most $2D$. Each polynomial of degree
up to $2D$ may now be written as a trace inner product
\[p(\vx, \vlam):\ \tupleof{ P, Z(\vw)} = \mathsf{trace}( P Z(\vw) )\,,\]
wherein the matrix $P$ has real-valued entries that define the
coefficients in $p$ corresponding to the various
monomials. Although $Z$ is a function of $\vx$ and $\vlam$, we
write $Z(\vx)$ as a function of just $\vx$ to denote the matrix
$Z([\vx, \vzero])$ (i.e., we set $\vlam = \vzero$).
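The construction of $\vm$ and $Z(\vw)$ is straightforward to mechanize; a small sympy sketch for two variables and $D = 1$ (the ordering of the monomials is immaterial):
\begin{verbatim}
# Build the monomial vector m and the matrix Z(w) = m m^T.
import itertools
import sympy as sp

w = sp.symbols('w1 w2')  # illustrative w = [x, lam]
D = 1

monos = []
for powers in sorted(itertools.product(range(D + 1),
                                       repeat=len(w)),
                     key=sum):
    if sum(powers) <= D:
        monos.append(sp.prod([v**p
                              for v, p in zip(w, powers)]))

m = sp.Matrix(monos)  # e.g., (1, w2, w1) for D = 1
Z = m * m.T           # monomials of degree <= 2D
sp.pprint(Z)
\end{verbatim}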
Checking Eq.~\eqref{eq:positivity-cond}
is equivalent to solving the following optimization
problem over $\vx$
\begin{equation}\label{eq:positivity-cond-relax}
\begin{array}{ll}
\mathsf{max}_{\vx} \tupleof{I,Z(\vx)} & \\
\mathsf{ s.t. } &\tupleof{\mathcal{V}_{\vc_j}, Z(\vx)} \leq 0\,, \\
\end{array}
\end{equation}
wherein $I$ is the identity matrix, and $V_{\vc_j}(\vx)$ is written in the
inner product form as $\tupleof{\mathcal{V}_{\vc_j}, Z(\vx)}$. Let
$\tupleof{\Lambda_k, Z(\vw)}$ represent the variable $\lambda_k$. Thus, $\vlam$ is
represented as the vector $\Lambda(Z(\vw))$, wherein the $k^{th}$ element is
$\tupleof{\Lambda_k, Z(\vw)}$. Then, the conditions
in~\eqref{eq:decr-condition} are now written as
\begin{equation}\label{eq:decr-cond-relax}
\begin{array}{ll}
\mathsf{max}_{\vw} \tupleof{I,Z(\vw)} & \\
\mathsf{s.t.} & \hspace{-1.4cm} \tupleof{F_{\vc_j,i}, Z(\vw)} = A_i^t
\Lambda(Z(\vw)),\ i \in \{1,\ldots, m\} \\
& \hspace{-1.4cm} \tupleof{-F_{\vc_j,0}, Z(\vw)} \leq \vb^t \Lambda(Z(\vw)) \\
& \hspace{-1.4cm} \Lambda(Z(\vw)) \geq 0 \,,
\end{array}
\end{equation}
wherein the components $\nabla V_{\vc_j} \cdot f_i(\vx)$
defining the Lie derivatives of $V_{\vc_j}$ are now written
in terms of $Z(\vw)$ as $\tupleof{F_{\vc_j,i},Z(\vw)}$.
Notice that $Z(\vzero)$ is a square matrix where the first entry ($Z(\vzero)_{1,1}$) is $1$ and the rest of the entries are zero. Let $Z_0 = Z(\vzero)$. Then $\tupleof{I, Z_0} = 1$, and $(\forall \vw) \ Z(\vw) \succeq Z_0$.
The SDP relaxation is used to solve these problems and provides an upper
bound on the solution, wherein $D$ defines the degree of the
relaxation~\cite{henrion2009gloptipoly}. The
relaxation treats $Z(\vw)$ as a fresh matrix variable $Z$ that is no longer
a function of $\vw$. The constraint $Z \succeq Z_0$ is added.
$Z(\vw): \vm \vm^t$ is a rank-one matrix, and ideally $Z$ should
be constrained to be rank one as well. However, such a constraint is
non-convex, and therefore it is dropped from our relaxation.
Also, constraints involving $Z(\vw)$ in Eqs.~\eqref{eq:positivity-cond-relax} and~\eqref{eq:decr-cond-relax} are added
as support constraints (cf.~\cite{lasserre2001global,lasserre2009moments,henrion2009gloptipoly}).
Both optimization
problems (Eqs.~\eqref{eq:positivity-cond-relax} and~\eqref{eq:decr-cond-relax}) are
feasible by setting $Z$ to be $Z_0$. Furthermore, if the optimal
solution for each problem is $1$ in the SDP relaxation, then we will
conclude that the given candidate is a CLF. Unfortunately, the
converse is not necessarily true: the relaxation may fail to
recognize that a given candidate is in fact a CLF.
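As a toy illustration of the relaxed positivity check, consider $V(\vx) = x_1^2 + x_2^2$ with $D = 1$ and $\vm = (1, x_1, x_2)^t$. The cvxpy sketch below (which also imposes the moment constraint $Z_{1,1} = 1$ arising from the constant entry of $\vm$) attains the optimum $1$, certifying positivity of this particular $V$:
\begin{verbatim}
# Relaxed positivity check for V = x1^2 + x2^2, D = 1.
import cvxpy as cp
import numpy as np

Z0 = np.zeros((3, 3)); Z0[0, 0] = 1.0
Vmat = np.diag([0.0, 1.0, 1.0])  # <Vmat, Z> = Z11 + Z22

Z = cp.Variable((3, 3), symmetric=True)
cons = [Z - Z0 >> 0,             # Z is at least Z0
        Z[0, 0] == 1,            # moment constraint
        cp.trace(Vmat @ Z) <= 0] # <V, Z> <= 0
prob = cp.Problem(cp.Maximize(cp.trace(Z)), cons)
prob.solve()
print("optimum:", prob.value)    # ~1: no counterexample
\end{verbatim}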
\begin{lemma}\label{lem:non-zero-sol}
Whenever the relaxed optimization problems in Eqs.~\eqref{eq:positivity-cond-relax} and~\eqref{eq:decr-cond-relax}
yield $1$ as a solution, then the given candidate $V_{\vc_j}(\vx)$ is in fact a CLF.
\end{lemma}
\begin{proof}
Suppose that $V_{\vc_j}$ is not a CLF but both optimization problems yield an optimal value of $1$. Then, one of Eq.~\eqref{eq:positivity-cond} or Eq.~\eqref{eq:decrease-cond-init} is violated.
I.e., $(\exists \vx^* \neq \vzero, \vlam^* \geq \vzero)$ s.t. $V_{\vc_j}(\vx^*) \leq 0$, or $A_i^t \vlam^*=\nabla V_{\vc_j}.f_i(\vx^*)$ for $i \in \{1, \ldots, m\}$ and $\vlam^{*t} \vb \geq - \nabla V_{\vc_j}.f_0(\vx^*)$.
Let $\vw^* = [\vx^*, \vlam^*]$; then $Z(\vw^*) \succeq Z_0$
is a feasible solution for Eq.~\eqref{eq:positivity-cond-relax} or
Eq.~\eqref{eq:decr-cond-relax}. Let $Z' = Z(\vw^*) - Z_0$. As
$\vw^* \neq \vzero$, $Z'$ has a non-zero diagonal element, and
since $Z' \succeq 0$, we may also conclude that one of the
eigenvalues of $Z'$ must be positive. Therefore,
$\tupleof{I, Z'} > 0$, as the trace of $Z'$ is the sum of the
eigenvalues of $Z'$, and hence
$\tupleof{I, Z(\vw^*)} > \tupleof{I, Z_0} = 1$. Thus,
the optimal solution of at least one of the two problems has
to be greater than one, which contradicts our original
assumption.
\end{proof}
However, the converse is not true. It is possible for $Z \succeq Z_0$
to be optimal for either relaxed condition, but
$Z \not= Z(\vw)$ for any $\vw$.
This happens because (as mentioned earlier) the relaxation drops two key
constraints to convexify the conditions: (1) $Z$ has to be a rank one
matrix written as $Z: \vm \vm^t$ and (2) there is a $\vw$ such that
$\vm$ is the vector of monomials corresponding to $\vw$.
\begin{lemma}\label{lem:no-lam-relaxation}
Suppose Eq.~\eqref{eq:decr-cond-relax} has a solution $Z \not= Z_0$, then
\begin{align*}
& (\forall \vu \in U) \ \tupleof{F_{\vc_j,0}, Z} + \sum_{i=1}^m \tupleof{F_{\vc_j,i}, Z} u_i \geq 0\,.
\end{align*}
\end{lemma}
\begin{proof}
While the relations between monomials are lost in the relaxed problem, each inequality in
Eq.~\eqref{eq:decr-cond-relax} still holds. Let $\hat{\vlam} = \Lambda(Z)$. Then, we have:
\begin{align*}
\tupleof{F_{\vc_j,i}, Z} = A_i^t \hat{\vlam},\ i \in \{1,\ldots, m\} \\
\tupleof{-F_{\vc_j,0}, Z} \leq \vb^t \hat{\vlam} , \ \hat{\vlam} \geq 0\,.
\end{align*}
Similar to Lemma~\ref{lem:control-dual} (using Farkas' Lemma), this is equivalent to
\begin{equation*}
(\forall \vu \in U) \ \tupleof{F_{\vc_j,0}, Z} + \sum_{i=1}^m \tupleof{F_{\vc_j,i}, Z} u_i \geq 0 \,.
\end{equation*}
\end{proof}
\subsection{Lifting the Counterexamples}
Thus far, we have observed that the relaxed optimization
problems (Eqs.~\eqref{eq:positivity-cond-relax}
and~\eqref{eq:decr-cond-relax}) yield matrices $Z$ as
counterexamples, rather than vectors $\vx$. Furthermore, given a
solution $Z$, there is no way for us to extract a corresponding
$\vx$ for reasons mentioned above. We solve this issue by ``lifting''
our entire learning loop to work with observations of the form:
\[ O_j: \{ (Z_1, \vu_1),\ldots,(Z_{j}, \vu_{j})\} \,,\]
effectively replacing states $\vx_i$ by matrices $Z_i$.
Also, each basis function $g_k(\vx)$ in $\vg$ is now written instead as $\tupleof{G_k, Z}$.
The candidates are therefore of the form $\sum_{k=1}^r c_k \tupleof{ G_k, Z}$.
Likewise, we write the components of its Lie
derivative $\nabla g_k \cdot f_i$ in terms of $Z$ ($\tupleof{G_{ki}, Z}$).
Therefore
\begin{align}\label{eq:relaxed-template}
\mathcal{V}_\vc = \sum_{k=1}^r c_{k} G_k \ , \ F_{\vc,i} = \sum_{k=1}^r c_{k} G_{ki}\,.
\end{align}
\begin{definition}[Relaxed CLF]\label{def:relaxed-CLF}
A polynomial function $V_\vc(\vx) = \sum_{k=1}^r c_k g_k(\vx)$, s.t. $\tupleof{\mathcal{V}_\vc, Z_0} = 0$, is a $D$-relaxed CLF iff for all
$Z \succeq Z_0$ with $Z \not= Z_0$:
\begin{equation}\label{eq:relaxed-clf}
\begin{array}{l} \tupleof{\mathcal{V}_\vc, Z} > 0 \ \land \\
(\exists \vu \in U) \ \tupleof{F_{\vc,0}, Z} + \sum_{i=1}^m \tupleof{F_{\vc,i}, Z} u_i < 0\,.
\end{array}
\end{equation}
\end{definition}
\begin{theorem}\label{thm:relaxed-CLF-vs-CLF}
A relaxed CLF is a CLF.
\end{theorem}
\begin{proof}
Suppose that $V_\vc$ is not a CLF. The proof is complete by showing that $V_\vc$
is not a relaxed CLF. If $V_\vc(\vzero) \neq 0$, then $\tupleof{\mathcal{V}_\vc, Z_0} \neq 0$ and $V_\vc$ is not a relaxed CLF. Otherwise, according to Eq.~\eqref{eq:clf-def}
there exists a $\vx \neq \vzero$ s.t.
\[
V_\vc(\vx) \leq 0 \ \lor \ (\forall \vu \in U) \ \nabla V_\vc.f(\vx, \vu) \geq 0 \,.
\]
Therefore, there exists $\vx \neq \vzero$ s.t.
\begin{align*}
&\tupleof{\mathcal{V}_\vc, Z(\vx)} \leq 0 \ \lor \\
&(\forall \vu \in U) \ \tupleof{F_{\vc,0}, Z(\vx)} + \sum_{i=1}^m \tupleof{F_{\vc,i}, Z(\vx)} u_i \geq 0 \,.
\end{align*}
Setting $Z:\ Z(\vx)$ shows that
$V_\vc$ is not a relaxed CLF, since the negation of Eq.~\eqref{eq:relaxed-clf} holds.
\end{proof}
We lift the overall formal learning framework to work with
matrices $Z$ as counterexamples using the following modifications to
various parts of the framework:
\begin{enumerate}
\item First, for each $(Z_j, \vu_j)$ in the observation set, $Z_j$ is the feasible solution
returned by the SDP solver while solving Eqs.~\eqref{eq:decr-cond-relax} and ~\eqref{eq:positivity-cond-relax}.
\item However, the demonstrator $\D$ requires its input to be a state
$\vx \in X$. We define a projection operator $\pi: \zeta \mapsto X$
mapping each $Z$ to a state $\vx: \pi(Z)$, such that the
demonstrator operates over $\pi(Z_j)$ at each step. Note that the
vector of monomials $\vm$ used to define $Z$ from $\vx$ includes the
degree one terms $x_1, \ldots, x_n$. The projection operator
simply selects the entries from $Z$ corresponding to these
variables. Other more sophisticated projections are also possible,
but not considered in this work.
\item The space of all candidates $C$ remains unaltered except
that each basis polynomial is now interpreted as
$g_j: \tupleof{G_j, Z}$ and similarly for the Lie derivative
$(\nabla g_j)\cdot f(\vx, \vu)$. Thus, the learner is effectively
unaltered.
\end{enumerate}
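A minimal sketch of this projection, assuming the monomial vector is ordered as $\vm = [1, x_1, \ldots, x_n, \ldots]$ (a convention adopted here only for illustration):
\begin{verbatim}
# Minimal sketch of pi(Z): read the state x off the first row of the
# moment matrix, whose entries 1..n are <1 * x_i> under the assumed
# monomial ordering m = [1, x_1, ..., x_n, ...].
import numpy as np

def project(Z: np.ndarray, n: int) -> np.ndarray:
    return np.asarray(Z[0, 1:n + 1]).ravel()

Z = np.array([[ 1.0, 0.3, -0.2],
              [ 0.3, 0.5,  0.1],
              [-0.2, 0.1,  0.4]])
print(project(Z, 2))   # -> [ 0.3 -0.2]
\end{verbatim}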
\begin{definition}[Relaxed Observation Compatibility] \label{def:compatible-data-relaxed}
A polynomial function $V_\vc$ is said to be compatible with a set of
$D$-relaxed-observations $O$ iff $V_\vc$ respects the $D$-relaxed CLF conditions
(Eq.~\eqref{eq:relaxed-clf}) for every point in $O$:
\begin{align*}
& \tupleof{\mathcal{V}_\vc, Z_0} = 0 \ \wedge \\
&\bigwedge\limits_{(Z_k, \vu_k) \in O_j}
\left(\begin{array}{c} \tupleof{\mathcal{V}_\vc, Z_k} > 0\ \land\ \\ \tupleof{F_{\vc,0}, Z_k} + \sum_{i=1}^m \tupleof{F_{\vc,i}, Z_k}u_{ki} < 0 \end{array}\right)\,.
\end{align*}
\end{definition}
\begin{definition}[Relaxed Demonstrator Compatibility] \label{def:compatible-dem-relaxed}
A polynomial function $V_\vc$ is said to be compatible with a relaxed-demonstrator
$\D \circ \pi$ iff $V_\vc$ respects the $D$-relaxed CLF conditions
(Eq.~\eqref{eq:relaxed-clf}) for every observation that can be generated by
the relaxed-demonstrator:
\begin{align*}
& \tupleof{\mathcal{V}_\vc, Z_0} = 0 \ \wedge \\ &(\forall Z \succeq Z_0, \ Z \neq Z_0)\\
& \ \ \ \ \ \ \ \
\left(\begin{array}{c} \tupleof{\mathcal{V}_\vc, Z} > 0\ \land\ \\ \tupleof{F_{\vc,0}, Z} + \sum_{i=1}^m \tupleof{F_{\vc,i}, Z} \D(\pi(Z))_i < 0 \end{array}\right)\,.
\end{align*}
In other words, $V_\vc$ is a relaxed Lyapunov function for the closed loop system
$\Psi(X, U, f, \D \circ \pi)$.
\end{definition}
\begin{theorem}
The adapted formal learning framework terminates and either finds a CLF $V$, or proves that
no linear combination of basis functions would yield a
CLF, with robust compatibility w.r.t. the (relaxed) demonstrator.
\end{theorem}
\begin{proof}
$C_{j-1}$ represents all $\vc$ s.t. $V_\vc$
is compatible with the relaxed observations $O_{j-1}$. $\mathcal{V}_\vc$ and $F_{\vc,i}$
are still linear in $\vc$ (Eq.~\eqref{eq:relaxed-template}), and therefore $C_{j-1}$, which is
the set of all $\vc \in C$ s.t.
\begin{equation*}
\begin{array}{l}
\tupleof{\mathcal{V}_\vc, Z_0} = 0 \ \wedge \\
\bigwedge\limits_{(Z_k, \vu_k) \in O_{j-1}}
\left( \begin{array}{c} \tupleof{\mathcal{V}_\vc, Z_k} > 0\ \land\ \\
\sum_{i=1}^m \tupleof{F_{\vc,i}, Z_k}u_{ki}
+ \tupleof{F_{\vc,0}, Z_k} < 0
\end{array} \right)
\end{array} \,,
\end{equation*}
is a polytope (similar to Lemma~\ref{lemma:cj-convex}).
Suppose that, at the $j^{th}$ iteration, $V_{\vc_j} : \vc_j^t . \vg$ is generated by the learner.
The relaxed verifier solves Eqs.~\eqref{eq:positivity-cond-relax}
and~\eqref{eq:decr-cond-relax}. If the optimal solution for these problems are $1$,
by Lemma~\ref{lem:non-zero-sol}, $V_{\vc_j}$ is a CLF. Otherwise, it returns a
counterexample $Z_j \succeq Z_0$ with $Z_j \neq Z_0$. Moreover, according to Eqs.~\eqref{eq:positivity-cond-relax} and~\eqref{eq:decr-cond-relax} and Lemma~\ref{lem:no-lam-relaxation}:
\begin{align*}
&\tupleof{\mathcal{V}_{\vc_j}, Z_j} \leq 0 \ \lor \\ &(\forall \vu \in U) \ \tupleof{F_{\vc_j,0}, Z_j} + \sum_{i=1}^m \tupleof{F_{\vc_j,i}, Z_j} u_i \geq 0\,.
\end{align*}
In other words, $V_{\vc_j}$ is not a $D$-relaxed CLF. Next, the demonstrator
generates a proper feedback for $\pi(Z_j)$ and observation
$(Z_j, \D(\pi(Z_j)))$ is added to the set of observations.
Notice that $V_{\vc_j}$ does
not respect the $D$-relaxed CLF conditions for $(Z_j, \D(\pi(Z_j)))$. I.e.
\begin{align*}
&\tupleof{\mathcal{V}_{\vc_j}, Z_j} \leq 0 \ \lor \\ &\tupleof{F_{\vc_j,0}, Z_j} + \sum_{i=1}^m \tupleof{F_{\vc_j,i}, Z_j} \D(\pi(Z_j))_i \geq 0 \,.
\end{align*}
Therefore, the new set $C_{j}$ does not contain $\vc_j$.
Now, the learner uses the center of the maximum volume ellipsoid
to generate the next candidate. This process repeats, and the learning
procedure terminates in finitely many iterations. When the algorithm
returns with no solution, it means that $\Vol(C_j)$ $\leq \gamma \delta^r$.
Similar to Theorem~\ref{thm:clf-or-no-robust-solution}, this guarantees
that no ball of radius $\delta$ fits inside $C_j$, which represents the
set of all linear combination of basis functions, compatible
with the relaxed observations. Therefore, no linear combination of basis functions
would yield a CLF with robust compatibility with the relaxed
observation and therefore with the relaxed-demonstrator.
\end{proof}
In the rest of this paper, we frame the discussion in terms of CLFs. Nevertheless, the same
results apply to relaxed CLFs as well.
\subsection{Counterexamples Selection}\label{sec:counterexample-selection}
As discussed earlier, in Section~\ref{sec:learner}, there are
two important factors that affect the overall convergence rate of
the learning framework: (a) the choice of a candidate
$\vc_j \in C_{j-1}$ and (b) the choice of a counterexample $\vx_j$ that
shows that the current candidate $V_{\vc_j}$ is not a CLF. We will now
discuss the choice of a ``good'' counterexample.
As mentioned, when there is a counterexample $\vx_j$ for $V_{\vc_j}$,
there are two half spaces
$H_{j1} : \{\vc \ | \ \va_{j1}^t . \vc > b_{j1}\}$, and
$H_{j2} : \{\vc \ | \ \va_{j2}^t . \vc > b_{j2}\}$ such that
$C_{j} : C_{j-1} \cap H_{j1} \cap H_{j2}$. In particular,
$\vc_j \not\in C_{j}$, yields the following constraints over $\vc_j$:
\begin{equation}\label{eq:cj-property-counterexample}
\va_{j1}^t . \vc_j \leq b_{j1} \lor \va_{j2}^t . \vc_j \leq b_{j2} \,.
\end{equation}
In general, the counterexample affects the coefficients of the
half-spaces $\va_{jl}, b_{jl}$ for $l \in \{1,2\}$. To wit, the
counterexample $\vx_j$ defines values for $\vu_j : \D(\vx_j)$,
$g_i(\vx_j)$, $f_i(\vx_j, \vu_j)$, which in turn, define $H_{j1}$ and
$H_{j2}$. Thus, a good counterexample should ``remove'' as large a
set as possible from $C_{j-1}$. Looking at
Eq.~\eqref{eq:cj-property-counterexample}, it is clear that
$\va_{jl}^t . \vc_j - b_{jl} $ would measure how ``far away'' the
counterexample is from the boundary of the half-space $H_{jl}$,
assuming that $||\va_{jl}||$ is kept constant. As proposed in our
earlier work~\cite{Ravanbakhsh-Others/2015/Counter-LMI}, one could
find a counterexample that maximizes these quantities, so that a
``good'' counterexample can be selected. For checking~\eqref{eq:positivity-cond}, the
verifier finds a counterexample $\vx$ that maximizes a slack variable $\gamma$ s.t.
\[
V_{\vc_j}(\vx) \leq -\gamma \,,
\]
and for the second check~\eqref{eq:decr-condition}, the slack variable
$\gamma$ is introduced and maximized as follows:
\begin{align*}
&\vlam \geq \gamma \ \land \ \bigwedge_{i=1}^m A_i^t \vlam = \nabla V_{\vc_j} \cdot f_i(\vx) \ \land \\
&\vlam^t . \vb \geq -\nabla V_{\vc_j} \cdot f_0(\vx) + \gamma \,.
\end{align*}
As such, we cannot prove improved bounds on the number of
iterations needed to terminate using this approach. However, we do, in
fact, observe a significant decrease in the number of iterations
when an objective function is added to the selection of the counterexample.
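As a rough numerical illustration of the first check above, maximizing the slack $\gamma$ subject to $V_{\vc_j}(\vx) \leq -\gamma$ amounts to minimizing $V_{\vc_j}$ over a bounded region. In the following Python sketch, the candidate and the search box are hypothetical, and a multistart local search stands in for the exact solver used by the verifier.
\begin{verbatim}
# Minimal sketch: pick a "deep" counterexample for the positivity
# check by maximizing the slack gamma in V(x) <= -gamma, i.e. by
# minimizing V(x) over a (hypothetical) box around the origin.
import numpy as np
from scipy.optimize import minimize

def V(x):                          # hypothetical candidate V_{c_j}
    return x[0]**2 - 0.5 * x[1]**2

starts = np.random.default_rng(0).uniform(-1, 1, (20, 2))
best = min((minimize(V, x0, bounds=[(-1, 1)] * 2) for x0 in starts),
           key=lambda r: r.fun)
gamma = -best.fun
if gamma > 1e-8:
    print("counterexample", best.x, "with slack", gamma)
\end{verbatim}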
\section{Introduction}
In recent years, a variety of experiments have been devoted to exploring
spin-dependent phenomena in hard processes.
In particular, experiments
with transversely polarized hadrons have opened a new window to study
the rich structure of the perturbative/nonperturbative dynamics of QCD
associated with the transverse spin \cite{BDR:02}.
One of the fundamental quantities which newly enter into play
is the chiral-odd, twist-2 parton distribution called the transversity
$\delta q(x)$; it represents the distribution of transversely
polarized quarks inside a transversely polarized nucleon, i.e.,
the partonic structure of the nucleon which is complementary to that
associated with the other twist-2 distributions, such as the familiar
density and helicity distributions $q(x)$ and $\Delta q(x)$.
However, $\delta q(x)$ is not well known so far.
This is because $\delta q(x)$, in contrast to $q(x)$ and $\Delta q(x)$,
cannot be measured in inclusive DIS; its chiral-odd nature
requires a chirality flip, so that
$\delta q(x)$ must always be accompanied by another chiral-odd
function in physical observables.
Only very recently has the first global fit of
$\delta q(x)$ been given \cite{Anselmino:07},
using semi-inclusive DIS data in combination with
$e^{+}e^{-}$ data for the associated chiral-odd (Collins) fragmentation function.
The transversely polarized Drell-Yan (tDY) process,
$p^{\uparrow}p^{\uparrow}\longrightarrow l^+l^-X$,
is another promising process to
access the transversity $\delta q(x)$.
Based on QCD factorization, the spin-dependent cross section
$\Delta_T d \sigma \equiv (d\sigma^{\uparrow\uparrow}-d\sigma^{\uparrow\downarrow})/2$
is given as a convolution,
$\Delta_T d\sigma = \int d x_1 d x_2\, \delta H (x_1, x_2 ; \mu_F^2)\,
\Delta_T d \hat{\sigma} (x_1^0 /x_1, x_2^0 /x_2 ; Q^2, \mu_F^2/Q^2)$,
where $Q$ is the dilepton mass, $\mu_F$ is the factorization scale,
\begin{equation}
\delta H(x_1,x_2;\mu_F^2)=\sum_{q} e_q^2
\left[
\delta q(x_1 ,\mu_F^2)\delta\bar{q}(x_2,\mu_F^2)
+\delta \bar{q}(x_1 ,\mu_F^2)\delta q(x_2, \mu_F^2)
\right],
\label{tPDF}
\end{equation}
is the product of transversity distributions of the two nucleons,
summed over the massless quark flavors $q$
with their charge squared $e_q^2$,
and $\Delta_T d \hat{\sigma}=(d\hat{\sigma}^{\uparrow\uparrow}
-d\hat{\sigma}^{\uparrow\downarrow})/2$
is the corresponding partonic
cross section.
$x_1^0 = \sqrt{\tau}\ e^y , x_2^0 =\sqrt{\tau}\ e^{-y}$
are the relevant scaling variables, where $\tau =Q^2/S$, and
$\sqrt{S}$ and $y$ are the total energy and dilepton's rapidity in the
nucleon-nucleon CM system.
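For concreteness, the following Python sketch (with toy placeholder transversity distributions, not fits) evaluates the scaling variables and the flavor sum (\ref{tPDF}) at RHIC-like kinematics; the functional forms of \texttt{delta\_q} and \texttt{delta\_qbar} are hypothetical.
\begin{verbatim}
# Minimal sketch: scaling variables x_{1,2}^0 and the flavor sum of
# Eq. (1), with toy placeholder transversity distributions.
import numpy as np

EQ2 = {'u': (2/3)**2, 'd': (1/3)**2, 's': (1/3)**2}

def delta_q(flavor, x, mu2):       # hypothetical toy input
    return x**0.8 * (1 - x)**3

def delta_qbar(flavor, x, mu2):    # hypothetical toy input
    return 0.1 * (1 - x)**7

def delta_H(x1, x2, mu2):
    return sum(e2 * (delta_q(q, x1, mu2) * delta_qbar(q, x2, mu2)
                     + delta_qbar(q, x1, mu2) * delta_q(q, x2, mu2))
               for q, e2 in EQ2.items())

S, Q, y = 200.0**2, 5.0, 2.0       # RHIC-like kinematics, GeV units
tau = Q**2 / S
x1_0, x2_0 = np.sqrt(tau) * np.exp(y), np.sqrt(tau) * np.exp(-y)
print(x1_0, x2_0, delta_H(x1_0, x2_0, Q**2))
\end{verbatim}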
At the leading twist level, the gluon does not contribute
to the transversely polarized, chiral-odd process, corresponding
to helicity-flip by one unit.
The unpolarized cross section,
$d \sigma \equiv (d\sigma^{\uparrow\uparrow}+d\sigma^{\uparrow\downarrow})/2$,
obeys factorization similar as $\Delta_T d \sigma$,
in terms of $H(x_1, x_2 ; \mu_F^2)$ that is given
by (\ref{tPDF}) with $\delta q \rightarrow q$ and $\delta \bar{q} \rightarrow \bar{q}$,
and additional functions involving the gluon distribution that
come in as higher-order $\alpha_s$ corrections.
Therefore, the double-spin asymmetry in tDY, $A_{TT}\equiv \Delta_T d \sigma/ d \sigma$,
in principle provides clean information on the transversity $\delta q(x)$.
At the leading order (LO)
in QCD perturbation theory, $x_{1,2}^0$ coincide with the momentum fractions
carried by the incident partons, e.g.,
$\Delta_T d \hat{\sigma}\propto \delta(x_1 -x_1^0)\delta(x_2 -x_2^0)$,
so that
\cite{Ralston:1979ys,BDR:02}
\begin{equation}
A_{TT}=\frac{\Delta_T d \sigma}{d \sigma}
=\frac{1}{2}\cos(2 \phi) \frac{\delta H(x_1^0, x_2^0; Q^2)+\cdots}
{H(x_1^0, x_2^0; Q^2)+\cdots} \ ,
\label{eq:att}
\end{equation}
where $\phi$ denotes the azimuthal angle of one of
the leptons with respect to the incoming nucleon's spin axis, and the ellipses
stand for the QCD corrections of NLO or higher.
The $\cos(2\phi)$ dependence
is characteristic of the spin-dependent cross section $\Delta_T d \sigma$
of tDY \cite{Ralston:1979ys}.
$A_{TT}$ to be observed in tDY at RHIC-Spin experiment
was calculated by Martin et al.~\cite{MSSV:98} including the NLO QCD corrections.
The results are somewhat discouraging in that the corresponding $A_{TT}$ are
at most a few percent \cite{MSSV:98}.~\footnote{In \cite{MSSV:98},
the corresponding asymmetries are defined through certain integration over $\phi$,
and equal (\ref{eq:att}) with the formal replacement $\cos(2\phi) \rightarrow 2/\pi$.}
The reason is twofold (see (\ref{eq:att})):
(i) tDY in $pp$ collisions probes the product of the quark transversity-distribution
and the antiquark one as (\ref{tPDF}),
and the latter is likely to be small;
(ii) the rapid growth of the unpolarized sea-quark distributions in
$H(x_1^0, x_2^0; Q^2)$
is caused by the DGLAP evolution in the low-$x$ region that is typically probed at RHIC,
$\sqrt{S}=200$ GeV, $Q \lesssim 10$ GeV, and $\sqrt{\tau}\lesssim 0.05$.
Thus, small $A_{TT}$ at RHIC appears to be
a rather general conclusion (see also \cite{WV:98}).
We note that those previous NLO studies of $A_{TT}$ of (\ref{eq:att})
correspond to tDY with the transverse-momentum $Q_T$ of the produced
lepton pair unobserved, and use the cross sections $\Delta_T d \sigma,
d\sigma$ integrated over $Q_T$ in (\ref{eq:att}).
However, in view of the fact that most of the lepton pairs are actually
produced at small $Q_T$ in experiment,
it is important to examine the double transverse-spin asymmetries at a measured $Q_T$,
in particular its behavior for small $Q_T$.
This is defined similarly as (\ref{eq:att}) using the
``$Q_T$-differential'' cross sections, and we denote it as ${\mathscr{A}_{TT}(Q_T)}$
distinguishing it from the conventional $Q_T$-independent $A_{TT}$.
In fact, participation of the new scale $Q_T (\ll Q)$
causes profound modifications of the relevant theoretical framework.
For example, now the numerator and the denominator
of ${\mathscr{A}_{TT}(Q_T)}$ may involve the parton distributions associated with the scales $\sim Q_T$,
such as $\delta H(x_1^0 , x_2^0; Q_T^2)$ and $H(x_1^0 , x_2^0; Q_T^2)$, respectively.
For $H(x_1^0 , x_2^0; Q_T^2)$, the low-$x$ rise of the unpolarized
sea-quark distributions, mentioned in (ii) above,
is milder compared with $H(x_1^0 , x_2^0; Q^2)$.
Thus, if the former components play dominant roles compared with the latter
in the denominator of ${\mathscr{A}_{TT}(Q_T)}$ through a certain partonic mechanism,
${\mathscr{A}_{TT}(Q_T)}$ for the small $Q_T$ region at RHIC can be larger than $A_{TT}$;
the necessary partonic mechanism is indeed
provided by the large logarithmic contributions
of the type $\ln (Q^2 /Q_T^2 )$, which is
another remarkable consequence of the new scale $Q_T$:
the small transverse-momentum $Q_T$ of the final lepton pair is provided by
the recoil from the emission of soft gluons
which produces the large terms behaving
as $\alpha_s^n\ln^m(Q^2/Q_T^2)/Q_T^2 ~(m=0, 1, \ldots, 2n-1)$
at each order of perturbation theory for the tDY cross sections.
Actually, such enhanced ``recoil logarithms'' spoil
the fixed-order perturbation theory, and
have to be resummed to all orders in $\alpha_s$ to make a reliable
prediction of the cross sections at small $Q_T$.
Recently, we have worked out the corresponding ``$Q_T$-resummation''
for the tDY cross sections
up to next-to-leading logarithmic (NLL)
accuracy, which corresponds to summing up exactly the first
three towers of logarithms, $\alpha_s^n\ln^m(Q^2/Q_T^2)/Q_T^2$ with $m=2n-1,2n-2$
and $2n-3$, for all $n$ \cite{KKST:06}.
Utilizing this result, in the present paper,
we develop QCD prediction for ${\mathscr{A}_{TT}(Q_T)}$ as a function of $Q_T$.
We will demonstrate that the soft gluon corrections are significant
so that
${\mathscr{A}_{TT}(Q_T)}$ in the small $Q_T$ region is considerably
larger than the known value of $A_{TT}$.~\footnote{For the
impact of the $Q_T$ resummation on the spin asymmetries in
semi-inclusive deep inelastic scattering, see \cite{KNV:06}.}
In addition to ${\mathscr{A}_{TT}(Q_T)}$ in tDY at RHIC,
we calculate ${\mathscr{A}_{TT}(Q_T)}$ to be observed at J-PARC when the polarized beam is realized
\cite{Dutta}.
The latter case is also interesting because
the fixed target experiments at J-PARC
probe the parton distributions in the medium $x$ region ($\sqrt{S}=10$
GeV, $Q \gtrsim 2$ GeV, and $\sqrt{\tau} \gtrsim 0.2$), and thus
large asymmetries are expected even for the $Q_{T}$-independent $A_{TT}$ \cite{CDL:06}
(see (ii) above).
We also find that ${\mathscr{A}_{TT}(Q_T)}$ for $Q_T \approx 0$ deserves special attention from the theoretical
as well as the experimental point of view,
and derive a compact analytic formula for $\mathscr{A}_{TT}(Q_T \approx 0)$.
The paper is organized as follows. In Sec.~2,
the $Q_T$-resummation formula for the tDY cross sections is introduced,
and all ingredients necessary for
calculating the $Q_T$-dependent asymmetries ${\mathscr{A}_{TT}(Q_T)}$ including the NLL
resummation contributions are explained.
In Sec.~3, numerical results of ${\mathscr{A}_{TT}(Q_T)}$ at RHIC and J-PARC
are presented. Sec.~4 is devoted to the discussion of
analytic formula of
${\mathscr{A}_{TT}(Q_T)}$ at $Q_T \approx 0$
using the saddle-point method.
Conclusions are given in Sec.~5.
\section{Resummed cross section and asymmetry for tDY}
Throughout the paper we employ the $\overline{\rm MS}$ factorization and
renormalization scheme with the corresponding scales, $\mu_F$ and $\mu_R$.
We first recall basic points of the fixed-order calculation of the spin-dependent,
$Q_T$-differential cross sections of tDY \cite{KKST:06}.
In the lowest-order approximation via the Drell-Yan mechanism,
the lepton pair is produced with vanishing $Q_T$,
so that the corresponding partonic
cross section is proportional to $\delta (Q_T^2)$.
The one-loop corrections to the partonic cross section
involve the virtual gluon corrections, and the real gluon emission contributions,
$q + \bar{q}
\to l + \bar{l} + g$;
in the latter case, the finite $Q_T$ of the lepton pair is provided by the recoil
from the gluon radiation.
Those have been calculated in dimensional regularization \cite{KKST:06}, and
the differential cross section of tDY is obtained as
\begin{eqnarray}
\frac{\Delta_T d \sigma^{\rm FO}}{d Q^2 d Q_T^2 d y d \phi}
=
\cos{(2 \phi )}
\frac{\alpha^2}{3\, N_c\, S\, Q^2}
\left[ \Delta_T X\, (Q_T^2 \,,\, Q^2 \,,\, y)
+ \Delta_T Y\, (Q_T^2 \,,\, Q^2 \,,\, y) \right],
\label{cross section}
\end{eqnarray}
where $\Delta_T X$ and $\Delta_T Y$ are, respectively, expressed as
the convolution of (\ref{tPDF}) with the corresponding partonic cross
sections, see \cite{KKST:06} for their explicit form in the $\overline{\rm MS}$ scheme:
$\Delta_T X = \Delta_T X^{(0)} + \Delta_T X^{(1)}$ as the sum of
${\cal O}(\alpha_s^0)$ and ${\cal O}(\alpha_s^1)$ contributions,
where $\alpha_s = \alpha_s(\mu_R^2)$ with $\mu_R$ the renormalization scale,
and $\Delta_T X^{(0)} = \delta H (x_1^0\,,\,x_2^0\,;\, \mu_F^2 )\ \delta (Q_T^2)$.
The partonic cross section associated with $\Delta_T X^{(1)}$ contains
all terms that are singular as $Q_T \rightarrow 0$,
behaving $Q_T^{-2} \times (\ln(Q^2 /Q_T^2 )$ or $1)$ or $\delta (Q_T^2)$,
while the ${\cal O}(\alpha_s)$ terms that are less singular than those
in $\Delta_T X^{(1)}$ are included in the ``finite'' part $\Delta_T Y$.
In (\ref{cross section}), $\Delta_T X$
becomes very large as $\sim \alpha_s \ln(Q^2/Q_T^2 )/Q_T^2$ and $\sim \alpha_s /Q_T^2$
when $Q_T \ll Q$, representing the recoil effects from the emission
of the soft and/or collinear gluon,
and those terms have to be combined with
the large contributions of similar nature that appear in each order of perturbation
theory
as $\alpha_s^n \ln^{2n-1}(Q^2/Q_T^2 )/Q_T^2$,
$\alpha_s^n \ln^{2n-2}(Q^2/Q_T^2 )/Q_T^2$, and so on, from the multiple gluon emission.
The resummation of those logarithmically enhanced contributions
to all orders has been worked out \cite{KKST:06}, in order to obtain a
well-defined, finite prediction for the cross section.
This is carried out by exponentiating the soft gluon effects in the
impact parameter $b$ space, up to the NLL accuracy.
As the result, $\Delta_T X$ of (\ref{cross section})
is replaced by the corresponding NLL resummed component as
$\Delta_T X \rightarrow \Delta_T X^{\rm NLL}$,
with \cite{KKST:06}
\begin{equation}
\Delta_T X^{\rm NLL} (Q_T^2 , Q^2 , y) =
\sum_{i,j,k}e_i^2
\int_0^{\infty} d b \frac{b}{2}
J_0 (b Q_T) e^{S (b , Q)}
( C_{ij} \otimes f_j )
\left( x_1^0 , \frac{b_0^2}{b^2} \right)
( C_{\bar{i} k} \otimes f_k )
\left( x_2^0 , \frac{b_0^2}{b^2} \right).
\label{resum}
\end{equation}
Here $J_0(bQ_T)$ is the Bessel function arising from the two-dimensional Fourier
transformation from $b$ space to $Q_T$ space, and
$b_0=2e^{-\gamma_E}$ with $\gamma_E$ the Euler constant.
The symbol $\otimes$ denotes convolution as
$(C_{ij} \otimes f_j )\ (x, \mu^2) = \int_x^1\,
(d z /z)\, C_{ij} (z, \alpha_s(\mu^2))\, f_j (x / z, \mu^2)$.
Note that the suffix $i , j , k$ can be either $q , \bar{q}$
including the flavor degrees of freedom,
and we set $f_{q}(x, \mu^2) \equiv \delta q ( x , \mu^2)$,
$f_{\bar{q}}(x, \mu^2) \equiv \delta \bar{q} ( x , \mu^2)$.
The soft gluon effects are resummed into the Sudakov factor $e^{S(b,Q)}$ with
\begin{eqnarray}
S(b,Q)=-\int_{b_0^2/b^2}^{Q^2}\frac{d\kappa^2}{\kappa^2}
\left\{ A_q(\alpha_s(\kappa^2)) \ln \frac{Q^2}{\kappa^2}
+ B_q (\alpha_s(\kappa^2))\right\}.
\label{sudakov}
\end{eqnarray}
The functions $A_q$, $B_q$ as well as
the coefficient functions $C_{ij}$ are perturbatively calculable:
$A_q (\alpha_s )= \sum_{n=1}^{\infty} \left( \frac{\alpha_s}{2 \pi}
\right)^n A_q^{(n)}$,
$B_q (\alpha_s )= \sum_{n=1}^{\infty} \left( \frac{\alpha_s}{2 \pi}
\right)^n B_q^{(n)}$,
and
$C_{ij} (z, \alpha_s )
= \delta_{ij}\delta (1 - z) +
\sum_{n=1}^{\infty} \left( \frac{\alpha_s}{2 \pi}\right)^n C_{ij}^{(n)} (z)$.
At the NLL accuracy,
\begin{eqnarray}
A_q^{(1)}=2C_F,
~~~A_q^{(2)}=2C_F\left\{\left(\frac{67}{18}-\frac{\pi^2}{6}\right)C_G
-\frac{5}{9}N_f\right\}, ~~~B_q^{(1)}=-3C_F,
\label{eq:AB}
\end{eqnarray}
where $C_F= (N_c^2 -1)/(2N_c )$, $C_G = N_c$, and $N_f$ is the number of
QCD massless flavors, and
\begin{eqnarray}
C_{ij}^{(1)}(z)
=\delta_{ij} C_F\left(\frac{\pi^2}{2}-4\right)\delta(1-z)
\label{eq:C}
\end{eqnarray}
are derived in \cite{KKST:06}.
The result (\ref{eq:AB}) coincides with that obtained for other
processes \cite{DS:84,KT:82}, demonstrating that $\{A_q^{(1)}, A_q^{(2)}, B_q^{(1)}\}$
are universal (process-independent).~\footnote{$B_q^{(n)}$ ($n \geq 2$)
and $C_{ij}^{(n)}(z)$ ($n \geq 1$) depend on the process~\cite{dG}.
Also, $A_q^{(n)}$ ($n=1,2,\ldots$)
and $B_q^{(1)}$ are independent of the factorization
scheme, but $B_q^{(n)}$ ($n \ge 2$) and $C_{ij}^{(n)}(z)$ ($n \ge 1$)
depend on the factorization scheme
(see e.g. \cite{BCDeG:03}).}
Substituting (\ref{eq:AB}) and
the running coupling constant $\alpha_s (\kappa^2)$ at two-loop level,
the $\kappa^2$ integral in (\ref{sudakov})
can be performed explicitly to the NLL accuracy, and the result can be
systematically organized as (see also \cite{LKSV:01,BCDeG:03})
\begin{eqnarray}
S(b, Q)&=&\frac{1}{\alpha_s (\mu_R^2) }h^{(0)}(\lambda)+h^{(1)}(\lambda)\ ,
\label{sudakov:1}
\end{eqnarray}
where the first and second terms collect the LL and NLL contributions,
respectively, as
\begin{eqnarray}
h^{(0)}(\lambda)&=&\frac{A_q^{(1)}}{2\pi\beta_0^2}[\lambda+\ln(1-\lambda)],
\label{eq:h0}
\\
h^{(1)}(\lambda)&=&\frac{A_q^{(1)}\beta_1}{2\pi\beta_0^3}
\left[\frac{1}{2}\ln^2(1-\lambda)+\frac{\lambda+\ln(1-\lambda)}{1-\lambda}\right]
+\frac{B_q^{(1)}}{2\pi\beta_0}\ln(1-\lambda)
\nonumber
\\&&
-\frac{1}{4\pi^2\beta_0^2}\left[ A_q^{(2)} - 2\, \pi \beta_0 A_q^{(1)}
\ln \frac{Q^2}{\mu_R^2} \right]
\left[\frac{\lambda}{1-\lambda}+\ln(1-\lambda)\right].
\label{sudakov:2}
\end{eqnarray}
In these equations, $\beta_0\,,\, \beta_1$ are the first two coefficients
of the QCD $\beta$ function given by
$\beta_0=( 11C_G-2N_f )/(12\pi)$,
$\beta_1= ( 17C_G^2-5C_GN_f-3C_FN_f)/(24\pi^2 )$, and
\begin{equation}
\lambda = \beta_0\alpha_s( \mu_R^2 ) \ln \frac{Q^2 b^2}{b_0^2}
\equiv \beta_0\alpha_s( \mu_R^2 ) L\ .
\label{eq:lambda}
\end{equation}
In the $b$ space, $L = \ln(Q^2b^2/b_0^2)$ plays the role of the large logarithmic
expansion parameter with $b\sim 1/Q_T$, and $\lambda$ of
(\ref{eq:lambda}) is formally considered as being of order unity in
the resummed logarithmic expansion to the NLL in (\ref{sudakov:1}), where
the neglected NNLL corrections are down by $\alpha_s(\mu_R^2 )$.
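For illustration, the following Python sketch evaluates the NLL Sudakov exponent (\ref{sudakov:1})-(\ref{sudakov:2}) at $\mu_R = Q$, where the $\ln (Q^2/\mu_R^2)$ term drops out; $\alpha_s$ is treated as a fixed numerical input rather than run from a reference scale, which is a simplification made only for display.
\begin{verbatim}
# Minimal sketch: the NLL Sudakov exponent S(b,Q) at mu_R = Q,
# with alpha_s supplied by hand (an illustrative simplification).
import numpy as np

CF, CG, Nf = 4/3, 3.0, 5           # Nf is illustrative
A1 = 2 * CF
A2 = 2 * CF * ((67/18 - np.pi**2 / 6) * CG - 5/9 * Nf)
B1 = -3 * CF
beta0 = (11 * CG - 2 * Nf) / (12 * np.pi)
beta1 = (17 * CG**2 - 5 * CG * Nf - 3 * CF * Nf) / (24 * np.pi**2)
b0 = 2 * np.exp(-np.euler_gamma)

def S(b, Q, alphas):               # valid for lambda < 1
    lam = beta0 * alphas * np.log(Q**2 * b**2 / b0**2)
    l1 = np.log(1 - lam)
    h0 = A1 / (2 * np.pi * beta0**2) * (lam + l1)
    h1 = (A1 * beta1 / (2 * np.pi * beta0**3)
          * (0.5 * l1**2 + (lam + l1) / (1 - lam))
          + B1 / (2 * np.pi * beta0) * l1
          - A2 / (4 * np.pi**2 * beta0**2) * (lam / (1 - lam) + l1))
    return h0 / alphas + h1

print(S(b=1.0, Q=5.0, alphas=0.25))
\end{verbatim}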
Note that, expanding the above NLL formula (\ref{resum}) with
(\ref{eq:AB})-(\ref{eq:lambda}) in powers
of $\alpha_s(\mu_R^2)$, the first three towers of logarithms,
$\alpha_s^n\ln^m(Q^2/Q_T^2)/Q_T^2$ with $m=2n-1,2n-2$
and $2n-3$, in the tDY differential cross section are fully reproduced for all $n$.
Combining this expansion with the finite part $\Delta_T Y$ of (\ref{cross section}),
the result gives the tDY differential cross section which is exact up to
${\cal O}(\alpha_s)$;
thus we use the NLO parton distributions in the $\overline{\rm MS}$ scheme for
$f_j (x, \mu^2)$ in (\ref{resum}), as well as for those involved in $\Delta_T Y$.
We explain some further manipulations for our NLL formula; those
were actually performed in \cite{KKST:06}, but were not described in detail.
The integrand of (\ref{resum})
depends on the parton distributions at the scale $b_0/b$, according to the general
formulation \cite{CSS:85}.
Taking the Mellin moments of $\Delta_T X^{\rm NLL} (Q_T^2 , Q^2 , y)$
with respect to the DY scaling variables $x_{1,2}^{0}$ at fixed $Q$,
\begin{equation}
\Delta_T X^{\rm NLL}_{N_1, N_2} (Q_T^2 , Q^2)
\equiv \int_{0}^1 dx_{1}^0 \left(x_1^{0} \right)^{N_1 -1} \int_{0}^1 dx_2^{0}
\left(x_2^{0} \right)^{N_2 -1} \Delta_T X^{\rm NLL} (Q_T^2 , Q^2 , y),
\label{eq:MT}
\end{equation}
the $b$-dependence of those parton distributions can be disentangled because
the moments, $f_{i,N}(\mu^2 )\equiv \int_0^1 dx x^{N-1} f_{i}(x, \mu^2 )$,
obey the renormalization group (RG) evolution as
$f_{i,N} (b_0^2 /b^2 )$ $=\sum_{j} U_{ij,N} (b_0^2 /b^2 , Q^2 ) f_{j,N}(Q^2)$,
where $U_{ij,N}(\mu^2 , {\mu'}^2 )$ are the NLO evolution operators for
the transversity distributions which are expressed in terms of
the corresponding LO and NLO anomalous dimensions \cite{AM:90,KMHKKV:97}
and the two-loop running coupling constant.
For (\ref{eq:MT}) with (\ref{resum}) and the above RG evolution
substituted,
several ``reorganizations'' of the
relevant large-logarithmic expansion are necessary for its consistent evaluation
over the entire range of $Q_T$,
following the systematic procedure in \cite{BCDeG:03} elaborated for
unpolarized hadron collisions:
exploiting the RG invariance,
we have $C_{ij,N} (\alpha_s(b_0^2 / b^2 )) = C_{ij,N} (\alpha_s( Q^2))
e^{[\alpha_s(\mu_R^2 )C_{ij,N}^{(1)} /2\pi] \lambda/(1-\lambda)}$
to the corrections down by $\alpha_s(\mu_R^2 )$,
for the $N$-th moment
of the coefficient function of (\ref{eq:C}),
so that we make the replacement
$C_{ij,N} (\alpha_s(b_0^2 / b^2 )) \rightarrow C_{ij,N} (\alpha_s( Q^2))
= \delta_{ij}[1+ (\alpha_s (Q^2 ) C_F/4 \pi) (\pi^2 -8) ]$,
up to the corrections of NNLL level for (\ref{eq:MT}).
Similarly,
performing the large-logarithmic expansion for explicit formula
of the NLO evolution operator $U_{ij,N} (b_0^2 /b^2 , Q^2 )$,
we find
\begin{eqnarray}
U_{ij,N}(b_0^2 /b^2 , Q^2 ) = \delta_{ij}e^{R_N(\lambda)}, \;\;\;\;\;\;\;\;
R_{N}(\lambda)\equiv \frac{\Delta_T P_{qq,N}}{2\pi\beta_0}\ln(1-\lambda),
\label{LO-evol}
\end{eqnarray}
up to the corrections down by $\alpha_s(\mu_R^2 )$ which correspond to
the NNLL terms when substituted into (\ref{eq:MT}), (\ref{resum}).
Here $\Delta_T P_{qq,N} = -2C_F [\psi(N+1) + \gamma_E - 3/4 ]$ is the
$N$-th Mellin moment of the LO DGLAP splitting function for the transversity.
As a result, (\ref{eq:MT}) is expressed as
\begin{eqnarray}
\Delta_T X^{\rm NLL}_{N_1, N_2} (Q_T^2 , Q^2)
&&=
\left[1+\frac{\alpha_s (Q^2)}{2\pi}C_F(\pi^2-8) \right]
\delta H_{N_1,N_2}(Q^2)
I_{N_1,N_2}(Q_T^2, Q^2)\ ,
\label{resum:2} \\
I_{N_1,N_2}(Q_T^2, Q^2) && \equiv
\int_0^{\infty} d b \frac{b}{2} J_0 (b Q_T)
e^{S (b, Q)+ R_{N_1}(\lambda)+R_{N_2}(\lambda)}\ ,
\label{resum:21}
\end{eqnarray}
where $\delta H_{N_1,N_2}(Q^2)$ is the double Mellin-moments of
$\delta H(x_1^0, x_2^0 ; Q^2)$ of (\ref{tPDF}),
defined similarly as (\ref{eq:MT}). The complete dependence on $b$
is included in the exponential factor $e^{S (b, Q)+ R_{N_1}(\lambda)+R_{N_2}(\lambda)}$
through $L = \ln(Q^2b^2/b_0^2)$,
so that all-order resummation of the large logarithms $L$
and the associated $b$-integral in (\ref{resum:21})
are now accomplished at the partonic level.
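A minimal numerical sketch of the evolution factor $R_N(\lambda)$ of (\ref{LO-evol}), using the digamma function $\psi$ for the moment of the LO splitting function (the values of $N$ and $\lambda$ below are arbitrary illustrations):
\begin{verbatim}
# Minimal sketch: R_N(lambda) of Eq. (14) from the N-th moment of
# the LO transversity splitting function.
import numpy as np
from scipy.special import digamma

CF, CG, Nf = 4/3, 3.0, 5
beta0 = (11 * CG - 2 * Nf) / (12 * np.pi)

def R_N(N, lam):
    P_qq_N = -2 * CF * (digamma(N + 1) + np.euler_gamma - 0.75)
    return P_qq_N / (2 * np.pi * beta0) * np.log(1 - lam)

print(R_N(N=2.0, lam=0.3))
\end{verbatim}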
We also mention some other ``reorganization'',
which is explained in \cite{KKST:06} and
is necessary
in order to properly treat the too short and too long distances involved
in the $b$ integration of (\ref{resum:21}):
firstly, to treat too short distance $Qb \ll 1$,
we make the replacement
\begin{equation}
L\rightarrow\tilde{L}=\ln(Q^2b^2/b_0^2+1)\ ,
\label{replaceL}
\end{equation}
in the definition (\ref{eq:lambda}) of $\lambda$,
following \cite{BCDeG:03};
note that the integrand of (\ref{resum:21})
depends on the large-logarithmic expansion
parameter only through $\lambda$
(see (\ref{sudakov:1})-(\ref{sudakov:2}), (\ref{LO-evol})).
This replacement allows us to reduce
the unjustified large logarithmic contributions for $Qb \ll 1$,
due to $L \gg 1$, as $\tilde{L} \rightarrow 0$ and
$e^{S(b,Q)+R_{N_1}(\lambda)+R_{N_2}(\lambda)} \rightarrow 1$,
while
$L$ and $\tilde{L}$ are equivalent to organize
the soft gluon resummation at small $Q_T$ as
$\tilde{L}=L+{\cal O}(1/(Qb)^2 )$ for $Qb \gg 1$.
Secondly, the functions
(\ref{eq:h0}) and (\ref{sudakov:2})
in the Sudakov exponent (\ref{sudakov:1}) are singular when
$\lambda = \beta_0\alpha_s( \mu_R^2 ) \tilde{L} \rightarrow 1$,
and this singular behavior
is related to the presence of the Landau pole
in the perturbative running coupling $\alpha_s (\kappa^2)$
in QCD. To properly define the $b$ integration of (\ref{resum:21})
for the corresponding long-distance region,
it is necessary to specify a prescription to deal with
this singularity
\cite{LKSV:01}:
decomposing the Bessel function in (\ref{resum:21}) into the two Hankel functions as
$J_0(bQ_T) = (H_0^{(1)}(bQ_T )+H_0^{(2)}(bQ_T ) )/2$,
we deform the $b$-integration contour for these two terms
into upper and lower half plane in the complex $b$ space, respectively,
and obtain the two convergent integrals as $|b| \rightarrow \infty$.
The new contour ${\cal C}$ is taken as:
from $b= 0$ to $b=b_c$ on the real axis,
followed by the two branches,
$b=b_c + e^{\pm i\theta}t$ with $t \in [0, \infty)$ and $0<\theta<\pi/4$;
a constant $b_c$ is chosen as $0 \le b_c < b_L$,
where $b=b_L$ gives the solution for $\lambda =1$.
Note that this choice of contours
is completely equivalent to the original contour,
order-by-order in $ \alpha_s(\mu_R^2 )$, when the corresponding formulae
are expanded in powers
of $\alpha_s$. Therefore,
this contour deformation prescription provides us with
a (formally) consistent definition of finite $b$-integral of (\ref{resum:21})
within a perturbative framework.
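The following Python sketch illustrates this contour-deformation prescription numerically; a toy Gaussian weight stands in for $e^{S+R_{N_1}+R_{N_2}}$ so that the deformed-contour result can be checked against the closed form $\frac{1}{4}e^{-Q_T^2/4}$, and the values of $b_c$ and $\theta$ are illustrative.
\begin{verbatim}
# Minimal sketch of the deformed-contour b-integration: split J0 into
# Hankel functions and integrate along b = b_c + t*exp(+/- i*theta).
# A toy Gaussian weight replaces exp(S + R + R) for checkability.
import numpy as np
from scipy.special import hankel1, hankel2, j0
from scipy.integrate import quad

QT, bc, theta = 1.3, 0.5, 7 * np.pi / 32
W = lambda b: np.exp(-b * b)       # toy stand-in for e^{S+R+R}

def branch(hankel, sign):
    phase = np.exp(sign * 1j * theta)
    def f(t, part):
        b = bc + t * phase
        val = 0.25 * b * hankel(0, b * QT) * W(b) * phase
        return val.real if part == 0 else val.imag
    return complex(quad(lambda t: f(t, 0), 0, np.inf)[0],
                   quad(lambda t: f(t, 1), 0, np.inf)[0])

straight = quad(lambda b: 0.5 * b * j0(b * QT) * W(b), 0, bc)[0]
total = straight + (branch(hankel1, +1) + branch(hankel2, -1)).real
print(total, 0.25 * np.exp(-QT**2 / 4))   # the two numbers agree
\end{verbatim}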
We now denote (\ref{resum:2}), with the replacement (\ref{replaceL})
and the new contour ${\cal C}$ in (\ref{resum:21}),
as $\Delta_T \tilde{X}^{\rm NLL}_{N_1, N_2} (Q_T^2 , Q^2)$, and also denote
the double inverse Mellin transform of
$\Delta_T \tilde{X}^{\rm NLL}_{N_1, N_2} (Q_T^2 , Q^2)$,
from $(N_1 , N_2 )$ space to $(x_1^0 , x_2^0 )$ space,
as $\Delta_T \tilde{X}^{\rm NLL} (Q_T^2 , Q^2, y)$.
Defining (see (\ref{cross section}))
\begin{equation}
\Delta_T \tilde{Y} (Q_T^2 , Q^2, y) \equiv \Delta_T X (Q_T^2 , Q^2, y)
+ \Delta_T Y (Q_T^2 , Q^2, y)
- \left. \Delta_T \tilde{X}^{\rm NLL} (Q_T^2 , Q^2, y) \right|_{\rm FO}\ ,
\label{matching}
\end{equation}
where $\Delta_T \tilde{X}^{\rm NLL} (Q_T^2 , Q^2, y) |_{\rm FO}$ denotes the terms
resulting from the expansion of the resummed expression up to the
fixed-order $\alpha_s(\mu_R^2 )$,
we obtain the final form of our differential cross section
for tDY with the soft gluon resummation as \cite{KKST:06}
\begin{eqnarray}
\frac{\Delta_Td\sigma}{dQ^2dQ_T^2dyd\phi}=
\cos(2\phi)
\frac{\alpha^2}{3\, N_c\, S\, Q^2}
\biggl[\Delta_T\tilde{X}^{\rm NLL}(Q_T^2 , Q^2,y)
+\Delta_T\tilde{Y}(Q_T^2 , Q^2,y)\biggr].
\label{NLL+LO}
\end{eqnarray}
From the derivation explained above, the expansion of
this cross section in powers of $\alpha_s(\mu_R^2 )$
fully reproduces the first three towers of logarithms,
$\alpha_s^n\ln^m(Q^2/Q_T^2)/Q_T^2$ with $m=2n-1,2n-2$
and $2n-3$, associated with the soft-gluon emission for small $Q_T$ ($\ll Q$),
and also coincides exactly with the fixed-order result
(\ref{cross section}) to ${\cal O}(\alpha_s)$. Therefore, this formula (\ref{NLL+LO})
avoids any double counting
over the entire range of $Q_T$. Note that $\Delta_T \tilde{Y} (Q_T^2 , Q^2, y)$
of (\ref{matching}) corresponds to
the ``modified finite component'' in our resummation framework:
because the first and the third terms in the RHS of (\ref{matching}) cancel with each
other for $Q_T \ll Q$, $\Delta_T \tilde{Y} (Q_T^2 , Q^2, y)$
is less singular as $Q_T \rightarrow 0$ than $Q_T^{-2} \times (\ln(Q^2 /Q_T^2 )$ or $1)$
or $\delta (Q_T^2)$, see the discussion below (\ref{cross section}). Combined with
$\Delta_T X^{(0)} \propto \delta (Q_T^2)$, this also implies that
$\Delta_T \tilde{Y} (Q_T^2 , Q^2, y)$ is of order $\alpha_s (\mu_R^2 )$.
In fact, (\ref{matching}) coincides exactly with $\Delta_T Y (Q_T^2 , Q^2, y)$
if (\ref{replaceL}) is not performed.
Because of this ``regular'' behavior of $\Delta_T \tilde{Y} (Q_T^2 , Q^2, y)$
as $Q_T \rightarrow 0$,
we may consider (\ref{matching}) as the definition for the region where $Q_T > 0$;
in this case, the first two terms correspond to (\ref{cross section})
for $Q_T > 0$, i.e.,
\begin{eqnarray}
\frac{\Delta_T d \sigma^{\rm LO}}{d Q^2 d Q_T^2 d y d \phi}
=
\cos{(2 \phi )}
\frac{\alpha^2}{3\, N_c\, S\, Q^2}
\left[ \Delta_T \left. X^{(1)}\, (Q_T^2 , Q^2 , y)\right|_{Q_T^2 >0}
+ \Delta_T Y\, (Q_T^2 , Q^2 , y) \right],
\label{LOcross section}
\end{eqnarray}
which gives the formula for the LO QCD prediction of tDY at the
large-$Q_T$ region. Therefore, our formula (\ref{NLL+LO})
is actually the NLL resummed part, with the contributions to ${\cal O}(\alpha_s )$
(the third term of (\ref{matching})) subtracted, plus the LO cross section;
we refer to (\ref{NLL+LO}) as the ``NLL+LO'' prediction,
which gives the well-defined tDY differential cross section
in the $\overline{\rm MS}$ scheme over the entire range of $Q_T$.
It is straightforward to see that the integral of (\ref{NLL+LO}) over $Q_T$ reproduces
that of (\ref{cross section}) exactly, because
$\tilde{L}= 0$ at $b=0$ (see also \cite{BCDeG:03}).
We can extend the above results to unpolarized DY by mostly trivial substitutions
to switch from spin-dependent quantities to spin-averaged ones,
e.g., by removing ``$\Delta_T$''
and making the replacement, $\delta H(x_1, x_2 ; \mu^2)
\rightarrow H(x_1, x_2 ; \mu^2)$,
$\cos(2\phi) \alpha^2 / (3 N_c S Q^2)$ $\rightarrow 2 \alpha^2 / (3 N_c S Q^2)$, etc.,
in the above relevant formulae.
The explicit form of the spin-averaged quantities,
such as $X (Q_T^2 , Q^2, y)$, $Y (Q_T^2 , Q^2, y)$, as well as those
corresponding to the coefficient functions $C_{ij}(z, \alpha_s )$ in (\ref{resum}),
can be obtained from the results in \cite{CSS:85,AEGM:84}.
A different point from the polarized case is that now the gluon distribution
$f_g(x, \mu^2) \equiv g(x, \mu^2 )$ participates,
so that the suffix $i,j$ of the ``spin-averaged $C_{ij}(z, \alpha_s )$''
can be ``$g$'' as well as ``$q,\bar{q}$''.
This also implies that $\Delta_T P_{qq,N}$ appearing in (\ref{LO-evol}) has to
be replaced by the Mellin moment of the
LO DGLAP splitting functions for the unpolarized case,
which involve the mixing of gluon, and the ``new $U_{ij,N}(b_0^2 /b^2 , Q^2)$''
represent the corresponding ``evolution matrix'' that was discussed in
\cite{BCDeG:03,LKSV:01}.
On the other hand, the formulae (\ref{sudakov:1})-(\ref{sudakov:2}) of the
Sudakov exponent hold also for the unpolarized case,
reflecting that the coefficients (\ref{eq:AB}) relevant at the NLL level
are universal \cite{KT:82,dG,LKSV:01,BCDeG:03}.
We list explicit form of the relevant formulae for the unpolarized cross sections
in Appendix.
Taking the ratio of (\ref{NLL+LO}) to the corresponding NLL+LO prediction for
unpolarized differential cross section, we obtain the
double transverse-spin asymmetry in tDY, for transverse-momentum $Q_T$,
invariant-mass $Q$,
and rapidity $y$ of the produced lepton pair, and azimuthal angle $\phi$
of one of the leptons, as
\begin{eqnarray}
{\mathscr{A}_{TT}(Q_T)}=\frac{1}{2} \cos(2\phi)
\frac{\Delta_T\tilde{X}^{\rm NLL}(Q_T^2 , Q^2, y)
+\Delta_T\tilde{Y}(Q_T^2, Q^2, y)}
{\tilde{X}^{\rm NLL}(Q_T^2, Q^2, y)+\tilde{Y}(Q_T^2, Q^2, y)}.
\label{asym}
\end{eqnarray}
To the fixed-order $\alpha_s$ without the soft gluon resummation,
(\ref{asym}) reduces to the LO prediction of the asymmetry for $Q_T >0$,
\begin{eqnarray}
\mathscr{A}_{TT}^{\rm LO}(Q_T)
=\frac{1}{2} \cos(2\phi)
\frac{\left. \Delta_T X^{(1)}(Q_T^2 , Q^2, y) \right|_{Q_T^2 >0}
+\Delta_T Y(Q_T^2, Q^2, y)}
{\left. X^{(1)}(Q_T^2, Q^2, y)\right|_{Q_T^2 >0} + Y(Q_T^2, Q^2, y)},
\label{asymlo}
\end{eqnarray}
as the ratio of (\ref{LOcross section}) to the corresponding unpolarized cross section.
\section{The asymmetries ${\mathscr{A}_{TT}(Q_T)}$ at RHIC and J-PARC}
We evaluate the asymmetries, derived in the last section, as a function of $Q_T$.
We use similar parton distributions to those in the previous NLO studies
\cite{MSSV:98} of $Q_T$-independent $A_{TT}$ of (\ref{eq:att}):
for the transversity $\delta q(x, Q^2)$ participating in the numerator
of the asymmetries, we use a model of the NLO transversity
distributions,
which obey the corresponding NLO DGLAP evolution equation
and are assumed to saturate the
Soffer bound \cite{Soffer:95}
as $\delta q(x,\mu^2_0)=[q(x,\mu^2_0)+\Delta q(x,\mu^2_0)]/2$
at a low input scale $\mu_0\simeq 0.6$ GeV
using the NLO GRV98 \cite{GRV:98} and GRSV2000 (``standard scenario'') \cite{GRSV:00}
distributions $q(x,\mu_0^2)$
and $\Delta q(x,\mu^2_0)$, respectively.
The NLO GRV98 distributions $q(x, Q^2), g(x, Q^2)$ are also used for calculating
the unpolarized cross sections in the denominator of the asymmetries.
It is known that the $Q_T$-spectrum of the DY lepton pair is affected
by other nonperturbative effects,
which become important in the small $Q_T$ region \cite{CSS:85}:
we have obtained the well-defined tDY cross sections and asymmetries
that are free from any singularities,
with a consistent definition of the integration in (\ref{resum:21})
over the whole $b$ region.
However, the integrand of (\ref{resum:21}) involving purely perturbative quantities
is not accurate in the extremely large $|b|$ region in QCD, and
the corresponding long-distance behavior
has to be complemented by the relevant nonperturbative effects.
Formally, those nonperturbative effects play a role
in compensating for the ambiguity arising because the prescription for the $b$
integration in (\ref{resum:21}) to avoid the singularity
in the Sudakov exponent $S(b, Q)$ of (\ref{sudakov:1})-(\ref{sudakov:2})
is actually not unique (see \cite{CSS:85}). Therefore,
following \cite{CSS:85,LKSV:01,BCDeG:03}, we make the replacement in (\ref{resum:21}) as
\begin{equation}
e^{S (b , Q)}\rightarrow e^{S (b , Q)- g_{NP} b^2} ,
\label{eq:np}
\end{equation}
with a nonperturbative parameter $g_{NP}$.
Because exactly the same Sudakov factor $e^{S(b, Q)}$
participates in the corresponding formula for the unpolarized case
as noted above (\ref{asym}), we perform the
replacement (\ref{eq:np}) with the same nonperturbative parameter $g_{NP}$
in the NLL+LO unpolarized differential cross section contributing to the denominator of
(\ref{asym}).
This may be interpreted as assuming the same ``intrinsic transverse
momentum'' of partons inside nucleon for both polarized and unpolarized cases,
corresponding to the Gaussian smearing factor of (\ref{eq:np}).
We use $g_{NP}\simeq 0.5$ GeV$^2$,
suggested by the study of the $Q_T$-spectrum in the unpolarized case \cite{KS:03}.
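As a check of the smearing factor alone, the pure Gaussian $b$-space factor of (\ref{eq:np}) Fourier-Bessel transforms to a Gaussian intrinsic-$k_T$ profile in $Q_T$ space; the following sketch verifies this numerically.
\begin{verbatim}
# Minimal sketch: the Gaussian b-space factor alone corresponds to
# Gaussian intrinsic-kT smearing, int db (b/2) J0(b QT) e^{-gNP b^2}
# = exp(-QT^2/(4 gNP))/(4 gNP).
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

g_NP = 0.5   # GeV^2, the value adopted in the text

def qt_profile(QT):
    return quad(lambda b: 0.5 * b * j0(b * QT)
                * np.exp(-g_NP * b**2), 0, np.inf)[0]

QT = 1.0
print(qt_profile(QT), np.exp(-QT**2 / (4 * g_NP)) / (4 * g_NP))
\end{verbatim}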
For all the following numerical evaluations, we choose
$\phi=0$ for the azimuthal angle of one lepton, $\mu_F =\mu_R =Q$
for the factorization and renormalization scales and
$b_c=0$, $\theta=\frac{7}{32}\pi$ for the integration contour
${\cal C}$ explained below (\ref{replaceL}).
\begin{figure}
\begin{center}
\includegraphics[height=5.8cm]{RHIC_200_5_y2_pol_2.eps}~~~~
\includegraphics[height=5.8cm]{RHIC_200_5_y2_unpol_2.eps}
\end{center}
\caption{The spin-dependent and spin-averaged differential
cross sections for tDY: (a) $\Delta_Td\sigma/dQ^2 dQ_T dy d\phi$
and
(b) $d\sigma/dQ^2 dQ_T dyd\phi$,
as a function of $Q_T$
at RHIC kinematics, $\sqrt{S}=200$ GeV, $Q=5$ GeV, $y=2$ and $\phi=0$,
with $g_{NP}=0.5$ GeV$^2$.
}
\label{fig:1}
\end{figure}
First of all,
we present the transverse-momentum $Q_T$-spectrum of the DY lepton pair
for $\sqrt{S}=200$~GeV, $Q=5$~GeV, and $y=2$,
which correspond to the detection of dileptons with the PHENIX detector at RHIC.
The solid curve in Fig.~\ref{fig:1}(a)
shows the NLL+LO differential cross section (\ref{NLL+LO}) for tDY, multiplied by $2Q_T$,
with $g_{NP}=0.5$ GeV$^2$ for (\ref{eq:np}).
We also show the contribution from the NLL resummed component
$\Delta_T \tilde{X}^{\rm NLL}$ in (\ref{NLL+LO}) by the dot-dashed curve, and
the LO result using (\ref{LOcross section}) by the dashed curve.
Fig.~\ref{fig:1}(b) is same as Fig.~\ref{fig:1}(a)
but for the unpolarized differential cross sections.
The LO results become large and diverge as $Q_T \rightarrow 0$,
while the NLL+LO results are finite and well-behaved over all regions of $Q_T$.
The soft gluon resummation
gives dominant contribution around the peak of the solid curve,
i.e.,
at intermediate $Q_T$ as well as small $Q_T$.
To demonstrate the resummation effects in detail,
the two-dot-dashed curves in Figs.~\ref{fig:1}(a), (b) show the LL result which
is obtained from the corresponding NLL result (dot-dashed curve)
by omitting the contributions corresponding to the NLL level,
i.e., $h^{(1)}(\lambda)$, $R_{N_1}(\lambda)$, $R_{N_2}(\lambda)$
in (\ref{resum:21}) and $\alpha_s(Q^2) C_F (\pi^2 -8 )/2\pi$
in (\ref{resum:2})
for the polarized case (see (\ref{sudakov:1}) and the discussion below
(\ref{eq:lambda})), and similarly for the unpolarized case.
The LL contributions are sufficient for obtaining the finite cross section,
causing considerable suppression in the small $Q_T$ region.
On the other hand, it is remarkable that the contributions at the NLL level
provide a significant enhancement over the LL result,
around the peak region for both polarized and unpolarized cases,
and the effect is more pronounced for the former.
Among the relevant NLL contributions,
the ``universal'' term $h^{(1)}(\lambda)$
produces a similar (enhancement) effect for both
(a) and (b) of Fig.~\ref{fig:1}, while the other NLL contributions, associated
with the evolution operators and the ${\cal O}(\alpha_s(Q^2) )$ coefficient
functions (see e.g. (\ref{LO-evol}), (\ref{resum:2})),
give different effects to the polarized and unpolarized cases.
\begin{figure}
\begin{center}
\includegraphics[height=5.8cm]{RHIC_200_5_y2_asym_2.eps}~~~~~~~~
\includegraphics[height=5.8cm]{RHIC_200_5_y2_asym.eps}
\end{center}
\caption{The asymmetries ${\mathscr{A}_{TT}(Q_T)}$ at RHIC kinematics,
$\sqrt{S}=200$ GeV, $Q=5$ GeV, $y=2$ and $\phi=0$:
(a) ${\mathscr{A}_{TT}(Q_T)}$ obtained from each curve in Fig.~\ref{fig:1}.
(b) The NLL+LO ${\mathscr{A}_{TT}(Q_T)}$ of (\ref{asym}) with (\ref{eq:np}) using various values
for $g_{NP}$.
}
\label{fig:2}
\end{figure}
Fig.~\ref{fig:2}(a) shows the double transverse-spin asymmetries
in the small $Q_T$ region for tDY
at RHIC, obtained as the ratio of the results in
Fig.~\ref{fig:1}(a) to
the corresponding results in Fig.~\ref{fig:1}(b)
for respective lines,
so that the solid curve gives the NLL+LO result (\ref{asym}),
the dot-dashed curve shows the NLL result,
\begin{equation}
\mathscr{A}_{TT}^{\rm NLL}(Q_T) = \frac{1}{2}\cos(2\phi)
\frac{\Delta_T \tilde{X}^{\rm NLL} (Q_T^2 , Q^2 ,y)}{\tilde{X}^{\rm NLL}
(Q_T^2 , Q^2 ,y)}\ ,
\label{asymNLL}
\end{equation}
and the dashed curve shows the LO result (\ref{asymlo}).
The NLL+LO result
is almost flat for $Q_T \rightarrow 0$ as well as around the peak region
of the NLL+LO cross section in Fig.~\ref{fig:1}.
This flat behavior is dominated by the NLL resummed components,
and reflects the fact that the soft gluon emission effects resummed
into the Sudakov factor $e^{S(b,Q)}$ with (\ref{sudakov:1}) are
universal to the NLL accuracy between the numerator and denominator of (\ref{asym}).
The slight increase of the solid line for $Q_T \rightarrow 0$ is due to the terms
$\propto \ln (Q^2 /Q_T^2)$ contained in the ``regular components''
$\Delta_T \tilde{Y}$ and $\tilde{Y}$ in (\ref{asym}) (see
(\ref{matching})),
but such weak singularities which show up only at very small $Q_T$ will
be irrelevant for most practical purposes.
The LO result, obtained as the ratio of the two LO curves divergent as
$Q_T \rightarrow 0$ in Figs.~\ref{fig:1}(a) and (b),
gives the finite asymmetry for $Q_T >0$, but it does not have the flat behavior, i.e.,
decreases for increasing $Q_T$, and is much smaller
than the NLL+LO result. On the other hand, we note that the LL result, retaining only
the resummmed components corresponding to the LL level, is given by
(see (\ref{resum:2}), (\ref{resum:21}))
\begin{equation}
\mathscr{A}_{TT}^{\rm LL}(Q_T)=\frac{1}{2}\cos(2 \phi)
\frac{\delta H(x_1^0, x_2^0; Q^2)}{H(x_1^0,x_2^0; Q^2)} \approx A_{TT}\ ,
\label{LL}
\end{equation}
which is independent of $Q_T$, because the $Q_T$-dependent factor (\ref{resum:21})
with $S(b, Q) + R_{N_1}(\lambda) +R_{N_2}(\lambda)
\rightarrow h^{(0)}(\lambda)/ \alpha_s(Q^2)$
is common for both polarized and unpolarized cases.
Namely the LL resummation effects cancel exactly
between the numerator and the denominator in the asymmetry (\ref{LL}).
As indicated in (\ref{LL}), the resulting value shown by the two-dot-dashed curve in
Fig.~\ref{fig:2}(a) coincides with the $Q_T$-independent asymmetry (\ref{eq:att})
up to the NLO QCD corrections; note that $A_{TT}=4.0$\% including the NLO corrections
computed similarly to \cite{MSSV:98} (see Table \ref{tab:1} below).
However,
we recognize
that the soft-gluon resummation contributions
at the NLL level enhance
the asymmetry in the small $Q_T$ region significantly, compared with the
LL or fixed-order result.
This is caused by the enhancement of the cross sections in
Fig.~\ref{fig:1} discussed above,
due to the universal $h^{(1)}(\lambda)$ term and the other spin-dependent contributions.
In particular, the evolution operators like (\ref{LO-evol})
in the latter contributions
allow the participation of the parton distributions at the scale $b_0 /b \sim Q_T$,
and the components associated with those parton distributions
indeed play dominant roles due to the mechanism embodied
by the Sudakov factor $e^{S(b,Q)}$ of (\ref{resum:21}).
Combined with the different $x$-dependence between the transversity and density
distributions as noted in (ii) above,
the resulting enhancement arises differently
between (a) and (b) in Fig.~\ref{fig:1}, and
enhances the asymmetry as in Fig.~\ref{fig:2}(a).
In Fig.~\ref{fig:2}(b) we show the NLL+LO asymmetries ${\mathscr{A}_{TT}(Q_T)}$ of (\ref{asym}),
with (\ref{eq:np})
using various values of $g_{NP}$.
Here the solid curve is the same as the solid curve
in Fig.~\ref{fig:2}(a), using $g_{NP}=0.5$ GeV$^2$.
The result demonstrates that our NLL+LO asymmetry in the relevant small-$Q_T$ region
is almost independent of the value of $g_{NP}$ in the range $g_{NP}=0.3$-0.8 GeV$^2$.
Although, at RHIC kinematics, the $Q_T$-spectrum
from the spin-dependent cross section (\ref{NLL+LO}) with (\ref{eq:np})
receives a sizable smearing effect in the relevant small-$Q_T$ region \cite{KKST:06},
the corresponding $g_{NP}$-dependence
is canceled by the similar dependence of the unpolarized cross section
in the asymmetry (\ref{asym}).
In our framework, such cancellation of the
$g_{NP}$-dependence between the numerator and the denominator of (\ref{asym})
is observed for all relevant kinematics of our interest at RHIC,
and also at J-PARC discussed below.
However, we mention that a too small value of $g_{NP}$ is useless in practice:
the Gaussian smearing factor of (\ref{eq:np}) for $g_{NP}=0.1$ GeV$^2$
is insufficient to suppress sensitivity to the extremely large $|b|$
region in (\ref{resum:21}),
so that the $b$ integration
receives the ``inaccurate'' long-distance perturbative contributions
considerably at small $Q_T$,
which lead to unstable numerical behavior for $Q_T \lesssim 1$~GeV.
For all the following calculations, we use $g_{NP}=0.5$ GeV$^2$.
\begin{figure}
\begin{center}
\includegraphics[height=5.8cm]{RHIC_200_y2_asym.eps}~~~~~~~
\includegraphics[height=5.8cm]{RHIC_200_y0_asym.eps}
\end{center}
\caption{The NLL+LO ${\mathscr{A}_{TT}(Q_T)}$ of (\ref{asym}) with (\ref{eq:np}) using $g_{NP}=0.5$ GeV$^2$
at RHIC kinematics, $\sqrt{S}=200$ GeV, $\phi=0$ with $y=2$ and $y=0$ for (a) and (b),
respectively.}
\label{fig:3}
\end{figure}
Fig.~\ref{fig:3} shows the NLL+LO asymmetries
${\mathscr{A}_{TT}(Q_T)}$ of (\ref{asym}) at RHIC kinematics,
$\sqrt{S}=200$ GeV and various values of the dilepton invariant mass $Q$,
using $y=2$ and $y=0$ for (a) and (b), respectively;
the dashed curve in (a) is the same as the solid curve in Figs.~\ref{fig:2}(a), (b).
For all cases in Fig.~\ref{fig:3}, we observe the typical flat behavior of ${\mathscr{A}_{TT}(Q_T)}$
in the small $Q_T$ region, similarly as Fig.~\ref{fig:2}.
On the other hand, ${\mathscr{A}_{TT}(Q_T)}$ increases for increasing $Q$, and the value
in the flat region reaches about 10\% for $Q=20$ GeV in Fig.~\ref{fig:3}(a).
Such dependence on $Q$ is associated with the small-$x$ behavior of the
relevant parton distributions:
smaller $Q$ corresponds to smaller $x_{1,2}^0 = e^{\pm y}Q/\sqrt{S}$,
so that the small-$x$ rise of the unpolarized sea-distributions
enhances the denominator of (\ref{asym}).
We obtain larger ${\mathscr{A}_{TT}(Q_T)}$ for $y=2$ compared with the $y=0$
case, but the $y$-dependence of ${\mathscr{A}_{TT}(Q_T)}$ is not so strong for all $Q$.
For comparison, we also evaluate the $Q_T$-independent asymmetry
$A_{TT}$ of (\ref{eq:att})
including the NLO QCD corrections and with the same nonperturbative inputs
as those used in Fig.~\ref{fig:3}. The results are shown in Table \ref{tab:1},
and these exhibit similar behavior with respect to the $Q$ and $y$ dependence
as that in Fig.~\ref{fig:3}. Note that we reproduce the NLO value of
$A_{TT}$ in Table \ref{tab:1} when we integrate respectively the
numerator and the denominator of the NLL+LO asymmetry (\ref{asym})
over $Q_T$ for each curve of Fig.~\ref{fig:3}
(see discussion below (\ref{LOcross section})).
\begin{table}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{ } & $Q=2$GeV & $Q=5$GeV & $Q=8$GeV & $Q=15$GeV & $Q=20$GeV\\
\hline
$\sqrt{S}=200$GeV & $y=2$ & 3.3\% & 4.0\% & 4.9\% & 6.5\% & 7.4\% \\
\cline{2-7}
& $y=0$ & 3.5\% & 3.7\% & 4.4\% & 5.9\% & 6.9\% \\
\hline
$\sqrt{S}=500$GeV & $y=2$ & 1.8\% & 2.0\% & 2.4\% & 3.4\% & 4.0\% \\
\cline{2-7}
& $y=0$ & 2.2\% & 2.1\% & 2.4\% & 3.2\% & 3.8\% \\
\hline
\end{tabular}
\caption{The $Q_T$-independent asymmetry $A_{TT}$ of (\ref{eq:att})
including the NLO QCD corrections at RHIC kinematics.}
\label{tab:1}
\end{table}
But the NLO $A_{TT}$
are smaller by about 20\% than the corresponding values of the NLL+LO ${\mathscr{A}_{TT}(Q_T)}$
in the ``flat'' region at small $Q_T$.
This enhancement of ${\mathscr{A}_{TT}(Q_T)}$ compared with $A_{TT}$ arises from the soft
gluon resummation at the NLL level, as discussed in Fig.~\ref{fig:2}(a) above.
\begin{figure}
\vspace{0.5cm}
\begin{center}
\includegraphics[height=5.8cm]{RHIC_500_y2_asym.eps}~~~~~~~
\includegraphics[height=5.8cm]{RHIC_500_y0_asym.eps}
\end{center}
\caption{Same as Fig.~\ref{fig:3}, but for $\sqrt{S}=500$ GeV.
}
\label{fig:4}
\end{figure}
Fig.~\ref{fig:4} is the same as Fig.~\ref{fig:3}, but for another RHIC kinematics with
$\sqrt{S}=500$ GeV.
The general behavior of the $Q_T$, $Q$ and $y$ dependence is similar to
that in Fig.~\ref{fig:3}.
Comparing the curves with the same values of $Q$, $y$ between
Figs.~\ref{fig:3} and \ref{fig:4},
${\mathscr{A}_{TT}(Q_T)}$ is smaller for the higher energy $\sqrt{S}=500$ GeV than for
$\sqrt{S}=200$ GeV.
This reflects the smaller $x_{1,2}^0 = e^{\pm y}Q/\sqrt{S}$ for larger $\sqrt{S}$,
and the corresponding enhancement
of the denominator in (\ref{asym}).
Similarly as Fig.~\ref{fig:3}, the NLL+LO ${\mathscr{A}_{TT}(Q_T)}$ in the flat region of Fig.~\ref{fig:4}
are larger by 20-30\% than the corresponding NLO $A_{TT}$ shown in Table \ref{tab:1}.
It is generally true,
regardless of the specific kinematics or the detailed behavior of
nonperturbative inputs,
that the NLL+LO ${\mathscr{A}_{TT}(Q_T)}$ of (\ref{asym}) in the flat region is considerably larger
than the corresponding NLO $A_{TT}$,
because this phenomenon is mainly governed by the partonic mechanism associated
with the soft gluon resummation
at the NLL level, as demonstrated in Figs.~\ref{fig:1}, \ref{fig:2}.
On the other hand, apparently the absolute magnitude of both ${\mathscr{A}_{TT}(Q_T)}$ and $A_{TT}$
is influenced by the detailed behavior of the input parton distributions,
in particular, by their small-$x$ behavior at RHIC.
For example, if we change the input parton distributions, explained above (\ref{eq:np}),
from the NLO GRV98 and GRSV2000 distributions into
the NLO GRV94 \cite{GRV:94} and GRSV96 \cite{GRSV:96} distributions,
the NLO values of $A_{TT}$ become smaller by 30-40\%
than the corresponding values in Table \ref{tab:1}.
We note that the latter distributions are the ones used in the
calculation of \cite{MSSV:98},
and the small-$x$ behavior
of the transversity distributions,
resulting from $\delta q(x,\mu^2_0)=[q(x,\mu^2_0)+\Delta
q(x,\mu^2_0)]/2$ at the input scale $\mu_0$, is
rather different between those two choices of the distributions,
reflecting that the helicity distributions at small $x$
are still poorly determined from experiments.~\footnote{
We thank H.~Yokoya and W.~Vogelsang for clarifying this point.}
\begin{figure}
\begin{center}
\includegraphics[height=5.8cm]{J-PARC_10_2_y0_pol_2.eps}~~~~
\includegraphics[height=5.8cm]{J-PARC_10_2_y0_asym.eps}
\end{center}
\caption{The tDY at J-PARC kinematics, $\sqrt{S}=10$ GeV, $Q=2$ GeV, $y=0$ and $\phi=0$.
(a) The spin-dependent differential
cross section $\Delta_Td\sigma/dQ^2 dQ_T dy d\phi$ using $g_{NP}=0.5$ GeV$^2$.
(b) The asymmetries ${\mathscr{A}_{TT}(Q_T)}$ obtained by using each curve in (a).
}
\label{fig:5}
\end{figure}
Next we discuss tDY foreseen at J-PARC.
Fig.~\ref{fig:5}(a) shows the $Q_T$ spectrum of the produced lepton pair
for J-PARC kinematics,
$\sqrt{S}=10$ GeV, $Q=2$ GeV and $y =0$.
The curves show the spin-dependent differential cross sections, and
have the same meaning as the corresponding curves in Fig.~\ref{fig:1}(a).
The double transverse-spin asymmetries are obtained as the ratio of the results
in Fig.~\ref{fig:5}(a) to the corresponding results for the unpolarized
differential cross sections, as shown in Fig.~\ref{fig:5}(b).
We see that the results at J-PARC follow a pattern similar to that at RHIC shown
in Figs.~\ref{fig:1}, \ref{fig:2}:
the flat behavior is observed for the NLL+LO ${\mathscr{A}_{TT}(Q_T)}$
at $Q_T \rightarrow 0$ as well as around the peak region
of the NLL+LO cross section,
and this is dominated by the NLL resummed components.
Also, the soft-gluon resummation contributions
at the NLL level enhance
the asymmetry in the small-$Q_T$ region significantly,
compared with the LL of (\ref{LL})
or the fixed-order LO result.
As a result, we get ${\mathscr{A}_{TT}(Q_T)}\simeq 15$\% as the NLL+LO prediction around the flat region,
which should be compared with the corresponding prediction
$A_{TT}= 12.8$\% for (\ref{eq:att})
including the NLO corrections (see Table \ref{tab:2}).
The reason why we obtain much larger values of ${\mathscr{A}_{TT}(Q_T)}$, and also of
$A_{TT}$, than the RHIC case
is the larger $x_{1,2}^{0}=0.2$ probed at J-PARC, where the transversities are larger
and the unpolarized sea distributions are smaller.
Another difference compared with the RHIC case is that the contribution of
the ``regular component'' $\Delta_T \tilde{Y}$ of (\ref{matching})
in Fig.~\ref{fig:5}(a),
and the associated increase of the solid curve as $Q_T \rightarrow 0$
in Fig.~\ref{fig:5}(b),
due to the terms $\propto \ln (Q^2 /Q_T^2)$ in $\Delta_T \tilde{Y}$ and
$\tilde{Y}$ of (\ref{asym}), are more pronounced, but the latter effect shows up
only for $Q_T \lesssim 0.5$ GeV.
\begin{figure}
\begin{center}
\includegraphics[height=5.8cm]{J-PARC_10_y0_asym.eps}~~~~~~~~
\includegraphics[height=5.8cm]{J-PARC_10_y05_asym.eps}
\end{center}
\caption{The NLL+LO ${\mathscr{A}_{TT}(Q_T)}$ of (\ref{asym}) with (\ref{eq:np}) using $g_{NP}=0.5$ GeV$^2$
at J-PARC kinematics, $\sqrt{S}=10$ GeV, $\phi=0$ with $y=0$ and $y=0.5$ for (a) and (b),
respectively.}
\label{fig:6}
\end{figure}
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{ } & $Q=2$GeV & $Q=2.5$GeV & $Q=3.5$GeV \\
\hline
$\sqrt{S}=10$GeV & $y=0$ & 12.8\% & 12.9\% & 12.5\% \\
\cline{2-5}
& $y=0.5$ & 13.9\% & 14.8\% & 15.9\% \\
\hline
\end{tabular}
\end{center}
\caption{Same as Table \ref{tab:1} but for J-PARC kinematics.}
\label{tab:2}
\end{table}
In Fig.~\ref{fig:6} we show the NLL+LO asymmetries ${\mathscr{A}_{TT}(Q_T)}$ of
(\ref{asym}) at J-PARC kinematics, $\sqrt{S}=10$ GeV and various values of $Q$,
with $y=0$ and $y=0.5$ for (a) and (b), respectively;
the solid curve in (a) is the same as the solid curve in Fig.~\ref{fig:5}(b).
We observe the flat behavior of ${\mathscr{A}_{TT}(Q_T)}$ in the small $Q_T$ region,
where ${\mathscr{A}_{TT}(Q_T)} \simeq 15$-20\% and these values are significantly larger than
the corresponding results for the $Q_T$-independent, NLO asymmetry
$A_{TT}$ of (\ref{eq:att}), shown in Table \ref{tab:2}.
We note that the dependence of ${\mathscr{A}_{TT}(Q_T)}$, as well as $A_{TT}$, on $Q$ is
weak in contrast to the RHIC case;
recall that the rather strong $Q$-dependence in Figs.~\ref{fig:3}, \ref{fig:4}
was induced mainly by the growth of the unpolarized sea-distributions
for the small $x_{1,2}^0$, probed at RHIC.
\section{The saddle point formula}
In Sec.~3, we have observed
the universal flat behavior of NLL+LO ${\mathscr{A}_{TT}(Q_T)}$ of (\ref{asym}) at small $Q_T$,
including the region around the peak of each DY cross section in the numerator and
denominator of (\ref{asym}) for both RHIC and J-PARC cases.
We have also demonstrated in Figs.~\ref{fig:2} and \ref{fig:5}
that this flat behavior is driven by the dominant effects from soft gluon resummation
embodied by the NLL resummed components
$\Delta_T \tilde{X}^{\rm NLL}$ and $\tilde{X}^{\rm NLL}$
in (\ref{asym}).
As a result, the values of
${\mathscr{A}_{TT}(Q_T)}$ obtained in the ``flat region'' of the corresponding
experimental data may be
compared, to a good accuracy, with (\ref{asymNLL}).
Still, the extraction of transversity distributions through such an analysis
should be more complicated than the usual fixed-order analysis:
in the flat region of ${\mathscr{A}_{TT}(Q_T)}$,
the $b$ integration in the resummed part (\ref{resum})
(see also (\ref{resum:2}), (\ref{resum:21}))
mixes up the parton distributions numerically with
very large perturbative effects due to the Sudakov factor shown
in Figs.~\ref{fig:1} and \ref{fig:5},
as well as with other nonperturbative effects associated with $g_{NP}$
of (\ref{eq:np}).
Thus in each of the numerator and the denominator of (\ref{asym}),
the information on the parton distributions is associated with a portion
of the large numerical quantity whose major part would cancel
in the ratio of (\ref{asym}), and this fact would obscure
the straightforward extraction of the transversity using
the above formulae like (\ref{resum:2}), (\ref{resum:21}), in particular with
respect to its accuracy.
We are able to derive a simple analytic formula which allows a more direct extraction of
the transversity distributions
from the experimental data in the flat region of ${\mathscr{A}_{TT}(Q_T)}$ and also clarifies
the accuracy of the resulting distributions.
For this purpose, we first note that the extrapolation of ${\mathscr{A}_{TT}(Q_T)}$
in the flat region to $Q_T = 0$ corresponds to the case without
the (experimentally uninteresting) weak enhancement at very small $Q_T$ due to the
terms $\propto \log(Q^2/Q_T^2)$
in the regular components $\Delta_T \tilde{Y}$ and $\tilde{Y}$ of (\ref{asym}),
so that the resulting value is very close
to the $Q_T \rightarrow 0$ limit of (\ref{asymNLL}) in
both RHIC and J-PARC cases (see Figs.~\ref{fig:2}-\ref{fig:6}).
Namely, $\mathscr{A}_{TT}^{\rm NLL}(Q_T=0)$ may be considered
to give a practical estimate of the data of ${\mathscr{A}_{TT}(Q_T)}$ in the flat region
with a good accuracy.
Then, at $Q_T = 0$, the region $|b| \sim 1/ \Lambda_{\rm QCD}$ becomes important for
the $b$ integration of the relevant resummation formula (\ref{resum:21}).
Note that we can treat such a long-distance region,
corresponding to the boundary
of perturbative and nonperturbative physics,
``safely'', owing to the fact that
the nonperturbative smearing (\ref{eq:np}) suppresses the very long-distance region
$|b| \gg 1/\Lambda_{\rm QCD}$, and that the dependence on a specific choice of $g_{NP}$
cancels in the asymmetries $\mathscr{A}_{TT}^{\rm NLL}(Q_T=0)$ as demonstrated in
Fig.~\ref{fig:2}(b) (see also (\ref{eq:attnll}) below).
For simplicity of presentation, we set $\mu_R=Q$ in the following.
In the relevant region $|b| \sim 1/ \Lambda_{\rm QCD}$,
we have
$|\tilde{L}| \sim \ln(Q^2/\Lambda_{\rm QCD}^2 ) \sim 1/\alpha_s(Q^2 )$, i.e.,
$|\lambda| \sim 1$ (see (\ref{replaceL}), (\ref{eq:lambda})).
Because all logarithms, $\tilde{L}$ and $\ln (Q^2 /\Lambda^2_{\rm QCD})$,
are counted as equally large for $Q \gg \Lambda_{\rm QCD}$,
the resulting contributions to (\ref{resum:21})
are organized in terms of a single small parameter, $\alpha_s(Q^2 )$,
but with a different classification of the contributions in the order of $\alpha_s(Q^2 )$
from the usual perturbation theory that can be used in another region,
$0 \le |b| \lesssim 1/Q$:
as discussed below (\ref{eq:lambda}), when $\lambda = {\cal O}(1)$,
the NLL contributions in the Sudakov exponent (\ref{sudakov:1})
produce the ${\cal O}(1)$ effects in the resummation formula (\ref{resum:21}),
while the NNLL contributions
could yield the corrections of ${\cal O}(\alpha_s(Q^2))$.
Therefore, when the region $|b| \sim 1/ \Lambda_{\rm QCD}$ is relevant
as $Q_T \rightarrow 0$
and we neglect the NNLL contributions in (\ref{resum:21}),
the other contributions that correspond to the same order of
$\alpha_s(Q^2)$ in (\ref{resum:2}) should be neglected
for a consistent treatment,
as $[1+ \alpha_s(Q^2) C_F (\pi^2 -8 )/2\pi] \rightarrow 1$, so that~\footnote{
In principle, we should use this classification also for the numerical
calculations presented in Sec.~3 when $Q_T \approx 0$. But
we did not make the corresponding
replacement for the coefficient functions $C_{ij}$ at $Q_T \approx 0$
in the calculations of Figs.~\ref{fig:1}-\ref{fig:6}. If we performed that replacement,
the NLL+LO (\ref{asym}) as well as NLL (\ref{asymNLL})
asymmetries at $Q_T \approx 0$ in those figures would increase by about 5\%.}
\begin{equation}
\Delta_T X^{\rm NLL}_{N_1, N_2} (Q_T^2 =0 , Q^2)
=
\delta H_{N_1,N_2}(Q^2)
I_{N_1,N_2}(Q_T^2=0, Q^2)\ .
\label{resum:3}
\end{equation}
We note that the contributions to
the NLL resummation formula for the unpolarized case can be classified
similarly in the $Q_T \rightarrow 0$ limit;
in particular, the present classification implies that
the gluon distributions decouple for $Q_T \rightarrow 0$
by neglecting the ${\cal O}(\alpha_s (Q^2))$ contributions
of the corresponding
coefficient functions $C_{ij}$ (see (\ref{UNPOL}) in Appendix).
It is also worth noting that this classification coincides with
the ``degree 0 approximation'' discussed in \cite{CSS:85}: in general,
if one wants to evaluate the cross section for $Q_T \approx 0$
in an approximation where any corrections
are suppressed by a factor of $[\ln (Q^2 /\Lambda^2_{\rm QCD})]^{-(N+1)}$,
one needs a ``degree $N$'' approximation; i.e., for the perturbatively
calculable functions
in the general form of resummation formula (\ref{resum}) with (\ref{sudakov}),
one needs $A_q$ to order $\alpha_s^{N+2}$,
$B_q$ to order $\alpha_s^{N+1}$, $C_{ij}$ to order $\alpha_s^{N}$,
and the $\beta$ function to order $\alpha_s^{N+2}$.
This indicates that the NLL accuracy for a resummation formula
corresponds to the degree 0 approximation when the region $Q_T \approx 0$
is considered. In particular, this implies that
the ${\cal O}(\alpha_s)$ contribution in the coefficient
function $C_{ij}$ should be neglected for $Q_T \approx 0$; on the other hand,
that contribution is necessary
to ensure the NLL accuracy for $Q_T \gtrsim \Lambda_{\rm QCD}$
in the classification based on resummed perturbation theory of
towers of logarithms,
$\alpha_s^n \ln^{2n-1}(Q^2/Q_T^2 )/Q_T^2$,
$\alpha_s^n \ln^{2n-2}(Q^2/Q_T^2 )/Q_T^2$,
and $\alpha_s^n \ln^{2n-3}(Q^2/Q_T^2 )/Q_T^2$ \cite{CSS:85,BCDeG:03,KKST:06}.
Now we evaluate the $b$ integral of (\ref{resum:21})
at $Q_T =0$ according to the above classification.
We have to use the exponentiated form in the integrand of (\ref{resum:21})
without Taylor expansion,
because the region $|\lambda| \sim 1$ is relevant
(see (\ref{sudakov:1})-(\ref{LO-evol})) \cite{KS:99}.
This type of integrals
can be evaluated with the saddle point method: we extend the
saddle point evaluation applied to the LL resummation formula with
$g_{NP} =0$ \cite{PP,CSS:85} into the case of our NLL resummation
formula~(\ref{resum:21}) with nonzero $g_{NP}$.
The corresponding extension is possible based on
the present formalism that accomplishes resummation at the partonic level.
We note that previous saddle-point calculations
consider the case with $g_{NP}=0$ to avoid model dependence
in the prediction of the cross sections,
but the resulting saddle-point formula is applicable only to the production
of extremely high-mass DY pairs
and is practically useless (see e.g. \cite{PP,CSS:85,QZ}).
We find that, with nonzero $g_{NP}$, we can obtain a new saddle-point formula
applicable to the RHIC and J-PARC cases; also, although the behavior of
the cross sections is influenced
by the specific value of $g_{NP}$, the asymmetries are not,
as already noted above.
When $Q$ is large enough, $\alpha_s (Q^2) \ll 1$, so that the $b$
integral in (\ref{resum:21}) with (\ref{eq:np}) and $Q_T =0$ is dominated
by a saddle point determined mainly by the LL
term in the exponent (\ref{sudakov:1}) of the Sudakov form factor~\cite{PP,CSS:85}.
In this case, the contributions to the $b$ integration from too-short
($|b| \ll 1/Q$) and too-long ($|b| \gg 1/\Lambda_{\rm QCD}$) distances
along the integration contour ${\cal C}$ explained below (\ref{replaceL}) are
exponentially suppressed:
this allows us to dispense with the replacement (\ref{replaceL});
also we may neglect the integration along the two branches,
$b=b_c + e^{\pm i\theta}t$ with $t \in [0, \infty)$, in ${\cal C}$, when
$b_c$ is sufficiently large but is
less than the position of the singularity in the Sudakov exponent, $b_L$.
In fact, we can check numerically that the relevant
integrand has a well-defined saddle point well below $b_L$ (and above $0$) for the kinematics
of our interest.
Then, changing the integration variable to
$\lambda$, given by (\ref{eq:lambda}),
we get (see (\ref{eq:h0}), (\ref{sudakov:2}), (\ref{LO-evol}))
\begin{equation}
I_{N_1 , N_2}(Q_T^2 = 0, Q^2)=
\frac{b_0^2}{4Q^2 \beta_0 \alpha_s(Q^2)}\int_{-\infty}^{\lambda_c} d\lambda
e^{-\zeta(\lambda)+ h^{(1)}(\lambda) + R_{N_1}(\lambda)
+ R_{N_2}(\lambda)}\ ,
\label{eq:sp}
\end{equation}
where $\lambda_c = \beta_0 \alpha_{s}(Q^2) \ln(Q^2 b_c^2/b_0^2 )$ ($<1$), and
\begin{equation}
\zeta(\lambda) = - \frac{\lambda}{\beta_0 \alpha_s(Q^2)}
-\frac{h^{(0)}(\lambda)}{\alpha_s (Q^2)}
+ \frac{g_{NP}b_0^2}{Q^2}e^{\frac{\lambda}{\beta_0 \alpha_s (Q^2 )}}\ .
\label{eq:fxi}
\end{equation}
An important point is that the ratio,
$[ h^{(1)}(\lambda) + R_{N_1}(\lambda) + R_{N_2}
(\lambda)]/\zeta(\lambda)$, actually behaves as a quantity of the order
of $\alpha_s(Q^2 )$
in the relevant region $0<\lambda < \lambda_c$ of the integration
in (\ref{eq:sp}), even for nonzero $g_{NP} \simeq 0.5$ GeV$^2$.
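As an aside, the accuracy of the Gaussian saddle-point estimate for integrals of
the type (\ref{eq:sp}) can be illustrated with the following minimal sketch in
Python; the exponent \texttt{zeta} below is a made-up convex function with the
same qualitative shape as (\ref{eq:fxi}) (a decreasing term plus an exponentially
growing smearing term), not the actual one:
\begin{verbatim}
import numpy as np

# Toy check of the saddle-point method for I = int dl exp[-M*zeta(l)];
# zeta here is illustrative only, NOT the actual zeta(lambda) of the text.
M = 20.0                                   # plays the role of 1/alpha_s
zeta = lambda l: -l + 0.05*np.exp(6.0*l)   # minimum near l ~ 0.2
lam = np.linspace(-0.3, 0.7, 200001)
exact = np.sum(np.exp(-M*zeta(lam)))*(lam[1] - lam[0])

i0 = np.argmin(zeta(lam))                  # saddle point: zeta'(l) = 0
h = 1e-4
zpp = (zeta(lam[i0] + h) - 2*zeta(lam[i0]) + zeta(lam[i0] - h))/h**2
approx = np.sqrt(2*np.pi/(M*zpp))*np.exp(-M*zeta(lam[i0]))
print(exact, approx)                       # agree up to O(1/M) terms
\end{verbatim}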
The precise position of the saddle point in the integral of (\ref{eq:sp})
is determined by the condition,
$- {\zeta}'(\lambda)+ {h^{(1)}}'(\lambda ) + {R_{N_1}}'(\lambda)
+ {R_{N_2}}' (\lambda) = 0$, and we express its solution as
$\lambda = \lambda_{SP} + \Delta \lambda_{SP}$ where $\lambda_{SP}$ is the
solution of $\zeta'(\lambda)=0$, i.e.,
\begin{equation}
1-\frac{A_q^{(1)}}{2\pi \beta_0}\frac{\lambda_{SP}}{1-\lambda_{SP}} =
\frac{g_{NP}b_0^2}{Q^2}e^{\frac{\lambda_{SP}}{\beta_0 \alpha_s (Q^2 )}}
\label{eq:lsp}
\end{equation}
is satisfied, and
$\Delta \lambda_{SP}=
[{h^{(1)}}' (\lambda_{SP}) + {R_{N_1}}' (\lambda_{SP})
+ {R_{N_2}}' (\lambda_{SP}) ]
/{\zeta}''(\lambda_{SP})$
denotes the shift of the saddle point at the NLL accuracy.
Evaluating (\ref{eq:sp}) around
$\lambda= \lambda_{SP} + \Delta \lambda_{SP}$,
we get
\begin{equation}
I_{N_1 , N_2}(0, Q^2)=\left(
\frac{b_0^2}{4Q^2 \beta_0 \alpha_s(Q^2)} \sqrt{\frac{2\pi}{\zeta''(\lambda_{SP})}}
e^{-\zeta(\lambda_{SP})+h^{(1)}(\lambda_{SP})}
\right)e^{R_{N_1}(\lambda_{SP}) + R_{N_2}(\lambda_{SP})},
\label{eq:speval}
\end{equation}
to the NLL accuracy. Here the contributions from the third or higher order terms
in the Taylor expansion of the exponent in (\ref{eq:sp}) about the saddle point
$\lambda= \lambda_{SP} + \Delta \lambda_{SP}$,
as well as the other terms generated by the shift $\Delta \lambda_{SP}$, are
found to give effects behaving as ${\cal O} (\alpha_s(Q^2 ) )$, i.e.,
are of the same order as the NNLL corrections, and thus are neglected,
similarly as in (\ref{resum:2}), (\ref{resum:3}),
according to the classification of the contributions at $Q_T=0$.
Substituting (\ref{eq:speval}) into (\ref{resum:3})
and performing the double inverse Mellin transformation to the ($x_1^0$, $x_2^0$) space,
the result is expressed as the factor in the parentheses of (\ref{eq:speval}),
multiplied by (\ref{tPDF}) with the scale,
$\mu_F \rightarrow b_0/b_{SP}$ where $b_{SP}= (b_0 /Q)e^{\lambda_{SP}/(2\beta_0
\alpha_s(Q^2))}$,
because $e^{R_{N_1}(\lambda_{SP})}$, $e^{R_{N_2}(\lambda_{SP})}$
in (\ref{eq:speval})
can be identified with the NLO evolution operators from the scale $Q$
to $b_0/b_{SP}$, to the present accuracy (see (\ref{LO-evol})).
The saddle-point evaluation of the corresponding resummation formula for
the unpolarized case can be performed similarly, and the result
is given by the above result for the polarized case, with the replacement
$\delta H ( x_1^0, x_2^0;\ b_0^2 /b_{SP}^2 ) \rightarrow
H ( x_1^0, x_2^0;\ b_0^2 /b_{SP}^2 )$.
The common factor for both the polarized and unpolarized results,
given by the contribution in the parentheses of (\ref{eq:speval}),
involves ``very large perturbative effects'' due to the Sudakov factor,
and shows the well-known asymptotic behavior \cite{PP},
$\sim (\Lambda_{\rm QCD}^2 /Q^2 )^{a\ln(1+1/a)}$
with $a\equiv A_q^{(1)}/(2\pi \beta_0)$,
for $Q \gg \Lambda_{\rm QCD}$;
but this factor cancels out for the asymmetry.
As a result, we obtain the $Q_T\rightarrow 0$ limit of (\ref{asymNLL}) as
\begin{equation}
\mathscr{A}_{TT}^{\rm NLL}(Q_T =0)=\frac{1}{2}\cos(2 \phi )
\frac{\delta H \left( x_1^0, x_2^0;\ b_0^2 /b_{SP}^2 \right)}
{H \left( x_1^0, x_2^0;\ b_0^2 /b_{SP}^2 \right)}\ ,
\label{eq:attnll}
\end{equation}
which is exact, up to the NNLL corrections corresponding to
the ${\cal O}(\alpha_s (Q^2) )$ effects.
This remarkably compact formula is reminiscent of
$\mathscr{A}_{TT}^{\rm LL}(Q_T)$ of (\ref{LL}) that retains only the LL level resummation,
or the $Q_T$ independent asymmetry of (\ref{eq:att}),
but is different in the scale of the parton distributions from those
leading-order results.
Namely, our result (\ref{eq:attnll}) demonstrates:
in the $Q_T =0$ limit,
the all-order soft-gluon-resummation effects on the asymmetry
mostly cancel between the numerator and the denominator
of (\ref{eq:attnll}), but certain contributions at the NLL level survive
the cancellation and
are entirely absorbed into the unconventional scale $b_0/b_{SP}$ for the
relevant distribution functions.
The new scale $b_0/b_{SP}$ is determined by solving (\ref{eq:lsp})
numerically, substituting
$A_q^{(1)}=2C_F$ from (\ref{eq:AB}) and input values for $Q$ and $g_{NP}$,
but it is useful to consider its general behavior: the LHS of (\ref{eq:lsp}) equals 1 at
$\lambda_{SP}=0$, decreases as a concave function for increasing $\lambda_{SP}$,
and vanishes at $\lambda_{SP}=1/[1+A_q^{(1)}/(2 \pi \beta_0 )] \cong 0.6$; while
the RHS is in general much smaller than 1 at $\lambda_{SP}=0$,
increases as a convex function for increasing $\lambda_{SP}$,
and is larger than 1 at $\lambda_{SP} \simeq 1$.
Thus the solution of (\ref{eq:lsp}) corresponds to the case with
${\rm LHS}={\rm RHS}\simeq 1/2$,
more or less independently of the specific value of $Q$ and $g_{NP}$, so that we get
$b_0 /b_{SP} \simeq b_0 \sqrt{2g_{NP}}$.
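Indeed, using $b_{SP}= (b_0 /Q)e^{\lambda_{SP}/(2\beta_0 \alpha_s(Q^2))}$
introduced below (\ref{eq:speval}), the condition ${\rm RHS}\simeq 1/2$
in (\ref{eq:lsp}) reads
\[
g_{NP}\, b_{SP}^{\,2}
= \frac{g_{NP}b_0^2}{Q^2}\, e^{\frac{\lambda_{SP}}{\beta_0 \alpha_s (Q^2 )}}
\simeq \frac{1}{2}\ ,
\]
i.e., $b_{SP} \simeq 1/\sqrt{2g_{NP}}$; with $g_{NP}=0.5$ GeV$^2$ and the
conventional value $b_0 = 2e^{-\gamma_E} \simeq 1.12$, this gives
$b_0 /b_{SP} \simeq 1.12$ GeV.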
This result depends only mildly on the nonperturbative parameter $g_{NP}$,
and suggests that one may always use $b_0 /b_{SP} \simeq 1$ GeV,
for the cases of our interest where $Q$ is of several GeV and $g_{NP} \simeq 0.5$ GeV$^2$
as in Figs.~\ref{fig:1}-\ref{fig:6}.
The actual numerical solution of (\ref{eq:lsp}) justifies this simple consideration at
the level of 20\% accuracy.
This fact will be particularly helpful in the first attempt to compare
(\ref{eq:attnll}) with the experimental data so as to
extract the transversity distributions.
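As a minimal numerical sketch of this step (in Python), (\ref{eq:lsp}) can be
solved by bisection between $\lambda_{SP}=0$ and the zero of its LHS; the values
of $n_f$, $\Lambda_{\rm QCD}$ and the one-loop running coupling below are
illustrative assumptions, not fitted inputs:
\begin{verbatim}
import numpy as np

# Solve Eq. (eq:lsp) for lambda_SP by bisection, then convert to the
# scale b0/b_SP = Q*exp(-lambda_SP/(2*beta0*alpha_s)).
CF, nf = 4.0/3.0, 5                        # nf assumed
Aq1 = 2.0*CF                               # A_q^(1) = 2 C_F
beta0 = (33.0 - 2.0*nf)/(12.0*np.pi)
b0 = 2.0*np.exp(-np.euler_gamma)           # conventional b0
gNP, Q, Lqcd = 0.5, 5.0, 0.2               # GeV^2, GeV, GeV (assumed)
alps = 1.0/(beta0*np.log(Q**2/Lqcd**2))    # one-loop alpha_s(Q^2)

def f(lam):                                # LHS - RHS of Eq. (eq:lsp)
    lhs = 1.0 - Aq1/(2.0*np.pi*beta0)*lam/(1.0 - lam)
    rhs = gNP*b0**2/Q**2*np.exp(lam/(beta0*alps))
    return lhs - rhs

lo, hi = 0.0, 1.0/(1.0 + Aq1/(2.0*np.pi*beta0))  # zero of LHS (~0.6)
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
lam_SP = 0.5*(lo + hi)
print(Q*np.exp(-lam_SP/(2.0*beta0*alps)))  # b0/b_SP, close to 1 GeV
\end{verbatim}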
Our saddle-point formula (\ref{eq:attnll}) embodies the characteristic features
of the NLL soft gluon resummation effects on the asymmetries ${\mathscr{A}_{TT}(Q_T)}$,
emphasized in Sec.~3.
In particular, our derivation of (\ref{eq:attnll}) demonstrates clearly the mechanism,
which makes the parton distributions at the low scale $\sim Q_T$
play dominant roles,
and leads to the ``enhancement'' of the dot-dashed curve
in Figs.~\ref{fig:2} and \ref{fig:5}.
As noted in the beginning of this section, (\ref{eq:attnll}) may be
directly compared with the experimental value
of the asymmetries ${\mathscr{A}_{TT}(Q_T)}$, observed around the peak of the $Q_T$ spectrum of
the corresponding DY cross sections.
But one caution is in order for such an application.
As seen from the above derivation, the parton distributions appearing
in (\ref{eq:attnll}) are the NLO distributions up to the corrections
at the NNLL level; e.g., the transversity distributions appearing in the
numerator of (\ref{eq:attnll}) are obtained by evolving the customary NLO
transversity $\delta q(x, Q^2)$ at the scale $Q$,
to the scale $b_0 /b_{SP}$ using (\ref{LO-evol}) that is
{\it the NLO evolution operators up to the NNLL corrections}.
Therefore, the formula (\ref{eq:attnll}) can be used
in the region where NNLL corrections are small;
we know that the NNLL corrections at $Q_T \approx 0$
correspond to ${\cal O}(\alpha_s (Q^2 ))$ effects,
and should be negligible in general.
However, such a straightforward estimate might fail
at the edge regions of phase space, e.g., in the small-$x$ region:
because the relevant evolution operators (\ref{LO-evol}) actually coincide with
the leading contributions in the large-logarithmic expansion of the usual
LO DGLAP evolution,\footnote{This fact also suggests that one may use the fixed value,
$b_0 /b_{SP} \simeq 1$ GeV, in (\ref{eq:attnll}) for all $Q$ (and $g_{NP}$)
rather than solving (\ref{eq:lsp}) numerically for each different
input value of $Q$, $g_{NP}$,
because the sensitivity of the LO evolution to a small change of the scale is modest.
}
(\ref{eq:attnll}) would not be accurate when the NLO corrections
in the usual DGLAP evolution are large compared with
the contributions of (\ref{LO-evol}).
Such a situation would typically occur in the region with small $x_{1,2}^0$,
corresponding to the case with large $\sqrt{S}$.
In Table~\ref{tab:3}, we compare $\mathscr{A}_{TT}^{\rm NLL}(Q_T =0)$
using the numerical $b$-integration (``NB''),
obtained as the $Q_T \rightarrow 0$ limit of the dot-dashed curve
in Figs.~\ref{fig:3}(a) and \ref{fig:6}(a),
with those using the saddle-point formula (\ref{eq:attnll}).
For the latter we use $b_0 /b_{SP}$ obtained as the
solution of (\ref{eq:lsp}) with $g_{NP}=0.5$ GeV$^2$, and consider the two cases
for the parton distributions participating in (\ref{eq:attnll}):
``SP-I'' uses the parton distributions
which are obtained by evolving the customary NLO distributions at the scale $Q$,
to $b_0 /b_{SP}$ using the NLO evolution operators up to
the NNLL corrections like (\ref{LO-evol});
``SP-II'' uses the customary NLO distributions at the scale $b_0 /b_{SP}$.
Here the ``customary NLO distributions'' are constructed
as described above (\ref{eq:np}).
First of all, the results for SP-I
demonstrate the remarkable accuracy of our simple analytic formula
(\ref{eq:attnll}) for both RHIC and J-PARC, reproducing the results of NB
to within 10\% accuracy.~\footnote{If we use the fixed value,
$b_0 /b_{SP} = 1$ GeV, for all cases, instead of the solution of (\ref{eq:lsp}),
the results in SP-I change by at most 5\%, for both RHIC and J-PARC kinematics.
The corresponding change in SP-II is by less than 5\% for J-PARC,
and by about 10\% (15\%) for $Q =2$-8 GeV ($Q=15$-20 GeV) at RHIC.
}
On the other hand, the results for SP-II indicate that the NNLL
corrections are moderate for large $\sqrt{S}$ at RHIC, while those are
expected to be small for small $\sqrt{S}$ at J-PARC.
We propose that our simple formula (\ref{eq:attnll}) is
applicable to the analysis of the low-energy experiments at J-PARC in
order to extract the NLO transversity distributions directly from the data.
On the other hand, (\ref{eq:attnll}) will not be so accurate for analyzing
the data at RHIC, but will be still useful for
obtaining the first estimate of the transversities.
We emphasize that such (moderate) uncertainty in applying our formula
(\ref{eq:attnll}) to the RHIC case is not caused by the saddle-point evaluation,
nor by considering the $Q_T \rightarrow 0$ limit, but rather
is inherent in the general $Q_T$ resummation framework
which, at the NLL level, implies the use of the evolution operators
(\ref{LO-evol}) with the LO DGLAP kernel;
more accurate treatment of the small-$x$ region of the parton distributions
relevant to the RHIC case would require the resummation formula to the
NNLL accuracy, where the NLO DGLAP kernel participates
in the evolution operators (\ref{LO-evol})
from $Q$ to $b_0 /b$ (see e.g. \cite{BCDeG:03}).
\begin{table}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|r|r|r|r|r||r|r|r|}
\hline
& \multicolumn{5}{|c||}{$\sqrt{S}=200$ GeV, \hspace{0.1cm} $y=2$}&
\multicolumn{3}{|c|}{$\sqrt{S}=10$ GeV, \hspace{0.1cm} $y=0$}\\
\hline
$Q$ & 2GeV & 5GeV & 8GeV & 15GeV & 20GeV
& 2GeV & 2.5GeV & 3.5GeV \\
\hline
SP-I
& 4.3\% & 5.4\% & 6.6\% & 8.7\% & 9.8\%
& 14.1\% & 14.5\% & 14.8\% \\
SP-II
& 7.3\% & 8.7\% & 9.8\% & 11.8\% & 12.7\%
& 14.7\% & 14.8\% & 14.2\% \\
NB
& 3.8\% & 4.9\% & 6.1\% & 8.2\% & 9.4\%
& 13.4\% & 14.0\% & 14.9\% \\
\hline
\end{tabular}
\caption{The $Q_T \rightarrow 0$ limit of $\mathscr{A}_{TT}^{\rm NLL}(Q_T)$ of (\ref{asymNLL})
for RHIC and J-PARC kinematics. SP-I and SP-II are the results of the
saddle-point formula (\ref{eq:attnll}) for $g_{NP}=0.5$ GeV$^2$,
using the evolution operators from $Q$ to $b_0 /b_{SP}$, to the NLL accuracy
and to the customary NLO accuracy, respectively. NB is obtained from
the dot-dashed curve in Figs.~\ref{fig:3}(a) and \ref{fig:6}(a).}
\label{tab:3}
\end{table}
\section{Conclusions}
In this paper we have presented a study of double transverse-spin
asymmetries for dilepton production at small $Q_T$ in $pp$ collisions.
The logarithmically enhanced contributions, which arise in the
small $Q_T$ region due to multiple soft gluon emission in QCD,
are resummed to all orders in $\alpha_s$
up to the NLL accuracy.
Based on this framework, we calculate numerically the spin-dependent and
spin-averaged cross sections in tDY, and the corresponding asymmetries ${\mathscr{A}_{TT}(Q_T)}$,
as a function of $Q_T$ at RHIC kinematics as well as at J-PARC kinematics.
The soft gluon resummation contributions make the cross sections finite
and well-behaved over all
regions of $Q_T$, so that the singular $Q_T$ spectra in the fixed-order
perturbation theory are redistributed,
forming a well-developed peak in the small $Q_T$ region.
As a result,
both the polarized and unpolarized cross sections become more ``observable''
around the pronounced ``peak region'' at small $Q_T$, involving the bulk of events.
Reflecting the universal nature of the soft gluon effects,
those large resummation-contributions mostly
cancel in the cross section asymmetries ${\mathscr{A}_{TT}(Q_T)}$, leading to the almost
constant behavior of ${\mathscr{A}_{TT}(Q_T)}$ in the small $Q_T$ region,
but, remarkably, the effects surviving the cancellation
raise the corresponding constant value of ${\mathscr{A}_{TT}(Q_T)}$ considerably compared with
the asymmetries in the fixed-order
perturbation theory.
We have obtained a QCD prediction as ${\mathscr{A}_{TT}(Q_T)} \simeq 5$-10\% and 15-20\%
in the ``flat region''
for typical kinematics at RHIC and J-PARC, respectively, where
the different values of ${\mathscr{A}_{TT}(Q_T)}$ are associated with
the different values of the partons' momentum fractions probed by these two experiments.
We have also derived a new saddle-point formula for $\mathscr{A}_{TT}(Q_T \approx 0)$,
clarifying the classification of the contributions
involved in the resummation formula for
$Q_T \rightarrow 0$.
The formula is exact to the NLL accuracy, and
embodies the above remarkable features of soft gluon resummation
effects at small $Q_T$
in a compact analytic form.
Our saddle-point formula may be compared with the data of ${\mathscr{A}_{TT}(Q_T)}$
in the peak region of the DY $Q_T$-spectrum, and thus
provides us with a new direct approach to extract the transversity distributions from
experimental data.
We mention that there is
another kind of logarithmically enhanced soft-gluon contribution,
subject to the so-called
``threshold resummation'', besides those treated by the $Q_T$ resummation.
It is known that the threshold resummation effects on the cross sections
can be important when the probed momentum fractions of partons are
rather large like at J-PARC.
The corresponding effects for tDY have been studied in
\cite{SSVY:05} for $p\bar{p}$ collisions at GSI kinematics, and
the results indicate that the threshold resummation effects will not be so
significant for the kinematical regions corresponding to experiments
at J-PARC, and, furthermore, will cancel mostly in the asymmetries.
We have revealed that
the ``amplification'' of the double transverse-spin asymmetries ${\mathscr{A}_{TT}(Q_T)}$ at small $Q_T$
is driven by the partonic mechanism participating at the NLL level,
as the interplay between
the large logarithmic gluon effects resummed into the universal Sudakov factor
and the DGLAP evolutions specific to each channel.
Thus similar phenomenon is anticipated also
in $p\bar{p}$ collisions at the future experiments at
GSI \cite{PAX:05}, where the large values are predicted
for the $Q_T$ independent asymmetry $A_{TT}$ \cite{SSVY:05,BCCGR:06}.
The application of our $Q_T$ resummation formalism
to $p\bar{p}$ collisions will be presented elsewhere \cite{KKT:07}.
\section*{Acknowledgments}
We thank Werner Vogelsang, Hiroshi Yokoya and Stefano Catani
for useful discussions and comments.
The work of J.K. and K.T. was
supported by the Grant-in-Aid
for Scientific Research Nos. C-16540255 and C-16540266.
\section{Conclusions}
We have investigated various corrections that would occur for a specific rapid-cycle Thouless pumping protocol inside a one-dimensional optical lattice. Firstly, it was shown that the finite-size corrections to an integer pumped charge decay exponentially with the size of the system, as seen in Fig.~\ref{fig:finiteCorrections}, and that these corrections vanish completely for systems with flat energy bands, such that the pumping is ideal even in systems of size 2.
Secondly, we gave some discussion of the order of magnitude of the corrections that arise when NN-hopping terms, which occur in a realistic optical lattice, are added to the RM-Hamiltonian (\ref{eq:RMHamiltonian}). It was shown that these corrections vanish in the adiabatic limit, but that the rapidity of the cycle introduces new corrections, which are oscillatory and exponentially dependent on the rapidity and the width of the energy bands, and linearly dependent on the NN-hopping terms, as seen in Fig.~\ref{fig:ChargeNNHopping}.
Thirdly, we discussed the corrections due to the addition of a harmonic potential to the band insulator. We constructed a lattice variant of the Weyl transform (\ref{eq:weylTransform}) to get the dependence of the corrections on the position and the potential curvature. These corrections can be split into an average effect and a polarizing effect, Eq.~(\ref{eq:pumpedChargeCorrections1}) and Eq.~(\ref{eq:pumpedChargeCorrections2}), which are both oscillatory and exponentially dependent on the rapidity and the width of the bands, as seen in Fig.~\ref{fig:pumpedChargeCoefficients}. Moreover, at the center of the lattice, the corrections to the integer-valued pumped charge scale quadratically with the potential curvature.
Lastly, we gave a brief discussion of the change in the center of mass of the particle distribution under the rapid-cycle protocol. Here, it was shown that the corrections to the change in the center of mass are larger than the corrections to the actual pumped charge. Namely, these corrections are linear in the potential curvature. Moreover, the compressible regions also create additional corrections to the pumped charge, as seen in Fig.~\ref{fig:errorCOM}.
These investigated corrections give some insight into the realization of the rapid-cycle Thouless pumping protocol in an optical superlattice. It should be noted that this paper does not contain any numerical calculations of an actual optical superlattice, but rather discusses each correction separately. We have also not taken thermal effects or interactions between particles into account. The next step would therefore be to implement this rapid-cycle protocol in an optical lattice.
\section{Effect of Harmonic potential}
\label{sec:harmonicPotential}
In the optical lattice, the particles get trapped inside a harmonic potential, laid along the length of the lattice. In the single-particle subspace, this harmonic potential is given by
\begin{equation}
\hat V = \sum_{\alpha = 0}^{N - 1}\frac{1}{2}\xi \left(\alpha - \alpha_0\right)^2\Big(|2\alpha\rangle \langle 2\alpha| + |2\alpha + 1\rangle \langle 2\alpha + 1|\Big),
\label{eq:harmonicPotential}
\end{equation}
where $\xi \in \mathbb{R}_{>0}$ is analogous to the spring constant in a classical system, and $\alpha_0 = \frac{N-1}{2}$ is the center of the lattice. We will from now on consider $N$ to be odd, such that there is actually a center unit cell where the added potential vanishes. This added potential has the effect of localizing the eigenstates of the total Hamiltonian $\hat H = \hat H_{RM} + \hat V$. Here, we do not consider the NN-hopping terms; we suppose the potential is smooth with $\xi \ll 1$ and the size of the system $N$ is large enough such that for the states
\begin{equation}
\mathcal{S} = \{|\psi\rangle : \hat H |\psi\rangle = E|\psi\rangle \text{ and } E< 0\},
\label{eq:states}
\end{equation}
the amplitude at the edges of the system becomes negligible. Here, $\mathcal{S}$ is the set of vacuum states in the zero temperature limit with a chemical potential $\mu = 0$. Using the fact that the potential is weak and smooth, the lattice looks locally unperturbed and periodic. Therefore, the non-contractible loop through $\mathscr{P}$ using the rapid-cycle protocol will still result in a non-zero particle transport. This particle transport is also close to an integer, as shown in Fig.~\ref{fig:rapidPumpPotential}, with some corrections due to the harmonic potential. To get these corrections, we will analyse the system in phase space by introducing a lattice variant of the Weyl transform \cite{Case2008}. Namely, we define the Weyl transform of an operator $\hat A$ by the $2\times 2$ matrix $\tilde A(n,k)$ given by
\begin{equation}
\begin{split}
&\langle \alpha | \tilde A(n,k) | \beta\rangle = \\
&\sum_{x = 0}^{N-1}e^{-ik\left(2x + \frac{\alpha - \beta}{2}\right)}\langle 2(n+x) + \alpha | \hat A | 2(n - x) + \beta \rangle
\end{split}
\label{eq:weylTransform}
\end{equation}
\begin{figure}
\centering
\includegraphics[width = 8.6cm]{Figures/rapidPumpPotential.eps}
\caption{(a) Numerical calculation of the transported charge through the center of the lattice $\alpha_0$ in a system with $\xi = 0.005$. (b) The path taken through the parameter space $\{(m,\delta)\}$ in the rapid-cycle protocol with $b = 1$ and $\varepsilon = 2$.}
\label{fig:rapidPumpPotential}
\end{figure}
with $\alpha, \beta \in \{0,1\}$. More details and properties of this transformation are given in Appendix \ref{sec:WeylAppendix}. This Weyl transform can be applied to our system containing the Rice-Mele Hamiltonian (\ref{eq:RMHamiltonian}) and the added harmonic potential (\ref{eq:harmonicPotential}), such that the Weyl transform of the Hamiltonian is given by
\begin{equation}
\tilde H(n,k)(\tau) = \hat H_{\gamma}(k, \tau) + \frac{1}{2}\xi n^2 \mathbb{I}_2,
\label{eq:weylTransformedHamiltonian}
\end{equation}
which is the sum of the Rice-Mele Hamiltonian in reciprocal space (\ref{eq:rapidRMHamiltonian}) and a scalar matrix associated with the harmonic potential, translated such that the center of the lattice lies at $n = 0$. The Weyl transform therefore reduces the problem to a two-dimensional one, in which we consider the Weyl-transformed Liouville-von Neumann equation
\begin{equation}
i\frac{\partial \tilde \rho}{d\tau} = \widetilde{H\rho} - \widetilde{\rho H}.
\label{eq:liouville-vonNeumann}
\end{equation}
As discussed in Appendix B, this gives rise to an expansion of the local vacuum density matrix in the insulating region, given by
\begin{equation}
\tilde \rho(n) = \tilde \rho_0 + \xi\left(\tilde \rho_1 + n\tilde \rho_2\right) + \xi^2\left(\tilde \rho_3 + n\tilde \rho_4 + n^2 \tilde \rho_5\right) + \mathcal{O}\left(\xi^3\right),
\label{eq:corrDensityMatrix}
\end{equation}
where $\tilde \rho_0$ is the local density matrix of the unperturbed lattice and the subsequent terms are corrections due to the harmonic potential. Eq.~(\ref{eq:corrDensityMatrix}) shows the general dependence of the density matrix on $n$ and $\xi$. Here, $\tilde \rho_1$ and $\tilde \rho_4$ are scalar matrices and also the only correction terms which have non-zero trace. It follows that in the limit $\varepsilon \gg 1$, the trace of the local density matrix is given by
\begin{equation}
\text{tr}(\tilde \rho(n)) = 1 + \frac{1}{4\varepsilon}\left(\xi + 3n\xi^2\right) + \mathcal{O}(\xi^3) > 1,
\label{eq:traceDensity}
\end{equation}
meaning that inside the insulating region, the number of particles per unit cell is greater than one, and there is a slight cross-over with the conduction band in the vacuum state which scales inversely with $\varepsilon$, as shown in Fig.~\ref{fig:densityDistribution}(b).
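For concreteness, the matrix of the harmonic potential (\ref{eq:harmonicPotential}) in the single-particle basis can be built as in the following minimal sketch in Python (the value of $N$ below is an illustrative assumption; the Rice-Mele part is omitted):
\begin{verbatim}
import numpy as np

def harmonic_potential(N, xi):
    """Matrix of the harmonic potential: the on-site energy
    xi/2*(alpha - alpha0)^2 acts equally on both sites |2a>, |2a+1>
    of unit cell alpha; N should be odd so a center cell exists."""
    alpha0 = (N - 1)/2.0
    v = 0.5*xi*(np.arange(N) - alpha0)**2
    return np.diag(np.repeat(v, 2))        # shape (2N, 2N)

V = harmonic_potential(N=101, xi=0.005)    # xi as in the figure; N assumed
\end{verbatim}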
\begin{figure}
\centering
\begin{tikzpicture}[
roundnode/.style={circle, draw=black!60, fill=white!100, very thick, minimum size=3mm},
]
\node[roundnode] (E) {};
\node[roundnode] (F) [right = 2mm of E, yshift = -2mm]{};
\node (EJa) [right = 1mm of E, yshift = 4mm, align = center, anchor = south] {$j(0)$\\$\longrightarrow$};
\node (EJb) [right = 1mm of E, yshift = -6mm, align = center, anchor = north] {};
\draw[dotted] (EJa) -- (EJb);
\node (EJc) [right = 7.75mm of E, yshift = 4mm, align = center, anchor = south]{};
\node (EJd) [right = 7.75mm of E, yshift = -6mm, align = center, anchor = north] {$\longrightarrow$\\$j\left(\frac{1}{2}\right)$};
\draw[dotted] (EJc) -- (EJd);
\node[roundnode] (C) [left = 10mm of E, yshift = 1mm]{};
\node[roundnode] (D) [right = 2mm of C, yshift = -2mm]{};
\node (CJa) [right = 1mm of C, yshift = 4mm, align = center, anchor = south] {$j(-1)$\\$\longrightarrow$};
\node (CJb) [right = 1mm of C, yshift = -6mm, align = center, anchor = north] {};
\draw[dotted] (CJa) -- (CJb);
\node (CJc) [right = 7.75mm of C, yshift = 3mm, align = center, anchor = south]{};
\node (CJd) [right = 7.75mm of C, yshift = -7mm, align = center, anchor = north] {$\longrightarrow$\\$j\left(-\frac{1}{2}\right)$};
\draw[dotted] (CJc) -- (CJd);
\node[roundnode] (G) [right = 10mm of E, yshift = 1mm]{};
\node[roundnode] (H) [right = 2mm of G, yshift = -2mm]{};
\node (GJa) [right = 1mm of G, yshift = 4mm, align = center, anchor = south] {$j(1)$\\$\longrightarrow$};
\node (GJb) [right = 1mm of G, yshift = -6mm, align = center, anchor = north] {};
\draw[dotted] (GJa) -- (GJb);
\node (GJc) [right = 7.75mm of G, yshift = 6mm, align = center, anchor = south]{};
\node (GJd) [right = 7.75mm of G, yshift = -4mm, align = center, anchor = north] {$\longrightarrow$\\$j\left(\frac{3}{2}\right)$};
\draw[dotted] (GJc) -- (GJd);
\node[roundnode] (A) [left = 10mm of C, yshift = 3mm]{};
\node[roundnode] (B) [right = 2mm of A, yshift = -2mm]{};
\node (AJa) [right = 1mm of A, yshift = 4mm, align = center, anchor = south] {$j(-2)$\\$\longrightarrow$};
\node (AJb) [right = 1mm of A, yshift = -6mm, align = center, anchor = north] {};
\draw[dotted] (AJa) -- (AJb);
\node (AJc) [right = 7.75mm of A, yshift = 3mm, align = center, anchor = south]{};
\node (AJd) [right = 7.75mm of A, yshift = -7mm, align = center, anchor = north] {$\longrightarrow$\\$j\left(-\frac{3}{2}\right)$};
\draw[dotted] (AJc) -- (AJd);
\node[roundnode] (I) [right = 10mm of G, yshift = 3mm]{};
\node[roundnode] (J) [right = 2mm of I, yshift = -2mm]{};
\node (IJa) [right = 1mm of I, yshift = 4mm, align = center, anchor = south] {$j(2)$\\$\longrightarrow$};
\node (IJb) [right = 1mm of I, yshift = -6mm, align = center, anchor = north] {};
\draw[dotted] (IJa) -- (IJb);
\node (X) [left = 2mm of A]{$\hdots$};
\node (Y) [right = 7mm of I]{$\hdots$};
\end{tikzpicture}
\caption{The Rice-Mele chain under a harmonic potential. The current between atoms is calculated through the dashed lines as a function of the unit cell.}
\label{fig:riceMeleChainCurrent}
\end{figure}
We are now interested in the corrections to the total pumped charge due to the harmonic potential. The expansion of the density matrix~(\ref{eq:corrDensityMatrix}) gives the general dependence of the corrections in pumped charge on the position $n$ and the spring constant $\xi$. Here, it should be noted that $\tilde \rho_1$ and $\tilde \rho_4$ are scalar matrices and therefore have no contribution to the pumped charge. Moreover, since the position dependence is defined per unit cell, we have to distinguish between the current through a unit cell and between adjacent unit cells, as shown in Fig.~\ref{fig:riceMeleChainCurrent}. We can write the total pumped charge after one cycle through unit cell $n$ as
\begin{equation}
\begin{split}
\Delta Q(n) - \Delta Q_0 &= n\xi\big(A_1 + B_1\big)\\
& + n^2\xi^2\big(A_2 + B_2\big)\\
&+ \xi^2\big(A_3 + B_3\big) + \mathcal{O}(\xi^3)
\end{split}
\label{eq:pumpedChargeCorrections1}
\end{equation}
and the pumped charge between unit cells $n$ and $n+1$ as
\begin{equation}
\begin{split}
\Delta Q\left(n+\frac{1}{2}\right) - \Delta Q_0 &= \left(n+\frac{1}{2}\right)\xi\big(A_1 - B_1\big)\\
&+\left(n+\frac{1}{2}\right)^2\xi^2\big(A_2 - B_2\big)\\
&+ \xi^2 \big(A_3 - B_3\big) + \mathcal{O}(\xi^3),
\end{split}
\label{eq:pumpedChargeCorrections2}
\end{equation}
\begin{figure}
\centering
\includegraphics[width = 8.6cm]{Figures/ChargeCoefficients.eps}
\caption{Numerical calculation of the pumped charge correction coefficients $A_i$ and $B_i$ as a function of $\varepsilon$ and $1/b$. The $A_i$ terms represent the average pumped charge between each atom, while the $B_i$ terms describe the polarizing effect within the unit cells.}
\label{fig:pumpedChargeCoefficients}
\end{figure}
where $\Delta Q_0 \in \mathbb{Z}$ is the unperturbed integer-valued pumped charge and the terms $A_i$ and $B_i$ depend on the path through the parameter space, i.e. depend on $\varepsilon$ and $b$. Here, the $A_i$ terms can be thought of as the average pumped charge between each of the atoms, while the $B_i$ terms describe the polarization within the unit cells. In Fig.~\ref{fig:pumpedChargeCoefficients}, the numerical calculations of these coefficients are shown. It can be seen that the behaviour of these functions is oscillatory, with minima where the coefficients become exactly equal to 0. The amplitude of these functions decays exponentially with both $\varepsilon$ and $1/b$. The non-zero $B_i$ coefficients cause a polarization in each unit cell, resulting in a change in energy. In the limit $\varepsilon \gg 1$, it can be derived that the change in local energy is given by
\begin{equation}
\begin{split}
\Delta E(n) &= 2n\xi B_1 + n^2\xi^2\left(2B_2 - \frac{1}{2}(A_1 - B_1)\right)\\
&+ \xi^2\left(2B_3 - \frac{1}{4}\left(A_2 - B_2\right)\right) + \mathcal{O}(\xi^3).
\end{split}
\end{equation}
This shows that in the rapid-cycle protocol, local excitations start to appear due to the harmonic potential. Interestingly, it is possible to have a local correction to the pumped charge while the expectation value of the local energy does not change. This suggests that there is additional noise on the energy and pumped charge, which could be investigated in further research.
Similar to the addition of NN-hopping, we can see that the rapid-cycle protocol in a harmonic potential creates additional corrections to an integer-valued charge pump, whereas an adiabatic protocol only has finite-frequency corrections \cite{Privitera2018, Lohse2015, Nakajima2016}. At the center of the lattice, the corrections in a rapid cycle scale with $\xi^2$, which makes them quite small for weak harmonic potentials, and they could even be negligible w.r.t.\ the correction due to NN-hopping in the optical lattice. When we go off-center, there are corrections which only scale linearly with $\xi$. However, since these corrections also scale linearly with $n$, the average pumped charge of a bulk around the center again scales quadratically with the potential curvature.
\section{NN-hopping on the Optical superlattice}
Thouless pumping can be realized experimentally in a double-well optical superlattice of the form
\begin{equation}
V(x,\tau) = -V_S(\tau)\cos^2\left(\frac{2\pi x}{d}\right) - V_L(\tau)\cos^2\left(\frac{\pi x}{d} - \phi(\tau)\right),
\label{eq:opticalLattice}
\end{equation}
where $d$ is the lattice constant, $V_s$ and $V_L$ the depth of the short and long lattice respectively and $\phi$ the phase difference between the two lattices \cite{Wang2013, Lohse2015, Nakajima2016, Peil2003, Qian2011}. In the discussion of this lattice, we will use the unit of energy to be the recoil energy $E_R := \hbar^2 / (8md^2)$, where $m$ is the mass of the used atom. In the deep tight-binding limit, the two lowest energy bands of this model~(\ref{eq:opticalLattice}) can be approximated by those of the RM-Hamiltonian~(\ref{eq:RMHamiltonian}). Generally however, there is a slight difference between these two models. The band structure of the optical lattice can then be fully captured by considering higher hopping terms to the RM-Hamiltonian. Here, we will only consider the next-to-nearest-neighbour hopping terms, namely
\begin{equation}
\begin{split}
\hat H_{NN} = &\sum_{\alpha = 0}^{N-1}t_3|2\alpha\rangle\langle 2\alpha + 2| + t_4|2\alpha - 1\rangle \langle 2\alpha + 1| + h.c.
\end{split}
\end{equation}
which is added to the RM-Hamiltonian. With the addition of the NN-hopping terms, the two lower bands of the optical lattice coincide with those in the tight-binding approximation in sufficiently deep lattices, where the energy gap is much larger than the width of the bands. These extra NN-hopping terms will in general result in a deviation in the pumped charge. Specifically, when $|t_4 - t_3| \ll E_{gap}$, these corrections to the integer-valued pumped charge are linearly dependent on the difference $|t_4 - t_3|$. For ease of calculation, we will assume that the NN-hopping terms stay constant during the protocol. Although this is not generally true, this does give an idea of the order of magnitude, or at least an upper bound, of the corrections due to the additional terms. In the rapid-cycle protocol, the deviation from integer-valued pumped charge after one cycle is calculated as a function of $\varepsilon$ and $1/b$ and shown in Fig.~\ref{fig:ChargeNNHopping}. It can be seen that the corrections are oscillatory in $1/b$ and $\varepsilon$, which means there are lines where the corrections vanish completely. Moreover, the amplitude of these oscillations decreases exponentially with both $\varepsilon$ and $1/b$. This means that in the adiabatic limit $b \rightarrow 0$ and in the limit of a flat dispersion $\varepsilon \rightarrow \infty$ there are no corrections to integer-valued pumped charge due to NN-hopping terms.
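As a minimal sketch in Python of the added term (assuming periodic boundary conditions, as implied by the $\alpha = 0$ term of the sum, and real, constant hopping amplitudes):
\begin{verbatim}
import numpy as np

def h_nn(N, t3, t4):
    """NN-hopping matrix: t3|2a><2a+2| + t4|2a-1><2a+1| + h.c.,
    with site indices taken mod 2N (periodic chain of N unit cells)."""
    H = np.zeros((2*N, 2*N))
    for a in range(N):
        H[(2*a) % (2*N), (2*a + 2) % (2*N)] += t3
        H[(2*a - 1) % (2*N), (2*a + 1) % (2*N)] += t4
    return H + H.T                         # adds the h.c. part
\end{verbatim}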
\begin{figure}
\centering
\includegraphics[width = 8.6cm]{Figures/propChargeNN.eps}
\caption{Numerical calculation of the corrections to integer-valued pumped charge due to the NN-hopping terms as a function of $\varepsilon$ and $1/b$.}
\label{fig:ChargeNNHopping}
\end{figure}
In order to calculate actual corrections, we should consider the magnitude of $|t_4 - t_3|$ in the optical lattice (\ref{eq:opticalLattice}). For simplicity however, we will only calculate the magnitude of the sum of the NN-hopping terms, i.e. $|t_3 + t_4|$, since this sum can be easily calculated by making use of the fact that
\begin{equation}
\epsilon_-(k) + \epsilon_+(k) = 2\cdot\text{Re}\left((t_3+t_4)e^{ik}\right),
\end{equation}
where $\epsilon_\pm(k)$ are the quasienergies of the RM-Hamiltonian with NN-hopping terms. In order to calculate the magnitude of the difference $|t_4 - t_3|$, one would need some fitting procedure for the bands. It is however expected that the order of magnitude of the difference $|t_4 - t_3|$ is similar to the order of magnitude of the sum $|t_3 + t_4|$. In Fig.~\ref{fig:NNHopping}, the magnitude of the sum of the NN-hopping terms per energy gap is plotted against $V_S$ and $V_L$ where $\phi = 0$. It can be seen that this magnitude decreases exponentially with both $V_S$ and $V_L$. Moreover, on the line $V_S = V_L^2 / (16 E_R)$, the NN-hopping terms are maximal, and this region should therefore be avoided to keep the NN-hopping terms to a minimum. As $\phi$ is varied, the absolute NN-hopping terms do not change significantly, while the energy gap does change. This will result in the ratio between the hopping constants and the energygap to change during the protocol, which already shows that the assumption that the NN-hopping terms stay constant is not true. The parameters $V_S$ an $V_L$ could also be varied during the protocol to overcome this problem.
\begin{figure}
\centering
\includegraphics[width = 8.6cm]{Figures/NNhopping-terms.eps}
\caption{Numerical calculation of the NN-hopping terms per energy gap for the lower two bands of the optical lattice (\ref{eq:opticalLattice}) as a function of $V_S$ and $V_L$ where $\phi = 0$. The dashed line is given by $V_S = V_L^2 / (16 E_R)$. The realization of Thouless pumping by Nakajima \textit{et al.} \cite{Nakajima2016} was done in an optical lattice with $(V_S, V_L) = (20, 30)E_R$, which is indicated by the star.}
\label{fig:NNHopping}
\end{figure}
Although the rapid-cycle protocol removes the finite-frequency corrections of adiabatic cycles, the NN-hopping terms introduce new corrections, whereas the topological quantization is still ensured in adiabatic cycles. Therefore, the parameters of the optical lattice should be chosen to minimize the NN-hopping terms. Also, to get the full characteristics of the optical lattice, even higher hopping terms should be considered. These are however expected to be negligible w.r.t.\ the NN-hopping terms. Since we have not computed the exact mapping of the whole rapid-cycle protocol onto the optical lattice, we have not actually calculated the exact corrections that would occur in an optical lattice experiment, where there are most certainly varying NN-hopping terms. However, one should expect the order of magnitude of the corrections to be similar.
\section{The doubled-lattice Weyl transform}
\label{sec:WeylAppendix}
Here, we will give a discussion of the Weyl transform on a lattice with periodicity 2. Such a Weyl transform has actually already been constructed \cite{Fialkovsky2020}. However, it turned out not to be applicable to our system \cite{Buot2021}, and therefore a reformulation is needed. Here, we will only give a brief summary of this definition; more specific details will be reported elsewhere.
We consider a doubled lattice of $N$ unit cells, where it is important that $N$ is odd. The intuitive reason for this is that we want a center unit cell, i.e. a 0 coordinate. When $N$ is even, this Weyl transform actually breaks down due to inconsistencies in the Fourier transform of the delta function. For a $2N\times 2N$ operator, the Weyl transform is now the $2\times 2$ matrix given by Eq.~(\ref{eq:weylTransform}). Using the momentum basis, consisting of
\begin{equation}
|k(\alpha)\rangle = \frac{1}{\sqrt{N}}\sum_{n = 0}^{N-1} e^{\frac{ik(2n+\alpha)}{2}} |2n + \alpha\rangle
\end{equation}
for $k\in \mathscr{B}$ and $\alpha \in \{0,1\}$, this can also be rewritten in the momentum basis, where the Weyl transform is given by
\begin{equation}
\begin{split}
&\langle \alpha | \tilde A(n,k) | \beta\rangle \\
&= \sum_{p \in \mathscr{B}}e^{ip\left(2n + \frac{\alpha + \beta}{2}\right)}\langle (k+p)(\alpha)| \hat A | (k-p)(\beta) \rangle.
\end{split}
\label{eq:weylTransform2}
\end{equation}
Note the similarity with the one-dimensional and continuous Weyl transform \cite{Case2008}; the biggest difference with the transformation given by Fialkovsky and Zubkov \cite{Fialkovsky2020} is that this transformation actually returns a matrix, just like a normal Fourier transformation on a doubled lattice would. The inverse of this transformation in the momentum basis is then given by
\begin{equation}
\begin{split}
&\langle p(\alpha) |\hat A | q(\beta) \rangle\\ &=\frac{1}{N}\sum_{n = 0}^{N-1}e^{-i\frac{p-q}{2}\left(2n + \frac{\alpha + \beta}{2}\right)} \langle \alpha | \tilde A\left(n, \frac{p+q}{2}\right) | \beta \rangle.
\end{split}
\label{eq:InverseWeylTransform2}
\end{equation}
A key property of the Weyl transform is that the trace of two operators $\hat A$ and $\hat B$ can be computed using the trace of the Weyl transforms, that is
\begin{equation}
\text{tr}\left(\hat A \hat B\right) = \frac{1}{N}\sum_{n=0}^{N-1}\sum_{k\in \mathscr{B}} \text{tr}\left(\tilde A(n,k) \tilde B(n,k)\right).
\label{eq:trace}
\end{equation}
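As a numerical sanity check of this property, the following minimal sketch in Python implements Eq.~(\ref{eq:weylTransform}) directly (odd $N$, periodic site indices mod $2N$, and the Brillouin-zone grid $k = 2\pi m/N$ are assumptions of the sketch):
\begin{verbatim}
import numpy as np

N = 5                                      # unit cells, odd
rng = np.random.default_rng(1)
A = rng.normal(size=(2*N, 2*N)) + 1j*rng.normal(size=(2*N, 2*N))
B = rng.normal(size=(2*N, 2*N)) + 1j*rng.normal(size=(2*N, 2*N))
ks = 2*np.pi*np.arange(N)/N                # assumed BZ grid

def weyl(op, n, k):
    """2x2 Weyl transform of a 2N x 2N operator, sites mod 2N."""
    out = np.zeros((2, 2), dtype=complex)
    for a in range(2):
        for b in range(2):
            for x in range(N):
                row = (2*(n + x) + a) % (2*N)
                col = (2*(n - x) + b) % (2*N)
                out[a, b] += np.exp(-1j*k*(2*x + (a - b)/2))*op[row, col]
    return out

lhs = np.trace(A @ B)
rhs = sum(np.trace(weyl(A, n, k) @ weyl(B, n, k))
          for n in range(N) for k in ks)/N
print(abs(lhs - rhs))                      # ~ 1e-12
\end{verbatim}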
Moreover, one can show that in the thermodynamic limit, the Weyl transform of the product of two operators $\hat A$ and $\hat B$ is given by
\begin{widetext}
\begin{equation}
\langle \alpha|\widetilde{AB}(n,k) | \beta \rangle = \sum_{\gamma = 0}^{1}\langle \alpha |\tilde A(n,k)|\gamma \rangle e^{\frac{i}{2}\left[\overleftarrow\partial_n\left(\overrightarrow\partial_k - \frac{i}{2}(\beta - \gamma)\right) - \overrightarrow\partial_n\left(\overleftarrow\partial_k + \frac{i}{2}(\alpha - \gamma)\right)\right]}\langle \gamma | \tilde B(n,k) | \beta\rangle
\end{equation}
\end{widetext}
for $\alpha, \beta \in \{0,1\}$. Note the similarity with the Moyal product \cite{Fialkovsky2020}. The fact that $\tilde A$ and $\tilde B$ are matrices will however result in additional correction terms in the exponentials. This form really only has meaning if the exponent is expanded in a power series. We will add a formal parameter $\lambda$ to this expansion to keep track of the order in the expansion, which will be set to 1 later. The expansion of the Weyl transform of the product is then given by
\begin{widetext}
\begin{equation}
\begin{split}
\widetilde{AB} &= \sum_{m = 0}^{\infty}\sum_{\alpha, \beta, \gamma = 0}^{1} \frac{\lambda^m}{m!} |\alpha\rangle \langle \alpha | \tilde A | \gamma \rangle \left[\frac{i}{2}\left[\overleftarrow\partial_n\left(\overrightarrow\partial_k - \frac{i}{2}(\beta - \gamma)\right) - \overrightarrow\partial_n\left(\overleftarrow\partial_k + \frac{i}{2}(\alpha - \gamma)\right)\right]\right]^m \langle \gamma | \tilde B | \beta\rangle\langle \beta|\\
&=: \sum_{m = 0}^{\infty} \lambda^m f_m(\tilde A, \tilde B),
\end{split}
\label{eq:moyalexp}
\end{equation}
\end{widetext}
where we have introduced the functions $f_m(\tilde A, \tilde B)$, which are the $m$-th order expansion terms. This expansion of the Weyl transform of the product now gives rise to an expansion of the vacuum state of a perturbed doubled lattice, as we will show in the following section. This expansion has been shown to give correct predictions in numerical calculations, suggesting that this definition of the Weyl transform is consistent and useful. A more detailed investigation of this transformation will be reported elsewhere.
\section{Expansion of the Weyl-transformed vacuum density matrix}
\label{sec:densityAppendix}
We will now consider the density matrix
\begin{equation}
\hat \rho = \sum_{|\psi\rangle \in \mathcal{S}}|\psi\rangle \langle \psi|
\end{equation}
with $\mathcal{S}$ as in Eq.(\ref{eq:states}). For the unperturbed Rice-Mele chain, i.e. when $\xi = 0$, this density matrix is simply the sum of the outer products of the Bloch states of the lower band. The addition of a weak harmonic potential (\ref{eq:harmonicPotential}) with $\xi\ll 1$ will then result in small corrections to the density matrix. In particular, it will result in corrections to the Weyl transform of the density matrix. We can expand the Weyl transform of the density matrix according to the same formal parameter $\lambda$ as in Eq.(\ref{eq:moyalexp}), i.e.
\begin{equation}
\tilde \rho(n,k) = \sum_{m=0}^{\infty} \lambda^m \tilde \rho_m(n,k),
\end{equation}
where $\tilde \rho_0 = |F_-\rangle \langle F_-|$, the vacuum density matrix of the Rice-Mele Hamiltonian (\ref{eq:rapidRMHamiltonian}). Importantly, the density matrix is idempotent, i.e. $\widetilde{\rho \rho} = \tilde \rho$. Therefore, it needs to satisfy the condition
\begin{equation}
\label{eq:idemProp}
\tilde \rho_m = \sum_{r+s+t = m}f_r(\tilde \rho_s, \tilde \rho_t).
\end{equation}
Moreover, it needs to commute with the Hamiltonian, i.e. $\widetilde{H\rho} - \widetilde{\rho H} = 0$. Defining $g_m(\tilde A, \tilde B) = f_m(\tilde A, \tilde B) - f_m(\tilde B, \tilde A)$ will then give the additional requirement
\begin{equation}
[\tilde H, \tilde \rho_m] = -\sum_{\substack{r+s = m\\ s < m}}g_{r}(\tilde H, \tilde \rho_s).
\label{eq:commProp}
\end{equation}
Finally, we can make use of the fact that
\begin{equation}
\label{eq:firstOrderProp}
\tilde \rho_m\tilde \rho_0 + \tilde \rho_0 \tilde \rho_m = \tilde \rho_m + \frac{1}{\epsilon_-}\left(2\tilde \rho_m \hat H_\gamma + [\tilde H, \tilde \rho_m]\right)
\end{equation}
and combine it with Eq.~(\ref{eq:idemProp}) and Eq.~(\ref{eq:commProp}) to get that the $m$-th order correction term in the Weyl transform of the density matrix is given by
\begin{equation}
\tilde \rho_m = \frac{1}{2}\left[|\epsilon_-| \sum_{\substack{r+s+t = m\\ s,t < m}}f_r(\tilde \rho_s, \tilde \rho_t) + \sum_{\substack{r+s = m\\ s < m}}g_r(\tilde H, \tilde \rho_s)\right] \cdot \hat H_\gamma^{-1}
\end{equation}
which is a function of all the previous order correction terms, such that each correction term can be calculated through iteration. In the limit where $\xi \ll 1$ and $\varepsilon \gg 1$, the correction terms up to second order in $\xi$ can then be calculated to be
\begin{align}
\begin{split}
\tilde \rho_0 &= |F_-\rangle \langle F_-|
\label{eq:zerothOrderTerm}
\end{split},\\
\begin{split}
\tilde \rho_1 &= \frac{n\xi(1+e^{ik})}{\sqrt{16\varepsilon(1+\cos(k))}}|F_+\rangle \langle F_-| + h.c. + \mathcal{O}\left(\frac{1}{\varepsilon}\right)^{\frac{3}{2}}
\end{split}\\
\begin{split}
\tilde \rho_2 &= \frac{\xi}{8\varepsilon}\Big(|F_-\rangle \langle F_-| + |F_+\rangle \langle F_+|\Big)\\
&+ \frac{n^2\xi^2}{8\varepsilon}\Big(|F_+\rangle \langle F_+| - |F_-\rangle \langle F_-|\Big)\\
&+\left(\frac{n^2\xi^2(1+e^{ik})}{\sqrt{16\varepsilon(1+\cos(k))}}|F_+\rangle \langle F_-| + h.c.\right) +\mathcal{O}\left(\frac{1}{\varepsilon}\right)^{\frac{3}{2}},
\label{eq:secondorderHom}
\end{split}\\
\begin{split}
\tilde \rho_3 &= \frac{3n\xi^2}{8\varepsilon}\Big(|F_-\rangle \langle F_-| + |F_+\rangle \langle F_+|\Big) +\mathcal{O}\left(\frac{1}{\varepsilon}\right)^{2} + \mathcal{O}(\xi^3)
\end{split}\\
\begin{split}
\tilde \rho_4 &= \frac{3\xi^2}{32\varepsilon}\Big(|F_+\rangle \langle F_+| - |F_-\rangle \langle F_-|\Big)
\label{eq:fourthOrderHom}
\end{split}\\
\begin{split}
\tilde \rho_m &= \mathcal{O}(\xi^3) \text{ for } m \geq 5.
\label{eq:higherOrderTerm}
\end{split}
\end{align}
It can be seen that this expansion results in an expansion of the density matrix in $\xi$. This shows the dependence of the local density matrix on $n$ and $\xi$ as given in Eq.(\ref{eq:corrDensityMatrix}), and yields the trace of the density matrix as given in Eq.~(\ref{eq:traceDensity}).
\section{Change in density distribution}
In the optical lattice experiment, it is not actually the pumped charge through a point which is measured, but rather the change in center of mass of the whole density distribution \cite{Wang2013, Lohse2015, Nakajima2016}. As seen in Fig.~\ref{fig:densityDistribution}(a), the density distribution, and therefore also the center of mass, shifts after a pumping cycle. This measurement technique makes use of the fact that the pumping is close to integer inside the whole insulating region. However, as seen in Eq.~(\ref{eq:pumpedChargeCorrections1}) and Eq.~(\ref{eq:pumpedChargeCorrections2}), the pumped charge inside the insulating region depends on the position. By the continuity equation, this will result in a change in density distribution inside the insulating region after a pumping cycle. In Fig.~\ref{fig:densityDistribution}(b), this change in density distribution per unit cell is plotted after one cycle. Besides a change in density per unit cell, there is also the polarization of density in each unit cell. These two effects will result in a correction to the change in center of mass w.r.t integer value. Moreover, one should note that there are additional corrections due to the compressible region.
\begin{figure}
\centering
\includegraphics[width=8.6cm]{Figures/densityDistribution.eps}
\caption{Numerical calculation of the density distribution per unit cell at $\tau = 0$ (dashed) and $\tau = T$ (solid) over the whole lattice (a) and zoomed in near the center of the lattice (b) for $\xi = 0.005$, $b = 1$ and $\varepsilon = 100$.}
\label{fig:densityDistribution}
\end{figure}
As discussed at the end of Section~\ref{sec:harmonicPotential}, the average pumped charge in the bulk of the insulating region scales with $\xi^2$. It should be noted however, that the width of the insulating region also depends on $\xi$, and scales with $\xi^{-1/2}$. This will cause corrections to the change in center of mass to be linearly dependent on $\xi$. Specifically, using the continuity equation, Eq.~(\ref{eq:pumpedChargeCorrections1}) and Eq.~(\ref{eq:pumpedChargeCorrections2}), it can be demonstrated that the correction to the change in center of mass w.r.t integer value of the insulating region is given by
\begin{equation}
\left|\Delta_{\rm COM} - \Delta Q_0\right| \approx \frac{2}{3}\xi A_2 + \mathcal{O}(\xi^2)
\label{eq:errorCOM}
\end{equation}
in the limit $\varepsilon \gg 1$. In addition to the corrections due to the insulating region, the compressible region will also give some corrections. Although the compressible region is minimal in the same limit $\varepsilon \gg 1$, it does not vanish. In Fig.~\ref{fig:errorCOM}(a), it can be seen that the correction to the change in center of mass as a function of $\xi$ is staggered. This can be explained by the fact that the width of both the insulating and compressible region is always integer valued. Therefore, a small variation in $\xi$ will not directly result in a variation in the width of the compressible region. When the variation in $\xi$ is large enough however, the compressible region will jump to the next atom, which causes the staggered behaviour. In Fig.~\ref{fig:errorCOM}(b), it can be seen that Eq.~(\ref{eq:errorCOM}) gives a good approximation for $\varepsilon \gg 1$, with some slight corrections due to the compressible region.
The finite-frequency corrections to the pumped charge in adiabatic cycles scale with $\omega^2$ \cite{Privitera2018} and the potential corrections in a rapid-cycle protocol scale with $\xi^2$. However, the corrections to the change in center of mass scale only linearly with $\xi$. The rapid-cycle protocol might therefore introduce more corrections to the center of mass method than the adiabatic cycle would have given. Moreover, using this method, one also needs to take the corrections to the pumped charge due to the compressible region into account. Therefore, one might want to consider other methods in order to directly measure the actual pumped charge.
\begin{figure}
\centering
\includegraphics[width = 8.6cm]{Figures/errorCOM.eps}
\caption{(a) The change in center of mass of the density distribution after a pumping cycle as function of $\xi$ with $b = 1$ and $\varepsilon = 100$. (b) The change in center of mass of the density distribution and $\frac{2}{3}A_2$ as function of $\varepsilon$ with $b = 1$ and $\xi = 0.005$.}
\label{fig:errorCOM}
\end{figure}
\section{Finite-size corrections}
The proposed protocol gives a quantized particle transport outside of the adiabatic limit, but still only works in the thermodynamic limit. There are in general corrections to the pumped charge which decrease exponentially with $N$ \cite{Li2017}. Here, we will investigate those corrections for the rapid-cycle protocol specifically. If we consider the pumped charge per $k$-number, then because of its periodicity in $k$, it can be written as a Fourier series, i.e.
\begin{equation}
\Delta Q(k) := \int_{0}^{T}\langle F_- | \partial_k \hat H_{\gamma} | F_-\rangle d\tau = \sum_{n=-\infty}^{\infty} \Delta Q_n e^{ink},
\end{equation}
where $\Delta Q_n$ are the Fourier coefficients. In systems with a size $N\in \mathbb{N}$, the total pumped charge becomes
\begin{equation}
\Delta Q := \frac{1}{N}\sum_{k \in \mathscr{B}}\Delta Q(k) = \Delta Q_0 + \sum_{m=1}^{\infty}\left(\Delta Q_{mN} + \Delta Q_{-mN}\right),
\end{equation}
which reduces to $\Delta Q = \Delta Q_0 \in \mathbb{Z}$ in the limit $N\rightarrow \infty$. Therefore, in finite-sized systems, the corrections are due to the additional Fourier coefficients. Note that these finite-size corrections vanish if for all $n \in \mathbb{N}$, we have $\Delta Q_{n} = -\Delta Q_{-n}$ or $\Delta Q_n = \Delta Q_{-n} = 0$. It might be possible to construct a protocol in which this is true. In general however, this is not the case and there are still finite-size corrections. In the discussed rapid-cycle protocol, there is an analytical expression for the pumped charge per $k$-number. Namely, the pumped charge after one cycle due to the state $|F_{-}\rangle$ can be derived to be
\begin{equation}
\begin{split}
\Delta Q(k) = -\frac{1}{2}\int_0^{T_{\phi}(\varepsilon)}\frac{\cosh(\phi(y)) + \cos(2k)}{\sqrt{2\varepsilon + 2\cos(2k)}} dy.
\end{split}
\end{equation}
Note that this is an even function, such that $\Delta Q_{-n} = \Delta Q_n$. Furthermore, it can be demonstrated that in the limit $\varepsilon \gg 1$ we get that $\Delta Q_n \ll 1$ for all $n \geq 2$. So in the limit where the width of the bands vanishes and the dispersion relation becomes flat, all finite-size corrections vanish for systems with $N \geq 2$. In Fig.~\ref{fig:finiteCorrections}(a), the Fourier coefficients have been plotted for different values of $\varepsilon$. It can indeed be seen that in the limit $\varepsilon \gg 1$, most Fourier coefficients are negligible. In Fig.~\ref{fig:finiteCorrections}(b), the finite-size corrections are shown as function of $N$. It can be seen that as both $\varepsilon$ and $N$ increase, the finite-size corrections start to vanish and become negligible w.r.t the numerical errors. Therefore, even for small systems it is possible to have a close to integer valued rapid-cycle Thouless pumping, where the corrections are actually independent of the rapidity.
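As a minimal numerical sketch (our own illustration, not part of the above analysis), the aliasing mechanism behind these finite-size corrections can be demonstrated with a hypothetical stand-in for $\Delta Q(k)$. For simplicity the zone is sampled as $\{2\pi m/N\}$, which avoids the phase factors associated with the $-\pi$ offset of $\mathscr{B}$; the test function \texttt{dQ} below is arbitrary and only mimics the $k$-dependence of the integrand above.
\begin{verbatim}
import numpy as np

def dQ(k, eps=5.0):
    # hypothetical smooth, even, 2*pi-periodic test function
    return -(np.cosh(1.0) + np.cos(2 * k)) / np.sqrt(2 * eps + 2 * np.cos(2 * k))

M = 4096
k_fine = 2 * np.pi * np.arange(M) / M
c = np.fft.fft(dQ(k_fine)) / M        # Fourier coefficients, indexed mod M

for N in (3, 5, 11, 21):              # odd system sizes
    direct = dQ(2 * np.pi * np.arange(N) / N).mean()  # (1/N) sum over the zone
    aliased = c[0] + sum(c[(m * N) % M] + c[(-m * N) % M]
                         for m in range(1, M // (2 * N)))
    print(N, direct, abs(aliased - direct))  # discrepancy should be tiny
\end{verbatim}
The printed discrepancies confirm that the finite-$N$ sum over the discretised zone differs from $\Delta Q_0$ precisely by the aliased Fourier coefficients $\Delta Q_{\pm mN}$.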
\begin{figure}
\centering
\includegraphics[width = 8.6cm]{Figures/fourierFinite.eps}
\caption{(a) Numerical calculation of the Fourier coefficients of the pumped charge for different values of $\varepsilon$. (b) Numerical calculation of the finite-size corrections as function of $N$. Both plots have been made with $\varepsilon = 1.1$ (solid), $\varepsilon = 5$ (dashed) and $\varepsilon = 1000$ (dotted).}
\label{fig:finiteCorrections}
\end{figure}
\section{Introduction}
The past few decades have been marked by the discovery of various systems where topological properties of the quasiparticle spectrum are connected with the
quantization of particle transport, for example Thouless pumping \cite{Thouless1983} or the integer quantum Hall effect \cite{Thouless1982, Niu1984}. In Thouless pumping, this integer valued particle transport is achieved by performing a non-contractible adiabatic loop through a non-degenerate parameter space. The amount of pumped charge can then be expressed by the Chern number associated with the Berry or Zak phase \cite{Berry1984, Zak1989, Xiao2010}. Although the original mathematics of Thouless pumping dates back more than 30 years, the effect has only recently been observed directly using ultracold bosonic atoms in an optical superlattice \cite{Lohse2015, Nakajima2016}.
The adiabaticity of the non-contractible loop is required to ensure the topological robustness of the Thouless pump. Generally, corrections to the quantization of particle transport arise when the parameter space is traversed at a finite frequency \cite{Wang2013, Privitera2018}. For special cases of Thouless pumping, such as parametric pumps \cite{Switkes1999, Brouwer1998, Altshuler1999, Levinson2001, Entin-Wohlman2002}, these non-adiabatic effects were studied \cite{Wang2013, Privitera2018, Ohkubo2008_1, Ohkubo2008_2, Cavaliere2009, Uchiyama2014, Watanabe2014}. In order to minimise corrections, strategies such as dissipation assisted pumping \cite{Arceci2020}, non-Hermitian Floquet engineering \cite{Hockendorf2020, Fedorova2020} and adiabatic shortcuts by external control \cite{Takahashi2020, Funo2020} were proposed. Recently however, a family of finite-frequency protocols on the Rice-Mele insulator have been constructed, in which all the quasi-excitations disappear altogether at the end of a rapid cycle, resulting in a perfectly quantized and noise-free particle transport outside of the adiabatic limit \cite{Malikis2021}. Although this resolves the issue of non-adiabatic breaking of topological quantization in an ideal homogeneous system, there might still be finite-size corrections \cite{Li2017} or corrections due to perturbations in the insulator, such as inhomogeneity due to an external potential. Understanding of such corrections is important in the context of
experimental realization of the rapid cycle pump,
for example, in an ultracold atomic system.
In this paper, starting from the Rice-Mele insulator as the zeroth-order approximation, we investigate the corrections in the expectation value of the pumped charge w.r.t quantization due to performing a rapid-cycle protocol inside a one-dimensional optical lattice. Specifically, we investigate finite-size corrections, introduce next-to-nearest neighbour hopping and add a weak harmonic potential to the system. A lattice variant of the Weyl transform \cite{Case2008} is constructed to retrieve analytical relations between the corrections and the potential curvature. It is shown that all the corrections decay exponentially with the protocol defining parameters, which could also be chosen such that the corrections vanish completely. Lastly, a discussion is given on the change of center of mass after a rapid pumping cycle. It is shown that the corrections due to the rapid-cycle protocol under the harmonic potential are most pronounced in the change in center of mass, which is the currently proposed and used method of measuring the charge pump \cite{Wang2013, Lohse2015, Nakajima2016}.
\section{Rapid-cycle Thouless pumping}
We begin with a recapitulation of the Rice-Mele model \cite{Rice1982}. This is a tight-binding chain consisting of $2N$ atoms, on which there are orthonormal positional states $|\alpha \rangle$ which are subject to periodic boundary conditions, i.e. $|\alpha + 2N\rangle = |\alpha\rangle$. In the single-particle subspace, the Hamiltonian of this model is given by
\begin{equation}
\begin{split}
\hat H_{RM}(p) = & \sum_{\alpha = 0}^{N-1}\bigg[ m\Big(|2\alpha\rangle \langle 2\alpha | - |2\alpha + 1 \rangle \langle 2\alpha + 1|\Big) \\
&+ \Big(t_1|2\alpha\rangle \langle 2\alpha + 1| + t_2|2\alpha - 1\rangle\langle 2\alpha| + h.c.\Big)\bigg],
\end{split}
\label{eq:RMHamiltonian}
\end{equation}
where $p = (m, t_1, t_2)$ are the tight-binding parameters. A graphical representation of this model is shown in Fig.~\ref{fig:riceMeleChain}. The periodicity ensures that the Hamiltonian~(\ref{eq:RMHamiltonian}) can be written in the reciprocal space
\begin{equation}
\hat H_{RM}(k, p) := \begin{bmatrix}
m & t_1e^{\frac{ik}{2}} + t_2^*e^{-\frac{ik}{2}}\\
t_1^*e^{-\frac{ik}{2}} + t_2e^{\frac{ik}{2}} & -m
\end{bmatrix},
\label{eq:RMHamiltonianReciprocal}
\end{equation}
where $k \in \mathscr{B} = \left\{-\pi + m\frac{2\pi}{N} \mid 0\leq m < N\right\}$, the discretised Brillouin zone. This Hamiltonian has two quasienergies $\epsilon_{\pm}(k, p)$ with the property $\epsilon_{-}(k, p) = -|\epsilon_+(k, p)|$. This creates two distinct energy bands which are separated by the energy gap $E_{gap} = 2\sqrt{m^2 + \delta^2}$, where $\delta = |t_1| - |t_2|$. If we consider the parameter space $\mathscr{P} = \{(m,t_1, t_2) \mid E_{gap} > 0\}$ where this energy gap is strictly positive, then this space has a non-trivial fundamental group. Considering a non-contractible loop $p : [0, T) \rightarrow \mathscr{P}$ through this parameter space, we can look at the evolution of the lower energy Bloch states $|u_-(k,\tau)\rangle$ according to the Schr\"odinger equation
\begin{equation}
\begin{split}
i\frac{d}{d\tau} |u_-(k,\tau)\rangle = \hat H_{RM}(k, p(\tau))|u_-(k, \tau)\rangle \text{ with}\\
\hat H_{RM}(k, p(0))|u_-(k,0)\rangle = \epsilon_-(k, p(0))|u_-(k,0)\rangle.
\end{split}
\end{equation}
If the path through the parameter space is adiabatic, i.e. traversed infinitely slowly, the Bloch states are ensured to be the lower eigenstates of the instantaneous Hamiltonian at all times and therefore no excitations will occur \cite{Born1928}. It can be shown that in the thermodynamic limit $N\rightarrow \infty$, the non-contractibility of the loop through the parameter space will then result in a non-zero pumped charge which is equal to the winding number of this loop around the degeneracy point \cite{Xiao2010}. This pumped charge can be directly related to the first Chern number associated with the Berry connection form \cite{Xiao2010, Berry1984}.
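As a quick numerical sanity check (a sketch of our own, with arbitrary parameter values), one can diagonalize the reciprocal-space Hamiltonian (\ref{eq:RMHamiltonianReciprocal}) on a fine $k$-grid and compare the minimal gap with $2\sqrt{m^2+\delta^2}$:
\begin{verbatim}
import numpy as np

def H_RM(k, m, t1, t2):
    off = t1 * np.exp(1j * k / 2) + np.conj(t2) * np.exp(-1j * k / 2)
    return np.array([[m, off], [np.conj(off), -m]])

m, t1, t2 = 0.3, 1.0, 0.7                  # arbitrary gapped parameters
ks = np.linspace(-np.pi, np.pi, 2001)
E_gap = min(np.ptp(np.linalg.eigvalsh(H_RM(k, m, t1, t2))) for k in ks)
print(E_gap, 2 * np.sqrt(m**2 + (abs(t1) - abs(t2))**2))  # should agree
\end{verbatim}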
\begin{figure}
\centering
\begin{tikzpicture}[
roundnode/.style={circle, draw=black!60, fill=white!100, very thick, minimum size=6mm},
]
\node (X) {$\hdots$};
\node[roundnode] (A) [right = 6mm of X] [label=below:$|0\rangle$] {$+m$};
\node[roundnode] (B) [right = 6mm of A][label=below:$|1\rangle$]{$-m$};
\node[roundnode] (C) [right = 6mm of B][label=below:$|2\rangle$]{$+m$};
\node[roundnode] (D) [right = 6mm of C][label=below:$|3\rangle$]{$-m$};
\node (Y) [right = 6mm of D]{$\hdots$};
\draw[->, bend left] (X) to node [midway, above] {$t_2^*$} (A);
\draw[->, bend left] (A) to node [midway, below] {$t_2$} (X);
\draw[->, bend left] (A) to node [midway, above] {$t_1^*$} (B);
\draw[->, bend left] (B) to node [midway, below] {$t_1$} (A);
\draw[->, bend left] (B) to node [midway, above] {$t_2^*$} (C);
\draw[->, bend left] (C) to node [midway, below] {$t_2$} (B);
\draw[->, bend left] (C) to node [midway, above] {$t_1^*$} (D);
\draw[->, bend left] (D) to node [midway, below] {$t_1$} (C);
\draw[->, bend left] (D) to node [midway, above] {$t_2^*$} (Y);
\draw[->, bend left] (Y) to node [midway, below] {$t_2$} (D);
\end{tikzpicture}
\caption{The tight-binding chain of the Rice-Mele model.}
\label{fig:riceMeleChain}
\end{figure}
Outside of the adiabatic limit, i.e. at finite frequencies, there is generally a correction to this integer valued pumped charge \cite{Privitera2018}. Recently however, a family of protocols were constructed which results in noise-free integer valued Thouless pumping at finite frequencies \cite{Malikis2021}. Here, we investigate one such protocol. Consider the space $\{(x,y)\} = \mathbb{R}^2$ and a real oscillating function $\phi(y)$ which is a solution to
\begin{equation}
\partial_y^2\phi + \sinh \phi = 0.
\end{equation}
The integrated form of this differential equation is given by
\begin{equation}
(\partial_y\phi)^2 + 2\cosh \phi = 2\varepsilon,
\end{equation}
where $\varepsilon \in \mathbb{R}_{>0}$ and the period of $\phi$ will be denoted by $T_\phi(\varepsilon)$. As the initial condition, we will choose $\phi(0) = 0$. It can be shown that this differential equation is equivalent to the zero curvature condition
\begin{equation}
\partial_y \hat A_x - \partial_x \hat A_y + \left[\hat A_x, \hat A_y\right] = 0
\label{eq:zeroCurvatureCond}
\end{equation}
for the anti-Hermitian matrix-valued vector fields
\begin{align}
\hat A_x &= \frac{1}{4}\begin{bmatrix} i\partial_y \phi & 2\cosh\left(\frac{\phi - ik}{2}\right)\\
-2\cosh\left(\frac{\phi + ik}{2}\right) & -i\partial_y \phi\end{bmatrix}\\
\hat A_y &= \frac{i}{4}\begin{bmatrix}
0 & 2\sinh\left(\frac{\phi - ik}{2}\right)\\
2\sinh\left(\frac{\phi + ik}{2}\right) & 0
\end{bmatrix}
\end{align}
where $k$ is a real valued parameter. The zero curvature condition (\ref{eq:zeroCurvatureCond}) implies the existence of two orthonormal globally well-defined solutions $| F_{\pm}\rangle \in \mathbb{C}^2$ of the system of equations
\begin{equation}
\partial_x | F_{\pm}\rangle =\hat A_x|F_{\pm}\rangle, \quad \partial_y |F_{\pm}\rangle = \hat A_y |F_{\pm}\rangle.
\label{eq:systemofequations}
\end{equation}
It should be noted that $|F_{\pm}\rangle$ depends on $x,y$ and $k$, which we will not write down explicitly in the rest of this paper. Let $b\in \mathbb{R}$, $T = \frac{2\pi}{b}$ and the differentiable path $\gamma \colon \left[0, T\right] \rightarrow \mathbb{R}^2$ given by
\begin{equation}
\gamma_x(\tau) = \tau, \quad \gamma_y(\tau) = \frac{T_\phi(\varepsilon)}{2\pi}\left[b\tau - \sin(b\tau)\right],
\label{eq:parameterPath}
\end{equation}
then we can define the matrix
\begin{equation}
\hat H_\gamma(k, \tau) = i \dot{\gamma}_x\hat A_x + i\dot{\gamma}_y\hat A_y
\label{eq:rapidRMHamiltonian}
\end{equation}
which coincides with the Rice-Mele Hamiltonian in reciprocal space (\ref{eq:RMHamiltonianReciprocal}). The solutions of the system of equations (\ref{eq:systemofequations}) will now evolve along the path $\gamma$ according to the Schr\"odinger equation
\begin{equation}
i\frac{d}{d\tau} |F_{\pm}\rangle = \hat H_{\gamma} |F_{\pm}\rangle.
\end{equation}
Furthermore, since $\dot{\gamma}_y(0) = \dot{\gamma}_y\left(T\right) = 0$, the solutions $|F_\pm\rangle$ at $\tau = 0$ and at $\tau = T$ are the eigenstates of $\hat H_{\gamma}$, where we choose $|F_-\rangle$ to correspond to the lower eigenvalue. This protocol will result in a non-contractible loop in $\mathscr{P}$, such that the energy gap remains positive. This energy gap does however change during the evolution. Therefore, we will consider the function $$s(\tau) = \int_0^{\tau}d\tau'\ E_{gap}(\tau')$$ and reparametrize the path $\gamma$ in Eq.~(\ref{eq:parameterPath}) by
\begin{equation}
\gamma(\tau) \mapsto \gamma
\left(s^{-1}(\tau)\right) \text{ and } T \mapsto s(T)
\label{eq:reparametrization}
\end{equation}
such that $E_{gap} = 1$ at all times. The non-contractibility of the loop in $\mathscr{P}$ and the fact that $|F_{-}\rangle$ is an eigenstate of the Hamiltonian at the start and end of the protocol will now result in a non-zero integer valued particle transport in the thermodynamic limit \cite{Malikis2021}.
Since this protocol works at finite frequencies, there are excitations of quasiparticles during the evolution. However, it makes sure that all of these excitations vanish at the end, such that the result after a rapid cycle is exactly the same as with an adiabatic cycle. This is true for all values of $b$ and $\varepsilon$, which are the only two parameters the protocol depends on. The parameter $\varepsilon$ determines the width of the valence band and the conduction band, i.e.
\begin{equation}
\max\{|\epsilon_\pm(k)|\} - \min\{|\epsilon_\pm(k)|\} = \frac{1}{2}\left[\sqrt{\frac{\varepsilon + 1}{\varepsilon - 1}} - 1\right],
\end{equation}
where it can be seen that the width of the bands becomes infinitely large in the limit $\varepsilon \downarrow 1$ and vanishes in the limit $\varepsilon \rightarrow \infty$. The parameter $b$ determines the steepness and therefore the period of the path $\gamma$ as in Eq.~(\ref{eq:parameterPath}). A larger value for $b$ will also result in a more rapid pumping cycle. One should note that the angular frequency is in fact a function of both $b$ and $\varepsilon$, since the reparametrization (\ref{eq:reparametrization}) depends on $\varepsilon$. In the rest of this paper, we will investigate this specific protocol. It should be noted that other protocols could result in different specific properties. It is however expected that the general properties are similar for all rapid-cycle protocols.
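For concreteness, the oscillating function $\phi(y)$ can be obtained numerically. The following sketch (our own illustration, with an arbitrary value of $\varepsilon$) integrates $\partial_y^2\phi+\sinh\phi=0$ with $\phi(0)=0$ and $\partial_y\phi(0)=\sqrt{2\varepsilon-2}$, which is consistent with the first integral, monitors the conserved quantity $(\partial_y\phi)^2+2\cosh\phi=2\varepsilon$, and estimates the period $T_\phi(\varepsilon)$ from the spacing of upward zero crossings.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eps = 5.0                                   # arbitrary, must exceed 1
sol = solve_ivp(lambda y, u: [u[1], -np.sinh(u[0])], (0.0, 20.0),
                [0.0, np.sqrt(2 * eps - 2)], rtol=1e-10, atol=1e-12,
                dense_output=True)
y = np.linspace(0.0, 20.0, 4001)
phi, dphi = sol.sol(y)
# deviation of the first integral from 2*eps; should be ~ 1e-9 or smaller
print(np.max(np.abs(dphi**2 + 2 * np.cosh(phi) - 2 * eps)))
# period estimate: spacing between consecutive upward zero crossings
zc = y[:-1][(phi[:-1] < 0) & (phi[1:] >= 0)]
print(np.diff(zc))                          # approximately constant, = T_phi
\end{verbatim}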
\begin{document}
\preprint{APS/123-QED}
\title{Rapid-cycle Thouless pumping in a one-dimensional optical lattice}
\author{K.J.M. Schouten}\email{koen.schouten@student.uva.nl}
\author{V. Cheianov}
\affiliation{Instituut-Lorentz, Universiteit Leiden, Leiden, The Netherlands}
\date{\today}
\begin{abstract}
An adiabatic cycle around a degeneracy point in the parameter space of a one-dimensional band insulator is known to result in an integer valued noiseless particle transport in the thermodynamic limit.
Recently, it was shown that in the case of
an infinite bipartite lattice the adiabatic Thouless protocol can be continuously deformed into a fine tuned finite-frequency cycle preserving the properties of noiseless quantized transport.
In this paper, we numerically investigate the implementation of such an ideal rapid-cycle Thouless pumping protocol in a one-dimensional optical lattice. It is shown that the rapidity will cause first order corrections due to next-to-nearest-neighbour hopping and second order corrections due to the addition of a harmonic potential. Lastly, the quantization of the change in center of mass of the particle distribution is investigated, and shown to have corrections in the first order of the potential curvature.
\end{abstract}
\maketitle
\input{Sections/introduction}
\input{Sections/protocol}
\input{Sections/finite-size-corrections}
\input{Sections/Optical-lattice}
\input{Sections/Harmonic-potential}
\input{Sections/density-distribution}
\input{Sections/Conclusion}
\section{Acknowledgements}
This publication is part of the project Adiabatic Protocols in Extended Quantum Systems, Project No 680-91-130, which is funded by the Dutch Research Council (NWO). We would like to thank Savvas Malikis for helpful discussions.
\section{Introduction} \label{SS1}
\label{intro}
The Riemann zeta function for ${\rm Re}(s)>1$ is defined by the
series
$$
\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}={}\sb {s+1}F\sb
s\left(\left.\atop{1,\ldots,1}{2,\ldots,2}\right|1\right),
$$
where
$$
{}\sb pF\sb q\left(\left.\atop{a_1,\ldots,
a_p}{b_1,\ldots,b_q}\right|z\right)=
\sum_{n=0}^{\infty}\frac{(a_1)_n\cdots (a_p)_n}{(b_1)_n\cdots
(b_q)_n}\frac{z^n}{n!}
$$
is the generalized hypergeometric function and $(a)_n$ is the
shifted factorial defined by $(a)_n=a(a+1)\cdots (a+n-1),$ $n\ge
1,$ and $(a)_0=1.$
In 1978, R.~Ap\'ery used the faster convergent series for $\zeta(3),$
\begin{equation}
\zeta(3)=\frac{5}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k^3\binom{2k}{k}}
\label{eq01}
\end{equation}
to derive the irrationality of this number \cite{po}.
The series (\ref{eq01}), first obtained by A.~A.~Markov \cite{ma} in 1890, converges
exponentially faster than the original series for $\zeta(3),$ since by Stirling's formula,
$$
\frac{1}{k^3\binom{2k}{k}}\sim \frac{\sqrt{\pi}}{k^{5/2}} \, 4^{-k} \qquad\quad (k\to +\infty).
$$
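This rate is easy to confirm numerically; the following sketch (not part of the original argument) sums the first 40 terms of (\ref{eq01}) with the mpmath library and compares with $\zeta(3)$:
\begin{verbatim}
from mpmath import mp, binomial, zeta, mpf

mp.dps = 30
s = mpf(0)
for k in range(1, 41):
    s += mpf(5) / 2 * (-1) ** (k - 1) / (k**3 * binomial(2 * k, k))
print(s)         # agrees with zeta(3) to about 40*log10(4) ~ 24 digits
print(zeta(3))
\end{verbatim}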
A general formula giving analogous Ap\'ery-like series
for all $\zeta(2n+3),$ $n\ge 0,$ was proved by Koecher \cite{ko}
(and independently in an expanded form by Leshchiner \cite{le}).
For $|a|<1,$ it reads
\begin{equation}
\sum_{n=0}^{\infty}\zeta(2n+3)a^{2n}=\sum_{k=1}^{\infty}\frac{1}{k(k^2-a^2)}=
\frac{1}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k^3\binom{2k}{k}}\,\,\frac{5k^2-a^2}{k^2-a^2}\,\prod_{m=1}^{k-1}\left(1-\frac{a^2}{m^2}\right).
\label{eq02}
\end{equation}
Expanding the right-hand side of (\ref{eq02}) by powers of $a^2$ and
comparing coefficients of $a^{2n}$ on both sides leads to the
Ap\'ery-like series for $\zeta(2n+3)$ \cite{le}:
\begin{equation}
\zeta(2n+3)=\frac{5}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^3\binom{2k}{k}}(-1)^n e_n^{(2)}(k)
+2\sum_{j=1}^n\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{2j+3}\binom{2k}{k}} (-1)^{n-j} e_{n-j}^{(2)}(k),
\label{eq03}
\end{equation}
where for positive integers $r,s,$
$$
e_r^{(s)}(k):=[\,t^r] \prod_{j=1}^{k-1}(1+j^{-s}t)=
\sum_{1\le j_1<j_2<\ldots<j_r\le k-1} (j_1j_2\cdots j_r)^{-s},
$$
and $[\,t^r]$ means the coefficient of $t^r.$
In particular, substituting $n=0$ in (\ref{eq03}) recovers Markov's formula
(\ref{eq01}) and setting $n=1,2$ gives the
following two formulas:
\begin{eqnarray}
\qquad\zeta(5)& = & 2\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^5\binom{2k}{k}}-\frac{5}{2}\sum_{k=1}^{\infty}
\frac{(-1)^{k+1}}{k^3\binom{2k}{k}}\sum_{j=1}^{k-1}\frac{1}{j^2}, \label{zeta5}\\
\zeta(7)& = & 2\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^7\binom{2k}{k}}-2\sum_{k=1}^{\infty}
\frac{(-1)^{k+1}}{k^5\binom{2k}{k}}\sum_{j=1}^{k-1}\frac{1}{j^2}+\frac{5}{2}
\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^3\binom{2k}{k}}\sum_{m=1}^{k-1}\frac{1}{m^2}\sum_{j=1}^{m-1}
\frac{1}{j^2}, \label{zeta71}
\end{eqnarray}
respectively. In 1996, inspired by this result, J.~Borwein and D.~Bradley \cite{bb1} carried out extensive
computer searches, based on integer relation algorithms, looking for additional zeta identities
of this sort. This led to the discovery of the new identity
\begin{equation}
\zeta(7)=\frac{5}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^7\binom{2k}{k}}+\frac{25}{2}
\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^3\binom{2k}{k}}\sum_{j=1}^{k-1}\frac{1}{j^4},
\label{zeta7}
\end{equation}
which is simpler than Koecher's formula for $\zeta(7),$
and similar identities for $\zeta(9),$ $\zeta(11),$ $\zeta(13),$ etc. This allowed them to conjecture
that certain of these identities, namely those for $\zeta(4n+3),$ are given by
the following generating function formula \cite{bb}:
\begin{equation}
\sum_{n=0}^{\infty}\zeta(4n+3)a^{4n}=\sum_{k=1}^{\infty}\frac{k}{k^4-a^4}
=\frac{5}{2}\sum_{k=1}^{\infty}\frac{(-1)^{k+1} k}{\binom{2k}{k}(k^4-a^4)}
\prod_{m=1}^{k-1}\left(\frac{m^4+4a^4}{m^4-a^4}\right), \quad |a|<1.
\label{4k3}
\end{equation}
The validity of (\ref{4k3}) was proved later by G.~Almkvist and A.~Granville \cite{algr} in 1999.
Expanding the right-hand side of (\ref{4k3}) in powers of $a^4$ gives the following Ap\'ery-like series
for $\zeta(4n+3)$ \cite{bb}:
\begin{equation}
\zeta(4n+3)=\frac{5}{2}\sum_{j=0}^n\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{4j+3}\binom{2k}{k}}
\sum_{r=0}^{n-j}4^rh_{n-j-r}^{(4)}(k) e_r^{(4)}(k),
\label{eq04}
\end{equation}
where
$$
h_r^{(s)}(k):=[\,t^r] \prod_{j=1}^{k-1}(1-j^{-s}t)^{-1}.
$$
In particular, substituting $n=0$ in (\ref{eq04}) gives (\ref{eq01}) and
putting $n=1$ yields (\ref{zeta7}).
It is easily seen that
for $n\ge 1,$ formula (\ref{eq04}) contains fewer summations than the corresponding formula
for $\zeta(4n+3)$ given by (\ref{eq03}).
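Formulas (\ref{zeta5}) and (\ref{zeta7}) can likewise be verified numerically; in the sketch below (our own illustration) the inner sums $\sum_{j<k}j^{-2}$ and $\sum_{j<k}j^{-4}$ are accumulated incrementally:
\begin{verbatim}
from mpmath import mp, binomial, zeta, mpf

mp.dps = 25
z5 = z7 = mpf(0)
H2 = H4 = mpf(0)          # running sums of 1/j^2 and 1/j^4 for j = 1..k-1
for k in range(1, 61):
    t = (-1) ** (k + 1) / (k**3 * binomial(2 * k, k))
    z5 += 2 * t / k**2 - mpf(5) / 2 * t * H2
    z7 += mpf(5) / 2 * t / k**4 + mpf(25) / 2 * t * H4
    H2 += mpf(1) / k**2
    H4 += mpf(1) / k**4
print(z5, zeta(5))
print(z7, zeta(7))
\end{verbatim}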
There exists a bivariate unifying formula for identities (\ref{eq02}) and (\ref{4k3})
\begin{equation}
\sum_{k=1}^{\infty}\frac{k}{k^4-x^2k^2-y^4}=\frac{1}{2}\sum_{k=1}^{\infty}
\frac{(-1)^{k+1}}{k\binom{2k}{k}}\frac{5k^2-x^2}{k^4-x^2k^2-y^4}
\prod_{m=1}^{k-1}\frac{(m^2-x^2)^2+4y^4}{m^4-x^2m^2-y^4}.
\label{2n4m3}
\end{equation}
It was originally conjectured by H.~Cohen and then proved by D.~Bradley \cite{b}
and, independently, by T.~Rivoal \cite{ri}. Their proof consists of reduction of
(\ref{2n4m3}) to a finite non-trivial combinatorial identity which can be proved
on the basis of Almkvist and Granville's work \cite{algr}. Another proof of (\ref{2n4m3})
based on application of WZ pairs was given by the authors in \cite{he2}.
Since
\begin{equation}
\sum_{k=1}^{\infty}\frac{k}{k^4-x^2k^2-y^4}=\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}
\binom{n+m}{n}\zeta(2n+4m+3)x^{2n}y^{4m}, \quad |x|^2+|y|^4<1,
\label{genf}
\end{equation}
the formula (\ref{2n4m3}) generates Ap\'ery-like series for all $\zeta(2n+4m+3),$
$n,m \ge 0,$ convergent at the geometric rate with ratio $1/4$ and contains, as particular cases,
both identities (\ref{eq02}) and (\ref{4k3}). Indeed, setting $x=a$ and $y=0$ yields
Koecher's identity (\ref{eq02}), and setting $x=0,$ $y=a$ yields the Borwein-Bradley identity
(\ref{4k3}). Putting
\begin{equation}
a^2:=\frac{x^2+\sqrt{x^4+4y^4}}{2}, \qquad b^2:=\frac{x^2-\sqrt{x^4+4y^4}}{2},
\label{a2b2}
\end{equation}
we can rewrite (\ref{2n4m3}) in a more symmetrical way
\begin{equation}
\sum_{k=1}^{\infty}\frac{k}{(k^2-a^2)(k^2-b^2)}=\frac{1}{2}\sum_{n=1}^{\infty}
\frac{(-1)^{n-1}(5n^2-a^2-b^2)(1\pm a\pm b)_{n-1}}{n\binom{2n}{n}(1\pm a)_n(1\pm b)_n}.
\label{ab}
\end{equation}
Here and below $(u\pm v\pm w)$ means that the product contains the factors $u+v+w,$
$u+v-w,$ $u-v+w,$ $u-v-w.$
In \cite{he2}, the authors showed that the generating function (\ref{genf}) also has
a much more rapidly convergent representation, namely
\begin{equation}
\sum_{k=1}^{\infty}\frac{k}{k^4-x^2k^2-y^4}=\frac{1}{2}\sum_{n=1}^{\infty}
\frac{(-1)^{n-1} r(n)}{n\binom{2n}{n}}
\frac{\prod_{m=1}^{n-1}((m^2-x^2)^2+4y^4)}{\prod_{m=n}^{2n}(m^4-x^2m^2-y^4)},
\label{fast}
\end{equation}
where
$$
r(n)=205n^6-160n^5+(32-62x^2)n^4+40x^2n^3+(x^4-8x^2-25y^4)n^2+10y^4n+y^4(x^2-2).
$$
The identity (\ref{fast}) produces accelerated series for all $\zeta(2n+4m+3),$ $n,m \ge 0,$
convergent at the geometric rate with ratio $2^{-10}.$ In particular, if $x=y=0$
we get Amdeberhan-Zeilberger's series \cite{az} for $\zeta(3),$
\begin{equation}
\zeta(3)=\frac{1}{2}\sum_{n=1}^{\infty}
\frac{(-1)^{n-1}(205n^2-160n+32)}{n^5\binom{2n}{n}^5}.
\label{saz}
\end{equation}
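The gain in convergence rate is easy to see numerically: twelve terms of (\ref{saz}) already give about $120\log_{10}2\approx 36$ correct digits of $\zeta(3)$, while twelve terms of (\ref{eq01}) give about $7$. A short mpmath sketch (our own illustration):
\begin{verbatim}
from mpmath import mp, binomial, zeta, mpf

mp.dps = 40
z = zeta(3)
markov = az = mpf(0)
for n in range(1, 13):
    markov += mpf(5) / 2 * (-1) ** (n - 1) / (n**3 * binomial(2 * n, n))
    az += mpf(1) / 2 * (-1) ** (n - 1) * (205 * n**2 - 160 * n + 32) \
        / (n**5 * binomial(2 * n, n) ** 5)
print(abs(markov - z), abs(az - z))   # roughly 4^(-12) versus 2^(-120)
\end{verbatim}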
It is worth pointing out that both identities (\ref{2n4m3}) and (\ref{fast}) were proved in \cite{he2}
by using the same Markov-WZ pair (see also \cite[p.~702]{he3} for the explicit expression), but with the
help of different summation formulas.
A more general form of the bivariate identity (\ref{2n4m3}) for the generating function
\begin{equation*}
\begin{split}
\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\binom{m+n}{n}&(A_0\zeta(2n+4m+4)
+B_0\zeta(2n+4m+3)+C_0\zeta(2n+4m+2))x^{2n}y^{4m}\\
&=\sum_{k=1}^{\infty}
\frac{A_0+B_0k+C_0k^2}{k^4-x^2k^2-y^4}, \qquad\qquad |x|^2+|y|^4<1,
\end{split}
\end{equation*}
where $A_0, B_0, C_0$ are arbitrary complex numbers, was proved in \cite{he2}
by means of the Markov-Wilf-Zeilberger theory. More precisely, we have
\begin{equation}
\sum_{k=1}^{\infty}
\frac{A_0+B_0k+C_0k^2}{k^4-x^2k^2-y^4}=\sum_{n=1}^{\infty}
\frac{d_n}{\prod_{m=1}^n(m^4-x^2m^2-y^4)},
\label{general}
\end{equation}
where
\begin{equation*}
\begin{split}
d_n&=\frac{(-1)^{n-1}B_0(5n^2-x^2)}{2n\binom{2n}{n}}\prod_{m=1}^{n-1}
((m^2-x^2)^2+4y^4) \\[3pt]
&+\frac{(40n+10)L_n+(35n^5-35n^3x^2+4n(3x^4+10y^4))L_{n-1}}{4(5n^2-2x^2)}
\end{split}
\end{equation*}
and $L_n$ is a solution of a certain second order linear difference equation with
polynomial coefficients in $n$ and $x, y$ with the initial values $L_0=C_0,$
$L_1=(5-2x^2)A_0/15+(5x^2-1-4(x^4+6y^4))C_0/30.$ If we take $A_0=C_0=0,$ $B_0=1$ in (\ref{general}),
then $L_n=0$ for all $n \ge 0$ and we get the bivariate identity (\ref{2n4m3}).
First results related to generating function identities
for even zeta values belong to Leshchiner \cite{le} who proved (in an expanded form)
that for $|a|<1,$
\begin{equation}
\sum_{n=0}^{\infty}\left(1-\frac{1}{2^{2n+1}}\right)\zeta(2n+2)a^{2n}=\sum_{n=1}^{\infty}
\frac{(-1)^{n-1}}{n^2-a^2}=\frac{1}{2}\sum_{k=1}^{\infty}\frac{1}{k^2\binom{2k}{k}}\,\,
\frac{3k^2+a^2}{k^2-a^2}\,\prod_{m=1}^{k-1}\left(1-\frac{a^2}{m^2}\right).
\label{eq05}
\end{equation}
Comparing constant terms on both sides of (\ref{eq05}) yields
$$
\zeta(2)=3\sum_{k=1}^{\infty}\frac{1}{k^2\binom{2k}{k}}.
$$
In 2006,
D.~Bailey, J.~Borwein and D.~Bradley \cite{bbb} proved another identity
\begin{equation}
\sum_{n=0}^{\infty}\zeta(2n+2)a^{2n}=
\sum_{k=1}^{\infty}\frac{1}{k^2-a^2}=
3\sum_{k=1}^{\infty}\frac{1}{\binom{2k}{k}(k^2-a^2)}
\prod_{m=1}^{k-1}\left(\frac{m^2-4a^2}{m^2-a^2}\right).
\label{2n2}
\end{equation}
It generates similar Ap\'ery-like series for the numbers $\zeta(2n+2),$
which are not covered by Leshchiner's result (\ref{eq05}).
In the same paper \cite{bbb}, a generating function producing fast
convergent series for the sequence $\zeta(2n+4),$ $n=0,1,2,\ldots,$ was found,
which for
$|a|<1,$ has the form
\begin{equation}
\frac{1}{2}\sum_{n=0}^{\infty}
\left(1-\frac{1}{2^{2n+3}}-\frac{3}{6^{2n+4}B_{2n+4}}\right)\zeta(2n+4)
a^{2n}=\sum_{k=1}^{\infty}\frac{1}{k^2\binom{2k}{k}(k^2-a^2)}
\prod_{m=1}^{k-1}\left(1-\frac{a^2}{m^2}\right), \label{eq06}
\end{equation}
where $B_{2n}\in {\mathbb Q}$ are the even indexed Bernoulli numbers generated by
$$
x\coth(x)=\sum_{n=0}^{\infty}B_{2n} \frac{(2x)^{2n}}{(2n)!}.
$$
It was shown that the left-hand side of (\ref{eq06}) represents a Maclaurin expansion of the function
$$
\frac{\pi a\csc(\pi a)+3\cos(\pi a/3)-4}{4a^4}.
$$
Comparing constant terms in (\ref{eq06}) implies that
$$
\zeta(4)=\frac{36}{17}\sum_{k=1}^{\infty}\frac{1}{k^4\binom{2k}{k}}.
$$
The identity (\ref{eq06}) gives a formula for $\zeta(2n+4)$ which for
$n\ge 0$ involves fewer summations than the corresponding formula
generated by (\ref{eq05}). Note that a unifying formula for identities generating even zeta values similar to
the bivariate formula (\ref{2n4m3}) for the odd cases is not known.
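Both central binomial evaluations above admit a one-line numerical check; the following sketch (our own illustration) uses mpmath:
\begin{verbatim}
from mpmath import mp, binomial, zeta, mpf, nsum, inf

mp.dps = 20
s2 = nsum(lambda k: 1 / (k**2 * binomial(2 * k, k)), [1, inf])
s4 = nsum(lambda k: 1 / (k**4 * binomial(2 * k, k)), [1, inf])
print(3 * s2, zeta(2))                # the zeta(2) evaluation
print(mpf(36) / 17 * s4, zeta(4))     # the zeta(4) evaluation
\end{verbatim}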
The Hurwitz zeta function defined by
$$
\zeta(s,v)=\sum_{k=0}^{\infty}\frac{1}{(k+v)^s}
$$
for $s\in {\mathbb C},$ ${\rm Re}\,s>1$ and $v\ne 0,-1,-2,\ldots$ is a generalization
of the Riemann zeta function $\zeta(s)=\zeta(s,1).$ In this paper, we prove a new identity for values
$\zeta(2n,v)$ which contains as particular cases Koecher's identity (\ref{eq02}), the Bailey-Borwein-Bradley
identity (\ref{2n2}), some special case of identity (\ref{2n4m3}) and many other interesting formulas
related to values of the Hurwitz zeta function. We also get extensions of identities (\ref{2n4m3}) and
(\ref{fast}) to values of the Hurwitz zeta function. The main tool we use here is a construction of new
Markov-WZ pairs. As application of our results, we prove several conjectures on supercongruences proposed by
J.~Guillera and W.~Zudilin \cite{gz}, and Z.-W.~Sun \cite{sun1, sun2}.
\section{Background} \label{SS2}
\vspace{0.3cm}
We start by recalling several definitions and known facts related to
the Markov-Wilf-Zeilberger theory (see \cite{ma, mo, moze}). A
function $H(n,k)$, in the integer variables $n$ and $k,$ is called
{\it hypergeometric} or {\it closed form (CF)} if the quotients
$$
\frac{H(n+1,k)}{H(n,k)} \qquad\mbox{and} \qquad
\frac{H(n,k+1)}{H(n,k)}
$$
are both rational functions of $n$ and $k.$ A hypergeometric
function that can be written as a ratio of products of factorials is
called {\it pure-hypergeometric.} A pair of CF functions $F(n,k)$
and $G(n,k)$ is called a {\it WZ pair} if
\begin{equation}
F(n+1,k)-F(n,k)=G(n,k+1)-G(n,k). \label{WZ}
\end{equation}
A {\it P-recursive} function is a function that satisfies a linear
recurrence relation with polynomial coefficients. If for a given
hypergeometric function $H(n,k),$ there exists a polynomial
$P(n,k)$ in $k$ of the form
$$
P(n,k)=a_0(n)+a_1(n)k+\cdots+a_L(n)k^L,
$$
for some non-negative integer $L,$ and P-recursive functions
$a_0(n), \ldots, a_L(n)$ such that
$$
F(n,k):=H(n,k)P(n,k)
$$
satisfies
(\ref{WZ}) with some function $G,$ then a pair $(F,G)$ is called a
{\it Markov-WZ pair} associated with the kernel $H(n,k)$ (MWZ pair
for short). We call $G(n,k)$ an {\it MWZ mate} of $F(n,k).$ If
$L=0,$ then $(F,G)$ is simply a WZ pair.
In 2005, M.~Mohammed \cite{mo} showed that for any
pure-hypergeometric kernel $H(n,k),$ there exists a non-negative
integer $L$ and a polynomial $P(n,k)$ as above such that
$F(n,k)=H(n,k)P(n,k)$ has an MWZ mate $G(n,k)=F(n,k)Q(n,k),$ where
$Q(n,k)$ is a ratio of two P-recursive functions. Paper \cite{moze}
is accompanied by the Maple package MarkovWZ which, for a given
$H(n,k),$ outputs the polynomial $P(n,k)$ and the function $G(n,k)$ as above.
From relation (\ref{WZ}) we get the following summation formulas.
\noindent {\bf Proposition A.} \cite[Theorem 2(b)]{mo} {\it Let
$(F,G)$ be an MWZ pair. If $\lim\limits_{n\to\infty}F(n,k)=0$ for
every $k\ge 0,$ then
\begin{equation}
\sum_{k=0}^{\infty}F(0,k)-\lim_{k\to\infty}\sum_{n=0}^{\infty}G(n,k)=
\sum_{n=0}^{\infty}G(n,0), \label{f1}
\end{equation}
whenever both sides converge.}
\noindent {\bf Proposition B.} \cite[Cor.~2]{mo} {\it Let $(F,G)$ be
an MWZ pair. If $\lim\limits_{k\to\infty}
\sum\limits_{n=0}^{\infty}G(n,k)=0,$ then
\begin{equation}
\sum_{k=0}^{\infty}F(0,k)= \sum_{n=0}^{\infty}(F(n,n)+G(n,n+1)),
\label{f2}
\end{equation}
whenever both sides converge.}
Formulas (\ref{f1}), (\ref{f2}) with an appropriate choice of
MWZ pairs can be used to convert a given hypergeometric series into
a different rapidly converging one.
To ensure wider applications of WZ pairs for proving hypergeometric identities
we use an approach due to I.~Gessel \cite{ge} (see also \cite[\S 7.3, 7.4]{pwz}).
It is based on the fact that if we have a WZ pair $(F,G),$ then we can easily find other WZ
pairs by the following rules.
\noindent {\bf Proposition C.} \cite[Th.~3.1]{ge} {\it Let $(F,G)$ be
a WZ pair.
{\rm(i)} \quad For any complex numbers $\alpha$ and $\beta,$
$(F(n+\alpha,k+\beta), G(n+\alpha,k+\beta))$
is a WZ pair.
{\rm(ii)} \quad For any complex number $\gamma,$ $(\gamma F(n,k), \gamma G(n,k))$ is a WZ pair.
{\rm (iii)} \quad If $p(n,k)$ is a gamma product such that $p(n+1,k)=p(n,k+1)=p(n,k)$ for all $n$
and $k$ for which $p(n,k)$ is defined, then
$
(p(n,k)F(n,k), p(n,k)G(n,k))
$
is a WZ pair.
{\rm (iv)} \quad $(F(-n,k), -G(-n-1,k))$ is a WZ pair.
{\rm (v)} \quad $(F(n,-k), -G(n,-k+1))$ is a WZ pair.
{\rm (vi)} \quad $(G(k,n), F(k,n))$ is a WZ pair.
}
The WZ pairs obtained from $(F,G)$ by any combination of {\rm(i)--(v)} are called the {\it associates}
of $(F,G).$ The WZ pair of the form {\rm(vi)} and all its associates are called the {\it duals}
of $(F,G).$
\section{The identities} \label{SS3}
\vspace{0.3cm}
\begin{theorem} \label{t1}
Let $a, \alpha, \beta\in {\mathbb C},$ $|a|<1,$ $\alpha\ne\beta,$ and $\alpha\pm a,$
$\beta\pm a$ be distinct from $0, -1, -2,\ldots.$ Then we have
\begin{equation}
\begin{split}
&\qquad\qquad\qquad
\frac{1}{\beta-\alpha}\sum_{n=0}^{\infty}a^{2n}(\zeta(2n+2,\alpha)-\zeta(2n+2,\beta)) \\[3pt]
&=
\sum_{n=1}^{\infty}\frac{(-1)^{n-1}
(1+\alpha-\beta)_{n-1}(1+\beta-\alpha)_{n-1}(1+2a)_{n-1}(1-2a)_{n-1}}
{(\alpha+a)_n(\alpha-a)_n(\beta+a)_n(\beta-a)_n} \\[3pt]
&\qquad\qquad\times\frac{(5n^2+3n(\alpha+\beta-2)+2(\alpha-1)(\beta-1)-2a^2)}{n\binom{2n}{n}}.
\label{eq07}
\end{split}
\end{equation}
\end{theorem}
\begin{proof}
By the definition of the Hurwitz zeta function, we have
\begin{equation}
\frac{1}{\beta-\alpha}\sum_{n=0}^{\infty}a^{2n}(\zeta(2n+2,\alpha)-\zeta(2n+2,\beta))
=\sum_{k=0}^{\infty}\frac{2k+\alpha+\beta}{((k+\alpha)^2-a^2)((k+\beta)^2-a^2)}.
\label{eq08}
\end{equation}
Now define a Markov kernel $H(n,k)$ by the formula
$$
H(n,k)=\frac{(\alpha+a)_k(\alpha-a)_k(\beta+a)_k(\beta-a)_k(n+2k+\alpha+\beta)}
{(\alpha+a)_{n+k+1}(\alpha-a)_{n+k+1}(\beta+a)_{n+k+1}(\beta-a)_{n+k+1}}.
$$
Applying the Maple package Markov-WZ we get the associated WZ pair
$$
F(n,k)=H(n,k) \frac{(-1)^n(1+\alpha-\beta)_n(1+\beta-\alpha)_n(1+2a)_n(1-2a)_n}{\binom{2n}{n}},
$$
$$
G(n,k)=F(n,k) \frac{5n^2+n(3\alpha+3\beta+4)+2\alpha\beta-2a^2+(2k+1)(1+\alpha+\beta)+2k(k+3n)}
{2(2n+1)(n+2k+\alpha+\beta)}.
$$
Now by Proposition A, we obtain
$$
\sum_{k=0}^{\infty}F(0,k)=\sum_{n=0}^{\infty}G(n,0),
$$
which implies (\ref{eq07}).
\end{proof}
Multiplying both sides of (\ref{eq07}) by $\beta-\alpha$ and letting $\beta$ tend to infinity
we get an extension of the Bailey-Borwein-Bradley identity to values of the Hurwitz zeta function:
\begin{equation}
\sum_{n=0}^{\infty}a^{2n}\zeta(2n+2,\alpha)=\sum_{n=1}^{\infty}
\frac{(3n+2\alpha-2)(1+2a)_{n-1}(1-2a)_{n-1}}{n\binom{2n}{n}(\alpha+a)_n(\alpha-a)_n}.
\label{eq09}
\end{equation}
Setting $\alpha=1$ in (\ref{eq09}) yields the Bailey-Borwein-Bradley identity (\ref{2n2}).
Replacing $a$ by $a/2$ and $\alpha,$ $\beta$ by $1+a/2,$ $1-a/2,$ respectively, in (\ref{eq07}) and taking into
account (\ref{eq08}), we get Koecher's identity (\ref{eq02}).
Letting $\beta$ tend to $\alpha$ in (\ref{eq07}) and using the equality
$$
\frac{d}{dv}\zeta(s,v)=-s\zeta(s+1,v),
$$
we get the following.
\begin{corollary} \label{c1}
Let $a, \alpha\in {\mathbb C},$ $|a|<1,$ and $\alpha\pm a\ne 0,-1,-2, \ldots.$
Then
\begin{equation*}
\begin{split}
&\qquad\qquad\qquad
\sum_{n=0}^{\infty}(n+1)\zeta(2n+3,\alpha) a^{2n}=\sum_{k=0}^{\infty}\frac{k+\alpha}{((k+\alpha)^2-a^2)^2} \\[3pt]
&=\frac{1}{2}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}(5n^2+6n(\alpha-1)+2(\alpha-1)^2-2a^2)}{n\binom{2n}{n}}
\frac{(n-1)!^2(1+2a)_{n-1}(1-2a)_{n-1}}{(\alpha+a)_n^2(\alpha-a)_n^2}.
\end{split}
\end{equation*}
\end{corollary}
Taking $\alpha=1$ in Corollary \ref{c1}, we get the following identity for odd zeta values.
\begin{corollary} \label{c2}
Let $a\in {\mathbb C},$ $|a|<1.$ Then
\begin{equation}
\sum_{n=1}^{\infty}n\zeta(2n+1)a^{2n-2}=\sum_{k=1}^{\infty}\frac{k}{(k^2-a^2)^2}
=\frac{1}{2}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}(5n^2-a^2)}{n\binom{2n}{n}(n^2-a^2)^2}
\prod_{m=1}^{n-1}\frac{1-4a^2/m^2}{(1-a^2/m^2)^2}.
\label{eq10}
\end{equation}
\end{corollary}
Note that the right-hand side equality of (\ref{eq10}) also follows from the bivariate identity
(\ref{2n4m3}) or (\ref{ab}) as was shown by D.~Bradley (see \cite[Cor.~1]{b}).
It is clear that identity (\ref{eq10}) gives formulas for odd zeta values which are linear combinations
of series generated by the bivariate identity (\ref{2n4m3}).
Thus comparing constant terms on both sides of (\ref{eq10}) gives Ap\'ery's
series (\ref{eq01}) for $\zeta(3).$ Similarly, comparing coefficients of $a^2$ gives formula (\ref{zeta5})
for $\zeta(5).$ It produces the following complicated expression for $\zeta(7):$
\begin{equation*}
\begin{split}
\zeta(7)&=\frac{11}{6}\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k^7\binom{2k}{k}}-\frac{8}{3}\sum_{k=1}^{\infty}
\frac{(-1)^{k-1}}{k^5\binom{2k}{k}}\sum_{j=1}^{k-1}\frac{1}{j^2}-\frac{25}{6}\sum_{k=1}^{\infty}
\frac{(-1)^{k-1}}{k^3\binom{2k}{k}}\sum_{j=1}^{k-1}\frac{1}{j^4} \\
&+\frac{10}{3}\sum_{k=1}^{\infty}
\frac{(-1)^{k-1}}{k^3\binom{2k}{k}}\sum_{j=1}^{k-1}\frac{1}{j^2}\sum_{m=1}^{j-1}\frac{1}{m^2},
\end{split}
\end{equation*}
which can be written as
$$
\zeta(7)=\frac{1}{3}(4K-B),
$$
where $K$ and $B$ are right-hand sides of formulas (\ref{zeta71}) and (\ref{zeta7}), respectively.
More generally, if we denote
$$
g_r^{(s)}(k):=[\,t^r]\prod_{j=1}^{k-1}(1-j^{-s}t)^{-2},
$$
then taking into account that
$$
\frac{5k^2-2a^2}{(k^2-a^2)^2}=\frac{2}{1-a^2/k^2}+\frac{3}{(1-a^2/k^2)^2}=\sum_{j=0}^{\infty}
(3j+5)\frac{a^{2j}}{k^{2j}}
$$
and comparing the coefficients of $a^{2n}$ on both sides of (\ref{eq10}), we get
\begin{corollary} \label{c3}
Let $n$ be a non-negative integer. Then
\begin{equation}
\zeta(2n+3)=\frac{1}{2n+2}\sum_{j=0}^n(3j+5)\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k^{2j+3}\binom{2k}{k}}
\sum_{r=0}^{n-j}(-4)^re_r^{(2)}(k) g_{n-j-r}^{(2)}(k).
\label{eq11}
\end{equation}
\end{corollary}
Consider several other particular cases of Theorem \ref{t1}. Replacing $a$ by $a/2,$ $\alpha$ by $1/2,$
and $\beta$ by $1$ in (\ref{eq07}) and noting that
$$
\sum_{k=0}^{\infty}\frac{2k+3/2}{((k+1/2)^2-a^2/4)((k+1)^2-a^2/4)}=8\sum_{n=1}^{\infty}
\frac{(-1)^{n-1}}{n^2-a^2},
$$
we get the following identity.
\begin{corollary} \label{c4}
Let $a$ be a complex number which is not a non-zero integer. Then
$$
\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^2-a^2}=\frac{1}{4}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}(10n^2-3n-a^2)}
{n(2n-1)(n^2-a^2)\binom{2n}{n}\prod_{j=1}^n(1-a^2/(n+j)^2)}.
$$
In particular,
$$
\zeta(2)=\frac{1}{2}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}(10n-3)}{n^2(2n-1)\binom{2n}{n}}.
$$
\end{corollary}
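A numerical check of the $\zeta(2)$ series in Corollary \ref{c4} (our own illustration; the series converges at the geometric rate with ratio $1/4$):
\begin{verbatim}
from mpmath import mp, binomial, zeta, mpf

mp.dps = 20
s = sum(mpf(1) / 2 * (-1) ** (n - 1) * (10 * n - 3)
        / (n**2 * (2 * n - 1) * binomial(2 * n, n)) for n in range(1, 40))
print(s, zeta(2))
\end{verbatim}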
Substituting $\alpha=1/3,$ $\beta=2/3,$ $a=0$ in Theorem \ref{t1}, we get
$$
\zeta(2,1/3)-\zeta(2,2/3)=\frac{1}{3}\sum_{n=1}^{\infty}
\frac{(-1)^{n-1} n!^2 (15n-4)}{n^3\binom{2n}{n}(1/3)_n(2/3)_n}.
$$
Now observing that
$$
\left(\frac{1}{3}\right)_n\left(\frac{2}{3}\right)_n=\frac{(3n)!}{27^n n!}
$$
and
$$
\zeta(2,1/3)-\zeta(2,2/3)=9\sum_{n=1}^{\infty}\frac{(\frac{n}{3})}{n^2}=:9K
$$
(where $(\frac{n}{p})$ is the Legendre symbol), we get the following formula
$$
K=\sum_{n=1}^{\infty}\frac{(15n-4)(-27)^{n-1}}{n^3\binom{2n}{n}^2\binom{3n}{n}},
$$
which was conjectured by Z.-W.~Sun in \cite{sun1}.
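This formula is readily checked against the Hurwitz zeta representation $K=(\zeta(2,1/3)-\zeta(2,2/3))/9$; the sketch below (our own illustration) uses mpmath's two-argument \texttt{zeta}:
\begin{verbatim}
from mpmath import mp, binomial, zeta, mpf

mp.dps = 25
K_series = sum(mpf(15 * n - 4) * (-27) ** (n - 1)
               / (n**3 * binomial(2 * n, n) ** 2 * binomial(3 * n, n))
               for n in range(1, 45))
K_hurwitz = (zeta(2, mpf(1) / 3) - zeta(2, mpf(2) / 3)) / 9
print(K_series, K_hurwitz)            # should agree to working precision
\end{verbatim}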
Substituting $\alpha=1/4,$ $\beta=3/4$ recovers Theorem 3 from \cite{he2.5}
and in particular (when $a=0$), it gives the following formula for Catalan's constant
$G:=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)^2}:$
$$
G=\frac{1}{64}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}256^n(40n^2-24n+3)}{\binom{4n}{2n}^2\binom{2n}{n}n^3(2n-1)}.
$$
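A numerical check of this series for Catalan's constant (our own illustration; mpmath provides $G$ as the built-in constant \texttt{catalan}):
\begin{verbatim}
from mpmath import mp, binomial, catalan, mpf

mp.dps = 25
s = sum(mpf(1) / 64 * (-1) ** (n - 1) * mpf(256) ** n
        * (40 * n**2 - 24 * n + 3)
        / (binomial(4 * n, 2 * n) ** 2 * binomial(2 * n, n)
           * n**3 * (2 * n - 1))
        for n in range(1, 60))
print(s, catalan)
\end{verbatim}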
Applying Proposition B to the Markov-WZ pair found in the proof of Theorem \ref{t1}, we get the following
identity which generates Ap\'ery-like series for the differences $\zeta(2n+2,\alpha)-\zeta(2n+2,\beta)$
convergent at the geometric rate with ratio $2^{-10}.$
\begin{theorem} \label{t2}
Let $a, \alpha, \beta\in {\mathbb C},$ $|a|<1,$ $\alpha\ne \beta,$ and $\alpha\pm a,$ $\beta\pm a$
be distinct from $0,-1,-2,\ldots.$ Then
\begin{equation}
\begin{split}
&\qquad\qquad\quad\frac{1}{\beta-\alpha}\sum_{n=0}^{\infty}(\zeta(2n+2,\alpha)-\zeta(2n+2,\beta))a^{2n} \\
&=\sum_{n=1}^{\infty}\frac{(-1)^{n-1} p_{\alpha,\beta}(n)}{n\binom{2n}{n}}
\frac{\prod_{j=1}^{n-1}(j^2-(\alpha-\beta)^2)(j^2-4a^2)}{\prod_{j=n-1}^{2n-1}((j+\alpha)^2-a^2)((j+\beta)^2-a^2)},
\label{eq12}
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
&p_{\alpha,\beta}(n)=2(2n-1)(3n+\alpha+\beta-3)((2n-1+\alpha)^2-a^2)((2n-1+\beta)^2-a^2) \\
&+
((n+\alpha-1)^2-a^2)((n+\beta-1)^2-a^2)(13n^2+5n(\alpha+\beta-2)+2((1-\alpha)(1-\beta)-a^2)).
\end{split}
\end{equation*}
\end{theorem}
Setting $\alpha=1/3,$ $\beta=2/3,$ $a=0$ in Theorem \ref{t2}, we get the following fast converging series
for the constant $K:$
$$
K=\sum_{n=1}^{\infty}\frac{(-27)^{n-1}(5535n^3-4689n^2+1110n-80)}{n^3(3n-1)(3n-2)\binom{6n}{3n}^2\binom{3n}{n}}.
$$
Setting $\alpha=1/4,$ $\beta=3/4,$ we recover Theorem 4 from \cite{he2.5}.
Multiplying both sides of (\ref{eq12}) by $\beta-\alpha$ and letting $\beta$ tend to infinity
we obtain an extension of Theorem 2 from \cite{he1} to values of the Hurwitz zeta function
\begin{equation}
\sum_{n=0}^{\infty}\zeta(2n+2,\alpha)a^{2n}=\sum_{n=1}^{\infty}\frac{p(n)}{n\binom{2n}{n}}
\frac{\prod_{m=1}^{n-1}(m^2-4a^2)}{\prod_{m=n-1}^{2n-1}((m+\alpha)^2-a^2)},
\label{dobavl}
\end{equation}
where $p(n)=2(2n-1)((2n-1+\alpha)^2-a^2)+(5n+2\alpha-2)((n+\alpha-1)^2-a^2).$
In particular, setting $\alpha=1,$ $a=0$ in (\ref{dobavl}) we get Zeilberger's series \cite[\S 12]{zeil}
for $\zeta(2),$
$$
\zeta(2)=\sum_{n=1}^{\infty}\frac{21n-8}{n^3\binom{2n}{n}^3}.
$$
Replacing $a$ by $a/2$ and $\alpha, \beta$ by $1+a/2,$ $1-a/2,$ respectively, we recover Theorem 4 from
\cite{he1}.
Letting $\beta$ tend to $\alpha$ in (\ref{eq12}), we get the following
\begin{corollary} \label{c5}
Let $a, \alpha\in {\mathbb C},$ $|a|<1,$ and $\alpha\pm a$
be distinct from $0,-1,-2,\ldots.$ Then
\begin{equation*}
\begin{split}
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\sum_{k=1}^{\infty}k\zeta(2k+1,\alpha)a^{2k-2} \\
&=\frac{1}{2}\sum_{n=1}^{\infty}
\frac{(-1)^{n-1}p_{\alpha}(n)}{n\binom{2n}{n}^5((n+\alpha-1)^2-a^2)^2((n+\alpha-1)^2-a^2/4)^2}
\prod_{m=1}^{n-1}\frac{1-4a^2/m^2}{((1+\frac{\alpha-1}{m+n})^2-\frac{a^2}{(m+n)^2})^2},
\end{split}
\end{equation*}
where
\begin{equation*}
\begin{split}
p_{\alpha}(n):=p_{\alpha,\alpha}(n)&=2(2n-1)(3n+2\alpha-3)((2n+\alpha-1)^2-a^2)^2 \\
&+
((n+\alpha-1)^2-a^2)^2(13n^2+10n(\alpha-1)+2((1-\alpha)^2-a^2)).
\end{split}
\end{equation*}
\end{corollary}
Setting $\alpha=1$ in Corollary \ref{c5} we get the following identity.
\begin{corollary} \label{c6}
Let $a\in {\mathbb C},$ $|a|<1.$ Then
$$
\sum_{k=1}^{\infty}k\zeta(2k+1)a^{2k-2}=\frac{1}{2}\sum_{n=1}^{\infty}
\frac{(-1)^{n-1}p(n)}{n\binom{2n}{n}^5(n^2-a^2)^2(n^2-a^2/4)^2}
\prod_{m=1}^{n-1}\frac{1-4a^2/m^2}{(1-a^2/(m+n)^2)^2},
$$
where $p(n)=2(2n-1)(3n-1)(4n^2-a^2)^2+(n^2-a^2)^2(13n^2-2a^2).$
\end{corollary}
Setting $a=0$ in Corollary \ref{c6} we get Amdeberhan-Zeilberger's series (\ref{saz}) for $\zeta(3).$
The next theorem gives a generalization of identity (\ref{ab}).
\begin{theorem} \label{t3}
Let $\alpha, a, b\in {\mathbb C}$ and $\alpha\pm a,$ $\alpha\pm b$
be distinct from $0,-1,-2,\ldots.$ Then the following identity holds:
\begin{equation}
\begin{split}
&\qquad\qquad\qquad\qquad\sum_{k=0}^{\infty}\frac{k+\alpha}{((k+\alpha)^2-a^2)((k+\alpha)^2-b^2)} \\
&=\frac{1}{2}
\sum_{n=1}^{\infty}\frac{(-1)^{n-1}(1\pm a\pm b)_{n-1}(5n^2-6n(1-\alpha)+2(1-\alpha)^2-a^2-b^2)}{n\binom{2n}{n}
(\alpha\pm a)_n(\alpha\pm b)_n}.
\label{eq13}
\end{split}
\end{equation}
\end{theorem}
\begin{proof}
Taking the kernel
$$
H(n,k)=\frac{(\alpha+a)_k(\alpha-a)_k(\alpha+b)_k(\alpha-b)_k(n+2k+2\alpha)}{(\alpha+a)_{n+k+1}
(\alpha-a)_{n+k+1}(\alpha+b)_{n+k+1}(\alpha-b)_{n+k+1}}
$$
and applying the Maple package MarkovWZ we get that
$$
F(n,k)=\frac{(-1)^n}{\binom{2n}{n}}(1\pm a\pm b)_n H(n,k)
$$
and
$$
G(n,k)=F(n,k)\frac{5n^2+6\alpha n+4n+2\alpha+2\alpha^2+1-a^2-b^2+k(2k+6n+4\alpha+2)}{2(2n+1)(n+2k+2\alpha)}
$$
give a WZ pair, i.e.,
$$
F(n+1,k)-F(n,k)=G(n,k+1)-G(n,k).
$$
Now by Proposition A, we get
$$
\sum_{k=0}^{\infty}F(0,k)=\sum_{n=0}^{\infty}G(n,0),
$$
which implies (\ref{eq13}).
\end{proof}
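The defining relation (\ref{WZ}) for the pair $(F,G)$ above can be sanity-checked numerically at generic parameter values. The sketch below (our own illustration) uses mpmath's rising factorial \texttt{rf}; it should print values close to zero provided the pair is transcribed correctly.
\begin{verbatim}
from mpmath import mp, rf, binomial, mpf

mp.dps = 30
alpha, a, b = mpf('0.7'), mpf('0.21'), mpf('0.13')  # arbitrary generic values

def H(n, k):
    num = rf(alpha + a, k) * rf(alpha - a, k) * rf(alpha + b, k) * rf(alpha - b, k)
    den = (rf(alpha + a, n + k + 1) * rf(alpha - a, n + k + 1)
           * rf(alpha + b, n + k + 1) * rf(alpha - b, n + k + 1))
    return num * (n + 2 * k + 2 * alpha) / den

def F(n, k):
    poch = (rf(1 + a + b, n) * rf(1 + a - b, n)
            * rf(1 - a + b, n) * rf(1 - a - b, n))
    return (-1) ** n * poch * H(n, k) / binomial(2 * n, n)

def G(n, k):
    q = (5 * n**2 + 6 * alpha * n + 4 * n + 2 * alpha + 2 * alpha**2 + 1
         - a**2 - b**2 + k * (2 * k + 6 * n + 4 * alpha + 2))
    return F(n, k) * q / (2 * (2 * n + 1) * (n + 2 * k + 2 * alpha))

for n, k in [(0, 0), (1, 2), (3, 1), (4, 5)]:
    print(F(n + 1, k) - F(n, k) - (G(n, k + 1) - G(n, k)))   # ~ 0
\end{verbatim}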
Making the substitution (\ref{a2b2}) in (\ref{eq13}) we get a generalization of Cohen's identity
to values of the Hurwitz zeta function.
\begin{corollary} \label{c7}
Let $x, y, \alpha\in {\mathbb C},$ $|x|^2+|y|^4<1$ and $\alpha\ne 0,-1,-2,\ldots.$ Then
\begin{equation*}
\begin{split}
&\sum_{k=0}^{\infty}\frac{k+\alpha}{(k+\alpha)^4-x^2(k+\alpha)^2-y^4}=\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}
\binom{n+m}{n}\zeta(2n+4m+3,\alpha)x^{2n}y^{4m} \\
&=\!\frac{1}{2}\!\sum_{n=1}^{\infty}
\frac{(-1)^{n-1}(5n^2-6n(1-\alpha)+2(1-\alpha)^2-x^2)}{n\binom{2n}{n}((n+\alpha-1)^4-x^2(n+\alpha-1)^2-y^4)}
\prod_{j=1}^{n-1}\frac{(j^2-x^2)^2+4y^4}{(j+\alpha-1)^4-x^2(j+\alpha-1)^2-y^4}.
\end{split}
\end{equation*}
\end{corollary}
Setting $\alpha=1/2,$ $x=y=0$ in Corollary \ref{c7} we get the following formula:
$$
\zeta(3)=\frac{1}{28}\sum_{n=1}^{\infty}\frac{(-1)^{n-1}(10n^2-6n+1) 256^n}{n^5\binom{2n}{n}^5}.
$$
Applying Proposition B to the Markov-WZ pair used in the proof of Theorem \ref{t3} we get the following identity.
\begin{theorem} \label{t4}
Let $\alpha, a, b\in{\mathbb C}$ and $\alpha\pm a,$ $\alpha\pm b\ne 0,-1,-2,\ldots.$
Then
\begin{equation*}
\begin{split}
&\qquad\qquad\sum_{k=0}^{\infty}\frac{k+\alpha}{((k+\alpha)^2-a^2)((k+\alpha)^2-b^2)} \\
&=\frac{1}{2}
\sum_{n=1}^{\infty}\frac{(-1)^{n-1}(1\pm a\pm b)_{n-1}(\alpha\pm a)_{n-1}(\alpha\pm b)_{n-1}}{n\binom{2n}{n}
(\alpha\pm a)_{2n}(\alpha\pm b)_{2n}}\,q(n),
\end{split}
\end{equation*}
where
\begin{equation*}
\begin{split}
q(n)&=2(2n-1)(3n+2\alpha-3)((2n+\alpha-1)^2-a^2)((2n+\alpha-1)^2-b^2) \\
&+((n+\alpha-1)^2-a^2)((n+\alpha-1)^2-b^2)(13n^2
-10n(1-\alpha)+2(1-\alpha)^2-a^2-b^2).
\end{split}
\end{equation*}
\end{theorem}
Making the change of variables (\ref{a2b2}) in Theorem \ref{t4} we get a generalization of the identity (\ref{fast})
to values of the Hurwitz zeta function.
\begin{corollary} \label{c8}
Let $x, y, \alpha\in {\mathbb C},$ $|x|^2+|y|^4<1$ and $\alpha\ne 0,-1,-2,\ldots.$ Then
\begin{equation}
\begin{split}
&\qquad\quad\qquad\qquad\qquad\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\binom{n+m}{n}\zeta(2n+4m+3,\alpha)x^{2n}y^{4m} \\
&=\frac{1}{2}\sum_{n=1}^{\infty}
\frac{(-1)^{n-1} Q(n)}{n\binom{2n}{n}}
\frac{\prod_{j=1}^{n-1}((j^2-x^2)^2+4y^4)}{\prod_{j=0}^n((n+j+\alpha-1)^4-x^2(n+j+\alpha-1)^2-y^4)},
\label{eq14}
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
Q(n)&=2(2n-1)(3n+2\alpha-3)((2n+\alpha-1)^4-x^2(2n+\alpha-1)^2-y^4) \\
&+((n+\alpha-1)^4-x^2(n+\alpha-1)^2-y^4)(13n^2
-10n(1-\alpha)+2(1-\alpha)^2-x^2).
\end{split}
\end{equation*}
\end{corollary}
Formula (\ref{eq14}) produces accelerated series for the values $\zeta(2n+4m+3,\alpha),$ $n,m\ge 0,$
convergent at the geometric rate with ratio $2^{-10}.$
{\bf Remark.} Note that Theorem \ref{t4} can also be obtained from Proposition A applied to the Markov-WZ pair
associated with the kernel
\begin{equation}
H_1(n,k)=\frac{(\alpha+a)_{k+n}(\alpha-a)_{k+n}(\alpha+b)_{k+n}(\alpha-b)_{k+n}}
{(\alpha+a)_{2n+k+1}(\alpha-a)_{2n+k+1}(\alpha+b)_{2n+k+1}(\alpha-b)_{2n+k+1}}\,(3n+2k+2\alpha).
\label{H1nk}
\end{equation}
\vspace{0.3cm}
\section{Hypergeometric reformulations} \label{SS4}
\vspace{0.1cm}
Theorems \ref{t1}, \ref{t3} admit nice reformulations in terms of the hypergeometric and digamma functions
which can be useful in their own right.
Let us recall the definition of the digamma function that is the logarithmic derivative of the Gamma function
$$
\psi(z)=\frac{d}{dz}\log\Gamma(z)=-\gamma+\sum_{n=0}^{\infty}\left(\frac{1}{n+1}-\frac{1}{n+z}\right).
$$
Obviously, $\psi(1)=-\gamma,$ where $\gamma$ is Euler's constant. The function $\psi(z)$ is single-valued and
analytic in the whole complex plane except for the points $z=-m,$ $m=0,1,2,\ldots,$ where it has simple poles.
Its connection with the Hurwitz zeta function is given by the formula
$$
\psi^{(k)}(z)=\frac{d^k}{dz^k}\psi(z)=(-1)^{k+1} k!\zeta(k+1,z).
$$
Now it is easily seen that Theorems \ref{t1} and \ref{t3} are equivalent to the following statements.
\begin{theorem} \label{t5}
Let $a, \alpha, \beta$ be complex numbers such that $\alpha\pm a,$ $\beta\pm a$ be distinct from $0,-1, -2,\ldots.$
Then we have
\begin{equation*}
\begin{split}
\psi(\beta+a)&-\psi(\alpha+a)+\psi(\alpha-a)-\psi(\beta-a)=
\frac{a(\alpha-\beta)((1+\alpha)(1+\beta)+\alpha\beta-2a^2)}{(\alpha^2-a^2)(\beta^2-a^2)} \\[5pt]
&\times
{}\sb 8F\sb 7\left(\left.\atop{1, 1, 1+\alpha-\beta, 1+\beta-\alpha, 1\pm 2a, \frac{7}{5}+
\frac{3(\alpha+\beta)}{10}\pm \frac{\sqrt{D}}{10}
}{\frac{3}{2}, 1+\alpha\pm a, 1+\beta\pm a, \frac{2}{5}+\frac{3(\alpha+\beta)}{10}\pm\frac{\sqrt{D}}{10}}\right|-\frac{1}{4}\right),
\end{split}
\end{equation*}
where $D=40a^2+9(\alpha-\beta)^2-4(\alpha-1)(\beta-1).$
\end{theorem}
\begin{theorem} \label{t6}
Let $\alpha, a, b$ be complex numbers such that $\alpha\pm a,$ $\alpha\pm b$ be distinct from $0,-1, -2,\ldots.$
Then we have
\begin{equation*}
\begin{split}
\psi(\alpha+b)&+\psi(\alpha-b)-\psi(\alpha+a)-\psi(\alpha-a)=
\frac{(a^2-b^2)((1+\alpha)^2+\alpha^2-a^2-b^2)}{2(\alpha^2-a^2)(\alpha^2-b^2)} \\[5pt]
&\times
{}\sb 8F\sb 7\left(\left.\atop{1, 1, 1\pm a\pm b, \frac{7+3\alpha}{5}\pm \sqrt{\frac{a^2+b^2}{5}-(\frac{1-\alpha}{5})^2}
}{\frac{3}{2}, 1+\alpha\pm a, 1+\alpha\pm b, \frac{2+3\alpha}{5}\pm\sqrt{\frac{a^2+b^2}{5}-(\frac{1-\alpha}{5})^2}}\right|-\frac{1}{4}\right).
\end{split}
\end{equation*}
\end{theorem}
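Both theorems are straightforward to test numerically. For instance, the following sketch (assuming the \texttt{mpmath} package is available) checks Theorem \ref{t6} to 20 digits at the arbitrary test point $\alpha=1.3,$ $a=0.2,$ $b=0.5$:
\begin{verbatim}
from mpmath import mp, digamma, hyper, sqrt

mp.dps = 30
alpha, a, b = mp.mpf('1.3'), mp.mpf('0.2'), mp.mpf('0.5')
s = sqrt((a**2 + b**2)/5 - ((1 - alpha)/5)**2)

lhs = digamma(alpha+b) + digamma(alpha-b) - digamma(alpha+a) - digamma(alpha-a)
pre = ((a**2 - b**2)*((1 + alpha)**2 + alpha**2 - a**2 - b**2)
       / (2*(alpha**2 - a**2)*(alpha**2 - b**2)))
F87 = hyper([1, 1, 1+a+b, 1+a-b, 1-a+b, 1-a-b,
             (7 + 3*alpha)/5 + s, (7 + 3*alpha)/5 - s],
            [mp.mpf(3)/2, 1+alpha+a, 1+alpha-a, 1+alpha+b, 1+alpha-b,
             (2 + 3*alpha)/5 + s, (2 + 3*alpha)/5 - s], mp.mpf(-1)/4)
assert abs(lhs - pre*F87) < mp.mpf(10)**(-20)
\end{verbatim}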
Dividing both sides of the identity in Theorem \ref{t5} by $\beta-\alpha$ and letting $\beta$ tend to $\alpha,$
we get the following.
\begin{corollary} \label{c9}
Let $a, \alpha$ be complex numbers such that $\alpha\pm a$ be distinct from $0,-1, -2,\ldots.$
Then we have
\begin{equation*}
\begin{split}
\psi'(\alpha-a)-\psi'(\alpha+a)&=\frac{a((1+\alpha)^2+\alpha^2-2a^2)}{(\alpha^2-a^2)^2} \\
&\times
{}\sb 8F\sb 7\left(\left.\atop{1, 1, 1, 1, 1\pm 2a, \frac{7+3\alpha}{5}
\pm \sqrt{\frac{2a^2}{5}-(\frac{1-\alpha}{5})^2}
}{\frac{3}{2}, 1+\alpha\pm a, 1+\alpha\pm a, \frac{2+3\alpha}{5}\pm\sqrt{\frac{2a^2}{5}-(\frac{1-\alpha}{5})^2}}\right|-\frac{1}{4}\right).
\end{split}
\end{equation*}
\end{corollary}
Multiplying both sides of the above equality by $(\alpha-a)^2$ and letting $\alpha$ tend to $a,$
we get the following identity.
\begin{corollary} \label{c10}
Let $a$ be a complex number such that $2a$ be distinct from $0,-1, -2,\ldots.$
Then we have
\begin{equation*}
{}\sb 5F\sb 4\left(\left.\atop{1, 1, 1-2a, \frac{7+3a\pm\sqrt{9a^2+2a-1}}{5}
}
{\frac{3}{2}, 1+2a, \frac{2+3a\pm\sqrt{9a^2+2a-1}}{5}}\right|-\frac{1}{4}\right)=\frac{4a}{1+2a}.
\end{equation*}
\end{corollary}
Taking, for example, $a=1/4$ in the above identity we get the following non-trivial summation formula:
$$
\sum_{k=1}^{\infty}\frac{(-1)^{k-1} (5k-2)}{k\binom{2k}{k}}=1.
$$
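This evaluation is easy to confirm numerically (standard library only; 60 terms suffice since the series converges at the geometric rate $1/4$):
\begin{verbatim}
from fractions import Fraction
from math import comb

s = sum(Fraction((-1)**(k - 1)*(5*k - 2), k*comb(2*k, k))
        for k in range(1, 60))
assert abs(float(s) - 1) < 1e-12
\end{verbatim}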
\begin{corollary} \label{c11}
Let $n$ be a non-negative integer. Suppose that $\alpha, \beta$ are complex numbers such that
$2\beta$ and $\alpha+\beta$ are distinct from $-n, -n-1, -n-2,\ldots,$ and $\alpha-\beta\ne n, n-1, n-2, \ldots.$
Then we have
\begin{equation}
\begin{split}
{}\sb 7F\sb 6\left(\left.\atop{n+1, n+1\pm(\alpha-\beta), n+1\pm(2n+2\beta),
n+\frac{7}{5}+\frac{3(\alpha+\beta)}{10}
\pm \frac{\sqrt{D_1}}{10}
}{\frac{3}{2}+n, n+\alpha+1\pm (\beta+n), 2n+2\beta+1, n+\frac{2}{5} +\frac{3(\alpha+\beta)}{10}\pm\frac{\sqrt{D_1}}{10}}\right|-\frac{1}{4}\right) \\[5pt]
=\frac{(n+\alpha+\beta)_{n+1}(n+2)_{n+1}}{(1+\alpha-\beta)_{n+1}(1+2n+2\beta)_{n+1}},
\label{eq15}
\end{split}
\end{equation}
where $D_1=40((n+\beta)^2-(\alpha-1)(\beta-1))+9(\alpha+\beta-2)^2.$
\end{corollary}
\begin{proof}
To prove (\ref{eq15}), it is sufficient to consider both sides of the identity from Theorem~\ref{t5}
as meromorphic functions of variable $a$
and compare corresponding residues at the simple pole $a=\beta+n,$ where $n$ is a non-negative integer, on
both sides. Imposing the restrictions on $\alpha, \beta$ formulated in the hypothesis implies that the residue
on the left-hand side of the identity of Theorem \ref{t5} is equal to $-1,$ and clearly,
we have
\begin{equation*}
\begin{split}
1&=\lim_{a\to n+\beta}(n+\beta-a)\frac{a(\alpha-\beta)((1+\alpha)(1+\beta)+\alpha\beta-2a^2)}{(\alpha^2-a^2)
(\beta^2-a^2)} \\
&\times\sum_{k=n}^{\infty}
\frac{k!(1\pm(\alpha-\beta))_k(1\pm 2a)_k(7/5+3(\alpha+\beta)/10\pm\sqrt{D}/10)_k}
{(-4)^k(3/2)_k(1+\alpha\pm a)_k(1+\beta\pm a)_k(2/5+3(\alpha+\beta)/10\pm\sqrt{D}/10)_k} \\[3pt]
&=\frac{(n+\beta)(\alpha-\beta)((1+\alpha)(1+\beta)+\alpha\beta-2(n+\beta)^2)}{(\alpha^2-(n+\beta)^2)(n+2\beta)(-1)^n} \\
&\times\sum_{k=n}^{\infty}\binom{k}{n}
\frac{(1\pm(\alpha-\beta))_k(1\pm (2n+2\beta))_k((14+3(\alpha+\beta)\pm\sqrt{D_1})/10)_k}
{(-4)^k(3/2)_k(1+\alpha\pm (\beta+n))_k(1+2\beta+n)_k((4+3(\alpha+\beta)\pm\sqrt{D_1})/10)_k} \\[3pt]
&=\frac{(n+\beta)(\alpha-\beta)(1\pm(\alpha-\beta))_n
(1\pm(2n+2\beta))_n(10n+4+3(\alpha+\beta)\pm\sqrt{D_1})}
{20(\alpha^2-(n+\beta)^2)(n+2\beta)4^n(3/2)_n(1+\alpha\pm(\beta+n))_n(1+2\beta+n)_n} \\[5pt]
&\times {}\sb 7F\sb 6\left(\left.\atop{n+1, n+1\pm(\alpha-\beta), n+1\pm(2n+2\beta),
n+\frac{7}{5}+\frac{3(\alpha+\beta)}{10}
\pm \frac{\sqrt{D_1}}{10}
}{\frac{3}{2}+n, n+\alpha+1\pm (\beta+n), 2n+2\beta+1, n+\frac{2}{5} +\frac{3(\alpha+\beta)}{10}\pm\frac{\sqrt{D_1}}{10}}\right|-\frac{1}{4}\right).
\end{split}
\end{equation*}
Now taking into account that
$$
\frac{\alpha-\beta}{\alpha-\beta-n}\cdot \frac{(1+\beta-\alpha)_n}{(1+\alpha-\beta-n)_n}=(-1)^n,
\qquad \frac{n+\beta}{n+2\beta}\cdot\frac{(1-2n-2\beta)_n}{(1+2\beta+n)_n}=\frac{(-1)^n}{2},
$$
$$
\frac{(10n+4+3(\alpha+\beta))^2-D_1}{20}=(\alpha-\beta+n+1)(3n+2\beta+1),
$$
and $4^n(3/2)_n=(2n+1)!/n!,$ we get the required identity.
\end{proof}
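The algebraic facts used in the last step of the proof can be verified mechanically. A short sketch follows (sympy for the symbolic identity; exact rationals at the arbitrary test point $\alpha=2/7,$ $\beta=3/11$ for the two Pochhammer relations):
\begin{verbatim}
import sympy as sp
from fractions import Fraction as Fr
from math import prod

# (i) the quadratic identity relating D_1 to the parameters
n, alpha, beta = sp.symbols('n alpha beta')
D1 = 40*((n + beta)**2 - (alpha - 1)*(beta - 1)) + 9*(alpha + beta - 2)**2
lhs = ((10*n + 4 + 3*(alpha + beta))**2 - D1)/20
rhs = (alpha - beta + n + 1)*(3*n + 2*beta + 1)
assert sp.expand(lhs - rhs) == 0

# (ii) the two Pochhammer relations, tested in exact arithmetic
rf = lambda x, m: prod(x + j for j in range(m))   # rising factorial
a, b = Fr(2, 7), Fr(3, 11)                        # arbitrary test values
for m in range(6):
    assert (a - b)/(a - b - m)*rf(1 + b - a, m)/rf(1 + a - b - m, m) == (-1)**m
    assert (m + b)/(m + 2*b)*rf(1 - 2*m - 2*b, m)/rf(1 + 2*b + m, m) == Fr((-1)**m, 2)
\end{verbatim}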
Similarly, from Theorem \ref{t6} we get
\begin{corollary} \label{c12}
Let $n$ be a non-negative integer and $a, \alpha$ be complex numbers such
that $2\alpha\ne -n, -n-1,\ldots,$ and $\alpha\pm a\ne -n-1, -n-2,\ldots, -2n-1.$
Then we have
\begin{equation}
\begin{split}
{}\sb 7F\sb 6\left(\left.\atop{n+1, 2n+\alpha+1\pm a, 1-\alpha\pm a,
n+\frac{7+3\alpha}{5}
\pm \sqrt{\frac{a^2+(n+\alpha)^2}{5}-(\frac{1-\alpha}{5})^2}
}{\frac{3}{2}+n, 2n+2\alpha+1, n+\alpha+1\pm a, n+\frac{2+3\alpha}{5}
\pm\sqrt{\frac{a^2+(n+\alpha)^2}{5}-(\frac{1-\alpha}{5})^2}}\right|-\frac{1}{4}\right) \\[5pt]
=\frac{(n+2)_{n+1}(n+2\alpha)_{n+1}}{(n+\alpha+1\pm a)_{n+1}}.
\label{eq16}
\end{split}
\end{equation}
\end{corollary}
\begin{proof}
To prove (\ref{eq16}), we rewrite the identity of Theorem \ref{t6} in the form
\begin{equation}
\begin{split}
&\sum_{k=0}^{\infty}\frac{k+\alpha}{((k+\alpha)^2-a^2)((k+\alpha)^2-b^2)}=
\frac{((1+\alpha)^2+\alpha^2-a^2-b^2)}{4(\alpha^2-a^2)(\alpha^2-b^2)} \\
&\times
{}\sb 8F\sb 7\left(\left.\atop{1, 1, 1\pm a\pm b, \frac{7+3\alpha}{5}\pm \sqrt{D(b^2)}
}{\frac{3}{2}, 1+\alpha\pm a, 1+\alpha\pm b, \frac{2+3\alpha}{5}\pm\sqrt{D(b^2)}}\right|-\frac{1}{4}\right),
\label{eq18}
\end{split}
\end{equation}
where
$$
D(b^2):=\frac{a^2+b^2}{5}-\left(\frac{1-\alpha}{5}\right)^2,
$$
replace $b^2$ by $z$ and consider both sides of (\ref{eq18}) as meromorphic functions of variable $z.$
Suppose that $2\alpha$ is distinct from $-n, -n-1, \ldots.$ This restriction ensures that
$(n+\alpha)^2\ne (j+\alpha)^2$ for any non-negative integer $j\ne n.$
Now
equating residues on both sides of
(\ref{eq18}) at the simple pole $z=(n+\alpha)^2,$ where $n$ is a non-negative integer, we have
\begin{equation*}
\begin{split}
&\frac{n+\alpha}{(n+\alpha)^2-a^2}=\lim_{z\to (n+\alpha)^2}((n+\alpha)^2-z)\cdot \frac{(1+\alpha)^2+\alpha^2-a^2-z}
{4(\alpha^2-a^2)(\alpha^2-z)} \\[3pt]
&\times
\sum_{k=n}^{\infty}\frac{ k! \bigl(\frac{7+3\alpha}{5}\pm\sqrt{D(z)}\bigr)_k
}{(-4)^k (3/2)_k\bigl(\frac{2+3\alpha}{5}\pm\sqrt{D(z)}\bigr)_k}
\prod_{j=1}^k\frac{((j\pm a)^2-z)}{((j+\alpha)^2-z)((j+\alpha)^2-a^2)} \\[5pt]
&=
\frac{(1+\alpha)^2-a^2-n^2-2n\alpha}{(-1)^{n-1} (2\alpha+n)_n}
\sum_{k=n}^{\infty}\binom{k}{n}\frac{(1\pm a\pm (n+\alpha))_k}{(-4)^{k+1}(3/2)_k(\alpha\pm a)_{k+1}
(2\alpha+2n+1)_{k-n}} \\[5pt]
&\times\frac{\bigl(\frac{7+3\alpha}{5}\pm\sqrt{D((n+\alpha)^2)}\bigr)_k}
{\bigl(\frac{2+3\alpha}{5}\pm\sqrt{D((n+\alpha)^2)}\bigr)_k}
=\frac{(1\pm a\pm (n+\alpha))_n ((2n+\alpha+1)^2-a^2)}
{4^{n+1} (3/2)_n(2\alpha+n)_n(\alpha\pm a)_{n+1}} \\[5pt]
&\times
{}\sb 7F\sb 6\left(\left.\atop{n+1, 2n+\alpha+1\pm a, 1-\alpha\pm a,
n+\frac{7+3\alpha}{5}
\pm \sqrt{D((n+\alpha)^2)}
}{\frac{3}{2}+n, 2n+2\alpha+1, n+\alpha+1\pm a, n+\frac{2+3\alpha}{5}
\pm\sqrt{D((n+\alpha)^2)}}\right|-\frac{1}{4}\right).
\end{split}
\end{equation*}
Here in the last equality we used the fact that
$$
\frac{\bigl(\frac{7+3\alpha}{5}\pm\sqrt{D((n+\alpha)^2)}\bigr)_n}
{\bigl(\frac{2+3\alpha}{5}\pm\sqrt{D((n+\alpha)^2)}\bigr)_n}=
\frac{(2n+\alpha+1)^2-a^2}{(1+\alpha)^2-a^2-n^2-2n\alpha}.
$$
Finally, after simplifying based on the cancelations
$$
\frac{(1-a-n-\alpha)_n}{(\alpha+a)_{n+1}}=\frac{(-1)^n}{\alpha+a+n}, \qquad
\frac{(1+a-n-\alpha)_n}{(\alpha-a)_{n+1}}=\frac{(-1)^n}{\alpha-a+n},
$$
we get the required identity.
\end{proof}
Setting $\alpha=1$ and replacing $n$ by $n-1$ in (\ref{eq16}) we get
\begin{corollary} \label{c13}
Let $n$ be a positive integer and $a$ be a complex number such that $\pm a\ne -n-1, -n-2,\ldots, -2n.$
Then we have
\begin{equation}
{}\sb 7F\sb 6\left(\left.\atop{n, 2n\pm a, \pm a,
n+1
\pm \sqrt{\frac{a^2+n^2}{5}}
}{n+\frac{1}{2}, 2n+1, n+1\pm a, n
\pm\sqrt{\frac{a^2+n^2}{5}}}\right|-\frac{1}{4}\right)=\prod_{j=n+1}^{2n}\frac{j^2}{j^2-a^2}.
\label{eq17}
\end{equation}
\end{corollary}
Formula (\ref{eq17}) generalizes similar identities from \cite[\S 2]{b}. In particular,
substituting $a=\sqrt{c}/n$ gives \cite[Corollary 2]{b} and substituting $a=i\sqrt{b+n^2}$
gives \cite[Corollary 3]{b}.
\vspace{0.5cm}
\section{Supercongruences arising from the Amdeberhan-Zeilberger series for $\zeta(3)$} \label{SS5}
\vspace{0.1cm}
In this section, we consider supercongruences arising from the Amdeberhan-Zeilberger series (\ref{saz})
for $\zeta(3).$ In \cite{gz}, Guillera and Zudilin conjectured the following $p$-adic analog:
$$
\sum_{k=0}^{p-1}\frac{(\frac{1}{2})_k^5}{(1)_k^5}(205k^2+160k+32)(-1)^k 2^{10k}\equiv 32p^2\pmod{p^5}
\quad\text{for prime}\, p>3.
$$
In \cite{sun2}, Z.-W.~Sun formulated more general conjectures: let $p$ be an odd prime, then
$$
\sum_{k=0}^{p-1}(205k^2+160k+32)(-1)^k\binom{2k}{k}^5\equiv 32p^2 + 64p^3H_{p-1}\pmod{p^7}
\quad\text{for}\, p\ne 5,
$$
where $H_{p-1}=\sum_{k=1}^{p-1}1/k,$ and
\begin{equation}
\sum_{k=0}^{(p-1)/2}(205k^2+160k+32)(-1)^k\binom{2k}{k}^5\equiv 32p^2 + \frac{896}{3}p^5B_{p-3} \pmod{p^6}
\quad\text{for}\, p>3,
\label{eq20}
\end{equation}
where $B_0, B_1, B_2, \ldots$ are Bernoulli numbers.
Moreover, Sun \cite{sun2} introduced the related sequence
$$
a_n=\frac{1}{8n^2\binom{2n}{n}^2}\sum_{k=0}^{n-1}(205k^2+160k+32)(-1)^{n-1-k}\binom{2k}{k}^5
$$
and conjectured that for any positive integer $n,$ $a_n$ should be a positive integer.
In this section, we confirm these conjectures (with the only exception that we prove (\ref{eq20}) modulo $p^5$)
and prove the following theorems.
\begin{theorem} \label{t7}
Let $n$ be a positive integer and let
$$
A_n:=\sum_{k=0}^{n-1}(-1)^{n-1-k}\binom{2k}{k}^5(205k^2+160k+32).
$$
Then the following two alternative representations are valid:
$$
A_n=16n\binom{2n}{n}\sum_{k=0}^{n-1}\binom{n+k-1}{k}^4(2k+n)
$$
and
$$
A_n=8n^2\binom{2n}{n}^2\sum_{k=0}^{n-1}(-1)^k\binom{2n-1}{n+k}\binom{2n-k-2}{n-k-1}^2.
$$
\end{theorem}
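Before turning to the proof, the statement is easy to test for small $n$ (Python standard library, exact integer arithmetic):
\begin{verbatim}
from math import comb

def A_def(n):
    return sum((-1)**(n - 1 - k)*comb(2*k, k)**5*(205*k*k + 160*k + 32)
               for k in range(n))

def A_rep1(n):
    return 16*n*comb(2*n, n)*sum(comb(n + k - 1, k)**4*(2*k + n)
                                 for k in range(n))

def A_rep2(n):
    return 8*n*n*comb(2*n, n)**2*sum((-1)**k*comb(2*n - 1, n + k)
                                     *comb(2*n - k - 2, n - k - 1)**2
                                     for k in range(n))

for n in range(1, 12):
    A = A_def(n)
    assert A == A_rep1(n) == A_rep2(n)
    a_n, r = divmod(A, 8*n*n*comb(2*n, n)**2)
    assert r == 0 and a_n > 0          # Corollary c14 for small n
\end{verbatim}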
Theorem \ref{t7} implies immediately the following
\begin{corollary} \label{c14}
For any positive integer $n,$ $a_n:=\frac{A_n}{8n^2\binom{2n}{n}^2}$ is a positive integer.
\end{corollary}
\begin{theorem} \label{t8}
Let $p$ be an odd prime. Then the following supercongruences take place:
$$
\sum_{k=0}^{p-1}(205k^2+160k+32)(-1)^k\binom{2k}{k}^5\equiv 32p^2+64p^3H_{p-1}\pmod{p^7}\quad \text{for}\,\, p\ne 5,
$$
$$
\sum_{k=0}^{(p-1)/2}(205k^2+160k+32)(-1)^k\binom{2k}{k}^5\equiv 32p^2\pmod{p^5}\quad \text{for} \,\, p>3.
$$
\end{theorem}
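Both congruences can be confirmed directly for small primes; the following sketch does so for $p\in\{3,7,11,13\}$ (for the first congruence the rational $H_{p-1}$ is handled by clearing its denominator, which is legitimate since it is coprime to $p$):
\begin{verbatim}
from fractions import Fraction
from math import comb

for p in [3, 7, 11, 13]:
    S = sum((205*k*k + 160*k + 32)*(-1)**k*comb(2*k, k)**5 for k in range(p))
    if p != 5:
        H = sum(Fraction(1, k) for k in range(1, p))   # H_{p-1}
        assert ((S - 32*p*p)*H.denominator - 64*p**3*H.numerator) % p**7 == 0
    if p > 3:
        S2 = sum((205*k*k + 160*k + 32)*(-1)**k*comb(2*k, k)**5
                 for k in range((p - 1)//2 + 1))
        assert (S2 - 32*p*p) % p**5 == 0
\end{verbatim}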
The proof of Theorem \ref{t7} is contained in the following two lemmas.
\begin{lemma} \label{l1}
For any positive integer $N,$ the following identity holds:
$$
A_N=16N\binom{2N}{N}\sum_{k=0}^{N-1}\binom{N+k-1}{k}^4(2k+N).
$$
\end{lemma}
\begin{proof}
Consider the Markov kernel $H_1(n,k)$ defined in (\ref{H1nk}) for $\alpha=1,$ $a=b=0:$
$$
H_1(n,k)=\frac{(k+n)!^4}{(k+2n+1)!^4} (3n+2k+2).
$$
Then the corresponding to it WZ pair has the form
$$
F(n,k)=\frac{(-1)^n n!^6}{(2n)!} H_1(n,k), \qquad
G(n,k)=\frac{(-1)^nn!^6(k+n)!^4}{2(2n+1)!(k+2n+2)!^4} q(n,k),
$$
where
\begin{equation*}
\begin{split}
q(n,k)&=(n+1)^4(205n^2+250n+77)+k(254+344k+1526n+3628n^2
+888k^2n \\
&+2928kn^2+4268n^3+248k^2+101k^3
+1648kn+574n^5+2486n^4
+22k^4+2k^5 \\
&+664kn^4+2288kn^3+408k^2n^3+1048k^2n^2+141k^3n^2+240k^3n+26k^4n).
\end{split}
\end{equation*}
It is easy to show (see Proposition C or \cite[Ch.~7]{pwz}) that the pair $(\bar{F}, \bar{G})$
given by
$$
\bar{G}(n,k)=(-1)^{k-1}k\binom{2k}{k}\binom{n+2k-1}{n+k}^4 (3k+2n),
$$
$$
\bar{F}(n,k)=(-1)^k\binom{2k}{k}\left(\frac{(n+2k-1)!}{k!(n+k)!}\right)^4 q_1(n,k)
$$
with
\begin{equation*}
\begin{split}
q_1&(n,k)=205k^6+160k^5+32k^4+2n^6+n^5(4+26k)+n^4(2+42k+141k^2) \\
&+n^3k(16+176k+408k^2)+n^2k^2(48+368k+664k^2)
+nk^3(64+384k+574k^2)
\end{split}
\end{equation*}
is its dual WZ pair, for which
\begin{equation}
\bar{F}(n+1,k)-\bar{F}(n,k)=\bar{G}(n,k+1)-\bar{G}(n,k).
\label{eq19}
\end{equation}
It turns out to be useful for the remainder of the proof to consider the usual binomial coefficient
$\binom{r}{k}$ in a more general setting, i.e., to allow an arbitrary real number to appear in the
upper index of $\binom{r}{k},$ and to allow an arbitrary integer in the lower.
We give the following formal definition (see \cite[\S 5.1]{cm}):
\begin{equation*}
\binom{r}{k} =\begin{cases}
\frac{r(r-1)\cdots(r-k+1)}{k(k-1)\cdots 1}, & \quad \text{if integer} \quad k\ge 0; \\
0, & \quad\text{if integer} \quad k<0.
\end{cases}
\end{equation*}
After this elaboration we see that $\bar{G}(n,k)$ is defined for all integers $n, k.$ Rewriting $\bar{F}(n,k)$
in the form
\begin{equation*}
\bar{F}(n,k) =\begin{cases}
\frac{(-1)^k}{k^4}\binom{2k}{k}\binom{n+2k-1}{n+k}^4 q_1(n,k), & \quad \text{if} \quad k\ne 0; \\[3pt]
2(n+1)^2, & \quad \text{if} \quad k=0, n\ge 0; \\
0, & \quad\text{if} \quad k=0, n<0;
\end{cases}
\end{equation*}
we can conclude that $\bar{F}(n,k)$ is well defined for all $n,k\in {\mathbb Z}.$
Now we can show that relation (\ref{eq19}) takes place for all integers $n$ and $k.$
Indeed, if $k<0$ or if $k=0$ and $n<-1,$ then all parts in (\ref{eq19}) are zero.
If $k=0$ and $n\ge -1,$ then $\bar{G}(n,k)=0,$ $\bar{G}(n,k+1)=2(2n+3),$ $\bar{F}(n+1,k)=2(n+2)^2,$
$\bar{F}(n,k)=2(n+1)^2,$
and relation (\ref{eq19}) is equivalent to the obvious equality
$$
2(n+2)^2-2(n+1)^2=2(2n+3).
$$
If $k>0$ and $n+k<0,$ then $\bar{F}(n,k)=\bar{G}(n,k)=0,$
$$
\bar{F}(n+1,k)=\frac{(-1)^k}{k^4}\binom{2k}{k}\binom{n+2k}{n+k+1}^4 q_1(n+1,k),
$$
and
\begin{equation*}
\bar{G}(n,k+1)=(-1)^k(k+1)\binom{2k+2}{k+1}\binom{n+2k+1}{n+k+1}^4(3k+2n+3).
\end{equation*}
If moreover, $n+k+1<0,$ then $\bar{F}(n+1,k)=\bar{G}(n,k+1)=0$ and (\ref{eq19}) holds.
If $n+k+1=0,$ then $\bar{G}(n,k+1)=(-1)^k(k+1)^2\binom{2k+2}{k+1}$ and
$$
\bar{F}(n+1,k)=\frac{(-1)^k}{k^4}\binom{2k}{k} q_1(-k,k)=(-1)^k\binom{2k}{k}(2k+2)(2k+1)=\bar{G}(n,k+1),
$$
and therefore (\ref{eq19}) holds. If $k>0$ and $n+k\ge 0,$ then canceling common factorials
on both sides of (\ref{eq19}) we get the equality
\begin{equation*}
\begin{split}
&(n+2k)^4 q_1(n+1,k)-(n+k+1)^4 q_1(n,k) \\
&=2(2k+1)(n+2k)^4(n+2k+1)^4(3k+2n+3)+k^5(n+k+1)^4(3k+2n),
\end{split}
\end{equation*}
which can be easily checked by straightforward verification.
Now let $N\in {\mathbb N}.$ Considering relation (\ref{eq19}) at the point $(n-N,k)$ we have
\begin{equation}
\bar{F}(n+1-N,k)-\bar{F}(n-N,k)=\bar{G}(n-N,k+1)-\bar{G}(n-N,k).
\label{eq21}
\end{equation}
Summing both sides of (\ref{eq21}) over $k$ from $0$ to $N-1$ we have
\begin{equation}
\sum_{k=0}^{N-1}(\bar{F}(n+1-N,k)-\bar{F}(n-N,k))=\bar{G}(n-N,N)-\bar{G}(n-N,0)=\bar{G}(n-N,N).
\label{eq22}
\end{equation}
Now summing (\ref{eq22}) over $n$ from $0$ to $N-1$ we get
$$
\sum_{k=0}^{N-1}(\bar{F}(0,k)-\bar{F}(-N,k))=\sum_{n=0}^{N-1}\bar{G}(n-N,N).
$$
Since $\bar{F}(-N,k)=0$ for $k=0,1,\ldots, N-1,$ we obtain
$$
\sum_{k=0}^{N-1}\bar{F}(0,k)=\sum_{n=0}^{N-1}\bar{G}(n-N,N)
$$
or
$$
\frac{1}{16}\sum_{k=0}^{N-1}(-1)^k\binom{2k}{k}^5(205k^2+160k+32)=(-1)^{N-1}
N\binom{2N}{N}\sum_{n=0}^{N-1}\binom{N+n-1}{n}^4(2n+N),
$$
and the lemma is proved.
\end{proof}
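Since relation (\ref{eq19}) is claimed for all integers $n$ and $k,$ it is worth testing the extended definitions directly. The following sketch verifies (\ref{eq19}) in exact arithmetic over a window of positive and negative indices:
\begin{verbatim}
from fractions import Fraction as Fr
from math import factorial

def binom(r, k):        # the extended binomial coefficient defined above
    if k < 0:
        return Fr(0)
    num = 1
    for j in range(k):
        num *= r - j
    return Fr(num, factorial(k))

def q1(n, k):
    return (205*k**6 + 160*k**5 + 32*k**4 + 2*n**6 + n**5*(4 + 26*k)
            + n**4*(2 + 42*k + 141*k**2) + n**3*k*(16 + 176*k + 408*k**2)
            + n**2*k**2*(48 + 368*k + 664*k**2) + n*k**3*(64 + 384*k + 574*k**2))

def Fbar(n, k):
    if k == 0:
        return Fr(2*(n + 1)**2) if n >= 0 else Fr(0)
    return Fr((-1)**(k % 2), k**4)*binom(2*k, k)*binom(n + 2*k - 1, n + k)**4*q1(n, k)

def Gbar(n, k):
    return (-1)**((k - 1) % 2)*k*binom(2*k, k)*binom(n + 2*k - 1, n + k)**4*(3*k + 2*n)

for n in range(-6, 6):
    for k in range(-3, 6):
        assert Fbar(n + 1, k) - Fbar(n, k) == Gbar(n, k + 1) - Gbar(n, k)
\end{verbatim}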
\begin{lemma} \label{l2}
For any positive integer $N,$ the following identity holds:
$$
A_N=8N^2\binom{2N}{N}^2\cdot\sum_{k=0}^{N-1}(-1)^k\binom{2N-1}{N+k}\binom{2N-k-2}{N-1-k}^2.
$$
\end{lemma}
\begin{proof}
Let $N\in {\mathbb Z},$ $N\ge 0.$ Put
\begin{equation}
S_N:=2\sum_{k=0}^N\binom{k+N}{k}^4(N+2k+1).
\label{eq23}
\end{equation}
Now rewriting $S_N$ in the form of a terminating hypergeometric series, we get
$$
S_N=2(N+1)\sum_{k=0}^N\frac{(N+1)_k^4\, (\frac{N+3}{2})_k}{(1)_k^4\, (\frac{N+1}{2})_k}.
$$
Reversing the order of summation and noticing that
$$
(\alpha)_{N-k}=\frac{(-1)^k (\alpha)_N}{(1-\alpha-N)_k}
$$
we get
\begin{equation}
S_N=2\binom{2N}{N}^4(3N+1)\cdot
{}\sb 6F\sb 5\left(\left.\atop{1, \, \frac{1}{2}-\frac{3}{2}N, -N, -N, -N, -N}
{-\frac{1}{2}-\frac{3}{2}N, -2N, -2N, -2N, -2N}\right|1\right).
\label{eq24}
\end{equation}
To evaluate the hypergeometric series on the right-hand side of (\ref{eq24}), we apply
Whipple's transformation \cite[(7.7)]{w} which transforms a Saalsch\"utzian ${}\sb 4F\sb 3(1)$
series into a well-poised ${}\sb 7F\sb 6(1)$ series (see \cite[p.~61, (2.4.1.1)]{sl}):
\begin{equation}
\begin{split}
{}\sb 4F\sb 3&\left(\left.\atop{f-a_1-a_2, d_1, d_2, -N}
{f-a_1, f-a_2, g}\right|1\right)=
\frac{(g-d_1)_N(g-d_2)_N}{(g)_N(g-d_1-d_2)_N} \\[5pt]
&\times {}\sb 7F\sb 6\left(\left.\atop{f-1, \frac{1}{2}f+\frac{1}{2}, a_1, a_2, d_1, d_2, -N}
{\frac{1}{2}f-\frac{1}{2}, f-a_1, f-a_2, f-d_1, f-d_2, f+N}\right|1\right).
\label{eq25}
\end{split}
\end{equation}
Setting $a_1=a_2=-N,$ $d_1=-N,$ $d_2=1,$ $f=-3N$ in (\ref{eq25}), we get
\begin{equation}
\begin{split}
{}\sb 6F\sb 5&\left(\left.\atop{1, \, \frac{1}{2}-\frac{3}{2}N, -N, -N, -N, -N}
{-\frac{1}{2}-\frac{3}{2}N, -2N, -2N, -2N, -2N}\right|1\right) \\[5pt]
&=\frac{(2N+1)^2}{(3N+1)(N+1)}
{}\sb 4F\sb 3\left(\left.\atop{1, \, -N, -N, -N}
{-2N, -2N, N+2}\right|1\right).
\label{eq26}
\end{split}
\end{equation}
Therefore from (\ref{eq24}) and (\ref{eq26}) we get
$$
S_N=2\binom{2N}{N}^4\frac{(2N+1)^2}{N+1}\cdot
{}\sb 4F\sb 3\left(\left.\atop{1, \, -N, -N, -N}
{-2N, -2N, N+2}\right|1\right)
$$
or
\begin{equation}
S_N=\frac{2(2N+1)!^2 (2N)!^2}{(N+1)! N!^7}\sum_{k=0}^N\frac{(-N)_k^3}{(-2N)_k^2(N+2)_k}.
\label{eq27}
\end{equation}
Replacing Pochhammer symbols by factorials in (\ref{eq27}) we arrive at
\begin{equation}
S_N=(N+1)\binom{2N+2}{N+1}\sum_{k=0}^N(-1)^k\binom{2N+1}{N-k}\binom{2N-k}{N-k}^2.
\label{eq28}
\end{equation}
Now replacing $N$ by $N-1$ in (\ref{eq28}), and using (\ref{eq23}) and Lemma \ref{l1}
we get the required identity.
\end{proof}
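A quick check (standard library) that the closed form (\ref{eq28}) agrees with the definition (\ref{eq23}) for small $N$:
\begin{verbatim}
from math import comb

def S_def(N):
    return 2*sum(comb(k + N, k)**4*(N + 2*k + 1) for k in range(N + 1))

def S_closed(N):
    return (N + 1)*comb(2*N + 2, N + 1)*sum(
        (-1)**k*comb(2*N + 1, N - k)*comb(2*N - k, N - k)**2
        for k in range(N + 1))

for N in range(8):
    assert S_def(N) == S_closed(N)
\end{verbatim}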
To prove Theorem \ref{t8}, we need several results concerning harmonic sums modulo a power of prime $p.$
The multiple harmonic sum is defined by
$$
H(a_1, a_2, \ldots, a_r;n)=\sum_{1\le k_1<k_2<\dots<k_r\le n} \frac{1}{k_1^{a_1}k_2^{a_2}\cdots k_r^{a_r}},
$$
where $n\ge r\ge 1$ and $(a_1,a_2,\dots,a_r)\in {\mathbb N}^r.$
For $r=1$ we will also use the notation
$$
H_n^{(a)}:=H(a;n)=\sum_{k=1}^n\frac{1}{k^a} \quad\text{and}\quad
H_n:=H_n^{(1)}.
$$
The values of many harmonic sums modulo a power of prime $p$ are well known.
We need the following results.
\begin{lemma} \label{l3} \cite[Theorem 5.1]{s0}
Let $p$ be a prime greater than $5.$ Then
$$
H_{p-1}\equiv H_{p-1}^{(3)}\equiv 0\pmod{p^2}, \qquad
H_{p-1}^{(2)}\equiv H_{p-1}^{(4)}\equiv 0\pmod p.
$$
\end{lemma}
\begin{lemma} \label{l4}
Let $p>5$ be a prime. Then
$$
H(\{1\}^2;p-1)\equiv -\frac{1}{2}H_{p-1}^{(2)}\pmod{p^4}, \qquad
H(\{1\}^3;p-1)\equiv 0\pmod{p^2},
$$
$$
H(\{1\}^4;p-1)\equiv 0\pmod p.
$$
\end{lemma}
\begin{proof}
Since for $n\ge 1$ (see, for example, \cite{t2})
$$
H(\{1\}^2;n)=\frac{1}{2}(H_n^2-H_n^{(2)}),
$$
$$
H(\{1\}^3;n)=\frac{1}{6}(H_n^3-3H_nH_n^{(2)}+2H_n^{(3)}),
$$
$$
H(\{1\}^4;n)=\frac{1}{24}(H_n^4-6H_n^2H_n^{(2)}+8H_nH_n^{(3)}+3(H_n^{(2)})^2-6H_n^{(4)}),
$$
then by Lemma \ref{l3}, we get the required congruences.
\end{proof}
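The three symmetric-function identities quoted in the proof can be confirmed in exact arithmetic:
\begin{verbatim}
from fractions import Fraction as Fr
from itertools import combinations
from math import prod

def H_ones(r, n):       # H({1}^r; n)
    return sum(Fr(1, prod(c)) for c in combinations(range(1, n + 1), r))

def H_pow(a, n):        # H_n^{(a)}
    return sum(Fr(1, k**a) for k in range(1, n + 1))

for n in range(1, 9):
    H1, H2, H3, H4 = (H_pow(a, n) for a in (1, 2, 3, 4))
    assert H_ones(2, n) == (H1**2 - H2)/2
    assert H_ones(3, n) == (H1**3 - 3*H1*H2 + 2*H3)/6
    assert H_ones(4, n) == (H1**4 - 6*H1**2*H2 + 8*H1*H3 + 3*H2**2 - 6*H4)/24
\end{verbatim}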
\begin{lemma} \label{l5}
Let $p>5$ be a prime. Then
$$
\frac{1}{2}\binom{2p}{p}\equiv 1+pH_{p-1}-\frac{p^2}{2}H_{p-1}^{(2)}\pmod{p^5}.
$$
\end{lemma}
\begin{proof}
It is readily seen that
$$
\frac{1}{2}\binom{2p}{p}=\binom{2p-1}{p-1}=\prod_{j=1}^{p-1}\left(1+\frac{p}{j}\right)
=\sum_{j=0}^{p-1}p^j H(\{1\}^j;p-1)
$$
and therefore
$$
\frac{1}{2}\binom{2p}{p}\equiv 1+pH_{p-1}+p^2H(\{1\}^2;p-1)+p^3H(\{1\}^3;p-1)+p^4H(\{1\}^4;p-1)\pmod{p^5}.
$$
Now by Lemma \ref{l4}, we get the required congruence.
\end{proof}
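A direct test of Lemma \ref{l5} for the first few primes $p>5$ (the difference of the two sides is a $p$-integral rational, so it suffices to test its numerator):
\begin{verbatim}
from fractions import Fraction as Fr
from math import comb

for p in [7, 11, 13, 17]:
    H1 = sum(Fr(1, k) for k in range(1, p))
    H2 = sum(Fr(1, k*k) for k in range(1, p))
    diff = Fr(comb(2*p, p), 2) - (1 + p*H1 - Fr(p*p, 2)*H2)
    assert diff.denominator % p != 0 and diff.numerator % p**5 == 0
\end{verbatim}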
For similar congruences related to the central binomial coefficients, see \cite{t1}.
\subsection*{Proof of Theorem \ref{t8}.}
From Theorem \ref{t7} with $n=p,$ where $p>5$ is a prime, we have
\begin{equation}
\sum_{k=0}^{p-1}(-1)^k\binom{2k}{k}^5(205k^2+160k+32)=8p^2\binom{2p}{p}^2\sum_{k=0}^{p-1}(-1)^k
\binom{2p-1}{p+k}\binom{2p-k-2}{p-1-k}^2.
\label{eq29}
\end{equation}
For the sum on the right-hand side of (\ref{eq29}), reversing the order of summation we have
\begin{equation}
\begin{split}
&\sum_{k=0}^{p-1}(-1)^k
\binom{2p-1}{p+k}\binom{2p-k-2}{p-1-k}^2=\sum_{k=0}^{p-1}(-1)^k\binom{2p-1}{k}\binom{p-1+k}{k}^2 \\
&=1+\sum_{k=1}^{p-1}(-1)^k\frac{(2p-1)(2p-2)\cdots(2p-k)}{k!}\frac{p^2(p+1)^2\cdots(p+k-1)^2}{k!^2} \\
&=1+p^2\sum_{k=1}^{p-1}\frac{1}{k^2}\prod_{j=1}^k\left(1-\frac{2p}{j}\right)
\prod_{j=1}^{k-1}\left(1+\frac{p}{j}\right)^2.
\label{eq30}
\end{split}
\end{equation}
Since
\begin{equation*}
\prod_{j=1}^k\left(1-\frac{2p}{j}\right)\equiv 1-2pH_k+4p^2H(\{1\}^2;k)=1-2pH_k+2p^2(H_k^2-H_k^{(2)})\pmod{p^3}
\end{equation*}
and
\begin{equation*}
\begin{split}
\prod_{j=1}^{k-1}\left(1+\frac{p}{j}\right)^2&=\prod_{j=1}^{k-1}\left(1+\frac{2p}{j}+\frac{p^2}{j^2}\right)
\equiv 1+2pH_{k-1}+p^2H_{k-1}^{(2)}+4p^2H(\{1\}^2;k-1) \\
&=1+2pH_{k-1}+2p^2H_{k-1}^2-p^2H_{k-1}^{(2)}\pmod{p^3},
\end{split}
\end{equation*}
substituting these congruences in (\ref{eq30}) and simplifying we obtain
\begin{equation}
\sum_{k=0}^{p-1}(-1)^k\binom{2p-1}{p+k}\binom{2p-k-2}{p-1-k}^2\equiv 1+p^2H_{p-1}^{(2)}-2p^3H_{p-1}^{(3)}
-3p^4\sum_{k=1}^{p-1}\frac{H_{k-1}^{(2)}}{k^2}\pmod{p^5}.
\label{eq31}
\end{equation}
Note that
\begin{equation*}
\begin{split}
\sum_{k=1}^{p-1}\frac{H_{k-1}^{(2)}}{k^2}&=\sum_{k=1}^{p-1}\frac{1}{k^2}\sum_{j=1}^{k-1}\frac{1}{j^2}
=\sum_{k=1}^{p-1}\frac{1}{k^2}\sum_{j=1}^k\frac{1}{j^2}-H_{p-1}^{(4)} \\
&=
\sum_{j=1}^{p-1}\frac{1}{j^2}\Bigl(H_{p-1}^{(2)}-H_{j-1}^{(2)}\Bigr)-H_{p-1}^{(4)}=
\Bigl(H_{p-1}^{(2)}\Bigr)^2-\sum_{j=1}^{p-1}\frac{H_{j-1}^{(2)}}{j^2}-H_{p-1}^{(4)}
\end{split}
\end{equation*}
and therefore,
\begin{equation}
2\sum_{k=1}^{p-1}\frac{H_{k-1}^{(2)}}{k^2}=\Bigl(H_{p-1}^{(2)}\Bigr)^2-H_{p-1}^{(4)}.
\label{eq32}
\end{equation}
Now by Lemma \ref{l3}, from (\ref{eq31}) and (\ref{eq32}) we have
\begin{equation}
\sum_{k=0}^{p-1}(-1)^k\binom{2p-1}{p+k}\binom{2p-k-2}{p-1-k}^2\equiv 1+p^2H_{p-1}^{(2)}\pmod{p^5}.
\label{eq33}
\end{equation}
From Lemma \ref{l5} we find easily
\begin{equation}
\binom{2p}{p}^2\equiv 4\Bigl(1+2pH_{p-1}-p^2H_{p-1}^{(2)}\Bigr)\pmod{p^5}.
\label{eq34}
\end{equation}
Now from (\ref{eq29}), (\ref{eq33}) and (\ref{eq34}), by Lemma \ref{l3}, for any prime $p>5,$ we have
\begin{equation*}
\begin{split}
\sum_{k=0}^{p-1}(-1)^k\binom{2k}{k}^5(205k^2+160k+32)&\equiv
32p^2\Bigl(1+p^2H_{p-1}^{(2)}\Bigr)\Bigl(1+2pH_{p-1}-p^2H_{p-1}^{(2)}\Bigr) \\
&\equiv 32p^2+64p^3H_{p-1}\pmod{p^7}.
\end{split}
\end{equation*}
The validity of this congruence for $p=3$ can be easily checked by straightforward verification.
Taking into account that for an odd prime $p,$
$$
\binom{2k}{k}^5\equiv 0\pmod{p^5}
$$
for $k=\frac{p+1}{2},\dots,p-1,$ and applying Lemma \ref{l3}, we get the second congruence of Theorem~\ref{t8}. \qed
\section{A supercongruence arising from a series for the constant $K.$}
\label{SS6}
\vspace{0.1cm}
In Section \ref{SS3} we proved the accelerated formula
$$
K=\sum_{n=1}^{\infty}\frac{(-27)^{n-1}(15n-4)}{n^3\binom{2n}{n}^2\binom{3n}{n}},
$$
which was earlier conjectured by Z.-W.~Sun. Motivated by this series, Z.-W.~Sun \cite[Conj.~5.6]{sun1}
formulated the following conjecture on supercongruences: for any prime $p>3$ and a positive integer $a,$
$$
\sum_{k=0}^{p^a-1}\frac{15k+4}{(-27)^k}\binom{2k}{k}^2\binom{3k}{k}\equiv 4\left(\frac{p^a}{3}\right) p^a\pmod{p^{2+a}}.
$$
In this connection we prove here the following theorem.
\begin{theorem} \label{t9}
Let $p$ be a prime greater than $3.$ Then
$$
\sum_{k=0}^{p-1}\frac{15k+4}{(-27)^k}\binom{2k}{k}^2\binom{3k}{k}\equiv 4\left(\frac{p}{3}\right) p\pmod{p^{2}}.
$$
\end{theorem}
\begin{proof}
Consider the WZ pair $(F,G)$ defined in the proof of Theorem \ref{t1} with $\alpha=1/3,$ $\beta=2/3,$ $a=0:$
$$
F(n,k)=\frac{(-1)^n(3n+1)! n!^3}{(2n)! 27^n}\cdot\frac{\left(\frac{1}{3}\right)_k^2\left(\frac{2}{3}\right)_k^2(n+2k+1)}
{\left(\frac{1}{3}\right)^2_{n+k+1}\left(\frac{2}{3}\right)^2_{n+k+1}},
$$
$$
G(n,k)=F(n,k)\cdot\frac{45n^2+63n+22+18k(3n+k+2)}{18(2n+1)(n+1+2k)}.
$$
Then it is readily seen that the pair $(\bar{F},\bar{G})$ given by
$$
\bar{F}(n,k)=\frac{(-27)^{-k}(2k)!(3k+3n)!^2n!^2}{(3k+1)!k!^3(k+n)!^2(3n)!^2}\,((15k+4)(3k+1)+18n(3k+n+1)),
$$
$$
\bar{G}(n,k)=\frac{3k^3(3k-1)(-27)^{1-k}(2k)!(3k+3n)!^2n!^2(2n+k+1)}{(3k)!k!^3(k+n)!^2(3n+2)!^2}
$$
is its dual WZ pair (see Proposition C or \cite[Ch.~7]{pwz}), for which we have
\begin{equation}
\bar{F}(n+1,k)-\bar{F}(n,k)=\bar{G}(n,k+1)-\bar{G}(n,k), \qquad n,k\ge 0.
\label{eq35}
\end{equation}
Summing (\ref{eq35}) over $k=0,1,\dots,p-1$ and observing that $\bar{G}(n,0)=0,$ we obtain
\begin{equation}
\sum_{k=0}^{p-1}\bar{F}(n+1,k)-\sum_{k=0}^{p-1}\bar{F}(n,k)=\bar{G}(n,p).
\label{eq36}
\end{equation}
Further, for every integer $n$ satisfying $0\le n<\frac{p-2}{3}$ we have
\begin{equation}
\bar{G}(n,p)=\frac{(-27)^{1-p}(2p)! (3p+3n)!^2 n!^2 (2n+p+1)}{(3p-2)!(p-1)!^2p!(p+n)!^2(3n+2)!^2}
\equiv 0\pmod{p^3},
\label{eq37}
\end{equation}
since the numerator is divisible by $p^8$ and the denominator is divisible by $p^5$
and not divisible by $p^6.$ Now from (\ref{eq36}) and (\ref{eq37}) for any non-negative integer
$n<\frac{p-2}{3},$ we have
\begin{equation}
\sum_{k=0}^{p-1}\bar{F}(0,k)\equiv \sum_{k=0}^{p-1}\bar{F}(1,k)\equiv\dots\equiv
\sum_{k=0}^{p-1}\bar{F}(n+1,k) \pmod{p^3}.
\label{eq38}
\end{equation}
Moreover, from (\ref{eq38}) we obtain
\begin{equation}
\sum_{k=0}^{p-1}\bar{F}(0,k) \equiv\begin{cases}
\sum_{k=0}^{p-1}\bar{F}\left(\frac{p-1}{3},k\right)\pmod{p^3}, & \quad \text{if} \quad p\equiv 1\pmod{3}; \\[3pt]
\sum_{k=0}^{p-1}\bar{F}\left(\frac{p-2}{3},k\right)\pmod{p^3}, & \quad \text{if} \quad p\equiv 2\pmod{3}.
\label{eq39}
\end{cases}
\end{equation}
Since
$$
\sum_{k=0}^{p-1}\bar{F}(0,k)=\sum_{k=0}^{p-1}\frac{(15k+4)}{(-27)^k}\binom{2k}{k}^2\binom{3k}{k},
$$
it is sufficient to prove the required congruence for the right-hand side of (\ref{eq39}).
We consider separately two cases depending on the sign of $(\frac{p}{3}).$
First, let $p\equiv 2\pmod3,$ then we have
$$
\bar{F}\left(\frac{p-2}{3},k\right)=
\frac{(-27)^{-k}(2k)!(3k+p-2)!^2\left(\frac{p-2}{3}\right)!^2}
{(3k+1)!k!^3\left(k+\frac{p-2}{3}\right)!^2(p-2)!^2}
((15k+4)(3k+1)+2(p-2)(9k+p+1)).
$$
Note that if $1\le k\le \frac{p-2}{3},$ then the denominator of $\bar{F}(\frac{p-2}{3},k)$ is not divisible by $p$
and the numerator is divisible by $p^2.$ Therefore, we have
\begin{equation}
\bar{F}\left(\frac{p-2}{3},k\right)\equiv 0\pmod{p^2}, \qquad k=1,2,\dots,\frac{p-2}{3}.
\label{eq40}
\end{equation}
Let ${\rm ord}_p\, n$ be the $p$-adic order of $n,$ that is, the exponent of the highest power of $p$ dividing $n.$
It is clear that
\begin{equation}
{\rm ord}_p\,\bar{F}\left(\frac{p-2}{3},\frac{p+1}{3}\right)={\rm ord}_p\,
\frac{3^{-p-1}\left(\frac{2p+2}{3}\right)!(2p-1)!^2\left(\frac{p-2}{3}\right)!^2}{(p+2)!\left(
\frac{p+1}{3}\right)!^3\left(\frac{2p-1}{3}\right)!^2(p-2)!^2}=1.
\label{eq41}
\end{equation}
Similarly, considering the disjoint intervals $\frac{p+4}{3}\le k\le\frac{p-1}{2},$ $\frac{p+1}{2}\le k\le \frac{2p-1}{3},$
and $\frac{2p+2}{3}\le k\le p-1,$ we obtain
\begin{equation}
{\rm ord}_p\,\bar{F}\left(\frac{p-2}{3},k\right)\ge 3, \qquad k=\frac{p+4}{3}, \frac{p+4}{3}+1, \dots, p-1.
\label{eq42}
\end{equation}
Thus from (\ref{eq40})--(\ref{eq42}) we have
$$
\sum_{k=0}^{p-1}\bar{F}\left(\frac{p-2}{3},k\right)\equiv \bar{F}\left(\frac{p-2}{3},0\right)+
\bar{F}\left(\frac{p-2}{3},\frac{p+1}{3}\right)
\pmod{p^2}
$$
or
\begin{equation*}
\begin{split}
\sum_{k=0}^{p-1}\bar{F}\left(\frac{p-2}{3},k\right)&\equiv 4+2(p-2)(p+1)+\frac{3^{-p-1}\left(\frac{2p+2}{3}\right)!
(2p-1)!^2\left(\frac{p-2}{3}\right)!^2}{(p+2)!\left(\frac{p+1}{3}\right)!^3\left(\frac{2p-1}{3}\right)!^2(p-2)!^2} \\
&\times((5p+9)(p+2)+8(p-2)(p+1))\pmod{p^2}.
\end{split}
\end{equation*}
Taking into account that
\begin{equation*}
\begin{split}
\frac{(2p-1)!^2}{(p-2)!^2(p+2)!}&=\frac{(p-1)^2p}{(p+1)(p+2)}\frac{(p+1)^2(p+2)^2\cdots(2p-1)^2}
{(p-1)!} \\
&\equiv \frac{(p-1)^2p}{(p+1)(p+2)}\cdot (p-1)!\pmod{p^2}\equiv\frac{p!}{2}\pmod{p^2},
\end{split}
\end{equation*}
$$
(5p+9)(p+2)+8(p-2)(p+1)\equiv 2\pmod p,
$$
and simplifying, we get
$$
\sum_{k=0}^{p-1}\bar{F}\left(\frac{p-2}{3},k\right)\equiv -2p+\frac{2\cdot 3^{-p}\cdot p!}{\left(\frac{2p-1}{3}\right)!
\left(\frac{p+1}{3}\right)!}\pmod{p^2}.
$$
For primes $p>3,$ by Fermat's theorem, we have $3^p\equiv 3\pmod{p}$ and therefore,
\begin{equation}
\sum_{k=0}^{p-1}\bar{F}\left(\frac{p-2}{3},k\right)\equiv
-2p+\frac{2}{3}\frac{p!}{\left(\frac{2p-1}{3}\right)!\left(\frac{p+1}{3}\right)!}\pmod{p^2}.
\label{eq43}
\end{equation}
Now put $n=\frac{p+1}{3}$ and note that
\begin{equation}
\begin{split}
\frac{p\,!}{\left(\frac{2p-1}{3}\right)!\left(\frac{p+1}{3}\right)!}&=\binom{p}{n}
=\frac{p(p-1)\cdots(p-n+1)}{n!}=p\frac{(-1)^{n-1}}{n}\prod_{j=1}^{n-1}\left(1-\frac{p}{j}\right) \\
&\equiv\frac{(-1)^{n-1}}{n} p\pmod{p^2}\equiv -3p\pmod{p^2},
\label{eq44}
\end{split}
\end{equation}
where in the last equality we used the fact that $n-1=\frac{p-2}{3}$ is odd (otherwise, a prime $p>3$ must be even).
Finally, substituting (\ref{eq44}) in (\ref{eq43}) we obtain
$$
\sum_{k=0}^{p-1}\bar{F}\left(\frac{p-2}{3},k\right)\equiv -2p-2p=-4p=4\left(\frac{p}{3}\right) p\pmod{p^2},
$$
which proves the theorem for primes $p\equiv 2\pmod{3}.$
Now suppose that $p\equiv 1\pmod{3}.$ Then we have
$$
\sum_{k=0}^{p-1}\bar{F}(0,k)\equiv\sum_{k=0}^{p-1}\bar{F}\left(\frac{p-1}{3},k\right)\pmod{p^3},
$$
where
$$
\bar{F}\left(\frac{p-1}{3},k\right)=\frac{(-27)^{-k}(2k)!(3k+p-1)!^2\left(\frac{p-1}{3}\right)!^2}
{(3k+1)!k!^3\left(k+\frac{p-1}{3}\right)!^2(p-1)!^2}((15k+4)(3k+1)+2(p-1)(9k+p+2)).
$$
Note that if $1\le k<\frac{p-1}{3},$ then the denominator of $\bar{F}(\frac{p-1}{3},k)$
is not divisible by $p.$ On the other hand, ${\rm ord}_p\,(3k+p-1)!^2=2$ and therefore,
$\bar{F}(\frac{p-1}{3},k)\equiv 0\pmod{p^2}.$ It is clear that
$$
{\rm ord}_p\,\bar{F}\left(\frac{p-1}{3},\frac{p-1}{3}\right)=1.
$$
Similarly, if $\frac{p+2}{3}\le k\le\frac{2p-2}{3},$ then ${\rm ord}_p\,\bar{F}(\frac{p-1}{3},k)\ge 3,$
and if $\frac{2p+1}{3}\le k\le p-1,$ then ${\rm ord}_p\,\bar{F}(\frac{p-1}{3},k)=3.$
Therefore, we have
\begin{equation}
\sum_{k=0}^{p-1}\bar{F}\left(\frac{p-1}{3},k\right)\equiv \bar{F}\left(\frac{p-1}{3},0\right)
+\bar{F}\left(\frac{p-1}{3},\frac{p-1}{3}\right)\pmod{p^2},
\label{eq45}
\end{equation}
where
\begin{equation}
\bar{F}\left(\frac{p-1}{3},0\right)=4+2(p-1)(p+2)\equiv 2p\pmod{p^2}
\label{eq46}
\end{equation}
and
$$
\bar{F}\left(\frac{p-1}{3},\frac{p-1}{3}\right)=\frac{3^{1-p} (2p-2)!^2 (p(5p-1)+2(p-1)(4p-1))}
{\left(\frac{2p-2}{3}\right)!\left(\frac{p-1}{3}\right)!p!(p-1)!^2}.
$$
Noting that
$$
\frac{(2p-2)!^2}{p!(p-1)!^2}=\frac{p}{p-1}\frac{(p+1)^2(p+2)^2\cdots(2p-2)^2}{(p-2)!}
\equiv\frac{p}{p-1}\,(p-2)!\equiv p!\pmod{p^2},
$$
and applying Fermat's theorem, we have
$$
\bar{F}\left(\frac{p-1}{3},\frac{p-1}{3}\right)\equiv
\frac{2\cdot p!}{\left(\frac{2p-2}{3}\right)!\left(\frac{p-1}{3}\right)!}\pmod{p^2}.
$$
Now setting $n=\frac{p-1}{3}$ and taking into account that
$$
\frac{(p-1)!}{\left(\frac{2p-2}{3}\right)!\left(\frac{p-1}{3}\right)!}=\binom{p-1}{n}
=(-1)^n\prod_{j=1}^n\left(1-\frac{p}{j}\right)
\equiv 1\pmod{p}
$$
we have
\begin{equation}
\bar{F}\left(\frac{p-1}{3},\frac{p-1}{3}\right)\equiv 2p\pmod{p^2}.
\label{eq47}
\end{equation}
Now by (\ref{eq45})--(\ref{eq47}), we obtain
$$
\sum_{k=0}^{p-1}\bar{F}\left(\frac{p-1}{3},k\right)\equiv 4p=4\left(\frac{p}{3}\right)p\pmod{p^2},
$$
which proves the theorem for primes $p\equiv 1\pmod{3}.$
\end{proof}
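As with Theorem \ref{t8}, Theorem \ref{t9} is easy to test numerically for small primes; multiplying the congruence through by $(-27)^{p-1},$ which is coprime to $p,$ keeps everything in integers:
\begin{verbatim}
from math import comb

for p in [5, 7, 11, 13, 17, 19]:
    S = sum((15*k + 4)*(-27)**(p - 1 - k)*comb(2*k, k)**2*comb(3*k, k)
            for k in range(p))
    leg = 1 if p % 3 == 1 else -1          # the Legendre symbol (p/3)
    assert (S - 4*leg*p*(-27)**(p - 1)) % p**2 == 0
\end{verbatim}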
\section{Introduction} \label{sec:Intro}
\IEEEPARstart{T}{he} abundance of bandwidth in the millimeter wave (mmWave) spectrum enables gigabit-per-second data rates for cellular systems and local area networks \cite{m1}, \cite{m2}. MmWave systems make use of large antenna arrays at both the transmitter and the receiver to provide sufficient receive signal power. The use of large antenna arrays is justified by the small carrier wavelength at mmWave frequencies which permits large number of antennas to be packed in small form factors.
Due to weather and atmospheric effects, outdoor mmWave antenna elements are subject to blockages from flying debris or particles found in the air as shown in Fig. \ref{fig:ant}. The term ``blockage'' here refers to a physical object partially or completely blocking a subset of antenna elements and should not be confused with mmWave channel blockage. MmWave antennas on handheld devices are also subject to blockage from random finger placement and/or fingerprints on the antenna array. Partial or complete blockage of some of the antenna elements reduces the amount of energy incident on the antenna \cite{absorb}, \cite{absorb2}. For instance, it is reported in \cite{absorb3} that $90\%$ of a 76.5 GHz signal energy will be absorbed by a water droplet of thickness 0.23 mm. A thin water film caused by, for example a finger print, is also reported to cause attenuation and a phase shift on mmWave signals \cite{absorb3}. Moreover, snowflakes, ice stones, and dry and damp sand particles are reported to cause attenuation and/or scattering \cite{absorb}-\cite{absorb4}. Because the size of these suspended particles is comparable to the signal wavelength and antenna size, random blockages caused by these particles will change the antenna geometry and result in a distorted radiation pattern \cite{ag0}, \cite{ag}. Random changes in the array's radiation pattern causes uncertainties in the mmWave channel. It is therefore important to continuously monitor the mmWave system, reveal any abnormalities, and take corrective measures to maintain efficient operation of the system. This necessitates the design of reliable and low latency array diagnosis techniques that are capable of detecting the blocked antennas and the corresponding signal power loss and/or phase shifts caused by the blocking particles. Once a fault has been detected, pattern correction techniques proposed in, for example, \cite{ag0}-\cite{ag5} can be employed to calculate new excitation weights for the array.
Several array diagnostic techniques, which are based on genetic algorithms \cite{gen1}, \cite{gen3}, matrix inversion \cite{matrix}, exhaustive search \cite{esearch}, and MUSIC \cite{music}, have been proposed in the literature to identify the locations of faulty antenna elements. These techniques compare the radiation pattern of the array under test (AUT) with the radiation pattern of an ``error free" reference array. For large antenna arrays, the techniques in \cite{gen1}-\cite{music} require a large number of samples (measurements) to obtain reliable results. To reduce the number of measurements, compressed sensing (CS) based techniques have recently been proposed in \cite{cs1}-\cite{cs5}. Despite their good performance, the techniques in \cite{gen1}-\cite{cs5} are primarily designed to detect the sparsity pattern of a failed array, i.e. the locations of the failed antennas and not necessarily the complex blockage coefficients. Moreover, the CS diagnosis techniques proposed in \cite{cs1} and \cite{cs2} have the following limitations: (i) They require measurements to be made at multiple receive locations and are not suitable when both the transmitter and the receiver are fixed. (ii) They assume fault-free receive antennas, i.e. faults at the AUT only, however, faults can occur at both the transmitter and the receiver. (iii) They can not exploit correlation between faulty antennas to further reduce the diagnosis time. (iv) They do not estimate the effective antenna element gain, i.e. the induced attenuation and phase shifts caused by blockages. These estimates can be used to re-calibrate the array. v) They do not optimize the CS measurement matrices, i.e. the restricted isometry property might not be satisfied. While the CS technique proposed in \cite{cs5} performs joint fault detection/estimation and angle-of-arrival/departure (AoA/D) estimation, it is not suitable for mmWave systems as it requires a separate RF chain for each antenna element. The high diagnosis time required by the techniques proposed in \cite{gen1}-\cite{music} and the limitations of the CS based techniques proposed in \cite{cs1}-\cite{cs5} motivate the development of new array diagnosis techniques suitable for mmWave systems.
In this paper, we develop low-complexity array diagnosis techniques for mmWave systems with large antenna arrays. These techniques account for practical assumptions on the mmWave hardware in which the analog phase shifters have constant modulus and quantized phases, and the number of RF chains is limited (assumed to be one in this paper).
The main contributions of the paper can be summarized as follows:
\begin{itemize}
\item We investigate the effects of random blockages on the far-field radiation pattern of linear uniform arrays. We consider both partial and complete blockages.
\item We derive closed-form expressions for the mean and variance of the far-field radiation pattern as a function of the antenna element blockage probability. These expressions provide an efficient means to evaluate the impact of the number of antenna elements and the antenna element blockage probability on the far-field radiation pattern.
\item We propose a new formulation for mmWave antenna diagnosis which relaxes the need for multi-location measurements, captures the sparse nature of blockages, and enables efficient compressed sensing recovery.
\item We consider blockages at the transmit and/or receive antennas and propose two CS based array diagnosis techniques. These techniques identify the locations and the induced attenuation and phase shifts caused by unstructured blockages.
\item We exploit the two dimensional structure of mmWave antenna arrays and the correlation between the blocked antennas to further reduce the array diagnosis time when structured blockages exist at the receiver.
\item We evaluate the performance of the proposed array diagnosis techniques by simulations in a mmWave system setting, assuming that both the transmit and receive antennas are equipped with a single RF chain and 2-bit phase shifters.
\end{itemize}
\begin{figure}[t]
\begin{center}
\includegraphics[width=2.2in]{antm.eps}
\caption{An example of an outdoor millimeter wave antenna array with different suspended particles partially blocking the array. The suspended particles, with different absorption and scattering properties, modify the array geometry.}
\label{fig:ant}
\end{center}
\end{figure}
The remainder of this paper is organized as follows. In Section \ref{sec:probform}, we formulate the array diagnosis problem and study the effects of random blockages on the far-field radiation pattern of linear arrays. In Section \ref{sec:prop}, we introduce the proposed array diagnosis technique assuming a fault free transmit array and in Section \ref{sec:propj}, we introduce the proposed array diagnosis technique when faults are present at both the transmit and receive arrays. In Section \ref{sec:PA} we provide some numerical results and conclude our work in Section \ref{sec:con}.
\section{Problem Formulation} \label{sec:probform}
We consider a two-dimensional (2D) planar antenna array with $N_\text{x}$ equally spaced elements along the x-axis and $N_\text{y}$ equally spaced elements along the y-axis; nonetheless, the model and the corresponding algorithms can be adapted to other antenna structures as well. Each antenna element is described by its position along the x and y axis, for example, the $(N_\text{x},N_\text{y})$th antenna refers to an antenna located at the $N_\text{x}$th position along the x-axis and the $N_\text{y}$th position along the y-axis. The ideal far-field radiation pattern of this planar array in the direction $(\theta,\phi)$ is given by \cite{at}
\begin{eqnarray}\label{cz1i}
\hspace{-3mm } f(\theta,\phi) \hspace{-2.6mm }&=& \hspace{-3.7mm } \sum_{n=0}^{N_\text{y}-1} \hspace{-0.5mm } \sum_{m=0}^{N_\text{x}-1} \hspace{-1mm } w_{n,m} e^{j m \frac{2\pi d_x}{\lambda} \sin \theta \cos \phi} e^{j n \frac{2\pi d_y}{\lambda} \sin \theta \sin \phi}\hspace{-0.5mm },
\end{eqnarray}
where $d_x$ and $d_y$ are the antenna spacing along the x and y axis, $\lambda$ is the wavelength, and $w_{n,m}$ is the $(n,m)$th complex antenna weight.
Let $\mathbf{a}_\text{x}(\theta,\phi) \in \mathcal{C}^{N_\text{x}\times 1}$ and $\mathbf{a}_\text{y}(\theta,\phi) \in \mathcal{C}^{N_\text{y}\times 1}$ be two vectors where the $m$th entry of $\mathbf{a}_\text{x}(\theta,\phi)$ is $[\mathbf{a}_x(\theta,\phi)]_{m}= e^{j m \frac{2\pi d_x}{\lambda} \sin \theta \cos \phi}$ and the $n$th entry of $\mathbf{a}_\text{y}(\theta,\phi)$ is $[\mathbf{a}_y(\theta,\phi)]_{n} = e^{j n \frac{2\pi d_y}{\lambda} \sin \theta \sin \phi} $. Also, let the matrix $\mathbf{W}\in \mathcal{C}^{N_\text{y}\times N_\text{x}}$ be a matrix of antenna weights, where the $(n,m)$th entry of $\mathbf{W}$ is $[\mathbf{W}]_{n,m}=w_{n,m}$. Then (\ref{cz1i}) can be reduced to
\begin{eqnarray}\label{cz2i}
f(\theta,\phi)= {\text{vec}{(\mathbf{W})}^\mathrm{T} }\mathbf{a}(\theta,\phi),
\end{eqnarray}
where vec$(\mathbf{W})$ is the $N_\text{x}N_\text{y} \times 1$ column vector obtained by stacking the columns of the matrix $\mathbf{W}$ on top of one another, the vector $\mathbf{a}(\theta,\phi)=\mathbf{a}_\text{x}(\theta,\phi) \otimes \mathbf{a}_\text{y}(\theta,\phi)$ is the 1D array response vector, and the operator $\otimes$ represents the Kronecker product. The formulation in (\ref{cz2i}) allows us to represent the 2D array as a 1D array, and as a result, simplify the problem formulation.
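The equivalence of (\ref{cz1i}) and (\ref{cz2i}) is a standard vec--Kronecker identity; the following numerical sketch (numpy; the array size, spacing and angles are arbitrary test choices) confirms the ordering conventions used here:
\begin{verbatim}
import numpy as np

Nx, Ny, dx, dy = 4, 4, 0.5, 0.5          # spacings in units of wavelength
theta, phi = np.deg2rad(90.0), np.deg2rad(30.0)
rng = np.random.default_rng(0)
W = rng.standard_normal((Ny, Nx)) + 1j*rng.standard_normal((Ny, Nx))

ax = np.exp(1j*2*np.pi*dx*np.arange(Nx)*np.sin(theta)*np.cos(phi))
ay = np.exp(1j*2*np.pi*dy*np.arange(Ny)*np.sin(theta)*np.sin(phi))

f_sum = sum(W[n, m]*ax[m]*ay[n] for n in range(Ny) for m in range(Nx))
f_vec = W.flatten(order='F') @ np.kron(ax, ay)   # vec(W)^T (a_x kron a_y)
assert np.isclose(f_sum, f_vec)
\end{verbatim}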
In the presence of blockages, the far-field radiation pattern of the array in (\ref{cz2i}) becomes
\begin{eqnarray}\label{bs1}
g(\theta,\phi)={\underbrace{\text{vec}{(\mathbf{W})}}_{\mathbf{x}}}^\mathrm{T}\underbrace{ (\mathbf{b} \circ \mathbf{a}(\theta,\phi)) }_{\mathbf{z}},
\end{eqnarray}
where operator $\circ$ represents the Hadamard product, the vector $\mathbf{x} \in \mathcal{C}^{N_\text{x}N_\text{y}\times 1}$ is a vector of antenna weights and the vector $\mathbf{z} \in \mathcal{C}^{N_\text{x}N_\text{y}\times 1}$ is the equivalent array response vector. The $n$th entry of the vector $\mathbf{b} \in \mathcal{C}^{N_\text{x}N_\text{y}\times 1}$ is defined by
\begin{equation}\label{efbp1}
b_n = \left\{
\begin{array}{ll}
\alpha_n, & \hbox{ if the $n$th element is blocked} \\
1, & \hbox{ otherwise, } \\
\end{array}
\right.
\end{equation}
where $n = 1,..., N_\text{x}N_\text{y}$, $\alpha_n = \kappa_n e^{j\Phi_n}$, $0 \leq \kappa_{n} \le 1$ and $0 \leq \Phi_{n} \leq 2\pi$ are the resulting absorption and scattering coefficients at the $n$th element. A value of $\kappa_{n} = 0$ represents maximum absorption (or blockage) at the $n$th element, and the scattering coefficient $\Phi_{n}$ measures the phase-shift caused by the particle suspended on the $n$th element. This makes $b_n$ a random variable, i.e. $b_n=\alpha_n$ with probability $P_\text{b}$ if the $n$th antenna is blocked and $b_n=1$ with probability $1-P_\text{b}$ otherwise. It is clear from (\ref{bs1}) that blockages will change the array manifold and result in a distorted radiation pattern as shown in Fig. \ref{fig:pat}. The resulting pattern is a function of the number of the particles suspended on the array and their corresponding dielectric constants.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{pat2d2.eps}
\caption{Original and damaged beam patterns of a 16 element ($4 \times 4$) planar array with $\theta = 90$ degrees and $\frac{d_x}{\lambda}=\frac{d_y}{\lambda}=0.5$. The third, fifth and thirteenth array elements of the equivalent 1D array (see (\ref{cz2i})) are blocked with $b_3=0.37+j0.22$, $b_5=-0.1+j0.34$, and $b_{13}=-0.64-j0.1$. Blockages result in an increase in the sidelobe level and a decrease in gain. Phase correction improves the beam pattern; however, more complex precoder design is required for pattern correction.}
\label{fig:pat}
\end{center}
\end{figure}
In Tables I and II, we summarize the effects of blockages on a linear array. Specifically, we tabulate the mean and variance of a distorted far-field radiation pattern of a linear array subject to blockages. The total number of blockages is assumed to be fixed, however, block locations and intensities (for the case of partial blockage) are assumed to be random. For ease of exposition, we study the effects on the azimuth direction only. A similar analysis can be performed for the elevation pattern. Derivations of the results in Tables I and II can be found in \cite{mpv} and are omitted for space limitation. From Table I, we observe that both complete and partial blockages reduce the amplitude of the main lobe. This reduces the beamforming gain of the array. We also observe that complete blockages have no effect on the variance of the main lobe, and hence do not cause randomness in the main lobe. Random partial blockages, however, randomize the main lobe and lead to uncertainties in the mmWave channel. From Table II, we observe that complete and partial blockages distort the sidelobes of the far-field radiation pattern. Table II also shows that the variance of this distortion is a function of the blockage intensity and the antenna element blockage probability $P_\text{b}$.
\begin {table*}[t!]
\center
\caption{Mean and variance of the far-field beam pattern of a linear array steered at $\phi =\phi_\text{T}$ as a function of the antenna element blockage probability $P_\text{b}$ and the blockage coefficient $\alpha_n$. The blockage coefficient is constant if the array is subject to a single type of blockage, and random if the array is subject to multiple types of blockages. Derivations of the results can be found in \cite{mpv} and are omitted for space limitations.}
\center
\begin{tabular}{ |p{5cm}||p{2.5cm}|p{1.5cm}| }
\hline
Fault Type & Mean & Variance \\
\hline
Complete blockage ($\alpha_{n\in \mathcal{I_\text{k}}}=0$) & $1-P_\text{b}$ & 0\\
Partial blockage ($\alpha_{n\in \mathcal{I_\text{k}}}=\beta$)& $1-P_\text{b}+P_\text{b}\beta$ &$0$ \\
Partial blockages ($\alpha_n$ random)& $1-P_\text{b}+P_\text{b}\mathbb{E}[\alpha_n]$ &$P_\text{b} \text{var}[\alpha_n]$ \\
\hline
\end{tabular}
\end{table*}
\begin {table*}[t!]
\caption{Mean and variance of the far-field beam pattern of a linear array steered at $\phi \not=\phi_\text{T}$ as a function of the antenna element blockage probability $P_\text{b}$ and the blockage coefficient $\alpha_n$; $\gamma = \frac{\pi d_x}{\lambda} (\cos (\phi_\text{}) - \cos (\phi_\text{T}) )$. Derivation of results can be found in \cite{mpv} and are omitted for space limitation.}
\center
\begin{tabular}{ |p{4.5cm}||p{5.2cm}|p{5.5cm}| }
\hline
Fault Type & Mean & Variance \\
\hline
Complete blockage ($\alpha_{n\in \mathcal{I_\text{k}}}=0$) & $(1-P_\text{b}) \frac{\sin ( N_x \gamma ) }{N_x \sin( \gamma)} e^{j (N_x-1) \gamma }$ & $ \frac{P_\text{b}}{N_x}\left(1-P_\text{b}\right)$\\
Partial blockage ($\alpha_{n\in \mathcal{I_\text{k}}}=\beta$) & $(1-P_\text{b}(1-\beta)) \frac{\sin ( N_x \gamma ) }{N_x \sin( \gamma)} e^{j (N_x-1) \gamma }$ &$ \frac{1}{N_x} (1-P_\text{b}+P_\text{b} |\beta|^2) - \frac{1}{N_x} (1-P_\text{b}+P_\text{b}\beta)^2 $ \\
Partial blockage ($\alpha_n$ random) & $(1-P_\text{b}(1-\mathbb{E}[\alpha_n])) \frac{\sin ( N_x \gamma ) }{N_x \sin( \gamma)} e^{j (N_x-1) \gamma }$ &$ \frac{P_\text{b}}{N_x}(1- P_\text{b}+\mathbb{E} [\alpha^2_n] -P_\text{b}\mathbb{E} [\alpha_n]^2)$ \\
\hline
\end{tabular}
\end{table*}
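The entries of Tables I and II are easy to reproduce by Monte Carlo simulation. The sketch below (numpy) does so for the complete-blockage rows, assuming the normalized array factor $g(\phi)=\frac{1}{N_x}\sum_{n=0}^{N_x-1} b_n e^{j2\gamma n}$; the normalization and the values of $N_x,$ $P_\text{b}$ and $\gamma$ are our test choices.
\begin{verbatim}
import numpy as np

Nx, Pb, trials = 32, 0.1, 200_000
gamma = 0.7                              # some off-boresight angle
rng = np.random.default_rng(1)
B = (rng.random((trials, Nx)) > Pb).astype(float)   # b_n = 0 w.p. Pb

g = (B*np.exp(1j*2*gamma*np.arange(Nx))).sum(axis=1)/Nx

mean_pred = (1 - Pb)*np.sin(Nx*gamma)/(Nx*np.sin(gamma))*np.exp(1j*(Nx - 1)*gamma)
var_pred = Pb*(1 - Pb)/Nx
print(abs(g.mean() - mean_pred))         # small (Monte Carlo error)
print(abs(g.var() - var_pred))           # small (Monte Carlo error)
\end{verbatim}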
In the following sections, we propose several array diagnosis (or blockage detection/estimation) techniques for mmWave antenna arrays. Once array diagnosis is complete, the estimated attenuation and phase shifts caused by blockages can be used to calibrate the array. Fig.~\ref{fig:pat}, for example, shows the resulting pattern when phase correction is applied to the affected antenna elements. As shown, the resulting beam pattern is slightly improved; however, more complex precoder design is required to modify the excitation weights of the antenna elements and calibrate the array. The calibration process could, for example, focus on maximizing the beamforming gain and/or minimizing the sidelobe level, and, as a result, reducing the uncertainty in the mmWave channel. While the excitation weights of failed arrays can be easily modified in digital antenna architectures, additional hardware, e.g. RF chains, antenna switches, subarrays, etc., might be required to generate more degrees of freedom for the precoder design. Hybrid architectures (see e.g. \cite{h1} and \cite{h2}) can be used to modify the excitation weights of the failed array and also reduce the diagnosis time since each RF chain can now obtain independent measurements. Beam pattern correction and precoder design for failed arrays is beyond the scope of this work and is left for future work. Before we proceed with the proposed techniques, we lay down the following assumptions: (i) The number of blockages is assumed to be small compared to the array size. (ii) The channel between the transmitter and the receiver is line-of-sight (LoS) with a single dominant path. In the case of multi-path, the transmitter waits for a period $\tau$, where $\tau$ is proportional to the channel's delay spread, before it transmits the following training symbol. (iii) The transmit and receive array manifolds as well as the transmitter's angle-of-departure and the receiver's angle-of-arrival are known at the receiver. The AoA/D can be known a priori, obtained by, for example, using prior sub-6 GHz channel information \cite{anum}, or provided by an infrastructure via a lower frequency control channel. (iv) Blockages remain constant for a time interval which is larger than the diagnosis time.
\section{Fault Detection at the Receiver}\label{sec:prop}
In the previous section, we showed that blockages distort the beam pattern of the array. To mitigate the effects of blockages, it is imperative to design reliable array diagnosis techniques that detect the fault locations and estimate the values of the blockage coefficients with minimum diagnosis time. Array diagnosis can be initiated after channel estimation. For example, the optional training subfield of the SC PHY IEEE 802.11ad frame could be utilized for periodic array diagnosis. Since the system performance is greatly affected by the antenna architecture and precoder design, which we do not undertake in this work, we do not simulate the effect of CSI training loss in this paper and simply focus on the array diagnosis problem. Note, however, that beamforming at both the transmitter and receiver leads to larger coherence time \cite{ab1}, \cite{ab2} and the angular variation is typically an order of magnitude slower than the conventional coherence time \cite{ab1}. Since all AoDs/AoAs are assumed to be known a priori in this paper, the loss in CSI training time becomes unsubstantial.
In this section we propose three array diagnosis techniques. The first technique is generic in the sense that it does not exploit the block structure of blockages while the second and third techniques exploit the dependencies between blocked antenna elements to further reduce the array diagnosis time.
\subsection{Generic Fault Detection}\label{sec:GFD}
To start the array diagnosis process, the receiver with the AUT requests a transmitter, with known location, i.e. $\theta$ and $\phi$, to transmit $K$ training symbols (known to the receiver). The receiver with the AUT generates a random beam to receive each training symbol as shown in Fig. \ref{adad}(a), i.e., random antenna weights are used at the AUT to combine each training symbol. Mathematically, the $k$th output of the AUT can be written as
\begin{eqnarray}\label{c4}
{h}_k(\theta,\phi)=\sqrt{\rho} s \mathbf{x}_k^\mathrm{T}\mathbf{z} + e_k,
\end{eqnarray}
where $k=1,...,K$, $\rho$ is the effective signal-to-noise ratio (SNR) which includes the path loss, $s=1$ is the training symbol, and $e_k \sim \mathcal{CN}(0,1)$ is the additive noise. The entries of the weighting vector $\mathbf{x}_k $ at the $k$th instant are chosen uniformly and independently at random. Equipped with the path-loss and angular location of the transmitter, the receiver generates the ideal pattern $f_k(\theta,\phi)=\mathbf{x}_k^\mathrm{T}\mathbf{a}(\theta,\phi)$ using (\ref{cz1i}). Normalizing the received signal in (\ref{c4}) by $\sqrt{\rho}s$ and subtracting the ideal beam pattern we obtain
\begin{align}
\nonumber y_k &= h_k(\theta,\phi) -f_k(\theta,\phi)=\mathbf{x}_k^\mathrm{T} (\mathbf{b} \circ \mathbf{a}(\theta,\phi)) - \mathbf{x}_k^\mathrm{T} \mathbf{a}(\theta,\phi) + \tilde{e}_k \\ \label{c2} &=\mathbf{x}_k^\mathrm{T} (\mathbf{c} \circ \mathbf{a}(\theta,\phi) ) + \tilde{e}_k,
\end{align}
where $\tilde{e}_k = \frac{e_k}{\sqrt{\rho}}$, and the $n$th entry of the vector $\mathbf{c}$ is $[\mathbf{c}]_n=0$ when there is no blockage at the $n$th element, and $[\mathbf{c}]_n = b_{n} -1$ otherwise. Let $\mathbf{q} = \mathbf{c} \circ \mathbf{a}(\theta,\phi) $, after $K$ measurements we obtain
\begin{eqnarray}\nonumber
\hspace{-0mm}\underbrace{\left[ \begin{array}{c} y_1 \\ y_2\\ \vdots \\y_K\end{array}
\right]}_{\mathbf{y}} \hspace{-1mm}= \hspace{-1mm}\underbrace{\left[
\begin{array}{cccc} x_{1,1} & x_{1,2} & \cdots & x_{1,N_\text{R}} \\
x_{2,1} & x_{2,2} & \cdots & x_{2, N_\text{R}} \\
\vdots & \vdots & \vdots & \vdots \\
x_{K,1} & x_{K,2} & \cdots & x_{K, N_\text{R}}
\end{array}
\right]}_{\mathbf{X}}
\underbrace{\left[
\begin{array}{c} {q}_1 \\ {q}_2 \\ \vdots \\{q}_{ N_\text{R}} \end{array}
\right]}_{\mathbf{q}} \hspace{-1mm}+ \hspace{-1mm} \underbrace{\left[\begin{array}{c} \tilde{e}_1 \\ \tilde{e}_2 \\ \vdots \\\tilde{e}_K \end{array}\right]}_{\mathbf{e}}
\end{eqnarray}
or equivalently
\begin{eqnarray}\label{fb_modela}
\mathbf{y} = \mathbf{X}\mathbf{q} + \mathbf{e},
\end{eqnarray}
where $N_\text{R}=N_\text{x}N_\text{y}$ is the total number of antennas at the receiver. Assuming that the number of blocked antenna elements is small, i.e. $S\ll N_\text{R}$, the vector $\mathbf{q}$ in (\ref{fb_modela}) becomes sparse with $S$ non-zero elements that represent the locations of the blocked antennas.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[width=200pt]{ad1.pdf}
\caption{\label{fig:fig1}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering\includegraphics[width=200pt]{ad2.pdf}
\caption{\label{fig:fig2}}
\end{subfigure}
\caption{An example of real time array diagnosis. (a) Diagnosis of the receiver array. Transmitter sends $K$ training symbols and the receiver receives each training symbol using a random receive beam. (b) Joint diagnosis of both the transmit and the receive arrays. Transmitter sends training symbols using $k_\text{t}$ random transmit beams and the receiver receives each training symbol using $k_\text{r}$ random receive beams.} \label{adad}
\end{figure*}
To mitigate the effects of blockages, it is desired to first detect the locations of the blocked antennas and then estimate the complex blockage coefficients $b_n$ with few measurements (i.e., short diagnosis time). Note that the system in (\ref{fb_modela}) requires $K \ge N_\text{R}$ measurements to estimate the vector $\mathbf{q}$ directly. While this might be acceptable for small antenna arrays, mmWave systems are usually equipped with large antenna arrays to provide sufficient link budget \cite{rap}. Scaling the number of measurements with the number of antennas would require more measurements and would increase the array diagnosis time. In the following, we show how we can (i) detect the locations of the blocked antennas, and (ii) estimate the complex blockage coefficients $b_n$ with $K \ll N_\text{R}$ measurements by exploiting the sparsity structure of the vector $\mathbf{q}$ under the assumption that blockages remain constant for a time interval which is larger than the diagnosis time. If the time interval is small, a hybrid architecture (see e.g. \cite{h1} and \cite{h2}) can be employed with multiple RF chains and each RF chain can obtain independent measurements. This reduces the diagnosis time by a factor of $N_\text{RF}$, where $N_\text{RF}$ is the number of RF chains.
\subsubsection{Sparsity Pattern Detection} \label{cs}
Compressive sensing theory permits efficient reconstruction of a sensed signal with only a few sensing measurements. While there are many different methods used to solve sparse approximation problems (see, e.g., \cite{cP08}, \cite{spr}), we employ the least absolute shrinkage and selection operator (LASSO) as a recovery method. We adopt the LASSO since it does not require the support of the vector $\mathbf{q}$ to be known a priori. This makes it a suitable detection technique as blockages are random in general. The LASSO estimate of (\ref{fb_modela}) is given by \cite{cP08}
\begin{eqnarray}
\label{LASSO} \arg \min_{\boldsymbol{\nu} \in \mathbb{C}^{N_\text{R}\times 1}} \frac{1}{2}\| \mathbf{y} - \mathbf{X}\boldsymbol{{{\nu}}}\|_2^2 + \Omega \sigma \|\boldsymbol{{{\nu}}}\|_1,
\end{eqnarray}
where $\sigma$ is the standard deviation of the noise $\tilde{e}_k$, and $\Omega$ is a regularization parameter. The antenna weights, i.e. the entries of the matrix $\mathbf{X}$, are randomly and uniformly selected from the set $\{1+j, 1-j, -1+j, -1-j \}$ in this paper, i.e. the weights can be applied to a mmWave antenna with 2-bit phase shifters. In the special case of 1-bit phase shifters, $\mathbf{X}$ becomes a Bernoulli matrix which is known to satisfy the coherence property with high probability \cite{cb1}, \cite{spr}, \cite{spr1}.
\subsubsection{Attenuation and Induced Phase Shift Estimation}\label{csls2}
Once the support $\mathcal{S}$, where $\mathcal{S}=\{n: q_n\neq0\}$, is detected, one can apply estimation techniques such as least squares (LS) estimation to estimate and refine the complex coefficients $b_{n\in \mathcal{S}}$. To achieve this, the columns of $\mathbf{X}$ which are associated with the zero entries of $\mathbf{q}$ are removed to obtain $\mathbf{X}_\mathcal{S} \in \mathbb{C}^{K\times S}$. Hence, the vector $\mathbf{y}$ in equation (\ref{fb_modela}) can now be written as
\begin{eqnarray}\label{y2}
\mathbf{y}=\mathbf{X}_\mathcal{S}\mathbf{q}_\mathcal{S}+\mathbf{e},
\end{eqnarray}
where $\mathbf{q}_\mathcal{S}$ is obtained by pruning the zero entries of $\mathbf{q}$. Since $K>S$, the entries of $\mathbf{q}_\mathcal{S}$ can be estimated via LS estimation. In particular, one can write the LS estimate after successful sparsity pattern recovery as \cite{LMMSE}
\begin{eqnarray} \label{24b}
\hat{\mathbf{q}}_{\mathcal{S}} = (\mathbf{X}_\mathcal{S}^*\mathbf{X}_\mathcal{S})^{-1} \mathbf{X}_\mathcal{S}^* \mathbf{y}
= \mathbf{q}_\mathcal{S} + \check{\mathbf{e}},
\end{eqnarray}
where $\hat{\mathbf{q}}_\mathcal{S}$ is a noisy estimate of $\mathbf{q}_\mathcal{S}$, and the entries of the output noise vector $\check{\mathbf{e}}$ are Gaussian random variables as linear operations preserve the Gaussian noise distribution. Note that the $n$th entry of the vector $\mathbf{q}$ is $[\mathbf{q}]_{n} = (b_n-1)a_n$, where $a_n$ is the $n$th entry of the vector $\mathbf{a}(\theta,\phi)$ (see (\ref{c2})-(\ref{fb_modela})). Therefore, the estimated attenuation coefficient $\hat{\kappa}_{n\in\mathcal{S}} = |\frac{\hat{q}_{n\in\mathcal{S}}}{a_{n\in \mathcal{S}}}+1|$ and the estimated induced phase $\hat{\Phi}_{n\in \mathcal{S}} = \angle{\left(\frac{\hat{q}_{n\in\mathcal{S}}}{a_{n\in \mathcal{S}}}+1\right)}$.
\subsection{Exploiting the Block-Structure of Blockages} \label{sec:block1}
Due to the small antenna element size, it is likely that a suspended particle will block several neighboring antenna elements as shown in Fig. \ref{fig:ant}. This results in a block-sparse structure which can be exploited to substantially reduce the number of measurements without sacrificing robustness. In this section, we reformulate the CS problem to exploit this structure and reduce the array diagnosis time. To formulate the problem, we first rewrite $f(\theta,\phi)$ in (\ref{cz2i}) and $g(\theta,\phi)$ in (\ref{bs1}) as
\begin{eqnarray}\label{cbl1}
f(\theta,\phi)= {\text{vec}{(\mathbf{W})}^\mathrm{T} }{\text{vec}{(\mathbf{A})}},
\end{eqnarray}
and
\begin{eqnarray}\label{cbl1g}
g(\theta,\phi)= {\text{vec}{(\mathbf{W})}^\mathrm{T} }{\text{vec}{(\mathbf{A\circ B})}},
\end{eqnarray}
where $\mathbf{A}=\mathbf{a}_\text{y}(\theta,\phi) \mathbf{a}^\mathrm{T}_\text{x}(\theta,\phi)$ is the array response matrix, $\mathbf{W}$ is the weighting matrix, and $\mathbf{B}$ is the sparse blockage matrix, i.e. the entries of $\mathbf{B}$ are ``1'' in the case of no blockage and a random variable (see (\ref{efbp1})) in the case of a blockage. Substituting (\ref{cbl1}) and (\ref{cbl1g}) in (\ref{c4})-(\ref{c2}), the $k$th received measurement (after subtracting the ideal pattern from it) becomes
\begin{eqnarray}
\hspace{-10mm} {y}_{k} \hspace{-2mm}&=& \hspace{-2mm}{\text{vec}{(\mathbf{W}_k)}^\mathrm{T} }{\text{vec}{(\mathbf{A})}}- {\text{vec}{(\mathbf{W}_k)}^\mathrm{T} }{\text{vec}{(\mathbf{A\circ B})}} + \tilde{e}_k \\ \label{ykb10} \hspace{-2mm}&=& \hspace{-2mm} {\text{vec}{(\mathbf{W}_k)}^\mathrm{T} }{\text{vec}{(\mathbf{A}_\text{s})}}+\tilde{e}_k,
\end{eqnarray}
where $\mathbf{W}_k$ is the $k$th random weighting matrix, the innovation matrix $\mathbf{A}_\text{s}=\mathbf{A}-\mathbf{A}\circ\mathbf{B}$ is sparse, and $\tilde{e}_k$ is the additive noise. Observe that the columns of the matrix $\mathbf{A}_\text{s}$ are either all zeros, in the case of no blockage, or contain a block of non-zero entries. We exploit this structure to reduce the number of measurements and, as a result, the array diagnosis time.
Let $\mathbf{X} = [{\text{vec}{(\mathbf{W}_1)}}, {\text{vec}{(\mathbf{W}_2)} }, \cdots, {\text{vec}{(\mathbf{W}_K)} }]^\mathrm{T}$ be the measurement matrix which consists of $K$ random antenna weights and $\mathbf{q} = {\text{vec}{(\mathbf{A}_\text{s})}}$, then after $K$ measurements the innovation vector becomes
\begin{eqnarray}\label{yB}
\mathbf{y}_{\text{}} = \mathbf{X}_{\text{}}\mathbf{q}_{\text{}} + \mathbf{e}.
\end{eqnarray}
Observe that (\ref{yB}) is similar to (\ref{fb_modela}) with the exception that the new formulation allows the vector $\mathbf{q}$ to be block sparse. While the structure of the all-zero columns of $\mathbf{A}_\text{s}$ is known, the structure of the non-zero elements is unknown. This makes the block structure of the vector $\mathbf{q}$ in (\ref{yB}) random and a function of the number of blockages and their size. To complete the array diagnosis process, we employ the expanded block sparse Bayesian learning algorithm with bound optimization (EBSBL-BO) proposed in \cite{B0} to recover the block-sparse vector $\mathbf{q}$. The EBSBL-BO algorithm exploits the intra-block correlation of the sparse vector to improve recovery performance without requiring prior knowledge of the block structure. From $\mathbf{q}$, the amplitude and induced phase shifts can be estimated as shown in Section \ref{csls2}.
\subsection{Extension to Complete Group-Blockages} \label{sec:block2}
When blockages are complete, i.e. $\alpha_n=0$, and span multiple neighboring antennas, the innovation matrix $\mathbf{A}_\text{s}$ in (\ref{ykb10}) becomes sparse with $J$ groups (or clusters) of non-zero entries at the locations of the faults as shown in Fig. \ref{fig:B1}. In this section, we exploit the structure of these faults and propose a technique that identifies the locations of these faults with just $N_\text{x}+N_\text{y}$ measurements provided that the number of groups is small, independent of the group size. Recall $N_\text{x}$ is the number of antennas along the x-axis and $N_\text{y}$ is the number of antennas along the y-axis. The idea is to decompose the matrix $\mathbf{A}_\text{s}$ into two dense vectors as shown in Fig. \ref{fig:B1}. The first vector is a weighted sum of the rows of $\mathbf{A}_\text{s}$ while the second vector is a weighted sum of the columns of $\mathbf{A}_\text{s}$. Since the number of unknowns is now $N_\text{x}+N_\text{y}$, only $N_\text{x}+N_\text{y}$ measurements are required to recover the vectors. Once the vectors are recovered, the intersection of the non-zero elements of both vectors provides the location of potential faults. Equipped with the ideal matrix $\mathbf{A}$, the locations of the faults can be refined via an exhaustive search over all possible locations. In what follows, we formulate the problem and show how this is performed.
To formulate the problem, we rewrite ideal far-field radiation pattern in (\ref{cz1i}) as
\begin{eqnarray}\label{cz1irm}
f(\theta,\phi) = \mathbf{w}^{\mathrm{T}} \mathbf{A} \mathbf{p},
\end{eqnarray}
and the damaged pattern in (\ref{bs1}) as
\begin{eqnarray}\label{cz1irmg}
g(\theta,\phi) = \mathbf{w}^{\mathrm{T}} (\mathbf{A}\circ \mathbf{B}) \mathbf{p},
\end{eqnarray}
where $\mathbf{w}\in \mathcal{C}^{N_\text{y}\times 1}$ and $\mathbf{p}\in \mathcal{C}^{N_\text{x}\times 1}$ represent the receive antenna weights, and the matrix $\mathbf{A}\in \mathcal{C}^{N_\text{y} \times N_\text{x}}$ is the antenna response matrix. Substituting (\ref{cz1irm}) and (\ref{cz1irmg}) in (\ref{c4})-(\ref{c2}), the $k$th received measurement (after subtracting the ideal pattern from it) becomes
\begin{eqnarray}\label{ykb10g}
\hspace{-8mm} {y}_{k} \hspace{-2mm}&=& \hspace{-2mm} {\mathbf{w}}^\mathrm{T}_k \mathbf{A}\mathbf{p}_k - {\mathbf{w}}^\mathrm{T}_k (\mathbf{A\circ B})\mathbf{p}_k + \tilde{e}_k = {\mathbf{w}}^\mathrm{T}_k \mathbf{A}_\text{s} \mathbf{p}_k+\tilde{e}_k.
\end{eqnarray}
To start the diagnosis process, the receiver obtains $N_\text{y}$ measurements by varying the weighting vector $\mathbf{w}$ while fixing the weighting vector $\mathbf{p}=\mathbf{p}_0$. After $N_\text{y}$ measurements, the receiver obtains $N_\text{x}$ additional measurements by varying the weighting vector $\mathbf{p}$ while fixing the weighting vector $\mathbf{w}=\mathbf{w}_0$. Mathematically, the innovation vector can be written as
\begin{eqnarray}\label{ygm}
\nonumber {y}_{1} &=& {\mathbf{w}^\mathrm{T}_1} \mathbf{A}_\text{s} \mathbf{p}_0+\tilde{e}_1\\
\nonumber \vdots && \hspace{10mm} \vdots \quad \vdots\\
\nonumber {y}_{N_\text{y}} &=& \mathbf{w}^\mathrm{T}_{N_\text{y}} \mathbf{A}_\text{s} \mathbf{p}_0+\tilde{e}_{N_\text{y}}\\
\nonumber {y}_{N_\text{y}+1} &=& \mathbf{w}^\mathrm{T}_0 \mathbf{A}_\text{s} \mathbf{p}_{1}+\tilde{e}_{N_\text{y}+1}\\
\nonumber \vdots && \hspace{10mm} \vdots \quad \vdots\\
\nonumber {y}_{N_\text{y}+N_\text{x}} &=& \mathbf{w}^\mathrm{T}_0 \mathbf{A}_\text{s} \mathbf{p}_{N_\text{x}}+\tilde{e}_{N_\text{y}+N_\text{x}},
\end{eqnarray}
where the weighting vectors $\mathbf{w}_0 \in \mathcal{C}^{N_\text{y} \times 1}$ and $\mathbf{p}_0 \in \mathcal{C}^{N_\text{x} \times 1}$ consist of random weighting entries and are fixed throughout the diagnosis stage. Note that the term $\mathbf{A}_\text{s} \mathbf{p}_0$ represents a weighted sum of all the columns of the matrix $\mathbf{A}_\text{s}$, and the term $\mathbf{w}^\mathrm{T}_0 \mathbf{A}_\text{s}$ represents a weighted sum of the rows of $\mathbf{A}_\text{s}$ (see Fig. \ref{fig:B1}). To simplify the above system of equations, let $\mathbf{x}_1=\mathbf{A}_\text{s} \mathbf{p}_0$, $\mathbf{x}_2=(\mathbf{w}^\mathrm{T}_0 \mathbf{A}_\text{s})^\mathrm{T}$, and the matrix $\boldsymbol{\Phi}$ be
\begin{eqnarray}
\boldsymbol{\Phi} = \left[
\begin{array}{cc} \mathbf{W} & \mathbf{0}_1 \\
\mathbf{0}_2 & \mathbf{P}
\end{array}
\right],
\end{eqnarray}
where the weighting matrices $\mathbf{W} \in \mathcal{C}^{N_\text{y} \times N_\text{y}}$ and $\mathbf{P} \in \mathcal{C}^{N_\text{x} \times N_\text{x}}$ are both orthonormal matrices, the matrix $\mathbf{0}_1$ is an all zero matrix of size $N_\text{y} \times N_\text{x}$, the matrix $\mathbf{0}_2$ is an all zero matrix of size $N_\text{x} \times N_\text{y}$.
Then, the innovation vector can be simplified to
\begin{eqnarray}\label{xxs}
\left[
\begin{array}{cc} {y}_1 \\
\vdots\\
{y}_{N_\text{y}+N_\text{x}}
\end{array}
\right] = \underbrace{\left[
\begin{array}{cc} \mathbf{W} & \mathbf{0}_1 \\
\mathbf{0}_2 & \mathbf{P}
\end{array}
\right]}_{\boldsymbol{\Phi}}
\underbrace{\left[
\begin{array}{cc} \mathbf{x}_1 \\
\mathbf{x}_2
\end{array}
\right]}_{\mathbf{x}} + \left[
\begin{array}{cc} \tilde{e}_1 \\
\vdots\\
\tilde{e}_{N_\text{y}+N_\text{x}}
\end{array}
\right],
\end{eqnarray}
or equivalently
\begin{eqnarray}\label{1x}
\mathbf{y} = \boldsymbol{\Phi}\mathbf{x}+\mathbf{e}.
\end{eqnarray}
To recover the locations of the blockages, we estimate the vectors $\mathbf{x}_1$ and $\mathbf{x}_2$ in (\ref{xxs}) as follows
\begin{eqnarray}\label{1xe}
\hat{\mathbf{x}}= \boldsymbol{\Phi}^*\mathbf{y} = \boldsymbol{\Phi}^*\boldsymbol{\Phi}\mathbf{x}+\boldsymbol{\Phi}^*\mathbf{e}= \mathbf{x}+\acute{\mathbf{e}},
\end{eqnarray}
where $\hat{\mathbf{x}}$ is a noisy estimate of $\mathbf{x}$, and the estimates are $\hat{\mathbf{x}}_1 = [\hat{\mathbf{x}}]_{1:N_\text{y}} $ and $\hat{\mathbf{x}}_2=[\hat{\mathbf{x}}]_{(N_\text{y}+1):(N_\text{y}+N_\text{x})}$. The intersection of the indices of the non-zero elements in $\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$ corresponds to potential blocked/fault locations.
Using $\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$, we form the approximate binary matrix $\tilde{\mathbf{B}}$ as follows
\begin{equation}\label{amow}
[\tilde{\mathbf{B}}]_{m,n} = \left\{
\begin{array}{ll}
0, & \hbox{ if $[\hat{\mathbf{x}}_1]_m$ and $[\hat{\mathbf{x}}_2]_n$ are non-zero} \\
1, & \hbox{ otherwise, } \\
\end{array}
\right.
\end{equation}
The binary matrix $\tilde{\mathbf{B}}$ can be refined as follows
\begin{eqnarray}\label{es}
\hat{\mathbf{B}}= \arg \min_{\mathbf{D} \in \{0,1\}} \bigg\| \hat{\mathbf{x}}-\left[
\begin{array}{cc} (\mathbf{A} - (\mathbf{A} \circ \mathbf{D})) \mathbf{p}_0 \\
(\mathbf{w}^{\mathrm{T}}_0 (\mathbf{A} - (\mathbf{A} \circ \mathbf{D})))^{\mathrm{T}}
\end{array}
\right] \bigg\|_2,
\end{eqnarray}
where the binary matrix $\mathbf{D}$ ranges over matrices whose zero entries form a subset of the zero entries of $\tilde{\mathbf{B}}$. The zero entries of $\hat{\mathbf{B}}$ correspond to the locations of the blocked/faulty antennas.
\subsubsection*{Remarks}
\begin{enumerate}
\item For the special case of a single group blockage, i.e. $J=1$, the zero entries of the matrix $\tilde{\mathbf{B}}$ in (\ref{amow}) correspond to the locations of the faulty antennas and the search step in (\ref{es}) is not required.
\item When the number of groups increases, the computational complexity in (\ref{es}) increases and as a result, conventional compressed sensing recovery techniques might be more favorable in this case.
\item For large group sizes, the matrix $\mathbf{A}_s$ becomes dense and compressed sensing recovery techniques may fail in this case. The technique proposed in this section is able to identify the locations of the faulty antenna elements with just $N_\text{y}+N_\text{x}$ measurements. Even if compressed sensing recovery techniques succeed, the required number of measurements would be much higher than $N_\text{y}+N_\text{x}$.
\item When blockages are partial, the entries of the matrix $\mathbf{D}$ in (\ref{es}) would be complex instead of binary numbers, and the complexity of the exhaustive search step would be high if not prohibitive. A two-stage setting where the entries of $\tilde{\mathbf{B}}$ are independently examined might be favorable in this case.
\end{enumerate}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=3in]{block.png}
\caption{An example of an innovation matrix (i.e. $\mathbf{A}_{\text{s}}=\mathbf{A}-\mathbf{A\circ B}$) of a 128 element antenna with 19 blocked/faulty elements. The equivalent dense vectors are obtained by summing the rows and columns of the matrix $\mathbf{A}_{\text{s}}$. Intersection of the indices of the non-zero elements (of the dense vectors) correspond to the location of potentially blocked/faulty elements. }
\label{fig:B1}
\end{center}
\end{figure}
\section{Joint Fault Detection at the Transmitter and the Receiver}\label{sec:propj}
In the previous section, we assumed that the transmit antenna is free from blockages. When blockages exist at the transmit antenna, the receiver receives distorted training symbols and as a result, the technique proposed in the previous section will fail. In this section, we propose a detection technique that jointly detects blockages at both the transmitter and the receiver. For this technique, we assume that the receiver is equipped with the path-loss, angular location and array manifold of the transmitter. To start the diagnosis process, the transmitter sends a training symbol $s=1$ using $k_\text{t}$ random antenna weights (assumed to be known by the receiver). The receiver generates $k_\text{r}$ random antenna weights to receive each training symbol, thus making the total number of measurements $K=k_\text{t}k_\text{r}$. Let $\mathbf{a}_{\text{T}}(\theta,\phi)=\mathbf{a}_\text{tx}(\theta,\phi) \otimes \mathbf{a}_\text{ty}(\theta,\phi)$ be the $N_\text{T}\times 1$ 1D transmit array response vector. The $k$th received measurement at the receiver can be written as
\begin{eqnarray}\label{tx1}
y_k = \mathbf{w}_i^* (\mathbf{b\circ a}(\theta,\phi)) (\mathbf{b}_\text{T} \circ \mathbf{a}_{\text{T}}(\theta,\phi))^*\mathbf{f}_o+\tilde{e}_k,
\end{eqnarray}
where $\mathbf{w}_i$ is the $i$th random weighting vector at the receiver, $\mathbf{f}_o$ is the $o$th random weighting vector at the transmitter, the index $k$ enumerates the $K=k_\text{t}k_\text{r}$ transmit-receive weight pairs $(i,o)$, and $\mathbf{b}_\text{T}$ is a vector of complex coefficients that result from the absorption and scattering caused by the particles blocking the transmit antenna array. After $K$ measurements the receiver obtains the following measurement matrix
\begin{eqnarray}\label{tx2}
\mathbf{Y}_\text{r}= \mathbf{W}^*\mathbf{A}\mathbf{F}+\mathbf{E},
\end{eqnarray}
where $\mathbf{Y}_\text{r}$ is the received measurement matrix, $\mathbf{W} \in \mathbb{C}^{N_\text{R}\times k_\text{r}}$ is a random weighting matrix at the receiver, $\mathbf{A} = (\mathbf{b\circ a}(\theta,\phi)) (\mathbf{b}_\text{T} \circ \mathbf{a}_{\text{T}}(\theta,\phi))^*$ is the $N_\text{R} \times N_\text{T}$ equivalent array response matrix, the matrix $\mathbf{F} \in \mathbb{C}^{N_\text{T}\times k_\text{t}}$ is a random weighting matrix at the transmitter, and $\mathbf{E}$ is the additive noise matrix. Equipped with the weighting matrices $\mathbf{F}$ and $\mathbf{W}$, the receiver generates the ideal matrix $\mathbf{Y}_\text{I}=\mathbf{W}^*\mathbf{A}_\text{I}\mathbf{F}$, where $\mathbf{A}_\text{I}=\mathbf{ a}(\theta,\phi) \mathbf{a}^*_{\text{T}}(\theta,\phi)$, and subtracts it from $\mathbf{Y}_\text{r}$ in (\ref{tx2}) to obtain
\begin{eqnarray}\ \label{tx3a}
\mathbf{Y} = \mathbf{Y}_\text{r} - \mathbf{Y}_\text{I}
= \mathbf{W}^*\mathbf{A}_{\text{s}}\mathbf{F}+\mathbf{E} ,
\end{eqnarray}
where $\mathbf{A}_{\text{s}} \in \mathbb{C}^{N_\text{R}\times N_\text{T}}$ is a sparse matrix. The indices of the non-zero columns of $\mathbf{A}_{\text{s}}$ correspond to faulty antennas at the transmitter and the indices of the non-zero rows of $\mathbf{A}_{\text{s}}$ correspond to faulty antennas at the receiver. To formulate the CS recovery problem, we vectorize the measurement matrix $\mathbf{Y}$ in (\ref{tx3a}) to obtain
\begin{eqnarray}\label{tx5} \label{tx4}
\text{vec}(\mathbf{Y}) \hspace{-2mm}&=&\hspace{-2mm}\text{vec}( \mathbf{W}^*\mathbf{A}_{\text{s}}\mathbf{F})+\text{vec}(\mathbf{E}) \\
\hspace{-2mm}&=&\hspace{-2mm} \underbrace{(\mathbf{F}^{\mathrm{T}} \otimes\mathbf{W}^*)}_{\mathbf{U}} \underbrace{\text{vec}(\mathbf{A}_{\text{s}})}_{\mathbf{g}}+\underbrace{\text{vec}(\mathbf{E})}_{\mathbf{e}},
\end{eqnarray}
where the vector $\mathbf{y}=\text{vec}(\mathbf{Y})$, the matrix $\mathbf{U}\in \mathbb{C}^{K\times N_\text{T}N_\text{R}}$ is the effective CS sensing matrix, and the vector $\mathbf{g}\in \mathbb{C}^{N_\text{T}N_\text{R}\times 1}$ is the effective sparse vector. Note that if the matrices $\mathbf{W}$ and $\mathbf{F}$ in (\ref{tx5}) both satisfy the coherence property (examples include Gaussian, Bernoulli and Fourier matrices \cite{cb1}), then the matrix $\mathbf{U}=\mathbf{F}^{\mathrm{T}} \otimes \mathbf{W}^*$ also satisfies the coherence property and can be used in standard CS techniques. For simplicity, the entries of $\mathbf{W}$ and $\mathbf{F}$ are chosen uniformly and independently at random from the set $\{1+j, 1-j, -1+j, -1-j \}$ in this paper. This corresponds to 2-bit phase shifters at both the transmitter and the receiver.
\subsection{Sparsity Pattern Detection and Least Squares Estimation} \label{csls}
The LASSO estimate of (\ref{tx5}) is given by
\begin{eqnarray}
\label{LASSO2} \arg \min_{\boldsymbol{\nu} \in \mathbb{C}^{N_\text{T}N_\text{R}\times 1}} \frac{1}{2}\| {\mathbf{y}} - {\mathbf{U}}\boldsymbol{{{\nu}}}\|_{2}^2 + \Omega \sigma_\text{e} \|\boldsymbol{{{\nu}}}\|_{1},
\end{eqnarray}
where $\sigma_\text{e}$ is the standard deviation of the noise $e$. Once the support $\mathcal{S}$, where $\mathcal{S}=\{i: {g}_i\neq0\}$, of the vector $ {\mathbf{g}}$ is estimated, the columns of $ {\mathbf{U}}$ which are associated with the zero entries of $ {\mathbf{g}}$ are removed to obtain $ {\mathbf{U}}_\mathcal{S}$. Hence, the vector $ {\mathbf{y}}$ in (\ref{tx5}) becomes
\begin{eqnarray}\label{tx6}
{\mathbf{y}}= {\mathbf{U}}_\mathcal{S} {\mathbf{g}}_\mathcal{S}+ {\mathbf{e}},
\end{eqnarray}
where $ {\mathbf{g}}_\mathcal{S}$ is obtained by pruning the zero entries of $ {\mathbf{g}}$. The LS estimate after successful sparsity pattern recovery becomes
\begin{eqnarray}\label{tx6b}
\hat{\mathbf{g}}_{\mathcal{S}} = ( {\mathbf{U}}_\mathcal{S}^* {\mathbf{U}}_\mathcal{S})^{-1} {\mathbf{U}}_\mathcal{S}^* {\mathbf{y}}
= {\mathbf{g}}_\mathcal{S} + \check{\mathbf{e}},
\end{eqnarray}
where $\hat{\mathbf{g}}_\mathcal{S}$ is a noisy estimate of $ {\mathbf{g}}_\mathcal{S}$.
\subsection{Attenuation and Induced Phase Shift Estimation}
Let $\mathbf{r}$ be a vector of size $N_\text{T}N_\text{R}\times 1$ whose entries on the support $\mathcal{S}$ are given by the corresponding entries of $\hat{\mathbf{g}}_{\mathcal{S}}$ in (\ref{tx6b}) and are zero otherwise. Reshaping $\mathbf{r}$ into $N_\text{R}$ rows and $N_\text{T}$ columns we obtain an estimate $\hat{\mathbf{A}}_s$ of the sparse matrix ${\mathbf{A}}_s$ in (\ref{tx3a}). The non-zero columns of $\hat{\mathbf{A}}_s$ represent the IDs of the transmit array faulty antennas, and the non-zero rows of $\hat{\mathbf{A}}_s$ represent the IDs of the receive array faulty antennas. Let the set $\mathcal{I}_\text{r}$ contain the indices of the zero rows of $\hat{\mathbf{A}}_s$ and the set $\mathcal{I}_\text{t}$ contain the indices of the zero columns of $\hat{\mathbf{A}}_s$. Removing the rows associated with the IDs of the faulty receive antennas from $\hat{\mathbf{A}}_s$ we obtain the matrix $\mathbf{A}_r = [\hat{\mathbf{A}}_s]_{\mathcal{I}_\text{r},:}$, and the sparse vector that represents the faulty transmit antennas becomes
\begin{eqnarray}\label{txtxo}
\hat{\mathbf{q}}_\text{t} =\frac{(\mathbf{a}_r(\theta,\phi)^*{\mathbf{A}}_r)^*}{\|\mathbf{a}_r(\theta,\phi)\|_2^2},
\end{eqnarray}
where the vector $\mathbf{a}_r(\theta,\phi)$ results from selecting the $\mathcal{I}_\text{r}$ entries from the vector $ \mathbf{a}(\theta,\phi)$ in (\ref{tx1}). Based on (\ref{txtxo}), the estimated attenuation coefficient and induced phase of the $i$th antenna element at the transmit array becomes $\hat{\kappa}_{\text{t},i} = |\frac{[\hat{\mathbf{q}}_\text{t}]_i}{[\mathbf{a}_\text{T}(\theta,\phi)]_i}+1|$ and $\hat{\Phi}_{\text{t},i} = \angle{\left(\frac{[\hat{\mathbf{q}}_\text{t}]_{i}}{[\mathbf{a}_\text{T}(\theta,\phi)]_i}+1\right)}$.
Similarly, removing the columns associated with the IDs of the faulty transmit antennas from $\hat{\mathbf{A}}_s$ we obtain the matrix ${\mathbf{A}}_t = [\hat{\mathbf{A}}_s]_{:,\mathcal{I}_\text{t}}$, and the sparse vector that represents the faulty receive antennas becomes
\begin{eqnarray}\label{txrxx}
\hat{\mathbf{q}}_\text{r} =\frac{{\mathbf{A}}_t \mathbf{a}_t(\theta,\phi)}{\|\mathbf{a}_t(\theta,\phi)\|_2^2},
\end{eqnarray}
where the vector $\mathbf{a}_t(\theta,\phi)$ results from selecting the $\mathcal{I}_\text{t}$ entries from the vector $ \mathbf{a}_\text{T}(\theta,\phi)$ in (\ref{tx1}).
From (\ref{txrxx}), the estimated attenuation coefficient and induced phase of the $i$th antenna element at the receive array becomes $\hat{\kappa}_{\text{r},i} = |\frac{[\hat{\mathbf{q}}_\text{r}]_i}{[\mathbf{a}_\text{}(\theta,\phi)]_i}+1|$ and $\hat{\Phi}_{\text{r},i} = \angle{\left(\frac{[\hat{\mathbf{q}}_\text{r}]_{i}}{[\mathbf{a}_\text{}(\theta,\phi)]_i}+1\right)}$.
\section{Numerical Validation } \label{sec:PA}
In this section, we conduct numerical simulations to evaluate the performance of the proposed techniques. We consider a 2D planar array, with $\frac{d_x}{\lambda}=\frac{d_y}{\lambda}=0.5$, that experiences random and independent blockages with probability $P_\text{b}$. To generate the random blockages, the values of $\kappa$ and $\Phi$ in (\ref{efbp1}) are chosen uniformly and independently at random from the intervals $[0,1]$ and $[0,2\pi]$, respectively. We adopt the success probability, i.e. the probability that all faulty antennas are detected, and the normalized mean square error (NMSE) as performance measures to quantify the error in detecting the blocked antenna locations and estimating the corresponding blockage coefficients ($\kappa$ and $\Phi$). The NMSE is defined by
\begin{eqnarray}\label{cc1}
\text{NMSE} = \frac{ \| \mathbf{v}-\hat{\mathbf{v}} \|^2_2 }{ \| \mathbf{v} \|^2_2 }.
\end{eqnarray}
When blockages only exist at the receiver, $\mathbf{v} = \mathbf{c} \circ \mathbf{a}(\theta,\phi)$ (see (\ref{c2})), and the $i$th entry of the estimated vector $\hat{\mathbf{v}}$ is $[\hat{\mathbf{v}}]_{i\in \mathcal{S}} = [\hat{\mathbf{q}}]_{i\in \mathcal{S}}$ in (\ref{24b}) and zero otherwise. When blockages exist at both the receiver and the transmitter, ${\mathbf{v}} = (\mathbf{b}_\text{T}\circ \mathbf{a}_\text{T})-\mathbf{a}_\text{T}$ (see (\ref{tx1}) and (\ref{tx3a})) and $\hat{\mathbf{v}} = \hat{\mathbf{q}}_\text{t}$ in (\ref{txtxo}) when detecting blockages at the transmitter array, and ${\mathbf{v}} = (\mathbf{b}_\text{}\circ \mathbf{a}_\text{})-\mathbf{a}_\text{}$ and $\hat{\mathbf{v}} = \hat{\mathbf{q}}_\text{r}$ in (\ref{txrxx}) when detecting blockages at the receiver array. To implement the LASSO, we use the function {\it{SolveLasso}} included in the {\it{SparseLab}} toolbox \cite{SL}. As a benchmark, we compare the NMSE of the proposed techniques with the NMSE of the Genie-aided LS estimate which indicates the optimal estimation performance when the exact locations of the faulty antennas are known, i.e., the support $\mathcal{S}$ in (\ref{24b}) and (\ref{tx6b}) is assumed to be provided by a Genie.
\begin{figure*}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=3.5in]{nmse_m_10db_LS_2563.eps}
\caption{$N_\text{R}= 16 \times 16 =256$.}
\label{fig:nmse1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\vspace{-1mm}
\includegraphics[width=3.4in]{nmse_m_10db_LS_51224.eps} {\vspace{-2mm}
\caption{$N_\text{R}= 16 \times 32 = 512$.}
\label{fig:nmse2}}
\end{subfigure}
\caption{Detection and estimation of faults in a 2D receive planar array subject to random partial blockages with different blockage probability $P_\text{b}$; $\rho = 10$ dB. Blockages do not occur in groups and the technique proposed in Section \ref{sec:GFD} is used for diagnosis.}
\label{fig:nmseb}
\end{figure*}
In Figs. \ref{fig:nmseb}-\ref{fig:nmseMP} we consider faults at the receiver antenna array and assume that the transmitter antenna array is fault free. To study the effect of the number of measurements (or diagnosis time) on the performance of the proposed fault detection technique in Section \ref{sec:GFD}, we plot the NMSE when the receive array is subject to blockages with different probabilities in Fig. \ref{fig:nmse1}. For all cases, we observe that the NMSE decreases with an increasing number of measurements $K$. The figure also shows that for a sufficient number of measurements (on the order of $K\sim \mathcal{O}(P_\text{b}N_\text{R} \log N_\text{R})$), the NMSE of the proposed technique matches the NMSE obtained by the Genie-aided LS technique. This indicates that, with a sufficient number of measurements, the proposed technique successfully detects the locations of the blocked antennas and the corresponding blockage coefficients with $K \ll N_\text{R}$ measurements. The figure also shows that as the blockage probability increases, more measurements are required to reduce the NMSE. The reason for this is that as the blockage probability increases, the average number of blocked antennas increases as well. Therefore, more measurements are required to estimate the locations of the blocked antennas and the corresponding blockage coefficients. In the event of a large number of blockages or fast-varying blockages, a hybrid antenna architecture with a few RF chains can be adopted to reduce the array diagnosis time.
For comparison, we plot the performance of the CS-based diagnosis technique proposed in \cite{cs1}, which requires measurements to be randomly taken at $N_\text{R}$ locations, in Figs. \ref{fig:nmse1} and {\ref{fig:nmse2}}. This technique is chosen as it is based on analog beamforming, and therefore, it can be applied to mmWave systems. For both $P_\text{b}=0.01$ and $P_\text{b}=0.1$, the proposed technique provides a lower NMSE while requiring fewer measurements. For instance, to obtain a target NMSE of $-30$ dB with $P_\text{b}=0.01$, the proposed technique requires 45 measurements while the algorithm proposed in \cite{cs1} requires 110 measurements, taken at 110 independent locations. This makes the proposed algorithm superior as it is able to obtain a lower NMSE with fewer measurements without the need for taking measurements at multiple locations.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=3.7in]{nmse_m_256_001.eps}
\caption{ Detection and estimation of faults in a 2D receive planar array subjected to random partial blockages with different receiver SNR; $P_\text{b} = 0.1$, and $N_\text{R}= 16 \times 16 = 256$. Blockages do not occur in groups and the technique proposed in Section \ref{sec:GFD} is used for diagnosis.}
\label{fig:nmse3}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=3.7in]{nmse_m_10db_LS_256_AS.eps}
\caption{Detection and estimation of faults in a 2D receive planar array subject to random partial blockages with AoD estimation errors; $P_\text{b}=0.1$; $\rho = 10$ dB, and $N_\text{R}= 16 \times 16 =256$. Blockages do not occur in groups and the technique proposed in Section \ref{sec:GFD} is used for diagnosis. Dashed lines represent Genie-aided LS estimation.}
\label{fig:nmseAS}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=3.7in]{multipath.eps}
\caption{Detection and estimation of faults in a 2D receive planar array subject to random partial blockages in the presence of random indirect (or scattered) paths; $P_\text{b}=0.1$; $\rho = 10$ dB, $N_\text{R}= 16 \times 16 =256$. Total number of paths = 3 (one direct path plus two random paths with random delays uniformly distributed across the diagnosis period), and 90\% of the energy is located in the direct path.}
\label{fig:nmseMP}
\end{center}
\end{figure}
To examine the effect of the array size on the number of measurements, we plot the NMSE of the proposed technique with $N_\text{R}= 512$ in Fig. {\ref{fig:nmse2}}. The figure shows that the NMSE decreases with increasing number of measurements $K$. Nonetheless, this decrease occurs at a lower rate when compared to the case when $N_\text{R}= 256$ in Fig. \ref{fig:nmse1}. This is particularly observed for higher blockage probabilities. The reason for this is that as the array size increases, the average number of blocked antennas increases as well. Therefore, more measurements are required to estimate the locations of the blocked antennas and the corresponding blockage coefficients. Similar to the case when $N_\text{R}= 256$, the proposed technique results in lower NMSE with lower diagnosis time when compared to \cite{cs1}. This is mainly due to the CS sensing matrix which in this paper is optimized to satisfy the coherence property, thereby requiring fewer measurements.
The effect of the SNR on the performance of the proposed technique is shown in Fig. \ref{fig:nmse3}. The figure shows that for a sufficient number of measurements, the NMSE obtained by the proposed technique approaches the NMSE obtained by the Genie-aided technique. The figure also shows that the required number of measurements is a function of the receive SNR. For instance, for a receive SNR of 15 dB and 120 measurements, the NMSE of the proposed technique is similar to the NMSE obtained by the Genie-aided technique. As the SNR decreases to 5 dB, more than 200 measurements are required to match the NMSE of the Genie-aided technique. To reduce the number of measurements, one can increase the receive SNR by either placing more antennas at the transmitter to increase the array gain or reduce the transmitter-receiver distance to minimize the path-loss.
To investigate the effect of imperfect AoD/AoA estimation on the NMSE performance, we plot the NMSE of the proposed technique in Fig. \ref{fig:nmseAS} under the assumption of imperfect azimuth and elevation angles of departure, i.e. $\hat\phi = \phi+\Delta \phi$ and $\hat\theta = \theta+\Delta \theta$, where $\Delta\phi$ and $\Delta \theta$ are uniformly distributed random variables. Fig. \ref{fig:nmseAS} shows that angular deviations (as small as $\pm 0.25^{\circ}$) result in a 10 dB loss in NMSE performance and this loss increases with larger angular deviations. The reason for this NMSE increase is the AoD/AoA mismatch, which increases the system noise. Fig. \ref{fig:nmseAS} also shows that for a sufficient number of measurements, the NMSE of the proposed technique matches the NMSE of the Genie-aided LS estimation technique (which assumes perfect knowledge of the location of blockages). This suggests that the majority of the NMSE results from blockage coefficient estimation errors rather than blockage location detection errors.
In Fig. \ref{fig:nmseMP} we study the impact of random multipath on the NMSE performance. Similar to the imperfect AoD/AoA case, multipath introduces interference which increases the noise floor of the system and, as a result, deteriorates the NMSE performance. We consider a direct path and two scattered paths with random delays and gains. For this setup, 90\% of the received signal energy is contained in the direct path and the scattered path delays are uniformly distributed across the diagnosis time. As expected, Fig. \ref{fig:nmseMP} shows that the NMSE is higher in the presence of multipath and that the NMSE does not decrease with increasing number of measurements. This is mainly attributed to multipath interference which is treated as noise in this paper.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=3.8in]{block_compare.eps}
\caption{ Comparison of the proposed array diagnosis techniques in detecting antenna faults in an $N_\text{R}= 16 \times 16 = 256$ element antenna; (a) Location and intensity of blockages incident on the array ($|\mathbf{1}-\mathbf{B}|$, where $\mathbf{1}$ is an all-ones matrix). (b) Reconstruction error ($|\mathbf{B}-\hat{\mathbf{B}}|$) in dB when ignoring the block structure. The reconstructed matrix is denoted by $\hat{\mathbf{B}}$. (c) Reconstruction error when exploiting the block structure of blockages using EBSBL-BO+LS. For (b) and (c), $\rho$ = 10 dB, number of blocks is 2, block size $J$ = 16, and the number of measurements are fixed to 130.}
\label{fig:B2}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=3.7in]{EBSBL_lasso11.eps}
\caption{ Comparison of the proposed techniques in detecting faults in a 2D receive planar array subjected to random group blockages. The proposed technique that ignores the block structure uses conventional LASSO as a recovery method, while the technique that exploits the block structure uses the EBSBL-BO \cite{B0} algorithm for recovery. Blockages affect a group of $S=\Gamma \times J$ antenna elements, where $\Gamma$ is the group size and $J$ is the number of groups. Each antenna element within a group experiences a random blockage intensity; $\rho = 10$ dB, $N_\text{R}= 16 \times 16 = 256$.}
\label{fig:B12}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=3.7in]{EBSBL_lasso2.eps}
\caption{ Comparison of the proposed techniques in detecting faults in a 2D receive planar array subjected to random group blockages and different noise levels. Each antenna element within a group experiences a random blockage intensity; $N_\text{R}= 16 \times 16 = 256$.}
\label{fig:B13}
\end{center}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=3.5in]{2D.eps}
\caption{Fixed block-size.}
\label{fig:B14}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=3.5in]{2D2_nt64.eps}
\caption{Variable block size.}
\label{fig:B15}
\end{subfigure}
\caption{Detection of faults in a 2D receive planar array subjected to complete (non-partial) blockages that occur in groups as a function of the receive SNR; $K = 16$ measurements, and $N_\text{R}= 8 \times 8 = 64$.}
\label{fig:B152}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=3.7in]{nmsetrx256_m2.eps}
\caption{Joint detection of faults in 2D transmit and receive planar arrays both subject to random partial blockages; $P_\text{b} = 0.1$ for both arrays, $\rho = 0$ dB, $N_\text{R} = 16$, $N_\text{T} = 32,$ and $N_\text{T}N_\text{R} = 512$.}
\label{fig:nmse6}
\end{center}
\end{figure}
In Figs. \ref{fig:B2}-\ref{fig:B152} we consider blockages that span a group of neighboring antennas. More specifically, in Fig. \ref{fig:B2} we consider blockages that affect 16 neighboring antennas. To model the random shapes of the blocking particles, we assume that each antenna element within a block experiences a random blockage intensity, i.e. the values of $\kappa$ and $\Phi$ in (\ref{efbp1}) are chosen uniformly and independently at random from the intervals $[0,1]$ and $[0,2\pi]$, respectively. In Fig. \ref{fig:B2}(a), we plot an example of a 256 element array subject to 2 blockages and each blockage affects a group of 16 antennas with random intensities. In Fig. \ref{fig:B2}(b), we plot the reconstruction error when the proposed technique is used to detect and estimate the blockage coefficients without exploiting the block structure of blockages. As shown, the proposed technique, using conventional CS, identifies locations of the blocked antennas, however, it also results in some false positives which can be reduced by increasing the number of measurements. In Fig. \ref{fig:B2}(c), we plot the reconstruction error when we exploit the block structure of blockages to detect and estimate the blockage coefficients. As shown, this technique leverages the dependencies between the values and locations of the blockages in its recovery process and as a result it minimizes false positives and reduces the number of required measurements.
To highlight the benefit of exploiting correlation between the blocked antenna elements, we compare the NMSE achieved when (i) ignoring the block structure of the blockages and using conventional CS recovery, and (ii) exploiting the block structure of blockages by implementing the technique proposed in Section \ref{sec:block1} in Fig. \ref{fig:B12} and Fig. \ref{fig:B13}. Specifically, we consider random blockages and each blockage spans $\Gamma$ antennas with random blockage intensity. For array diagnosis, we plot the performance of the proposed techniques with and without exploiting the block structure. For $\Gamma=32$ and $J=1$, Fig. \ref{fig:B12} shows that lower NMSE can be achieved when exploiting the block structure. For fixed number of blockages, Fig. \ref{fig:B12} shows that the performance gap decreases with decreasing block size $\Gamma$. As the block size decreases, the correlation between the blocked antenna elements diminishes and hence the performance of this technique comes closer to that achieved by techniques that ignore the block structure of blockages. In Fig. \ref{fig:B13} we compare the performance of the proposed techniques when exploiting and ignoring the block structure of blockages for different receive SNRs. For all cases, the plots show a clear performance benefit when exploiting the block structure in the array diagnosis process.
In Figs. \ref{fig:B14} and \ref{fig:B15}, we consider complete blockages and compare the performance of the technique proposed in Section \ref{sec:block2} with conventional CS recovery techniques (LASSO) for fixed number of measurements $K=16$. Fig. \ref{fig:B14} shows that the proposed technique achieves higher success probability when compared to conventional CS recovery techniques. The success probability of the proposed technique is highest when the number of blocks is $J=1$. When the number of blocks increases, however, the search space increases (see (\ref{es})), and as a result, we observe a performance hit especially at low SNR. In Fig. \ref{fig:B15} we fix the total number of blockages and study the impact of the block size on the success probability. Fig. \ref{fig:B15} shows that for fixed number of blockages, higher success probability is achieved for larger block sizes and the success probability decreases with increasing number of groups. As the number of groups increases, the search space in (\ref{es}) increases, and in the presence of noise, false alarms could occur. This reduces the success probability. Note that in Figs. \ref{fig:B14} and \ref{fig:B15}, conventional CS techniques fail since the required number of measurements is $K>S \log N_\text{y}N_\text{x}$, which is much larger than $N_\text{y}+N_\text{x}$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.7in]{trpo2.eps}
\caption{Joint detection of faults in 2D transmit and receive planar arrays all subject to constant blockages; $P_\text{b} = 0.1$ for both arrays, $[\mathbf{b}_\text{T}]_i \in \{0,1\}$, $[\mathbf{b}]_i \in \{0,1\}$, and $N_\text{T}N_\text{R} = 1024$. In (a) $N_\text{R} = 32$, and $N_\text{T} = 32,$ and in (b) $N_\text{R} = 16$, and $N_\text{T} = 64$.}
\label{fig:pontnr}
\end{center}
\end{figure}
In Figs. \ref{fig:nmse6}-\ref{fig:pontnr}, we assume that both the transmit and receive arrays are subject to random blockages. To analyze the effect of the number of measurements (or diagnosis time) on the NMSE performance when jointly detecting faults on both the transmit and receive arrays, we plot the NMSE of the technique proposed in Section \ref{sec:propj} and the Genie-aided technique in Fig. \ref{fig:nmse6}. For both cases, we observe that the NMSE decreases with increasing number of measurements $K$, and for sufficient $K$, the NMSE of the proposed technique matches the NMSE obtained by the Genie-aided technique. The figure also shows that the NMSE of the receive array is lower than the NMSE of the transmit array. This is due to the transmit-receive array size difference. For a fixed blockage probability $P_\text{b}$, larger array sizes encounter more blockages on average, and therefore, require more measurements to detect faults.
The success probability when detecting faulty antennas in both the transmit and the receive arrays is plotted in Fig. \ref{fig:pontnr}. Fig. \ref{fig:pontnr} (a) shows that for both the transmit and receive array, the success probability increases with increasing number of measurements. Fig. \ref{fig:pontnr} (b) shows that fewer measurements are required to detect faulty antennas at the receiver when the transmit array size is larger than the receive array size. As the transmit array size increases, the average number of blocked antennas increases as well, thereby requiring more measurements to successfully detect the fault locations.
\section{Conclusions} \label{sec:con}
In this paper, we investigated the effects of blockages on mmWave linear antenna arrays. We showed that both complete and partial blockages distort the far-field beam pattern of a linear array and partial blockages result in a higher beam pattern variance when compared to complete blockages. To detect blockages, we proposed several compressed sensing based array diagnosis techniques. The proposed techniques do not require the AUT to be physically removed and do not require any hardware modification. When faults exist at the receiver only, we showed that the proposed techniques reliably detect the locations of the blocked antennas, if any, and estimate the corresponding attenuation and phase-shift coefficients caused by the blocking particles. Moreover, we showed that the dependencies between the blocked antennas can be exploited to further reduce estimation errors and lower the diagnosis time. When faults exist at both the receiver and the transmitter, we showed that reliable detection and estimation of blockages can be achieved. Nonetheless, a high number of measurements is required in this case, even if compressed sensing is used. For all cases, the estimated coefficients can be used to calculate new antenna excitation weights to recalibrate the transmit/receive antennas. Due to their reliability and low diagnosis time, the proposed techniques can be used to perform real-time mmWave antenna array diagnosis at the receiver and enhance the mmWave communication link.
\section{Introduction}
\label{sec1}
\def\theequation{2.\arabic{equation}}
\setcounter{equation}{0}
This paper develops testing tools for two independent sets of functional observations, explicitly allowing for temporal dependence within each set. Functional data analysis has become a mainstay for dealing with those complex data sets that may conceptually be viewed as being comprised of curves. Monographs detailing many of the available statistical procedures for functional data are \citet{ramsay:silverman:2005} and \citet{horvkoko2012}. This type of data naturally arises in various contexts such as environmental data \citep{aue:dubartnorinho:hormann:2015}, molecular biophysics \citep{tavakoli:panaretos:2016}, climate science \citep{zhang2011,aue:rice:sonmez:2018}, and economics \citep{kowal:matteson:ruppert:2019}. Most of these examples intrinsically contain a time series component as successive curves are expected to depend on each other. Because of this, the literature on functional time series has grown steadily; see, for example, \citet{hoermann2010}, \citet{panaretos:tavakoli:2013} and the references therein.
The main goal here is towards developing two-sample tests for comparing the second order properties of functional time series data. Two-sample inference and testing methods for curves have been developed extensively by several authors. \citet{hall:vankeilegom:2007} were concerned with the effect of pre-processing discrete data into functions on two-sample testing procedures. \citet{horvath:kokoszka:reeder:2013} investigated two-sample tests for the equality of means of two functional time series taking values in the Hilbert space of square integrable functions, and \citet{dette:kokot:aue:2019} introduced multiplier bootstrap-assisted two-sample tests for functional time series taking values in the Banach space of continuous functions. \citet{panaretos:2010}, \citet{fremdt:horvath:kokoszka:steinebach:2013}, \citet{pigoli:2014},
\citet{Paparoditis2016} and \citet{Guo2016} provided procedures for testing the equality of covariance operators in functional samples.
While general differences between covariance operators can be attributed to differences in the eigenfunctions of the operators, eigenvalues of the operators, or perhaps both, we focus here on constructing two sample tests that take aim only at differences in the eigenfunctions. The eigenfunctions of covariance operators hold a special place in functional data analysis due to their near ubiquitous use in dimension reduction via functional principal component analysis (FPCA). FPCA is the basis of the majority of inferential procedures for functional data. In fact, an assumption common to a number of such procedures is that observations from different samples/populations share a common eigenbasis generated by their covariance operators; see \cite{benko:hardle:kneip:2009} and \cite{pomann:2016}. FPCA is arguably even more crucial to the analysis of functional time series, since it underlies most forecasting and change-point methods, see e.g. \cite{aue:dubartnorinho:hormann:2015}, \cite{hyndman:shang:2009}, and \cite{aston:kirch:2012AAS}. The tests proposed here are useful both for determining the plausibility that two samples share similar eigenfunctions, or whether or not one should pool together data observed in different samples for a joint analysis of their principal components.
We illustrate these applications in Section \ref{sec4} below in an analysis of annual temperature profiles recorded at several locations, for which the shape of the eigenfunctions can help in the interpretation of geographical differences in the primary modes of temperature variation over time. A more detailed argument for the usefulness and impact of such tests on validating climate models is given in the introduction of \citet{zhangshao2015}, to which the interested reader is referred to for details.
The procedures introduced in this paper are noteworthy in at least two respects. First, unlike existing literature, they are phrased in the relevant testing framework. In this paradigm, deviations from the null are deemed of interest only if they surpass a minimum threshold set by the practitioner. Classical hypothesis tests are included in this approach if the threshold is chosen to be equal to zero. There are several advantages coming with the relevant framework. In general, it avoids Berkson's consistency problem \citep{berkson} that any consistent test will reject for arbitrarily small differences if the sample size is large enough. More specific to functional data, the $L^2$-norm sample mean curve differences might not be close to zero even if the underlying population mean curves coincide. The adoption of the relevant framework typically comes at the cost of having to invoke involved theoretical arguments. A recent review of methods for testing relevant hypotheses in two sample problems with one-dimensional data from a biostatistics perspective can be found in \citet{wellek}, while Section \ref{sec2} specifies the details important here.
Second, the proposed two-sample tests are built using self-normalization, a recent concept for studentizing test statistics introduced originally for univariate time series in \citet{shao2010} and \citet{shazha2010}. When conducting inference with time series data, one frequently encounters the problem of having to estimate the long-run variance in order to scale the fluctuations of test statistics. This is typically done through estimators relying on tuning parameters that ideally should adjust to the strength of the autocorrelation present in the data. In practice, the success of such methods can vary widely.
As a remedy, self-normalization is a tuning parameter-free method that achieves standardization, typically through recursive estimates. The advantages of such an approach for testing relevant hypotheses of parameters of functional time series were recently recognized in
\citet{detkokvol2018}. In this paper, we develop a concept of self-normalization for the problem of testing for relevant differences between the eigenfunctions of two covariance operators
in functional data. \citet{zhangshao2015} is the work most closely related to the results presented below, as it pertains to self-normalized two-sample tests for eigenfunctions and eigenvalues in functional time series.
An important difference to this work is that the methods proposed here do not require a dimension reduction of the eigenfunctions but compare the functions directly with respect to a norm
in the $L^{2}$-space. A further crucial difference
is that their paper is in the classical testing setup, while ours is in the strictly relevant setting, so that the contributions are not directly comparable on the same footing---even though we report the outcomes from both tests on the same simulated curves in Section \ref{sec-simul}. There, it is found that, despite the fact that the proposed test is constructed to detect relevant differences, it appears to compare favorably against the test of \citet{zhangshao2015} when the difference in eigenfunctions is large. In this sense, both tests can be seen as complementing each other.
The rest of the paper is organized as follows. Section \ref{sec2} introduces the framework, details model assumptions and gives the two-sample test procedures as well as their theoretical properties. Section \ref{sec-simul} reports the results of a comparative simulation study. Section \ref{sec4} showcases an application of the proposed tests to Australian temperature curves obtained at different locations during the past century or so. Section \ref{sec:conclusions} concludes.
Finally some technical details used in the arguments of Section \ref{sec22} are given in Section \ref{sec:proofs}.
\section{Testing the similarity of two eigenfunctions }
\label{sec2}
\def\theequation{2.\arabic{equation}}
\setcounter{equation}{0}
Let $L^{2}([0,1])$ denote the common space of square integrable functions $f\colon[0,1] \to \mathbb{R} $ with inner product $\langle f_{1}, f_{2}\rangle = \int_{0}^1 f_{1}(t)f_{2}(t) dt$ and norm $\| f\| = \big ( \int_{0}^1 f^{2}(t)dt \big)^{1/2}$.
Consider two independent stationary functional time series $(X_t)_{t \in \mathbb{Z}}$ and $(Y_t)_{t \in \mathbb{Z}}$ in $L^2([0,1])$ and assume that each $X_t$ and $Y_{t}$ is centered and square integrable, that is $\mathbb{E}[X_t]=0$, $\mathbb{E}[Y_t]=0$ and $\mathbb{E}[ \|X_t\|^{2}] < \infty $, $\mathbb{E}[ \|Y_t\|^{2}] < \infty $, respectively. In practice centering can be achieved by subtracting the sample mean function estimate
and this will not change our results. Denote by
\begin{eqnarray}\label{1.1}
C^X (s,t) &=& \sum^\infty_{j=1} \tau^X_j v^X_j(s) v^X_j(t), \\ \label{1.2}
C^Y (s,t) &=& \sum^\infty_{j=1} \tau^Y_j v^Y_j(s) v^Y_j(t)
\end{eqnarray}
the corresponding covariance operators; see Section 2.1 of \cite{buecher2018} for a detailed discussion of expected values in Hilbert spaces.
The eigenfunctions of the kernel integral operators with kernels $C^X$ and $C^Y$, corresponding to the ordered eigenvalues $\tau^X_1 \geq \tau^X_2 \geq \cdots $ and $\tau^Y_1 \geq \tau^Y_2 \geq \cdots $, are denoted by $ v^X_1, v^X_2, \ldots$ and $ v^Y_1, v^Y_2, \ldots$, respectively.
We are interested in testing the similarity of the covariance operators $C^X$ and $C^Y$ by comparing their eigenfunctions $ v^X_j$ and $ v^Y_j$ of order $j$ for some $j \in \mathbb{N}$. This is framed as the relevant hypothesis testing problem
\begin{equation}\label{2.21:func}
H^{(j)}_0 \colon \| v^X_j - v^Y_j \|^2 \leq \Delta_j
~~~~\mbox{ versus} ~~~~
H^{(j)}_1 \colon\| v^X_j - v^Y_j \|^2 > \Delta_j ,
\end{equation}
where $\Delta_j >0 $ is a pre-specified constant representing the maximal value for the squared distances $\| v^X_j - v^Y_j \|^2$ between the eigenfunctions which can be accepted as scientifically insignificant.
In order to make the comparison between the eigenfunctions meaningful, we assume throughout this paper that $\langle v^X_{j}, v^Y_{j} \rangle \geq 0$ for all $j \in \mathbb{N}$.
The choice of the threshold $\Delta_j >0 $ depends on the specific application and is essentially determined by the smallest difference that is deemed scientifically relevant.
In particular, the choice $\Delta_j =0$ gives the classical hypotheses $H^{c}_{0}\colon v^X_j =v^Y_j$ versus $H^{c}_{1}\colon v^X_j \not =v^Y_j$.
We argue, however, that it is often well known that the eigenfunctions, or other parameters for that matter, from different samples will not coincide precisely. Further, there is frequently no actual interest in arbitrarily small differences between the eigenfunctions. For this reason, $\Delta_{j}>0$ is assumed throughout.
Observe also that a similar hypothesis testing problem could be formulated for relevant differences of the eigenvalues $\tau^X_j- \tau^Y_j$ of the covariance operators. We studied the development of such tests alongside those presented below for the eigenfunctions, and found, interestingly, that they generally are less powerful empirically. An elaboration and explanation of this is detailed in Remark \ref{eig-rem} below. The arguments presented there are also applicable to tests based on direct long-run variance estimation.
The proposed approach is based on an appropriate estimate, say $ \hat D^{(j)}_{m,n}$, of the squared $L^{2}$-distance $\| v^X_j - v^Y_j \|^2$ between the eigenfunctions, and
the null hypothesis in \eqref{2.21:func} is rejected for large values of this estimate. It turns out that the
(asymptotic) distribution of this distance depends sensitively on all eigenvalues and eigenfunctions of the covariance operators $C^{X}$ and $C^{Y}$ and on
the dependence structure of the
underlying processes. To address this problem we propose a self-normalization of the statistic $ \hat D^{(j)}_{m,n}$.
Self-normalization is a well-established concept in the time series literature and was introduced in two seminal
papers by \cite{shao2010} and \cite{shazha2010} for the construction of confidence intervals and change point analysis,
respectively. More recently, it has been developed further for the specific needs of functional data by \cite{zhang2011} and \cite{zhangshao2015}; see also \cite{shao2015} for a recent review on self-normalization. In the present context,
where one is interested in hypotheses of the form \eqref{2.21:func}, a non-standard approach of self-normalization is necessary to obtain a distribution-free test, which is technically demanding due to the implicit definition of the eigenvalues and eigenfunctions
of the covariance operators.
For this reason, we first present the main idea of our approach in Section \ref{sec21} and defer a detailed discussion to the subsequent Section \ref{sec22}.
\subsection{Testing for relevant differences between eigenfunctions}
\label{sec21}
If $X_1,\ldots, X_m$ and $Y_1,\ldots, Y_n$ are the two samples, then
\begin{equation}\label{1.5}
\hat C_m^X (s,t) = \frac {1}{m} \sum^m_{i=1} X_i(s) X_i(t),
\qquad
\hat C_n^Y (s,t) = \frac {1}{n} \sum^n_{i=1} Y_i(s) Y_i(t)
\end{equation}
are the common estimates of the covariance operators \citep{ramsay:silverman:2005,horvkoko2012}. Denote by $\hat \tau^X_j, \hat \tau^Y_j$ and $\hat v^X_j, \hat v^Y_j$ the corresponding eigenvalues and eigenfunctions.
Together, these define the canonical estimates of the respective population quantities in \eqref{1.1} and \eqref{1.2}. Again, to make the comparison between the eigenfunctions meaningful, it is assumed throughout this paper that $\langle \hat v^X_j , \hat v^Y_j\rangle \geq 0$ for all $j$, which can be achieved in practice by changing the sign of one of the eigenfunction estimates if needed. We use the statistic
\begin{equation}\label{1.4}
\hat D^{(j)}_{m,n} = \| \hat v^X_j - \hat v^Y_j \|^2 = \int^1_0 (\hat v^X_j(t) - \hat v^Y_j(t))^2 dt
\end{equation}
to estimate the squared distance
\begin{equation}
\label{dj}
D^{(j)} = \| v^X_j - v^Y_j \|^{2} = \int^1_0 ( v^X_j(t) - v^Y_j(t))^2 dt
\end{equation}
between the $j$th population eigenfunctions. The null hypothesis will be rejected for large values of $\hat D^{(j)}_{m,n}$ compared to $\Delta_j$.
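For later reference, note that \eqref{1.4} is straightforward to compute once the curves have been evaluated on a common grid. The following minimal {\tt R} sketch (the function name {\tt Dhat} is ours and not part of any package) assumes that the centered curves are stored row-wise in matrices {\tt X} and {\tt Y} over a grid of {\tt G} equally spaced points; the eigenvectors of the gridded covariance kernel are rescaled by $\sqrt{G}$ so that they approximate $L^{2}$-normalized eigenfunctions.
\begin{verbatim}
# Sketch: estimate the squared L2 distance between the j-th eigenfunctions.
Dhat <- function(X, Y, j) {
  G  <- ncol(X)
  CX <- crossprod(X) / nrow(X)      # values of C-hat_m^X on the grid
  CY <- crossprod(Y) / nrow(Y)      # values of C-hat_n^Y on the grid
  vX <- eigen(CX, symmetric = TRUE)$vectors[, j] * sqrt(G)
  vY <- eigen(CY, symmetric = TRUE)$vectors[, j] * sqrt(G)
  if (sum(vX * vY) < 0) vY <- -vY   # sign alignment: <vX, vY> >= 0
  sum((vX - vY)^2) / G              # Riemann sum approximating the integral
}
\end{verbatim}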
In the following, a self-normalized test statistic based on $\hat D^{(j)}_{m,n}$ will be constructed; see \cite{detkokvol2018}.
To be precise, let $\lambda \in [0,1]$ and define
\begin{equation}\label{1.5seq}
\hat C_m^X (s,t,\lambda ) = \frac {1}{\lfloor m \lambda \rfloor} \sum^{\lfloor m \lambda \rfloor}_{i=1} X_i(s) X_i(t), \qquad
\hat C_n^Y (s,t,\lambda ) = \frac {1}{\lfloor n \lambda \rfloor} \sum^{\lfloor n \lambda \rfloor}_{i=1} Y_i(s) Y_i(t)
\end{equation}
as the sequential version of the estimators in \eqref{1.5}, noting that the sums are defined as $0$ if ${\lfloor m \lambda \rfloor} < 1$.
Observe that, under suitable assumptions detailed in Section \ref{sec22}, the statistics
$\hat C_m^X (\cdot ,\cdot ,\lambda ) $ and $ \hat C_n^Y (\cdot ,\cdot ,\lambda)$
are consistent estimates of the covariance operators $C^{X}$ and $C^Y$, respectively,
whenever $ 0 < \lambda \leq 1$.
The corresponding sample eigenfunctions of $\hat C^X_m (\cdot, \cdot, \lambda)$ and $\hat C^Y_n (\cdot, \cdot, \lambda)$ are denoted by $\hat v^X_j(t, \lambda)$ and $\hat v^Y_j(t,\lambda)$, respectively,
assuming throughout that $\langle \hat v^X_{j}(\cdot,\lambda), \hat v^Y_{j}(\cdot,\lambda) \rangle \geq 0$. Define the stochastic process
\begin{equation}\label{2.5}
\hat D^{(j)}_{m,n} (t, \lambda) = \lambda (\hat v^X_j (t,\lambda) - \hat v^Y_j (t,\lambda)),
\qquad t \in [0,1]~,~\lambda \in [0,1],
\end{equation}
and note that the statistic $\hat D^{(j)}_{m,n}$ in \eqref{1.4} can be represented as
\begin{equation}\label{2.6}
\hat D^{(j)}_{m,n} = \int^1_0 (\hat D^{(j)}_{m,n} (t,1))^2 dt.
\end{equation}
Self-normalization is enabled through the statistic
\begin{equation}\label{2.7}
\hat V^{(j)}_{m,n} = \Big( \int^1_0 \Big( \int^1_0 (\hat D^{(j)}_{m,n} (t, \lambda))^2 dt - \lambda^2 \int^1_0 (\hat D^{(j)}_{m,n} (t,1))^2dt \Big)^2
\nu ( d \lambda) \Big)^{1/2} ,
\end{equation}
where $\nu $ is a probability measure on the interval $(0,1]$. Note that, under appropriate assumptions, the statistic
$ \hat V^{(j)}_{m,n} $ converges to $0$ in probability. However, it can be proved that its scaled version $\sqrt{m+n} \hat V^{(j)}_{m,n} $
converges in distribution to a random variable, which is positive with probability $1$. More precisely,
it is shown in Theorem \ref{thm2.1} below that, under an appropriate set of assumptions,
\begin{equation}\label{weak}
\sqrt{m+n}
\big ( {\hat D^{(j)}_{m,n}- D^{(j)}} , {\hat V^{(j)}_{m,n}} \big ) \stackrel{\mathcal{D}}{\longrightarrow} \Big ( \zeta_j \mathbb{B} (1),
\Big \{ \zeta_j^{2} \int^1_0 \lambda^2 (\mathbb{B}(\lambda) - \lambda \mathbb{B}(1))^{2} \nu ( d \lambda ) \Big \}^{1/2} \Big)
\end{equation}
as $m,n \to \infty$, where $D^{(j)} $ is defined in \eqref{dj}.
Here $\{ \mathbb{B} (\lambda) \}_{\lambda \in [0,1]}$ is a Brownian motion on the interval $[0,1]$ and
$\zeta_j \geq 0$ is a constant, which is assumed to be strictly positive if $D^{(j)} >0$ (the square $\zeta_j^{2}$ is akin to a long-run variance parameter).
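To illustrate the computation of \eqref{2.7}, the following {\tt R} sketch (again with hypothetical names of our own) discretizes $\nu$ as the uniform measure on a grid of $\lambda$ values bounded away from $0$, the choice also used in Section \ref{sec-simul}, and recomputes the eigenfunction estimates from the partial samples $X_1,\ldots,X_{\lfloor m\lambda\rfloor}$ and $Y_1,\ldots,Y_{\lfloor n\lambda\rfloor}$ for each $\lambda$ on the grid; the curves are assumed to be stored row-wise in matrices over {\tt G} grid points, as in the sketch above.
\begin{verbatim}
# Sketch: self-normalizer V-hat_{m,n}^{(j)} with nu = uniform on lams.
Vhat <- function(X, Y, j, lams = seq(0.1, 1, length.out = 50)) {
  G <- ncol(X)
  efun <- function(Z, k)            # j-th eigenfunction from first k curves
    eigen(crossprod(Z[1:k, , drop = FALSE]) / k,
          symmetric = TRUE)$vectors[, j] * sqrt(G)
  D2 <- function(lam) {             # int_0^1 (D-hat^(j)(t, lam))^2 dt
    vX <- efun(X, floor(nrow(X) * lam))
    vY <- efun(Y, floor(nrow(Y) * lam))
    if (sum(vX * vY) < 0) vY <- -vY
    lam^2 * sum((vX - vY)^2) / G
  }
  d <- vapply(lams, D2, numeric(1)) # last entry is the statistic at lam = 1
  sqrt(mean((d - lams^2 * d[length(lams)])^2))
}
\end{verbatim}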
Consider then the test statistic
\begin{equation}\label{2.6a}
\hat{\mathbb{W}}^{(j)}_{m,n} := \frac {\hat D^{(j)}_{m,n}- \Delta_j}{\hat V^{(j)}_{m,n}}.
\end{equation}
Based on this, the null hypothesis in \eqref{2.21:func} is rejected whenever
\begin{equation} \label{testone}
\hat{\mathbb{W}}^{(j)}_{m,n} > q_{1 - \alpha},
\end{equation}
where $q_{1 - \alpha}$ is the $(1- \alpha)$-quantile of the distribution of the random
variable
\begin{equation}\label{wvar}
\mathbb{W}:= \frac {\mathbb{B}(1)}{ \{ \int^1_0 \lambda^2 (\mathbb{B}(\lambda) - \lambda \mathbb{B}(1))^{2} \nu ( d \lambda ) \}^{1/2}} .
\end{equation}
The quantiles of this distribution do not depend on the long-run variance, but on the measure $\nu$ in the statistic $ \hat V^{(j)}_{m,n} $
used for self-normalization. An approximate $P$-value of the test can be calculated as
\begin{align}\label{p-val-calc}
p= \mathbb{P}(\mathbb{W} > \hat{\mathbb{W}}^{(j)}_{m,n}).
\end{align}
The following theorem shows that the test just constructed keeps a desired level in large samples and has power increasing to one with the sample sizes.
\begin{theorem} \label{thm1}
If the weak convergence in \eqref{weak} holds, then the test \eqref{testone} has asymptotic level $\alpha$ and is consistent
for the relevant hypotheses in \eqref{2.21:func}. In particular,
\begin{eqnarray}\label{test-bev}
\lim_{m,n \to \infty} \mathbb{P} ( \hat{\mathbb{W}}^{(j)}_{m,n} > q_{1 - \alpha} ) &=& \left \{ \begin{array}{c@{\quad}cc}
0 & \mbox{if} & D^{(j)} < \Delta_j, \\
\alpha & \mbox{if} & D^{(j)} = \Delta_j, \\
1 & \mbox{if} & D^{(j)} > \Delta_j.
\end{array} \right.
\end{eqnarray} \hfill $\Box$
\end{theorem}
\begin{proof}
If $D^{(j)} >0$, the continuous mapping theorem and \eqref{weak} imply
\begin{eqnarray}
\label{thm2.1a}
\frac{\hat D^{(j)}_{m,n} - D^{(j)}}{\hat V^{(j)}_{m,n}}
\stackrel{\mathcal{D}}{\longrightarrow} \mathbb{W}~,
\end{eqnarray}
where the random variable $ \mathbb{W}$ is defined in \eqref{wvar}. Consequently,
the probability of rejecting the null hypothesis is given by
\begin{eqnarray}
\label{power}
\mathbb{P} (\hat{\mathbb{W}}^{(j)}_{m,n} > q_{1 - \alpha} )
= \mathbb{P} \bigg ( \frac{\hat D^{(j)}_{m,n} - D^{(j)}}{\hat V^{(j)}_{m,n}}
> \frac {\Delta_j - D^{(j)}}{\hat V^{(j)} _{m,n}}+ q_{1 - \alpha} \bigg).
\end{eqnarray}
It follows moreover from \eqref{weak} that
$\hat V^{(j)}_{m,n} \stackrel {\mathbb{P}}{\rightarrow} 0$ as $m,n \to \infty$ and therefore \eqref{thm2.1a} implies \eqref{test-bev},
thus completing the proof in the case $D^{(j)} >0$. If $D^{(j)} = 0$, it follows from the proof of \eqref{weak}
(see Proposition \ref{d-approx-1} below) that $\sqrt{m+n} \hat D_{m,n}^{(j)} = o_{ \mathbb{P} }(1)$ and
$\sqrt{m+n} \hat V_{m,n}^{(j)} = o_{ \mathbb{P} }(1)$. Consequently,
$$
\mathbb{P} (\hat{\mathbb{W}}^{(j)}_{m,n} > q_{1 - \alpha} ) = \mathbb{P} \big ( \sqrt{m+n} \hat D_{m,n}^{(j)} > \sqrt{m+n} \Delta_{j} + \sqrt{m+n} \hat V_{m,n}^{(j)} q_{1-\alpha} \big ) = o(1),
$$
which completes the proof.
\end{proof}
The main difficulty in the proof of Theorem \ref{thm1}
is hidden by postulating the weak convergence in \eqref{weak}. A proof of this statement is technically demanding. The precise formulation is given in the following section.
\begin{rem}[Estimation of the long-run variance, power, and relevant differences in the eigenvalues]\label{eig-rem}
{\rm ~\\
(1) The parameter $\zeta_j^{2}$ is essentially a long-run variance parameter. It is therefore worth mentioning that, at first glance, the weak convergence in \eqref{weak} would provide a very simple test for the hypotheses \eqref{2.21:func} if a consistent estimator, say $\hat \zeta_{n,j}^{2}$, of
the long-run variance were available. To this end, note that in this case it follows from \eqref{weak} that
$ \sqrt{m+n} ( {\hat D^{(j)}_{m,n}- D^{(j)}} ) / \hat \zeta_{n,j }$ converges weakly to a standard normal distribution. Consequently, using the same arguments as in the proof
of Theorem \ref{thm1}, we obtain that rejecting the null hypothesis in \eqref{2.21:func}, whenever
\begin{equation}\label{testlrv}
\sqrt{m+n} ( \hat D^{(j)}_{m,n}- \Delta_{j} ) / \hat \zeta_{n,j } > u_{1 - \alpha}~,
\end{equation}
yields a consistent and asymptotic level $\alpha$ test. However, a careful inspection of the representation of the long-run variance
in equations \eqref{2.15a}--\eqref{tau-def-app} in Section \ref{sec:proofs} suggests that it would be extremely difficult, if not impossible, to construct a reliable estimate of the parameter $\zeta_j$ in this context, due to its complicated dependence on the covariance operators $C^X$, $C^Y$, and their full complement of eigenvalues and eigenfunctions.
\\
(2)
Defining $\mathbb{K}=\big (\int_{0}^{1} \lambda^2(\mathbb{B}(\lambda)-\lambda \mathbb{B}(1))^2 \nu(d\lambda)\big)^{1/2}$, it follows from
\eqref{power} that
\begin{equation}\label{power-w}
\mathbb{P}\big ( \hat{\mathbb{W}}^{(j)}_{m,n} > q_{1 - \alpha} \big ) \approx \mathbb{P}\Big ({\mathbb{W}} > \frac{\sqrt{m+n}(\Delta_j - D^{(j)})}{\zeta_j \cdot \mathbb{K} } + q_{1 - \alpha}\Big ),
\end{equation}
where the random variable $\mathbb{W} $ is defined in \eqref{wvar} and
$\zeta_j$ is the long-run standard deviation appearing in Theorem \ref{thm2.1}, which is defined precisely in \eqref{tau-def-app}. The probability on the right-hand side converges to zero, $\alpha$, or 1, depending on whether $\Delta_j-D^{(j)}$ is positive, zero, or negative, respectively. From this one may also quite easily understand how the power of the test depends on $\zeta_j$. Under the alternative, $\Delta_j - D^{(j)} < 0 $
and the probability on the right-hand side of \eqref{power-w} increases if $(D^{(j)}-\Delta_j)/\zeta_j$ increases. Consequently, smaller long-run variances $\zeta_j^2$ yield more powerful tests. Some values of $\zeta_j$ are calculated via simulation for some of the examples in Section \ref{sec-simul} below.
\\
(3)
Alongside the test for relevant differences in the eigenfunctions just developed, one might also consider the following test for relevant differences in the $j$th eigenvalues
of the covariance operators $C^{X}$ and $C^{Y}$:
\begin{equation}\label{2.21:eval}
H^{(j)}_{0,val} : \; D_{j,val} :=
( \tau^X_j - \tau^Y_j )^2 \leq \Delta_{j,val} ~~\mbox{ versus} ~~ H^{(j)}_{1,val} : ( \tau^X_j - \tau^Y_j )^2 > \Delta_{j,val}.
\end{equation}
Following the development of the above test for the eigenfunctions, a test of the hypothesis \eqref{2.21:eval} can be constructed based on the partial sample estimates of the eigenvalues $\hat{\tau}_j^{X}(\lambda)$ and $\hat{\tau}_j^{Y}(\lambda)$ of the kernel integral operators with kernels $\hat{C}_m^X (\cdot, \cdot , \lambda) $ and $\hat{C}_n^Y (\cdot, \cdot , \lambda) $ in \eqref{1.5seq}.
In particular, let
\begin{eqnarray*}
\hat{T}_{m,n}^{(j)}(\lambda) &=& \lambda (\hat{\tau}_j^{X}(\lambda)-\hat{\tau}_j^{Y}(\lambda)), \mbox{ and } \\
\hat{M}_{m,n}^{(j)} &=& \left(\int_{0}^{1}\{[\hat{T}_{m,n}^{(j)}(\lambda)]^2-\lambda^2[\hat{T}_{m,n}^{(j)}(1)]^2\}^2 \nu(d\lambda)\right)^{1/2}.
\end{eqnarray*}
Then one can show, in fact somewhat more simply than in the case of the eigenfunctions, that the test procedure that rejects the null hypothesis whenever
\begin{equation}\label{testval}
\hat{\mathbb{Q}}^{(j)}_{m,n} = \frac{[\hat{T}_{m,n}^{(j)}(1)]^2- \Delta_{j,val}} {\hat{M}_{m,n}^{(j)}} > q_{1-\alpha}
\end{equation}
is a consistent and asymptotic level $\alpha$ test for the hypotheses \eqref{2.21:eval}. Moreover, the power of this test is approximately given by
\begin{equation}\label{power-wq}
P\big ( \hat{\mathbb{Q}}^{(j)}_{m,n} > q_{1 - \alpha} \big ) \approx P\Big ({\mathbb{W}} > \frac{\sqrt{m+n}(\Delta_{j,val} - D_{j,val})}{\zeta_{j,val} \cdot \mathbb{K} } + q_{1 - \alpha}\Big),
\end{equation}
where $\zeta_{j,val}^{2}$ is a different long-run variance parameter.
Although the tests for the hypotheses \eqref{2.21:func} and \eqref{2.21:eval} are constructed for completely different testing problems, it might be of interest to compare their power properties. For this purpose, note that the ratios $(D^{(j)}-\Delta_j)/\zeta_j$ and $(D_{j,val}-\Delta_{j,val})/\zeta_{j,val}$, in which the power of each test is increasing, depend implicitly and in a quite complicated way on the dependence structure of the $X$ and $Y$ samples and on all eigenvalues and eigenfunctions of their corresponding covariance operators.
One might expect intuitively that relevant differences between the eigenvalues would be easier to detect than differences between the eigenfunctions (as the latter are more difficult to estimate). However, an empirical analysis shows that, in typical examples, the ratio $(D_{j,val}-\Delta_{j,val})/\zeta_{j,val}$ increases extremely slowly with increasing $D_{j,val}$ compared to the analogous ratio for the eigenfunction problem. Consequently, we expected and observed in numerical experiments (not presented for the sake of brevity) that the test \eqref{testval} is less powerful than the test \eqref{testone}
if in hypotheses \eqref{2.21:eval} and \eqref{2.21:func} the thresholds $\Delta_{j,val}$ and $\Delta_{j} $ are similar. This observation also applies to the tests based on (intractable) long-run variance estimation. Here the power is approximately given by
$ 1 - \Phi \big ( \sqrt{m+n} (\Delta_{j} -D ) / z + u_{1-\alpha} \big ) $, where $\Phi$ is the cdf of the standard normal distribution
and $z$ (and $D$) is either $\zeta_j$ (and $D^{(j)} $) for the test \eqref{testlrv} or $ \zeta_{j,val}$ (and $D_{j,val}$) for the corresponding test regarding the eigenvalues.
}
\end{rem}
\subsection{Justification of weak convergence}
\label{sec22}
For a proof of \eqref{weak} several technical assumptions are required. The first condition is standard in two-sample inference.
\begin{assumption}\label{theta}
There exists a constant $\theta \in (0,1)$ such that $\lim_{m,n \to \infty} {m}/({m+n}) = \theta$.
\end{assumption}
Next, we specify the dependence structure of the time series $\{X_i\}_{i\in \mathbb{Z}}$ and $\{Y_i\}_{i\in \mathbb{Z}}$.
Several mathematical concepts have been proposed for this purpose \citep[see][among many others]{
bradley2005,bertail2006}.
In this paper, we use the general framework of $L^{p}$-$m$-approximability for weakly dependent functional data as put forward in \cite{hoermann2010}. Following these authors,
a time series $\{X_i\}_{i \in \mathbb{Z}}$ in $L^2([0,1])$ is called {\it $L^{p}$-$m$-approximable} for some $p>0$ if
\begin{itemize}
\item[(a)]
There exists a measurable function $g\colon S^\infty\to L^2([0,1])$, where $S$ is a measurable space, and independent, identically distributed (iid) innovations $\{\epsilon_i\}_{i \in \mathbb{Z}}$ taking values in $S$ such that $X_i=g(\epsilon_i,\epsilon_{i-1},\ldots)$ for $i\in\mathbb{Z}$;
\item[(b)] Let $\{\epsilon_i^\prime\}_{i \in \mathbb{Z}}$ be an independent copy of $\{\epsilon_i\}_{i \in \mathbb{Z}}$, and define \\ $X_{i,m}=g(\epsilon_i,\ldots,\epsilon_{i-m+1},\epsilon^\prime_{i-m},\epsilon^\prime_{i-m-1},\ldots)$. Then,
\[
\sum_{m=0}^\infty\big(\mathbb{E}[\|X_i-X_{i,m}\|^p]\big)^{1/p}<\infty .
\]
\end{itemize}
\begin{assumption}\label{edep} The sequences $\{X_i\}_{i \in \mathbb{Z}}$ and $\{Y_i\}_{i \in \mathbb{Z}}$ are independent, each centered and
$L^{p}$-$m$-approximable for some $p>4$.
\end{assumption}
Under Assumption \ref{edep}, there exist covariance operators $C^X$ and $C^{Y}$ of $X_i$ and $Y_i$. For the corresponding eigenvalues $\tau^X_1 \geq \tau^X_2 \geq \cdots$ and $\tau^Y_1 \geq \tau^Y_2 \geq \cdots$, we assume the following.
\begin{assumption} \label{as-spacing} There exists a positive integer $d$ such that $\tau_1^X > \cdots > \tau_d^X > \tau_{d+1}^X >0$ and $\tau_1^Y > \cdots > \tau_d^Y > \tau_{d+1}^Y >0$.
\end{assumption}
The final assumption needed is a positivity condition on the long-run variance parameter $\zeta_j^{2}$ appearing in \eqref{weak}. The formal definition of $\zeta_j$ is quite cumbersome, since it depends in a complicated way on expansions for the differences $\hat{v}^X_j(\cdot,\lambda)-v^X_j$ and $\hat{v}^Y_j(\cdot,\lambda)-v^Y_j$, but is provided in Section \ref{sec:proofs}; see equations \eqref{2.15a}--\eqref{tau-def-app}.
\begin{assumption} \label{var-pos} The scalar $\zeta_j$ defined in \eqref{tau-def-app} is strictly positive whenever $D^{(j)} >0$.
\end{assumption}
Recall the definition of the sequential processes $\hat C^{X} (\cdot, \cdot, \lambda) $ and $\hat C^{Y} (\cdot, \cdot, \lambda) $ in \eqref{1.5seq} and their corresponding eigenfunctions $\hat v^X_j (\cdot, \lambda)$ and $\hat v^Y_j (\cdot, \lambda)$.
The first step in the proof of the weak convergence \eqref{weak} is a stochastic expansion of the difference between the sample eigenfunctions $\hat v^{X}_{j} (\cdot, \lambda)$ and $\hat v^{Y}_{j} (\cdot, \lambda)$ and their respective population versions $v^X_j$ and $v^Y_j$. Similar expansions that do not take into account uniformity in the partial sample parameter $\lambda$ have been derived by \cite{kokoreim2013} and \cite{hallhoss2006}, among others; see also \cite{dauxois1982} for a general statement in this context. The proof of this result is postponed to Section \ref{appendix1}.
\begin{proposition}\label{z-approx} Suppose Assumptions \ref{edep} and \ref{as-spacing} hold. Then, for any $j\le d$,
\begin{align} \label{2.1}
\sup_{\lambda \in [0,1]} \bigg\| \lambda [\hat v^X_j(t,\lambda) - v^X_j(t)] - \frac {1}{\sqrt{m}} \sum_{k \neq j} \frac {v^X_k(t)}{\tau^X_j - \tau^X_k} \int^1_0 \hat Z^X_m (s_1, s_2, \lambda) v^X_k (s_2) & v^X_j(s_1) ds_1 ds_2 \bigg\| \\&=O_\mathbb{P}\left( \frac{\log^\kappa(m)}{m}\right), \notag
\end{align}
and
\begin{align} \label{2.2}
\sup_{\lambda \in [0,1]} \bigg\| \lambda [\hat v^Y_j(t,\lambda) - v^Y_j(t)] - \frac {1}{\sqrt{n}} \sum_{k \neq j} \frac {v^Y_k(t)}{\tau^Y_j - \tau^Y_k} \int^1_0 \hat Z^Y_n (s_1, s_2, \lambda) v^Y_k (s_2) & v^Y_j(s_1) ds_1 ds_2 \bigg\| \\&=O_\mathbb{P}\left( \frac{\log^\kappa(n)}{n}\right), \notag
\end{align}
for some $\kappa > 0$, where the processes
$\hat Z^X_m$ and $\hat Z^Y_n$ are defined by
\begin{eqnarray} \label{2.3}
\hat Z^X_m (s_1, s_2, \lambda) &=& \frac {1}{\sqrt{m}} \sum^{\lfloor m \lambda \rfloor}_{i=1} \big (X_i(s_1) X_i(s_2) - C^X(s_1, s_2) \big ), \\ \label{2.4}
\hat Z^Y_n (s_1, s_2, \lambda) &=& \frac {1}{\sqrt{n}} \sum^{\lfloor n \lambda \rfloor}_{i=1} \big (Y_i(s_1) Y_i(s_2) - C^Y(s_1, s_2) \big ).
\end{eqnarray}
Moreover,
\begin{align}\label{v-approx-1x}
\sup_{\lambda \in [0,1]} \sqrt{\lambda} \big\| \hat{v}^X_j(\cdot,\lambda) - v^X_j\big\|
= O_\mathbb{P}\left(\frac{\log^{(1/\kappa)}(m)}{\sqrt{m}} \right), \\
\sup_{\lambda \in [0,1]} \sqrt{\lambda} \big\| \hat{v}^Y_j(\cdot,\lambda) - v^Y_j\big\| = O_\mathbb{P}\left (\frac{\log^{(1/\kappa)}(n)}{\sqrt{n}} \right).
\label{v-approx-1y}
\end{align}
\end{proposition}
\medskip
Recalling notation \eqref{2.5} and writing $D_j(t) = v^X_j(t) - v^Y_j(t)$ for the difference of the $j$th population eigenfunctions, Proposition~\ref{z-approx} motivates the approximation
\begin{equation}\label{2.9}
\hat D_{m,n}^{(j)}(t,\lambda) - \lambda D_j(t) = \lambda ( \hat v^X_j (t,\lambda) - \hat v^Y_j (t,\lambda) ) - \lambda (v^X_j (t) - v^Y_j (t) ) \approx
\tilde D_{m,n}^{(j)}(t, \lambda),
\end{equation}
where the process $ \tilde D_{m,n}^{(j)}$ is defined by
\begin{align}
\nonumber
\tilde D^{(j)}_{m,n} (t, \lambda)
=& \frac {1}{\sqrt{m}} \sum_{k \neq j} \frac {v^X_k (t)}{\tau^X_j - \tau^X_k} \int^1_0 \hat Z^X_m (s_1,s_2,\lambda) v^X_k (s_2) v^X_j (s_1) ds_1 ds_2\\ \label{2.8}
&- \frac {1}{\sqrt{n}} \sum_{k \neq j} \frac {v^Y_k (t)}{\tau^Y_j - \tau^Y_k} \int^1_0 \hat Z^Y_n (s_1, s_2, \lambda) v^Y_k(s_2) v^Y_j(s_1) ds_1 ds_2.
\end{align}
The next result makes the foregoing heuristic arguments rigorous and shows that the approximation holds in fact uniformly with respect to $\lambda \in [0,1]$.
\begin{proposition}\label{d-approx-1}
Suppose Assumptions \ref{theta}--\ref{var-pos}
hold. Then, for any $j \le d$,
\begin{eqnarray*}
\sup_{\lambda \in [0,1]} \left\| \hat D_{m,n}^{(j)}(\cdot,\lambda) - \lambda D_j(\cdot) - \tilde D_{m,n}^{(j)}(\cdot, \lambda) \right\| & =& o_\mathbb{P}\left(\frac{1}{\sqrt{m+n}}\right),
\\
\sup_{\lambda \in [0,1]} \left| \left\| \hat D_{m,n}^{(j)}(\cdot,\lambda)-\lambda D_j(\cdot) \right\|^2 - \left\| \tilde D_{m,n}^{(j)}(\cdot, \lambda) \right\|^2 \right| &= & o_\mathbb{P}\left(\frac{1}{\sqrt{m+n}}\right),
\end{eqnarray*}
and
\begin{equation}
\label{2.11}
\sqrt{m+n} \sup_{\lambda \in [0,1]} \int^1_0 (\tilde D^{(j)}_{m,n} (t, \lambda))^2 dt = o_{\mathbb{P}}(1).
\end{equation}
\end{proposition}
\begin{proof}
According to their definitions,
\begin{align*}
&\hat D_{m,n}^{(j)}(t,\lambda) - \lambda D_j(t) - \tilde D_{m,n}^{(j)}(t, \lambda) \\
&= \lambda [\hat v^X_j(t,\lambda) - v^X_j(t)] - \frac {1}{\sqrt{m}} \sum_{k \neq j} \frac {v^X_k(t)}{\tau^X_j - \tau^X_k} \int^1_0 \hat Z^X_m (s_1, s_2, \lambda) v^X_k (s_2) v^X_j(s_1) ds_1 ds_2 \\
&\;\;\;\;\;\;- \lambda [\hat v^Y_j(t,\lambda) - v^Y_j(t)] + \frac {1}{\sqrt{n}} \sum_{k \neq j} \frac {v^Y_k(t)}{\tau^Y_j - \tau^Y_k} \int^1_0 \hat Z^Y_n (s_1, s_2, \lambda) v^Y_k (s_2) v^Y_j(s_1) ds_1 ds_2.
\end{align*}
Therefore, by the triangle inequality, Proposition \ref{z-approx}, and Assumption \ref{theta},
\begin{align*}
\sup_{\lambda \in [0,1]} \Big\| & \hat D_{m,n}^{(j)}(\cdot,\lambda) - \lambda D_j(\cdot) - \tilde D_{m,n}^{(j)}(\cdot, \lambda) \Big\| \\
&\le \sup_{\lambda \in [0,1]} \Big\| \lambda [\hat v^X_j(t,\lambda) - v^X_j(t)] - \frac {1}{\sqrt{m}} \sum_{k \neq j} \frac {v^X_k(t)}{\tau^X_j - \tau^X_k} \int^1_0 \hat Z^X_m (s_1, s_2, \lambda) v^X_k (s_2) v^X_j(s_1) ds_1 ds_2 \Big\| \\ \notag
&\;\;\;\;+ \sup_{\lambda \in [0,1]} \Big\| \lambda [\hat v^Y_j(t,\lambda) - v^Y_j(t)] - \frac {1}{\sqrt{n}} \sum_{k \neq j} \frac {v^Y_k(t)}{\tau^Y_j - \tau^Y_k} \int^1_0 \hat Z^Y_n (s_1, s_2, \lambda) v^Y_k (s_2) v^Y_j(s_1) ds_1 ds_2 \Big\| \\
&=O_\mathbb{P}\left( \frac{\log^\kappa(m)}{m}+\frac{\log^\kappa(n)}{n}\right) = o_\mathbb{P}\left(\frac{1}{\sqrt{m+n}}\right).
\end{align*}
The second assertion follows immediately from the first and the reverse triangle inequality. With the second assertion in place, we have, using \eqref{v-approx-1x} and \eqref{v-approx-1y}, that
\begin{align}
\sqrt{m+n} \sup_{\lambda \in [0,1]} \int^1_0 (\tilde D^{(j)}_{m,n} (t, \lambda))^2 dt &= \sqrt{m+n} \sup_{\lambda \in [0,1]} \int^1_0 (\hat{D}_{m,n}^{(j)}(t,\lambda) - \lambda D_j(t))^2dt+ o_{\mathbb{P}}(1) \notag \\
&\le 4 \sqrt{m+n} \Big[ \sup_{\lambda \in [0,1]} \lambda^2 \| \hat{v}^X_j(\cdot,\lambda) - v^X_j\|^2 + \sup_{\lambda \in [0,1]} \lambda^2 \| \hat{v}^Y_j(\cdot,\lambda) - v^Y_j\|^2 \Big] \notag \\
& = O_\mathbb{P}\Big(\frac{\log^{(2/\kappa)}(m)}{\sqrt{m}} + \frac{\log^{(2/\kappa)}(n)}{\sqrt{n}} \Big) = o_\mathbb{P}(1) \notag
\end{align}
which completes the proof.
\end{proof}
Introduce the process
\begin{equation} \label{znhat}
\hat Z^{(j)}_{m,n} (\lambda) = \sqrt{m+n} \int^1_0 ( (\hat D^{(j)}_{m,n} (t,\lambda))^2 - \lambda^2 D^2_j(t))dt
\end{equation}
to obtain the following result. The proof is somewhat complicated and therefore deferred to Section \ref{appendix2}.
\begin{proposition} \label{prop1}
Let $\hat Z^{(j)}_{m,n} $ be defined by \eqref{znhat}. Then, under Assumptions \ref{theta}--\ref{var-pos}, for any $j \le d$,
$$
\{ \hat Z^{(j)}_{m,n} (\lambda) \}_{\lambda \in [0,1]} \rightsquigarrow \{ \lambda \zeta_j \mathbb{B} (\lambda) \}_{\lambda \in [0,1]},
$$
where $\zeta_j$ is a positive constant, $\{ \mathbb{B}(\lambda)\}_{\lambda \in [0,1]}$ is a Brownian motion and $\rightsquigarrow$ denotes weak convergence in the Skorokhod topology on $D[0,1]$.
\end{proposition}
\begin{theorem}\label{thm2.1}
If Assumptions \ref{theta}--\ref{var-pos} are satisfied, then for any $j\leq d$
\begin{eqnarray}
\nonumber
\sqrt{m+n} \big ( {\hat D^{(j)}_{m,n} - D^{(j)}}, {\hat V^{(j)}_{m,n}} \big )
\rightsquigarrow \Big ( \zeta_j \mathbb{B}(1) , { \Big \{ \zeta_j^2 \int^1_0 \lambda^2 (\mathbb{B}(\lambda) - \lambda \mathbb{B}(1))^{2} \nu (d \lambda ) \Big \}^{1/2}} \Big ),
\end{eqnarray}
where $\hat D^{(j)}_{m,n}$ and $ \hat V^{(j)}_{m,n} $ are defined by \eqref{2.6} and \eqref{2.7}, respectively, and
$\{ \mathbb{B}(\lambda)\}_{\lambda \in [0,1]}$ is a Brownian motion.
\end{theorem}
\begin{proof}
Observing the definitions of $\hat D^{(j)}_{m,n}$, $D^{(j)}$, $\hat {Z}^{(j)}_{m,n}$ and $\hat V^{(j)}_{m,n} $ in \eqref{2.6}, \eqref{dj}, \eqref{znhat}
and \eqref{2.7}, we have
\begin{eqnarray*}
\hat D^{(j)}_{m,n} - D^{(j)} &=& \int^1_0 (\hat D^{(j)}_{m,n} (t,1))^2 dt - \int^1_0 D^2_j (t) dt = \frac {\hat{Z}^{(j)}_{m,n}(1)}{\sqrt{m+n}} , \\
\hat V^{(j)}_{m,n} &=& \Big \{ \int^1_0 \Big ( \int^1_0 \big [ (\hat D^{(j)}_{m,n} (t, \lambda))^{2} - \lambda^{2} D_j ^2 (t) \big ] dt
- \lambda^2 \int^1_0 \big [
( \hat D^{(j)}_{m,n} (t,1) )^{2} - D_j^2(t) \big ] dt \Big )^2 \nu (d \lambda ) \Big\}^{1/2} \\
&=& \frac {1}{\sqrt{m+n}} \Big \{ \int^1_0 \big( \hat Z^{(j)}_{m,n} (\lambda) - \lambda^2 \hat Z^{(j)}_{m,n} (1)\big)^2 \nu (d \lambda) \Big \}^{1/2} ~.
\end{eqnarray*}
The assertion now follows directly from Proposition \ref{prop1} and the continuous mapping theorem.
\end{proof}
\subsection{Testing for relevant differences in multiple eigenfunctions}\label{sec-mult}
\label{sec3}
In this subsection, we are interested in testing simultaneously whether there are relevant differences between several eigenfunctions of the covariance operators $C^X$ and $C^Y$.
To be precise, let $j_1 < \ldots < j_p$ denote positive indices defining the orders of the eigenfunctions to be compared. This leads to testing the hypotheses
\begin{equation}\label{1.3}
H_{0,p}\colon
D^{(j_{\ell})}=
\| v^X_{j_\ell} - v^Y_{j_\ell} \|^2_2 \leq \Delta_\ell \quad \mbox{ for all } \ell \in \{ 1, \ldots, p \} ,
\end{equation}
versus
\begin{equation}\label{1.3a}
H_{1,p}\colon
D^{(j_{\ell})}=
\| v^X_{j_\ell} - v^Y_{j_\ell} \|^2_2 > \Delta_\ell \quad \mbox{ for at least one } \ell \in \{ 1, \ldots, p \} ,
\end{equation}
where $\Delta_1, \ldots, \Delta_p > 0$ are pre-specified constants.
After trying a number of methods to perform such a test, including deriving joint asymptotic results for the vector of pairwise distances $\hat D_{m,n} = \big (\hat D^{(j_1)}_{m,n}, \ldots, \hat D^{(j_p)}_{m,n} \big )^\top$ and using these to perform confidence region-type tests as described in \cite{aitchison1964}, we ultimately found that the best approach for relatively small $p$ is to simply apply the marginal tests proposed above to each eigenfunction, and then control the family-wise error rate using a Bonferroni correction. Specifically, suppose $P_{j_1},\ldots,P_{j_p}$ are the $P$-values of the marginal relevant-difference tests calculated from \eqref{p-val-calc}. Then the null hypothesis $H_{0,p}$ in \eqref{1.3} is rejected at level $\alpha$ if $P_{j_k}<\alpha/p$ for some $k$ between $1$ and $p$. This asymptotically controls the family-wise type one error at a level of at most $\alpha$. A related approach that we also investigated is the Bonferroni method with the Holm correction; see \cite{holm:1979}. Both methods are investigated by simulation in Section \ref{sec-mult-sim} below, and a short illustration follows.
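In {\tt R}, both corrections amount to one line each; here {\tt pvals} denotes the vector of marginal $P$-values obtained from \eqref{p-val-calc}, filled with made-up values for illustration.
\begin{verbatim}
pvals <- c(0.021, 0.060, 0.004, 0.180)         # hypothetical marginal P-values
alpha <- 0.05
any(pvals < alpha / length(pvals))             # Bonferroni: reject H_{0,p}?
any(p.adjust(pvals, method = "holm") < alpha)  # Holm--Bonferroni variant
\end{verbatim}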
\section{Simulation study}\label{sec-simul}
\def\theequation{3.\arabic{equation}}
\setcounter{equation}{0}
A simulation study was conducted to evaluate the finite-sample performance of the tests of the hypotheses \eqref{2.21:func}. This section also contains a
comparison with the self-normalized two-sample test introduced in \cite{zhangshao2015}, hereafter referred to as the ZS test.
However, it should be emphasized that their test is for the classical hypothesis
\begin{equation} \label{class}
H^{(j)}_{0,class} \colon \| v^X_j - v^Y_j \|^2 =0
~~~~\mbox{ versus} ~~~~
H^{(j)}_{1,class} \colon\| v^X_j - v^Y_j \|^2 >0 ,
\end{equation}
and not for the relevant hypotheses \eqref{2.21:func} studied here. Such a comparison is nevertheless useful to demonstrate that both procedures behave similarly in the different testing problems.
All simulations below were performed using the {\tt R} programming language \citep{rcore:2016}. Data were generated according to the basic model proposed and studied in \cite{panaretos:2010} and \cite{zhangshao2015}, which is of the form
\begin{align}\label{dgp-eq}
X_i(t)= \sum_{j=1}^{2}\left\{\xi_{X,j, 1}^{(i)} \sqrt{2} \sin \left(2 \pi j t+\delta_{j}\right)+\xi_{X , j, 2}^{(i)} \sqrt{2} \cos \left(2 \pi j t+\delta_{j}\right)\right\}, \qquad t \in[0,1],
\end{align}
for $i=1,\ldots,m,$ where the coefficient vectors $\xi_{X,i}=(\xi_{X,1,1}^{(i)}, \xi_{X,2,1}^{(i)}, \xi_{X,1,2}^{(i)}, \xi_{X,2,2}^{(i)})^{\prime}$ were taken to follow the vector autoregressive model
$$
\xi_{X,i}=\rho \xi_{X,i-1}+\sqrt{1-\rho^{2}} e_{X,i},
$$
with $\rho=0.5$ and $e_{X,i} \in \mathbb{R}^{4}$ a sequence of iid
normal random vectors with mean zero and covariance matrix
$$
\Sigma_{e}= \operatorname{diag}(\mathbf{v_X}),
$$
with $\mathbf{v_X}=(\tau_1^{X},\ldots,\tau_4^{X})$. Note that with this specification, the population level eigenvalues of the covariance operator of $X_i$ are $\tau_1^{X},\ldots,\tau_4^{X}$. If $\delta_1=\delta_2=0$, the corresponding eigenfunctions are $v_1^X=\sqrt{2} \sin \left(2 \pi \cdot\right)$, $v_2^X=\sqrt{2}\cos \left(2 \pi \cdot\right)$, $v_3^X=\sqrt{2} \sin \left(4 \pi \cdot\right)$, and $v_4^X=\sqrt{2}\cos \left(4 \pi \cdot\right)$. Each process $X_i$ was produced by evaluating the right-hand side of \eqref{dgp-eq} at 1{,}000 equally spaced points in the unit interval, and then smoothing over a cubic $B$-spline basis with 20 equally spaced knots using the {\tt fda} package; see \cite{ramsay:hooker:graves:2009}. A burn-in sample of length 30 was generated and discarded to produce the autoregressive processes. The sample $Y_i$, $i=1,\ldots,n$, was generated independently in the same way, always choosing $\delta_j=0$, $j=1,2$, in \eqref{dgp-eq}. With this setup, one can produce data satisfying either $H_0^{(j)}$ or $H_1^{(j)}$ by changing the constants $\delta_j$.
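A schematic {\tt R} version of this data-generating process reads as follows; it evaluates the curves on the grid directly rather than smoothing over a B-spline basis, orders the coefficient vector to match the stated eigenvalues, and all names are ours.
\begin{verbatim}
# Sketch: one sample of m curves from the model above, on a grid of G points.
gen_sample <- function(m, tau = c(8, 4, 0.5, 0.3), delta = c(0, 0),
                       rho = 0.5, G = 1000, burn = 30) {
  tt  <- seq(0, 1, length.out = G)
  Phi <- cbind(sqrt(2) * sin(2 * pi * tt + delta[1]),   # v_1
               sqrt(2) * cos(2 * pi * tt + delta[1]),   # v_2
               sqrt(2) * sin(4 * pi * tt + delta[2]),   # v_3
               sqrt(2) * cos(4 * pi * tt + delta[2]))   # v_4
  xi <- matrix(0, m + burn, 4)                 # VAR(1) coefficient process
  for (i in 2:(m + burn))
    xi[i, ] <- rho * xi[i - 1, ] + sqrt(1 - rho^2) * rnorm(4, sd = sqrt(tau))
  xi[(burn + 1):(m + burn), ] %*% t(Phi)       # m x G matrix of curves
}
# X <- gen_sample(100); Y <- gen_sample(100, delta = c(0.1, 0))
\end{verbatim}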
In order to measure the finite-sample properties of the proposed test for the hypotheses $H^{(j)}_0$ versus $H^{(j)}_1$ in \eqref{2.21:func}, data were generated as described above from two scenarios:
\begin{itemize}
\itemsep-.5ex
\item Scenario 1: $\mathbf{v_X}= \mathbf{v_Y}= (8,4,0.5,0.3)$, $\delta_2=0$, and $\delta_1$ varying from $0$ to $0.25$.
\item Scenario 2: $\mathbf{v_X}= \mathbf{v_Y}= (8,4,0.5,0.3)$, $\delta_1=0$, and $\delta_2$ varying from $0$ to $2$.
\end{itemize}
In both cases, we tested the hypotheses \eqref{2.21:func} with $\Delta_j=0.1$, for $j=1,2,3$. We took the measure $\nu$, used to define the self-normalizing sequence in \eqref{2.7}, to be the uniform probability measure on the interval $(0.1,1)$. We also tried other values between 0 and 0.2 for the lower bound of this uniform measure and found that selecting values above 0.05 tended to yield similar performance. When $\delta_1 \approx 0.05$, $\|v_j^X - v_j^Y\|^2_2 \approx 0.1$, and taking $\delta_1 = 0.25$ causes $v_j^X$ and $v_j^Y$ to be orthogonal, $j=1,2$. Hence the null hypothesis $H^{(j)}_0$ holds for $\delta_1< 0.05$, and $H^{(j)}_1$ holds for $\delta_1>0.05$ for $j=1,2$. Similarly, in Scenario 2, one has that $\|v_j^X - v_j^Y\|^2_2 \approx 0.1$ when $\delta_2=0.3155$, $j=3,4$. For this reason, we let $\delta_2$ vary from $0$ to 2. In reference to Remark \ref{eig-rem}, we obtained via simulation that the parameter $\zeta_j$ for the largest eigenvalue process is approximately 4 when $\delta_1=0$ and approximately 10.5 when $\delta_1=0.25$.
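These values of $\zeta_j$ can be obtained by exploiting \eqref{weak} directly: $\sqrt{m+n}(\hat D^{(j)}_{m,n}-D^{(j)})$ is asymptotically $N(0,\zeta_j^2)$, so the empirical standard deviation over independent replications approximates $\zeta_j$. A sketch, reusing the hypothetical helpers {\tt Dhat} and {\tt gen\_sample} from the earlier code fragments:
\begin{verbatim}
# Sketch: Monte Carlo approximation of zeta_j for given sample sizes,
# where D is the true squared distance D^(j) implied by delta.
zeta_hat <- function(nrep, m, n, j, D, delta = c(0, 0)) {
  sd(replicate(nrep,
    sqrt(m + n) * (Dhat(gen_sample(m, delta = delta), gen_sample(n), j) - D)))
}
\end{verbatim}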
The percentages of rejections from 1{,}000 independent simulations with the size of the test fixed at 0.05 are reported in Figures \ref{fig1:sim} and \ref{fig2:sim} as power curves that are functions of $\delta_1$ and $\delta_2$ when $n=m=50$ and 100. These figures also display the rejection rates of the ZS test for the classical hypothesis \eqref{class} (which corresponds to $H^{(j)}_0$ with $\Delta_j=0$). From this, the following conclusions can be drawn.
\begin{enumerate}
\item The tests of $H_0^{(j)}$ based on $\hat{\mathbb{W}}^{(j)}_{m,n}$ exhibited the behavior predicted by \eqref{test-bev}, even for relatively small values of $n$ and $m$. Focusing on the tests of $H_0^{(1)}$ with results displayed in Figure \ref{fig1:sim}, we observed that the empirical rejection rate was less than nominal for $\|v_1^{X}-v_1^{Y}\|^2<\Delta_{1}=0.1$, approximately nominal when $\|v_1^{X}-v_1^{Y}\|^2=\Delta_{1}=0.1$, and the power increased as $\|v_1^{X}-v_1^{Y}\|^2$ began to exceed $0.1$. In additional simulations not reported here, these results improved further for larger values of $n$ and $m$.
\item Observe that with data generated according to Scenario 2, $H_0^{(2)}$ is satisfied while $H_0^{(3)}$ is not satisfied for values of $\delta_2>0.3155$. This is seen in Figure \ref{fig2:sim}, where the tests of $H_0^{(2)}$ exhibited less than nominal size, as predicted by \eqref{test-bev}, even in the presence of differences in higher-order eigenfunctions. The tests of $H_0^{(3)}$ performed similarly to the tests of $H_0^{(1)}$.
\item The self-normalized ZS test for the classical hypothesis \eqref{class}, whose critical values are obtained by the bootstrap, performed well in our simulations: its empirical size was approximately equal to the nominal size when $\|v_1^{X}-v_1^{Y}\|^2=0$, and its power increased as $\|v_1^{X}-v_1^{Y}\|^2$ increased. For the sample size $m=n=50$, however, its empirical size exceeded the nominal level of $5\%$.
Interestingly, the proposed tests tended to exhibit higher power than the ZS test for large values of $\|v_1^{X}-v_1^{Y}\|^2$, even while only testing for relevant differences. Additionally, the computational time required to perform the proposed test is substantially less than what is required to perform the ZS test, since it does not need to employ the bootstrap.
\end{enumerate}
\begin{figure}[H]
\begin{minipage}{.49\textwidth}
\centering
\includegraphics[width=.99\linewidth]{n50_efun.pdf}
\end{minipage}%
\begin{minipage}{.49\textwidth}
\centering
\includegraphics[width=.99\linewidth]{n100_efun.pdf}
\end{minipage}
\caption{
Percentage of rejections (out of 1{,}000 simulations) of the self-normalized statistic of \cite{zhangshao2015} for the classical hypotheses \eqref{class} (denoted $ZS_{n,m}^{(1)}$)
and the new test \eqref{testone} for the relevant hypotheses \eqref{2.21:func} (denoted by $W^{(1)}_{m,n}$) as a function of $\delta_1$ in Scenario 1.
In the left hand panel $n=m=50$, and in the right hand panel $n=m=100$. The horizontal green line is at the nominal level 0.05, and the vertical green line at $\delta_1=0.05$ indicates the case when $\|v_1^{X}-v_1^{Y}\|^2=\Delta_1=0.1$.}
\label{fig1:sim}
\end{figure}
\begin{figure}[H]
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.992\linewidth]{n50_efun2.pdf}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.992\linewidth]{n100_efun2.pdf}
\end{minipage}
\caption{
Percentage of rejections (out of 1{,}000 simulations) of the self-normalized statistic of \cite{zhangshao2015} for the classical hypotheses \eqref{class} (denoted $ZS_{n,m}^{(j)}$, $j=2,3$)
and the new test \eqref{testone} for the relevant hypotheses \eqref{2.21:func} (denoted by $W^{(j)}_{m,n}$, $j=2,3$) as a function of $\delta_2$ in Scenario 2.
In the left hand panel $n=m=50$, and in the right hand panel $n=m=100$.
The horizontal green line is at the nominal level 0.05, and the vertical green line at $\delta_2=0.3155$ indicates the case when $\|v_3^{X}-v_3^{Y}\|^2=\Delta_3=0.1$.}\label{fig2:sim}
\end{figure}
\subsection{Multiple comparisons}\label{sec-mult-sim}
In order to investigate the multiple testing procedure of Section \ref{sec-mult}, $X$ and $Y$ samples were generated according to model \eqref{dgp-eq} with $n=m=100$ in two situations: one with $\delta_1=0.0504915$ and $\delta_2= 0.3155$, and another with $\delta_1=0.25$ and $\delta_2= 2$. In the former case, $\|v_j^{X}-v_j^{Y}\|^2\approx0.1$ for $j=1,\ldots,4$, while in the latter case $\|v_j^{X}-v_j^{Y}\|^2>0.1$, $j=1,\ldots,4$. We then applied tests of $H_{0,p}$ in \eqref{1.3} with thresholds $\Delta_\ell=0.1$ for $\ell=1,\ldots,p$, varying $p=1,\ldots,4$. These tests were carried out by combining the marginal tests for relevant differences of the respective eigenfunctions using the standard Bonferroni correction as well as the Holm--Bonferroni correction. Empirical size and power, calculated from 1{,}000 simulations with nominal size $0.05$ for each value of $p$ and each correction, are reported in Table \ref{pow:tab}. It can be seen that both corrections control the family-wise error rate well. The tests retain reasonable power when comparing up to four eigenfunctions, although the power declines as the number of comparisons increases.
\begin{table}[H]
\vspace{.2cm}
\centering
\begin{tabular}{l l r@{\qquad} r r r r}
\hline
{$\delta_1$} & {$\delta_2$} & & { $p=1$ } & {$2$} & {$ 3 $} &{ $4$ } \\ \hline
0.0504915 & 0.3155 & B & 0.036 & 0.021 & 0.018 & 0.017 \\
&& HB & 0.037 & 0.036 & 0.024 & 0.025 \\
0.25 & 2 & B & 0.750 & 0.678 & 0.668 & 0.564 \\
&& HB & 0.750 & 0.798 &0.716 & 0.594 \\
\hline
\end{tabular}
\caption{Rejection rates from 1{,}000 simulations of tests of $H_{0,p}$ with nominal level $0.05$ for $p=1,
\ldots,4$ and Bonferroni (B) and Holm--Bonferroni (HB) corrections.}\label{pow:tab}
\end{table}
\section{Application to Australian annual temperature profiles}
\label{sec4}
\def\theequation{4.\arabic{equation}}
\setcounter{equation}{0}
To demonstrate the practical use of the tests proposed above, the results of an application to annual minimum temperature profiles are presented next. These functions were constructed from data collected at various measuring stations across Australia. The raw data consist of daily minimum temperature measurements recorded in degrees Celsius over roughly the last 150 years at six stations, and are available in the \texttt{R} package \texttt{fChange}, see \cite{fChange:2017}, as well as from {\tt www.bom.gov.au}. The exact station locations and time periods considered are summarized in Table \ref{stat:tab}. In addition, Figure \ref{Ausmap:fig} provides a map of eastern Australia showing the relative locations of these stations.
\begin{table}[H]
\vspace{.3cm}
\centering
\begin{tabular}{l@{\qquad\qquad}l}
\hline
Location & Years \\
\hline
Sydney, Observatory Hill & 1860--2011~~(151) \\
Melbourne, Regional Office & 1856--2011~~(155) \\
Boulia, Airport$^*$ & 1900--2009~~(107) \\
Gayndah, Post Office & 1905--2008~~(103) \\
Hobart, Ellerslie Road & 1896--2011~~(115) \\
Robe & 1885--2011~~(126) \\
\hline
\end{tabular}
\caption{Locations and names of six measuring stations at which annual temperature data was recorded, and respective observation periods. In brackets are the numbers of available annual temperature profiles. The 1932 and 1970 curves were removed from the Boulia series due to missing values.}\label{stat:tab}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[width=.592\linewidth,height=0.632\linewidth]{Aus_map.pdf}
\vspace{-.7cm}
\caption{Map of eastern Australia showing the locations of the six measuring stations whose data were used in the data analysis. This map was produced using the \texttt{ggmap} package in \texttt{R}; see \cite{ggmap:2013}.}\label{Ausmap:fig}
\end{center}
\end{figure}
In each year and for each station, 365 (366 in leap years) raw data points were converted into functional data objects using cubic B-splines with 20 equally spaced knots via the \texttt{fda} package in \texttt{R}; see \cite{ramsay:hooker:graves:2009} for details. We also tried cubic B-splines with between 20 and 40 equally spaced knots, as well as the same numbers of standard Fourier basis elements, to smooth the raw data into functional data objects, and the test results reported below were essentially unchanged. The resulting curves from the stations located in Sydney and Gayndah are displayed respectively in the left- and right-hand panels of Figure \ref{Syd:fig} as rainbow plots, with earlier curves drawn in red and progressing through the color spectrum to later curves drawn in violet; see \cite{rainbow:2016}. One may notice that the curves appear to generally increase in level over the years. In order to remove this trend, a linear time trend was estimated for the series of average yearly minimum temperatures, and then subtracted pointwise from the time series of curves. The detrended Sydney and Gayndah curves are displayed again as rainbow plots in the left- and right-hand panels of Figure \ref{SydDM:fig}, and appear to be fairly stationary.
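The detrending step is simple to express in code; in the sketch below the smoothed curves of one station are assumed to be stored as a years-by-grid matrix {\tt curves}, one row per year.
\begin{verbatim}
# Fit a linear trend to the average yearly minima and remove it pointwise.
year   <- seq_len(nrow(curves))
fit    <- lm(rowMeans(curves) ~ year)
curves <- curves - fitted(fit)  # column-major recycling: row i loses fitted[i]
\end{verbatim}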
\begin{figure}[H]
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.992\linewidth]{Syd.pdf}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.992\linewidth]{Gayn.pdf}
\end{minipage}
\caption{Rainbow plots of minimum temperature profiles based on data collected at the Sydney (left panel) and Gayndah (right panel) stations constructed using cubic B-splines.}\label{Syd:fig}
\end{figure}
\begin{figure}[H]
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.992\linewidth]{SydDM.pdf}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.992\linewidth]{GaynDM.pdf}
\end{minipage}
\caption{Rainbow plots of detrended minimum temperature profiles from Sydney (left panel) and Gayndah (right panel). Detrending was carried out by fitting a linear time trend to the series of average yearly minimum temperatures, and then removing this trend pointwise from the time series of curves.}\label{SydDM:fig}
\end{figure}
We took as the goal of the analysis to evaluate whether or not there are relevant differences in the primary modes of variability of these curves between station locations, as measured by differences in the leading eigenfunctions of the sample covariance operators. We applied tests of $H_0^{(1)}$ and $H_0^{(2)}$ with thresholds $\Delta_1=\Delta_2=0.1$ based on the statistics $\hat{\mathbb{W}}^{(j)}_{m,n}$, $j=1,2$, to each pair of functional time series from the six stations. The results of these tests are reported in terms of $P$-values in Table \ref{h1:tab}. Plots of the estimated leading eigenfunctions from each sample are displayed in Figure \ref{efun:fig}.
For five of the six stations, the exception being Gayndah, the leading eigenfunction of the sample covariance operators is approximately constant, suggesting that the primary mode of variability of those temperature profiles is essentially level fluctuations around the increasing trend. Pairwise comparisons based on tests of $H_0^{(1)}$ suggest that these functions in general do not exhibit relevant differences at any reasonable significance level. In contrast, the leading eigenfunction calculated from the Gayndah station curves evidently puts more mass in the winter months than in the summer months. This is to be expected given the comparison of the detrended curves in Figure \ref{SydDM:fig}, in which the Gayndah curves exhibit more variability in the winter months relative to the Sydney curves. Pairwise comparisons of the Gayndah data with the other stations suggest that this difference is significant, and even that the difference is relevant at the level $\Delta_1=0.1$. The analysis of the second eigenfunction leads to a similar conclusion: the stations other than Gayndah have similar eigenfunction structure, while the curves from the Gayndah station have a different eigenfunction structure. However, for the second eigenfunction, conclusions about the uniqueness of the Gayndah station cannot be made with the same level of confidence as for the first eigenfunction.
\begin{table}[!t]
\centering
{\setlength\tabcolsep{5.5pt}%
\begin{tabular}{lrrrrr}
\hline
\multicolumn{6}{c}{$H_0^{(1)}, \; \Delta_1=0.1$} \\
\hline
& Melbourne & Boulia & Gayndah & Hobart & Robe \\
Sydney & 0.2075 & 0.4545 & {\bf 0.0327} & 0.2211 & 0.5614 \\
Melbourne & & 0.1450 & {\bf 0.0046} & 0.5007 & 0.2203 \\
Boulia &&& {\bf 0.0466} & {\bf 0.0321} &0.5419 \\
Gayndah &&&& {\bf 0.0002} & {\bf 0.0011} \\
Hobart &&&&& 0.0885 \\
\hline
\multicolumn{6}{c}{$H_0^{(2)}, \; \Delta_2=0.1$}\\
\hline
& Melbourne & Boulia & Gayndah & Hobart & Robe \\
Sydney & 0.1712 & 0.0708 & 0.0865 & 0.1201 & 0.0785 \\
Melbourne & &0.0862 & {\bf 0.0082} & 0.1502 & 0.1778 \\
Boulia & & & 0.0542 & 0.0553 & 0.1438 \\
Gayndah & & & & {\bf 0.0371} & {\bf 0.0037} \\
Hobart & & & & & 0.4430 \\
\hline
\end{tabular}}
\caption{Approximate $P$-values of tests of $H_0^{(1)}$ and $H_0^{(2)}$ with $\Delta_1=\Delta_2=0.1$ for all pairwise comparisons of the series of curves from each of the six monitoring stations. Values that are less than 0.05 are {\bf bolded}. }\label{h1:tab}
\label{tab1}
\end{table}
\begin{figure}[H]
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.992\linewidth]{evecdiff_1.pdf}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=.992\linewidth]{oevecs.pdf}
\end{minipage}
\caption{Left panel: Plot of sample eigenfunctions corresponding to the largest eigenvalue of the sample covariance operators of the Sydney and Gayndah detrended minimum temperature profiles, $\hat{v}_1^{\rm Syd}$ and $\hat{v}_1^{\rm Gayn}$. A test of $H_0^{(1)}$ suggests that the squared norm of the difference between these curves is significantly larger than 0.1 ($P$-value $\approx 0.0327$). Right panel: Plots of sample eigenfunctions corresponding to the largest eigenvalues of the sample covariance operators from the remaining four stations.}
\label{efun:fig}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
\def\theequation{5.\arabic{equation}}
\setcounter{equation}{0}
In this paper, new two-sample tests were introduced to detect relevant differences in the eigenfunctions and eigenvalues of covariance operators of two independent functional time series. These tests can be applied both marginally and, with Bonferroni-type corrections, jointly. The tests are constructed using a self-normalization strategy, which requires an intricate theoretical analysis to derive the large-sample behavior of the proposed procedures. Finite-sample evaluations, through a simulation study and an application to annual minimum temperature data from Australia, show that the tests have very good finite-sample properties and
exhibit the features predicted by the theory.
\section{Technical details }
\label {sec:proofs}
\def\theequation{6.\arabic{equation}}
\setcounter{equation}{0}
In this section we provide the technical details required for the arguments given in Section \ref{sec22}.
\subsection{Proof of Proposition \ref{z-approx}} \label{appendix1}
Below let $\int:= \int_{0}^{1}$ for brevity. According to the definitions of $\hat{\tau}^X_j(\lambda), \hat{v}^X_j(t,\lambda),\tau^X_j,$ and $v^X_j$, a simple calculation shows that for almost all $t\in[0,1]$,
\begin{align}\label{l3-0}
\int (C^X(t,s) & + (\hat{C}_m^X(t,s,\lambda) - C^X(t,s) ) )(v^X_j(s)+ (\hat{v}^X_j(s,\lambda)-v^X_j(s)))ds \\ \nonumber
&= (\tau^X_j + (\hat{\tau}^X_j(\lambda)-\tau^X_j))(v^X_j(t)+ (\hat{v}^X_j(t,\lambda)-v^X_j(t))).
\end{align}
The sequence $\{v^X_i\}_{i\in\mathbb{N}}$ forms an orthonormal basis of $L^2([0,1])$, and hence there exist coefficients $\{\xi_{i,\lambda}\}_{i\in\mathbb{N}}$ such that
\begin{align}\label{lem4-1}
\hat{v}^X_j(t,\lambda) - v^X_j(t) = \sum_{i=1}^\infty \xi_{i,\lambda}v^X_i(t),
\end{align}
for almost every $t$ in $[0,1]$. By rearranging terms in \eqref{l3-0}, we see that
\begin{align}\label{lem4-2}
\int C^X(t,s)(\hat{v}^X_j(s,\lambda) & - v^X_j(s))ds + \int \left( \hat{C}_m^X(t,s,\lambda) - C^X(t,s)\right)v^X_j(s)ds \\
&= \tau^X_j (\hat{v}^X_j(t,\lambda)- v^X_j(t)) + \left(\hat{\tau}^X_j(\lambda) - \tau^X_j\right)v^X_j(t)+ G_{j,m}(t,\lambda), \notag
\end{align}
where
$$
G_{j,m}(t,\lambda)= \int [C^X(t,s) - \hat{C}_m^X(t,s,\lambda)] [ \hat{v}^X_j(s,\lambda)-v^X_j(s)]ds +[\hat{\tau}^X_j(\lambda)-\tau^X_j][\hat{v}^X_j(t,\lambda)-v^X_j(t)].
$$
Taking the inner product on the left and right hand sides of \eqref{lem4-2} with $v^X_k$, for $k\ne j$, and employing \eqref{lem4-1} yields
$$
\tau^X_k \xi_{k,\lambda} + \int\hspace{-.2cm}\int \left( \hat{C}_m^X(t,s,\lambda) - C^X(t,s)\right)v^X_j(s)v^X_k(t)dsdt= \tau^X_j \xi_{k,\lambda}+ \langle G_{j,m}(\cdot,\lambda),v^X_k \rangle,
$$
which implies that
\begin{align}\label{lem4-3}
\xi_{k,\lambda} = \frac{\langle \hat{C}_m^X(\cdot,\cdot,\lambda)- C^X, v^X_j \otimes v^X_k \rangle }{\tau^X_j - \tau^X_k}- \frac{\langle G_{j,m}(\cdot,\lambda),v^X_k \rangle}{\tau^X_j - \tau^X_k},
\end{align}
for all ${\lambda \in [0,1]}$ and $k \ne j$. Furthermore, since $\|\hat{v}^X_j(\cdot,\lambda)\| = \|v^X_j\| = 1$, expanding the squared norm gives $\| \hat{v}^X_j(\cdot,\lambda) - v^X_j\|^2 = 2 - 2\langle v^X_j, \hat{v}^X_j(\cdot,\lambda)\rangle$, and therefore
\begin{align}\label{lem4-5}
\xi_{j,\lambda} = \langle v^X_j, \hat{v}^X_j(\cdot,\lambda) - v^X_j \rangle = -\frac{1}{2} \| \hat{v}^X_j(\cdot,\lambda) - v^X_j\|^2.
\end{align}
Let $S_{j,X} = \min\{ \tau_{j-1}^X- \tau_j^X ,\tau_{j}^X- \tau_{j+1}^X\}$ for $j \geq 2$ and $S_{1,X} = \tau_{1}^X- \tau_2^X $. By Assumption \ref{as-spacing} and the fact that $j \le d$ we have $S_{j,X} >0$. Hence, Lemma 2.2 in \cite{horvkoko2012} (see also Section 6.1 of \cite{gohberg1990}) implies for all $\lambda \in [0,1]$,
\begin{align}\label{l3-3.5}
\sqrt{\lambda}\|\hat{v}^X_j(\cdot,\lambda)-v^X_j\| \le \frac{1}{S_{j,X}} \big\| \sqrt{\lambda}[\hat{C}_m^X(\cdot,\cdot,\lambda)- C^X] \big\|.
\end{align}
Further,
\begin{align*}
\sqrt{\lambda}[\hat{C}_m^X(t,s,\lambda)- C^X(t,s)] &= \frac{\sqrt{\lambda}}{\lfloor m \lambda \rfloor} \sum^{\lfloor m \lambda \rfloor}_{i=1} (X_i(t) X_i(s) - C^X(t, s)) \notag \\
& = \frac{1}{\sqrt{m}}\frac{\sqrt{m\lambda}}{\sqrt{\lfloor m \lambda \rfloor}} \frac{1}{\sqrt{\lfloor m \lambda \rfloor}} \sum^{\lfloor m \lambda \rfloor}_{i=1} (X_i(t) X_i(s) - C^X(t, s)).
\end{align*}
It is easy to show using the Cauchy--Schwarz inequality that the sequence $X_i(\cdot) X_i(\cdot) - C^X(\cdot, \cdot) \in L^2([0,1]^2)$ is $L^{2+\kappa}$-$m$-approximable for some $\kappa >0$ if $X_i$ is $L^{p}$-$m$-approximable for some $p>4$. Lemma B.1 from the Supplementary Material of \cite{aue:rice:sonmez:2018} can be generalized to $L^{2+\kappa}$-$m$-approximable random variables taking values in $L^2([0,1]^2)$, from which it follows that
$$
\sup_{\lambda \in [0,1]}\frac{1}{\sqrt{\lfloor m \lambda \rfloor}} \Big\| \sum^{\lfloor m \lambda \rfloor}_{i=1} (X_i(\cdot) X_i(\cdot) - C^X(\cdot, \cdot)) \Big\| = O_\mathbb{P}(\log^{(1/\kappa)}(m)).
$$
Using this and combining with \eqref{l3-3.5}, we obtain the bound
\begin{align}\label{c-approx}
\sup_{\lambda \in [0,1]}\Big\| \sqrt{\lambda}[\hat{C}_m^X(\cdot,\cdot,\lambda)- C^X] \Big\| = O_\mathbb{P}\Big(\frac{\log^{(1/\kappa)}(m)}{\sqrt{m}} \Big),
\end{align}
and hence the estimate \eqref{v-approx-1x}. Furthermore, using the bound
$$
|\hat{\tau}^X_j(\lambda) - \tau^X_j| \le \big\| \hat{C}_m^X(\cdot,\cdot,\lambda)- C^X \big\|,
$$
we obtain by similar arguments that
\begin{align}\label{eigen-approx-2}
\sup_{\lambda \in [0,1]} \sqrt{\lambda}|\hat{\tau}^X_j(\lambda) - \tau^X_j| = O_\mathbb{P}\Big(\frac{\log^{(1/\kappa)}(m)}{\sqrt{m}}\Big).
\end{align}
Using the triangle inequality, Cauchy--Schwarz inequality, and combining \eqref{c-approx} and \eqref{eigen-approx-2}, it follows
\begin{align}\label{G-approx}
\sup_{\lambda \in [0,1]} \lambda\|G_{j,m}(\cdot,\lambda)\| \le& \sup_{\lambda \in [0,1]}\sqrt{\lambda}\Big\|\hat{C}_m^X(\cdot,\cdot,\lambda)- C^X \Big\|\sup_{\lambda \in [0,1]} \sqrt{\lambda}\| \hat{v}^X_j(\cdot,\lambda) - v^X_j\| \\
&+\sup_{\lambda \in [0,1]} \sqrt{\lambda}|\hat{\tau}^X_j(\lambda) - \tau^X_j| \sup_{\lambda \in [0,1]}\sqrt{\lambda} \| \hat{v}^X_j(\cdot,\lambda) - v^X_j\|
=O_\mathbb{P}\Big(\frac{\log^{(2/\kappa)}(m)}{m} \Big). \notag
\end{align}
Let
$$
R_{j,m}(t, \lambda) = \frac {1}{\sqrt{m}} \sum_{k \neq j} \frac {v^X_k(t)}{\tau^X_j - \tau^X_k} \int^1_0 \hat Z^X_m (s_1, s_2, \lambda) v^X_k (s_2) v^X_j(s_1) ds_1 ds_2 .
$$
Combining \eqref{lem4-1}, \eqref{lem4-3} and \eqref{lem4-5}, we see that for almost all $t\in [0,1]$ and for all $\lambda \in [0,1]$,
$$
\lambda[\hat{v}^X_j(t,\lambda) - v^X_j(t)] = \frac{m\lambda}{\lfloor m \lambda \rfloor} R_{j,m}(t,\lambda) - \sum_{k \neq j} \frac{\langle \lambda G_{j,m}(\cdot,\lambda),v^X_k \rangle}{\tau^X_j - \tau^X_k} v^X_k(t) -\frac{1}{2} \| \hat{v}^X_j(\cdot,\lambda) - v^X_j\|^2v_j^X(t),
$$
with the convention that $({m\lambda}/{\lfloor m \lambda \rfloor}) R_{j,m}(t,\lambda)=0$ for $\lambda < 1/m$. Using this identity and the triangle inequality, we obtain
\begin{align}\label{lem4-v-app}
\sup_{\lambda \in [0,1]} \Big\| & \lambda[\hat{v}^X_j(\cdot,\lambda) - v^X_j] - \frac{m\lambda}{\lfloor m \lambda \rfloor} R_{j,m}(\cdot,\lambda)\Big\| \\
&\le \frac{1}{2} \sup_{\lambda \in [0,1]} \lambda\| \hat{v}^X_j(\cdot,\lambda) - v^X_j\|^2 + \sup_{\lambda \in [0,1]} \Big\|\sum_{k \neq j} \frac{\langle \lambda G_{j,m}(\cdot,\lambda),v^X_k \rangle}{\tau^X_j - \tau^X_k} v^X_k\Big\|. \notag
\end{align}
The first term on the right-hand side of \eqref{lem4-v-app} can be bounded using \eqref{v-approx-1x}. In order to bound the second term we have, using the orthonormality of the $v^X_k$ (Parseval's identity) and the fact that $1/(\tau^X_j - \tau^X_k)^2 \le 1/S_{j,X}^2$ for all $k\ne j$, that
\begin{align*}
\Big\|\sum_{k \neq j} \frac{\langle \lambda G_{j,m}(\cdot,\lambda),v^X_k \rangle}{\tau^X_j - \tau^X_k} v^X_k(\cdot)\Big\| &= \Big( \sum_{k \neq j} \frac{\langle \lambda G_{j,m}(\cdot,\lambda),v^X_k \rangle^2}{(\tau^X_j - \tau^X_k)^2} \Big)^{1/2} \\
&\le \frac{1}{S_{j,X}} \Big( \sum_{k \neq j} {\langle \lambda G_{j,m}(\cdot,\lambda),v^X_k \rangle^2} \Big)^{1/2} \le \frac{1}{S_{j,X}} \|\lambda G_{j,m}(\cdot,\lambda)\|.
\end{align*}
Therefore
\begin{align*}
\sup_{\lambda \in [0,1]} \Big\|\sum_{k \neq j} \frac{\langle \lambda G_{j,m}(\cdot,\lambda),v^X_k \rangle}{\tau^X_j - \tau^X_k} v^X_k(\cdot)\Big\| & \le \sup_{\lambda \in [0,1]} \frac{1}{S_{j,X}} \|\lambda G_{j,m}(\cdot,\lambda)\|
= O_\mathbb{P}\Big(\frac{\log^{(2/\kappa)}(m)}{m} \Big),
\end{align*}
where the last estimate follows from \eqref{G-approx}. Using these bounds in \eqref{lem4-v-app}, we obtain that
$$
\sup_{\lambda \in [0,1]} \Big\| \lambda[\hat{v}^X_j(\cdot,\lambda) - v^X_j] - \frac{m\lambda}{\lfloor m \lambda \rfloor} R_{j,m}(\cdot,\lambda)\Big\| = O_\mathbb{P}\Big(\frac{\log^{(2/\kappa)}(m)}{m} \Big).
$$
Given the convention that $({m\lambda}/{\lfloor m \lambda \rfloor}) R_{j,m}(t,\lambda)=0$ for $0\le \lambda < 1/m$, the result then follows by showing that
$$
\sup_{\lambda \in [1/m,1]} \Big|\frac{m\lambda}{\lfloor m \lambda \rfloor}-1\Big|\Big\| R_{j,m}(\cdot,\lambda) \Big\|= O_\mathbb{P}\Big(\frac{\log^{(2/\kappa)}(m)}{m} \Big).
$$
This result is a consequence of $\sup_{\lambda \in [1/m,1]} \big|\frac{m\lambda}{\lfloor m \lambda \rfloor}-1\big| \le 1/m$, and $\sup_{\lambda \in [1/m,1]}\| R_{j,m}(\cdot,\lambda) \|=O_\mathbb{P}(1)$.
\subsection{Proof of Proposition \ref{prop1}} \label{appendix2}
Before proceeding with this proof, we develop some notation as well as a rigorous definition of the constant $\zeta_j$. Recall the notations \eqref{2.8}, \eqref{2.3} and \eqref{2.4} and define the random variables
\begin{equation}\label{2.15a}
\tilde X_i (s_1, s_2) = X_i (s_1) X_i (s_2) - C^X (s_1, s_2); \quad \tilde Y_i (s_1, s_2) = Y_i(s_1) Y_i(s_2) - C^Y (s_1, s_2).
\end{equation}
Further let the random variables $\overline{X}_i^{(j)}$ and $\overline{Y}_i^{(j)}$ be defined by
\begin{eqnarray}
\label{2.17}
\overline {X}_i^{(j)} = \int^1_0 \tilde X_i (s_1, s_2) f^X_j (s_1, s_2) ds_1 ds_2 ~,~
\overline{Y}_i^{(j)} = \int^1_0 \tilde Y_i (s_1, s_2) f^Y_j (s_1, s_2) ds_1 ds_2,
\end{eqnarray}
with the functions $f^X_j, f^Y_j$ given by
\begin{eqnarray}
\label{2.19}
f^X_j (s_1,s_2) &=& - v^X_j (s_1) \sum_{k \neq j} \frac {v^X_k (s_2)}{\tau^X_j - \tau^X_k} \int^1_0 v^X_k (t) v^Y_j (t) dt, \\ \label{2.20}
f^Y_j (s_1, s_2) &=& - v^Y_j (s_1) \sum_{k \neq j} \frac {v^Y_k (s_2)}{\tau^Y_j - \tau^Y_k} \int^1_0 v^Y_k (t) v^X_j (t) dt.
\end{eqnarray}
Firstly, we note that by using the orthonormality of the eigenfunctions $v_j^X$ and $v_j^Y$, and Assumption \ref{as-spacing}, we get that
$$
\|f^X_j\|^2=\int\hspace{-.2cm}\int (f^X_j (s_1,s_2))^2 ds_1ds_2 = \|v_j^X\|^2 \sum_{k\ne j} \frac { \left(\int^1_0 v^X_k (t) v^Y_j (t) dt \right)^2}{(\tau^X_j - \tau^X_k)^2} \le 1/S_{j,X}^2 < \infty.
$$
Let
$$
\sigma_{X,j}^2 = \sum_{\ell = -\infty}^{\infty} \mbox{cov}(\overline{X}_0^{(j)},\overline{X}_\ell^{(j)}),
\mbox{~~
and ~~}
\sigma_{Y,j}^2 = \sum_{\ell = -\infty}^{\infty} \mbox{cov}(\overline{Y}_0^{(j)},\overline{Y}_\ell^{(j)}).
$$
Based on these quantities, $\zeta_j$ is defined as
\begin{align}\label{tau-def-app}
\zeta_j = 2 \sqrt{\frac{\sigma_{X,j}^2}{\theta} + \frac{\sigma_{Y,j}^2}{1-\theta}}.
\end{align}
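Although the proposed tests are self-normalized and therefore never require estimating $\sigma_{X,j}^2$ or $\sigma_{Y,j}^2$ in practice, it may help intuition to note that these quantities are simply the long-run variances of the scalar series $\overline{X}_i^{(j)}$ and $\overline{Y}_i^{(j)}$. A minimal sketch of how such a quantity could be estimated from a scalar sample (a standard Bartlett-kernel estimator, given purely for illustration; it is not part of the testing procedure) reads:
\begin{verbatim}
import numpy as np

def long_run_variance(z, bandwidth=None):
    # Bartlett-kernel estimate of sum_l cov(z_0, z_l) for a scalar series z.
    z = np.asarray(z, dtype=float)
    T = len(z)
    if bandwidth is None:
        bandwidth = int(T ** (1 / 3))     # a common rule-of-thumb choice
    zc = z - z.mean()
    lrv = zc @ zc / T                     # lag-0 autocovariance
    for l in range(1, bandwidth + 1):
        gamma_l = zc[l:] @ zc[:-l] / T    # lag-l autocovariance
        lrv += 2 * (1 - l / (bandwidth + 1)) * gamma_l
    return lrv
\end{verbatim}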
\begin{proof}[Proof of Proposition \ref{prop1}]
We can write
\begin{eqnarray} \label{2.10}
\hat Z^{(j)}_{m,n} (\lambda) &=& \sqrt{m+n} \int^1_0 \big( (\hat D^{(j)}_{m,n} (t,\lambda))^2 - \lambda^2 D^2_j(t)\big)dt \\ \nonumber
&=& \sqrt{m+n} \ \Big \{ \int^1_0 (\hat D^{(j)}_{m,n}(t,\lambda) - \lambda D_j(t))^2 dt + 2 \lambda \int^1_0 D_j(t) (\hat D^{(j)}_{m,n} (t,\lambda) - \lambda D_j(t)) dt \Big \} \\ \nonumber
&=& \sqrt{m+n} \int^1_0 (\tilde D^{(j)}_{m,n} (t,\lambda))^2 dt + 2 \lambda \sqrt{m+n} \int^1_0 D_j(t) \tilde D_{m,n} ^{(j)}(t, \lambda)dt + o_{\mathbb{P}}(1)
\end{eqnarray}
uniformly with respect to $\lambda \in [0,1]$,
where the process $\tilde D_{m,n}^{(j)}(t, \lambda)$ is defined in \eqref{2.8} and Proposition \ref{d-approx-1} was used in the last equation.
Observing \eqref{2.11} gives
\begin{equation}\label{2.12}
\hat Z^{(j)}_{m,n} (\lambda) = \tilde Z^{(j)}_{m,n} (\lambda) + o_{\mathbb{P}}(1)
\end{equation}
uniformly with respect to $\lambda \in [0,1]$, where the process $\tilde Z_{m,n}^{(j)}$ is given by
\begin{equation}\label{2.14}
\tilde Z^{(j)}_{m,n} (\lambda) = 2 \lambda \sqrt{m+n} \int^1_0 D_j(t) \tilde D^{(j)}_{m,n} (t,\lambda) dt .
\end{equation}
Consequently the assertion of Proposition \ref{prop1} follows from the weak convergence
$$
\{ \tilde Z^{(j)}_{m,n} (\lambda) \}_{\lambda \in [0,1]} \rightsquigarrow \{ \lambda \zeta_j \mathbb{B} (\lambda) \}_{\lambda \in [0,1]}.
$$
We obtain, using the orthogonality of the eigenfunctions and the notation \eqref{dj}, that
\begin{eqnarray}\label{z-mn-def}
\nonumber
\tilde{Z}^{(j)}_{m,n} (\lambda) &=& 2 \lambda \sqrt{m+n} \Big \{ \frac {1}{\sqrt{m}} \int^1_0 \hat Z^X_m (s_1, s_2, \lambda) \int^1_0 D_j(t) \sum_{k \neq j} \frac {v^X_k(t)}{\tau^X_j - \tau^X_k} dt v^X_j (s_1) v^X_k (s_2) ds_1 ds_2 \\ \nonumber
&&- \frac {1}{\sqrt{n}} \int^1_0 \hat Z^Y_n (s_1, s_2, \lambda) \int^1_0 D_j(t) \sum_{k \neq j} \frac {v^Y_k(t)}{\tau^Y_j - \tau^Y_k} dt v^Y_j (s_1) v^Y_k(s_2) ds_1ds_2 \Big \} \\ \label{2.16}
&=& 2 \lambda \sqrt{m+n} \Big \{ \frac {1}{m} \sum_{i=1}^{\lfloor m \lambda \rfloor} \overline{X}_i^{(j)} + \frac {1}{n} \sum_{i=1}^{\lfloor n \lambda \rfloor} \overline{Y}_i^{(j)} \Big \},
\end{eqnarray}
where the random variables $\overline{X}_i^{(j)}$ and $\overline{Y}_i^{(j)}$ are defined above. We now aim to establish that
\begin{align}\label{x-conv}
\Big \{
\frac {1}{\sqrt{m}} \sum_{i=1}^{\lfloor m \lambda \rfloor} \overline{X}_i^{(j)} \Big\}_{\lambda \in [0,1]}
\rightsquigarrow \sigma_{X,j} \{ \mathbb{B}^X(\lambda) \}_{\lambda \in [0,1]},
\end{align}
where $\mathbb{B}^X$ is a standard Brownian motion on the interval $[0,1]$. In the following we use the symbol $\| \cdot \|$ simultaneously for the $L^2$-norm on the spaces $L^2 ([0,1])$ and $L^2([0,1]^2)$, as the particular meaning is always clear from the context.
The following calculation is similar to Lemma A.3 in \cite{aue:rice:sonmez:eigen:2018}. Let
$$
\tilde X_i^{(m)}(t,s) = X_{i,m}(t)X_{i,m}(s) - \mathbb{E}X_0(t)X_0(s),
$$
where $\{ X_{i,m} \}_{i \in \mathbb{Z}}$ is the mean zero $m$-dependent sequence used in the definition of $m$-approximability (see Assumption \ref{edep}).
Moreover, if $q=p/2$ with $p$ given in Assumption \ref{edep}, then we have by the triangle inequality and Minkowski's inequality that
\begin{align}\label{l2-2}
\big \{ \mathbb{E}\|\tilde X_i - \tilde X_i^{(m)} \|^q\big \}^{1/q} &\le \big \{ \mathbb{E}( \|X_i(\cdot)(X_i(\cdot)-X_{i,m}(\cdot))\| + \|X_{i,m}(\cdot)(X_i(\cdot)-X_{i,m}(\cdot))\| )^q \big \}^{1/q} \\
&\le \big \{ \mathbb{E} \|X_i(\cdot)(X_i(\cdot)-X_{i,m}(\cdot))\|^q \big \}^{1/q} + \big \{\mathbb{E}\|X_{i,m}(\cdot)(X_i(\cdot)-X_{i,m}(\cdot))\|^q \big \}^{1/q}. \notag
\end{align}
Using the definition of the norm in $L^2([0,1])$, it is clear that
$$
\|X_i(\cdot)(X_i(\cdot)-X_{i,m}(\cdot))\|= \|X_i\|\|X_i-X_{i,m}\|,
$$
and hence we obtain from the Cauchy--Schwarz inequality applied to the expectation on the concluding line of \eqref{l2-2} and stationarity that
\begin{align*}
( \mathbb{E} \|X_i(\cdot)(X_i(\cdot)-X_{i,m}(\cdot))\|^q)^{1/q} + (\mathbb{E} & \|X_{i,m}(\cdot)(X_i(\cdot)-X_{i,m}(\cdot))\|^q)^{1/q} \\
&\le 2(\mathbb{E}\|X_0\|^{2q})^{1/2q}(\mathbb{E}\|X_0-X_{0,m}\|^{2q})^{1/2q}.
\end{align*}
It follows from this and \eqref{l2-2} that
\begin{align}\label{rho-s}
\sum_{m=1}^\infty (\mathbb{E}\|\tilde X_i - \tilde X_i^{(m)}\|^q)^{1/q} \le 2(\mathbb{E}\|X_0\|^{p})^{1/p} \sum_{m=1}^\infty (\mathbb{E}\|X_0-X_{0,m}\|^{p})^{1/p} < \infty.
\end{align}
Now let $\overline {X}_{i,m}^{(j)}$ be defined as $\overline {X}_{i}^{(j)}$ in \eqref{2.17} with $X_i$ replaced by $X_{i,m}$. We obtain using the Cauchy--Schwarz inequality that
$$
(\mathbb{E} |\overline {X}_{i}^{(j)}- \overline {X}_{i,m}^{(j)}|^q)^{1/q} \le \|f^X_j\| (\mathbb{E}\|\tilde X_i - \tilde X_i^{(m)} \|^q)^{1/q}.
$$
By \eqref{rho-s} it follows that
$$
\sum_{m=1}^{\infty}(\mathbb{E} |\overline {X}_{i}^{(j)}- \overline {X}_{i,m}^{(j)}|^q)^{1/q} < \infty
$$
and therefore the sequence $\overline {X}_{i}^{(j)}$ satisfies the assumptions of Theorem 3 in
\cite{wu:2005}. By this result the weak convergence in \eqref{x-conv} follows. By the same arguments it follows that
\begin{align}\label{y-conv}
\Big \{ \frac {1}{\sqrt{n}} \sum_{i=1}^{\lfloor n \lambda \rfloor} \overline{Y}_i^{(j)} \Big \}_{\lambda \in [0,1]}
\rightsquigarrow \sigma_{Y,j} \{ \mathbb{B}^Y(\lambda)\}_{\lambda \in [0,1]},
\end{align}
where $\mathbb{B}^Y$ is a standard Brownian motion on the interval $[0,1]$ and $\sigma_{Y,j}^2$ is as defined above.
Since the sequences $\{ X_i \}_{i \in \mathbb{Z}}$ and $\{ Y_i \}_{i \in \mathbb{Z}}$ are independent, we have that \eqref{x-conv} and \eqref{y-conv} may be taken to hold jointly, where the Brownian motions $\mathbb{B}^X$ and $\mathbb{B}^Y$ are independent.
It finally follows from this and \eqref{z-mn-def} that
\begin{align*}
\{ \tilde Z^{(j)}_{m,n} (\lambda)\}_{\lambda \in [0,1]} &\rightsquigarrow \Big \{ 2 \lambda \Big ( \frac {\sigma_{X,j}}{\sqrt{\theta}} \mathbb{B}^X (\lambda) + \frac {\sigma_{Y,j}}{\sqrt{1 - \theta}} \mathbb{B}^Y (\lambda)\Big ) \Big \}_{\lambda \in [0,1]}~ \stackrel{\cal D}{=} ~\big \{ \lambda\zeta_j \mathbb{B}(\lambda) \big \}_{\lambda \in [0,1]}~,
\end{align*}
which completes the proof of Proposition \ref{prop1}.
\end{proof}
\bigskip
{\bf Acknowledgements} This work has been supported in part by the
Collaborative Research Center ``Statistical modeling of nonlinear
dynamic processes'' (SFB 823, Teilprojekt A1,C1) of the German Research Foundation
(DFG), and the Natural Sciences and Engineering Research Council of Canada, Discovery Grant. We gratefully acknowledge Professors Xiaofeng Shao and Xianyang Zhang for sharing code to reproduce their numerical examples with us.
\section{Introduction}
The two-dimensional Potts model pervades statistical physics and is a vivid
illustration of the strong ties between conformal field theory (CFT), integrability,
algebra, and probability theory. For $Q \in [0,4]$, and on
the square lattice, it exhibits integrable points corresponding to second-order phase transitions, in both the ferromagnetic \cite{Baxter73,BaxterBook} and the antiferromagnetic \cite{Baxter82,JS_AFPotts} regimes.
While the integrable
aspects emerge from transforming the Potts model into a vertex model, the conformal properties are best investigated by transforming it into a loop model \cite{LoopReview}.
The continuum limit of the ferromagnetic case is the well-studied compactified boson CFT. The antiferromagnetic case has only been understood very recently: it corresponds to a compact boson coupled to a non compact boson \cite{IkhlefJS1,IkhlefJS2,IkhlefJS3,CanduIkhlef} and provides a statistical physics realisation of the Euclidean black hole sigma model \cite{BlackHoleCFT} which has been extensively studied in a string theory context \cite{Troost,RibSch}.
In the theory of disordered systems there is a strong motivation to study
the Potts model with quenched bond randomness. In a perturbative CFT approach
\cite{LudwigCardy87,DPP95} this corresponds to the replica limit ($N \to 0$) of a
system of $N$ Potts models coupled by the term
$g \int {\rm d}x \, \sum_{a \neq b} \varepsilon_a(x) \varepsilon_b(x)$ in the action, where $\varepsilon_a(x)$ denotes the local energy operator ($\Phi_{21}$ in Kac notation) of replica $a=1,2,\ldots,N$. This term is relevant in the renormalisation group (RG) sense for $Q>2$ --- an observation which provides the basis for the perturbative expansion around $Q=2$.
Unfortunately, apart from the perturbative CFT results, the progress on the random-bond Potts model has mainly been numerical \cite{Picco97,CJ97,JC98}.
Alternatively, one can study the coupled replicas for finite, integer $N \ge 2$. The perturbative CFT predicts a non-trivial fixed point $g_*$ for $N \ge 3$ for which a variety of critical exponents can be computed perturbatively, in agreement with numerical transfer matrix results and a duality analysis \cite{DJLP99,JJ00}. The case $N=2$ is
special, since all terms in the beta function, except the leading one, contain the factor
$(N-2)$. Accordingly the perturbative expressions for the critical exponents are singular at $N=2$.
Analytical progress has been limited, thus far, to this case of $N=2$ coupled models.
For $Q=2$ this is known as the Ashkin-Teller model \cite{AT43}, which can be solved through a mapping to the eight-vertex model \cite{BaxterBook}. The coupling term is here exactly marginal and leads to a line of critical
points along which the critical exponents vary continuously.
For $Q>2$, and in the case where the Potts spins interact ferromagnetically, a field theoretical analysis reveals that the perturbation makes the model massive \cite{Vaysburd95}. This agrees with the duality analysis and transfer matrix computations \cite{DJLP99}, as in this case the two models couple strongly to form a single $Q^2$-state model which is non critical (since $Q^2 > 4$).
There however exists another, integrable case of two coupled Potts models, which was
found by Au-Yang and Perk \cite{Perk} by direct solution of the star-triangle equation;
see eq.~(2.17) in that paper. This line of integrable points
was further investigated by Martins and Nienhuis \cite{MartinsNienhuis} (it is called
solution 2 in their paper), who established the corresponding Bethe Ansatz
equations. They also showed that there are two regimes within the range $0 \le Q \le 4$,
each one corresponding to distinct critical behaviour. Martins and Nienhuis were
mainly interested in the case $Q=1$ which can be interpreted as a Lorentz lattice gas.
It belongs to the first regime, with $0 \le Q < 2$, throughout which the Potts interaction is
ferromagnetic and the two models decouple in the continuum limit. The second regime,
with $2 < Q \le 4$, was only treated briefly, and numerical evidence of an effective central
charge $c_{\rm eff} = 3$ was given.
For the remainder of this paper we shall focus on this second regime, $2 < Q \le 4$, for which
the Potts interaction is {\em antiferromagnetic} and
the energy-energy coupling between the two models is relevant by the perturbative
CFT analysis. Fendley and Jacobsen \cite{FJ08} presented a detailed analysis of this case,
based on the level-rank duality \cite{FK09} of the $SO(N)_k$ Birman-Wenzl-Murakami (BWM) algebras \cite{BWM}.
Parameterising $Q$ as
\begin{equation}
\sqrt{Q} = 2 \cos \left( \frac{\pi}{k+2} \right) \,,
\label{eq:loopweight}
\end{equation}
they found that these theories correspond \cite{FJ08} to the conformal coset
\begin{equation}
\frac{SO(k)_3 \times SO(k)_1}{SO(k)_4} \approx
\frac{SU(2)_k \times SU(2)_k}{SU(2)_{2k}}
\end{equation}
with central charge
\begin{equation}
c = \frac{3 k^2}{(k+1)(k+2)} \,.
\label{eq:cc}
\end{equation}
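Note in passing two simple checks of (\ref{eq:cc}): at $k=2$, i.e.\ $Q=2$, it gives $c=1$, as expected for two coupled Ising models on the Ashkin-Teller line, while $c \to 3$ as $k \to \infty$ ($Q \to 4$), the central charge of three free bosons.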
The purpose of this paper is to study further this integrable case of two coupled
antiferromagnetic Potts models. We shall see that the integrable
$\check{R}$ matrix is equivalent to that of $U_q(sl_4^{(2)})$ --- also known as
the $a_3^{(2)}$ model --- in the fundamental representation \cite{GalleasMartins,GalleasMartins04}. This allows us in particular to identify and study in detail the corresponding Bethe Ansatz equations.%
\footnote{
The Bethe Ansatz equations already appeared in \cite{MartinsNienhuis}, but they were not
subjected to a systematic investigation in the range $2 < Q \le 4$.
}
On a more fundamental level, this equivalence places the two coupled Potts
models into the family of $a_n^{(2)}$ models whose first member we have recently investigated in detail \cite{VJS:a22}.
This $a_2^{(2)}$ model is related to the well-known O($n$) model on the square lattice \cite{Nienhuis}, and our study \cite{VJS:a22} concentrated on the so-called regime III, a model of dilute loops that contains a special point ($n \to 0$) which is a candidate for describing the theta-point collapse of polymers \cite{VJS:polymers}.
Using both analytical arguments and extensive numerical analysis we established that the $a_2^{(2)}$ model in regime III has a non compact continuum limit that turns out to be precisely the same as that of the antiferromagnetic Potts model \cite{JS_AFPotts,IkhlefJS1,IkhlefJS2,IkhlefJS3,CanduIkhlef}, namely that of the
$SL(2,\mathbb{R})_k/U(1)$ Euclidean black hole CFT \cite{BlackHoleCFT,Troost,RibSch}.
We show here that the $a_3^{(2)}$ model, relevant for describing two coupled Potts models, also has a non compact continuum limit, albeit now involving three rather than two bosons [cf.\ eq.~(\ref{eq:cc})]. The range $2 < Q \le 4$ in which the Potts models couple non-trivially corresponds precisely to the interesting regime III. Just like in the $a_2^{(2)}$ counterpart, the ``spectrum'' of critical exponents in the $a_3^{(2)}$ model contains both continuous and discrete states, with the discrete states emerging from --- and redisappearing into --- the continuum upon changing the twist. The twist is here controlled by two angles (rather than one in the $a_2^{(2)}$ case) that correspond to modifying the weights of the non contractible loops in each of the two Potts models. We provide the critical exponents of the magnetic-type operators, as functions of the two twists, and infer from those the scaling dimensions $x_{2n_1,2n_2}$ of the so-called watermelon operators in the Potts model, corresponding to the insertion of any given number $(2n_1,2n_2)$ of propagating ``through-lines'' in
each of the two Potts models.
To keep the presentation light, the analysis given here is mainly based on analogies with the $a_2^{(2)}$ case and on an extensive numerical analysis of the Bethe Ansatz equations. A more formal treatment, that corroborates the present analysis, will appear elsewhere in the general $a_n^{(2)}$ context \cite{VJS:an2}.
The fact that the continuum limit is non compact implies that the finite-size free energies, from which the critical exponents are extracted in the usual way, often contain strong logarithmic corrections to scaling. Ref.~\cite{FJ08} attempted to conjecture the first few watermelon exponents based on direct diagonalisation of the
transfer matrix for sizes up to $L=16$ loop strands. Our present knowledge of the logarithmic corrections, combined with the ability to numerically solve the Bethe Ansatz equations for vastly larger sizes (typically $L \simeq 100$), obviously gives a much stronger handle on this problem. It is therefore hardly surprising that a few of the conjectures presented in \cite{FJ08} turn out to be wrong.
The plan of this paper is the following. In section \ref{section:models} we review the definition of the two coupled Potts models and their equivalence with the two-colour dense loop model studied in \cite{FJ08}. We then show how the latter can be reformulated in terms of a properly twisted $a_{3}^{(2)}$ model. The Bethe Ansatz study of the $a_3^{(2)}$ model is presented in section \ref{section:BAEa32}, allowing us to compute the conformal spectrum, which turns out to exhibit non compact features. These results are then applied to the calculation of the loop model's critical exponents in section \ref{section:confloop}. We give general formulae for the watermelon exponents, some of which differ significantly from the numerical estimations of \cite{FJ08}. Our findings are summarised and discussed in section~\ref{section:conclusion}.
\section{From two coupled Potts models to the $a_3^{(2)}$ vertex model}
\label{section:models}
We wish to study a system of two coupled Potts models described by the
Hamiltonian
\begin{equation}
{\cal H} = - \sum_{\langle ij \rangle} \left[
K(\delta_{\sigma_i,\sigma_j} + \delta_{\tau_i,\tau_j}) +
L \delta_{\sigma_i,\sigma_j} \delta_{\tau_i,\tau_j} \right] \,,
\label{eq:hamiltonian}
\end{equation}
where $\langle ij \rangle$ denotes the set of nearest neighbour sites (edges) on the square
lattice, and the Kronecker symbol $\delta_{x,y}$ equals $1$ if $x=y$, and $0$
otherwise. The spins $\sigma_i$ and $\tau_i$ of the first and second models take
the values $1,2,\ldots,Q$.
[The extension to two different models with $Q_1$ and $Q_2$ states
is interesting, but does not to our knowledge sustain an integrable formulation.]
Writing ${\cal H} = -\sum_{\langle ij \rangle} {\cal H}_{ij}$ the local Boltzmann
weight becomes
\begin{equation}
W_{ij} \equiv \mathrm{e}^{-{\cal H}_{ij}} =
1 + v (\delta_{\sigma_i,\sigma_j} + \delta_{\tau_i,\tau_j}) +
(v^2 + w(1+v)^2) \delta_{\sigma_i,\sigma_j} \delta_{\tau_i,\tau_j} \,,
\label{boltzmann}
\end{equation}
where we have defined $v = \mathrm{e}^K - 1$ and $w = \mathrm{e}^L - 1$.
The duality analysis \cite{DomanyRiedel,DJLP99,JJ00} shows that
selfduality is attained by setting the coefficient of the $\delta_{\sigma_i,\sigma_j} \delta_{\tau_i,\tau_j}$ term
to $Q$, viz.
\begin{equation}
w = \frac{Q - v^2}{(1+v)^2} \,.
\label{selfdual}
\end{equation}
\subsection{Loop model and integrable $\check{R}$ matrix}
The partition function is obtained by expanding the product over $W_{ij}$
and summing over the spins,
\begin{equation}
Z = \sum_{\{\sigma,\tau\}} \prod_{\langle ij \rangle} W_{ij} \,.
\end{equation}
It is convenient to associate a graphical representation with this expansion.
We first concentrate on just the first Potts model. For a horizontal edge $(ij)$ we
draw the edge or leave it empty
\begin{equation}
\begin{tikzpicture}[scale=0.5]
\draw [black, line width=0.2] (0,-1) -- (1,0);
\draw [black, line width=0.2] (0,1) -- (1,0);
\draw [black, line width=0.2] (-1,0) -- (0,-1);
\draw [black, line width=0.2] (-1,0) -- (0,1);
\draw [black,line width=1.0] (-1,0) -- (1,0);
\draw (-1,0) node[left] {$\delta_{\sigma_i,\sigma_j} \equiv$};
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.5]
\draw [black, line width=0.2] (0,-1) -- (1,0);
\draw [black, line width=0.2] (0,1) -- (1,0);
\draw [black, line width=0.2] (-1,0) -- (0,-1);
\draw [black, line width=0.2] (-1,0) -- (0,1);
\draw [black, dashed,line width=1.0] (-1,0) -- (1,0);
\draw (-1,0) node[left] {$1 \equiv$};
\end{tikzpicture}
\end{equation}
depending on whether we take a term with or without the $\delta_{\sigma_i,\sigma_j}$ interaction. The set of drawn edges form a set of connected clusters, and the sum over $\{\sigma\}$ amounts to giving a weight $Q$ per cluster. Equivalently, we draw loops on the medial lattice \cite{BKW76,DJLP99}
\begin{equation}
\begin{tikzpicture}[scale=0.5]
\draw [black, line width=0.2] (0,-1) -- (1,0);
\draw [black, line width=0.2] (0,1) -- (1,0);
\draw [black, line width=0.2] (-1,0) -- (0,-1);
\draw [black, line width=0.2] (-1,0) -- (0,1);
\draw[red, line width=0.3mm, rounded corners=7pt] (-0.5,-0.5) -- (0.,0.0) -- (0.5,-0.5);
\draw[red, line width=0.3mm, rounded corners=7pt] (-0.5,0.5) -- (0.,0.0) -- (0.5,0.5);
\draw [black,line width=1.0] (-1,0) -- (1,0);
\draw (-1,0) node[left] {$\delta_{\sigma_i,\sigma_j} \equiv$};
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=0.5]
\draw [black, line width=0.2] (0,-1) -- (1,0);
\draw [black, line width=0.2] (0,1) -- (1,0);
\draw [black, line width=0.2] (-1,0) -- (0,-1);
\draw [black, line width=0.2] (-1,0) -- (0,1);
\draw[red, line width=0.3mm, rounded corners=7pt] (-0.4,-0.6) -- (0,0) -- (-0.4,0.6);
\draw[red, line width=0.3mm, rounded corners=7pt] (0.4,-0.6) -- (0,0) -- (0.4,0.6);
\draw [black, dashed,line width=1.0] (-1,0) -- (1,0);
\draw (-1,0) node[left] {$1 \equiv$};
\end{tikzpicture}
\end{equation}
such that the loops bounce off the empty edges and cut through the occupied edges.
Using the Euler relation this provides a weight $n = \sqrt{Q}$ per closed loop, and at the
selfdual point (\ref{selfdual}) the local Boltzmann weight can be represented as
\begin{equation}
W_{ij} = \raisebox{-0.4cm}{\Rii} + \lambda \left( \raisebox{-0.4cm}{\Rei} +
\raisebox{-0.4cm}{\Rif} \right) + \raisebox{-0.4cm}{\Ref} \,,
\end{equation}
where we have represented the loops corresponding to the second Potts model
by a different colour and defined $\lambda = v / \sqrt{Q}$. Note that due to the
selfduality the weights are now invariant under a $90^\circ$ rotation, so we get
the same expression for horizontal and vertical edges. Accordingly, we have omitted
the graphical rendering of the edge itself, retaining only the loops.
It is this dense two-colour loop model that was studied in \cite{FJ08}.
The local Boltzmann weights define the corresponding $\check{R}$ matrix, so
we shall henceforth write
\begin{equation}
\check{R} = \left( \raisebox{-0.4cm}{\Rii} + \raisebox{-0.4cm}{\Ref} \right) +
\lambda_c \left( \raisebox{-0.4cm}{\Rei} + \raisebox{-0.4cm}{\Rif} \right) \,,
\label{RJF}
\end{equation}
where now $\lambda_c$ is the integrable choice of the coupling constant.
It is given by
\begin{equation}
\lambda_c = \frac12 \left( -\sqrt{Q} + \sqrt{4-Q} \right) \,.
\label{lambdacQ}
\end{equation}
It is convenient to parameterise the loop weight by
$n = 2 \cos \gamma$, with $\gamma = {\pi \over k+2}$,
in agreement with (\ref{eq:loopweight}). The critical coupling is then
\begin{equation}
\lambda_c = -\sqrt{2} \sin \left({\pi \over 4} {k-2 \over k+2}\right) \,,
\label{cJF}
\end{equation}
and the central charge found from the level-rank duality argument
of \cite{FJ08} is given by (\ref{eq:cc}).
We can now interpret the value (\ref{cJF}) physically in terms of the
spin interaction $K$ and the coupling between models $L$ appearing
in the original Hamiltonian (\ref{eq:hamiltonian}). We are interested in
the regime $2 < Q \le 4$ where the two Potts models couple non trivially.
Within this regime, $K$ is real and negative provided that
$\lambda_c \ge -1/\sqrt{Q}$ --- that is $2 < Q \le 2 + \sqrt{2}$, or $2 < k \le 6$ ---
so the spins interact antiferromagnetically.%
\footnote{For $2 + \sqrt{2} < Q \le 4$, or $k > 6$,
the original Potts formulation (\ref{eq:hamiltonian}) is unphysical, corresponding
to complex $K$, but the two-colour loop formulation still makes sense.}
On the other hand,
\begin{equation}
w = \frac{2 Q \sqrt{Q(4-Q)}}
{\left( 2 - Q + \sqrt{Q(4-Q)} \right)^2}
\end{equation}
is real and non-negative for any $Q \in [0,4]$, and so is $L$.
For integer $k$ the coupled Potts models can also be formulated as an RSOS height model
whose weights can be brought into positive definite form \cite{FJ08} under the same condition,
that is, $2 < k \le 6$.
It was observed in \cite{FJ08} that (\ref{RJF}) is the isotropic point of a more general, spectral parameter dependent, integrable $\check{R}$ matrix.
Let us recall this construction (more details are provided in \cite{FJ08}).
Each loop colour (red or blue) is independently a representation of the
Temperley-Lieb (TL) algebra \cite{TL}. Its generators satisfy the well-known
relations
\begin{eqnarray}
e_{(i)} e_{(i)} &=& n e_{(i)} \,, \nonumber \\
e_{(i)} e_{(i \pm 1)} e_{(i)} &=& e_{(i)} \,, \label{TLrelations} \\
e_{(i)} e_{(j)} &=& e_{(j)} e_{(i)} \mbox{ if $|i-j| > 1$} \,. \nonumber
\end{eqnarray}
Omitting henceforth the site index, we make the following graphical
identification of the identity operator and TL generator in the two models:
\begin{eqnarray}
i_1 = \raisebox{-0.2cm}{\iblue} & & \qquad e_1 = \raisebox{-0.2cm}{\eblue} \\
i_2 = \raisebox{-0.2cm}{\ired} & & \qquad e_2 = \raisebox{-0.2cm}{\ered}
\label{TLonecolour}
\end{eqnarray}
The integrable $\check{R}$ matrix (\ref{RJF}) then reads
\begin{equation}
\check{R} = i_1 \otimes i_2 + e_1 \otimes e_2 + \lambda_c \left(e_1 \otimes i_2 + i_1 \otimes e_2 \right) \,.
\label{Re1e2}
\end{equation}
Setting now
\begin{eqnarray}
I &=& i_1 \otimes i_2 \,, \\
E &=& e_1 \otimes e_2 \,, \\
B &=& \left( q^{- {1 \over 2}}i_1 - q^{1 \over 2} e_1 \right) \otimes \left( q^{- {1 \over 2}}i_2 - q^{1 \over 2} e_2 \right) \,,
\end{eqnarray}
where we defined $q=\mathrm{e}^{\mathrm{i}\gamma} = \mathrm{e}^{\mathrm{i}{\pi \over k+2}}$, it is straightforward to see that $I, E, B \equiv I^{(4,k)}, E^{(4,k)}, B^{(4,k)}$ are the generators of an $SO(4)_k$ Birman-Wenzl-Murakami (BWM) algebra \cite{BWM}.
The $SO(4)_k$ algebra is part of the more general family of $SO(N)_k$ BWM algebras ($N$ and $k$, both integers, are called respectively the {\it rank} and {\it level}), which enjoy the interesting property of {\it level-rank duality}\/: the generators of $SO(N)_k$ can be rewritten in terms of those of $SO(k)_N$, and vice-versa.
The construction of integrable $\check{R}$ matrices based on BWM generators is well known \cite{Akutsu}.
For $SO(k)_N$ one defines an integrable, spectral parameter dependent model as
\begin{equation}
\check{R}^{(N,k)} = \left[{N \over 2}-1 - u\right] \left[1-u\right]I^{(N,k)} + \left[u\right] \left[2-{N \over 2}+u\right] E^{(N,k)} + \left[{N \over 2}-1-u\right] \left[u\right]X^{(N,k)} \,,
\end{equation}
where $[x]\equiv {q^{x}-q^{-x} \over q - q^{-1}}$ with now $q=\mathrm{e}^{\mathrm{i}{\pi \over N+k-2}}$, and $X^{(N,k)}=q^{-1}I^{(N,k)} + q E^{(N,k)} - B^{(N,k)}$.
Following \cite{FJ08} we start from $\check{R}^{(k,4)}$, which can be rewritten in terms of the $SO(4)_k$ generators using level-rank duality.
The result is, after proper normalization (see section 3.1 of \cite{FJ08} for details),
\begin{equation}
\check{R}^{(k,4)} = I
+ {\sin\left( {\pi u \over 2+k} \right) \over \sin\left(\pi { 1+u \over 2+k} \right) } \left( 2\cos\left({\pi \over 2+k}\right) + {\cos\left(\pi {1+u-k \over k+2} \right) \over \cos\left(\pi {2+u \over 2+k} \right)} \right) E
- {\sin\left( {\pi u \over 2+k} \right) \over \sin\left(\pi { 1+u \over 2+k} \right) } X \,.
\label{Rk4}
\end{equation}
At the isotropic point $u = {k\over4}-{1\over 2}$ this is exactly the decomposition (\ref{RJF}).
\subsection{Formulation as an integrable vertex model}
Having at hand an integrable, spectral parameter dependent $\check{R}$ matrix for describing the critical point (\ref{RJF}) allows us to look for a Bethe ansatz solution.
To proceed, we need to find a representation of (\ref{Rk4}) which is purely algebraic, as the $\check{R}$ matrix of some vertex model. This is done following the lines of \cite{GalleasMartins}, where the $\check{R}$ matrix based on the generators of BWM algebras are rewritten as those of certain $q$-deformed Lie (super)algebras.
In our case, we see that $\check{R}^{(k,4)}$ corresponds to the integrable $\check{R}$ matrices associated with the superalgebras $U_q \left( sl(4+r | r)^{(2)} \right)$, with $r=0,1,\dots$, whose matrix expression in the tensor product of fundamental representations is given explicitly in \cite{GalleasMartins04}.
Restricting to the simplest of these representations, namely $r=0$, we therefore arrive at the conclusion that (\ref{Rk4}) is equivalent to the integrable $\check{R}$ matrix associated with $U_q \left(sl_4^{(2)}\right)$, which more commonly goes by the name of $a_3^{(2)}$ model. Since we will be concerned with this model from now on, it is worthwhile recalling its definition explicitly.
Consider a system of horizontal length $L$, each site of which carries a space $\mathcal{V}\equiv \mathbb{C}^{4}$. The spectral parameter dependent row-to-row transfer matrix is written as a trace over an auxiliary space $\mathcal{A}\equiv \mathbb{C}^{4}$, namely
\begin{equation}
T^{(L)}(\lambda) = \mbox{Tr}_{\mathcal{A}} \left( R_{\mathcal{A}1}(\lambda)\ldots R_{\mathcal{A}L}(\lambda) \tau_{\mathcal{A}} \right) \,,
\label{T6GM}
\end{equation}
where the matrix $R_{\mathcal{A}i}$ acts on the tensor product of the auxiliary space with the $i$th vertical --- or quantum --- space, and as the identity on the others. It is related to the $\check{R}$ matrix by a permutation of spaces, $R_{\mathcal{A}i} = \mathcal{P}_{\mathcal{A}i} \check{R}_{\mathcal{A}i}$. Moreover, the twist operator $\tau_{\mathcal{A}}$ acts diagonally on $\mathcal{A}$ in a way that will be made explicit later, and the $\check{R}_{ab}$ matrix acting on two spaces $a,b$ can be decomposed in terms of the $4 \times 4$ Weyl matrices in $a$ and $b$ as \cite{GalleasMartins04}
\begin{eqnarray}
\check{R}_{ab}(\lambda) &=& a(\lambda) \sum_{\stackrel{\alpha=1}{\alpha \neq \alpha'}}^{4}
\hat{e}^{(a)}_{\alpha \alpha} \otimes \hat{e}^{(b)}_{\alpha \alpha}
+b (\lambda) \sum_{\stackrel{\alpha ,\beta=1}{\alpha \neq \beta,\alpha \neq \beta'}}^{4}
\hat{e}^{(a)}_{\beta \alpha} \otimes \hat{e}^{(b)}_{\alpha \beta} \nonumber \\
&+& {\bar{c}} (\lambda) \sum_{\stackrel{\alpha ,\beta=1}{\alpha < \beta,\alpha \neq \beta'}}^{4} \hat{e}^{(a)}_{\alpha \alpha} \otimes \hat{e}^{(b)}_{\beta \beta}
+c (\lambda) \sum_{\stackrel{\alpha ,\beta=1}{\alpha > \beta,\alpha \neq \beta'}}^{4} \hat{e}^{(a)}_{\alpha \alpha} \otimes \hat{e}^{(b)}_{\beta \beta} \nonumber \\
&+& \sum_{\alpha ,\beta =1}^{4} d_{\alpha, \beta} (\lambda)
\hat{e}^{(a)}_{\alpha' \beta} \otimes \hat{e}^{(b)}_{\alpha \beta'} \,.
\label{RGM}
\end{eqnarray}
In the above formula every index $\alpha=1,\ldots,4$ corresponds to a conjugate index $\alpha' \equiv 5-\alpha$, and $\hat{e}^{(a)}_{\alpha \beta}$ (resp. $\hat{e}^{(b)}_{\alpha \beta}$) denotes the matrix acting on $a$ (resp. $b$) such that $\left(\hat{e}^{(a,b)}_{\alpha \beta}\right)_{\mu \nu} = \delta_{\alpha \mu} \delta_{\beta \nu}$. The Boltzmann weights $a(\lambda)$,
$b(\lambda)$, $c(\lambda)$ and ${\bar{c}} (\lambda)$ are determined by
\begin{eqnarray}
\label{bw1}
a (\lambda) &=&(e^{2 \lambda} -\zeta)(e^{2 \lambda} -q^2) \,, \\
b (\lambda) &=&q(e^{2 \lambda} -1)(e^{2 \lambda} -\zeta) \,, \\
c (\lambda) &=&(1-q^2)(e^{2 \lambda} -\zeta) \,, \\
{\bar{c}} (\lambda) &=& e^{2 \lambda} c(\lambda) \,,
\end{eqnarray}
where $\zeta = -q^4$, whilst $d_{\alpha\beta}(\lambda)$ has the form
\begin{equation}
d_{\alpha, \beta} (\lambda) = \left \lbrace
\begin{array}{ll}
q(e^{2 \lambda} -1)(e^{2 \lambda} -\zeta) +e^{2\lambda}(q^2 -1)(\zeta -1) &
\mbox{for } \alpha=\beta=\beta' \,, \\
(e^{2 \lambda} -1)\left[ (e^{2 \lambda} -\zeta)q^{2} +e^{2\lambda}(q^2 -1) \right] &
\mbox{for } \alpha=\beta \neq \beta' \,, \\
(q^{2 }-1)\left[ \zeta(e^{2 \lambda} -1) q^{t_{\alpha}-t_{\beta}} -\delta_{\alpha ,\beta'} (e^{2\lambda} -\zeta) \right] &
\mbox{for } \alpha < \beta \,, \\
(q^{2 }-1) e^{2 \lambda} \left[ (e^{2 \lambda} -1) q^{t_{\alpha}-t_{\beta}} -\delta_{\alpha ,\beta'} (e^{2\lambda} -\zeta) \right] &
\mbox{for } \alpha > \beta \,, \\
\end{array} \right.
\end{equation}
where $t_{\alpha}=-1,0,0,1$ for $\alpha=1,2,3,4$ respectively.
The exact identification of (\ref{Rk4}) with (\ref{RGM}) in fact involves some gauge changes, which we detail in the next section.
It is important to notice that even though level-rank duality for the BWM algebra is only defined for integer values of $k$, we are now left with a parametrization $\gamma = {\pi \over k+2}$ where $\gamma$, and therefore $k$, can vary continuously.
More precisely we will consider in general $\gamma \in [0, {\pi \over 2}]$, because of the periodicity and the $\gamma \to {\pi - \gamma}$ symmetry of (\ref{RGM}).
The isotropic value of the spectral parameter $\lambda$ which recovers (\ref{RJF})--(\ref{lambdacQ})
is the following:
\begin{equation}
\lambda_+ = \mathrm{i} \left( \gamma - {\pi \over 4} \right) \,.
\label{lambdaisoRIII}
\end{equation}
Note that there is another value of the spectral parameter yielding an isotropic model, namely
\begin{equation}
\lambda_- = \mathrm{i} \left( \gamma + {\pi \over 4} \right) \,.
\label{lambdaisoRI}
\end{equation}
It corresponds to the other solution of $(\lambda_c)^2 + \lambda_c \sqrt{Q} + \frac12 Q = 1$, that is, replacing (\ref{lambdacQ}) by
\begin{equation}
\lambda_c^{\pm} = \frac12 \left( -\sqrt{Q} \pm \sqrt{4-Q} \right) \,.
\end{equation}
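Both values are easily checked: $(\lambda_c^{\pm})^2 = 1 \mp \tfrac{1}{2}\sqrt{Q(4-Q)}$ and $\lambda_c^{\pm}\sqrt{Q} = \tfrac{1}{2}\big({-Q} \pm \sqrt{Q(4-Q)}\big)$, so that the square-root terms cancel and $(\lambda_c^{\pm})^2 + \lambda_c^{\pm}\sqrt{Q} + \tfrac{Q}{2} = 1$ holds for either sign.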
The leading eigenvalues at one or another of these isotropic points do not correspond to the same eigenstates, and therefore define different regimes. We shall come back to this issue in section \ref{section:BAEa32}.
Note also that we only consider here periodic or twisted periodic boundary conditions. It would however also be interesting to study the case where the system has open boundary conditions in the horizontal directions. Integrable reflection matrices need to be introduced in this case, and we point out that one solution has been found in \cite{Malara,GalleasOpen}. This solution contains a free parameter, which is reminiscent of the situation for a single Potts model where the appearance
of an arbitrary constant of separation \cite{Doikou} in the diagonal $K$-matrix can be interpreted
as an algebraic freedom in defining the boundary interaction in the corresponding conformal
boundary loop model \cite{confbound}.
\subsection{Two-colour structure and conserved magnetisations of the $a_3^{(2)}$ model}
\label{section:twocolourstructure}
We now wish to go the opposite way, in order to make transparent the two-colour structure hidden in the vertex formulation of the $a_3^{(2)}$ model.
First relabel the basis states $\alpha=1,2,3,4$ as $-2,-1,1,2$ (so in particular $\alpha' = -\alpha$), and give these states the following interpretation as the product of $U_q\left(sl_2\right)$ spin-${1 \over2}$ states
\begin{eqnarray}
-2,-1,1,2 &=& \left|-\right\rangle_1 \otimes \left|-\right\rangle_2 , \quad \left|-\right\rangle_1 \otimes \left|+\right\rangle_2 , \quad \left|+\right\rangle_1 \otimes \left|-\right\rangle_2 , \quad \left|+\right\rangle_1 \otimes \left|+\right\rangle_2 \\
\alpha &=& 3 a_1 + a_2 \,,
\label{aa1a2}
\end{eqnarray}
where $a_1 = S_1^{(z)}$ and $a_2 = S_2^{(z)}$ take values $\pm \frac12$ and will
be interpreted as the $z$-component of spin in each of the two models.
In this formulation the charge $t_{-2,-1,1,2}=-1,0,0,1$ defined earlier can just be interpreted as the total spin, $t_\alpha = S^{(z)}_1 + S^{(z)}_2$.
We can define on each pair of sites Temperley-Lieb generators in a standard way
\begin{eqnarray}
\left(e_1\right)_{a_1 a_2,b_1 b_2}^{c_1 c_2,d_1 d_2} &=& \delta_{a_1+b_1,0} \delta_{c_1+d_1,0} q^{c_1-b_1} \\
\left(e_2\right)_{a_1 a_2,b_1 b_2}^{c_1 c_2,d_1 d_2} &=& \delta_{a_2+b_2,0} \delta_{c_2+d_2,0} q^{c_2-b_2}
\end{eqnarray}
and check that these generators obey the same algebraic relations as those in (\ref{Re1e2}), namely (\ref{TLrelations}).
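These relations are also straightforward to verify numerically. The following sketch (our own check, with the arbitrary illustrative value $\gamma=\pi/5$) builds $e_1$ and $e_2$ as $16\times 16$ matrices from the elements above and verifies $e_i^2 = n\, e_i$ together with the commutation of the two colours; the relation $e_{(i)} e_{(i\pm 1)} e_{(i)} = e_{(i)}$ can be checked analogously on three sites.
\begin{verbatim}
import numpy as np

gamma = np.pi / 5
q = np.exp(1j * gamma)
n = q + 1 / q                        # loop weight n = 2 cos(gamma)

# Single-colour TL generator on a pair of spin-1/2 spaces:
# <c d| e |a b> = delta_{a+b,0} delta_{c+d,0} q^{c-b}
spins = [0.5, -0.5]
pairs = [(a, b) for a in spins for b in spins]
e = np.zeros((4, 4), dtype=complex)
for col, (a, b) in enumerate(pairs):
    for row, (c, d) in enumerate(pairs):
        if a + b == 0 and c + d == 0:
            e[row, col] = q ** (c - b)

# Each site is C^4 = C^2 (colour 1) x C^2 (colour 2); on two sites the
# basis index is ordered as (colour1_a, colour2_a, colour1_b, colour2_b).
e4 = e.reshape(2, 2, 2, 2)           # (out_a, out_b | in_a, in_b), one colour
d2 = np.eye(2)
e1 = np.einsum('OPIJ,oi,pj->OoPpIiJj', e4, d2, d2).reshape(16, 16)
e2 = np.einsum('opij,OI,PJ->OoPpIiJj', e4, d2, d2).reshape(16, 16)

assert np.allclose(e1 @ e1, n * e1)  # e_1^2 = n e_1
assert np.allclose(e2 @ e2, n * e2)  # e_2^2 = n e_2
assert np.allclose(e1 @ e2, e2 @ e1) # the two colours commute
\end{verbatim}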
Let us represent these generators graphically in the loop language of (\ref{TLonecolour}), which allows us to write (\ref{RGM}) acting on two sites $a,b$ as
\begin{eqnarray}
\check{R}_{a,b} &=& P_a P_{b} \left[ w_I \raisebox{-0.4cm}{\Rii} + w_X \left( \raisebox{-0.4cm}{\Rei} + \raisebox{-0.4cm}{\Rif} \right) + w_E \raisebox{-0.4cm}{\Ref} \right] P_a^{-1} P_{b}^{-1} \\
&\equiv& P_a P_{b} \check{R}_{\text{loop}} P_a^{-1} P_{b}^{-1} \,,
\end{eqnarray}
where $w_I$, $w_X$, $w_E$, are coefficients that depend on $\gamma$ and $\lambda$. The $P_{i}$ are gauge factors which amount to multiplying the states $\alpha=\pm 1$ (resp.\ $\alpha=\pm 2$) by $\mathrm{i}$ on odd (resp.\ even) sites, namely
\begin{eqnarray}
P_a &=& \mathrm{diag}\left( 1,\mathrm{i} ,\mathrm{i},1 \right) \otimes \mathbf{1} \equiv U \otimes \mathbf{1} \,, \\
P_b &=& \mathbf{1} \otimes \mathrm{diag}\left( \mathrm{i},1,1 ,\mathrm{i} \right) \equiv \mathbf{1}\otimes V \,.
\end{eqnarray}
Conversely, nothing changes if one decides to instead multiply $\alpha=\pm 1$ (resp.\ $\alpha=\pm 2$) by $\mathrm{i}$ on even (resp.\ odd) sites, which amounts to exchanging $U$ and $V$.
Therefore, considering the $\check{R}_{\mathcal{A}i}$ matrix acting on the auxiliary space ${\cal A}$ and the quantum space labelled $i$ we can use the equivalence just mentioned to write
\begin{eqnarray}
\check{R}_{\mathcal{A},i} &=& U_i V_{\mathcal{A}} \check{R}_{\text{loop}} V_i^{-1} U_{\mathcal{A}}^{-1} \quad \mbox{for $i$ even} \,, \\
\check{R}_{\mathcal{A},i} &=& V_i U_{\mathcal{A}} \check{R}_{\text{loop}} U_i^{-1} V_{\mathcal{A}}^{-1} \quad \mbox{for $i$ odd}\,.
\end{eqnarray}
Since the square lattice is bipartite, the factors of $U^{\pm 1}$ and $V^{\pm 1}$ coming from adjacent sites will cancel out when forming the transfer matrix $T_L(\lambda)$, so it is equivalent to express the latter in the form (\ref{T6GM}) with $\check{R}_{\mathcal{A},i} = \check{R}_{\rm loop}$, i.e., with the gauge matrices being omitted. This observation completes the equivalence between the $a_3^{(2)}$ vertex model and the two-colour loop model, up to boundary effects and other subtleties to be discussed in section~\ref{sec:bcs} below.
Just like in the well-known construction in the one-colour (Potts) case (see e.g.~\cite{Richard}), the loop transfer matrix has a block-triangular structure in terms of the number of through-lines (or ``watermelon legs'') $l_1$ and $l_2$ propagating in each of the Potts models. As far as the eigenvalue problem is concerned, it is therefore equivalent to impose the strict conservation of the quantum numbers $(l_1,l_2)$. On the other hand, the vertex-model transfer matrix commutes with both of the total magnetisations,
$S_1^{(z)}\equiv \sum_{i=1}^L \left(S_1^{(z)}\right)_i$ and
$S_2^{(z)}\equiv \sum_{i=1}^L \left(S_2^{(z)}\right)_i$, and so it can be diagonalised
in sectors of fixed total magnetisation. It follows that the sector of the loop-model
transfer matrix with a fixed number $(l_1,l_2)$ of through-lines of each colour is related to that of the vertex-model transfer matrix with magnetisations $S_1^{(z)}={l_1 \over 2}$ and $S_2^{(z)}={l_2 \over 2}$.
\subsection{The periodic loop model and its associated twisted vertex model}
\label{sec:bcs}
To identify the models completely, that is, for instance, to reformulate the periodic loop transfer matrix in terms of the transfer matrix (\ref{T6GM}), there are however still two aspects that need to be taken care of.
\subsubsection{Choice of the boundary conditions}
\label{section:choiceoftwists}
In the periodic loop model as considered in \cite{FJ08}, there can exist non contractible loops, that is, closed loops that wind horizontally around the periodic direction.
These must have the same weight $n=q+q^{-1}=2\cos\gamma$ as the contractible ones, a fact which needs to be taken into account in the vertex model by choosing the twist in (\ref{T6GM}) appropriately. Let us write the latter in terms of two independent twist angles $\phi_1$ and $\phi_2$, associated with each of the two colours,
\begin{equation}
\tau_{\mathcal{A}} = \mathrm{e}^{-2\mathrm{i} \left( \phi_1 s_1^{(z)}+\phi_2 s_2^{(z)}\right)} \,.
\label{twistmatrix}
\end{equation}
We stress that $s_i^{(z)} = \pm \frac12$ denotes here the local magnetisation along the auxiliary space; it should not be confused with the global magnetisation $S_i^{(z)} = -\frac{L}{2},\ldots,\frac{L}{2}$ on the quantum spaces, a quantity which is conserved by the transfer matrix.
The proper values of the twist angles depend on $S_i^{(z)}$. For each $i=1,2$ we must choose them as follows:
\begin{itemize}
\item When $S_i^{(z)}=0$ there can exist non contractible loops of colour $i$. The correct choice is $\phi_i=\gamma$, so that each non contractible loop gets a weight $\mathrm{e}^{2\mathrm{i}\gamma \cdot {1\over 2}}+\mathrm{e}^{-2\mathrm{i}\gamma \cdot {1\over 2}}=2\cos\gamma=n$.
\item When $S_i^{(z)} \neq 0$ the presence of through-lines forbids the presence of non contractible loops of colour $i$. The correct choice is then $\phi_i = 0$, since otherwise the through-lines would pick up spurious phase factors when spiraling around the horizontal, periodic direction.
\end{itemize}
\subsubsection{The twisted vertex model as an enlarged periodic loop model}
\label{sec:enlarged}
Even with the correct choice of twist angles, there is a subtle difference between
the twisted vertex model and the periodic loop model. The reason for this is that
the vertex model has a larger space of states.
To show this, we first focus on a single loop colour. Consider as an example
the system of size $L=4$. In the loop model there are $2$ possible states without through-lines which can be represented graphically as
$\begin{tikzpicture}[scale=0.5]
\draw[red,line width=1.0] (0,0) arc(180:360:3mm and 5mm);
\draw[red,line width=1.0] (1.2,0) arc(180:360:3mm and 5mm);
\end{tikzpicture}$
and
$\begin{tikzpicture}[scale=0.5]
\draw[red,line width=1.0] (0,0) arc(180:360:9mm and 5mm);
\draw[red,line width=1.0] (0.6,0) arc(180:360:3mm and 2.5mm);
\end{tikzpicture}$.
In the sector $S^{(z)} = 0$ of the vertex model there are obviously ${4 \choose 2} = 6$
states. The difference is that the loop model gives the same weight $n$ to any
loop, contractible or not, whereas the vertex model can control the weight of the
non contractible loop independently by means of the twist angle. To endow the
loop model with the capability of distinguishing between contractible and non
contractible loops, we must enlarge its state space with another 4 states. We
can represent those graphically as
\raisebox{-0.5mm}{$\begin{tikzpicture}[scale=0.5]
\draw[red,line width=1.0] (0,0) arc(180:360:3mm and 5mm);
\draw[red,line width=1.0] (1.2,0) arc(180:360:3mm and 5mm);
\draw[black,fill] (0.3,-0.5) circle(0.5ex);
\end{tikzpicture}$},
\raisebox{-0.5mm}{$\begin{tikzpicture}[scale=0.5]
\draw[red,line width=1.0] (0,0) arc(180:360:3mm and 5mm);
\draw[red,line width=1.0] (1.2,0) arc(180:360:3mm and 5mm);
\draw[black,fill] (1.5,-0.5) circle(0.5ex);
\end{tikzpicture}$},
\raisebox{-0.5mm}{$\begin{tikzpicture}[scale=0.5]
\draw[red,line width=1.0] (0,0) arc(180:360:9mm and 5mm);
\draw[red,line width=1.0] (0.6,0) arc(180:360:3mm and 2.5mm);
\draw[black,fill] (0.9,-0.5) circle(0.5ex);
\end{tikzpicture}$}, and
\raisebox{-0.5mm}{$\begin{tikzpicture}[scale=0.5]
\draw[red,line width=1.0] (0,0) arc(180:360:9mm and 5mm);
\draw[red,line width=1.0] (0.6,0) arc(180:360:3mm and 2.5mm);
\draw[black,fill] (0.9,-0.25) circle(0.5ex);
\draw[black,fill] (0.9,-0.5) circle(0.5ex);
\end{tikzpicture}$},
where a mark on an arc now means that it has traversed the periodic
boundary condition. Marks add up modulo 2 upon
multiple traversals and upon concatenating two arcs through the action
of TL generators.
We shall refer to the loop model where arcs in the sector without through-lines can be marked as the {\em enlarged loop model}. (We do not mark arcs in sectors with through-lines, since there cannot be any non contractible loops anyway.)
The original, periodic loop model will in contrast be referred to as the {\em original loop model}. We now claim that the enlarged loop model is equivalent to the twisted vertex model, in the sense that their state spaces are isomorphic.
In fact, it is not difficult to establish a bijection between the state spaces. Reading the states of the enlarged loop model from left to right, replace each opening of an unmarked (resp.\ a marked) loop by an up-spin (resp.\ a down-spin) and each closing
by a down-spin (resp.\ an up-spin). In this way, the first two states given above
become
$\uparrow \downarrow \uparrow \downarrow$ and
$\uparrow \uparrow \downarrow \downarrow$, while the latter four states become
$\downarrow \uparrow \uparrow \downarrow$,
$\uparrow \downarrow \downarrow \uparrow$,
$\downarrow \uparrow \downarrow \uparrow$, and
$\downarrow \downarrow \uparrow \uparrow$.
The mapping extends to sectors with through-lines, provided we replace each
through-line by an up-spin. To establish the reverse mapping, consider any given
initial spin. Compute the accumulated magnetisation upon moving rightwards (crossing
the periodic boundary condition if necessary) until the magnetisation becomes zero,
or the same spin is reached again. In the former case, the spin where the magnetisation
becomes zero is linked by an arc to the initial spin. The corresponding spins are obviously opposite, and if the down-spin is to the left of the up-spin (and only if we are in the $S^{(z)}=0$ sector) the arc is marked.
In the latter case, there is no corresponding spin and the initial spin is a through-line.
It is an elementary exercise to show that in the original loop model the sector without through-lines has dimension
\begin{equation}
d_0(L) = {L \choose L/2} - {L \choose L/2 + 1}
= \frac{1}{L/2 + 1} {L \choose L/2} \,.
\end{equation}
In the enlarged loop model the sector with $2l$ through-lines has dimension
\begin{equation}
d'_{2l}(L) = {L \choose L/2 + l} \,,
\end{equation}
which is obvious because of the equivalence with the vertex model. The
original loop model has the same dimensions in the sectors with through-lines,
i.e., $d_{2l}(L) = d'_{2l}(L)$ for $l \neq 0$.
The extension of these considerations to the two-colour loop model is
obvious, since the two loop colours behave independently. In particular
the total dimension in the sector with $(l_1,l_2)$ through-lines is the product of the dimensions for each of the colours:
\begin{equation}
d_{(2l_1,2l_2)}(L) = d_{2l_1}(L) \, d_{2l_2}(L) \,.
\end{equation}
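As a small sanity check (ours), these counting formulae are easily tabulated; at $L=4$ one recovers $d_0(4)=2$ and $d'_0(4)=6$, in agreement with the enumeration of marked states given above.
\begin{verbatim}
from math import comb

def d0(L):                   # original loop model, no through-lines (Catalan)
    return comb(L, L // 2) - comb(L, L // 2 + 1)

def d_enl(L, l):             # enlarged model / vertex model, 2l through-lines
    return comb(L, L // 2 + l)

L = 4
print(d0(L), d_enl(L, 0))    # 2 6 : the four extra states carry marked arcs
print(d_enl(L, 0) ** 2)      # 36 = dimension of the (0,0) two-colour sector
\end{verbatim}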
\section{Conformal spectrum of the $a_3^{(2)}$ model: Bethe Ansatz results}
\label{section:BAEa32}
Having cleared up the relationship between the two-colour dense loop model of \cite{FJ08} and the $a_3^{(2)}$ twisted vertex model, we now turn to the Bethe Ansatz study of the latter. The numerical study of the Bethe Ansatz equations allows us in particular to attain the eigenvalues in large finite size, and the close relationship with the $a_2^{(2)}$ model studied in \cite{VJS:a22} will then permit us to infer the conformal spectrum in the continuum limit.
Our Bethe Ansatz study of the $a_3^{(2)}$ model is part of a broader study of the $a_n^{(2)}$ models, which we plan to develop in a future publication \cite{VJS:an2}.
Each of these models comprises three regimes, denoted I, II and III, as already explained in the $a_2^{(2)}$ case in \cite{VJS:a22}. In the $a_3^{(2)}$ case, and referring to the isotropic models $\lambda = \lambda_\pm$ given by (\ref{lambdaisoRIII})-(\ref{lambdaisoRI}), these three regimes correspond to the following choice of the parameters:
\begin{itemize}
\item Regime I corresponds to the isotropic point $\lambda_-$ with $\gamma \in \left[0 , {\pi \over 2}\right]$.
\item Regime II corresponds to the isotropic point $\lambda_+$ with $\gamma \in \left[{\pi \over 4} , {\pi \over 2}\right]$.
\item Regime III corresponds to the isotropic point $\lambda_+$ with $\gamma \in \left[0 , {\pi \over 4}\right]$.
\end{itemize}
Only the regime III corresponds to the range of parameters describing the critical point (\ref{RJF}) for $2 < Q \le 4$ and we will therefore not discuss the regimes I and II any longer.
\subsection{Evidence for a non compact boson}
Before entering the details of our numerical results, let us summarise the main evidence for the presence of a non compact boson in the continuum limit of regime III. The first piece of evidence is related to the following two observations:
\begin{itemize}
\item In the periodic (untwisted) case, the conformal exponents associated with each level show a very slow convergence with the size $L$ of the system.
\item Turning on the twists $\phi_1$ and $\phi_2$, we observe changes of regimes for these exponents, beyond which the exponents are described by different analytical formulae, and the convergence issues observed in the small-twist regime disappear.
\end{itemize}
This is very reminiscent of the features observed in the regime III of the $a_2^{(2)}$ model \cite{Nienhuis},
which led us in \cite{VJS:a22} to associate the continuum limit of this model with Witten's Euclidean black hole
CFT \cite{BlackHoleCFT,Troost,RibSch} (which can also be considered as the coset $SL(2,\mathbb{R})_k/U(1)$).
It is therefore very tempting to interpret the continuum limit of the $a_3^{(2)}$ model in regime III as another non
compact CFT. A systematic study of this CFT will be postponed to a subsequent publication on the $a_n^{(2)}$
models for general $n$ \cite{VJS:an2}. Instead, we will here rely on the fact that the features described above,
which are highly unusual within the context of ordinary CFTs (for instance those described by a Coulomb gas for
bosons of compact radius), have a natural interpretation within the context of non compact, cigar-like CFTs.
To this end, it is useful to recall some basic features of the black hole CFT \cite{BlackHoleCFT,Troost,RibSch}.
It is written in terms of an action of
two fluctuating fields $r$ and $\theta$, on some target space with metric
\begin{equation}
ds^2={k\over 2} d\sigma^2,~d\sigma^2=(dr)^2+\tanh^2r (d\theta)^2 \,.
\end{equation}
%
This target space is associated to a two-dimensional surface in three dimensions with the rough shape of a cigar, hence the familiar name `cigar CFT'. More precisely, the target has rotational invariance around the $z$ axis, while the radius in the $x,y$ plane is given by $\tanh r$, where $r\geq 0$ denotes the geodesic distance from the origin.
The best way to understand the physics of this CFT is to study it within the minisuperspace approximation, that is, solve the Laplacian on the target \cite{RibSch}
\begin{equation}
\Delta=-{2\over k}\left[\partial_r^2+\left(\coth r+ \tanh r \right)\partial_r+\coth^2 r\,\partial_\theta^2\right] \label{LapTar}\,.
\end{equation}
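For the reader's convenience, we note where the first-order term in (\ref{LapTar}) comes from. Assuming the standard dilaton profile $\Phi=\Phi_0-\log\cosh r$ of the cigar background, the relevant measure is
\begin{equation}
e^{-2\Phi}\sqrt{g} \propto \cosh^2 r \, \tanh r = \sinh r \, \cosh r \,, \qquad
\frac{\partial_r\left(\sinh r \, \cosh r\right)}{\sinh r \, \cosh r} = \coth r + \tanh r \,,
\end{equation}
so that the dilaton-weighted Laplace--Beltrami operator indeed produces the drift term $\left(\coth r + \tanh r\right)\partial_r$ in (\ref{LapTar}).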
In this limit, there are no $L^2$-normalisable eigenfunctions. The whole spectrum is obtained from
$\delta$-function normalisable eigenfunctions, which depend on two parameters: one is $n\in \mathbb{Z}$,
the angular momentum of rotations around the axis, and the other, $J=-{1\over 2}+is$, is related to
the momentum $s \in \mathbb{R}$ along the $r$-direction of the cigar.
Each eigenfunction of the Laplacian lifts into a primary state in the CFT, and the corresponding Laplacian eigenvalues read
\begin{equation}
x=h+\bar{h}=-{2J(J+1)\over k}+{n^2\over 2k} \,.
\end{equation}
The relation between $J$ and $s$ is imposed by the normalisability.
In finite size, the existence of a continuum of primary fields corresponding to various values of $s$
is associated to towers of excited transfer matrix eigenstates indexed by an integer $j$ and with
conformal weights of the form
\begin{equation}
\Delta_j(L) \sim \mbox{compact part} + j^2 {\frac{A}{[B+\log L]^2}} \,.
\label{logarithmicscaling}
\end{equation}
There is thus a lattice regularisation of the momentum $s$, which also explains the slow convergence of the corresponding exponents.
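To illustrate how treacherous the scaling (\ref{logarithmicscaling}) is for numerical work, the following minimal Python sketch (ours, with purely illustrative constants that are not fitted values) generates exponent estimates obeying (\ref{logarithmicscaling}) and compares the correct logarithmic extrapolation with a naive polynomial one in $1/L$:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Illustrative constants only -- not values measured in this paper.
x_inf, A, B = 0.25, 3.0, 2.0
model = lambda L, x, a, b: x + a / (b + np.log(L)) ** 2

sizes = np.arange(8, 65, 4, dtype=float)
data = model(sizes, x_inf, A, B)        # synthetic finite-size estimates

popt, _ = curve_fit(model, sizes, data, p0=(0.3, 1.0, 1.0))
print("logarithmic fit:", popt[0])      # recovers ~0.25

coeff = np.polyfit(1.0 / sizes, data, 2)
print("naive 1/L fit  :", coeff[-1])    # constant term overshoots 0.25
\end{verbatim}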
Taking into account the ``stringy'' corrections requires considering non zero winding modes (indexed by the winding number $w$) of strings around the compact direction of the cigar, which could be implemented in the $a_2^{(2)}$ lattice model by varying the twist parameter $\phi$. A consequence of these corrections was shown in \cite{BlackHoleCFT, RibSch,Troost} to be that, on top of the continuum of normalisable states discussed
above, the theory also admits {\sl discrete states} which can be observed as an additional discrete set of conformal exponents popping out of the continuum beyond some particular value of the twist parameter, hence explaining the changes of regimes and the convergence improvements mentioned above.
Although the link between non compactness and discrete states was only worked out in detail for
the $SL(2,\mathbb{R})_k/U(1)$ case, we wish to sketch here that it has a quite general origin related to the
geometry of the target space, and hence could very probably generalise to other non compact CFTs.
Our sketch closely follows the arguments of \cite{BlackHoleCFT}. The primary fields are labeled by three quantum numbers, namely the momentum $n$ and winding number $w$ around the compact direction of the cigar, as well as the momentum $J$ (or $s$) in the non compact direction. Although $n$ and $w$ are treated on the
same footing by the CFT, they lead to very different sigma model descriptions, as for instance non zero values of $w$ are not accounted for in the minisuperspace approach. To describe non zero winding modes, one has to perform a duality transformation $(r,\theta) \to (r,\tilde{\theta})$ on the cigar target space, which is turned into a singular, trumpet-like geometry with metric
\begin{equation}
ds^2=(dr)^2+4\coth^2{r \over 2} (d\tilde{\theta})^2 \,.
\end{equation}
This singular new geometry allows for bound states, which are precisely the discrete states referred to above.
In conclusion, the non compactness is closely linked to the existence of discrete states. The non compactness
itself leads directly to the logarithmic scaling (\ref{logarithmicscaling}), which is hard to extract quantitatively
from finite-size numerical data, beyond the observation that the scaling dimensions converge very slowly.
However, the emergence of well-converged discrete states beyond certain values of the twist is quite easy
to detect numerically. Our claim that the $a_3^{(2)}$ model contains non compact features is therefore
based on the combined observation of slow convergence compatible with (\ref{logarithmicscaling}) for small
twists, and the emergence of well-converged
discrete states.
\subsection{The twisted Bethe Ansatz equations}
The Bethe ansatz equations for the purely periodic (untwisted) model --- i.e., with $\tau_{\mathcal{A}}= {\rm Id}_{4\times 4}$ in (\ref{T6GM}) --- were derived in \cite{GalleasMartins04}. They involve two different types of roots, which we denote $\lambda_i$ ($i=1,\ldots,m_1$) and $\mu_i$ ($i=1,\ldots,m_2$).
In appendix \ref{app:twistedBAE}, we revisit this derivation with two goals in mind:
\begin{itemize}
\item First, we wish to understand precisely how the numbers $m_1$ and $m_2$ of each kind of roots are related to the different sectors of fixed magnetisation in a system of size $L$.
\item Second, we need to slightly extend the analysis of \cite{GalleasMartins04}, since we are interested not only in the periodic case, but also in its generalisation to arbitrary twist angles $\phi_1$ and $\phi_2$.
\end{itemize}
As a result of this analysis, the numbers $m_1$ and $m_2$ are seen to be related to the magnetisations $S_1^{(z)}$ and $S_2^{(z)}$ by
\begin{eqnarray}
S_1^{(z)} &=& -{L \over 2} + m_1 - m_2 \,, \nonumber \\
S_2^{(z)} &=& -{L \over 2} + m_2 \,.
\label{main:relateSandm}
\end{eqnarray}
The twisted Bethe equations read \cite{MartinsNienhuis}
\begin{eqnarray}
\mathrm{e}^{2\mathrm{i}\phi_1}\left(\frac{\sinh (\lambda_i-\mathrm{i}\frac{\gamma}{2})}{\sinh (\lambda_i+\mathrm{i}\frac{\gamma}{2})} \right)^L &=&
\prod_{j=1, j\neq i}^{m_1} \frac{\sinh (\lambda_i-\lambda_j-\mathrm{i}\gamma)}{\sinh (\lambda_i-\lambda_j+\mathrm{i}\gamma)}
\prod_{k=1}^{m_2} \frac{\sinh (2(\lambda_i-\mu_k+\mathrm{i}\frac{\gamma}{2}))}{\sinh (2(\lambda_i-\mu_k-\mathrm{i}\frac{\gamma}{2}))} \nonumber \\
\mathrm{e}^{2\mathrm{i}\left(\phi_1 - \phi_2\right)} \prod_{j=1}^{m_1} \frac{\sinh (2(\mu_k-\lambda_j-\mathrm{i}\frac{\gamma}{2}))}{\sinh (2(\mu_k-\lambda_j+\mathrm{i}\frac{\gamma}{2}))} &=&
\prod_{l=1, l\neq k}^{m_2} \frac{\sinh (2(\mu_k-\mu_l-\mathrm{i}\gamma))}{\sinh (2(\mu_k-\mu_l+\mathrm{i}\gamma))} \,,
\end{eqnarray}
and the corresponding eigenvalues, in the notations of \cite{GalleasMartins04}, are
\begin{eqnarray}
\Lambda^{(4)}(\lambda) &=& \mathrm{e}^{\mathrm{i} \left( \phi_1 + \phi_2 \right)} \left[a_{1}(\lambda)\right]^L \frac{Q_1\left( \lambda + \mathrm{i}\frac{\gamma}{2} \right)}{Q_1\left( \lambda - \mathrm{i}\frac{\gamma}{2} \right)}
+ \mathrm{e}^{\mathrm{i} \left( -\phi_1 - \phi_2 \right)} \left[d_{4,4}(\lambda)\right]^L \frac{Q_1\left( \lambda - \mathrm{i}\frac{5\gamma}{2} + \mathrm{i}\frac{\pi}{2} \right)}{Q_1\left( \lambda - \mathrm{i}\frac{3\gamma}{2} + \mathrm{i}\frac{\pi}{2} \right)} \nonumber \\
& & + \left[b(\lambda)\right]^L \left( \mathrm{e}^{\mathrm{i} \left( \phi_1 - \phi_2 \right)} G_1\left(\lambda \right) + \mathrm{e}^{\mathrm{i} \left(- \phi_1 + \phi_2 \right)} G_2\left(\lambda \right) \right) \,,
\end{eqnarray}
where $Q_1(\lambda) \equiv \prod_{i=1}^{m_1} \sinh\left(\lambda - \lambda_i\right)$ and $Q_2(\lambda) \equiv \prod_{i=1}^{m_2} \sinh\left(\lambda - \mu_i\right)$.
As usual in integrable systems, the Hamiltonian of the corresponding 1D chain can be obtained from the transfer matrix by taking the very anisotropic limit,
\begin{equation}
H^{(L)} = \mp\left.\frac{\mathrm{d}}{\mathrm{d}\lambda}\log T^{(L)}(\lambda) \right|_{\lambda=0} \,,
\end{equation}
which allows us to rewrite its eigenvalues in terms of the Bethe roots
\begin{equation}
E = \pm\sum_{i=1}^{m_1} \frac{2 \sin \gamma}{2\cosh 2\lambda_i - \cos\gamma} \,.
\label{Energy}
\end{equation}
The two possible signs for the eigenenergies $E$ of the quantum Hamiltonian $H^{(L)}$ produce two different regimes for the low-lying excitations.
In terms of the transfer matrix eigenvalues, the plus sign in (\ref{Energy}) has its low-lying spectrum corresponding to the leading eigenvalues at the isotropic point (\ref{lambdaisoRIII}) while the minus sign corresponds to the leading eigenvalues at (\ref{lambdaisoRI}). This explains the subscripts of $\lambda_\pm$ used in (\ref{lambdaisoRIII})--(\ref{lambdaisoRI}).
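These equations are straightforward to implement numerically. The sketch below (ours; names and packing conventions are our own, and convergence to a chosen state of course requires initial guesses built from the string configurations discussed in the next subsection) sets up the residuals to be handed to a standard root finder, together with the energy (\ref{Energy}):
\begin{verbatim}
import numpy as np

def bae_residuals(roots, L, m1, m2, gamma, phi1, phi2):
    """Residuals (LHS - RHS) of the twisted Bethe equations; the m1
    lambda-roots and m2 mu-roots are packed as [Re..., Im...]."""
    z = roots[: m1 + m2] + 1j * roots[m1 + m2 :]
    lam, mu = z[:m1], z[m1:]
    g = 1j * gamma / 2
    res = np.empty(m1 + m2, dtype=complex)
    for i in range(m1):
        lhs = np.exp(2j * phi1) * (np.sinh(lam[i] - g) /
                                   np.sinh(lam[i] + g)) ** L
        rhs = np.prod([np.sinh(lam[i] - lam[j] - 2 * g) /
                       np.sinh(lam[i] - lam[j] + 2 * g)
                       for j in range(m1) if j != i])
        rhs *= np.prod(np.sinh(2 * (lam[i] - mu + g)) /
                       np.sinh(2 * (lam[i] - mu - g)))
        res[i] = lhs - rhs
    for k in range(m2):
        lhs = np.exp(2j * (phi1 - phi2)) * np.prod(
            np.sinh(2 * (mu[k] - lam - g)) / np.sinh(2 * (mu[k] - lam + g)))
        rhs = np.prod([np.sinh(2 * (mu[k] - mu[l] - 2 * g)) /
                       np.sinh(2 * (mu[k] - mu[l] + 2 * g))
                       for l in range(m2) if l != k])
        res[m1 + k] = lhs - rhs
    return np.concatenate([res.real, res.imag])

def energy(lam, gamma):
    """Eigenenergy of the quantum chain, with the plus sign (regime III)."""
    return np.sum(2 * np.sin(gamma) /
                  (2 * np.cosh(2 * lam) - np.cos(gamma))).real

# e.g.: scipy.optimize.root(bae_residuals, x0,
#                           args=(L, m1, m2, gamma, phi1, phi2))
\end{verbatim}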
\subsection{Low-lying spectrum at zero twist in regime III}
We first consider the conformal spectrum in the untwisted case, before turning on the twist in
the following section.
\subsubsection{Classification of the low-lying excitations}
The regime III, in which lies the critical point (\ref{RJF}) for $k>2$, corresponds to the plus sign in the definition of the energy (\ref{Energy}) and the isotropic point (\ref{lambdaisoRIII}), and to $\gamma \in \left[ 0, {\pi \over 4} \right]$.
In this regime we found (numerically, by comparison with exact diagonalisation of the transfer matrix and use of the McCoy method \cite{VJS:a22} for sizes up to $L=12$) that the ground state is described by a sea of ${L \over 2}$ 2-strings (pairs of conjugate roots) of $\lambda$ roots, with imaginary parts close to $\pm \left({\pi \over 4} - {\gamma \over 2}\right)$, together with a sea of $\mu$ roots with imaginary part precisely $\pi \over 4$. These are represented for $L=16$ in figure \ref{fig:E000g030L16}.
\begin{figure}
\begin{center}
\includegraphics[width=90mm,height=70mm]{./E000_rootsL16g030.pdf}
\end{center}
\caption{Configuration of the $\lambda$ (in blue) and $\mu$ (in purple) roots corresponding to the ground state of regime III in the $n_1 = n_2=0$ sector, at $\gamma = \frac{3}{10}$ and for a system size $L=16$. We also plotted the line of imaginary part $\left({\pi \over 4} - {\gamma \over 2}\right)$, for comparison.}
\label{fig:E000g030L16}
\end{figure}
We now describe the classification of the low-lying excitations with respect to this ground state.
First, we point out that in the untwisted, periodic case, the transfer matrix commutes with the momentum operator $P^{(L)} = T^{(L)}(0)$, which acts on the states as a unit translation. The eigenstates can therefore be classified according to their momentum eigenvalue, defined modulo $L$. As usual in such systems, the ground state and lowest-lying levels in each sector of given magnetisation have zero momentum, i.e., they are translationally invariant.
We will restrict to such states in this discussion, and refer to our general work on $a_n^{(2)}$ \cite{VJS:an2} for the conformal weights associated with states of non zero momenta.
The zero momentum excitations can be labeled by three integers, $(n_1, n_2, j)$. The first two correspond respectively to the magnetisations $S_{1}^{(z)}$ and $S_2^{(z)}$, and the last one is an extra index labeling the level of different excitations in a given magnetisation sector, in a sense that we shall make precise now.
\begin{paragraph}{Excitations in the $n_1 = n_2 = 0$ sector.}
All these excitations have $m_1 = L$ and $m_2 = {L \over 2}$.
The excitation $(0,0,j)$, $j=1,2,\ldots$ is obtained from the ground state ($j=0$) roots configuration by replacing $j$ 2-strings of $\lambda$-roots by the same number of antistrings, that is, of pairs of anticonjugate ($\equiv$ having opposite real parts) roots with imaginary part $\pi \over 2$.
\end{paragraph}
\begin{paragraph}{Ground states and excitations in the other sectors.}
For general $(n_1 , n_2)$ we now have $m_1 = L - n_1- n_2$ and $m_2 = {L \over 2}-n_2$, by (\ref{main:relateSandm}) and after an immaterial sign change of the magnetisations.
More precisely, the roots configurations corresponding to the ground states in these sectors involve
$m_2 = {L \over 2}-n_2$ $\mu$-roots with imaginary part ${\pi \over 4}$, the same number of
2-strings for the $\lambda$-roots, whereas the remaining $\lambda$ roots align on the axis of imaginary part $\pi \over 2$ (see appendix \ref{app:RootsConfigs}).
Just as in the $(n_1,n_2)=(0,0)$ sector, the $j$th excited state is obtained
by replacing $j$ 2-strings by antistrings of imaginary part $\pi \over 2$.
\end{paragraph}
\subsubsection{Conformal spectrum of the untwisted chain}
It is well-known from conformal field theory that the scaling of the energies with the size $L$ allows one to extract the conformal spectrum.
The finite-size scaling of the ground state energy yields the central charge,
\begin{equation}
E_{0,0,0}(L) = E_{\infty} - v_{\rm F} \frac{\pi c}{6 L^2} + O\left(1 \over L^4 \right) \,,
\end{equation}
whereas the scaling of the gap between the ground state and the different excited levels yields the corresponding conformal weights $x_{n_1,n_2,j} = \Delta_{n_1,n_2,j} + \bar{\Delta}_{n_1,n_2,j}$
via
\begin{equation}
E_{n_1,n_2,j}(L) - E_{0,0,0}(L) = v_{\rm F} \frac{2 \pi x_{n_1,n_2,j}}{L^2} + O\left(1 \over L^4 \right) \,.
\end{equation}
In these two formulae $v_{\rm F}$ is the Fermi velocity, which can be found from the scattering equations in the continuum limit to be
\begin{equation}
v_{\rm F} (\gamma) = {\pi \over \pi - 4 \gamma} \,.
\end{equation}
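In practice we invert these relations to obtain finite-size estimates; a minimal sketch (ours), assuming per-site energies, for instance the Bethe-root energies divided by $L$, and an estimate of the bulk energy $E_\infty$:
\begin{verbatim}
import numpy as np

v_F = lambda gamma: np.pi / (np.pi - 4.0 * gamma)  # Fermi velocity, regime III

def c_estimate(E0, L, gamma, E_inf):
    """Central-charge estimate from the per-site ground-state energy."""
    return (E_inf - E0) * 6.0 * L**2 / (np.pi * v_F(gamma))

def x_estimate(E_exc, E0, L, gamma):
    """Conformal-weight estimate from the finite-size gap."""
    return (E_exc - E0) * L**2 / (2.0 * np.pi * v_F(gamma))
\end{verbatim}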
In the sequel it will turn out convenient to work with the effective central charges, rather than with the conformal weights, associated with each level:
\begin{equation}
c_{n_1,n_2,j} \equiv c - 12 x_{n_1,n_2,j} \,,
\end{equation}
where we have set $x_{0,0,0}=0$ for the ground state.
Similarly to what was observed in regime III of the $a_2^{(2)}$ model \cite{VJS:a22}, we find here
\begin{equation}
-{c_{n_1,n_2,j} \over 12} = x_{n_1,n_2,j} - {c \over 12} = \frac{\gamma}{2\pi}\left( n_1 + n_2 \right)^2 + \left(N_{n_1,n_2,j}\right)^2 \frac{A(\gamma)}{\left[B_{n_1,n_2,j}(\gamma) + \log L \right]^2} \,,
\label{eq:ciuntwisted}
\end{equation}
with quite strong numerical support for the following conjectures:
\begin{eqnarray}
A(\gamma) &=& 10 {\gamma (\pi - \gamma) \over (\pi - 4 \gamma)^2} \,, \\
N_{n_1,n_2,j} &=& 1 + \left[\mbox{number of $\lambda$-roots with imaginary part $\pi \over 2$}\right] \,.
\label{ceffnontwisted}
\end{eqnarray}
The numerical support for the functional dependence of $A(\gamma)$ on $\gamma$ is very strong,
whereas the determination of the proportionality factor $10$ has more moderate support.
The precise determination of the $B_{n_1,n_2,j}(\gamma)$ functions was however beyond the scope of our numerical accuracy, and progress would presumably require solving the non-linear integral equations (NLIE), e.g., along the lines of \cite{CanduIkhlef} for a cognate but simpler model.
The interpretation of (\ref{eq:ciuntwisted}) is similar to that made in \cite{VJS:a22}: the last term on the right-hand side accounts for a continuous degree of freedom in the continuum limit, or in other terms for a non compact direction in the target space of the corresponding field theory. In other words, the
continuum limit of the $a_3^{(2)}$ model consists of two compact bosons (corresponding to each of the two magnetisations $n_i$, i.e., originating from each of the two Potts models) and one non compact boson (corresponding to the quantum number $j$, i.e., emerging from the non-trivial coupling of the two models).
\subsection{General twist angles}
In order to make the connection with the loop formulation, we now wish to study what happens to the conformal spectrum when the twist angles $\phi_1$ and $\phi_2$ are given non-zero values.
We recall that the equivalence between the $a_3^{(2)}$ vertex model and the two-colour loop
model requires $(\phi_1,\phi_2)$ to take the particular sector-dependent values given in
section~\ref{section:choiceoftwists}. Before specialising to that case we shall however study
the conformal weights of the $a_3^{(2)}$ model for general twists.
Turning on a twist involves --- not unexpectedly --- qualitative changes in the roots configuration describing the excitations $(n_1, n_2,j)$ (some of the corresponding roots patterns are described in appendix \ref{app:RootsConfigs}). What is less usual, however, is that we also observe changes of regimes in the central charge and conformal weights. These changes of regimes, which are not crossovers and also do not necessarily coincide with the changes of regimes in the roots configurations, are very similar to what we observed in the $a_2^{(2)}$ case \cite{VJS:a22} and can be explained in terms of the so-called `discrete states' \cite{BlackHoleCFT,Troost,RibSch}.
\subsubsection{Twist and discrete states: a review of the $a_2^{(2)}$ case}
Before detailing our results on the effective central charges, we briefly review a few relevant results \cite{VJS:a22} on the closely related $a_2^{(2)}$ model.
The $a_2^{(2)}$ model is also defined in terms of a parameter $\gamma$, but regime III corresponds to the range $\gamma \in [0,\frac{\pi}{3}]$ in that case. Its continuum limit is that of one (not two) compact boson and one non compact boson. Accordingly,
the labelling of the low-lying excitations involves only two integers, $n$ and $j$, where $n$ corresponds to the magnetisation. Similarly the twist is defined in terms of just one angle $\phi$. We found in \cite{VJS:a22} that at $\phi=0$ and in the continuum limit the conformal weights related to the states $j=0,1,\ldots$ and fixed $n$ form a continuum, related to a non compact direction of the corresponding sigma model.
As already observed by Nienhuis {\em et al.} \cite{Nienhuis}, turning on the twist brings along a change of regime for the central charge at $\phi = \gamma$, so that $c$ is described by two different analytical expressions that are tangent at $\phi=\gamma$. The analytical expression for $c$ with $\phi \geq \gamma$ is the largest for all values of $\phi$, but yet the corresponding state is not observed as the ground state in the regime of small twist $\phi \in [0,\gamma]$. The field-theoretical explanation of this fact is that the corresponding state is non normalisable for $\phi \in [0,\gamma]$. We interpret this physically as a {\em discrete state} that pops out of the continuum (and becomes normalisable) at $\phi = \gamma$.
Similar phenomena hold true for all the states $(n,j)$, resulting in a set of discrete states that one after the other pop out of the continuum when the twist angle $\phi$ passes through appropriate discrete values. The maximum number of discrete states depends on the value of $\gamma$. As $\phi$ approaches $2\pi$ the opposite phenomenon occurs: one after the other the discrete states reintegrate the continuum (and become non normalisable again). These processes are illustrated in Figure~5 of \cite{VJS:a22}.
We shall now see that the same type of processes occur in the $a_3^{(2)}$ case, and will trust our understanding of the $a_{2}^{(2)}$ case to give these processes a similar interpretation. As usual, we defer the field theoretical description to a subsequent publication \cite{VJS:an2}.
\subsubsection{Conformal weights}
Taking several proportionality constants $\alpha$ between $\phi_1$ and $\phi_2$, we increased both twists at fixed $\alpha$, allowing us to conjecture the full $(\phi_1,\phi_2)$-dependence of the central charges $c_{n_1, n_2, j}$. The support for these conjectures is provided both by the numerical solution of the Bethe Ansatz equations themselves and by the formal similarities with the extensively studied $a_2^{(2)}$ case \cite{VJS:a22}.
Note that all eigenvalues, hence all central charges and conformal weights, are even functions of $\phi_1$ and $\phi_2$, allowing us here to consider only $\phi_1, \phi_2 \geq 0$. In the formulae that follow, it is therefore understood that $\phi_1$ and $\phi_2$ are just short-hand notations for $|\phi_1|$ and $|\phi_2|$.
As an example of the numerical accuracy of the results we show in figure \ref{fig:c100_twist09} the central charge $c_{1,0,0}$ measured at $\gamma=\frac{3}{10}$ for $L=16$ and $20$.
Note in particular that for small twist (here $\phi_1 < \frac{20}{19} \gamma$; see below) the convergence to the first analytical expression --- corresponding to the continuous part of the spectrum --- is very slow, whereas for larger twist one observes a convergence to the second analytical expression --- corresponding to a discrete state --- that is fast, and similar to what is usually
observed for models with compact continuum limits.
\begin{figure}
\begin{center}
\includegraphics[width=100mm,height=80mm]{./sl4_c100_twist09.pdf}
\end{center}
\caption{Effective central charge in the sector $(n_1,n_2) = (1,0)$ for $\gamma=\frac{3}{10}$, as a function of $\phi_1$ (with $\phi_2 = \frac{9}{10} \phi_1$). The solid lines are the conjectured expressions.
}
\label{fig:c100_twist09}
\end{figure}
We however first turn to the excited states in the $(n_1,n_2) = (0,0)$ sector, where our numerical
results lead us to the following conjecture
\begin{equation}
c_{0,0,j} = \begin{cases} 3 - 6 \frac{\left(\phi_1\right)^2 + \left(\phi_2\right)^2}{\pi \gamma} + o(1) & \mbox{for } \phi_1 + \phi_2 \leq (2 j + 1)\gamma \,,
\\ 3 - 6 \frac{\left(\phi_1\right)^2 + \left(\phi_2\right)^2}{\pi \gamma} + 3 \frac{\left(\phi_1 + \phi_2-(2 j + 1)\gamma\right)^2}{\gamma(\pi - \gamma)} & \mbox{for } \phi_1 + \phi_2 \geq (2 j + 1)\gamma \,.
\end{cases}
\label{c00iconj}
\end{equation}
In the first expression we denoted by $o(1)$ the contribution to the central charges of the non-compact degree of freedom, vanishing as $\left( \log L \right)^{-2}$, cf.~(\ref{eq:ciuntwisted}).
We emphasise that in the second expression this logarithmic term has disappeared, reflecting the fact that the corresponding state has detached from the continuum and is now a proper discrete state.
This is summed up in figure \ref{fig:c00itwisted}, where we schematically represented the central charges $c_{0,0,j}$ and the corresponding continuum.
In the remainder of this paper we shall use the symbol $o(1)$ with this same meaning, namely
indicating logarithmic corrections due to the non compact boson.
\begin{figure}
\begin{center}
\includegraphics[width=100mm,height=80mm]{./Discretestates_c00i.pdf}
\end{center}
\caption{Effective central charges for the first excited states in the sector $n_1 = n_2 = 0$ as a function of the twist angles, for fixed $\gamma$.
The shaded zone is the continuum, and we represent by dashed curves the analytic continuations of the discrete states central charges $c_{0,0,j}$ to the domain $\phi_1 + \phi_2 \leq (2j+1)\gamma$, where the corresponding states are not normalisable.
}
\label{fig:c00itwisted}
\end{figure}
Now going back to the ground states in the different magnetisation sectors, we were led to the following conjectures
\begin{equation}
c_{n_1,n_2,0} = \begin{cases} 3- 6 {\left( \left(n_1\right)^2 + \left(n_2\right)^2 \right)\gamma \over \pi} - 6 \frac{\left(\phi_1\right)^2 + \left(\phi_2\right)^2}{\pi \gamma} + o(1) & \mbox{for } \phi_1 + \phi_2 \leq \left(|n_1|+|n_2|+1\right)\gamma \,,
\\ 3 - 6 {\left( \left(n_1\right)^2 + \left(n_2\right)^2 \right)\gamma \over \pi} - 6 \frac{\left(\phi_1\right)^2 + \left(\phi_2\right)^2}{\pi \gamma} + 3 \frac{\left(\phi_1 + \phi_2-\left(|n_1|+|n_2|+1\right)\gamma\right)^2}{\gamma(\pi - \gamma)} & \mbox{for } \phi_1 + \phi_2 \geq \left(|n_1|+|n_2|+1\right)\gamma \,.
\end{cases}
\label{cnn0conj}
\end{equation}
We further conjecture that the general formula for $c_{n_1, n_2, j}$, that contains (\ref{c00iconj})--(\ref{cnn0conj}) as special cases, should be
\begin{equation}
c_{n_1,n_2,j} = \begin{cases} c^*_{n_1,n_2} + o(1) & \mbox{for } \phi_1 + \phi_2 \leq \left(|n_1|+|n_2|+2j+1\right)\gamma \,,
\\ c^*_{n_1,n_2} + 3 \frac{\left(\phi_1 + \phi_2-\left(|n_1|+|n_2|+2j+1\right)\gamma\right)^2}{\gamma(\pi - \gamma)} & \mbox{for } \phi_1 + \phi_2 \geq \left(|n_1|+|n_2|+2j+1\right)\gamma \,,
\end{cases}
\label{cnnjconj}
\end{equation}
where we have defined
\begin{equation}
c^*_{n_1,n_2} = 3- 6 {\left( \left(n_1\right)^2 + \left(n_2\right)^2 \right)\gamma \over \pi} - 6 \frac{\left(\phi_1\right)^2 + \left(\phi_2\right)^2}{\pi \gamma} \,.
\label{cnnstar}
\end{equation}
\section{Conformal spectrum of the dense two-colour loop model}
\label{section:confloop}
By now, we have completed (up to a subtlety that will be explained shortly) our understanding of the $a_{3}^{(2)}$ conformal spectrum in both the twisted and untwisted cases. We are thus ready to come back to the loop model and its exponents.
Magnetic excitations in loop models are described by the so-called watermelon excitations,
corresponding to imposing a fixed number of through-lines. In the context of the two-colour loop model of
interest here, the watermelon operators were defined in section 3.2 of \cite{FJ08}, as operators ${\cal O}_{l_1,l_2}$ that act as sources (or sinks) for a given number $(l_1,l_2)$ of through-lines of each loop colour. The watermelon exponents $x_{l_1,l_2}$ are the critical exponents governing the asymptotic decay of the two point correlation functions,
\begin{equation}
\langle {\cal O}_{l_1,l_2}({\bf x}_1) {\cal O}_{l_1,l_2}({\bf x}_2) \rangle \sim
\frac{1}{|{\bf x}_1 - {\bf x}_2|^{2 x_{l_1,l_2}}} \,.
\end{equation}
Although energy-type excitations are also
of interest, we limit ourselves to the study of the watermelon exponents in what follows.
The central charge and first few watermelon exponents of what we refer to as the original loop model (see section~\ref{sec:enlarged}) were measured numerically in \cite{FJ08}.
These authors worked by numerically diagonalising the transfer matrix for sizes up to $L=16$,
in several different sectors $(l_1,l_2)$, but failed to obtain a general formula for the watermelon exponents. Our goal in this final part is to use the results of section \ref{section:BAEa32} to obtain this general expression.
With hindsight, we can state that the difficulties encountered in \cite{FJ08} are
due to the very particular (logarithmic) finite-size behaviour of the scaling levels entailed by the
non compact boson. Section~\ref{FJcomp} contains an {\em a posteriori} comparison of
our numerical analysis with the one made in \cite{FJ08}.
The watermelon exponent $x_{l_1,l_2}$ associated with the operator ${\cal O}_{l_1,l_2}$ inserting $l_1$ blue lines and $l_2$ red lines can be expressed as
\begin{equation}
x_{l_1,l_2} = \frac{c - c_{l_1,l_2}}{12} \,,
\label{linkxtoc}
\end{equation}
where $c_{l_1,l_2}$ is the effective central charge in the loop model's $(l_1, l_2)$-legs sector, and $c$ the central charge. We have seen that the loop model can only be interpreted as a Potts model when $L$ is even. Since $(l_1,l_2)$ must necessarily have the same parity as $L$, we shall henceforth set $l_1 = 2 n_1$ and $l_2 = 2 n_2$. As far as the two-coloured loop model is concerned, one could also consider the ``twisted sector'' of odd $L$, and hence odd values of $l_1$ and $l_2$, but we shall refrain from doing so.
To obtain the loop model's central charge and watermelon exponents, the first thing we need to do is therefore to identify the ground states in the different sectors within the bigger spectrum of the extended loop (twisted vertex) model, which is the one we understand from the Bethe Ansatz.
The two models' (isotropic) transfer matrices can be implemented numerically, and their eigenvalues obtained by direct diagonalisation. The general conclusions we draw from the analysis of these eigenvalues are the following:
\begin{itemize}
\item In sectors with $(0,0)$ legs or with $(l_1\neq 0,l_2 \neq 0)$ legs, the ground states of the two models coincide all through regime III.
\item In sectors with $(l_1\neq 0,0)$ or $(0,l_2 \neq 0)$ legs, the two ground states do not necessarily coincide. Moreover we observe crossovers. We will make this observation more precise in the following (see section \ref{Section:xl10}).
\end{itemize}
Note that the way in which we have defined the excited states $j \neq 0$ for each sector in the vertex model entails that these are also present in the original loop model. In other words, the vertex model obviously contains more states than the original loop model, but those are not of the form $(n_1,n_2,j)$.
In any case, since the calculation of the central charge and watermelon exponents only involves the ground states in each sector we do not need to consider the excited states any further.
\subsection{Central charge and first set of watermelon exponents}
\label{section:cxnn}
According to the above discussion, we first focus on the sectors with $(0,0)$ legs or with $(l_1\neq 0,l_2 \neq 0)$ legs, that is the sectors $n_1 = n_2 = 0$ and $(n_1\neq 0,n_2 \neq 0)$ in the vertex model, for which the ground states of the original loop model and that of the (appropriately twisted) vertex model coincide.
We start by considering the central charge $c$, associated with the ground state in the sector $n_1 = n_2 = 0$. As explained in section \ref{section:choiceoftwists}, it is obtained by imposing the twists $\phi_1 = \phi_2 = \gamma = \frac{\pi}{k+2}$ on the vertex model, yielding
\begin{equation}
c = c_{0,0,0}(\gamma,\gamma) = 3 - 12{\gamma \over \pi} + 3 {\gamma \over \pi - \gamma} = {3 k^2 \over (k+1)(k+2)} \,,
\label{eq:measurec}
\end{equation}
which is exactly the expression (\ref{cJF}).
We now turn to the watermelon exponents $x_{l_1\neq 0, l_2 \neq 0}$.
From all that precedes, and in particular from the discussion in section \ref{section:choiceoftwists}, we know that the effective central charge of the loop model, in the sector $l_1=2n_1\neq 0, l_2=2n_2 \neq 0$, is obtained as
\begin{equation}
c_{l_1,l_2} = c_{2 n_1,2 n_2} = c_{n_1,n_2,0}(0,0) = 3 - 6 \frac{\left( \left(n_1\right)^2 + \left(n_2\right)^2 \right)\gamma}{\pi} \,,
\end{equation}
and by (\ref{linkxtoc}) the corresponding watermelon exponent is
\begin{eqnarray}
x_{2 n_1, 2 n_2} &=& \frac{\left(n_1\right)^2 + \left(n_2\right)^2-2}{2}\frac{\gamma}{\pi} + {\gamma \over 4(\pi - \gamma)} \nonumber \\
&=& \frac{\left(n_1\right)^2 + \left(n_2\right)^2-2}{2}\frac{1}{k+2} + {1 \over 4(k +1)} \,.
\label{watermelon_both_nonzero}
\end{eqnarray}
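The numerical values of (\ref{watermelon_both_nonzero}) quoted in Table \ref{tab:watermelon} below are reproduced by the following short sketch (ours), using exact rational arithmetic:
\begin{verbatim}
from fractions import Fraction

def x_watermelon(n1, n2, k):
    """Watermelon exponent x_{2 n1, 2 n2} for n1, n2 nonzero."""
    return (Fraction(n1**2 + n2**2 - 2, 2 * (k + 2))
            + Fraction(1, 4 * (k + 1)))

for n1, n2 in [(1, 1), (2, 1), (2, 2), (3, 1), (3, 2), (3, 3)]:
    print((2 * n1, 2 * n2),
          [float(x_watermelon(n1, n2, k)) for k in (3, 4, 5)])
\end{verbatim}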
\subsection{Comparison with the analysis of Ref.~\cite{FJ08}}
\label{FJcomp}
\begin{table}
\begin{center}
\begin{tabular}{l|lll}
Exponent & $k=3$ & $k=4$ & $k=5$ \\ \hline
$x_{2,2}$ &0.0625& 0.05& 0.0416667 \\
$x_{4,2}$ &0.3625& 0.3& 0.255952 \\
$x_{4,4}$ &0.6625& 0.55& 0.470238 \\
$x_{6,2}$ & 0.8625& 0.716667& 0.613095 \\
$x_{6,4}$ &1.1625& 0.966667& 0.827381 \\
$x_{6,6}$ &1.6625& 1.38333& 1.18452 \\
\end{tabular}
\end{center}
\caption{Numerical values of the watermelon exponent (\ref{watermelon_both_nonzero}).}
\label{tab:watermelon}
\end{table}
In Table \ref{tab:watermelon} we give some numerical values corresponding to (\ref{watermelon_both_nonzero}). This can be directly compared with
Table 1 of \cite{FJ08} that reports the $L \to \infty$ extrapolated results (with indicative error bars)
based on transfer matrix diagonalisations for systems of size $L \le 16$.
The disagreement is in most cases quite spectacular, the discrepancy with the exact results
of our Table~\ref{tab:watermelon} often being more than 50 times the perceived error bar
in \cite{FJ08}.%
\footnote{In particular, our results $x_{2,2} = \frac{1}{4(k+1)}$ and
$x_{4,2} = \frac{8+7k}{4(k+1)(k+2)}$
invalidate the conjectures $\frac{4}{(k+1)(k+2)}$ and $\frac{16}{(k+1)(k+2)}$ proposed in \cite{FJ08}.}
The reason for this is obviously the very slow convergence of $c_{n_1,n_2,0}(0,0)$ due to the presence of the logarithmic term at zero twist (\ref{eq:ciuntwisted}), a phenomenon that the authors
of \cite{FJ08} had clearly no reason to suspect.
\begin{figure}
\begin{center}
\includegraphics[width=150mm,height=120mm]{./x22scaling.pdf}
\end{center}
\caption{Finite-size estimates of the exponent $x_{2,2}$ for $k=3$, plotted against $1/L$.
The red points are the data for $L \leq 16$ that were found in \cite{FJ08} by direct
diagonalisation of the transfer matrix. The blue points, extending this to $L \le 90$, were
obtained here by numerical solution of the Bethe Ansatz equations. The extrapolation
to $L \to \infty$ that led \cite{FJ08} to the erroneous value $x_{2,2} = 0.200(2)$ is shown
as a red dashed curve. Our present extrapolation (blue dashed curves) takes into
account logarithmic corrections to scaling and leads to $x_{2,2} = \frac{1}{16}$.
See the main text for details.}
\label{fig:x22scaling}
\end{figure}
Consider as an example the exponent $x_{2,2}$ for $k=3$. The finite-size estimates for $L \le 90$ are
plotted against $1/L$ in figure~\ref{fig:x22scaling}. Using direct diagonalisation of the transfer
matrix, the authors of \cite{FJ08} had however
only access to the range $L \le 16$, shown as red points in the figure. It appeared quite
reasonable to extrapolate to the $L \to \infty$ limit by fitting the last few points to a
second order polynomial in $1/L$. This technique usually gives very accurate results
--- even in difficult situations \cite{VJ_FK_spin} --- and provides what appears to be
a quite reasonable fit to the red data points. It leads to the estimate $x_{2,2} = 0.200(2)$
given in Ref.~\cite{FJ08}.
The data for larger sizes (blue points) however makes it evident that this extrapolation
is incorrect. A much better fit is obtained by taking into account logarithmic corrections
as in (\ref{eq:ciuntwisted}). The blue dashed curve shows the form
$x_{2,2}(L) = \frac{1}{16} + \frac{A}{\left(B+\log L\right)^2}$, with
$A \simeq 28.95$ and $B \simeq 10.05$, in agreement with the asymptotic value
$x_{2,2} = \frac{1}{4(k+1)} = \frac{1}{16}$ of (\ref{watermelon_both_nonzero}).
Alternatively one may proceed without fixing $x_{2,2} = \frac{1}{16}$ as follows.
Using three successive sizes $(L,L+2,L+4)$ we first fit to the form
$x_{2,2}(L) = x_{2,2}^{\rm ext}(L) + \frac{A}{\left(B+\log L\right)^2}$
to obtain a series of extrapolants $x_{2,2}^{\rm ext}(L)$. Plotting those against
$1/L$ we observe a residual finite-size dependence which is almost linear for $L \ge 50$.
Fitting therefore $x_{2,2}^{\rm ext}(L) = x_{2,2} + C/L$ we obtain finally $x_{2,2} = 0.071(6)$. This is again
in good agreement with the proposed exact value $x_{2,2} = \frac{1}{16}$.
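The three-point fits just described are elementary to implement; a sketch (ours; as for any nonlinear solve, reasonable starting values are needed):
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def three_point_extrapolant(Ls, xs):
    """Solve x(L) = x_ext + A/(B + log L)^2 for (x_ext, A, B), given the
    estimates xs at three successive sizes Ls = (L, L+2, L+4)."""
    eqs = lambda p: [p[0] + p[1] / (p[2] + np.log(L)) ** 2 - x
                     for L, x in zip(Ls, xs)]
    x_ext, A, B = fsolve(eqs, (xs[-1], 1.0, 1.0))
    return x_ext

# The residual drift of the extrapolants is then removed by a linear
# fit in 1/L:  np.polyfit(1.0/np.array(L_list), np.array(exts), 1)[-1]
\end{verbatim}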
We should stress that the conjecture (\ref{watermelon_both_nonzero}) for the watermelon
exponents ultimately comes from (\ref{cnn0conj}) depending on both twist angles.
Obviously the determination of this function of two parameters is much less sensitive to
numerical error bars than is its specialisation to particular values of $(\phi_1,\phi_2)$.
What is decisive in this problem, however, is that in a whole range of $(\phi_1,\phi_2)$ the corresponding state becomes discrete, so that the error bars are intrinsically small. The unambiguous determination of the effective central charge in this region then makes it easy to conjecture its expression in the continuum limit.
We note in passing that the central charge (\ref{eq:measurec}) involves the state $(n_1,n_2,j)=(0,0,0)$
for which the twist $\phi_1 + \phi_2 = 2 \gamma$ is greater than the threshold $\gamma$ given in
(\ref{c00iconj})--(\ref{cnn0conj}), meaning that the corresponding state is discrete. Logarithmic terms
are therefore absent, explaining why the central charge could be measured precisely from finite size scaling in \cite{FJ08} (see e.g.\ Figure 5 in that reference).
\subsection{The watermelon exponents $x_{l_1,0}$}
\label{Section:xl10}
Turning to the watermelon exponents $x_{l_1 \neq 0 , 0}$ (or equivalently $x_{0, l_2 \neq 0}$), special care has to be taken to locate the central charge $c_{l_1,0}$ of the original loop model within the spectrum of the vertex model. According to section \ref{section:choiceoftwists}, the twist angles that must be taken in the vertex model are now $\phi_1=0$ and $\phi_2 = \gamma$.
To this end we first consider the spectra of both models obtained at small sizes by direct diagonalisation of the transfer matrix. The results for the 25 (resp.\ 50) lowest-lying levels of the loop model (resp.\ twisted vertex model) obtained at size $L=8$ for $l_1 = 2,4,6$ are displayed in figure \ref{fig:eigenvloopsL8s2460} in the appendix. Each level in the loop model is also present in the twisted
vertex model, but the vertex model contains many levels that do not occur in the loop model.
This is in agreement with the discussion in section~\ref{sec:enlarged}. More precisely, we
observe the following features:
\begin{enumerate}
\item For $k>2$ the ground state in the vertex model is not present in the loop model.
\item The ground state in the loop model for small $k \gtrsim 2$ corresponds initially to the first
excited state in the vertex model, and then to an increasingly excited state upon increasing $k$.
At a certain value $k_0$ it joins with another level in the loop model so as to form a complex
conjugate pair. (In the figure we have $k_0 \approx 3.1$ for $l_1=2$, $k_0 \approx 4.4$ for
$l_1=4$, and $k_0 \approx 6.1$ for $l_1=6$.)
\item The ground state in the loop model for large $k \gg 2$ corresponds to a highly excited
state in the vertex model. Its analytic continuation to smaller $k$ undergoes a number of
level crossings, and eventually becomes close to the ground state again for $k \gtrsim 2$.
\end{enumerate}
To arrive at analytical expressions for the watermelon exponents of the loop model, we need
to establish a detailed understanding of the crossovers undergone by the ground state in the
original loop model, in each sector $(l_1,0)$, as $\gamma$ runs through the interval $\left[0,{\pi \over 4}\right]$. A systematic comparison between the original and enlarged (alias vertex) models for
small values of $L$ establishes the following relations:
\begin{itemize}
\item For $\gamma$ large enough, the ground state of the loop model with $(l_1=2n_1,0)$ legs coincides with the $(n_1,0)$ ground state of the vertex model with twist angles $\phi_1 =0$ and $\phi_2=\pi-\gamma$. This observation brings along two remarks:
\begin{enumerate}
\item The twist $\phi_2 = \pi - \gamma$, instead of the expected $\phi_2 =\gamma$, amounts to changing the sign of the weight $2 \cos \phi_2$ of non contractible loops of colour 2. Suppose that the boundary conditions on the system are such that there is an even number $M$ of rows. Then it can be shown \cite{FSZ87} that the number of non contractible loops in each Potts model is even. It follows that the partition function (or, more precisely, the modified partition function $Z_{l_1,0}$ conditioned to supporting the given number of through-lines) is invariant under the sign change. Decomposing
$Z_{l_1,0}$ over the transfer matrix eigenvalues $\Lambda_j$ (see e.g.\ \cite{Richard}) gives a (weighted) sum over terms of the type $(\Lambda_j)^M$, implying that the change of $\phi_2$ only has the effect of changing the sign of a subset of the $\Lambda_j$. This effect goes away upon changing to a formulation in which the transfer matrix adds two rows to the system, and hence does not alter the physics of the problem.
\item We {\em define} the ground state of the twisted vertex model at a given twist to be the
analytic continuation (upon gradually increasing the twist) of the state which has the lowest
energy in the untwisted case. In other words, this is the state $(n_1,0,0)$, with effective central charge $c_{n_1,0,0}(0,\pi - \gamma)$, using the notation established above. Because of the
level crossings apparent in figure~\ref{fig:eigenvloopsL8s2460}, combined with the fact that
the desired twist ($\phi_1=0$ and $\phi_2=\pi-\gamma$) depends on $\gamma$,
it might very well be that for some
values of $\gamma$ the ground state of the twisted vertex model (thus defined) is not the
lowest energy state.
%
\end{enumerate}
%
\item For small $\gamma$, the ground state of the loop model corresponds to some excited state of the vertex model at twists $\phi_1 =0$, $\phi_2=\gamma$.
Since this excited state does not belong to the general set of excitations $(n_1,0,j)$ we studied so far, we choose to call it $(n_1,0,0)'$, and write the corresponding central charge $c'_{n_1,0,0}(\phi_1,\phi_2)$. When $n_1$ is odd this state is actually a doublet of states, with respective momenta $\pm 1$ (defined with respect to the one site translation operator in the untwisted case). From the point of view of Bethe roots, the states in this doublet are anticonjugate to each other and we will need to consider only one of the two, say that of momentum $+1$, which we will still call $(n_1,0,0)'$.
\end{itemize}
Our work program is therefore the following: 1) Understand the roots pattern corresponding to $(n_1,0,0)'$ and find general conjectures for $c'_{n_1,0,0}(\phi_1,\phi_2)$, and 2) then compare
$c_{n_1,0,0}(0,\pi - \gamma)$ and $c'_{n_1,0,0}(0,\gamma)$, yielding the location of the crossover in the $L \to \infty$ limit and the watermelons $x_{2 n_1,0}$ throughout the whole regime.
\subsubsection{Roots structure of the states $(n_1,0,0)'$}
We studied the roots configuration associated with the states $(n_1,0,0)'$ for $n_1=1,2,3$, which are represented in appendix \ref{app:RootsConfigs}. These configurations turn out to have the same qualitative structure as those of the states $(n_1,0,0)$ at large $\phi_1$, and differ only by their (properly defined) set of Bethe integers.
Moreover, these seem to undergo no qualitative change as arbitrary values of the twist angles are taken --- at least we can confirm that they do not in the whole region where we computed the corresponding effective central charges (see the next paragraph), which is enough for our purposes here.
\subsubsection{Central charges $c'_{n_1,0,0}$}
Turning to the corresponding effective central charges, we repeat for the states $(n_1,0,0)'$ what we did for all other states, namely determine the $\gamma$ dependence of these central charges at zero twist angles, then turn to the $\phi_1, \phi_2$ dependence and possibly to different regimes.
We show for instance in figure \ref{fig:cprime100_gamma} the measurements of $c'_{1,0,0}(0,0)$ as a function of $\gamma$, leading to the following conjecture:
\begin{equation}
c'_{1,0,0} = c_{1,0,0} - 12 \,.
\label{eq:cprime1=c1-12}
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{./sl4_cprime100_gamma.pdf}
\end{center}
\caption{Effective central charge of the state $(1,0,0)'$ in the $n_1 = 1, n_2 =0$ sector at zero twist, plotted as a function of $\gamma$ for various sizes.
The thick blue dots correspond to an $L \to \infty$ extrapolation, and we plotted in comparison the conjectured expression.}
\label{fig:cprime100_gamma}
\end{figure}
More generally, using results for sizes up to $L=100$, we found support (very good at $n_1=1,2$, but less complete at $n_1=3$, where sizes $L>10$ could not be studied) for the following conjecture
\begin{equation}
c'_{n_1,0,0} \left(\phi_1, \phi_2 \right) = \begin{cases} 3-12 n_1 -6 {\left(n_1\right)^2 \gamma \over \pi} - 6 \frac{\left(\phi_1\right)^2 + \left(\phi_2\right)^2}{\pi \gamma} +o(1) & \mbox{for } \phi_1 + \phi_2 \leq \left(|n_1|-1\right)\gamma \,,
\\ 3-12 n_1 - 6 {\left(n_1\right)^2\gamma \over \pi} - 6 \frac{\left(\phi_1\right)^2 + \left(\phi_2\right)^2}{\pi \gamma} + 3 \frac{\left(\phi_1 + \phi_2-\left(|n_1|-1\right)\gamma\right)^2}{\gamma(\pi - \gamma)} & \mbox{for } \phi_1 + \phi_2 \geq \left(|n_1|-1\right)\gamma \,.
\end{cases}
\label{cn00conj}
\end{equation}
This expression coincides with (\ref{cnn0conj}) for $n_2 = 0$, thus lending further credibility
to the conjecture, and has the now familiar structure in terms of discrete states.
This implies that the states $(n_1,0,0)'$ are part of a continuum for small values of the twist parameters, and therefore exhibit logarithmic terms (shown by the $o(1)$ notation in the first
line) of the same nature as those of (\ref{eq:ciuntwisted}).
\subsection{Conclusion: the watermelon exponents $x_{2 n_1,0}$}
We are now ready to proceed to the last step of our program concerning the calculation of the exponents $x_{2 n_1,0}$.
As explained throughout this section, regime III is the stage of a crossover between two states, for which we found the following effective central charges
\begin{equation}
c_{n_1,0,0} \left(0,\pi - \gamma\right) = \begin{cases} 3- 6 {\left(n_1\right)^2 \gamma \over \pi} - 6 \frac{\left(\pi - \gamma\right)^2 }{\pi \gamma} +o(1) & \mbox{for } n_1 \geq k \,,
\\ 3 - 6 {\left(n_1\right)^2\gamma \over \pi} - 6 \frac{\left(\pi - \gamma\right)^2 }{\pi \gamma} + 3 \frac{\left(\pi - \left(|n_1|+2\right)\gamma\right)^2}{\gamma(\pi - \gamma)} & \mbox{for } n_1 \leq k \,,
\end{cases}
\label{eq:cgamma0}
\end{equation}
and
\begin{equation}
c'_{n_1,0,0}\left(0, \gamma \right) = \begin{cases} 3 - 6\frac{\left(n_1\right)^2\gamma}{\pi} - 12 n_1 - 6 \frac{\gamma}{\pi} + o(1)
& \mbox{for } n_1 \geq 2 \,,
\\ 3 - 6\frac{\left(n_1\right)^2\gamma}{\pi} - 12 n_1 - 6 \frac{\gamma}{\pi}
+ 3 \frac{\left(n_1-2\right)^2 \gamma}{\pi -\gamma} & \mbox{for } n_1 \leq 2 \,.
\end{cases}
\label{eq:cprimegamma0}
\end{equation}
Let us treat as a warm-up the particular case of $x_{2,0}$, that is $n_1=1$.
For this case only the second line in (\ref{eq:cprimegamma0}) is relevant. The same is true for the second line in (\ref{eq:cgamma0}), for any value of $\gamma$ in regime III (i.e., $k \ge 2$).
One easily sees that the former central charge, $c'_{1,0,0}$, is larger throughout the regime; as a matter of fact, the two meet at $\gamma = {\pi \over 4}$. We therefore expect that for sufficiently large sizes the exponent $x_{2,0}$ is governed by $c'_{1,0,0}$ throughout regime III (or, in other words, the crossover in the limit $L \to \infty$ is located at $\gamma = {\pi \over 4}$). This leads to
\begin{equation}
x_{2,0} = - \frac{c'_{1,0,0}(0,\gamma) - c_{0,0,0}(\gamma,\gamma)}{12} = 1 \,
\end{equation}
independently of $\gamma$. In this case we therefore confirm a conjecture made in \cite{FJ08}.
Note that this exponent was measured with very good precision in the latter reference, the reason being, as discussed for the central charge at the end of section \ref{section:cxnn}, that at twist $\phi_1=0, \phi_2=\gamma$ the state $(1,0,0)'$ is a discrete state, and hence does not possess any disturbing logarithmic terms in its corresponding finite-size effective central charge.
Now for $n_1 \geq 2$ we can always use the first line in (\ref{eq:cprimegamma0}), which is the one relevant for describing the twists $\phi_1=0, \phi_2=\gamma$. Let us define a ``critical'' value of $k$ by
\begin{equation}
k_c(n_1) \equiv \sqrt{2 \left(n_1\right)^2+2 n_1+1}+n_1-1 \,.
\label{defkc}
\end{equation}
There are then three regimes:
\begin{itemize}
\item $2 \leq k \leq n_1$: Here $c_{n_1,0,0}$ is described by the first line of (\ref{eq:cgamma0}), and is bigger than $c'_{n_1,0,0}$.
The corresponding watermelon exponent reads
\begin{equation}
x_{2 n_1,0} = - \frac{c_{n_1,0,0}(0,\pi-\gamma) - c_{0,0,0}(\gamma,\gamma)}{12} = \frac{2 (k+1) \left(n_1\right)^2 +k (2 k (k+3)+3)}{4 (k+1) (k+2)} \,.
\end{equation}
\item $n_1 \leq k \leq k_c(n_1)$: Here $c_{n_1,0,0}$ is described by the second line of (\ref{eq:cgamma0}), and is bigger than $c'_{n_1,0,0}$.
The corresponding watermelon exponent is
\begin{equation}
x_{2 n_1,0}= - \frac{c_{n_1,0,0}(0,\pi-\gamma) - c_{0,0,0}(\gamma,\gamma)}{12} = \frac{k \left(k^2+2 k (n_1+2)+\left(n_1\right)^2+4 n_1+3\right)}{4 (k+1) (k+2)} \,.
\end{equation}
\item $k \geq k_c(n_1)$: Now $c'_{n_1,0,0}$, described by the first line in (\ref{eq:cprimegamma0}), is bigger than $c_{n_1,0,0}$.
The corresponding watermelon exponent is in this case
\begin{equation}
x_{2 n_1,0} = - \frac{c'_{n_1,0,0}(0,\gamma) - c_{0,0,0}(\gamma,\gamma)}{12} = -\frac{k}{4 \left(k^2+3 k+2\right)}+\frac{\left(n_1\right)^2}{2 k+4}+n_1 \,.
\end{equation}
\end{itemize}
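For reference, the three regimes may be collected in executable form (a convenience sketch of ours, for real $k \geq 2$ and integer $n_1 \geq 2$):
\begin{verbatim}
import numpy as np

def x_2n1_0(n1, k):
    """Watermelon exponent x_{2 n1, 0}, with the regime boundaries
    at k = n1 and at k = k_c(n1) of eq. (defkc)."""
    kc = np.sqrt(2 * n1**2 + 2 * n1 + 1) + n1 - 1
    if k <= n1:
        return (2 * (k + 1) * n1**2 + k * (2 * k * (k + 3) + 3)) \
               / (4.0 * (k + 1) * (k + 2))
    if k <= kc:
        return k * (k**2 + 2 * k * (n1 + 2) + n1**2 + 4 * n1 + 3) \
               / (4.0 * (k + 1) * (k + 2))
    return -k / (4.0 * (k**2 + 3 * k + 2)) + n1**2 / (2.0 * k + 4) + n1
\end{verbatim}
One checks easily that the three expressions match continuously at the regime boundaries $k=n_1$ and $k=k_c(n_1)$.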
Note that the location $k_c$ of the transition between the regimes dominated by
$c_{n_1,0,0}$ and $c'_{n_1,0,0}$ is an increasing function of $n_1$, in agreement with
numerical results (such as those shown in figure~\ref{fig:eigenvloopsL8s2460})
that we have obtained by directly diagonalising the relevant transfer matrices for small sizes $L$.
Our result for $x_{2 n_1,0}$ with $n_1 \ge 2$ can be compared with the numerical results
given in Table 1 of \cite{FJ08}. Note that the latter have relatively large error bars, and in
some cases did not converge.
\section{Conclusion and discussion}
\label{section:conclusion}
We have studied an integrable case of two coupled $Q$-state antiferromagnetic Potts models on the square lattice,
first discovered by Martins and Nienhuis \cite{MartinsNienhuis} and further
studied by Fendley and Jacobsen \cite{FJ08},
who identified its continuum limit from an argument of level-rank duality.
Going beyond this, we have exhibited an
exact mapping of the lattice model to the $a_3^{(2)}$ vertex model \cite{GalleasMartins} in regime III,
which corresponds to the range $k \in [2,\infty)$ in the parameterisation (\ref{eq:loopweight}).
This mapping has allowed us in particular to extract the Bethe Ansatz equations
(see also \cite{MartinsNienhuis}).
Studying these numerically, and drawing on
formal analogies with the $a_2^{(2)}$ model \cite{VJS:a22}, we have established that the
continuum limit contains two compact bosons and one non compact boson. The non compact
degree of freedom entails a continuous spectrum of critical exponents. When twisting
the vertex model, discrete states emerge from the continuum at particular twist angles that we have precisely identified.
Results on the coupled Potts models --- in their formulation as a dense two-colour loop model \cite{FJ08} --- then followed from identifying the twist angles that ensure the equivalence between the vertex model and the loop model in various sectors. For the ground state sector we recovered
the central charge
\begin{equation}
c = {3 k^2 \over (k+1)(k+2)} \,,
\label{c_again}
\end{equation}
in agreement with \cite{FJ08}. Improving on this we have also given complete results for
the magnetic exponents of the watermelon type, corresponding to imposing given numbers
$(2n_1,2n_2)$ of propagating through-lines in each Potts model.
The watermelon exponents $x_{2n_1, 2n_2}$, in the case where both $n_1$ and $n_2$ are non zero,
were found to be
\begin{equation}
x_{2 n_1, 2 n_2} = \frac{\left(n_1\right)^2 + \left(n_2\right)^2-2}{2}\frac{1}{k+2} + {1 \over 4(k +1)} .
\end{equation}
The exponents $x_{2 n_1,0}$ for $n_1 \neq 0$ (or equivalently, $x_{0,2 n_2}$ with $n_2 \neq 0$)
exhibit a delicate crossover when $k$ runs through the interval $[2,\infty)$. Aside from the particular case $x_{2,0}=1$, we find for $n_1 \ge 2$ three different regimes:
\begin{equation}
x_{2 n_1,0} = \begin{cases}
\frac{2 (k+1) \left(n_1\right)^2 +k (2 k (k+3)+3)}{4 (k+1) (k+2)} &
\mbox{for } 2 \leq k \leq n_1 \,, \\
\frac{k \left(k^2+2 k (n_1+2)+\left(n_1\right)^2+4 n_1+3\right)}{4 (k+1) (k+2)} &
\mbox{for } n_1 \leq k \leq k_c(n_1) \,, \\
-\frac{k}{4 \left(k^2+3 k+2\right)}+\frac{\left(n_1\right)^2}{2 k+4}+n_1 &
\mbox{for } k \geq k_c(n_1) \,,
\end{cases}
\end{equation}
where $k_c(n_1)$ is defined in (\ref{defkc}).
The numerical study made in \cite{FJ08} for the first few watermelon exponents was based
on the numerical diagonalisation of the transfer matrix for sizes ranging up to $L=16$.
The strong logarithmic corrections, produced by the non compact nature of the continuum limit,
were unfortunately unknown to the authors of \cite{FJ08} and led them to numerical estimates
(and a couple of conjectures) that --- with the exception of $c$ and $x_{2,0}$ for which no
logarithms are present --- have been invalidated by the present work
(see section~\ref{FJcomp} for a detailed comparison of the numerical analyses).
This might be a useful
lesson for the future, since we now know at least three different loop models \cite{IkhlefJS3,VJS:a22}
having a non compact continuum limit.
We should point out that it follows from an argument in \cite{DJLP99} that when $Q=3$ (i.e., $k=4$)
the two Potts models decouple. More precisely, the free energies in the ground state sector
for the dense two-colour loop model (with $\lambda=\lambda_c$) and the decoupled models (with $\lambda=1$) are related by
\begin{equation}
f(\lambda_c,L) = f(1,L) + \frac12 \log \left(2 (2 + \sqrt{3}) \right) \qquad \mbox{(for $Q=3$)} \,.
\end{equation}
Consequently the central charge (\ref{c_again}) is $c = \frac85 = 2 \times \frac45$.
However, this does not imply that the spectrum of excitations at $k=4$ is also related to
that of a single $3$-state Potts model. In particular, the watermelon operators given above
are manifestly completely different. Moreover, we stress that the $k=4$ model will also have
non compact excitations --- a feature which is obviously absent in the $3$-state Potts model.%
\footnote{A similar remark can be made about the relation between the square-lattice Ising model
at its ferromagnetic and antiferromagnetic critical points. In the ground state sector these
two models are related by a well-known mapping (change the sign of the coupling constant
and flip the spins on one sublattice), and both have $c=\frac12$. However, the loop model
underlying the $Q=2$ state antiferromagnetic Potts model \cite{JS_AFPotts} has an excitation
spectrum that is quite different from that of its ferromagnetic counterpart.}
We have left several directions for future work. For example, it would be interesting to compute
also the energy-like critical exponents of the coupled Potts models. Another open issue is to
study non-periodic boundary conditions, and to consider in particular boundary extensions of the
underlying Temperley-Lieb algebra along the lines of \cite{confbound}. On the integrability
side, setting up the non-linear integral equations (NLIE) might allow one to actually prove our
formulae for the critical exponents and for the density of states. A field-theoretical formulation
of the $a_3^{(2)}$ model will appear elsewhere in the more general $a_n^{(2)}$ setting \cite{VJS:an2}.
Let us finally mention that the dense two-colour loop model studied here has a dilute counterpart
which is related to a truncated version of the plateau transition in the integer quantum Hall effect
\cite{IkhlefFC}. The dilute model is expected to contain non compact features as well, and we hope to
report more on this issue elsewhere.
\subsection*{Acknowledgments}
We thank Paul Fendley for comments on the manuscript.
This work was supported by the French Agence Nationale pour la Recherche
(ANR Projet 2010 Blanc SIMI 4: DIME) and the Institut Universitaire de France (JLJ).
\section{Introduction}
The relative expansion of the Universe is parametrized by a dimensionless scale factor $\mathcal{R}$. This is a key parameter in the Friedman equations and is also known as the cosmic scale factor or Robertson-Walker scale factor. In the early stages of the Big Bang, most of the energy was in the form of radiation, and that radiation had a dominant influence on the expansion of the Universe. Later, with cooling from the expansion, the roles of matter and radiation changed and the Universe entered a matter dominated era. Recent observational results suggest that we have already entered an era dominated by dark energy (DE). But an investigation of the roles of matter and radiation is most important for a good understanding of the early Universe. One should note that the effective energy density of the Universe is usually expressed in terms of the scale factor. Also, the dynamics of the Universe is assessed through an equation of state parameter $\omega$, usually defined as the ratio of the pressure $p$ to the energy density $\rho$. Within the purview of GR, using a flat Friedman model and assuming a constant equation of state parameter $\omega$, the energy density relates to the scale factor as $\rho \sim \mathcal{R}^{-3(1+\omega)}$. The evolution of the scale factor is a dynamical question, determined by the equations of general relativity, which are presented in the case of a locally isotropic, locally homogeneous universe by the Friedman equations. In a flat Universe model comprising only a perfect cosmic fluid, we may have the scale factor as $\mathcal{R}(t)\sim t^{\frac{2}{3(1+\omega)}}$. For a radiation dominated Universe, the scale factor behaves like $\sim t^{1/2}$, whereas for a matter dominated era, the scale factor behaves like $\sim t^{2/3}$. In the dark energy dominated era, we may have a De Sitter type expansion, with the scale factor behaving like an exponential function such as $\mathcal{R}(t)\sim e^{H_0t}$.
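For instance, in standard notation with $H\equiv\dot{\mathcal{R}}/\mathcal{R}$ the Hubble parameter, combining the first Friedman equation $H^2\propto\rho$ with $\rho \sim \mathcal{R}^{-3(1+\omega)}$ gives, for constant $\omega\neq-1$,
\begin{equation}
\frac{\dot{\mathcal{R}}}{\mathcal{R}} \propto \mathcal{R}^{-\frac{3(1+\omega)}{2}} \quad \Longrightarrow \quad \mathcal{R}(t) \sim t^{\frac{2}{3(1+\omega)}} \,,
\end{equation}
which reproduces the radiation ($\omega=\frac{1}{3}$, $\mathcal{R}\sim t^{1/2}$) and matter ($\omega=0$, $\mathcal{R}\sim t^{2/3}$) behaviours quoted above, while $\omega=-1$ instead yields the exponential De Sitter solution.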
From the above simple discussion, we may infer that the equation of state parameter becomes $\omega=\frac{1}{3}$ for the radiation era, $\omega=0$ for the matter dominated era and for a DE dominated era we may have $-\frac{2}{3}\leq \omega \leq -\frac{1}{3}$. On the other hand, different values of the equation of state parameter have been extracted from observational data in recent years. DE models with a cosmological constant ($\Lambda$CDM model) predict the equation of state parameter as $\omega=-1$ and for quintessence models, we have $\omega > -1$. However, as inferred from recent observations, there may be a possibility of a phantom field dominated phase with $\omega <-1$ \cite{Tripathi2017}. The 9 year WMAP survey suggests that $\omega=-1.073^{+0.090}_{-0.089}$ from CMB measurements and $\omega=-1.084\pm 0.063$ in combination with Supernova data \cite{Hinshaw13}. The Supernova cosmology project group have found that $\omega=-1.035^{+0.055}_{-0.059}$ \cite{Amanullah2010}. From a combined analysis of the data sets of SNLS3, BAO, Planck, WMAP9 and WiggleZ, Kumar and Xu constrained the equation of state parameter as $\omega=-1.06^{+0.11}_{-0.13}$ \cite{Kumar2014}. Moreover, the recent Planck 2018 results constrained $\omega=-1.03\pm 0.03$ \cite{Planck2018}.
It has been confirmed from numerous observations that the Universe is in a state of accelerated expansion, at least in its late phase of evolution \cite{S.per 1998,A.G 1998,R.Knop 2003,A.G 2004,A.G. 2007,D.N 2007,D.J 2005,M.Sulli 2011,N.Suzuki 2012,C.R 2004,S.W 2004,S.P 2004,Cole2005}. Also, it is believed that the Universe was decelerating in the past prior to entering into an accelerated phase. The deceleration is mostly due to a matter dominated phase of the Universe. After the Universe enters into a DE dominated phase, the scale factor increases exponentially. At the very beginning after the phenomenal Big Bang, the Universe is also believed to have undergone an exponential expansion. This phase is popularly known as the inflationary phase. After this inflationary phase, the Universe evolves to decelerate and undergoes a transition from a decelerated phase to an accelerated phase at its late phase. The transit epoch, when the Universe flips from a decelerated phase to an accelerated one, corresponds to a transit redshift $z_{da}$ which is believed to be of the order of 1, i.e. $z_{da} \sim 1$. The transition redshift has been constrained to be $z_{da}= 0.82\pm 0.08$ (Busca \cite{Busca13}), $z_{da}=0.74\pm 0.05$ (Farooq and Ratra \cite{Farooq13}), $z_{da}= 0.7679^{+0.1831}_{-0.1829}$ (Capozziello et al. \cite{Capo14}), $z_{da}=0.69^{+0.23}_{-0.12}$ (Lu et al. \cite{Lu11}), $z_{da}=0.4\pm 0.1$ (Moresco et al. \cite{Moresco16}). While Riess et al. derived kinematic limits on the transition redshift as $z_{da}= 0.426^{+0.27}_{-0.089}$ \cite{Reiss07}, in a recent work, Goswami et al. obtained a constraint as $z_{da}=0.73$ \cite{Goswami2021}. If at all the present Universe follows the DE models, then this transition redshift $z_{da}$ may be considered as an important cosmological parameter.
The dynamical aspects of the Universe may be modelled in many ways. Usually, for a given gravitational theory such as GR, the field equations are set up assuming a cosmic fluid comprising some matter fields, and then, for an assumed equation of state (which may be dynamically varying), the field equations are solved to obtain the cosmic dynamics. Other ways to investigate the cosmological issues may include the assumption of a cosmic dynamics through an ad hoc scale factor, from which the relationship between the energy density and pressure is obtained. The latter modelling method provides an opportunity to obtain the equation of state parameter for a dynamically evolving DE phase. Here we require a suitably chosen scale factor that mimics the present Universe.
In fact, there are several scale factors available in the literature. A power law form $\mathcal{R}(t)\sim t^{n}$, $n$ being a constant, and an exponential expansion form $\mathcal{R}(t)\sim e^{\alpha t}$, $\alpha$ being a positive constant, are well known. Bouncing scale factors, providing a bouncing scenario to avoid the singularity problem occurring at the initial epoch, have different structures than these two forms. However, these scale factors provide a constant deceleration parameter $q=-\frac{\mathcal{R}\ddot{\mathcal{R}}}{(\dot{\mathcal{R}})^2}$. But, consequent upon the recent finding concerning the late time cosmic speed up, we require a signature flipping deceleration parameter having positive values at early times and negative values at late times. Such a deceleration parameter may be obtained from a hybrid scale factor (HSF) \cite{Mishra15} having two factors: one behaving as a power law and the other behaving as an exponential law of expansion. While the power law factor of the HSF dominates at the initial epoch, the exponential factor dominates at late times to provide a suitable explanation for the transitioning Universe.
In the present review, we wish to present the role played by the hybrid scale factor in obtaining the dynamical behaviour of the Universe. We have considered diverse spacetimes and different matter fields in the framework of general relativity to justify that the HSF can be a good alternative as an ad hoc scale factor for investigating background cosmologies. The article is organized as follows: In Sec. 2, we present a brief idea about the hybrid scale factor. In Sec. 3, some viable cosmological models are discussed where we have employed the HSF. At the end, in Sec. 4, we summarize our results.
\section{Hybrid Scale Factor}
The hybrid scale factor may be expressed as \cite{Mishra15,Mishra18,Mishra2018,Mishra21,Tripathy2020}
\begin{equation}
\mathcal{R} =e^{\alpha t}t^{\beta},\label{eq:1}
\end{equation}
where $\alpha$ and $\beta$ are positive constants. As will be described, a cosmic transit from early deceleration to late time acceleration can be obtained using the hybrid scale factor. It is evident from \eqref{eq:1} that the power law behaviour dominates the cosmic dynamics in the early phase of cosmic evolution and the exponential factor dominates at the late phase. When $\beta=0$, the exponential law is recovered and for $\alpha=0$, the scale factor reduces to the power law. The Hubble parameter and the deceleration parameter for the hybrid scale factor can be obtained as
\begin{eqnarray}
H &=&\alpha+\frac{\beta}{t},\\
q &=& -1+\frac{\beta}{(\alpha t+\beta)^2}.
\end{eqnarray}
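As a quick numerical illustration (our own sketch, not taken from the cited works), the following minimal Python snippet evaluates $H(t)$ and $q(t)$ for the HSF and locates the transit epoch where $q$ vanishes; the values $\alpha=0.1$ and $\beta=0.3$ are assumed for illustration, consistent with the ranges discussed below.
\begin{verbatim}
import numpy as np

# Hybrid scale factor R(t) = exp(alpha*t) * t**beta;
# alpha = 0.1, beta = 0.3 are illustrative (assumed) values.
alpha, beta = 0.1, 0.3

def hubble(t):
    return alpha + beta / t

def deceleration(t):
    return -1.0 + beta / (alpha * t + beta)**2

# Transit epoch where q = 0 (the physically admissible root).
t_da = (np.sqrt(beta) - beta) / alpha
print("t_da =", t_da, " q(t_da) =", deceleration(t_da))

for t in (0.1, 1.0, t_da, 10.0, 100.0):
    print("t=%8.3f  H=%8.4f  q=%+8.4f" % (t, hubble(t), deceleration(t)))
\end{verbatim}
The printed $q$ changes sign across $t_{da}$ and approaches $-1$ at late times, as described below.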
A similar expansion law has already been conceived earlier. In a recent work, Tripathy \cite{SKT2014} considered a more general hybrid Hubble parameter of the form $H=\alpha+\frac{\beta}{t^n}$. The present HSF model is a special case of the scale factor considered in Ref. \cite{SKT2014}. One may note from the expression of the deceleration parameter that, at an early phase of cosmic evolution, the deceleration parameter becomes $q=-1+\frac{1}{\beta}$, and at the late phase of cosmic evolution it approaches $-1$. In order to obtain a signature flipping behaviour of the deceleration parameter, having positive values at an early time and negative values at late times, it is required to adjust the value of the parameter $\beta$ in the range $0<\beta<1$. Since, at the cosmic transit epoch, the deceleration parameter vanishes, we can infer that the cosmic transit occurs at a time $t_{da}=-\frac{\beta}{\alpha}\pm \frac{\sqrt{\beta}}{\alpha}$. The negative root leads to a concept of negative time, which appears to be unphysical in the context of Big Bang cosmology, and therefore we should have $t_{da}=\frac{\sqrt{\beta}-\beta}{\alpha}$. It is possible to study different issues in cosmology concerning a transitioning Universe, with early phase deceleration and late phase acceleration, by employing an assumed dynamics through the hybrid scale factor. The parameter $\beta$ has a defined range of $0<\beta<1$. From an analysis of the behaviour of an anisotropic model, Mishra and Tripathy have constrained the parameter $\beta$ in a further narrow range $0<\beta<\frac{1}{3}$ \cite{Mishra15}. This range is required to get a positive time frame for a decelerated Universe. The other parameter $\alpha$ may be constrained from a detailed analysis of $H(z)$ data \cite{Tripathy2020}. Mishra et al. have investigated some anisotropic dark energy models in the framework of GR and have employed a hybrid scale factor to study the cosmic dynamics \cite{Mishra18}. In that work, they have used the range $0<\beta<1$ and constrained the value of $\alpha$ in the range $0.075<\alpha <0.1$ based on recent observational constraints on the transition redshift $0.4<z_{da}<0.8$. In particular, they have used two specific values of $\alpha$, namely $0.1$ and $0.075$. Through the behaviour of the deceleration parameter as deciphered from the hybrid scale factor, Mishra et al. have shown that the power law factor dominates at early times and the exponential factor dominates at late times. The rate of transition of the deceleration parameter is found in that work to be faster for a higher value of $\alpha$. In a recent work of Tripathy et al. \cite{Tripathy2020}, the hybrid scale factor has been constrained from an analysis of the $H(z)$ data and four different HSF models have been proposed. However, accurate determination of the HSF parameters is essential to get a model closer to the present Universe.
Usually, the cosmological models with dark energy components are distinguished through the use of two important diagnostic approaches: the determination of the state finder pair $\{j,s\}$ in the $j-s$ plane and the $Om(z)$ diagnostics. While the state finder pair involves third derivatives of the scale factor, the $Om(z)$ parameter involves only the first derivative of the scale factor appearing through the Hubble rate $H(z)$. The state finder pairs are defined as
\begin{eqnarray}
j &=& \frac{\dddot{\mathcal{R}}}{\mathcal{R}H^3}=\frac{\ddot{H}}{H^3}-(2+3q),\\
s &=& \frac{j-1}{3(q-0.5)}.
\end{eqnarray}
The $Om(z)$ parameter is defined by
\begin{equation}
Om(z) = \frac{E^2(z)-1}{(1+z)^3-1},
\end{equation}
where $E(z)=\frac{H(z)}{H_0}$ is the dimensionless Hubble parameter. Here $H_0$ is the Hubble rate at the present epoch. If $Om(z)$ becomes a constant quantity, the DE model is considered to be a cosmological constant model with $\omega = -1$. If this parameter increases with $z$ with a positive slope, the model can be a phantom model with $\omega < -1$. For a decreasing $Om(z)$ with negative slope, quintessence models are obtained ($\omega >-1$).
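As an illustration (ours, with assumed parameter values), the $Om(z)$ diagnostic can be evaluated for the HSF by constructing $z$ and $E(z)$ on a grid of cosmic times; here $\alpha=0.1$, $\beta=0.3$ and a present epoch $t_0=13.8$ (in the same time units) are hypothetical choices.
\begin{verbatim}
import numpy as np

alpha, beta, t0 = 0.1, 0.3, 13.8     # assumed illustrative values

t = np.linspace(2.0, t0, 400)
R = np.exp(alpha * t) * t**beta      # hybrid scale factor
z = R[-1] / R - 1.0                  # redshift relative to t0
E = (alpha + beta / t) / (alpha + beta / t0)   # E(z) = H(z)/H0

Om = (E**2 - 1.0) / ((1.0 + z)**3 - 1.0)
for i in (0, 100, 200, 300, 398):    # avoid i = 399 where z = 0 (0/0)
    print("z=%6.3f  Om(z)=%8.4f" % (z[i], Om[i]))
\end{verbatim}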
For the hybrid scale factor, the state finder pair becomes
\begin{eqnarray}
(j,s)&=&\left(\frac{\dddot{\mathcal{R}}}{\mathcal{R}H^3},\frac{j-1}{3(q-0.5)}\right)\nonumber \\
&=&\left(1-\frac{3 \beta}{(\alpha t+\beta)^2}+\frac{2\beta}{(\alpha t+\beta)^3}, \frac{4\beta-6\beta(\alpha t+\beta)}{6\beta (\alpha t+\beta)-9(\alpha t+\beta)^3}\right).
\end{eqnarray}
The values of the statefinder pair depend on the parameters $\alpha$ and $\beta$. Both $j$ and $s$ evolve with time, from large initial values to smaller values at late times. At the beginning of cosmic time, the statefinder pair for the HSF is $\{1 + \frac{2-3\beta}{\beta^2}, \frac{2}{3\beta}\}$, whereas at late times of cosmic evolution, the HSF model behaves like $\Lambda$CDM with the statefinder pair having values $\{1, 0\}$. The $Om(z)$ parameter can also be assessed for the HSF. The HSF model behaves like a cosmological constant model for a substantial time zone in the recent past, and before this time zone, the model evolves as a phantom field \cite{Mishra21}.
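A short numerical check (again our own sketch, with the same assumed $\alpha$ and $\beta$) confirms the quoted limiting values of the statefinder pair:
\begin{verbatim}
import numpy as np

alpha, beta = 0.1, 0.3               # assumed illustrative values

def j_of_t(t):
    u = alpha * t + beta
    return 1 - 3 * beta / u**2 + 2 * beta / u**3

def s_of_t(t):
    u = alpha * t + beta
    return (4 * beta - 6 * beta * u) / (6 * beta * u - 9 * u**3)

# Early-time limit {1 + (2 - 3*beta)/beta**2, 2/(3*beta)}:
print(j_of_t(1e-9), s_of_t(1e-9))
print(1 + (2 - 3 * beta) / beta**2, 2 / (3 * beta))
# Late-time LCDM fixed point {1, 0}:
print(j_of_t(1e6), s_of_t(1e6))
\end{verbatim}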
\section{Some cosmological Models with HSF in GR}
In this section, we review some of the cosmological models with hybrid scale factor in the framework of general relativity. We will set up the field equations with different matter field and discuss the results.
\subsection{Dark Energy models in anisotropic Bianchi V Space time in GR}
We may consider an anisotropic Bianchi V space time in the form
\begin{equation} \label{ST}
ds^{2}=-dt^{2}+A^{2}dx^{2}+e^{2 a x}(B^{2}dy^{2}+C^{2}dz^{2}),
\end{equation}
where $A=A(t), B=B(t), C=C(t)$ are functions of cosmic time $t$ only and $a$ is a positive constant. The Einstein Field equations can be written as
\begin{equation} \label{EFE}
G_{ij} \equiv R_{ij}-\frac{1}{2}Rg_{ij}=-\frac{8\pi G}{c^4}T_{ij},
\end{equation}
where $G_{ij}$ is the Einstein tensor, $R_{ij}$ the Ricci tensor, $R$ the Ricci scalar and $T_{ij}$ the energy momentum tensor, which can also be assumed for multiple fluids. For simplicity, we take $8\pi G=c=1$.
For the anisotropic Universe, we may define the directional Hubble parameters as $H_x=\frac{\dot{A}}{A}$, $H_y=\frac{\dot{B}}{B}$, $H_z=\frac{\dot{C}}{C}$. The mean Hubble parameter can be obtained as $H=\frac{\dot{\mathcal{R}}}{\mathcal{R}}=\frac{1}{3}(H_x+H_y+H_z)$, where $\mathcal{R}$ is the scale factor. The shear scalar $\sigma^2\left[=\sigma_{ij}\sigma^{ij}=\frac{1}{2} \left(\Sigma H^2_i-\frac{\theta^2}{3}\right)\right]$ is usually considered to be proportional to the scalar expansion $\theta(=3H)$. This leads to an anisotropy relation between the metric potentials $B$ and $C$ in the form $B=C^k$. Now, the relationship between the scale factor and the metric potentials can be established as $A=\mathcal{R}$, $B=\mathcal{R}^{\frac{2k}{k+1}}$, $C=\mathcal{R}^{\frac{2}{k+1}}$. Also, the Hubble parameter and the directional Hubble parameters can be related as $H_x=H$, $H_y=\left(\frac{2k}{k+1}\right)H$ and $H_z=\left(\frac{2}{k+1}\right)H$. It can be noted that the rate of expansion along the $x$-axis is the same as the mean Hubble parameter. Now, the set of field equations in terms of the Hubble parameters becomes
\begin{eqnarray}
\dot{H}_y+\dot{H}_z+H_y^2 + H_z^2 + H_y H_z-\frac{a^2}{A^2}&=& -\frac{1}{A^2}[T_{11}],\\ \label{EFE6}
\dot{H}_x+\dot{H}_z+H_x^2 + H_z^2 + H_x H_z-\frac{a^2}{A^2}&=& -\frac{1}{B^2e^{2a x}}[T_{22}],\\ \label{EFE7}
\dot{H}_x+\dot{H}_y+H_x^2 + H_y^2 + H_x H_y-\frac{a^2}{A^2}&=& -\frac{1}{C^2e^{2a x}}[T_{33}],\\ \label{EFE8}
H_xH_y+H_yH_z+H_zH_x-3\frac{a^2}{A^2}&=& -[T_{44}]. \label{EFE9}
\end{eqnarray}
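The relations quoted above between the mean and the directional Hubble parameters can be verified symbolically; the following sympy sketch (our own illustration) checks that the exponents of the metric potentials combine correctly and that the mean of $H_x$, $H_y$, $H_z$ reproduces $H$.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', positive=True)
k = sp.symbols('k', positive=True)
R = sp.Function('R', positive=True)(t)

A, B, C = R, R**(2*k/(k + 1)), R**(2/(k + 1))

# (A*B*C)**(1/3) = R: the combined exponent reduces to 1.
print(sp.simplify((1 + 2*k/(k + 1) + 2/(k + 1)) / 3))   # -> 1

# The mean of the directional Hubble rates equals H = Rdot/R.
H = sp.diff(R, t) / R
Hx, Hy, Hz = [sp.diff(X, t) / X for X in (A, B, C)]
print(sp.simplify((Hx + Hy + Hz) / 3 - H))              # -> 0
\end{verbatim}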
The energy conservation for the anisotropic fluid, $T^{ij}_{;j}=0 $, yields
\begin{equation}\label{ECE}
\dot{\rho}+3\rho(\omega+1)H+\rho(\delta H_x+\gamma H_y+\eta H_z)=0.
\end{equation}
In the above equations, an overhead dot on a field variable denotes differentiation with respect to time $t$.
Eqn. \eqref{ECE} can be split into two parts: the first corresponds to the conservation of the matter field with equal pressure along all directions, i.e. the deviation free part of \eqref{ECE}, and the second corresponds to the part involving the deviations of the equation of state (EoS) parameter:
\begin{equation}\label{ECE1}
\dot{\rho}+3\rho(\omega+1)H=0,
\end{equation}
and
\begin{equation}\label{ECE2}
\rho(\delta H_x+\gamma H_y+\eta H_z)=0.
\end{equation}
It is now certain that the behaviour of the energy density $\rho$ is controlled by the deviation free part of the EoS parameter, whereas the anisotropic pressures along different spatial directions can be obtained from the second part of the conservation equation. From eqn. \eqref{ECE1}, we obtain the energy density as $\rho=\rho_0 \mathcal{R}^{-3(\omega+1)}$, where $\rho_0$ is the value of the energy density at the present epoch.
The field equations in an anisotropic space time can be handled by explicitly writing the matter field. We may consider the cosmic fluid to consist of a mixture of dark energy and usual fluid, or it may be embedded in an external magnetic field. Also, we may consider the effect of bulk viscosity on the cosmic dynamics. After choosing the matter field, we may employ the hybrid scale factor to obtain the pressure and energy density of the Universe and study the cosmic dynamics through the dynamical equation of state parameter. In this context, we will consider some of the results of earlier works~\cite{Mishra15,Mishra18,Mishra19}.
\subsubsection{Case:1}
If we assume a cosmic fluid consisting of a mixture of usual matter and some dark energy components, then the right hand side of \eqref{EFE} can be expressed as~\cite{Mishra15}
\begin{eqnarray}
-T_{ij}^D &=& diag[\rho_D, -p_{Dx}, -p_{Dy},-p_{Dz}]\nonumber \\ \label{EMTD}
&=& diag[1, -\omega_{Dx}, -\omega_{Dy},-\omega_{Dz}]\rho_D\nonumber\\
&=&diag[1,-(\omega_D +\delta), -(\omega_D+\gamma), -(\omega_D+\eta)]\rho_D.
\end{eqnarray}
Here $p_D$, $\rho_D$ and $\omega_D$ respectively denote the pressure, the energy density and the DE EoS parameter $(=\frac{p_D}{\rho_D})$. Also, $\delta, \gamma, \eta$ are the skewness parameters that describe the deviation of pressure along the respective coordinate axes. With some algebraic manipulation of the field equations, the skewness parameters can be obtained as
\begin{eqnarray}
\delta &=& -\left(\frac{k-1}{3\rho_D}\right)\left[\frac{k-1}{(k+1)^2}\times F(H)\right],\label{SP1}\\
\gamma &=& \left(\frac{k+5}{6\rho_D}\right)\left[\frac{k-1}{(k+1)^2}\times F(H)\right],\label{SP2}\\
\eta &=& -\left(\frac{5k+1}{6\rho_D}\right)\left[\frac{k-1}{(k+1)^2}\times F(H)\right],\label{SP3}
\end{eqnarray}
where $F(H)=2(\dot{H}+3H^2)$. The functional $F(H)$ for the hybrid scale factor becomes
$$F(t)=\frac{2}{t^2}\left[3\alpha^2t^2+6\alpha \beta t+3\beta^2-\beta\right].$$
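This expression is straightforward to verify symbolically; a minimal sympy check (our own illustration) is:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', positive=True)
alpha, beta = sp.symbols('alpha beta', positive=True)

H = alpha + beta / t                  # HSF Hubble parameter
F = 2 * (sp.diff(H, t) + 3 * H**2)    # F(H) = 2*(Hdot + 3*H^2)

target = (2 / t**2) * (3*alpha**2*t**2 + 6*alpha*beta*t
                       + 3*beta**2 - beta)
print(sp.simplify(F - target))        # -> 0
\end{verbatim}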
The energy conservation equation for the DE fluid ($T^{ij(D)}_{;j}=0$) can be obtained as
\begin{eqnarray}
\dot{\rho}_D+3\rho_D(1+\omega_D)H+\rho_D(\delta H_x+\gamma H_y+\eta H_z)&=&0. \label{ECD}
\end{eqnarray}
The energy density $\rho_D$ and the EoS parameter $\omega_D$ are obtained as
\begin{equation} \label{EDD}
\rho_D = 2\left(\frac{k^2+4k+1}{(k+1)^2}\right)\frac{\dot{\mathcal{R}}^2}{\mathcal{R}^2}-\frac{3a ^2}{\mathcal{R}^2}= 2\left(\frac{k^2+4k+1}{(k+1)^2}\right)\left(\alpha+\frac{\beta}{t}\right)^2-\frac{3a ^2}{e^{2\alpha t}t^{2\beta}},
\end{equation}
\begin{eqnarray}
\omega_D \rho_D&=&-\frac{2}{3}\left(\frac{k^2+4k+1}{(k+1)^2}\right)\left(2\frac{\mathcal{\ddot{R}}}{\mathcal{R}}+\frac{\dot{\mathcal{R}}^2}{\mathcal{R}^2}\right)+\frac{a^2}{\mathcal{R}^2}\nonumber \\
&=&-\frac{2}{3}\left(\frac{k^2+4k+1}{(k+1)^2}\right)\left[-\frac{2\beta}{t^2}+3\left(\alpha+\frac{\beta}{t}\right)^2\right]+\frac{a ^2}{e^{2\alpha t}t^{2\beta}}.\label{EOS}
\end{eqnarray}
The model studied under the above assumptions is more realistic, simulating a cosmic transit that favours a phantom phase at late times. The use of the hybrid scale factor significantly changes the behaviour of the cosmic fluid~\cite{Mishra15}.
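For concreteness, the behaviour of $\omega_D$ in Eqs. \eqref{EDD} and \eqref{EOS} can be traced numerically; the sketch below (ours, with hypothetical parameter values $\alpha=0.1$, $\beta=0.3$, $k=1.1$, $a=0.1$) shows $\omega_D$ evolving towards $-1$ at late times.
\begin{verbatim}
import numpy as np

alpha, beta, k, a = 0.1, 0.3, 1.1, 0.1    # assumed values
c1 = (k**2 + 4*k + 1) / (k + 1)**2

def R(t):                                  # hybrid scale factor
    return np.exp(alpha * t) * t**beta

def rho_D(t):
    return 2 * c1 * (alpha + beta / t)**2 - 3 * a**2 / R(t)**2

def omega_D(t):
    num = (-2.0 / 3.0) * c1 * (-2 * beta / t**2
          + 3 * (alpha + beta / t)**2) + a**2 / R(t)**2
    return num / rho_D(t)

for t in (0.5, 1.0, 5.0, 20.0, 100.0):
    print("t=%6.1f  omega_D=%+.4f" % (t, omega_D(t)))
\end{verbatim}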
\subsubsection{Case:2}
The behaviour of a cosmological model with a dark energy fluid and a viscous matter fluid is another interesting aspect of the study of cosmic expansion. The energy momentum tensor and the energy conservation equation for the viscous fluid ($T^{ij(V)}_{;j}=0$) can be written as~\cite{Mishra15,Mishra18}
\begin{equation}
-T_{ij}^V = diag[\rho, -\bar{p}, -\bar{p},-\bar{p}],\\ \label{EMTV}
\end{equation}
and
\begin{equation}
\dot{\rho}+3(\rho+\bar{p})H=0. \\ \label{ECV}
\end{equation}
Assuming a barotropic relation $\bar{p}=\epsilon\rho$ for the viscous fluid, we get the energy density for the matter field as
\begin{align} \label{EDMV}
\rho=\frac{\rho_{0}}{\left[ e^{\int{H}.dt}\right]^{3(\epsilon +1)}},
\end{align}
where $\rho_0$ is the integration constant, i.e. the rest energy density at the present time.
The energy density $\rho_D$ and the EoS parameter $\omega_D$ are obtained as
\begin{eqnarray} \label{EDDV}
\rho_D &=& 2\left(\frac{k^2+4k+1}{(k+1)^2}\right)\frac{\dot{\mathcal{R}}^2}{\mathcal{R}^2}-\frac{3a ^2}{\mathcal{R}^2}-\rho_{0} \mathcal{R}^{-3(\epsilon + 1)} \\ \nonumber
&=& 2\left(\frac{k^2+4k+1}{(k+1)^2}\right)\left(\alpha+\frac{\beta}{t}\right)^2-\frac{3a ^2}{e^{2\alpha t}t^{2\beta}}-\dfrac{\rho_{0}}{(e^{\alpha t}t^{\beta})^{\frac{3}{2}(k+1)(\epsilon+1)}},
\end{eqnarray}
\begin{eqnarray}
\omega_D \rho_D &=& -\frac{2}{3}\left(\frac{k^2+4k+1}{(k+1)^2}\right)\left(2\frac{\mathcal{\ddot{R}}}{\mathcal{R}}+\frac{\dot{\mathcal{R}}^2}{\mathcal{R}^2}\right)+\frac{a^2}{\mathcal{R}^2}-\epsilon \rho_{0} \mathcal{R}^{-3(\epsilon +1)} \\ \nonumber
&=& -\frac{2}{3}\left(\frac{k^2+4k+1}{(k+1)^2}\right)\left[-\frac{2\beta}{t^2}+3\left(\alpha+\frac{\beta}{t}\right)^2\right]+\frac{a ^2}{e^{2\alpha t}t^{2\beta}}-\dfrac{\epsilon \rho_{0}}{(e^{\alpha t}t^{\beta})^{\frac{3}{2}(k+1)(\epsilon+1)}}.\label{EOSV}
\end{eqnarray}
The skewness parameters can be obtained as in eqns. \eqref{SP1}, \eqref{SP2} and \eqref{SP3}. The viscous fluid and the dark energy fluid show their dominance at early times and at late times of evolution respectively for the presumed hybrid scale factor~\cite{Mishra19}.
\subsubsection{Case:3}
When the cosmic fluid contains one dimensional cosmic strings aligned along the $x$-axis along with DE, we have~\cite{Mishra18}
\begin{equation}
-T_{ij}^S = diag[0,-\lambda,0,0]. \label{EMTS}
\end{equation}
For this choice of the matter field, the skewness parameters can be obtained as
\begin{eqnarray}
\delta &=& -\left(\frac{k-1}{3\rho_D}\right)\left[\frac{k-1}{(k+1)^2}\times F(H)+\lambda\right],\label{SP4}\\
\gamma &=& \left(\frac{k+5}{6\rho_D}\right)\left[\frac{k-1}{(k+1)^2}\times F(H) +\lambda\right] ,\label{SP5}\\
\eta &=& -\left(\frac{5k+1}{6\rho_D}\right)\left[\frac{k-1}{(k+1)^2}\times F(H)-\lambda\right]. \label{SP6}
\end{eqnarray}
The energy conservation equation yields
\begin{equation} \label{ECES}
\dot{\rho}+3\left(\rho+p+\frac{\lambda}{3}\right)H=0,
\end{equation}
which can be integrated to obtain
\begin{equation}
\rho=\rho_0 \mathcal{R}^{-3(1+\omega+\xi)},
\end{equation}
where $\rho_0$ is the rest energy density due to the matter field at the present epoch. $\xi$ and $\omega$ are assumed to be non-evolving state parameters. The equations of state for the strings and the isotropic fluid can be considered respectively as
\begin{equation}
\lambda=3\xi\rho,~~~~~~~~~~~~~~~~~~~~~~~~~p=\omega\rho.
\end{equation}
Consequently, the pressure and string tension density are obtained as
\begin{equation}
p=\omega\rho_0 \mathcal{R}^{-3(1+\omega+\xi)},~~~~~~~~~~ \lambda=3\xi\rho_0 \mathcal{R}^{-3(1+\omega+\xi)}.
\end{equation}
From Eqs. \eqref{EFE9}, \eqref{EMTD} and \eqref{EMTS}, we can obtain the DE density as
\begin{equation}
\rho_D=3(\Omega_{\sigma}-\Omega_k)\left(\frac{\dot{\mathcal{R}}}{\mathcal{R}}\right)^2-\rho,
\end{equation}
where $\Omega_{\sigma}=1-\frac{\mathcal{A}}{2}$, with $\mathcal{A}$ being the average anisotropy parameter, and $\Omega_k=\frac{a^2}{\mathcal{R}^2}$.
The density parameters can be expressed as
\begin{equation}
\Omega_D=\Omega_{\sigma}-\Omega_k-\Omega_m,
\end{equation}
where $\Omega_m=\frac{\rho_m}{3H^2}$.
The total density parameter becomes
\begin{equation} \label{TDP}
\Omega=\Omega_m+\Omega_D=\Omega_{\sigma}-\Omega_k.
\end{equation}
The DE EoS parameter $\omega_D$ is now obtained as
\begin{equation}
\omega_D \rho_D=- \left[2\frac{\ddot{\mathcal{R}}}{\mathcal{R}}+\left(\frac{\dot{\mathcal{R}}}{\mathcal{R}}\right)^2\right]\Omega_{\sigma}+\frac{a^2}{\mathcal{R}^2}-\rho(\omega+\xi).
\end{equation}
The model constructed with the hybrid scale factor in the work~\cite{Mishra18} yields anisotropy in the dark energy pressure that evolves with the cosmic expansion, at least at late times. However, at early times, the dynamics of the accelerating universe is substantially affected by the presence of the cosmic string.
\subsubsection{Case:4}
We may extend our investigation further by incorporating an external electromagnetic field along both the $x$~\cite{Mishra19a} and $z$~\cite{Ray19} directions with the dark energy fluid. Here, we will incorporate the electromagnetic field in the form
\begin{equation}\label{EMTE1}
E_{ij} = \frac{1}{4 \pi} \left[ g^{sp}f_{is}f_{jp}-\frac{1}{4} g_{ij}f_{sp}f^{sp} \right],
\end{equation}
where $g_{ij}$ is the gravitational metric potential and $f_{ij}$ is the electromagnetic field tensor. In order to avoid the interference of the electric field, we have considered an infinite electrical conductivity to construct the cosmological model. This results in the expression $f_{14}=f_{24}=f_{34}=0$. Again, taking the axis of the magnetic field along the $x$-direction as the axis of symmetry, we obtain the expression $f_{12}=f_{13}=0,$ $f_{23}\neq 0$. Thus, the only non-vanishing component of the electromagnetic field tensor is $f_{23}$. With the help of Maxwell's equations, the non-vanishing component can be represented as $f_{23}=-f_{32}= k_1$, where $k_1$ is assumed to be a constant and comes from the axial magnetic field distribution. For the anisotropic BV model, the components of the EMT for the electromagnetic field can be obtained as
\begin{equation}\label{EMTE2}
E_{ij}= diag[-1,A^2, - B^2 e^{2 a x}, - C^2 e^{2 a x}]\mathcal{M},
\end{equation}
where $\mathcal{M}=\frac{k_1^{2}}{B^{2}C^{2}e^{4 a x}}$, with $a$ the metric constant of \eqref{ST}. When $k_1$ is non-zero, the magnetic field is present along the $x$-direction. If $k_1$ vanishes, the model reduces to the one with DE components only. Hence
\begin{eqnarray}
\delta &=& -\left(\frac{k-1}{3\rho_D}\right)\left[\frac{k-1}{(k+1)^2}\times F(H)+2\mathcal{M}\right],\label{eqn:2.1.7}\\
\gamma &=& \left(\frac{k+5}{6\rho_D}\right)\left[\frac{k-1}{(k+1)^2}\times F(H) +2\mathcal{M}\right] ,\label{eqn:2.1.8}\\
\eta &=& -\left(\frac{5k+1}{6\rho_D}\right)\left[\frac{k-1}{(k+1)^2}\times F(H)-2\mathcal{M} \right].\label{eq:2.1.9}
\end{eqnarray}
The energy conservation equation for the magnetized field can be obtained as
\begin{equation} \label{ECEE}
\dot{\mathcal{M}}+2 \mathcal{M} (H_{y}+H_{z})=0.
\end{equation}
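One can check directly that $\mathcal{M}\propto (BC)^{-2}$ solves this conservation equation (the static factor $e^{4ax}$ only rescales $\mathcal{M}$ and drops out of the time derivative); a small sympy sketch of ours:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', positive=True)
k1 = sp.symbols('k_1', positive=True)
B = sp.Function('B', positive=True)(t)
C = sp.Function('C', positive=True)(t)

M = k1**2 / (B**2 * C**2)            # magnetic term, up to exp(4ax)
Hy, Hz = sp.diff(B, t) / B, sp.diff(C, t) / C

print(sp.simplify(sp.diff(M, t) + 2 * M * (Hy + Hz)))   # -> 0
\end{verbatim}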
The DE energy density $\rho_D$ and the EoS parameter $\omega_D$ are obtained as
\begin{eqnarray}
\rho_D &=& 2\left(\frac{k^2+4k+1}{(k+1)^2}\right) \dfrac{\dot{\mathcal{R}}^2}{\mathcal{R}^2}- 3 \dfrac{a ^2}{\mathcal{R}^2}+ 2\mathcal{M},\\
\omega _D\rho_D &=& -\frac{2}{3}\left(\frac{k^2+4k+1}{(k+1)^2}\right)\left[\dfrac{2\ddot{\mathcal{R}}}{\mathcal{R}}+ \dfrac{\dot{\mathcal{R}}^2}{\mathcal{R}^2} \right] + \dfrac{a ^{2}}{\mathcal{R}^2}-\frac{2}{3}\mathcal{M}.
\end{eqnarray}
The findings of the model indicate that, at the late phase of cosmic evolution, the expanding universe is in agreement with the supernovae observations, while in the deceleration phase it allows the formation of the large scale structure of the Universe. The presence of the magnetic field changes the dynamics substantially, at least at the initial phase of the evolution. The skewness parameters are more sensitive to the magnetic field strength and the choice of the model parameters.
\section{Conclusion}
In the present review, we have discussed the role of a hybrid scale factor in obtaining viable cosmic dynamics without assuming any specific relationship between the pressure and energy density of the Universe. The hybrid scale factor is an intermediate between the power law and exponential law expansion behaviour. The HSF has two factors: one behaves like a power law expansion and dominates at the early phase of cosmic evolution; the other behaves like an exponential expansion and dominates at the late phase, providing a model closer to the concordance $\Lambda$CDM model at late times. Also, the observations supporting the late time cosmic speed up phenomena suggest that the Universe has undergone a transition from early deceleration to late time acceleration. Such a cosmic transit behaviour envisages a signature flipping behaviour of the deceleration parameter. The usual power law or exponential law scale factors generate a constant deceleration parameter and are therefore not suitable for cosmological studies relevant to the cosmic speed up issue. In this context, a hybrid scale factor can simulate a deceleration parameter with early positive values and late time negative values. However, the HSF has two adjustable parameters which need to be tuned so as to obtain viable models with cosmic transit behaviour. Out of the two parameters, one can be readily constrained to some acceptable range, but the other parameter may be constrained through a detailed analysis of the $H(z)$ data. In this context, accuracy in the $H(z)$ data is essential to get models closer to the present Universe. We have considered some cosmological examples where the hybrid scale factor has been used to obtain the cosmic dynamics satisfactorily. It is shown that the HSF can be a good alternative in providing interesting and viable models.
\section*{Acknowledgement} SKT, BM and SR thank IUCAA, Pune, India for providing support through the visiting associateship program. The work by MK has been performed with a support of the Ministry of Science and Higher Education of the Russian Federation, Project "Fundamental problems of cosmic rays and dark matter", No 0723-2020-0040.
\section{Introduction}
Fractional calculus is an effective tool in the study of non local and memory
effects in physics. Its successful application to anomalous diffusion was
immediately followed by other examples in classical physics [1-4]. The first
application of fractional calculus to quantum mechanics was given by Laskin in
terms of the fractional Riesz derivative as the space fractional
Schr\"{o}dinger equation [5]. Laskin's space fractional quantum mechanics is
intriguing since it follows from Feynman's path integral formulation of
quantum mechanics over L\'{e}vy paths. One of the first solutions of this
theory was given by Laskin for the infinite well problem [5-8]. Despite its
simplicity, the infinite well problem is very important since it is the
prototype of a quantum detector with internal degrees of freedom. In 2010,
Jeng et al. [9] argued that the solutions obtained for the space fractional
Schr\"{o}dinger equation in a piecewise fashion are not valid. Their argument
was based on a contradiction they think exists in the ground state wave
function of the infinite square well problem. In [10, 11] we have shown that
an exact treatment of the integral that lead them to inconsistency proves
otherwise. However, in a recent comment, Hawkins and Schwarz point to a
possible problem in the proof regarding the analyticity of the relevant
integrals [12].
In Sections II and III we present details of the treatment of the relevant
integrals and show for general $n$ that there is no inconsistency. Recently,
Dong [13] obtained the wave function for the infinite square well problem by
using path integrals over L\'{e}vy paths and confirmed the solution given by
Laskin [5-8].
However, Luchko analyzed the solution in configuration space\ with a different
representation of the Riesz derivative and argued in favor of inconsistency
[14]. To pinpoint the source of this controversy and its resolution, in the
Section IV we scrutinize the different representations of the Riesz derivative
and show that when calculated consistently, they all have the same Fourier
transform. The controversy arises when the divergent integrals in the
configuration space are evaluated piecewise for the infinite square well
problem, thus tampering with the integrity of the Riesz derivative. Finally,
Section V presents the conclusions.
\section{Consistency of the Solutions of the Space Fractional Schr\"{o}dinger
Equation}
In one dimension the space fractional Schr\"{o}dinger equation is written in
terms of the quantum Riesz derivative $\left( -\hslash^{2}\Delta\right)
^{\alpha/2}$ [5-8] as
\begin{equation}
i\hslash\frac{\partial\Psi(x,t)}{\partial t}=D_{\alpha}\left( -\hslash
^{2}\Delta\right) ^{\alpha/2}\Psi(x,t)+V(x)\Psi(x,t),
\end{equation}
where
\begin{equation}
\left( -\hslash^{2}\Delta\right) ^{\alpha/2}\Psi(x,t)=\frac{1}{2\pi\hslash
}\int_{-\infty}^{+\infty}dp\text{ }e^{ipx/\hslash}\left\vert p\right\vert
^{\alpha}\Phi(p,t),\text{ }1<\alpha\leq2,
\end{equation}
and $\Phi(p,t)$ is the Fourier transform of the wave function:
\begin{equation}
\Phi(p,t)=\int_{-\infty}^{+\infty}dx\Psi(x,t)e^{-ipx/\hslash}.
\end{equation}
The restriction on $\alpha$ comes from the requirement of the existence of the
first-order moments of the $\alpha$-stable L\'{e}vy distribution so that
average momentum or position of the quantum particle can be found [5]. For the
infinite square well, the potential is given as
\begin{equation}
V(x)=\left\{
\begin{tabular}
[c]{ccc}
$0$ & $;$ & $\left\vert x\right\vert <a$\\
$\infty$ & $;$ & $\left\vert x\right\vert \geqslant a$
\end{tabular}
\ \ \ \ \ \ \right. ,
\end{equation}
where for its separable solutions:
\begin{equation}
\Psi(x,t)=e^{-iEt/\hslash}\psi(x),
\end{equation}
$\psi(x)$ satisfies the following eigenvalue problem
\begin{equation}
D_{\alpha}\left( -\hslash^{2}\Delta\right) ^{\alpha/2}\psi(x)=E\psi
(x),\text{ }\psi(a)=\psi(-a)=0.
\end{equation}
The corresponding energy eigenfunctions and the eigenvalues are obtained as
[5-8]
\begin{align}
\psi_{n}(x) & =\left\{
\begin{tabular}
[c]{ccc}
$A\sin\frac{n\pi}{2a}(x+a)$ & $;$ & $\left\vert x\right\vert <a$\\
& & \\
$0$ & $;$ & $\left\vert x\right\vert \geqslant a$
\end{tabular}
\ \ \ \ \ \ \ \right. ,\\
\text{ }E_{n} & =D_{\alpha}\left( \frac{\hslash n\pi}{2a}\right) ^{\alpha
},\text{ }n=1,2,\ldots\text{ }.\nonumber
\end{align}
To show the inconsistency of these solutions, Jeng et al. [9] concentrated on
the ground state with $n=1$
\begin{equation}
\psi_{1}(x)=\left\{
\begin{tabular}
[c]{ccc}
$A\cos\left( \frac{\pi x}{2a}\right) $ & $;$ & $\left\vert x\right\vert
<a$\\
& & \\
$0$ & $;$ & $\left\vert x\right\vert \geqslant a$
\end{tabular}
\ \ \ \right.
\end{equation}
and argued that this solution, albeit satisfying the boundary conditions,
$\psi_{1}(-a)=\psi_{1}(a)=0,$ when substituted back into the space fractional
Schr\"{o}dinger equation leads to a contradiction [9]. Using the Fourier
transform of $\psi_{1}(x):$
\begin{equation}
\phi_{1}(p)=\mathcal{F}\left\{ \psi_{1}(x)\right\} =-A\pi\left(
\frac{\hslash^{2}}{a}\right) \frac{\cos\left( ap/\hslash\right)
}{p^{2}-\left( \pi\hslash/2a\right) ^{2}},\text{ }\left\vert x\right\vert <a,
\end{equation}
and the definition of the quantum Riesz derivative [5-8]
\begin{equation}
\left( -\hslash^{2}\Delta\right) ^{\alpha/2}\psi_{1}(x)=(1/2\pi\hslash
)\int_{-\infty}^{+\infty}dpe^{ipx/\hslash}\left\vert p\right\vert ^{\alpha
}\phi_{1}(p),
\end{equation}
in Equation (6), they wrote $\psi_{1}(x)$ as the integral
\begin{equation}
\psi_{1}(x)=-\frac{AD_{\alpha}}{2E_{1}}\left( \frac{\hslash}{a}\right)
\int_{-\infty}^{+\infty}dp\text{ }\left( \frac{2a}{\pi\hslash}\right)
^{2}\frac{\left\vert p\right\vert ^{\alpha}\cos\left( ap/\hslash\right)
}{\ \left( 2ap/\pi\hslash\right) ^{2}-1}e^{ipx/\hslash},\text{ }\left\vert
x\right\vert <a.
\end{equation}
Using the substitution $q=\frac{2a}{\pi\hslash}p,$ $\psi_{1}(x)$ becomes
\begin{equation}
\psi_{1}(x)=-\frac{AD_{\alpha}}{\pi E_{1}}\left( \frac{\pi\hslash
}{2a}\right) ^{\alpha}\int_{-\infty}^{+\infty}dq\text{ }\frac{\left\vert
q\right\vert ^{\alpha}\cos\left( \pi q/2\right) }{\ q^{2}-1}e^{i\pi qx/2a}.
\end{equation}
Jeng et al. [9] argued that the right hand side of the above equation,
which they wrote as
\begin{equation}
\psi_{1}(x)=-\frac{AD_{\alpha}}{\pi E_{1}}\left( \frac{\pi\hslash
}{2a}\right) ^{\alpha}2\int_{0}^{+\infty}dq\text{ }\frac{\left\vert
q\right\vert ^{\alpha}\cos\left( \pi q/2\right) }{\ q^{2}-1}\cos\left( \pi
qx/2a\right) ,
\end{equation}
can not satisfy the boundary conditions that $\psi_{1}(x)$ satisfies as
$x\rightarrow\pm a$, thus indicating an inconsistency in the infinite square
well solution. However, we have shown that an exact evaluation of the
integral in Equation (12) proves otherwise [10, 11]. In Section III we
give the general proof for all $n.$
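Before turning to the general proof, we note that the claim can also be checked numerically. The sketch below (our own illustration, not part of [10, 11]) evaluates the conditionally convergent integral in Equation (12) with mpmath's oscillatory quadrature at the interior point $x=a/2$ (units $a=A=1$; the choice $x=1/2$ makes the two oscillation frequencies commensurate, with common period 8) and recovers $\cos(\pi x/2a)$:
\begin{verbatim}
import mpmath as mp

alpha, x = mp.mpf('1.5'), mp.mpf('0.5')
w = mp.pi * x / 2                 # pi*x/(2a) with a = 1

def f(q):
    # cos(pi*q/2) has a simple zero at q = 1, so the apparent
    # pole of 1/(q**2 - 1) is removable; guard the exact point.
    if q == 1:
        return -mp.pi / 4 * mp.cos(w)
    return q**alpha * mp.cos(mp.pi*q/2) * mp.cos(w*q) / (q**2 - 1)

I = mp.quadosc(f, [0, mp.inf], period=8)
psi = -(2 / mp.pi) * I            # even integrand: full line = 2x half line
print(psi)                        # approx 0.70711
print(mp.cos(mp.pi * x / 2))      # cos(pi/4) = 0.70711
\end{verbatim}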
\section{Proof For All $n$}
\subsection{The case for odd $n$}
For the odd values of $n,$ the eigenfunctions in Equation (7) become
\begin{align}
\psi_{n}(x) & =\left\{
\begin{tabular}
[c]{ccc}
$A\cos\frac{n\pi x}{2a}$ & $;$ & $\left\vert x\right\vert <a$\\
& & \\
$0$ & $;$ & $\left\vert x\right\vert \geqslant a$
\end{tabular}
\ \ \ \ \ \right. ,\\
\text{ }E_{n} & =D_{\alpha}\left( \frac{\hslash n\pi}{2a}\right) ^{\alpha
},\text{ }n=1,3,5,\ldots\text{ }.\nonumber
\end{align}
Using the Fourier transform $\mathcal{F}\{\psi_{n}(x)\}=\phi_{n}(p):$
\begin{equation}
\phi_{n}(p)=-\frac{An\pi\hbar^{2}\sin(n\pi/2)}{a}\left( \frac{\cos pa/\hbar
}{p^{2}-(n\pi\hbar/2a)^{2}}\right) ,\text{ }n=1,3,5,\ldots,
\end{equation}
and the definition of the Riesz derivative [Eq. (2)] in the space fractional
Schr\"{o}dinger Equation [Eq. (6)], the corresponding integral expression for
$\psi_{n}(x),$ $n=1,3,5,\ldots$ becomes
\begin{equation}
\psi_{n}(x)=-\frac{AD_{\alpha}n\hbar(2a/n\pi\hbar)^{2}\sin(n\pi/2)}{2aE_{n}}
\int_{-\infty}^{+\infty}dp\text{ }e^{ipx/\hbar}\frac{\left\vert p\right\vert
^{\alpha}\cos(pa/\hbar)}{(2ap/n\pi\hbar)^{2}-1}.
\end{equation}
Making the substitution $p=(n\pi\hbar/2a)q,$ we write
\begin{align}
\psi_{n}(x) & =-\ \frac{AD_{\alpha}\sin(n\pi/2)}{E_{n}\pi}\ \left(
\frac{n\pi\hbar}{2a}\right) ^{\alpha}\int_{-\infty}^{+\infty}dq\text{
}e^{i(n\pi x/2a)q}\frac{\left\vert q\right\vert ^{\alpha}\cos(n\pi
q/2)}{\left( q^{2}-1\right) \ }\nonumber\\
& =-\ \frac{AD_{\alpha}\sin(n\pi/2)}{E_{n}\pi}\ \left( \frac{n\pi\hbar
}{2a}\right) ^{\alpha}I,
\end{align}
where $I$ is the integral
\begin{equation}
I=\int_{-\infty}^{+\infty}dq\text{ }e^{i(n\pi x/2a)q}\frac{\left\vert
q\right\vert ^{\alpha}\cos(n\pi q/2)}{\left( q^{2}-1\right) \ }.
\end{equation}
Substituting
\begin{equation}
\cos\left( n\pi q/2\right) =\frac{1}{2}\left( e^{in\pi q/2}+e^{-in\pi
q/2}\right) ,
\end{equation}
we can write $I$ as the sum of two integrals
\begin{align}
I & =I_{1}+I_{2}\nonumber\\
& =\frac{1}{2}\int_{-\infty}^{+\infty}dq\text{ }\frac{\left\vert q\right\vert
^{\alpha}\ e^{i(\frac{n\pi x}{2a}+\frac{n\pi}{2})q}}{\ (q+1)(q-1)}\ +\frac
{1}{2}\int_{-\infty}^{+\infty}dq\text{ }\frac{\left\vert q\right\vert
^{\alpha}\ e^{i(\frac{n\pi x}{2a}-\frac{n\pi}{2})q}}{\ (q+1)(q-1)},
\end{align}
which can be evaluated by analytic continuation as a Cauchy principal value
integral [15 pg. 365]. However, in the above integrals, as they stand,
$\left\vert q\right\vert ^{\alpha}$ cannot be continued analytically. To
overcome this difficulty, we resort to the original definition of the Riesz
derivative and see where $\left\vert q\right\vert ^{\alpha}$ comes from.
The Riesz derivative, $R_{x}^{\alpha}f(x),$ is defined as [3, 16-18]
\begin{align}
R_{x}^{\alpha}f(x) & =-\frac{_{-\infty}D_{x}^{\alpha}f(x)+_{\infty
}D_{x}^{\alpha}f(x)}{2\cos\alpha\pi/2},\text{ }\alpha>0,\text{ }\alpha
\neq1,3,...\\
_{-\infty}D_{x}^{\alpha}f(x) & =\frac{1}{\Gamma(n-\alpha)}\int_{-\infty
^{x}(x-x^{\prime})^{-\alpha-1+n}f^{(n)}(x^{\prime})dx^{\prime},\\
_{+\infty}D_{x}^{\alpha}f(x) & =\frac{(-1)^{n}}{\Gamma(n-\alpha)}\int
_{x}^{\infty}(x^{\prime}-x)^{-\alpha-1+n}f^{(n)}(x^{\prime})dx^{\prime},
\end{align}
where $n$ is the smallest integer greater than $\alpha.$ For the range
$1<\alpha<2,$ $n=2.$ In Equations (22) and (23) we have used the Caputo
fractional derivative [Eqs. (A7) and (A8)] since for sufficiently smooth
functions:
\begin{equation}
f(x),f^{\prime}(x),\ldots,f^{(n-1)}(x)\rightarrow0\text{ as}~x\rightarrow
\pm\infty,
\end{equation}
the Caputo and the Riemann-Liouville definitions agree [2, 3, 16-18]. Also
note that the quantum Riesz derivative and the Riesz derivative $R_{x
}^{\alpha}$ are related by [5-8]
\begin{equation}
\left( -\hslash^{2}\Delta\right) ^{\alpha/2}\psi_{1}(x)=-\hslash^{\alpha
}R_{x}^{\alpha}\psi_{1}(x).
\end{equation}
Using the following Fourier transforms (see Section IV for the detailed
derivation)
\begin{equation}
\left.
\begin{tabular}
[c]{l}
$\mathcal{F}\left\{ _{-\infty}D_{x}^{\alpha}f(x)\right\} =(i\omega)^{\alpha
}g(\omega),$\\
\\
$\mathcal{F}\left\{ _{\infty}D_{x}^{\alpha}f(x)\right\} =(-i\omega)^{\alpha
}g(\omega)
\end{tabular}
\ \ \ \ \ \ \ \ \right.
\end{equation}
where $g(\omega)=\mathcal{F}\left\{ f(x)\right\} $ and $\alpha>0$, we write
the Fourier transform of the Riesz derivative as
\begin{equation}
\mathcal{F}\left\{ R_{x}^{\alpha}f(x)\right\} =-\left( \frac{(i\omega
)^{\alpha}+(-i\omega)^{\alpha}}{2\cos\alpha\pi/2}\right) g(\omega).
\end{equation}
When $\omega$ is restricted to the real axis, this reduces to the familiar
expression $\mathcal{F}\left\{ R_{x}^{\alpha}f(x)\right\} =-\left\vert
\omega\right\vert ^{\alpha}g(\omega),$ which is used in Equations (12) and
(17). In order to evaluate $I$ by analytic continuation, we use the above form
of the Riesz derivative [Eq. (27)], which allows analytic continuation and
write $I$ [Eq. (20)] as
\begin{align}
I & =I_{1}+I_{2}\nonumber\\
& =\frac{1}{2}\int_{-\infty}^{+\infty}dq\left( \frac{(iq)^{\alpha
}+(-iq)^{\alpha}}{2\cos\alpha\pi/2}\right) \frac{e^{i(\frac{n\pi x}{2a
}+\frac{n\pi}{2})q}}{\ (q+1)(q-1)}\ \nonumber\\
& +\frac{1}{2}\int_{-\infty}^{+\infty}dq\left( \frac{(iq)^{\alpha
}+(-iq)^{\alpha}}{2\cos\alpha\pi/2}\right) \frac{\ e^{i(\frac{n\pi x
}{2a}-\frac{n\pi}{2})q}}{\ (q+1)(q-1)}.
\end{align}
Factoring $q^{\alpha}$ out, the integrals $I_{1}$ and $I_{2}$:
\begin{align}
I_{1} & =\ \left( \frac{(i)^{\alpha}+(-i)^{\alpha}}{4\cos\alpha\pi
/2}\right) \int_{-\infty}^{+\infty}dq\frac{q^{\alpha}e^{i(\frac{n\pi x
}{2a}+\frac{n\pi}{2})q}}{\ (q+1)(q-1)}\ ,\\
I_{2} & =\ \left( \frac{(i)^{\alpha}+(-i)^{\alpha}}{4\cos\alpha\pi
/2}\right) \int_{-\infty}^{+\infty}dq\frac{q^{\alpha}e^{i(\frac{n\pi x
}{2a}-\frac{n\pi}{2})q}}{\ (q+1)(q-1)}\ ,
\end{align}
can now be evaluated as Cauchy principal value integrals via analytic
continuation [15 pg. 365].
In the above integrals, aside from the poles at $q=$ $\pm1,$ there is also a
branch point and a branch cut at the origin due to the power $q^{\alpha},$
$\alpha>0.$ For each integral, in contrast to the claims of Hawkins and
Schwarz [12], the branch cut can always be chosen away from the region of
interest. For the branch values of $(i)^{\alpha}$ and $(-i)^{\alpha},$ it has
to be remembered that the Riesz derivative $R_{x}^{\alpha}$, is defined such
that for real $q,$ the Fourier transform of the Riesz derivative corresponds
to the logarithm of the characteristic function of the symmetric L\'{e}vy
probability density function. Therefore, in the definition of the Riesz
derivative [Eq. (21)], $2\cos\alpha\pi/2$ is introduced with the principal
branch values of $(i)^{\alpha}$ and $(-i)^{\alpha}$ in mind, hence
$(i)^{\alpha}+(-i)^{\alpha}=2\cos\alpha\pi/2$. This way, along with the minus
sign introduced by hand in Equation (21), $R_{x}^{\alpha}$ reproduces the
standard derivative $\frac{d^{2}}{dx^{2}}$ for $\alpha=2$ [3, 17].
Other linear combinations of $_{-\infty}D_{x}^{\alpha}f(x)$ and $_{\infty
}D_{x}^{\alpha}f(x)$ have also found use in literature as the Feller
derivative, which gives an additional degree of freedom in terms of a
parameter called the phase or the skewness parameter [3, 17].
\subsubsection{Evaluation of $I_{1}$ and $I_{2}$}
For $I_{1}$ [Eq. (29)] the contour is closed counterclockwise in the upper
half complex $q-$plane over a semicircular path with radius $R,$ and then the
contour detours around the poles on the real axis over semicircular paths of
radius $\delta$ in the upper half $q-$plane. Similarly, the contour goes
around the branch point at the origin with the branch cut located in the lower
half of the $q-$plane. Since $\alpha>0$, the integrand vanishes on the contour
as the radius of the semicircular path over $q=0$ shrinks to zero, hence the
integral over the branch point does not contribute to the integral. In the
limit as $R\rightarrow\infty$, by Jordan's lemma, the contribution coming
from the large semicircle vanishes, thus allowing the evaluation of this
integral as a Cauchy principal value integral in the limit $\delta
\rightarrow0$ as [15 pg. 365]
\begin{align}
PV(I_{1}) & =\ \ \left( \frac{i\pi}{2}\right) \frac{\sin n\pi/2
}{4\cos\alpha\pi/2}\left[ (i^{\alpha}+(-i)^{\alpha})(-1+(-1)^{\alpha
)\sin\left( n\pi x/2a\right) \right. \nonumber\\
& \left. +i(i^{\alpha}+(-i)^{\alpha})(1+(-1)^{\alpha})\cos\left( n\pi
x/2a\right) \right] \\
& =-\left( \frac{\pi\sin n\pi/2}{2}\right) \cos\left( n\pi x/2a\right)
,\text{ }n=1,3,\ldots\text{ }.
\end{align}
Note that one also uses the relation $[i^{\alpha}+(-i)^{\alpha}]=(-1)^{\alpha
}[i^{\alpha}+(-i)^{\alpha}].$
For $I_{2},$ the contour is closed counterclockwise in the lower $q-$plane and
circles around the poles and the branch point in the lower half $q-$plane. For
$I_{2}$ the branch cut is chosen in the upper half $q-$plane and again since
$\alpha>0,$ the integral around the branch point does not contribute to the
integral, thus yielding $PV(I_{2})$ as
\begin{equation}
PV(I_{2})=-\left( \frac{\pi\sin n\pi/2}{2}\right) \cos\left( n\pi
x/2a\right) ,\text{ }n=1,3,\ldots\text{ },
\end{equation}
which leads to the Cauchy principal value of $I$ as the sum
\begin{align}
PV(I) & =PV(I_{1})+PV(I_{2})\nonumber\\
& =-\pi\left( \sin n\pi/2\right) \cos\left( n\pi x/2a\right) ,\text{
}n=1,3,\ldots\text{ }.
\end{align}
When this is substituted back into Equation (17) we get
\begin{align}
\psi_{n}(x) & =-AD_{\alpha}\frac{\sin(n\pi/2)}{E_{n}\pi}\ \left( \frac
{n\pi\hbar}{2a}\right) ^{\alpha}PV(I)\\
& =\frac{AD_{\alpha}\sin^{2}(\frac{n\pi}{2})}{E_{n}}\ \left( \frac{n\pi
\hbar}{2a}\right) ^{\alpha}\cos\frac{n\pi x}{2a}.
\end{align}
Since $E_{n}=D_{\alpha}(\frac{\hslash n\pi}{2a})^{\alpha}$ and $\sin^{2
}(\frac{n\pi}{2})=1$ for odd $n,$ we again obtain the wave function [Eq. (14)]
as
\begin{equation}
\psi_{n}(x)=A\cos\frac{n\pi x}{2a},\text{ }n=1,3,\ldots\text{, }\left\vert
x\right\vert <a,
\end{equation}
which, contrary to Jeng et al. [9] and Hawkins and Schwarz [12],
vanishes at the boundary as $x\rightarrow\pm a$, hence there is no
inconsistency with the solution outside.
\subsection{The case for even $n$}
The proof for the even $n$ values follows along the same lines [10, 11]. We
first write the wave function [Eq. (7)] as
\begin{align}
\psi_{n}(x) & =\left\{
\begin{tabular}
[c]{ccc}
$A\sin\frac{n\pi x}{2a}$ & $;$ & $\left\vert x\right\vert <a$\\
& & \\
$0$ & $;$ & $\left\vert x\right\vert \geqslant a$
\end{tabular}
\ \ \ \ \ \ \ \ \right. ,\\
\text{ }E_{n} & =D_{\alpha}\left( \frac{\hslash n\pi}{2a}\right) ^{\alpha
},\text{ }n=2,4,\ldots\text{ },\nonumber
\end{align}
and then obtain its Fourier transform as
\begin{equation}
\phi_{n}(p)=-\frac{iAn\pi\hbar^{2}(\cos n\pi/2)}{a}\frac{\sin\left(
pa/\hbar\right) }{p^{2}-\left( n\pi\hbar/2a\right) ^{2}},\text{
}n=2,4,\ldots\text{ .
\end{equation}
Now the integral representation of $\psi_{n}(x)$ becomes
\begin{equation}
\psi_{n}(x)=-\ \frac{iAD_{\alpha}\cos(n\pi/2)}{E_{n}\pi}\ \left( \frac
{n\pi\hbar}{2a}\right) ^{\alpha}\int_{-\infty}^{+\infty}dq\text{ }e^{i(n\pi
x/2a)q}\frac{\left\vert q\right\vert ^{\alpha}\sin(n\pi q/2)}{\left(
q^{2}-1\right) \ },
\end{equation}
where we used the substitution $p=(n\pi\hslash/2a)q.$ Finally, using
\begin{equation}
\sin\left( n\pi q/2\right) =\frac{1}{2i}\left( e^{in\pi q/2}-e^{-in\pi
q/2}\right) ,
\end{equation}
and the original definition of the Riesz derivative [Eq. (21)], we write
\begin{equation}
\psi_{n}(x)=-\ \frac{AD_{\alpha}\cos(n\pi/2)}{E_{n}\pi}\ \left( \frac
{n\pi\hbar}{2a}\right) ^{\alpha}I,
\end{equation}
where
\begin{align}
I & =I_{1}-I_{2},\\
I_{1} & =\ \left( \frac{(i)^{\alpha}+(-i)^{\alpha}}{4\cos\alpha\pi
/2}\right) \int_{-\infty}^{+\infty}dq\frac{q^{\alpha}e^{i(\frac{n\pi x
}{2a}+\frac{n\pi}{2})q}}{\ (q+1)(q-1)}\ ,\\
I_{2} & =\ \left( \frac{(i)^{\alpha}+(-i)^{\alpha}}{4\cos\alpha\pi
/2}\right) \int_{-\infty}^{+\infty}dq\frac{q^{\alpha}e^{i(\frac{n\pi x
}{2a}-\frac{n\pi}{2})q}}{\ (q+1)(q-1)}\ .
\end{align}
The Cauchy principal value of $I$ is now found as
\begin{equation}
PV(I)=-\pi\left( \cos n\pi/2\right) \sin\left( n\pi x/2a\right) ,\text{
}n=2,4,\ldots\text{ },
\end{equation}
which when substituted into (42) yields the wave function in (38),
hence again no inconsistency.
This is not surprising at all. In fact, Equations (14) and (17) and similarly
Equations (38) and (40), represent the same wave function, where Equations
(17) and (40) are just the integral representations of $\psi_{n}(x)$ in
Equations (14) and (38) for the odd and the even values of $n$, respectively.
It is true that the Riesz derivative is a non local operator [Eqs. (21-23)]
that requires knowledge of the wave function over the entire space. For the
infinite square well problem, the system is confined to the region $\left\vert
x\right\vert <a$ with $\Psi(x,t)=0$ for $\left\vert x\right\vert \geq a.$
Since the solution for $\left\vert x\right\vert <a$ satisfies the boundary
conditions as $x\rightarrow\pm a,$ the solution inside the well is consistent
with the outside.
\section{Scrutinizing the Riesz Derivative}
Another source for the proposed inconsistency in the infinite square well
solution [Eq. (7)] is that when the Riesz derivative in Equation (21) is
directly calculated by evaluating the integrals in Equations (22) and (23),
the result does not satisfy the space fractional Schr\"{o}dinger equation
[14]. Note that these integrals are now in the configuration space. This
situation is explained by the fact that the Riesz derivative is non local,
hence to find the solution outside the well, one also has to consider the
solution inside [14]. To shed some light on this problem, we now scrutinize
how the different definitions of the Riesz derivative are written and how they
are related and calculated.
\subsection{Riesz Fractional Integral}
To evaluate the integrals in the definition of $R_{x}^{\alpha}f(x)$ [Eqs. (22)
and (23)], we are going to start with the definition of the Riesz fractional
integral, which is defined as [Eqs. (A1) and (A2)]
\begin{align}
R_{x}^{-\alpha}f(x) & =\frac{_{-\infty}D_{x}^{-\alpha}f(x)+_{\infty
}D_{x}^{-\alpha}f(x)}{2\cos\alpha\pi/2},\text{ }\ \alpha>0,\text{ }\alpha
\neq1,3,...,\\
_{-\infty}D_{x}^{-\alpha}f(x) & =\frac{1}{\Gamma(\alpha)}\int_{-\infty
}^{x}(x-x^{\prime})^{\alpha-1}f(x^{\prime})dx^{\prime},\\
_{+\infty}D_{x}^{-\alpha}f(x) & =\frac{1}{\Gamma(\alpha)}\int_{x}^{\infty
}(x^{\prime}-x)^{\alpha-1}f(x^{\prime})dx^{\prime}.
\end{align}
To evaluate the integral in Equation (48), we define the function
\begin{equation}
h_{+}(x)=\left\{
\begin{array}
[c]{ccc}
\frac{x^{\alpha-1}}{\Gamma(\alpha)} & , & x>0\\
& & \\
0 & , & x\leq0
\end{array}
\right. ,
\end{equation}
which allows us to write $_{-\infty}D_{x}^{-\alpha}f(x)$ as the convolution of
$h_{+}(x)$ with $f(x)$
\begin{equation}
_{-\infty}D_{x}^{-\alpha}f(x)=h_{+}(x)\ast f(x).
\end{equation}
It is well known that the Fourier transform of a convolution is equal to the
product of the Fourier transforms of the convolved functions, that is
\begin{equation}
\mathcal{F}\left\{ _{-\infty}D_{x}^{-\alpha}f(x)\right\} =\mathcal{F
}\left\{ h_{+}(x)\right\} \mathcal{F}\left\{ f(x)\right\} .
\end{equation}
Using analytic continuation with an appropriate contour, it is
straightforward to evaluate the Fourier transform of $h_{+}(x)$ as
\begin{equation}
\mathcal{F}\left\{ h_{+}(x)\right\} =\int_{0}^{\infty}\frac
{x^{\alpha-1}}{\Gamma(\alpha)}e^{-i\omega x}dx=(i\omega)^{-\alpha},\text{
}\alpha>0.
\end{equation}
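This transform can be checked numerically by inserting a damping factor $e^{-\epsilon x}$, which makes the integral absolutely convergent and replaces $(i\omega)^{-\alpha}$ by $(\epsilon+i\omega)^{-\alpha}$; the limit $\epsilon\rightarrow0$ recovers the principal branch value. A small mpmath sketch of ours:
\begin{verbatim}
import mpmath as mp

alpha, omega = mp.mpf('1.5'), mp.mpf('2.0')

for eps in (mp.mpf('1'), mp.mpf('0.5'), mp.mpf('0.1')):
    f = lambda x, e=eps: x**(alpha - 1) / mp.gamma(alpha) \
        * mp.exp(-(e + 1j * omega) * x)
    # unit subintervals keep the oscillation well resolved;
    # the tail beyond x = 300 is exponentially negligible
    F = mp.quad(f, mp.linspace(0, 300, 301))
    print(eps, mp.chop(F - (eps + 1j * omega)**(-alpha)))

print((1j * omega)**(-alpha))     # eps -> 0 limit, principal branch
\end{verbatim}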
Assuming that the Fourier transform of $f(x)$ exists:
\begin{equation}
\mathcal{F}\left\{ f(x)\right\} =\int_{-\infty}^{\infty}f(x)e^{-i\omega
x}dx=F(\omega),
\end{equation}
which only demands an absolutely integrable $f(x)$, we obtain the Fourier
transform
\begin{equation}
\mathcal{F}\left\{ _{-\infty}D_{x}^{-\alpha}f(x)\right\} =(i\omega
)^{-\alpha}F(\omega),\text{ }\alpha>0.
\end{equation}
Following similar steps, we define the function
\begin{equation}
h_{-}(x)=\left\{
\begin{array}
[c]{ccc}
0 & , & x\geq0\\
& & \\
\frac{(-x)^{\alpha-1}}{\Gamma(\alpha)} & , & x<0
\end{array}
\right. ,
\end{equation}
with the Fourier transform
\begin{equation}
\mathcal{F}\left\{ h_{-}(x)\right\} =\int_{-\infty}^{0}\frac{(-x)^{\alpha
-1}}{\Gamma(\alpha)}e^{-i\omega x}dx=(-i\omega)^{-\alpha},\text{ }\alpha>0.
\end{equation}
We can now write $_{\infty}D_{x}^{-\alpha}f(x)$ [Eq. (49)] as the convolution
\begin{equation}
_{\infty}D_{x}^{-\alpha}f(x)=h_{-}(x)\ast f(x),
\end{equation}
where its Fourier transform is given as
\begin{align}
\mathcal{F}\left\{ _{\infty}D_{x}^{-\alpha}f(x)\right\} & =\mathcal{F
}\left\{ h_{-}(x)\right\} \mathcal{F}\left\{ f(x)\right\} \\
& =(-i\omega)^{-\alpha}F(\omega),\text{ }\alpha>0.
\end{align}
Using Equations (55) and (60), the Riesz fractional integral, $R_{x}^{-\alpha
}f(x),$ is defined in terms of its Fourier transform as
\begin{align}
\mathcal{F}\left\{ R_{x}^{-\alpha}f(x)\right\} & =\frac{(i\omega
)^{-\alpha}+(-i\omega)^{-\alpha}}{2\cos\alpha\pi/2}F(\omega),\text{
}\alpha>0,\text{ }\alpha\neq1,3,...,\\
& =\left\vert \omega\right\vert ^{-\alpha}F(\omega),\text{ for real }\omega.
\end{align}
Also note that from Equations (47-49),~$R_{x}^{-\alpha}f(x)$ is also the
integral
\begin{equation}
R_{x}^{-\alpha}f(x)=\frac{1}{2\Gamma(\alpha)\cos\alpha\pi/2}\int_{-\infty
}^{\infty}\left\vert x-x^{\prime}\right\vert ^{\alpha-1}f(x^{\prime
})dx^{\prime},\text{ }\alpha>0,\text{ }\alpha\neq1,3,...\text{ }.
\end{equation}
\subsection{Riesz Fractional Derivative}
To evaluate the Riesz fractional derivative, we note that in Equations
(21-23), the Caputo definition of the fractional derivative is used. Since for
sufficiently smooth functions [Eq. (24)]:
\[
f(x),f^{\prime}(x),\ldots,f^{(n-1)}(x)\rightarrow0\text{ as}~x\rightarrow
\pm\infty,
\]
the Caputo and the Riemann-Liouville definitions agree [Eq. (A9)], we can
write Equation (22) as [Eq. (A7)] [3, 16-18]
\begin{align}
_{-\infty}D_{x}^{\alpha}f(x) & =\frac{1}{\Gamma(n-\alpha)}\int_{-\infty
}^{x}(x-x^{\prime})^{-\alpha-1+n}f^{(n)}(x^{\prime})dx^{\prime},\text{
}\alpha>0,\\
& =_{-\infty}\mathbf{I}_{x}^{n-\alpha}f^{(n)}(x)=_{-\infty}D_{x}^{\alpha
-n}f^{(n)}(x).
\end{align}
Note that we have dropped the abbreviations $R-L$ and $C$ in $^{R-L
}D_{x}^{\alpha}$ and $^{C}D_{x}^{\alpha}$. Since $\alpha-n<0,$ we can use our
previous result [Eq. (55)] to obtain [2, 16]
\begin{align}
\mathcal{F}\left\{ _{-\infty}D_{x}^{\alpha}f(x)\right\} & =\mathcal{F
}\left\{ _{-\infty}D_{x}^{\alpha-n}f^{(n)}(x)\right\} \\
& =(i\omega)^{\alpha-n}\mathcal{F}\left\{ f^{(n)}(x)\right\} \\
& =(i\omega)^{\alpha-n}(i\omega)^{n}F(\omega)\\
& =(i\omega)^{\alpha}F(\omega).
\end{align}
The third step [Eq. (68)] is already assured by the smoothness condition [Eq.
(24)]. Similarly, we obtain
\begin{equation}
\mathcal{F}\left\{ _{\infty}D_{x}^{\alpha}f(x)\right\} =(-i\omega)^{\alpha
}F(\omega).
\end{equation}
Therefore, we can write the Fourier transform of the Riesz derivative [Eq.
(21)] as
\begin{equation}
\mathcal{F}\left\{ R_{x}^{\alpha}f(x)\right\} =-\frac{(i\omega)^{\alpha
}+(-i\omega)^{\alpha}}{2\cos\alpha\pi/2}F(\omega),\text{ }\alpha>0,\text{
}\alpha\neq1,3,...,
\end{equation}
where $F(\omega)$ is the Fourier transform of $f(x)$ [Eq. (54)], which makes
use of the values of $f(x)$ over the entire range $x\in(-\infty,\infty)$. For
real $\omega,$ we can also write this as
\begin{equation}
\mathcal{F}\left\{ R_{x}^{\alpha}f(x)\right\} =-\left\vert \omega\right\vert
^{\alpha}F(\omega),
\end{equation}
which was used to write Equations (12), (17) and (40). So far, all we have
assumed is that $f(x)$ is absolutely integrable, so that its Fourier transform
exists, and the smoothness condition in Equation (24). Granted that the inverse
transform exists, the Riesz derivative is defined as
\begin{align}
R_{x}^{\alpha}f(x) & =\mathcal{F}^{-1}\left\{ -\left\vert \omega\right\vert
^{\alpha}F(\omega)\right\} \\
& =-\frac{1}{2\pi}\int_{-\infty}^{\infty}\left\vert \omega\right\vert
^{\alpha}F(\omega)e^{i\omega x}d\omega.
\end{align}
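A convenient numerical realization of Equation (74) is spectral: sample $f$, multiply its FFT by $-\left\vert \omega\right\vert ^{\alpha}$ and transform back. The sketch below (our own illustration) applies this to a Gaussian and checks the $\alpha\rightarrow2$ limit, where $R_{x}^{\alpha}$ must reduce to $d^{2}/dx^{2}$:
\begin{verbatim}
import numpy as np

N, L = 4096, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)   # angular frequencies

def riesz(f, alpha):
    # F{R_x^alpha f} = -|omega|^alpha F(omega), Eq. (72)
    return np.real(np.fft.ifft(-np.abs(k)**alpha * np.fft.fft(f)))

f = np.exp(-x**2)
exact_second = (4 * x**2 - 2) * np.exp(-x**2)   # f''(x)
print("alpha=2 max error:",
      np.max(np.abs(riesz(f, 2.0) - exact_second)))

# a genuinely fractional case, for reference:
print("R^1.5 f at x=0:", riesz(f, 1.5)[np.argmin(np.abs(x))])
\end{verbatim}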
Note that our starting point was the integrals in Equations (22$-$23), hence
using (21), $R_{x}^{\alpha}f(x)$ can also be written as
\begin{gather}
R_{x}^{\alpha}f(x)\ =-\frac{1}{2\Gamma(2-\alpha)\cos\alpha\pi/2}\\
\times\left[ \int_{-\infty}^{x}(x-x^{\prime})^{-\alpha+1}f^{(2)}(x^{\prime
})dx^{\prime}+\int_{x}^{\infty}(x^{\prime}-x)^{-\alpha+1}f^{(2)}(x^{\prime
})dx^{\prime}\right] ,\text{ }1<\alpha<2,\nonumber
\end{gather}
where we have set $n=2$ for $1<\alpha<2.$
It is important to note that Equations (74) and (75) correspond to different
representations of the Riesz derivative, which have the same Fourier
transform. As we have shown, Equation (74) is actually obtained from the
Fourier transform of (75). It is not true to say that non local effects are
incorporated in (75) but not in (74). In Equation (74), the Fourier
transform of $f(x)$ is obtained by integrating over the entire space as
$F(\omega)=\int_{-\infty}^{\infty}f(x)e^{-i\omega x}dx.$ In (74),
$R_{x}^{\alpha}f(x)$ is given in terms of an integral in the frequency
(momentum) space, while in (75), $R_{x}^{\alpha}f(x)$ is given in terms of
integrals in the configuration space. In general, the integrals in both of
these expressions are singular in their respective spaces. Granted that these
singular integrals\ are treated consistently, they should yield the same
result. However, technically, it is easier to work in the momentum space with
Equation (74).
\subsection{Riesz Derivative via the R-L Definition}
In Equation (75) we have used the Caputo fractional derivative for $_{-\infty
}D_{x}^{\alpha}f(x)$ and $_{\infty}D_{x}^{\alpha}f(x)$. If we use the
Riemann-Liouville definition, the Riesz derivative [Eqs. (21$-$23)] becomes
[Eqs. (A4) and (A6); 3, 16$-$18]
\begin{align}
R_{x}^{\alpha}f(x) & =-\frac{_{-\infty}D_{x}^{\alpha}f(x)+_{\infty}
D_{x}^{\alpha}f(x)}{2\cos\alpha\pi/2},\text{ }\alpha>0,\text{ }\alpha
\neq1,3,...,\\
_{-\infty}D_{x}^{\alpha}f(x) & =\frac{1}{\Gamma(2-\alpha)}\frac{d^{2
}{dx^{2}}\int_{-\infty}^{x}(x-x^{\prime})^{-\alpha+1}f(x^{\prime})dx^{\prime
},\\
_{\infty}D_{x}^{\alpha}f(x) & =\frac{1}{\Gamma(2-\alpha)}\frac{d^{2
}{dx^{2}}\int_{x}^{\infty}(x^{\prime}-x)^{-\alpha+1}f(x^{\prime})dx^{\prime},
\end{align}
hence we can also write
\begin{gather}
R_{x}^{\alpha}f(x)\ =-\frac{1}{2\Gamma(2-\alpha)\cos\alpha\pi/2}\\
\times\left[ \frac{d^{2}}{dx^{2}}\int_{-\infty}^{x}(x-x^{\prime})^{-\alpha
+1}f(x^{\prime})dx^{\prime}+\frac{d^{2}}{dx^{2}}\int_{x}^{\infty}(x^{\prime
}-x)^{-\alpha+1}f(x^{\prime})dx^{\prime}\right] .\nonumber
\end{gather}
Using the functions $h_{\pm}(x)$ [Eqs. (50) and (56)] and the convolution
theorem, it is straightforward to show that the Fourier transform of (79) is
still given by Equation (71), or (72) when $\omega$ is real.
\subsection{Source of the Controversy}
The so-called inconsistency problem of the infinite square well in the
configuration space [14] originates from the piecewise evaluation of the
highly singular integrals in Equation (79), which tampers with the integrity
of the Riesz derivative, thus affecting its Fourier transform. For example,
for a point outside the well, say $x\geq a,$ if we write the Riesz derivative
[Eq. (79)] as
\begin{gather}
R_{x}^{\alpha}\psi_{n}(x)\ =-\frac{1}{2\Gamma(2-\alpha)\cos\alpha\pi
/2}\nonumber\\
\times\left\{ \left[ \frac{d^{2}}{dx^{2}}\int_{-\infty}^{-a}(x-x^{\prime
})^{-\alpha+1}\psi_{n}(x^{\prime})dx^{\prime}+\frac{d^{2}}{dx^{2}}\int
_{-a}^{a}(x-x^{\prime})^{-\alpha+1}\psi_{n}(x^{\prime})dx^{\prime}\right.
\right. \nonumber\\
\left. +\frac{d^{2}}{dx^{2}}\int_{a}^{x}(x-x^{\prime})^{-\alpha+1}\psi
_{n}(x^{\prime})dx^{\prime}\right] \nonumber\\
+\left. \left[ \frac{d^{2}}{dx^{2}}\int_{x}^{\infty}(x^{\prime
}-x)^{-\alpha+1}\psi_{n}(x^{\prime})dx^{\prime}\right] \right\} ,\text{ }1<\alpha<2,
\end{gather}
and then substitute the square well solution [Eq. (7)], we obtain
\begin{equation}
R_{x}^{\alpha}\psi_{n}(x)\ =-\frac{1}{2\Gamma(2-\alpha)\cos\alpha\pi/2}\left[
\frac{d^{2}}{dx^{2}}\int_{-a}^{a}(x-x^{\prime})^{-\alpha+1}\psi_{n}(x^{\prime
})dx^{\prime}\right] ,\text{ }x\geq a.
\end{equation}
The above expression gives the values of the Riesz derivative outside the
well, $x\geq a$, in terms of an integral that only makes use of the values of
the wave function inside the well. In general, the $R_{x}^{\alpha}\psi_{n}(x)$
given above for $x\geq a$ does not vanish, hence does not satisfy the space
fractional Schr\"{o}dinger equation [Eq. (6)] for $x\geq a$. This implies a
potential problem for the infinite square well solution [14]. Note that to
write Equation (81), we have used the fact that the wave function outside is
zero. Thus, along with the first and the third integrals in Equation (80), we
have set the last integral to zero [14]. Even though this procedure looks
reasonable, what it essentially does is to set the fractional derivative
$_{\infty}D_{x}^{\alpha}\psi_{n}(x)$ to zero for $x\geq a$, that is,
\begin{align}
_{\infty}D_{x}^{\alpha}\psi_{n}(x)\ & =\frac{1}{\Gamma(2-\alpha)}\frac
{d^{2}}{dx^{2}}\int_{x}^{\infty}(x^{\prime}-x)^{-\alpha+1}\psi
_{n}(x^{\prime})dx^{\prime}\\
& =0,\text{ }x\geq a,
\end{align}
thus the Fourier transform $\mathcal{F}\left\{ _{\infty}D_{x}^{\alpha}
\psi_{n}(x)\right\} $ is also set to zero for $x\geq a$. However, in the
definition of the Riesz derivative [Eqs. (21$-$23)], the Fourier transform of
$_{\infty}D_{x}^{\alpha}\psi_{n}(x),$ for all $x,$ is given as [Eq. (70)]
\begin{equation}
\mathcal{F}\left\{ _{\infty}D_{x}^{\alpha}\psi_{n}(x)\ \right\}
=(-i\omega)^{\alpha}\Phi_{n}(\omega),
\end{equation}
where $\Phi_{n}(\omega)$ is the Fourier transform of the entire solution,
$\psi_{n}(x)$, not just the solution for $x\geq a$.
Similarly, this procedure also tampers with the Fourier transform of
$_{-\infty}D_{x}^{\alpha}\psi_{n}(x)$, thus the Fourier transform of the
derivative in (81) is not what it should be, that is, $\mathcal{F}\left\{
R_{x}^{\alpha}f(x)\right\} =-\left\vert \omega\right\vert ^{\alpha}
F(\omega),$ which is the basic definition of the Riesz derivative used in the
space fractional Schr\"{o}dinger equation.
Similarly, the expressions for $x\leq-a$ and $\left\vert x\right\vert <a$ can
be written as [14]
\begin{align}
R_{x}^{\alpha}\psi_{n}(x)\ & =-\frac{1}{2\Gamma(2-\alpha)\cos\alpha\pi
/2}\left[ \frac{d^{2}}{dx^{2}}\int_{-a}^{a}(x^{\prime}-x)^{-\alpha+1}\psi
_{n}(x^{\prime})dx^{\prime}\right] ,\text{ }x\leq-a,\\
R_{x}^{\alpha}\psi_{n}(x)\ & =-\frac{1}{2\Gamma(2-\alpha)\cos\alpha\pi
/2}\left[ \frac{d^{2}}{dx^{2}}\int_{-a}^{a}\left\vert x-x^{\prime}\right\vert
^{-\alpha+1}\psi_{n}(x^{\prime})dx^{\prime}\right] ,\text{ }\left\vert
x\right\vert <a.
\end{align}
Note that Equation (81) can also be written as
\begin{align}
R_{x}^{\alpha}\psi_{n}(x) & =-\frac{1}{2\Gamma(2-\alpha)\cos\alpha\pi/2}
\frac{d^{2}}{dx^{2}}\int_{-a}^{a}\frac{\psi_{n}(x^{\prime})}{(x-x^{\prime
})^{\alpha-1}}dx^{\prime},\text{ }x\geq a,\\
& =-\frac{(-\alpha+1)(-\alpha)}{2\Gamma(2-\alpha)\cos\alpha\pi/2}\int
_{-a}^{a}\frac{\psi_{n}(x^{\prime})}{(x-x^{\prime})^{\alpha+1}}dx^{\prime},\\
& =-\frac{1}{2\Gamma(-\alpha)\cos\alpha\pi/2}\int_{-a}^{a}\frac{\psi
_{n}(x^{\prime})}{(x-x^{\prime})^{\alpha+1}}dx^{\prime}.
\end{align}
This result was used in [14], where it was obtained by using another
representation of the Riesz derivative,
\begin{equation}
R_{x}^{\alpha}f(x)=\ \frac{\Gamma(1+\alpha)\sin\alpha\pi/2}{\pi}\ \int
_{0}^{\infty}\frac{f(x+x^{\prime})-2f(x)+f(x-x^{\prime})}{x^{\prime\alpha+1
}dx^{\prime},
\end{equation}
which is also valid for $\alpha=1.$ This representation is obtained by writing
$_{-\infty}D_{x}^{\alpha}f(x)$ and $_{\infty}D_{x}^{\alpha}f(x)$ in Equations
(77) and (78) as [3]
\begin{align}
_{-\infty}D_{x}^{\alpha}f(x) & =\frac{\alpha}{\Gamma(1-\alpha)}\int
_{0}^{\infty}\frac{f(x)-f(x-x^{\prime})}{x^{\prime\alpha+1}}dx^{\prime},\\
_{\infty}D_{x}^{\alpha}f(x) & =-\frac{\alpha}{\Gamma(1-\alpha)}\int
_{0}^{\infty}\frac{f(x+x^{\prime})-f(x)}{x^{\prime\alpha+1}}dx^{\prime}.
\end{align}
Similarly, for $x\leq-a$ and $\left\vert x\right\vert <a,$ we obtain the
expressions used in [14] as
\begin{align}
R_{x}^{\alpha}\psi_{n}(x)\ & =-\frac{1}{2\Gamma(-\alpha)\cos\alpha\pi
/2}\left[ \int_{-a}^{a}\frac{\psi_{n}(x^{\prime})}{(x^{\prime}-x)^{\alpha+1
}dx^{\prime}\right] ,\text{ }x\leq-a,\\
R_{x}^{\alpha}\psi_{n}(x)\ & =-\frac{1}{2\Gamma(2-\alpha)\cos\alpha\pi
/2}\left[ \frac{d^{2}}{dx^{2}}\int_{-a}^{a}\left\vert x-x^{\prime}\right\vert
^{-\alpha+1}\psi_{n}(x^{\prime})dx^{\prime}\right] ,\text{ }\left\vert
x\right\vert <a.
\end{align}
In summary, the Riesz derivative can be evaluated by using Equation (74),
which involves an integration in the frequency (momentum) space. We have shown
that for the infinite square well problem, the use of Equation (74) gives
consistent results. We can also use the representations in Equations (75) or
(79), which involve integrals in configuration space. What is important is
that a consistent treatment of all the representations of the Riesz derivative
should yield the same Fourier transform, that is, $\mathcal{F}\left\{
R_{x}^{\alpha}f(x)\right\} =-\left\vert \omega\right\vert ^{\alpha}
F(\omega).$
\section{Conclusions}
Using the convolution theorem we have demonstrated how the frequency
(momentum) space representation of the Riesz derivative [Eq. (74)]:
\begin{equation}
R_{x}^{\alpha}f(x)=-\frac{1}{2\pi}\int_{-\infty}^{\infty}\left\vert
\omega\right\vert ^{\alpha}F(\omega)e^{i\omega x}d\omega,
\end{equation}
is obtained from the integral representations in the configuration space [Eq.
(75)]
\begin{gather}
R_{x}^{\alpha}f(x)\ =-\frac{1}{2\Gamma(2-\alpha)\cos\alpha\pi/2}\\
\times\left[ \int_{-\infty}^{x}(x-x^{\prime})^{-\alpha+1}f^{(2)}(x^{\prime
})dx^{\prime}+\int_{x}^{\infty}(x^{\prime}-x)^{-\alpha+1}f^{(2)}(x^{\prime
})dx^{\prime}\right] ,\text{ }1<\alpha<2,\nonumber
\end{gather}
and similarly from [Eq. (79)]
\begin{gather}
R_{x}^{\alpha}f(x)\ =-\frac{1}{2\Gamma(2-\alpha)\cos\alpha\pi/2}\\
\times\left[ \frac{d^{2}}{dx^{2}}\int_{-\infty}^{x}(x-x^{\prime})^{-\alpha
+1}f(x^{\prime})dx^{\prime}+\frac{d^{2}}{dx^{2}}\int_{x}^{\infty}(x^{\prime
}-x)^{-\alpha+1}f(x^{\prime})dx^{\prime}\right] ,\text{ }1<\alpha<2.\nonumber
\end{gather}
Granted that $f(x)$ is absolutely integrable and the smoothness condition in
Equation (24) is satisfied, all the above representations of the Riesz
derivative agree and have the same Fourier transform. The first definition is
given in the frequency (momentum) domain while the others are in the
configuration space.
For the infinite square well, the controversy proposed in [9, 12] is based on
the use of the momentum space definition in Equation (95). In Sections II and
III, we have shown that if the relevant integrals are evaluated as Cauchy
principal value integrals, there is no inconsistency.
As for the inconsistency of the infinite well solution proposed in terms of
the configuration space definitions of the Riesz derivative [14], the
segmented evaluation of these integrals leads to the Riesz derivative in
Equation (81) for $x\geq a$, (85) for $x\leq-a$ and (86) for $\left\vert
x\right\vert <a$. Substituting the eigenfunctions [Eq. (7)] into Equations
(89), (93) and (94) we obtain
\begin{align}
R_{x}^{\alpha}\psi_{n}(x) & =F_{1}(x)=-\frac{1}{2\Gamma(-\alpha)\cos
\alpha\pi/2}\int_{-a}^{a}\frac{\psi_{n}(x^{\prime})}{(x-x^{\prime})^{\alpha
+1}}dx^{\prime}\\
& =-\frac{A}{2\Gamma(-\alpha)\cos\alpha\pi/2}\int_{-a}^{a}\frac{\sin
\frac{n\pi}{2a}(x^{\prime}+a)}{(x-x^{\prime})^{\alpha+1}}dx^{\prime},\text{ }x\geq a
\end{align}
and
\begin{align}
R_{x}^{\alpha}\psi_{n}(x) & =F_{2}(x)=-\frac{A}{2\Gamma(-\alpha)\cos
\alpha\pi/2}\left[ \int_{-a}^{a}\frac{\sin\frac{n\pi}{2a}(x^{\prime
}+a)}{(x^{\prime}-x)^{\alpha+1}}dx^{\prime}\right] ,\text{ }x\leq-a,\\
R_{x}^{\alpha}\psi_{n}(x) & =F_{3}(x)=-\frac{A}{2\Gamma(2-\alpha)\cos
\alpha\pi/2}\left[ \frac{d^{2}}{dx^{2}}\int_{-a}^{a}\frac{\sin\frac{n\pi
}{2a}(x^{\prime}+a)}{\left\vert x-x^{\prime}\right\vert ^{\alpha-1}}dx^{\prime
}\right] ,\text{ }\left\vert x\right\vert <a,
\end{align}
which are used in [14] to argue for inconsistency. In these expressions,
$F_{1}(x),$ $F_{2}(x)$ and $F_{3}(x)$ are functions of $x,$ in their
respective intervals. However, since all the integrands are singular at the
end points, none of these functions is well defined, and thus the integrals do
not exist in the Riemann sense. In this regard, their Fourier transforms do
not exist. The segmented evaluation of the integrals destroys the wholeness in
the definition of the Riesz derivative, hence does not yield the correct
Fourier transform.
In other words, what the above procedure yields in Equations (99$-$101) is not
the Riesz derivative used in the space fractional Schr\"{o}dinger equation. It
does not have the correct Fourier transform. It has to be kept in mind that
the Riesz derivative is basically defined in terms of its Fourier transform,
which is equal to the logarithm of the characteristic function of the L\'{e}vy
probability distribution function. This is in keeping with one of the basic
premises of quantum mechanics, which says that the wave functions in
position and momentum spaces are related to each other through a Fourier
transform. This is also reflected in the fact that the space fractional
Schr\"{o}dinger equation follows from the Feynman path integral formulation of
quantum mechanics over L\'{e}vy paths.
\noindent \textbf{Definition of the Yamabe operator $L_g$, eigenvalues of $L_g$, smooth Yamabe invariant $\sigma(M)$}\\
Let $(M,g) $ be a compact Riemannian manifold of dimension $n\geq 3$. We denote the scalar curvature by $\Scal_g$.
Let us define
$$\mu(M,g):= \inf_{\tilde{g} \in [g]} \int_M \Scal_{\tilde{g}} dv_{\tilde{g}} \left( \Vol_{\tilde{g}}(M) \right)^{-(n-2)/n}$$
and
$$\sigma(M):= \sup_g \mu(M,g)$$
where, in the definition of $\mu(M,g)$, the infimum runs over all the
metrics $\tilde{g}$ in the conformal class $[g]$ of $g$ and
where, in the definition of $\sigma(M)$, the supremum is taken over all the Riemannian metrics $g$ on $M$. The number $\mu(M,g)$,
also denoted by $\mu(g)$ when there is no ambiguity, is called the
{\it Yamabe constant} while $\sigma(M)$ is called the {\it Yamabe invariant}. The Yamabe constant played a crucial role
in the solution of the Yamabe problem
solved between 1960 and 1984 by Yamabe, Trudinger, Aubin and Schoen. This problem consists in finding a metric $\widetilde g$ conformal to
$g$ such that the scalar curvature $\Scal_{\widetilde g}$ of $\widetilde g$ is constant.
For more information, the reader may refer to \cite{lee.parker:87, hebey:97, Aubin:98}.
An important geometric meaning of $\mu(M,g)$ and $\sigma(M)$ is contained in the following well-known result:
\begin{prop}
Let $M$ be a compact differentiable manifold of dimension $n\geq 3$. Then,
\begin{itemize}
\item if $g$ is a Riemannian metric on $M$, the conformal class $[g]$ of $g$ contains a metric of positive scalar curvature if and only if $\mu(M,g)>0$.
\item $M$ carries a metric $g$ with positive scalar curvature if and only if $\sigma(M)>0$.
\end{itemize}
\end{prop}
\noindent Classifying compact manifolds admitting a positive scalar curvature metric is a hard open problem which has been studied by many mathematicians.
Significant progress was made thanks to surgery techniques.
We recall briefly that a surgery on $M$ is the procedure of constructing from $M$ a new manifold
$$N:= M\setminus {S^k\times B^{n-k}} \cup_{S^k\times S^{n-k-1}} \bar{B}^{k+1}\times S^{n-k-1},$$
by removing the interior of $S^k\times B^{n-k}$ and gluing in $\bar{B}^{k+1}\times S^{n-k-1}$ along the common boundary.
Gromov-Lawson and Schoen-Yau proved in \cite{gromovlawson} and \cite{schoenyau} the following
\begin{thm}
Let $M$ be a compact manifold of dimension $n \geq 3$ such that $\si(M) >0$.
Assume that $N$ is obtained from $M$ by a surgery of dimension
$k$ ($0 \leq k \leq n-3$). Then, $ \si(N) >0$.
\end{thm}
\noindent Using cobordism techniques, one deduces:
\begin{corollary}
Every simply connected, non-spin manifold $M$ of dimension $n\geq 5$ carries a metric of positive scalar curvature.
\end{corollary}
\noindent Later, Kobayashi \cite{kobayashi} and Petean-Yun \cite{peteanyun} obtained new surgery formulas for $\sigma(M)$. These works were generalized by B. Ammann, M. Dahl and E. Humbert in \cite{ammann.dahl.humbert:08} where they proved in particular
\begin{thm} \label{adhl}
If $N$ is obtained from $M$ by a surgery of dimension $0\leq k \leq n-3$, then
$$\sigma(N)\geq \min(\sigma (M), \Lambda_n),$$
where $\Lambda_n$ is a positive constant depending only on $n$.
\end{thm}
\noindent As a corollary, they obtained the following
\begin{corollary} \label{coradh}
Let $M$ be a simply connected compact manifold of dimension $n\geq 5$. Then one of the following assertions is satisfied:\\
\begin{enumerate}
\item $\sigma(M) = 0$ (which implies that $M$ is spin);
\item $\sigma (M)\geq \alpha_n$, where $\alpha_n$ is a positive constant depending only on $n$.
\end{enumerate}
\end{corollary}
\noindent Now, let us define the \emph{Yamabe operator} or \emph{conformal Laplacian}
$$L_g:= a\Delta_g + \Scal_g,$$
where $a = \frac{4(n-1)}{n-2}$ and where $\Delta_g$ is the Laplace-Beltrami operator. The operator $L_g$ is an elliptic differential operator of second order
whose spectrum is discrete:
$$\Spec(L_g) = \{{\lambda}_1(g),{\lambda}_2(g),\cdots\},$$
where ${\lambda}_1(g)< {\lambda}_2(g)\leq \cdots$ are the eigenvalues of $L_g$.
The variational characterization of ${\lambda}_i(g)$ is given by
$${\lambda}_i(g) = \inf_{V\in Gr_i(H_1^2(M))}\sup_{v\in V\setminus\{0\}}\frac{\int_M vL_gv\, dv_g}{\int_M v^2\,dv_g},$$
where $Gr_i(H_1^2(M))$ stands for the Grassmannian of $i$-dimensional subspaces of $H_1^2(M).$ One important property of the eigenvalues of $L_g$ is that their sign is a conformal invariant; in particular, the sign of ${\lambda}_1(g)$ is equal to the sign of the Yamabe constant (see \cite{elsayed}). Consequently,
a compact manifold $M$ possesses a metric with positive
${\lambda}_1$ if and only if it admits a positive scalar curvature metric. \\
\noindent Now, if $\mu(M,g)\geq 0,$ it is easy to check that
\begin{eqnarray} \label{defmu}
\mu(M,g) = \inf_{\widetilde g\in \left[ g\right]} {\lambda}_1(\widetilde g) \Vol
(M,\widetilde g)^\frac{2}{n},
\end{eqnarray}
where $\left[ g\right]$ is the conformal class of $g$ and ${\lambda}_1$ is the first
eigenvalue of the Yamabe operator $L_g$. Inspired by these definitions,
one can define the \emph{second Yamabe constant} and the \emph{second Yamabe invariant} by
$$\mu_2(M,g)= \inf_{\widetilde g\in \left[ g\right]} {\lambda}_2(\widetilde g) \Vol
(M,\widetilde g)^\frac{2}{n},$$
and
$$\si_2(M) = \sup_g \mu_2(M,g). $$
\noindent The second Yamabe constant $\mu_2(M,g)$ (or $\mu_2(g)$ when there is no ambiguity) was introduced and studied in \cite{ammannhumbert} when $\mu(M,g) \geq 0$. This study was enlarged in \cite{elsayed} where we started to investigate the relationships between the sign of the second eigenvalue of the Yamabe operator $L_g$ and the existence of nodal
solutions of the equation $L_g u = \ep |u|^{N-2}u,$ where $\ep = -1, 0, +1$. The present paper establishes a surgery formula for $\si_2(M)$ in the spirit of Theorem \ref{adhl}. More precisely, our main result is the following
\begin{thm} \label{mainthm}
Let $M$ be a compact manifold of dimension $n\geq 3$ such that $\sigma_2(M)>0$.
Assume that $N$ is obtained from $M$ by a surgery of dimension $0\leq k\leq n-3$, then we have
$$\sigma_2(N)\geq \min(\sigma_2(M), \Lambda_n),$$
where $\Lambda_n$ is a positive constant depending only on $n$.
\end{thm}
\noindent Note that B\"ar and Dahl in \cite{baerdahl} proved a surgery formula for the spectrum of the Yamabe operator with interesting topological consequences. \\
\noindent The proof of Theorem \ref{mainthm} is inspired by the one of Theorem \ref{adhl} but some new difficulties arise here. Let us recall the strategy: first, we fix a metric $g$ on $M$ such that $\mu_2(M,g)$ is close to $\sigma_2(M)$.
Then the goal is to construct on $N$ a sequence of metrics $g_\th$ such that
$$ \liminf_{\th \to 0} \mu_2(N,g_\th) \geq \min(\mu_2(M,g), \Lambda_n)$$
where $\Lambda_n >0$ depends only on $n$ (see Theorem \ref{theoremprincipal}). Surprisingly, if $\mu(M,g) =0$, we are not able to prove Theorem \ref{theoremprincipal} directly. So the first step is to show that one can assume that $\mu(M,g) \not= 0$ (see Paragraph \ref{munot0}).
Here, we use exactly the same metrics as in \cite{ammann.dahl.humbert:08} and rely on many of their properties established there. The proof consists in studying the first and second eigenvalues ${\lambda}_1(u_\th^{N-2} g_\th)$ and ${\lambda}_2(u_\th^{N-2} g_\th)$ of $L_{u_\th^{N-2} g_\th}$ where $u_\th$ is such that
$$\mu_2(g_\th) = {\lambda}_2(u_\th^{N-2} g_\th) \vol_{u_\th^{N-2} g_\th}(N)^{2/n},$$
or in other words, $u_\th$ is such that the metric $u_\th^{N-2} g_\th$ achieves the infimum in the definition of $\mu_2(N,g_\th)$.
Two main difficulties arise in this situation:
\begin{itemize}
\item Contrary to what happened in \cite{ammann.dahl.humbert:08}, we could not show that ${\lambda}_1(u_\th^{N-2} g_\th)$ and ${\lambda}_2(u_\th^{N-2} g_\th)$ are bounded.
\item The proof of Theorem \ref{adhl} consisted in obtaining good ``limit equations''. The difficulty here is to ensure that
$$\lim_\th {\lambda}_1(u_\th^{N-2} g_\th) \not= \lim_\th {\lambda}_2(u_\th^{N-2} g_\th).$$
\end{itemize}
The way to overcome these difficulties is to proceed in two steps: the first one is to show that $ {\lambda}_2(u_\th^{N-2} g_\th)>0$. In a second step, we are able to get the desired inequality.\\
\noindent Let us now come back to Theorem \ref{mainthm}. Standard cobordism techniques allow one to deduce the following corollary
\begin{corollary}\label{cor}
Let $M$ be a compact, spin, connected and simply connected manifold of dimension $n\geq 5$ with $n\equiv 0, 1, 2, 4$ mod $8$.
If $|\alpha(M)|\leq 1$, then
$$\sigma_2(M)\geq \alpha_n,$$
where $\alpha_n$ is a positive constant depending only on $n$ and $\alpha(M)$ is the $\alpha$-genus of $M$ (see Section \ref{topologicalpart}).
\end{corollary}
\noindent When $M$ is not spin, the conclusion of the corollary still holds but is a direct application of Corollary \ref{coradh} and the fact that $\sigma_2(M)\geq \sigma(M)$. Note that:\\
\noindent $\bullet$ In dimensions $1, 2$ mod $8$, $\alpha(M)\in \mathbb{Z}/{2\mathbb{Z}}$ and hence the condition on the $\alpha$-genus $|\alpha(M)|\leq 1$ is always satisfied. We then obtain that on any connected, simply connected manifold (not necessarily spin) of dimension $n\equiv 1, 2$ mod $8$
$$\sigma_2(M)\geq \alpha_n,$$
for some $\alpha_n>0$ depending only on $n$.\\
$\bullet$ In dimensions $0$ mod $8$, when $M$ is spin, $\alpha (M)= \hat{A}(M),$ where $\hat{A}$ is the $\hat{A}$-genus. Hence, if $M$ is a connected, simply connected manifold (not necessarily spin) of dimension $n\equiv 0$ mod $8$ with $|\hat{A}(M)|\leq 1$, then
$$\sigma_2(M)\geq \alpha_n,$$
where $\alpha_n$ is a positive constant depending only on $n$.\\
$\bullet$ In dimensions $4$ mod $8$, when $M$ is spin, we have $\alpha(M) = \frac{1}{2}\hat{A}(M)$. When $M$ is spin and $|\hat{A}(M)|\leq 2$, we get that
$|\alpha(M)|\leq 1$ and consequently, for any connected, simply connected manifold $M$ (not necessarily spin) of dimension $n\geq 5$, $n \equiv 4$ mod $8$,
with $|\hat{A}(M)|\leq 2$, we obtain that
$$\sigma_2(M)\geq \alpha_n,$$
where $\alpha_n$ is a positive constant depending only on $n$.\\
\textbf{Acknowledgements:} I would like to thank Emmanuel Humbert for his encouragement, support and remarks throughout this work. I am also very grateful to Bernd Ammann, Mattias Dahl, Romain Gicquaud and Andreas Hermann for their remarks and their suggestions.
\section{Joining manifolds along a submanifold} \label{joining_man}
\subsection{Surgery on manifolds}
\begin{definition}
A surgery on a $n$-dimensional manifold $M$ is the procedure of constructing a new $n$-dimensional manifold
$$N = (M\setminus f(S^k\times B^{n-k}))\cup (\overline{B}^{k+1}\times S^{n-k-1})/\sim,$$
by cutting out $f(S^k\times B^{n-k})\subset M$ and replacing it by $\overline{B}^{k+1}\times S^{n-k-1},$
where $f: S^k\times \overline {B^{n-k}}\rightarrow M$ is a smooth embedding which preserves the orientation and $\sim$ means that we paste along the boundary. Then, we construct on the topological space $N$ a differential structure and an orientation that make it a differentiable manifold such that the following inclusions
$$M\setminus f(S^k\times B^{n-k})\subset N,$$
and
$$\overline{B^{k+1}}\times S^{n-k-1}\subset N$$
preserve the orientation. We say that $N$ is
obtained from $M$ by a surgery of dimension $k$ and we will
denote $M \stackrel{k}{\rightarrow} N.$
\end{definition}
\noindent Surgery can be considered from another point of view. In
fact, it is a special case of the connected sum: We paste $M$ and $S^n$
along a $k$-sphere. In this section we describe how two manifolds are joined
along a common submanifold with trivialized normal bundle. Strictly
speaking this is a differential topological construction, but since we
work with Riemannian manifolds we will make the construction adapted
to the Riemannian metrics and use distance neighborhoods defined by
the metrics etc.
Let $(M_1,g_1)$ and $(M_2,g_2)$ be complete Riemannian manifolds of
dimension $n$. Let $W$ be a compact manifold of dimension $k$, where
$0 \leq k \leq n$. Let $\bar{w}_i: W \times \mR^{n-k} \to TM_i$,
$i=1,2$, be smooth embeddings. We assume that $\bar{w}_i$ restricted
to $W \times \{ 0 \}$ maps to the zero section of $TM_i$ (which we
identify with $M_i$) and thus gives an embedding $W \to M_i$. The
image of this embedding is denoted by $W_i'$. Further we assume that
$\bar{w}_i$ restricts to a linear isomorphism $\{ p \} \times \mR^{n-k}
\to N_{\bar{w}_i(p,0)} W_i'$ for all $p \in W$, where $N W_i'$
denotes the normal bundle of $W_i'$ defined using $g_i$.
We set $w_i \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} \exp^{g_i} \circ \bar{w}_i$. This gives
embeddings $w_i: W \times B^{n-k}(R_{\textrm{max}}) \to M_i$ for some
$R_{\textrm{max}} > 0$ and $i=1,2$. We have $W_i' = w_i(W \times \{ 0 \})$ and we
define the disjoint union
$$
(M,g) \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} (M_1 \amalg M_2, g_1 \amalg g_2),
$$
and
$$
W' \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} W_1' \amalg W_2'.
$$
Let $r_i$ be the function on $M_i$ giving the distance to $W_i'$.
Then $r_1 \circ w_1 (p,x) = r_2 \circ w_2(p,x) = |x|$ for $p \in W$,
$x \in B^{n-k}(R_{\textrm{max}})$. Let $r$ be the function on $M$ defined by
$r(x) \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} r_i(x)$ for $x \in M_i$, $i=1,2$. For $0 < \ep$ we
set $U_i(\ep) \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} \{ x \in M_i \, : \, r_i(x) < \ep \}$ and
$U(\ep) \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} U_1(\ep) \cup U_2(\ep)$. For $0 < \ep < \th$ we
define
$$
N_{\ep}
\mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=}
( M_1 \setminus U_1(\ep) ) \cup ( M_2 \setminus U_2(\ep) )/ {\sim},
$$
and
$$
U^N_\ep (\th)
\mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=}
(U(\th) \setminus U(\ep)) / {\sim}
$$
where ${\sim}$ indicates that we identify $x \in \partial U_1(\ep)$
with $w_2 \circ w_1^{-1} (x) \in \partial U_2(\ep)$. Hence
$$
N_{\ep}
=
(M \setminus U(\th) ) \cup U^N_\ep (\th).
$$
\noindent We say that $N_\ep$ is obtained from $M_1$, $M_2$ (and $\bar{w}_1$,
$\bar{w}_2$) by a connected sum along $W$ with parameter $\ep$.
\noindent The diffeomorphism type of $N_\ep$ is independent of $\ep$, hence we
will usually write $N = N_\ep$. However, in situations when dropping
the index causes ambiguities, we will keep the notation $N_\ep$. For
example the function $r: M \to [0,\infty)$ gives a continuous function
$r_\ep: N_\ep \to [\ep, \infty)$ whose domain depends on $\ep$. It is
also going to be important to keep track of the subscript $\ep$ on
$U^N_\ep (\th)$ since crucial estimates on solutions of the Yamabe
equation will be carried out on this set.
\noindent The surgery operation on a manifold is a special case of taking
connected sum along a submanifold. Indeed, let $M$ be a compact
manifold of dimension $n$ and let $M_1 = M$, $M_2 = S^n$,
$W = S^k$. Let $w_1 : S^k \times B^{n-k} \to M$ be an embedding
defining a surgery and let $w_2 : S^k \times B^{n-k} \to S^n$ be the
canonical embedding. Since $ S^n \setminus w_2 (S^k \times B^{n-k})$ is
diffeomorphic to $\overline{B^{k+1}} \times S^{n-k-1}$ we have in this situation
that $N$ is obtained from $M$ using surgery on $w_1$, see
\cite[Section VI, 9]{kosinski:93}.
\section{The constants $\Lambda_{n,k}$}\label{lambdank}
\subsection{Definition of $\Lambda_{n,k}$} In this paragraph, we define some constants $\Lambda_{n,k}$ in the same way as in \cite{ammann.dahl.humbert:08}. The only difference is that the functions we consider are not necessarily positive. More precisely,
let $(M,h)$ be a Riemannian manifold of dimension $n\geq 3$. For $i = 1, 2$ we denote by $\Omega^{(i)}(M,h)$ the set of $C^2$ functions $v$ (not necessarily positive) which are solutions of the equation
$$L_h v = \mu |v|^{N-2} v,$$
where $\mu \in \mR$. We assume that $v$ satisfies\\
\begin{eqnarray*}
&\bullet& v\not\equiv 0,\\
&\bullet& \|v\|_{L^N(M)}\leq 1,\\
&\bullet& v\in L^{\infty}(M),\\
\end{eqnarray*}
together with \\
\begin{eqnarray*}
&\bullet& v\in L^2(M), \text{ for } i = 1,\\
or\\
&\bullet& \mu\|v\|_{L^\infty(M)}^{N-2}\geq \frac{(n-k-2)^2(n-1)}{8(n-2)}, \text{ for } i = 2.
\end{eqnarray*}
For $i = 1, 2$, we set
$$\mu^{(i)}(M,h) := \inf_{v\in \Omega^{(i)}(M,h)} \mu(v).$$\\
If $\Omega^{(i)}(M, h)$ is empty, we set $\mu^{(i)} = \infty.$
\begin{definition}
For $n\geq 3$ and $0\leq k \leq n-3$, we define
$$\Lambda_{n,k}^{(i)} := \inf_{c\in [-1,1]}\mu^{(i)}({\mathbb H}_c^{k+1} \times {\mathbb S}^{n-k-1}),$$
and
$$\Lambda_{n, k} := \min(\Lambda_{n,k}^{(1)}, \Lambda_{n, k}^{(2)}),$$
where $${\mathbb H}_c^{k+1} := ({\mathbb R}^k\times \mathbb R, \ \eta_c^{k+1} = e^{2ct}\xi^k + dt^2).$$
\end{definition}
\noindent When considering only positive functions $v$, B. Ammann, M. Dahl and E. Humbert proved in \cite{ammann.dahl.humbert:08} that these constants are positive. It is straightforward to see that the positivity of $v$ plays no role in their proof and hence it remains true that $\Lambda_{n,k}>0$.
They also gave explicit positive lower bounds for these constants, and many of their techniques still hold in this context, but we will not discuss this fact here. For more information, the reader may refer to \cite{ammanndahlhumbertlow}, \cite{ammanndahlhumbertsquare} and \cite{ammanndahlhumbetyamabeconstant}.
\section{Limit spaces and limit solutions}
\begin{lemma}\label{vtheta}
Let $M$ be an $n$-dimensional manifold. Let $(g_\th)$ be a sequence of metrics which converges to a metric $g$ in $C^2$ on every compact set $K\subset M$ as $\th\to 0$. Assume that $v_\th$ is a sequence of functions such that $\|v_\th\|_{L^\infty(M)}$ is bounded and
$\|L_{g_\th} v_\th\|_{L^\infty(M)}$ tends to $0$. Then, there exists a smooth function $v$ solution of the equation
$$L_g v = 0$$
such that $v_\th$ tends to $v$ in $C^1$ on each compact set $K \subset\subset M$.
\end{lemma}
\noindent \textbf{Proof:} Let $K, K'$ be compact sets of $M$ such that $K'\subset K$, we have
$$-g_\th^{ij}\left(\partial_i\partial_j v_\th-\Gamma_{ij}^k\partial_k v_\th\right)+\frac{n-2}{4(n-1)}\Scal_{g_\th} v_\th = f_\th\to 0.$$
Using Theorem 9.11 in \cite{gilbarg.trudinger:77}, one easily checks that
$$\|v_\th\|_{H^{2,p}(K',g)}\leq C(\|L_{g_\th} v_\th\|_{L^p(K,g_\th)} + \|v_\th\|_{L^p(K,g_\th)}).$$
It follows that $v_\th$ is bounded in $H^{2,p}(K',g)$ for all $p \geq 1$. Using Kondrakov's theorem, there exists $v_{K'}$ such that $v_\th$ tends to $v_{K'}$ in $C^1(K').$ Taking an increasing sequence of compact sets $K_m$ such that $\cup_m K_m = M$, the sequence $(v_\th)$ converges to some $v_m$ in $C^1(K_m)$ for each $m$, and we define $v:= v_m$ on $K_m$. Using the diagonal extraction process, we deduce that $v_\th$ tends to $v$ in $C^1$ on any compact set. Since for each compactly supported smooth function $\phi$, we have
$$\int_M L_{g_\th} \phi v_\th dv_{g_\th} \to \int_M L_g \phi v dv_g,$$ and $$\|L_{g_\th} v_\th \|_{L^\infty(M)} \to 0,$$ we obtain that $L_g v = 0$ in the sense of distributions. Using standard regularity theorems, $v$ is smooth.
\section{$L^2$-estimates on $WS$-bundles} \label{wsbundles}
\noindent We suppose that the product $P \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} I \times W \times S^{n-k-1}$ is equipped with a metric $g_{\rm WS}$ of
the form
$$
g_{\rm WS}
=
dt^2 + e^{2\phi(t)}h_t + \sigma^{n-k-1}
$$
and we call this product, equipped with such a metric, a $WS$-bundle; here $h_t$ is a smooth family of metrics on $W$ depending on $t$,
and $\phi$ is a function on $I$. Let $\pi : P \to I$ be the projection onto the first factor and $F_t = \pi^{-1}(t) = \lbrace t\rbrace \times W\times S^{n-k-1}$; the metric induced on $F_t$ is
$$g_t := e^{2\phi(t)} h_t + \sigma^{n-k-1}.$$
Let $H_t$ be the mean curvature of $F_t$ in $P$; it is given by
$$H_t = -\frac{1}{n-1}\left(k\phi'(t)+ e(h_t)\right),$$
with $e(h_t):= \frac{1}{2}tr_{h_t}(\pa_t h_t).$ The derivative of the element of volume of $F_t$ is
$$\pa_t dv_{g_t} = -(n-1)H_tdv_{g_t}.$$
From the definition of $H_t$, when $t\mapsto h_t$ is constant, we obtain that $$H_t = -\frac{k}{n-1}\phi'(t).$$
\begin{definition}
We say that the condition $(A_t)$ is verified if the following assumptions are satisfied:
\Atbox{\ \
\begin{matrix}
1.)& t \mapsto h_t\mbox{ is constant},\hfill\\
2.)& e^{-2 \phi(t)} \inf_{x \in W} \Scal^{h_t}(x)
\geq -\frac{n-k-2}{32} a ,\hfill\\
3.)& |\phi'(t)| \leq 1,\hfill\\
4.)& 0 \leq -2k \phi''(t) \leq \frac1{2} (n-1)(n-k-2)^2.
\end{matrix}}{(A_t)}
Similarly, the condition $(B_t)$ consists of the following assumptions:
\Atbox{\ \
\begin{matrix}
1.) & t \mapsto \phi(t)\mbox{ is constant,}\hfill\\
2.) & \inf_{x\in F_t} \Scal^{g_{\rm WS}}(x)
\geq \frac{1}{2} \Scal^{\si^{n-k-1}}
= \frac{1}{2} (n-k-1)(n-k-2),\hfill\\
3.) & \frac{(n-1)^2}{2} e(h_t)^2
+ \frac{n-1}{2} \partial_t e(h_t)
\geq
- \frac{3}{64} (n-k-2).\hfill
\end{matrix}}{(B_t)}
\end{definition}
\begin{theorem}\label{theo.fibest}
Let $\alpha,$ $\beta \in \mR$ be such that $\left[ \alpha, \beta \right]\subset I.$ We also suppose that one of the conditions $(A_t)$ or $(B_t)$ is satisfied. We assume that we have a solution $v$ of the equation
\begin{equation} \label{eqyamodif}
L^{g_{\rm WS}} v
=
a\Delta^{g_{\rm WS}} v + \Scal^{g_{\rm WS}} v
=
\mu u^{N-2}v + d^* A(dv) + Xv + \ep \pa_t v - sv
\end{equation}
where $s, \ep \in C^\infty(P)$, $A\in \End(T^*P)$, and $X\in \Gamma(TP)$
are perturbation terms coming from the difference between $G$ and
$g_{\rm WS}$. We assume that the endomorphism $A$ is symmetric and that $X$
and $A$ are vertical, that is $dt(X) = 0$ and $A(dt) = 0$. We also assume that
\begin{equation} \label{u}
\mu \| u \|_{L^\infty(P)}^{N-2}
\leq
\frac{(n-k-2)^2(n-1)}{8(n-2)}.
\end{equation}
Then there exists $c_0>0$ independent of $\al$, $\beta$, and $\phi$,
such that if
$$
\| A \|_{L^\infty(P)},
\| X \|_{L^\infty(P)},
\| s \|_{L^\infty(P)},
\| \ep \|_{L^\infty(P)},
\| e(h_t) \|_{L^\infty(P)}
\leq
c_0
$$
then
$$
\int_{\pi^{-1} \left((\al + \ga ,\be - \ga)\right) }
v^2 \, dv_{g_{\rm WS}}
\leq
\frac{4 \| v \|_{L^\infty}^2}{n-k-2}
\left(
\Vol^{g_\al} ( F_{\al }) + \Vol^{g_\be} ( F_{\be })
\right),
$$
where $\ga \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} \frac{\sqrt{32}}{n-k-2}$.
\end{theorem}
\noindent Remark that we need $\beta - \alpha > 2\gamma$ for the result to be meaningful, and note that this theorem gives an estimate of $\left\|v\right\|_{L^2}.$\\
For the proof of this theorem, we mimic exactly the proof of Theorem 6.2 in \cite{ammann.dahl.humbert:08}. The only difference is that we consider here a nodal solution (and not a positive solution) of the equation $$L^{g_{\rm WS}} v
=
\mu u^{N-2}v + d^* A(dv) + Xv + \ep \pa_t v - sv.$$
Other details are exactly the same.
\section{Main Theorem}
\noindent Theorem \ref{mainthm} is a direct corollary of
\begin{theorem}\label{theoremprincipal}
Let $(M,g)$ be a compact Riemannian manifold of dimension $n\geq 3$ such that $\mu_2(M,g)>0$ and let $N$ be obtained from $M$ by a surgery of dimension $0\leq k\leq n-3$. Then there exists a sequence of metrics $g_\th$ such that
$$\liminf_{\th\to 0} \mu_2(N,g_\th)\geq \min (\mu_2(M,g), \Lambda_n),$$
where $\Lambda_n>0$ depends only on $n$.
\end{theorem}
\noindent Indeed, to get Theorem \ref{mainthm}, it suffices to apply Theorem \ref{theoremprincipal} with a metric $g$ such that
$\mu_2(M,g)$ is arbitrarily close to $\sigma_2(M)$. The conclusion easily follows since $ \mu_2(N,g_\th) \leq \sigma_2(N)$. This section is devoted to the proof of Theorem \ref{theoremprincipal}.
\subsection{Construction of the metric $g_\th$}\label{constructionofthemetric}
\subsubsection{Modification of the metric $g$} \label{munot0}
For a technical reason, we will need in the proof of Theorem \ref{theoremprincipal} that $\mu(g) \not=0$. To get rid of this difficulty, we need the following proposition:
\begin{prop} \label{munot0prop}
There exists on $M$ a metric $g'$ arbitrarily close to $g$ in $C^2$ such that $\mu(g') \not= 0$.
\end{prop}
Indeed, let us assume for a while that Theorem \ref{theoremprincipal} is true if $\mu(g) \not=0$ and let us
see that the result remains true if $\mu(g) = 0$. A first observation is that if $g'$ is close enough to $g$ in $C^2$, then as one can check, $\mu_2(g')$ is close to $\mu_2(g)$. Let us consider a metric $g'$ given by Proposition \ref{munot0prop} close enough to $g$ so that $\mu_2(g') > \mu_2(g) - \ep >0$ for an arbitrary small $\ep$. From Theorem \ref{theoremprincipal} applied to $g'$, we obtain a sequence of metrics $g_\th$ on $N$ such that
$$\liminf_{\th\to 0} \mu_2(N,g_\th)\geq \min (\mu_2(M,g'), \Lambda_n) \geq \min (\mu_2(M,g) - \ep, \Lambda_n).$$
Letting $\ep$ tend to $0$, we obtain Theorem \ref{theoremprincipal}.
It remains to prove Proposition \ref{munot0prop}. \\
\noindent {\bf Proof of Proposition \ref{munot0prop}: } At first, in order to simplify notations, we will consider $g$ as a metric on $M \amalg S^n$ and equal to the standard metric $g=\si^n$ on $S^n$. Since $\mu(g) = 0$, we can assume that $\scal_g = 0$, after possibly making a conformal change of metric. Let us consider a metric $h$ for which $\scal_h$ is negative and constant, whose existence is given in \cite{Aubin:98}. Consider the analytic family of metrics $g_t:= t h + (1-t) g$. Since the first eigenvalue ${\lambda}_t$ of $L_{g_t}$ is simple, the function $t \mapsto {\lambda}_t$ is analytic (see for instance Theorem VII.3.9 in \cite{kato:95}). Since ${\lambda}_0=0$ and ${\lambda}_1<0$, it follows that for $t$ arbitrarily close to $0$, ${\lambda}_t \not= 0$. Proposition \ref{munot0prop} follows since $\mu(g_t)$ has the same sign as ${\lambda}_t$.
\subsubsection{Definition of the metric $g_\th$}\label{definitionofthemetric}
\noindent As explained above, we will use the same construction as in \cite{ammann.dahl.humbert:08}. Consequently, we give the definition of $g_\th$ without additional explanations. The reader may refer to \cite{ammann.dahl.humbert:08} for more details.
We keep the same notation as in Section \ref{joining_man}. Let $h_1$ be the restriction of $g$ to the surgery sphere $S_1'\subset M$ and $h_2$ be the restriction of the standard metric $\si^n=g$ on $S^n$ to $S_2' \subset S^n$. Define $S':= S_1' \amalg S_2'$ and
$h \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} h_1 \amalg h_2$ on $S'$. In the following, $r$ denotes the distance function to $S'$ in $(M\amalg S^n,g\amalg \si^n)$. In polar coordinates, the metric $g$ has the form
\begin{equation} \label{metric=product}
g = h + \xi^{n-k} + T = h + dr^2 + r^2 \sigma^{n-k-1} + T
\end{equation}
on $U(R_{\textrm{max}})\setminus S'\cong S'\times (0,R_{\textrm{max}})\times
S^{n-k-1}$. Here $T$ is a symmetric $(2,0)$-tensor vanishing on $S'$ which is the error term measuring the fact that $g$ is not in general a product metric (at least near $S_1'$).
We also define the product metric
\begin{equation} \label{def.g'}
g' \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} h + \xi^{n-k} = h + dr^2 + r^2 \si^{n-k-1},
\end{equation}
on $U(R_{\textrm{max}}) \setminus S'$ so that $g = g' + T$. As in \cite{ammann.dahl.humbert:08}, we have
\[ \left\{ \begin{array}{ccc} \label{normC0T}
| T(X,Y) | & \leq & Cr | X |_{g'} | Y |_{g'}, \\
|(\nabla_U T)(X,Y) |
& \leq &
C | X |_{g'} | Y |_{g'} | U|_{g'}, \\
|(\nabla^2_{U,V}) T(X,Y) |
& \leq&
C | X |_{g'} | Y |_{g'} |U|_{g'}|V|_{g'},
\end{array} \right. \]
for $X,Y,U,V \in T_x M$ and $x \in U(R_{\textrm{max}})$. We define $T_1 \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} T|_{M}$ and $T_2\mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} T|_{S^n}$.
We fix $R_0\in (0,R_{\textrm{max}})$, $R_0<1$
and choose a smooth positive function $F: M \setminus S' \to \mR$ such that
$$
F(x) =
\begin{cases}
1,
&\text{if $x \in (M \setminus U_1(R_{\textrm{max}}))\amalg (S^n\setminus U_2(R_{\textrm{max}}))$;} \\
r(x)^{-1},
&\text{if $x \in U_i(R_0)\setminus S'$.}
\end{cases}
$$
Next we choose a sequence $\theta = \theta_j$ of positive numbers tending to $0$. For any $\theta$ we then choose a number $\de_0 = \de_0(\th) \in (0,\th)$ small enough to suit the arguments below.
For any $\th>0$ and sufficiently small $\de_0$ there is
$A_\th\in [\th^{-1}, (\de_0)^{-1})$ and a smooth function
$f: U(R_{\textrm{max}}) \to \mR$ depending only on the coordinate $r$ such
that
$$
f(x) =
\begin{cases}
- \ln r(x), &\text{if $x \in U(R_{\textrm{max}}) \setminus U(\th)$;} \\
\phantom{-} \ln A_\th, &\text{if $x \in U(\de_0)$,}
\end{cases}
$$
and such that
\begin{equation} \label{asumpf}
\left| r\frac{df}{dr} \right|
=
\left| \frac{df }{d(\ln r)} \right|
\leq 1,
\quad
\text{and}
\quad
\left\|r\frac{d}{dr}\left(r\frac{df}{dr}\right)\right\|_{L^\infty}
=
\left\|\frac{d^2f}{d^2(\ln r)}\right\|_{L^\infty}
\to 0
\end{equation}
as $\th\to 0$.
Set $\ep = e^{-A_\th} \de_0$, which we assume to be smaller than $1$, and
use this $\ep$ to construct $N$ as in Section \ref{joining_man}.
On $U^N_\ep(R_{\textrm{max}}) =
\left( U(R_{\textrm{max}}) \setminus U(\ep) \right)/{\sim}$ we define $t$ by
$$
t \mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=}
\begin{cases}
- \ln r_1 + \ln \ep, &
\text{on $U_1(R_{\textrm{max}}) \setminus U_1(\ep)$;} \\
\phantom{-} \ln r_2 - \ln \ep, &
\text{on $U_2(R_{\textrm{max}}) \setminus U_2(\ep)$.}
\end{cases}
$$
One checks that
\begin{itemize}
\item $
r_i = e^{|t|+ \ln \ep} = \ep e^{|t|};
$
\item
$
F(x) = \ep^{-1}e^{-|t|}
$
for $x \in U(R_0) \setminus U^N(\th)$, or equivalently if
$|t|+\ln \ep \leq \ln R_0$ and hence
$$
F^2 g =\ep^{-2} e^{-2|t|}(h+T) + dt^2 + \sigma^{n-k-1}
$$
on $U(R_0)\setminus U^N(\th)$;
\item and
$$
f(t) =
\begin{cases}
-|t|-\ln\ep,
&\text{if $\ln\th- \ln \ep \leq |t| \leq \ln R_{\textrm{max}} - \ln\ep$;} \\
\ln A_\th,
&\text{if $|t|\leq \ln\de_0-\ln\ep$.}
\end{cases}
$$
\end{itemize}
We have $|df/dt|\leq 1$, $\|d^2f/dt^2\|_{L^\infty}\to 0$. Now, we choose a
cut-off function $\chi:\mR\to [0,1]$ such that $\chi=0$ on
$(-\infty,-1]$, $|d\chi| \leq 1$, and $\chi=1$ on $[1,\infty)$. Finally, we define
$$
g_{\th}
\mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=}
\begin{cases}
F^2 g_i,
&\text{on $M_i \setminus U_i(\th)$;} \\
e^{2f(t)}(h_i+T_i) + dt^2 + \sigma^{n-k-1},
&\text{on $U_i(\th)\setminus U_i(\de_0)$;} \\
\begin{aligned}
&A_{\th}^2 \chi( t / A_{\th} )(h_2+T_2)
+ A_{\th}^2 (1-\chi( t / A_{\th} ))(h_1+T_1)\\
&\quad + dt^2 + \sigma^{n-k-1},
\end{aligned}
&\text{on $U^N_\ep(\de_0)$.}
\end{cases}
$$
Moreover, the metric $g_\th$ can be written as
$$g_\th:= g'_\th + \widetilde {T_t} \text{ on } U^N(R_0),$$
where $g'_\th$ is the metric without error term and it is equal to
$$g'_\th = e^{2f(t)}\widetilde{h_t} + dt^2 + \sigma^{n-k-1},$$
where the metric $\widetilde{h_t}$ is given by
$$\widetilde{h_t}:= \chi(\frac{t}{A_\th})h_2 + (1-\chi(\frac{t}{A_\th}))h_1,$$
and $\widetilde{T_t}$ is the error term, whose expression is given by
$$\widetilde{T_t}:= e^{2f(t)}(\chi(\frac{t}{A_\th})T_2 + (1-\chi(\frac{t}{A_\th}))T_1).$$
We further have the following properties of the error term $\widetilde{T_t}$
\[ \left\{ \begin{array}{ccc} \label{norm}
|\widetilde T(X,Y) | & \leq & Cr | X |_{g'_\th} | Y |_{g'_\th}, \\
|\nabla {{\widetilde T}_t}|_{g'_\th}
& \leq &
C e^{-f(t)}, \\
|\nabla^2{{\widetilde T}_t}|_{g'_\th}
& \leq&
C e^{-f(t)},
\end{array} \right. \]
where $\nabla$ is the Levi-Civita connection with respect to the metric $g'_\th$, for all $X$, $Y\in T_x N$ and $x\in U^N(R_0).$
\subsection{A preliminary result}
\noindent In order to prove Theorem \ref{theoremprincipal}, we will start by proving the following result.
\begin{theorem}\label{theoremprincipal1}
\noindent {\bf Part 1:} Let $(u_\th)$ be a sequence of functions which satisfy
$$L_{g_\th}u_\th = {\lambda}_\th |u_\th|^{N-2} u_\th,$$
such that $\int_N |u_\th|^N dv_{g_\th}=1$ and ${\lambda}_\th\to_{\th\to 0} {\lambda}_\infty$, where ${\lambda}_\infty \in \mR$. Then, at least one of the two following assertions is true
\begin{enumerate}
\item ${\lambda}_\infty\geq \Lambda_n$, where $\Lambda_n>0$ depends only on $n$;
\item there exists a function $u\in C^{\infty}(M \amalg S^n)$, with $u \equiv 0$ on $S^n$ and $u \not\equiv 0$ on $M$, solution of
$$L_{g} u = \lambda_\infty |u|^{N-2} u,$$
with
$$\int_M |u|^N dv_g=1$$
such that for all compact sets $K\subset M \amalg S^n \setminus S'$ (note that $K$ can also be considered as a subset of $N$), $F^\frac{n-2}{2} u_\th$ tends to $u$ in $C^2(K)$, where $F$ is defined in Section \ref{constructionofthemetric}. Moreover, we have
\begin{enumerate}
\item the norm $L^2$ of $u_\th$ is bounded uniformly in $\th$; \label{0}
\item $\lim_{b\to 0} \limsup_{\th\to 0} \sup_{U^N(b)} u_\th= 0$; \label{1}
\item $\lim_{b\to 0} \limsup_{\th\to 0}\int_{U^N(b)} |u_\th|^N\, dv_{g_\th} = 0.$ \label{2}
\end{enumerate}
\end{enumerate}
\noindent {\bf Part 2:} Let $u_\th$ be as in Part 1 above and assume that Assertion 2) is true. Let $v_\th$ be a sequence of functions which satisfy
$$L_{g_\th} v_\th = \mu_\th |u_\th|^{N-2} v_\th,$$
such that $\int_N |v_\th|^N dv_{g_\th} = 1,$ $\mu_\th\to \mu_\infty$ where $\mu_\infty < \mu(\mS^n)$. Then, there exists a function $v \in C^{\infty}(M \amalg S^n)$, with $v \equiv 0$ on $S^n$ and $v \not\equiv 0$ on $M$, solution of
$$L_g v = \mu_\infty |u|^{N-2} v$$
with
$$\int_M |v|^N dv_g = 1$$
and such that for all compact sets $K\subset M \amalg S^n \setminus S' $, $F^{\frac{n-2}{2}} v_\th$ tends to $v$ in $C^2(K).$ Moreover,
\begin{enumerate}
\item the norm $L^2$ of $v_\th$ is bounded uniformly in $\th$; \label{2'}
\item $\lim_{b\to 0} \limsup_{\th\to 0} \sup_{U^N(b)} v_\th= 0;$ \label{3}
\item $\lim_{b\to 0} \limsup_{\th\to 0} \int_{U^N(b)} |v_\th|^N\, dv_{g_\th} = 0.$ \label{4}
\end{enumerate}
\end{theorem}
\subsubsection{Proof of Theorem \ref{theoremprincipal1} Part 1} \label{part1}
Let $(u_\th)$ be a sequence of functions which satisfy
$$L_{g_\th}u_\th = {\lambda}_\th |u_\th|^{N-2} u_\th,$$
such that $\int_N |u_\th|^N \,dv_{g_\th}=1$ and ${\lambda}_\th\to_{\th\to 0} {\lambda}_\infty$, where ${\lambda}_\infty \in \mR$.
We proceed exactly as in \cite{ammann.dahl.humbert:08} where here, the manifold $M_2$
is $S^n$ equipped with the standard metric $\sigma^n$, and where $W$ is the sphere $S^k$. The only difference will be that $u_\th$ may now change sign.
\begin{remark}
In the proof of the main theorem in \cite{ammann.dahl.humbert:08}, it was proven that
$${\lambda}_\infty > -\infty.$$
Here, we made the assumption that $({\lambda}_\th)$ has a limit ${\lambda}_\infty$. Without this assumption, one could again prove that
${\lambda}_\infty > -\infty$, but the point here is that there is no reason why $({\lambda}_\th)$ should be bounded from above, contrary to what happened in \cite{ammann.dahl.humbert:08}.
\end{remark}
\noindent The argument of Corollary 7.7 in \cite{ammann.dahl.humbert:08} still holds here and shows that
\begin{eqnarray}
\liminf_\th \|u_\th\|_{L^\infty(N)} > 0.
\end{eqnarray}
Several cases are studied:
\begin{caseI}
$\limsup_{\th\to 0}\|u_\th\|_{L^\infty(N)} = \infty$.
\end{caseI}
\noindent Set $m_\th:= \|u_\th\|_{L^\infty(N)}$ and choose $x_\th\in N$ such that $u_\th(x_\th) = m_\th$. After taking a subsequence, we can assume that $\lim_{\th\to 0} m_\th = \infty$. We have to study the following two subcases.
\begin{subcaseI.1}
There exists $b>0$ such that $x_\th\in N \setminus U^N(b)$
for an infinite number of $\th$.
\end{subcaseI.1}
\begin{subcaseI.2}
For all $b>0$ it holds that $x_\th\in U^N(b)$ for $\th$ sufficiently
small.
\end{subcaseI.2}
\begin{caseII}
There exists a constant $C_0$ such that
$\| u_\th\|_{L^{\infty}(N)}\leq C_0$ for all $\th$.
\end{caseII}
\begin{subcaseII.1}
There exists $b>0$ such that
$$
\liminf_{\th\to 0}
\left( {\lambda}_\th\sup_{ U^N(b) } {u_\th}^{N-2} \right)
<
\frac{(n-k-2)^2(n-1)}{8(n-2)}.
$$
\end{subcaseII.1}
\begin{subsubcaseII.1.1}
$\limsup_{b\to 0} \limsup_{\th\to 0} \sup_{U^N(b)} u_\th> 0$.
\end{subsubcaseII.1.1}
\begin{subsubcaseII.1.2}
$\lim_{b\to 0} \limsup_{\th\to 0} \sup_{U^N(b)} u_\th= 0$.
\end{subsubcaseII.1.2}
\begin{subcaseII.2}
$$
{\lambda}_\th \sup_{ U^N(b) } {u_\th}^{N-2}
\geq
\frac{(n-k-2)^2(n-1)}{8(n-2)}
$$
\end{subcaseII.2}
\noindent In Subcases I.1 and I.2, it is shown in \cite{ammann.dahl.humbert:08} that $\lambda_\infty \geq \mu(\mS^n)$. The proof still holds when $u_\th$ changes sign. In Subsubcase II.1.1 and Subcase
II.2, we obtain that $\lambda_\infty \geq \Lambda_{n,k}$ where $\Lambda_{n,k}$ is a positive number depending only on $n$ and $k$. The definition of $\Lambda_{n,k}$ in \cite{ammann.dahl.humbert:08} is the infimum of energies of positive solutions of the Yamabe equation on model spaces (see Section \ref{lambdank}). This definition has to be slightly modified to allow nodal solutions. As explained in Section \ref{lambdank} the proof that $\Lambda_{n,k}> 0$ remains the same. \\
\noindent In Subcases I.1, I.2, II.1.1 and II.2, we then get that
$\lambda_\infty\geq \Lambda_n$, where $$\Lambda_n:= \min_{k\in\{0, \cdots, n-3\}} \min\left(\Lambda_{n,k}, \mu(\mS^n)\right).$$ In particular, Assertion 1) of Part 1 in Theorem \ref{theoremprincipal1} is true. So let us examine Subsubcase II.1.2. The assumption of Subcase II.1 allows us to obtain, as in \cite{ammann.dahl.humbert:08}, that
\begin{eqnarray} \label{uboundedl2}
\int_N u_\th^2 dv_{g_\th} \leq C.
\end{eqnarray}
for some $C>0$. The assumptions of Subsubcase II.1.2 are that
\begin{eqnarray} \label{ubounded}
\sup_N (u_\th) \leq C
\end{eqnarray}
and that
\begin{eqnarray} \label{uto0}
\limsup_{b\to 0} \limsup_{\th\to 0} \sup_{U^N(b)} u_\th= 0.
\end{eqnarray}
\begin{step} \label{step1}
We prove that
$\lim_{b \to 0} \limsup_{\th\to 0} \int_{U^N(b)} {|u_\th|}^N \, dv_{g_\th}
= 0$.
\end{step}
\noindent Let $b >0$. We have, by Relation \eref{uboundedl2}
$$
\int_{U^N(b) } |u_\th|^N \, dv_{g_\th}
\leq
A_0 \sup_{U^N(b)} |u_\th|^{N-2},
$$
where $A_0$ is a positive number which does not depend on $b$ and $\th.$ The claim then follows from \eref{uto0}.
\begin{step}\label{step2}
$C^2$ convergence on all compact sets of $M\amalg S^n\setminus S'.$
\end{step}
\noindent Let $(\Omega_j)_j$ be an
increasing sequence of subdomains of $(M \amalg S^n\setminus S')$ with smooth boundary such that $\bigcup_{j}\Omega_j = M\amalg S^n\setminus S',$ $\Omega_j\subset \Omega_{j+1}$. The norm $\left\|u_\th\right\|_{L^\infty(N)}$ is bounded,
and so is $\left\|u_\th\right\|_{L^\infty(\Omega_{j+1})}$. Using standard elliptic regularity results (see for example \cite{gilbarg.trudinger:77}), we see that the sequence $(u_\th)$ is bounded in the Sobolev space
$H^{2,p}(\Omega'_{j})$ for all $p \in (1, \infty)$, where $\Omega'_j$ is any domain such that $\overline{\Om}_j \subset \Om'_j \subset \overline{\Om'_j} \subset \Om_{j+1}$. The Sobolev embedding theorem implies that $(u_\th)$ is bounded in $C^{1,\alpha}(\overline{\Omega_j})$ for any $\alpha \in (0, 1)$ (see Theorem 4.12 in \cite{adams.fournier:03} for more information on Sobolev embeddings).\\
Now we use a diagonal extraction process: by taking successive subsequences, it follows that $(u_\th)$ converges to functions $\widetilde{u_j}\in C^1(\overline{\Omega_j})$ such that $\widetilde{u_j}\vert_{\overline{\Omega}_{j-1}} = \widetilde {u}_{j-1}.$\\
We define
$$\widetilde u =\widetilde {u_j} \text{ on } \overline{\Omega_j}.$$
By taking a diagonal subsequence of $u_\th$, we get that $u_\th$ tends to $\widetilde u$ in $C^1$ on any compact subset of $M\amalg S^n\setminus S'$ and by $C^1$-convergence of the functions $u_\th$, the function $\widetilde u$ satisfies the equation
\begin{equation}\label{eqV1}
L_{F^2 g} \widetilde u ={\lambda}_\infty |\widetilde u|^{N-2}{\widetilde {u}} \text{ on } M\amalg S^n\setminus S'.
\end{equation}
We recall that $g_\th= F^2 g = (F^\frac{n-2}{2})^\frac{4}{n-2}g$ on $M_i\setminus U_i(\th)$, $i = 1, 2$.
By conformal invariance of the Yamabe operator we obtain for all $v$
$$L_{F^2 g}v = F^{-\frac{n+2}{2}}L_g(F^\frac{n-2}{2}v).$$
Now we set
$$ u = F^\frac{n-2}{2} \widetilde u.$$
We obtain
\begin{eqnarray*}
L_g u&=&F^\frac{n+2}{2}L_{F^2 g}\widetilde u\\
&=& F^\frac{n+2}{2} {\lambda}_{\infty} |\widetilde u|^{N-2}\widetilde u\\
&=& {\lambda}_{\infty} |u|^{N-2} u.
\end{eqnarray*}
This shows that $u$ is a solution on $(M\amalg S^n\setminus S', g)$ of the following equation
$$L_g u = {\lambda}_{\infty} |u|^{N-2} u.$$
Moreover, using Step \ref{step1} and the fact that $\int_N |u_\th|^N dv_{g_\th}=1$, the function $u$ satisfies
\begin{eqnarray*}
\int_{M\amalg S^n} |u|^N\, dv_g &=&\int_{M\amalg S^n \setminus S'} |\widetilde u|^N\, dv_{F^2 g}\\
&=& \lim_{b\to 0}\lim_{\th \to 0}\int_{N\setminus U^N(b)} |u_{\th}|^N\, dv_{g_{\th}}\\
&=& 1.
\end{eqnarray*}
\begin{step}\label{step3}
Removal of the singularity
\end{step}
\noindent The next step is to show that $u$ is a solution on all $M\amalg S^n$ of
\begin{eqnarray}\label{eqnwidetilde}
L_g {u} = {\lambda}_\infty |u|^{N-2}u.
\end{eqnarray}
To prove this fact, we will show that for all $\phi \in C^\infty(M\amalg S^n)$, we have
$$\int_{M\amalg S^n} L_g u \phi\,dv_g =\int_{M\amalg S^n}{\lambda}_\infty |u|^{N-2} u \phi\,dv_g.$$
First, we have
\begin{eqnarray*}
\int_{M\amalg S^n} u L_g\phi\,dv_g &=&\int_{M\amalg S^n} u L_g(\phi-\chi_\ep \phi+\chi_\ep \phi)\,dv_g\\
&=&\int_{M\amalg S^n} u L_g(\chi_\ep \phi)\,dv_g+\int_{M\amalg S^n} u L_g((1-\chi_\ep)\phi)\,dv_g,
\end{eqnarray*}
where
\begin{eqnarray*}
\left|\;
\begin{matrix}
\chi_\ep=1\hfill & \hbox{if } d_g(x,S')<\ep,\\\\
\chi_\ep =0\hfill & \hbox{if } d_g(x,S')\geq 2\ep,\\\\
\left| d\chi_\ep\right|<\frac{2}{\ep}.
\end{matrix}
\right.
\end{eqnarray*}
Since $(1-\chi_\ep)$ is compactly supported in $M\amalg S^n\setminus S'$, we have \begin{eqnarray*}
\int_{M\amalg S^n} u L_g((1-\chi_\ep)\phi)\,dv_g &=& \int_{M\amalg S^n} (L_g u)(1-\chi_\ep)\phi\,dv_g\\
&\to&
\int_{M\amalg S^n} L_g u\phi\, dv_g = \int_{M\amalg S^n} {\lambda}_\infty |u|^{N-2} u\phi\,dv_g.
\end{eqnarray*}
Then, it remains to prove that
$$\int_{M\amalg S^n} u L_g(\chi_\ep \phi)\, dv_g\rightarrow 0.$$
We have
\begin{eqnarray*}
L_g(\chi_\ep\phi)&=&C_n \Delta (\chi_\ep\phi)+\Scal_g(\chi_\ep\phi)\\
&=&C_n (\Delta \chi_\ep) \phi+ C_n(\Delta \phi) \chi_\ep+\Scal_g(\chi_\ep \phi)-2C_n\left\langle \nabla \chi_\ep,\nabla \phi\right\rangle\\
&=& \chi_\ep L_g\phi+C_n(\Delta \chi_\ep)\phi-2C_n\left\langle \nabla \chi_\ep,\nabla \phi\right\rangle.
\end{eqnarray*}
According to Lebesgue's dominated convergence theorem, it holds that
$$\int_{M\amalg S^n} u \chi_\ep L_g\phi \, dv_g\rightarrow 0.$$
Further, we have
\begin{eqnarray} \label{etoile}
\left| \int_{M\amalg S^n} u L_g(\chi_\ep \phi)\,dv_g\right|&\leq& \frac{C}{\ep^2}\int_{C_\ep} |u|\,dv_g\\
&\leq& \frac{C}{\ep^2}\left(\int_{C_\ep} u^2\,dv_g\right)^\frac{1}{2}\left(\vol(C_\ep)\right)^\frac{1}{2},
\end{eqnarray}
where $C_\ep = \left\lbrace x\in M\amalg S^n; \ep < d(x,S')<2\ep \right\rbrace = U^N(2\ep)\setminus U^N(\ep)$.\\
In addition, we get from \eref{uboundedl2} that
$$\int_N {\widetilde u}^2\,dv_{F^2 g} < +\infty,$$
which implies that
$$\int_{C_\ep}{\widetilde u}^2\,dv_{F^2 g} < +\infty.$$
Let us compute
\begin{eqnarray*}
\int_{C_\ep}{\widetilde u}^2\,dv_{g_{\th}}&=& \int_{C_\ep} \left(F^{\frac{n-2}{2}}\right)^{\frac{2n}{n-2}}F^{-(n-2)}u^2\,dv_g \\
&=& \int_{C_\ep} F^2 u^2\,dv_g < +\infty.
\end{eqnarray*}
We recall that $F = \frac{1}{r}$ on $C_\ep$. Coming back to \eref{etoile}, we deduce
\begin{eqnarray*}
\left| \int_M u L_g(\chi_\ep \phi)\,dv_g\right|&\leq&\frac{C}{\ep^2} \left(\int_{C_\ep}\frac{u^2 F^2}{F^2} \,dv_g\right)^\frac{1}{2} \left(\vol(C_\ep)\right)^\frac{1}{2}\\
&\leq& \frac{C}{\ep^2}\times \ep \times \ep^\frac{n-k}{2} = C\ep^{\frac{n-k}{2}-1}.
\end{eqnarray*}
Since $k\leq n-3,$ we have
$$\frac{n-k}{2}-1 > 0,$$
which implies that
$$\int_{M\amalg S^n} u L_g(\chi_\ep \phi)\,dv_g \rightarrow 0.$$
Finally, we get that $u$ is a solution on $M\amalg S^n$ of the equation
$$L_g u= {\lambda}_\infty |u|^{N-2} u.$$
\begin{step}
We have either $u \equiv 0$ on ${\mS}^n$ or ${\lambda}_\infty\geq \mu({\mS}^n).$
\end{step}
\noindent Note that the function $u$ verifies
\begin{eqnarray}\label{mamalgs}
\int_{M\amalg S^n} |u|^N\, dv_g \leq 1.
\end{eqnarray}
Indeed,
\begin{eqnarray*}
\int_{M\amalg S^n} |u|^N\, dv_g &=& \int_{M\amalg S^n\setminus S'} |\widetilde u|^N\, dv_{F^2 g}\\
&\leq& \liminf_{\th\to 0} \int_N |{u_\th}|^N \,dv_{g_\th} = 1.
\end{eqnarray*}
Assume that $u \not\equiv 0$ on ${\mS}^n$.\\
Setting $w = u_{|{\mS^n}}$ and using equations (\ref{eqnwidetilde}) and (\ref{mamalgs}), we have
\begin{eqnarray*}
\mu({\mS}^n) \leq Y(w)&=& \frac{{\lambda}_\infty \int_{{\mS}^n} |w|^N\, dv_g}{\left(\int_{{\mS}^n} |w|^N\, dv_g\right)^\frac{n-2}{n}}\\
&=& {\lambda}_\infty \left(\int_{{\mS}^n} |w|^N \,dv_g\right)^\frac{2}{n}\leq {\lambda}_\infty.
\end{eqnarray*}
Then we obtain that ${\lambda}_\infty \geq \mu({\mS}^n)$ and hence, the conclusion $1)$ of Theorem \ref{theoremprincipal1} Part 1 is true.
\subsubsection{Proof of Theorem \ref{theoremprincipal1} Part 2}
We consider a function $v_\th$ satisfying
\begin{eqnarray}\label{vth}
L_{g_\th} v_\th = \mu_\th |u_\th|^{N-2} v_\th,
\end{eqnarray}
with
$$\int_N |v_\th|^N\, dv_{g_\th} = 1.$$
\noindent A first remark is the following: as in Lemma 7.6 of \cite{ammann.dahl.humbert:08}, we observe that $U^N(b)$ is a $WS$-bundle for any $b > 0$. Since $u_\th$ satisfies
$$\lim_{b\to 0}\limsup_{\th\to 0} \sup_{U^N(b)} u_\th = 0,$$
for $b$ and $\th$ small enough we have
$$\mu_\th \|u_\th\|_{L^\infty(U^N(b))}^{N-2} \leq \frac{(n-k-2)^2(n-1)}{8(n-2)}.$$
We can then apply Theorem \ref{theo.fibest} on $U^N(b)$, and the proof of Lemma 7.6 of \cite{ammann.dahl.humbert:08} shows that there exist numbers $c_1, c_2>0$ independent of $\th$ such that
\begin{eqnarray}\label{c1c2}
\int_N |v_\th|^2 \, dv_{g_\th} \leq c_1{\Arrowvert v_\th\Arrowvert}^2_{L^\infty(N)} + c_2.
\end{eqnarray}
As a consequence, we get that
$$
\liminf_{\th\to 0} \| v_\th\|_{L^{\infty}(N)}
>0.
$$
Indeed, assume that
$$\lim_{\th \to 0}\left\|v_\th\right\|_{L^\infty(N)} = 0.$$
By Equation \eref{c1c2}, we have
\begin{eqnarray*}
1 = \int_N |v_\th|^N \, dv_{g_\th}&\leq& {\Arrowvert v_\th\Arrowvert}_{L^\infty(N)}^{N-2} \int_N |v_\th|^2\, dv_{g_\th}\\
&\leq& {\Arrowvert v_\th\Arrowvert}_{L^\infty(N)}^{N-2} (c_1 {\Arrowvert v_\th\Arrowvert}_{L^\infty(N)}^2+ c_2)\to 0,
\end{eqnarray*}
as $\th\to 0$. This gives the desired contradiction.
In the rest of the proof, we will study several cases. In what follows, only Subcase II.2 will require a substantially new argument: Subcases I.1, I.2 and II.1 will be excluded by arguments mostly contained in \cite{ammann.dahl.humbert:08}, so we will give only brief explanations for these cases.
\begin{caseI}
$\limsup_{\th\to 0}\|v_\th\|_{L^\infty(N)} = \infty$.
\end{caseI}
\noindent Set $m_\th\mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} \|v_\th\|_{L^\infty(N)}$ and choose $x_\th\in N$ with $v_\th(x_\th) = m_\th$. After taking a
subsequence we can assume that $\lim_{\th\to 0} m_\th= \infty$.
\begin{subcaseI.1}
There exists $b>0$ such that $x_\th\in N \setminus U^N(b)$
for an infinite number of $\th$.
\end{subcaseI.1}
\noindent By taking a subsequence we can assume
that there exists $\bar{x} \in M \amalg S^n \setminus U(b)$ such
that $\lim_{\th\to 0} x_\th= \bar{x}$. We define
$\tilde{g}_\th\mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} m_\th^{\frac{4}{n-2}} g_\th$. For $r>0$, \cite{ammann.dahl.humbert:08} tells us that for $\th$ small enough, there exists a diffeomorphism
$$
\Th_\th:
B^n(0,r)
\to
B^{g_\th} ( x_\th, m_\th^{-\frac{2}{n-2}} r)
$$
such that the sequence of metrics
$(\Th_\th^* (\tilde{g}_\th))$ tends to the flat
metric $\xi^n$ in $C^2(B^n(0,r))$, where $B^n(0,r)$ is the standard ball in $\mR^n$ centered in $0$ with radius $r$. We let
$\tilde{u}_\th\mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} m_\th^{-1} u_\th,$ ${\tilde{v}}_\th\mathrel{\raise.095ex\hbox{\rm :}\mkern-5.2mu=} m_\th^{-1} v_\th$ and we have
\begin{eqnarray*}
L_{\tilde{g}_\th} {\tilde{v}}_\th &=& {\lambda}_\th{\tilde{u}}_{\th}^{N-2} {\tilde{v}}_\th\\
&=& \frac{{\lambda}_\th}{m_\th^{N-2}}{u_\th}^{N-2}\tilde{v}_\th.
\end{eqnarray*}
\noindent Since $\left\|u_\th\right\|_{L^\infty(N)} \leq C,$ it follows that
$\left\|L_{\tilde{g}_\th} \tilde{v}_\th \right\|_{L^\infty(N)}$ tends to $0$.
Applying Lemma \ref{vtheta}, we obtain a solution $v\not\equiv 0$ of the following equation on $\mR^n$
$L_{\xi^n} v = 0$. Since $\Scal_{\xi^n} = 0$, $v$ is harmonic and admits a maximum at $x = 0$. As a consequence, $v$ is constant, equal to $v(0) = 1$. This is a contradiction, since a nonzero constant function on $\mR^n$ cannot satisfy $\|v\|_{L^N}\leq 1.$
\begin{subcaseI.2}
For all $b>0$ it holds that $x_\th\in U^N(b)$ for $\th$ sufficiently
small.
\end{subcaseI.2}
\noindent We proceed as in Subcase I.2 in \cite{ammann.dahl.humbert:08}. As in Subcase I.1 above, we get from Lemma \ref{vtheta} a function $v$ which is harmonic on $\mR^n$ and admits a maximum at $x = 0$. This is again a contradiction.
\begin{caseII}
There exists a constant $C_0$ such that
$\| v_\th\|_{L^{\infty}(N)}\leq C_0$ for all $\th$.
\end{caseII}
\noindent By (\ref{c1c2}), there exists a constant $A_0$ independent of $\th$ such that
\begin{equation} \label{vboundedl2}
\| v_\th\|_{L^2(N,g_\th)} \leq A_0.
\end{equation}
We split the treatment of Case II into two subcases.
\begin{subcaseII.1}
$\limsup_{b\to 0} \limsup_{\th\to 0} \sup_{U^N(b)} v_\th> 0$.
\end{subcaseII.1}
\noindent Again mimicking what is done in \cite{ammann.dahl.humbert:08}, we obtain from Lemma \ref{vtheta} a function $v$ which is a solution of
$L_{G_c} v= 0$ on $(\mR^{k+1} \times S^{n-k-1}, G_c)$ for some $c \in [-1,1]$, where
$G_c= e^{2cs} \xi^k + ds^2 + \sigma^{n-k-1}$. In Subcases I.1 and I.2, we used the fact that
$\frac{{\lambda}_\th}{m_\th^{N-2}}$ tends to $0$ to show that at the limit $L_{G_c} v= 0$. Here, the argument is different: first we set $\alpha_0:= \frac{1}{2}\limsup_{b\to 0}\limsup_{\th\to 0} \sup_{U^N(b)} v_\th >0$. Then, we can suppose that there exists a sequence of positive numbers $(b_i)$ and $(\th_i)$ such that
$$\sup_{U^N(b_i)}v_{\th_i}\geq \alpha_0,$$
for all $i$. To simplify, we write $\th$ for $\th_i$ and $b$ for $b_i$. Take $x'_\th\in \overline{U^N(b_\th)}$ such that
$$v_\th(x'_\th)\geq \alpha_0.$$
For $r,r'>0$, we define
$$U_\th(r,r'):= B^{{\widetilde h}_{t_\th}}(y_\th, e^{-f(t_\th)}r)\times [t_\th-r',t_\th+r']\times S^{n-k-1}.$$
As in \cite{ammann.dahl.humbert:08}, the function $v$ is obtained as the limit of $v_\th$ on each $U_\th(r,r')$ (with $r, r'>0$).
The fact that $L_{G_c} v = 0$ follows from the observation that
$$\sup_{U_\th(r,r')} |u_\th| \to 0 \text{ as } \th\to 0,$$
hence
$$|u_\th|^{N-2}v_\th \to 0 \text{ uniformly on } U_\th(r,r').$$
\begin{subcaseII.2}
$\lim_{b\to 0} \limsup_{\th\to 0} \sup_{U^N(b)} v_\th = 0$.
\end{subcaseII.2}
\noindent By the same method as in Subsection {\ref{part1}}, we obtain that there is a function $v$ solution of the following equation
$$L_g v = \mu_\infty |u|^{N-2} v,$$
such that
$$\int_N v^N\,dv_g\leq 1.$$
Suppose that $v\not\equiv 0$ on $\mS^n$, then we have
\begin{eqnarray*}
\mu(\mS^n) \leq Y(v) = \mu_\infty\frac{\int_{\mS^n} u^{N-2} v^2\, dv_g}{(\int_{\mS^n} v^N\, dv_g)^\frac{2}{N}} = 0
\end{eqnarray*}
since $u \equiv 0$ on $S^n$. This is a contradiction.
This proves that $v \equiv 0$ on $\mS^n$. By the same argument as in Part 1, we have
$\int_M |v|^N dv_g = 1$. We finally obtain that the function $v$ satisfies all the desired conclusions of Theorem \ref{theoremprincipal1} Part 2.
\subsection{Proof of Theorem \ref{theoremprincipal}}
\noindent Let $(g_\th)$ be the sequence of metrics defined on $N$ as in Section {\ref{constructionofthemetric}}.\\
{\bf{Step 1:}} For $\th$ small enough, we show that ${\lambda}_k(M,g)>0$ implies ${\lambda}_k(N,g_\th)>0$, where ${\lambda}_k$ is the $k^{th}$ eigenvalue associated to the Yamabe equation.\\
\begin{remark}
Note that this step implies that the existence of a metric with positive ${\lambda}_k$ is preserved by surgery of dimension $k \in \{0,\cdots,n-3\}$. This is an alternative proof of a result already contained in
\cite{baerdahl}.
\end{remark}
\noindent We proceed by contradiction and suppose that ${\lambda}_k(N, g_\th)\leq 0.$ Let $u_\th$ be a minimizing solution of the Yamabe problem. By \cite{elsayed}, there exist functions $v_{\th, 1}= u_\th, v_{\th,2}, \cdots, v_{\th, k}$ solutions of the following equation on $N$
$$L_{g_{\th}} v_{\th, i} = {\lambda}_{\th, i} u_\th^{N-2} v_{\th,i},$$
where $${\lambda}_{\th, i} = {\lambda}_i(N, u_\th^{N-2} g_\th),$$
such that
$$\int_N {v_{\th, i}}^N\, dv_{g_\th} = 1 \text{ and }\int_N {u_\th}^{N-2} v_{\th, i} v_{\th, j} \,dv_{g_\th}= 0 \text{ for all }i \neq j.$$
By conformal invariance of the sign of the eigenvalues of the Yamabe operator (see \cite{elsayed}), we have $${\lambda}_{\th,i}={\lambda}_i(N, u_\th^{N-2} g_\th)\leq 0.$$
Moreover, by construction, it is easy to check that $ {\lambda}_{\th,1} = \mu_\th$ where $\mu_\th=\mu(N,g_\th)$ is the Yamabe constant of the metric $g_\th$. The main theorem in \cite{ammann.dahl.humbert:08} implies that $\lim_{\th\to 0} {\lambda}_{\th, 1} = \lim_{\th\to 0} \mu_\th > -\infty$. It follows that there exists a constant $C >0$ such that $-C \leq {\lambda}_{\th, 1} \leq \cdots\leq{\lambda}_{\th, k} \leq 0$. Then, for all $i$, ${\lambda}_{\th, i}$ is bounded and, by restricting to a subsequence, we can assume that ${\lambda}_{\infty, i} := \lim_{\th\to 0} {\lambda}_{\th, i}$ exists. Parts 1) and 2) of Theorem \ref{theoremprincipal1} give the existence of functions $u=v_1, \cdots, v_k$ defined on $M,$ with $v_i\neq 0$ for all $i$, such that $F^{\frac{n-2}{2}}v_{\th,i}$ tends to $v_i$ in $C^1$ on each compact set $K \subset M \amalg S^n \setminus S'$. The functions $v_i$ are solutions of the following equation
$$L_g v_i = {\lambda}_{\infty, i} u^{N-2} v_i.$$
Moreover, we have
$$\int_M |v_i|^N\,dv_g\leq 1 \text{ and } \lim_{b\to 0}\limsup_{\th\to 0}\int_{U^N_\ep(b)}|v_{\th, i}|^N\, dv_g = 0.$$
Let us show that for all $i\neq j$, we get that
$$\int_M u^{N-2} v_i v_j \,dv_g = 0.$$
Set
$$\widetilde {u}_\th = F^\frac{n-2}{2} u_\th,$$
and
$$\widetilde {v}_{\th, i} = F^\frac{n-2}{2} v_{\th, i}.$$
For $b>0$ small, we have for $i \not= j$
\begin{eqnarray*}
\int_{M\setminus {U(b)}}u^{N-2} v_i v_j\, dv_g & = & \lim_{\th\to 0}\int_{M\setminus U(b) = N\setminus U^N_\ep(b)} \widetilde u_\th^{N-2} \widetilde v_{\th, i} \widetilde v_{\th, j}\, dv_{g} \\
& = & \lim_{\th\to 0}\int_{M\setminus U(b) = N\setminus U^N_\ep(b)} u_\th^{N-2} v_{\th, i} v_{\th, j}\, dv_{g_\th}
\end{eqnarray*}
where we used $dv_{g_\th} = F^n dv_g$. Using now the fact that $\int_N u_\th^{N-2} v_{\th, i} v_{\th, j}\, dv_{g_\th} = 0$, we get
\begin{eqnarray*}
\left|\int_{M\setminus U(b)} u^{N-2} v_i v_j\, dv_g \right|&=& \left|\lim_{\th\to 0} \int_{N\setminus {U^N_\ep(b)}} u_{\th}^{N-2} v_{\th,i} v_{\th,j} \, dv_{g_\th}\right|\\
&=&\lim_{\th\to 0} \left|\int_{U^N_\ep(b)}u_\th^{N-2} v_{\th,i} v_{\th,j} \, dv_{g_\th}\right|.
\end{eqnarray*}
We write
\begin{eqnarray*}
\left|\int_{U^N_\ep(b)} u_\th^{N-2} v_{\th,i} v_{\th,j} \, dv_{g_\th}\right|&\leq& \left(\int_{U_\ep^N(b)} u_{\th}^N\, dv_{g_\th}\right)^{\frac{N-2}{N}} \left(\int_{U_\ep^N(b)} |v_{\th,i}|^N\, dv_{g_\th}\right)^{\frac{1}{N}}\\
&&\left(\int_{U_\ep^N(b)} |v_{\th,j}|^N\, dv_{g_\th}\right)^{\frac{1}{N}}.
\end{eqnarray*}
Using the assertion
$$\lim_{b\to 0}\limsup_{\th\to 0}\int_{U^N_\ep(b)} |v_{\th, i}|^N\, dv_{g_\th} = 0,$$
we obtain that
$$\lim_{b\to 0} \limsup_{\th\to 0} \left|\int_{U^N_\ep(b)} u_\th^{N-2} v_{\th, i} v_{\th, j}\, dv_{g_\th}\right| = 0.$$
We get finally that
$$\left|\int_M u^{N-2} v_i v_j\, dv_g\right| = \lim_{b\to 0}\left|\int_{M\setminus U(b)}u^{N-2} v_i v_j\, dv_g \right| = 0 \text{ for all }i\neq j.$$
We now write
\begin{eqnarray*}
0<{\lambda}_k(M,g)&\leq& \sup_{(\alpha_1,\cdots ,\alpha_k)\neq (0,\cdots ,0)}F(u, \alpha_1 v_1+\cdots +\alpha_k v_k)\\
&=& \sup_{(\alpha_1,\cdots ,\alpha_k)\neq (0, \cdots, 0)} \frac{\int_M (\alpha_1 v_1+\cdots +\alpha_k v_k)L_g(\alpha_1 v_1+\cdots +\alpha_k v_k)\,dv_g}{\int_M {u}^{N-2}(\alpha_1 v_1+\cdots +\alpha_k v_k)^2\,dv_g}\\
&=& \sup_{(\alpha_1,\cdots ,\alpha_k)\neq (0, \cdots, 0)} \frac{\alpha_1^2 \int_M v_1 L_g v_1\, dv_g + \cdots + \alpha_k^2 \int_M v_k L_g v_k\, dv_g}{\alpha_1^2\int_M u^{N-2} v_1^2\, dv_g+\cdots+\alpha_k^2\int_M u^{N-2}v_k^2\, dv_g}\\
&=&\sup_{(\alpha_1,\cdots ,\alpha_k)\neq (0, \cdots, 0)} \frac{\alpha_1^2{\lambda}_{\infty,1}\int_M u^{N-2} v_1^2\,dv_g +\cdots+\alpha_k^2{\lambda}_{\infty,k}\int_M u^{N-2}v_k^2\,dv_g}{\alpha_1^2\int_M u^{N-2} v_1^2\, dv_g +\cdots+\alpha_k^2\int_M u^{N-2} v_k^2\, dv_g}\\
&\leq& 0,
\end{eqnarray*}
since each ${\lambda}_{\infty,i}\leq 0$. This gives the desired contradiction.\\
\begin{remark}
Note that, for $i \geq 2$ it could happen that $\int_M u^{N-2} v_i^2 dv_g = 0$ if $M$ is not connected.
\end{remark}
\noindent {\bf{Step 2:} Conclusion} \\
Since $\mu_2(M,g)>0,$ from Step 1, we get that $\mu_2(N, g_\th) > 0.$ Assume $\mu_2(N, g_\th) < \mu(\mS^n)$ (otherwise, we are done). Using \cite{elsayed} we construct a sequence $(v_\th)$ solution of
$$L_{g_\th}v_\th = \mu_2(N,g_\th) |v_\th|^{N-2}v_\th,$$
such that
$$\int_N v_\th^N\, dv_{g_\th} = 1.$$
By Theorem \ref{theoremprincipal1} Part 1, either $\lim_{\th \to 0} \mu_2(N,g_\th) \geq \Lambda_n$ (and the conclusion of Theorem \ref{theoremprincipal} is true) or there exists a function $v$ solution on $M$ of the equation:
$$L_g v = \mu_\infty |v|^{N-2}v,$$
with $\mu_\infty = \lim_\th \mu_2(N,g_\th) \geq 0$ and
$$\int_M |v|^N\, dv_g = 1.$$
We assume the latter from now on.
\noindent As explained in Paragraph \ref{munot0}, we can assume that $\mu(g) \not= 0$.\\
\noindent {\bf Case 1:} $\mu(g)<0$.
\noindent Assume that $M$ is connected (so is $N$) and let us prove that $v$ changes sign.
We suppose by contradiction that $v\geq 0.$ The maximum principle gives that $v>0$. Let $u$ be a positive solution of the Yamabe equation on $M,$ i.e.
$$L_g u = \mu(g) u^{N-1}.$$
Since $v>0$, we can write:
$$L_g v = \underbrace{\mu_\infty}_{\geq 0} |v|^{N-2} v = \mu_\infty v^{N-1}.$$
Multiplying the second equation by $u$ and integrating, we get
$$\underbrace{\mu(g)}_{<0}\int_M u^{N-1} v \,dv_g= \int_M L_g u v \, dv_g = \int_M uL_gv\,dv_g = \underbrace{\mu_\infty}_{\geq 0} \int_M v^{N-1}u\,dv_g.$$
This gives a contradiction.
Then $v$ changes sign, and this implies that
$$\mu_2(M,g)\leq \sup_{\alpha, \beta}F(v, \alpha v^+ + \beta v^-) = \mu_\infty.$$
\noindent If $M$ is now disconnected, then the Yamabe minimizer $u$ is positive on a connected component of $M$. If $uv\not\equiv 0$, the same proof holds. If $uv \equiv 0$ then
$$\mu_2(M,g)\leq \sup_{\alpha, \beta}F(v, \alpha u + \beta v) = \mu_\infty.$$
In any case, the conclusion of Theorem \ref{theoremprincipal} is true. \\
\noindent {\bf{Case 2:}} $\mu(M,g) >0$.
\noindent Then, ${\lambda}_1(N,g_\th)>0$. In \cite{elsayed}, it is established that the sign of the eigenvalues of the Yamabe operator is conformally invariant. Consequently,
${\lambda}_1(N, v_\th^{N-2} g_\th)>0$. Set $\mu_1 = {\lambda}_1(N, v_\th^{N-2} g_\th)$ and let $u_\th$ be an eigenfunction associated with $\mu_1$. Since it is associated with the first eigenvalue of the Yamabe operator, $u_\th$ is positive on at least one connected component of $N$ (and vanishes on the others). In addition, $u_\th$ is a solution of the equation
$$L_{g_\th}u_\th = \mu_1 |v_\th|^{N-2}u_\th,$$
such that
$$\int_N u_\th^N\, dv_{g_\th} = 1 \text{ and } \int_N |v_\th|^{N-2} u_\th v_\th\,dv_{g_\th} = 0.$$
Using Theorem \ref{theoremprincipal1} Step 2), there exists a function $u$ solution on $M$ of the following equation
$$L_g u = \mu_{\infty, 1} |v|^{N-2} u,$$
where $\mu_{\infty,1} := \lim_\th \mu_1$. Note that this limit exists after a possible extraction of a subsequence since
$0 \leq \mu_1 \leq \mu_2(N,g_\th)$.
Proceeding as in Step 1, we show that
\begin{eqnarray} \label{uvorth}
\int_M |v|^{N-2}u v \, dv_g = 0.
\end{eqnarray}
\noindent By the maximum principle and since $u_\th>0$, $u>0$ on at least one connected component of $M$.
Then, $u$ and $v$ satisfy the equations
$$L_g u = \mu_{\infty, 1} |v|^{N-2} u,$$
and
$$L_g v = \mu_\infty |v|^{N-2}v.$$
These equations imply that $\mu_{\infty, 1}$ and $\mu_\infty$ are eigenvalues of the Yamabe operator for the generalized metric $|v|^{N-2} g$ (see \cite{elsayed}). Since it is positive, $u$ is associated with the first eigenvalue of $L_{|v|^{N-2} g}$, i.e. $\mu_{\infty, 1} = {\lambda}_1(M, |v|^{N-2} g)$. Hence, $\mu_{\infty, 1} \leq \mu_\infty$.\\
Finally, we obtain that
\begin{eqnarray*}
\mu_2(M,g ) &\leq& {\lambda}_2(|v|^{N-2}g) \Vol_{|v|^{N-2}g}(M)^{\frac{2}{n}}= \mu_\infty
\end{eqnarray*}
since
$$\Vol_{|v|^{N-2}g}(M) = \int_M |v|^N dv_g = 1$$
and since $\mu_{\infty, 1} \leq \mu_\infty$ are associated with two non-proportional eigenfunctions in the metric $|v|^{N-2} g$ (thanks to Relation \eref{uvorth}); here we recall that $\mu_\infty = \lim_{\th\to 0} \mu_2(N, g_\th)$. This proves Theorem \ref{theoremprincipal}.
\begin{rem}
The reason why we need $\mu(g) \not= 0$ is the following. If $\mu(g) = 0$, the proof of Case 1 clearly does not lead to a contradiction. So, we would like to apply the method used in Case 2 above. For this, we need that ${\lambda}_1(v_\th^{N-2} g_\th)$ is bounded. When $\mu(g) >0$, this holds true since
$$0 \leq {\lambda}_1(v_\th^{N-2} g_\th) \leq {\lambda}_2(v_\th^{N-2} g_\th) = \mu_2(N,g_\th) \to \mu_\infty.$$
If $\mu(g) = 0$, one cannot say anything about the sign of ${\lambda}_1(v_\th^{N-2} g_\th)$. In particular, if it is negative, we are not able to prove that ${\lambda}_1(v_\th^{N-2} g_\th)$ is bounded from above, and the proof breaks down.
\end{rem}
\section{Some applications}\label{topologicalpart}
\noindent In this section, we establish some topological applications of Theorem \ref{mainthm}.
\subsection{A preliminary result}
\noindent We have
\begin{prop}
Let $V$ and $M$ be two compact manifolds such that $V$ carries a metric $g$ with $\Scal_g = 0$ and $\sigma (M)>0$. Then
$$\sigma_2(V\amalg M) \geq \min(\mu_2(g),\sigma(M))>0.$$
\end{prop}
\noindent \textbf{Proof:} On $V\amalg M$, let $G = {\lambda} g + \mu h$, where ${\lambda}$ and $\mu$ are two positive constants and for a small $\ep$, $h$ is a metric such that $\sigma(M) \leq\mu(M,h) + \ep$. We have
\begin{eqnarray*}
\Spec(L_G) &=& \Spec(L_{{\lambda} g}) \cup \Spec(L_{\mu h})\\
&=& {\lambda}^{-1}\Spec(L_g) \cup \mu^{-1}\Spec(L_h)\\
&=& \{{\lambda}^{-1} {\lambda}_1, {\lambda}^{-1}{\lambda}_2, \cdots \}\cup \{\mu^{-1} {\lambda}_1',\mu^{-1} {\lambda}_2', \cdots \}
\end{eqnarray*}
where ${\lambda}_i$ (resp. ${\lambda}_i'$) denotes the $i$-th eigenvalue of $L_g$ (resp. $L_h$).
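The scaling used in the second line follows from a direct computation: for a constant ${\lambda}>0$ we have $\Delta_{{\lambda} g}={\lambda}^{-1}\Delta_g$ and $\Scal_{{\lambda} g}={\lambda}^{-1}\Scal_g$, hence
$$L_{{\lambda} g}u = C_n\Delta_{{\lambda} g}u+\Scal_{{\lambda} g}\,u={\lambda}^{-1}\left(C_n\Delta_g u+\Scal_g\, u\right)={\lambda}^{-1}L_g u,$$
so that $\Spec(L_{{\lambda} g})={\lambda}^{-1}\Spec(L_g)$.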
The assumptions we made allow us to conclude that ${\lambda}_1=0$, ${\lambda}_2>0$ and ${\lambda}_1' >0$. Hence, we deduce that ${\lambda}_2(L_G) = \min \{{\lambda}^{-1}{\lambda}_2, \mu^{-1} {\lambda}_1'\}$.\\
We know that
$$\vol_G(V\amalg M) = {\lambda}^\frac{n}{2} \vol_g(V) + \mu^\frac{n}{2} \vol_h(M).$$
$\bullet$ For $\mu = 1$ and ${\lambda}\to +\infty$, we have
$${\lambda}_2(L_G) = {\lambda}^{-1}{\lambda}_2.$$
\begin{eqnarray*}
{\lambda}_2(L_G) {\vol_G}^\frac{2}{n}(V\amalg M) &=& {\lambda}^{-1} {\lambda}_2 \left(C + {\lambda} {\vol_g}^\frac{2}{n}(V)\right)\\
&\to_{{\lambda} \to +\infty} & {\lambda}_2 {\vol_g}^\frac{2}{n}(V) = \mu_2(g).
\end{eqnarray*}
$\bullet$ For ${\lambda} = 1$ and $\mu\to +\infty$, we have $${\lambda}_2(L_G) = \mu^{-1} {\lambda}_1'.$$
Hence \begin{eqnarray*}
{\lambda}_2(L_G) {\vol_G}^\frac{2}{n}(V\amalg M) &=& \mu^{-1} {\lambda}_1'\left(C + \mu {\vol_h}^\frac{2}{n}(M)\right)\\
&\to_{\mu \to +\infty} & {\lambda}_1' {\vol_h}^\frac{2}{n}(M) = \mu(M,h) \geq \sigma(M) - \ep.
\end{eqnarray*}
Since $\ep>0$ is arbitrary, we finally get that
$$\sigma_2(V\amalg M)\geq \min(\mu_2(g), \sigma(M)).$$
\begin{rem}
\begin{enumerate}
\item It is known that if $\sigma(M)>0$ and $\sigma(N)>0$, then
$$\sigma(M\amalg N) = \min(\sigma(M), \sigma(N)),$$
where $M\amalg N$ is the disjoint union of $M$ and $N$. (see \cite{ammann.dahl.humbert:08}).\\
\item Let $V$ be such that $\sigma(V) \leq 0$. Then, for $k\geq 2$,
$$\sigma_2(\underbrace{V\amalg\cdots\amalg V}_{k \hbox{ times }} \amalg M) \leq 0.$$
\end{enumerate}
\noindent Indeed, consider any metric $g = g_1\amalg g_2\amalg \cdots \amalg g_k\amalg g_n$ on $V\amalg\cdots\amalg V \amalg M$. Let $v_i$ be eigenfunctions associated with ${\lambda}_1(g_i)$, which is non-positive by assumption. The functions $\tilde {v_i} = 0 \amalg \cdots 0 \amalg \underbrace{v_i}_{i^{th} \hbox{factor} }
\amalg\hskip0.1cm 0 \cdots \amalg 0$ are linearly independent and satisfy $L_g(\tilde{v_i}) = {\lambda}_1(g_i) \tilde{v_i}$, and thus are eigenfunctions of $L_g$. This implies that ${\lambda}_k(g) \leq 0$ and, since $k \geq 2$, ${\lambda}_2(g) \leq 0$.
\end{rem}
\noindent This remark explains the condition $|\al(M)| \leq 1$ in Corollary \ref{cor}: it is used to ensure that $M$ is obtained from a model manifold $V\amalg N$ (where $V$ carries a scalar-flat metric and $\sigma(N)>0$) with a number of factors $V$ not larger than $1$. We recall that the $\alpha$-genus is a homomorphism from the spin cobordism ring $\Omega_*^{\Spin}$ to the real $K$-theory ring $KO_*(pt)$,
$$\alpha: \Omega_*^{\Spin} \rightarrow KO_*(pt).$$
It is important that $\alpha$ is a ring homomorphism, i.e. for any connected closed spin manifolds $M$ and $N$, $\alpha(M\amalg N) = \alpha(M) + \alpha(N)$ and $\alpha(M\times N) = \alpha(M)\cdot\alpha(N)$.\\
Note that $KO_n(pt)$ vanishes if $n = 3, 5, 6, 7$ mod $8$, is isomorphic to $\mathbb{Z}$ if $n = 0,4$ mod $8$ and is isomorphic to $\mathbb{Z}/{2\mathbb{Z}}$ if $n = 1, 2$ mod $8$. Recall also that $\alpha$ is exactly the $\hat{A}$-genus in dimensions $0$ mod $8$ and equal to half the $\hat{A}$-genus in dimensions $4$ mod $8$.\\
In \cite{baerdahl1}, Proposition 3.5 says that in dimensions $n = 0, 1, 2, 4$ mod $8$, there exists a manifold $V$ such that $\alpha(V) = 1$ and $V$ carries a metric $g$ such that $\Scal_g = 0$.\\
$\bullet$ When $\alpha(M) = 0$, Theorem A in \cite{stolz} applies and $\sigma(M)\geq \alpha_n$, where $\alpha_n$ is a positive constant depending only on $n$.
\begin{thm}
Let $M$ be a spin manifold. Then $\alpha(M) = 0$ is equivalent to the existence of a manifold $N$ cobordant to $M$ that carries a metric $g$ with positive scalar curvature $\Scal_g$.
\end{thm}
\noindent Remember that a cobordism is a manifold $W$ with boundary whose boundary is partitioned into two parts, $\partial W = M\amalg (-N)$.
\begin{thm}
If $M$ is cobordant to $N$ and if $M$ is connected then $M$ is obtained from $N$ by a finite number of surgeries of dimension $0\leq k\leq n-3.$
\end{thm}
\begin{prop}
Let $M$ be a connected, simply connected spin manifold of dimension $n\geq 5$. If $n = 0, 1 ,2 , 4$ mod $8$ and $|\alpha(M)|\leq 1$, then
$$\sigma_2(M)\geq \alpha_n,$$
where $\alpha_n$ is a positive constant depending only on $n$.
\end{prop}
\noindent \textbf{Proof:} Proposition 3.5 in \cite{baerdahl1} gives us that for each $n = 0, 1, 2, 4$ mod $8$, $n\geq 1$, there is a manifold $V$ of dimension $n$ such that $V$ carries a metric $g$ such that $\Scal_g = 0$ and $\alpha(V) = 1$.\\
$\bullet$ \textbf{First case:} If $\alpha(M) = 0$, then $M$ is cobordant to a manifold $N$ whose scalar curvature $\Scal_g$ is positive. In this case we can obtain $M$ from $N$ by a finite number of surgeries of dimension $k\leq n-3$. Hence, by the corollary, $\sigma(M)\geq c_n$, where $c_n$ is a positive constant depending only on $n$.\\
$\bullet$ \textbf{Second case:} If $\alpha(M) = 1$, then $\alpha(M\amalg (-V)) = 0$, so there exists a manifold $N$ with $\Scal_g > 0$ such that $M\amalg (-V)$ is cobordant to $N$, which is equivalent to saying that $M$ is cobordant to $V\amalg N$. Consequently, $M$ can be obtained from $V\amalg N$ by a finite number of surgeries of dimension $k\leq n-3$. Applying the main theorem of this paper, we get the desired result.
|
1,477,468,750,929 | arxiv | \subsection{The training of the encoder (stage 1)}
The encoder training is based on the maximization problem:
\begin{equation}
\label{MI_SIMCLR}
\hat{\boldsymbol{\phi_{\varepsilon}}} = \operatornamewithlimits{argmax}_{\boldsymbol{\phi_{\varepsilon}}} I_{\boldsymbol {\phi_{\varepsilon}}}({\bf{X}} ;{\bf{E}}),
\end{equation}
where $I_{{\boldsymbol \phi}_{\varepsilon}}({\bf X} ;{\bf E}) = \mathbb{E}_{p(\mathbf{x} , {\boldsymbol\varepsilon}) }\left[ \log \frac{q_{ {\boldsymbol \phi}_{\varepsilon}}({\boldsymbol \varepsilon} | {\bf x} )}{q_{ {\boldsymbol \phi}_{\varepsilon}}({\boldsymbol \varepsilon})} \right]$, $q_{{\boldsymbol\phi}_{\varepsilon}}(\boldsymbol {\varepsilon} | \mathbf{x})$ denotes the encoder and $q_{{\boldsymbol\phi}_{\varepsilon}}(\boldsymbol \varepsilon)$ denotes the marginal latent space distribution.
In the framework of contrastive learning, (\ref{MI_SIMCLR}) is maximized based on the InfoNCE framework \cite{Oord2018RepresentationLW}. In the practical implementation, one can use an approach similar to SimCLR \cite{Chen2020ASF}, where the inner product between positive pairs created from the augmented views of the same image is maximized and the inner product between negative pairs based on different images is minimized\footnote{The SimCLR training is based on the maximization $I_{\boldsymbol \phi_{\varepsilon}, \phi_{n}}({\bf X} ;{\bf H})$, but since $I_{\boldsymbol \phi_{\varepsilon}, \phi_{n}}({\bf X} ;{\bf H}) < I_{\boldsymbol \phi_{\varepsilon}}({\bf X} ;{\bf E})$ one could lower bound (\ref{MI_SIMCLR}).}. Alternatively, one can use other approaches to learn the representation $\boldsymbol \varepsilon$, such as BYOL \cite{Grill2020BootstrapYO}, Barlow Twins \cite{Zbontar2021BarlowTS}, etc., without loss of generality of the proposed approach. It should be pointed out that the encoder is trained independently of the decoder in the considered setup.
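For concreteness, a minimal sketch of such a contrastive (InfoNCE/NT-Xent) objective is given below in PyTorch-style Python; the temperature value and the assumption that the projections of the two augmented views arrive as two aligned batches are illustrative choices, not the exact settings of our implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce_loss(h1, h2, temperature=0.1):
    """InfoNCE / NT-Xent loss for two batches of paired projections.

    h1, h2: (B, d) projections of two augmented views of the same images;
    (h1[i], h2[i]) are positive pairs, all other batch items are negatives.
    """
    b = h1.shape[0]
    z = F.normalize(torch.cat([h1, h2], dim=0), dim=1)   # (2B, d), unit norm
    sim = z @ z.t() / temperature                        # similarity logits
    mask = torch.eye(2 * b, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float('-inf'))           # drop self-pairs
    # index of the positive partner for each of the 2B rows
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(b)])
    return F.cross_entropy(sim, targets.to(sim.device))
\end{verbatim}
Minimizing this loss maximizes the similarity of positive pairs and minimizes that of negative pairs, which is the practical surrogate for (\ref{MI_SIMCLR}).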
\subsection{ The training of the class attribute classifier (stage 2)}
The class attribute classifier training is based on the maximization problem:
\begin{equation}
\label{MI_Classifier}
\hat{\boldsymbol \theta}_{\mathrm{y}} = \operatornamewithlimits{argmax}_{ \boldsymbol \theta_{\mathrm y}} I_{{\boldsymbol \phi}^*_{\varepsilon},{\boldsymbol \theta}_{\mathrm y} }({\bf Y} ;{\bf E}),
\end{equation}
where $I_{{\boldsymbol \phi}^*_{\varepsilon},{\boldsymbol \theta}_{\mathrm y} }({\bf Y} ;{\bf E}) = H({\bf Y}) - H_{{\boldsymbol \phi}^*_{\varepsilon},{\boldsymbol \theta}_{\mathrm y} }({\bf Y}|{\bf E})$, $H({\bf Y}) = - \mathbb{E}_{p_{\bf y}({\bf y})}\log p_{\bf y}({\bf y}) $ is the entropy of ${\bf Y}$, and the conditional entropy is defined as
$ H_{{\boldsymbol \phi}^*_{\varepsilon},{\boldsymbol \theta}_{\mathrm y} }({\bf Y}|{\bf E}) = - \mathbb{E}_{p_{\bf x}({\bf x})} \left[ \mathbb{E}_{q_{{\boldsymbol \phi}^*_{\varepsilon}}({\boldsymbol \varepsilon}| \mathbf{x})} \left[ \log p_{{\boldsymbol \theta}_{\mathrm{y}}}(\mathbf{y} |{ \boldsymbol \varepsilon}) \right]\right]$. Since $H({\bf Y})$ does not depend on the classifier parameters ${\boldsymbol \theta}_{\mathrm y}$, (\ref{MI_Classifier}) reduces to the conditional entropy minimization:
\begin{equation}
\label{MI_Classifier_min}
\hat{\boldsymbol \theta}_{\mathrm y} = \operatornamewithlimits{argmin}_{ {\boldsymbol \theta}_{\mathrm y}} H_{{\boldsymbol \phi}^*_{\varepsilon},{\boldsymbol \theta}_{\mathrm y} }({\bf Y}|{\bf E}) ,
\end{equation}
which, under the categorical conditional distribution $p_{{\boldsymbol \theta}_{\mathrm{y}}}(\mathbf{y} |{ \boldsymbol \varepsilon})$, can be expressed as the categorical cross entropy $\mathcal{L}_{\mathrm{y}}(\mathbf{y}, \hat{\mathbf{y}})$.
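A minimal sketch of one stage-2 update in PyTorch-style Python is given below; the concrete \texttt{encoder}, \texttt{classifier} and \texttt{optimizer} objects are placeholders. The pre-trained encoder is frozen, and only the classifier parameters are updated.
\begin{verbatim}
import torch
import torch.nn.functional as F

def classifier_step(encoder, classifier, optimizer, x, y):
    """One stage-2 update: categorical cross-entropy minimization."""
    with torch.no_grad():          # encoder parameters stay fixed
        eps = encoder(x)           # latent representation of the batch
    logits = classifier(eps)       # parametrizes p(y | eps)
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()          # optimizer holds only classifier params
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}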
\subsection{ The training of the decoder, i.e., the mapper and generator (stage 3) }
The decoder is trained first to maximize the mutual information between the class attributes $\tilde{ \bf y}$ predicted from the generated images and true class attributes $\bf y$:
\begin{equation}
\label{MI_decoder_classification_loss}
(\hat{\boldsymbol \theta}_{\mathrm x},\hat{\boldsymbol \psi} ) = \operatornamewithlimits{argmax}_{ \boldsymbol \theta_{\mathrm x}, {\boldsymbol \psi}} I_{{\boldsymbol \psi},{\boldsymbol \theta}_{\mathrm x},{\boldsymbol \phi}^*_{\varepsilon},{\boldsymbol \theta}^*_{\mathrm y} }({\bf Y} ;{\bf E}),
\end{equation}
where $I_{{\boldsymbol \psi},{\boldsymbol \theta}_{\mathrm x},{\boldsymbol \phi}^*_{\varepsilon},{\boldsymbol \theta}^*_{\mathrm y} }({\bf Y} ;{\bf E}) = H({\bf Y}) - H_{{\boldsymbol \psi},{\boldsymbol \theta}_{\mathrm x},{\boldsymbol \phi}^*_{\varepsilon},{\boldsymbol \theta}^*_{\mathrm y} }({\bf Y}|{\bf E})$ and $H({\bf Y}) = - \mathbb{E}_{p_{\bf y}({\bf y})}\log p_{\bf y}({\bf y}) $ and the conditional entropy is defined as
$ H_{{\boldsymbol \psi},{\boldsymbol \theta}_{\mathrm x},{\boldsymbol \phi}^*_{\varepsilon},{\boldsymbol \theta}^*_{\mathrm y} }({\bf Y}|{\bf E}) = - \mathbb{E}_{p_{\bf y}({\bf y})} \left[ \mathbb{E}_{p_{\bf z}({\bf z})} \left[ \mathbb{E}_{r_{{\boldsymbol \psi}}({\boldsymbol \varepsilon}| \mathbf{y}, {\bf z})} \left[ \mathbb{E}_{p_{{\boldsymbol \theta}_{\mathrm x}}(\mathbf{x}|{\boldsymbol \varepsilon})}
\left[ \mathbb{E}_{q_{{\boldsymbol \phi}^*_{\varepsilon}}({\boldsymbol \varepsilon}| \mathbf{x})}
\left[ \log p_{{\boldsymbol \theta}^*_{\mathrm{y}}}(\mathbf{y} |{ \boldsymbol \varepsilon}) \right]\right]\right] \right]\right]$, $p_{{\boldsymbol \theta}^*_{\mathrm{y}}}(\mathbf{y} | \boldsymbol{\varepsilon})$ corresponds to the pre-trained classifier and $q_{{\boldsymbol \phi}^*_{\varepsilon}}(\boldsymbol{\varepsilon} | \mathbf{x})$ denotes the pre-trained encoder. Since $H({\bf Y})$ does not depend on the parameters of the mapper and generator, (\ref{MI_decoder_classification_loss}) reduces to the conditional entropy minimization:
\begin{equation}
\label{MI_decoder_classification_loss_bound}
(\hat{\boldsymbol \theta}_{\mathrm x},\hat{\boldsymbol \psi} ) = \operatornamewithlimits{argmin}_{ \boldsymbol \theta_{\mathrm x}, {\boldsymbol \psi}} H_{{\boldsymbol \psi},{\boldsymbol \theta}_{\mathrm x},{\boldsymbol \phi}^*_{\varepsilon},{\boldsymbol \theta}^*_{\mathrm y} }({\bf Y}|{\bf E}),
\end{equation}
which, under the categorical conditional distribution $p_{{\boldsymbol \theta}_{\mathrm y}}(\mathbf{y} |{ \boldsymbol \varepsilon})$, can be expressed as the categorical cross entropy $\mathcal{L}_{\mathrm{y}}(\mathbf{y}, \tilde{\mathbf{y}})$.
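The corresponding classification regularization of the decoder can be sketched as follows (placeholder module names; gradients flow through the frozen encoder and classifier back into the generator and mapper, whose parameters are the only ones held by the optimizer):
\begin{verbatim}
import torch
import torch.nn.functional as F

def decoder_cls_step(mapper, generator, encoder, classifier, optimizer, y, z):
    """One classification-regularization update of the decoder (sketch)."""
    eps = mapper(y, z)                    # eps ~ r_psi(eps | y, z)
    x_fake = generator(eps)               # x ~ p_theta_x(x | eps)
    logits = classifier(encoder(x_fake))  # predicted attributes y_tilde
    loss = F.cross_entropy(logits, y)     # L_y(y, y_tilde)
    optimizer.zero_grad()
    loss.backward()    # backprop through the frozen encoder and classifier
    optimizer.step()   # updates only the mapper and generator parameters
    return loss.item()
\end{verbatim}
Here \texttt{y} is assumed to be a batch of integer class-attribute labels that the mapper embeds internally.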
Finally, the decoder should produce samples that follow the distribution of training data $p_{\bf x}({\bf x})$ that corresponds to the maximization of mutual information:
\begin{equation}
\label{MI_decoder_generation_loss}
(\hat{\boldsymbol \theta}_{\mathrm x},\hat{\boldsymbol \psi} ) = \operatornamewithlimits{argmax}_{ \boldsymbol \theta_{\mathrm x}, {\boldsymbol \psi}} I_{{\boldsymbol \psi},{\boldsymbol \theta}_{\mathrm x} }({\bf X} ;{\bf E}),
\end{equation}
where $ I_{{\boldsymbol \psi},{\boldsymbol \theta}_{\mathrm x} }({\bf X} ;{\bf E}) = \mathbb{E}_{p_{\bf x}({\bf x})} \left[ \mathbb{E}_{p_{\bf y}({\bf y})} \left[ \mathbb{E}_{p_{\bf z}({\bf z})} \left[ \mathbb{E}_{r_{{\boldsymbol \psi}}({\boldsymbol \varepsilon}| \mathbf{y}, {\bf z})} \left[ \mathbb{E}_{p_{{\boldsymbol \theta}_{\mathrm x}}(\mathbf{x}|{\boldsymbol \varepsilon})}
\left[ \log \frac{p_{ {\boldsymbol \theta}_{\mathrm x}}( {\bf x}|{\boldsymbol \varepsilon} )}{p_{ {\bf x}}( {\bf x} )} \right]\right]\right] \right] \right] = \mathbb{E}_{p_{\bf y}({\bf y})} \left[ \mathbb{E}_{p_{\bf z}({\bf z})} \left[ \mathbb{E}_{r_{{\boldsymbol \psi}}({\boldsymbol \varepsilon}| \mathbf{y}, {\bf z})}
\left[ \mathbb{D}_{\mathrm{KL}}(p_{ {\boldsymbol \theta}_{\mathrm x}}( {\bf x}|{{\bf E} =\boldsymbol \varepsilon} )||p_{ {\boldsymbol \theta}_{\mathrm x}}( {\bf x} )) \right]\right] \right] - \mathbb{D}_{\mathrm{KL}}(p_{\bf x}({\bf x})||p_{ {\boldsymbol \theta}_{\mathrm x}}( {\bf x} )) $, where $p_{\theta_{\mathrm{x}}}(\bf x)$ denotes the distribution of generated samples $\bf{\tilde{x}}$. Since $ \mathbb{D}_{\mathrm{KL}}(p_{ {\boldsymbol \theta}_{\mathrm x}}( {\bf x}|{{\bf E} =\boldsymbol \varepsilon} )||p_{ {\boldsymbol \theta}_{\mathrm x}}( {\bf x} )) \geq 0$, the maximization of the above mutual information reduces to the minimization:
\begin{equation}
\label{MI_decoder_discriminnator}
(\hat{\boldsymbol \theta}_{\mathrm x},\hat{\boldsymbol \psi} ) = \operatornamewithlimits{argmin}_{ \boldsymbol \theta_{\mathrm x}, {\boldsymbol \psi}} \mathbb{D}_{\mathrm{KL}}(p_{\bf x}({\bf x})||p_{ {\boldsymbol \theta}_{\mathrm x}}( {\bf x} )).
\end{equation}
In practice, this divergence is minimized adversarially with a discriminator, denoted $\mathcal{D}_{\mathbf{x} \tilde{\mathbf{x}}}(\mathbf{x})$. At the same time, one can also envision a discriminator conditioned on the class attributes $\bf y$, e.g., $\mathcal{D}_{\mathbf{x} \tilde{\mathbf{x}}}(\mathbf{x} \mid \mathbf{y})$, implemented as a set of discriminators, one for each subset of generated and original samples defined by the class attributes $\bf y$.
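As an illustration, the adversarial part can be instantiated, e.g., with the least-squares (LSGAN) objective, which performed best in the ablation studies below; this is a generic sketch rather than the exact implementation, and it works unchanged with a patch discriminator since the means are taken over all output elements.
\begin{verbatim}
import torch

def lsgan_d_loss(disc, x_real, x_fake):
    """LSGAN discriminator loss: push D(real) -> 1 and D(fake) -> 0."""
    d_real = disc(x_real)
    d_fake = disc(x_fake.detach())       # block generator gradients here
    return 0.5 * (((d_real - 1.0) ** 2).mean() + (d_fake ** 2).mean())

def lsgan_g_loss(disc, x_fake):
    """LSGAN generator loss: push D(fake) -> 1."""
    return 0.5 * ((disc(x_fake) - 1.0) ** 2).mean()
\end{verbatim}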
\section{Experiments}
In this section, we describe the generation experiments. For the evaluation, we use three metrics: Fréchet inception distance (FID) \cite{Heusel2017GANsTB}, inception score (IS) \cite{Salimans2016ImprovedTF} and Chamfer distance \cite{ravi2020pytorch3d}. Since the Chamfer distance works in low-dimensional spaces, we compute features of the real and generated images with the pre-trained encoder, embed these features into 3D using t-SNE, and compute the Chamfer distance between the resulting point clouds. We perform ablation studies on the AFHQ dataset. To determine whether the conditionally generated images obey the required attributes, we use the attribute control accuracy, computed as the percentage of images for which the output of the attribute classifier matches the input attribute. The attribute control accuracy thus measures how good the generator is at conditional generation.
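A sketch of this Chamfer-distance evaluation protocol is given below, implemented directly with NumPy and scikit-learn rather than with the routine of \cite{ravi2020pytorch3d}; the t-SNE hyperparameters are left at their defaults, which is an illustrative choice.
\begin{verbatim}
import numpy as np
from sklearn.manifold import TSNE

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a:(n,3), b:(m,3)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (n, m) sq. dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def chamfer_on_tsne(real_feats, fake_feats, seed=0):
    """Jointly embed encoder features into 3D and compare the clouds."""
    joint = np.concatenate([real_feats, fake_feats], axis=0)
    pts = TSNE(n_components=3, random_state=seed).fit_transform(joint)
    n = real_feats.shape[0]
    return chamfer_distance(pts[:n], pts[n:])
\end{verbatim}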
\subsection{EigenGAN}
We compare the proposed InfoSCC-GAN with the original EigenGAN \cite{He2021EigenGANLE} on the AFHQ dataset. Our model is based on the same generator while using different inputs and a conditional regularization. In the current setup, EigenGAN has 6 layers, each with 6 dimensions, that are used for interpretable and controllable feature exploration. The original EigenGAN achieves an FID score of \textbf{29.02} and an IS of \textbf{8.52} after 200000 training iterations on the AFHQ dataset using a global discriminator and the hinge loss \cite{Lim2017GeometricG}. EigenGAN does not allow for interpretable feature exploration for the wild animal images. This can be explained by the imbalance and heterogeneity of this class, since the ``wild'' class includes multiple distinct subclasses such as tiger, lion, fox, and wolf, which are not semantically close.
\subsection{Conditional generation}
We achieve the best FID score of \textbf{11.59}, IS of \textbf{11.06} and Chamfer distance of \textbf{3645} with the InfoSCC-GAN approach after 200000 training iterations, using a patch discriminator \cite{Isola2017ImagetoImageTW} and the LSGAN loss \cite{Mao2017LeastSG} on the AFHQ dataset. In the current setup, we have 6 layers with 6 explorable dimensions each.
The results on CelebA dataset with 5, 10 and 15 attribute labels are presented in Tables \ref{table:table_resuls_CelebA_5}, \ref{table:table_results_CelebA_10}, \ref{table:table_results_CelebA_15}.
\begin{table}[]
\centering
\caption{Conditional generation results on CelebA dataset with 5 selected attributes.}
\label{table:table_resuls_CelebA_5}
\begin{tabular}{ccccccc}
\hline
\multirow{2}{*}{\textbf{FID $\downarrow$}} & \multirow{2}{*}{\textbf{IS $\uparrow$}} & \multicolumn{5}{c}{\textbf{Attribute Control Accuracy $\uparrow$}} \\
\cline{3-7} & & Bald & Eyeglasses & Mustache & Wearing Hat & Wearing Necktie \\ \hline
\multicolumn{1}{r}{27.84} & \multicolumn{1}{r}{9.91} & \multicolumn{1}{r}{93.27\%} & \multicolumn{1}{r}{99.88\%} & \multicolumn{1}{r}{95.68\%} & \multicolumn{1}{r}{94.62\%} & \multicolumn{1}{r}{98.62\%} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\caption{Conditional generation results on CelebA dataset with 10 selected attributes.}
\label{table:table_results_CelebA_10}
\begin{tabular}{ccccccc}
\hline
\multirow{2}{*}{\textbf{FID$\downarrow$}} & \multirow{2}{*}{\textbf{IS$\uparrow$}} & \multicolumn{5}{c}{\textbf{Attribute Control Accuracy$\uparrow$}} \\ \cline{3-7}
& & Bald & Black Hair & Blond Hair & Brown Hair & Double Chin \\ \hline
32.39 & 9.04 & 89.74\% & 89.61\% & 86.86\% & 85.55\% & 84.82\% \\ \hline
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & Eyeglasses & Gray Hair & Mustache & Wearing Hat & Wearing Necktie \\ \cline{3-7}
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & 99.6\% & 81.71\% & 92.27\% & 92.83\% & 89.26\% \\ \cline{3-7}
\end{tabular}
\end{table}
\begin{table}[]
\caption{Conditional generation results on CelebA dataset with 15 selected attributes.}
\label{table:table_results_CelebA_15}
\begin{tabular}{ccccccc}
\hline
\multirow{2}{*}{\textbf{FID$\downarrow$}} & \multirow{2}{*}{\textbf{IS$\uparrow$}} & \multicolumn{5}{c}{\textbf{Attribute Control Accuracy$\uparrow$}}\\ \cline{3-7}
& & Bald & Blurry & Chubby & Double Chin & Eyeglasses \\ \hline
34.97 & 8.87 & 83.6\% & 96.46\% & 80.1\% & 95.74\% & 98.11\% \\ \hline
& & Goatee & Gray Hair & Mustache & Narrow Eyes & Pale Skin \\ \cline{3-7}
& & 89.09\% & 90.78\% & 87.64\% & 74.22\% & 86.91\% \\ \cline{3-7}
& & Receding Hairline & Rosy Cheeks & Sideburns & Wearing Hat & Wearing Necktie \\ \cline{3-7}
& & 86.46\% & 78.88\% & 74.9\% & 97.64\% & 94.87\% \\ \cline{3-7}
\end{tabular}
\end{table}
\section{Ablation studies}
In this section, we describe the ablation studies we have performed on the type of discriminator and the discriminator loss.
\subsection{Discriminator ablation studies}
\begin{table}
\centering
\caption{Discriminator ablation studies.}
\label{table:table_discriminator_ablation}
\begin{tabular}{llrrr}
\hline
\textbf{Discriminator} & \textbf{Loss} & \textbf{FID $\downarrow$} & \textbf{IS $\uparrow$} & \textbf{Chamfer distance $\downarrow$} \\ \hline
Global & Hinge & 13.08 & 10.71 & 4030 \\ \hline
Global & Non saturating & 25.62 & 10.33 & 28595 \\ \hline
Global & LSGAN & 29.02 & 9.89 & 45583 \\ \hline
Patch & Hinge & 15.95 & 10.51 & 7327 \\ \hline
Patch & Non saturating & 14.83 & 10.21 & 5114 \\ \hline
Patch & LSGAN & \textbf{11.59} & \textbf{11.06} & \textbf{3645} \\ \hline
\end{tabular}
\end{table}
In this section, we describe the discriminator and loss ablation studies. We compare two discriminators: a global discriminator and a patch discriminator. The global discriminator outputs a single value, the probability of the image being real; its architecture is inspired by the EigenGAN paper. The patch discriminator outputs a tensor of values representing the probabilities of individual image patches being real; its architecture is inspired by the pix2pix GAN \cite{Isola2017ImagetoImageTW}. We compare these discriminators in combination with three discriminator losses: the hinge loss, the non-saturating loss and the LSGAN loss. The results of the studies are presented in Table \ref{table:table_discriminator_ablation}. For all of the discriminators and losses used in the study, the attribute control accuracy is in the range of 99--100\%.
\section{Conclusions}
In this paper, we propose InfoSCC-GAN, a novel stochastic contrastive conditional GAN that performs stochastic conditional image generation with an explorable latent space, and we provide an information-theoretical formulation of the proposed system. Unlike other contrastive image generation approaches, our method is a truly stochastic generator controlled both by the class attributes and by a set of stochastic parameters. We apply a novel training methodology based on a pre-trained unsupervised contrastive encoder and a pre-trained classifier, with a classification regularization applied every $n$-th iteration. We also propose a novel attribute selection approach based on clustering the embeddings computed by the encoder. The proposed model outperforms the ``vanilla'' EigenGAN on the AFHQ dataset while additionally providing conditional image generation. We have performed ablation studies to determine the best setup for conditional image generation, as well as experiments on the AFHQ and CelebA datasets.
\begin{ack}
This research was partially funded by the SNF Sinergia project (CRSII5-193716) Robust deep density models for high-energy particle physics and solar flare analysis (RODEM).
\end{ack}
{
\small
\bibliographystyle{unsrtnat}
|
1,477,468,750,930 | arxiv | \section{Introduction}
\noindent The quasi-biennial oscillation (QBO) is a well-known quasi-periodic variation with characteristic times of 0.5--4 years in different solar, heliospheric and cosmic ray characteristics, see \cite{Bazilevskaya_etal_SSR_186_359_2014,Bazilevskaya_etal_JoP_CS_inprint_2015,Bazilevskaya_etal_CosRes_inprint_2016} and references therein. The QBO appears to be the most prevalent quasi-periodicity shorter than the 11-year cycle in solar activity phenomena.
Here we are mainly interested in three aspects of the many facts summarized in \cite{Bazilevskaya_etal_SSR_186_359_2014,Bazilevskaya_etal_JoP_CS_inprint_2015,Bazilevskaya_etal_CosRes_inprint_2016}.
First, in \cite{Bazilevskaya_etal_SSR_186_359_2014} it has been shown that there is a lack of correlation between solar and heliospheric QBOs: the correlation coefficient between the QBO in the sunspot area $S_{ss}$, $QBO_{Sss}$, and the QBO in the regular heliospheric magnetic field (HMF) strength $B^{hmf}$, $QBO_{Bhmf}$, is $\rho\approx 0.2$. This correlation increases up to $\rho\approx 0.5$ if the heliospheric QBOs are shifted behind the solar ones by about 3 months.
Second, in \cite{Bazilevskaya_etal_SSR_186_359_2014} it was shown that the QBOs in the galactic cosmic ray intensity $J_{gcr}$, $QBO_J$, are not coherent
with $QBO_{Sss}$ and other solar indices, while they correspond well with $QBO_{Bhmf}$, and it was suggested that both the step-like changes of the GCR intensity and the Gnevyshev Gap (GG) effect (a temporal damping of the solar modulation around the sunspot maxima, see the references in \cite{Bazilevskaya_etal_SSR_186_359_2014}) could be viewed as manifestations of $QBO_J$. Third, in \cite{Bazilevskaya_etal_JoP_CS_inprint_2015,Bazilevskaya_etal_CosRes_inprint_2016} it was suggested that the small delay of the $QBO_J$ relative to $QBO_{Bhmf}$ argues for a diffusion mechanism of the $QBO_J$ acting within $\approx 10$ AU from the Sun, while the difference in the correlation coefficients for the periods with the dominating HMF polarity ($A$, the sign of $B^{hmf}_r$ in the N-hemisphere) $A>0$ and $A<0$ may be indicative of the drift influence.
In this paper, after introducing the necessary definitions and describing the data used, we formulate and check a hypothesis on the causes of the apparent lack of correlation between solar and heliospheric QBOs, using the energy indices introduced in \cite{Krainev_etal_Pulkovo_Conf_121_1999} as a proxy for both the solar magnetic field (SMF) and the HMF. Then the possible mechanisms of $QBO_J$ are considered in slightly more detail, and we critically discuss the idea that the step-like changes of the GCR intensity during the periods of low solar activity and the GG-effect during the solar maximum phases have the same nature.
\section{Definitions and data}
\noindent
In this paper, as a proxy for the QBO in the time series of any characteristic $P$, averaged monthly or over a Carrington rotation (CR) or Bartels rotation (BR), we consider the same simple and robust expression as in \cite{Bazilevskaya_etal_SSR_186_359_2014,Bazilevskaya_etal_JoP_CS_inprint_2015,Bazilevskaya_etal_CosRes_inprint_2016}, $QBO_P=P_7-P_{25}$, where $P_k$ means the $P$-series smoothed over $k$ points. As a proxy for the long-term or sunspot cycle (SC) variation in the same characteristic we use $SC_P=P_{13}$, that is, the approximately yearly smoothed $P$-series. Besides the CR time series of the SMF energy indices (see the next section), calculated using the Wilcox Solar Observatory (WSO), USA, data and models \cite{WSO_Site}, we also use the CR series of the sunspot area $S_{ss}$ from \cite{Sss_Site} and the BR time series of the HMF strength near the Earth $B^{hmf}$ from \cite{OMNI_Site}.
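These proxies can be computed, e.g., as follows (a minimal Python sketch; the use of centered running means is an implementation choice):
\begin{verbatim}
import pandas as pd

def qbo_and_sc(series):
    """Return the QBO_P = P_7 - P_25 and SC_P = P_13 proxies of a series."""
    p = pd.Series(series)
    smooth = lambda k: p.rolling(window=k, center=True).mean()
    return smooth(7) - smooth(25), smooth(13)
\end{verbatim}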
\section{Transition of QBO from the Sun to the heliosphere}
\noindent In \cite{Bazilevskaya_etal_JoP_CS_inprint_2015,Bazilevskaya_etal_CosRes_inprint_2016} it was shown that the sunspot area and the photospheric SMF energy index, introduced in \cite{Krainev_etal_Pulkovo_Conf_121_1999}, change similarly both in their long-term (SC) and QBO variations. Using the WSO data and models, the SMF energy indices can be constructed not only for the photosphere, $r_{ph}$, but for any radial distance between $r_{ph}$ and the source surface $r_{ss}=(2.5\div 3.25)r_{ph}$, in the transition layer from the Sun to the heliosphere. In this layer the energy density of the magnetic fields dominates over the solar wind thermal and kinetic energy densities, fixing the main features of the solar wind and HMF in the heliosphere, see \cite{WSO_Site,Krainev_Webber_IAUSymp_223_81_2004}. These indices probably contain some valuable information on the spatial and temporal structure of the magnetic fields in this very important region. So before proceeding further with the discussion of QBOs it is useful to say a few words about the WSO models and the SMF energy indices.
In the most widely used potential-field-source-surface WSO model, the radial SMF component $B_r$ in the range $r_{ph}\le r\le r_{ss}$ is expressed in spherical coordinates $\{r,\vartheta,\varphi\}$ as a series in terms of
spherical functions $Y_{lm}(\vartheta,\varphi)$ of degree $l$ and order $m$:
\begin{align}
B_r(r,\vartheta,\varphi;r_{ss})&=\sum_{l=0}^9 \sum_{m=-l}^l a_{lm} C_r(r;l,r_{ss})
Y_{lm}(\vartheta,\varphi)=\sum_{l=0}^9 B_r^l(r,\vartheta,\varphi;r_{ss})
\label{Br}
\end{align}
The expressions for $B_\vartheta$ and $B_\varphi$ can be written in a similar way, $C_r$, $C_\vartheta$, $C_\varphi$ being known functions. The complex
coefficients $a_{lm}$, or
rather, their real counterparts $g_{lm}$ and $h_{lm}$ are available on the
WSO home page \cite{WSO_Site} for both types of the inner boundary conditions, fixing from observations the line-of-sight photospheric SMF component $B_{ls}^{ph}$ (in the ``classic'' variant of the WSO model) or $B_r^{ph}$ (in the ``radial'' variant). In Eq. (\ref{Br}) we also represent $B_r$ as a sum of the partial $B_r^l$ due to the SMF with the same degree $l$.
In our works on the structure of the solar cycle maximum phase we used the SMF energy index introduced in \cite{Obridko_Shelting_SP_137_167_1992}. Generalizing, in \cite{Krainev_etal_Pulkovo_Conf_121_1999,Bazilevskaya_etal_SolPhys_197_157_2000,Bazilevskaya_etal_JoP_CS_inprint_2015,Bazilevskaya_etal_CosRes_inprint_2016} we discussed the behavior of the energy index of the radial SMF component
integrated over all longitudes and latitudes on the
photosphere, but calculated without the monopole term ($l=0$) in Eq. (\ref{Br}),
\begin{equation}
\verb|Br2_PH|=
\int_0^{\pi}\int_0^{2\pi}{B_r}^2(r_{ph},\vartheta,\varphi)\sin\vartheta d\vartheta d\varphi,
\label{Br2_PH}
\end{equation}
\noindent and also the similar energy index on the source surface, \verb|B2_SS|.
In the two upper panels (a, b) of Fig. \ref{SC_QBO_B2_PH_SS} the sunspot and QBO cycles in the photospheric radial SMF energy index \verb|Br2_PH| are compared with those of the sunspot area. This comparison supports the conclusion made in \cite{Bazilevskaya_etal_JoP_CS_inprint_2015,Bazilevskaya_etal_CosRes_inprint_2016} that these two characteristics change similarly both in their long-term and QBO variations. Besides, it can be seen that the amplitudes of the QBO are the highest around the sunspot maximum phase (the time period between the two Gnevyshev peaks in $S_{ss}$ smoothed with a 1-year period, shaded in Fig. \ref{SC_QBO_B2_PH_SS}). Similarly, in the two lower panels (c, d) of Fig. \ref{SC_QBO_B2_PH_SS} the sunspot and QBO cycles in the SMF energy index on the source surface, \verb|B2_SS|, are compared with those of the HMF strength $B^{hmf}$. It can be seen from Fig. \ref{SC_QBO_B2_PH_SS} (c) that there is some correlation between the long-term variations of \verb|B2_SS| and $B^{hmf}$, especially during the maximum phase. As to the QBOs in \verb|B2_SS| and $B^{hmf}$ (Fig. \ref{SC_QBO_B2_PH_SS} (d)), they change almost synchronously.
\begin{figure}[h]
\centering
\includegraphics[width=1.\textwidth]{SC13_QBO_B2_PH_SS_Col_Eng.eps}
\caption{The sunspot cycle and QBO in B2-indices on the photosphere and source surface in 1975-2015 in comparison with the same cycles in the sunspot area and HMF strength.
The periods of the sunspot maxima are shaded and the HMF polarity $A$ and the moments of the maximum sunspot area are indicated above the panels. In the panels the following variations are shown:
(a, b) the sunspot cycles and QBOs, respectively, in the radial photospheric SMF energy index
(red) and in the sunspot area (blue);
(c,d) the sunspot cycles and QBOs, respectively, in the SMF energy index on the source surface (red) and in the HMF strength near the Earth (blue).
}
\label{SC_QBO_B2_PH_SS}
\end{figure}
So, in the transition from the Sun to the heliosphere, the QBO in the SMF energy index demonstrates approximately the same change as that reported in \cite{Bazilevskaya_etal_SSR_186_359_2014} for the QBOs in the sunspot activity and the HMF strength. In this transition a small shift is also observed in the long-term variation of both characteristics.
As is well known, in the potential approximation the partial contribution $B_r^l$ to $B_r$ from the degree $l$ changes with $r$ as $\propto r^{-(l+2)}$, so that at the source surface the magnetic field, which determines the HMF, is influenced mostly by the SMF of low $l$. It is tempting to suggest that the change in QBOs from the Sun to the heliosphere is due to the same cause. So, besides the total SMF energy indices \verb|Br2_PH| and \verb|B2_SS|, we constructed the partial SMF energy index \verb|Br2_PH_l|, associated with a definite degree $l$:
\begin{equation}
\verb|Br2_PH_l|=\int_0^\pi\int_0^{2\pi}\left.B^l_r\right.^2(r_{ph},\vartheta,\varphi)\sin\vartheta d\vartheta d\varphi\label{Br2_PH_l}
\end{equation}
\noindent and also the similar partial energy index on the source surface, \verb|B2_SS_l|.
Because of the orthogonality of the spherical functions
$\verb|Br2_PH|=\sum_{l=0}^9 \verb|Br2_PH_l|$ and $\verb|B2_SS|=\sum_{l=0}^9 \verb|B2_SS_l|$.
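In particular, if the $Y_{lm}$ are taken to be $L^2$-orthonormal on the sphere, the partial indices can be evaluated directly from the expansion coefficients,
\begin{equation}
\verb|Br2_PH_l| = C_r^2(r_{ph};l,r_{ss})\sum_{m=-l}^{l}|a_{lm}|^2,
\end{equation}
although the actual normalization of the WSO coefficients may introduce $l$-dependent factors, so this identity should be adapted to the adopted convention.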
\begin{figure}[h]
\centering
\includegraphics[width=1.\textwidth]{SC13_QBO_B2_L_PH_SS_Col_Eng.eps}
\caption{The sunspot cycle and QBO in the partial B2-indices for each $l$ on the photosphere and source surface in 1976-2015. The periods of the sunspot maxima are shaded, and the HMF polarity $A$ and the moments of the maximum sunspot area are indicated above the panels. The correspondence between the line colors and $l=1\div 9$ is shown in the upper panel. In the panels the following variations are shown:
(a, b) the sunspot cycle and QBO, respectively, in the radial photospheric SMF partial energy indices
for different $l=1\div 9$; (c, d) the sunspot cycle and QBO, respectively, in the SMF partial energy indices on the source surface for different $l$.}
\label{SC_QBO_B2_L_PH_SS}
\end{figure}
In the two upper panels (a, b) of Fig. \ref{SC_QBO_B2_L_PH_SS} the time profiles of the sunspot and QBO cycles in the partial photospheric SMF energy indices \verb|Br2_PH_l| for each $l$ are shown. It can be seen that, both for the sunspot and QBO cycles, the contributions to the total energy index from the high degrees $l=3,4,7-9$ are significantly greater than those from the low degrees $l=1,2$, and the time profile of the partial energy index with $l=1$ somewhat lags behind the more powerful partial indices. In the two lower panels of Fig. \ref{SC_QBO_B2_L_PH_SS} the sunspot and QBO cycles in the partial SMF energy indices \verb|B2_SS_l| for each $l$ are shown. It is clearly seen that on the source surface, and hence in the HMF, the low-$l$ partial indices are the most important for both the long-term and QBO cycles. So, to a first approximation, our hypothesis is justified that the change in the sunspot and QBO cycles in the transition from the Sun to the heliosphere is due to 1) the different magnitude and time behavior of the large-scale (low $l$) and small-scale (high $l$) photospheric SMF and 2) the stronger attenuation of the SMF with higher $l$ in this transition. The first of these facts, and how it correlates with the sunspot distribution, still remains to be understood. The conclusion in \cite{Bazilevskaya_etal_SSR_186_359_2014} that the 11-year variation, in contrast with the QBO, does not change its phase during this transition is probably due to the fact that the observed lag of the time profiles in the heliosphere with respect to those on the Sun (approximately similar for both variations) is much smaller than the period of the 11-year cycle.
Note that the partial SMF energy indices can be constructed not only for the whole spheres but also for different latitude ranges, e.g., for different hemispheres or the royal zones (the latitude ranges containing the sunspots) \cite{Krainev_etal_Pulkovo_Conf_121_1999}. We plan to use them when looking for the explanation of, e.g., the different QBOs of the sunspot activity in the N- and S-hemispheres reported in \cite{Bazilevskaya_etal_SSR_186_359_2014}.
\section{On the causes and mechanisms of the QBO in GCR intensity}
It was suggested in \cite{Bazilevskaya_etal_SSR_186_359_2014} that the probable cause of the $QBO_J$ is the opposite-in-sign variation of the HMF strength, $QBO_{Bhmf}$, and that this is indicative of diffusion as the main mechanism of the $QBO_J$. Besides, the intermittent QBO in the heliospheric current sheet (HCS) quasi-tilt, $QBO_{\alpha_{qt}}$ \cite{Bazilevskaya_etal_SSR_186_359_2014}, can play some role in the difference between the $QBO_J$ for the periods of opposite HMF polarity. Here we note that a change of $B^{hmf}$ influences not only the diffusion coefficient ($K_{diff}\propto 1/B^{hmf}$ in many models of the GCR intensity modulation), but also the magnetic drift velocity (also $V_{drift}\propto 1/B^{hmf}$). So the same $QBO_{Bhmf}$ in different periods could not only result in the same $QBO_J$ but, along with $QBO_{\alpha_{qt}}$, could also give rise to different $QBO_J$ for the $A>0$ and $A<0$ periods. Moreover, as the contribution of the magnetic drift to the 11-year variation of the GCR intensity could be significant (see \cite{Krainev_Kota_Potgieter_icrc2015-198} and references therein), the same $QBO_J$ from the same $QBO_{Bhmf}$ could be the result not only of the diffusion but of the magnetic drift as well.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{Map_F_dFdtheta_3_types_Examples_Col_Eng.eps}
\caption{The map of the HMF polarity ${\cal F}$, the longitude-averaged HMF polarity $F$ and its polar-angle derivative $dF/d\vartheta$ for three types of the HMF polarity distribution. The panels in the upper row are the HMF polarity (${{\cal F}}$) distributions for the ``dipole'' type (left), the ``transition dipole'' type (middle) and the ``inversion'' type (right). The color (red for negative and blue for positive) stands for the HMF polarity, the black lines between the sectors being the HCSs. In the panels in the middle and lower rows $F$ and $dF/d\vartheta$, respectively, are shown for the corresponding HMF polarity distributions of the upper row.}
\label{Maps_F_dFdt_3_Types}
\end{figure}
Our second note on the mechanisms of the QBO in GCR intensity concerns the description of the magnetic drift during the maximum phase of the sunspot cycle. For the phase of the low sunspot activity with the ``dipole'' type of the HMF polarity distribution (the single global HCS connecting all longitudes; see the HMF polarity classification in \cite{Krainev_etal_icrc2015-437}), the GCR intensity can be calculated using the transport equation with the usual magnetic drift velocity terms, e.g., utilizing the tilted-HCS model with a tilt $\alpha_t$ as a parameter, and getting $\alpha_t$ as the quasi-tilt $\alpha_{qt}$ from \cite{WSO_Site}. However, it is not so simple to get these terms for the high sunspot activity phase with the ``transition dipole'' and ``inversion'' types of the HMF polarity distributions, when there are several HCSs and the use of the formally defined quasi-tilt is questionable.
For the time being we use the following procedure.
As in \cite{Kalinin_Krainev_ECRS21_222-225_2009} the regular 3D HMF can be represented as $\vec{\cal B}(r,\vartheta,\varphi)={\cal F}(r,\vartheta,\varphi)\vec{\cal B}^m(r,\vartheta,\varphi)$,
where $\vec{\cal B}^m$ is the unipolar
(or ``monopolar'', $B^{hmf}_r>0$ everywhere) magnetic field and the HMF polarity ${\cal F}$ is a scalar function equal to $+1$ in the positive and $-1$ in negative sectors,
changing on the HCS surface ${{\cal F}} (r,\vartheta,\varphi)=0$.
Then the 3D particle drift velocity is
${\vec{{\cal V}}}^d=pv/3q\left[{\bf{\nabla}}\times(\vec{{\cal B}}/{{\cal B}}^2)\right]$, \cite{Rossi_Olbert_1970}, where $v$ and $q$ are the particle speed and
charge, respectively. Then one can decompose the drift velocity into the regular and current sheet velocities:
\begin{eqnarray}
{\vec{{\cal V}}}^{d,reg}=pv/3q{{\cal F}}\left[{\bf{\nabla}}\times(\vec{{\cal B}}^m/{{\cal B}}^2)\right]\label{Vreg}\\
{\vec{{\cal V}}}^{d,cs}=pv/3q\left[{\bf{\nabla}}{{\cal F}}\times(\vec{{\cal B}}^m/{{\cal B}}^2)\right].\label{Vcs}
\end{eqnarray}
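These expressions follow from the product rule for the curl applied to $\vec{\cal B}={\cal F}\,\vec{\cal B}^m$ (note that ${\cal B}^2=({\cal B}^m)^2$, since ${\cal F}=\pm 1$):
$${\bf{\nabla}}\times\left(\frac{{\cal F}\,\vec{\cal B}^m}{{\cal B}^2}\right)={\cal F}\,{\bf{\nabla}}\times\left(\frac{\vec{\cal B}^m}{{\cal B}^2}\right)+{\bf{\nabla}}{\cal F}\times\frac{\vec{\cal B}^m}{{\cal B}^2}.$$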
So, to get the magnetic drift velocities for any type of the HMF polarity distribution, in the 3D case one needs only ${\cal F}$ and ${\bf{\nabla}}{\cal F}$, and in the 2D case only the longitude-averaged $F$ and $dF/d\vartheta$. All of these quantities (${{\cal F}},\nabla{\cal F},F,dF/d\vartheta$) can be calculated numerically for any calculated HMF polarity distribution, including the ``transition dipole'' and ``inversion'' types. In Fig. \ref{Maps_F_dFdt_3_Types}, along with the maps of ${{\cal F}}(\vartheta,\varphi)$ on the source surface, the longitude-averaged $F$ and $dF/d\vartheta$ are shown as functions of latitude for the three types of the HMF polarity distribution.
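Such longitude averaging is straightforward to implement numerically; a minimal Python sketch (the grid shapes are illustrative assumptions):
\begin{verbatim}
import numpy as np

def longitude_averaged_polarity(calF, theta):
    """Longitude-averaged polarity F(theta) and its derivative dF/dtheta.

    calF: (n_theta, n_phi) array of +1/-1 HMF polarities on the source
    surface; theta: polar-angle grid of length n_theta.
    """
    F = calF.mean(axis=1)              # average over longitude phi
    dF_dtheta = np.gradient(F, theta)  # centered finite differences
    return F, dF_dtheta
\end{verbatim}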
It can be seen that for the ``dipole'' type (the left column of Fig. \ref{Maps_F_dFdt_3_Types}) both $F$ (and hence the regular drift velocity (\ref{Vreg})) and $dF/d\vartheta$ (and hence the HCS-drift velocity (\ref{Vcs})) have the usual (although somewhat irregular) appearance, similar to the case of a simple tilted HCS. The addition of one more local HCS to the global one (the middle column of Fig. \ref{Maps_F_dFdt_3_Types}) adds two features of opposite sign to $dF/d\vartheta$ (and hence two streams of opposite direction to the HCS-drift velocity) at the latitudinal boundaries of the additional HCS. Finally, the ``inversion'' type of the HMF polarity distribution (the right column of Fig. \ref{Maps_F_dFdt_3_Types}) changes drastically both $F$ and $dF/d\vartheta$, so that the HMF polarities in the polar regions are of the same sign, and in these regions there are two strong HCS-streams of opposite direction. In the 3D case there are also strong meridional flows along the HCSs, so the overall picture of the regular and HCS drifts looks rather systematic and intriguing. Note, however, that the HCS-drift velocity (\ref{Vcs}) does not take into account the smearing of the current sheet drift due to the finite Larmor radius of the GCR particles \cite{Burger_etal_ApSS_116_107_1985}. In \cite{Krainev_Kalinin_33ICRC_317_2013} we suggested that the fundamental difference between the global and nonglobal HCS lies in the fact that the sign of the radial component of the current sheet drift changes as the particle moves along the nonglobal HCS, so that the connection between the inner and outer heliosphere is blocked.
So with respect to the HMF polarity distribution the conditions in the heliosphere are quite different around the sunspot maximum and during the periods of low solar activity. If we take into account the presence, during high solar activity, of the global merged interaction regions serving as barriers for GCR propagation \cite{LeRoux_Potgieter_ApJ_442_847_1995}, then the suggestion made in \cite{Bazilevskaya_etal_SSR_186_359_2014} that both the step-like changes of the GCR intensity and the Gnevyshev Gap effect could be viewed as manifestations of the QBO in the GCR intensity, that is, could have the same nature, looks questionable.
\section{Acknowledgments}
The authors gratefully acknowledge the support of the Russian Foundation for Basic Research (grants 13-02-00585, 14-02-00905)
and the Program of the Presidium of the Russian Academy of Sciences ``Fundamental Properties of Matter and Astrophysics''.
\section{Introduction}
The most widely studied class of antiferromagnets contains two sublattices on which the magnetic moments point oppositely to each other. Materials where the magnitude of the moments on the sublattices is different are known as ferrimagnets. Both antiferromagnets~\cite{Gomonay2014,Jungwirth2016,Baltz2018,Nemec2018} and ferrimagnets~\cite{Barker2021,Kim2022}
have attracted much attention recently as a material platform for spintronics.
Their dynamics is typically orders of magnitude faster than that of ferromagnets, including higher spin-wave frequencies~\cite{Gomonay2014}; relativistic domain wall motion \cite{Otxoa2020,Caretta2020}; enhanced magnetization switching rates induced by current~\cite{Wadley2016}, thermal excitations~\cite{Meinert2018,Rozsa2} or ultrashort laser pulses~\cite{Ostler2012,Jakobs2022a,Jakobs2022c}; and increased demagnetization speeds~\cite{Jakobs2022b}.
Antiferromagnets may transform into a phase where the sublattice magnetizations have a weak parallel or ferromagnetic component. This transformation can be achieved by applying an external field~\cite{Tsujikawa1959}, increasing the temperature~\cite{Morin1950}, or even in the ground state in the presence of the Dzyaloshinsky--Moriya interaction~\cite{Dzyaloshinsky1958,Moriya1960}.
Due to the different temperature dependence of the sublattice magnetizations and angular momenta, they can become compensated in certain ferrimagnets,
influencing the velocity and the movement direction of domain walls and skyrmions driven by spin-polarized currents or thermal gradients~\cite{Caretta2018,Hirata2019,Donges2020}.
At the angular momentum compensation point, ferrimagnets combine the advantages of both ferromagnets and antiferromagnets: easy control and detection of their net magnetization by an external field, antiferromagnetic-like ultrafast dynamics and the potential for high-density devices.
These phenomena can often be successfully modelled by theoretical approaches agnostic to the underlying atomic structure, such as finite-temperature macrospin models like the Landau--Lifshitz--Bloch equation~\cite{Atxitia_2017} or continuum theories.
Computer simulations using atomistic spin models~\cite{Nowak2007BOOK} can naturally describe the equilibrium thermodynamics and non-equilibrium dynamics of antiferromagnets and ferrimagnets, but they are considerably more resource intensive.
The price paid for the reduced computational cost of the mesoscopic methods is that the temperature-dependent effective parameters of these models are difficult to determine.
While first-principles methods have proven successful in calculating atomistic~\cite{LIECHTENSTEIN1987,Udvardi2003,Xiang2011} or zero-temperature continuum~\cite{Heide2008,Freimuth_2014} spin-model parameters, these values cannot directly be transformed to finite-temperature mesoscopic models. In these models, the on-site anisotropy contributions and the pair-wise interactions, such as exchange or Dzyaloshinsky--Moriya terms, are intrinsically temperature dependent due to averaging over the fluctuations of atomic spins in a finite volume. Obtaining these parameter values for the models requires fitting to experimental data obtained over a wide temperature range, which are not always available, or to the results of atomistic spin-model simulations, which counteracts the reduced computational cost of the mesoscopic models.
This difficulty can be circumvented by applying analytical methods which can approximate these temperature-dependent parameters at low computational costs. Such methods are often based on Green's functions, where the difficulty arises in choosing the decoupling scheme, i.e., the procedure for truncating the infinite series of correlation functions of increasing order. Conventional diagrammatic perturbation methods developed for bosonic and fermionic systems are often inaccurate for spin models at elevated temperatures, for example at predicting the order of the phase transition~\cite{Bloch1962}. Semi-empirical decoupling schemes for spin Green's functions have been developed by Tyablikov~\cite{Tyablikov1959}, known as the random phase approximation, and by Callen~\cite{Callen}, which have proven to be especially accurate for low and high spin values, respectively. The method has been generalized to two-sublattice antiferromagnets by Anderson and Callen~\cite{Anderson}. The applications of these methods to two-dimensional ferromagnetic and antiferromagnetic quantum spin systems are discussed in Ref.~\cite{FROBRICH2006}. These works primarily focused on the calculation of the magnetizations, the correlation functions and the magnon frequencies at finite temperatures, which can be used to describe phase transitions, but not on the connection between the atomistic and mesoscopic models. The latter topic has been investigated for ferromagnets in the classical limit of infinite spin in Refs.~\cite{Bastardis,Rozsa,Evans}, but the corresponding investigations for antiferromagnetically aligned systems seem to be lacking.
Here, we derive the temperature dependence of the effective interaction parameters in mesoscopic models of two-sublattice antiferromagnets and ferrimagnets. We extend the Green's function theory in Ref.~\cite{Anderson} by including all terms preserving rotational symmetry around the axis of the magnetizations, namely Heisenberg and Dzyaloshinsky--Moriya exchange interactions as well as single-ion and two-ion anisotropy terms, and discuss both the classical and quantum cases. A comparison with atomistic Monte Carlo simulations demonstrates the accuracy of the method in treating spin correlations at a fraction of the computational cost of the numerical simulations.
The paper is organized as follows. In Sec.~\ref{sec2a}, we present the self-consistency equations of Green's function theory, which we apply to derive the correspondence between the atomistic and mesoscopic models in Sec.~\ref{sec2b}. We apply the method to a square lattice in Sec.~\ref{sec3} and compare the predictions with atomistic simulations.
\section{Theory\label{sec2}}
\subsection{Green's function theory\label{sec2a}}
We consider a two-sublattice magnet described by the atomistic spin Hamiltonian
\begin{eqnarray}
\mathcal{H}=&&-\frac{1}{2}\sum_{i,j,r,s}\left(J_{ij}^{rs}\boldsymbol{S}_{i}\boldsymbol{S}_{j}+\Delta J_{ij}^{rs}S_{i}^{z}S_{j}^{z}+\boldsymbol{D}_{ij}^{rs}\left(\boldsymbol{S}_{i}\times\boldsymbol{S}_{j}\right)\right)\nonumber
\\
&&-\sum_{i,r}K^{r}\left(S_{i}^{z}\right)^{2}-\sum_{i,r}\mu_{r}B^{z}S_{i}^{z}.\label{eqn1}
\end{eqnarray}
Here $r,s\in\left\{A,B\right\}$ denote the two sublattices, $J_{ij}^{rs}$ is the Heisenberg exchange interaction between atoms at sites $i$ and $j$, $\boldsymbol{D}_{ij}^{rs}$ is the Dzyaloshinsky--Moriya vector, $\Delta J_{ij}^{rs}$ is the two-ion anisotropy, $K^{r}$ is the single-ion magnetocrystalline anisotropy, $\mu_{r}$ is the magnetic moment, and $B^{z}$ is the external magnetic field.
We note that this model describes an antiferromagnet when $\mu_A=\mu_B$ and a ferrimagnet when $\mu_A \neq \mu_B$.
$\boldsymbol{S}_{i}$ stands for the spin vectors; for most considerations they will be treated as classical unit vectors $|\boldsymbol{S}_{i}|=1$, since atomistic spin-model simulations used for comparison are easier to perform in the classical limit. However, at certain points the quantum-mechanical case with spin operators will be discussed.
The number of unit cells in the lattice will be denoted by $N_{\textrm{c}}$, corresponding to the number of atoms in both the $A$ and the $B$ sublattices. Note that fully analytical results in the limit $N_{\textrm{c}} \rightarrow \infty$ are only available when the types of interactions are more restricted; therefore, in most cases we will rely on semi-analytical techniques where lattice sums over a finite number of lattice sites must be performed.
It will be assumed that in the classical ground state, spins on sublattice $A$ are oriented along the $+z$ direction, while spins on sublattice $B$ point along the $-z$ direction. To stabilize the antiferromagnetic alignment, it will be assumed that the antiferromagnetic intersublattice coupling $J_{ij}^{AB}<0$ is dominant compared to the intrasublattice coupling and the Dzyaloshinsky--Moriya interaction, while the anisotropy prefers spin alignment along the $z$ axis. Only the $z$ component of the Dzyaloshinsky--Moriya vectors will be treated. This way, Eq.~\eqref{eqn1} contains all possible single-spin and two-spin terms that are rotationally invariant around the $z$ direction. The rotational symmetry simplifies the following calculations.
Following Refs.~\cite{Callen,Anderson}, we use a Green's function formalism to determine the single-particle excitation spectrum and the static expectation values and two-spin correlation functions self-consistently. Details of the derivation are given in Appendix~\ref{appendixA}. We introduce a local coordinate system where all spins are oriented along the $+z$ direction, with $\tilde{\boldsymbol{S}}_{iA}=\boldsymbol{S}_{iA},\tilde{S}_{iB}^{z}=-S_{iB}^{z}$ and $\tilde{S}_{iB}^{\pm}=-S_{iB}^{\mp}$, where $S_{iB}^{\pm}=S_{iB}^{x}\pm\textrm{i}S_{iB}^{y}$ denotes the ladder operators. After performing Fourier transformation, we will use the notations
\begin{eqnarray}
\mathfrak{J}_{\boldsymbol{q}}^{rs}&=&\sum_{\boldsymbol{R}_{i}-\boldsymbol{R}_{j}}\textrm{e}^{-\textrm{i}\boldsymbol{q}\left(\boldsymbol{R}_{i}-\boldsymbol{R}_{j}\right)}\left(J_{ij}^{rs}+\Delta J_{ij}^{rs}+2K^{r}\delta^{rs}\right)\left<\tilde{S}_{s}^{z}\right>,\label{eqn2}
\\
\mathfrak{J}_{\boldsymbol{q}}^{'rs}&=&\left<\tilde{S}_{r}^{z}\right>\sum_{\boldsymbol{R}_{i}-\boldsymbol{R}_{j}}\textrm{e}^{-\textrm{i}\boldsymbol{q}\left(\boldsymbol{R}_{i}-\boldsymbol{R}_{j}\right)}\left(J_{ij}^{rs}+\textrm{i}D_{ij}^{z,rs}\right).\label{eqn3}
\end{eqnarray}
Here, $\boldsymbol{R}_i-\boldsymbol{R}_j$ denotes the vector connecting the lattice sites $i$ and $j$,
where $\boldsymbol{R}_i = (x_i, y_i, z_i)$ stands for the position of the spin $i$ in the lattice.
The self-consistency equations read
\begin{eqnarray}
\tilde{\Gamma}^{AA}_{\boldsymbol{q}}&=&\mathfrak{J}_{\boldsymbol{0}}^{AA}+\mu_{A}B^{z}-\mathfrak{J}_{\boldsymbol{0}}^{AB}-\mathfrak{J}_{\boldsymbol{q}}^{'AA}\nonumber
\\
&&-2\alpha_{0}\sum_{\boldsymbol{q}'}\left[\left(\mathfrak{J}_{\boldsymbol{q}-\boldsymbol{q}'}^{AA}-\mathfrak{J}_{\boldsymbol{q}'}^{'AA}\right)\Phi^{AA}_{\boldsymbol{q}'}\left<\tilde{S}_{A}^{z}\right>+\mathfrak{J}_{\boldsymbol{q}'}^{'AB}\Phi^{AB}_{\boldsymbol{q}'}\left<\tilde{S}_{B}^{z}\right>\right],\label{eqn4}
\\
\tilde{\Gamma}^{AB}_{\boldsymbol{q}}&=&\mathfrak{J}_{\boldsymbol{q}}^{'AB}+2\alpha_{0}\sum_{\boldsymbol{q}'}\mathfrak{J}_{\boldsymbol{q}-\boldsymbol{q}'}^{AB}\Phi^{BA}_{\boldsymbol{q}'}\left<\tilde{S}_{A}^{z}\right>,\label{eqn5}
\\
\tilde{\Gamma}^{BA}_{\boldsymbol{q}}&=&-\mathfrak{J}_{\boldsymbol{q}}^{'BA}-2\alpha_{0}\sum_{\boldsymbol{q}'}\mathfrak{J}_{\boldsymbol{q}-\boldsymbol{q}'}^{BA}\Phi^{AB}_{\boldsymbol{q}'}\left<\tilde{S}_{B}^{z}\right>,\label{eqn6}
\\
\tilde{\Gamma}^{BB}_{\boldsymbol{q}}&=&-\mathfrak{J}_{\boldsymbol{0}}^{BB}+\mu_{B}B^{z}+\mathfrak{J}_{\boldsymbol{0}}^{BA}+\mathfrak{J}_{\boldsymbol{q}}^{'BB}\nonumber
\\
&&+2\alpha_{0}\sum_{\boldsymbol{q}'}\left[\left(\mathfrak{J}_{\boldsymbol{q}-\boldsymbol{q}'}^{BB}-\mathfrak{J}_{\boldsymbol{q}'}^{'BB}\right)\Phi^{BB}_{\boldsymbol{q}'}\left<\tilde{S}_{B}^{z}\right>+\mathfrak{J}_{\boldsymbol{q}'}^{'BA}\Phi^{BA}_{\boldsymbol{q}'}\left<\tilde{S}_{A}^{z}\right>\right],\label{eqn7}
\end{eqnarray}
where the coefficient $\alpha_{0}$ is a phenomenological constant required for the decoupling of the Green's functions (see Appendix~\ref{appendixA}); here it will be set to $\alpha_{0}=1/2$ in the classical limit of the decoupling scheme proposed by Callen and Anderson~\cite{Callen,Anderson}. The $\Phi^{rs}_{\boldsymbol{q}}$ quantities denote the transversal spin correlation functions $\Phi^{rs}_{\boldsymbol{q}}=\left<\tilde{S}_{-\boldsymbol{q}r}^{(1)}\tilde{S}_{\boldsymbol{q}s}^{(2)}\right>$, where the notations $\tilde{S}_{\boldsymbol{q}r}^{(1)}\in\left\{\tilde{S}_{\boldsymbol{q}A}^{+},\tilde{S}_{\boldsymbol{q}B}^{-}\right\}$ and $\tilde{S}_{\boldsymbol{q}r}^{(2)}\in\left\{\tilde{S}_{\boldsymbol{q}A}^{-},\tilde{S}_{\boldsymbol{q}B}^{+}\right\}$ were introduced because different spin operators enter the correlation functions on the $A$ and $B$ sublattices. They can be calculated as
\begin{eqnarray}
\Phi^{AA}_{\boldsymbol{q}}&=&\frac{\gamma}{\mu_{A}}\frac{1}{2N_{\textrm{c}}}\left(\frac{1}{\nu_{\boldsymbol{q}}}\left(n\left(\omega^{+}_{\boldsymbol{q}}\right)-n\left(\omega^{-}_{\boldsymbol{q}}\right)\right)+n\left(\omega^{+}_{\boldsymbol{q}}\right)-n\left(-\omega^{-}_{\boldsymbol{q}}\right)\right),\label{eqn8}
\\
\Phi^{AB}_{\boldsymbol{q}}&=&\frac{\gamma}{\mu_{A}}\frac{\gamma}{\mu_{B}}\frac{1}{N_{\textrm{c}}}\frac{1}{\omega^{+}_{\boldsymbol{q}}-\omega^{-}_{\boldsymbol{q}}}\tilde{\Gamma}^{BA}_{\boldsymbol{q}}\left(n\left(\omega^{+}_{\boldsymbol{q}}\right)-n\left(\omega^{-}_{\boldsymbol{q}}\right)\right),\label{eqn9}
\\
\Phi^{BA}_{\boldsymbol{q}}&=&-\frac{\gamma}{\mu_{A}}\frac{\gamma}{\mu_{B}}\frac{1}{N_{\textrm{c}}}\frac{1}{\omega^{+}_{\boldsymbol{q}}-\omega^{-}_{\boldsymbol{q}}}\tilde{\Gamma}^{AB}_{\boldsymbol{q}}\left(n\left(\omega^{+}_{\boldsymbol{q}}\right)-n\left(\omega^{-}_{\boldsymbol{q}}\right)\right),\label{eqn10}
\\
\Phi^{BB}_{\boldsymbol{q}}&=&\frac{\gamma}{\mu_{B}}\frac{1}{2N_{\textrm{c}}}\left(\frac{1}{\nu_{\boldsymbol{q}}}\left(n\left(\omega^{+}_{\boldsymbol{q}}\right)-n\left(\omega^{-}_{\boldsymbol{q}}\right)\right)-n\left(\omega^{+}_{\boldsymbol{q}}\right)+n\left(-\omega^{-}_{\boldsymbol{q}}\right)\right),\label{eqn11}
\end{eqnarray}
with
\begin{align}
\nu_{\boldsymbol{q}}=&\sqrt{1+\frac{4\frac{\gamma}{\mu_{A}}\tilde{\Gamma}^{AB}_{\boldsymbol{q}}\frac{\gamma}{\mu_{B}}\tilde{\Gamma}^{BA}_{\boldsymbol{q}}}{\left(\frac{\gamma}{\mu_{A}}\tilde{\Gamma}^{AA}_{\boldsymbol{q}}-\frac{\gamma}{\mu_{B}}\tilde{\Gamma}^{BB}_{\boldsymbol{q}}\right)^{2}}},\label{eqn12}
\\
\omega^{\pm}_{\boldsymbol{q}}=&\frac{1}{2}\left(\frac{\gamma}{\mu_{A}}\tilde{\Gamma}^{AA}_{\boldsymbol{q}}+\frac{\gamma}{\mu_{B}}\tilde{\Gamma}^{BB}_{\boldsymbol{q}}\right)\pm \frac{1}{2}\left(\frac{\gamma}{\mu_{A}}\tilde{\Gamma}^{AA}_{\boldsymbol{q}}-\frac{\gamma}{\mu_{B}}\tilde{\Gamma}^{BB}_{\boldsymbol{q}}\right)\nu_{\boldsymbol{q}}.\label{eqn13}
\end{align}
The function $n\left(\omega_{\boldsymbol{q}}\right)=k_{\textrm{B}}T/\omega_{\boldsymbol{q}}$ corresponds to the occupation number of the magnon mode $\boldsymbol{q}$ in the classical limit in units of action. However, note that $\omega^{+}_{\boldsymbol{q}}\ge 0$ and $\omega^{-}_{\boldsymbol{q}}\le 0$ if the antiferromagnetic alignment of the sublattices is stable, meaning that for the $\omega^{-}_{\boldsymbol{q}}$ mode calculating the actual occupation number requires a transformation. Substituting the expression for $n\left(\omega^{\pm}_{\boldsymbol{q}}\right)$ simplifies Eqs.~\eqref{eqn8}-\eqref{eqn11} to
\begin{eqnarray}
\Phi^{AA}_{\boldsymbol{q}}&=&\frac{1}{N_{\textrm{c}}}\frac{k_{\textrm{B}}T}{\textrm{det}\:\tilde{\Gamma}_{\boldsymbol{q}}}\tilde{\Gamma}^{BB}_{\boldsymbol{q}},\label{eqn14}
\\
\Phi^{AB}_{\boldsymbol{q}}&=&-\frac{1}{N_{\textrm{c}}}\frac{k_{\textrm{B}}T}{\textrm{det}\:\tilde{\Gamma}_{\boldsymbol{q}}}\tilde{\Gamma}^{BA}_{\boldsymbol{q}},\label{eqn15}
\\
\Phi^{BA}_{\boldsymbol{q}}&=&\frac{1}{N_{\textrm{c}}}\frac{k_{\textrm{B}}T}{\textrm{det}\:\tilde{\Gamma}_{\boldsymbol{q}}}\tilde{\Gamma}^{AB}_{\boldsymbol{q}},\label{eqn16}
\\
\Phi^{BB}_{\boldsymbol{q}}&=&-\frac{1}{N_{\textrm{c}}}\frac{k_{\textrm{B}}T}{\textrm{det}\:\tilde{\Gamma}_{\boldsymbol{q}}}\tilde{\Gamma}^{AA}_{\boldsymbol{q}},\label{eqn17}
\end{eqnarray}
with $\textrm{det}\:\tilde{\Gamma}_{\boldsymbol{q}}=\tilde{\Gamma}^{AA}_{\boldsymbol{q}}\tilde{\Gamma}^{BB}_{\boldsymbol{q}}-\tilde{\Gamma}^{AB}_{\boldsymbol{q}}\tilde{\Gamma}^{BA}_{\boldsymbol{q}}$. The sublattice magnetizations in Eqs.~\eqref{eqn2}-\eqref{eqn7} are given by the Langevin function
\begin{eqnarray}
\left<\tilde{S}_{r}^{z}\right>=\textrm{coth}\:\Phi_{r}^{-1}-\Phi_{r},\label{eqn18}
\end{eqnarray}
where $\Phi_{r}=\sum_{\boldsymbol{q}}\Phi^{rr}_{\boldsymbol{q}}$ may be interpreted as the total spin carried by the magnons on sublattice $r$. At zero temperature, $\Phi_{r}=0$ holds as can be seen from Eqs.~\eqref{eqn14} and \eqref{eqn17}, and the sublattice magnetizations are saturated $\left<\tilde{S}_{r}^{z}\right>=1$.
It is worth noting that $\left<\tilde{S}_{r}^{z}\right>=1$ does not hold in the quantum case even for $T=0$, as is already known from linear spin-wave theory.
Using spin operators in the quantum case, Eqs.~\eqref{eqn4}-\eqref{eqn13} remain valid, but the function $n^{\textrm{quantum}}\left(\omega_{\boldsymbol{q}}\right)=\hbar \left(\textrm{e}^{\hbar\omega_{\boldsymbol{q}}/\left(k_{\textrm{B}}T\right)}-1\right)^{-1}$ now gives the Bose--Einstein occupation number for $\omega_{\boldsymbol{q}}>0$. The magnetic moments $\mu_{r}$ have to be replaced by $g\mu_{\textrm{B}}$, where $g$ is the spin gyromagnetic factor of the electron and $\mu_{\textrm{B}}$ is the Bohr magneton, leading to $\gamma/\mu_{r}=\hbar^{-1}$, since the magnitude of the moments is described by the spin quantum number $S$ in this case. Due to this different normalization, the decoupling coefficient reads $\alpha_{0}=1/\left(2S^{2}\right)$ in this case. Equations~\eqref{eqn8}-\eqref{eqn11} now express the expectation values of the anticommutators $\Phi^{rs}_{\boldsymbol{q}}=\frac{1}{2}\left<\left[\tilde{S}_{-\boldsymbol{q}r}^{(1)},\tilde{S}_{\boldsymbol{q}s}^{(2)}\right]_{+}\right>$. The expectation values of the magnetization can be calculated from the Brillouin function as
\begin{eqnarray}
\left<\tilde{S}_{r}^{z}\right>^{\textrm{quantum}}=SB_{S}\left(SX_{r}\right),\label{eqn19}
\end{eqnarray}
with $X_{r}=2\:\textrm{arcoth}\left(2\Phi_{r}\right)$ using the definition of $\Phi_{r}$ given above. Although the notations are different, it can be shown that the system of equations presented here is equivalent to Ref.~\cite{Anderson} when the Dzyaloshinsky--Moriya and two-ion anisotropy terms are set to zero. For $T=0$, one obtains $\Phi_{r}>0$ and $\left<\tilde{S}_{r}^{z}\right><S$, indicating that the classical ground state is not the correct quantum ground state.
Note that although the Brillouin function and in the classical limit the Langevin function also define the magnetization in mean-field theory, the argument of the functions differs from the mean-field model in Green's function theory; see Ref.~\cite{Callen3} for a detailed discussion.
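To make the structure of the self-consistency problem concrete, the following minimal Python sketch iterates Eqs.~\eqref{eqn14} and \eqref{eqn18} for the simplest special case: symmetric sublattices on a square lattice with nearest-neighbour intersublattice exchange $J$, easy-axis anisotropy $K$, no Dzyaloshinsky--Moriya interaction, zero field, and the random-phase choice $\alpha_{0}=0$, for which the $\tilde{\Gamma}$ matrix reduces to closed form. It is a sketch under these simplifying assumptions (the decoupling used in this paper is $\alpha_{0}=1/2$), with illustrative parameter values:
\begin{verbatim}
import numpy as np

J, K, T = 1.0, 0.1, 0.4        # exchange, anisotropy, temperature (k_B = 1)
z, L = 4, 64                   # square-lattice coordination, q-grid size

# Intersublattice geometrical factor gamma_q on a midpoint q-grid
q = 2.0 * np.pi * (np.arange(L) + 0.5) / L
qx, qy = np.meshgrid(q, q)
gamma = 0.5 * (np.cos(qx) + np.cos(qy))

def magnon_sum(m):
    """Phi_r = sum_q Phi^rr_q from Eq. (14) in this reduced model."""
    a = 2.0 * K + z * J
    return (T / m) * a * np.mean(1.0 / (a**2 - (z * J * gamma)**2))

# Fixed-point iteration of the Langevin relation (18): m = coth(1/Phi) - Phi
m = 1.0
for _ in range(500):
    phi = magnon_sum(m)
    m_new = 1.0 / np.tanh(1.0 / phi) - phi
    if abs(m_new - m) < 1e-10:
        break
    m = 0.5 * (m + m_new)      # damped update for stability

print(f"sublattice magnetization = {m:.4f} at k_B T = {T} J")
\end{verbatim}
The same loop structure carries over to the full problem: one only replaces the closed-form $\Phi$ by the sums in Eqs.~\eqref{eqn14}-\eqref{eqn17} with the complete $\tilde{\Gamma}$ matrix.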
The main results of the Green's function formalism are the frequencies of the two magnon modes $\omega^{+}_{\boldsymbol{q}}$ and $-\omega^{-}_{\boldsymbol{q}}$ in Eq.~\eqref{eqn13}, and the sublattice magnetizations in Eq.~\eqref{eqn18}. These expressions allow us to calculate the temperature-dependent mesoscopic parameters via direct comparison to the excitation frequencies of the continuum model. Our theory is numerically validated in Sec.~\ref{sec3}, where the proposed analytical expressions are compared to numerical Monte Carlo simulations for a specific spin model, where the excitation frequencies can be given in a simpler form.
\subsection{Effective temperature-dependent parameters\label{sec2b}}
In the long-wavelength limit, the Hamiltonian in Eq.~\eqref{eqn1} can be approximated by the free-energy functional
\begin{eqnarray}
\mathcal{F}&=&\int\frac{1}{2}\sum_{r,s}\left(\mathcal{J}_{m}^{rs}\sum_{\alpha,\beta}\partial_{\alpha}m_{r}^{\beta}\partial_{\alpha}m_{s}^{\beta}+\mathcal{D}_{m}^{rs}\left(m_{r}^{x}\partial_{x}m_{s}^{y}-m_{r}^{y}\partial_{x}m_{s}^{x}\right)\right)\nonumber
\\
&&-\sum_{r,s}\mathcal{K}_{m}^{rs}m_{r}^{z}m_{s}^{z}-\sum_{r}M_{r}B^{z}m_{r}^{z}-\mathcal{J}_{m0}^{AB}\boldsymbol{m}_{A}\boldsymbol{m}_{B}
\textrm{d}^{d}\boldsymbol{r}\label{eqn20}
\end{eqnarray}
in $d$ spatial dimensions.
Here, the magnetization fields $\boldsymbol{m}_{r}$ are required to be of unit length, being defined as $M_{r}m_{r}^{z}=\mu_{r}\left<\tilde{S}_{r}^{z}\right>/V_{\textrm{c}}$, where $V_{\textrm{c}}$ is the volume of the unit cell and $M_{r}$ is the saturation magnetization of the sublattice. The $m$ subscript denotes effective mesoscopic parameters, while $\alpha$ and $\beta$ are Cartesian indices. The first line of Eq.~\eqref{eqn20} describes energy contributions from a spatially inhomogeneous magnetization. For simplicity of the notation, the exchange stiffness $\mathcal{J}_{m}^{rs}$ is assumed to be isotropic not only in spin space but also in real space as denoted by the partial derivatives $\partial_{\alpha}$; this is satisfied in, e.g., hypercubic lattices. The Dzyaloshinsky--Moriya vector is assumed to be pointing along the $z$ direction as in the atomistic model, and it is assumed to be finite between spins which are separated along the $x$ axis. This functional form of the Dzyaloshinsky--Moriya interaction is appropriate for the model system described in Sec.~\ref{sec3}, while it has to be substituted by the appropriate Lifshitz invariant in other symmetry classes. The second line of Eq.~\eqref{eqn20} remains finite for homogeneous sublattice magnetizations, describing the energy contribution depending on the global orientation of the magnetization vectors with respect to the easy axis, to the external field, and to each other in the two sublattices.
Requiring that the excitation frequencies of the continuum model in Eq.~\eqref{eqn20} coincide with Eq.~\eqref{eqn13} of the atomistic model in the long-wavelength limit, the following temperature dependence is obtained for the parameters:
\begin{eqnarray}
\mathcal{J}_{m}^{rs}&=&\frac{1}{2V_{\textrm{c}}}\sum_{j}\left[J_{ij}^{rs}+\alpha_{0}\left(J_{ij}^{rs}+\Delta J_{ij}^{rs}\right)\textrm{Re}\left<\tilde{S}_{js}^{(1)}\tilde{S}_{ir}^{(2)}\right>\right]\nonumber
\\
&&\times\left(r_{ij}^{x}\right)^{2}\left<\tilde{S}_{r}^{z}\right>\left<\tilde{S}_{s}^{z}\right>,\label{eqn21}
\\
\mathcal{D}_{m}^{rs}&=&-\frac{1}{V_{\textrm{c}}}\sum_{j}\left[D_{ij}^{rs}+\alpha_{0}\left(J_{ij}^{rs}+\Delta J_{ij}^{rs}\right)\textrm{Im}\left<\tilde{S}_{js}^{(1)}\tilde{S}_{ir}^{(2)}\right>\right]\nonumber
\\
&&\times r_{ij}^{x}\left<\tilde{S}_{r}^{z}\right>\left<\tilde{S}_{s}^{z}\right>,\label{eqn22}
\\
\mathcal{K}_{m}^{rs}&=&\frac{1}{V_{\textrm{c}}}\left[K^{r}\delta_{rs}\left(1-\alpha_{0}\left<\tilde{S}_{ir}^{(1)}\tilde{S}_{ir}^{(2)}\right>\right)+\frac{1}{2}\sum_{j}\Delta J_{ij}^{rs}\left(1-\alpha_{0}\right.\right.\nonumber
\\
&&\left.\left.\times\textrm{Re}\left<\tilde{S}_{js}^{(1)}\tilde{S}_{ir}^{(2)}\right>\right)+\alpha_{0}D_{ij}^{rs}\textrm{Im}\left<\tilde{S}_{js}^{(1)}\tilde{S}_{ir}^{(2)}\right>\right]\left<\tilde{S}_{r}^{z}\right>\left<\tilde{S}_{s}^{z}\right>,\label{eqn23}
\\
\mathcal{J}_{m0}^{AB}&=&\frac{1}{V_{\textrm{c}}}\sum_{j}\left(J_{ij}^{AB}+\alpha_{0}\left(J_{ij}^{AB}+\Delta J_{ij}^{AB}\right)\textrm{Re}\left<\tilde{S}_{jB}^{-}\tilde{S}_{iA}^{-}\right>\right)\nonumber
\\
&&\times\left<\tilde{S}_{A}^{z}\right>\left<\tilde{S}_{B}^{z}\right>,\label{eqn24}
\end{eqnarray}
where $r^x_{ij}=x_i-x_j$ is the distance between the sites $i$ and $j$ along the $x$ axis.
The parameters in Eqs.~\eqref{eqn21}-\eqref{eqn24} show similar trends to what has been calculated in single-sublattice ferromagnetic systems in Refs.~\cite{Bastardis,Rozsa,Evans}. The main contribution to the temperature dependence of all the parameters comes from $\left<\tilde{S}_{r}^{z}\right>\left<\tilde{S}_{s}^{z}\right>$, which corresponds to the mean-field or random-phase approximations. By considering a decoupling scheme different from the random-phase approximation, i.e., $\alpha_0\neq 0$, the temperature dependence of the micromagnetic parameters is corrected by taking spin correlation effects into account. The correlation corrections proportional to $\alpha_{0}$ have a positive sign for the isotropic exchange and the Dzyaloshinsky--Moriya terms, which makes these terms decrease more slowly in magnitude with the temperature.
Note that for only nearest-neighbor interactions, the relative correction to the isotropic exchange and Dzyaloshinsky--Moriya interactions turn out to be precisely the same, similar to what is observed in a single-sublattice ferromagnet in Ref.~\cite{Rozsa}.
In contrast, the correlation corrections are negative for the anisotropy terms, indicating a faster decrease.
In ferromagnets, this is known to correspond to the Callen--Callen law $\mathcal{K}\sim \langle S^{z}\rangle^3$ for the temperature dependence of the uniaxial anisotropy~\cite{Callen2} based on the first term in Eq. \eqref{eqn23}, and to a scaling exponent $\mathcal{K}\sim \langle S^{z}\rangle^{2+\varepsilon}$ slightly larger than $2$ for the two-ion anisotropy~\cite{Evans} in the second term.
The last term in Eq.~\eqref{eqn23} gives a positive contribution to the anisotropy, meaning that the Dzyaloshinsky--Moriya interaction stabilizes collinear order in the presence of thermal fluctuations~\cite{Rozsa}.
Equation~\eqref{eqn21} defines different mesoscopic exchange stiffness parameters for the intrasublattice coupling $\mathcal{J}_{m}^{AA},\mathcal{J}_{m}^{BB}$ and for the intersublattice coupling $\mathcal{J}_{m}^{AB},\mathcal{J}_{m}^{BA}$.
Together with the anisotropy, the values of these exchange stiffness parameters are relevant for the estimation of the domain wall width $\delta_w (T) \propto \sqrt{\mathcal{J}(T)/\mathcal{K}(T)}$. Equation~\eqref{eqn22} describes the temperature dependence of the mesoscopic intrasublattice and intersublattice Dzyaloshinsky--Moriya parameters, which are necessary for the estimation of the skyrmion radius~\cite{Barker2016,Tomasello2018}. The competition between the different contributions to the anisotropy term in Eq.~\eqref{eqn23} gives rise to fluctuation-driven spin reorientation transitions induced by the Dzyaloshinsky--Moriya interaction~\cite{Nagyfalusi} and unusual exponents in the temperature dependence of the anisotropy parameter~\cite{Evans}, similarly to what has been observed before in ferromagnetic systems.
As mentioned in Sec.~\ref{sec2a}, in the quantum case $\tilde{S}_{js}^{(1)}\tilde{S}_{ir}^{(2)}$ has to be replaced by the anticommutator $1/2\left[\tilde{S}_{js}^{(1)},\tilde{S}_{ir}^{(2)}\right]_{+}$. For $S=1/2$, this choice enforces the coefficient of the single-ion anisotropy $K^{r}$ to vanish ($\alpha_{0}=1/\left(2S^{2}\right)$ and $1/2\left[\tilde{S}_{ir}^{(1)},\tilde{S}_{ir}^{(2)}\right]_{+}=1/4$), which is consistent with the fact that the single-ion anisotropy just acts as a constant energy term. Indeed, this condition was one of the motivations behind choosing the value of the decoupling coefficient $\alpha_{0}$ in Ref.~\cite{Anderson}.
\subsection{Scaling exponents}
As mentioned above, the temperature dependence of the parameters of the continuum model is often expressed in terms of a power law of the magnetization. This is common practice partially because it is easier to implement numerically in a micromagnetic framework and partially because in non-equilibrium situations the value of the magnetization represents better the thermodynamic state of the system than the temperature of the heat bath. Both antiferromagnetic and ferrimagnetic systems may be characterized by the sublattice magnetizations $\left<\tilde{S}_{A}^{z}\right>,\left<\tilde{S}_{B}^{z}\right>$, or the combinations $\mu_{A}\left<\tilde{S}_{A}^{z}\right>\pm\mu_{B}\left<\tilde{S}_{B}^{z}\right>$ resulting in the total and staggered magnetizations, respectively. In the classical limit, assuming that all intrasublattice interaction terms are the same and there is no external magnetic field, it can be shown based on the definition Eq.~\eqref{eqn18} that $\left<\tilde{S}_{A}^{z}\right>=\left<\tilde{S}_{B}^{z}\right>$, meaning that the temperature dependence of all magnetizations is precisely the same if they are normalized to their zero-temperature value. This is the case for antiferromagnets with identical sublattices, but also for ferrimagnets with $\mu_{A}\neq\mu_{B}$ where the intrasublattice interactions are negligible compared to the intersublattice ones. This follows from the fact that the self-consistency Eqs.~\eqref{eqn4}-\eqref{eqn7} and Eqs.~\eqref{eqn14}-\eqref{eqn17} do not depend explicitly on the magnetic moments. In the following we restrict our attention to this limit, and leave the case of different sublattice magnetizations observable in, e.g., ferrimagnets with a compensation point or in the quantum limit, to later studies.
If all terms in the Hamiltonian Eq.~\eqref{eqn1} are negligible compared to the intersublattice isotropic exchange $J_{ij}^{AB}$, using Eqs.~\eqref{eqn4}-\eqref{eqn7}, \eqref{eqn14}, \eqref{eqn18}, and \eqref{eqn21}, in the low-temperature limit the effective exchange interaction may be expressed as
\begin{eqnarray}
\mathcal{J}_{m0}^{rs}\propto\left<\tilde{S}^{z}\right>^{2}\left(1-\varepsilon\left<\tilde{S}^{z}\right>\right)\approx\left<\tilde{S}^{z}\right>^{2-\varepsilon},\label{eqn25}
\end{eqnarray}
where $\left<\tilde{S}^{z}\right>$ is the magnetization on either sublattice and the correction to the exponent reads
\begin{eqnarray}
\varepsilon=\frac{\frac{1}{N_{\textrm{c}}}\sum_{\boldsymbol{q}}\frac{\left(\gamma_{\boldsymbol{q}}^{AB}\right)^{2}}{1-\left(\gamma_{\boldsymbol{q}}^{AB}\right)^{2}}}{\frac{1}{N_{\textrm{c}}}\sum_{\boldsymbol{q}}\frac{1}{1-\left(\gamma_{\boldsymbol{q}}^{AB}\right)^{2}}},\label{eqn26}
\end{eqnarray}
with the geometrical factor
\begin{eqnarray}
\gamma_{\boldsymbol{q}}^{AB}\sum_{\boldsymbol{R}_{i}-\boldsymbol{R}_{j}}J_{ij}^{AB}=\sum_{\boldsymbol{R}_{i}-\boldsymbol{R}_{j}}\textrm{e}^{-\textrm{i}\boldsymbol{q}\left(\boldsymbol{R}_{i}-\boldsymbol{R}_{j}\right)}J_{ij}^{AB}.\label{eqn27}
\end{eqnarray}
The power law in Eq.~\eqref{eqn25} is not only similar to the ferromagnetic case discussed in, e.g., Ref.~\cite{Bastardis}, but also numerically identical to it. If it is assumed that the antiferromagnetic alignment of the spins is realized in a system where all atoms together form a Bravais lattice, then one obtains $\gamma_{\boldsymbol{q}+\boldsymbol{Q}}^{AB}=-\gamma_{\boldsymbol{q}}^{AB}$ with $\boldsymbol{Q}$ the wave vector of the antiferromagnetic ordering, making it possible to rewrite the correction to the exponent as
\begin{eqnarray}
\varepsilon=\frac{\frac{1}{2N_{\textrm{c}}}\sum_{\boldsymbol{q},\textrm{FM}}\frac{\gamma_{\boldsymbol{q}}^{AB}}{1-\gamma_{\boldsymbol{q}}^{AB}}}{\frac{1}{2N_{\textrm{c}}}\sum_{\boldsymbol{q},\textrm{FM}}\frac{1}{1-\gamma_{\boldsymbol{q}}^{AB}}},\label{eqn28}
\end{eqnarray}
where the summation now runs over the atomic or ferromagnetic Brillouin zone, which is twice the size of the antiferromagnetic one. For infinite lattices where the summations can be replaced by integrals, the correction to the exponent is $\varepsilon^{\textrm{sc}}=0.341$ for the simple cubic and $\varepsilon^{\textrm{bcc}}=0.282$ for the body-centered cubic lattice~\cite{Bastardis}, both of which can accommodate a two-sublattice ordering. Even for systems where all atoms together do not form a Bravais lattice (e.g., the honeycomb lattice), it can be derived that the expectation values and the correlation functions in the antiferromagnetically aligned model precisely coincide with those of the ferromagnetic model where the sign of all intersublattice coupling terms is reversed. This essentially relies on the fact that in the classical limit, although not in the quantum case, the self-consistency Eqs.~\eqref{eqn4}-\eqref{eqn7} and \eqref{eqn14}-\eqref{eqn17} do not depend on the magnon frequencies in Eq.~\eqref{eqn13}, which are different between the ferromagnetic and the antiferromagnetic alignment.
For a weak Dzyaloshinsky--Moriya interaction, the same correction $\varepsilon$ to the scaling exponent can be used. For two-ion anisotropy between the same pairs of atoms as the exchange, the exponent is $2+\varepsilon$ owing to the opposite sign of the correlation correction between Eqs.~\eqref{eqn21}-\eqref{eqn22} and the second term in Eq.~\eqref{eqn23}, respectively. For the on-site anisotropy term, the exponent is close to $3$ as in the ferromagnetic case~\cite{Callen2}, since the on-site correlations are stronger than the two-site terms.
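Equation~\eqref{eqn28} is straightforward to evaluate numerically. The following hedged Python sketch approximates the Brillouin-zone averages on a midpoint grid (which avoids the $\gamma_{\boldsymbol{q}}=1$ singularity at $\boldsymbol{q}=\boldsymbol{0}$) and reproduces the quoted values for the simple cubic and body-centered cubic lattices to within grid-resolution accuracy:
\begin{verbatim}
import numpy as np

def epsilon_correction(gamma_q):
    """Eq. (28): ratio of the BZ averages of g/(1-g) and 1/(1-g)."""
    return np.mean(gamma_q / (1.0 - gamma_q)) / np.mean(1.0 / (1.0 - gamma_q))

L = 64  # q-points per direction; the midpoint grid excludes gamma = 1
q = 2.0 * np.pi * (np.arange(L) + 0.5) / L
qx, qy, qz = np.meshgrid(q, q, q, indexing="ij")

gamma_sc = (np.cos(qx) + np.cos(qy) + np.cos(qz)) / 3.0       # simple cubic
gamma_bcc = np.cos(qx / 2) * np.cos(qy / 2) * np.cos(qz / 2)  # bcc

print(f"eps(sc)  = {epsilon_correction(gamma_sc):.3f}")   # approx 0.341
print(f"eps(bcc) = {epsilon_correction(gamma_bcc):.3f}")  # approx 0.282
\end{verbatim}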
In two-dimensional systems, the sums in Eq.~\eqref{eqn26} diverge for infinite lattice sizes, as is known from, e.g., the proof of the Mermin--Wagner theorem~\cite{Mermin}. This implies that the exponents may only be calculated if a finite anisotropy is taken into account, in which case they have to be evaluated numerically. Since the correlation corrections are expected to be enhanced in low-dimensional systems, this procedure is carried out and compared to numerical simulations in Sec.~\ref{sec3}.
\section{Simulations\label{sec3}}
To probe the accuracy of the analytical method described in Sec.~\ref{sec2}, its predictions will be compared to the numerical simulations based on the Hamiltonian Eq.~\eqref{eqn1}. While the magnetization and the static correlation functions may be directly determined from averaging over spin configurations from the different simulation steps, the frequencies required for determining the temperature dependence of the parameters in the continuum model are more difficult to access. Equations~\eqref{eqn8}-\eqref{eqn13} establish the relations between the expectation values and the frequencies. They may be reformulated as
\begin{eqnarray}
&&\left<S_{-\boldsymbol{q}A}^{+}S_{\boldsymbol{q}A}^{-}\right>\left<S_{-\boldsymbol{q}B}^{+}S_{\boldsymbol{q}B}^{-}\right>-\left<S_{-\boldsymbol{q}B}^{+}S_{\boldsymbol{q}A}^{-}\right>\left<S_{-\boldsymbol{q}A}^{+}S_{\boldsymbol{q}B}^{-}\right>\nonumber
\\
&&=-4\left<\tilde{S}_{A}^{z}\right>\left<\tilde{S}_{B}^{z}\right>\frac{1}{N_{\textrm{c}}^{2}}\frac{\gamma}{\mu_{A}}\frac{\gamma}{\mu_{B}}\frac{\left(k_{\textrm{B}}T\right)^{2}}{\omega^{+}_{\boldsymbol{q}}\omega^{-}_{\boldsymbol{q}}},\label{eqn29}
\\
&&\frac{\left<S_{-\boldsymbol{q}A}^{+}S_{\boldsymbol{q}A}^{-}\right>}{2\gamma\mu_{A}^{-1}\left<S_{A}^{z}\right>}+\frac{\left<S_{-\boldsymbol{q}B}^{+}S_{\boldsymbol{q}B}^{-}\right>}{2\gamma\mu_{B}^{-1}\left<S_{B}^{z}\right>}=\frac{1}{N_{\textrm{c}}}\frac{k_{\textrm{B}}T\left(\omega^{+}_{\boldsymbol{q}}+\omega^{-}_{\boldsymbol{q}}\right)}{\omega^{+}_{\boldsymbol{q}}\omega^{-}_{\boldsymbol{q}}}.\label{eqn30}
\end{eqnarray}
The product of the frequencies $\omega_{\textrm{prod}}=\omega^{+}_{\boldsymbol{q}}\omega^{-}_{\boldsymbol{q}}$ is given by Eq.~\eqref{eqn29}, while the sum $\omega_{\textrm{sum}}=\omega^{+}_{\boldsymbol{q}}+\omega^{-}_{\boldsymbol{q}}$ follows from Eq.~\eqref{eqn30}.
The individual frequencies may be calculated as
\begin{eqnarray}
\omega^{\pm}_{\boldsymbol{q}}=\frac{1}{2}\left[\omega_{\textrm{sum}}\pm\sqrt{\omega_{\textrm{sum}}^{2}-4\omega_{\textrm{prod}}}\right].\label{eqn31}
\end{eqnarray}
Note that in Eqs.~\eqref{eqn29} and \eqref{eqn30}, the correlation functions are given in the global coordinate system for easier implementation in the simulations. Since these equations establish a connection between the eigenfrequencies, the correlation functions and the temperature, they may be considered as a form of the equipartition theorem. Although Eqs.~\eqref{eqn29} and \eqref{eqn30} were determined from the Green's function formalism, they do not depend on the explicit form of the decoupling $\alpha_{0}$, only on the assumption that the spectral density is concentrated in single-particle excitations. Therefore, substituting the expectation values obtained from the simulations into Eqs.~\eqref{eqn29} and \eqref{eqn30} enables the calculation of the frequencies of the simulated system. Furthermore, this method allows for determining the frequencies based on Monte Carlo simulations, which accurately describe thermal equilibrium properties but do not provide direct access to the real-time dynamics of the system.
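A hedged sketch of this inversion step, with a synthetic round trip in place of actual simulation data, reads:
\begin{verbatim}
import numpy as np

def magnon_frequencies(omega_prod, omega_sum):
    """Eq. (31): recover omega_q^+ and omega_q^- from their product and
    sum, which Eqs. (29)-(30) express through measured correlations."""
    disc = np.sqrt(omega_sum**2 - 4.0 * omega_prod)
    return 0.5 * (omega_sum + disc), 0.5 * (omega_sum - disc)

# Round-trip check with hypothetical frequencies at one q-point:
w_plus, w_minus = 2.0, -1.5
print(magnon_frequencies(w_plus * w_minus, w_plus + w_minus))  # (2.0, -1.5)
\end{verbatim}
In practice, $\omega_{\textrm{prod}}$ and $\omega_{\textrm{sum}}$ would be evaluated from the simulated correlation functions via Eqs.~\eqref{eqn29} and \eqref{eqn30} at every $\boldsymbol{q}$.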
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{figure1.pdf}
\caption{Sketch of the system used for the simulations. The spin directions illustrate a spin wave propagating along the $x$ direction on an antiferromagnetic background along the $z$ direction.
$K^A=K^B=K$ stand for the uniaxial anisotropy constants, $-J^{AB}=J$ for the intersublattice antiferromagnetic exchange parameter and $D^{AB}=D$ for the Dzyaloshinsky--Moriya interaction parameter.}\label{fig1}
\end{figure}
The simulated model system is illustrated in Fig.~\ref{fig1}. It consists of a square lattice with equivalent sublattices $\mu_{A}=\mu_{B}=\mu_{\textrm{S}}$, only considering nearest-neighbor intersublattice Heisenberg exchange $-J^{AB}=J>0$ and Dzyaloshinsky--Moriya interactions of magnitude $D^{AB}=D$, with the Dzyaloshinsky--Moriya vectors being perpendicular to the lattice vectors connecting the neighbours following a $C_{4\textrm{v}}$ symmetry. The easy axis $K^{A}=K^{B}=K$ was assumed to lie along one of the nearest-neighbour directions, which enables the investigation of the effect of the Dzyaloshinsky--Moriya vectors parallel to the $z$ direction on the spin-wave spectrum. The external magnetic field was set to zero. We performed Monte Carlo simulations on a $64\times 64$ lattice using the single-spin Metropolis algorithm where the trial spin direction is chosen uniformly on the surface of the unit sphere. The lattice was equilibrated for $2\cdot10^{5}$ Monte Carlo steps at each temperature, then the expectation values were calculated from data obtained over $10^{8}$ Monte Carlo steps. To further improve the accuracy, $50$ independent simulations were averaged in the end.
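The following Python sketch illustrates the single-spin Metropolis update for such a model. It is a simplified stand-in rather than the production code: the lattice geometry is reduced to generic two-dimensional indexing, the Dzyaloshinsky--Moriya $z$-component is placed on the bonds along the first lattice direction only with its sign set by the bond orientation, and the system size and number of sweeps are far smaller than in the simulations reported here:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, J, D, K, T = 16, 1.0, 0.2, 0.1, 0.4   # illustrative parameters (k_B = 1)

# Neel initial state: checkerboard of +z / -z unit spins, shape (L, L, 3)
parity = np.indices((L, L)).sum(axis=0) % 2
S = np.zeros((L, L, 3))
S[..., 2] = np.where(parity == 0, 1.0, -1.0)

def random_unit_vector():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def local_energy(i, j, s):
    """Energy of spin s at (i, j) with its four neighbours, from Eq. (1)
    with J_ij = -J (antiferromagnetic) and DMI on the x-bonds only."""
    e = -K * s[2] ** 2
    for di, dj, dsign in ((1, 0, 1.0), (-1, 0, -1.0), (0, 1, 0.0), (0, -1, 0.0)):
        n = S[(i + di) % L, (j + dj) % L]
        e += J * np.dot(s, n)                         # AFM exchange
        e -= dsign * D * (s[0] * n[1] - s[1] * n[0])  # z-component of DMI
    return e

def metropolis_sweep():
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        s_new = random_unit_vector()   # trial spin uniform on the sphere
        dE = local_energy(i, j, s_new) - local_energy(i, j, S[i, j])
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            S[i, j] = s_new

for _ in range(200):                   # short equilibration only
    metropolis_sweep()

stagger = np.where(parity == 0, 1.0, -1.0)
print("staggered magnetization n_z =", np.mean(stagger * S[..., 2]))
\end{verbatim}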
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{figure2.pdf}
\caption{Spin-wave spectrum for $D=0.2J$ and $K=0.1J$. Results of the numerical simulations (symbols) are compared to Green's function theory calculations (lines) at two different temperatures.\label{fig2}}
\end{figure}
The spin-wave spectrum at finite temperature was calculated based on Eq.~\eqref{eqn29}, since the symmetry of the sublattices implies $\omega^{+}_{\boldsymbol{q}}=-\omega^{-}_{\boldsymbol{q}}$, $\left<\tilde{S}^{z}_{A}\right>=\left<\tilde{S}^{z}_{B}\right>=\left<S^{z}\right>$ and both sides of Eq.~\eqref{eqn30} vanish. For the considered system, the two branches of the spin-wave dispersion relation are given by
\begin{align}
&\omega^{+}_{\boldsymbol{q}}=\left<S^{z}\right>^{-1}\nonumber
\\
&\times\sqrt{\left(4\mathcal{J}+2\mathcal{K}\right)^{2}-\left(2\mathcal{J}\left[\cos\left(q^{x}a\right)+\cos\left(q^{z}a\right)\right]-2\mathcal{D}\sin\left(q^{x}a\right)\right)^{2}},\label{eqn32}
\\
&-\omega^{-}_{-\boldsymbol{q}}=\left<S^{z}\right>^{-1}\nonumber
\\
&\times\sqrt{\left(4\mathcal{J}+2\mathcal{K}\right)^{2}-\left(2\mathcal{J}\left[\cos\left(q^{x}a\right)+\cos\left(q^{z}a\right)\right]+2\mathcal{D}\sin\left(q^{x}a\right)\right)^{2}}.\label{eqn33}
\end{align}
The spectrum is illustrated in Fig.~\ref{fig2}. The Dzyaloshinsky--Moriya interaction lifts the degeneracy of the two branches and shifts the minimum of the spectrum away from $q^{x}=0$. The anisotropy opens a gap in the spectrum, which is exchange enhanced compared to the ferromagnetic case:
for $K\ll J$, $\omega_{\boldsymbol{0},\textrm{AFM}} \approx \sqrt{2(4\mathcal{J}) (2\mathcal{K})} \gg 2\mathcal{K}=\omega_{\boldsymbol{0},\textrm{FM}}$.
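As a worked example, at zero temperature with $\mathcal{K}=0.1\mathcal{J}$, Eq.~\eqref{eqn32} at $\boldsymbol{q}=\boldsymbol{0}$ gives a gap of $2\sqrt{\mathcal{K}\left(4\mathcal{J}+\mathcal{K}\right)}\approx 1.28\mathcal{J}$, more than six times larger than the corresponding ferromagnetic value $2\mathcal{K}=0.2\mathcal{J}$.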
The theoretical curves are given by Eqs.~\eqref{eqn32} and \eqref{eqn33}, where the parameters $\mathcal{J},\mathcal{D}=\lvert\mathcal{D}_{ij}\rvert,$ and $\mathcal{K}$ are given by
\begin{eqnarray}
\mathcal{J}&=&\left[J+\alpha_{0}J\textrm{Re}\left<\tilde{S}_{jB}^{(1)}\tilde{S}_{iA}^{(2)}\right>\right]\left<\tilde{S}^{z}\right>^{2},\label{eqn34}
\\
\mathcal{D}_{ij}&=&\left[D_{ij}-\alpha_{0}J\textrm{Im}\left<\tilde{S}_{jB}^{(1)}\tilde{S}_{iA}^{(2)}\right>\right]\left<\tilde{S}^{z}\right>^{2},\label{eqn35}
\\
\mathcal{K}&=&\left[K\left(1-\alpha_{0}\left<\tilde{S}_{iA}^{(1)}\tilde{S}_{iA}^{(2)}\right>\right)-\frac{1}{2}\sum_{j}\alpha_{0}D_{ij}\textrm{Im}\left<\tilde{S}_{jB}^{(1)}\tilde{S}_{iA}^{(2)}\right>\right]\left<\tilde{S}^{z}\right>^{2},\label{eqn36}
\end{eqnarray}
based on Eqs.~\eqref{eqn21}-\eqref{eqn23} in an atomistic description. Note that the sign changes in Eqs.~\eqref{eqn35} and \eqref{eqn36} compared to Eqs.~\eqref{eqn22} and \eqref{eqn23} appear due to the sign change in $J$ and the antiferromagnetic alignment of the sublattices, respectively.
Figure~\ref{fig2} supports the high accuracy of the Green's function formalism up to intermediate temperature values of $k_{\textrm{B}}T=0.40J$; for comparison, the critical temperature of the system is around
$k_{\textrm{B}}T_{\textrm{c}}\approx 0.84J$ from Green's function theory. Note that the magnon frequencies cannot be determined at temperatures close to $T_{\textrm{c}}$, since due to the relatively small system size and the long simulation length the system starts to switch between the $+z$ and $-z$ directions, making it impossible to calculate $\left<S^{z}\right>$ accurately.
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{figure3.pdf}
\caption{Dependence of the effective interaction parameters on the staggered magnetization $n$, equal to the sublattice magnetization $\left<S^{z}\right>$ in this system. All quantities are normalized to their zero-temperature value. Results of the numerical simulations (symbols) are compared to Green's function theory calculations (lines). Simulation data were obtained by fitting the functions in Eq.~\eqref{eqn32} and \eqref{eqn33} to the simulated frequencies; error bars denote the error of this fit. Dashed lines show a low-temperature power-law fit to the simulation data. The atomistic model parameters are $K=0.1J$ and $D=0.2J$ for the blue and orange curves, $D=0.0J$ for the yellow curves.\label{fig3}}
\end{figure}
The temperature dependence of the parameters in the magnon spectrum is shown in Fig.~\ref{fig3}. This confirms the accuracy of the Green's function method in predicting the simulation results. The Dzyaloshinsky--Moriya interaction $\mathcal{D}$ decreases more slowly with temperature than the anisotropy $\mathcal{K}$, as discussed in Sec.~\ref{sec2b} for the general case. The temperature dependence of the Heisenberg term $\mathcal{J}$ is identical to that of the Dzyaloshinsky--Moriya interaction in Green's function theory and agrees with it in the simulations within error bars; therefore, it is omitted from the figure. The dependence on the order parameter, represented by the staggered magnetization $n$, may also be expressed as a power law. Based on a fit to the simulation data, this exponent is $1.58$ for the Dzyaloshinsky--Moriya interaction, decreased by the correction $\varepsilon=0.42$ compared to the uncorrelated value.
This value of the exponent agrees with the range $2-\varepsilon=1.54$--$1.57$ obtained for the ferromagnetic case in Ref.~\cite{Rozsa}.
For the anisotropy, an exponent of $3.03$ is obtained without Dzyaloshinsky--Moriya interaction, rather close to the well-known power law predicting an exponent of $3$~\cite{Callen2}. In the presence of the Dzyaloshinsky--Moriya interaction, the exponent is slightly reduced to $2.92$, i.e., there is an additional positive contribution to the temperature dependence of the uniaxial anisotropy due to the Dzyaloshinsky--Moriya interaction.
\begin{figure}
\centering
\includegraphics[width = \columnwidth]{figure4.pdf}
\caption{Correlation correction to the effective interaction parameters as a function of the staggered magnetization $n$. Data and notations are identical to Fig.~\ref{fig3}, apart from subtracting $n^{2}$ from the normalized parameters as indicated in the legend.\label{fig4}}
\end{figure}
The accuracy of the decoupling scheme may be better visualized after subtracting $n^{2}$ from the normalized parameters, leaving only the correlation corrections shown in Fig.~\ref{fig4}. Note that in the random-phase approximation obtained for $\alpha_{0}=0$, the curves would be zero as indicated by the dashed line in the figure. Comparing Figs.~\ref{fig3} and \ref{fig4}, it is clear that the correlation corrections are not negligible, contributing around $10\%$ of the total value of the Dzyaloshinsky--Moriya interaction and around $50\%$ of the total anisotropy at the highest simulated temperatures. As mentioned earlier, for $D=0$ the correction to the anisotropy will be $n^3-n^2$, i.e., it results in the Callen--Callen power law~\cite{Callen2}. The corrections are positive for the exchange and negative for the anisotropy terms as mentioned above, leading to increased and decreased effective exponents, respectively. While in this plot the deviations between Green's function theory and the simulations become apparent, even for the anisotropy terms the analytical description reproduces about $2/3$ of the corrections observed in the simulations. The accuracy appears to be higher for the Dzyaloshinsky--Moriya interaction itself and its correction to the anisotropy (difference of the orange and yellow lines).
\section{Conclusion}
We applied Green's function theory to calculate the magnon frequencies in two-sublattice antiferromagnetically aligned systems, and to determine the temperature dependence of the interaction parameters in the magnon spectrum. We found that transversal spin correlations stabilize the Heisenberg and Dzyaloshinsky--Moriya exchange interactions against thermal fluctuations, but induce a faster decay of the anisotropy terms with the temperature. The Dzyaloshinsky--Moriya interaction also contributes to the uniaxial anisotropy term via the spin correlations, increasing its value at finite temperature in contrast to the typical decrease. We obtained good agreement between the predictions of the theory and Monte Carlo simulations performed on a square lattice, where the correlations play a pronounced role due to the reduced dimensionality.
These results agree with previous calculations for ferromagnets~\cite{Bastardis,Rozsa,Evans}. The agreement is not simply qualitative: the self-consistency equations may be exactly transformed into each other in the classical limit when reversing the magnetization direction on one sublattice simultaneously with the sign of all intersublattice coupling terms. If the intrasublattice interactions are identical, the sublattice magnetizations and consequently the total and staggered magnetizations show precisely the same temperature dependence, even if the magnetic moments on the sublattices are different.
The calculated temperature dependence of the parameters are fundamental for the development of multiscale models connecting first-principles spin-model parameters to finite-temperature mesoscopic computational approaches, such as micromagnetism or the Landau--Lifshitz--Bloch equation. Most of the multiscale approaches proposed so far
rely on an intermediate step based on classical spin-model simulations, which could be replaced by the considerably more efficient semi-analytical expression presented here.
Multiscale methods would be able to access the dynamics of and the phase transitions in antiferromagnetically aligned systems, for example for a realistic description of all-optical ultrafast switching processes in ferrimagnets~\cite{Raposo2022}
or of magnetic domain wall motion in antiferromagnets~\cite{Hirst2022}.
Deviations in the equilibrium parameters from single-sublattice ferromagnets are expected to be observed in systems where the intrasublattice terms are not equivalent, such as ferrimagnets with a compensation point, or particularly when quantum effects are taken into account. Validating the predictions of Green's function theory in the quantum limit would require comparisons with classical spin-model simulations augmented by a semi-quantum thermostat~\cite{Barker2019} or with renormalized heat-bath temperatures~\cite{Evans2015}, or to quantum spin-model simulations based on quantum Monte Carlo~\cite{Sandvik2007} or tensor-product states~\cite{Cirac_2009}.
The multi-scale quantum approach would be completed by using the calculated temperature-dependent parameters in the quantum version
of the Landau--Lifshitz--Bloch equation~\cite{Nieves2014}.
\begin{acknowledgments}
L. R. gratefully acknowledges funding by the National Research, Development, and Innovation Office (NRDI) of Hungary under Project No. K131938 and by the Young Scholar Fund at the University of Konstanz.
U. A. gratefully acknowledges support by grant PID2021-122980OB-C55 and the grant RYC-2020-030605-I funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe" and "ESF Investing in your future".
\end{acknowledgments}
\section{Introduction}
Let $A_1A_2A_3A_4A_{5}$ be a closed hexahedron in
$\mathbb{R}^{3},$ $B_{i}$ be a non-negative number (weight) which
corresponds to each vertex $A_i,$ $A_{0}$ be a point in
$\mathbb{R}^{3},$ and $a_{ij}$ be the Euclidean length of the
linear segment $A_{i}A_{j},$ for $i,j=0,1,2,3,4,5$ respectively.
The weighted Fermat-Torricelli problem for a closed hexahedron
$A_1A_2A_3A_4A_{5}$ in $\mathbb{R}^{3}$ states that:
\begin{problem}\label{5FT}
Find a point $A_0$ which minimizes the sum of the lengths of the
linear segments that connect every vertex $A_{i}$ with $A_0$
multiplied by the positive weight $B_i$:
\begin{equation} \label{eq:001}
\sum_{i=1}^{5}B_{i}a_{0i}=minimum.
\end{equation}
\end{problem}
For $B_{1}=B_{2}=B_{3}$ and $B_{4}=B_{5}=0$ we derive the
classical Fermat-Torricelli problem, which was introduced by
Fermat in 1643; Torricelli discovered the first geometrical
construction in $\mathbb{R}^{2}.$ In 1877, Engelbrecht extended
Torricelli's construction to the weighted case. In 2014, Uteshev
succeeded in finding an elegant algebraic solution of the weighted
Fermat-Torricelli problem in $\mathbb{R}^{2}$ in
\cite{Uteshev:12}. A detailed history of the weighted
Fermat-Torricelli problem is given in \cite{Kup/Mar:97},
\cite{BolMa/So:99} and \cite{Gue/Tes:02}.
In 1997, Y. Kupitz and H. Martini gave a complete study concerning
the existence, uniqueness and a characterization of the weighted
Fermat-Torricelli point for $n$ non-collinear points in
$\mathbb{R}^{m}$ in \cite{Kup/Mar:97} (see also in
\cite[Theorem~18.37, p.~250]{BolMa/So:99}).
\begin{theorem}\label{theor}
Let there be given n non-collinear points in $\mathbb{R}^{m},$
with corresponding positive weights $B_{1},B_{2},...,B_{n}.$
(i) Then the weighted Fermat-Torricelli point $A_0$ of
$\{A_{1}A_{2}A_{3}...A_{n}\}$ exists and is unique.
(ii) If
\[ \|{\sum_{j=1}^{n}B_{j}\vec {u}(A_i,A_j)}\|>B_i, \quad i\neq j, \] for
every $i\in\{1,2,\ldots,n\}$, then
(a) the weighted Fermat-Torricelli point does not belong to $\{A_1A_2A_3...A_n\}$ (Weighted Floating Case). \\
(b) \[\sum_{i=1}^{n}B_{i}\vec{u}(A_0,A_i)=\vec{0}\]
(Weighted Floating Case).
(iii) If there is some $i\in\{1,2,\ldots,n\}$ with \[ \|{\sum_{j=1}^{n}B_{j}\vec
u(A_i,A_j)}\|\le B_i, \quad i\neq j, \] then the
weighted Fermat-Torricelli point is the vertex $A_i$
(Weighted Absorbed Case),
where $\vec {u}(A_i,A_j)$ is the unit vector with direction from
$A_{i}$ to $A_{j},$ for $i,j=0,1,\ldots,n$ and $i\ne j.$
\end{theorem}
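Although the theorem is an existence and characterization result, the floating case (ii)(b) also underlies a standard computational procedure: the vanishing of the weighted unit-vector sum is the stationarity condition exploited by the classical Weiszfeld fixed-point iteration. The following Python sketch (with illustrative input data; it assumes the floating case and does not treat the absorbed case (iii)) locates the weighted Fermat-Torricelli point numerically in $\mathbb{R}^{3}$:
\begin{verbatim}
import numpy as np

def weighted_fermat_torricelli(A, B, iters=1000, tol=1e-12):
    """Weiszfeld-type iteration for min_x sum_i B_i |x - A_i|.
    A: (n, 3) array of vertices; B: (n,) positive weights."""
    x = np.average(A, axis=0, weights=B)       # weighted centroid start
    for _ in range(iters):
        d = np.linalg.norm(A - x, axis=1)
        if np.any(d < tol):                    # iterate hit a vertex
            break
        w = B / d
        x_new = (w[:, None] * A).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Example: a tetrahedron with equal weights
A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
print(weighted_fermat_torricelli(A, np.ones(4)))
\end{verbatim}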
The inverse weighted Fermat-Torricelli problem for tetrahedra in
$\mathbb{R}^{3}$ states that:
\begin{problem}
Given a point $A_{0}$ which belongs to the interior of
$A_{1}A_{2}A_{3}A_{4}$ in $\mathbb{R}^{3}$, does there exist a
unique set of positive weights $B_{i},$ such that
\begin{displaymath}
B_{1}+B_{2}+B_{3}+B_{4} = c =const,
\end{displaymath}
for which $A_{0}$ minimizes
\begin{displaymath}
f(A_{0})=\sum_{i=1}^{4}B_{i}a_{0i}.
\end{displaymath}
\end{problem}
By letting $B_{4}=0$ and $c=1$ in the inverse weighted
Fermat-Torricelli problem for tetrahedra we obtain the
(normalized) inverse weighted Fermat-Torricelli problem for three
non-collinear points in $\mathbb{R}^{2}.$ In 2002, S. Gueron and
R. Tessler introduced the normalized inverse weighted
Fermat-Torricelli problem for three non-collinear points in
$\mathbb{R}^{2}$ and also gave a positive answer in
\cite{Gue/Tes:02}.
In 2009, a positive answer with respect to the inverse weighted
Fermat-Torricelli problem for tetrahedra is given in
\cite{Zach/Zou:09} and, more recently, Uteshev also obtained a positive
answer in \cite{Uteshev:12} by using the Cartesian coordinates of
the four non-collinear and non-coplanar fixed vertices. In 2011, a
negative answer with respect to the inverse weighted
Fermat-Torricelli problem for tetragonal pyramids in
$\mathbb{R}^{3}$ is derived in \cite{ZachosZu:11}. This negative
answer led to an important dependence of the five variable
weights, such that the corresponding weighted Fermat-Torricelli
point remains the same, which we call a plasticity principle of
closed hexahedra. In 2013, we proved a plasticity principle of
closed hexahedra in $\mathbb{R}^{3}$ and a plasticity principle
for convex quadrilaterals in \cite{Zachos:13} and
\cite{Zachos:14}, respectively.
\begin{figure}
\centering
\includegraphics[scale=0.80]{Boundaryhex}
\caption{}\label{hex1}
\end{figure}
In this paper, we consider an important generalization of the
inverse weighted Fermat-Torricelli problem for boundary tetrahedra
in $\mathbb{R}^{3}$, which is derived as an application of the
geometric plasticity of weighted Fermat-Torricelli trees of degree
four for boundary tetrahedra in a two-way communication network
(Section~3, Proposition~3). This new evolutionary approach gives a
new type of plasticity of mass transportation networks of degree
four for boundary tetrahedra and of degree five for boundary
closed hexahedra in $\mathbb{R}^{3}$ (Section~3, Theorem~2,
Proposition~4). As a corollary, we also derive an important
generalization of the inverse weighted Fermat-Torricelli problem
for three non-collinear points and a new type of plasticity for
mass transportation networks of degree four for boundary weighted
quadrilaterals in $\mathbb{R}^{2}$ (Section~4, Theorem~3,
Proposition~5, Theorem~4). It is worth mentioning that this method
provides a unified approach for dealing with the inverse weighted
Fermat-Torricelli problem for a boundary triangle introduced by S.
Gueron and R. Tessler, which also includes the weighted absorbed
case (Theorem~1 (iii) for $n=3$).
\section{The Dependence of the angles of a weighted Fermat-Torricelli tree having degree at most four and at most five}
We shall start with the definitions of a tree topology, a
Fermat-Torricelli tree topology, the degree of a boundary vertex
in $\mathbb{R}^{3}$ and the degree of the weighted
Fermat-Torricelli point which is located at the interior of the
convex hull of a closed hexahedron or tetrahedron, in order to
describe the structure of a weighted Fermat-Torricelli tree of a
boundary closed hexahedron or a boundary tetrahedron in
$\mathbb{R}^{3}.$
\begin{definition}{\cite{GilbertPollak:68}}\label{topology}
A tree topology is a connection matrix specifying which pairs of
points from the list
$A_{1},A_{2},...,A_{m},A_{0,1},A_{0,2},...,A_{0,m-2}$ have a
connecting linear segment (edge).
\end{definition}
\begin{definition}{\cite{IvanTuzh:094}}\label{degreeSteinertree}
The degree of a vertex corresponds to the number of connections of
the vertex with linear segments.
\end{definition}
\begin{definition}{\cite{GilbertPollak:68}}\label{Steinertopology}
A Fermat-Torricelli tree topology of degree at most five is a tree
topology with all boundary vertices of a closed hexahedron and one
mobile vertex having at most degree five.
\end{definition}
\begin{definition}\label{FTtree}
A tree of minimum length with a Fermat-Torricelli tree topology of
degree at most five is called a Fermat-Torricelli tree.
\end{definition}
\begin{definition}\label{wFTtree5}
A Fermat-Torricelli tree of weighted minimum length with a
Fermat-Torricelli tree topology of degree at most five is called a
weighted Fermat-Torricelli tree of degree at most five.
\end{definition}
\begin{definition}\label{wFTtree4} A Fermat-Torricelli tree of
weighted minimum length having one zero weight is called a
weighted Fermat-Torricelli tree of degree at most four.
\end{definition}
\begin{definition}\label{wFTtree51} A unique solution of the weighted Fermat-Torricelli problem for closed hexahedra is a unique weighted Fermat-Torricelli tree
of degree at most five.
\end{definition}
\begin{definition}\label{wFTtree41} A unique solution of the weighted Fermat-Torricelli problem for tetrahedra is a unique weighted Fermat-Torricelli tree (weighted Fermat-Torricelli network)
of degree at most four.
\end{definition}
By following the methodology given in \cite[Lemmas~1,~2
pp.~15-17]{Zachos:13} and \cite[Solution of Problem~2,
pp.~119-120]{Zach/Zou:09}, we shall show that the position of a
weighted Fermat-Torricelli tree w.r. to a boundary tetrahedron is
determined by five given angles.
We denote by $\alpha_{i0j}\equiv \angle A_{i}A_{0}A_{j}$ and
$\alpha_{i,j0k}$ the angle which is formed by the linear segment
that connects $A_0$ with the trace of the orthogonal projection of
$A_i$ to the plane $A_jA_0A_k$ with $a_{0i}$, for
$i,j,k=1,2,3,4,$ and $i\neq j\neq k\neq i.$
\begin{proposition}\label{importinv2}
The angles $\alpha_{i,k0m}$ depend on exactly five given angles
$\alpha_{102},$ $\alpha_{103},$ $\alpha_{104},$ $\alpha_{203}$ and
$\alpha_{204},$ for $i,k,m=1,2,3,4,$ and $i \ne k \ne m.$
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{importinv2}]
We shall use the same expressions used in \cite[Solution of
Problem~2, pp.~119-120]{Zach/Zou:09} for the unit vectors
$\vec{a_{i}}$ in terms of spherical coordinates, for $i=1,2,3,4.$
We denote by
\begin{equation}\label{vec:1}
\vec{a_{1}}=(1,0,0)
\end{equation}
\begin{equation}\label{vec:2}
\vec{a_{2}}=(\cos(\alpha_{102}),\sin(\alpha_{102}),0)
\end{equation}
\begin{equation}\label{vec:3}
\vec{a_{3}}=(\cos(\alpha_{3,102})\cos(\omega_{3,102}),\cos(\alpha_{3,102})\sin(\omega_{3,102}),\sin({\alpha_{3,102}}))
\end{equation}
\begin{equation}\label{vec:4}
\vec{a_{4}}=(\cos(\alpha_{4,102})\cos(\omega_{4,102}),\cos(\alpha_{4,102})\sin(\omega_{4,102}),\sin({\alpha_{4,102}}))
\end{equation}
such that: $\abs{\vec{a_{i}}}=1$.
The angles $\alpha_{3,102},$ $\alpha_{4,102},$ are calculated by
the following two relations in
\cite[Formulas~(10),~(11),p.~120]{Zach/Zou:09}:
\begin{equation}\label{inv3}
\cos^{2}({\alpha_{3,102}})=\frac{\cos^{2}({\alpha_{203}})+\cos^{2}({\alpha_{103}})-2\cos({\alpha_{203}})\cos({\alpha_{103}})\cos({\alpha_{102}})}{\sin^{2}({\alpha_{102}})},
\end{equation}
and
\begin{equation}\label{inv4}
\cos^{2}({\alpha_{4,102}})=\frac{\cos^{2}({\alpha_{204}})+\cos^{2}({\alpha_{104}})-2\cos({\alpha_{204}})\cos({\alpha_{104}})\cos({\alpha_{102}})}{\sin^{2}({\alpha_{102}})}
\end{equation}
The inner product of $\vec{a_{i}}$, $\vec{a_{j}}$ is given by:
\begin{equation}\label{innerp}
\vec{a_{i}}\cdot \vec{a_{j}}=\cos({\alpha_{i0j}}).
\end{equation}
By replacing (\ref{inv3}) and (\ref{inv4}) in (\ref{innerp}), by
eliminating $\omega_{3,102}$ and $\omega_{4,102}$ and by squaring
both parts of the derived equation, we obtain a quadratic equation
w.r. to $\cos\alpha_{304}:$
\begin{eqnarray}\label{quadrangle304}
&&[-\cos\alpha _{103} \cos\alpha _{104}+\cos\alpha
_{304}-\left(-\cos\alpha _{102} \cos\alpha _{103}+\cos\alpha
_{203}\right)\nonumber\\
&& \left(-\cos\alpha _{102} \cos\alpha _{104}+\cos\alpha
_{204}\right) \csc{}^2\alpha _{102}]{}^2=
(1-\cos^{2}\alpha_{3,102})(1-\cos^{2}\alpha_{4,102}).\nonumber
\end{eqnarray}
By solving (\ref{quadrangle304}) w.r. to $\cos\alpha_{304},$ we
get:
\begin{eqnarray}\label{calcalpha3041}
&&\cos\alpha_{304}=-\frac{1}{4} [2 b+4 \cos\alpha _{102}
\left(\cos\alpha _{104} \cos\alpha _{203}+\cos\alpha _{103}
\cos\alpha _{204}\right)- \nonumber\\
&&-4\left(\cos\alpha_{103}\cos\alpha_{104}+\cos\alpha _{203}
\cos\alpha_{204}\right)] \csc{}^2\alpha _{102}
\end{eqnarray}
or
\begin{eqnarray}\label{calcalpha3042}
&&\cos\alpha_{304}=\frac{1}{4} [4 \cos\alpha _{103} (\cos\alpha
_{104}-\cos\alpha _{102} \cos\alpha _{204})+\nonumber\\
&&+2 \left(b+2 \cos\alpha _{203} \left(-\cos\alpha _{102}
\cos\alpha _{104}+\cos\alpha _{204}\right)\right)] \csc{}^2\alpha
_{102}\nonumber\\
\end{eqnarray}
where
\begin{eqnarray}\label{calcalpha304auxvar}
b\equiv\sqrt{\prod_{i=3}^{4}\left(1+\cos\left(2 \alpha
_{102}\right)+\cos\left(2 \alpha _{10i}\right)+\cos\left(2 \alpha
_{20i}\right)-4 \cos\alpha _{102} \cos\alpha _{10i} \cos\alpha
_{20i}\right)}.\nonumber
\end{eqnarray}
Therefore, $\alpha_{304}$ depends exactly on $\alpha_{102},$
$\alpha_{103},$ $\alpha_{104},$ $\alpha_{203}$ and $\alpha_{204}.$
By projecting the vector $\vec{a_{i}}$ w.r. to the plane defined by
$\triangle A_{1}A_{0}A_{3}$ or $\triangle A_{2}A_{0}A_{3}$ or
$\triangle A_{1}A_{0}A_{4}$ or $\triangle A_{2}A_{0}A_{4}$ or
$\triangle A_{3}A_{0}A_{4},$ we get:
\begin{equation}\label{invimp1}
\cos^{2}({\alpha_{i,k0m}})=\frac{\sin^{2}({\alpha_{k0m}})-\cos^{2}({\alpha_{m0i}})-\cos^{2}({\alpha_{k0i}})+2\cos({\alpha_{m0i}})\cos({\alpha_{k0i}})\cos({\alpha_{k0m}})}{\sin^{2}({\alpha_{k0m}})}
\end{equation}
Hence, taking into account (\ref{invimp1}) and
(\ref{calcalpha3041}) or (\ref{calcalpha3042}) we derive that
$\alpha_{i,k0m}$ depends on $\alpha_{102},$ $\alpha_{103},$
$\alpha_{104},$ $\alpha_{203}$ and $\alpha_{204}.$
\end{proof}
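The dependence stated in Proposition~\ref{importinv2} is easy to
verify numerically. The following Python sketch (our own
illustration; the five angle values are arbitrary admissible test
data, not taken from the paper) rebuilds the unit vectors
$\vec{a_{i}}$ from the spherical-coordinate construction of the
proof and compares the resulting $\cos\alpha_{304}$ with the two
closed-form roots (\ref{calcalpha3041}) and (\ref{calcalpha3042}).
\begin{verbatim}
import numpy as np

# Arbitrary admissible test angles (radians), not from the paper.
a102, a103, a104, a203, a204 = 1.2, 1.0, 1.4, 0.9, 1.1

def unit(a10i, a20i, sign=+1):
    # Components follow from a_i . a_1 = cos(a10i), a_i . a_2 = cos(a20i);
    # 'sign' selects the half-space relative to the plane of a_1, a_2.
    x = np.cos(a10i)
    y = (np.cos(a20i) - x * np.cos(a102)) / np.sin(a102)
    return np.array([x, y, sign * np.sqrt(1.0 - x * x - y * y)])

a3, a4 = unit(a103, a203), unit(a104, a204)
print("cos(alpha_304) from vectors:", a3 @ a4)

co = np.cos
csc2 = 1.0 / np.sin(a102) ** 2
b = np.sqrt(np.prod([1 + co(2 * a102) + co(2 * a10i) + co(2 * a20i)
                     - 4 * co(a102) * co(a10i) * co(a20i)
                     for a10i, a20i in [(a103, a203), (a104, a204)]]))
root1 = -0.25 * (2 * b
                 + 4 * co(a102) * (co(a104) * co(a203) + co(a103) * co(a204))
                 - 4 * (co(a103) * co(a104) + co(a203) * co(a204))) * csc2
# root1 corresponds to a_3, a_4 on opposite sides of the plane of a_1, a_2;
# root1 + b * csc2 corresponds to the same-side configuration chosen above.
print("cos(alpha_304) closed form :", root1, root1 + b * csc2)
\end{verbatim}
For the test values above, the same-side root reproduces
$\vec{a_{3}}\cdot\vec{a_{4}}$ to machine precision.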
\begin{proposition}\label{importinv22b}
The angles $\alpha_{i,k0m}$ depend on exactly seven given angles
$\alpha_{102},$ $\alpha_{103},$ $\alpha_{104},$ $\alpha_{105},$
$\alpha_{203},$ $\alpha_{204}$ and $\alpha_{205},$ for
$i,k,m=1,2,3,4,5$ and $i \ne k \ne m.$
\end{proposition}
\begin{proof}
We consider the directions of five unit vectors which meet a fixed
point $A_{0}.$
For instance, we get:
\begin{equation}\label{vec:15}
\vec{a_{1}}=(1,0,0)
\end{equation}
\begin{equation}\label{vec:25}
\vec{a_{2}}=(\cos(\alpha_{102}),\sin(\alpha_{102}),0)
\end{equation}
\begin{equation}\label{vec:35}
\vec{a_{3}}=(\cos(\alpha_{3,102})\cos(\omega_{3,102}),\cos(\alpha_{3,102})\sin(\omega_{3,102}),\sin({\alpha_{3,102}}))
\end{equation}
\begin{equation}\label{vec:45}
\vec{a_{4}}=(\cos(\alpha_{4,102})\cos(\omega_{4,102}),\cos(\alpha_{4,102})\sin(\omega_{4,102}),\sin({\alpha_{4,102}}))
\end{equation}
\begin{equation}\label{vec:4n45}
\vec{a_{5}}=(\cos(\alpha_{5,102})\cos(\omega_{5,102}),\cos(\alpha_{5,102})\sin(\omega_{5,102}),\sin({\alpha_{5,102}}))
\end{equation}
such that: $\abs{\vec{a_{i}}}=1$. The inner product of
$\vec{a_{i}}$, $\vec{a_{j}}$ is:
\begin{equation}\label{innerp5}
\vec{a_{i}}\cdot \vec{a_{j}}=\cos({\alpha_{i0j}}).
\end{equation}
By following a similar process with the proof of
Proposition~\ref{importinv2}, we obtain that $\cos(\alpha_{304}),$
$\cos(\alpha_{305})$ and $\cos(\alpha_{405})$ derived by
(\ref{innerp5}) are given by the following six relations which
depend on exactly seven angles $\alpha_{102},$ $\alpha_{103},$
$\alpha_{104},$ $\alpha_{105},$ $\alpha_{203},$ $\alpha_{204}$ and
$\alpha_{205}:$
\begin{eqnarray}\label{calcalpha30415}
&&\cos\alpha_{304}=-\frac{1}{4} [2 b_{304}+4 \cos\alpha _{102}
\left(\cos\alpha _{104} \cos\alpha _{203}+\cos\alpha _{103}
\cos\alpha _{204}\right)- \nonumber\\
&&-4\left(\cos\alpha_{103}\cos\alpha_{104}+\cos\alpha _{203}
\cos\alpha_{204}\right)] \csc{}^2\alpha _{102}
\end{eqnarray}
or
\begin{eqnarray}\label{calcalpha30425}
&&\cos\alpha_{304}=\frac{1}{4} [4 \cos\alpha _{103} (\cos\alpha
_{104}-\cos\alpha _{102} \cos\alpha _{204})+\nonumber\\
&&+2 \left(b_{304}+2 \cos\alpha _{203} \left(-\cos\alpha _{102}
\cos\alpha _{104}+\cos\alpha _{204}\right)\right)] \csc{}^2\alpha
_{102}\nonumber\\
\end{eqnarray}
where
\begin{eqnarray}\label{calcalpha304auxvar5}
b_{304}\equiv\sqrt{\prod_{i=3}^{4}\left(1+\cos\left(2 \alpha
_{102}\right)+\cos\left(2 \alpha _{10i}\right)+\cos\left(2 \alpha
_{20i}\right)-4 \cos\alpha _{102} \cos\alpha _{10i} \cos\alpha
_{20i}\right)},\nonumber
\end{eqnarray}
\begin{eqnarray}\label{calcalpha30515}
&&\cos\alpha_{305}=-\frac{1}{4} [2 b_{305}+4 \cos\alpha _{102}
\left(\cos\alpha _{105} \cos\alpha _{203}+\cos\alpha _{103}
\cos\alpha _{205}\right)- \nonumber\\
&&-4\left(\cos\alpha_{103}\cos\alpha_{105}+\cos\alpha _{203}
\cos\alpha_{205}\right)] \csc{}^2\alpha _{102}
\end{eqnarray}
or
\begin{eqnarray}\label{calcalpha30525}
&&\cos\alpha_{305}=\frac{1}{4} [4 \cos\alpha _{103} (\cos\alpha
_{105}-\cos\alpha _{102} \cos\alpha _{205})+\nonumber\\
&&+2 \left(b_{305}+2 \cos\alpha _{203} \left(-\cos\alpha _{102}
\cos\alpha _{105}+\cos\alpha _{205}\right)\right)] \csc{}^2\alpha
_{102}\nonumber\\
\end{eqnarray}
where
\begin{eqnarray}\label{calcalpha305auxvar5}
b_{305}\equiv\sqrt{\prod_{i=3,i\neq 4}^{5}\left(1+\cos\left(2
\alpha _{102}\right)+\cos\left(2 \alpha _{10i}\right)+\cos\left(2
\alpha _{20i}\right)-4 \cos\alpha _{102} \cos\alpha _{10i}
\cos\alpha _{20i}\right)}.\nonumber
\end{eqnarray}
and
\begin{eqnarray}\label{calcalpha40515}
&&\cos\alpha_{405}=-\frac{1}{4} [2 b_{405}+4 \cos\alpha _{102}
\left(\cos\alpha _{105} \cos\alpha _{204}+\cos\alpha _{104}
\cos\alpha _{205}\right)- \nonumber\\
&&-4\left(\cos\alpha_{104}\cos\alpha_{105}+\cos\alpha _{204}
\cos\alpha_{205}\right)] \csc{}^2\alpha _{102}
\end{eqnarray}
or
\begin{eqnarray}\label{calcalpha40525}
&&\cos\alpha_{405}=\frac{1}{4} [4 \cos\alpha _{104} (\cos\alpha
_{105}-\cos\alpha _{102} \cos\alpha _{205})+\nonumber\\
&&+2 \left(b_{405}+2 \cos\alpha _{204} \left(-\cos\alpha _{102}
\cos\alpha _{105}+\cos\alpha _{205}\right)\right)] \csc{}^2\alpha
_{102}\nonumber\\
\end{eqnarray}
where
\begin{eqnarray}\label{calcalpha405auxvar}
b_{405}\equiv\sqrt{\prod_{i=4}^{5}\left(1+\cos\left(2 \alpha
_{102}\right)+\cos\left(2 \alpha _{10i}\right)+\cos\left(2 \alpha
_{20i}\right)-4 \cos\alpha _{102} \cos\alpha _{10i} \cos\alpha
_{20i}\right)}.\nonumber
\end{eqnarray}
\end{proof}
\begin{remark}\label{rm1}
We note that the formulas for $\cos\alpha_{304},$
$\cos\alpha_{305}$ and $\cos\alpha_{405}$ derived in
\cite[Lemma~1, p.~16]{Zachos:13} are corrected and replaced by
(\ref{calcalpha30415}), (\ref{calcalpha30425}),
(\ref{calcalpha30515}), (\ref{calcalpha30525}),
(\ref{calcalpha40515}) and (\ref{calcalpha40525}).
\end{remark}
\section{A generalization of the inverse weighted Fermat-Torricelli problem in $\mathbb{R}^{3}.$}
In this section, we consider mass transportation networks which
deal with weighted Fermat-Torricelli networks of degree at most
four (or five), in which the weights correspond to an
instantaneous collection of images of masses and satisfy some
specific conditions.
We denote by $h_{0,ij}$ the length of the height of $\triangle
A_{0}A_{i}A_{j}$ from $A_{0}$ with respect to $A_{i}A_{j},$ by
$A_{0,ij}$ the intersection of $h_{0,ij}$ with $A_{i}A_{j},$ and
by $h_{0,ijk}$ the distance of $A_{0}$ from the plane defined by
$\triangle A_{i}A_{j}A_{k}.$
We denote by $\alpha$ the dihedral angle which is formed between
the planes defined by $\triangle A_{1}A_{2}A_{3}$ and $\triangle
A_{1}A_{2}A_{0},$ by $\alpha_{g_{i}}$ the dihedral angle which
is formed by the planes defined by $\triangle A_{1}A_{2}A_{i}$ and
$\triangle A_{1}A_{2}A_{0},$ and by $\alpha_{i,r0s}$ the angle
which is formed by $a_{0i}$ and the linear segment which connects
$A_{0}$ with the trace of the orthogonal projection of $a_{0i}$
to the plane defined by $\triangle A_{0}A_{r}A_{s},$ for
$i,r,s=1,2,3,4,5.$
We proceed by recalling a fundamental result, which we call the
geometric plasticity principle of mass transportation networks for
boundary closed hexahedra; it is proved in \cite[Appendix
A.II]{ZachosZu:11} for closed polyhedra in $\mathbb{R}^{3}.$
\begin{proposition}{\cite[Appendix A.II]{ZachosZu:11}}\label{geomplastprinciple}
Suppose that there is a closed polyhedron $A_1A_2A_{3}A_{4}A_{5}$
in $\mathbb{R}^{3}$ and each vertex $A_i$ has a non-negative
weight $B_i$ for $i=1,2,3,4,5.$ Assume that the floating case
of the generalized weighted Fermat-Torricelli point $A_0$ is valid: \\
for each $A_i$ $\in$ $\{A_{1},A_{2},A_{3},A_{4},A_{5}\}$
\[ \|{\sum_{j=1}^{5}B_{j}\vec u(A_i,A_j)}\|>B_i, i\neq j. \]
\\If $A_0$ is connected with every vertex $A_i$ for $i=1,2,3,4,5,$ and a point $A_i'$ with a non-negative weight $B_i$ is selected on the line defined by the
linear segment $A_0A_i$, and a closed hexahedron $A_1'A_2'...A_5'$ is constructed such that: \\
\[ \|{\sum_{j=1}^{5}B_{j}\vec u(A_i',A_j')}\|>B_i, i\neq j .\]
Then the generalized weighted Fermat-Torricelli point $A_0'$ is
identical with $A_0$ (geometric plasticity principle).
\end{proposition}
The geometric plasticity principle of closed hexahedra connects
the weighted Fermat-Torricelli problem for closed hexahedra with
the modified weighted Fermat-Torricelli problem for boundary
closed hexahedra by allowing a mass flow continuity for the
weights, such that the corresponding weighted Fermat-Torricelli
point remains invariant in $\mathbb{R}^{3}.$
The modified weighted Fermat-Torricelli problem for closed
hexahedra states that:
\begin{problem}{Modified weighted Fermat-Torricelli
problem}\label{modFT}\\
Let $A_1A_2A_3A_4A_{5}$ be a closed hexahedron in
$\mathbb{R}^{3},$ $\mathcal{B}_{i}$ be a non-negative number
(weight) which corresponds to each linear segment $A_{0}A_{i},$
respectively. Find a point $A_0$ which minimizes the sum of the
lengths of the linear segments that connect every vertex $A_{i}$
with $A_0$ multiplied by the positive weight $\mathcal{B}_i$:
\begin{equation} \label{eq:001m}
\sum_{i=1}^{5}\mathcal{B}_{i}a_{0i}=minimum.
\end{equation}
\end{problem}
By letting $\mathcal{B}_{i}=B_{i},$ for $i=1,2,3,4,5,$ the weighted
Fermat-Torricelli problem for closed hexahedra (Problem~\ref{5FT})
and the corresponding modified weighted Fermat-Torricelli problem
(Problem~\ref{modFT}) are equivalent by collecting instantaneous
images of the weighted Fermat-Torricelli network via the geometric
plasticity principle.
We note that various generalizations of the modified
Fermat-Torricelli problem for weighted minimal networks of degree
at most three in the sense of the Steiner tree Problem are given
in the classical work of A. Ivanov and A. Tuzhilin in
\cite{IvanTuzh:094}.
We introduce a mixed weighted Fermat-Torricelli problem in
$\mathbb{R}^{3}$ which may give some new fundamental results in
molecular structures and mass transportation networks in a new
field that we may call in the future Mathematical Botany and
possible applications in the geometry of drug design.
We state the mixed Fermat-Torricelli problem for closed hexahedra
in $\mathbb{R}^{3},$ considering a two-way communication weighted
network.
\begin{problem}
Given a boundary closed hexahedron $A_{1}A_{2}A_{3}A_{4}A_{5}$ in
$\mathbb{R}^{3}$ having one interior weighted mobile vertex
$A_{0}$ with remaining positive weight $\bar{B_{0}}$ find a
connected weighted system of linear segments of shortest total
weighted length such that any two of the points of the network can
be joined by a polygon consisting of linear segments:
\begin{equation}\label{objin}
f(X)=\bar{B_{1}} a_{1}+\bar{B_{2}} a_{2}+ \bar{B_{3}}
a_{3}+\bar{B_{4}} a_{4}+\bar{B_{5}} a_{5}=minimum,
\end{equation}
where
\begin{equation}\label{imp1mix}
B_{i}+\tilde{B_{i}}=\bar{B_{i}}
\end{equation}
under the following condition:
\begin{equation}\label{cond3mix}
\bar{B_{i}}+\bar{B_{j}}+\bar{B_{k}}+\bar{B_{l}}=\bar{B_{0}}+\bar{B_{m}}
\end{equation}
for $i,j,k,l,m=1,2,3,4,5$ and $i\ne j\ne k\ne l\ne m.$
\end{problem}
The invariance of the mixed weighted Fermat-Torricelli tree of
degree at most five is obtained by the inverse mixed weighted
Fermat-Torricelli problem for closed hexahedra in
$\mathbb{R}^{3}:$
\begin{problem}\label{mixinv5}
Given a point $A_{0}$ which belongs to the interior of
$A_{1}A_{2}A_{3}A_{4}A_{5}$ in $\mathbb{R}^{3}$, does there exist
a unique set of positive weights $\bar{B_{i}},$ such that
\begin{displaymath}
\bar{B_{1}}+\bar{B_{2}}+\bar{B_{3}}+\bar{B_{4}}+\bar{B_{5}} = c =const,
\end{displaymath}
for which $A_{0}$ minimizes
\begin{displaymath}
f(A_{0})=\sum_{i=1}^{5}\bar{B_{i}}a_{0i}
\end{displaymath}
and
\begin{equation}\label{imp1mixb}
B_{i}+\tilde{B_{i}}=\bar{B_{i}}
\end{equation}
under the condition for the weights:
\begin{equation}\label{cond3mixb}
\bar{B_{i}}+\bar{B_{j}}+\bar{B_{k}}+\bar{B_{l}}=\bar{B_{0}}+\bar{B_{m}}
\end{equation}
for $i,j,k,l,m=1,2,3,4,5,$ and $i\ne j\ne k\ne l\ne m$ (Inverse
mixed weighted Fermat-Torricelli problem for closed hexahedra).
\end{problem}
Letting $\bar{B_{5}}=0$ in Problem~\ref{mixinv5} we obtain the
inverse mixed weighted Fermat-Torricelli problem for tetrahedra.
\begin{theorem}\label{propomix4}
Given that the mixed weighted Fermat-Torricelli point $A_{0}$ is an
interior point of the tetrahedron $A_{1}A_{2}A_{3}A_{4}$ whose
vertices lie on four prescribed rays that meet at $A_{0},$ and
given the five values of $\alpha_{102},$ $\alpha_{103},$
$\alpha_{104},$ $\alpha_{203},$ $\alpha_{204},$ the positive real
weights $\bar{B_{i}}$ given by the formulas
\begin{equation}\label{inversemix42}
\bar{B_{1}}=\left(\frac{\sin\alpha_{4,203}}{\sin\alpha_{1,203}}\right)\frac{c-\bar{B_{0}}}{2},
\end{equation}
\begin{equation}\label{inversemix43}
\bar{B_{2}}=\left(\frac{\sin\alpha_{4,103}}{\sin\alpha_{2,103}}\right)\frac{c-\bar{B_{0}}}{2},
\end{equation}
\begin{equation}\label{inversemix44}
\bar{B_{3}}=\left(\frac{\sin\alpha_{4,102}}{\sin\alpha_{3,102}}\right)\frac{c-\bar{B_{0}}}{2}
\end{equation}
and
\begin{equation}\label{inversemix41}
\bar{B_{4}}=\frac{c-\bar{B_{0}}}{2}
\end{equation}
give a negative answer w.r. to the inverse mixed weighted
Fermat-Torricelli problem for tetrahedra for $i,j,k,m=1,2,3,4$ and
$i\ne j\ne k\ne m.$
\end{theorem}
\begin{proof}
We denote by $B_{i}$ a mass flow which is transferred from $A_{i}$
to $A_{0}$ for $i=1,2,3,$ by $B_{0}$ a residual weight which
remains at $A_{0},$ and by $B_{4}$ a mass flow which is transferred
from $A_{0}$ to $A_{4}.$
Similarly, we denote by $\tilde{B_{i}}$ a mass flow which is
transferred from $A_{0}$ to $A_{i}$ for $i=1,2,3,$ by
$\tilde{B_{0}}$ a residual weight which remains at $A_{0},$ and by
$\tilde{B_{4}}$ a mass flow which is transferred from $A_{4}$ to $A_{0}.$
Hence, we get:
\begin{equation}\label{weight1outflow}
B_{1}+B_{2}+B_{3}=B_{4}+B_{0}
\end{equation}
and
\begin{equation}\label{weight2inflow}
\tilde{B_{1}}+\tilde{B_{2}}+\tilde{B_{3}}+\tilde{B_{0}}=\tilde{B_{4}}.
\end{equation}
By adding (\ref{weight1outflow}) and (\ref{weight2inflow}) and by
letting $\bar{B_{0}}=B_{0}-\tilde{B_{0}}$ we get:
\begin{equation}\label{weight12inoutflow}
\bar{B_{1}}+\bar{B_{2}}+\bar{B_{3}}=\bar{B_{4}}+\bar{B_{0}}
\end{equation}
such that:
\begin{equation}\label{weight12inflowsum}
\bar{B_{1}}+\bar{B_{2}}+\bar{B_{3}}+\bar{B_{4}}=c,
\end{equation}
where $c$ is a positive real number.
Therefore, the objective function takes the form:
\begin{equation}\label{nobjmod1}
\sum_{i=1}^{4}B_{i}a_{0i}+\sum_{i=1}^{4}\tilde{B_{i}}a_{0i}=minimum,
\end{equation}
which yields
\begin{equation}\label{nobjmod}
\sum_{i=1}^{4}\bar{B_{i}}a_{0i}=minimum.
\end{equation}
We start by expressing the lengths $a_{0i},$ w.r. to $a_{0j},
a_{0k}, a_{0l}.$
For instance, the lengths $a_{03}$ and $a_{04}$ are expressed w.r.
to $a_{01},$ $a_{02}$ and the dihedral angle $\alpha$ taking into
account the two formulas given in \cite[Formulas (2.14), (2.20)
p.~116]{Zach/Zou:09}:
\begin{equation}\label{impa03}
a_{03}^2=a_{02}^2 +a_{23}^2-2 a_{23}[\sqrt{a_{02}^2-h_{0,12}^2}
\cos\alpha_{123} +h_{0,12}\sin\alpha_{123}\cos\alpha ]
\end{equation}
and
\begin{equation}\label{impa04}
a_{04}^2=a_{02}^2 +a_{24}^2-2 a_{24}[\sqrt{a_{02}^2-h_{0,12}^2}
\cos\alpha_{124}
+h_{0,12}\sin\alpha_{124}\cos(\alpha_{g_{4}}-\alpha) ]
\end{equation}
By eliminating $\alpha$ from (\ref{impa03}) and (\ref{impa04}) we
get:
\begin{eqnarray}\label{a04da01a02a03}
&&a_{04}^2=a_{02}^2 +a_{24}^2-2 a_{24}[\sqrt{a_{02}^2-h_{0,12}^2}
\cos\alpha_{124}{} \nonumber \\
&&{}+h_{0,12}\sin\alpha_{124}(\cos\alpha_{g_{4}}\left(
\frac{\left(\frac{a_{02}^2+a_{23}^2-a_{03}^2}{2 a_{23}}
\right)-\sqrt{a_{02}^2-h_{0,12}^2}\cos\alpha_{123}}{h_{0,12}\sin\alpha_{123}}
\right)+{} \nonumber \\
&&{}+\sin\alpha_{g_{4}}\sin\arccos\left(
\frac{\left(\frac{a_{02}^2+a_{23}^2-a_{03}^2}{2 a_{23}}
\right)-\sqrt{a_{02}^2-h_{0,12}^2}\cos\alpha_{123}}{h_{0,12}\sin\alpha_{123}}
\right) ) ]\nonumber\\
\end{eqnarray}
By differentiating (\ref{a04da01a02a03}) w.r. to $a_{01},$
$a_{02}$ and $a_{03},$ we obtain:
\begin{equation}\label{derv1n}
\frac{\partial a_{04}}{\partial
a_{01}}=-\frac{\sin\alpha_{4,203}}{\sin\alpha_{1,203}}
\end{equation}
\begin{equation}\label{derv2n}
\frac{\partial a_{04}}{\partial
a_{02}}=-\frac{\sin\alpha_{4,103}}{\sin\alpha_{2,103}}
\end{equation}
\begin{equation}\label{derv3n}
\frac{\partial a_{04}}{\partial
a_{03}}=-\frac{\sin\alpha_{4,102}}{\sin\alpha_{3,102}}.
\end{equation}
By differentiating (\ref{nobjmod}) w.r. to $a_{01},$ $a_{02}$ and
$a_{03},$ and taking into account (\ref{derv1n}), (\ref{derv2n})
and (\ref{derv3n}), we obtain:
\begin{equation}\label{derv1nn}
\frac{\bar{B_{1}}}{\bar{B_{4}}}=\frac{\sin\alpha_{4,203}}{\sin\alpha_{1,203}},
\end{equation}
\begin{equation}\label{derv2nn}
\frac{\bar{B_{2}}}{\bar{B_{4}}}=\frac{\sin\alpha_{4,103}}{\sin\alpha_{2,103}}
\end{equation}
and
\begin{equation}\label{derv3nn}
\frac{\bar{B_{3}}}{\bar{B_{4}}}=\frac{\sin\alpha_{4,102}}{\sin\alpha_{3,102}}.
\end{equation}
By following a similar process and by expressing $a_{0i}$ as a
function w.r. to $a_{0j},$ $a_{0k}$ and $a_{0l},$ for
$i,j,k,l=1,2,3,4$ and $i\ne j\ne k\ne l,$ we get:
\begin{equation}\label{derv3nnaijkl}
\frac{\bar{B_{i}}}{\bar{B_{j}}}=\frac{\sin\alpha_{j,k0l}}{\sin\alpha_{i,k0l}}.
\end{equation}
By subtracting (\ref{weight12inoutflow}) from
(\ref{weight12inflowsum}) we obtain (\ref{inversemix41}).
By replacing (\ref{inversemix41}) in (\ref{derv1nn}),
(\ref{derv2nn}) and (\ref{derv3nn}) and taking into account
Proposition~\ref{importinv2} we derive (\ref{inversemix42}),
(\ref{inversemix43}) and (\ref{inversemix44}). Therefore, the
weights $\bar{B_{i}}$ depend on the residual weight $\bar{B_{0}}$
and the five given angles $\alpha_{102},$ $\alpha_{103},$
$\alpha_{104},$ $\alpha_{203}$ and $\alpha_{204}.$
\end{proof}
\begin{corollary}\label{mixinv1}
If
$\alpha_{102}=\alpha_{103}=\alpha_{104}=\alpha_{203}=\alpha_{204}=\arccos\left(-\frac{1}{3}\right),$
$\bar{B_{0}}=\frac{1}{2}$ and
\[\bar{B_{1}}+\bar{B_{2}}+\bar{B_{3}}+\bar{B_{4}}=1,\]
then
$\bar{B_{1}}=\bar{B_{2}}=\bar{B_{3}}=\bar{B_{4}}=\frac{1}{4}.$
\end{corollary}
\begin{proof}
By letting
\[\alpha_{102}=\alpha_{103}=\alpha_{104}=\alpha_{203}=\alpha_{204}=\arccos\left(-\frac{1}{3}\right)\]
in (\ref{calcalpha3041}) and (\ref{calcalpha3042}), we derive that
$\cos\alpha_{304}=-\frac{1}{3}$ or $\cos\alpha_{304}=1;$ the second
root is rejected, since $A_{3}\neq A_{4},$ which yields
$\alpha_{304}=\arccos\left(-\frac{1}{3}\right).$ By
replacing $\bar{B_{0}}=\frac{1}{2}$ in (\ref{inversemix42}),
(\ref{inversemix43}), (\ref{inversemix44}) and
(\ref{inversemix41}) we derive
$\bar{B_{1}}=\bar{B_{2}}=\bar{B_{3}}=\bar{B_{4}}=\frac{1}{4}.$
\end{proof}
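As a quick numerical cross-check of this computation (our own
sketch, not part of the original argument), the two roots
(\ref{calcalpha3041}) and (\ref{calcalpha3042}) can be evaluated at
the regular configuration:
\begin{verbatim}
import numpy as np

# Regular configuration: all five given angles equal arccos(-1/3).
a = np.arccos(-1.0 / 3.0)
co, csc2 = np.cos(a), 1.0 / np.sin(a) ** 2
# Both factors under the square root of b reduce to 1 + 3 cos(2a) - 4 cos(a)^3.
b = abs(1 + 3 * np.cos(2 * a) - 4 * co ** 3)
root1 = -0.25 * (2 * b + 8 * co ** 3 - 8 * co ** 2) * csc2
print(root1, root1 + b * csc2)   # -> -0.3333... and 1.0
\end{verbatim}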
\begin{corollary}\label{mixinv2}
For
\begin{equation}\label{derv3nnn0}
\bar{B_{0}}=c\left(1-\frac{2}{1+\frac{\sin\alpha_{4,203}}{\sin\alpha_{1,203}}+\frac{\sin\alpha_{4,103}}{\sin\alpha_{2,103}}+\frac{\sin\alpha_{4,102}}{\sin\alpha_{3,102}}}\right),
\end{equation}
we derive a unique solution
\begin{equation}\label{dervnnBi}
\bar{B_{i}}=\frac{c}{1+\frac{\sin\alpha_{i,j0k}}{\sin\alpha_{l,j0k}}+\frac{\sin\alpha_{i,j0l}}{\sin\alpha_{k,j0l}}+\frac{\sin\alpha_{i,k0l}}{\sin\alpha_{j,k0l}}},
\end{equation}
for $i,j,k,l=1,2,3,4$ and $i\neq j\neq k\neq l,$ which coincides
with the unique solution of the inverse weighted Fermat-Torricelli
problem for tetrahedra.
\end{corollary}
\begin{proof}
By replacing (\ref{derv3nnn0}) in (\ref{inversemix42}),
(\ref{inversemix43}), (\ref{inversemix44}) and
(\ref{inversemix41}) we obtain (\ref{dervnnBi}), which yields a
positive answer to the inverse weighted Fermat-Torricelli problem
for tetrahedra in $\mathbb{R}^{3}.$
\end{proof}
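A short numerical illustration of Corollary~\ref{mixinv2} follows
(a sketch of our own; the five angles are arbitrary test values
that are not checked against the floating-case conditions). It
computes the angles $\alpha_{i,k0m}$ directly as angles between the
unit vectors $\vec{a_{i}}$ and the corresponding planes, evaluates
$\bar{B_{0}}$ from (\ref{derv3nnn0}) and verifies that the weights
(\ref{dervnnBi}) sum to $c$:
\begin{verbatim}
import numpy as np

a102, a103, a104, a203, a204 = 1.2, 1.0, 1.4, 0.9, 1.1   # test values

def unit(a10i, a20i):
    x = np.cos(a10i)
    y = (np.cos(a20i) - x * np.cos(a102)) / np.sin(a102)
    return np.array([x, y, np.sqrt(1.0 - x * x - y * y)])

a = [np.array([1.0, 0.0, 0.0]),
     np.array([np.cos(a102), np.sin(a102), 0.0]),
     unit(a103, a203), unit(a104, a204)]

def alpha(i, k, m):
    # alpha_{i,k0m}: angle between a_i and the plane through a_k, A_0, a_m.
    n = np.cross(a[k - 1], a[m - 1])
    return np.arcsin(abs(a[i - 1] @ n) / np.linalg.norm(n))

c = 1.0
r1 = np.sin(alpha(4, 2, 3)) / np.sin(alpha(1, 2, 3))   # B1/B4
r2 = np.sin(alpha(4, 1, 3)) / np.sin(alpha(2, 1, 3))   # B2/B4
r3 = np.sin(alpha(4, 1, 2)) / np.sin(alpha(3, 1, 2))   # B3/B4
B0 = c * (1 - 2.0 / (1 + r1 + r2 + r3))                # eq. (derv3nnn0)
B4 = (c - B0) / 2
B = [r1 * B4, r2 * B4, r3 * B4, B4]
print(B, "sum =", sum(B))                              # sum equals c
\end{verbatim}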
We proceed by generalizing the equations of (dynamic) plasticity
for closed hexahedra, taking into account the residual weight
$\bar{B_{0}}$ which exists at the knot $A_{0},$ by following the
method used in \cite[Proposition~1, p.~17]{Zachos:13}.
We set $sgn_{i,j0k}=\begin{cases} +1,& \text{if $A_{i}$
lies above the plane $A_{j}A_{0}A_{k}$},\\
0,& \text{if $A_{i}$ belongs to the plane $A_{j}A_{0}A_{k}$},\\
-1, & \text{if $A_{i}$ lies below the plane $A_{j}A_{0}A_{k}$},
\end{cases}$
with respect to an outward normal vector $N_{j0k}$ for
$i,j,k=1,2,3,4,5,$ $i \ne j\ne k.$ We recall that the position of
an arbitrary directed plane is determined by the outward normal
and the distance from the weighted Fermat-Torricelli point
$A_{0}.$
\begin{proposition}\label{propdynamic1mix}
The following equations point out a new plasticity of mixed
weighted closed hexahedra with respect to the non-negative
variable weights $(\bar{B_{i}})_{12345}$ in $\mathbb{R}^{3}$:
\begin{equation}\label{dynamicplasticity2}
(\frac{\bar{B_{1}}}{\bar{B_{4}}})_{12345}=-(\frac{sgn_{4,203}}{sgn_{1,203}})(\frac{\bar{B_{1}}}{\bar{B_{4}}})_{1234}(1+\frac{sgn_{5,203}}{sgn_{4,203}}(\frac{\bar{B_{5}}}{\bar{B_{4}}})_{12345}(\frac{\bar{B_{4}}}{\bar{B_{5}}})_{2345})
\end{equation}
\begin{equation}\label{dynamicplasticity3}
(\frac{\bar{B_{2}}}{\bar{B_{4}}})_{12345}=-(\frac{sgn_{4,103}}{sgn_{2,103}})(\frac{\bar{B_{2}}}{\bar{B_{4}}})_{1234}(1+\frac{sgn_{5,103}}{sgn_{4,103}}(\frac{\bar{B_{5}}}{\bar{B_{4}}})_{12345}(\frac{\bar{B_{4}}}{\bar{B_{5}}})_{1345})
\end{equation}
\begin{equation}\label{dynamicplasticity1}
(\frac{\bar{B_{3}}}{\bar{B_{4}}})_{12345}=-(\frac{sgn_{4,102}}{sgn_{3,102}})(\frac{\bar{B_{3}}}{\bar{B_{4}}})_{1234}(1+\frac{sgn_{5,102}}{sgn_{4,102}}(\frac{\bar{B_{5}}}{\bar{B_{4}}})_{12345}(\frac{\bar{B_{4}}}{\bar{B_{5}}})_{1245})
\end{equation}
under the conditions
\begin{equation}\label{isoperimetric1}
\bar{B_{1}}+\bar{B_{2}}+\bar{B_{3}}+\bar{B_{4}}+\bar{B_{5}} = c
=constant
\end{equation}
and
\begin{equation}\label{mixedcond2}
\bar{B_{1}}+\bar{B_{2}}+\bar{B_{3}}+\bar{B_{5}}=\bar{B_{0}}+\bar{B_{4}}
\end{equation}
where the weight $(\bar{B_{i}})_{12345}$ corresponds to the vertex
that lies on the ray $A_{0}A_{i},$ for $i=1,2,3,4,5,$ and the
weight $(\bar{B_{j}})_{jklm}$ corresponds to the vertex $A_{j}$
that lies on the ray $A_{0}A_{j}$ regarding the tetrahedron
$A_{j}A_{k}A_{l}A_{m},$ for $j,k,l,m=1,2,3,4,5$ and $j\ne k\ne
l\ne m.$
\end{proposition}
\begin{proof}
By eliminating $\bar{B_{1}}, \bar{B_{2}}, \bar{B_{3}}$ and $
\bar{B_{5}}$ from (\ref{isoperimetric1}) and (\ref{mixedcond2}) we
get:
\begin{equation}\label{B4mixed}
\bar{B_{4}}=\frac{c-\bar{B_{0}}}{2}.
\end{equation}
We assume that the residual weight $\bar{B_{0}}$ could be split at
the mixed weighted Fermat-Torricelli trees of degree four at
$A_{0},$ such that the residual weights $\bar{B_{0,2345}},$
$\bar{B_{0,1345}}$ and $\bar{B_{0,1245}}$ correspond to the
boundary tetrahedra $A_{2}A_{3}A_{4}A_{5},$ $A_{1}A_{3}A_{4}A_{5}$
and $A_{1}A_{2}A_{4}A_{5},$ respectively.
We select five initial (given) values $(\bar{B_{i}})_{12345}(0)$
concerning the weights $(\bar{B_{i}})_{12345}$ for $i=1,2,3,4,5,$
such that the mixed weighted Fermat-Torricelli point $A_{0}$
exists and is located at the interior of
$A_{1}A_{2}A_{3}A_{4}A_{5}.$
By applying the method used in the proof of
Theorem~\ref{propomix4}, the length of the linear segments
$a_{04}$, $a_{05}$ can be expressed as functions of $a_{01}$,
$a_{02}$ and $a_{03}$:
\begin{eqnarray}\label{a0ida01a02a03}
&&a_{0i}^2=a_{02}^2 +a_{2i}^2-2 a_{2i}[\sqrt{a_{02}^2-h_{0,12}^2}
\cos\alpha_{12i}{} \nonumber \\
&&{}+h_{0,12}\sin\alpha_{12i}(\cos\alpha_{g_{i}}\left(
\frac{\left(\frac{a_{02}^2+a_{23}^2-a_{03}^2}{2 a_{23}}
\right)-\sqrt{a_{02}^2-h_{0,12}^2}\cos\alpha_{123}}{h_{0,12}\sin\alpha_{123}}
\right)+{} \nonumber \\
&&{}+\sin\alpha_{g_{i}}\sin\arccos\left(
\frac{\left(\frac{a_{02}^2+a_{23}^2-a_{03}^2}{2 a_{23}}
\right)-\sqrt{a_{02}^2-h_{0,12}^2}\cos\alpha_{123}}{h_{0,12}\sin\alpha_{123}}
\right) ) ]\nonumber\\
\end{eqnarray}
for $i=4,5.$
From (\ref{a0ida01a02a03}), we get:
\begin{equation}\label{minimumf}
\bar{B_1}a_{01}+\bar{B_2}a_{02}+\bar{B_3}a_{03}+\bar{B_4}a_{04}(a_{01},a_{02},a_{03})+\bar{B_5}a_{05}(a_{01},a_{02},a_{03})=minimum.
\end{equation}
By differentiating (\ref{minimumf}) with respect to $a_{01}$,
$a_{02}$ and $a_{03}$ we get:
\begin{equation}\label{eq:2200}
\bar{B_1}+\bar{B_4}\frac{\partial a_{04}}{\partial
a_{01}}+\bar{B_5}\frac{\partial a_{05}}{\partial a_{01}}=0.
\end{equation}
\begin{equation}\label{eq:2100}
\bar{B_{2}}+\bar{B_4}\frac{\partial a_{04}}{\partial
a_{02}}+\bar{B_5}\frac{\partial a_{05}}{\partial a_{02}}=0.
\end{equation}
\begin{equation}\label{eq:2000}
\bar{B_3}+\bar{B_4}\frac{\partial a_{04}}{\partial
a_{03}}+\bar{B_5}\frac{\partial a_{05}}{\partial a_{03}}=0.
\end{equation}
By differentiating (\ref{a0ida01a02a03}) w.r. to $a_{03}$ and by
replacing $\frac{\partial a_{0i}}{\partial a_{03}}$ for $i=4,5$ in
(\ref{eq:2000}), we obtain:
\begin{equation}\label{eq:fundamentall1n}
(\frac{\bar{B_{3}}}{\bar{B_{4}}})_{12345}=-(\frac{sgn_{4,102}}{sgn_{3,102}})\frac{\sin(\alpha_{4,102})}{\sin(\alpha_{3,102})}(1+(\frac{\bar{B_{5}}}{\bar{B_{4}}})_{12345}\frac{sgn_{5,102}}{sgn_{4,102}}\frac{\sin(\alpha_{5,102})}{\sin(\alpha_{4,102})}).
\end{equation}
Taking into account the solution of the inverse mixed weighted
Fermat-Torricelli problem for boundary tetrahedra we derive
(\ref{dynamicplasticity1}). Following a similar evolutionary
process, we derive (\ref{dynamicplasticity3}) and
(\ref{dynamicplasticity2}).
\end{proof}
From Lemma~\ref{importinv2}, the variable weights
$(\bar{B_{1}})_{12345},$ $(\bar{B_{2}})_{12345},$ and
$(\bar{B_{3}})_{12345},$ depend on the weight
$\bar{(B_{5})_{12345}}$ the residual weight $\bar{B_{0}}$ and the
seven given angles $\alpha_{102},$ $\alpha_{103},$ $\alpha_{104},$
$\alpha_{105},$ $\alpha_{203},$ $\alpha_{204}$ and $\alpha_{205}.$
\begin{remark}\rm
We note that numerical examples of the plasticity of tetragonal
pyramids are given in \cite[Examples~3.4,
3.7,p.~844-847]{ZachosZu:11}.
\end{remark}
\section{A generalization of the inverse weighted Fermat-Torricelli problem in $\mathbb{R}^{2}$}
The inverse mixed weighted Fermat-Torricelli problem for three
non-collinear points in $\mathbb{R}^{2}$ states that:
\begin{problem}\label{mixinv5triangle}
Given a point $A_{0}$ which belongs to the interior of $\triangle
A_{1}A_{2}A_{3}$ in $\mathbb{R}^{2}$, does there exist a unique
set of positive weights $\bar{B_{i}},$ such that
\begin{equation}\label{isoptriangle}
\bar{B_{1}}+\bar{B_{2}}+\bar{B_{3}} = c =const,
\end{equation}
for which $A_{0}$ minimizes
\begin{displaymath}
f(A_{0})=\sum_{i=1}^{3}\bar{B_{i}}a_{0i}
\end{displaymath}
and
\begin{equation}\label{imp1mixtr}
B_{i}+\tilde{B_{i}}=\bar{B_{i}}
\end{equation}
under the condition for the weights:
\begin{equation}\label{cond3mixtr}
\bar{B_{i}}+\bar{B_{j}}=\bar{B_{0}}+\bar{B_{k}}
\end{equation}
for $i,j,k=1,2,3$ and $i\ne j\ne k$ (Inverse mixed weighted
Fermat-Torricelli problem for three non-collinear points).
\end{problem}
\begin{theorem}\label{propomix4triangle}
Given that the mixed weighted Fermat-Torricelli point $A_{0}$ is an
interior point of the triangle $\triangle A_{1}A_{2}A_{3}$ whose
vertices lie on three prescribed rays that meet at $A_{0},$ and
given the two values of $\alpha_{102}$ and $\alpha_{103},$ the
positive real weights $\bar{B_{i}}$ given by the formulas
\begin{equation}\label{inversemix42tr}
\bar{B_{1}}=-\left(\frac{\sin(\alpha_{103}+\alpha_{102})}{\sin\alpha_{102}}\right)\frac{c-\bar{B_{0}}}{2},
\end{equation}
\begin{equation}\label{inversemix43tr}
\bar{B_{2}}=\left(\frac{\sin\alpha_{103}}{\sin\alpha_{102}}\right)\frac{c-\bar{B_{0}}}{2},
\end{equation}
and
\begin{equation}\label{inversemix41tr}
\bar{B_{3}}=\frac{c-\bar{B_{0}}}{2}
\end{equation}
give a negative answer w.r. to the inverse mixed weighted
Fermat-Torricelli problem for three non-collinear points in
$\mathbb{R}^{2}.$
\end{theorem}
\begin{proof}
Eliminating $\bar{B_{1}}$ and $\bar{B_{2}}$ from
(\ref{isoptriangle}) and (\ref{cond3mixtr}) we get
(\ref{inversemix41tr}). By setting $\alpha_{g_{i}}=\alpha$ in
(\ref{impa03}), we derive that $a_{03}=a_{03}(a_{01},a_{02}).$ By
differentiating $a_{03}=a_{03}(a_{01},a_{02})$ w.r. to $a_{0i}$
and by replacing $\frac{\partial a_{03}}{\partial a_{0i}}$ for
$i=1,2$ and setting $B_{4}=0$ in (\ref{nobjmod}) we obtain
(\ref{inversemix42tr}) and (\ref{inversemix43tr}).
\end{proof}
\begin{corollary}\label{mixinv1triangle}
If $\alpha_{102}=\alpha_{103}=120^{\circ},$ $\bar{B_{0}}=\frac{1}{3}$
and
\[\bar{B_{1}}+\bar{B_{2}}+\bar{B_{3}}=1,\]
then
$\bar{B_{1}}=\bar{B_{2}}=\bar{B_{3}}=\bar{B_{0}}=\frac{1}{3}.$
\end{corollary}
\begin{proof}
By letting
\[\alpha_{102}=\alpha_{103}=120^{\circ}\]
we have:
\[\alpha_{203}=360^{\circ}-\alpha_{102}-\alpha_{103}=120^{\circ}.\]
By replacing $\bar{B_{0}}=\frac{1}{3}$ and $c=1$ in (\ref{inversemix42tr}),
(\ref{inversemix43tr}) and (\ref{inversemix41tr}) we derive
$\bar{B_{1}}=\bar{B_{2}}=\bar{B_{3}}=\frac{1}{3}.$
\end{proof}
\begin{corollary}\label{mixinv2triangle}
For
\begin{equation}\label{derv3nnn0tr}
\bar{B_{0}}=c\left(1-\frac{2}{1-\left(\frac{\sin(\alpha_{103}+\alpha_{102})}{\sin\alpha_{102}}\right)+\left(\frac{\sin\alpha_{103}}{\sin\alpha_{102}}\right)}\right),
\end{equation}
we derive a unique solution
\begin{equation}\label{dervnnBitr}
\bar{B_{i}}=\frac{c}{1+\frac{\sin\alpha_{j0i}}{\sin\alpha_{j0k}}+\frac{\sin\alpha_{k0i}}{\sin\alpha_{j0k}}},
\end{equation}
for $i,j,k=1,2,3,$ and $i\neq j\neq k,$ which coincides with the
unique solution of the inverse weighted Fermat-Torricelli problem
for three non-collinear points.
\end{corollary}
\begin{proof}
By replacing (\ref{derv3nnn0tr}) in (\ref{inversemix42tr}),
(\ref{inversemix43tr}) and (\ref{inversemix41tr}) we obtain
(\ref{dervnnBitr}), which yields a positive answer to the inverse
weighted Fermat-Torricelli problem for three non-collinear points
in $\mathbb{R}^{2}.$
\end{proof}
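The planar statements can be checked in the same spirit; the short
sketch below (our own illustration, with test angles chosen so that
$\alpha_{102}+\alpha_{103}>180^{\circ}$ and all weights are
positive) evaluates $\bar{B_{0}}$ from (\ref{derv3nnn0tr}) and
verifies both the normalization $\bar{B_{1}}+\bar{B_{2}}+\bar{B_{3}}=c$
and the equilateral case of Corollary~\ref{mixinv1triangle}:
\begin{verbatim}
import numpy as np

a102, a103 = np.deg2rad(110), np.deg2rad(130)   # then alpha_203 = 120 deg
c = 1.0
r1 = -np.sin(a103 + a102) / np.sin(a102)        # B1/B3, eq. (inversemix42tr)
r2 = np.sin(a103) / np.sin(a102)                # B2/B3, eq. (inversemix43tr)
B0 = c * (1 - 2.0 / (1 + r1 + r2))              # eq. (derv3nnn0tr)
B3 = (c - B0) / 2
print(r1 * B3, r2 * B3, B3, "sum =", (1 + r1 + r2) * B3)   # sum equals c

# Equilateral check (120/120 degrees, B0 = 1/3): all weights equal 1/3.
a102 = a103 = np.deg2rad(120)
B3 = (1.0 - 1.0 / 3.0) / 2
print(-np.sin(a102 + a103) / np.sin(a102) * B3,
      np.sin(a103) / np.sin(a102) * B3, B3)
\end{verbatim}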
\begin{proposition}\label{mixedabsorbedcase}
Let $\triangle A_{1}A_{2}A_{3}$ be a triangle in $\mathbb{R}^{2}.$
If
\begin{equation}\label{condabsorbed}
\|\bar{B_{1}}\vec{u}(A_3,A_1)+ \bar{B_{2}}\vec{u}(A_3,A_2)\|
\le\bar{B_{3}}
\end{equation}
and
\begin{equation}\label{condabsorbed2}
\bar{B_{1}}+\bar{B_{2}}=\bar{B_{3}}+\bar{B_{0}}
\end{equation}
hold, then the solution w.r. to the inverse mixed weighted
Fermat-Torricelli problem for three non-collinear points in
$\mathbb{R}^{2}$ for the weighted absorbed case is not unique.
\end{proposition}
\begin{proof}
Suppose that we choose three initial weights $\bar{B_{i}}(0)\equiv
\bar{B_{i}},$ such that (\ref{condabsorbed}) holds. From
Theorem~\ref{theor} the weighted absorbed case occurs and the
mixed weighted Fermat-Torricelli point $A_{0}\equiv A_{3}.$ Hence,
if we select a new weight $\bar{B_{3}}+\bar{B_{0}}$ which remains
at the knot $A_{3},$ then (\ref{condabsorbed}) also holds and the
corresponding mixed weighted Fermat-Torricelli point remains the
same $A_{0}^{\prime}\equiv A_{3}.$
\end{proof}
\begin{remark}
Proposition~\ref{mixedabsorbedcase} generalizes the inverse
weighted Fermat-Torricelli problem for three non-collinear points
in the weighted absorbed case.
\end{remark}
Setting a condition with respect to the specific dihedral angles
$\alpha_{g_{3}}=\alpha_{g_{4}}=\alpha$ we obtain quadrilaterals as
a limiting case of tetrahedra on the plane defined by $\triangle
A_{1}A_{0}A_{2}.$ These equations are important, in order to
derive a new plasticity for weighted quadrilaterals in
$\mathbb{R}^{2},$ where the weighted floating case of
Theorem~\ref{theor} occurs.
\begin{theorem}\label{planar plasticity}
If $\alpha_{g_{3}}=\alpha_{g_{4}}=\alpha$ then the following four
equations point out the mixed dynamic plasticity of convex
quadrilaterals in $\mathbb{R}^{2}:$
\begin{equation} \label{plastic1P5quad}
(\frac{\bar{B_2}}{\bar{B_1}})_{1234}=(\frac{\bar{B_2}}{\bar{B_1}})_{123}[1-(\frac{\bar{B_4}}{\bar{B_1}})_{1234}
(\frac{\bar{B_1}}{\bar{B_4}})_{134}],
\end{equation}
\begin{equation} \label{plastic2P5quad}
(\frac{\bar{B_3}}{\bar{B_1}})_{1234}=(\frac{\bar{B_3}}{\bar{B_1}})_{123}[1-(\frac{\bar{B_4}}{\bar{B_1}})_{1234}
(\frac{\bar{B_1}}{\bar{B_4}})_{124}],
\end{equation}
\begin{equation}\label{invcond4quad}
(\bar{B_{1}})_{1234}+(\bar{B_{2}})_{1234}+(\bar{B_{3}})_{1234}+(\bar{B_{4}})_{1234}=c=constant
\end{equation}
and
\begin{equation}\label{invcond4quad2}
(\bar{B_{1}})_{1234}+(\bar{B_{2}})_{1234}+(\bar{B_{3}})_{1234}=(\bar{B_{4}})_{1234}+(\bar{B_{0}})_{1234}.
\end{equation}
\end{theorem}
\section{Introduction}
Absorption from layers of photo-ionized gas in the circumnuclear
regions of AGN is commonly observed in more than half of radio-quiet
objects, the so-called warm absorbers (e.g., Blustin et al.~2005;
McKernan et al.~2007). These absorbers are usually detected in the
X-ray spectra at energies below $\sim$2--3~keV. The typical
characteristics of this material are an ionization parameter of
log$\xi\sim$0--2~erg~s$^{-1}$~cm, a column density of
$N_H$$\sim$$10^{20}$--$10^{22}$~cm$^{-2}$ and an outflow velocity of
$\sim$100--1000~km/s. It has been suggested that the origin of this gas
might be connected with the Optical-UV Broad Line Region or with torus
winds (e.g., Blustin et al.~2005; McKernan et al.~2007).
In addition, there have been several papers in the literature recently
reporting the detection of blue-shifted Fe K absorption lines at
rest-frame energies of $\sim$7--10~keV in the X-ray spectra of
radio-quiet AGN (e.g., Chartas et al.~2002; Chartas et al.~2003; Pounds
et al.~2003; Dadina et al.~2005; Markowitz et al.~2006; Braito et
al.~2007; Cappi et al.~2009; Reeves et al.~2009a). These lines are
commonly interpreted as due to resonant absorption from Fe XXV and/or
Fe XXVI associated with a zone of circumnuclear gas photo-ionized by the
central X-ray source, with ionization parameter
log$\xi$$\sim$3--5~erg~s$^{-1}$~cm and column density
$N_H$$\sim$$10^{22}$--$10^{24}$~cm$^{-2}$. The energies of these
absorption lines are systematically blue-shifted and the corresponding
velocities can reach up to mildly relativistic values of
$\sim$0.2--0.4c. In particular, a uniform and systematic search for
blue-shifted Fe K absorption lines in a large sample of radio-quiet
AGN observed with XMM-Newton has been performed by Tombesi et
al.~(2010). This allowed the authors to assess their global detection
significance and to overcome any possible publication bias
(e.g., Vaughan \& Uttley 2008). The lines were detected in $\sim$40\%
of the objects, and are systematically blue-shifted, implying large
outflow velocities, even larger than 0.1c in $\sim$25\% of the sources. These
findings, corroborated by the observation of short time-scale
variability ($\sim$100~ks), indicate that the absorbing material is
outflowing from the nuclear regions of AGN, at distances of the
order of $\sim$100~$r_s$ (Schwarzschild radii,
$r_s=2GM_{\mathrm{BH}}/c^2$) from the central super-massive black hole
(e.g., Cappi et al.~2009 and references therein). Therefore, these
findings suggest the presence of previously unknown Ultra-fast
Outflows (UFOs) from the central regions of radio-quiet AGN, possibly
connected with accretion disk winds/ejecta (e.g., King \& Pounds 2003;
Proga \& Kallman 2004; Ohsuga et al.~2009; King 2010) or the base of a
possible weak jet (see the ``aborted jet'' model by Ghisellini et
al.~2004). The mass outflow rate of these UFOs can be comparable to
the accretion rate and their kinetic energy can correspond to a
significant fraction of the bolometric luminosity (e.g., Pounds et
al.~2003; Dadina et al.~2005; Markowitz et al.~2006; Braito et
al.~2007; Cappi et al.~2009; Reeves et al.~2009a). Therefore, they
have the possibility to bring outward a significant amount of mass and
energy, which can have an important influence on the surrounding
environment (e.g., see review by Cappi 2006). In fact, feedback from
the AGN is expected to have a significant role in the evolution of the
host galaxy, such as the enrichment of the ISM or the reduction of star
formation, and could also explain some fundamental relations (e.g., see
review by Elvis 2006 and Fabian 2009). Moreover, the ejection of a
substantial amount of mass from the central regions of AGN can also
inhibit the growth of the super-massive black holes (SMBHs),
potentially affecting their evolution. The study of UFOs can
also give us further clues on the relation between the accretion disk
and the formation of winds/jets.
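To give a sense of the numbers involved, the following
back-of-the-envelope sketch (our own; all input values are
illustrative, and the scaling
$\dot{M}_{out}\sim 4\pi r N_{H} m_{p} C_{f} v$ is a commonly
adopted order-of-magnitude convention, not a result of this paper)
estimates the mass outflow rate and kinetic power of a UFO launched
at $\sim$100~$r_s$:
\begin{verbatim}
import numpy as np

m_p, c = 1.67e-24, 3.0e10                 # proton mass [g], c [cm/s]
M_BH = 1.0e8 * 2.0e33                     # illustrative 1e8 Msun [g]
r = 100 * 2 * 6.674e-8 * M_BH / c**2      # launch radius ~100 r_s [cm]
N_H, v, C_f = 1e23, 0.1 * c, 0.5          # column, velocity, covering factor

Mdot = 4 * np.pi * r * N_H * m_p * C_f * v        # [g/s]
K = 0.5 * Mdot * v**2                             # kinetic power [erg/s]
print("Mdot ~ %.2f Msun/yr, K ~ %.1e erg/s"
      % (Mdot / 2.0e33 * 3.15e7, K))
\end{verbatim}
Even with these rough inputs, the kinetic power is within an order
of magnitude of typical BLRG X-ray luminosities, which is why such
outflows are relevant for feedback.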
Evidence for winds/outflows in radio-loud AGN in the X-rays has been
missing so far. However, thanks to the superior sensitivity and
energy resolution of current X-ray detectors, we are now beginning to
find evidence for outflowing gas in radio-loud AGN as well. In fact,
the recent detection of a warm absorber in the Broad-Line Radio Galaxy
(BLRG) 3C~382 (Torresi et al.~2010; Reeves et al. 2009b) has been the
starting point for a change to the classical picture of the
radio-quiet vs. radio-loud dichotomy, at least in the X-ray domain.
This gas has an ionization parameter of
log$\xi\simeq$2--3~erg~s$^{-1}$~cm, a column density of
$N_H$$\simeq$$10^{21}$--$10^{22}$~cm$^{-2}$ and is outflowing with a
velocity of $\sim$800--1000~km/s. These parameters are somewhat
similar to those of the typical warm absorbers of Seyfert 1 galaxies
(e.g., Blustin et al.~2005; McKernan et al.~2007), which are the
radio-quiet counterparts of BLRGs. This result indicates the presence
of ionized outflowing gas in a radio-loud AGN at a distance of
$\sim$100~pc from the central engine, suggesting its possible
association with the Optical-UV Narrow Line Region (Torresi et
al.~2010; Reeves et al.~2009b).
In this paper, we present the detection, for the first time, of
ionized ultra-fast outflows in BLRGs {\it on sub-pc scales} from
Suzaku observations. The sources in the sample -- 3C~111, 3C~390.3,
3C~120, 3C~382, and 3C~445 -- were observed with Suzaku by us as part
of our ongoing systematic study of the X-ray properties of BLRGs
(Sambruna et al. 2009), with the exception of 3C~120 which was
observed during the Guaranteed Time Observer period (Kataoka et
al. 2007). These five BLRGs represent the ``classical'' X-ray
brightest radio-loud AGN, well studied at X-rays with previous
observatories. Thanks to the high sensitivity of the XIS
detectors and the long net exposures of these observations of
$\sim$100~ks, we have been able to reach a high S/N in the Fe K band
that allowed, for the first time, to obtain evidence for UFOs in these
sources, in the form of blue-shifted Fe~K absorption lines at energies
greater than 7~keV. The presence of UFOs in radio-loud AGN provides a
confirmation of models for jet-disk coupling and stresses the
importance of this class of sources for AGN feedback mechanisms. Full
accounts of the broad-band Suzaku spectra for each source will be
given in forthcoming papers.
This paper is structured as follows. In \S~2 and \S~3 we describe the
Suzaku data reduction and analysis, including statistical tests used
to assess the reality of the Fe~K absorption features (\S~3.3) and
detailed photo-ionization models used for the fits (\S~3.4). The
general results are given in \S~4, while \S~5 presents the Discussion
with the Conclusions following in \S~6. Appendix A contains the
details of the spectral fits for each BLRG and Appendix B a consistency
check of the results. Throughout this paper, a
concordance cosmology with H$_0=71$ km s$^{-1}$ Mpc$^{-1}$,
$\Omega_{\Lambda}$=0.73, and $\Omega_m$=0.27 (Spergel et al. 2003) is
adopted. The power-law spectral index, $\alpha$, is defined such that
$F_{\nu} \propto \nu^{-\alpha}$. The photon index is $\Gamma=\alpha+1$.
\section{Suzaku observations and data reduction}
The observational details for the five BLRGs observed with Suzaku
(Mitsuda et al.~2007) are summarized in Table~1. The data were taken
from the X-ray Imaging Spectrometer (XIS, Koyama et al.~2007) and
processed using v2
of the Suzaku pipeline. The observations were taken with the XIS
nominal (on-axis) pointing position, with the exception of the 3C~111
observation, which was taken with HXD nominal pointing. The Suzaku
observation of 3C~120 is composed of four different exposures of
$\sim$40~ks each, taken over a period of about one month (see
Table~1). We looked at the individual spectra and found that, while
observations 2, 3 and 4 did not change significantly overall,
observation 1 showed stronger X-ray emission, especially in
the soft X-ray part of the spectrum, in agreement with Kataoka et
al. (2007). Therefore, we decided to add only observations 2, 3 and 4
(we will call this observation 3C~120b) and to analyze the spectrum of
observation 1 separately (we will call this observation 3C~120a).
Data were excluded within 436 seconds of passage through the South
Atlantic Anomaly (SAA) and within Earth elevation angles or Bright
Earth angles of $<5^\circ$ and $<20^\circ$, respectively. XIS data were
selected in $3 \times 3$ and $5 \times 5$ edit-modes using grades
0, 2, 3, 4, 6, while hot and flickering pixels were removed using the {\sc
sisclean} script. Spectra were extracted from within circular
regions of between 2.5\arcmin--3.0\arcmin\, radius, while background
spectra were extracted from circles offset from the source and
avoiding the chip corners containing the calibration sources. The
response matrix ({\sc rmf}) and ancillary response ({\sc arf}) files
were created using the tasks {\sc xisrmfgen} and {\sc xissimarfgen},
respectively, the former accounting for the CCD charge injection and
the latter for the hydrocarbon contamination on the optical blocking
filter.
Spectra from the front illuminated XIS~0, XIS~2 (where available) and
XIS~3 chips were combined to create a single source spectrum
(hereafter XIS-FI). Given its superior sensitivity in the region of
interest, 3.5--10.5~keV, we restricted our analysis to the XIS-FI
data. The data from the back illuminated XIS~1 (hereafter XIS-BI)
chip were analysed separately and checked for consistency with the
XIS-FI results. In all cases, the power-law continuum and Fe
K$\alpha$ emission line parameters are completely consistent. Instead,
the lower S/N of the XIS-BI in the 4--10~keV band ($\sim$40\% of the
XIS-FI) allowed us to place only lower limits to the equivalent width
of the blue-shifted absorption lines (see Appendix B and Table~5).
Furthermore, Appendix B gives more details on the various consistency
checks we have performed in order to verify the reality of the
absorption lines detected in the 7--10~keV band. In particular, we
determined that the XIS background has a negligible effect on the
detection of each of the individual absorption lines and we checked the
consistency of the results among the individual XIS cameras (see Table~5).
We also tested that the alternative modeling of the lines with ionized Ni
K-shell transitions and ionized Fe K edges is not feasible. Finally, in
\S4.2 we verified the fit results from the broad-band (E$=$0.5--50~keV)
XIS$+$PIN spectra.
\section{Spectral fits}
We performed a uniform spectral analysis of the small sample of five BLRGs
in the Fe K band (E$=$3.5--10.5~keV). We used the \emph{heasoft}
v. 6.5.1 package and XSPEC v. 11.3.2. We extracted the source spectra
for all the observations, subtracted the corresponding background and
grouped the data to a minimum of 25 counts per energy bin to enable
the use of the $\chi^2$ statistic when performing spectral fitting. Fits were limited
to the 3.5--10.5~keV energy band.
\subsection{The baseline model}
As a plausible phenomenological representation of the continuum in
the 3.5--10.5~keV band, we adopt a single power-law model.
We did not find it necessary
to include neutral absorption from our own Galaxy, as the
relatively low column densities involved
(Dickey \& Lockman 1990; Kalberla et al.~2005) have negligible effects in
the considered energy band (see Table~1).
The only exception is 3C~445, where the
continuum is intrinsically absorbed by a column density of
neutral/mildly-ionized gas as high as $N_H$$\sim$$10^{23}$~cm$^{-2}$
(Sambruna, Reeves \& Braito 2007); for this source we included also a
neutral intrinsic absorption component with a column density of
$N_H$$\simeq$$2\times 10^{23}$~cm$^{-2}$ (see Table~2). A more
detailed discussion of absorption in this source using Chandra and
Suzaku data is presented in Reeves et al. (2010, in prep.) and
Braito et al. (2010, in prep.).
The ratios of the spectral data against the simple (absorbed for 3C~445)
power-law continuum
for the five BLRGs are shown in the upper panels of Fig.~1, Fig.~3 and
Fig.~5. Some additional spectral complexity can be clearly seen, such
as a ubiquitous, prominent neutral Fe K$\alpha$ emission line at the
rest frame energy of 6.4~keV, absorption structures at energies
greater than 7~keV (3C~111, 3C~120, and 3C~390.3) and narrow emission
features red-ward (3C~445) and blue-ward (3C~120 and 3C~382) of the
neutral Fe K$\alpha$ line. To model the emission lines we added
Gaussian components to the power law model, including the Fe K$\alpha$
emission line at E$\simeq$6.4~keV and ionized Fe~K emission lines in
the energy range E$\sim$6.4--7~keV, depending on the ionization state
of iron, which in this energy interval is expected to range from Fe~II
up to Fe~XXVI.
We find that the baseline model composed of a power-law plus Gaussian Fe K
emission lines provides an excellent phenomenological
characterization of the 3.5--10.5~keV XIS data with the lowest number
of free parameters. The results of the fits for the five BLRGs are
reported in Table~2. Note that only those emission lines with detection
confidence levels greater than 99\% were retained in the following
fits. The weak red-shifted emission line present in 3C~445 was not
included because this has negligible effect on the fit results; this
line will be discussed by Braito et al. (2010 in prep.).
\subsection{Fe K absorption lines search}
As apparent from Figure~1, 3, and 5, several absorption dips are
present in the residuals of the baseline model in various cases. To
quantify their significance, we computed the $\Delta\chi^2$ deviations
with respect to the baseline model (\S~3.1) over the whole 3.5--10.5~keV
interval. The method is similar to the one used by the \emph{steppar}
command in XSPEC to visualize the error contours, but in this case the
inner contours indicate higher significance than the outer ones
(e.g., Miniutti \& Fabian 2006; Miniutti et al.~2007; Cappi et
al.~2009; Tombesi et al.~2010).
The analysis has been carried out for each source spectrum as follows:
1) we first fitted the 3.5--10.5~keV data with the baseline model and
stored the resulting $\chi^2$; 2) a further narrow, unresolved
($\sigma=$10~eV) Gaussian test line was then added to the model, with
its normalization free to have positive or negative values. Its
energy was stepped in the 4--10~keV band at intervals of
100~eV in order to properly sample the XIS energy resolution, each
time making a fit and storing the resulting $\chi^2$ value. In this
way we derived a grid of $\chi^2$ values and then plot the contours
with the same $\Delta\chi^2$ with respect to the baseline model.
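For illustration, the logic of this scan can be sketched in a few
lines of Python on a toy spectrum (our own schematic, not the
actual XSPEC \emph{steppar} runs performed on the XIS data; the
continuum, the fake absorption dip and the error bars are all
invented for the example):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

E = np.arange(3.5, 10.5, 0.1)                  # keV, ~XIS-like binning
def pl(e, n, g):
    return n * e ** (-g)

rng = np.random.default_rng(1)
cont = pl(E, 10.0, 1.8)
err = 0.05 * cont
data = (cont - 0.08 * np.exp(-0.5 * ((E - 7.4) / 0.05) ** 2)
        + rng.normal(0.0, err))                # fake dip at 7.4 keV

p0, _ = curve_fit(pl, E, data, p0=[10.0, 1.8], sigma=err)   # baseline
chi2_base = np.sum(((data - pl(E, *p0)) / err) ** 2)

scan = []
for e0 in np.arange(4.0, 10.0, 0.1):           # step the test line
    f = lambda e, n, g, k, e0=e0: (pl(e, n, g)
        + k * np.exp(-0.5 * ((e - e0) / 0.01) ** 2))  # unresolved, 10 eV
    p, _ = curve_fit(f, E, data, p0=[*p0, 0.0], sigma=err)
    scan.append((e0, p[2], chi2_base
                 - np.sum(((data - f(E, *p)) / err) ** 2)))
best = max(scan, key=lambda t: t[2])
print("deepest feature: E = %.1f keV, dchi2 = %.1f" % (best[0], best[2]))
\end{verbatim}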
The contour plots for the different sources are reported in the lower
panel of Figures~1, 3 and 5. The contours refer to $\Delta\chi^2$
levels of $-2.3$, $-4.61$ and $-9.21$, which correspond to F-test
confidence levels of 68\% (red), 90\% (green) and 99\% (blue),
respectively. The position of the neutral Fe K$\alpha$ emission line
at rest-frame energy E$=$6.4~keV is marked by the dotted vertical
line. The arrows indicate the position of the blue-shifted absorption
lines detected at $\ge$99\%. The black contours indicate the baseline
model reference level ($\Delta\chi^2=+0.5$).
We then proceeded to directly fit the spectra, adding Gaussian
absorption lines where indications for line-like absorption features
with confidence levels greater than 99\% were present. As already
noted in \S3.1, we checked that neglecting to include the weak
red-shifted emission line apparent only in the spectrum of 3C~445 has
no effect on the fit results. The detailed fitting and modeling of the
Fe K absorption lines are reported in Table~3 and discussed in
Appendix~A for each source.
\subsection{Line significance from Monte Carlo simulations}
The contour plots in the lower panels of Fig.~1, Fig.~3 and Fig.~5
visualize the presence of spectral structures in the data and
simultaneously give an idea of their energy, intensity and confidence
levels using the standard F-test. However, they give only a
semi-quantitative indication and the detection of each line must be
confirmed by directly fitting the spectra. Moreover, it has been
demonstrated that the F-test method can slightly overestimate the
actual detection significance for a blind search of
emission/absorption lines as it does not take into account the
possible range of energies where a line might be expected to occur,
nor does it take into account the number of bins (resolution elements)
present over that energy range (e.g., Protassov et al.~2002). This
problem requires an additional test on the red/blue-shifted lines
significance and can be solved by determining the unknown underlying
statistical distribution by performing extensive Monte Carlo (MC)
simulations (e.g., Porquet et al.~2004; Yaqoob \& Serlemitsos 2005;
Miniutti \& Fabian 2006; Markowitz et al.~2006; Cappi et al.~2009;
Tombesi et al.~2010).
Therefore, we performed detailed MC simulations to estimate
the actual significance of the absorption lines detected at energies
greater than 7~keV. We essentially tested the null hypothesis that
the spectra were adequately fitted by a model that did not include the
absorption lines. The simulations have been carried out as follows:
1) we simulated a source spectrum using the \emph{fakeit} command in
XSPEC and assuming the baseline model listed in Table~2 without any
absorption lines and with the same exposure as the real data. We
subtracted the appropriate background and grouped the data to a
minimum of 25 counts per energy bin; 2) we fitted the faked spectrum
with the baseline model in the 3.5--10.5~keV band, stored the new
parameter values, and generated another simulated spectrum as in step
1, but using the refined model. This procedure accounts for the
uncertainty in the null hypothesis model itself and is particularly
relevant when the original data set is noisy; 3) the newly simulated
spectrum was fitted again with the baseline model in the 3.5--10.5~keV band
and the resultant $\chi^2$ was stored; 4) then, a further Gaussian
line (unresolved, $\sigma=$10~eV) was added to the model, with its
normalization initially set to zero and let free to vary between
positive and negative values. To account for the range of energies in
which the line could be detected in a blind search, we stepped its
centroid energy between 7~keV and 10~keV at intervals of 100~eV to
sample the XIS energy resolution, fitting each time and storing
only the maximum of the resultant $\Delta\chi^2$ values. The procedure
was repeated $S=1000$ times and consequently a distribution of
simulated $\Delta\chi^2$ values was generated. The latter indicates
the fraction of random generated emission/absorption features in the
7--10~keV band that are expected to have a $\Delta\chi^2$ greater than
a threshold value. In particular, if $N$ of the simulated
$\Delta\chi^2$ values are greater or equal to the real value, then the
estimated detection confidence level from MC simulations is
simply $1-N/S$.
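The core of the procedure can again be illustrated with a toy
Python version (our own schematic; for brevity it omits the
intermediate refit-and-resimulate refinement of step 2, and the
`observed' $\Delta\chi^{2}$ below is a placeholder rather than a
value from Table~3):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

E = np.arange(4.0, 10.0, 0.1)
truth = 10.0 * E ** (-1.8)           # null-hypothesis continuum
err = 0.05 * truth
rng = np.random.default_rng(2)

def pl(e, n, g):
    return n * e ** (-g)

def max_dchi2(data):
    """Largest chi^2 improvement from one narrow test line in 7-10 keV."""
    p, _ = curve_fit(pl, E, data, p0=[10.0, 1.8], sigma=err)
    base = np.sum(((data - pl(E, *p)) / err) ** 2)
    best = 0.0
    for e0 in np.arange(7.0, 10.0, 0.1):
        f = lambda e, n, g, k, e0=e0: (pl(e, n, g)
            + k * np.exp(-0.5 * ((e - e0) / 0.01) ** 2))
        q, _ = curve_fit(f, E, data, p0=[*p, 0.0], sigma=err)
        best = max(best, base - np.sum(((data - f(E, *q)) / err) ** 2))
    return best

S = 1000                             # reduce for a quick run
observed_dchi2 = 12.0                # placeholder, not a Table 3 value
sims = [max_dchi2(truth + rng.normal(0.0, err)) for _ in range(S)]
N = sum(s >= observed_dchi2 for s in sims)
print("Monte Carlo confidence level:", 1.0 - N / S)
\end{verbatim}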
The MC detection probabilities for the absorption lines are given in
Table~3. The values are in the range between 91\% and 99.9\%.
As expected, these estimates are slightly lower than those derived from the
F-test ($\ge$99\%) because they effectively take into account the randomly
generated lines in the whole 7--10~keV energy interval.
\subsection{Photo-ionization modeling}
To model the absorbing material that is photo-ionized by the nuclear
radiation, a grid with the Xstar code (Kallman \& Bautista 2001) was
generated. We modeled the nuclear X-ray ionizing continuum with a
power-law with photon index $\Gamma=2$, as usually assumed for Seyfert
galaxies, which takes into account the possible steeper soft
excess component (e.g., Bianchi et al.~2005). A different choice of
the power-law slope in the range $\Gamma$$=$1.5--2.5 has negligible
effects ($<$5\%) on the parameter estimates in the considered Fe K
band, E$=$3.5--10.5~keV. Moreover, as already noted by McKernan et
al.~(2003a), the presence or absence of the possible UV-bump in the
SED has a negligible effect on the parameters of the photo-ionized gas
in the Fe K band because in this case the main driver is the ionizing
continuum in the hard X-rays (E$>$6~keV). Standard solar abundances
are assumed throughout (Grevesse et al.~1996).
The velocity broadening of absorption lines from the photo-ionized
absorbers in the central regions of Seyfert galaxies is dominated by
the turbulence velocity component, commonly assumed to be in the range
$\sim$100--1000~km/s (e.g., Bianchi et al.~2005; Risaliti et al.~2005;
Cappi et al.~2009 and references therein). The energy resolution of
the XIS instruments in the Fe K band is FWHM$\sim$100--200~eV,
implying that lines with velocity broadening lower than
$\sim$2000--4000~km/s are unresolved. Therefore, given that we cannot
estimate the velocity broadening of the lines directly from the
spectral data, we generated an Xstar grid assuming the most likely
value for the turbulent velocity of the gas of 500~km/s. We checked
that for higher choices of this parameter, the resultant
estimate of the ionization parameter was not affected, although the
derived absorber column density was found to be slightly lower.
This is because the core of the line tends to
saturate at progressively higher $N_H$ as the velocity broadening
increases (e.g., Bianchi et al.~2005). The opposite happens for
lower choices of the turbulent velocity. However, the
resulting difference of $\sim$5--10\% in the derived values is
completely negligible and well within the measurement errors.
Therefore, we apply this photo-ionization grid to model directly the
different absorption lines detected in the Fe K band.
The free parameters of the model are the absorber column density
$N_H$, the ionization parameter $\xi$ and the velocity shift $v$. We let the
code find the best-fit values and it turned out that the gas is
systematically outflowing, with velocities consistent with those derived from
the Gaussian absorption lines fits (see \S3.2 and Appendix A for a detailed discussion of each source).
The Xstar parameters are reported in Table~4 and the best-fit models are shown
in Fig.~2 and Fig.~4. A consistency check of the results from a broad-band spectral analysis is reported in \S4.2.
\section{General results}
\subsection{Fe K band spectral analysis}
In this Section we summarize the results of the spectral fits to the
3.5--10.5~keV XIS-FI spectra of the BLRGs of our sample with a model
consisting of the baseline model plus absorption lines and a detailed
photoionization grid (see above). Results for individual sources are
discussed in Appendix A.
The results of the fits with the baseline model are listed in Table~2,
while the residuals of this model are shown in Figures~1, 3, and 5 for
the five BLRGs, together with the $\Delta\chi^2$ contours. As
mentioned above, absorption dips are visible and to assess their
statistical significance we used both the F-test and extensive Monte
Carlo simulations. The results of these tests, reported in Table~3,
establish that only in 3/5 sources do we reliably detect absorption
features at energies $\sim$ 7.3--7.5~keV and 8.1--8.7~keV, namely in
3C~111, 3C~120b, and 3C~390.3. In these three sources, the absorption
lines are detected with confidence levels higher than 99\% with the F-test
and higher than 91\% with the Monte Carlo method (Table~3). We fitted the
absorption features by adding narrow Gaussian components, or a blend
of narrow components, to the baseline model. The Gaussian parameters
are reported in Table~3.
Given the high cosmic abundance of Fe, the most intense spectral
features expected from a highly ionized absorber in the 3.5--10.5~keV
band are the K-shell resonances of Fe XXV and Fe XXVI (e.g., Kallman
et al.~2004). However, the rest-frame energies of the detected
absorption lines are in the range $\simeq$7.3--7.5~keV and
$\simeq$8.1--8.8~keV, larger than the expected energies of the atomic
transitions for Fe XXV and Fe XXVI. An interesting possibility is that
the absorption lines detected in the BLRGs are due, similarly to those
recently observed in Seyferts, to blueshifted resonant lines of highly
ionized Fe, thus implying the presence of fast outflows in radio-loud AGN
as well. If we hold this interpretation true, the derived outflow
velocities are in the range $\simeq$0.04--0.15c.
We also performed more physically consistent spectral fits using the
Xstar photoionization grid described in \S~3.4 (see Fig.~2 and
Fig.~4). Good fits are obtained with this model, yielding ionization
parameters log$\xi$$\simeq$4--5.6~erg~s$^{-1}$~cm and column densities
$N_H$$\simeq$$10^{22}$--$10^{23}$~cm$^{-2}$. The
derived blue-shifted velocities are consistent with those from the
simple phenomenological fits, $v$$\simeq$0.04--0.15c (see Table~4). We
note that, given the very high ionization level of this absorbing
material, no other significant signatures are expected at lower
energies as all the elements lighter than iron are almost completely
ionized.
An important caveat is that the velocities and column densities
derived by fitting the spectral data with the Xstar grid depend on the
unknown inclination angle of the outflow with respect to the line of
sight. In other words, they depend on whether we are actually looking
directly down to the outflowing stream or intercepting only part of
it (e.g., Elvis 2000). Therefore, the obtained values (see Table~4)
are only conservative estimates and represent lower limits.
In conclusion, we detected, for the first time at X-rays in radio-loud
AGN, absorption lines in the energy range 7--10~keV in the Suzaku
XIS spectra of 3/5 BLRGs -- 3C~111, 3C~390.3, and 3C~120. If
interpreted as blueshifted resonant absorption lines of highly ionized
Fe, the features imply the presence of ultra-fast (v $\sim$
0.04-0.15c) outflows in the central regions of BLRGs. In \S5 we discuss
more in depth this association and the inferred outflow physical
properties.
\subsection{Broad-band spectral analysis}
As a consistency check of the Fe K band (E$=$3.5--10.5~keV) based results, we exploited the broad-band capabilities of Suzaku combining the XIS and PIN spectra.
The energy band covered in this way is very broad, from 0.5~keV up to 50~keV.
We downloaded and reduced the PIN data of 3C~111, 3C~390.3 and 3C~120 and analyzed the combined XIS-FI and PIN spectra.
For 3C~390.3 and 3C~120 we applied the broad-band models already published in the literature by Sambruna et al.~(2009) and Kataoka et al.~(2007).
Instead, for 3C~111, we used the broad-band model that will be reported by us in Ballo et al.~(2010, in prep.). This is essentially composed of a power-law continuum with Galactic absorption, plus cold reflection ($R\la$1) and the Fe K$\alpha$ emission line at E$\simeq$6.4~keV. The resultant power-law photon index of this fit is $\Gamma$$\simeq$1.6, which is slightly steeper than the estimate of $\Gamma$$\simeq$1.5 from the local continuum in the 3.5--10.5~keV band (see Table~2). We included the neutral Galactic absorption component in all broad-band fits (see Table~1).
Then, we modeled the blue-shifted absorption lines with the Xstar photo-ionization grid already discussed in \S3.4., letting the column density, the ionization parameter and the velocity shift vary as free parameters.
The best-fit estimates of the Fe K absorbers derived from these broad-band fits are completely consistent with those reported in Table~4.
In particular, for 3C~111 we obtain an ionization parameter of log$\xi$$=$$4.9^{+0.2}_{-0.4}$~erg~s$^{-1}$~cm, a column density of $N_H$$>$$1.5\times 10^{23}$~cm$^{-2}$ and an outflow velocity of $v_{out}$$=$$+0.039\pm0.003$c.
For 3C~390.3, we estimate log$\xi$$=$$5.6\pm0.5$~erg~s$^{-1}$~cm, $N_H$$>$$2\times 10^{22}$~cm$^{-2}$ and $v_{out}$$=$$+0.146\pm0.007$c.
Finally, for 3C~120b, we derive log$\xi$$=$$3.7\pm0.2$~erg~s$^{-1}$~cm, $N_H$$=$$(1.5\pm0.4)\times 10^{22}$~cm$^{-2}$ and $v_{out}$$=$$+0.075\pm0.003$c.
This also assures that the addition of a weak reflection component with $R$$<$1 (e.g., Sambruna et al.~2009; Kataoka et al.~2007; Ballo et al.~2010 in prep.) does not change the fit results at all.
Moreover, it is important to note here that we do not find any evidence for a lower ionization (log$\xi$$\la$3~erg~s$^{-1}$~cm) warm absorber at E$\la$3~keV in these three sources. This rules out any possible systematic contamination from moderately ionized iron and strengthens the interpretation of the absorption lines at E$>$7~keV as genuine blue-shifted Fe XXV and Fe XXVI K-shell transitions.
As already introduced in \S1, the only object with the detection of a soft X-ray warm absorber in its high energy resolution Chandra HETG (Reeves et al.~2009b) and XMM-Newton RGS (Torresi et al.~2010) spectra is 3C~382. On the other hand, a heavy soft X-ray absorption from neutral/mildly-ionized gas with $N_H$$\sim$$10^{23}$~cm$^{-2}$ has been reported in the XMM-Newton spectrum of 3C~445 (Sambruna, Reeves \& Braito 2007). This result is also confirmed by a Chandra LETG and a Suzaku broad-band spectral analysis presented in Reeves et al.~(2010, in prep.) and Braito et al.~(2010, in prep.), respectively.
However, we did not find any significant narrow Fe K absorption
line features in the 7--10~keV Suzaku XIS spectra of these two sources
from this analysis.
\section{Discussion}
\subsection{Evidence for Ultra-fast Outflows in BLRGs}
The discovery of Ultra-Fast Outflows (UFOs) in radio-loud BLRGs
parallels the detection of UFOs in the X-ray spectra of several
Seyfert galaxies and radio-quiet quasars (e.g., Chartas et al.~2002;
Chartas et al.~2003; Pounds et al.~2003; Dadina et al.~2005; Markowitz
et al.~2006; Braito et al.~2007; Cappi et al.~2009; Reeves et
al.~2009a). The presence of UFOs in radio-quiet sources was recently
established through a systematic, uniform analysis of the XMM-Newton archive
on a large number of sources (Tombesi et al. 2010), overcoming
possible publication biases (e.g., Vaughan \& Uttley 2008).
While a uniform analysis was also performed in this work, it should be
noted that our small sample is not complete and the results might not
be representative of the global population of BLRGs. Therefore, to
obtain better constraints on the statistical incidence and parameters
of UFOs in BLRGs, it is imperative to expand the sample of sources
with high-quality X-ray observations in the next few years through
Suzaku and XMM-Newton observations of additional sources.
However, it has been claimed that some (or even all) of the blue-shifted
ionized absorption features detected in the X-ray spectra of bright
AGN could be affected by contamination from local ($z\simeq0$)
absorption in our Galaxy or by the Warm/Hot Intergalactic Medium
(WHIM) at intermediate red-shifts, because some of them
have blue-shifted velocities comparable to the sources' cosmological
red-shifts (e.g., McKernan et al.~2003b; McKernan et al.~2004; McKernan
et al.~2005). We performed some tests to look into this scenario. We
can use the velocity information and compare the absorber blue-shifted
velocities with the cosmological red-shifts of the sources. The
blue-shifted velocities of the absorbers detected in 3C~120 and
3C~390.3 (see Table~4) are much larger than the sources' cosmological
red-shifts (see Table~1). This comparison alone is sufficient to rule out
any contamination due to absorption from local or intermediate
red-shift material in these two sources. However, the derived
blue-shifted velocity of $v$$=$$+0.041\pm0.003$c for the highly
ionized absorber in 3C~111 is instead somewhat similar to the source
cosmological red-shift of $z=0.0485$ and needs to be investigated in
more detail. The difference between the two values is
$zc-v$~$\simeq$~$0.007$c, which could indicate absorption from highly
ionized material either in our Galaxy, outflowing with that
velocity ($v$$\sim$2000~km/s) along the line of sight, or at rest and
located at that intermediate red-shift ($z$$\simeq$$0.007$).
The galaxy 3C~111 is located at a relatively low latitude ($b=
-8.8\degr$) with respect to the Galactic plane and therefore its X-ray
spectrum could be, at some level, affected by local
obscuration. However, the estimated column density of Galactic
material along the line of sight of the source is $N_H\sim 3\times
10^{21}$~cm$^{-2}$ (Dickey \& Lockman 1990; Kalberla et al.~2005),
which is far too low to explain the value of $N_H\sim
10^{23}$~cm$^{-2}$ of the absorber from fits of the Suzaku spectrum
(see Table~4). Nevertheless, the source is located near the direction of the
Taurus molecular cloud, which is the nearest large star-forming region in our
Galaxy.
A detailed optical and radio study of the cloud has been reported by
Ungerer et al.~(1985). From the analysis of the emission from the
stars in that region and the molecular emission lines, the authors
estimated several parameters of the cloud, such
as the location at a distance of $\sim$400~pc (with a linear extent of
$\sim$5~pc), a kinetic temperature of $T\simeq$10~K, a typical
velocity dispersion of $\sim$1--3~km/s and a low number density of
$n(\mathrm{H_2})$$\sim$300~cm$^{-3}$. These parameters are completely
inconsistent with the properties of the X-ray absorber. In fact, the
extreme ionization level ($\log\xi\sim5$~erg~s$^{-1}$~cm) needed to
have sufficient Fe XXVI ions would completely destroy all the
molecules and ionize all the lighter atoms. The temperature associated
with such a photo-ionized absorber ($T$$\sim$$10^6$~K) is far larger than
that estimated for the Taurus molecular cloud. Also the outflow
velocity of $\sim$2000~km/s, expected if associated with such Galactic
clouds, would be substantially higher than the velocity dispersion
estimated by Ungerer et al.~(1985). The authors also stated that the
mapping of the visual extinction due to the molecular cloud clearly
shows that the region of the cloud in front of 3C~111 is not the
densest part (see Fig.~3 of Ungerer et al.~1985).
This result is also supported by a recent detailed X-ray study of this
region that has been performed by the XMM-Newton Extended Survey of
the Taurus Molecular Cloud project (G\"udel et al.~2007). This work
has been focused on the study of the stars and gas located in the most
populated $\simeq$5 square degrees region of the Taurus cloud. With a
declination of $\sim$$38\degr$, 3C~111 is located outside the edge of
this complex region, where mainly only extended cold and low density
molecular clouds are distributed (see Fig.~1 of G\"udel et al.~2007).
Therefore, the identification of the highly ionized absorber of 3C~111
with local Galactic absorption is not feasible.
We also find that association with absorption from the WHIM at intermediate
red-shift ($z$$\sim$$0.007$) is very unlikely. In fact, this diffuse gas is
expected to be
collisionally ionized, instead of being photo-ionized by the AGN
continuum. Therefore, the temperature required to have a substantial
He/H-like iron population would be much higher
($T$$\sim$$10^{7}$--$10^{8}$~K) than the expected
$T$$\sim$$10^5$--$10^6$~K. The huge column density of gas
($N_H$$\ga$$10^{23}$~cm$^{-2}$) required to reproduce the observed
features is also too high compared to those expected for the WHIM
($N_H$$\la$$10^{20}$~cm$^{-2}$). Moreover, the detection of highly
ionized absorbers in 3C~120 and 3C~390.3 with blue-shifted velocities
substantially larger than the respective cosmological red-shifts
strongly supports the association of the absorber in 3C~111 with a UFO
intrinsic to the source.
Similar conclusions were reached by Reeves et al.~(2008) concerning
the bright quasar PG~1211+143. The X-ray spectrum of this source
showed a blue-shifted absorption line from highly ionized iron (Pounds
et al.~2003; Pounds \& Page 2006) with a blue-shifted velocity
comparable to the cosmological red-shift of the source. This led some
authors to suggest its possible association with absorption from
intervening diffuse material at $z$$\sim$0 (e.g., McKernan et
al.~2004). However, the detection of line variability on a time-scale
less than 4~yrs, suggesting a compact $\sim$pc scale absorber, and the
extreme parameters of the absorber, e.g.,
log$\xi$$\sim$3--4~erg~s$^{-1}$~cm and
$N_H$$\sim$$10^{22}$--$10^{23}$~cm$^{-2}$, led Reeves et al.~(2008) to
exclude such interpretation. As pointed out by the authors, the
evidence of several other radio-quiet AGN with Fe K absorption with
associated blue-shifted velocities higher than the respective
cosmological red-shifts suggested that the case of PG~1211+143 was a
mere coincidence (see also Tombesi et al.~2010).
We conclude that the evidence for UFOs in BLRGs from Suzaku data is
indeed robust. In the next section we examine their physical properties
in detail.
\subsection{Physical properties of Ultra-fast Outflows}
From the definition of the ionization parameter $\xi=L_{ion}/nr^2$
(Tarter, Tucker \& Salpeter 1969), where $n$ is the average absorber
number density and $L_{ion}$ is the source X-ray ionizing luminosity
integrated between 1~Ryd and 1000~Ryd (1~Ryd=13.6~eV), we can estimate
the maximum distance $r$ of the absorber from the central source. The
column density of the gas $N_H$ is a function of the density of the
material $n$ and the shell thickness $\Delta r$: $N_H= n \Delta
r$. Making the reasonable assumption that the thickness is less than
the distance from the source $r$ and combining with the expression for
the ionization parameter, we obtain the upper limit $r<L_{ion}/\xi
N_H$. Using the absorption corrected luminosities
$L_{ion}\simeq2.2\times 10^{44}$~erg~s$^{-1}$, $L_{ion}\simeq2.3\times
10^{44}$~erg~s$^{-1}$ and $L_{ion}\simeq 5.1\times
10^{44}$~erg~s$^{-1}$ directly estimated from the Suzaku data and the
ionization parameters and column densities listed in Table~4, we
obtain the limits of $r<2\times 10^{16}$~cm ($<$0.007~pc), $r<
10^{18}$~cm ($<$0.3~pc) and $r<4\times 10^{16}$~cm ($<$0.01~pc) for
3C~111, 3C~120 and 3C~390.3, respectively. Using the black hole mass
estimates of $M_{\mathrm{BH}}\sim3\times 10^9$$M_{\sun}$ for 3C~111
(Marchesini et al.~2004), $M_{\mathrm{BH}}\sim5\times 10^7$$M_{\sun}$
for 3C~120 (Peterson et al.~2004) and $M_{\mathrm{BH}}\sim3\times
10^{8} M_{\sun}$ for 3C~390.3 (Marchesini et al.~2004; Peterson et
al.~2004), the previous limits on $r$ correspond to a location for the
absorber within a distance of $\sim$20~$r_s$, $\sim$7$\times
10^4$~$r_s$ and $\sim$500~$r_s$ from the super-massive black hole,
respectively. The expected variability time-scale of the absorbers
from the light crossing time, $t$$\sim$$r/c$, is $t$$\sim$600--700~ks
($\sim$7 days) for 3C~111, $t$$\sim$1~yr for 3C~120 and
$t$$\sim$15--20~days for 3C~390.3, respectively.
A rough estimate of the escape velocity along the radial distance for
a Keplerian disk can be derived from the equation
$v_{esc}^2=2GM_{\mathrm{BH}}/r$, which can be re-written as
$v_{esc}=(r_s/r)^{1/2}c$. Therefore, for 3C~111 the escape velocity
at the location of $\sim 20 r_s$ is $v_{esc}\sim0.2$c, which is larger
than the measured outflow velocity of $v\sim$0.041c (see Table
4). This implies that most likely the absorber is actually in the form
of a blob of material which would eventually fall back down, possibly
onto the accretion disk. For 3C~120, the measured outflow velocity
$v\sim$0.076c (see Table~4) is equal to the escape velocity at a
distance of $\sim$200$r_s$ from the black hole. Therefore, if the
launching region is further away than this distance, the ejected blob
is likely to escape the system. Concerning 3C~390.3, the measured
velocity of $v\sim$0.146c (see Table~4) is larger than the escape
velocity at $\sim500 r_s$ and equals that at a distance of
$\sim$50--60$r_s$. Therefore, if the blob of material has been ejected
from a location between, say, $\sim100 r_s$ and $\sim500r_s$, it has
likely enough energy to eventually leave the system.
We can get an idea of the effectiveness of the AGN in producing
outflows by comparing their luminosity with the Eddington luminosity,
$L_{Edd}$$\simeq$$1.3\times10^{38}
(M_{\mathrm{BH}}/M_{\sun})$~erg~s$^{-1}$. Substituting the estimated
black hole mass for each source, we have
$L_{Edd}$$\simeq$$3.9\times10^{47}$~erg~s$^{-1}$ for 3C~111,
$L_{Edd}$$\simeq$$6.5\times10^{45}$~erg~s$^{-1}$ for 3C~120 and
$L_{Edd}$$\simeq$$3.9\times10^{46}$~erg~s$^{-1}$ for 3C~390.3,
respectively. From the relation $L_{bol}$$\simeq$$10L_{ion}$
(e.g., McKernan et al.~2007), the bolometric luminosities of the
different sources are:
$L_{bol}$$\simeq$$2.2\times10^{45}$~erg~s$^{-1}$ for 3C~111,
$L_{bol}$$\simeq$$2.3\times10^{45}$~erg~s$^{-1}$ for 3C~120 and
$L_{bol}$$\simeq$$5.1\times10^{45}$~erg~s$^{-1}$ for 3C~390.3,
respectively. The ratio $L_{bol}/L_{Edd}$ is
almost negligible for 3C~111 but it is of the order of $\sim$0.1--0.4
for 3C~120 and 3C~390.3. These two sources are emitting closer to
their Eddington limits and therefore are more capable of producing
powerful outflows/ejecta that would eventually leave the system
(e.g., King \& Pounds 2003; King 2010). This supports the conclusions
from the
estimates on the location of the ejection regions and the comparison
of the outflow velocities with respect to the escape velocities.
Moreover, assuming a constant velocity for the outflow and the
conservation of the total mass, we can roughly estimate the mass loss
rate $\dot{M}_{out}$ associated with the fast outflows,
$\dot{M}_{out}=4\pi C r^2 n m_p v$ (e.g., Blustin et al.~2005; McKernan
et al.~2007), where $v$ is the outflow velocity, $n$ is the absorber
number density, $r$ is the radial distance, $m_p$ is the proton mass
and $C\equiv(\Omega/4\pi)$ is the covering fraction, which in turn
depends on the solid angle $\Omega$ subtended by the absorber. From
the definition of the ionization parameter $\xi$, we obtain
$\dot{M}_{out}=4\pi C \frac{L_{ion}}{\xi} m_p v$. Substituting the
respective values, we derive estimates of $\dot{M}_{out} \sim
2\,C$~M$_{\sun}~$yr$^{-1}$, $\dot{M}_{out} \sim
17\,C$~M$_{\sun}$~yr$^{-1}$ and $\dot{M}_{out} \sim
2\,C$~M$_{\sun}$~yr$^{-1}$ for 3C~111, 3C~120 and 3C~390.3,
respectively.
The kinetic power carried by the outflows can be estimated as
$\dot{E}_K\equiv \frac{1}{2} \dot{M}_{out} v^2$, which roughly
corresponds to $\dot{E}_K \sim 4.5\times 10^{43}\,C$~erg~s$^{-1}$,
$\dot{E}_K \sim 3\times 10^{45}\,C$~erg~s$^{-1}$ and $\dot{E}_K \sim
1.2\times 10^{45}\,C$~erg~s$^{-1}$ for 3C~111, 3C~120 and 3C~390.3,
respectively. Note that, depending on the estimated covering fraction,
the kinetic power injected in these outflows can be substantial,
possibly reaching significant fractions ($\sim$0.01--0.5) of the
bolometric luminosity and can be comparable to the typical jet power
of these sources of $\sim$$10^{44}$--$10^{45}$~erg~s$^{-1}$, the
latter being the power deposited in the radio lobes (Rawlings \&
Saunders 1991).
Therefore, it is important to compare the fraction of mass that goes
into accretion of the system with respect to that which is lost
through these outflows. Following McKernan et al.~(2007), we can
derive a simple relation for the ratio between the mass outflow rate
and the mass accretion rate, i.e. $\dot{M}_{out}/\dot{M}_{acc}\simeq
6000\, C(v_{0.1}/\xi_{100})\eta_{0.1}$, where $v_{0.1}$ is the outflow
velocity in units of 0.1c, $\xi_{100}$ is the ionization parameter in
units of 100~erg~s$^{-1}$~cm and $\eta=\eta_{0.1}\times 0.1$ is the
accretion efficiency. Substituting the parameters with their respective
values listed in Table~4, we obtain
$\dot{M}_{out}/\dot{M}_{acc}$$\sim$$2\,C$ for 3C~111,
$\dot{M}_{out}/\dot{M}_{acc}$$\sim$$40\,C$ for 3C~120 and
$\dot{M}_{out}/\dot{M}_{acc}$$\sim$$2\,C$ for 3C~390.3, respectively.
These estimates depend on the unknown value of the covering fraction
$C$. A very rough estimate of the global covering fraction of these
absorbers can be derived from the fraction of sources of our small
sample: $C\simeq f= 3/5\sim 0.6$ (e.g., Crenshaw et al.~1999). This
suggests that the geometrical distribution of the absorbing material
is not very collimated but large opening angles are favored. The
rough estimate $C\sim0.6$ implies the possibility of reaching ratios
of about unity or higher between the mass outflow and accretion
rates. This means that these outflows can potentially generate
significant mass and energy losses from the system. However, the
crude covering fraction estimate of $C\sim0.6$ has been derived from a
very small sample which is far from complete and therefore might not be
fully representative of the whole population of BLRGs.
The physical characteristics of UFOs here derived for the three
BLRGs strongly point towards an association with winds/outflows from
the inner regions of the putative accretion disk. In fact,
simulations of accretion disks in AGN ubiquitously predict the
generation of mass outflows. For instance, the location, geometry,
column densities, ionization and velocities of our detected UFOs are
in good agreement with the AGN accretion disk wind model of Proga \&
Kallman (2004). In this particular model the wind is driven by
radiation pressure from the accretion disk and the opacity is
essentially provided by UV lines. Depending on the angle with respect
to the polar axis, three main wind components can be identified: a
hot, low density and extremely ionized flow in the polar region; a
dense, warm and fast equatorial outflow from the disk; and a
transition zone in which the disk outflow is hot and struggles to
escape the system. The ionization state of the wind decreases from
polar to equatorial regions. Instead, the column densities increase
from polar to equatorial, up to very Compton-thick values
($N_H$$>$$10^{24}$~cm$^{-2}$). The outflows can easily reach large
velocities, even higher than $\sim$$10^4$~km/s.
Lines of sight through the transition region of the simulated outflow,
where the density is moderately high
($n$$\sim$$10^8$--$10^{10}$~cm$^{-3}$) and the column density can
reach values up to $N_H$$\sim$$10^{24}$~cm$^{-2}$, result in spectra
that have considerable absorption features from ionized species
imprinted in the X-ray spectrum, mostly with intermediate/high
ionization parameters, log$\xi$$\sim$3--5~erg~s$^{-1}$~cm. This
strongly suggests that the absorption material could be observed in
the spectrum through Fe K-shell absorption lines from Fe XXV and Fe
XXVI (e.g., Sim et al.~2008; Schurch et al.~2009; Sim et al.~2010), in
complete agreement with our detection of UFOs.
In particular, Sim et al.~(2008) and Sim et al.~(2010) used their accretion disk wind model to successfully reproduce the 2--10~keV spectra of two bright radio-quiet AGN in which strong blue-shifted Fe K absorption lines were detected in their XMM-Newton spectra, namely Mrk~766 (from Miller et al.~2007) and PG~1211+143 (from Pounds et al.~2003). Notably, the authors have been able to account for both emission and absorption features in a physically self-consistent way and demonstrated that accretion disk winds/outflows might well imprint also other spectral signatures in the X-ray spectra of AGN (e.g., Pounds \& Reeves 2009 and references therein).
The winds in hydrodynamic simulations are highly inhomogeneous in
density, column and ionization and have strong rotational velocity
components. Therefore the outflow, especially in its innermost
regions, is rather unstable. In particular, the outflow properties
through the transition region show considerable variability and this
is expected to be reflected by the spectral features associated with
this region, i.e. by the corresponding blue-shifted Fe XXV/XXVI
K-shell absorption lines.
Proga \& Kallman (2004) and Schurch et al.~(2009) state that it is
possible that some parts or blobs of the flow, especially in the
innermost regions, do not have enough power to allow a ``true'' wind
to be generated. In these cases, a considerable amount of material is
driven to large-scale heights above the disk but the velocity of the
material is insufficient for it to escape the system and it will
eventually fall back onto the disk. Despite returning to the
accretion disk at larger radii, while it is above the disk, this
material can imprint features on the observed X-ray spectrum
(e.g., Dadina et al.~2005 and references therein). This can indeed be
the case for some of the UFOs discussed here (i.e., 3C~111).
This overall picture is also partially in agreement with what is
predicted by the ``aborted jet'' model by Ghisellini et
al.~(2004). This model was actually proposed to explain, at least in
part, the high-energy emission in radio-quiet quasars and Seyfert
galaxies. It postulates that outflows and jets are produced by every
black hole accretion system. Blobs of material can then be ejected
intermittently and can sometimes only travel for a short radial
distance and eventually fall back, colliding with others approaching.
Therefore, the flow can manifest itself as erratic high-velocity
ejections of gas from the inner disk and it is expected that some
outflows/blobs are not fast enough to escape the system and will
eventually fall back onto the disk. An intriguing possibility could
be that these outflows are generated by localized ejection of material
from the outer regions of a bubbling corona, which emits the bulk of
the X-ray radiation (Haardt \& Maraschi 1991), in analogy with what is
observed in the solar corona during Coronal Mass Ejection events
(e.g., Low 1996). The velocity and frequency of these strong events
should then be limited to some extent, in order not to cause the
disruption or evaporation of the corona itself. Such extreme
phenomena could then be the signatures of the turbulent environment
close to the super-massive black hole.
The detection of UFOs in both radio-quiet and radio-loud galaxies
suggests a similarity of their central engines and demonstrates that
the presence of strong relativistic jets does not exclude the existence
of winds/outflows from the putative accretion disk. Moreover, it has
been demonstrated by Torresi et al.~(2010) and Reeves et al.~(2009b)
that a warm absorber is indeed present also in BLRGs (in particular
3C~382) and this indicates that jets and slower winds/outflows can
coexist in the same source, even beyond the broad-line region.
However, BLRGs are radio-loud galaxies and they have powerful jets.
Therefore, the fact that in BLRGs we intercept the
outflowing stream at intermediate angles to the jet
($\sim$15--30$^{\circ}$; e.g., Eracleous \& Halpern 1998) suggests
that the fast winds/outflows we observe lie at larger inclination
angles with respect to the jet axis, somewhat similar to what is
expected for accretion disk winds (e.g., Proga \& Kallman 2004). These outflows
would then not be able to undergo the processes which instead
accelerate the jet particles to velocities close to the speed of
light.
For instance, studies of Galactic stellar-mass black holes, or
micro-quasars, showed that wind formation occurs in competition with
jets, i.e., winds carry away matter, halting its flow into the jets
(e.g., Neilsen \& Lee 2009). Given the well-known analogy between
micro-quasars and their super-massive relatives, one would naively
expect a similar relationship for radio-loud AGN. The BLRGs 3C~111 and
3C~120 are regularly monitored in the radio and X-ray bands with the
VLBA and RXTE as part of a project aimed at studying the disk-jet
connection (e.g., Marscher et al.~2002). We have detected UFOs in both of
these sources (see Table~4), and indeed in both cases the 4--10~keV
fluxes measured with Suzaku corresponded to historical low(est) states
if compared to the RXTE long-term light curves. For instance,
correlated spectroscopic observations of 3C~111, where the shortest
variability timescales are predicted (t$\sim$7 days), during low and
high jet continuum states could provide, in a manner analogous to
micro-quasars, valuable information on the synergy among disk, jet,
and outflows, and go a long way towards elucidating the physics of
accretion/ejection in radio-loud AGN.
However, whether it is possible to accelerate such ultra-fast outflows
with velocities up to $\sim$0.15c only through UV line-driving is
unclear. Moreover, the material needs to be shielded from the high
X-ray ionizing flux in the inner regions of AGN, otherwise it would
become over-ionized and the efficiency of this process would be
drastically reduced. Other mechanisms are also capable of accelerating
winds from accretion disks, in particular radiation pressure through Thomson
scattering and magnetic forces.
In fact, Ohsuga et al.~(2009) proposed a unified model of
inflow/outflow from accretion disks in AGN based on radiation-MHD
simulations. Disk outflows with helical magnetic fields, which are
driven either by radiation-pressure force or magnetic-pressure, are
ubiquitous in their simulations. In particular, in their case A (see
their Fig.~1) a geometrically thick, very luminous disk forms with a
luminosity $L \sim L_{Edd}$, which effectively drives a fast
Compton-thick wind with velocities up to $\sim$0.2--0.3c. It is
important to note that the models of Ohsuga et al.~(2009) include both
radiation and magnetic forces which, depending on the state of the
system, can generate both relativistic jets and disk winds.
Moreover, King \& Pounds (2003) and King (2010) showed that black
holes accreting with modest Eddington rates are likely to produce fast
Compton-thick winds. They considered only radiation pressure and
therefore fast winds can be effectively generated by weakly magnetized
accretion disks as well. In particular, King (2010) derived that
Eddington winds from AGN are likely to have velocities of $\sim$0.1c
and to be highly ionized, showing the presence of helium- or
hydrogen-like iron. These properties strongly point toward an
association of our detected UFOs from the innermost regions of AGN
with Eddington winds/outflows from the putative accretion disk.
Depending on the estimated covering fraction, the derived mass outflow
rate of the UFOs can be comparable to the accretion rate and their
kinetic power can correspond to a significant fraction of the
bolometric luminosity and can be comparable to the jet power. Therefore,
the UFOs may carry significant amounts of
mass and energy outward, potentially contributing to the expected
feedback from the AGN. In particular, King (2010) demonstrated that
fast outflows driven by black holes in AGN can explain important
connections between the SMBH and the host galaxy, such as the observed
$M_{BH}$--$\sigma$ relation (e.g., Ferrarese \& Merritt 2000). These
UFOs can potentially provide an even more important contribution to the
expected feedback between the AGN and the host galaxy than the jets in
radio-loud sources. In fact, even if jets are highly energetic, they
are also extremely collimated and carry a negligible mass. Fast
winds/outflows from the accretion disks, instead, are found to be
massive and extend over wide angles. Thus, we suggest that UFOs in
radio-loud AGN are a new, important ingredient for feedback models
involving these sources.
\section{Summary and Conclusions}
Using high signal-to-noise Suzaku observations, we detected several
absorption
lines in the $\sim$7--10~keV band of three out of five BLRGs with high
statistical significance. If interpreted as blueshifted K-shell resonance
absorption lines
from Fe XXV and Fe XXVI, the lines imply the presence of
outflowing gas from the central regions of BLRGs with mildly
relativistic velocities, in the range $\simeq$0.04--0.15c. The inferred
ionization states and column densities of the absorbers are in the range
log$\xi$$\sim$4--5.6~erg~s$^{-1}$~cm and
$N_H$$\sim$$10^{22}$--$10^{23}$~cm$^{-2}$, respectively. This is the
first time that evidence for Ultra-fast Outflows (UFOs) from the
central regions of radio-loud AGN is obtained at X-rays.
The estimated location of these UFOs at distances within
$\sim$0.01--0.3~pc from the
central super-massive black hole suggests that the outflows might be
connected with AGN accretion disk winds/ejecta (e.g., King \& Pounds
2003; Proga \& Kallman 2004; Ohsuga et al.~2009; King 2010).
Depending on the covering fraction estimate (here, $C$$\sim$0.6),
their mass outflow rate can be comparable to the accretion rate and
their kinetic power may correspond to a significant fraction of the
bolometric luminosity and be comparable to the jet power. These UFOs
would thus bring outward significant amounts of mass and energy,
potentially contributing to the expected feedback from the AGN on the
surrounding environment.
These results are in analogy with the recent findings of blue-shifted
Fe K absorption lines at $\sim$7--10~keV in the X-ray spectra of
several radio-quiet AGN, which demonstrated the presence of UFOs in
the central regions of these sources (e.g., APM~08279$+$5255, Chartas
et al.~2002; PG~1115$+$080, Chartas et al.~2003; PG~1211+143, Pounds
et al.~2003; IC4329A, Markowitz et al.~2006; MCG-5-23-16, Braito et
al.~2007; Mrk~509, Cappi et al.~2009; PDS~456, Reeves et al.~2009a;
see Tombesi et al.~2010 for a systematic study on a large sample of
Seyfert galaxies). In particular, it is important to note that the
physical parameters of UFOs in radio-loud AGN previously discussed
are completely consistent with those reported in
radio-quiet AGN. This strongly suggests that we could actually be
witnessing the same physical phenomenon in the two classes of objects
and this can help us improve the understanding of the relation between
the disk and the formation of winds/jets in black hole accretion
systems.
Several questions remain open. It is important to note that the
estimate of the covering factor $C\sim0.6$ in \S~5.2 might actually be
only a lower limit. Fast outflows are expected to
come from regions close to the central black hole and to be highly
ionized. Thus, a slight increase in the ionization level of the
absorbers would cause iron to be completely ionized and the gas to
become invisible also in the Fe K band. Therefore, it is also quite
possible that most,
\emph{if not all}, radio-loud AGN contain ultra-fast outflows which
cannot be seen at present simply because they are highly ionized.
The physical properties of UFOs in BLRGs are also of great interest to
understand the dynamics of accretion/ejection and the disk-jet
connection.
In particular, by studying the source variability, which, in some sources, is
expected to occur on timescales as short as a few days, we can investigate
the gas densities and internal dynamics of the outflow, as well as
better constrain
its distance from the SMBH. This can help us understand in
detail whether the UFOs in radio-loud AGN are similar to those in
radio-quiet ones, or if major quantitative differences exist which
affect jet formation and thus the radio-loud/radio-quiet AGN division.
Finally, a substantial improvement is expected from the higher
effective area and superior energy resolution (down to $\sim$2--5~eV)
in the Fe K band offered by the calorimeters on board the future
Astro-H and IXO missions. In particular, the lines will be resolved
and also their profiles could be measured. The parameters of UFOs will
be determined with unprecedented accuracy and their dynamics could,
potentially, also be studied through time-resolved spectroscopy on short
time-scales (e.g., Tombesi et al.~2009).
\acknowledgments
We thank Laura Maraschi, Demos Kazanas, Keigo Fukumura, and Meg Urry
for useful discussions. FT and RMS acknowledge financial support from
NASA grant NAG5-10708 and the Suzaku program. MC acknowledges financial
support from ASI under contract I/088/06/0.
\section{Introduction}
Many combinatorial problems in machine learning can be cast as the minimization of submodular functions (i.e., set functions that exhibit a diminishing marginal returns property). Applications include isotonic regression, image segmentation and reconstruction, and semi-supervised clustering (see, e.g.,~\cite{bach2013learning}).
In this paper we consider the problem of minimizing in a distributed fashion (without any central unit) the sum of $N\in\natural$ submodular functions, i.e.,
\begin{equation}\label{pb:standard}
\begin{aligned}
& \mathop{\textrm{minimize}}_{X\subseteq V}
& & F(X)=\sum_{i=1}^N F_i(X)
\end{aligned}
\end{equation}
where $V=\until{n}$ is called the \emph{ground set} and the functions $F_i$ are submodular.
We consider a scenario in which problem~\eqref{pb:standard} is to be solved by $N$ peer agents communicating locally and performing local computations.
The communication is modeled as a \emph{directed} graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\until{N}$ is the set of agents and $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is the set of directed edges in the graph. Each agent $i$ \emph{receives} information only from its in-neighbors, i.e., agents $j\in
\mathcal{N}_{i}^{\text{in}}\triangleq\{j\mid (j,i)\in\mathcal{E}\}\cup \{i\}$, while it \emph{sends} messages only to its out-neighbors $j\in\mathcal{N}_{i}^{\text{out}}\triangleq\{j\mid (i,j)\in\mathcal{E}\}\cup\{i\}$, where we have included agent $i$ itself in these sets.
In this set-up, each agent knows only a portion of the entire optimization problem. Namely, agent $i$ knows the function $F_i(X)$ and the set $V$ only. Moreover, the local functions $F_i$ must be kept private by each agent and cannot be shared.
To give an insight into how the proposed scenario arises, let us introduce the distributed image segmentation problem that we will consider later on as a numerical example. Given a certain image to segment, the ground set $V$ consists of the pixels of such an image. We consider a scenario in which each of the $N$ agents in the network has access to only a portion $V_i\subseteq V$ of the image. In Figure~\ref{fig:cams} a conceptual illustration with the associated communication graph is shown. Given $V_i$, the local submodular functions $F_i$ are constructed by using some locally retrieved information, like pixel intensities. While agents do not want to share any information on how they compute local pixel intensities (due to, e.g., local proprietary algorithms), their common goal is to correctly segment the entire image.
\begin{figure}[t]
\centering
\includegraphics[width=.38\textwidth]{cameraview_mickey_copyred}
%
\caption{Distributed image segmentation set-up.
Network agents (blue nodes) have access only to a subset (colored grids)
of the whole image pixels. Directed arcs between nodes represent
the communication links.
}
\label{fig:cams}
\end{figure}
Such a distributed set-up is motivated by the modern organization of data and computational power. It is extremely common for computational units to be connected in networks, sharing some resources while keeping others private, see, e.g.,~\cite{stone2000multiagent,decker1987distributed}. Thus, distributed algorithms in which agents do not need to disclose their own private data will represent a novel disruptive technology.
This paradigm has received significant attention in the last decade in the areas of control and signal processing~\cite{ahmed2016distributed,chen2018internet}.
\paragraph*{Related work}
Submodular minimization problems can be addressed in two main ways. On the one hand, a number of combinatorial algorithms have been proposed~\cite{iwata2001combinatorial,iwata2009simple}, some based on graph-cut algorithms~\cite{jegelka2011fast} or relying on problems with a particular structure~\cite{kolmogorov2012minimizing}. On the other hand, convex optimization techniques can be exploited to face submodular minimization problems by resorting to the so-called Lov\'{a}sz extension. Many specialized algorithms have been developed in recent years by building on the particular properties of submodular functions (see~\cite{bach2013learning} and references therein).
In this paper we focus on the problem of minimizing the sum of many submodular functions, which has received attention in many works~\cite{stobbe2010efficient,kolmogorov2012minimizing,jegelka2013reflection,fix2013structured,nishihara2014convergence}.
In particular, centralized algorithms have been proposed based on smoothed convex minimization~\cite{stobbe2010efficient} or alternating projections and splitting methods~\cite{jegelka2013reflection}, whose convergence rate is studied in~\cite{nishihara2014convergence}. This problem structure typically arises, for example, in Markov Random Fields (MRF) Maximum a-Posteriori (MAP) problems~\cite{shanu2016min,fix2013structured}, a notable example of which is image segmentation.
While a vast literature on distributed continuous optimization has been developed in recent years (see, e.g.,~\cite{notarstefano2019distributed}), distributed approaches for tackling (submodular) combinatorial optimization problems have started to appear only recently.
Submodular maximization problems have been treated and approximately solved in a distributed way in several works
~\cite{kim2011distributed,mirzasoleiman2013distributed,bogunovic2017distributed,williams2017decentralized,gharesifard2017distributed,grimsman2017impact}. In particular, distributed submodular maximization subject to matroid constraints is addressed in~\cite{williams2017decentralized,gharesifard2017distributed}, while in~\cite{grimsman2017impact}, the authors handle the design of communication structures maximizing the worst case efficiency of the well-known greedy algorithm for submodular maximization when applied over networks.
Distributed algorithms for submodular minimization problems, on the other hand, have not received much attention yet.
In~\cite{jaleel2018real} a distributed subgradient method is proposed, while in~\cite{testa2018distributed} a greedy column generation algorithm is given. All these approaches involve the communication/update of the entire decision variable at each time instant. This can be an issue when the decision variable is extremely large. Thus, block-wise approaches like those proposed in~\cite{notarnicola2018distributed} should be explored.
\paragraph*{Contribution and organization}
The main contribution of this paper is the MIxing bloCKs and grEedY (MICKY) method, i.e., a distributed block-wise algorithm for solving problem~\eqref{pb:standard}. At any iteration, each agent computes a weighted average of the local copies of its neighbors' solution estimates. Then, it selects a random block and performs an ad-hoc (block-wise) greedy algorithm (based on the one in~\cite[Section~3.2]{bach2013learning}), halted as soon as the selected block is computed. Finally, based on the output of the greedy algorithm, the selected block of the local solution estimate is updated and broadcast to the out-neighbors.
The proposed algorithm is shown to produce cost-optimal solutions in expected value by showing that it is an instance of the Distributed Block Proximal Method presented in~\cite{farina2019arXivProximal}. In fact, the partial greedy algorithm performed on the local submodular cost function $F_i$ is shown to compute a block of a subgradient of its Lov\'{a}sz extension.
A key property of this algorithm is that each agent is required to update and transmit only one block of its solution estimate.
In fact, it is quite common for networks to have communication bandwidth restrictions. In these cases the entire state variable may not fit the communication channels and, thus, standard distributed optimization algorithms cannot be applied. Furthermore, the greedy algorithm can be very time consuming when an oracle for evaluating the submodular functions is not available and, hence, halting it earlier can reduce the computational load.
The paper is organized as follows. The distributed algorithm is presented and analyzed in Section~\ref{sec:algo}, and it is tested on a distributed image segmentation problem in Section~\ref{sec:numerical}.
\vspace{-2ex}
\paragraph*{Notation and definitions}
Given a vector $x\in\mathbb{R}^n$, we denote by $x_\ell$ the $\ell$-th entry of $x$.
Let $V$ be a finite, non-empty set with cardinality $|V|$. We denote by $2^V$ the set of
all its $2^{|V|}$ subsets.
Given a set $X\subseteq V$, we denote by $\mathbf{1}_X\in{\mathbb{R}}^{|V|}$
its indicator vector, defined as
$\mathbf{1}_{X_\ell}=1$ if $\ell \in X$, and $0$ if $\ell \not\in X$.
A set function $F:2^V\to \mathbb{R}$ is said to be submodular if it exhibits the diminishing marginal returns property, i.e., for all $A,B\subseteq V$, $A\subseteq B$ and for all $j\in V\setminus B$, it holds that $F(A\cup\{j\})- F(A)\geq F(B\cup\{j\})- F(B)$. In the following we assume $F(X)<\infty$ for all $X\subseteq V$ and, without loss of generality, $F(\emptyset)=0$.
Given a submodular function $F:2^V\to \mathbb{R}$, we define the associated \emph{base polyhedron} as $\mathcal{B}(F):=\{w\in{\mathbb{R}}^n\mid \sum_{\ell\in X} w_\ell \leq F(X)\; \forall X\in 2^V,\; \sum_{\ell\in V} w_\ell = F(V)\}$ and by $f(x)=\max_{w\in\mathcal{B}(F)}w^\top x$ the Lov\'{a}sz extension of $F$.
\section{Distributed algorithm}\label{sec:algo}
\subsection{Algorithm description}
In order to describe the proposed algorithm, let us introduce the following nonsmooth convex optimization problem
\begin{equation}\label{pb:lovasz}
\begin{aligned}
& \mathop{\textrm{minimize}}_{x\in [0,1]^n}
& & f(x)=\sum_{i=1}^N f_i(x)
\end{aligned}
\end{equation}
where $f_i(x):\mathbb{R}^n\to\mathbb{R}$ is the Lov\'{a}sz extension of $F_i$ for all $i\in\until{N}$. It can be shown that solving problem~\eqref{pb:lovasz} is equivalent to solving problem~\eqref{pb:standard} (see, e.g.,~\cite{lovasz1983submodular} and~\cite[Proposition~3.7]{bach2013learning}).
In fact, given a solution $x^\star$ to problem~\eqref{pb:lovasz}, a solution $X^\star$ to problem~\eqref{pb:standard} can be retrieved by thresholding the components of $x^\star$ at an arbitrary $\tau\in[0,1]$ (see~\cite[Theorem 4]{bach2019submodular}), i.e.,
\begin{align}
\label{eq:thresh}
X^\star = \{\ell\mid x^\star_\ell>\tau\}.
\end{align}
Notice that, given $F_i$ in problem~\eqref{pb:standard}, each agent $i$ in the network is able to compute $f_i$; thus, in the considered distributed set-up, problem~\eqref{pb:lovasz} can be addressed in place of problem~\eqref{pb:standard}. Moreover, since $F_i$ is submodular for all $i$, $f_i$ is a continuous, piece-wise affine, nonsmooth convex function, see, e.g.,~\cite{bach2013learning}.
In order to compute a single block of a subgradient of $f_i$, each agent $i$ is equipped with a local routine (reported next), that we call \textsc{BlockGreedy} and that resembles a local (block-wise) version of the greedy algorithm in~\cite[Section~3.2]{bach2013learning}.
This routine takes as inputs a vector $y$ and the required block $\ell$, and returns the $\ell$-th block of a subgradient $g_i$ of $f_i$ at $y$. For the sake of simplicity, suppose $\ell$ is a single-component block. Moreover, assume a routine \textsc{PartialSort} is available that generates an ordering $\{m_1,\ldots,m_p\}$ such that
$y_{m_1}\geq\ldots\geq y_{m_p}$,
$m_p=\ell$ and
$y_{r}\leq y_\ell$ for each
$r\in\until{n}\setminus\{m_1,\ldots,m_p\}$.
Then, the \textsc{BlockGreedy} algorithm reads as follows.
\begin{algorithm}
\renewcommand{\thealgorithm}{}
\floatname{algorithm}{Routine}
\begin{algorithmic}
\small
\inpt $y$, $\ell$
\State Obtain a partial order via
$$
\{m_1,\ldots,m_{p-1}, m_{p}=\ell\}= \textsc{PartialSort} (y)
$$
%
%
\State Evaluate $g_{i,\ell}$ (with $\ell=m_p$) as %
\begin{align*}
g_{i,\ell}
\!\!=\!
\begin{cases}
F_i(\{\ell\}),
& \text{if } p = 1
\\
F_i(\{m_1, \ldots, m_{p-1}, \ell \}) \!-\! F_i(\{m_1, \ldots, m_{p-1} \}), \!\!
& \text{otherwise}%
\end{cases}
\end{align*}
%
\outpt $g_{i,\ell}$
\end{algorithmic}
\caption{\textsc{BlockGreedy}$(y,\ell)$ for agent $i$}
\label{alg:local_greedy}
\end{algorithm}
The MICKY algorithm works as follows. Each agent stores a local solution estimate $x_i^k$ of problem~\eqref{pb:lovasz} and, for each in-neighbor $j\in\mathcal{N}_{i}^{\text{in}}$, a local copy of the corresponding solution estimate $x_j^k{\mid}_i$. At the beginning, each node selects the initial condition $x_i^0$ at random in $[0,1]^n$ and shares it with its out-neighbors. We associate with the communication graph $\mathcal{G}$ a weighted adjacency matrix $\mathcal{W}\in\mathbb{R}^{N\times N}$ and we denote by $w_{ij}=[\mathcal{W}]_{ij}$ the weight associated with the edge $(j,i)$.
At each iteration $k$,
agent $i$ performs three tasks:
\begin{enumerate}[label=(\roman*)]
\item it computes a weighted average $y_i^k = \sum_{j\in\mathcal{N}_{i}^{\text{in}}} w_{ij} x_j^k{\mid}_i$;
\item it picks randomly (with arbitrary probabilities bounded away from $0$) a block $\ell_i^k\in\until{n}$ and performs the \textsc{BlockGreedy}$(y_i^k,\ell_i^k)$;
\item it updates $x_{i,\ell_i^k}^{k+1}$ according to~\eqref{eq:up_alg}, where $\Pi_{[0,1]}[\cdot]$ is the projector on the set $[0,1]$ and $\alpha_i^k\in(0,1)$, and broadcasts it to its out-neighbors $j\in\mathcal{N}_{i}^{\text{out}}$.
\end{enumerate}
Agents halt the algorithm after $K>0$ iterations and recover the local estimates $X_i^{\text{end}}$ of the set solution to problem~\eqref{pb:standard} by thresholding the value of $x_i^K$ as in~\eqref{eq:thresh}. Notice that, in order to avoid introducing additional notation, we have assumed each block of the optimization variable to be scalar (so that blocks are selected in $\until{n}$). However, blocks of arbitrary sizes can be used (as shown in the subsequent analysis).
A pseudocode of the proposed algorithm is reported in the next table.
\begin{algorithm}[H]
\renewcommand{\thealgorithm}{}
\floatname{algorithm}{Algorithm}
\begin{algorithmic}
\small
\init $x_i^0$
\item[]
\For{$k=1,\dots,K-1$}
\State \textsc{Update} for all $j\in\mathcal{N}_{i}^{\text{in}}$
\begin{equation}\label{eq:xl_update}
x_{j,\ell}^k{\mid}_i =
\begin{cases}
x_{j,\ell}^k, &\text{if }\ell=\ell_j^{k-1}\\
x_{j,\ell}^{k-1}{\mid}_i, &\text{otherwise}
\end{cases}
\end{equation}
\State \textsc{Compute}
\begin{equation}
y_i^k = \sum_{j\in\mathcal{N}_{i}^{\text{in}}} w_{ij} x_j^k{\mid}_i
\end{equation}
\State \textsc{Pick} randomly a block $\ell_i^{k}\in\until{n}$ %
\State \textsc{Compute}
\begin{equation}
g_{i,\ell_i^k}^k = \textsc{BlockGreedy}(y_i^k,\ell_i^k)
\end{equation}
\State \textsc{Update}
\begin{equation}\label{eq:up_alg}
x_{i,\ell}^{k+1} = \begin{cases}
\Pi_{[0,1]}\left[x_{i,\ell_i^k}^{k} - \alpha_i^k g_{i,\ell_i^k}^k\right]&\text{if }\ell=\ell_i^k\\
x_{i,\ell}^{k}&\text{otherwise}
\end{cases}
\end{equation}
\State \textsc{Broadcast} $x_{i,\ell_i^k}^{k+1}$ to all $j\in\mathcal{N}_{i}^{\text{out}}$
\EndFor
\textsc{Thresholding}
\begin{equation}\label{eq:reconstruct}
X_i^{\text{end}} = \{\ell\mid x_{i,\ell}^K > \tau\}
\end{equation}
\end{algorithmic}
\caption{MICKY\! (Mixing\! Blocks\! and\! Greedy\! Method)}\label{alg:DSM}
\end{algorithm}
\subsection{Discussion}
The proposed algorithm possesses many interesting features. Its distributed nature requires agents to communicate only with their direct neighbors, without resorting to multi-hop communications. Moreover, all the local computations involve locally defined quantities only. In fact, stepsize sequences and block drawing probabilities are locally defined at each node.
Regarding the block-wise updates and communications, they bring benefits in two areas.
Communicating single blocks of the optimization variable, instead of the entire one, can significantly reduce the communication bandwidth required by each agent in broadcasting its local estimate. This makes the proposed algorithm implementable in networks with communication bandwidth restrictions. Moreover, the classical greedy algorithm requires evaluating the submodular function $|V|$ times in order to produce a subgradient. When $|V|$ is very large and an oracle for evaluating the functions $F_i$ is not available, this can be a very time consuming task.
For example, in the example application in Section~\ref{sec:numerical}, we will resort to the minimum graph cut problem. Evaluating the value of a cut for a graph in which $E\subseteq V\times V$ is the set of arcs, requires a running-time $O(|E|)$.
%
In the \textsc{BlockGreedy} routine, in contrast with what happens in the standard greedy routine,
the sorting operation is (possibly) performed only on a part of the entire vector $y$, i.e., until the $\ell$-th component has been sorted. Thus, our routine evaluates the $\ell$-th component of the subgradient in at most two evaluations of the submodular function.
\subsection{Analysis}
In order to state the convergence properties of the proposed algorithm,
let us make the following two assumptions on the communication graph and the associated weighted adjacency matrix $\mathcal{W}$.
\begin{assumption}[Strongly connected graph]\label{assumption:graph}
The digraph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{W})$ is strongly connected.\oprocend
\end{assumption}
\begin{assumption}[Doubly stochastic weight matrix]\label{assumption:stochastic}
For all $i,j\in\mathcal{V}$, the weights $w_{ij}$ of the weight matrix $\mathcal{W}$ satisfy
\begin{enumerate}[label=(\roman*)]
\item if $i\neq j$, $w_{ij}>0$ if and only if $j\in\mathcal{N}_{i}^{\text{in}}$;
\item there exists a constant $\eta>0$ such that $w_{ii}\geq\eta$ and if $w_{ij}>0$, then $w_{ij}\geq\eta$;
\item $\sum_{j=1}^N w_{ij}=1$ and $\sum_{i=1}^N w_{ij}=1$.\oprocend
\end{enumerate}
\end{assumption}
The above two assumptions are very common when designing distributed optimization algorithms. In particular, Assumption~\ref{assumption:graph} guarantees that the information is spread through the entire network, while Assumption~\ref{assumption:stochastic} assures that each agent gives sufficient weight to the information coming from its in-neighbors.
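As an example, for the special case of an undirected (bidirectional) communication graph, weights satisfying Assumption~\ref{assumption:stochastic} can be obtained with the classical Metropolis--Hastings rule. The sketch below is one standard construction and is not prescribed by the algorithm itself.
\begin{verbatim}
import numpy as np

def metropolis_weights(adj):
    # adj: (N, N) symmetric 0/1 adjacency matrix without self-loops.
    N = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                # Symmetric by construction, hence doubly stochastic.
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # self-weight completes the row sum
    return W
\end{verbatim}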
Let $\bar{x}^k\triangleq\frac{1}{N}\sum_{i=1}^N x_i^k$ be the average over the agents of the local solution estimates at iteration $k$ and define $f_{best}(x_i^k)\triangleq\min_{r\leq k} \mathbb{E}[f(x_i^r)]$.
Then, in the next result, we show that, by cooperating through the proposed algorithm, all the agents agree on a common solution and the produced sequences $\{x_i^k\}$ are asymptotically cost optimal in expected value when $K\to\infty$.
\begin{theorem}\label{thm:conv}
Let Assumptions~\ref{assumption:graph} and~\ref{assumption:stochastic} hold and let $\{x_i^k\}_{k\geq 0}$ be the sequences generated through the MICKY algorithm. Then,
%
if the sequences $\{\alpha_i^k\}$ satisfy
\begin{equation}
\sum_{k=0}^\infty \alpha_i^k = \infty, \qquad \sum_{k=0}^\infty (\alpha_i^k)^2 < \infty, \qquad \alpha_i^{k+1}\leq\alpha_i^k\label{eq:alpha}
\end{equation}
for all $k$ and all $i\in\mathcal{V}$, it holds that,
\begin{equation}\label{eq:consensus}
\lim_{k\to\infty}\mathbb{E}[\|x_i^k-\bar{x}^k\|]=0,
\end{equation}
and
\begin{equation}\label{eq:optimality}
\lim_{k\to\infty}f_{best}(x_i^k)=f(x^\star),
\end{equation}
where $x^\star$ is the optimal solution to~\eqref{pb:lovasz}.
%
\end{theorem}
\begin{proof}
By the same arguments as in~\cite[Lemma~3.1]{farina2019arXivProximal}, it can be shown that $x_j^k{\mid}_i=x_j^k$ for all $k$ and all $i,j\in\mathcal{V}$. Then~\eqref{eq:consensus} follows from~\cite[Lemma~5.11]{farina2019arXivProximal}.
Moreover, as anticipated, it can be shown that $g_{i,\ell_i^k}^k$ is the $\ell_i^k$-th block of a subgradient of the function $f_i(x)$ in problem~\eqref{pb:lovasz} (see, e.g.,~\cite[Section~3.2]{bach2013learning}).
%
In fact, being $f_i$ defined as the support function of the base polyhedron $\mathcal{B}(F_i)$, i.e., $f_i(x)=\max_{w\in\mathcal{B}(F_i)}w^\top x$, the greedy algorithm~\cite[Section~3.2]{bach2013learning} iteratively computes a subgradient of $f_i$ component by component. Moreover, subgradients of $f_i$ are bounded by some constant $G<\infty$, since every component of a subgradient of $f_i$ is computed as the difference of $F_i$ over two different subsets of $V$.
%
Given that, the proposed algorithm can be seen as a special instance of the Distributed Block Proximal Method in~\cite{farina2019arXivProximal}. Thus, since Assumptions~\ref{assumption:graph} and~\ref{assumption:stochastic} hold, it inherits all the convergence properties of the Distributed Block Proximal Method, and under the diminishing-stepsize condition~\eqref{eq:alpha} the result in~\eqref{eq:optimality} follows (see~\cite[Theorem~5.15]{farina2019arXivProximal}). \oprocend
\end{proof}
Notice that the result in Theorem~\ref{thm:conv} does not say anything about the convergence of the sequences $\{x_i^k\}$, but only states that if diminishing stepsizes are employed, asymptotically these sequences are consensual and cost optimal in expected value.
Despite that, from a practical point of view, two facts typically happen. First, agents approach consensus, i.e., for all $i\in\until{N}$, the value $\|x_i^k-\bar{x}^k\|$ becomes small, extremely fast, so that they all agree on a common solution. Second, if the number of iterations $K$ in the algorithm is sufficiently large, the value of $x_i^K$ is a good solution to problem~\eqref{pb:lovasz}.
Then, given $x_i^K$, each agent can reconstruct a set solution to problem~\eqref{pb:standard} by using~\eqref{eq:reconstruct} and, in order to obtain the same solution for all the agents, we consider a unique threshold value, known to all the agents, $\tau\in[0,1]$.
\begin{remark}
Notice that, by resorting to classical arguments, it can be easily shown from the analysis in~\cite{farina2019arXivProximal} that the convergence rate of $f_{best}$ in Theorem~\ref{thm:conv} is sublinear (with the explicit rate depending on the actual stepsize sequence). Moreover, if constant stepsizes are employed, $f_{best}$ converges in expected value to the optimal value, up to a constant error, with rate $O(1/k)$~\cite[Theorem~2]{farina2019arXivProximal}.
\end{remark}
\section{Cooperative image segmentation}\label{sec:numerical}
Submodular minimization has been widely applied to
computer vision problems such as image classification, segmentation
and reconstruction, see,
e.g.,~\cite{stobbe2010efficient,jegelka2013reflection,
greig1989exact}.
In this section, we consider a binary image segmentation problem in which $N=8$ agents have to cooperate in order to separate an object from the background in an image of size $D\times D$ pixels (with $D=64$). Each agent has access only to a portion of the entire image, see Figure~\ref{fig:agent_start}, and can communicate according to the graph reported in the figure.
Before giving the details of the distributed experimental set-up let us introduce how such a problem is usually treated in a centralized way, i.e., by casting it into a $s$--$t$ minimum cut problem.
\subsection{$s$--$t$ minimum cut problem}
Assume the entire $D\times D$ image to be available for segmentation, and denote by $V=\until{D^2}$ the set of pixels.
As shown, e.g., in~\cite{greig1989exact,boykov2006graph} this problem can be reduced
to an equivalent $s$--$t$ minimum cut problem, which can be approached by submodular minimization techniques.
\begin{figure}[]
\centering
\includegraphics[width=0.85\columnwidth]{agent_start_graph_reduced} %
\caption{Cooperative image segmentation. The considered communication graph is depicted on top, where agents are represented by blue nodes. Under each node, the portion of the image accessible by the corresponding agent is depicted.}
\label{fig:agent_start}
\end{figure}
In more detail, this approach is based on the construction of a weighted digraph $G_{s-t} = (V_{s-t},E_{s-t}, A_{s-t})$, where $V_{s-t}=\{1,\ldots,D^2,s,t\}$ is the set of nodes, $E_{s-t}\subseteq V_{s-t}\times V_{s-t}$ is the edge set and $A_{s-t}$ is a positive weighted adjacency matrix.
There are two sets of directed edges $(s,p)$ and $(p,t)$, with positive weights $a_{s,p}$ and $a_{p,t}$ respectively, for all $p\in V$. Moreover, there is an undirected edge $(p,q)$ between any two neighboring pixels with weight $a_{p,q}$.
The weights $a_{s,p}$ and $a_{p,t}$ represent individual penalties for assigning pixel $p$ to the object and to the background respectively. On the other hand, given two pixels $p$ and $q$, the weight $a_{p,q}$ can be interpreted as a penalty for a discontinuity between their intensities.
In order to quantify the weights defined above, let us denote by $I_p\in [0,1]$ the intensity of pixel $p$. Then, see, e.g.,~\cite{boykov2006graph}, $a_{p,q}$ is computed as
\begin{align*}
a_{p,q} = e^{-\frac{(I_p-I_q)^2}{2\sigma^2}},
\end{align*}
where $\sigma$ is a constant modeling, e.g., the variance of the camera noise.
Moreover, weights $a_{s,p}$ and $a_{p,t}$ are respectively computed as
\begin{align*}
a_{s,p} =& -\lambda \log\text{P}(x_p = 1)\\
a_{p,t} =& -\lambda \log\text{P}(x_p = 0),
\end{align*}
where $\lambda>0$ is a constant and $\text{P}(x_p=1)$ (respectively $\text{P}(x_p=0)$) denotes the probability of pixel $p$ to belong to the foreground (respectively background).
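A direct transcription of these weight definitions reads as follows; the small constant guarding the logarithm and all names are our own illustrative choices.
\begin{verbatim}
import numpy as np

def boundary_weight(I_p, I_q, sigma):
    # Penalty for an intensity discontinuity between neighboring pixels.
    return np.exp(-(I_p - I_q) ** 2 / (2.0 * sigma ** 2))

def terminal_weights(prob_object, lam, eps=1e-12):
    # a_{s,p}: penalty for assigning p to the object;
    # a_{p,t}: penalty for assigning p to the background.
    a_sp = -lam * np.log(max(prob_object, eps))
    a_pt = -lam * np.log(max(1.0 - prob_object, eps))
    return a_sp, a_pt
\end{verbatim}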
The goal of the $s$--$t$ minimum cut problem is to find a subset $X\subseteq V$ of pixels such that the sum of the weights of the edges from $X\cup\{s\}$ to $\{t\}\cup V\setminus X$ is minimized.
\subsection{Distributed set-up}
In the considered distributed set-up, $N=8$ agents are connected
according to a strongly-connected Erd\H{o}s-R\'{e}nyi random digraph and each of them has access only to a portion of the image (see Figure~\ref{fig:agent_start}). In this set-up, clearly, each agent can assign weights only to some edges in $E_{s-t}$ so that, it cannot segment the entire image on its own.
Let $V_i\subseteq V$ be the set of pixels seen by agent $i$. Each node $i$ assigns a local intensity $I^i_p$ to each pixel $p\in V_i$. Then, it computes its local weights as
%
\begin{align*}
a^i_{p,q} &=
\begin{cases}
e^{-\frac{(I^i_p-I^i_q)^2}{2\sigma^2}}, & \text{if } p,q\in V_i\\
0, & \text{otherwise}
\end{cases}\\
a^i_{s,p} &=
\begin{cases}
-\lambda \log\text{P}(x^i_p = 1), & \text{if } p\in V_i\\
0, & \text{otherwise}
\end{cases}\\
a^i_{p,t} &=
\begin{cases}
-\lambda \log\text{P}(x^i_p = 0), & \text{if } p\in V_i\\
0, & \text{otherwise}
\end{cases}
\end{align*}
Given the above locally defined weights, each agent $i$ constructs its private submodular function $F_i$ as
\begin{align}
F_i(X)=
\sum_{\substack{p\in X \\ q\in V\setminus X}}\!\! a^i_{p,q}
+\!\!
\sum_{q\in V\setminus X}\!\! a^i_{s,q}
+
\sum_{p\in X} a^i_{p,t}
-\!\!
\sum_{q\in V}\!\! a^i_{s,q}.\label{eq:submod_local}
\end{align}
Here, the first term takes into account the edges from $X$ to $V\setminus X$, the
second one those from $s$ to $V\setminus X$,
and the third one those from $X$ to $t$. The last term is a normalization term
guaranteeing $F_i(\emptyset)=0$.
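For concreteness, a naive evaluation of~\eqref{eq:submod_local} is sketched below. Sparse data structures would be used in practice; the dictionary-based interface is our own assumption, and the weights $a^i_{p,q}$ are assumed to be stored for both orientations of each undirected edge.
\begin{verbatim}
def F_i(X, V, a_pq, a_s, a_t):
    # X: set of pixels assigned to the object; V: set of all pixels.
    # a_pq: dict {(p, q): weight}; a_s: dict {p: a_{s,p}};
    # a_t: dict {p: a_{p,t}}.
    comp = V - X
    cut = sum(w for (p, q), w in a_pq.items() if p in X and q in comp)
    source = sum(a_s.get(q, 0.0) for q in comp)  # edges from s to V \ X
    sink = sum(a_t.get(p, 0.0) for p in X)       # edges from X to t
    norm = sum(a_s.values())                     # makes F_i(empty) == 0
    return cut + source + sink - norm
\end{verbatim}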
Then, by plugging~\eqref{eq:submod_local} in problem~\eqref{pb:standard}, the optimization problem that the agents have to cooperatively solve in order to segment the image is
\begin{equation*}
\begin{aligned}
&\mathop{\textrm{minimize}}_{X\subseteq V}\sum_{i=1}^{N}\left(
\sum_{\substack{p\in X \\ q\in V\setminus X}}\!\! a^i_{p,q}
+\!\!
\sum_{q\in V\setminus X}\!\! a^i_{s,q}
+
\sum_{p\in X} a^i_{p,t}
-\!\!
\sum_{q\in V}\!\! a^i_{s,q}\right)\!\!.
\end{aligned}
\end{equation*}
\begin{figure}
\begin{center}
\includegraphics[width=.7\columnwidth]{agent_recon_graph_reduced.jpg} %
\caption{Cooperative image segmentation. Evolution of the local solution estimates for each agent in the network.}
\label{fig:agent_stamps}
\end{center}
\end{figure}
We applied the MICKY distributed algorithm to this set-up and split the optimization variable into $40$ blocks.
In order to mimic possible errors in the construction of the local weights,
we added some random noise to the image.
We implemented the MICKY algorithm by using the Python package DISROPT~\cite{farina2019disropt} and we ran it for $K=1000$ iterations. A graphical representation of the results is reported in Figure~\ref{fig:agent_stamps}.
Each row is associated with one network agent, while each column is associated with a different time stamp. In more detail, we show the initial condition at time $k=0$ and the candidate (continuous) solution after $k\in\{100,200,300,400,500,600\}$ iterations. The last column represents the solution $X_i^{\text{end}}$ of each agent, obtained by thresholding $x_i^{k}$ with $k=1000$ and $\tau=0.5$.
As shown in Figure~\ref{fig:agent_stamps}, the local solution set estimates $X_i^{\text{end}}$ are almost identical. Moreover, the connectivity structure of the network clearly affects the evolution of the local estimates.
Finally, the evolution of the cost error is depicted in Figure~\ref{fig:costconv}, where $X_i^k\triangleq \{\ell\mid x_{i,\ell}^k >\tau\}$.
\begin{figure}[h!]
\centering
\includegraphics[scale=.78]{convrate_mickey64}
%
\caption{Numerical example. Evolution of the error between the cost computed at the (thresholded) local solution estimates and the optimal cost.}
\label{fig:costconv}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In this paper we presented MICKY, a distributed algorithm for minimizing the sum of many submodular functions without any central unit. It involves random block updates and communications, thus requiring a reduced local computational load and allowing its deployment on networks with low communication bandwidth (since it requires
a small amount of information to be transmitted at each iteration).
Its convergence in expected value has been shown under mild assumptions. The MICKY algorithm has been tested on a cooperative image segmentation problem in which each agent has access to only a portion of the entire image.
\section{Introduction}
\label{Introduction}
The ability to perceive the environment and build structured maps that can be used in path and motion planning is among the most critical capabilities of mobile robots. For legged robots, due to the discretely changing footholds, such an ability becomes even more vital. The most significant advantage of legged robots over traditional wheeled ones is their adaptability to complex terrains. Obviously, being able to extract terrain information is a prerequisite for realizing such advantages. Hence, investigations on terrain understanding and characterization are of great importance for fully exploiting legged robots.
The problem of establishing usable maps for path and trajectory planning for mobile robots has been extensively studied in both the robotics and computer vision communities. Especially, simultaneous localization and mapping (SLAM) has long been one of the hottest topics in robotics~\cite{lee_indoor_2012,geneva_lips_2018,yang_monocular_2019,zuo_lic-fusion_2020,sun_plane-edge-slam_2021}. Three-dimensional (3D) reconstruction, a classical problem in computer vision that has been attracting increasing research attention recently~\cite{yun_3d_2007,keller_real-time_2013,liu_3d_2016,pan_dense_2019}, is fundamentally of the same mathematical nature. The specific terrain mapping problem for legged robots differs from the above standard problems in various aspects. Among them, the requirement of providing guidance on how to select future footholds is perhaps the most distinctive feature. How to accurately, reliably and efficiently establish a structured map that encodes this feature has not been given adequate research attention.
\begin{figure}[t]
\centering
\includegraphics[height=5.5cm]{figures/Top_figure.png}
\caption{{\small Polytopic planar region characterization results.}}
\label{fig:fig_one}
\end{figure}
Trying to bridge the aforementioned gap, we focus on the planar region extraction and polytopic approximation problem in this paper. Inspired by the fact that regions of candidate footholds serve as constraints in the foothold planning problem of legged robots, we aim to construct a terrain map consisting of only polytopes, which renders the constraints linear and therefore tractable. Given a sequence of depth images and the associated camera frames, our proposed approach first segments the candidate planes within each depth image. Then, planar regions extracted from different depth images that actually correspond to the same physical plane are merged to respect the integrity of the real planar regions. Once the pixel-wise planar region characterizations are constructed, we develop a polygonal approximation to balance between accuracy and tractability. Finally, the potentially non-convex polygons are convexified via polytopic partitions to generate the desired polytopic representation of the terrain.
\subsection{Related Works}
As briefly mentioned above, 3D reconstruction and Simultaneous Localization And Mapping (SLAM) both aim to generate representations of the environment, which lay the foundation for more specialized terrain mapping strategies. Investigations on 3D reconstruction mainly aim to establish the precise reconstruction of the 3D model of objects or scenes \cite{yun_3d_2007,keller_real-time_2013,liu_3d_2016,pan_dense_2019}, whose output representation of the environment remains complicated. On the other hand, SLAM, especially indoor SLAM, utilizes planes as landmarks to achieve simultaneous motion estimation and environment mapping~\cite{lee_indoor_2012,geneva_lips_2018,yang_monocular_2019,zuo_lic-fusion_2020,sun_plane-edge-slam_2021}. However, 3D reconstruction approaches use 3D point clouds to represent environment features and require high computational and storage cost. SLAM approaches focus more on plane fitting and matching rather than on accurately representing the complete planar regions with structured boundary characterization. Neither of them can be directly applied to help with the foothold planning problem for legged robots.
Following the core idea of SLAM, various perception schemes for accurate terrain mapping have been proposed for legged robot locomotion recently. Height map based strategies are among the most widely adopted schemes~\cite{kolter_control_2008,magana_fast_2019,villarreal_mpc-based_2020,fankhauser_robot-centric_2014,fankhauser_robust_2018}.
Kolter et al. \cite{kolter_control_2008} presented a planning architecture that first generates the height map from a 3D model of the terrain. Such an idea has been extended in~\cite{fankhauser_robot-centric_2014,fankhauser_robust_2018}, where robot-centric elevation maps of the terrain from onboard range sensors are constructed. Excellent experimental results reported in these works have proven that projecting the point clouds onto the discrete height map is an efficient way to represent rough terrain. Nonetheless, all these schemes require an additional step of constructing an associated cost map for foothold planning, which is non-trivial and time consuming. Furthermore, building the height map does not extract the direct information required for legged robot locomotion, and the grid structure of height maps imposes limitations on subsequent foothold selection.
The planar region based terrain characterization has also been studied in the literature. Numerous methods for high speed multi-plane segmentation with RGB-D camera readings have been proposed recently \cite{dong_efficient_2018,hulik_fast_2012,proenca_fast_2018,feng_fast_2014,roychoudhury_plane_2021}. Feng et al. \cite{feng_fast_2014} applied the Agglomerative Hierarchical Clustering (AHC) method on the graph constructed by groups of point clouds for plane segmentation. Proença and Gao \cite{proenca_fast_2018} presented a cell-wise region growing approach to extract plane and cylinder segments from depth images. These real-time plane segmentation methods are well suited to robotic applications, but the limited FoV of depth cameras leads to incomplete detection of physical planes, which prevents us from simply extracting planar regions from a single depth measurement.
Building upon the idea of planar region characterization, pioneering efforts on further approximating the planar regions via polygonal or polytopic regions and applying them to practical foothold planning schemes have been made. Deits et al.~\cite{akin_computing_2015} presented an iterative regional inflation method to represent safe areas by convex polytopes formed by obstacle boundaries. While this approach is efficient, a hand-selected starting point is needed to grow the inscribed ellipsoid of obstacles. Moreover, it is hard to represent complex obstacle-free regions by the convex polytope obtained by inflating an inscribed ellipsoid. Bertrand et al.~\cite{bertrand_detecting_2020} constructed an octree to store the point clouds from LIDAR readings, and grouped the nodes into planar regions which are then used for footstep planning of humanoid robots. This work effectively extracts horizontal planar regions from LIDAR scans, but the algorithm can only achieve a frame rate of about 2\,Hz and the resulting representation is not guaranteed to be convex, limiting its applications in reactive foothold planning for legged locomotion.
\subsection{Contributions}
The main contributions of this paper are summarized as follows. First, originating from the specific application of foothold planning for legged robots, the proposed approach characterizes the terrain environment via a polytopic representation that ensures direct applicability to legged locomotion. Second, by combining plane segmentation and merging techniques, the proposed approach reconstructs the complete terrain with a limited field of view (FOV), effectively removing various restrictions on the used sensory system and hence expanding the applicability of the proposed approach. Third, through careful integration of the participating modules, the proposed approach is very efficient, capable of running at a higher frequency than the intrinsic frame rate of the sensory system in our experimental tests, fulfilling the requirement for subsequent planning for legged locomotion problems.
\begin{figure*}[tbp]
\centering
\includegraphics[height=6cm]{figures/overview.png}
\caption{Overview of the proposed method. (a) is a sequence of depth images $\mathcal{F}$ with associated camera frames $\mathcal{W}$. (b) illustrates the planes $\mathscr{P}$ segmented in each frame. After plane merging, the extracted planar regions in the world frame are shown in (c). Black lines in (d) show the boundaries of the convex polytopes $C$ obtained by convexification.}
\label{fig:overview}
\end{figure*}
\section{Problem Description and Solution Overview}
\label{Problem Description and Overview of the Approach}
The primary objective of the planar region identification for legged locomotion is to acquire a set of polytopes characterizing feasible supports for foothold selection. In this paper, we particularly focus on the case where only depth/point cloud measurements are available.
To rigorously formulate the planar region identification problem, we consider a sequence of depth image measurements denoted by $\mathcal{F} = \{F_1,F_2,\ldots, F_N\}$ where $F_i\in \mathbb{R}^{U\times V}, \ \forall i=1,\ldots,N$ denotes the depth image of the $i$-th measurement, and a sequence of associated camera frames $\mathcal{W} = \{W_1,W_2,\ldots,W_N\}$ expressed in an inertial frame. The planar region identification problem studied in this paper aims to construct the collections of low-dimensional polytopic regions $\mathscr{P} = \{\P_1,\P_2,\ldots,\P_N\}$ with $\P_i = \{P_i^1,\ldots,P_i^M\}$ where $P_i^k\subset \mathbb{R}^3, \ \forall k=1,\ldots,M$ is a polytope in the collection. For a generic polytope $P$ considered in this paper, we adopt the following representation
\eq{\label{eq:poly_rep} P = (\mathcal{V}, N, \bar{p}, \vec{n}, \text{MSE}).} In this representation, $\mathcal{V}$ and $\vec{n}$ jointly specifies the polytope, where $\mathcal{V}$ is the set of vertices of the polytope and $\vec{n}$ denotes the normal vector of the two-dimensional polytope in three-dimensional space. The quantities $N, \bar{p}$ and $\text{MSE}$ are introduced to relate the polytope with the point cloud measurement, where $N$ is the number of points associated with the polytope, $\bar{p}$ is the average of all points associated with the polytope, and $\text{MSE}$ is the mean square error between the sampled points and the identified polytope.
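In an implementation, this representation may be captured by a small record type, for instance as in the following Python sketch (field names are our own):
\begin{verbatim}
from dataclasses import dataclass
import numpy as np

@dataclass
class Polytope:
    vertices: np.ndarray  # (m, 3) vertex set V of the polytope
    n_points: int         # N: number of supporting point-cloud points
    centroid: np.ndarray  # (3,) average of the supporting points
    normal: np.ndarray    # (3,) unit normal of the planar polytope
    mse: float            # mean square error of the plane fit
\end{verbatim}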
Given the above setup, the planar region extraction and convexification problem studied in this paper can be rigorously formulated below.
\begin{problem}
Given the sequence of depth image measurements $\mathcal{F}$ and the associated camera frames $\mathcal{W}$, find the polytopic representation $\mathscr{P}$ for all planar regions contained in the overall perceived environment.
\end{problem}
\begin{remark}
Practically speaking, the sequence of depth images is commonly indexed by time as well, making the identification problem an estimation problem in nature. Such a viewpoint is widely adopted in the simultaneous localization and mapping (SLAM) community. In essence, the problem we study in this paper can be viewed as a special SLAM problem with pre-specified structures of the map characterization.
\end{remark}
To address the planar region identification problem described above, we adopt a two-stage strategy in this paper. At the first stage, the planar regions appeared in a sequence of depth images are identified and merged. Typically, these identified planar regions are expressed with pixel-wise representation, which is overly complicated and not utilizable for applications in foothold planning for legged robots. In view of these issues, we
subsequently approximate the extracted planar regions via polytopic representations at the second stage, which eventually gives the desired polytopic representations of all candidate planar regions in the perceived environment.
It is worth noting that finding all possible planar regions from a depth image or point cloud is a combinatorial problem, which is fundamentally challenging. Furthermore, the complexity of real-world scenarios makes it practically impossible to perfectly classify all points in the point cloud to some polytopic region. In view of these theoretical and practical difficulties, our proposed solution leverages the special structure of our polytopic representation and is efficient and reliable. In the following subsection, an overview of the proposed solution is first provided.
\subsection{Overview of the Proposed Framework}
The schematic diagram of the proposed solution is depicted in Fig.~\ref{fig:overview}. As briefly mentioned before, the overall solution consists of two major parts. The planar region detection module takes the raw depth image measurements $\mathcal{F}$ and the associated camera frames $\mathcal{W}$ as inputs. These inputs are first transformed into a point cloud representation with neighborhood information encoded. With this specialized point cloud data, classical multi-plane segmentation techniques can be applied to acquire both the mask/boundary information and the parameters (including center $\bar{p}$, normal vector $\vec{n}$ and mean square error $\text{MSE}$) of the candidate planes from one depth image. As we receive upcoming depth images, the candidate planes segmented from different images are transformed to a common inertial frame through the camera frames $\mathcal{W}$. In order to improve the integrity and accuracy of the planar region representation, newly extracted planes are fused with historical planar regions. For coplanar and connected planes, we first merge their parameters, then rasterize them into a single 2D plane and merge their boundaries by constructing their masks.
Once the extraction of planar regions is completed, we conservatively approximate the planar regions by a set of convex polygons $C=\{\vec{n},\mathcal{V}_C\}$. Such an approximation not only reduces the complexity of representing a polygonal region, but also offers tractability for future foothold planning schemes. Finally, after converting all convex polygons into the robot's local frame $R$, a robot-centric polytopic planar region map that can be used in foothold planning is constructed.
\section{Planar Region Extraction}
\label{Planar Region Extraction}
To extract all planar regions from various depth images of the environment, we first need to identify all planes in a single depth image. Then, the identified planar regions belonging to a common physical plane are merged to obtain a minimal representation. By adopting an efficient cell-based region growing algorithm, all planes in one frame are labelled as point clusters in real time. We then compute the plane parameters and extract the boundaries of the planar regions. After that, we store the planes with a low-dimensional representation in a fixed frame. For all planes in the frame detected from different depth images, we propose a rasterization-based method to efficiently merge newly extracted plane segments and restore the original planar region in the physical world. In the rest of this section, the details on these two major steps for plane extraction are presented.
\subsection{Single-Image Plane Segmentation}\label{Planar Region Extraction-A}
Single-image plane segmentation is a classical problem that has been extensively studied in the computer vision literature. For applications in robotics, various practical restrictions call for specialized treatment on this problem. Taking into account of the real-time implementable requirement, we follow the idea proposed in~\cite{proenca_fast_2018}.
Given a frame of depth image captured by an RGB-D camera, the method first generates the organized point cloud data segmented into grid cells. Then, Principal Component Analysis (PCA) is applied in each cell to compute the principal axes of the 3D point cluster and accomplish plane fitting. The seed cells are selected if their mean square errors (MSE) are lower than a prescribed threshold and there are no discontinuities inside the cells. Given the seed cells, cell-wise region growing is performed in the order determined by the histogram of planar cell normals. Note that, by utilizing the grid structure of the point clouds in image format, normal vector computation and region growing operate on point clusters, which significantly reduces the computational cost and therefore accelerates the processing. After cell-wise region growing, the coarse cell-level boundaries of the obtained planes are then refined by performing pixel-wise region growing along the boundary extracted by morphological operations. With the point-wise refinement, the accuracy of the segmented plane boundary is greatly improved.
Now, every plane in the camera frame is labeled as a point cluster $\{p_i\}_{i=1}^k \subset{\mathbb{R}^3}$. The normal vector $\vec{n}$ is the cross product of the first two principal axes computed through PCA. The centroid of the plane is defined as:
$$ \bar{p} = \frac{1}{k} \sum_{i=1}^k{p_i} $$
The mean square error is calculated by:
$$ \text{MSE} = \frac{1}{k} \sum_{i=1}^k{(\vec{n} \cdot p_i - b)^2} $$
where $b = \vec{n} \cdot \bar{p}$ is a bias element of the plane.
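A compact sketch of this per-cluster plane fit, using the eigen-decomposition of the point covariance (equivalent, up to sign, to taking the cross product of the first two principal axes), is given below; variable names are ours.
\begin{verbatim}
import numpy as np

def fit_plane(points):
    # points: (k, 3) array of 3D points of one cluster.
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    normal = eigvecs[:, 0]                  # direction of least variance
    b = normal @ centroid                   # bias element of the plane
    mse = np.mean((points @ normal - b) ** 2)
    return normal, centroid, mse
\end{verbatim}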
With labeled pixels in the camera space, a digitized binary
image is obtained for each plane. We then extract the boundary of each plane in the camera space. Given the vertices $\mathcal{V}_{img}=\{u_i,v_i\}_{i=1}^p$ of the 2D contour extracted in the image, 3D vertices $\mathcal{V}_{cam}=\{x_i,y_i,z_i\}_{i=1}^p$ in the camera frame can be calculated through the prospective camera model and the plane equation:
$$ \begin{cases}
x = z(u-c_x)/f \\ y = z(v-c_y)/f \\ z = \frac{\vec{n} \cdot \bar{p}}{n_x(u-c_x)/f+n_y(v-c_y)/f+n_z}
\end{cases} $$
where $K=\begin{bmatrix} f&0&c_x\\ 0&f&c_y\\ 0&0&1 \end{bmatrix}$ is the intrinsic matrix of the camera, and $\vec{n}=(n_x,n_y,n_z)$ is the normal vector of the plane.
For all planes extracted in one frame, the vertices $\mathcal{V}_{cam}$, normal vector $\vec{n}$, and centroid $\bar{p}$ in the camera space can be directly transferred to the world frame given the corresponding camera pose.
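The back-projection of a 2D contour vertex onto the fitted plane then follows directly from the relations above. A sketch is given below; the naming is ours and $\vec{n}$, $\bar{p}$ are assumed to be NumPy arrays.
\begin{verbatim}
import numpy as np

def backproject(u, v, f, cx, cy, n, p_bar):
    # Intersect the viewing ray of pixel (u, v) with plane (n, p_bar).
    nx, ny, nz = n
    z = float(n @ p_bar) / (nx * (u - cx) / f + ny * (v - cy) / f + nz)
    x = z * (u - cx) / f
    y = z * (v - cy) / f
    return np.array([x, y, z])  # vertex in the camera frame
\end{verbatim}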
\begin{figure}[t]
\centering
\includegraphics[height=4cm]{figures/plane_parameter.png}
\caption{(a) Coordinate frame and plane parameters. (b) Definition of plane masks.}
\label{fig:rasterization}
\end{figure}
\subsection{Plane Merging}
Due to the limited field of view (FOV) of the depth camera, planes detected in each frame during robot motion could be only a part of the original plane. Once the planes segmented from different depth images with different camera frames are obtained, we need to merge those that actually correspond to the same physical plane to restore the real planar region and to ensure the integrity and accuracy of planar region detection.
\begin{algorithm}[t]
\caption{Plane Merging}
\label{alg:Plane Merging}
\KwIn{\par List of historical planes $L_H$ and new planes $L_N$}
\KwOut{$L_H$ after merging}
\BlankLine
\ForEach{$\P_N \in L_N$}{
\ForEach{$\P_H \in L_H$}{
\If{isCoplanar($\P_N$, $\P_H$)}{
$Para_M$ $\leftarrow$ MergeParameter($\P_N$, $\P_H$)\par
$Mask_N$ $\leftarrow$ Rasterize($\P_N$, $Para_M$)\par
$Mask_H$ $\leftarrow$ Rasterize($\P_H$, $Para_M$)\par
\If{$Mask_N \cap Mask_H \not\in \emptyset$}{
$Mask_M$ = $Mask_N \cup Mask_H$\par
$\P_M$ = InvRasterize($Mask_M$, $Para_M$)\par
$L_H$.Replace($\P_H$, $\P_M$)\par
$L_N$.Delete($\P_N$)\par
$\P_N$ = $\P_M$
}
}
}
}
$L_H$.Append($L_N$)
\end{algorithm}
After plane extraction, all planes extracted in a frame are stored in the world frame as a global map $\mathscr{P}$ with their vertices ($\mathcal{V}_P=\{x_i,y_i,z_i\}_{i=1}^p$), point number ($N$), mean ($\bar{p}$), normal vector ($\vec{n}$) and mean square error ($\text{MSE}$):
$$\mathscr{P} = \{\P_i = (\mathcal{V}_P,N,\bar{p},\vec{n},\text{MSE})\}_{i=1}^n$$
For a newly extracted plane, we first find the planes coplanar with it by the following criterion:
$$ \begin{cases}
1 - \mid n_1 \cdot n_2 \mid < \tau_{\theta} \\
\mid n_1 \cdot \bar{p}_1 - n_2 \cdot \bar{p}_2\mid < \tau_{b}
\end{cases} $$
where $\tau_{\theta}$ and $\tau_{b}$ are two preset thresholds.\par
Centroids of planes after merging are updated through the following formulas:
\eqn{N_m = \sum_i{N_i}, \qquad \bar{p}_m=\frac{1}{N_m} \sum_i{\bar{p}_i \cdot N_i}}
The low-dimensional plane representation can significantly reduce the computational and storage cost. However, as we abandon the original point cloud data for each plane, the normal vectors of the merged planes cannot be directly obtained. To account for this issue, we notice that the MSE reflects the accuracy of the plane fitting model. Therefore, we take its inverse as a weight to fuse the normal vectors $\vec{n}$ of coplanar planes. By transferring $\vec{n}$ to the spherical coordinate system ($\vec{n}=[\theta,\phi,1]$), we merge $[\theta,\phi]$ according to the MSE as follows:
\begin{gather*}
\text{MSE}_m = (\sum_i{\text{MSE}_i^{-1}})^{-1} \\
\begin{bmatrix} \theta_m\\\phi_m \end{bmatrix} = \text{MSE}_m \cdot \sum_i{\text{MSE}_i^{-1} \cdot \begin{bmatrix} \theta_i\\\phi_i \end{bmatrix}}
\end{gather*} \par
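The parameter-fusion step may be sketched as follows. This is a simplified version: azimuth wrap-around is ignored and the list-of-dicts interface is our own assumption.
\begin{verbatim}
import numpy as np

def merge_parameters(planes):
    # planes: list of dicts with keys 'N', 'p_bar', 'n', 'mse'.
    N_m = sum(p['N'] for p in planes)
    p_m = sum(p['N'] * p['p_bar'] for p in planes) / N_m
    mse_m = 1.0 / sum(1.0 / p['mse'] for p in planes)
    # Inverse-MSE weighted average of the normals' spherical angles.
    ang = np.zeros(2)
    for p in planes:
        theta = np.arccos(p['n'][2])            # polar angle of normal
        phi = np.arctan2(p['n'][1], p['n'][0])  # azimuth (no wrapping)
        ang += np.array([theta, phi]) / p['mse']
    theta_m, phi_m = mse_m * ang
    n_m = np.array([np.sin(theta_m) * np.cos(phi_m),
                    np.sin(theta_m) * np.sin(phi_m),
                    np.cos(theta_m)])
    return N_m, p_m, n_m, mse_m
\end{verbatim}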
Once the parameters are updated, we first transfer the contour $\mathcal{V}_P$ to the coordinate system with $\bar{p}_m$ as origin and $n_m$ as the z-axis (see Fig.~\ref{fig:rasterization}). By rasterizing the x-y plane of this coordinate system and filling the grids inside the boundaries, we then obtain the mask of each plane in 2D matrix form. Given the masks of all planes, we can simply check their connectivity through a bit-wise AND and merge connected planes with a bit-wise OR operation. With the merged mask in binary matrix form, its vertices can be extracted again as in Section \ref{Planar Region Extraction-A}. When the 2D vertices are transferred back to the world frame by the inverse transformation, we obtain the digitized boundary $\mathcal{V}_P$ of the merged plane. The overall merging algorithm is outlined in Algorithm \ref{alg:Plane Merging}.
\section{Polytopic Approximation}
\label{Polytopic Approximation}
Through combining a plane segmentation step and a merging step, we have successfully segmented candidate planes given a sequence of depth images and their corresponding camera frames. However, the extracted planar regions are expressed via pixel-wise mask matrices, which are not directly applicable to foothold planning problems for legged locomotion. Concerned with the potential applications in legged locomotion, we need to further simplify the representation of the planar regions via polytopes. In the following, we develop our approach to approximate the constructed planar regions via polytopes efficiently and accurately.
\subsection{Contour Simplification}
After plane extraction, the accurate high-resolution contour of the planar region is obtained. However, the large number of vertices in the plane representation may contain much detailed information which is nonessential for foothold planning. Besides, numerous small notches in the contour may result in a large number of convex polygons and significantly increase the computational and storage cost. Thus, the high-resolution contour needs to be simplified while keeping the original shape of the planar area and ensuring the safety of footstep planning.
\begin{figure}[t]
\centering
\includegraphics[height=4cm]{figures/approximation_sketch.png}
\caption{(a) Triangle defining the error $\varepsilon$ associated with the elimination of point $v_i$, and its inscribed circle with diameter $d$. (b) Triangles defining $\varepsilon$ of point $v_{i+1}$ before (red) and after (blue) the elimination of $v_i$.}
\label{fig:approximation sketch}
\end{figure}
\begin{algorithm}[b]
\caption{Contour simplification}
\label{alg:Contour Simplification}
\KwIn{ Plane Vertices $\mathcal{V}_P = \{v_i=(x,y,z)\}_{i=1}^p$}
\BlankLine
Compute $\varepsilon$ for all $v_i \in \mathcal{V}_P$\par
$H \leftarrow$ BuildMinHeap($\mathcal{V}_P$, $\varepsilon$)\par
\Repeat{$\varepsilon_i > \varepsilon$}{
$v_i,\varepsilon_i \leftarrow$ $H$.ReturnMinVertex()\par
\If{isConcave($v$)}{
$d \leftarrow$ ComputeInnerCircleDiameter($v_i$)\par
\If{$d > d_{r}$}{
$H$.Update($v_i$, Infinity)\par
continue
}
}
$H$.UpdateNeighborVertex($v_i$)\par
$H$.Delete($v_i$)
}
\end{algorithm}
To simplify the contour, we first build a min-heap data structure from the contour $\mathcal{V}_P$. The value of each node $v_i \in \mathcal{V}_P$ is defined as the area of the triangle formed by vertices $v_{i-1}$, $v_i$ and $v_{i+1}$. Then, we iteratively pop the vertex with minimum value until a preset threshold $\varepsilon$ is reached. After popping vertex $v_i$, the values of $v_{i-1}$ and $v_{i+1}$ are updated according to their new neighbor vertices. In this way, the vertex whose elimination introduces the least error $\varepsilon$ is greedily deleted, and we can control the roughness of the contour by adjusting the threshold $\varepsilon$.\par
However, the planar region could be dilated through this method if concave vertices are deleted. The enlarged boundary may cause a footstep to be selected outside the real plane. To avoid this situation, we check the convexity of each vertex before popping it. For a concave vertex, we calculate the diameter $d$ of the inscribed circle of the triangle formed by vertices $v_{i-1}$, $v_i$ and $v_{i+1}$. If this value is smaller than the diameter of the robot's foot $d_{r}$, vertex $v_i$ is deleted. Otherwise, $v_i$ is preserved and moved to the bottom of the heap. In this way, the planar region can be approximated safely without causing the robot to miss its step. The adapted method is described in Algorithm \ref{alg:Contour Simplification}. Regarding time complexity, the use of the min-heap data structure yields a worst-case complexity of $O(n\log n)$.
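A lazy-heap sketch of Algorithm~\ref{alg:Contour Simplification} is given below. The geometric helpers \texttt{is\_concave} and \texttt{inscribed\_diameter} are assumed available, and the lazy-deletion bookkeeping is one possible realization of the heap updates, not the only one.
\begin{verbatim}
import heapq
import numpy as np

def tri_area(a, b, c):
    # Error of eliminating b: area of the triangle (a, b, c).
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def simplify_contour(V, eps, d_r, is_concave, inscribed_diameter):
    # V: list of 3D vertices (NumPy arrays) of a closed contour.
    n = len(V)
    alive = set(range(n))

    def nbrs(i):  # surviving neighbors of i along the closed contour
        ids = sorted(alive)
        k = ids.index(i)
        return ids[k - 1], ids[(k + 1) % len(ids)]

    key = {i: tri_area(V[i - 1], V[i], V[(i + 1) % n])
           for i in range(n)}
    heap = [(key[i], i) for i in range(n)]
    heapq.heapify(heap)
    while heap:
        err, i = heapq.heappop(heap)
        if i not in alive or err != key[i]:
            continue                # stale entry (lazy deletion): skip
        if err > eps:
            break                   # smallest remaining error exceeds eps
        p, q = nbrs(i)
        if is_concave(V[p], V[i], V[q]) and \
           inscribed_diameter(V[p], V[i], V[q]) > d_r:
            key[i] = np.inf         # safety rule: keep this concave vertex
            heapq.heappush(heap, (np.inf, i))
            continue
        alive.remove(i)             # eliminate vertex i
        for j in (p, q):            # re-key the two affected neighbors
            a, b = nbrs(j)
            key[j] = tri_area(V[a], V[j], V[b])
            heapq.heappush(heap, (key[j], j))
    return [V[i] for i in sorted(alive)]
\end{verbatim}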
\subsection{Convex Partition}
Once the contour simplification is completed, we divide the planar region into multiple convex polygons $C=\{\vec{n},\mathcal{V}_C\}$ through a vector method. Given the contour $\mathcal{V}_P$ of a plane, we iterate through each vertex $v_i$ and check its convexity by the sign of the cross product $(v_i-v_{i-1}) \times (v_{i+1}-v_i)$. For a concave vertex $v_i$, we extend the vector $v_i-v_{i-1}$ and compute the first intersection $s$ with the boundary. The polygon is then divided into two parts by the segment $v_is$. The procedure is repeated on the resulting two contours until no concave vertex is found.\par
The method is simple but efficient: a polygon with $p$ vertices and $q$ concave vertices can be divided into no more than $q+1$ convex polygons in $O(p+q)$ time. In practice, the computation time of the convex partition is nearly negligible since few concave vertices remain after contour simplification.
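The convexity test at the heart of both procedures reduces to a sign check of the cross product; with a counter-clockwise contour orientation (our assumed convention) it may be sketched as:
\begin{verbatim}
import numpy as np

def is_concave(v_prev, v, v_next, normal):
    # For a counter-clockwise contour seen along the plane normal, a
    # negative component of the cross product marks a concave vertex.
    cross = np.cross(v - v_prev, v_next - v)
    return float(np.dot(cross, normal)) < 0.0
\end{verbatim}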
\begin{figure}[t]
\centering
\includegraphics[height=3cm]{figures/convex_approximation_sketch.png}
\caption{Plane contour can be simplified and decomposed into convex polygons by the proposed method. $P_1$: concave vertex to be deleted satisfying $\varepsilon_i < \varepsilon$ and $d < d_{r}$. $P_2$: convex vertex to be deleted satisfying $\varepsilon_i < \varepsilon$. $P_3$: preserved concave vertex with $\varepsilon_i < \varepsilon$ but $d > d_{r}$. $P_4$: intersection between contour and extended concave vector. The plane is decomposed into two convex polygons by segment $P_3P_4$.}
\label{fig:convex approximation sketch}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=6cm]{figures/single_frame.png}
\caption{Results of single-image plane extraction. Normal vectors and centroids of the planes are indicated by white arrows and points.}
\label{fig:single frame}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=6cm]{figures/plane_merging.png}
\caption{Planes extracted from three images captured at different viewpoints before and after merging.}
\label{fig: plane merging}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=5cm]{figures/plane_merging_example.png}
\caption{Left: two planes before merging (IoU = 86.3$\%$ and 73.0$\%$). Right: ground truth and the plane after merging (IoU = 87.4$\%$).}
\label{fig:plane merging example}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[height=4cm]{figures/convex_approximation.png}
\caption{Results of polytopic approximation with different $\varepsilon$ (cm$^2$).}
\label{fig:convex approximation}
\end{figure*}
\begin{table*}[tp!]
\centering
\begin{tabular}{|l|ccc|ccc|ccc|c|}
\hline
Scene & & 1 & & & 2 & & & 3 & & Average \\ \hline
Frame & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & 3 & \multicolumn{1}{c|}{4} & \multicolumn{1}{c|}{5} & 6 & \multicolumn{1}{c|}{7} & \multicolumn{1}{c|}{8} & 9 & \textbf{} \\ \hline
$\alpha$($^{\circ}$) & \multicolumn{1}{c|}{1.52} & \multicolumn{1}{c|}{1.27} & 2.32 & \multicolumn{1}{c|}{1.43} & \multicolumn{1}{c|}{1.13} & 0.81 & \multicolumn{1}{c|}{1.75} & \multicolumn{1}{c|}{1.19} & 0.59 & \textbf{1.33} \\ \hline
$\Delta b$ (mm) & \multicolumn{1}{c|}{16.5} & \multicolumn{1}{c|}{19.9} & 20.6 & \multicolumn{1}{c|}{30.3} & \multicolumn{1}{c|}{18.4} & 14.6 & \multicolumn{1}{c|}{28.7} & \multicolumn{1}{c|}{17.3} & 15.2 & \textbf{20.2} \\ \hline
IoU (\%) & \multicolumn{1}{c|}{85.4} & \multicolumn{1}{c|}{71.9} & 81.5 & \multicolumn{1}{c|}{76.8} & \multicolumn{1}{c|}{82.3} & 81.8 & \multicolumn{1}{c|}{71.5} & \multicolumn{1}{c|}{77.5} & 85.3 & \textbf{79.3} \\ \hline
$\alpha_m$($^{\circ}$) & & 1.33 & & & 1.14 & & & 1.15 & & \textbf{1.21} \\ \hline
$\Delta b_m$ (mm) & & 18.1 & & & 20.1 & & & 16.4 & & \textbf{18.2} \\ \hline
IoU$_m$ (\%) & & 87.7 & & & 86.3 & & & 90.2 & & \textbf{88.1} \\ \hline
\end{tabular}
\caption{Results of plane extraction and plane merging test.}
\label{tab:experiment data}
\end{table*}
\begin{figure}[t]
\centering
\includegraphics[width=7cm]{figures/convex_approximation_value.png}
\caption{Results of polytopic approximation with different values of $\varepsilon$. $N_V$: average number of vertices per plane. $N_C$: average number of convex polygons per plane. IoU: Intersection over Union of the planes before and after polytopic approximation.}
\label{fig:convex approximation value}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=7cm]{figures/runtime.png}
\caption{Run-time of the algorithm.}
\label{fig:runtime}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=6cm]{figures/moving_camera.png}
\caption{Moving camera result on a quadrupedal robot. The upper figure shows the real scenario. All the detected planes are illustrated in the lower figure. The blue plane is the ground. Green planes are horizontal. Red planes are vertical. Yellow stands for inclined planes. The violet line is the robot trajectory.}
\label{fig:moving camera}
\end{figure}
\section{Experimental Results}
\label{Experimental Results}
To comprehensively evaluate the proposed method in terms of accuracy, robustness and efficiency, we conducted three sets of experiments as described in following subsections. The depth images are captured by an Intel Realsense D435 RGB-D camera under VGA resolution ($640 \times 480$ pixels). All algorithms are implemented in C++ and run on a laptop with Intel i5-7300HQ CPU.
\subsection{Planar Region Extraction Result}
We first evaluated the plane extraction algorithm on a single frame as illustrated in Fig. \ref{fig:single frame}. The experiment was conducted over 3 scenarios with multiple planar regions. For each case, we captured 3 frames at different viewpoints with known camera pose. The proposed plane extraction algorithm segmented disconnected planes in each frame and output their normal vectors $\vec{n}$, centroids $\bar{p}$, and vertices $\mathcal{V}_p$. Given the ground truth parameters, we evaluated the accuracy of the algorithm by the average angle $\alpha$ between the normal vectors, average difference of the bias element $b = \vec{n} \cdot{\bar{p}}$, and the Intersection over Union (IoU) after the measured plane was projected to the corresponding ground truth plane. The results are shown in Table \ref{tab:experiment data}. Considering the original bias of the depth camera (depth accuracy: $<$2\% at 2m), the results are acceptable.
Then, for three frames captured in the same scenario from different viewpoints, we performed plane merging on the planes extracted by single-frame segmentation. An intuitive comparison of the planes before and after merging is shown in Fig. \ref{fig: plane merging}. Again, we computed the plane parameters after plane merging and compared them with the ground truth, as shown in Table~\ref{tab:experiment data}. Compared to the results before plane merging, the average angle bias of the normal vector $\alpha$ and the average difference of plane bias $\Delta b$ are reduced by 9\%. The average IoU is improved by 11\% after plane merging. As shown in Fig. \ref{fig:plane merging example}, for planes that cannot be completely detected in a single frame due to camera pose or sensor bias, plane merging helps to reconstruct the original plane by fusing multiple plane segments extracted from different frames. In practice, the accuracy of the planes after merging is adequate for foothold planning of legged robots.
\subsection{Polytopic Approximation Result}
We tested the proposed polytopic approximation method with different preset thresholds $\varepsilon$ as discussed in Section~\ref{Polytopic Approximation}-B. The planar regions after approximation and all corresponding convex components are illustrated in Fig. \ref{fig:convex approximation}. The average numbers of vertices and convex polygons per plane for different $\varepsilon$ are shown in Fig. \ref{fig:convex approximation value}. We also computed the average IoU of the planes before and after polytopic approximation with different thresholds.
The run-time of the overall algorithm and each module is illustrated in Fig. \ref{fig:runtime}. According to the results, as the number of detected planes increases from 10 to 50, the overall running time per frame only increased by around 2ms (7ms to 9ms). In other words, the method can achieve more than 100 Hz frame rate in most cases.
\subsection{Moving Camera Result}
To evaluate the overall algorithm, we mounted the depth camera on a quadruped robot with a visual odometry (Realsense T265) to provide pose estimation. The algorithm ran continuously during robot locomotion and built a global map of the planar regions in the surrounding environment, as shown in Fig.~\ref{fig:moving camera}.
\section{Conclusion}
\label{Conclusion}
In this paper, we presented a plane segmentation method for extracting planar structures from depth images to support legged robots and their foothold planning. To achieve this, we combine different methods for plane extraction and post-processing. First, all planes are extracted in one frame and stored in a low-dimensional representation in a global map. Due to the limited FoV, a physical plane may not be fully reconstructed from a single frame; moreover, errors may exist between two detections of the same physical plane in different frames because of camera movement. Thus, we use a novel plane merging method to fuse such plane features into one. After the planes are extracted, a post-processing step is applied: it simplifies the contours of each plane so that the extracted plane features are transformed into polygons, and these planar regions are then partitioned into convex polygons on which foothold planning methods can operate directly. All the work in this paper is aimed at legged robots and foothold planning, so the outputs are convex polygons and the efficiency of the whole algorithm is guaranteed. Finally, we illustrated the performance of the method and each of its parts by deploying it in different scenarios.
\bibliographystyle{ieeetr}
\section{Introduction}\label{sect:intr}
In particle and nuclear physics, intensity interferometry provides a direct
experimental method for the determination of sizes, shapes and lifetimes
of particle-emitting sources
(for reviews see, \eg, \cite{Gyulassy:1979,Boal:1990,Baym:1998,Wolfram:Zako2001,Tamas:HIP2002}).
In particular, boson interferometry provides a powerful tool for the
investigation of the space-time structure of particle production processes,
since Bose-Einstein correlations (BEC) of two identical bosons reflect both
geometrical and dynamical properties of the particle radiating source.
Given the point-like nature of the underlying interaction, \ensuremath{\mathrm{e^{+}e^{-}}}\ annihilation provides an ideal environment
to study these properties in multiparticle production by quark fragmentation.
\section{Bose-Einstein Correlation Function}
The two-particle correlation function of two particles with
four-momenta $p_{1}$ and $p_{2}$ is given by the ratio of the two-particle number density,
$\rho_2(p_{1},p_{2})$,
to the product of the two single-particle number densities, $\rho_1 (p_{1})\rho_1 (p_{2})$.
Being only interested in the correlation $R_2$ due to Bose-Einstein
interference, the product of single-particle densities is replaced by
$\rho_0(p_1,p_2)$,
the two-particle density that would occur in the absence of Bose-Einstein correlations:
\begin{equation} \label{eq:R2def}
R_2(p_1,p_2)=\frac{\rho_2(p_1,p_2)}{\rho_0(p_1,p_2)} \;.
\end{equation}
Since the mass of the two identical particles of the pair is fixed to the pion mass,
the correlation function is defined in six-dimensional momentum space.
Since Bose-Einstein correlations can be large only at small four-momentum difference
$Q=\sqrt{-(p_1-p_2)^2}$,
they are often parametrized in terms of this one-dimensional distance measure.
There is no reason, however,
to expect the hadron source for jet fragmentation to be spherically symmetric.
Recent investigations, using the Bertsch-Pratt parametrization~\cite{Pratt:86,Bertsch:88},
have, in fact, found an elongation of the source along the
jet axis~\cite{L3_3D:1999,OPAL3D:2000,DELPHI2D:2000,ALEPH:2004,OPAL:2007}
in the longitudinal center-of-mass (LCMS) frame~\cite{tamas:workshop91}.
While this effect is well established, the elongation is actually only about 20\%,
which suggests that a parametrization in terms of the single variable $Q$
may be a good approximation.
There have been indications that the size of the source, as measured using BEC, depends on the transverse
mass, $\mt=\sqrt{m^2+\pt^2}=\sqrt{E^2-p_z^2}$, of the pions~\cite{Smirnova:Nijm96,Dalen:Maha98,OPAL:2007}.
It has been shown~\cite{Bialas:1999,Bialas:2000} that such a dependence can be understood if the produced
pions satisfy, approximately, the (generalized) Bjorken-Gottfried
condition~\cite{Gottfried:1972,Bjorken:SLAC73,Bjorken:1973,Gottfried:1974,Low:1978,Bjorken:ISMD94}, whereby
the four-momentum of a produced particle and the space-time position at which it is produced are linearly
related: $ x = d p $.
Such a correlation between space-time and momentum-energy is also a feature of the Lund string model
as incorporated in {\scshape Jetset}\ \cite{JETSET74},
which is very successful in describing detailed features of the hadronic final states of \ensuremath{\mathrm{e^{+}e^{-}}}\ annihilation.
Recently, experimental support for this strong correlation has been found \cite{OPAL:2007}.
A model which predicts both a $Q$- and an \mt-dependence while incorporating the Bjorken-Gottfried
condition is the so-called $\tau$-model \cite{Tamas;Zimanji:1990}.
In this Letter\ we develop this model further and apply it to the reconstruction of
the space-time evolution of pion production in \ensuremath{\mathrm{e^{+}e^{-}}}\ annihilation.
\section{BEC in the \boldmath{$\tau$} model} \label{sect:taumodel}
In the $\tau$-model, it is assumed that the average production point in the overall center-of-mass system,
$\overline{x}=(\overline{t},\overline{r}_x,\overline{r}_y,\overline{r}_z)$, of particles with a given
four-momentum $p$ is given by
\begin{equation} \label{eq:tau-corr}
\overline{x} (p) = a\tau p \;.
\end{equation}
In the case of two-jet events,
$a=1/\mt$
where
\mt\ is the transverse mass
and
$\tau = \sqrt{\overline{t}^2 - \overline{r}_z^2}$ is the longitudinal proper time.\footnote{The
terminology `longitudinal' proper time and `transverse' mass seems customary in the literature
even though their definitions are analogous $\tau = \sqrt{\overline{t}^2 - \overline{r}_z^2}$
and $ \mt = \sqrt{E^2 - p_z^2}$.}
For isotropically distributed particle production, the transverse mass is replaced by the
mass in the definition of $a$ and $\tau$ is the proper time.
In the case of three-jet events the relation is more complicated.
The correlation between coordinate space and momentum space variables
is described by the distribution of $x (p)$ about its average by
$\delta_\Delta ( x (p ) - \overline{x}(p ) ) = \delta_\Delta(x-a\tau p)$.
The emission function of the $\tau$-model is then given by \cite{Tamas;Zimanji:1990}
\begin{equation} \label{eq:source}
S(x,p) = \int_0^{\infty} \mathrm{d}\tau H(\tau)\delta_{\Delta}(x-a\tau p) \rho_1(p) \;,
\end{equation}
where $H(\tau)$ is the (longitudinal) proper-time distribution
and $\rho_1(p)$ is the experimentally measurable single-particle momentum spectrum,
both $H(\tau)$ and $\rho_1(p)$ being normalized to unity.
The two-pion distribution, $\rho_2(p_1,p_2)$, is related to $S(x,p)$, in the plane-wave approximation,
by the Yano-Koonin formula~\cite{Yano}:
\begin{equation} \label{eq:yano}
\rho_2(p_1,p_2) = \int \mathrm{d}^4 x_1 \mathrm{d}^4 x_2 S(x_1,p_1) S(x_2,p_2)
\left\{\strut 1 + \cos\left[ (p_1-p_2)(x_1-x_2) \right]\strut\right\} \;.
\end{equation}
Assuming that the distribution of $x (p)$ about its average
is much narrower than the proper-time distribution,
\Eq{eq:yano} can be evaluated in a saddle-point approximation.
Approximating
the function $\delta_\Delta$ by a Dirac delta function yields the same result.
Thus the integral of \Eq{eq:source} becomes
\begin{equation} \label{eq:S}
\int_0^{\infty} \mathrm{d}\tau H(\tau) \rho_1\left(\frac{x}{a\tau}\right) \;,
\end{equation}
and the argument of the cosine in \Eq{eq:yano} becomes
\begin{equation} \label{eq:cos}
(p_1 - p_2)(\bar{x}_1 - \bar{x}_2) = - 0.5 (a_1\tau_1 + a_2\tau_2) Q^2 \;.
\end{equation}
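This relation follows from \Eq{eq:tau-corr} and the on-shell conditions $p_1^2=p_2^2=m^2$: writing $\bar{x}_i = a_i\tau_i p_i$,
\begin{equation*}
(p_1-p_2)(a_1\tau_1 p_1 - a_2\tau_2 p_2)
= \left(a_1\tau_1+a_2\tau_2\right)\left(m^2 - p_1 p_2\right)
= - 0.5\, (a_1\tau_1+a_2\tau_2)\, Q^2 \;,
\end{equation*}
since $Q^2 = -(p_1-p_2)^2 = 2\left(p_1 p_2 - m^2\right)$.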
Substituting \Eqs{eq:S} and (\ref{eq:cos}) in \Eq{eq:yano} leads to the following approximation of
the two-particle Bose-Einstein correlation function:
\begin{equation} \label{eq:levyR2}
R_2(Q,a_1,a_2) = 1 + \mathrm{Re} \widetilde{H}\left(\frac{a_1 Q^2}{2}\right)
\widetilde{H}\left(\frac{a_2 Q^2}{2}\right) \;,
\end{equation}
where $\widetilde{H}(\omega) = \int \mathrm{d} \tau H(\tau) \exp(i \omega \tau)$
is the Fourier transform of $H(\tau)$.
This formula simplifies further if $R_2$ is measured with the restriction
\begin{equation} \label{eq:a1a2equal}
a_1\approx a_2\approx \bar{a} \;.
\end{equation}
In that case, $R_2$ becomes
\begin{equation} \label{eq:levyR2a}
R_2(Q,\bar{a}) = 1 + \mathrm{Re} \widetilde{H}^2 \left( \frac{\bar{a}Q^2}{2} \right)
\;.
\end{equation}
Thus, for a given average $\bar{a}$ of the two particles, $R_2$ is found to
depend only on the invariant relative momentum $Q$.
Further, the model predicts a specific dependence on $\bar{a}$, which for two-jet events is
a specific dependence on $\overline{m}_\mathrm{t}$.\footnote{In the initial formulation of
the $\tau$-model
this dependence was averaged over \cite{Tamas;Zimanji:1990} due to the lack of \mt\ dependent data
at that time.}
Since there is no particle production before the onset of the collision,
$H(\tau)$ should be a one-sided distribution.
We choose a one-sided L\'evy distribution, which has the characteristic function (Fourier transform)~\cite{Tamas:Levy2004}
(for $\alpha\ne1$)\footnote{For the special case $\alpha=1$, see, \eg, Ref.~\citen{Nolan}.}
\begin{equation} \label{eq:levy1sidecharf}
\widetilde{H}(\omega) = \exp\left\{ -\frac{1}{2}\left(\Delta\tau|\omega|\strut\right)^{\alpha\strut}
\left[ 1 - i\, \mathrm{sign}(\omega)\tan\left(\frac{\alpha\pi}{2}\right) \strut \right]
+ i\,\omega\tau_0\right\} \;,
\end{equation}
where the parameter $\tau_0$ is the proper time of the onset of particle production
and $\Delta \tau$ is a measure of the width of the proper-time distribution.
Using this characteristic function in \Eq{eq:levyR2a} yields
\begin{eqnarray} \label{eq:levyR2av}
R_2(Q,\bar{a}) &=&
1+ \cos \left[\strut{\bar{a}\tau_0 Q^2}
+ \tan \left( \frac{\alpha \pi}{2} \right)
\left( \frac{\bar{a}\Delta\tau Q^2}{2}\right)^{\!\alpha\strut} \right]
\nonumber \\
& & \cdot \exp \left[\strut -\left( \frac{\bar{a}\Delta\tau Q^2}{2}\right)^{\!\alpha\strut} \right]
\;,
\end{eqnarray}
which for two-jet events is
\begin{eqnarray} \label{eq:levy2jetR2av}
R_2(Q,\mtbar) &=&
1+ \cos \left[ \frac{\tau_0 Q^2}{\mtbar}
+ \tan \left( \frac{\alpha \pi}{2} \right)
\left( \frac{\Delta\tau Q^2}{2\mtbar}\right)^{\!\alpha\strut} \right]
\nonumber \\
& & \cdot \exp \left[ -\left( \frac{\Delta\tau Q^2}{2\mtbar}\right)^{\!\alpha\strut} \right]
\;.
\end{eqnarray}
We now consider a simplification of the equation
obtained by assuming (a) that particle production starts immediately, \ie, $\tau_0=0$,
and (b) an average $a$-dependence, which is implemented in an approximate way by defining an effective
radius, $R=\sqrt{\bar{a}\Delta\tau/2}$, which for 2-jet events becomes
$R=\sqrt{\Delta\tau/(2\overline{m}_\mathrm{t})}$.
This results in:
\begin{equation}\label{eq:asymlevR2}
R_2(Q) = 1+ \cos \left[(R_\mathrm{a}Q)^{2\alpha}\right] \exp \left[-(RQ)^{2\alpha} \right] \;,
\end{equation}
where $R_\mathrm{a}$ is related to $R$ by
\begin{equation}\label{eq:asymlevRaR}
R_\mathrm{a}^{2\alpha} = \tan\left(\frac{\alpha\pi}{2}\right) R^{2\alpha} \;.
\end{equation}
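For readers wishing to reproduce the shape of \Eq{eq:asymlevR2}, a minimal numerical sketch follows (in Python; the values of $R$ and $\alpha$ are purely illustrative and are not fit results):
\begin{verbatim}
import numpy as np

def r2_levy(Q, R, alpha):
    # Eqs. (asymlevR2) and (asymlevRaR)
    Ra = np.tan(alpha * np.pi / 2.0) ** (1.0 / (2.0 * alpha)) * R
    return 1.0 + np.cos((Ra * Q) ** (2 * alpha)) \
               * np.exp(-(R * Q) ** (2 * alpha))

Q = np.linspace(0.01, 4.0, 400)       # GeV
r2 = r2_levy(Q, R=0.8, alpha=0.6)     # illustrative values only
# The dip below unity occurs where cos[(Ra*Q)^(2*alpha)] < 0,
# i.e. at intermediate Q, cf. the discussion below.
\end{verbatim}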
To illustrate that \Eq{eq:asymlevR2} can provide a reasonable parametrization, we show in \Fig{fig:a_levy}
a fit of \Eq{eq:asymlevR2} with $R_\mathrm{a}$ a free parameter
to Z-boson decays generated by {\scshape Pythia}\ \cite{PYTHIAsix}
with BEC simulated by the {\BE\textsubscript{32}}\ algorithm \cite{LS98} as tuned to {\scshape l}{\small 3}\ data \cite{L3:QCDphysrep}.
In particular, it describes well the dip in $R_2$ below unity in the $Q$-region 0.5--1.5\,\GeV, unlike
the usual Gaussian or exponential parametrizations.
While generalizations \cite{tamas:edge:lag00}
of the Gaussian by an Edgeworth expansion and of the exponential by a Laguerre
expansion can describe the dip, they require more free parameters than
\Eq{eq:asymlevR2}.
Recently the {\scshape l}{\small 3}\ Collaboration has presented preliminary results showing that \Eq{eq:asymlevR2}
describes their data on hadronic Z decay \cite{wes:WPCF2006}.
\begin{figure}[htb]
\centering
\includegraphics*[width=.95\figwidth]{fit_levy1sf1100_1.eps}
\caption{The Bose-Einstein correlation function $R_2$ for events generated by {\scshape Pythia}.
The curve corresponds to a fit of the one-sided L\'evy parametrization, \Eq{eq:asymlevR2}.
\label{fig:a_levy}
}
\end{figure}
\section{The emission function of two-jet events} \label{sect:emission2jet}
Within the framework of the $\tau$-model, we now show how to
reconstruct the space-time picture of pion emission.
We restrict ourselves to two-jet events, for which $a$ is known, namely $a=1/\mt$.
The emission function in configuration space, $S_\mathrm{x}(x)$, is the proper time derivative of the
integral over $p$ of $S(x,p)$, which in the
$\tau$-model is given by \Eq{eq:source}.
Approximating $\delta_\Delta$ by a Dirac delta function, we find
\begin{equation} \label{eq:Sspace}
S_\mathrm{x}(x) = \frac{1}{\bar{n}} \frac{\mathrm{d}^4 n}{\mathrm{d}\tau\mathrm{d}^{3}x}
= \left(\frac{\mt}{\tau}\right)^3 H(\tau) \rho_1\left( p=\frac{\mt x}{\tau} \right) \;,
\end{equation}
where $n$ and $\bar{n}$ are the number and average number of pions produced, respectively.
Given the symmetry of two-jet events, $S_\mathrm{x}$ does not depend on the azimuthal angle, and we can
write it in cylindrical coordinates as
\begin{equation} \label{eq:Srzt}
S_\mathrm{x}(r,z,t) = P(r,\eta) H(\tau) \;,
\end{equation}
where $\eta$ is the space-time rapidity.
With the strongly correlated phase-space of the $\tau$-model,
$\eta=y$ and $r=\pt\tau/\mt$.
Consequently,
\begin{equation} \label{eq:Preta}
P(r,\eta) = \left(\frac{\mt}{\tau}\right)^{\!3} \rho_\mathrm{\pt,y}(r\mt/\tau, \eta) \;,
\end{equation}
where $\rho_\mathrm{\pt,y}$ is the joint single-particle distribution of \pt\ and $y$.
The reconstruction of $S_\mathrm{x}$ is simplified if $\rho_\mathrm{\pt,y}$
can be factorized in the product of the single-particle \pt\ and rapidity distributions, \ie,
$ \rho_\mathrm{\pt,y} = \rho_\mathrm{\pt}(\pt) \rho_\mathrm{y}(y)$.
Then \Eq{eq:Preta} becomes
\begin{equation} \label{eq:fact}
P(r,\eta) = \left(\frac{\mt}{\tau}\right)^{\!3} \rho_\mathrm{\pt}(r\mt/\tau) \rho_\mathrm{y}(\eta) \;.
\end{equation}
The transverse part of the emission function is obtained by integrating over $z$ as well as azimuthal angle.
Pictures of this function evaluated at successive times would together form a movie revealing the time
evolution of particle production in 2-jet events in \ensuremath{\mathrm{e^{+}e^{-}}}\ annihilation.
To summarize: Within the $\tau$-model, $H(\tau)$ is obtained from a fit of
\Eq{eq:levy2jetR2av} to the Bose-Einstein correlation function.
From $H(\tau)$ together with the inclusive distribution of rapidity and \pt,
the full emission function in configuration space, $S_\mathrm{x}$, can then be reconstructed.
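As a schematic of this reconstruction chain (a sketch in Python; the spectra below are illustrative stand-ins for the measured inclusive distributions, and natural units are used):
\begin{verbatim}
import numpy as np

def rho_pt(pt, T=0.3):           # stand-in pt spectrum
    return np.exp(-pt / T) / T

def rho_y(y, sigma=1.5):         # stand-in rapidity spectrum
    return np.exp(-0.5 * (y / sigma)**2) / np.sqrt(2*np.pi*sigma**2)

def P_r_eta(r, eta, tau, mt):    # Eq. (fact): factorized source
    return (mt / tau)**3 * rho_pt(r * mt / tau) * rho_y(eta)

# One frame of the 'movie': the transverse source at fixed tau,
# to be weighted by the H(tau) obtained from the correlation fit
r = np.linspace(0.0, 5.0, 200)
frame = P_r_eta(r, eta=0.0, tau=1.0, mt=0.3)
\end{verbatim}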
\section*{Acknowledgments}
One of us (T.C.)
acknowledges support of
the Scientific Exchange between Hungary (OTKA) and The Netherlands (NWO),
project B64-27/N25186 as well as Hungarian OTKA grants T49466 and NK73143.
\bibliographystyle{l3style}
\section{Introduction}
\label{sec:intro}
Searching for new resonances as bumps in the invariant mass spectrum of the new particle decay products is one of the oldest and most robust techniques in particle physics, from the $\rho$ meson discovery~\cite{PhysRev.126.1858} and earlier up through the recent Higgs boson discovery~\cite{Aad:2012tfa,Chatrchyan:2012xdj}. This technique is very powerful because sharp structures in invariant mass spectra are not common in background processes, which tend to produce smooth distributions. As a result, the background can be estimated directly from data by fitting a shape in a region away from the resonance (sideband) and then extrapolating to the signal region. It is often the case that the potential resonance mass is not known a priori and a technique like the BumpHunter~\cite{Choudalakis:2011qn} is used to scan the invariant mass distribution for a resonance. In some cases, the objects used to construct the invariant mass (e.g. jet substructure) and their surroundings (e.g. presence of additional forward jets) have properties that can be used to increase the signal purity. Both ATLAS and CMS\footnote{The references here are the Run 2 results; Run 1 results can be found within the cited papers. The techniques described in this paper also apply to leptonic or photonic final states, but jets are used as a prototypical example due to their inherent complex structure.} have conducted extensive searches for resonances decaying into jets originating from generic quarks and gluons~\cite{Aaboud:2017yvp,Sirunyan:2016iap,Khachatryan:2015dcf}, from boosted $W$~\cite{Sirunyan:2017acf,Aaboud:2017eta}, $Z$~\cite{Sirunyan:2016wqt}, $Z'$~\cite{Sirunyan:2017dnz,Sirunyan:2017nvi,Aaboud:2018zba} or Higgs bosons~\cite{Sirunyan:2017dgc,Sirunyan:2017isc,Aaboud:2017ahz}, from $b$-quarks~\cite{Aaboud:2016nbq}, as well as from boosted top quarks~\cite{Sirunyan:2017uhk,Aaboud:2018juj}. There is some overlapping sensitivity in these searches, but in general the sensitivity is greatly diminished away from the target process (see e.g.~\cite{Aguilar-Saavedra:2018xpl,Aguilar-Saavedra:2017zuc,boosted_diboson} for examples). It is not feasible to perform a dedicated analysis for every possible topology and so some signals may be missed. Global searches for new physics have been performed by the LHC experiments and their predecessors, but only utilize simple objects and rely heavily on simulation for background estimation~\cite{ATLAS-CONF-2017-001,CMS-PAS-EXO-10-021,ATLAS-CONF-2012-107,ATLAS-CONF-2014-006,Aktas:2004pz,Aaron:2008aa,Abbott:2000fb,Abbott:2000gx,Aaltonen:2007dg,Aaltonen:2008vt,sleuth,Knuteson:2004nj}.
The tagging techniques used to isolate different jet types have increased in sophistication with the advent of modern machine learning classifiers~\cite{Larkoski:2017jix,Cogan:2014oua,Almeida:2015jua,deOliveira:2015xxd,Baldi:2016fql,Barnard:2016qma,Kasieczka:2017nvn,Butter:2017cot,Komiske:2016rsd,Louppe:2017ipp,ATLAS-CONF-2017-064,ATL-PHYS-PUB-2017-013,ATL-PHYS-PUB-2017-004,CMS-DP-2017-005,CMS-DP-2017-013,ATL-PHYS-PUB-2017-003,Pearkes:2017hku,Datta:2017rhs,Datta:2017lxt,ATL-PHYS-PUB-2017-017,CMS-DP-2017-027,Fraser:2018ieu,Andreassen:2018apy,Macaluso:2018tck}. These new algorithms can use all of the available information to achieve optimal classification performance and could significantly improve the power of hadronic resonance searches. Deep learning techniques are able to outperform traditional methods by exploiting subtle correlations in the radiation pattern inside jets. These correlations are not well-modeled in general~\cite{Barnard:2016qma} which renders classifiers sub-optimal when training on simulation and testing on data. This is already apparent for existing multivariate classifiers where post-hoc mis-modeling corrections can be large~\cite{Aad:2015ydr,Chatrchyan:2012jua,Aad:2014gea,CMS:2013kfa,CMS-DP-2016-070,Aad:2015rpa,Khachatryan:2014vla,Aad:2016pux,CMS:2014fya}. Ideally, one would learn directly from data (if possible) and/or combine with other approaches to mitigate potential mis-modeling effects during training (e.g. with adversaries~\cite{Louppe:2016ylz}).
We propose a new method that combines resonance searches with recently proposed techniques for learning directly from data~\cite{Dery:2017fap,Metodiev:2017vrx,Komiske:2018oaa,Cohen:2017exh}. Simply stated, the new algorithm trains a fully supervised classifier to distinguish a signal region from a mass sideband using auxiliary observables which are decorrelated from the resonance variable under the background-only hypothesis. A bump hunt is then performed on the mass distribution after applying a threshold on the classifier output. This is Classification Without Labels (CWoLa)~\cite{Metodiev:2017vrx}, where the two mixed samples are the signal region and sideband, the signal is a potential new resonance, and the background is the Standard Model continuum. The algorithm naturally inherits the property of CWoLa that it is fully based on data and thus is insensitive to simulation mis-modeling\footnote{The algorithm also inherits the assumptions of the CWoLa method. In this context, the main assumption will be that the signal region and sideband region can be distinguished only via the mass. More details on this are in the next sections. }. The key difference with respect to Refs.~\cite{Metodiev:2017vrx,Komiske:2018oaa} is that the signal process need not be known a priori. Therefore, we can become sensitive to new signatures for which we did not think to construct dedicated searches.
In addition to CWoLa, the extended bump hunt shares some features with the sPlot technique~\cite{Pivk:2004ty}. Our proposed extension to the bump hunt makes use of auxiliary features to enhance the presence of signal events over background events in a target distribution, where the signal is expected to be resonant. Similarly, sPlot provides a procedure for using auxiliary features (`discriminating variables' in the language of Ref.~\cite{Pivk:2004ty}) to extract the distribution of signal and background events in a target distribution (`control variable' in Ref.~\cite{Pivk:2004ty}). In both cases, the auxiliary features must be uncorrelated with the target feature. One main difference between the methods is that the extended bump hunt uses machine learning to identify regions of phase space that are signal-like. A second key distinction between methods is that sPlot takes the distribution of the auxiliary features as input, whereas this information is not required for the extended bump hunt.
This paper is organized as follows. Section~\ref{sec:cwolahunting} formally introduces the CWoLa hunting approach and briefly discusses how auxiliary information can be useful for bump hunting. Then, Sec.~\ref{sec:example} uses a simplified example to show how a neural network can be used to identify new physics structures from pseudodata. A complete procedure for applying the CWoLa hunting approach is given in Sec.~\ref{sec:fullmethod}. Finally, a realistic example based on a hadronic resonance search is presented in Sec.~\ref{sec:physicsexample}. Conclusions and future outlook are presented in Sec.~\ref{sec:conc}.
\section{Bump Hunting using Classification Without Labels}
\label{sec:cwolahunting}
In a typical resonance search, events have at least two objects whose four-vectors are used to construct an invariant mass spectrum. The structure of these objects as well as other information in the event may be useful for distinguishing signal from background even though there may be no other resonance structures. Let $m_\text{res}$ be a random variable that represents the invariant mass. The distribution of $m_\text{res}$ given $\text{background}$ is smooth while $m_\text{res}$ given $\text{signal}$ is expected to be localized near some $m_0$. Let $Y$ be another random variable that represents all other information available in the events of interest. Define two sets of events:
\begin{align}
M_1&=\{(m_\text{res},Y)||m_\text{res}-m_0|<\delta\} \hspace{3mm}\text{(the signal region) }\\
M_2&=\{(m_\text{res},Y)|\delta<|m_\text{res}-m_0|<\epsilon\}\hspace{3mm}\text{(the sideband region)},
\end{align}
\noindent where $\epsilon > \delta$. The value of $\delta$ is chosen such that $M_1$ should have much more signal than $M_2$ and the value of $\epsilon$ is chosen such that the distribution of $Y$ is nearly the same between $M_1$ and $M_2$. CWoLa hunting entails training a classifier to distinguish $M_1$ from $M_2$ using $Y$ and then performing a usual bump hunt on $m_\text{res}$ after placing a threshold on the classifier output. This procedure is then repeated for all mass hypotheses $m_0$. Note that nothing is assumed about the distribution of $Y$ other than that it should be nearly the same for $M_1$ and $M_2$ under the background-only hypothesis.
Ideally, $Y$ incorporates as much information as possible about the properties of the objects used to construct the invariant mass and their surroundings. The subsequent sections will show how this can be achieved with neural networks. To build intuition for the power of auxiliary information, the rest of this section provides analytic scaling results for a simplified bump hunt with the most basic case: $Y\in\{0,1\}$.
Suppose that we have two mass bins $M_1$ and $M_2$ and the number of expected events in each mass bin is $N_b$. Further suppose that the signal is in at most one of the $M_i$ (not required in general) and the expected number of signal events is $N_s$. A version of the bump hunt would be to compare the number of events in $M_1$ and $M_2$ to see if they are significantly different. As a Bernoulli random variable, $Y$ is uniquely specified by $\Pr(Y=1)$. Define $\Pr(Y=1|\text{background})=p$ and $\Pr(Y=1|\text{signal})=q$. The purpose of CWoLa hunting is to incorporate the information about $Y$ into the bump hunt. By only considering events with $Y=1$, the significance of the signal scales as $qN_s/\sqrt{N_bp}$. Therefore, the information about $Y$ is useful when $q>\sqrt{p}$.
More quantitatively, suppose that we declare discovery of new physics when the number of events with $Y=1$ in $M_1$ exceeds the number of events with $Y=1$ in $M_2$ by some amount. Under the background-only case, for $N_b\gg 1$, the difference between the number of events in $M_1$ and $M_2$ with $Y=1$ is approximately normally distributed with mean $0$ and variance $2N_bp$. If we want the probability for a false positive to be less than 5\%, then the threshold value is simply $\sqrt{2N_bp}\times\Phi^{-1}(0.95)$, where $\Phi$ is the cumulative distribution function of a standard normal distribution. Ideally, we would like to reject the SM often when there is BSM, $N_s>0$. Figure~\ref{fig:analytic} shows the probability to reject the SM for a one-bin search using $N_b=1000$ and $N_s=20$ for different values of $p$ as a function of $q$. The case $p=q=1$ corresponds to the standard search that does not gain from having additional information. However, away from this case, there can be a significant gain from using $Y$, especially when $p$ is small and $q$ is close to $1$. In the case where $Y$ is a \textit{truth bit}, i.e. $p=1-q=0$, the SM is rejected as long as a single BSM event is observed. By construction, when $q\rightarrow 0$ (for $p>0$), the rejection probability is 0.05. Note that when $q<p$, only considering events with $Y=1$ is sub-optimal - this is a feature that is corrected in the full CWoLa hunting approach.
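These estimates are straightforward to verify with pseudo-experiments; the following sketch (Python, with Poisson-fluctuated counts) reproduces the qualitative behavior shown in Fig.~\ref{fig:analytic}:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N_b, N_s = 1000, 20                    # expected counts per mass bin

def reject_prob(p, q, n_trials=200000):
    # Y=1 counts: background in both bins, signal only in M_1
    n1 = rng.poisson(N_b * p + N_s * q, n_trials)
    n2 = rng.poisson(N_b * p, n_trials)
    thr = np.sqrt(2 * N_b * p) * norm.ppf(0.95)  # 5% false positives
    return np.mean(n1 - n2 > thr)

print(reject_prob(p=1.0, q=1.0))  # inclusive search, no gain from Y
print(reject_prob(p=0.1, q=0.9))  # Y rare for background, common for signal
\end{verbatim}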
While the model used here is simple, it captures the key promise of CWoLa hunting that will be expanded upon in more detail in the next sections. In particular, the main questions to address are: how to find $Y$ and how to use the information about $Y$ once it is identified.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{AnalyticCalculation.pdf}
\caption{The probability to reject the SM as a function of $q$ for fixed values of $p$ as indicated in the legend when only considering events with $Y=1$. The expected background is fixed at 1000 and the expected BSM is fixed at 20. When $q=0$, there is no signal and therefore the rejection probability is 5\%, by construction. When $q=p=1$, $Y$ is not useful, but the probability to reject is above 5\% simply because there is an excess of events inclusively. \label{fig:analytic}}
\end{figure}
\section{Illustrative Example: Learning to Find Auxiliary Information}
\label{sec:example}
This section shows how to identify the useful attributes of the auxiliary information using a neural network. The example used here is closer to a realistic case, but is still simplified for illustration. Let the auxiliary information $Y=(x,y)$ be two-dimensional and assume that $Y$ and the invariant mass are independent given the process (signal or background). This auxiliary information will become the jet substructure observables in the next section. For simplicity, for each process $Y$ is considered to be uniformly distributed on a square of side length $\ell$ centered at the origin. The background has $\ell=1$ $(-0.5<x<0.5,-0.5<y<0.5)$ and the signal follows $\ell=w$ $(-w/2 < x < w/2, -w/2 < y < w/2)$. Similarly to the full case, suppose that there are three bins of mass for a given mass hypothesis: a signal region $m_0\pm\Delta$ and mass sidebands $(m_0-2\Delta, m_0-\Delta)$, $(m_0+\Delta, m_0+2\Delta)$. As in the last section, the signal is assumed to only be present in one bin (the signal region) with $N_s$ expected events. There are $N_b$ expected background events in the signal region and $N_b/2$ expected events in each of the mass sidebands.
The model setup described above and used for the rest of this section is depicted in Fig.~\ref{fig:toy_cwola}. The numerical examples presented below use $N_b=10,000$, $N_s=300$, and $w=0.2$. Without using $Y$, these values correspond to $N_s/\sqrt{N_b}=3\sigma$. The ideal tagger (one that is optimal by the Neyman-Pearson lemma~\cite{Neyman289}) should reject all events outside of the square in the $(x,y)$ plane centered at zero with side length $w$. For the $N_s$ and $N_b$ used here, the expected significance of the ideal tagger is $15\sigma$. The goal of this section is to show that without using any truth information, the CWoLa approach can recover much of the discriminating power from a neural network trained in the $(x,y)$ plane. Note that the optimal classifier is simply given by thresholding the likelihood ratio~\cite{Neyman289} $p_s(Y)/p_b(Y)$; in this two-dimensional case it is possible to provide an accurate approximation to this classifier without neural networks. However, these approximations often do not scale well with the dimensionality and will thus be less useful for the realistic example presented in the next section. This is illustrated in the context of CWoLa hunting in Fig.~\ref{fig:toy_likelihood_estimators}.
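The pseudodata and the ideal-cut significance quoted above can be generated in a few lines (a sketch; the random seed is arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
N_b, N_s, w = 10000, 300, 0.2

bkg = rng.uniform(-0.5, 0.5, size=(N_b, 2))      # unit square
sig = rng.uniform(-w / 2, w / 2, size=(N_s, 2))  # signal box

# Ideal (Neyman-Pearson) cut: keep only the signal box.
n_b_pass = np.sum(np.all(np.abs(bkg) < w / 2, axis=1))
print(N_s / np.sqrt(n_b_pass))  # ~ 300/sqrt(10000*w**2) = 15 sigma
\end{verbatim}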
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{toy_cwola.pdf}
\caption{An illustration of the CWoLa procedure for the simple two-dimensional uniform example presented in Sec.~\ref{sec:example}. The left plot shows the mass distribution for the three mass bins, which is uniform for the background. The blue line is the total number of events and the other lines represent thresholds on various neural networks described in the text leading up to Fig.~\ref{fig:toy_cuts}. The center plots show the $(x,y)$ distribution for the events in each mass bin with truth labels (purple for background and yellow for signal). The black square is the true signal region for this example model, with signal distributed uniformly inside. The right plot shows the combined distribution in the $(x,y)$ plane with CWoLa labels that can be used to train a classifier even without any truth-level information (red for target window, blue sideband). \label{fig:toy_cwola}}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{toy_likelihood_estimators.pdf}
\caption{The CWoLa-labeled data can be used to construct an estimate for the optimal classifier $h(x,y)=\frac{p_b(x,y) + p_s(x,y)}{p_b(x,y)}$. The top path shows an estimate constructed by histogramming the observed training events in the $(x,y)$ plane. The bottom path shows an estimate constructed by using a neural network trained as described in the text, which can be efficiently generalized to higher dimensional distributions. The optimal classifier would be 1 outside of the small box centered at the origin and 1.75 inside the box. \label{fig:toy_likelihood_estimators}}
\end{figure}
To perform CWoLa hunting, a neural network is trained on $(x,y)$ values to distinguish events in the mass sidebands from the signal region. Due to the simple nature of the example, it is also possible to easily visualize what the network is learning. A fully-connected feed-forward network is trained using the \textsc{Python} deep learning library \textsc{Keras}~\cite{keras} with a \textsc{Tensorflow}~\cite{tensorflow} backend. The network has three hidden layers with (256, 256, 64) nodes. The network was trained with the categorical cross-entropy loss function using the \textsc{Adam} algorithm~\cite{adam} with a learning rate of 0.003 and a batch size of 1024. The data are split into three equal sets, one used for training, one for validation, and one for testing. The training is terminated based on the efficiency of the signal region cut on the validation data at a fixed false-positive-rate of $2\%$ for the sideband data. If it fails to improve for 60 epochs, the training is halted and the network reverts to the last epoch for which there was a training improvement. This simple scheme is robust against enhancing statistical fluctuations but reduces the number of events used for the final search by a factor of three as only the classifier output on the test set is used for the bump hunt. In the physical example described later, a more complicated scheme maximizes the statistical power of the available data.
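For concreteness, a minimal sketch of this classifier in \textsc{Keras} follows; the hidden-layer activation is not specified above and is assumed here to be ReLU:
\begin{verbatim}
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(256, activation="relu", input_shape=(2,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.003),
              loss="categorical_crossentropy")
# X: (x, y) values; labels: one-hot, 1 = signal region, 0 = sideband
# model.fit(X_train, Y_train, batch_size=1024, ...,
#           validation_data=(X_val, Y_val))
\end{verbatim}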
Visualizations of the neural network trained as described above are presented in Fig.~\ref{fig:toy_NNactivations}. In the top two examples, the network finds the signal region and correctly estimates the magnitude of the likelihood ratio. In both these cases, the network also overtrains on a (real) fluctuation in the training data, despite the validation procedure. Such regions will tend to decrease the effectiveness of the classifier, since a given cut threshold will admit more background in the test data. In the bottom left example of Fig.~\ref{fig:toy_NNactivations}, the network finds a function approximately monotonically related to $h(x,y)$ but with different normalization -- while the cost function would have preferred to optimize this network to reach $h(x,y)$, the validation procedure cut off the optimization when the correct shape to isolate the signal region had been found. Due to the nature of the cuts, there is no performance loss for this network, since crucially it has found the correct shape near the signal region. The last network fails to converge to the signal region, and instead focuses its attention on the fluctuation in the training data. The variation in the network performance illustrates the importance of training multiple classifiers and using schemes to mitigate the impact of statistical fluctuations in the training dataset.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{toy_NNactivations.pdf}
\caption{The classifier $h(x,y)$ constructed from four independent training runs on the same example two-dimensional model dataset described in the text. The thick contours represent the cuts that would reduce the events in the test data target window by a factor of $\epsilon_{\rm test} = 10\%, 5\%, 1\%$. \label{fig:toy_NNactivations}}
\end{figure}
Figure~\ref{fig:toy_cuts} shows the mass distribution in the three bins after applying successively tighter thresholds on the neural network output. Since $Y$ is not a truth bit, the data are reduced in both the signal region and the mass sidebands. For each threshold, the background expectation $\hat{n}_b$ assuming a uniform distribution is estimated by fitting a straight line to the mass sidebands. Then, the significance is estimated from the number of observed events in the signal region, $n_o$, via $\mathcal{S}\approx (n_o-\hat{n}_b)/\sqrt{\hat{n}_b}$. Of the thresholds presented, the maximum significance corresponds to the 5\% efficiency threshold, with $\mathcal{S}\approx 10.8\sigma$. Even though the ideal significance is $15\sigma$, for the particular pseudodataset shown in Fig.~\ref{fig:toy_cuts}, the ideal classifier significance is $13.9\sigma$.
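The significance estimate is simple enough to spell out in code (a sketch; since each sideband here spans half the width of the signal region, the straight-line fit to a uniform background reduces to the sum of the sideband counts):
\begin{verbatim}
import numpy as np

def toy_significance(n_low, n_obs, n_high):
    # Each sideband covers half the signal-region width, so the
    # uniform-background expectation in the signal region is the
    # sum of the two sideband counts.
    n_b_hat = float(n_low + n_high)
    return (n_obs - n_b_hat) / np.sqrt(n_b_hat)
\end{verbatim}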
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{toy_cuts.pdf}
\caption{The mass distribution after various thresholds on the neural network classifier. The flat background fits from the sideband regions are shown as dashed lines, and the statistical uncertainty for each bin is shown by the error bars. The top histogram is the model before any threshold in the $(x,y)$ plane, and from top to bottom the subsequent histograms correspond to efficiency thresholds of $10\%,5\%,1\%,0.2\%$. The significance is $\mathcal{S} = 3\sigma, 9.4\sigma, 10.8\sigma,$ and $3.4\sigma$ for, respectively, no threshold, $10\%$, $5\%$, and $1\%$. The $0.2\%$ threshold reduces the signal to no statistical significance. \label{fig:toy_cuts}}
\end{figure}
We can study the behavior of our NN classifiers by looking at the significance generated by ensembles of models trained on signals of different strength, as shown in Fig.~\ref{fig:toy_ensembles}. The top-left histogram shows the significance for an ensemble of models trained on the example signal (blue) and on a control dataset with no signal (green). The control ensemble appears to be normally distributed around $s/\sqrt{b}=0$, while the example signal ensemble is approximately normally distributed around $12\sigma$ (compared to $13.9\sigma$ for the ideal cut), along with a small $O(5\%)$ population of networks that fail to find the signal. The top-right histogram shows the effect of decreasing the size of the signal region $w_s$ while modifying $N_s$ to maintain an expected significance of $15\sigma$ with ideal cuts. When $w_s$ is decreased, the training procedure appears to have a harder time picking up the signal, possibly due to our choice of an operating point of $2\%$ false-positive rate for the sideband validation. For $w_s=0.1$ (green), about $50\%$ of the networks effectively find the signal. For $w_s =0.05$ (red), only about $5\%$ find the signal. The bottom plot shows the effect of increasing $w_s$ while keeping $N_s$ fixed, so that the strength of the signal decreases. When the size of the signal region is doubled to $w_s=0.4$ (green), giving an expected significance of $7.5\sigma$, the network performs similarly to the $w_s=0.2$ example (blue). When the signal distribution is identical to the background distribution ($w_s=1.0$, red), there is on average a small decrease in performance compared to simply not using a classifier.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{toy_efftest.pdf}
\includegraphics[width=0.45\textwidth]{toy_sigtest.pdf}
\includegraphics[width=0.45\textwidth]{toy_widthtest.pdf}
\caption{\textbf{Top-left:} histogram of significance at a test threshold of $6\%$ for 100 NN trained on the example toy model data (blue), and 100 NN trained on a control dataset with no signal present (green). The dashed green line gives the expected significance of $13.9\sigma$ for the example dataset with ideal cuts.
\textbf{Top-right:} histogram of significance for ensembles with expected signal strength of $15\sigma$ with ideal cuts. The blue is $(N_b = 10000, N_s = 300, w_s = 0.2)$ at a test threshold of $6\%$, the green is $(N_B=10000, N_s = 150, w_s = 0.1)$ at a test threshold of $1.5\%$, and the red is $(N_B=10000, N_s = 75, w_s = 0.05)$ at a test threshold of $0.4\%$. For each ensemble, 100 independent instances of the dataset are generated and one NN is trained on each dataset.
\textbf{Bottom:} histogram of significance for ensembles with $(N_b = 10000, N_s = 300)$ and varying $w_s$. Blue is $w_s = 0.2$ at a test threshold of $6\%$, green is $w_s=0.4$ at a test threshold of $24\%$, and red is $w_s=1.0$ (for which the background and signal distribution in $(x,y)$ are identical) at a test threshold of $50\%$. For each ensemble, 100 independent instances of the dataset are generated and one NN is trained on each dataset. The dashed line gives the expected significance for each ensemble.}
\label{fig:toy_ensembles}
\end{figure}
\section{Full Method}
\label{sec:fullmethod}
The previous sections use key elements of the full extended bump hunt but do not include all components, such as the full background estimation and statistical analysis. This section gives a concrete prescription for applying the CWoLa hunting method in practice, which will be used in an explicit example in Sec.~\ref{sec:physicsexample}. The setup is as in the previous sections: there is a feature $m_\text{res}$ in which the signal is expected to be resonant, and a set of other features $Y$ that are uncorrelated with $m_\text{res}$, but potentially useful for distinguishing signal from background. It is important to state that while a detailed model of $Y$ is not required to perform the CWoLa hunting procedure that is described in the rest of the section, a limited model of $Y$ is required to ensure the correlations with $m_\text{res}$ are minimal. Such a model could come from simulation, from theory, or directly from a sufficiently signal-devoid data sample.
While in the presence of signal the CWoLa hunting method would ideally learn systematic correlations between $m_\text{res}$ and $Y$, it may instead focus on statistical fluctuations in the background distributions. A naive application of CWoLa directly on the data may produce bumps in $m_\text{res}$ by seeking local statistical excesses in the background distribution. This corresponds to a large look-elsewhere effect over the space of observables $Y$ -- the classifier may search this entire space and find the selection with the largest statistical fluctuation. In Sec.~\ref{sec:example}, we took the approach of splitting the dataset into training, validation and test samples, which eliminates this effect, since the statistical fluctuations in the three samples will be uncorrelated. However, applying this approach in practice would reduce the effective luminosity available for the search and thus degrade sensitivity. We therefore apply a cross-validation technique which allows all data to be used for testing while ensuring that event subsamples are never selected using a classifier that was trained on them. We split the events randomly, bin-by-bin, into five event samples of equal size. The first sample is set aside, and the first classifier is trained on the signal- and sideband-region events of the remaining four samples. This classifier may learn the statistical fluctuations in these event samples, but those will be uncorrelated with the fluctuations of the first sample. Applying the classifier to the set-aside event sample will then correspond to only one statistical test, eliminating the look-elsewhere effect. By repeating this procedure five times -- each time setting aside one $k$-fold for testing and four for training and validation -- all the data can be used for the bump hunt by adding up the selected events from each $k$-fold.
\begin{algorithm}[t]
\label{algo:crossval}
Split dataset into 5 subsets stratified by $m_\text{res}$ binning
\For{$\mathrm{subset}_i$ in subsets}{
Set aside $\mathrm{subset}_i$ as test data
\For{$\mathrm{subset}_j$ in subsets, $j \ne i$}{
Validation data sideband = merge sideband bins of $\mathrm{subset}_j$
Validation data signal-region = merge signal-region bins of $\mathrm{subset}_j$
Training data sideband = merge sideband bins of remaining subsets
Training data signal-region = merge signal-region bins of remaining subsets
Assign signal-region data with label 1
Assign sideband data with label 0
Train twenty classifiers on training data, each with different random initialization
$\mathrm{model}_{i, j}$ = best of the twenty models, as measured by performance on validation data
}
$\mathrm{model}_{i}$ = $\sum_j \mathrm{model}_{i,j} / 4$
Select the $r$\% most signal-like data points of $\mathrm{subset}_i$, as determined by $\mathrm{model}_{i}$. The threshold on the neural network output needed to achieve $r$\% is determined using all other bins with large numbers of events, and so the uncertainty on the value is negligible.
}
Merge selected events from each subset into new $m_\text{res}$ histogram
Fit smooth background distribution to $m_\text{res}$ distribution with the signal region masked
Evaluate $p$-value of signal region excess using fitted background distribution interpolated into the signal region.
\caption{Nested cross-validation training and event selection procedure. See Appendix~\ref{app:stats} for further details on the last two points.}
\end{algorithm}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth]{cross_validation_fig.pdf}
\caption{Illustration of the nested cross-validation procedure. \textbf{Left:} the dataset is randomly partitioned bin-by-bin into five groups. \textbf{Center:} for each group $i\in\{1,2,3,4,5\}$ (the test set), an ensemble classifier is trained on the remaining groups $j\neq i$. There are four ways to split the four remaining groups into three for training and one for validation. For each of these four ways, many classifiers are trained and the one with best validation performance is selected. The ensemble classifier is then formed by the average of the four selected classifiers (one for each way to assign the training/validation split). \textbf{Right:} Data are selected from each test group using a threshold cut from their corresponding ensemble classifier. The selected events are then merged into a single $m_\text{res}$ histogram.}
\label{fig:crossval}
\end{figure}
The algorithm we used for this procedure is summarized in Algorithm~\ref{algo:crossval}, and illustrated in Fig.~\ref{fig:crossval}. For each set-aside test set, we perform four rounds of training and validation using the four remaining data subsets. In each round, we set aside one of the remaining subsets as validation data, and the final three are used for training data. Only data falling in the signal and sideband regions are used for training and validation. The training and validation data are labelled as 0 or 1 if they fall in the sideband or signal regions, respectively. For each round, we train twenty NNs on the same training and validation data, using a different initialization each time. Each classifier is validated according to its performance as measured on validation data. Our performance metric $\epsilon_\mathrm{val}$ is the true positive rate for correctly classifying a signal-region event as such, evaluated at a threshold with given false positive rate $s\%$ for incorrectly classifying a sideband region event as a signal region event. If a signal is present in the signal region and the classifier is able to find it, then it should be that $\epsilon_\mathrm{val} > s\%$. On the other hand, if no signal is present then $\epsilon_\mathrm{val} \simeq s\%$ is expected. Since we will typically be considering $\mathcal{O}(1\%)$-level signals, we consider $s\% \sim 1\%$ in our test, and set $s\% = 0.5\%$ to generate our final results. For each of the twenty models, we end training if its performance has not improved in 300 epochs, and revert the model to a checkpoint saved at peak performance. We select the best of these twenty models, and discard the others. At the end of four rounds, the four selected models are averaged to form an ensemble model which is expected to be more robust than any individual model. The ensemble model is used to classify events in the test set, by selecting the $r\%$ most signal-like events. This procedure is repeated for all five choices of test set, and the selected events from each are combined into a signal histogram in $m_\text{res}$. The presence of an identifiable signal will be indicated by a bump in the signal region, for which standard bump-hunting techniques can be used to perform a hypothesis test. The use of averaged ensemble models is important to reduce any performance degradation due to overfitting. Since each of the four models used to make each ensemble model has been trained on different training sets and with different random weight initialization, they will tend to overfit to different events. The models will therefore disagree in regions where overfitting has occurred, but will tend to agree in any region where a consistent excess is found.
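The bookkeeping of this nested cross-validation can be summarized in a short sketch (Python; \texttt{train\_best\_of\_20} is a placeholder for the train-twenty-keep-best step described above, not a library function):
\begin{verbatim}
import numpy as np

def cwola_select(subsets, train_best_of_20, r_frac):
    # subsets: five disjoint event groups, each a dict with fields
    # 'X' (auxiliary observables) and 'mres' (resonance variable)
    selected = []
    for i, test in enumerate(subsets):
        rest = [s for j, s in enumerate(subsets) if j != i]
        models = []
        for j, val in enumerate(rest):
            train = [s for k, s in enumerate(rest) if k != j]
            models.append(train_best_of_20(train, val))
        # ensemble = average of the four selected models
        score = np.mean([m(test["X"]) for m in models], axis=0)
        cut = np.quantile(score, 1.0 - r_frac)
        selected.append(test["mres"][score > cut])
    return np.concatenate(selected)  # fill the m_res histogram
\end{verbatim}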
Further technical details about the statistical methods can be found in Appendix~\ref{app:stats}. Asymptotic formulae can be used to determine the local $p$-value of an excess, but such formulae must be validated using more computationally expensive methods for each application of CWoLa hunting, as is demonstrated in the appendix.
\subsection{Interpreting the Results}
The main result following the application of the method from Sec.~\ref{sec:fullmethod} is the local $p$-value. To determine the compatibility of the entire mass range with the no-resonance hypothesis, it is desirable to be able to compute a global $p$-value. In the result presented here, the mass bins were fixed ahead of time and were also non-overlapping. Therefore, it is relatively simple to estimate a global $p$-value using e.g. a Bonferroni correction. However, this is not ideal (over-conservative) when the mass bin width is scanned as part of the procedure. It is still possible to determine a global $p$-value, in the same spirit as the full bumphunter statistic~\cite{Choudalakis:2011qn}. This would require a significant computational overhead as a large number of neural networks would need to be trained for each of many pseudo-experiments. An additional trials factor would be associated with scanning the threshold fraction on the neural network output. In the simplest approach, a small number of well-separated working points would be chosen, such as 10\%, 1\%, and 0.1\%. These should be sufficiently different that the three local $p$-values could be treated as independent. However, a finer scan would require a proper assessment of the global $p$-value using pseudo-experiments. It may be possible to significantly reduce the computational cost by estimating the correlation between mass windows and threshold fractions in order to properly account for the look-elsewhere-effect~\cite{Gross:2010qma,VITELLS2011230}.
One final remark is about how one would use CWoLa hunting to set limits. In the form described above, the CWoLa hunting approach is designed to find new signals in data without any model assumptions. However, it is also possible to recast the lack of an excess as setting limits on particular BSM models. Given a simulated sample for a particular model, it would be possible to set limits on this model by mixing the simulation with the data and training a series of classifiers as above and running toy experiments, re-estimating the background each time. This is similar to the usual bump hunt, except that there is more computational overhead because the background distribution is determined in part by the neural networks, and the distribution in expected signal efficiencies cannot be determined except by these toy experiments\footnote{This complicates the legacy utility of the results, but it would be possible to tweak procedures like those advocated by RECAST~\cite{Cranmer:2010hk} in which neural networks would be automatically trained for a new signal model.}. In the absence of an excess, it is also possible to directly recast the results by taking the classifier trained on data with no significant signal. However, without a real excess, the classifier will have nothing to learn. Such a classifier will likely not be useful for any particular signal model. Therefore, while it is technically possible to do a standard re-interpretation of the results, the most powerful limit setting requires access to the data to retrain the neural networks for an injected signal.
\section{Physical Example}
\label{sec:physicsexample}
This section uses a dijet resonance search at the LHC to show the potential of CWoLa hunting in a realistic setting. As discussed in Sec.~\ref{sec:intro}, both ATLAS and CMS have a broad program targeting resonance decays into a variety of SM particles. Due to significant advances in jet substructure-based tagging~\cite{Larkoski:2017jix}, searches involving hadronic decays of the SM particles can be just as powerful as, if not more powerful than, their leptonic counterparts. The usual strategy for these searches is to develop dedicated single-jet classifiers, including\footnote{These are the latest $\sqrt{s}=13$ TeV results -- see references within to find the complete history.} $W/Z$-~\cite{CMS-DP-2015-043,ATLAS-CONF-2017-064}, $H$-~\cite{CMS-DP-2015-038,ATLAS-CONF-2016-039}, top-~\cite{CMS-DP-2015-043,CMS-PAS-JME-15-002,ATLAS-CONF-2017-064}, $b$-~\cite{CMS-DP-2017-012,ATL-PHYS-PUB-2017-013}, and quark-jet taggers~\cite{CMS-DP-2016-070,ATL-PHYS-PUB-2017-009}. Simulated events with per-instance labels are used for training and then these classifiers are deployed in data. However, the best classifier in simulation may not be the best classifier in data. This problem is alleviated when learning directly from data.
Learning directly from data has another advantage - the decay products of a new heavy resonance may themselves be beyond the SM. If the massive resonance decays into new light states such as BSM Higgs bosons or dark sector particles that decay hadronically, then no dedicated SM tagger will be optimal~\cite{Aguilar-Saavedra:2018xpl,Aguilar-Saavedra:2017zuc}. A tagger trained to directly find non-generic-jet structure could find these new intermediate particles and thus also find the heavy resonance. This was the approach taken in~\cite{Aguilar-Saavedra:2017rzt}, but that method is fully supervised and so suffers the usual theory prior bias and potential sources of mismodelling. Here we will illustrate how the CWoLa hunting approach could be used instead to find such a signal. The next section (Sec.~\ref{sec:simulation}) describes the benchmark model in more detail, as well as the simulation details for both signal and background.
\subsection{Signal and background simulation}
\label{sec:simulation}
For a benchmark signal, we consider the process $p p \to W' \to W X, X \to W W$, where $W'$ and $X$ are a new vector and scalar particle respectively. This process is predicted, for example, in the warped extra dimensional construction of \cite{Agashe:2016rle, Agashe:2017wss, boosted_diboson}. The typical opening angle between the two $W$ bosons resulting from the $X$ decay is given by $\Delta R(W,W) \simeq 4 \, m_{X} / m_{W'}$ for $2 m_W \ll m_X \ll m_{W'}$, and so the $X$ particle will give rise to a single large-radius jet in the hadronic channel when $m_X \lesssim m_{W'}/4$. Taking the mass choices $m_{W'} = 3 \; \text{TeV}$ and $m_X = 400 \; \text{GeV}$, the signal in the fully hadronic channel is a pair of large-radius jets $J$ with $m_{JJ} \simeq 3 \; \text{TeV}$, one of which has a jet mass $m_J \simeq 80 \; \text{GeV}$ and a two-pronged substructure, and the other has mass $m_J \simeq 400 \; \text{GeV}$ with a four-prong substructure which often is arranged as a pair of two-pronged subjets.
Madgraph5\_aMC@NLO~\cite{Alwall:2014hca} v2.5.5 is used to generate $10^4$ signal events, with a model file implementing the tensor couplings of~\cite{Agashe:2017wss} and selecting only the fully hadronic decays of the three $W$ bosons. The events are showered using Pythia 8.226~\cite{Sjostrand:2007gs}, and are passed through the fast detector simulator Delphes 3.4.1~\cite{deFavereau:2013fsa}. Jets are clustered from energy-flow tracks and towers using the FastJet~\cite{Cacciari:2011ma} implementation of the anti-$k_t$ algorithm \cite{Cacciari:2008gp} with radius parameter $\Delta R = 1.2$. We require events to have at least two ungroomed large-radius jets with $p_T > 400 \; \text{GeV}$ and $|\eta| < 2.5$. The selected jets are groomed using the soft drop algorithm~\cite{Larkoski:2014wba} in grooming mode, with $\beta = 0.5$ and $z_\text{cut} = 0.02$. The two hardest groomed jets are selected as a dijet candidate, and a suite of substructure variables are recorded for these two jets. With the same simulation setup, $4.45 \times 10^6$ Quantum Chromodynamic (QCD) dijet events are generated with parton level cuts $p_{T, \, j} > 300 \; \text{GeV}$, $|\eta_j| < 2.5$, $m_{jj} > 1400 \; \text{GeV}$.
In order to study the behaviour of the CWoLa hunting procedure both in the presence and absence of a signal, we produce samples both with and without an injected signal. The events are binned uniformly in $\log(m_{JJ})$, with 15 bins in the range $2001 \; \text{GeV} < m_{JJ} < 4350 \; \text{GeV}$.
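For reference, these bin edges can be reproduced with a two-line sketch:
\begin{verbatim}
import numpy as np

# 15 bins uniform in log(mJJ) between 2001 and 4350 GeV
edges = np.exp(np.linspace(np.log(2001.0), np.log(4350.0), 16))
\end{verbatim}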
\subsection{Training a Classifier}
In order to test for a signal with mass hypothesis $m_{JJ} \simeq m_{\text{res}}$, we construct a `signal region' consisting of all the events in the three bins centered around $m_{\text{res}}$. We also construct a low- and a high-mass sideband consisting of the events in the two bins below and above the signal region, respectively. The mass hypothesis will be scanned over the range $2278 \; \text{GeV} \leq m_\text{res} \leq 3823 \; \text{GeV}$, to avoid the first and last bins that can not have a reliable background fit without constraints on both sides of the signal region. The signal region width is motivated by the width of the $m_{JJ}$ peak for the benchmark signal process described earlier. Because all particles in the process are very narrow, this width corresponds to the resolution allowed by the jet reconstruction and detector smearing effects and will be relevant for other narrow signal processes also. For processes giving rise to wider bumps, the width of the signal hypothesis could be scanned over just as we scan over the mass hypothesis. We will then train a classifier to distinguish the events in the signal region from those in the sideband on the basis of their substructure. The objective in constructing the training framework is that the classifier should be very poor (equal efficiency in signal region and sideband for any threshold) in the case that no signal is present in the signal region, but if a signal is present with unusual jet substructure then the classifier should be able to locate the signal and provide discrimination power between signal and SM dijet events.
The background is estimated by fitting the regions outside of the signal region to a smoothly falling distribution. In practice, this requires that the auxiliary information $Y$ is nearly independent of $m_{JJ}$; otherwise, the distribution could be sculpted. To illustrate the problem, consider a classifier trained to distinguish the sideband and signal regions using the observables $m_J$ and the N-subjettiness variable $\tau_1^{(2)}$~\cite{Thaler:2011gf}. The ratio $m_J / \sqrt{\tau_1^{(2)}}$ is approximately the jet $p_\text{T}$, which is highly correlated with $m_{JJ}$ for the background. While it is often possible to find ways to decorrelate substructure observables~\cite{Dolen:2016kst,Shimmin:2017mfk,Aguilar-Saavedra:2017rzt,Moult:2017okx}, we take a simpler approach and instead select a basis of substructure variables which have no strong correlations with $m_{JJ}$. We will use the following set of 12 observables, which does not provide learnable correlations with $m_{JJ}$ sufficient to create signal-like bumps in our simulated background dijet event samples, as we shall demonstrate later in this section:
\begin{equation}
\text{For each jet:} ~~~~~Y_i = \left(m_J, ~\sqrt{\tau_1^{(2)}} / \tau_1^{(1)},~ \tau_{21}, ~\tau_{32},~ \tau_{43},~ n_\text{trk}\right),
\end{equation}
where $\tau_{MN} = \tau_M^{(1)} / \tau_N^{(1)}$. The full training uses $Y=(Y_1,Y_2)$. All ratios of N-subjettiness variables are chosen to be invariant under longitudinal boosts, so that the classifier cannot learn $p_T$ from $m_J$ and the other observables. The two jets are ordered by jet mass, so that the first six observables $Y_1$ correspond to the heavier jet while the last six $Y_2$ correspond to the lighter jet. We find that while the bulk of the $m_J$ distribution in our simulated background dijet samples do not vary strongly over the sampled range of $m_{JJ}$, the high mass tails of the heavy and light jet mass distributions are sensitive to $m_{JJ}$. In lieu of a sophisticated decorrelation procedure, we simply reject outlier events which have $m_{J,\,A} > 500 \; \text{GeV}$ and $m_{J,\,B} > 300 \; \text{GeV}$, where the subscripts $A$ and $B$ refer to the heavier and lighter jet respectively.
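Schematically, the classifier input can then be assembled as follows (a sketch; the attribute names on \texttt{jet} are placeholders for however the groomed jet mass, the $\tau_N^{(\beta)}$ values, and the track multiplicity are stored in a given analysis framework):
\begin{verbatim}
import numpy as np

def jet_features(jet):
    # Six observables per jet; tau_MN = tau_M^(1) / tau_N^(1)
    return np.array([
        jet.mass,                              # soft-dropped jet mass
        np.sqrt(jet.tau1_b2) / jet.tau1_b1,    # sqrt(tau_1^(2))/tau_1^(1)
        jet.tau2_b1 / jet.tau1_b1,             # tau_21
        jet.tau3_b1 / jet.tau2_b1,             # tau_32
        jet.tau4_b1 / jet.tau3_b1,             # tau_43
        jet.n_trk,                             # track multiplicity
    ])

# Jets ordered by mass (A heavier than B):
# Y = np.concatenate([jet_features(jet_A), jet_features(jet_B)])
\end{verbatim}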
In our study, the classifiers used are dense neural networks built and trained using Keras with a TensorFlow backend. We use four hidden layers consisting of a first layer of 64 nodes with a leaky Rectified Linear Unit (ReLU) activation (using an inactive gradient of 0.1), and second through fourth layers of 32, 16, 4 nodes respectively with Exponential Linear Unit (ELU) activation~\cite{clevert2015fast}. The output node has a sigmoid activation. The first three hidden layers are regularized with dropout layers with 20\% dropout rate~\cite{JMLR:v15:srivastava14a}. The neural networks are trained to minimize binary cross-entropy loss using the Adam optimizer with learning rate of 0.001, batch size of 20000, first and second moment decay rates of 0.8 and 0.99, respectively, and learning rate decay of $5\times10^{-4}$. The training data are reweighted such that the low sideband has equal total weight to the high sideband, the signal region has the same total weight as the sum of the sidebands, and the sum of all event weights in the training data is equal to the total number of training events. This ensures that the NN output will be peaked around 0.5 in the absence of any signal, and ensures that low and high sideband regions contribute equally to the training in spite of their disparity in event rates.
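A sketch of this architecture in Keras is given below (the learning-rate decay is left as a comment since its handling differs between Keras versions):
\begin{verbatim}
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, input_shape=(12,)),
    keras.layers.LeakyReLU(alpha=0.1),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(32, activation="elu"),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(16, activation="elu"),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(4, activation="elu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    loss="binary_crossentropy",
    optimizer=keras.optimizers.Adam(learning_rate=0.001,
                                    beta_1=0.8, beta_2=0.99),
    # plus the learning-rate decay of 5e-4 quoted above
)
# model.fit(X, y, sample_weight=w, batch_size=20000, ...), with the
# per-event weights w implementing the sideband/signal-region balance
\end{verbatim}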
\subsection{Results}
We use a sample of 553388 QCD dijet events with dijet invariant mass $m_{JJ} > 2001 \; \mathrm{GeV}$, corresponding to an integrated luminosity of $4.4 \; \mathrm{fb}^{-1}$. We consider two cases: first, a background-only sample; and second, a sample in which a signal has been injected with $m_{JJ} \simeq 3000 \; \text{GeV}$, with 877 events in the range $m_{JJ} > 2001 \; \mathrm{GeV}$. In the signal region $2730 \; \mathrm{GeV} < m_{JJ} < 3189 \; \mathrm{GeV}$, consisting of the three bins centered around $3000 \; \text{GeV}$, there are 81341 background events and 522 signal events, corresponding to $S/B = 6.4\times10^{-3}$ and $S/\sqrt{B} = 1.8$. Labelling the bins 1 to 15, we perform the procedure outlined previously to search for signals in the background-only and background-plus-signal datasets in signal regions defined around bins 4--12. This leaves room to define a signal region three bins wide, surrounded by a low and high sideband each two bins wide.
\begin{figure}[!t]%
\centering
\includegraphics[width=\textwidth]{pvalplots.pdf}
\caption{\textbf{Left:} $m_{JJ}$ distribution of dijet events (including injected signal, indicated by the filled histogram) before and after applying jet substructure cuts using the NN classifier output for the $m_{JJ} \simeq 3 \; \text{TeV}$ mass hypothesis. The dashed red lines indicate the fit to the data points outside of the signal region, with the gray bands representing the fit uncertainties. The top dataset is the raw dijet distribution with no cut applied, while the subsequent datasets have cuts applied at thresholds with efficiency of $10^{-1}$, $10^{-2}$, $2\times10^{-3}$, and $2\times10^{-4}$. \textbf{Right:} Local $p_0$-values for a range of signal mass hypotheses in the case that no signal has been injected (left), and in the case that a $3 \; \text{TeV}$ resonance signal has been injected (right). The dashed lines correspond to the case where no substructure cut is applied, and the various solid lines correspond to cuts on the classifier output with efficiencies of $10^{-1}$, $10^{-2}$, and $2\times10^{-3}$.}%
\label{fig:pvalues}%
\end{figure}
In Fig.~\ref{fig:pvalues} left, we plot the events of the background-plus-signal dataset which survive cuts at varying thresholds using the output of the classifier trained on the signal bin centered around $3 \; \mathrm{TeV}$. The topmost distribution corresponds to the inclusive dijet mass distribution, while the subsequent datasets have thresholds applied on the neural network output with overall efficiencies of 10\%, 1\%, 0.2\%, and 0.02\%, respectively. A clear bump develops at the stronger thresholds, indicating the presence of a $3 \; \mathrm{TeV}$ resonance. The automated procedure used to determine the significance is explained in detail in Appendix~\ref{app:stats}. In brief, we estimate the background in the signal region by performing a fit of a smooth three-parameter function to the event rates in all the bins besides those in the signal region. We perform a simple counting experiment in the signal region, using the profile likelihood ratio as the test statistic, with the background fit parameters treated as nuisance parameters, with pre-profile uncertainties taken from the background fit itself. The significance is estimated using asymptotic formulae describing the properties of the profile likelihood ratio statistic~\cite{Cowan:2010js}. Figure~\ref{fig:pvalues} shows the signal significance for each signal mass hypothesis, in the case that no signal is present (left), and in the case that the signal is present (right). We see that when no signal is present, no significant bump is created by our procedure. When a signal is present with $m_{\text{res}} = 3 \; \mathrm{TeV}$, there is a significant bump which forms at this signal hypothesis, reaching $7\sigma$ at 0.2\% efficiency. In Appendix~\ref{app:scan_plots}, we show the $m_{JJ}$ distributions for each scan point used for the calculation of these $p$-values.
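As a rough illustration of the counting part of this procedure, if the background-fit nuisance parameters are neglected, the asymptotic significance of an excess of $S$ events over $B$ expected background events reduces to the standard formula $Z=\sqrt{2\left[(S+B)\ln (1+S/B)-S\right]}$ of Ref.~\cite{Cowan:2010js}. The short sketch below (illustrative Python, a simplification of the full profile likelihood procedure) reproduces the uncut $S/\sqrt{B}\approx 1.8$ quoted above.
\begin{verbatim}
# Asymptotic counting-experiment significance with no background
# uncertainty; an illustrative simplification of the full profile
# likelihood procedure described in the text.
import math

def z_asymptotic(s, b):
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

print(z_asymptotic(522, 81341))  # ~1.83, the uncut S/sqrt(B) above
\end{verbatim}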
The fact that there is no significant bump in the left plot of Fig.~\ref{fig:pvalues} is an important method closure test. When deploying the CWoLa hunting approach in practice, we advocate to test the method in simulation in order to validate that there are no bump-catalyzing correlations in the selected classification features. A residual concern may be that there are correlations in the data which are not present in simulation. Residual correlations may come in two forms: process and kinematic. Process correlations occur when $Y$ depends on the production channel (e.g. $pp\rightarrow qq$ or $pp\rightarrow gg$) and $m_{JJ}$ also depends on the production channel; kinematic correlations are the case when $m_{JJ}$ is correlated with $Y$ given the process. Residual process correlations do not cause bumps because the $m_{JJ}$ distribution of each process type (aside from signal) is smoothly falling. Thus, even if the classifier can exactly pick out one process, no bumps will be artificially sculpted. Residual kinematic correlations could cause artificial bumps in the $m_{JJ}$ distribution. Physically, kinematic correlations occur because $Y_i$ is correlated with $p_\text{T,i}$. One way to show in data that residual kinematic correlations are negligible is to use a \textit{mixed sample} in which pairs of jets from different events are combined. As long as the potential signal fraction is small, this mixed sample will have no resonance peak. While the features chosen in this section were designed to be uncorrelated with $m_{JJ}$ and not sculpt bumps, it may be possible to utilize correlated features in a modified CWoLa hunting procedure that includes systematic uncertainties for strong residual correlations. We leave studies of this possibility to future work.
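A mixed sample of this kind might be constructed as in the following sketch (illustrative NumPy code; the array names are hypothetical), which pairs each leading jet with the subleading jet of a different event so that any genuine resonance peak is washed out while residual kinematic correlations survive.
\begin{verbatim}
# Illustrative mixed-sample construction: pair jets across events.
# jet1_p4, jet2_p4 are hypothetical (n_events, 4) arrays of
# (E, px, py, pz) for the leading and subleading jets.
import numpy as np

def mixed_mjj(jet1_p4, jet2_p4):
    jet2_other = np.roll(jet2_p4, 1, axis=0)  # jets from other events
    p = jet1_p4 + jet2_other
    m2 = p[:, 0]**2 - np.sum(p[:, 1:]**2, axis=1)
    return np.sqrt(np.clip(m2, 0.0, None))
\end{verbatim}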
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{scattermJJ.pdf}
\caption{Events projected onto the 2D plane of the two jet masses. The classifiers are trained to discriminate events in the signal region (left plot) from those in the sideband (second plot). The third plot shows in red the 0.2\% most signal-like events determined by the classifier trained in this way. The rightmost plot shows in red the truth-level signal events.}
\label{fig:scattermJJ}
\end{figure}
We can investigate what the classifier has learnt by looking at the properties of events which have been classified as signal-like. In the first (second) plot of Fig.~\ref{fig:scattermJJ}, events in the signal (sideband) region have been plotted on the plane of the jet masses of the heavier jet ($m_{J,\,A}$) and the lighter jet ($m_{J,\,B}$). After being trained to discriminate the events of the signal region from those of the sideband, the 0.2\% most signal-like events as determined by the classifier are plotted in red in the third plot of Fig.~\ref{fig:scattermJJ}, overlaid on top of the remaining events in gray. The classifier has selected a population of events with $m_{J\,A} \simeq 400 \; \mathrm{GeV}$ and $m_{J\,B} \simeq 80 \; \mathrm{GeV}$, consistent with the injected signal. The final plot of the figure shows in red the truth-level signal events, overlaid on top of the truth-level background in grey.
Figure~\ref{fig:scatterarray} shows some further 2D projections of the data. In each case, the $x$-axis is the jet mass of the heavier or the lighter jet in the top three or bottom three rows, respectively, while the $y$-axes correspond to substructure observables as measured on the same jet. The first column shows all events in the signal region. The second column shows truth-level signal events in red overlaid on truth-level background in gray. The third column shows the 0.2\% most signal-like events as determined by the classifier trained on this data. The fourth column shows the 0.2\% most signal-like events as determined by a classifier trained on the data sample with no signal events, only background. We see that the tagger trained when signal is present has found a cluster of events with a $400 \; \mathrm{GeV}$ jet with small $\tau_{43}^{(1)}$ and small $n_\text{trk}$; and an $80 \; \mathrm{GeV}$ jet with relatively small $\sqrt{\tau_1^{(2)}}/ \tau_1^{(1)}$, small $\tau_{21}^{(1)}$, and small $n_\text{trk}$. On the other hand, the events selected by the classifier trained on the background-only sample show no obvious clustering or pattern, and perhaps represent artifacts of statistical fluctuations in the training data.
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{scatterarray.pdf}
\caption{2D projections of the 12D feature-space of the signal region dataset. \textbf{First column:} all signal region events. \textbf{Second column:} truth-level simulated signal events are highlighted in red. \textbf{Third column:} 0.2\% most signal-like events selected by the classifier described in Section~\ref{sec:physicsexample} are highlighted in red. \textbf{Fourth column:} highlighted in red are the 0.2\% most signal-like events selected by a classifier trained on the same sample but with true-signal events removed.}
\label{fig:scatterarray}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\textwidth]{dedicated_vs_cwola.pdf}
\caption{Truth-label ROC curves for taggers trained using CWoLa with varying number of $WX$ signal events, compared to those for a dedicated tagger trained on pure $WX$ signal and background samples (dashed black) and one trained to discriminate $W$ and $Z$ jets from QCD (dot-dashed black). The CWoLa examples have $B = 81341$ in the signal region and $S = (230,352,472,697,927)$.}
\label{fig:dedvcwola}
\end{figure}
The ability of the CWoLa approach to discriminate signal from background depends on the number of signal and background events in the signal and sideband regions. In Fig.~\ref{fig:dedvcwola}, we keep the number of background events fixed but vary the size of the signal, and plot truth-label ROC curves for each example. This allows us to directly assess the performance of the taggers for the signal. For varying thresholds, the $x$-axis corresponds to the efficiency on true signal events in the signal region, $\epsilon_S$, while the $y$-axis represents the inverse of the efficiency on true QCD events in the signal region, $1/\epsilon_B$. The gray dashed lines labelled 1 to 32 indicate the significance improvement, $\epsilon_S / \sqrt{\epsilon_B}$, which quantifies the gain in statistical significance compared to the raw $m_{JJ}$ distribution with no cuts applied. In dashed black we show the performance of a dedicated tagger trained with labelled signal and background events using a fully supervised approach. This gives a measure of the maximum achievable performance for this signal using the selected variables. A true dedicated tagger which could be used in a realistic dedicated search would be unlikely to reach this performance, since this would require careful calibration over 12 substructure variables with only simulated data available for the signal. While the CWoLa-based taggers do not reach the supervised performance in these examples, we find that performance does gradually improve with increasing statistics.
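Significance-improvement curves of this kind can be obtained from truth labels and classifier scores as in the sketch below (illustrative NumPy code with hypothetical array names).
\begin{verbatim}
# Truth-label ROC and significance improvement eps_S/sqrt(eps_B) as a
# function of the classifier threshold (illustrative sketch).
import numpy as np

def significance_improvement(scores, is_signal):
    order = np.argsort(-scores)            # tightest cuts first
    eps_s = np.cumsum(is_signal[order]) / is_signal.sum()
    eps_b = np.cumsum(~is_signal[order]) / (~is_signal).sum()
    ok = eps_b > 0
    return eps_s[ok], eps_b[ok], eps_s[ok] / np.sqrt(eps_b[ok])
\end{verbatim}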
We also show in the dot-dashed black curve the performance of a $W$/$Z$ tagger in identifying this signal, for which the tagger is not designed. This tagger is trained on a sample of $p p \to W' \to W Z$ events in the fully hadronic channel. In this case, the tagger is trained on the individual $W$/$Z$ jets themselves rather than the dijet event, as is typical in the current ATLAS and CMS searches. In producing the ROC curve, dijet events are considered to pass the tagging requirement if both large-radius jets pass a threshold cut on the output of the $W$/$Z$-tagger. We see that for $\epsilon_B \sim 10^{-4}$, which is a typical background rejection rate for recent hadronic diboson searches, the signal rate is negligible since the $X$-jet rarely passes the cuts. This illustrates that CWoLa hunting may find unexpected signals which are not targeted by existing dedicated searches if $S/B$ is high enough. If $S/B$ is too low, then the CWoLa hunting approach is not able to identify the signal and it underperforms compared with the search targeting a different signal model.
The datasets and code used for the case study can be found at Refs.~\cite{cwola_hunting_dataset, cwola_hunting_code}.
\section{Conclusions}
\label{sec:conc}
We have presented a new anomaly detection technique for finding BSM physics signals directly from data. The central assumption is that the signal is localized as a bump in one variable in which the background is smooth, and that other features are available for additional discrimination power. This allows us to identify potential signal-enhanced and signal-depleted event samples with almost identical background characteristics on which a classifier can be trained using the Classification Without Labels approach. In the case that a distinctive signal is present, the trained classifier output becomes an effective discriminant between signal events and background events, while in the case that no signal is present the classifier output shows no clear pattern. An event selection based on a threshold cut on the classifier output produces a smooth distribution if no signal is present and produces a bump if a signal is present, and so standard bump hunting techniques can be used on the selected distribution.
The prototypical example used here is the dijet resonance search in which the dijet mass is the one-dimensional feature where the signal is localized. Related quantities could also be used, such as the single jet mass for boosted resonance searches~\cite{Sirunyan:2017dnz,Sirunyan:2017nvi,Aaboud:2018zba} or the average mass of pair produced objects~\cite{Aaboud:2017nmi,CMS:2018sek,ATLAS:2012ds,Aad:2016kww,Chatrchyan:2013izb,Khachatryan:2014lpa}. Jet substructure information was used to augment information from just the dijet mass and a CWoLa classifier was trained using a deep neural network to discriminate signal region events from sideband events based on their substructure distributions. Additional local information such as the number of leptons inside the jets, the number of displaced vertices, etc. could be used in the future to ensure sensitivity to a wide variety of models. Furthermore, event-level information such as the number of jets or the magnitude of the missing transverse momentum could be added to an extended CWoLa hunt.
The CWoLa hunting strategy is generalizable beyond this single case study. To summarize, the essential requirements are:
\begin{enumerate}
\item There is one bump-variable $m_\text{res}$ in which the background forms a smooth distribution, for which there is a background model such as a parametric function, and a signal can be expected to be localized as a bump. This was the variable $m_{JJ}$ in the dijet case study.
\item There are additional features $Y$ in the events which may potentially provide discriminating power between signal and background, but the detailed topology of the signal in these variables is not known in advance. This was the set of substructure variables in the dijet study.
\item The background distribution in $Y$ should not have strong correlations with $m_\text{res}$ over the resonance width of the signal. In the case that such correlations exist, it may be possible to find a transformation of the variables that removes these correlations before being fed into the classifier, or alternatively to train the classifier in such a way that penalizes shaping of the $m_\text{res}$ distribution outside of the signal region. Closure tests in simulation or with mixed samples in data can be used to confirm that $Y$ is not strongly correlated with $m_\text{res}$; a minimal first-pass correlation check is sketched after this list.
\end{enumerate}
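As a first pass at the check in requirement 3, one can simply measure the linear correlation of each feature with $m_\text{res}$ in sideband data, as in the following illustrative sketch; a small coefficient does not exclude nonlinear correlations, so this complements rather than replaces the closure tests above.
\begin{verbatim}
# First-pass linear correlation check of each feature against the
# resonance variable in data sidebands (illustrative; arrays are
# hypothetical, and only linear correlations are probed).
import numpy as np

def sideband_correlations(m_res, features):
    # features has shape (n_events, n_features)
    return np.array([np.corrcoef(m_res, features[:, i])[0, 1]
                     for i in range(features.shape[1])])
\end{verbatim}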
By harnessing the power of modern machine learning, CWoLa hunting and other weakly supervised strategies may provide the key to uncovering BSM physics lurking in the unique datasets already collected by the LHC experiments.
\acknowledgments
We appreciate helpful discussions with and useful feedback on the manuscript from Timothy Cohen, Aviv Cukierman, Patrick Fox, Jack Kearney, Zhen Liu, Eric Metodiev, Brian Nord, Bryan Ostdiek, Matt Schwartz, and Jesse Thaler. We would also like to thank Peizhi Du for providing the UFO file for the benchmark signal model used in Sec.~\ref{sec:physicsexample}. The work of JHC is supported by NSF under Grant No. PHY-1620074 and by the Maryland Center for Fundamental Physics (MCFP). The work of B.N. is supported by the DOE under contract DE-AC02-05CH11231.
This manuscript has been authored by Fermi Research Alliance, LLC under
Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy,
Office of Science, Office of High Energy Physics. The United States
Government retains and the publisher, by accepting the article for
publication, acknowledges that the United States Government retains a
non-exclusive, paid-up, irrevocable, world-wide license to publish or
reproduce the published form of this manuscript, or allow others to do
so, for United States Government purposes.
\section{Introduction}
Many experts admit that quantum field theory is a difficult and complicated subject. Besides that, it is plagued with infinities, divergences and mathematical inconsistencies. The purpose of this work is to try to gain a deeper understanding of the foundations and basic principles of the theory by considering some toy models. By necessity, these toy models will involve over-simplifications and they do not directly apply to physical reality. However, these models exist on finite-dimensional Hilbert spaces and are mathematically rigorous. Moreover, it is possible that, in some sense, they converge to an approximation of the real observed world. In Section~5 we discuss ways this may be accomplished. In particular, we assume that special relativity is an approximation to a discrete spacetime which is the fundamental description of the structure of the universe \cite{bdp16,cro16,gud172,hag14,hoo14}.
We first define free fermion, boson and mixed fermion-boson quantum fields. Applying energy and particle number cut-offs, the free fields become rigorous operators on finite-dimensional Hilbert spaces. These operators are the bases for our toy models. The free fields are employed to form what we consider to be the simplest examples of interacting fields. Many examples for free and interacting toy quantum fields are given in Sections~3 and~4. To better understand these fields we stress the eigenstructure of their quantum operators. In Section~5 we use interacting fields to construct Hamiltonian densities. Scattering operators, which are the most important operators of quantum field theory, are defined in terms of the Hamiltonian densities. We finally show how scattering probabilities can be found using the scattering operators. Scattering cross-sections, decay probabilities and particle lifetimes can also be computed in the standard ways.
It is hoped that these computed quantities will converge to numbers that will agree with experiment, but this will have to wait for future work. As we shall show in examples, even these toy models quickly involve complications which can only be solved with computer assistance. We warn the reader that our definition of a scattering operator is not the standard one involving ``second-quantization''. Perhaps it is too na\'ive, but we believe that this method is simpler and more natural.
\section{Free Toy Quantum Fields}
In this toy model we consider a system with at most $s$ particles $p_1,p_2,\ldots ,p_s$. We first assume that the particles are all fermions with the same mass (say, electrons). For simplicity, we neglect spin since the model can be extended to include spin in a straightforward way. We construct a finite-dimensional complex Hilbert space $K^s$ with orthonormal basis
\begin{equation*}
\ket{0},\ket{p_1},\ldots ,\ket{p_s},\cdots ,\ket{p_{i_1}\ldots p_{i_n}},\cdots ,\ket{p_1p_2\ldots p_s}
\end{equation*}
where $\ket{p_{i_1}\ldots p_{i_n}}$ represents the state in which there are $n$ particles with energy-momentum $p_{i_1},\ldots ,p_{i_n}$. The
\textit{vacuum state} with no particles is $\ket{0}$. This should not be confused with the zero vector in $K^s$ denoted by $0$. The basis is antisymmetric in the sense that if two particles are interchanged, then a negative sign results. For example, we have that
$\ket{p_1p_2p_3}=-\ket{p_3p_2p_1}$. It follows that no two entries in a basis vector agree. Notice that the dimension of $K^s$ is
\begin{equation*}
\dim K^s=\sum _{j=0}^s\binom{s}{j}=2^s
\end{equation*}
For a particle $p_j$ we define the \textit{annihilation operator} $a(p_j)$ on a basis element by
\begin{equation*}
a(p_j)\ket{p_jp_{i_1}\dots p_{i_n}}=\ket{p_{i_1}\ldots p_{i_n}}
\end{equation*}
and if $\ket{p_{i_1}\ldots p_{i_n}}$ does not contain an entry $p_j$, then
\begin{equation*}
a(p_j)\ket{p_{i_1}\ldots p_{i_n}}=0
\end{equation*}
We then extend $a(p_j)$ to $K^s$ by linearity. Notice for example, that $a(p_1)\ket{p_1p_2}=\ket{p_2}$ while
\begin{equation*}
a(p_1)\ket{p_2p_1}=-a(p_1)\ket{p_1p_2}=-\ket{p_2}
\end{equation*}
We interpret $a(p_j)$ as the operator that annihilates a particle with energy-momentum $p_j$.
The adjoint $a(p_j)^*$ of $a(p_j)$ is the operator on $K^s$ defined by
\begin{equation*}
a(p_j)^*\ket{p_jp_{i_1}\ldots p_{i_n}}=0
\end{equation*}
and if $\ket{p_{i_1}\ldots p_{i_n}}$ does not contain an entry $p_j$, then
\begin{equation*}
a(p_j)^*\ket{p_{i_1}\ldots p_{i_n}}=\ket{p_jp_{i_1}\ldots p_{i_n}}
\end{equation*}
We call $a(p_j)^*$ a \textit{creation operator} and interpret $a(p_j)^*$ as the operator that creates a particle with energy-momentum $p_j$. Defining the \textit{commutator} and \textit{anticommutator} of two operators $A,B$ on $K^s$ by $[A,B]=AB-BA$ and $\brac{A,B}=AB+BA$, respectively, it is easy to check that $a(p_j)$ and $a(p_j)^*$ have the characteristic properties:
\begin{align}
\label{eq21}
\brac{a(p_i),a(p_j)}&=\brac{a(p_i)^*,a(p_j)^*}=0\\
\label{eq22}
\brac{a(p_i),a(p_j)^*}&=\delta _{ij}I
\end{align}
where $I$ is the identity operator. For example, if $i\ne j$ then
\begin{align*}
\brac{a(p_i),a(p_i)^*}\ket{p_j}&=a(p_i)a(p_i)^*\ket{p_j}+a(p_i)^*a(p_i)\ket{p_j}\\
&=a(p_i)\ket{p_ip_j}+a(p_i)^*(0)=\ket{p_j}
\end{align*}
while if $i=j$, then
\begin{align*}
\brac{a(p_i),a(p_i)^*}\ket{p_j}&=a(p_i)a(p_i)^*\ket{p_j}+a(p_i)^*a(p_i)\ket{p_j}\\
&=a(p_i)(0)+a(p_i)^*\ket{0}=\ket{p_i}
\end{align*}
Of course, it follows that $a(p_i)$ and $a(p_j)$ do not commute when $i\ne j$.
Corresponding to a fermion $p_j$ and a complex number $\alpha _j\in{\mathbb C}$ we define the \textit{annihilation-creation operator}
(AC-\textit{operator}) $\eta (p_j)$ by
\begin{equation*}
\eta (p_j)=\alpha _ja(p_j)+\overline{\alpha} _ja(p_j)^*
\end{equation*}
It is clear that $\eta (p_j)$ is a self-adjoint operator on $K^s$ and it follows from \eqref{eq21} and \eqref{eq22} that
\begin{equation*}
\brac{\eta (p_i),\eta (p_j)}=2\ab{\alpha _i}^2\delta _{ij}I
\end{equation*}
It follows that $\eta (p_i)$ and $\eta (p_j)$ do not commute when $i\ne j$. If $p_1,\ldots ,p_n$ are distinct fermions in our system and
$\alpha _j\in{\mathbb C}$, $j=1,\ldots ,n$ we call
\begin{equation}
\label{eq23}
\phi =\sum _{j=1}^n\eta (p_j)=\sum _{j=1}^n\sqbrac{\alpha _ja(p_j)+\overline{\alpha} _ja(p_j)^*}
\end{equation}
a \textit{free fermion toy quantum field}. Again, $\phi$ is a self-adjoint operator on $K^s$.
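Since all of these operators act on a finite-dimensional space, the relations \eqref{eq21} and \eqref{eq22} can be verified numerically. The following sketch (illustrative Python/NumPy code, not part of the formal development) builds the matrices of the $a(p_j)$ on $K^s$ directly from the definitions above and checks that $\phi ^2=\paren{\ab{\alpha}^2+\ab{\beta}^2}I$ for $\phi =\eta (p_1)+\eta (p_2)$, as the anticommutation relations force.
\begin{verbatim}
# Illustrative NumPy construction of the fermion operators a(p_j) on
# K^s from the definitions above (basis states are sorted tuples; the
# sign counts the transpositions needed to move p_j to the front).
import itertools
import numpy as np

def fermion_ops(s):
    basis = [t for k in range(s + 1)
             for t in itertools.combinations(range(1, s + 1), k)]
    index = {t: i for i, t in enumerate(basis)}
    ops = {}
    for j in range(1, s + 1):
        a = np.zeros((len(basis), len(basis)))
        for i, t in enumerate(basis):
            if j in t:
                sign = (-1.0) ** t.index(j)
                a[index[tuple(x for x in t if x != j)], i] = sign
        ops[j] = a                      # a(p_j); its adjoint is ops[j].T
    return ops

a = fermion_ops(3)
alpha, beta = 0.7 + 0.2j, 0.4 - 0.5j
phi = (alpha * a[1] + np.conj(alpha) * a[1].T
       + beta * a[2] + np.conj(beta) * a[2].T)
print(np.allclose(a[1] @ a[2] + a[2] @ a[1], 0))             # True
print(np.allclose(phi @ phi,
      (abs(alpha)**2 + abs(beta)**2) * np.eye(8)))           # True
\end{verbatim}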
We now consider a system of bosons with energy-momentum $q_j$, $j=1,\ldots ,n$ and having the same mass (say, photons). We again have the restriction that there are at most $s$ particles. In this case, we can have that $q_i=q_j$ so some of the particles can be identical. We construct the symmetric Hilbert space $J^{(n,s)}$ with orthonormal basis $\ket{q_{i_1}\cdots q_{i_k}}$. We sometimes use the notation
\begin{equation*}
\ket{q_{i_1}^{j_1}q_{i_2}^{j_2}\cdots q_{i_k}^{j_k}}
\end{equation*}
to represent the state in which there are $j_1$ bosons of type $i_1$, \ldots, $j_k$ bosons of type $i_k$. For example
\begin{equation*}
\ket{q_1^2q_2^0q_3^3q_4}=\ket{q_1q_1q_3q_3q_3q_4}
\end{equation*}
Since $J^{(n,s)}$ is symmetric, an interchange of entries does not affect the state. For example,
\begin{equation*}
\ket{q_1q_2q_3}=\ket{q_2q_1q_3}=\ket{q_2q_3q_1}
\end{equation*}
Notice that there is a one-to-one correspondence between basis elements and multisets with at most cardinality $s$ that have at most $n$ different elements. It is well-known that the number of multisets with $k$ elements chosen from among $q_1,\ldots ,q_n$ is
\begin{equation*}
{n+k-1\choose k}=\frac{(n+k-1)!}{k!(n-1)!}
\end{equation*}
We conclude that
\begin{equation*}
\dim J^{(n,s)}=\sum _{k=0}^s{n+k-1\choose k}
\end{equation*}
For example, if $n=2$, $s=3$ we have that
\begin{equation*}
\dim J^{(2,3)}=\sum _{k=0}^3{k+1\choose k}={1\choose 0}+{2\choose 1}+{3\choose 2}+{4\choose 3}=1+2+3+4=10
\end{equation*}
The basis elements for $J^{(2,3)}$ are
\begin{equation*}
\ket{0},\ \ket{q_1},\ \ket{q_2},\ \ket{q_1^2},\ \ket{q_1q_2},\ \ket{q_2^2},\ \ket{q_1^3},\ \ket{q_1^2q_2},\ \ket{q_1q_2^2},\ \ket{q_2^3}
\end{equation*}
For a boson $q_j$ we define the \textit{annihilation operator} $a(q_j)$ by
\begin{equation*}
a(q_j)\ket{q_j^kq_{j_1}^{k_1}\cdots q_{j_t}^{k_t}}=\sqrt{k\,}\,\ket{q_j^{k-1}q_{j_1}^{k_1}\cdots q_{j_t}^{k_t}}
\end{equation*}
where $k+k_1+\cdots +k_t\le s$. The corresponding \textit{creation operator} $a(q_j)^*$ is given by
\begin{equation*}
a(q_j)^*\ket{q_j^kq_{j_1}^{k_1}\cdots q_{j_t}^{k_t}}
=\begin{cases}\sqrt{k+1}\,\ket{q_j^{k+1}q_{j_1}^{k_1}\cdots q_{j_t}^{k_t}}&\hbox{if }k+k_1+\cdots +k_t<s\\
0&\hbox{if }k+k_1+\cdots +k_t=s\end{cases}
\end{equation*}
As before, these operators are extended to $J^{(n,s)}$ by linearity. We define the boundary $\overline{J} ^{(n,s)}$ of $J^{(n,s)}$ as the subspace of $J^{(n,s)}$ generated by the basis vectors
\begin{equation*}
V^{(n,s)}=\brac{\ket{q_{i_1}^{j_1}\cdots q_{i_t}^{j_t}}\colon j_1+\cdots +j_t=s}
\end{equation*}
We have that
\begin{align*}
\sqbrac{a(q_j),a(q_k)}&=\sqbrac{a(q_j)^*,a(q_k)^*}=0\ \hbox{on } J^{(n,s)}\\
\sqbrac{a(q_j),a(q_k)^*}&=\delta _{jk}I\ \hbox{ on }J^{(n,s)}\smallsetminus\overline{J} ^{(n,s)}
\end{align*}
while on $\overline{J} ^{(n,s)}$ we have that
\begin{equation*}
\sqbrac{a(q_j),a(q_k)^*}\ket{\psi}
=\begin{cases}-\sqrt{N_{q_k}(\psi )+1}\,\sqrt{N_{q_j}(\psi)}\ \ket{\psi '}&\hbox{if }q_j\ne q_k\\
-N_{q_j}(\psi )\ \ket{\psi}&\hbox{if }q_j=q_k\end{cases}
\end{equation*}
where $N_{q_j}(\psi )$ is the number of $q_j$'s in the basis vector $\psi$ and $\ket{\psi '}$ is the basis vector obtained from $\ket{\psi}$ by replacing one $q_j$ with $q_k$. As before, we define the self-adjoint AC-operators
\begin{equation*}
\eta (q_j)=\alpha _ja(q_j)+\overline{\alpha} _ja(q_j)^*
\end{equation*}
and the \textit{free boson toy quantum fields}
\begin{equation}
\label{eq24}
\psi =\sum _{j=1}^m\eta (q_j)=\sum _{j=1}^m\sqbrac{\alpha _ja(q_j)+\overline{\alpha} _ja(q_j)^*}
\end{equation}
We finally consider a mixed system of fermions $p_1,\ldots ,p_m$ with the same mass and bosons $q_1,\ldots ,q_n$ with the same mass. Again, we limit the total number of particles to $s$. The corresponding Hilbert space $L^{(m,n,s)}$ has orthonormal basis
\begin{equation*}
\ket{p_{j_1}\cdots p_{j_r}q_{k_1}\cdots q_{k_t}}
\end{equation*}
The basis elements are antisymmetric in the $p$'s, symmetric in the $q$'s and symmetric under an interchange of $p$'s and $q$'s. We then have the two free toy quantum fields $\phi$ of \eqref{eq23} and $\psi$ of \eqref{eq24} acting on $L^{(m,n,s)}$.
\section{Eigenstructures for Free Toy Fields}
One way to understand free toy fields is to find their eigenvalues and eigenvectors which can then be employed to construct their spectral representation. The simplest way to begin is to study the AC-operator
\begin{equation}
\label{eq31}
\eta (p_1)=\alpha a(p_1)+\overline{\alpha} a(p_1)^*
\end{equation}
on the fermion Hilbert space $K^s$. It is easy to check that the eigenvalues of $\eta (p_1)$ are $\pm\ab{\alpha}$. The eigenvectors corresponding to $\ab{\alpha}$ have the form
\begin{equation}
\label{eq32}
\ab{\alpha}\ket{p_{i_1}\cdots p_{i_k}}+\overline{\alpha}\ket{p_1p_{i_1}\cdots p_{i_k}},\quad i_j\ne 1
\end{equation}
and the eigenvectors corresponding to $-\ab{\alpha}$ have the form
\begin{equation}
\label{eq33}
\ab{\alpha}\ket{p_{i_1}\cdots p_{i_k}}-\overline{\alpha}\ket{p_1p_{i_1}\cdots p_{i_k}},\quad i_j\ne 1
\end{equation}
Since there are $2^{s-1}$ vectors of the form \eqref{eq32} and $2^{s-1}$ vectors of the form \eqref{eq33}, we have a complete set of eigenvectors (which we have not bothered to normalize). Notice that because of the form of the eigenvalues, we have
$\eta (p_1)^2=\ab{\alpha}^2I$.
We next consider the AC-operator
\begin{equation}
\label{eq34}
\eta (p_2)=\beta a(p_2)+\overline{\beta} a(p_2)^*
\end{equation}
whose eigenvalues are $\pm\ab{\beta}$ and whose eigenvectors are similar to \eqref{eq32}, \eqref{eq33}. For concreteness, letting $s=2$, the orthonormal basis for the 4-dimensional Hilbert space $K^2$ is: $\ket{0}$, $\ket{p_1}$, $\ket{p_2}$, $\ket{p_1p_2}$. We consider a free toy field defined by $\phi =\eta (p_1)+\eta (p_2)$. Relative to the given basis we have the matrix representation
\begin{equation*}
\phi =\begin{bmatrix}\noalign{\smallskip}
0&\alpha&\beta&0\\\noalign{\smallskip}\overline{\alpha}&0&0&-\beta\\\noalign{\smallskip}\overline{\beta}&0&0&\alpha\\\noalign{\smallskip}
0&-\overline{\beta}&\overline{\alpha}&0\\\noalign{\smallskip}\end{bmatrix}\end{equation*}
The eigenvalues of $\phi$ are $\pm\sqrt{\ab{\alpha}^2+\ab{\beta}^2}$ and each of these has multiplicity two. The corresponding eigenvectors (which are not orthonormalized) are
\begin{equation*}
\begin{bmatrix}
-\alpha\\\noalign{\smallskip}-\sqrt{\ab{\alpha}^2+\ab{\beta}^2}\\\noalign{\smallskip}0\\\noalign{\smallskip}\overline{\beta}\end{bmatrix},\
\begin{bmatrix}\noalign{\smallskip}
\sqrt{\ab{\alpha}^2+\ab{\beta}^2}\\\noalign{\smallskip}\overline{\alpha}\\\noalign{\smallskip}\overline{\beta}\\\noalign{\smallskip}0\end{bmatrix},\
\begin{bmatrix}
-\alpha\\\noalign{\smallskip}\sqrt{\ab{\alpha}^2+\ab{\beta}^2}\\\noalign{\smallskip}0\\\noalign{\smallskip}\overline{\beta}\end{bmatrix},\
\begin{bmatrix}\noalign{\smallskip}
-\sqrt{\ab{\alpha}^2+\ab{\beta}^2}\\\noalign{\smallskip}\overline{\alpha}\\\noalign{\smallskip}\overline{\beta}\\\noalign{\smallskip}0\end{bmatrix}
\end{equation*}
We next consider the 8-dimensional Hilbert space $K^3$ with orthonormal basis: $\ket{0}$, $\ket{p_1}$, $\ket{p_2}$, $\ket{p_3}$,
$\ket{p_1p_2}$, $\ket{p_1p_3}$, $\ket{p_2p_3}$, $\ket{p_1p_2p_3}$. Relative to this basis we have
\begin{equation*}
\phi=\begin{bmatrix}
0&\alpha&\beta&0&0&0&0&0\\\noalign{\smallskip}\overline{\alpha}&0&0&0&-\beta&0&0&0\\\noalign{\smallskip}
\overline{\beta}&0&0&0&\alpha&0&0&0\\\noalign{\smallskip}0&0&0&0&0&\alpha&\beta&0\\\noalign{\smallskip}
0&-\overline{\beta}&\overline{\alpha}&0&0&0&0&0\\\noalign{\smallskip}0&0&0&\overline{\alpha}&0&0&0&-\beta\\\noalign{\smallskip}
0&0&0&\overline{\beta}&0&0&0&\alpha\\\noalign{\smallskip}0&0&0&0&0&-\overline{\beta}&\overline{\alpha}&0\\\noalign{\smallskip}
\end{bmatrix}
\end{equation*}
Since $\eta (p_1)$ and $\eta (p_2)$ anticommute and square to $\ab{\alpha}^2I$ and $\ab{\beta}^2I$, respectively, we again have $\phi ^2=\paren{\ab{\alpha}^2+\ab{\beta}^2}I$. Hence the eigenvalues of $\phi$ are $\pm\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}$, each with multiplicity four. Moreover, the subspaces spanned by $\ket{0}$, $\ket{p_1}$, $\ket{p_2}$, $\ket{p_1p_2}$ and by $\ket{p_3}$, $\ket{p_1p_3}$, $\ket{p_2p_3}$, $\ket{p_1p_2p_3}$ are invariant under $\phi$, and relative to these bases $\phi$ restricts to the same matrix as in the $K^2$ case, so the eigenvectors are the $K^2$ eigenvectors embedded in the two subspaces.
Letting $\eta (p_3)=\gamma a(p_3)+\overline{\gamma} a(p_3)^*$, we have the free toy field $\phi =\eta (p_1)+\eta (p_2)+\eta (p_3)$ with matrix representation
\begin{equation*}
\phi=\begin{bmatrix}
0&\alpha&\beta&\gamma&0&0&0&0\\\noalign{\smallskip}\overline{\alpha}&0&0&0&-\beta&-\gamma&0&0\\\noalign{\smallskip}
\overline{\beta}&0&0&0&\alpha&0&-\gamma&0\\\noalign{\smallskip}\overline{\gamma}&0&0&0&0&\alpha&\beta&0\\\noalign{\smallskip}
0&-\overline{\beta}&\overline{\alpha}&0&0&0&0&\gamma\\\noalign{\smallskip}0&-\overline{\gamma}&0&\overline{\alpha}&0&0&0&-\beta\\\noalign{\smallskip}
0&0&-\overline{\gamma}&\overline{\beta}&0&0&0&\alpha\\\noalign{\smallskip}0&0&0&0&\overline{\gamma}&-\overline{\beta}&\overline{\alpha}&0\\\noalign{\smallskip}
\end{bmatrix}
\end{equation*}
Since $\eta (p_1)$, $\eta (p_2)$ and $\eta (p_3)$ mutually anticommute, $\phi ^2=\paren{\ab{\alpha}^2+\ab{\beta}^2+\ab{\gamma}^2}I$, so the eigenvalues of $\phi$ are $\pm\sqrt{\ab{\alpha}^2+\ab{\beta}^2+\ab{\gamma}^2\,}$, each with multiplicity four. The eigenvectors are similar to those above and will be omitted.
We now study free boson toy fields. These are more complicated than the fermion case because of the repeated particles. The simplest nontrivial example is the 6-dimensional symmetric Hilbert space $J^{(2,2)}$ with basis: $\ket{0}$, $\ket{q_1}$, $\ket{q_2}$, $\ket{q_1^2}$,
$\ket{q_1q_2}$, $\ket{q_2^2}$. Letting $\eta (q_1)=\alpha a(q_1)+\overline{\alpha} a(q_1)^*$ and $\eta (q_2)=\beta a(q_2)+\overline{\beta} a(q_2)^*$ we have the matrix representation of the free toy boson field $\phi =\eta (q_1)+\eta (q_2)$ given by
\begin{equation*}
\phi=\begin{bmatrix}
0&\alpha&\beta&0&0&0\\\noalign{\smallskip}\overline{\alpha}&0&0&\sqrt{2}\alpha&\beta&0\\\noalign{\smallskip}
\overline{\beta}&0&0&0&\alpha&\sqrt{2}\beta\\\noalign{\smallskip}0&\sqrt{2}\,\overline{\alpha}&0&0&0&0\\\noalign{\smallskip}
0&\overline{\beta}&\overline{\alpha}&0&0&0\\\noalign{\smallskip}0&0&\sqrt{2}\,\overline{\beta}&0&0&0\\\noalign{\smallskip}
\end{bmatrix}
\end{equation*}
The eigenvalues are: $0$, $\pm\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}$, $\pm\sqrt{3}\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}$. The eigenvalue $0$ has multiplicity two and the others have multiplicity one. The corresponding eigenvectors are:
\begin{align*}
\begin{bmatrix}\noalign{\smallskip}
-\sqrt{2\,}\alpha\beta\\0\\0\\\overline{\alpha}\beta\\0\\\alpha\overline{\beta}\end{bmatrix},\quad
&\begin{bmatrix}\noalign{\smallskip}-\sqrt{2\,}\alpha ^2\\0\\0\\\ab{\alpha}^2-\ab{\beta}^2\\\noalign{\smallskip}
\sqrt{2\,}\alpha\overline{\beta}\\0\end{bmatrix},\quad
\begin{bmatrix}\noalign{\smallskip}0\\-\beta\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\\\noalign{\smallskip}
\alpha\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\\\noalign{\smallskip}-\sqrt{2}\,\overline{\alpha}\beta\\
\noalign{\smallskip}\ab{\alpha}^2-\ab{\beta}^2\\\noalign{\smallskip}\sqrt{2}\,\alpha\overline{\beta}\end{bmatrix},\\
\begin{bmatrix}\noalign{\smallskip}
0\\\beta\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\\\noalign{\smallskip}-\alpha\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\\\noalign{\smallskip}
-\sqrt{2}\,\overline{\alpha}\beta\\\noalign{\smallskip}\ab{\alpha}^2-\ab{\beta}^2\\\noalign{\smallskip}\sqrt{2}\,\alpha\overline{\beta}\end{bmatrix},\quad
&\begin{bmatrix}\noalign{\smallskip}\ab{\alpha}^2+\ab{\beta}^2\\\noalign{\smallskip}\sqrt{3}\,\overline{\alpha}\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\\
\noalign{\smallskip}\sqrt{3}\,\overline{\beta}\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\\\noalign{\smallskip}\sqrt{2}\,(\overline{\alpha} )^2\\
\noalign{\smallskip}2\overline{\alpha}\overline{\beta}\\\noalign{\smallskip}\sqrt{2}\,(\overline{\beta} )^2\end{bmatrix},\quad
\begin{bmatrix}\noalign{\smallskip}\ab{\alpha}^2+\ab{\beta}^2\\\noalign{\smallskip}-\sqrt{3}\,\overline{\alpha}\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\\
\noalign{\smallskip}-\sqrt{3}\,\overline{\beta}\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\\\noalign{\smallskip}\sqrt{2}\,(\overline{\alpha} )^2\\
\noalign{\smallskip}2\overline{\alpha}\overline{\beta}\\\noalign{\smallskip}\sqrt{2}\,(\overline{\beta} )^2\\
\end{bmatrix}.
\end{align*}
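These eigenvalues are easily confirmed numerically. In the sketch below (illustrative Python/NumPy code) the truncated space $J^{(2,2)}$ is realized by projecting the tensor product of two three-level oscillators onto total occupation at most $s=2$, which implements the boundary convention above.
\begin{verbatim}
# Numerical check of the J^{(2,2)} eigenvalues (illustrative sketch).
import numpy as np

s = 2
am = np.diag(np.sqrt(np.arange(1.0, s + 1)), 1)  # one-mode lowering
I1 = np.eye(s + 1)
a1, a2 = np.kron(am, I1), np.kron(I1, am)
n_tot = np.diag(np.kron(np.diag(np.arange(s + 1)), I1)
                + np.kron(I1, np.diag(np.arange(s + 1))))
P = np.eye((s + 1) ** 2)[n_tot <= s]    # keep total occupation <= s
A1, A2 = P @ a1 @ P.T, P @ a2 @ P.T     # truncated a(q_1), a(q_2)
alpha, beta = 0.6 + 0.3j, -0.2 + 0.8j
phi = (alpha * A1 + np.conj(alpha) * A1.T
       + beta * A2 + np.conj(beta) * A2.T)
w = np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
print(np.round(np.linalg.eigvalsh(phi) / w, 6))
# -> [-1.732051, -1, 0, 0, 1, 1.732051], i.e. 0, +-w, +-sqrt(3) w
\end{verbatim}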
We next consider the 10-dimensional symmetric Hilbert space $J^{(2,3)}$ with basis: $\ket{0}$, $\ket{q_1}$, $\ket{q_2}$, $\ket{q_1^2}$,
$\ket{q_1q_2}$, $\ket{q_2^2}$, $\ket{q_1^3}$, $\ket{q_1^2q_2}$, $\ket{q_1q_2^2}$, $\ket{q_2^3}$. The toy boson field
$\phi =\eta (q_1)+\eta (q_2)$ has matrix representation
\begin{equation*}
\phi=\begin{bmatrix}
0&\alpha&\beta&0&0&0&0&0&0&0\\\noalign{\smallskip}\overline{\alpha}&0&0&\sqrt{2}\,\alpha&\beta&0&0&0&0&0\\\noalign{\smallskip}
\overline{\beta}&0&0&0&\alpha&\sqrt{2}\,\beta&0&0&0&0\\\noalign{\smallskip}0&\sqrt{2}\,\overline{\alpha}&0&0&0&0&\sqrt{3}\,\alpha&\beta&0&0\\
\noalign{\smallskip}
0&\overline{\beta}&\overline{\alpha}&0&0&0&0&\sqrt{2}\,\alpha&\sqrt{2}\,\beta&0\\\noalign{\smallskip}
0&0&\sqrt{2}\,\overline{\beta}&0&0&0&0&0&\alpha&\sqrt{3}\,\beta\\\noalign{\smallskip}
0&0&0&\sqrt{3}\,\overline{\alpha}&0&0&0&0&0&0\\\noalign{\smallskip}
0&0&0&\overline{\beta}&\sqrt{2}\,\overline{\alpha}&0&0&0&0&0\\\noalign{\smallskip}
0&0&0&0&\sqrt{2}\,\overline{\beta}&\overline{\alpha}&0&0&0&0\\\noalign{\smallskip}
0&0&0&0&0&\sqrt{3}\,\overline{\beta}&0&0&0&0\\\noalign{\smallskip}
\end{bmatrix}
\end{equation*}
The eigenvalues are:
$0$, $\pm\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}$, $\pm\sqrt{3}\,\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}$,
$\pm\sqrt{3\pm\sqrt{6\,}\,}\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}$.\newline
The eigenvalue $0$ has multiplicity two. We omit the rather complicated eigenvectors.
The most interesting and important of the free toy fields are the mixed ones. The simplest nontrivial example is when we have two fermions $p_1$, $p_2$, two bosons $q_1$, $q_2$ and we let $s=2$. The mixed Hilbert space $L^{(2,2,2)}$ has basis: $\ket{0}$, $\ket{p_1}$,
$\ket{p_2}$, $\ket{q_1}$, $\ket{q_2}$, $\ket{p_1p_2}$, $\ket{p_1q_1}$, $\ket{p_1q_2}$, $\ket{p_2q_1}$, $\ket{p_2q_2}$, $\ket{q_1^2}$,
$\ket{q_1q_2}$, $\ket{q_2^2}$. Letting $\eta (p_1)$, $\eta (p_2)$ have the form \eqref{eq31}, \eqref{eq34}, we obtain the toy fermion field
$\phi =\eta (p_1)+\eta (p_2)$. The eigenvalues are: $0$ (multiplicity 5), $\pm\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}$ (multiplicity 4, each). In a similar way we define $\eta (q_1)=\gamma a(q_1)+\overline{\gamma} a(q_1)^*$ and $\eta (q_2)=\delta a(q_2)+\overline{\delta} a(q_2)^*$. We then obtain the free toy boson field $\psi =\eta (q_1)+\eta (q_2)$. The eigenvalues for $\psi$ are: $0$ (multiplicity 5),
$\pm\sqrt{\ab{\gamma}^2+\ab{\delta}^2\,}$ (multiplicity 3, each) and $\pm\sqrt{3}\sqrt{\ab{\gamma}^2+\ab{\delta}^2\,}$ (multiplicity 1, each). To save space, we omit the matrix representations and eigenvectors.
\section{Interacting Toy Fields}
Let $\phi =\eta (p_1)+\cdots +\eta (p_n)$ and $\psi =\eta (q_1)+\cdots +\eta (q_m)$ be free toy quantum fields acting on the same Hilbert space $K$. The operators $\phi$ and $\psi$ can be either fermion or boson operators. These fields can interact in various ways, but we shall only consider the simplest nontrivial interaction $\phi\psi$. We shall call the self-adjoint operator
\begin{equation}
\label{eq41}
\tau =\tfrac{1}{2}(\phi\psi +\psi\phi )=\tfrac{1}{2}\brac{\phi ,\psi}
\end{equation}
an \textit{interaction toy quantum field}. Even in small dimensional Hilbert spaces, the eigenstructure of $\tau$ can be quite complicated and we shall only consider some simple examples. We have seen that for a fermion AC-operator $\eta (p_1)$ the self-interaction $\eta (p_1)^2=\ab{\alpha}^2I$. More generally, since distinct fermion AC-operators anticommute, $\phi =\eta (p_1)+\eta (p_2)$ satisfies $\phi ^2=\paren{\ab{\alpha}^2+\ab{\beta}^2}I$ on every $K^s$, so free fermion self-interactions are always trivial.
An important case of an interaction toy field comes from the last example of Section~3. Corresponding to free fields $\phi$ and $\psi$ we obtain the interaction toy field \eqref{eq41} whose matrix representation is
\medskip
$\tau=\left [
\begin{array}{*{20}c}
0&0&0&0&0&0&\alpha\gamma&\alpha\delta&\beta\gamma&\beta\delta&0&0&0\\
0&0&0&\overline{\alpha}\gamma&\overline{\alpha}\delta&0&0&0&0&0&0&0&0\\%\noalign{\smallskip}
0&0&0&\overline{\beta}\gamma&\overline{\beta}\delta&0&0&0&0&0&0&0&0\\%\noalign{\smallskip}
0&\alpha\overline{\gamma}&\beta\overline{\gamma}&0&0&0&0&0&0&0&0&0&0\\
0&\alpha\overline{\delta}&\beta\overline{\delta}&0&0&0&0&0&0&0&0&0&0\\\noalign{\smallskip}
0&0&0&0&0&0&\tfrac{-\overline{\beta}\gamma}{2}&\tfrac{-\overline{\beta}\delta}{2}&\tfrac{\overline{\alpha}\,\gamma}{2}
&\tfrac{\overline{\alpha}\,\delta}{2}&0&0&0\\\noalign{\smallskip}
\overline{\alpha}\,\overline{\gamma}&0&0&0&0&\tfrac{-\beta\overline{\gamma}}{2}&0&0&0&0&\tfrac{\overline{\alpha}\,\gamma}{\sqrt{2}}&\tfrac{\overline{\alpha}\,\delta}{2}
&0\\\noalign{\smallskip}
\overline{\alpha}\,\overline{\delta}&0&0&0&0&\tfrac{-\beta\overline{\delta}}{2}&0&0&0&0&0&\tfrac{\overline{\alpha}\,\gamma}{2}&\tfrac{\overline{\alpha}\,\delta}{\sqrt{2}}
\\\noalign{\smallskip}
\overline{\beta}\,\overline{\gamma}&0&0&0&0&\tfrac{\alpha\overline{\gamma}}{2}&0&0&0&0&\tfrac{\overline{\beta}\,\gamma}{\sqrt{2}}&\tfrac{\overline{\beta}\,\delta}{2}
&0\\\noalign{\smallskip}
\overline{\beta}\,\overline{\delta}&0&0&0&0&\tfrac{\alpha\overline{\delta}}{2}&0&0&0&0&0&\tfrac{\overline{\beta}\,\gamma}{2}&\tfrac{\overline{\beta}\,\delta}{\sqrt{2}}
\\\noalign{\smallskip}
0&0&0&0&0&0&\tfrac{\alpha\overline{\gamma}}{\sqrt{2}}&0&\tfrac{\beta\overline{\gamma}}{\sqrt{2}}&0&0&0&0\\\noalign{\smallskip}
0&0&0&0&0&0&\tfrac{\alpha\overline{\delta}}{2}&\tfrac{\alpha\overline{\gamma}}{2}&\tfrac{\beta\overline{\delta}}{2}&\tfrac{\beta\overline{\gamma}}{2}&0
&0&0\\\noalign{\smallskip}
0&0&0&0&0&0&0&\tfrac{\alpha\overline{\delta}}{\sqrt{2}}&0&\tfrac{\beta\overline{\delta}}{\sqrt{2}}&0&0&0\\\noalign{\smallskip}
\end{array}
\right ]$\medskip
\noindent The eigenvalues are: $0$ (multiplicity 5),
$\pm\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\sqrt{\ab{\gamma}^2+\ab{\delta}^2\,}$ (multiplicity 1, each),
$\pm\tfrac{1}{2}\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\sqrt{\ab{\gamma}^2+\ab{\delta}^2\,}$ (multiplicity 2, each)
and $\pm\sqrt{\tfrac{3}{2}}\,\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}\sqrt{\ab{\gamma}^2+\ab{\delta}^2\,}$ (multiplicity 1, each). We omit the corresponding eigenvectors.
We now begin an analysis of interaction toy fields. We admit that this is preliminary work and much more needs to be done. Consider the very simple field
\begin{equation}
\label{eq42}
\phi=\tfrac{1}{2}\sqbrac{\eta (p_1)\eta (q_1)+\eta (q_1)\eta (p_1)}
\end{equation}
We assume that $\eta (p_1)$ and $\eta (q_1)$ are boson AC-operators acting on the boson Hilbert space $J^{(n,m,s)}$ where the particles $p_1,\ldots ,p_n$ are bosons of mass $m_1$ and $q_1,\ldots ,q_m$ are bosons of mass $m_2$. Moreover, we assume that $\eta (p_1)$ and
$\eta (q_1)$ have the form:
\begin{align*}
\eta (p_1)&=\alpha a(p_1)+\overline{\alpha} a(p_1)^*\\
\eta (q_1)&=\beta a(q_1)+\overline{\beta} a(q_1)^*
\end{align*}
The general basis element for $J^{(n,m,s)}$ is:
\begin{equation}
\label{eq43}
\ket{p_1^{r_1}\cdots p_n^{r_n}q_1^{s_1}\cdots q_m^{s_m}}
\end{equation}
where $\sum r_i+\sum s_i\le s$. Because of the structure of $\phi$ in \eqref{eq42}, we are mainly concerned with $p_1,q_1$ and we use the notation $\ket{rs}$ for a vector of the form \eqref{eq43}. When we write $\ket{r_1s_1}\sim\ket{r_2s_2}$, we mean that $r_1\ne r_2$ or $s_1\ne s_2$ but that the other vectors in \eqref{eq43} are identical. For example,
\begin{equation*}
\ket{p_1p_2q_1^2q_2}\sim\ket{p_1p_2q_1q_2}
\end{equation*}
If a vector has a representation
\begin{equation*}
\psi =\sum _{k=1}^t\beta _k\ket{i_kj_k},\quad\ket{i_rj_r}\sim\ket{i_sj_s},\quad r\ne s,\quad \beta _k\in{\mathbb C}
\end{equation*}
we say that $\psi$ has \textit{type} $t$ and \textit{form} $\paren{\ket{i_1j_1},\ldots ,\ket{i_tj_t}}$. If $i_k+j_k$ is even for all $k$, then $\psi$ has \textit{even form} and if $i_k+j_k$ is odd for all $k$, then $\psi$ has \textit{odd form}. For example, $\ket{00}$ has even form of type~1,
$\beta _1\ket{01}+\beta _2\ket{10}$ has odd form of type~2 and
\begin{equation*}
\beta _1\ket{00}+\beta _2\ket{11}+\beta _3\ket{20}+\beta _4\ket{02}
\end{equation*}
has even form of type~4. We shall employ this terminology to classify the eigenvectors of $\phi$. It is useful to observe that
\begin{align}
\label{eq44}
\phi&=\alpha\beta a(p_1)a(q_1)+\overline{\alpha}\,\overline{\beta} a(p_1)^*a(q_1)^*+\tfrac{1}{2}\,\alpha\overline{\beta}\brac{a(p_1),a(q_1)^*}\notag\\
&\qquad +\tfrac{1}{2}\,\overline{\alpha}\,\beta\brac{a(p_1)^*,a(q_1)}
\end{align}
The simplest case is the boson Hilbert space $J^{(1,1,2)}$ with $s=2$ and bosons $p_1,q_1$. The basis for $J^{(1,1,2)}$ is:
$\ket{0}$, $\ket{p_1}$, $\ket{q_1}$, $\ket{p_1q_1}$, $\ket{p_1^2}$, $\ket{q_1^2}$. Applying \eqref{eq44} the matrix representation for $\phi$ becomes:
\begin{equation*}
\phi=\begin{bmatrix}
0&0&0&\alpha\beta&0&0\\\noalign{\smallskip}0&0&\overline{\alpha}\,\beta&0&0&0\\\noalign{\smallskip}
0&\alpha\,\overline{\beta}&0&0&0&0\\\noalign{\smallskip}\overline{\alpha}\,\overline{\beta}&0&0&0&\tfrac{\alpha\overline{\beta}}{\sqrt{2}}
&\tfrac{\overline{\alpha}\,\beta}{\sqrt{2}}\\\noalign{\smallskip}
0&0&0&\tfrac{\overline{\alpha}\,\beta}{\sqrt{2}}&0&0\\\noalign{\smallskip}0&0&0&\tfrac{\alpha\overline{\beta}}{\sqrt{2}}&0&0\\\noalign{\smallskip}
\end{bmatrix}
\end{equation*}
The eigenvalues of $\phi$ are: $0$ (multiplicity 2), $\pm\ab{\alpha}\ab{\beta}$, $\pm\sqrt{2}\,\ab{\alpha}\ab{\beta}$. The corresponding eigenvectors are:
\begin{equation*}
\begin{bmatrix}
-\beta\\0\\0\\0\\0\\\sqrt{2\,}\,\overline{\beta}\end{bmatrix},
\begin{bmatrix}-\alpha\\0\\0\\0\\\sqrt{2\,}\overline{\alpha}\\0\end{bmatrix},
\begin{bmatrix}0\\\ab{\alpha}\beta\\\noalign{\smallskip}\alpha\ab{\beta}\\\noalign{\smallskip}0\\0\\0\end{bmatrix},
\begin{bmatrix}0\\-\ab{\alpha}\beta\\\noalign{\smallskip}\alpha\ab{\beta}\\\noalign{\smallskip}0\\0\\0\end{bmatrix},
\begin{bmatrix}\sqrt{2}\,\beta/\,\overline{\beta}\\0\\0\\
2\beta\ab{\alpha}/\alpha\ab{\beta}\\\noalign{\smallskip}\overline{\alpha}\,\beta/\alpha\overline{\beta}\\\noalign{\smallskip}1\end{bmatrix},
\begin{bmatrix}\sqrt{2}\,\beta/\,\overline{\beta}\\0\\0\\
-2\beta\ab{\alpha}/\alpha\ab{\beta}\\\noalign{\smallskip}\overline{\alpha}\,\beta/\alpha\overline{\beta}\\\noalign{\smallskip}1\end{bmatrix}.
\end{equation*}
We see that the eigenvectors have types~2,~4 and form
\begin{align*}
\paren{\ket{00},\ket{02}},\ \paren{\ket{00},\ket{20}},&\ \paren{\ket{10},\ket{01}},\ \paren{\ket{10},\ket{01}},\\
\paren{\ket{00},\ket{11},\ket{20},\ket{02}},&\ \paren{\ket{00},\ket{11},\ket{20},\ket{02}}.
\end{align*}
The third and fourth have odd form while the others have even form.
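The same projected-oscillator machinery as in the free boson case gives a quick numerical confirmation of these eigenvalues (illustrative Python/NumPy sketch; the basis ordering differs from the listing above but the spectrum does not).
\begin{verbatim}
# Numerical check of the J^{(1,1,2)} interaction eigenvalues
# (illustrative sketch; same truncation construction as before).
import numpy as np

s = 2
am = np.diag(np.sqrt(np.arange(1.0, s + 1)), 1)
I1 = np.eye(s + 1)
n_tot = np.diag(np.kron(np.diag(np.arange(s + 1)), I1)
                + np.kron(I1, np.diag(np.arange(s + 1))))
P = np.eye((s + 1) ** 2)[n_tot <= s]
Ap = P @ np.kron(am, I1) @ P.T          # truncated a(p_1)
Aq = P @ np.kron(I1, am) @ P.T          # truncated a(q_1)
alpha, beta = 0.9 - 0.1j, 0.3 + 0.6j
eta_p = alpha * Ap + np.conj(alpha) * Ap.T
eta_q = beta * Aq + np.conj(beta) * Aq.T
phi = 0.5 * (eta_p @ eta_q + eta_q @ eta_p)
print(np.round(np.linalg.eigvalsh(phi) / (abs(alpha) * abs(beta)), 6))
# -> [-1.414214, -1, 0, 0, 1, 1.414214]
\end{verbatim}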
A slightly more involved case is the boson Hilbert space $J^{(2,1,2)}$ with bosons $p_1,p_2,q_1$ and $s=2$. The basis for $J^{(2,1,2)}$ is:
$\ket{0}$, $\ket{p_1}$, $\ket{p_2}$, $\ket{q_1}$, $\ket{p_1^2}$, $\ket{p_2^2}$, $\ket{p_1p_2}$, $\ket{p_1q_1}$, $\ket{p_2q_1}$, $\ket{q_1^2}$.
We omit the matrix representation of $\phi$ but we list its eigenvalues: $0$ (multiplicity 4), $\pm\ab{\alpha}\ab{\beta}$,
$\pm\tfrac{1}{2}\,\ab{\alpha}\ab{\beta}$, $\pm\sqrt{2\,}\,\ab{\alpha}\ab{\beta}$. The corresponding eigenvectors are:
\begin{align*}
\begin{bmatrix}
-\beta\\0\\0\\0\\0\\0\\0\\0\\0\\\sqrt{2\,}\,\overline{\beta}\end{bmatrix},\quad
\begin{bmatrix}-\alpha\\0\\0\\0\\\sqrt{2\,}\overline{\alpha}\\0\\0\\0\\0\\0\end{bmatrix},\quad
&\begin{bmatrix}0\\0\\0\\0\\0\\1\\0\\0\\0\\0\end{bmatrix},\
\begin{bmatrix}0\ \\0\\1\\0\\0\\0\\0\\0\\0\\0\end{bmatrix},\
\begin{bmatrix}0\\\overline{\alpha}\,\beta\\0\\\ab{\alpha}\ab{\beta}\\0\\0\\0\\0\\0\\0\end{bmatrix},\quad
\begin{bmatrix}0\\-\overline{\alpha}\,\beta\\0\\\ab{\alpha}\ab{\beta}\\0\\0\\0\\0\\0\\0\end{bmatrix},\\
\begin{bmatrix}0\\0\\0\\0\\0\\0\\\overline{\alpha}\,\beta\\0\\\ab{\alpha}\ab{\beta}\\0\end{bmatrix},\quad
\begin{bmatrix}0\\0\\0\\0\\0\\0\\-\overline{\alpha}\,\beta\\0\\\ab{\alpha}\ab{\beta}\\0\end{bmatrix},\quad
&\begin{bmatrix}\noalign{\smallskip}\sqrt{2}\,\beta/\,\overline{\beta}\\0\\0\\0\\
\overline{\alpha}\,\beta/\alpha\overline{\beta}\\0\\0\\2\overline{\alpha}\,\beta/\ab{\alpha}\ab{\beta}\\0\\1\end{bmatrix},\quad
\begin{bmatrix}\noalign{\smallskip}\sqrt{2}\,\beta/\,\overline{\beta}\\0\\0\\0\\
\overline{\alpha}\,\beta/\alpha\overline{\beta}\\0\\0\\-2\overline{\alpha}\,\beta/\ab{\alpha}\ab{\beta}\\0\\1\end{bmatrix}.
\end{align*}
The eigenvectors have types~1,~2,~4, and even form $\paren{\ket{00}}$, $\paren{\ket{00},\ket{02}}$, $\paren{\ket{00},\ket{20}}$,
$\paren{\ket{00},\ket{11},\ket{20},\ket{02}}$, and odd form $\paren{\ket{10},\ket{01}}$.
The fermion case is much simpler than the boson case. Our last example is a fermion Hilbert space $K^{(2,2,4)}$ with two fermions $p_1,p_2$ of mass $m_1$, two other fermions $q_1,q_2$ of mass $m_2$ and $s=4$. This 16-dimensional space has basis:
$\ket{0}$, $\ket{p_1}$, $\ket{p_2}$, $\ket{q_1}$, $\ket{q_2}$, $\ket{p_1p_2}$, $\ket{p_1q_1}$, $\ket{p_1q_2}$, $\ket{p_2q_1}$,
$\ket{p_2q_2}$, $\ket{q_1q_2}$, $\ket{p_1p_2q_1}$, $\ket{p_1p_2q_2}$, $\ket{p_1q_1q_2}$, $\ket{p_2q_1q_2}$, $\ket{p_1p_2q_1q_2}$.
We define the AC-operators:
\begin{align*}
\eta (p_1)&=\alpha a(p_1)+\overline{\alpha} a(p_1)^*\\
\eta (p_2)&=\beta a(p_2)+\overline{\beta} a(p_2)^*\\
\eta (q_1)&=\gamma a(q_1)+\overline{\gamma} a(q_1)^*\\
\eta (q_2)&=\delta a(q_2)+\overline{\delta} a(q_2)^*\\
\end{align*}
The corresponding free toy fermion fields become:
\begin{align*}
\phi&=\eta (p_1)+\eta (p_2)\\
\psi&=\eta (q_1)+\eta (q_2)\\
\end{align*}
The interaction toy field is $\tau =\tfrac{1}{2} (\phi\psi +\psi\phi )$. We omit the matrix representation of $\tau$ except to say that the first row of $\tau$ is
\begin{equation*}
\sqbrac{0\cdots 0\ \alpha\gamma\ \alpha\delta\ \beta\gamma\ \beta\delta\ 0\cdots 0}
\end{equation*}
and the other rows are permutations of the first row with various minus signs and complex conjugations. The eigenvalues of $\tau$ are surprisingly simple. Letting $\omega _1=\sqrt{\ab{\alpha}^2+\ab{\beta}^2\,}$ and $\omega _2=\sqrt{\ab{\gamma}^2+\ab{\delta}^2\,}$, the eigenvalues are $\pm\omega _1\omega _2$ each with multiplicity 8. One of the eigenvectors has the form
\begin{equation*}
\sqbrac{0\cdots 0\ -\gamma\omega _1\ -\beta\omega _2\ 0\ \alpha\omega _2\ 0\cdots 0\ \overline{\delta}\,\omega _1}^T
\end{equation*}
and the others are permutations of this one with various minus signs and complex conjugations.
This pattern continues. For example, for fermions $p_1,p_2,p_3$ and $q_1,q_2,q_3$ we obtain the 64-dimensional Hilbert space
$K^{(3,3,6)}$. With the obvious notation, the eigenvalues of $\tau$ become $\pm\omega _1\omega _2$ each with multiplicity 32.
\section{Discrete Quantum Field Theory}
This section presents the background for a discrete quantum field theory \cite{bdp16,cro16,gud172,hoo14}. Toy models emerge from this theory by imposing a particle number cut-off. We then see how the work of previous sections can be applied to approximate this general discrete theory. As before, we shall neglect spin so we are actually considering scalar fields. The main goal of this section is to introduce the concept of toy scattering operators.
Our basic assumption is that spacetime is discrete and has the form of a 4-dimensional cubic lattice ${\mathcal S}$ \cite{gud16,gud171,gud172}. We then have ${\mathcal S} ={\mathbb Z} ^+\times{\mathbb Z} ^3$ where ${\mathbb Z} ^+=\brac{0,1,2,\ldots}$ represents discrete time and ${\mathbb Z} =\brac{0,\pm 1,\pm 2,\ldots}$ so ${\mathbb Z} ^3$ represents discrete 3-space. If $x=(x_0,x_1,x_2,x_3)\in{\mathcal S}$, we sometimes write $x=(x_0,\bfx )$ where $x_0\in{\mathbb Z} ^+$ is time and $\bfx\in{\mathbb Z} ^3$ is a 3-space point. We equip ${\mathcal S}$ with the Minkowski distance
\begin{equation*}
\doubleab{x}_4^2=x_0^2-\doubleab{\bfx}_3^2=x_0^2-x_1^2-x_2^2-x_3^2
\end{equation*}
where we use units in which the speed of light is 1.
The dual of ${\mathcal S}$ is denoted by $\widehat{\sscript}$. We regard $\widehat{\sscript}$ as having the identical structure as ${\mathcal S}$ except we denote elements of $\widehat{\sscript}$ by
\begin{equation*}
p=(p_0,\bfp )=(p_0,p_1,p_2,p_3)
\end{equation*}
and interpret $p$ as the energy-momentum vector for a particle. In fact, we sometimes call $p\in\widehat{\sscript}$ a particle. Moreover, we only consider the forward cone
\begin{equation*}
\brac{p\in\widehat{\sscript}\colon\doubleab{p}_4\ge 0}
\end{equation*}
For a particle $p\in\widehat{\sscript}$ we call $p_0\ge 0$ the \textit{total energy}, $\doubleab{\bfp}_3\ge 0$ the \textit{kinetic energy} and $m=\doubleab{p}_4$ the \textit{mass} of $p$. The integers $p_1,p_2,p_3$ are \textit{momentum components}. Since
\begin{equation*}
m^2=\doubleab{p}_4^2=p_0^2-\doubleab{\bfp}_3^2
\end{equation*}
we conclude that Einstein's energy formula $p_0=\sqrt{m^2+\doubleab{\bfp}_3^2\,}$ holds. For mass $m$, we define the \textit{mass hyperboloid} by
\begin{equation*}
\Gamma _m=\brac{p\in\widehat{\sscript}\colon\doubleab{p}_4=m}
\end{equation*}
Moreover, for $x\in{\mathcal S}$, $p\in\widehat{\sscript}$ we define the indefinite inner product
\begin{equation*}
px=p_0x_0-p_1x_1-p_2x_2-p_3x_3
\end{equation*}
We now define three Hilbert spaces. For fermions $p_1,p_2,\ldots\,$, we define the \textit{fermion Hilbert space} $K$ to be the separable antisymmetric Hilbert space with orthonormal basis
\begin{equation*}
\ket{0},\ket{p_1},\ket{p_2},\ldots ,\ket{p_1p_2},\ket{p_1p_3},\cdots ,\ket{p_1p_2\cdots p_n},\cdots
\end{equation*}
For bosons $q_1,q_2,\ldots$ we define the \textit{boson Hilbert space} $J$ to be the symmetric Hilbert space with orthonormal basis
\begin{equation*}
\ket{0},\ket{q_1},\ket{q_2},\ldots ,\ket{q_1^2},\ket{q_1q_2},\cdots ,\ket{q_1^{r_1}q_2^{r_2}\cdots q_n^{r_n}},\cdots
\end{equation*}
For fermions $p_1,p_2,\ldots$ and bosons $q_1,q_2,\ldots\,$, we define the \textit{mixed Hilbert space} $L$ analogous to the construction in Section~3. Moreover, we define the annihilation and creation operators $a(p_j)$, $a(q_j)$, $a(p_j)^*$, $a(q_j)^*$ as we did in Section~2. On the Hilbert space $K$, for any $x\in{\mathcal S}$, $r\in{\mathbb Z} ^+$, we define the \textit{free fermion quantum field} at $x$ with mass $m$ and maximal total energy $r$ \cite{gud16,gud171,gud172} by:
\begin{equation}
\label{eq51}
\phi (x,r)=\sum\brac{\tfrac{1}{p_0}\sqbrac{a(p)e^{i\pi px/2}+a(p)^*e^{-i\pi px/2}}\colon p\in\Gamma _m,p_0\le r}
\end{equation}
Since there are only a finite number of terms in the summation in \eqref{eq51}, we see that $\phi (x,r)$ is a bounded self-adjoint operator on
$K$. We define free boson quantum fields on $J$ in an analogous way. We can also define quantum fields for particles of various masses.
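The finiteness of the sum in \eqref{eq51} is easy to make concrete: the following sketch (illustrative Python code, assuming an integer mass $m$) enumerates the lattice points of $\Gamma _m$ with $p_0\le r$.
\begin{verbatim}
# Enumerate the finitely many lattice points on the mass hyperboloid
# Gamma_m with p0 <= r (illustrative sketch; m is assumed to be a
# nonnegative integer so that p0 ranges over m, m+1, ..., r).
def hyperboloid_points(m, r):
    pts = []
    for p0 in range(m, r + 1):
        k2 = p0 * p0 - m * m          # |p|^2 on the hyperboloid
        b = int(k2 ** 0.5) + 1
        for p1 in range(-b, b + 1):
            for p2 in range(-b, b + 1):
                for p3 in range(-b, b + 1):
                    if p1 * p1 + p2 * p2 + p3 * p3 == k2:
                        pts.append((p0, p1, p2, p3))
    return pts

print(len(hyperboloid_points(1, 3)))  # 1 + 8 + 12 = 21 points
\end{verbatim}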
We now impose the particle number cut-off $s$ to obtain the finite-dimensional subspaces $K^s$, $J^s$, $L^s$ of $K$, $J$, $L$ considered in Section~2. The restriction of the free field $\phi (x,r)$ to one of these subspaces becomes a free toy field and is denoted by
$\phi ^s(x,r)$. Letting $s\to\infty$, we have in a certain weak sense that
\begin{equation*}
\lim _{s\to\infty}\phi ^s(x,r)=\phi (x,r)
\end{equation*}
As shown in \cite{gud171,gud172} we can also relax the maximal energy restriction by letting $\phi (x)=\lim\limits _{r\to\infty}\phi (x,r)$.
For concreteness, consider the fermion Hilbert space $K$; the other two cases are similar. Let $H(x_0)$ be a self-adjoint operator on $K$ that is a function of time $x_0\in{\mathbb Z} ^+$. This operator is specified by the theory and we call it a \textit{Hamiltonian}. The operator $H(x_0)$ is usually derived from a \textit{Hamiltonian density} ${\mathcal H} (x)$, $x\in{\mathcal S}$, where the ${\mathcal H} (x)$ are again self-adjoint operators on $K$. The \textit{space-volume} at $x_0$ is given by the cardinality $V(x_0)=\ab{\brac{\bfx\in{\mathbb Z} ^3\colon\doubleab{\bfx}_3\le x_0}}$ and we define
\begin{equation}
\label{eq52}
H(x_0)=\frac{1}{V(x_0)}\sum\brac{{\mathcal H} (x)\colon\doubleab{\bfx}_3\le x_0}
\end{equation}
For a Hamiltonian $H(x_0)$ the corresponding \textit{scattering operator} at time $x_0$ is the unitary operator $S(x_0)=e^{iH(x_0)}$. The \textit{final scattering operator} is defined as $S=\lim\limits _{x_0\to\infty}S(x_0)$ when this limit exists. The scattering operator is used to find scattering amplitudes and probabilities. For example, suppose that two particles with energy-momentum $p$ and $q$ collide at time $0$. We would like to find the probability that their energy-momentum is $p'$ and $q'$ at time $x_0$. (By conservation of energy-momentum, we usually assume that $p'+q'=p+q$.) The axioms of quantum mechanics tell us that the scattering amplitude for this interaction is
$\bra{p'q'}S(x_0)\ket{pq}$ and the probability becomes
\begin{equation*}
\ab{\bra{p'q'}S(x_0)\ket{pq}}^2
\end{equation*}
A fundamental question in quantum field theory is: ``How do we find the Hamiltonian density ${\mathcal H} (x)$, $x\in{\mathcal S}$?'' The answer is usually that ${\mathcal H} (x)$ is determined by the interaction of quantum fields. But the problem now is that we don't know how the fields interact and we frequently have to make an intelligent guess. Suppose $\phi (x,r)$ and $\psi (x,r)$ are quantum fields. As mentioned in Section~4, probably the simplest nontrivial interaction is $\phi (x,r)\psi (x,r)$ so we form the self-adjoint operator
\begin{equation*}
\tau (x,r)=\tfrac{1}{2}\brac{\phi (x,r),\psi (x,r)}
\end{equation*}
To simplify the calculations we can consider the toy interaction field
\begin{equation*}
\tau ^s(x,r)=\tfrac{1}{2}\brac{\phi ^s(x,r),\psi ^s(x,r)}
\end{equation*}
discussed in Section~4. We then define the \textit{toy Hamiltonian density} to be $\tau ^s(x,r)$. This quantity has the extra parameter $r$ which we can let approach infinity at the end of the calculation. We now construct the toy Hamiltonian
\begin{equation*}
H^s(x_0,r)=\tfrac{1}{V(x_0)}\sum\brac{\tau ^s(x,r)\colon\doubleab{\bfx}_3\le x_0}
\end{equation*}
The resulting toy scattering operator becomes
\begin{equation}
\label{eq53}
S^s(x_0,r)=e^{iH^s(x_0,r)}
\end{equation}
In Sections~3 and 4 we stressed the importance of finding the eigenstructure of toy free and interaction quantum fields. Once we have calculated the eigenstructure of $H^s(x_0,r)$ we can write its spectral representation
\begin{equation}
\label{eq54}
H^s(x_0,r)=\sum _{j=1}^n\lambda _jP_j
\end{equation}
where $\lambda _j$ are the distinct eigenvalues of $H^s(x_0,r)$ and $P_j$ is the orthogonal projection onto the eigenspace of $\lambda _j$, $j=1,\ldots ,n$. In this case, $P_jP_k=\delta _{jk}P_j$ and $\sum\limits _{j=1}^nP_j=I$. Applying \eqref{eq53}, \eqref{eq54}, we obtain a complete formula for $S^s(x_0,r)$ given by
\begin{equation*}
S^s(x_0,r)=\sum _{j=1}^ne^{i\lambda _j}P_j
\end{equation*}
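As a toy illustration of this formula (ours, with an arbitrarily chosen two-dimensional Hamiltonian rather than one derived from a Hamiltonian density), let
\begin{equation*}
H^s(x_0,r)=\begin{pmatrix}1&1\\1&1\end{pmatrix}=2P_1+0\,P_2,\qquad P_1=\tfrac{1}{2}\begin{pmatrix}1&1\\1&1\end{pmatrix},\quad P_2=\tfrac{1}{2}\begin{pmatrix}1&-1\\-1&1\end{pmatrix}
\end{equation*}
Then $P_jP_k=\delta _{jk}P_j$, $P_1+P_2=I$, and the scattering operator is simply $S^s(x_0,r)=e^{2i}P_1+P_2$, from which any amplitude $\bra{p'q'}S^s(x_0,r)\ket{pq}$ can be read off directly.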
Of course, the examples of eigenstructures computed in Sections~3 and 4 were much simpler than that of $H^s(x_0,r)$, and computer assistance must be employed for the latter. It is clear that the present work is only the beginning and much more must be done to develop a complete theory. Once this is accomplished, we can check the theory against experimental results.
\section*{Introduction}
Genetic programming is an evolutionary computation technique, where the objective is to find a program (e.g. a simple expression, a sequence of statements, or a full-scale function) that satisfies a behavioral specification expressed as test cases along with expected results. Grammatical genetic programming is a subfield of genetic programming, where the search space is restricted to a language defined as a BNF grammar, thus ensuring that all individuals are syntactically valid.
The processing power provided by graphics processing units (GPUs) makes them an attractive platform for evolutionary computation due to the inherently parallelizable nature of the latter. The first genetic programming implementations shown to run on GPUs were \cite{adataparallelapproach} and \cite{fastgeneticprogrammingongpus}.
Just like in the CPU case, genetic programming on GPU requires the code represented by individuals to be rendered to an executable form; this can be achieved by compilation to an executable binary object, by conversion to an intermediate representation of a custom interpreter developed to run on GPU, or by directly generating machine-code for the GPU architecture. Compilation of individuals' codes for GPU is known to have a prohibitive overhead that is hard to offset with the gains from the GPU acceleration.
The compiled approach for genetic programming on GPU is especially important for grammatical genetic programming; the representations of individuals for linear and cartesian genetic programming are inherently suitable for simple interpreters and circuit simulators implementable on a GPU. On the other hand, grammatical genetic programming aims to make higher level constructs and structures representable, using individuals that represent strings of tokens belonging to a language defined by a grammar; unfortunately, executing such a representation sooner or later requires some form of compilation or complex interpretation.
In this paper we first present three benchmark problems we implemented to measure compilation times with. We use grammatical genetic programming for the experiments, therefore we define the benchmark problems with their grammars, test cases and fitness functions.
Then we set a baseline by measuring the compilation time of individuals for those three problems, using the conventional CUDA compiler Nvcc. Afterwards we measure the speedup obtained by in-process compilation using the same benchmark problem setups. We proceed by presenting the obstacles encountered in parallelizing in-process compilation. Finally, we propose a parallelization scheme for in-process compilation and measure the extra speedup achieved.
\section*{Prior Work}
\cite{distributedgeneticprogramming} deals with the compilation overhead of individuals for genetic programming on GPU using CUDA. The article proposes a distributed compilation scheme where a cluster of around 16 computers compiles different individuals in parallel, and states the need for a large number of fitness cases to offset the compilation overhead. It correctly predicts that this mismatch will get worse with the increasing number of cores on GPUs, but also states that ``a large number of classic benchmark GP problems fit into this category''. Based on figure 5 of the article it can be computed that for a population size of 256, the authors required \textit{25 ms/individual} in total\footnote{This number includes network traffic, XO, mutation and processing time on GPU, in addition to compilation times. In our case the difference between compilation time and total time has constantly been at sub-millisecond level per population on all problems; thus for comparison purposes the compile times we present can also be taken as total time with an error margin of $^{1ms}/_{pop. size}$}.
\cite{evolvingacudakernel} presents the first use of grammatical genetic programming on the GPU, applied to a string matching problem to improve gzip compression, with a grammar constructed from fragments of an existing string matching CUDA code. Based on figure 11 of the accompanying technical report \cite{evolvingacudakernel-techreport}, a population of 1000 individuals (10 kernels of 100 individuals each) takes around 50 seconds to compile using nvcc from the CUDA v2.3 SDK, which puts the average compilation time at approximately \textit{50 ms/individual}.
In \cite{graphicsprocessingunitsandgeneticprogramming} an overview of genetic programming on GPU hardware is provided, along with a brief presentation and comparison of compiled and interpreted approaches. As part of the comparison it underlines the trade-off between the speed of compiled code and the overhead of compilation, and states that the command line CUDA compiler was especially slow, which is why the interpreted approach is usually preferred.
\cite{accelerationofgrammatical} investigates the acceleration of grammatical evolution by use of GPUs, considering the performance impact of different design decisions like thread/block granularity, different types of memory on GPU, and host-device memory transactions. As part of the article, compilation to PTX form followed by loading to GPU with driver-level JIT compilation is compared with directly compiling to a CUBIN object and loading to GPU without further JIT compilation. A kernel containing 90 individuals takes 540ms to compile to CUBIN with sub-millisecond upload time to GPU, versus 450ms for compilation to PTX plus 80ms for JIT compilation and upload to GPU, using the nvcc compiler from the CUDA v3.2 SDK. Thus the PTX+JIT case, which is the faster of the two, achieves an average compilation time of \textit{5.88 ms/individual}.
\cite{identifyingsimilaritiesintmbl} proposes an approach for improving compilation times of individuals for genetic programming on GPU, where common statements in similar locations are aligned as much as possible across individuals. After alignment, individuals with overlaps are merged into common kernels such that aligned statements become a single statement, and diverging statements are enclosed with conditionals to make them part of the code path only if the value of the individual\_ID parameter matches an individual having those divergent statements. The authors state that in exchange for faster compilation times, they get slightly slower GPU runtime with merged kernels, as all individuals need to evaluate every condition at the entry of each divergent code block coming from different individuals. In the results it is stated that for individuals with 300 instructions, compile time is 347 ms/individual if unaligned, and \textit{72 ms/individual} if aligned (time for alignment itself not included), with the nvcc compiler from the CUDA v3.2 SDK.
\cite{evolvinggpumachinecode} provides a comparison of compilation, interpretation and direct generation of machine code methods for genetic programming on GPUs. Five benchmark problems consisting of Mexican Hat and Salutowicz regressions, Mackey-Glass time series forecast, Sobel Filter and 20-bit Multiplexer are used to measure the comparative speed of the three mentioned methods. It is stated that the compilation method uses the nvcc compiler from the CUDA v5.5 SDK. The compilation time breakdown is only provided for the Mexican Hat regression benchmark in Table 6, where it is stated that total nvcc compilation took 135,027 seconds and total JIT compilation took 106,458 seconds. Table 5 states that the Mexican Hat problem uses 400K generations and a population size of 36. Therefore we can say that an average compilation time of $^{(135,027 + 106,458)}/_{36\times 400,000} =$ \textit{16.76 ms/individual} is achieved.
\section*{Implemented Problems for Measurement}
We implemented three problems as benchmarks to compare compilation speed. They consist of a general program synthesis problem, Keijzer-6 as a regression problem \cite{keijzer}, and the 5-bit Multiplier as a multi-output boolean problem. The latter two are included in the ``Alternatives to blacklisted problems'' table in \cite{bettergpbenchmarks}.
We use grammatical genetic programming as our representation and phenotype production method; therefore all problems are defined with a BNF grammar that defines a search space of syntactically valid programs, along with test cases and a fitness function specific to the problem. For all three problems, a genotype, which is a list of (initially random) integers, derives to a phenotype, which is a valid CUDA C expression or a code block in the form of a list of statements. All individuals are prepended and appended with initialization and finalization code, which serves to set up the input state and write the output to GPU memory afterwards. See the Appendix for the BNF grammars and the code used to surround the individuals.
\subsection*{Search Problem}
The Search Problem is designed to evolve a function which can identify whether a search value is present in an integer list, returning its position if present and -1 otherwise.
We first proposed this problem as a general program synthesis benchmark in \cite{effectsofpopulation}. The grammar for the problem is inspired by \cite{experimentsinprogramsynthesis}; we designed it to be a subproblem of the more general \textit{integer sort problem} case along with some others. It also bears some similarity to problems presented in \cite{generalprogramsynthesis}, based on the generality of its use case combined with the simplicity of its implementation.
Test cases consist of unordered lists of random integers in the range $[0,50]$, and list lengths vary between 3 and 20. Test cases are randomly generated, but half of them are ensured to contain the value searched for, and the others ensured not to. We employed a binary fitness function, which returns 1 if the returned result is correct (the position of the searched value, or -1 if it is not present in the list) and 0 otherwise; hence the fitness of an individual is the sum of its fitnesses over all test cases, which the evolutionary engine tries to maximize.
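A minimal sketch of this per-test-case check is given below; the kernel and buffer names are our own illustrative choices, not taken from the released implementation, and one GPU thread handles one test case.
\begin{lstlisting}[captionpos=b,caption=Sketch of the binary fitness check for the Search Problem, basicstyle=\footnotesize]
__global__ void searchFitness(const int *results, const int *expected,
                              int *fitness, int numCases)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x; // one test case per thread
    if (tid >= numCases) return;
    // 1 if the evolved code returned the correct position (or -1 when
    // the searched value is absent), 0 otherwise
    fitness[tid] = (results[tid] == expected[tid]) ? 1 : 0;
}
\end{lstlisting}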
\subsection*{Keijzer-6}
The Keijzer-6 function, introduced in \cite{keijzer}, is the function $K_6(x) = \sum_{n=1}^x \frac{1}{n}$, which maps a single integer parameter to the partial sum of the harmonic series with the number of terms indicated by its parameter. Regression of the Keijzer-6 function is one of the recommended alternatives to replace simpler symbolic regression problems like the quartic polynomial \cite{bettergpbenchmarks}.
For this problem we used a modified version of the grammar given in \cite{managingrepetition} and \cite{exploringpositionindependent}, with the only modification being an increased constant and variable token ratio as the expression nesting gets deeper. We used the root mean squared error as the fitness function, which is the accepted practice for this problem.
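To make the fitness evaluation concrete, the sketch below computes a per-test-case squared error on the GPU; the kernel name and the placeholder \texttt{evolved} function (standing in for a derived phenotype) are our own assumptions, and the host is assumed to average the errors and take the square root to obtain the RMSE.
\begin{lstlisting}[captionpos=b,caption=Sketch of the squared error computation for Keijzer-6, basicstyle=\footnotesize]
__device__ float keijzer6(int x)    // target K6(x) = sum of 1/n for n = 1..x
{
    float s = 0.0f;
    for (int n = 1; n <= x; ++n) s += 1.0f / n;
    return s;
}

__device__ float evolved(float x)   // placeholder for a derived phenotype
{
    return logf(x + 1.0f);
}

__global__ void k6SquaredError(const int *testX, float *sqErr, int numCases)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;  // one test case per thread
    if (tid >= numCases) return;
    float diff = evolved((float)testX[tid]) - keijzer6(testX[tid]);
    sqErr[tid] = diff * diff;       // host computes RMSE = sqrt(mean(sqErr))
}
\end{lstlisting}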
\subsection*{5-bit multiplier}
The 5-bit multiplier problem consists of finding a boolean relation that takes 10 binary inputs to 10 binary outputs, where two groups of 5 inputs each represent an integer up to $2^5-1$ in binary, and the output represents a single integer up to $2^{10}-1$, such that the output is the multiplication of the two input numbers. This problem is generally attacked as 10 independent binary regression problems, with each bit of the output separately evolved as a circuit or boolean function.
It is easy to show that the number of $n$-bit input, $m$-bit output binary relations is $2^{m2^n}$, which grows super-exponentially. The multiple output multiplier is the recommended alternative to the Multiplexer and Parity problems in \cite{bettergpbenchmarks}.
We transfer input to and output from the GPU with bits packed as a single 32-bit integer; hence there is a code preamble before the first individual to unpack the input bits, and a post-amble after each individual to pack the 10 bits computed by the evolved expressions as an integer.
The fitness function for the 5-bit multiplier computes the number of bits differing between the individual's response and the correct answer, by computing the population count of the XOR of the two.
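A minimal sketch of this check follows; \texttt{evolvedBits} is a placeholder for the ten evolved boolean expressions packed by the post-amble, and all names are ours.
\begin{lstlisting}[captionpos=b,caption=Sketch of the bit-difference fitness for the 5-bit Multiplier, basicstyle=\footnotesize]
__device__ unsigned evolvedBits(unsigned a, unsigned b)
{
    return (a * b) & 0x3FF;         // placeholder for the evolved expressions
}

__global__ void mulBitErrors(const unsigned *packedIn, unsigned *bitErr,
                             int numCases)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;  // one test case per thread
    if (tid >= numCases) return;
    unsigned a = packedIn[tid] & 0x1F;         // first 5-bit operand
    unsigned b = (packedIn[tid] >> 5) & 0x1F;  // second 5-bit operand
    unsigned expected = (a * b) & 0x3FF;       // correct 10-bit product
    bitErr[tid] = __popc(evolvedBits(a, b) ^ expected); // differing bits
}
\end{lstlisting}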
\section*{Development and Experiment Setup}
\subsection*{Hardware Platform}
All experiments have been conducted on a dual Xeon E5-2670 (8 physical 16 logical cores per CPU, 32 cores in total) platform running at 2.6GHz equipped with 60GB RAM, along with dual SSD storage and four NVidia GRID K520 GPUs. Each GPU itself consists of 1536 cores spread through 8 multiprocessors running at 800MHz, along with 4GB GDDR5 RAM \footnote{see validation of hardware used at experiment: http://www.techpowerup.com/gpuz/details/7u5xd/} and is able to sustain 2 teraflops of single precision operations (\textit{in total 6144 cores and 16GB GDDR5 VRAM which can theoretically sustain 8 teraflops single precision computation assuming no other bottlenecks}). GPUs are accessed for computation through NVidia CUDA v8 API and libraries, running on top of Windows Server 2012 R2 operating system.
\subsection*{Development Environment}
Code related to grammar generation, parsing, derivation, genetic programming, evolution, fitness computation and GPU access has been implemented in C\#, using \textit{managedCuda} \footnote{{https://kunzmi.github.io/managedCuda/}} for CUDA API bindings and the NVRTC interface, along with \textit{CUDAfy.NET} \footnote{{https://cudafy.codeplex.com/}} for interfacing to the NVCC command line compiler. The grammars for the problems have been prepared such that the languages defined are valid subsets of the CUDA C language, specialized towards the respective problems.
\subsection*{Experiment Parameters}
We ran each experiment with population sizes starting from 20 individuals per population, going up to 300 with increments of 20. As the subject of interest is compilation times and not fitness, we measured the following three parameters to evaluate compilation speed:
\begin{enumerate}[label=(\roman*)]
\item \textit{ptx} : Cuda source code to Ptx compilation time per individual
\item \textit{jit} : Ptx to Cubin object compilation time per individual
\item \textit{other} : All remaining operations a GP cycle requires (i.e compiled individuals running on GPU, downloading produced results, computing fitness values, evolutionary selection, cross over, mutation, etc.)
\end{enumerate}
The value of \textit{other} is measured to be always at sub-millisecond level, in all experiments, all problems and for all population sizes. Therefore it does not appear on plots. For all practical purposes $ptx+jit$ can be considered as the total time cost of a complete cycle for a generation, with an error margin of $\frac{1 ms}{pop. size}$.
Each data point on plots corresponds to the average of one of those measurements for the corresponding $(population size, measurement type, experiment)$ triple. Each average is computed over the measurement values obtained for the first 10 generations of 15 different populations for given size (thus effectively the compile times of 150 generations averaged). The reason for not using 150 generations of a single population directly is that a population gains bias towards to a certain type of individuals after certain number of generations, and stops representing the inherent unbiased distribution of grammar.
The number of test cases used is dependent on the nature of the problem; on the other hand, as each test case is run as a GPU thread, it is desirable that the number of test cases be a multiple of 32 on any problem, as the finest granularity for task scheduling on modern GPUs is a group of 32 threads called a \textit{Warp}. For test case counts that are not multiples of 32, the GPU transparently rounds the number up to the nearest multiple of 32 and allocates cores accordingly, with some threads from the last warp running on cores with output disabled. The number of test cases we used during the experiments was 32 for the \textit{Search Problem}, 64 for regression of the Keijzer-6 function, and 1024 ($=2^{(5+5)}$) for the 5-bit Binary Multiplier Problem. For all experiments both mutation and crossover rates were set to 0.7; these rates do not affect the compilation times.
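The rounding described above amounts to the following host-side computation (a trivial sketch with names of our choosing):
\begin{lstlisting}[captionpos=b,caption=Warp-granularity rounding of the test case count, basicstyle=\footnotesize]
int warps         = (numTestCases + 31) / 32;  // warps needed, rounded up
int paddedThreads = warps * 32;                // threads the GPU actually schedules
\end{lstlisting}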
\section*{Experiment Results}
\subsection*{Conventional Compilation as Baseline}
\begin{figure}[!htb]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{all-three-nvcc-comparison.eps}
\caption{Per individual compile time}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{all-three-nvcc-total-comparison.eps}
\caption{Total compile time}
\end{subfigure}
\caption{Nvcc compilation times by population size.}
\label{fig:all-three-nvcc}
\end{figure}
NVCC is the default compiler of the CUDA platform; it is distributed as a command line application. In addition to the compilation of CUDA C source codes, it performs tasks such as the separation of source code into host code and device code, calling the underlying host compiler (GCC or the Visual C compiler) for the host part of the source code, and linking compiled host and device object files.
Fig.\ref{fig:all-three-nvcc}(a) shows that compilation times level out at 11.2 ms/individual for the Search Problem, at 7.62 ms/individual for Keijzer-6 regression, and at 17.2 ms/individual for the 5-bit multiplier problem. It can be seen in Fig.\ref{fig:all-three-nvcc}(b) that, even though not obvious, the total compilation time does not increase linearly, which is most observable on the trace of the 5-bit multiplier problem. As Nvcc is a separate process, it is not possible to measure the distribution of compilation time between source to PTX, PTX to CUBIN, and all other setup work (i.e. process launch overhead, disk I/O); therefore it is not possible to pinpoint the source of nonlinearity in the total compilation time.
The need for successive invocations of the Nvcc application, and all data transfers being handled over disk files, are the main drawbacks of Nvcc use in a real time\footnote{not as in hard real time, but as prolonged, successive and throughput sensitive use} context, which is the case in genetic programming. Even though the repeated creation and teardown of the NVCC process most probably guarantees that the application stays in the disk cache, this still prevents it from staying cached in processor L1/L2 caches.
\subsection*{In-process Compilation}
NVRTC is a runtime compilation library for CUDA C; it was first released as part of v7 of the CUDA platform in 2015. NVRTC accepts CUDA source code and compiles it to PTX in memory. The PTX string generated by NVRTC can be further compiled to a device dependent CUBIN object file and loaded with the CUDA Driver API, still without persisting it to a disk file. This provides optimizations and performance not possible with off-line static compilation.
Without NVRTC, a separate process needs to be spawned for each compilation to execute nvcc at runtime. This has a significant overhead drawback; NVRTC addresses these issues by providing a library interface that eliminates the overhead of spawning separate processes and the extra disk I/O.
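Our engine reaches NVRTC through the managedCuda bindings from C\#; in native terms, the in-process pipeline corresponds roughly to the sketch below (error handling elided, the kernel file name and compile option are illustrative, and a current CUDA driver context is assumed).
\begin{lstlisting}[captionpos=b,caption=Native sketch of the in-process compilation pipeline, basicstyle=\footnotesize]
#include <nvrtc.h>
#include <cuda.h>
#include <vector>

CUmodule compileInProcess(const char *cudaSource)
{
    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, cudaSource, "population.cu", 0, NULL, NULL);
    const char *opts[] = { "--gpu-architecture=compute_30" };
    nvrtcCompileProgram(prog, 1, opts);  // CUDA C -> PTX, entirely in memory

    size_t ptxSize;
    nvrtcGetPTXSize(prog, &ptxSize);
    std::vector<char> ptx(ptxSize);
    nvrtcGetPTX(prog, ptx.data());
    nvrtcDestroyProgram(&prog);

    CUmodule module;  // PTX -> CUBIN via driver-level JIT, still no disk I/O
    cuModuleLoadDataEx(&module, ptx.data(), 0, NULL, NULL);
    return module;
}
\end{lstlisting}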
\begin{figure}[!htb]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{nvcc_nvrtc_search.eps}
\caption{ Per individual}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{nvcc_nvrtc_search_total.eps}
\caption{ Total}
\end{subfigure}
\caption{In-process and out of process compilation times by population size, for Search Problem}
\label{fig:search-nvcc-vs-nvrtc}
\end{figure}
\begin{figure}[!htb]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{nvcc_nvrtc_K6.eps}
\caption{ Per individual}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{nvcc_nvrtc_K6_total.eps}
\caption{ Total}
\end{subfigure}
\caption{In-process and out of process compilation times by population size, for Keijzer-6 Regression}
\label{fig:k6-nvcc-vs-nvrtc}
\end{figure}
\begin{figure}[!htb]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{nvcc_nvrtc_MUL.eps}
\caption{Per individual}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{nvcc_nvrtc_MUL_total.eps}
\caption{ Total}
\end{subfigure}
\caption{In-process and out of process compilation times by population size, for 5-bit Multiplier}
\label{fig:mul-nvcc-vs-nvrtc}
\end{figure}
In figures \ref{fig:search-nvcc-vs-nvrtc}, \ref{fig:k6-nvcc-vs-nvrtc} and \ref{fig:mul-nvcc-vs-nvrtc} it can be seen that in-process compilation of individuals not only provides reduced compilation times for all problems at all population sizes, but also allows reaching the asymptotically optimal per-individual compilation time with much smaller populations.
The fastest compilation times achieved with in-process compilation are 4.14 ms/individual for Keijzer-6 regression (at 300 individuals per population), 10.88 ms/individual for the 5-bit multiplier problem (at 100 individuals per population\footnote{compilation speed at 300 individuals per population is 13.29 ms/individual}), and 6.89 ms/individual for the Search Problem (at 280 individuals per population\footnote{compilation speed at 300 individuals per population is 7.76 ms/individual}).
The total compilation time speedups are measured to be in the order of 261\% to 176\% for the K6 regression problem, 288\% to 124\% for the 5-bit multiplier problem, and 272\% to 143\% for the Search Problem, depending on population size (see Fig.\ref{fig:all-speedups}).
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\textwidth]{all-speedups.eps}
\caption{Compile time speedup ratios between conventional and in-process compilation by problem}
\label{fig:all-speedups}
\end{figure}
\subsection*{Parallelizing In-process Compilation}
\subsubsection*{Infeasibility of parallelization with threads}
A first approach to parallelize in-process compilation that comes to mind is to partition the individuals and spawn multiple threads that compile each partition in parallel through the NVRTC library. Unfortunately it turns out that the NVRTC library is not designed for multi-threaded use; we noticed that when multiple compilation calls are made from different threads at the same time, the execution is automatically serialized.
The stack trace in Fig.\ref{fig:nvrtc-serialized} shows \textit{nvrtc64\_80.dll} calling the OS kernel's \textit{EnterCriticalSection} function to block for exclusive execution of a code block, and getting unblocked by another thread which also runs a block from the same library, 853ms later, via the release of the related lock. The pattern of green blocks on three threads in addition to the main thread in Fig.\ref{fig:nvrtc-serialized} shows that calls are perfectly serialized one after another, despite being issued at the same time, which is hinted at by the red synchronization blocks preceding them.
\begin{figure}[!htb]
\center
\includegraphics[width=\textwidth]{nvrtc-serialized.png}
\caption{NVRTC library serializes calls from multiple threads}
\label{fig:nvrtc-serialized}
\end{figure}
Although NVRTC compiles CUDA source to PTX with a single call, the presence of a compiler-options setup function which affects the following compilation call, and the use of critical sections at function entries, show that this is apparently a stateful API. Furthermore, unlike the design of the CUDA APIs, the mentioned state is most likely not stored in thread local storage (TLS), but on the private heap of the dynamically loaded library, making it impossible for us to trivially parallelize this closed source library using threads, as moving the kept state to TLS requires source level modifications.
\subsubsection*{Parallelization with daemon processes}
Therefore, as a second approach, we implemented a daemon process which stays resident. It is launched from the command line with a unique ID as a command line parameter to allow multiple instances. As many daemon instances are launched as the desired level of parallelism, and each instance identifies itself with the ID received as parameter. Each launched process registers two named synchronization events with the operating system, for signaling the state transitions of a simple state machine consisting of the $\{starting,available,processing \}$ states, which represent the state of that instance. The main process also keeps copies of the same state machines for each instance to track the states of the daemons. Thus both processes (main and daemon) keep a consistent view of the mirrored state machine by monitoring the named events, which allows state transitions to be performed in lock step. A state transition can be initiated by both processes; specifically, $(starting \to available)$ and $(processing \to available)$ are triggered by the daemon, and $(available \to processing)$ is triggered by the main process.
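The daemon's event loop is summarized below as a native sketch; the actual implementation is in C\#, the Win32 calls mirror the named events and transitions described above, and details such as the map capacity and name suffixes are our assumptions.
\begin{lstlisting}[captionpos=b,caption=Sketch of a compilation daemon's event loop, basicstyle=\footnotesize]
#include <windows.h>
#include <string>

void daemonLoop(const std::string &id)
{
    const DWORD kMapBytes = 1 << 20;  // assumed capacity of the shared region
    HANDLE evToMain   = CreateEventA(NULL, FALSE, FALSE, (id + "1").c_str());
    HANDLE evToDaemon = CreateEventA(NULL, FALSE, FALSE, (id + "2").c_str());
    HANDLE map = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                    0, kMapBytes, ("MMAP" + id).c_str());
    char *shared = (char *)MapViewOfFile(map, FILE_MAP_ALL_ACCESS,
                                         0, 0, kMapBytes);
    SetEvent(evToMain);                            // starting -> available
    for (;;)
    {
        WaitForSingleObject(evToDaemon, INFINITE); // available -> processing
        // read CUDA source from 'shared', compile to PTX with NVRTC,
        // JIT it to a CUBIN object, write the CUBIN back into 'shared'
        SetEvent(evToMain);                        // processing -> available
    }
}
\end{lstlisting}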
\begin{figure}[!htb]
\centering
\resizebox{0.8\textwidth}{!}{
\begin{sequencediagram}
\newthread{A}{Main Process}{}
\newinst[0.5]{B}{Compilation Daemon}{}
\newinst[7.5]{OS}{OS}{}
\begin{messcall}{A}{create synchronization events \%ID\%+"1" and \%ID\%+"2"}{OS}
\begin{messcall}{A}{launch process with command line parameter \%ID\%}{OS} \end{messcall}
\prelevel
\begin{call}{A}{wait for event \%ID\%+"1"}{OS}{unblock as event \%UID\%+"1" signaled}
\begin{messcall}{OS}{create process}{B}
\postlevel \postlevel
\begin{messcall}{B}{ \shortstack{ create named memory map "\%MMAP\%+\%UID\%" \\ create view to memory map \\ open synchronization event \%UID\%+"1" and \%UID\%+"2"}}{OS} \end{messcall}
\begin{messcall}{B}{ \shortstack{ signal event \%UID\%+"1" \\ wait for event \%UID\%+"2" }}{OS} \end{messcall}
\end{messcall}
\end{call}
\end{messcall}
\end{sequencediagram}
}
\caption{Sequence Diagram for creation of a compilation daemon process and related interprocess communication primitives}
\label{fig:sequence1}
\end{figure}
The communication between the main process and the compilation daemons is handled via shared views of memory maps. Each daemon registers a named memory map and creates a view of it, to which the main process also creates a view after the daemon signals the state transition from $starting$ to $available$ (see Fig.\ref{fig:sequence1}). CUDA source is passed through this shared memory, and the compiled device dependent CUBIN object file is also returned through the same. To signal the state transition $(starting \to available)$ the daemon process signals the first event and starts waiting for the second event at the same time. Once a daemon leaves the $starting$ state, it never returns to it.
When the main process generates a new population to be compiled, it partitions the individuals in a balanced way, such that the difference in the number of individuals between any pair of partitions is never more than one (see the sketch below). Once the individuals are partitioned, the generated CUDA codes for each partition are passed to the daemon processes. Each daemon waits in the blocked state until the main process wakes that specific daemon for a new batch of source to compile by signaling the second named event of that process (see Fig.\ref{fig:sequence2}). The main process signals all daemons asynchronously to start compiling, then starts waiting for the completion of the daemon processes' work. To prevent the UI thread of the main process from getting blocked too, the main process maintains a separate thread for each daemon process it communicates with; therefore, while waiting for daemon processes to finish their jobs, only those threads of the main process are blocked. The main process signaling the second event and the daemon process unblocking as a result corresponds to the state transition $(available \to processing)$.
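The balanced split referenced above can be computed as follows (a sketch with our own names):
\begin{lstlisting}[captionpos=b,caption=Balanced partitioning of a population across daemons, basicstyle=\footnotesize]
#include <vector>

// partition sizes differ by at most one; e.g. 300 individuals over
// 8 daemons yield sizes 38,38,38,38,37,37,37,37
std::vector<int> partitionSizes(int individuals, int daemons)
{
    std::vector<int> sizes(daemons, individuals / daemons);
    for (int i = 0; i < individuals % daemons; ++i)
        ++sizes[i];  // distribute the remainder one individual at a time
    return sizes;
}
\end{lstlisting}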
\begin{figure}[!htb]
\centering
\resizebox{0.8\textwidth}{!}{
\begin{sequencediagram}
\newthread{A}{Main Process}{}
\newinst[0.5]{B}{Compilation Daemon}{}
\newinst[7.5]{OS}{OS}{}
\begin{call}{A}{write CUDA code to shared memory}{A}{} \end{call}
\begin{messcall}{A}{\shortstack{signal event \%ID\%+"2" \\ wait for event \%ID\%+"1"}}{OS}
\begin{messcall}{OS}{unblock as event \%ID\%+"2" is signaled}{B}
\postlevel \postlevel \postlevel
\begin{call}{B}{\shortstack{read CUDA code from shared memory,\\compile CUDA code to PTX with NVRTC, \\ compile PTX to CUBIN with Driver API,\\write CUBIN object to shared memory}}{B}{}
\end{call}
\begin{messcall}{B}{\shortstack{signal event \%ID\%+"1" \\ wait for event \%ID\%+"2" } }{OS} \end{messcall}
\end{messcall}
\prelevel
\begin{messcall}{OS}{unblock as event \%ID\%+"1" is signaled}{A} \end{messcall}
\end{messcall}
\prelevel
\begin{call}{A}{read CUBIN object from shared memory}{A}{} \end{call}
\end{sequencediagram}
}
\caption{Sequence Diagram for compilation on daemon process and related interprocess communication}
\label{fig:sequence2}
\end{figure}
When a daemon process arrives at the $processing$ state, it reads the CUDA source code from the shared view of the memory map related to its ID, and compiles the code using the NVRTC library.
Once a daemon finishes compiling and writes the CUBIN object to shared memory, it signals the first event to unblock the related thread in the main process and starts to wait for the second event once again. This signaling and blocking pair corresponds to the state transition $(processing \to available)$.
\subsubsection*{Cost of Parallelization}
The parallelization approach we propose is virtually overhead free when compared to a hypothetical parallelization scenario using threads. As the daemon processes are already resident and waiting in memory along with the loaded NVRTC library, the overhead of both parallelization approaches is limited to the time cost of memory moves from/to shared memory and synchronization by named events\footnote{on the Windows operating system the named event is the fastest IPC primitive, upon which all others (i.e. mutex, semaphore) are implemented}. The only difference between the two is that in a context switch between threads of the same process, the processor keeps the Translation Lookaside Buffer (TLB), but in a context switch to another process the TLB is flushed as the processor transitions to a new virtual address space; we conjecture that the impact would be negligible.
As for the memory cost, all modern operating systems recognize when an executable binary or shared library gets loaded multiple times; the OS keeps a single copy of the related memory pages in physical memory, and separately maps those into the virtual address spaces of each process using them. This not only saves physical RAM, but also allows better spatial locality for L2/L3 processor caches. Hence the memory consumption of multiple instances of our daemon process, each loading the NVRTC library (\textit{nvrtc64\_80.dll} is almost 15MB) into its own address space, is almost the same as the consumption of a single instance.
\subsubsection*{Speedup Achieved with Parallel Compilation}
\begin{table}[]
\centering
\caption{Compilation Times by Compilation Methods for Search Problem with 300 individuals}
\label{search-table}
\begin{tabular}{|c|r|r|r|r|} \hline
& \multicolumn{2}{c|}{Compilation Time} & \multicolumn{2}{c|}{Speedup ratio} \\
Compilation & & & In-process & Nvcc \\
Method & Per individual & Total & compilation & compilation \\ \hline
Nvcc & 11.20 ms & 3.36 sec & - & 1.00 \\
In-process & 7.76 ms & 2.33 sec & 1.00 & 1.44 \\
2 daemons & 3.81 ms & 1.14 sec & 2.04 & 2.93 \\
4 daemons & 2.53 ms & 0.76 sec & 3.07 & 4.41 \\
6 daemons & 2.23 ms & 0.67 sec & 3.48 & 5.01 \\
8 daemons & 2.13 ms & 0.64 sec & 3.65 & 5.26 \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Compilation Times by Compilation Methods for Keijzer-6 Regression with 300 individuals}
\label{K6-table}
\begin{tabular}{|c|r|r|r|r|} \hline
& \multicolumn{2}{c|}{Compilation Time} & \multicolumn{2}{c|}{Speedup ratio} \\
Compilation & & & In-process & Nvcc \\
Method & Per individual & Total & compilation & compilation \\ \hline
Nvcc & 7.63 ms & 2.29 sec & - & 1.00 \\
In-process & 4.14 ms & 1.24 sec & 1.00 & 1.83 \\
2 daemons & 2.92 ms & 0.88 sec & 1.42 & 2.60 \\
4 daemons & 2.45 ms & 0.73 sec & 1.69 & 3.10 \\
6 daemons & 2.20 ms & 0.66 sec & 1.88 & 3.45 \\
8 daemons & 2.25 ms & 0.67 sec & 1.84 & 3.37 \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Compilation Times by Compilation Methods for 5-bit Multiplier Problem with 300 individuals}
\label{MUL-table}
\begin{tabular}{|c|r|r|r|r|} \hline
& \multicolumn{2}{c|}{Compilation Time} & \multicolumn{2}{c|}{Speedup ratio} \\
Compilation & & & In-process & Nvcc \\
Method & Per individual & Total & compilation & compilation \\ \hline
Nvcc & 17.20 ms & 5.16 sec & - & 1.00 \\
In-process & 13.29 ms & 3.99 sec & 1.00 & 1.24 \\
2 daemons & 6.15 ms & 1.85 sec & 2.16 & 2.69 \\
4 daemons & 3.23 ms & 0.97 sec & 4.12 & 5.12 \\
6 daemons & 2.42 ms & 0.73 sec & 5.49 & 6.82 \\
8 daemons & 2.17 ms & 0.65 sec & 6.11 & 7.60 \\ \hline
\end{tabular}
\end{table}
At the end of each batch of experiments the main application dumps the collected raw measurements to a file. We imported this data into Matlab, filtered by experiment and measurement types, and aggregated the experiment values for each population size to produce Tables \ref{search-table}, \ref{K6-table}, \ref{MUL-table}, and to create Figures \ref{fig:search-parallel}, \ref{fig:search-speedup-vs-nvrtc}, \ref{fig:K6-parallel}, \ref{fig:K6-speedup-vs-nvrtc}, \ref{fig:MUL-parallel}, \ref{fig:MUL-speedup-vs-nvrtc}.
It can be seen that parallelized in-process compilation of genetic programming individuals is faster for all problems and population sizes when compared to in-process compilation without parallelization; furthermore, in-process compilation without parallelization was itself shown to be faster than regular command line nvcc compilation in the previous section.
Parallel compilation brought the per-individual compilation time down to 2.17 ms/individual for the 5-bit Multiplier, to 2.20 ms/individual for Keijzer-6 regression, and to 2.13 ms/individual for the Search Problem; these are almost an order of magnitude faster than previously published results. Also we measured a compilation speedup of $\times 3.45$ for the regression problem, $\times 5.26$ for the search problem, and $\times 7.60$ for the multiplication problem, when compared to the latest Nvcc V8 compiler, without requiring any code modification, and without any runtime performance penalty.
Notice that our experiment platform consisted of dual Xeon E5-2670 processors running at 2.6GHz; for compute-bound tasks an increase in processor frequency translates almost directly into a proportional performance improvement\footnote{assuming all other things being equal}. Therefore we can conjecture that, to compile a population of 300 individuals at sub-millisecond per-individual durations, the required processor frequency would be around $2.6 \times 2.13 = 5.54$GHz\footnote{once again, under the assumption of all other things being equal; 2.13 ms is the compilation time for the Search Problem with 8 daemons}, which is currently available.
\section*{Conclusion}
In this paper we present a new method to accelerate the compilation of genetic programming individuals, in order to keep the compiled approach a viable option for genetic programming on GPU.
By using an in-process GPU compiler, we replaced disk file based data transfer to/from the compiler with memory accesses, and mitigated the overhead of repeated launches and teardowns of the command line compiler. We also investigated ways to parallelize this method of compilation, and identified that the in-process compilation function automatically serializes concurrent calls from different threads. We implemented a daemon process that can have multiple running instances and service another application requesting CUDA code compilation. Daemon processes use the same in-process compilation method and communicate through the operating system's Inter Process Communication primitives.
We measured compilation times just above 2.1 ms/individual for all three benchmark problems, and observed compilation speedups ranging from $\times 3.45$ to $\times 7.60$ depending on the problem, when compared to repeated command line compilation with the latest Nvcc v8 compiler.
All data and source code of software presented in this paper is available at https://github.com/hayral/Parallel-and-in-process-compilation-of-individuals-for-genetic-programming-on-GPU
\section*{Acknowledgments}
Dedicated to the memory of Professor Ahmet Coşkun Sönmez.
\\First author was partially supported by Turkcell Academy.
\section*{Appendix}
\subsection*{Search Problem}
\subsubsection*{ Grammar Listing}
\lstinputlisting[captionpos=b,caption=Grammar for Search Problem, basicstyle=\footnotesize, belowcaptionskip=4pt ]{search2grammar.txt}
\subsubsection*{Code Preamble for Whole Population}
\lstinputlisting[captionpos=b,caption=Code preamble for whole population on Search Problem, basicstyle=\footnotesize, belowcaptionskip=4pt ]{searchpreamble.txt}
\subsection*{Keijzer-6 Regression Problem}
\subsubsection*{ Grammar Listing}
\lstinputlisting[captionpos=b,caption=Grammar for Keijzer-6 Regression, basicstyle=\footnotesize, belowcaptionskip=4pt ]{K6grammar.txt}
\subsection*{5-bit Multiplier Problem}
\subsubsection*{ Grammar Listing}
\lstinputlisting[captionpos=b,caption=Grammar for 5-bit Multiplier Problem, basicstyle=\footnotesize, belowcaptionskip=4pt ]{mulgrammar.txt}
\subsubsection*{Code Preamble for each Individual}
\lstinputlisting[captionpos=b,caption=Code preamble for 5-bit Multiplier Problem, basicstyle=\footnotesize, belowcaptionskip=4pt ]{mulpreamble.txt}
\subsection*{Compilation Time and Speedup Ratio Plots}
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{search-parallel.eps}
\caption{Per individual compile time}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{search-parallel-total.eps}
\caption{Total compile time}
\end{subfigure}
\caption{Nvcc compilation times for Search Problem by number of servicing resident processes}
\label{fig:search-parallel}
\end{figure}
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{search-speedup-vs-nvcc.eps}
\caption{Speedup against conventional compilation }
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{search-speedup-vs-nvrtc.eps}
\caption{Speedup against in-process compilation}
\end{subfigure}
\caption{Parallelization speedup on Search problem}
\label{fig:search-speedup-vs-nvrtc}
\end{figure}
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{K6-parallel.eps}
\caption{Per individual compile time}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{K6-parallel-total.eps}
\caption{Total compile time}
\end{subfigure}
\caption{Nvcc compilation times for Keijzer-6 regression by number of servicing resident processes}
\label{fig:K6-parallel}
\end{figure}
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{K6-speedup-vs-nvcc.eps}
\caption{Speedup against conventional compilation }
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{K6-speedup-vs-nvrtc.eps}
\caption{Speedup against in-process compilation}
\end{subfigure}
\caption{Parallelization speedup on Keijzer-6 regression}
\label{fig:K6-speedup-vs-nvrtc}
\end{figure}
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{MUL-parallel.eps}
\caption{Per individual compile time}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{MUL-parallel-total.eps}
\caption{Total compile time}
\end{subfigure}
\caption{Nvcc compilation times for 5-bit Multiplier by number of servicing resident processes}
\label{fig:MUL-parallel}
\end{figure}
\begin{figure}[!ht]
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{mul-speedup-vs-nvcc.eps}
\caption{Speedup against conventional compilation }
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{mul-speedup-vs-nvrtc.eps}
\caption{Speedup against in-process compilation}
\end{subfigure}
\caption{Parallelization speedup on 5-Bit multiplier}
\label{fig:MUL-speedup-vs-nvrtc}
\end{figure}
\clearpage
\bibliographystyle{plain}
\section{Introduction}
Wireless mesh networks have recently been studied extensively due to their potential to improve the performance and throughput of cellular networks by borrowing features from ad-hoc networks \cite{Akyildiz:05CN}. The two-hop interference network
was recently proposed to model the mesh network from an
information theoretic perspective \cite{Simeone_etal:07Allerton}.
The model is in essence a cascade of two interference channels:
the transmitters communicate to two relay nodes through an
interference channel and the two relay nodes communicate to the
two receivers through another interference channel.
In \cite{Simeone_etal:07Allerton}, the authors studied the
achievable region for the model where the relays apply
the decode-and-forward scheme. For the interference channel in the
first hop, since the messages of the two users are independent,
the largest achievable region to date was proposed by Han and
Kobayashi \cite{Han&Kobayashi:81IT}. The basic idea is for each
user to split their message into two parts: the private message,
which is only to be decoded by the intended receiver, and the
common message, which is to be decoded by both receivers. Although
the unintended user's common message is discarded by the receivers
in the classic interference channel model,
\cite{Simeone_etal:07Allerton} made use of this common message at
the two relay nodes as knowledge of them can help boost the rate
in the second hop through cooperative transmission. In
\cite{Simeone_etal:07Allerton}, the authors proposed the
superposition coding scheme for each relay node to transmit not
only the intended user's private and common messages but also the
other user's common message, in order to obtain the coherent
combining gain of the common message at the intended receiver.
\cite{Thejaswi_etal:07Allerton} also considered the two-hop
interference network model. Instead of considering the end-to-end
transmission rate, the authors focused on the second hop and
explored the possibilities for the two relays to utilize the
common message from the unintended user
and proposed multiple transmission schemes, such as
MIMO broadcast strategy, dirty paper coding, beamforming, and
further rate splitting.
However, both \cite{Simeone_etal:07Allerton} and
\cite{Thejaswi_etal:07Allerton} only considered decode-and-forward
relaying and focused on the weak interference case for both hops,
i.e., the interference link gain is less than the direct link
gain. In this paper, we study the model under various parameter
regimes using decode-and-forward relaying as well as
amplify-and-forward relaying. \cite{Simeone_etal:07Allerton} and
\cite{Thejaswi_etal:07Allerton} also suggested that, if the
interference channel in the first hop has strong interference
(interference link gain greater than direct link gain), by the
standard results for the classic interference channel, it is
optimal for the two relays to decode both users' messages. Contrary
to this, we will show in later sections that this approach can be
easily outperformed by switching the roles of the two relays which
essentially converts the strong interference channels to weak
interference channels. For amplify-and-forward, we demonstrate
that the end-to-end rate may exceed the naive cut-set bound,
which applies only to the decode-and-forward approach.
The rest of the paper is organized as follows. In section II, we
introduce the model for the two-hop interference network. In
section III, we focus on the end-to-end transmission rate and
analyze the decode-and-forward relaying scheme for the network
under different parameter regimes. In section IV, we analyze the
amplify-and-forward scheme under various parameter regimes.
Section V provides numerical examples to compare various proposed
coding schemes. Concluding remarks are given in section VI.
\section{Channel Model}
The standard two-hop interference network is a cascade of two
interference channels with direct transmission link coefficient
equal to $1$, as shown in Fig. \ref{fig:standard twohop}.
\begin{figure}[htb]
\centerline{
\begin{psfrags}
\psfrag{W1}[l]{$W_1$}\psfrag{W2}[l]{$W_2$}
\psfrag{W11}[l]{$\hat{W}_1$}\psfrag{W22}[l]{$\hat{W}_2$}
\psfrag{X1}[l]{$X_1$} \psfrag{X2}[l]{$X_2$} \psfrag{X3}[l]{$X_3$}
\psfrag{X4}[l]{$X_4$} \psfrag{Y1}[l]{$Y_1$} \psfrag{Y2}[l]{$Y_2$}
\psfrag{Y3}[l]{$Y_3$} \psfrag{Y4}[l]{$Y_4$} \psfrag{Z1}[l]{$Z_1$}
\psfrag{Z2}[l]{$Z_2$} \psfrag{Z3}[l]{$Z_3$} \psfrag{Z4}[l]{$Z_4$}
\psfrag{T1}[l]{$T1$}\psfrag{T2}[l]{$T2$}
\psfrag{R1}[l]{$R1$}\psfrag{R2}[l]{$R2$}
\psfrag{D1}[l]{$D1$}\psfrag{D2}[l]{$D2$}
\psfrag{h1}[l]{$1$}\psfrag{h2}[l]{$a_2$}
\psfrag{h3}[l]{$a_1$}\psfrag{h4}[l]{$1$}
\psfrag{h5}[l]{$1$}\psfrag{h6}[l]{$b_2$}
\psfrag{h7}[l]{$b_1$}\psfrag{h8}[l]{$1$}
\scalefig{.50}\epsfbox{two-hop.eps}
\end{psfrags}
} \caption{\label{fig:standard twohop}Two-hop interference network
in standard form}
\end{figure}
Transmitter 1 ($T_1$) has message
$W_1\in\{1,2,\cdot\cdot\cdot,2^{nR_1}\}$ to be transmitted to
destination $D_1$ and transmitter 2 ($T_2$) has message
$W_2\in\{1,2,\cdot\cdot\cdot,2^{nR_2}\}$ to be transmitted to
destination $D_2$. $a_1$, $a_2$, $b_1$ and $b_2$ are fixed
positive numbers, $Z_1$, $Z_2$, $Z_3$ and $Z_4$ are independent
Gaussian distributed variables with zero mean and unit variance.
The average power constraints for the input signals $X_1$, $X_2$,
$X_3$ and $X_4$ are $P_{11}$, $P_{12}$, $P_{21}$ and $P_{22}$,
respectively.
In order to simplify the analysis of this complicated channel
model and better compare our results with the existing ones, we
follow the convention of \cite{Simeone_etal:07Allerton} and
\cite{Thejaswi_etal:07Allerton} by only considering the symmetric
interference channels, i.e., \begin{eqnarray}
a_1=a_2&\triangleq& a\label{eq:model5}\\
b_1=b_2&\triangleq& b\\
P_{11}=P_{12}&\triangleq& P_1\\
P_{21}=P_{22}&\triangleq& P_2\label{eq:model8} \end{eqnarray} In addition,
we focus primarily on the symmetric rate, i.e., the case with
$R_1=R_2$.
\section{Decode and Forward}
In this section, we propose capacity bounds for the two-hop
interference network in various parameter regimes using
decode-and-forward relaying. Under the full duplex condition, the
transmission is conducted across a large number of blocks. In each
block, the relays receive the new messages of the current block
from the transmitters, and transmit the information of the
previous block to the desitnation. We assume the number of blocks
is large enough to ignore the penalty incurred in the first and
the last blocks.
\subsection{$0<a<1, 0<b<1$}\label{subsection:case1}
In \cite{Simeone_etal:07Allerton}, the authors proposed achievable
transmission rates for the case that both hops have weak
interference, i.e., $a<1$ and $b<1$. Specifically, they applied
Han-Kobayashi's scheme to the first hop by splitting each user's
message into two parts, namely, $W_1$ into private message
$W_{1p}\in \{1, \cdot\cdot\cdot, 2^{nR_{1p}}\}$ and common message
$W_{1c}\in \{1, \cdot\cdot\cdot, 2^{nR_{1c}}\}$ and $W_2$ into
private message $W_{2p}\in \{1, \cdot\cdot\cdot, 2^{nR_{2p}}\}$
and common message $W_{2c}\in \{1, \cdot\cdot\cdot,
2^{nR_{2c}}\}$. Each relay not only decodes the private and common
messages from the intended user, but also decodes the common
message from the other user. Since the Han-Kobayashi region is
based on simultaneous decoding of the three messages (1 private
message and 2 common messages), which is very complicated to
compute, \cite{Thejaswi_etal:07Allerton} simplified it by
proposing sequential decoding: each relay first decodes the two
common messages, subtract them out, then decode the private
message. By restricting the analysis to the symmetric rate
\cite{Thejaswi_etal:07Allerton}, i.e., $R_{1p}=R_{2p}=R_p^{(1)}$,
$R_{1c}=R_{2c}=R_c^{(1)}$, we have achievable rates in the first
hop \begin{eqnarray} R_p^{(1)}\!\!\!\!\!&=&\!\!\!\!\!\gamma\left(\frac{\alpha
P_1}{1+a^2\alpha P_1}\right)\label{eq:1}\\
R_c^{(1)}\!\!\!\!\!&=&\!\!\!\!\!\min\left\{\gamma\left(\frac{a^2\bar{\alpha}P_1}{\sigma_1^2}\right),
\frac{1}{2}\gamma\left(\frac{(1+a^2)\bar{\alpha}P_1}{\sigma_1^2}\right)\right\}\label{eq:2}
\end{eqnarray} where $\alpha P_1$ is the power allocated to the private
message and $\bar{\alpha}P_1=(1-\alpha)P_1$ is the power allocated
to the common message. $\sigma_1^2=1+(1+a^2)\alpha P_1$.
$\gamma(x)$ is defined as $\frac{1}{2}\log(1+x)$. The superscript
``(1)" denotes the first hop. (\ref{eq:2}) is from the capacity
region of the MAC channel consisting of the two common messages,
treating the private messages as noise; (\ref{eq:1}) is the
decoding of the private message treating the other private message
as noise.
For the second hop, \cite{Simeone_etal:07Allerton} proposed
superposition scheme at the two relays such that coherent
combining can be achieved at the destinations. This scheme was
outperformed by the dirty paper coding (DPC) scheme proposed in
\cite{Thejaswi_etal:07Allerton} for the very weak interference
case, i.e., when $b$ is very small. The idea is for the two relays
to encode one of the common messages using DPC, thus treating the
other common message as known interference. Therefore, this known
interference will not affect the unintended destination. However,
due to the nonlinearity of the DPC, the dirty paper decoded common
message cannot be subtracted out. Thus,
\cite{Thejaswi_etal:07Allerton} also suggested to dirty paper code
the private message treating both common messages as known
interference. Besides, the common message that is treated as known
interference is decoded at its intended destination by treating
the other common message (dirty paper coded) as well as the two
private messages as noise. Since either common message can be
dirty paper coded against the other common message, there are two
transmission modes and one should time share between them to
maximize the sum rate \cite{Thejaswi_etal:07Allerton}. Again, by
only considering the symmetric rates, the achievable rates under
the DPC scheme for the second hop are
\cite{Thejaswi_etal:07Allerton} \begin{eqnarray} R_{p,
DPC}^{(2)}&=&\gamma\left(\frac{\beta
P_2}{1+b^2\beta P_2}\right)\label{eq:3}\\
R_{c,DPC}^{(2)}&=&\frac{1}{2}\gamma\left(\frac{(1-b^2)^2\bar{\beta}^2P_2^2}{\sigma_2^4}+\frac{2(1+b^2)\bar{\beta}P_2}{\sigma_2^2}\right)\label{eq:4}
\end{eqnarray} where $\beta P_2$ is the power allocated to the private
message, $\sigma_2^2=1+(1+b^2)\beta P_2$ since the private
messages from both users are treated as noise when decoding common
messages. (\ref{eq:3}) is decoding the private message treating
the other user's private message as noise, since the effect of the
two common messages disappears due to the DPC; (\ref{eq:4}) is
from the optimization problem which maximizes the sum rate of the
two common messages.
In the DPC scheme, the common message that is treated as known
interference is decoded by its intended receiver treating the
other user's common message and private messages as noise.
However, when the interference link of the second hop gets
stronger, i.e., $b$ gets larger, the interference incurred by the
common message and private message from the other user may be too
strong to be treated as noise. Therefore, it may be beneficial for
the receivers to decode the common message and even the private
message from the other user, like in the strong interference
channel, whose capacity is that of the compound MAC. To make the
coding scheme more general, we do not let the receivers decode all
the private messages. Instead, we further split the private
message $W_{1p}$ from the first hop into two parts,
$W_{1pp}\in\{1,2,\cdot\cdot\cdot,2^{nR_{1pp}}\}$ and
$W_{1pc}\in\{1,2,\cdot\cdot\cdot,2^{nR_{1pc}}\}$, where $W_{1pp}$
is the sub-private message only decoded at the intended receiver,
and $W_{1pc}$ is the sub-common message decoded at both receivers.
The private message $W_{2p}$ is split in the same fashion into
$W_{2pp}$ and $W_{2pc}$. There are five messages (two common
messages, two sub-common messages and one sub-private message) to
be decoded by each receiver, which yields very complex expression
for the rate region if we use simultaneous decoding. Instead, we
will adopt sequential decoding and fix the decoding order as
follows: first, simultaneously decode the two common messages
$W_{1c}$ and $W_{2c}$, subtract them out; second, simultaneously
decode the two sub-common messages $W_{1pc}$ and $W_{2pc}$,
subtract them out; third, decode the sub-private message $W_{1pp}$
by receiver 1 (or $W_{2pp}$ by receiver 2). Consequently, the
symmetric achievable rate region is \begin{eqnarray}
R_c&\leq&\gamma\left(\frac{(\sqrt{P_{c1}}+b\sqrt{P_{c2}})^2}{1+(1+b^2)P_p}\right)\\
R_c&\leq&\gamma\left(\frac{(\sqrt{P_{c2}}+b\sqrt{P_{c1}})^2}{1+(1+b^2)P_p}\right)\\
2R_c&\leq&\gamma\left(\frac{(\sqrt{P_{c1}}+b\sqrt{P_{c2}})^2+(\sqrt{P_{c2}}+b\sqrt{P_{c1}})^2}{1+(1+b^2)P_p}\right)\\
R_{pc}&\leq&\gamma\left(\frac{P_{pc}}{1+(1+b^2)P_{pp}}\right)\\
R_{pc}&\leq&\gamma\left(\frac{b^2P_{pc}}{1+(1+b^2)P_{pp}}\right)\\
2R_{pc}&\leq&\gamma\left(\frac{(1+b^2)P_{pc}}{1+(1+b^2)P_{pp}}\right)\\
R_{pp}&\leq&\gamma\left(\frac{P_{pp}}{1+b^2P_{pp}}\right)
\end{eqnarray}
where power $P_p$ is allocated to the private message, $P_{c1}$ is
allocated to the intended common message, $P_{c2}$ is allocated to
the interfering common message, and $P_p+P_{c1}+P_{c2}=P_2$. Also,
$P_{pc}$ is for the sub-common message and $P_{pp}$ is for the
sub-private message and $P_{pc}+P_{pp}=P_p$. If we fix $P_p$ and
maximize $R_c$ under $P_{c1}+P_{c2}\leq P_2-P_p$, the optimal
$R_c^*=\frac{1}{2}\gamma\left(\frac{(1+b)^2(P_2-P_p)}{1+(1+b^2)P_p}\right)$
is achieved when $P_{c1}=P_{c2}=\frac{1}{2}(P_2-P_p)$
\cite{Thejaswi_etal:07Allerton}. Therefore, the symmetric rates
for the second hop under the MAC scheme are \begin{eqnarray}\nonumber
R_{p,MAC}^{(2)}\!\!\!\!&=&\!\!\!\!\max_{\alpha}\left\{\min\left[\gamma\left(\frac{b^2\bar{\alpha}\beta
P_2}{\sigma_3^2}\right),
\frac{1}{2}\gamma\left(\frac{(1+b^2)\bar{\alpha}\beta
P_2}{\sigma_3^2}\right)\right]\right.\\
&&+\left.\gamma\left(\frac{\alpha\beta P_2}{1+b^2\alpha\beta
P_2}\right)\right\}\label{eq:3(2)}\\
R_{c,MAC}^{(2)}\!\!\!\!&=&\!\!\!\!\frac{1}{2}\gamma\left(\frac{(1+b)^2\bar{\beta}P_2}{1+(1+b^2)\beta
P_2}\right)\label{eq:4(2)}
\end{eqnarray}
where $\sigma_3^2=1+(1+b^2)\alpha\beta P_2$ and $\alpha, \beta \in
[0,1]$.
This scheme is more general than the cooperative transmission
scheme in \cite{Simeone_etal:07Allerton} in that we further split
the first hop's private messages into two parts in the second hop.
This scheme is similar to the ``layered coding with beamforming"
scheme in \cite{Thejaswi_etal:07Allerton}, with the difference
that we only consider the coherent beamforming here and disregard
the zero forcing beamforming scheme which proves to be always
worse than the DPC scheme.
\begin{theorem}\label{thm:1}
The achievable symmetric rate ($R_1=R_2=R$) for the symmetric
interference network is the solution to the following optimization
problem: \begin{eqnarray} R&=&\max_{\alpha,\beta\in [0,1]} R_p+R_c\\
&&\mbox{s.t.} (R_p,R_c)\in \mathcal{R}(R_p^{(1)},R_c^{(1)})\cap
\mathcal{R}(R_p^{(2)},R_c^{(2)})\end{eqnarray} where $R_p^{(1)}$ and $R_c^{(1)}$
are given in (\ref{eq:1})-(\ref{eq:2}).
$\mathcal{R}(R_p^{(2)},R_c^{(2)})$ is defined as the convex closure of
the union of $\mathcal{R}(R_{p,DPC}^{(2)}, R_{c,DPC}^{(2)})$ and
$\mathcal{R}(R_{p,MAC}^{(2)}, R_{c,MAC}^{(2)})$, where $R_{p,DPC}^{(2)}$
and $R_{c,DPC}^{(2)}$ are given in (\ref{eq:3})-(\ref{eq:4}), and
$R_{p,MAC}^{(2)}$ and $R_{c,MAC}^{(2)}$ are given in
(\ref{eq:3(2)})-(\ref{eq:4(2)}).
\end{theorem}
\subsection{$a>1,b>1$}\label{subsection:case2}
If the first hop has strong interference, i.e., $a>1$, both
\cite{Simeone_etal:07Allerton} and \cite{Thejaswi_etal:07Allerton}
let both relays decode both users' messages in the first hop, as
this is the optimal scheme for interference channels with strong
interference. Using this scheme, for the symmetric rates
$(R_1=R_2=R^{(1)})$, we have \begin{eqnarray} R^{(1)}&\leq& \gamma(P_1)\\
R^{(1)}&\leq& \gamma(a^2P_1)\\
2R^{(1)}&\leq& \gamma((1+a^2)P_1) \end{eqnarray} Thus, \begin{eqnarray}
R^{(1)}=\min\left(\gamma(P_1),
\frac{1}{2}\gamma((1+a^2)P_1)\right) \end{eqnarray} In other words, for the very
strong interference case $a^2\geq 1+P_1$, $R^{(1)}=\gamma(P_1)$;
for $1<a^2<1+P_1$, $R^{(1)}=\frac{1}{2}\gamma((1+a^2)P_1)$.
After the first hop, since both relays have knowledge of both
users' messages, the second hop reduces to the Gaussian vector
broadcast channel with a per-antenna power constraint, for which we
know the DPC scheme is optimal. By time sharing between the two
DPC modes and maximizing the sum rate, we obtain the achievable
symmetric rate for the second hop \begin{eqnarray}
R^{(2)}=\frac{1}{2}\gamma((b^2-1)^2P_2^2+2P_2(1+b^2)). \end{eqnarray}
Therefore the achievable rate for the entire network is \begin{eqnarray}
R=\min\{R^{(1)}, R^{(2)}\}. \label{eq:5} \end{eqnarray}
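As a quick numerical check of (\ref{eq:5}), the sketch below evaluates the end-to-end symmetric rate; the same $\gamma(x)=\frac{1}{2}\log_2(1+x)$ convention is assumed, and the channel parameters are illustrative only.
\begin{verbatim}
import numpy as np

def gamma(x):
    return 0.5 * np.log2(1.0 + x)

def df_rate_strong(a, b, P1, P2):
    # First hop: both relays decode both messages
    R1 = min(gamma(P1), 0.5 * gamma((1 + a**2) * P1))
    # Second hop: DPC for the vector broadcast channel
    R2 = 0.5 * gamma((b**2 - 1)**2 * P2**2 + 2 * P2 * (1 + b**2))
    return min(R1, R2)

# a^2 = 16 >= 1 + P1 = 11, so this is the very strong
# interference regime where R1 = gamma(P1)
print(df_rate_strong(a=4.0, b=2.0, P1=10.0, P2=10.0))
\end{verbatim}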
The above analysis seems to be a natural way to deal with the
strong interference case, and for each hop, the transmission
scheme is optimal. However, optimality in each hop does not
guarantee optimality of the entire network. Indeed, for the entire
system, the combination of the two optimal schemes is no longer
optimal. An easy way to outperform the above scheme is to switch
the roles of the two relays. Specifically, we make relay $R_2$
the ``intended'' relay for the first user $T_1$, and relay $R_1$
the intended relay for the second user $T_2$. In this way, the
first hop is converted into an interference channel with weak
interference. Consequently, the second hop is converted into
another weak interference channel as shown in
Fig.\ref{fig:transform}. After some simple scaling, this two-hop
network becomes \begin{eqnarray} Y_1^{'}&=&X_1+\frac{1}{a}X_2+Z_1^{'}\\
Y_2^{'}&=&\frac{1}{a}X_1+X_2+Z_2^{'}\\
Y_3^{'}&=&X_3+\frac{1}{b}X_4+Z_3^{'}\\
Y_4^{'}&=&\frac{1}{b}X_3+X_4+Z_4^{'} \end{eqnarray} where $Z_1^{'},
Z_2^{'}\sim N(0, 1/a^2)$, $Z_3^{'}, Z_4^{'}\sim N(0, 1/b^2)$ are
independent.
\begin{figure}[htb]
\centerline{
\begin{psfrags}
\psfrag{W1}[l]{$W_1$}\psfrag{W2}[l]{$W_2$}
\psfrag{W11}[l]{$\hat{W}_1$}\psfrag{W22}[l]{$\hat{W}_2$}
\psfrag{X1}[l]{$X_1$} \psfrag{X2}[l]{$X_2$} \psfrag{X3}[l]{$X_4$}
\psfrag{X4}[l]{$X_3$} \psfrag{Y1}[l]{$Y_2$} \psfrag{Y2}[l]{$Y_1$}
\psfrag{Y3}[l]{$Y_3$} \psfrag{Y4}[l]{$Y_4$} \psfrag{Z1}[l]{$Z_2$}
\psfrag{Z2}[l]{$Z_1$} \psfrag{Z3}[l]{$Z_3$} \psfrag{Z4}[l]{$Z_4$}
\psfrag{T1}[l]{$T1$}\psfrag{T2}[l]{$T2$}
\psfrag{R1}[l]{$R2$}\psfrag{R2}[l]{$R1$}
\psfrag{D1}[l]{$D1$}\psfrag{D2}[l]{$D2$}
\psfrag{h1}[l]{$a$}\psfrag{h2}[l]{$1$}
\psfrag{h3}[l]{$1$}\psfrag{h4}[l]{$a$}
\psfrag{h5}[l]{$b$}\psfrag{h6}[l]{$1$}
\psfrag{h7}[l]{$1$}\psfrag{h8}[l]{$b$}
\scalefig{.50}\epsfbox{two-hop.eps}
\end{psfrags}
} \caption{\label{fig:transform}Two-hop interference network
transformation}
\end{figure}
Therefore, this strong interference two-hop network reduces to
case \ref{subsection:case1} where both hops are weak interference
channels. Using the Han-Kobayashi scheme in the first hop and
combining DPC and MAC in the second hop, and going through the
same derivation, we obtain the symmetric rates in the first hop
\begin{eqnarray} R_p^{(1)}&=&\gamma\left(\frac{a^2\alpha P_1}{1+\alpha
P_1}\right)\label{eq:6(1)}\\
R_c^{(1)}&=&\min\left\{\gamma\left(\frac{\bar{\alpha}P_1}{\sigma_1^2}\right),
\frac{1}{2}\gamma\left(\frac{(1+a^2)\bar{\alpha}P_1}{\sigma_1^2}\right)\right\}\label{eq:6(2)}
\end{eqnarray} where $\alpha\in [0,1]$ and $\sigma_1^2=1+(1+a^2)\alpha P_1$.
The symmetric rates in the second hop under DPC are \begin{eqnarray}
R_{p,DPC}^{(2)}&=&\gamma\left(\frac{b^2\beta P_2}{1+\beta P_2}\right)\label{eq:6(3)}\\
R_{c,DPC}^{(2)}&=&\frac{1}{2}\gamma\left(\frac{(b^2-1)^2\bar{\beta}^2P_2^2}{\sigma_2^4}+\frac{2(1+b^2)\bar{\beta}P_2}{\sigma_2^2}\right)\label{eq:6(4)}
\end{eqnarray} where $\beta\in [0,1]$ and $\sigma_2^2=1+(1+b^2)\beta P_2$.
The symmetric rates in the second hop under MAC are \begin{eqnarray}\nonumber
R_{p,MAC}^{(2)}\!\!\!\!&=&\!\!\!\!\max_{\alpha}\left\{\min\left[\gamma\left(\frac{\bar{\alpha}\beta
P_2}{\sigma_3^2}\right),
\frac{1}{2}\gamma\left(\frac{(1+b^2)\bar{\alpha}\beta
P_2}{\sigma_3^2}\right)\right]\right.\\
&&+\left.\gamma\left(\frac{b^2\alpha\beta P_2}{1+\alpha\beta P_2}\right)\right\}\label{eq:6(5)}\\
R_{c,MAC}^{(2)}\!\!\!\!&=&\!\!\!\!\frac{1}{2}\gamma\left(\frac{(1+b)^2\bar{\beta}P_2}{1+(1+b^2)\beta
P_2}\right)\label{eq:6(6)}
\end{eqnarray} where $\sigma_3^2=1+(1+b^2)\alpha\beta P_2$ and $\alpha, \beta \in
[0,1]$.
\begin{theorem}\label{thm:2}
The solution to the following optimization problem is achievable
for the two-hop network when $a>1$ and $b>1$:
\begin{eqnarray} R&=&\max_{\alpha,\beta\in [0,1]} R_p+R_c \label{eq:6}\\
&&\mbox{s.t.} (R_p,R_c)\in \mathcal{R}(R_p^{(1)},R_c^{(1)})\cap
\mathcal{R}(R_p^{(2)},R_c^{(2)})\end{eqnarray} where $R_p^{(1)}$ and $R_c^{(1)}$
are given in (\ref{eq:6(1)})-(\ref{eq:6(2)}).
$\mathcal{R}(R_p^{(2)},R_c^{(2)})$ is defined as the convex closure of
the union of $\mathcal{R}(R_{p,DPC}^{(2)}, R_{c,DPC}^{(2)})$ and
$\mathcal{R}(R_{p,MAC}^{(2)}, R_{c,MAC}^{(2)})$, where $R_{p,DPC}^{(2)}$
and $R_{c,DPC}^{(2)}$ are given in
(\ref{eq:6(3)})-(\ref{eq:6(4)}), and $R_{p,MAC}^{(2)}$ and
$R_{c,MAC}^{(2)}$ are given in (\ref{eq:6(5)})-(\ref{eq:6(6)}).
\end{theorem}
Note that when $\alpha=\beta=0$ and
$\mathcal{R}(R_p^{(2)},R_c^{(2)})$ is taken to be
$\mathcal{R}(R_{p,DPC}^{(2)}, R_{c,DPC}^{(2)})$, the rate $R$ defined in
(\ref{eq:6}) reduces to that of (\ref{eq:5}). Since
$\mathcal{R}(R_p^{(2)},R_c^{(2)})$ is always a superset of
$\mathcal{R}(R_{p,DPC}^{(2)}, R_{c,DPC}^{(2)})$, the achievable rate in
(\ref{eq:5}) never exceeds that in (\ref{eq:6}).
\subsection{$0<a<1, b>1$}\label{subsection:case3}
For the first hop, which is a weak interference channel, the
transmission strategy is the same as in case \ref{subsection:case1}:
the Han-Kobayashi scheme. Thus, the symmetric achievable rate is
$(R_p^{(1)}, R_c^{(1)})$ given in (\ref{eq:1})-(\ref{eq:2}).
For the second hop, we can still use DPC scheme, thus yielding
rates $(R_{p,DPC}^{(2)}, R_{c,DPC}^{(2)})$ given in
(\ref{eq:3})-(\ref{eq:4}). Now consider the MAC scheme. From the
standard result for the strong interference channel, the capacity is
achieved when both users' messages are decoded by both receivers,
as in the case of the compound MAC. Thus, the MAC scheme proposed
in section \ref{subsection:case1} should be modified by letting
both receivers decode all the messages, both private and common,
instead of further splitting the private message. As such, we
should set $\alpha=0$ in (\ref{eq:3(2)})-(\ref{eq:4(2)}). Also
noting that $b>1$, the symmetric achievable rates for the MAC
scheme become \begin{eqnarray} R_{p,MAC}^{(2)}&=&\min\left\{\gamma(\beta P_2),
\frac{1}{2}\gamma((1+b^2)\beta P_2)\right\}\label{eq:6(7)}\\
R_{c,MAC}^{(2)}&=&\frac{1}{2}\gamma\left(\frac{(1+b)^2\bar{\beta}P_2}{1+(1+b^2)\beta
P_2}\right)\label{eq:6(8)}
\end{eqnarray}
Therefore, for the case $0<a<1, b>1$, the symmetric achievable
rate for the two-hop network has the same form as that in Theorem
\ref{thm:1}, except that $R_{p,MAC}^{(2)}$ and $R_{c,MAC}^{(2)}$
are given in (\ref{eq:6(7)})-(\ref{eq:6(8)}).
\subsection{$a>1, 0<b<1$}
If we keep the original roles of the two relays, then in the first hop the
two relays should decode both users' messages, and in the second hop
we apply the DPC scheme for the weak interference channel. However,
similar to case \ref{subsection:case2}, it can be verified that
this scheme is easily outperformed if we switch the role of the
two relays. Consequently, the first hop becomes a weak
interference channel and the second hop becomes a strong
interference channel. We can directly apply the results from case
\ref{subsection:case3}, with only minor modifications: change the
channel gains $a$ and $b$ to $\frac{1}{a}$ and $\frac{1}{b}$,
respectively, change the variance of noises $Z_1$ and $Z_2$ to
$\frac{1}{a^2}$, and change the variance of noises $Z_3$ and $Z_4$
to $\frac{1}{b^2}$. Thus, the total symmetric rate of the two-hop
network takes the same form as that in Theorem \ref{thm:2}
except that $R_{p,MAC}^{(2)}$ and $R_{c,MAC}^{(2)}$ are given in
(\ref{eq:6(9)})-(\ref{eq:6(10)}). \begin{eqnarray}
R_{p,MAC}^{(2)}&=&\min\left\{\gamma(b^2\beta P_2),
\frac{1}{2}\gamma((1+b^2)\beta P_2)\right\}\label{eq:6(9)}\\
R_{c,MAC}^{(2)}&=&\frac{1}{2}\gamma\left(\frac{(1+b)^2\bar{\beta}P_2}{1+(1+b^2)\beta
P_2}\right)\label{eq:6(10)}
\end{eqnarray}
For the second hop, the DPC scheme and the MAC scheme are both
needed for all the parameter regimes. Neither scheme can dominate
the other.
From the previous analysis of the four parameter regimes, we have
the following theorem.
\begin{theorem}\label{thm:role_switching}
For the two hop interference network with the transmission scheme
of decode and forward relaying, if the first hop has weak
interference, one should apply the HK scheme directly; if the
first hop has strong interference, it is always favorable to
convert it into a weak interference channel by switching the roles
of the two relays, as in Fig. \ref{fig:transform}, and then apply
the HK scheme. In other words, with strong interference in the
first hop, rate splitting after role switching of the two relays
can always achieve a rate region no smaller than that achieved by
both relays decoding all the messages without role switching.
\end{theorem}
\begin{proof} If the two relays do not switch roles, for strong
interference in the first hop, the optimal scheme is for both
relays to decode all the messages of the two users. Then, the
optimal scheme for the second hop is to use DPC scheme as in the
MIMO broadcast channel. However, these schemes are special cases
of the transmission schemes if we switch the roles of the two
relays and apply the HK scheme to the first hop (simply by
allocating zero power to the private messages after rate
splitting). Therefore, role exchange for the two relay nodes is
always preferred for strong interference in the first hop.
\end{proof}
\subsection{Half Duplex}
If the transmission is conducted in the half duplex fashion, the
two relays cannot receive and transmit at the same time. In this
case, the transmission in the two hops cannot proceed
simultaneously. When transmitting in the first hop, the relays are
in the listening mode and the two users $T_1, T_2$ transmit their
messages with $N_1$ channel uses to the relays. In the second hop,
after decoding the received messages, the two relays $R_1, R_2$
transmit with $N_2$ channel uses to the two destinations $D_1,
D_2$. Thus, the transmission schemes discussed for the full duplex
case can be directly applied to the half duplex case, only with
the overall rates reduced due to the extra channel uses needed.
Following the schemes proposed for the full duplex mode, we always
do rate splitting and transmit private as well as common messages
in the first hop. Thus, both private and common messages should be
successfully delivered to the destinations in the second hop,
which yields: \begin{eqnarray} R_p^{(1)}N_1\leq R_p^{(2)}N_2\\
R_c^{(1)}N_1\leq R_c^{(2)}N_2 \end{eqnarray} The minimum number of channel uses needed
in the second hop is \begin{eqnarray}
N_2=N_1\cdot\max\left(\frac{R_p^{(1)}}{R_p^{(2)}},
\frac{R_c^{(1)}}{R_c^{(2)}}\right) \end{eqnarray} Therefore, the overall
rate achieved for the entire system is \begin{eqnarray}
R=\frac{(R_p^{(1)}+R_c^{(1)})N_1}{N_1+N_2}=\frac{R_p^{(1)}+R_c^{(1)}}{1+\max\left(\frac{R_p^{(1)}}{R_p^{(2)}},
\frac{R_c^{(1)}}{R_c^{(2)}}\right)}\label{eq:7} \end{eqnarray}
\begin{theorem}
$R^*=\max R$ is the achievable symmetric rate ($R_1=R_2=R^*$) in
the half duplex two-hop interference network, where $R$ is defined
in (\ref{eq:7}).
\end{theorem}
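The half-duplex rate (\ref{eq:7}) is easy to evaluate once the per-hop rate pairs are known; the sketch below does so for placeholder hop rates (the numerical values are not taken from the paper).
\begin{verbatim}
def half_duplex_rate(Rp1, Rc1, Rp2, Rc2, N1=1.0):
    # N2 channel uses are needed so that both the private and the
    # common layer of the first hop fit into the second hop
    N2 = N1 * max(Rp1 / Rp2, Rc1 / Rc2)
    return (Rp1 + Rc1) * N1 / (N1 + N2)

# Example: the common layer is the second-hop bottleneck here
print(half_duplex_rate(Rp1=0.6, Rc1=0.4, Rp2=0.9, Rc2=0.3))
\end{verbatim}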
\section{Amplify and Forward}
In this section, we focus on the transmission rates achieved by
amplify and forward relaying. We show that this scheme can
outperform decode and forward relaying under certain conditions.
For amplify and forward relaying, we still focus on the symmetric
channel model as defined in (\ref{eq:model5})-(\ref{eq:model8}).
\subsection{In-phase Relaying} \label{subsection:in-phase}
We first analyze the achievable rates for the so-called in-phase
transmission, where the two relays simply scale their received
signals with the same polarity. This is the usual amplify and
forward scheme and we emphasize in-phase here to contrast with the
out-of-phase approach described later. In the first hop, the
received signals at the
relays are \begin{eqnarray} Y_1&=&X_1+aX_2+Z_1\\
Y_2&=&aX_1+X_2+Z_2 \end{eqnarray} If they use the full power for amplifying
in
the second hop, we have \begin{eqnarray} X_3&=&cY_1\\
X_4&=&cY_2 \end{eqnarray} where $c=\sqrt{\frac{P_2}{(1+a^2)P_1+1}}$. Therefore \begin{eqnarray} Y_3&=&cY_1+bcY_2+Z_3\\
Y_4&=&bcY_1+cY_2+Z_4 \end{eqnarray} After scaling, we obtain \begin{eqnarray}
\!\!\!Y_3^{'}&=&(1+ab)X_1+(a+b)X_2+Z_1+bZ_2+Z_3/c\label{eq:model9}\\
\!\!\!Y_4^{'}&=&(a+b)X_1+(1+ab)X_2+bZ_1+Z_2+Z_4/c\label{eq:model10}
\end{eqnarray} Due to the fact that receivers $D_1$ and $D_2$ do not talk to
each other, we can modify the model in
(\ref{eq:model9})-(\ref{eq:model10}) to the following one without
affecting its capacity region: \begin{eqnarray}
Y_3&=&(1+ab)X_1+(a+b)X_2+Z_3^{'}\label{eq:model11}\\
Y_4&=&(a+b)X_1+(1+ab)X_2+Z_4^{'}\label{eq:model12} \end{eqnarray} where
$Z_3^{'}, Z_4^{'}\sim N(0, 1+b^2+1/c^2)$ are independent noise variables.
\subsubsection{Strong Interference}
It is clear that the model in
(\ref{eq:model11})-(\ref{eq:model12}) will be a strong
interference channel if $a+b>1+ab$, i.e., \begin{eqnarray}
\{a<1,b>1\}\hspace{.3cm} \mbox{or} \hspace{.3cm}\{a>1, b<1\} \end{eqnarray}
For this model, the optimal scheme is for the two receivers to
decode both users' messages, and the capacity region is known to be
\begin{eqnarray} R_1&\leq&\gamma\left(\frac{(1+ab)^2P_1}{1+b^2+1/c^2}\right)\\
R_2&\leq&\gamma\left(\frac{(1+ab)^2P_1}{1+b^2+1/c^2}\right)\\
R_1+R_2&\leq&\gamma\left(\frac{((1+ab)^2+(a+b)^2)P_1}{1+b^2+1/c^2}\right)
\end{eqnarray} Thus, the symmetric achievable rate $(R_1=R_2=R)$ is \begin{equation}
\begin{array}{ll}
R=\min\left\{\gamma\left(\frac{(1+ab)^2P_1}{1+b^2+1/c^2}\right),
\frac{1}{2}\gamma\left(\frac{((1+ab)^2+(a+b)^2)P_1}{1+b^2+1/c^2}\right)\right\}\end{array}\label{eq:9}
\end{equation}
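For concreteness, the sketch below evaluates (\ref{eq:9}) for in-phase amplify-and-forward under strong interference; it assumes the same $\gamma(x)=\frac{1}{2}\log_2(1+x)$ convention, and the channel parameters are illustrative.
\begin{verbatim}
import numpy as np

def gamma(x):
    return 0.5 * np.log2(1.0 + x)

def af_inphase_rate_strong(a, b, P1, P2):
    c_sq = P2 / ((1 + a**2) * P1 + 1)   # full-power amplification
    noise = 1 + b**2 + 1 / c_sq         # effective noise variance
    return min(gamma((1 + a*b)**2 * P1 / noise),
               0.5 * gamma(((1 + a*b)**2 + (a + b)**2) * P1 / noise))

# Strong-interference regime {a < 1, b > 1}: a + b > 1 + ab
print(af_inphase_rate_strong(a=0.5, b=2.0, P1=10.0, P2=10.0))
\end{verbatim}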
\subsubsection{Weak Interference}
On the other hand, if $a+b<1+ab$, i.e., \begin{eqnarray}
\{a>1,b>1\}\hspace{.3cm} \mbox{or} \hspace{.3cm}\{a<1, b<1\} \end{eqnarray}
the model (\ref{eq:model11})-(\ref{eq:model12}) becomes a weak
interference channel, for which the Han-Kobayashi scheme is the
best known scheme. Similar to the analysis in section
\ref{subsection:case1}, the symmetric private rate and common rate
are \begin{eqnarray} R_p&\leq& \gamma\left(\frac{(1+ab)^2\alpha
P_1}{(a+b)^2\alpha
P_1+b^2+1+1/c^2}\right)\label{eq:10}\\
R_c&\leq&\min\left\{\gamma\left(\frac{(a+b)^2\bar{\alpha}P_1}{\sigma_1^2}\right),\frac{1}{2}\gamma\left(\frac{\sigma_2^2}{\sigma_1^2}\right)\right\}
\label{eq:11}\end{eqnarray} where $\sigma_1^2=((1+ab)^2+(a+b)^2)\alpha
P_1+b^2+1+1/c^2$ and
$\sigma_2^2=((1+ab)^2+(a+b)^2)\bar{\alpha}P_1$. The symmetric rate
for the whole system is \begin{eqnarray} R=\max_{\alpha\in [0,1]}R_p+R_c. \end{eqnarray}
It is interesting to note that for the method of amplify and
forward relaying, the analysis also shows the four parameter
regimes can actually be divided into two categories, in the sense
of transmission and decoding schemes, where $(a<1, b<1)$ and
$(a>1, b>1)$ belong to one category, and $(a<1, b>1)$ and $(a>1,
b<1)$ belong to the other category. This coincides with the
analysis of the decode and forward relaying in the previous
section.
\subsection{Out-of-phase Relaying}
Besides in-phase relaying, the two relays can also purposely make
the relayed signals out of phase by exactly $180^\circ$, i.e., change
the sign of one relay's output. We show in this subsection that this
scheme can perform remarkably well under certain conditions.
Again, by using full power at the two relays and making the
relayed signals out of phase by $180^\circ$, we have \begin{eqnarray} X_3&=&-cY_1\\
X_4&=&cY_2 \end{eqnarray} where $c=\sqrt{\frac{P_2}{(1+a^2)P_1+1}}$. Therefore, \begin{eqnarray} Y_3&=&-cY_1+bcY_2+Z_3\\
Y_4&=&-bcY_1+cY_2+Z_4 \end{eqnarray} which, after scaling, is \begin{eqnarray}
\!\!\!Y_3^{'}&=&(ab-1)X_1+(b-a)X_2-Z_1+bZ_2+Z_3/c\label{eq:model13}\\
\!\!\!Y_4^{'}&=&(a-b)X_1-(ab-1)X_2-bZ_1+Z_2+Z_4/c\label{eq:model14}
\end{eqnarray}
Since $D_1$ and $D_2$ cannot talk to each other, we can modify the
model (\ref{eq:model13})-(\ref{eq:model14}) to the following model
with the same capacity region: \begin{eqnarray}
Y_3&=&(ab-1)X_1+(b-a)X_2+Z_3^{'}\label{eq:model15}\\
Y_4&=&(a-b)X_1+(1-ab)X_2+Z_4^{'}\label{eq:model16} \end{eqnarray} where
$Z_3^{'}, Z_4^{'}\sim N(0, 1+b^2+1/c^2)$ are independent noise variables.
\subsubsection{Strong Interference}
The model (\ref{eq:model15})-(\ref{eq:model16}) becomes a
strong interference channel if $|ab-1|<|b-a|$, i.e., \begin{eqnarray}
\{a<1,b>1\}\hspace{.3cm} \mbox{or} \hspace{.3cm}\{a>1, b<1\}.
\label{condition:1}\end{eqnarray} This is exactly the same condition as the
strong interference case in section \ref{subsection:in-phase}.
Similar to the analysis in section \ref{subsection:in-phase}, we can
express the symmetric rate for the strong interference case as
\begin{equation}
\begin{array}{ll}\label{eq:8}
R=\min\left\{\gamma\left(\frac{(1-ab)^2P_1}{1+b^2+1/c^2}\right),
\frac{1}{2}\gamma\left(\frac{((1-ab)^2+(a-b)^2)P_1}{1+b^2+1/c^2}\right)\right\}.
\end{array}\end{equation}
Obviously, the rate in (\ref{eq:8}) is less than that in
(\ref{eq:9}). Thus, for amplify and forward relaying, under
condition (\ref{condition:1}), we should employ in-phase relaying
at the two relays.
\subsubsection{Weak Interference}
When $|ab-1|>|b-a|$, the model
(\ref{eq:model15})-(\ref{eq:model16}) becomes a weak interference
channel, i.e., \begin{eqnarray} \{a>1,b>1\}\hspace{.3cm} \mbox{or}
\hspace{.3cm}\{a<1, b<1\} \end{eqnarray} which is also consistent with the
condition of the weak interference case in section
\ref{subsection:in-phase}. Using Han-Kobayashi's scheme, we get
the symmetric private rate and common rate \begin{eqnarray} R_p&\leq&
\gamma\left(\frac{(1-ab)^2\alpha P_1}{(a-b)^2\alpha
P_1+b^2+1+1/c^2}\right)\label{eq:12}\\
R_c&\leq&\min\left\{\gamma\left(\frac{(1-ab)^2\bar{\alpha}P_1}{\sigma_1^2}\right),\frac{1}{2}\gamma\left(\frac{\sigma_2^2}{\sigma_1^2}\right)\right\}
\label{eq:13}\end{eqnarray} where $\sigma_1^2=((1-ab)^2+(a-b)^2)\alpha
P_1+b^2+1+1/c^2$ and
$\sigma_2^2=((1-ab)^2+(a-b)^2)\bar{\alpha}P_1$.
Comparing the rates in (\ref{eq:12})-(\ref{eq:13}) with those in
(\ref{eq:10})-(\ref{eq:11}), it can be easily verified that when
$a=b$ and $ab\gg 1$ (or $ab\ll 1$), (\ref{eq:12})-(\ref{eq:13}) will
outperform (\ref{eq:10})-(\ref{eq:11}).
If we view this two-hop interference network as two cascaded water
pipes, with each hop as one pipe, it is natural to expect the total
throughput of the entire system to be bounded by the capacities of
both pipes (a min-cut argument), which is exactly the case for
decode-and-forward relaying. However, for amplify-and-forward
relaying, we show that this natural analogy is not valid, i.e., the
total throughput can be larger than the capacities of both
``pipes''.
If $a=b$, the model (\ref{eq:model15})-(\ref{eq:model16}) becomes
two parallel AWGN channels and the rates for both channels are the
same: \begin{eqnarray} R=\gamma\left(\frac{(1-a^2)^2P_1}{1+a^2+1/c^2}\right)
=\gamma\left(\frac{(1-a^2)^2P_1P_2}{(1+a^2)(P_1+P_2)+1}\right)\label{eq:14}
\end{eqnarray}
If $a=b>1$, according to Theorem \ref{thm:role_switching}, the
capacity of each of the two hops is always less than or equal to
that of the transformed channel where we switch the roles of the
two relays, thus converting the strong interference into weak
interference for both hops. Therefore, without loss of generality,
we only consider the case when $a=b<1$. For the interference
channel of the first hop, by
\cite{Shang&Kramer&Chen:09IT,Motahari&Khandani:09IT,Annapureddy&Veeravalli:09IT},
the channel has ``noisy interference'' when \begin{eqnarray} a(a^2P_1+1)\leq
\frac{1}{2}, \quad \mbox{i.e.,} \quad P_1\leq
\frac{1}{a^2}\left(\frac{1}{2a}-1\right)\label{eq:16} \end{eqnarray} Under
noisy interference, we know the sum rate capacity of the channel
\cite{Shang&Kramer&Chen:09IT,Motahari&Khandani:09IT,Annapureddy&Veeravalli:09IT},
which is achieved by treating the other user's signal as pure
noise. Thus, the corresponding symmetric capacity is \begin{eqnarray} C_1=
\gamma\left(\frac{P_1}{1+a^2P_1}\right)\label{eq:15} \end{eqnarray}
If $P_2=P_1$, the symmetric capacity of the second hop is also
$C_2=C_1=\gamma\left(\frac{P_1}{1+a^2P_1}\right)$. In order for
the rate (\ref{eq:14}) to exceed the capacity of both hops for
$P_1=P_2$, i.e., \begin{eqnarray}
\gamma\left(\frac{(1-a^2)^2P_1P_2}{(1+a^2)(P_1+P_2)+1}\right)>\gamma\left(\frac{P_1}{1+a^2P_1}\right)
\end{eqnarray} we need to satisfy \begin{eqnarray}
P_1>\frac{1+4a^2-a^4+\sqrt{(1+4a^2-a^4)^2+4a^2(1-a^2)^2}}{2a^2(1-a^2)^2}
\end{eqnarray} Combining (\ref{eq:16}), we get \begin{equation}
\begin{array}{ll}
\frac{1+4a^2-a^4+\sqrt{(1+4a^2-a^4)^2+4a^2(1-a^2)^2}}{2a^2(1-a^2)^2}<P_1<
\frac{1}{a^2}\left(\frac{1}{2a}-1\right)\label{eq:17}
\end{array} \end{equation}
We can easily check that when $a$ is close to 0, the lower bound
of (\ref{eq:17}) is $O(\frac{1}{a^2})$ and the upper bound of
(\ref{eq:17}) is $O(\frac{1}{a^3})$, which indicates that when $a$
is close to 0, such $P_1$ does exist. For example, when $a=0.15$,
the bound in (\ref{eq:17}) becomes $51.6<P_1<103.7$.
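The window in (\ref{eq:17}) is straightforward to verify numerically; the following sketch reproduces the bound quoted above for $a=0.15$.
\begin{verbatim}
import numpy as np

def p1_window(a):
    # Lower and upper bounds of the P1 window derived above,
    # valid for a = b < 1
    q = 1 + 4*a**2 - a**4
    lower = (q + np.sqrt(q**2 + 4*a**2*(1 - a**2)**2)) \
            / (2 * a**2 * (1 - a**2)**2)
    upper = (1 / a**2) * (1 / (2*a) - 1)
    return lower, upper

print(p1_window(0.15))   # approximately (51.6, 103.7)
\end{verbatim}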
The above example is for $a=b<1$. Similarly, for $a=b>1$, due to
the previous analysis that these two cases are essentially
identical (by switching the roles of the two relays), it can be
verified that when $a=b\gg 1$, the transmission rate for the whole
system can also exceed the capacity of each individual
interference channel. The details are omitted here.
Although the above results are obtained for $a=b$, we comment that
even when $a\neq b$, as long as the two gains are close, one can
still find parameter regimes for which the out-of-phase scheme is
favored, i.e., has a larger symmetric rate.
\section{Numerical Examples}
For both decode-and-forward relaying and amplify-and-forward
relaying, when the first hop has strong interference, i.e.,
$a>1$, it is always preferred to switch the roles of the two
relays and convert the channel into a weak interference channel.
Without loss of generality, we therefore focus only on the weak
interference case of the first hop, i.e., $a<1$. First, we compare
the effect of the two schemes in the second hop, namely the DPC
scheme and the MAC scheme, under different channel parameters for
decode-and-forward relaying.
\begin{figure}[htp]
\begin{tabular}{cc}
\leavevmode \epsfxsize=1.8in \epsfysize=1.3in
\epsfbox{DPC_MAC_1.eps}& \leavevmode \epsfxsize=1.8in
\epsfysize=1.3in \epsfbox{DPC_MAC_2.eps}\\
(a)&(b)\\
\leavevmode \epsfxsize=1.8in \epsfysize=1.3in \epsfbox{DPC_MAC_3.eps}\\
(c)
\end{tabular}
\caption{\label{fig:DPC_vs_MAC} Comparison of DPC scheme and MAC
scheme in the second hop for the decode-and-forward relaying.}
\end{figure}
Fig. \ref{fig:DPC_vs_MAC}(a) shows that when the interference
gain of the second hop $b$ is very small, the DPC scheme
dominates for $a\in [0,1]$, and the symmetric rate of the combined
DPC and MAC scheme coincides with that of the DPC scheme alone. The
difference between DPC and MAC becomes dramatic when $a>0.5$: in
this regime, the HK scheme produces a significant
amount of common information in the first hop, and the MAC scheme
requires the common information to be decoded by both receivers,
which hurts the total rate since $b$ is small in the
second hop. However, when $b$ gets larger, as shown in (b), the MAC
scheme beats DPC for $a<0.5$ but is outperformed by DPC
for $a>0.5$. For $a<0.5$, the HK scheme produces a significant
amount of private messages in the first hop, which
are treated as noise in the DPC scheme but partially
decoded in the MAC scheme, so MAC performs better. For $a>0.5$,
the common messages from the first hop dominate; since the DPC
scheme can cancel the interference caused by the other
user's common messages, it beats the MAC scheme, in which
the common messages must be decoded by both receivers, when $b$
is not strong enough $(b=0.8)$. Note that the combination of DPC
and MAC outperforms both individual schemes for $a>0.5$
because of the time-sharing effect of the two rate regions. When
$b$ is strong enough, as in (c), the MAC scheme far
outperforms DPC when $a$ is small, but is close to DPC when
$a$ gets larger.
Next, we show in Fig. \ref{fig:DF_vs_AF} the comparison of
decode-and-forward relaying and amplify-and-forward relaying (both
in-phase and $180^\circ$ out-of-phase) in the low SNR regime.
\begin{figure}[htp]
\begin{tabular}{cc}
\leavevmode \epsfxsize=1.8in \epsfysize=1.3in
\epsfbox{DF_AF_1.eps}& \leavevmode \epsfxsize=1.8in
\epsfysize=1.3in \epsfbox{DF_AF_2.eps}\\
(a)&(b)\\
\leavevmode \epsfxsize=1.8in \epsfysize=1.3in \epsfbox{DF_AF_3.eps}\\
(c)
\end{tabular}
\caption{\label{fig:DF_vs_AF} Comparison of decode-and-forward
relaying and amplify-and-forward relaying in low SNR regime}
\end{figure}
It can be seen that in the low SNR regime, when $b$ is small, the
amplify-and-forward relaying scheme (both in-phase and $180^\circ$
out-of-phase) is always outperformed by the
decode-and-forward scheme, as shown in (a) and (b). When $b$ gets
strong enough, as shown in (c), in-phase amplify-and-forward
relaying may outperform, though not by much, the decode-and-forward
scheme when $a$ is close to 1. In other words, in the low SNR
regime, the decode-and-forward scheme is preferred over the
amplify-and-forward scheme. In the high SNR regime, however, the
situation is different.
\begin{figure}[htp]
\begin{tabular}{cc}
\leavevmode \epsfxsize=1.8in \epsfysize=1.3in
\epsfbox{DF_AF_4.eps} & \leavevmode \epsfxsize=1.8in
\epsfysize=1.3in \epsfbox{DF_AF_5.eps}\\
(a)&(b)\\
\leavevmode \epsfxsize=1.8in \epsfysize=1.3in \epsfbox{DF_AF_6.eps}\\
(c)
\end{tabular}
\caption{\label{fig:DF_vs_AF2} Comparison of decode-and-forward
relaying and amplify-and-forward relaying in high SNR regime}
\end{figure}
As shown in Fig. \ref{fig:DF_vs_AF2}, in the high SNR regime, when
$b<1$, amplify-and-forward relaying with the $180^\circ$
out-of-phase scheme performs best when $a$ is close to $b$. This is
because when $a=b$, the channel becomes two parallel AWGN
channels, which gives the best performance in the high SNR
regime. Away from the peak at $a=b$, however, the $180^\circ$
out-of-phase amplify-and-forward relaying is still
the worst. When $b\geq 1$, since $a\in [0,1]$, the peak at $a=b$
no longer exists; thus, the performance of the out-of-phase
amplify-and-forward relaying becomes the worst for all values of
$a$. In this case, the decode-and-forward scheme remains the best
of all.
\section{Conclusion}
In this paper, we investigated and compared coding schemes for the
two hop interference network under various channel parameters
regimes. Our analysis shows that if the first hop has strong
interference, i.e., $a>1$, it is always beneficial to switch the
roles of the two relays so that the channel is converted to a weak
interference channel with interference gain of $1/a$, and the
strength of the second hop is also changed accordingly.
For the decode-and-forward relaying, the DPC scheme and MAC scheme
are both needed for the second hop. The combination of the two may
sometimes outperform both of the individual schemes due to the
time sharing effect. Generally however, DPC scheme dominates when
$b$ is small and MAC scheme dominates when $b$ is large.
The comparison of decode-and-forward relaying and
amplify-and-forward relaying showed that decode-and-forward
relaying always has better performance except when $a$ is close to
$b$ in the high SNR regime.
\bibliographystyle{C://localtexmf/caoyibib/IEEEbib}
\section{INTRODUCTION}
\begin{tikzpicture}[overlay]
\node [right, text width= 7.3in, align=left] at (-.5,15.7) {Preprints of the \textbf{IEEE Robotics and Automation Letters (RAL)} paper presented at the \\\textbf{2019 International Conference on Robotics and Automation (ICRA)}, Palais des congres de Montreal, Montreal, Canada,\\May 20-24, 2019. The final version of the article can be accessed at DOI: 10.1109/LRA.2018.2890198
};
\end{tikzpicture}The majority of the existing haptic devices providing kinesthetic feedback are world grounded~\cite{pacchierotti2017wearable}. They offer numerous advantages like high forces and torques, many degrees of freedom (DoF), and a wide dynamic range. These features allow such devices to provide more realistic haptic renderings compared to tactile haptic devices that only stimulate the skin.
However, the world-grounded kinesthetic haptic devices generally have a large footprint as well as limited portability and wearability, which limits their application and effectiveness for many virtual and real-world applications. World-grounded haptic devices also offer a limited range of motion to the user due to the scaling of weight and friction with increased size~\cite{sucho@2016}.
On the other hand, wearable haptic devices must be portable and typically offer a large range of motion. However, the majority of existing wearable haptic devices are tactile in nature and provide feedback in the form of vibration or skin deformation. They are commonly grounded against the user's fingertip or the nearby region~\cite{pacchierotti2017wearable}. Though tactile feedback is capable of providing directional cues and aiding users in completing various tasks, it may not be sufficient for certain tasks, such as suture knot-tying in robot-assisted surgery~\cite{okamura2009haptic} and manipulating objects in virtual reality~\cite{burdea1999keynote}. As demonstrated by Suchoski et al.~\cite{sucho@2016}, kinesthetic feedback conveys more sensitive haptic information for carrying out a grasp-and-lift task than skin deformation feedback (a form of tactile feedback). Similarly, the role of kinesthetic (force) feedback in surgical training and skill development looks very promising~\cite{okamura2009haptic}.
Kinesthetic haptic devices that are not world grounded but instead impart feedback by grounding forces against the user's hand (\emph{hand-grounded haptic devices}) provide a solution to the challenges of portability, wearability, and limited workspace in kinesthetic haptic devices. As noted by Pacchierotti et al.~\cite{pacchierotti2017wearable}, the primary advantage of wearable kinesthetic devices is their small form factor compared to world-grounded devices. Similarly, body-grounded kinesthetic devices, i.e., exoskeletons, could be another potential solution, but they generally encumber the user's movement and are difficult to don and doff.
However, designing these hand- or body-grounded devices is challenging due to the need for increased forces/torques and more degrees of freedom (DoF) in comparison to fingertip tactile devices. Additionally, the effects of hand-grounded kinesthetic feedback on users' perception and haptic experience are still unknown.
There exist numerous examples of hand-grounded kinesthetic haptic devices, including~\cite{jadhav2017soft,fontana2009mechanical,springer2002design,leonardis2015emg,nycz2016design,allotta2015development,ma2015rml,kim2016hapthimble,fu2011design,lambercy2013design,stergiopoulos2003design,lelieveld2006design,cempini2015powered,agarwal2015index,aiple2013pushing,tanaka2002wearable,polygerinos2015soft,stetten2011hand,bouzit2002rutgers,choi2018claw,choi2016wolverine,khurshid2014wearable}. These devices are either grounded against the back of the hand~\cite{jadhav2017soft,fontana2009mechanical,springer2002design,leonardis2015emg,nycz2016design,allotta2015development,ma2015rml,kim2016hapthimble,fu2011design,lambercy2013design,stergiopoulos2003design,lelieveld2006design,cempini2015powered,agarwal2015index,aiple2013pushing}, act like a glove ~\cite{tanaka2002wearable,polygerinos2015soft,stetten2011hand}, are grounded against the user's palm~\cite{bouzit2002rutgers,choi2018claw}, or are grounded against the user's fingers~\cite{choi2016wolverine,khurshid2014wearable}. To the best of our knowledge, there exists no device that can be grounded against different locations on the user's hand or a study that explains the effect of different grounding locations on the user's haptic perception and qualitative experience with kinesthetic (force) feedback.
\begin{figure}
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{.9\columnwidth}
{\input{./figs/ground_locations.pdf_tex}}
\caption{Three potential grounding locations on the user's hand: Back of the hand, Proximal Phalanx, and Middle Phalanx of the index finger. Arrows indicate directions of applied kinesthetic feedback on the fingertip: (A) along the finger axis and (B) in flexion-extension.}
\label{fig:hand}
\end{figure}
\begin{figure*}[ht]
\fontfamily{cmss}\selectfont
\centering
\begin{subfigure}{0.32\textwidth}
\def1\columnwidth{1\textwidth}
{\input{./figs/mode_a.pdf_tex}}
\vspace{-.6cm}
\caption{}
\label{fig:a}
\end{subfigure}
\centering
\begin{subfigure}{0.32\textwidth}
\def1\columnwidth{1\columnwidth}
{\input{./figs/mode_b.pdf_tex}}
\caption{}
\label{fig:b}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\def1\columnwidth{1\columnwidth}
{\input{./figs/mode_c.pdf_tex}}
\caption{}
\label{fig:c}
\end{subfigure}
\caption{Device design with three different grounding modes: (a) Grounding location is back of the hand, (b) Proximal phalanx is the grounding location, (c) Grounding locations is the Middle phalanx of index finger. In mode (b) and (c), the finger rings are rigidly attached with the base part.}\label{fig:modes}
\end{figure*}
We aim to study the effects of different hand-grounding locations on a user's haptic perception by providing kinesthetic feedback on the user's index finger tip. For this purpose, a wearable 2-DoF haptic device is designed that can provide kinesthetic feedback grounded at three different regions of the user's hand (Fig.~\ref{fig:hand}): (i) back of the hand, (ii) proximal phalanx of the index finger, and (iii) middle phalanx of the index finger. The light-weight and modular design provides kinesthetic feedback in two directions: (A) along the index finger axis, and (B) in flexion-extension.
We aim to understand how different hand-grounding locations affect the user's haptic performance and overall experience. To identify the significance and impact of different hand-grounding locations, two psychophysical experiments are carried out using \emph{the method of constant stimuli} \cite{gescheider1985psychophysics} --- one for each feedback direction. The participants were asked, in separate trials, to discriminate the stiffness of two virtual surfaces based on the kinesthetic feedback provided by the hand-grounded device. The Point of Subjective Equality (PSE) and Just Noticeable Difference (JND) were computed to measure the effective sensitivity and precision of the participants' perception of stiffness for each hand-grounding location, in both feedback directions. The PSE gives insight about the accuracy of the applied/perceived feedback, as it represents the point where the comparison stimulus (stiffness) is perceived by the user as identical to the standard stimulus. JND indicates the resolving power of a user and is defined as the minimum change in the stimulus value required to cause a perceptible increase in the sensation~\cite{gescheider1985psychophysics}.
The results show that the choice of grounding location has a profound impact on the user's haptic perception (measured through the metrics described above) and experience (based on user ratings). These findings provide important insights for the design of next-generation kinesthetic feedback devices, particularly in terms of the grounding of forces, to achieve compelling and natural kinesthetic haptic interaction in real-world haptic and robotic applications. For example, using these findings we can now design hand-grounded wearable kinesthetic devices with appropriate grounding to offer superior haptic performance and user experience. As hand-grounded devices offer a comparatively larger operating range and a smaller form factor than their world-grounded counterparts, knowledge about the choice of hand-grounding location may help increase the use of wearable kinesthetic devices in the fields of haptics and robot teleoperation. The contribution of this work is the design of a novel wearable kinesthetic device and study results that further the understanding of the role played by different hand-grounding locations in user stiffness perception.
\section{DEVICE DESIGN \& CONTROL}
\subsection{Design}
The device has a base (Fig.~\ref{fig:device_cad}) that can be tied to the back of the user's hand using a hook-and-loop fastener.
It has two rings (A and B) which are fitted to the proximal and middle phalanxes of the index finger. The fingertip cap is connected to actuators A and B through two cables, which route through the passage holes on rings A and B, as shown in Fig.~\ref{fig:device_cad}. When both actuators A and B move in the same direction (clockwise or anti-clockwise), a flexion or extension movement at the finger is produced. When both actuators move in opposite directions, a pull force is generated along the finger axis.
To provide hand-grounded kinesthetic feedback at the fingertip, a number of grounding locations can be used. Fig.~\ref{fig:hand} shows the three grounding locations considered in this case: the back of the hand, proximal phalanx of the index finger, and middle phalanx region of the index finger. Another potential location, the palm region, was rejected because such an arrangement may affect the user's ability to open/close the hand and fingers.
\begin{figure}
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{.8\columnwidth}
{\input{./figs/back_of_hand_illustration.pdf_tex}}
\caption{Design: The base is tied against the back of the hand. When tendon cables are pulled/released by actuators A and B, the fingertip cap provides kinesthetic feedback along the finger axis and/or in flexion-extension.}
\label{fig:device_cad}
\end{figure}
\begin{figure}
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{.75\columnwidth}
{\input{./figs/kinematics.pdf_tex}}
\caption{A simplified representation of the device's mechanism as a 2-D piece-wise constant-curvature tendon-driven manipulator. Tendon lengths ($l_a, l_b$), their respective distance from tip center-point ($r_a, r_b$), and the arc parameters: length ($l$) and radius ($r$), are used to determine the tip position and finger configuration in the $x-z$ plane.}
\label{fig:simp_rep}
\end{figure}
To achieve different groundings, the device has three different modes. In mode A (Fig.~\ref{fig:modes}(a)), the back of the hand acts as the grounding location. In mode B (Fig.~\ref{fig:modes}(b)), the base is physically connected to ring A at the proximal phalanx, providing grounding at this region. In mode C (Fig.~\ref{fig:modes}(c)), the base is rigidly connected with both rings to provide grounding at the middle phalanx region. The different device modes actuate different joints of the index finger in the flexion-extension direction. For example, in mode A, the torque is applied at all three joints (MP1, PIP, and DIP). In mode B, only the PIP and DIP joints are actuated. In mode C, the torque is applied only at the DIP joint.
Based on its kinematic design and actuator specifications, the device can apply, in different modes, a maximum force of 28.9 N along the finger axis and a torque in the range of 80 to 300 N-mm at the fingertip. It is driven by two Faulhaber 0615 4,5S DC-micromotors with 256:1 gearboxes, and 50-counts-per-revolution optical encoders are used for position sensing. The device prototypes with different grounding modes (Fig.~\ref{fig:prototypes}) weigh 31, 43, and 49 grams, respectively.
\subsection{Kinematics}
The device renders forces on the user's index finger by controlling the tendon lengths. To calculate the position and configuration of the finger, we use a robot-independent kinematic mapping between the actuator space and the task space. The obtained homogeneous transformation remains identical for all three grounding modes of our device. It is assumed that the device's tendons, when fitted to the user's index finger, exhibit a continuum-curve shape. The geometry of this curve allows determination of the tip position and configuration of the finger. Fig.~\ref{fig:simp_rep} shows a simplified representation of the haptic device in such a scheme.
\begin{figure}
\vspace{.2cm}
\begin{subfigure}{.32\columnwidth}
\centering
\includegraphics[width=1\textwidth]{a.png}
\caption{}
\label{fig:device_a}
\end{subfigure}
\begin{subfigure}{.32\columnwidth}
\centering
\includegraphics[width=1\textwidth]{b.png}
\caption{}
\label{fig:device_b}
\end{subfigure}
\begin{subfigure}{.32\columnwidth}
\centering
\includegraphics[width=1\textwidth]{c.png}
\caption{}
\label{fig:device_c}
\end{subfigure}
\caption{Modular versions of the wearable kinesthetic device with grounding locations: (a) Back of the hand, (b) Proximal Phalanx, and (c) Middle Phalanx}
\label{fig:prototypes}
\end{figure}
\begin{figure*}[ht]
\vspace{0.2cm}
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{.9\textwidth}
{\input{./figs/control_loop.pdf_tex}}
\caption{Block diagram of the controller used for rendering force on the user's fingertip. The hand position is tracked by a 3-DoF device, and the interaction forces are calculated as the desired force. Forces applied by the hand-grounded device end-effector on the fingertip through tendon displacements are regulated using a proportional-derivative (PD) controller.}
\label{fig:control}
\end{figure*}
As the haptic device aims to provide kinesthetic feedback in two directions (along the finger axis, and in the finger's flexion-extension direction), the kinematic mapping between the inertial frame ($O$) and the fingertip ($p(x,z)$) is described in the 2-D ($x-z$) plane. Tendon lengths ($l_a, l_b$), their respective distances from the tip center-point ($r_a, r_b$), and the arc parameters, namely length ($l$) and radius ($r$), are used to determine the tip position and finger configuration in the $x-z$ plane. The position of the fingertip can be expressed as
\begin{align}
p(x,z) = \left[r(1 - \cos\theta), r\sin\theta\right]^T.
\end{align}
The homogeneous transformation for tendons $a$ and $b$, from $O$ to $p_a(x,z)$ and $p_b(x,z)$ respectively, is
\begin{align}
T_j &= \left[\begin{matrix}\operatorname{cos}\left(\theta\right) & 0 & \operatorname{sin}\left(\theta\right) & p_{xj}\\0 & 1 & 0 & 0\\- \operatorname{sin}\left(\theta\right) & 0 & \operatorname{cos}\left(\theta\right) & p_{zj}\\0 & 0 & 0 & 1\end{matrix}\right], \quad (j= a, b),
\end{align}
\begin{align}
p_{xj} &= \left(\frac{l}{\theta} \pm r_{j}\right) \left(1 - \operatorname{cos}\theta\right), \quad (j= a, b), \\
p_{zj} &= \left(\frac{l}{\theta} \pm r_{j}\right) \operatorname{sin}\theta, \quad (j= a, b).
\end{align}
The displacements of tendons $a$ and $b$ can be expressed in terms of arc radius and angles as
\begin{align}
s_a = (r + r_a)(\theta_o - \theta_t), \label{eq:s_a} \\
s_b = (r - r_b)(\theta_o - \theta_t). \label{eq:s_b}
\end{align}
where $\theta_o$ is the initial angle and $\theta_t$ represents the angle at time $t$.
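A minimal Python sketch of this forward mapping, combining the tip-position expression above with (\ref{eq:s_a})-(\ref{eq:s_b}), is given below; the lengths and angles used in the example are arbitrary placeholders, not parameters of the physical device.
\begin{verbatim}
import numpy as np

def fingertip_position(l, theta):
    # Tip of a constant-curvature arc of length l bent by theta
    # (radians); the arc radius is r = l / theta
    r = l / theta
    return np.array([r * (1 - np.cos(theta)), r * np.sin(theta)])

def tendon_displacements(r, r_a, r_b, theta_0, theta_t):
    # Tendon displacement equations above: tendons offset by
    # r_a, r_b from the centerline as the bend angle changes
    s_a = (r + r_a) * (theta_0 - theta_t)
    s_b = (r - r_b) * (theta_0 - theta_t)
    return s_a, s_b

print(fingertip_position(l=0.08, theta=0.6))
print(tendon_displacements(r=0.13, r_a=0.008, r_b=0.008,
                           theta_0=0.2, theta_t=0.6))
\end{verbatim}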
\subsection{Control System}
Using the tendon displacements (\ref{eq:s_a}) and (\ref{eq:s_b}), a separate control is implemented for each of the actuators to apply force and control the user's finger configuration. Fig.~\ref{fig:control} shows the block diagram of the control in the virtual reality setup. The control of the 2-DoF kinesthetic device was achieved using a Nucleo-F446ZE board by STMicroelectronics\texttrademark{} connected to a desktop computer via USB. The microcontroller reads the encoders of the motors and receives the desired force from the virtual environment over the PC's serial port. Using this information, it calculates the desired torque output of the motors. The control loop runs at a frequency of approximately 1 kHz. The CHAI3D framework was used to render the 3-D virtual reality environment \cite{conti2005chai3d}, with the god-object algorithm \cite{zilles1995god-object} used to calculate the desired interaction force. The user can move the cursor (red sphere in Fig.~\ref{fig:studies}(a) \& (b)) in 3-D space. Because the wearable device has only 2 DoF, the third dimension does not provide any force feedback to the user; given the nature of the tasks in the user studies, the third dimension ($y$-axis) is not required for displaying force feedback.
\begin{figure}
\begin{subfigure}{.49\columnwidth}
\fontsize{8pt}{6}\selectfont
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{1\columnwidth}
{\input{./figs/study_a.pdf_tex}}
\caption{}
\label{fig:study_b}
\end{subfigure}
\begin{subfigure}{.49\columnwidth}
\fontsize{8pt}{6}\selectfont
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{1\columnwidth}
{\input{./figs/study_b.pdf_tex}}
\caption{}
\label{fig:study_a}
\end{subfigure}
\caption{Experimental setup: A user interacts with the virtual environment through a 3-DoF hand position tracking device (Phantom Omni). The new hand-grounded haptic device provides kinesthetic feedback, and a visual display shows the virtual environment. Participants receive force feedback by touching the two virtual surfaces, one carrying the reference stiffness and the other the comparison stiffness, presented in a random order. Participants are required to discriminate the stiffness based on the kinesthetic feedback and record their choice through key presses. (a) Study A (the feedback is rendered along the finger axis); (b) Study B (the feedback is rendered along flexion-extension movements).}
\label{fig:studies}
\vspace{-.3cm}
\end{figure}
The user's hand position ($\vec{x_u}$) is tracked using a Phantom Omni haptic device (set up to provide no haptic feedback, just position tracking) from SensAble Technologies, Inc. and sent to the virtual environment as $\vec{x_d}$. The resulting interaction force command from the virtual environment ($\vec{F_d}$) is calculated on the computer and then fed to the hand-grounded haptic device. The device uses a mapping between the force magnitude and the device tip position (force-position translator) to output the desired tip position to the PD controller, which, using the encoders mounted on each motor shaft, estimates the current tip position and configuration and outputs the appropriate tendon displacements ($\vec{s_c}(a,b)$) to the device's motors. As the tendons shorten, the user's fingertip is moved to the desired position, allowing him/her to feel a force.
The PD controller error and the control law are
\begin{align}
e(t) &= y(t) - r(t), \label{eq:error}\\
U &= K_P e + K_D \frac{d}{dt} e, \label{eq:law}
\end{align}
where $K_P$ is the proportional gain and $K_D$ the derivative gain; $e(t)$ is the position error, $y(t)$ is the motor shaft position, and $r(t)$ is the reference position calculated from the desired tendon displacements ($\vec{s_d}(a,b)$ in Fig.~\ref{fig:control}).
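A discrete-time sketch of this law is shown below; the gains and the sample time are placeholders rather than the values tuned on the device (the firmware itself runs on the Nucleo board).
\begin{verbatim}
class PDController:
    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_error = 0.0

    def update(self, shaft_position, reference_position):
        error = shaft_position - reference_position   # e = y - r
        d_error = (error - self.prev_error) / self.dt
        self.prev_error = error
        # PD law above: U = Kp*e + Kd*de/dt
        return self.kp * error + self.kd * d_error

# Loop period of ~1 ms, matching the ~1 kHz rate in the text
pd = PDController(kp=2.0, kd=0.05, dt=1e-3)
u = pd.update(shaft_position=0.12, reference_position=0.10)
\end{verbatim}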
\section{USER STUDY}
To evaluate the effects of the three different hand-grounding locations on the user's haptic perception and experience, we conducted two separate user studies (Study A \& Study B); one for each haptic feedback DoF provided by the hand-grounded device. In Study A, the kinesthetic feedback is provided along the axis of the user's index finger. In Study B, the feedback is provided along the flexion-extension movement of the finger. The purpose of evaluating each feedback DoF separately is to develop a clear understanding of the relation between the hand-grounding location and the corresponding feedback direction.
\subsection{Study A: Feedback Along the Finger Axis}
\subsubsection{Experimental Setup}
Thirteen subjects (9 males and 4 females) participated in this study after giving informed consent, under a protocol approved by the Stanford University Institutional Review Board. The metrics were the PSE and JND of stiffness perception while the hand-grounded device was set up for each of the three grounding locations (back of the hand, proximal phalanx, and middle phalanx of the index finger). The participants used the hand-grounded haptic device on their right hand and performed tasks in a virtual environment while holding the stylus of the Phantom Omni device in the same hand (Fig.~\ref{fig:studies}(a)). A pilot study was conducted to determine a convenient posture for holding the Phantom Omni stylus while the kinesthetic device is donned on the index finger. In the user studies, the participants were instructed to hold the Phantom Omni device in that predefined way to ensure that its stylus does not come into contact with the wearable kinesthetic device.
\subsubsection{Experimental Procedure}
Each participant used the haptic device configured for each of the three hand-grounding locations in a predetermined order to minimize the effect of selection bias. As mentioned earlier, a Phantom Omni device was used to track the user hand position during the experiments as shown in Fig.~\ref{fig:studies}(a). The Phantom Omni only determined the user hand position, while the kinesthetic feedback was rendered by the hand-grounded haptic device.
Participants wore ear protection to suppress the motor noise and avoid sound cues. After the experiments were completed, the participants rated the realism of the haptic feedback and the comfort/ease-of-use for all three devices with different hand-grounding locations on a scale of 1-7: 1 meaning `not real' and 7 meaning `real', or 1 for `not comfortable' and 7 for `comfortable.' Realism was judged with respect to the feeling of pressing against a very smooth real surface using the right hand's index finger.
\subsubsection{Method}
We conducted a \emph{two-alternative forced-choice} experiment following the \emph{method of constant stimuli}~\cite{gescheider1985psychophysics}. Subjects were asked to freely explore and press against the two virtual surfaces shown on the virtual environment display and state which surface felt stiffer. In each trial, one surface presented a reference stiffness value while the other presented a comparison stiffness value. The reference stiffness value was selected to be 100.0 N/m.
The reference value was included as one comparison value, and the other comparison values were then chosen to be equally spaced: 10, 28, 46, 64, 82, 100, 118, 136, 154, 172, and 190 N/m.
Each of the eleven comparison values was presented ten times in random order for each of the three hand-grounded haptic devices over the course of one study. Each participant completed a total of 110 trials for each grounding mode (330 trials for the entire study). The participants used the kinesthetic feedback from the hand-grounded device to explore the virtual surfaces until a decision was made; they recorded their responses by pressing designated keyboard keys, corresponding to which virtual surface they thought felt stiffer. Subject responses and force/torque data were recorded after every trial. There was no time limit for each trial, and participants were asked to make their best guess if the decision seemed too difficult. Subjects were given an optional two-minute break after every fifty-five trials, and a ten-minute break after the completion of each grounding mode.
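The trial structure above lends itself to a short sketch; the left/right surface assignment below is an assumed detail for illustration, since only the randomized presentation order is specified in the text.
\begin{verbatim}
import random

comparisons = [10, 28, 46, 64, 82, 100, 118, 136, 154, 172, 190]

# Ten presentations of each value -> 110 trials per grounding mode
trials = [k for k in comparisons for _ in range(10)]
random.shuffle(trials)

# Hypothetical left/right placement of (reference, comparison)
order = [(100, k) if random.random() < 0.5 else (k, 100)
         for k in trials]
\end{verbatim}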
\subsection{Study B: Feedback in the Flexion-Extension Direction}
In Study B, the kinesthetic feedback was rendered along the flexion-extension direction of the index finger. A total of 14 subjects (9 males and 5 females) participated, and the study was approved by the Stanford University Institutional Review Board. The procedure was the same as in Study A. However, in Study B the virtual surfaces were presented lying in the horizontal plane (Fig.~\ref{fig:studies}(b)) to make the haptic feedback intuitive for the user.
\begin{figure*}
\renewcommand\thesubfigure{\roman{subfigure}}
\begin{subfigure}{.32\textwidth}
\fontfamily{cmss}\selectfont
\centering
\includegraphics[width=1\columnwidth]{aa.png}
\caption{Back of the hand}
\label{fig:curve_a}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\fontfamily{cmss}\selectfont
\centering
\includegraphics[width=1\columnwidth]{ba.png}
\caption{Proximal Phalanx}
\label{fig:curve_b}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\fontfamily{cmss}\selectfont
\centering
\includegraphics[width=1\columnwidth]{ca.png}
\caption{Middle Phalanx}
\label{fig:curve_c}
\end{subfigure}
\caption{Example psychophysical data and psychometric function fits for a representative subject in Study A, with grounding locations: (i) back of the hand, (ii) proximal phalanx, and (iii) middle phalanx of the index finger. Each data point represents the ``yes'' proportion of the user responses over 10 trials. The user identified the difference between the reference and comparison stimulus values correctly 90\% of the time for grounding location (i), 94\% of the time for location (ii), and 98\% of the time for location (iii).}
\label{fig:curves}
\end{figure*}
\begin{table*}
\vspace{.3cm}
\caption{Results of the two psychophysical experiments for stiffness discrimination. In Study A, the hand-grounded haptic device provided feedback along the axis of the finger with three different grounding locations. In Study B, the feedback direction was the flexion-extension movement of the index finger.}
\begin{tabularx}{\linewidth}{|c|r|XX|XX|XX|}
\hline
\multicolumn{2}{|c|}{Grounding Location} & \multicolumn{2}{c}{Back of the Hand} & \multicolumn{2}{c}{Proximal Phalanx} & \multicolumn{2}{c|}{Mid Phalanx} \\
\hline
\hline
& Subject No.& PSE (N/m) & JND (N/m) & PSE (N/m) & JND (N/m) & PSE (N/m) & JND (N/m) \\
\multirow{14}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1cm}{\centering Study A}}} & 1 & 154.42 & 57.15 & 150.74 & 47.35 & 120.34 & 41.32 \\
& 2 & 111.17 & 9.86 & 107.58 & 17.44 & 103.37 & 6.2 \\
& 3 & 139.75 & 48.5 & 129.95 & 34.27 & 81.3 & 7.32 \\
& 4 & 87.56 & 6.07 & 92.95 & 5.72 & 102.27 & 20.24 \\
& 5 & 98.62 & 32.37 & 85.2 & 6.79 & 114.85 & 19.04 \\
& 6 & 113.23 & 13.08 & 104.68 & 25.28 & 100.74 & 20.1 \\
& 7 & 99.6 & 13.21 & 108.99 & 7.97 & 94.68 & 9.48 \\
& 8 & 118.75 & 46.88 & 128 & 31.13 & 109.11 & 38.95 \\
& 9 & 115.5 & 20.23 & 103.4 & 10.16 & 105.51 & 19.72 \\
& 10 & 114.64 & 12.93 & 103.46 & 12.15 & 116.09 & 19 \\
& 11 & 85.87 & 7.32 & 108.22 & 12.65 & 117.25 & 17.41 \\
& 12 & 92.94 & 6.66 & 114.14 & 22.17 & 91.54 & 15.65 \\
\cline{2-8}
& Mean & 111.004 & 22.855 & 111.442 & 19.423 & 104.754 & 19.536 \\
& Std. Dev. & 19.581 & 17.677 & 16.843 & 12.404 & 11.178 & 10.411 \\
\hline
\hline
\multirow{14}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1cm}{\centering Study B}}} & 1 & 96.06 & 0.17 & 101.2 & 12.26 & 103.8 & 13.75 \\
& 2 & 92.71 & 13.51 & 104.59 & 13.02 & 92.79 & 8.64 \\
& 3 & 125.72 & 40.06 & 94.9 & 28.36 & 121.88 & 45.98 \\
& 4 & 104.87 & 6.41 & 98.32 & 14.36 & 87.34 & 6.75 \\
& 5 & 115.52 & 41.74 & 114.25 & 40.84 & 102.2 & 20 \\
& 6 & 122.16 & 33.52 & 90.2 & 16.44 & 119.91 & 16.88 \\
& 7 & 121.54 & 20.98 & 116.83 & 36.64 & 125.8 & 64.77 \\
& 8 & 105.76 & 10.69 & 108.89 & 11.67 & 104.6 & 26.4 \\
& 9 & 98.85 & 25.9 & 100 & 16.83 & 117.3 & 25.28 \\
& 10 & 99.45 & 41.48 & 95.07 & 22.3 & 104.98 & 24.18 \\
& 11 & 138.96 & 41.62 & 127 & 30.4 & 122.64 & 24.03 \\
& 12 & 113.34 & 38.78 & 95.35 & 45.78 & 120.79 & 60.43 \\
\cline{2-8}
& Mean & 111.245 & 26.238 & 103.883 & 24.075 & 110.336 & 28.091 \\
& Std. Dev. & 13.426 & 14.761 & 10.435 & 11.520 & 12.181 & 18.222 \\
\hline
\end{tabularx}
\end{table*}
\section{RESULTS \& DISCUSSION}
For both user studies, we determined the number of times each participant responded that the comparison value of stiffness was greater than the reference stiffness value. A psychometric function was then fit for each participant's response data to plot a psychometric curve, using the python-psignifit 4 library (https://github.com/wichmann-lab/python-psignifit). Data from twenty-four out of the twenty-seven subjects fit sufficiently to psychometric functions and the mean JNDs and PSEs for both experiments were determined. Example plots for a representative subject are shown in Fig.~\ref{fig:curves}. Three relevant values: the PSE, the stimulus value corresponding to a proportion of 0.25 ($J_{25}$), and the stimulus value corresponding to a proportion of 0.75 ($J_{75}$) were determined. The JND is defined as the mean of the differences between the PSE and the two J values $J_{25}$ and $J_{75}$:
\begin{align}
JND = \frac{(PSE - J_{25}) + (J_{75} - PSE)}{2}.
\end{align}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{pse.pdf}
\caption{Point of Subjective Equality (PSE) for both feedback DoFs (Study A and B) against each of the three considered hand-grounding locations. Error bars indicate the standard deviation.}
\label{fig:pse}
\end{figure}
\begin{figure}
\vspace{-.3cm}
\centering
\includegraphics[width=1\columnwidth]{jnd.pdf}
\caption{Just Noticeable Differences (JNDs) for both feedback DoFs (Study A and B) against each considered hand-grounding locations.
}
\label{fig:jnd}
\vspace{-.3cm}
\end{figure}
The PSE and JND results of the psychophysical experiments for both studies are summarized in Table 1. Because these studies use a single reference force, the Weber Fractions (WFs) are simply the JNDs scaled by the reference value. Therefore, we do not report WF separately.
In Study A, the best average PSE (closer to the reference value) for stiffness perception among all three grounding locations is found for the middle phalanx location of the index finger (104.75 N/m), shown in Fig.~\ref{fig:pse}. This indicates that the grounding location closer to the fingertip helps users to perceive the stiffness more accurately. This is also supported by the user ratings for the realism of kinesthetic feedback, as shown in Fig.~\ref{fig:ratings}. The smallest average JND was found for grounding at proximal phalanx (19.42 N/m), which is closely followed by average JND values for grounding location at the proximal phalanx (19.54 N/m). Like the PSE, the average JND showed largest value for back of the hand grounding location (see Fig.~\ref{fig:jnd}). This indicates that the proximal and middle phalanx are preferable locations, in the given order, to have a more realistic and accurate feedback perception. However, the user ratings for the comfort and ease-of-use indicate that the back of the hand is a more desirable grounding location.
In Study B, the best average PSE (closer to the reference value) for stiffness perception among all three grounding locations is found in the grounding at the proximal phalanx of the index finger (103.88 N/m), shown in Fig.~\ref{fig:pse}. This grounding location also results in the smallest average JND value (24.07 N/m) among all three grounding locations. The user ratings for kinesthetic feedback realism and the comfort/ease of use, as show in Fig.~\ref{fig:ratings}, also rate this location as the best to impart most realistic and comfortable haptic experience. The second best location in terms of average JND value is the back of the hand. This holds for the feedback realism ratings as well. The grounding location with least realistic feedback ratings and largest average JND (28.09 N/m) was the proximal phalanx location.
\begin{figure}
\centering
\vspace{-0.65cm}
\includegraphics[width=1\columnwidth]{realism_comfort.png}
\caption{Mean user ratings for the realism of feedback and comfort/ease-of-use against each of the three hand-grounding locations. Error bars indicate standard deviations.}
\label{fig:ratings}
\end{figure}
If we compare the average JND values across both studies, the values for feedback along the finger-axis (Study A) are significantly smaller than that of the feedback along flexion-extension direction (study B). This indicates that the haptic device was able to provide better haptic feedback in case of Study A, i.e. along the axis of the index-finger. The reason for this probably relates to the simpler nature of this feedback direction where the finger configuration remains unchanged during all modes. However, the realism and comfort ratings show a distinct pattern; realism is higher for Study B (kinesthetic feedback along flexion-extension) when the grounding locations are the back of the hand and proximal phalanx. The realism in case of Study A is higher than that of the B when grounding location is middle phalanx. This again depends on the different nature of the second feedback DoF, where the finger configuration has to change in order to render a torque at the finer joints. The use of the Phantom Omni for tracking may introduce some passive forces that introduce variance in the study. Despite this, we observed significant performance differences among the studied grounding locations.
On the other hand, the comfort/ease-of-use ratings are higher for Study A than B, when the grounding locations are the back of the hand and the middle phalanx, respectively. The feedback in flexion-extension movement (Study B) has shown higher comfort ratings than for the finger-axis direction (Study A) when grounding is set as the proximal phalanx region. The highest comfort rating among all grounding locations across both studies is given to the proximal phalanx, and that is for the feedback along the flexion-extension movement. Similarly, the highest comfort rating is given to the same grounding location, i.e., proximal phalanx, across both studies, and that too is for the feedback along flexion-extension direction.
\section{CONCLUSION}
A novel hand-grounded kinesthetic feedback device was created for studying the effect of different grounding locations on the user's haptic experience. The device can provide kinesthetic feedback along the user's index finger, and in its flexion-extension movement direction. Two psychophysical experiments -- one for each feedback DoF -- were conducted to evaluate the user's haptic performance and experience. It is shown that the choice of grounding-location in wearable haptic devices has significant impact over the user haptic perception of stiffness. The realism of the haptic feedback increases, while the comfort level decrease, as the grounding location moves closer to the fingertip. The relationship between the grounding-location and user haptic perception is similar in both feedback directions. If the design objective is to achieve maximum comfort, feedback realism, and best haptic perception in both DoFs simultaneously, it is recommended to have grounding at the proximal phalanx region of the finger.
These findings about the choice and impact of different hand-grounding locations give important insights for designing next-generation wearable kinesthetic devices, and to have better performance in a wide range of applications, such as virtual reality and robot teleoperation.
In the future, we plan to conduct further experiments to explore the effects of these hand-grounding locations when the kinesthetic feedback is applied to both DoFs simultaneously.
\bibliographystyle{IEEEtran}
\section{INTRODUCTION}
\begin{tikzpicture}[overlay]
\node [right, text width= 7.3in, align=left] at (-.5,15.7) {Preprints of the \textbf{IEEE Robotics and Automation Letters (RAL)} paper presented at the \\\textbf{2019 International Conference on Robotics and Automation (ICRA)}, Palais des congres de Montreal, Montreal, Canada,\\May 20-24, 2019. The final version of the article can be accessed at DOI: 10.1109/LRA.2018.2890198
};
\end{tikzpicture}The majority of the existing haptic devices providing kinesthetic feedback are world grounded~\cite{pacchierotti2017wearable}. They offer numerous advantages like high forces and torques, many degrees of freedom (DoF), and a wide dynamic range. These features allow such devices to provide more realistic haptic renderings compared to tactile haptic devices that only stimulate the skin.
However, the world-grounded kinesthetic haptic devices generally have a large footprint as well as limited portability and wearability, which limits their application and effectiveness for many virtual and real-world applications. World-grounded haptic devices also offer a limited range of motion to the user due to the scaling of weight and friction with increased size~\cite{sucho@2016}.
On the other hand, wearable haptic devices must be portable and typically offer a large range of motion. However, the majority of existing wearable haptic devices are tactile in nature and provide feedback in the form of vibration or skin deformation. They are commonly grounded against the user's fingertip or the nearby region~\cite{pacchierotti2017wearable}. Although tactile feedback can provide directional cues and aid users in completing various tasks, it may not be sufficient for certain tasks, such as suture knot-tying in robot-assisted surgery~\cite{okamura2009haptic} and manipulating objects in virtual reality~\cite{burdea1999keynote}. As demonstrated by Suchoski et al.~\cite{sucho@2016}, kinesthetic feedback conveys more sensitive haptic information for carrying out a grasp-and-lift task than skin-deformation feedback (a form of tactile feedback). Similarly, the role of kinesthetic (force) feedback in surgical training and skill development looks very promising~\cite{okamura2009haptic}.
Kinesthetic haptic devices that are not world grounded but instead impart feedback by grounding forces against the user's hand (\emph{hand-grounded haptic devices}) provide a solution to the challenges of portability, wearability, and limited workspace in kinesthetic haptic devices. As noted by Pacchierotti et al.~\cite{pacchierotti2017wearable}, the primary advantage of wearable kinesthetic devices is their small form factor compared to world-grounded devices. Similarly, body-grounded kinesthetic devices, i.e., exoskeletons, could be another potential solution, but they generally encumber the user's movement and are difficult to don and doff.
However, designing these hand- or body-grounded devices is challenging due to the need for increased forces/torques and numbers of degrees of freedom (DoF) in comparison to fingertip tactile devices. Additionally, the effects of hand-grounded kinesthetic feedback on users' perception and haptic experience are still unknown.
There exist numerous examples of hand-grounded kinesthetic haptic devices, including~\cite{jadhav2017soft,fontana2009mechanical,springer2002design,leonardis2015emg,nycz2016design,allotta2015development,ma2015rml,kim2016hapthimble,fu2011design,lambercy2013design,stergiopoulos2003design,lelieveld2006design,cempini2015powered,agarwal2015index,aiple2013pushing,tanaka2002wearable,polygerinos2015soft,stetten2011hand,bouzit2002rutgers,choi2018claw,choi2016wolverine,khurshid2014wearable}. These devices are either grounded against the back of the hand~\cite{jadhav2017soft,fontana2009mechanical,springer2002design,leonardis2015emg,nycz2016design,allotta2015development,ma2015rml,kim2016hapthimble,fu2011design,lambercy2013design,stergiopoulos2003design,lelieveld2006design,cempini2015powered,agarwal2015index,aiple2013pushing}, act like a glove ~\cite{tanaka2002wearable,polygerinos2015soft,stetten2011hand}, are grounded against the user's palm~\cite{bouzit2002rutgers,choi2018claw}, or are grounded against the user's fingers~\cite{choi2016wolverine,khurshid2014wearable}. To the best of our knowledge, there exists no device that can be grounded against different locations on the user's hand or a study that explains the effect of different grounding locations on the user's haptic perception and qualitative experience with kinesthetic (force) feedback.
\begin{figure}
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{.9\columnwidth}
{\input{./figs/ground_locations.pdf_tex}}
\caption{Three potential grounding locations on the user's hand: Back of the hand, Proximal Phalanx, and Middle Phalanx of the index finger. Arrows indicate directions of applied kinesthetic feedback on the fingertip: (A) along the finger axis and (B) in flexion-extension.}
\label{fig:hand}
\end{figure}
\begin{figure*}[ht]
\fontfamily{cmss}\selectfont
\centering
\begin{subfigure}{0.32\textwidth}
\def1\columnwidth{1\textwidth}
{\input{./figs/mode_a.pdf_tex}}
\vspace{-.6cm}
\caption{}
\label{fig:a}
\end{subfigure}
\centering
\begin{subfigure}{0.32\textwidth}
\def1\columnwidth{1\columnwidth}
{\input{./figs/mode_b.pdf_tex}}
\caption{}
\label{fig:b}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\def1\columnwidth{1\columnwidth}
{\input{./figs/mode_c.pdf_tex}}
\caption{}
\label{fig:c}
\end{subfigure}
\caption{Device design with three different grounding modes: (a) Grounding location is the back of the hand, (b) Grounding location is the proximal phalanx, (c) Grounding location is the middle phalanx of the index finger. In modes (b) and (c), the finger rings are rigidly attached to the base part.}\label{fig:modes}
\end{figure*}
We aim to study the effects of different hand-grounding locations on a user's haptic perception by providing kinesthetic feedback on the user's index finger tip. For this purpose, a wearable 2-DoF haptic device is designed that can provide kinesthetic feedback grounded at three different regions of the user's hand (Fig.~\ref{fig:hand}): (i) back of the hand, (ii) proximal phalanx of the index finger, and (iii) middle phalanx of the index finger. The light-weight and modular design provides kinesthetic feedback in two directions: (A) along the index finger axis, and (B) in flexion-extension.
We aim to understand how different hand-grounding locations affect the user's haptic performance and overall experience. To identify the significance and impact of different hand-grounding locations, two psychophysical experiments are carried out using \emph{the method of constant stimuli} \cite{gescheider1985psychophysics} --- one for each feedback direction. The participants were asked, in separate trials, to discriminate the stiffness of two virtual surfaces based on the kinesthetic feedback provided by the hand-grounded device. The Point of Subjective Equality (PSE) and Just Noticeable Difference (JND) were computed to measure the effective sensitivity and precision of the participants' perception of stiffness for each hand-grounding location, in both feedback directions. The PSE gives insight about the accuracy of the applied/perceived feedback, as it represents the point where the comparison stimulus (stiffness) is perceived by the user as identical to the standard stimulus. JND indicates the resolving power of a user and is defined as the minimum change in the stimulus value required to cause a perceptible increase in the sensation~\cite{gescheider1985psychophysics}.
The results show that the choice of grounding location has a profound impact on the user's haptic perception (measured through the metrics described above) and experience (based on user ratings). These findings provide important insights for the design of next-generation kinesthetic feedback devices, particularly in terms of grounding of forces, to achieve compelling and natural kinesthetic haptic interaction in real-world haptic and robotic applications. For example, using these findings we can now design hand-grounded wearable kinesthetic devices with appropriate grounding to offer superior haptic performance and user experience. As hand-grounded devices offer a comparatively larger operating range and smaller form factor than their world-grounded counterparts, knowledge related to the choice of hand-grounding location may help increase the use of wearable kinesthetic devices in the fields of haptics and robot teleoperation. The contribution of this work is the design of a novel wearable kinesthetic device and study results for understanding the role played by different hand-grounding locations in user stiffness perception.
\section{DEVICE DESIGN \& CONTROL}
\subsection{Design}
The device has a base (Fig.~\ref{fig:device_cad}) that can be tied to the back of the user's hand using a hook-and-loop fastener.
It has two rings (A and B) that are fitted to the proximal and middle phalanges of the index finger. The fingertip cap is connected to actuators A and B through two cables, which route through the passage holes on rings A and B, as shown in Fig.~\ref{fig:device_cad}. When both actuators A and B move in the same direction (clockwise or anti-clockwise), a flexion or extension movement of the finger is produced. When both actuators move in opposite directions, a pull force is generated along the finger axis.
To provide hand-grounded kinesthetic feedback at the fingertip, a number of grounding locations can be used. Fig.~\ref{fig:hand} shows the three grounding locations considered in this work: the back of the hand, the proximal phalanx of the index finger, and the middle phalanx region of the index finger. Another potential location, the palm region, was rejected because such an arrangement may affect the user's ability to open/close the hand and fingers.
\begin{figure}
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{.8\columnwidth}
{\input{./figs/back_of_hand_illustration.pdf_tex}}
\caption{Design: The base is tied against the back of the hand. When tendon cables are pulled/released by actuators A and B, the fingertip cap provides kinesthetic feedback along the finger axis and/or in flexion-extension.}
\label{fig:device_cad}
\end{figure}
\begin{figure}
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{.75\columnwidth}
{\input{./figs/kinematics.pdf_tex}}
\caption{A simplified representation of the device's mechanism as a 2-D piece-wise constant-curvature tendon-driven manipulator. Tendon lengths ($l_a, l_b$), their respective distance from tip center-point ($r_a, r_b$), and the arc parameters: length ($l$) and radius ($r$), are used to determine the tip position and finger configuration in the $x-z$ plane.}
\label{fig:simp_rep}
\end{figure}
To achieve different groundings, the device has three different modes. In mode A (Fig.~\ref{fig:modes}(a)), the back of the hand acts as the grounding location. In mode B (Fig.~\ref{fig:modes}(b)), the base is physically connected to ring A at the proximal phalanx, providing grounding at this region. In mode C (Fig.~\ref{fig:modes}(c)), the base is rigidly connected with both rings to provide grounding at the middle phalanx region. The different device modes enable actuation of different joints of the index finger in the flexion-extension direction. For example, in mode A, torque is applied at all three joints (MP1, PIP, and DIP). In mode B, only the PIP and DIP joints are actuated. In mode C, torque is applied only at the DIP joint.
Based on its kinematic design and actuator specifications, the device can apply, in different modes, a maximum force of 28.9 N along the finger axis and a torque in the range of 80 to 300 N-mm at the fingertip. It is driven by two Faulhaber 0615 4,5S DC-micromotors with 256:1 gearboxes, and 50-counts-per-revolution optical encoders are used for position sensing. The device prototypes with different grounding modes (Fig.~\ref{fig:prototypes}) weigh 31, 43, and 49 grams, respectively.
\subsection{Kinematics}
The device renders forces on the user's index finger by controlling the tendon lengths. To calculate the position and configuration of the finger, we use a robot-independent kinematic mapping between the actuator space and the task space. The obtained homogeneous transformation remains identical for all three grounding modes of our device. It is assumed that the device's tendons, when fit to the user index finger, exhibit a continuum-curve shape. The geometry of this curve allows determination of the tip position and configuration of the finger. Fig.~\ref{fig:simp_rep} shows a simplified representation of the haptic device in such a scheme.
\begin{figure}
\vspace{.2cm}
\begin{subfigure}{.32\columnwidth}
\centering
\includegraphics[width=1\textwidth]{a.png}
\caption{}
\label{fig:device_a}
\end{subfigure}
\begin{subfigure}{.32\columnwidth}
\centering
\includegraphics[width=1\textwidth]{b.png}
\caption{}
\label{fig:device_b}
\end{subfigure}
\begin{subfigure}{.32\columnwidth}
\centering
\includegraphics[width=1\textwidth]{c.png}
\caption{}
\label{fig:device_c}
\end{subfigure}
\caption{Modular versions of the wearable kinesthetic device with grounding locations: (a) Back of the hand, (b) Proximal Phalanx, and (c) Middle Phalanx}
\label{fig:prototypes}
\end{figure}
\begin{figure*}[ht]
\vspace{0.2cm}
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{.9\textwidth}
{\input{./figs/control_loop.pdf_tex}}
\caption{Block diagram of the controller used for rendering force on the user's fingertip. The hand position is tracked by a 3-DoF device, and the interaction forces are calculated as the desired force. Forces applied by the hand-grounded device end-effector on the fingertip through tendon displacements are regulated using a proportional-derivative (PD) controller.}
\label{fig:control}
\end{figure*}
As the haptic device aims to provide kinesthetic feedback in two directions (along the finger axis, and in the finger's flexion-extension direction), the kinematic mapping between the inertial frame ($O$) and the fingertip ($p(x,z)$) is described in a 2-D ($x-z$) plane. Tendon lengths ($l_a, l_b$), their respective distance from the tip center-point ($r_a, r_b$), and the arc parameters, namely length ($l$) and radius ($r$), are used to determine the tip position and finger configuration in the $x-z$ plane. The position of the fingertip can be expressed as,
\begin{align}
p(x,z) = \left[r(1 - \cos\theta), r\sin\theta\right]^T.
\end{align}
The homogeneous transformation for tendons $a$ and $b$, from $O$ to $p_a(x,z)$ and $p_b(x,z)$ respectively, is
\begin{align}
T_j &= \left[\begin{matrix}\operatorname{cos}\left(\theta\right) & 0 & \operatorname{sin}\left(\theta\right) & p_{xj}\\0 & 1 & 0 & 0\\- \operatorname{sin}\left(\theta\right) & 0 & \operatorname{cos}\left(\theta\right) & p_{zj}\\0 & 0 & 0 & 1\end{matrix}\right], \quad (j= a, b),
\end{align}
\begin{align}
p_{xj} &= \left(\frac{l}{\theta} \pm r_{j}\right) \left(1 - \operatorname{cos}\theta\right), \quad (j= a, b), \\
p_{zj} &= \left(\frac{l}{\theta} \pm r_{j}\right) \operatorname{sin}\theta, \quad (j= a, b).
\end{align}
The displacements of tendons $a$ and $b$ can be expressed in terms of arc radius and angles as
\begin{align}
s_a = (r + r_a)(\theta_o - \theta_t), \label{eq:s_a} \\
s_b = (r - r_b)(\theta_o - \theta_t). \label{eq:s_b}
\end{align}
where $\theta_o$ is the initial arc angle and $\theta_t$ represents the arc angle at time $t$.
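For concreteness, the tip position and the tendon displacements (\ref{eq:s_a})--(\ref{eq:s_b}) can be evaluated numerically. The following minimal Python sketch illustrates the computation; the function names are ours and the numerical values are placeholders, not the device's calibrated parameters.
\begin{verbatim}
import math

def tip_position(r, theta):
    # Fingertip position on a constant-curvature arc in the x-z plane
    return (r * (1.0 - math.cos(theta)), r * math.sin(theta))

def tendon_displacements(r, r_a, r_b, theta_0, theta_t):
    # Displacements of tendons a and b for an arc angle change
    s_a = (r + r_a) * (theta_0 - theta_t)
    s_b = (r - r_b) * (theta_0 - theta_t)
    return s_a, s_b

# Placeholder geometry (meters and radians)
print(tip_position(0.05, 0.4))
print(tendon_displacements(0.05, 0.008, 0.008, 0.6, 0.4))
\end{verbatim}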
\subsection{Control System}
Using the tendon displacements (\ref{eq:s_a}) and (\ref{eq:s_b}), a separate controller is implemented for each actuator to apply force and control the user's finger configuration. Fig.~\ref{fig:control} shows the block diagram of the controller in the virtual reality setup. Control of the 2-DoF kinesthetic device was achieved using a Nucleo-F446ZE board by STMicroelectronics\texttrademark{} connected to a desktop computer via USB. The microcontroller reads the encoders of the motors and receives the desired force from the virtual environment over the PC's serial port. Using this information, it calculates the desired torque output of the motors. The control loop runs at a frequency of approximately 1 kHz. The CHAI3D framework was used to render the 3-D virtual reality environment \cite{conti2005chai3d}, using the god-object algorithm \cite{zilles1995god-object} to calculate the desired interaction force. The user can move the cursor (red sphere in Fig.~\ref{fig:studies}(a) \& (b)) in 3-D space. Because the wearable device has only 2 DoFs, the third dimension does not provide any force feedback to the user. Given the nature of the tasks in the user studies, the third dimension ($y$-axis) is not required to display force feedback.
\begin{figure}
\begin{subfigure}{.49\columnwidth}
\fontsize{8pt}{6}\selectfont
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{1\columnwidth}
{\input{./figs/study_a.pdf_tex}}
\caption{}
\label{fig:study_a}
\end{subfigure}
\begin{subfigure}{.49\columnwidth}
\fontsize{8pt}{6}\selectfont
\fontfamily{cmss}\selectfont
\centering
\def1\columnwidth{1\columnwidth}
{\input{./figs/study_b.pdf_tex}}
\caption{}
\label{fig:study_b}
\end{subfigure}
\caption{Experimental setup: A user interacts with the virtual environment through a 3-DoF hand position tracking device (Phantom Omni). The new hand-grounded haptic device provides kinesthetic feedback, and a visual display shows the virtual environment. Participants receive force feedback by touching the two virtual surfaces, one carrying the reference stiffness and the other the comparison stiffness, in random order. Participants are required to discriminate the stiffness based on the kinesthetic feedback and record their choice through key presses. (a) Study A: the feedback is rendered along the finger axis. (b) Study B: the feedback is rendered along the flexion-extension movement.}
\label{fig:studies}
\vspace{-.3cm}
\end{figure}
The user's hand position ($\vec{x_u}$) is tracked using a Phantom Omni haptic device (set up to provide no haptic feedback, just position tracking) from SensAble Technologies, Inc. and sent to the virtual environment as $\vec{x_d}$. The resulting interaction force command from the virtual environment ($\vec{F_d}$) is calculated in the computer and then fed to the hand-grounded haptic device. The device then uses a mapping between the force magnitude and the device tip position (force-position translator) to output the desired tip position to the PD controller, which, using the encoders mounted on each motor shaft, estimates the current tip position and configuration and outputs the appropriate tendon displacements ($\vec{s_c}(a,b)$) to the device's motors. As the tendons shorten, the user's fingertip is moved to the target position, allowing the user to feel a force.
The PD controller error and the control law are
\begin{align}
e(t) &= y(t) - r(t), \label{eq:error}\\
U &= K_P e + K_D \frac{d}{dt} e, \label{eq:law}
\end{align}
where $K_P$ represents the proportional gain and $K_D$ the derivative gain; $e(t)$ is the position error, $y(t)$ represents the motor shaft position, and $r(t)$ is the reference position calculated from the desired tendon displacements ($\vec{s_d}(a,b)$ in Fig.~\ref{fig:control}).
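As an illustration, a discrete-time version of (\ref{eq:error})--(\ref{eq:law}) can be sketched in Python as follows; the gains and time step are hypothetical, not the values used on the microcontroller.
\begin{verbatim}
class PDController:
    def __init__(self, k_p, k_d, dt):
        self.k_p = k_p         # proportional gain K_P
        self.k_d = k_d         # derivative gain K_D
        self.dt = dt           # control period (about 1 ms at 1 kHz)
        self.prev_error = 0.0

    def update(self, y, r):
        # e(t) = y(t) - r(t)
        error = y - r
        d_error = (error - self.prev_error) / self.dt
        self.prev_error = error
        # U = K_P e + K_D de/dt
        return self.k_p * error + self.k_d * d_error

pd = PDController(k_p=2.0, k_d=0.05, dt=0.001)  # hypothetical gains
u = pd.update(y=0.10, r=0.12)
\end{verbatim}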
\section{USER STUDY}
To evaluate the effects of the three different hand-grounding locations on the user's haptic perception and experience, we conducted two separate user studies (Study A \& Study B); one for each haptic feedback DoF provided by the hand-grounded device. In Study A, the kinesthetic feedback is provided along the axis of the user's index finger. In Study B, the feedback is provided along the flexion-extension movement of the finger. The purpose of evaluating each feedback DoF separately is to develop a clear understanding of the relation between the hand-grounding location and the corresponding feedback direction.
\subsection{Study A: Feedback Along the Finger Axis}
\subsubsection{Experimental Setup}
Thirteen subjects (9 males and 4 females) participated in this study after giving informed consent, under a protocol approved by the Stanford University Institutional Review Board. The metrics were the PSE and JND of stiffness perception while the hand-grounded device was set up for each of the three grounding locations (back of the hand, proximal phalanx, and middle phalanx of the index finger). The participants used the hand-grounded haptic device on their right hand and performed tasks in a virtual environment, while holding the stylus of the Phantom Omni device in the same hand (Fig.~\ref{fig:studies}(a)). A pilot study was conducted to determine a convenient posture for holding the Phantom Omni stylus while the kinesthetic device is donned on the index finger. In the user studies, the participants were instructed to hold the Phantom Omni device in that predefined way to ensure that its stylus did not come into contact with the wearable kinesthetic device.
\subsubsection{Experimental Procedure}
Each participant used the haptic device configured for each of the three hand-grounding locations in a predetermined order to minimize the effect of selection bias. As mentioned earlier, a Phantom Omni device was used to track the user hand position during the experiments as shown in Fig.~\ref{fig:studies}(a). The Phantom Omni only determined the user hand position, while the kinesthetic feedback was rendered by the hand-grounded haptic device.
Participants wore ear protection to suppress the motor noise and avoid sound cues. After the experiments were completed, the participants rated the realism of the haptic feedback and the comfort/ease-of-use for all three devices with different hand-grounding locations on a scale of 1-7: 1 meaning `not real' and 7 meaning `real', or 1 for `not comfortable' and 7 for `comfortable.' Realism was rated with respect to the feeling of pressing against a very smooth real surface using the right-hand index finger.
\subsubsection{Method}
We conducted a \emph{two-alternative forced-choice} experiment following the \emph{method of constant stimuli}~\cite{gescheider1985psychophysics}. Subjects were asked to freely explore and press against the two virtual surfaces shown on the virtual environment display and state which surface felt stiffer. In each trial, one surface presented a reference stiffness value while the other presented a comparison stiffness value. The reference stiffness value was selected to be 100.0 N/m.
The reference value was included as one comparison value, and the other comparison values were then chosen to be equally spaced: 10, 28, 46, 64, 82, 100, 118, 136, 154, 172, and 190 N/m.
Each of the eleven comparison values was presented ten times in random order for each of the three hand-grounded haptic devices over the course of one study. Each participant completed a total of 110 trials for each grounding mode (330 trials for the entire study). The participants used the kinesthetic feedback from the hand-grounded device to explore the virtual surfaces until a decision was made; they recorded their responses by pressing designated keyboard keys, corresponding to which virtual surface they thought felt stiffer. Subject responses and force/torque data were recorded after every trial. There was no time limit for each trial, and participants were asked to make their best guess if the decision seemed too difficult. Subjects were given an optional two-minute break after every fifty-five trials, and a ten-minute break after the completion of each grounding mode.
\subsection{Study B: Feedback in the Flexion-Extension Direction}
In Study B, the kinesthetic feedback was rendered along the flexion-extension movement direction of the index finger. A total of 14 subjects (9 males and 5 females) participated, and the study was approved by the Stanford University Institutional Review Board. The procedure was the same as in Study A. However, in Study B the virtual surfaces were presented lying in the horizontal plane (Fig.~\ref{fig:studies}(b)) to make the haptic feedback intuitive for the user.
\begin{figure*}
\renewcommand\thesubfigure{\roman{subfigure}}
\begin{subfigure}{.32\textwidth}
\fontfamily{cmss}\selectfont
\centering
\includegraphics[width=1\columnwidth]{aa.png}
\caption{Back of the hand}
\label{fig:curve_a}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\fontfamily{cmss}\selectfont
\centering
\includegraphics[width=1\columnwidth]{ba.png}
\caption{Proximal Phalanx}
\label{fig:curve_b}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\fontfamily{cmss}\selectfont
\centering
\includegraphics[width=1\columnwidth]{ca.png}
\caption{Middle Phalanx}
\label{fig:curve_c}
\end{subfigure}
\caption{Example psychophysical data and psychometric function fits for a representative subject in Study A, with grounding locations: (i) back of the hand, (ii) proximal phalanx, and (iii) middle phalanx of the index finger. Each data point represents the proportion of `yes' responses over 10 trials. The user identified the difference between the reference and comparison stimulus values correctly 90\% of the time for grounding location (i), 94\% of the time for location (ii), and 98\% of the time for location (iii).}
\label{fig:curves}
\end{figure*}
\begin{table*}
\vspace{.3cm}
\caption{Results of the two psychophysical experiments for stiffness discrimination. In Study A, the hand-grounded haptic device provided feedback along the axis of the finger with three different grounding locations. In Study B, the feedback direction was the flexion-extension movement of the index finger.}
\begin{tabularx}{\linewidth}{|c|r|XX|XX|XX|}
\hline
\multicolumn{2}{|c|}{Grounding Location} & \multicolumn{2}{c|}{Back of the Hand} & \multicolumn{2}{c|}{Proximal Phalanx} & \multicolumn{2}{c|}{Middle Phalanx} \\
\hline
\hline
& Subject No.& PSE (N/m) & JND (N/m) & PSE (N/m) & JND (N/m) & PSE (N/m) & JND (N/m) \\
\multirow{14}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1cm}{\centering Study A}}} & 1 & 154.42 & 57.15 & 150.74 & 47.35 & 120.34 & 41.32 \\
& 2 & 111.17 & 9.86 & 107.58 & 17.44 & 103.37 & 6.2 \\
& 3 & 139.75 & 48.5 & 129.95 & 34.27 & 81.3 & 7.32 \\
& 4 & 87.56 & 6.07 & 92.95 & 5.72 & 102.27 & 20.24 \\
& 5 & 98.62 & 32.37 & 85.2 & 6.79 & 114.85 & 19.04 \\
& 6 & 113.23 & 13.08 & 104.68 & 25.28 & 100.74 & 20.1 \\
& 7 & 99.6 & 13.21 & 108.99 & 7.97 & 94.68 & 9.48 \\
& 8 & 118.75 & 46.88 & 128 & 31.13 & 109.11 & 38.95 \\
& 9 & 115.5 & 20.23 & 103.4 & 10.16 & 105.51 & 19.72 \\
& 10 & 114.64 & 12.93 & 103.46 & 12.15 & 116.09 & 19 \\
& 11 & 85.87 & 7.32 & 108.22 & 12.65 & 117.25 & 17.41 \\
& 12 & 92.94 & 6.66 & 114.14 & 22.17 & 91.54 & 15.65 \\
\cline{2-8}
& Mean & 111.004 & 22.855 & 111.442 & 19.423 & 104.754 & 19.536 \\
& Std. Dev. & 19.581 & 17.677 & 16.843 & 12.404 & 11.178 & 10.411 \\
\hline
\hline
\multirow{14}{*}{\rotatebox[origin=c]{90}{\parbox[c]{1cm}{\centering Study B}}} & 1 & 96.06 & 0.17 & 101.2 & 12.26 & 103.8 & 13.75 \\
& 2 & 92.71 & 13.51 & 104.59 & 13.02 & 92.79 & 8.64 \\
& 3 & 125.72 & 40.06 & 94.9 & 28.36 & 121.88 & 45.98 \\
& 4 & 104.87 & 6.41 & 98.32 & 14.36 & 87.34 & 6.75 \\
& 5 & 115.52 & 41.74 & 114.25 & 40.84 & 102.2 & 20 \\
& 6 & 122.16 & 33.52 & 90.2 & 16.44 & 119.91 & 16.88 \\
& 7 & 121.54 & 20.98 & 116.83 & 36.64 & 125.8 & 64.77 \\
& 8 & 105.76 & 10.69 & 108.89 & 11.67 & 104.6 & 26.4 \\
& 9 & 98.85 & 25.9 & 100 & 16.83 & 117.3 & 25.28 \\
& 10 & 99.45 & 41.48 & 95.07 & 22.3 & 104.98 & 24.18 \\
& 11 & 138.96 & 41.62 & 127 & 30.4 & 122.64 & 24.03 \\
& 12 & 113.34 & 38.78 & 95.35 & 45.78 & 120.79 & 60.43 \\
\cline{2-8}
& Mean & 111.245 & 26.238 & 103.883 & 24.075 & 110.336 & 28.091 \\
& Std. Dev. & 13.426 & 14.761 & 10.435 & 11.520 & 12.181 & 18.222 \\
\hline
\end{tabularx}
\end{table*}
\section{RESULTS \& DISCUSSION}
For both user studies, we determined the number of times each participant responded that the comparison stiffness value was greater than the reference stiffness value. A psychometric function was then fit to each participant's response data using the python-psignifit 4 library (https://github.com/wichmann-lab/python-psignifit). Data from twenty-four of the twenty-seven subjects were fit sufficiently well by psychometric functions, and the mean JNDs and PSEs for both experiments were determined. Example plots for a representative subject are shown in Fig.~\ref{fig:curves}. Three relevant values were determined: the PSE, the stimulus value corresponding to a proportion of 0.25 ($J_{25}$), and the stimulus value corresponding to a proportion of 0.75 ($J_{75}$). The JND is defined as the mean of the differences between the PSE and the two values $J_{25}$ and $J_{75}$:
\begin{align}
JND = \frac{(PSE - J_{25}) + (J_{75} - PSE)}{2}.
\end{align}
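Computationally, once the psychometric fit yields the PSE, $J_{25}$, and $J_{75}$, the JND reduces to a one-line helper; note that the expression simplifies to $(J_{75} - J_{25})/2$. The following Python sketch is our own helper, not part of psignifit, and the numbers are illustrative.
\begin{verbatim}
def jnd_from_fit(pse, j25, j75):
    # Mean of the lower and upper differences around the PSE;
    # algebraically equal to (j75 - j25) / 2
    return ((pse - j25) + (j75 - pse)) / 2.0

print(jnd_from_fit(pse=104.8, j25=85.0, j75=124.0))  # 19.5
\end{verbatim}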
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{pse.pdf}
\caption{Point of Subjective Equality (PSE) for both feedback DoFs (Study A and B) against each of the three considered hand-grounding locations. Error bars indicate the standard deviation.}
\label{fig:pse}
\end{figure}
\begin{figure}
\vspace{-.3cm}
\centering
\includegraphics[width=1\columnwidth]{jnd.pdf}
\caption{Just Noticeable Differences (JNDs) for both feedback DoFs (Study A and B) against each of the considered hand-grounding locations.
}
\label{fig:jnd}
\vspace{-.3cm}
\end{figure}
The PSE and JND results of the psychophysical experiments for both studies are summarized in Table 1. Because these studies use a single reference stimulus, the Weber fractions (WFs) are simply the JNDs divided by the reference value; therefore, we do not report WFs separately.
In Study A, the best average PSE (closest to the reference value) for stiffness perception among all three grounding locations is found for the middle phalanx location of the index finger (104.75 N/m), shown in Fig.~\ref{fig:pse}. This indicates that a grounding location closer to the fingertip helps users perceive stiffness more accurately. This is also supported by the user ratings for the realism of kinesthetic feedback, as shown in Fig.~\ref{fig:ratings}. The smallest average JND was found for grounding at the proximal phalanx (19.42 N/m), closely followed by the average JND for grounding at the middle phalanx (19.54 N/m). Like the PSE, the average JND showed the largest value for the back-of-the-hand grounding location (see Fig.~\ref{fig:jnd}). This indicates that the proximal and middle phalanx are preferable locations, in the given order, for more realistic and accurate feedback perception. However, the user ratings for comfort and ease-of-use indicate that the back of the hand is a more desirable grounding location.
In Study B, the best average PSE (closest to the reference value) for stiffness perception among all three grounding locations is found for grounding at the proximal phalanx of the index finger (103.88 N/m), shown in Fig.~\ref{fig:pse}. This grounding location also results in the smallest average JND value (24.07 N/m) among all three grounding locations. The user ratings for kinesthetic feedback realism and comfort/ease-of-use, as shown in Fig.~\ref{fig:ratings}, also rate this location as the best for imparting the most realistic and comfortable haptic experience. The second-best location in terms of average JND value is the back of the hand, which holds for the feedback realism ratings as well. The grounding location with the least realistic feedback ratings and the largest average JND (28.09 N/m) was the middle phalanx location.
\begin{figure}
\centering
\vspace{-0.65cm}
\includegraphics[width=1\columnwidth]{realism_comfort.png}
\caption{Mean user ratings for the realism of feedback and comfort/ease-of-use against each of the three hand-grounding locations. Error bars indicate standard deviations.}
\label{fig:ratings}
\end{figure}
If we compare the average JND values across both studies, the values for feedback along the finger axis (Study A) are significantly smaller than those for feedback along the flexion-extension direction (Study B). This indicates that the haptic device was able to provide better haptic feedback in Study A, i.e., along the axis of the index finger. The reason probably relates to the simpler nature of this feedback direction, where the finger configuration remains unchanged in all modes. However, the realism and comfort ratings show a distinct pattern: realism is higher for Study B (kinesthetic feedback along flexion-extension) when the grounding locations are the back of the hand and the proximal phalanx, whereas realism in Study A is higher than in Study B when the grounding location is the middle phalanx. This again relates to the different nature of the second feedback DoF, where the finger configuration has to change in order to render a torque at the finger joints. The use of the Phantom Omni for tracking may introduce some passive forces that add variance to the study. Despite this, we observed significant performance differences among the studied grounding locations.
On the other hand, the comfort/ease-of-use ratings are higher for Study A than for Study B when the grounding locations are the back of the hand and the middle phalanx, respectively. Feedback in the flexion-extension direction (Study B) showed higher comfort ratings than the finger-axis direction (Study A) when grounding is at the proximal phalanx region. The highest comfort rating among all grounding locations across both studies is given to the proximal phalanx, for feedback along the flexion-extension movement.
\section{CONCLUSION}
A novel hand-grounded kinesthetic feedback device was created for studying the effect of different grounding locations on the user's haptic experience. The device can provide kinesthetic feedback along the user's index finger and in its flexion-extension movement direction. Two psychophysical experiments -- one for each feedback DoF -- were conducted to evaluate the user's haptic performance and experience. It is shown that the choice of grounding location in wearable haptic devices has a significant impact on the user's haptic perception of stiffness. The realism of the haptic feedback increases, while the comfort level decreases, as the grounding location moves closer to the fingertip. The relationship between the grounding location and user haptic perception is similar in both feedback directions. If the design objective is to achieve maximum comfort, feedback realism, and best haptic perception in both DoFs simultaneously, it is recommended to ground at the proximal phalanx region of the finger.
These findings about the choice and impact of different hand-grounding locations give important insights for designing next-generation wearable kinesthetic devices and for achieving better performance in a wide range of applications, such as virtual reality and robot teleoperation.
In the future, we plan to conduct further experiments to explore the effects of these hand-grounding locations when the kinesthetic feedback is applied to both DoFs simultaneously.
\bibliographystyle{IEEEtran}
\section{Introduction}
The scientific and technological advancement in the last century greatly increases our understanding of the universe.
Nowadays, we are able to build giant telescopes and observe astronomical objects billions of light years away.
Apart from deepening our scientific understanding, our astronomical knowledge also stimulates our imagination
of interstellar civilizations.
A lot of great science fiction has been written, and scientists have proposed ideas like
the Fermi paradox~\cite{gray2015fermi}, the Dyson sphere~\cite{wright2020dyson} and the Kardashev scale~\cite{gray2020extended}.
While many of these ideas are physically plausible,
it would be interesting to discuss these ideas from a social science perspective.
Due to the highly hypothetical nature of the problem,
we suggest that agent-based modelling (ABM) enables formal academic discussion of interstellar society.
An ABM is a simulation model bridging microscopic behaviours of agents and macroscopic observations.
Depending on the context of the model, an agent can be an individual, an organization, or even a country.
First, assumptions about the social behaviours of the agents are made; then the agents are placed and evolved
in a computational environment.
For a research problem for which it is not viable to perform data collection and analysis,
ABM can still be used for theoretical exposition~\cite{edmonds2015simulating}.
To model agents in interstellar space, suppose we only consider a scale with normal stellar objects
so that we can ignore the effects of general relativity, such as universe expansion and black holes,
but we still have to consider the effect of special relativity.
In the context of ABM, where the simulation is computed under a set of inertial frames that are at rest to each other,
we can simplify the theory of relativity into two core
phenomena: the speed of light as the upper bound on the speed of information travel, and
time dilation relative to any stationary observer in the inertial frames.
This also implies that we have to take care of four dimensions: three space dimensions, plus one time dimension.
Typically, an ABM is constructed using an ABM framework to facilitate model development and communication.
There are many existing ABM frameworks, for example NetLogo~\cite{netlogo}, mesa~\cite{python-mesa-2020},
and Agents.jl~\cite{Agents2021}; see~\cite{pal2020review} for a detailed review.
While it is possible to build a 4D relativistic model in some existing ABM frameworks,
those frameworks do not have native support for the necessary 4D data structures,
and it can be error-prone to enforce relativistic effects via custom implementations of data structures and algorithms.
Therefore, we have developed a simulation framework we call ``Relativitization''~\cite{relativitization},
to help social scientists to build an ABM in relativistic spacetime.
In this paper, the mathematics and the algorithms underlying the framework will be presented.
\section{Definitions}
In Relativitization, an agent is called a ``player''.
Players live in a universe.
Ideally, computation should be done in every local frame following all players,
and the computation results can be synchronized by a Lorentz transformation.
However, this will make the framework and the model substantially more complex.
Therefore, all computations are done according to some inertial frames that are at rest to each other.
The spatial coordinates of a player are represented by floating-point numbers $x$, $y$ and $z$
and the time coordinate of a player is represented by a floating-point number $t$.
To simplify computation and visualization, the universe is partitioned into unit cubes.
A player with floating-point coordinates $(t, x, y, z)$ is located at the cube
with integer coordinates $T = \lfloor t \rfloor$, $X = \lfloor x \rfloor$, $Y = \lfloor y \rfloor$, $Z = \lfloor z \rfloor$.
Note that the computations of a simulation are done at unit time steps, so we can assume $T = t$.
Denote the speed of light as $c$.
In vector notation, define $\textbf{s} = (t, \overrightarrow{u}) = (t, x, y, z)$, and
$\textbf{S} = (T, \overrightarrow{U}) = (T, X, Y, Z)$.
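In code, the mapping from floating-point to integer (unit-cube) coordinates is a componentwise floor. A minimal Python sketch, with names of our own choosing:
\begin{verbatim}
import math

def to_int_coordinates(t, x, y, z):
    # Unit cube (T, X, Y, Z) containing the player at (t, x, y, z)
    return (math.floor(t), math.floor(x), math.floor(y), math.floor(z))

print(to_int_coordinates(3.0, 1.2, 0.7, 2.9))  # (3, 1, 0, 2)
\end{verbatim}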
\subsection{Interval and time delay}
The spacetime interval between coordinates $\textbf{s}_i$ and $\textbf{s}_j$ is
\begin{equation}
\|\textbf{s}_i - \textbf{s}_j\| = c^2 (t_i - t_j)^2 - (x_i - x_j)^2 - (y_i - y_j)^2 - (z_i - z_j)^2.
\end{equation}
If $\|\textbf{s}_i - \textbf{s}_j\| < 0$, it is called a spacelike interval, and events that happen at the two coordinates
are not causally connected because no information can travel faster than the speed of light $c$.
It is often necessary to compute intervals in integer coordinates.
We define the spatial distance between $\overrightarrow{U_i}$
and $\overrightarrow{U_j}$ as the maximum distance between all points in the cubes at
$\overrightarrow{U_i}$ and $\overrightarrow{U_j}$
\begin{equation} \label{eq:delay}
|\overrightarrow{U_i} - \overrightarrow{U_j}| = \sqrt{(|X_i - X_j| + 1)^2 + (|Y_i - Y_j| + 1)^2 + (|Z_i - Z_j| + 1)^2}.
\end{equation}
Suppose there is a signal sent from $\overrightarrow{U_i}$ to $\overrightarrow{U_j}$.
To ensure that the information travels no faster than the speed of light,
the integer time delay $\tau(\overrightarrow{U_i}, \overrightarrow{U_j})$ is computed as
\begin{equation}
\tau(\overrightarrow{U_i}, \overrightarrow{U_j}) = \left \lceil \frac{|\overrightarrow{U_i} - \overrightarrow{U_j}|}{c} \right \rceil.
\end{equation}
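A direct Python implementation of the distance (\ref{eq:delay}) and the integer time delay reads as follows; this is a sketch with our own function names, not the framework's API.
\begin{verbatim}
import math

def spatial_distance(u_i, u_j):
    # Maximum distance between points of the two unit cubes
    return math.sqrt(sum((abs(a - b) + 1) ** 2
                         for a, b in zip(u_i, u_j)))

def int_delay(u_i, u_j, c):
    # The ceiling guarantees that information never arrives
    # faster than the speed of light
    return math.ceil(spatial_distance(u_i, u_j) / c)

print(int_delay((0, 0, 0), (2, 1, 0), c=1.0))  # ceil(sqrt(14)) = 4
\end{verbatim}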
\subsection{Group id}
From Eq.~\ref{eq:delay}, even if $\overrightarrow{U_i} = \overrightarrow{U_j}$, the time delay is non-zero.
To implement zero time delay for players that are really close to each other,
we divide a unit cube into several sub-cubes with edge length $d_e$,
and information travel within the same sub-cube is instantaneous.
To improve the computational speed when checking whether two players belong to the same sub-cube,
we assign a ``group id'' to each sub-cube in a unit cube.
A unit cube has $n_e^3$ sub-cubes, where $n_e = \left \lceil \frac{1}{d_e} \right \rceil$.
For a player at $\overrightarrow{u}$, it belongs to the $(n_x, n_y, n_z)$ sub-cube,
where $n_x = \left \lfloor \frac{x - X} {d_e} \right \rfloor$,
$n_y = \left \lfloor \frac{y - Y} {d_e} \right \rfloor$,
and $n_z = \left \lfloor \frac{z - Z} {d_e} \right \rfloor$.
The group id $g(\overrightarrow{u}, \overrightarrow{U})$ of the player
can be computed as
\begin{equation} \label{eq:group}
g(\overrightarrow{u}, \overrightarrow{U}) = n_x n_e^2 + n_y n_e + n_z.
\end{equation}
If two players have the same integer coordinates and the same group id,
then we say the players belong to the same group, and the time delays between the players are zero.
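Eq.~(\ref{eq:group}) translates directly into code. In the following minimal Python sketch (names ours), the two example players share a unit cube and a sub-cube, so their mutual time delay is zero.
\begin{verbatim}
import math

def group_id(u, U, d_e):
    # Sub-cube index of a player at u inside the unit cube at U
    n_e = math.ceil(1.0 / d_e)
    n_x = math.floor((u[0] - U[0]) / d_e)
    n_y = math.floor((u[1] - U[1]) / d_e)
    n_z = math.floor((u[2] - U[2]) / d_e)
    return n_x * n_e * n_e + n_y * n_e + n_z

print(group_id((1.1, 2.6, 0.3), (1, 2, 0), d_e=0.25))  # 9
print(group_id((1.2, 2.7, 0.4), (1, 2, 0), d_e=0.25))  # 9
\end{verbatim}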
\subsection{Player data}
A player is characterized by a set of data:
\begin{itemize}
\item player id $i$,
\item integer coordinates $(T_i, X_i, Y_i, Z_i)$,
\item a historical record of integer coordinates $H_i = \{(T_i', X_i', Y_i', Z_i') \mid T_i' < T_i \}$,
\item floating-point coordinates $(t_i, x_i, y_i, z_i)$,
\item time dilation counter $\mu_i$,
a floating-point number to keep track of time dilation (see Sec.~\ref{ssec:mechanism_and_ai} and Sec.~\ref{ssec:mechanisms}),
\item group id $g_i$,
\item floating-point velocities $\overrightarrow{v_i} = (v_{ix}, v_{iy}, v_{iz})$,
\item other data $D_i$ relevant to the model.
\end{itemize}
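As a sketch, this player data could be collected in a record type such as the following Python dataclass; the field names are ours, not the framework's.
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class PlayerData:
    player_id: int
    int_coordinates: tuple                        # (T, X, Y, Z)
    history: list = field(default_factory=list)   # past (T', X', Y', Z')
    double_coordinates: tuple = (0.0, 0.0, 0.0, 0.0)  # (t, x, y, z)
    dilation_counter: float = 0.0                 # mu_i
    group_id: int = 0                             # g_i
    velocity: tuple = (0.0, 0.0, 0.0)             # (v_x, v_y, v_z)
    other_data: dict = field(default_factory=dict)  # model-specific D_i
\end{verbatim}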
\subsection{Command}
In other frameworks, interactions in ABMs are often presented as one player asking another player to do something.
Because the speed of information travel is bounded by $c$,
a player cannot simply ask other players to do something immediately.
Instead, interactions are mediated by commands.
Whenever player $i$ wants to interact with player $j$,
player $i$ sends a command to $j$.
A command is characterized by:
\begin{itemize}
\item $i_{\textrm{to}}$, the id of the player to receive this command,
\item $i_{\textrm{from}}$, the id of the player who sent this command,
\item $\textbf{S}_{\textrm{from}}$, the integer coordinates of the sender when the command was sent,
\item $f_{\text{target}}$, a function that modifies the data of the target player when the command is received.
\end{itemize}
Commands travel at the speed of light $c$.
The amount of time needed for a command to reach the target,
measured in the inertial frames we used in the simulation,
depends on the trajectory of the target player $i_{\textrm{to}}$ and the sender coordinates $\textbf{S}_{\textrm{from}}$.
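A command can likewise be modelled as a small record that carries the function to apply on receipt; a Python sketch under the same naming assumptions:
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable

@dataclass
class Command:
    to_id: int                    # the target player
    from_id: int                  # the sender
    from_int_coordinates: tuple   # (T, X, Y, Z) at sending time
    execute: Callable             # applied to the target player's data
\end{verbatim}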
\subsection{Universe data}
The universe is an overarching structure that aggregates all necessary data and functionality.
A universe has:
\begin{itemize}
\item a current universe time $T_{\textrm{current}}$,
\item a 4-dimensional array of maps from player id to lists of player data $M_{TXYZ}$,
so that the data of a player residing at $(T, X, Y, Z)$ is stored in the associated list;
the ``afterimages'' of players are also stored in the corresponding list (Sec.~\ref{ssec:move}),
\item a map $M_{\textrm{command}}$ from player id to lists of commands,
such that a command in the list will be executed when the player receives the command,
\item other universe global data $D_G$ relevant to the model,
\end{itemize}
\subsection{Mechanism and AI} \label{ssec:mechanism_and_ai}
Given an instance of a universe,
the dynamics of players are based on predefined rules and the state of the universe observed by the players.
In our framework, we call the rules mechanisms.
A mechanism takes the state of the universe observed by a player,
modifies the state of a player,
and generates a list of commands to send to other players.
To ease the model development to account for the time dilation effect,
we further divide mechanisms into two categories: regular mechanisms and dilated mechanisms.
A regular mechanism is executed once per turn,
while a dilated mechanism is executed once every several turns,
adjusted for the time dilation of the player measured in the inertial frames we used in the simulation.
\section{Simulation step} \label{sec:simulation}
The following are needed to define a model:
\begin{itemize}
\item the data structure of other player data $D_i$,
\item the data structure of other universe global data $D_G$,
\item a set of available commands,
\item a function to initialize the universe data,
\item a function to update the universe global data,
\item a set of regular and dilated mechanisms,
\end{itemize}
Along with the universe data,
it is useful to define a map $M_{\textrm{current}}$ from player id to the current player data,
i.e., the data with $T_i = T_{\textrm{current}}$,
as an internal object of the simulation.
The modifications of player data are first performed on $M_\textrm{current}$,
and then synchronized back to the universe data at the appropriate times.
Suppose we have initialized a universe model and $M_\textrm{current}$. A complete step in a simulation involves:
\begin{enumerate}
\item update the global data (Sec.~\ref{ssec:global}),
\item compute time dilation effects for all players (Sec.~\ref{ssec:dilation}),
\item process mechanisms for each player (Sec.~\ref{ssec:mechanisms}),
\item process the command map (Sec.~\ref{ssec:command}),
\item move players, add afterimages, and update time (Sec.~\ref{ssec:move}).
\end{enumerate}
The simulation can be run for a fixed number of steps, or stopped when a stopping condition is met.
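The overall loop can be sketched as follows; each helper corresponds to one of the subsections below, and all names are illustrative rather than the framework's API.
\begin{verbatim}
def simulation_step(universe, current_map):
    update_global_data(universe)               # update global data
    compute_time_dilation(current_map)         # compute time dilation
    process_mechanisms(universe, current_map)  # process mechanisms
    process_command_map(universe, current_map) # process stored commands
    move_players_and_update_time(universe, current_map)

def run(universe, current_map, max_steps, stop=lambda u: False):
    for _ in range(max_steps):
        simulation_step(universe, current_map)
        if stop(universe):
            break
\end{verbatim}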
\subsection{Update global data} \label{ssec:global}
A model may rely on mutable global data $D_G$ to implement its dynamics.
If the model depends on some player data to update $D_G$,
and the effect is observable by players,
we need to ensure that no information is transferred faster than the speed of light via
the global data update.
For example, if the global data is modified when ``all'' player data satisfy a condition,
we have to be careful about what we mean by ``all'' here.
In the universe, the maximum time delay equals $\tau_{\textrm{max}} = \tau((0, 0, 0), (\max(X), \max(Y), \max(Z)))$.
To fulfill the speed of light constraint, the update function has to check whether all player data in
$M_{TXYZ}$, where $T_{\textrm{current}} - \tau_{\textrm{max}} \leq T \leq T_{\textrm{current}}$,
$0 \leq X \leq \max(X)$, $0 \leq Y \leq \max(Y)$, and $0 \leq Z \leq \max(Z)$,
satisfy that condition.
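A sketch of such a light-cone-respecting check, assuming the 4D map-of-lists layout described above and that at least $\tau_{\textrm{max}}$ past time slices are retained:
\begin{verbatim}
def all_players_satisfy(m_txyz, t_current, tau_max, condition):
    # Scan every cube in the last tau_max + 1 time slices, so the
    # global update never uses information from outside the past
    # light cone of any player (assumes t_current - tau_max >= 0).
    for t in range(t_current - tau_max, t_current + 1):
        for x_slice in m_txyz[t]:
            for y_slice in x_slice:
                for cube in y_slice:
                    for player_list in cube.values():
                        if not all(condition(p) for p in player_list):
                            return False
    return True
\end{verbatim}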
\subsection{Compute time dilation} \label{ssec:dilation}
Relative to a stationary observer $j$ in an inertial frame,
special relativity predicts that a moving observer $i$ experiences a time dilation effect:
\begin{equation} \label{eq:gamma}
\gamma_i = \frac{1}{\sqrt{1 - \frac{v_i^2}{c^2}}},
\end{equation}
\begin{equation}
\Delta t_i = \frac{\Delta t_j}{\gamma_i},
\end{equation}
where $\gamma_i$ is called the Lorentz factor.
To account for the time-dilation effect,
the time dilation counter $\mu_i$ is updated by Algorithm~\ref{alg:dilation}
every turn for every player.
$\mu_{i}$ will then affect the mechanism processing in Sec.~\ref{ssec:mechanisms}.
\begin{algorithm}
\KwInput{$M_\textrm{current}$, map from player id to current player data}
\ForEach{player $i$ in $M_\textrm{current}$}{
$\mu_i \gets \mu_i + \sqrt{1 - \frac{v_i^2}{c^2}}$\;
\If{$\mu_i \geq 0$} {
$\mu_i \gets \mu_i - 1$\;
}
}
\caption{Update time dilation counter.}
\label{alg:dilation}
\end{algorithm}
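Algorithm~\ref{alg:dilation} translates almost line for line into code; a Python sketch using the player fields introduced earlier:
\begin{verbatim}
import math

def update_dilation_counter(player, c):
    vx, vy, vz = player.velocity
    v2 = vx * vx + vy * vy + vz * vz
    # Accumulate the proper time 1/gamma elapsed in one universe turn
    player.dilation_counter += math.sqrt(1.0 - v2 / (c * c))
    if player.dilation_counter >= 0.0:
        player.dilation_counter -= 1.0
\end{verbatim}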
\subsection{Process mechanisms} \label{ssec:mechanisms}
Before processing any mechanism for a player,
we need to compute the state of the universe viewed by the player.
At any instant in our discretized relativistic universe,
player $i$ sees other players located at the unit cubes closest
to the surface of the past light cone of player $i$,
while the entire cubes are still within the past light cone.
The computation consists of two steps:
(1) Algorithm~\ref{alg:viewAtCube} computes the view centered at a specific cube,
ignoring the zero time delay when players are within the same group,
(2) Algorithm~\ref{alg:viewAtGroup} computes the view for players in a group.
Assuming each line of these algorithms takes $O(1)$ and iterating over all $(X, Y, Z)$,
the time complexity $O(mn)$ from Algorithm~\ref{alg:viewAtCube} dominates,
where $m=X_{\textrm{max}} Y_{\textrm{max}} Z_{\textrm{max}}$ is the spatial size of the universe,
and $n$ is the number of players.
\begin{algorithm}
\KwInput{\\
\Indp
$T_i, X_i, Y_i, Z_i$ position of the viewing location\\
$M_{TXYZ}$ 4D array of maps from player id to lists of player data
}
\KwOutput{\\
\Indp
$M$ map from player id to player data\\
$\Lambda_{XYZ}$ 3D array of maps from group id to lists of player id
}
Initialize empty $M$ and $\Lambda_{XYZ}$\;
\ForEach{$X_j, Y_j, Z_j$}{
$T_j \gets T_i - \tau(\overrightarrow{U_i}, (X_j, Y_j, Z_j))$\;
\ForEach{player data in $M_{T_j X_j Y_j Z_j}[k]$}{
\If{$M$ has key $k$}{
\If{$T$ of $M[k]$ $<$ $T$ of the new player data}{
Replace $M[k]$ by this new player data\;
}
}
\Else{
Store data of player $k$ to $M[k]$\;
}
}
}
Associate the player id from $M$ to the corresponding list in $\Lambda_{XYZ}$ by spatial coordinates and group id\;
\Return $(M, \Lambda_{XYZ})$
\caption{Compute the view of the universe at a cube, ignoring the zero time delay when players are in the same group.}
\label{alg:viewAtCube}
\end{algorithm}
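The core of Algorithm~\ref{alg:viewAtCube} is the retarded-time lookup $T_j = T_i - \tau(\overrightarrow{U_i}, (X_j, Y_j, Z_j))$ together with a keep-the-latest merge per player id. A Python sketch (the data layout and attribute names are hypothetical):
\begin{verbatim}
import math

def tau(a, b, c):
    # Integer light-travel delay between two spatial points.
    return math.ceil(math.dist(a, b) / c)

def view_at_cube(Ui, m_txyz, cubes, c):
    # Ui = (T, X, Y, Z) of the viewer; cubes iterates over (X, Y, Z).
    view = {}
    for cube in cubes:
        Tj = Ui[0] - tau(Ui[1:], cube, c)
        for pid, data in m_txyz.get((Tj, *cube), {}).items():
            # Keep only the most recent record seen for each player.
            if pid not in view or view[pid].T < data.T:
                view[pid] = data
    return view
\end{verbatim}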
\begin{algorithm}
\KwInput{\\
\Indp
$g_i$ group id\\
$T_j, X_j, Y_j, Z_j$ position of the viewing location\\
$M$ map from player id to player data\\
$\Lambda_{XYZ}$ 3D array of maps from group id to lists of player id\\
$M_{TXYZ}$ 4D array of maps from player id to lists of player data
}
\KwOutput{\\
\Indp
$M'$ map from player id to player data\\
$\Lambda'_{XYZ}$ 3D array of maps from group id to lists of player id
}
$M' \gets M$\;
$\Lambda'_{XYZ} \gets \Lambda_{XYZ}$\;
\ForEach{player data in $M_{T_j X_j Y_j Z_j}[k]$ where $g(\overrightarrow{u_k}, \overrightarrow{U_k}) = g_i$}{
\If{$T$ of $M[k]$ $<$ $T$ of the new player data}{
Replace $M'[k]$ by this new player data\;
Update the corresponding position of player $k$ in $\Lambda'_{XYZ}$\;
}
}
\Return $(M', \Lambda'_{XYZ})$
\caption{Compute the view of the universe for players in a group.}
\label{alg:viewAtGroup}
\end{algorithm}
The view of the universe of a player is used by mechanisms to update the player data and generate commands to send.
Regular mechanisms update the data of the player each turn,
while dilated mechanisms update the player only when the time dilation counter is non-negative, to account for the time dilation effect.
The generated commands are executed immediately if the target player is within the same group of the sender,
otherwise the commands are stored in $M_{\textrm{command}}$.
Algorithm~\ref{alg:mechanisms} shows the overall iterative process.
\begin{algorithm}
\KwInput{\\
\Indp
$M_\textrm{current}$ map from player id to current player data\\
Universe data
}
\ForEach{$(X_j, Y_j, Z_j)$}{
Compute the view of the universe at this cube by algorithm~\ref{alg:viewAtCube}\;
\ForEach{group in this cube}{
Compute the view of the universe at this group by algorithm~\ref{alg:viewAtGroup}\;
\ForEach{data of player $k$ in this group}{
Update $M_\textrm{current}[k]$ by all regular mechanisms\;
\If{$\mu_k \geq 0$}{
Update $M_\textrm{current}[k]$ by all dilated mechanisms\;
}
}
\ForEach{generated command where target player $l$ is in this group}{
Update $M_\textrm{current}[l]$ by $f_{\text{target}}$ of the command\;
}
}
}
Add the rest of commands to $M_{\textrm{command}}$ by the target player id of the commands\;
\caption{Update all players by mechanisms.}
\label{alg:mechanisms}
\end{algorithm}
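The per-player dispatch inside Algorithm~\ref{alg:mechanisms} can be sketched as follows (mechanisms are modeled as plain functions returning commands; all names are hypothetical):
\begin{verbatim}
# Regular mechanisms run every turn; dilated mechanisms run only
# when the time dilation counter mu is non-negative.
def process_player(player, view, regular_mechs, dilated_mechs):
    commands = []
    for mech in regular_mechs:
        commands += mech(player, view)
    if player.mu >= 0:
        for mech in dilated_mechs:
            commands += mech(player, view)
    return commands
\end{verbatim}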
\subsection{Process command map} \label{ssec:command}
The command map $M_{\textrm{command}}$ is a map from player id to the list of commands that are being sent to that player.
At each turn, the spacetime interval between the current position of the target player and the event at which each command in the list was sent is calculated,
and a command is executed on the player once this interval is non-negative.
Algorithm~\ref{alg:command} illustrates the process.
\begin{algorithm}
\KwInput{\\
\Indp
$M_\textrm{current}$ map from player id to current player data\\
$M_{\textrm{command}}$ map from player id to lists of commands
}
\ForEach{key $i$ in $M_{\textrm{command}}$}{
Get the integer coordinates $\textbf{S}_i$ of player $i$ from $M_\textrm{current}[i]$\;
\ForEach{command $C$ in the list $M_{\textrm{command}}[i]$}{
\If{$c^2 (T_{\textrm{current}} - T_{\textrm{from}})^2 - \|\textbf{S}_{\textrm{from}} - \textbf{S}_i\|^2 \geq 0$}{
Update $M_\textrm{current}[i]$ by $f_{\text{target}}$ of the command\;
}
}
}
Remove all executed commands\;
\caption{Process command map.}
\label{alg:command}
\end{algorithm}
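In code, the delivery test of Algorithm~\ref{alg:command} amounts to checking that the sent event has entered the past light cone of the target (a sketch; \texttt{t\_from}, \texttt{s\_from} and \texttt{f\_target} are hypothetical field names):
\begin{verbatim}
def process_commands(m_current, m_command, t_current, c):
    for pid, commands in m_command.items():
        target = m_current[pid]
        remaining = []
        for cmd in commands:
            dt = t_current - cmd.t_from
            d2 = sum((a - b) ** 2
                     for a, b in zip(cmd.s_from, target.s))
            if (c * dt) ** 2 - d2 >= 0:   # spacetime interval check
                cmd.f_target(target)      # execute on the target
            else:
                remaining.append(cmd)
        m_command[pid] = remaining        # keep undelivered commands
\end{verbatim}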
\subsection{Move players and add afterimages} \label{ssec:move}
Moving players and storing their data requires additional considerations in this simulation framework.
Consider the following example:
\begin{enumerate}
\item assume player $i$ and player $j$ are located in the same cube,
\item player $j$ moves to the other cube,
\item the new information takes time to travel to player $i$, so player $i$ cannot see the new position of player $j$,
\item player $i$ cannot see the old information of player $j$ either, because player $j$ is no longer there,
\item player $j$ disappears from the sight of player $i$.
\end{enumerate}
This ``disappearance'' is caused by the problem of the integer-based coordinates used in the computation of player's 3D view.
Consider a more generic situation: suppose player $i$ is located at $\overrightarrow{U_i}$,
and player $j$ moves from $\overrightarrow{U_j}$ to $\overrightarrow{U_k}$.
Ignoring the possibility of zero time delay, the maximum time player $i$ has to wait to see player $j$
is bounded by Eq.~\ref{eq:deltaT},
\begin{align} \label{eq:deltaT}
\Delta T &= \tau(\overrightarrow{U_i}, \overrightarrow{U_j}) - \tau(\overrightarrow{U_i}, \overrightarrow{U_k}), \\
&= \left \lceil \frac{|\overrightarrow{U_i} - \overrightarrow{U_j}|}{c} \right \rceil - \left \lceil \frac{|\overrightarrow{U_i} - \overrightarrow{U_k}|}{c} \right \rceil, \\
&\leq \left \lceil \frac{|\overrightarrow{U_i} - \overrightarrow{U_j}|}{c} - \frac{|\overrightarrow{U_i} - \overrightarrow{U_k}|}{c} \right \rceil, \\
&\leq \left \lceil \frac{|\overrightarrow{U_j} - \overrightarrow{U_k}|}{c} \right \rceil, \\
&=\tau(\overrightarrow{U_j}, \overrightarrow{U_k}).
\end{align}
Therefore, if we include back the possibility where the time delay between player $i$ and player $j$ can be zero,
the maximum duration of the disappearance produced by the movement is bounded by $\Delta T_{\textrm{max}} = \tau((0, 0, 0), (1, 1, 1))$.
To prevent this unrealistic disappearance from happening, the old player data has to stay at the original
position for at least $\Delta T_{\textrm{max}}$ turns; we call this the ``afterimage'' of the player.
Note that afterimages only participate in the 3D view of players, they should not be updated by commands or mechanisms.
Algorithm~\ref{alg:move} does multiple things: it updates the universe time,
it moves players by their velocities, it synchronizes the time of players,
it stores old coordinates to the history of each player, it cleans the history if the stored coordinates are too old,
and it adds the current player and afterimages to the latest spatial 3D array in the 4D data array $M_{TXYZ}$.
Since the universe time has been updated, this simulation step has finished;
the universe should go to the next step and loop over all algorithms in Sec.~\ref{sec:simulation} again.
\begin{algorithm}
\KwInput{\\
\Indp
$M_\textrm{current}$ map from player id to current player data\\
Universe data
}
$T_{\textrm{current}} \gets T_{\textrm{current}} + 1$\;
Initialize a 3D array of maps from player id to lists of player data $M_{XYZ}$\;
\ForEach{data of player $i$ in $M_\textrm{current}$}{
$t_i \gets T_{\textrm{current}}$\;
$x_i \gets x_i + v_{ix}$\;
$y_i \gets y_i + v_{iy}$\;
$z_i \gets z_i + v_{iz}$\;
$T_i \gets T_{\textrm{current}}$\;
$X_i \gets \lfloor x_i \rfloor$\;
$Y_i \gets \lfloor y_i \rfloor$\;
$Z_i \gets \lfloor z_i \rfloor$\;
$g_i \gets g(\overrightarrow{u_i}, \overrightarrow{U_i})$ by Eq.~\ref{eq:group}\;
\If{coordinates or group is new}{
Save the previous coordinates to history $H_i$\;
}
\ForEach{$(T_i', X_i', Y_i', Z_i')$ in $H_i$}{
Remove from $H_i$ if $T_{\textrm{current}} - T_i' > \Delta T_{\textrm{max}}$\;
}
Save the new data to $M_{X_i Y_i Z_i}[i]$\;
\ForEach{$(T_i', X_i', Y_i', Z_i')$ in $H_i$}{
Find the old player data from $M_{T_i' X_i' Y_i' Z_i'}[i']$\;
Add the old player data to $M_{X_i' Y_i' Z_i'}[i']$\;
}
}
Drop the oldest 3D spatial array from $M_{TXYZ}$\;
Add $M_{XYZ}$ as the latest spatial array to $M_{TXYZ}$\;
\caption{Move player and add afterimages.}
\label{alg:move}
\end{algorithm}
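The afterimage bookkeeping in Algorithm~\ref{alg:move} reduces to pruning coordinate history older than $\Delta T_{\textrm{max}}$, for instance (a sketch):
\begin{verbatim}
# Keep old coordinates for at most dt_max turns, so a moving player
# never vanishes from the view of distant observers.
def prune_history(history, t_current, dt_max):
    return [(T, X, Y, Z) for (T, X, Y, Z) in history
            if t_current - T <= dt_max]
\end{verbatim}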
\section{Discussion}
The presented algorithms form the backbone of our computational framework,
``Relativitization''~\cite{relativitization}.
There are technical subtleties that are not discussed here,
such as creating new players, removing dead players,
introducing randomness to models, parallelization of the algorithms,
generating deterministic outcomes from parallelized simulations with random number generators,
interactive human input to intervene in a simulation, etc.
Nevertheless, the framework implements the major part of the technical subtleties,
and provides a suitable interface to ease the development of any 4D, relativistic ABM.
It could be interesting to implement a classical ABM in the framework.
Spatial ABMs with non-local interactions,
such as the classical flocking model~\cite{reynolds1987flocks},
are particularly suitable.
These models are naturally affected by the time delay
imposed by the speed of light limitation.
Simulating such a model in the Relativitization framework allows us to explore the effects of time delay on the model.
Ultimately, existing ABMs might not be suitable to describe interstellar society.
A solid understanding of social mechanisms and physics,
together with some artistic imagination,
are needed to build inspiring interstellar ABMs.
As a first step,
we have integrated a few social mechanisms to build a big ``model'', which is also a game.
The ``model'' can be found on the GitHub\footnote{https://github.com/Adriankhl/relativitization} repository of our framework.
Apart from the possibility of implementing different models using the framework,
the algorithms may also be optimized further.
For example, the iteration in Sec.~\ref{ssec:mechanisms} has a time complexity of $O(mn)$.
A naive alternative implementation to iterate over all the combinations of players could change the complexity to $O(n^2)$,
which could have better performance when the density of players is low.
We leave these potential improvements to future research.
\section{Conclusion}
In this paper, we have presented a set of algorithms to implement ABM simulations in a 4D, relativistic spacetime.
Based on these algorithms, we have developed a simulation framework we call ``Relativitization''~\cite{relativitization}.
Our framework will lower the barrier to entry for social scientists
to apply their expertise to explore the interstellar future of human civilization.
We hope our framework can be used to initiate meaningful and academically interesting discussions
about our future.
\section*{Acknowledgement}
We thank Diego Garlaschelli, Alexandru Babeanu, Michael Szell, and all QSS members of the CWTS institute for useful discussion.
\printbibliography
\end{document}
\section{Introduction}
In 1966 Greisen, Zatsepin and
Kuz'min \cite{G,Z} noted that the microwave background radiation
(MBR) makes the universe opaque
to cosmic rays of sufficiently high energy, yielding a steep drop
in the cosmic ray energy spectrum at approximately $ 5
\times 10^{19}$ eV (GZK cutoff). More recently, interest in
the topic has been rekindled since several extensive air
showers have been observed which
imply the arrival of cosmic rays with energies above $10^{20}$ eV.
In particular, the Akeno Giant Air Shower Array (AGASA)
experiment recorded an event
with energy 1.7 - 2.6 $\times 10^{20}$ eV \cite{Yoshi,Hasha},
the Fly's Eye experiment reported the highest energy cosmic ray
event ever
detected on Earth, with an energy 2.3 - 4.1 $\times 10^{20}$ eV
\cite{Bird1,Bird2}, both events being well above the GZK cutoff.
Deepening the mystery, the identification of the primary
particle in these showers is still uncertain. On the one hand, the Fly's Eye
group claims that there is evidence of a transition from a spectrum
dominated by heavy nuclei
to one of a predominantly light composition \cite{Bird1}, while
on the other hand,
it has also been suggested that a medium mass nucleus also fits the
shower profile of the highest energy Fly's Eye event \cite{H}.
In addition, there is
an unexpected energy gap
before these events. Although heavy nuclei can be accelerated to high
terminal energies by ``bottom up'' mechanisms,
one should note that, for energies above 100 EeV the range of
the corresponding sources
is limited to a few Mpc \cite{JCronin}. Sigl and co-workers \cite{Sigl} have
analysed the structure of the
high energy end
of the cosmic ray spectrum. They found that most
``bottom up''
models can be ruled out except for
those involving a nearby source, which is
consistent
with data at the 1$\sigma$ level. Their argument for this is that a
nearby source can account for the ultrahigh energy events but would also
produce events in the apparent gap in data obtained to date. In this
direction,
Elbert and Sommers have suggested that the highest energy event recorded by
Fly's Eye, could have been accelerated in the neighborhood of M82, which is
around
3 Mpc away \cite{ES,W}.
In relation to the aforementioned possibilities,
we have re-examined the interaction of ultrahigh energy nuclei with the
microwave background radiation and we have found a new feature
in the ultrahigh energy cosmic ray spectrum from iron sources located
around 3 Mpc
which forms the motivation for the present article.
\section{Energy attenuation length of ultrahigh energy nuclei}
The energy losses that extremely high energy nuclei suffer during
their trip to the Earth are due to their interaction with the
low energy photons of the MBR which they
see as highly blue-shifted. The interaction with other radiation
backgrounds (optical and infrared) can be safely neglected for
nuclei with Lorentz factors above $2 \times 10^9$. Although the
interactions of extremely high energy nuclei with
the relic photons lead to step-by-step energy loss (which needs to be
included in a transport equation as a collision integral), in what
follows we use the continuous energy loss
approximation assuming straight line propagation
which is reasonable for the energies and distances under consideration
in this paper.
The relevant mechanisms for energy losses are
photodisintegration and hadron photoproduction (which has a
threshold energy of $\approx 145$ MeV,
equivalent to a Lorentz factor of $10^{11}$, above
the range treated in this article) \cite{BLUE}.
The disintegration rate of a nucleus of mass $A$ with the subsequent
production of $i$ nucleons is given by the expression \cite{Ste69},
\begin{equation}
R_{Ai} = \frac{1}{2 \Gamma^2} \int_0^{\infty} dw \,
\frac{n(w)}{w^2} \, \int_0^{2\Gamma w} dw_r
\, w_r \sigma_{Ai}(w_r)
\label{rate}
\end{equation}
where $n(w)$ is the density of photons with energy $w$ in the
system of reference in which the microwave background is at 3K and
$w_r$ is the energy of the photons in the rest frame of the nucleus.
As usual, $\Gamma$ is the Lorentz factor and $\sigma_{Ai}$
is the cross section for the interaction. Using the expressions for the cross
section fitted by Puget {\it et al.} \cite{PSB}, it is possible to work out an
analytical solution for the nuclear disintegration rates \cite{sudaf}.
After summing them over all the possible channels for a given
number of nucleons one obtains the effective nucleon loss rate.
The effective $^{56}$Fe
nucleon loss rate obtained after carrying out
these straightforward but rather lengthy steps
can be parametrized by,
\begin{mathletters}
\begin{equation}
R(\Gamma)=3.25 \times 10^{-6}\,
\Gamma^{-0.643}
\exp (-2.15 \times 10^{10}/\Gamma)\,\, {\rm s}^{-1}
\end{equation}
if $\Gamma \,\in \, [10^{9}, 3.68 \times 10^{10}]$, and
\begin{equation}
R(\Gamma) =1.59 \times 10^{-12} \,
\Gamma^{-0.0698}\,\, {\rm s}^{-1}
\end{equation}
if $ \Gamma\, \in\, [3.68 \times 10^{10}, 10^{11}]$.
\end{mathletters}
It is noteworthy that knowledge of the iron
effective nucleon loss rate alone is enough to obtain the corresponding
value of
$R$ for any other nucleus \cite{PSB}.
The emission of nucleons is isotropic in the
rest frame of the nucleus, and so the average fractional
energy loss equals the
fractional loss in mass number of the nucleus; in other words, the Lorentz
factor is conserved.
The relation which determines
the attenuation length for energy is then, assuming an initial iron nucleus,
\begin{equation}
E = E_g \,\, e^{-R(\Gamma ) \, t / 56}
\label{constraint}
\end{equation}
where $E_g$ denotes the energy with which the nuclei were
emitted from the source, and $\Gamma = E_g / 56$.
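For concreteness, the parametrized rate and the attenuation relation (\ref{constraint}) can be evaluated numerically, e.g., in Python (a sketch; energies are expressed in units of the nucleon rest energy, so that $\Gamma = E_g/56$ as above):
\begin{verbatim}
import math

def R(gamma):
    # Effective 56Fe nucleon loss rate [1/s], from the
    # two-branch parametrization above.
    if gamma <= 3.68e10:
        return 3.25e-6 * gamma ** -0.643 * math.exp(-2.15e10 / gamma)
    return 1.59e-12 * gamma ** -0.0698

def surviving_energy(E_g, t):
    # Attenuation relation: E = E_g exp(-R(Gamma) t / 56),
    # with Gamma = E_g / 56.
    return E_g * math.exp(-R(E_g / 56.0) * t / 56.0)

# A 3 Mpc flight at ~c lasts t ~ 3 * 3.086e24 cm / 3e10 cm/s ~ 3e14 s.
\end{verbatim}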
\begin{figure}
\centering
\leavevmode\epsfysize=8cm \epsfbox{crnu.eps}\\
\caption{Energy of the surviving nuclei vs. propagation
distance. The energy attenuation length of the surviving
nucleons is also included (dotted line).}
\end{figure}
In Fig. 1 we have plotted the total energy of the heaviest surviving
fragment as a function of the distance for initial iron nuclei.
Note that the values obtained here are consistent with the
ones obtained by Cronin using Monte Carlo simulation \cite{JCronin}.
One
can see that nuclei with Lorentz factors above $10^{10}$ cannot survive
for more
than 10 Mpc \cite{Ste97}.
\begin{figure}
\centering
\leavevmode\epsfysize=8cm \epsfbox{slider.ps}\\
\caption{Relation between the injection energy of an iron nucleus
and the
final energy of the
photodisintegrated nucleus for different values of the propagation distance
(from grey to black 20 Mpc, 10 Mpc, 3.5 Mpc, 3 Mpc).}
\end{figure}
In Fig. 2 the relation between the injection
energy and the energy at a time $t$ for different propagation distances
is shown. The graph indicates that the final energy of the nucleus is
not a monotonic function. It has a maximum at a critical energy
and then decreases to a minimum before rising again as $\Gamma$
rises, as was first pointed out by Puget {\it et al.} \cite{PSB}.
The fact that the energy $E$ is a multivalued function of $E_g$
leads to a pile-up in the
energy spectrum. Moreover, this
behaviour enhances a hidden feature of the energy
spectrum for sources located beyond 2.6 Mpc: A depression that precedes
a bump
that would make the events at the end of the spectrum (just before the cutoff)
around 50\% more probable than those in the depressed region.
To illustrate this,
let us discuss the
evolution of the differential energy spectrum of nuclei.
\section{Modification of the cosmic ray spectrum}
The photodisintegration process
results in the production of nucleons of ultrahigh energies
with the same Lorentz factor of the parent nucleus.
As a consequence, the total number of particles is
not conserved during propagation. However, the solution of the problem
becomes quite simple if we separately
treat the evolution of the heaviest fragment and that of the nucleons
emitted from the travelling nuclei.
The evolution of the differential spectrum of the
surviving fragments is governed by a
balance equation
that takes into account the conservation of the
total number of particles in the spectrum.
Using the formalism presented by the authors in reference \cite{nos},
and considering the case of a single source located at $t_0$
from the observer, with
injection spectrum $Q(E_g, t) \,= \,\kappa \, E_g^{- \gamma} \,
\delta(t - t_0)$,
the number of particles with energy $E$ at time $t$ is given by,
\begin{equation}
N(E, t) dE = \frac{\kappa E_g^{-\gamma+1}}{E} dE,
\label{espectro}
\end{equation}
with $E_g$ fixed by the constraint (\ref{constraint}).
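Because the constraint (\ref{constraint}) is multivalued, a given arrival energy $E$ may correspond to several injection energies $E_g$; numerically one can simply scan a grid and collect sign changes (a sketch, reusing \texttt{surviving\_energy} from the previous snippet):
\begin{verbatim}
def injection_energies(E, t, grid):
    # All roots E_g of surviving_energy(E_g, t) = E on the grid;
    # multiple roots are the origin of the pile-up discussed below.
    roots, prev = [], None
    for E_g in grid:
        diff = surviving_energy(E_g, t) - E
        if prev is not None and prev * diff <= 0:
            roots.append(E_g)
        prev = diff
    return roots
\end{verbatim}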
Let us now consider the evolution of nucleons generated by
decays of nuclei during their propagation.
For Lorentz factors less than $10^{11}$
and distances less than 100 Mpc the energy with which the secondary
nucleons are produced is approximately equal to the energy with which they
are detected here on Earth. The number of nucleons with energy $E$ at time
$t$ can be approximated by the product of the number of nucleons generated
per nucleus and the number of nuclei emitted.
When the nucleons are emitted with energies above 100 EeV
the losses by meson photoproduction start to become significant.
However, these nucleons come from heavy nuclei with
Lorentz factors $\Gamma > 10^{11}$ which are completely disintegrated
in distances of less than 10 Mpc. Given that the mean free path of the
nucleons is about $\lambda_n \approx 10$ Mpc, it is reasonable to
define a characteristic time $\tau_{_{\Gamma}}$ given by the
moment in which the number of nucleons is reduced to $1/e$ of its
initial value $A_0$. In order to determine the modifications of the spectrum
due to the losses which the nucleons suffer due to interactions with the relic
photons, we assume that the iron nucleus emitted at $t = t_0$
is a travelling source which at the end of a time $\tau_{_{\Gamma}}$
has emitted the 56 nucleons together. In this way the injection
spectrum of nucleons ($\Gamma \approx 10^{11}$) can be approximated by,
\begin{equation}
q(E_G,t) =
\kappa \, A_0^{-\gamma+1} \, E_G^{-\gamma} \delta(t - \tau_{_{\Gamma}}),
\end{equation}
where $A_0$ is the mass of the initial nucleus and the energy with
which the nucleons are generated is given by
$E_G = E_g / A_0$.
\begin{figure}
\centering
\leavevmode\epsfysize=8cm \epsfbox{3etafe.eps}\\
\caption{Modification factors for sources of iron nuclei at 20 Mpc together
with the spectra of secondary nucleons.}
\end{figure}
The number of nucleons with energy $E$ at time $t$ is
given by,
\begin{equation}
n(E,t) dE = \frac{\kappa \, A_0^{-\gamma+2} \, E_G^{-\gamma+1}}{E} dE
\end{equation}
and the relation between injection energy and the energy at time $t$ remains
fixed by the relation, $\, A \, (t - \tau_{_{\Gamma}}) \, - \,
{\rm{Ei}}\,(B/E)
+ \, {\rm{Ei}}\, (B/E_G)
= 0
$, Ei being the exponential integral,
and $A$, $B$ the parameters of the fractional energy loss of
nucleons previously fitted by the authors \cite{nos}.
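The implicit relation above can be solved for $E$ with standard tools, for instance (a sketch; \texttt{A\_loss} and \texttt{B\_loss} stand for the fit parameters of \cite{nos}, whose values are not reproduced here, and the bracketing interval is only illustrative):
\begin{verbatim}
from scipy.special import expi      # exponential integral Ei
from scipy.optimize import brentq

def nucleon_energy(E_G, t, tau_G, A_loss, B_loss):
    # Root in E of  A (t - tau_G) - Ei(B/E) + Ei(B/E_G) = 0.
    f = lambda E: (A_loss * (t - tau_G)
                   - expi(B_loss / E) + expi(B_loss / E_G))
    return brentq(f, 1e-3 * E_G, E_G)
\end{verbatim}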
The modification factor $\eta$ is defined as the ratio between the
modified spectrum and the unmodified one. In Fig. 3 we plot the
modification factors for the case of sources of iron nuclei (propagation
distance 20 Mpc) together with the spectra of secondary nucleons. It is
clear that the spectrum of secondary nucleons around the pile-up is
at least one order of magnitude less than the one of the surviving
fragments.
In Figures 4 and 5 we have
plotted the modification factor
for different propagation distances around 3 Mpc.
They display a bump and a cutoff and, in addition,
a depression before the bump. It is important to stress that the
mechanism that produces the pile-up which can be seen
in Figures 3, 4 and 5 is completely different to the one that produces
the bump in the case of nucleons.
In this last case, the photomeson production involves
the creation of new particles that carry off energy yielding
nucleons with energies ever closer to the photomeson production threshold.
This mechanism, modulated by the fractional energy loss, is responsible for
the bump in the spectrum.
The cutoff is a consequence of the conservation of the number
of particles together with the properties of the injection spectrum
($\int_{E_{_{\rm th}}}^\infty
E_g^{-\gamma} dE_g< \infty$).
\begin{figure}
\centering
\leavevmode\epsfysize=4.8cm \epsfbox{gapa.ps}\\
\caption{Modification factor of single-source energy
spectra for different values of propagation distance (from grey to
black 3 Mpc, 2.7 Mpc and 2.6 Mpc) assuming a
differential
power law injection spectrum with spectral index $\gamma = 2$.}
\end{figure}
In the case of nuclei, since the
Lorentz factor is conserved, the surviving fragments see the photons of
the thermal
background always at the same energy.
Then, despite the fact that nuclei injected with energies over the
photodisintegration threshold lose energy by losing mass, they never
drop below the threshold.
The observed pile-up in the modification factors is due solely to the
multivalued nature of the energy
at time $t$ as a function of the injection energy: Nuclei injected with
different energies can arrive with the same energy but with different masses.
It is clear that, except in the region of the pile-up, the modification
factor $\eta$ is less than unity, since $\eta = (E/E_g)^{\gamma-1}$.
This assertion seems to be in contradiction with the conservation of
particle number.
Actually, the conservation of the Lorentz factor implies,
\begin{equation}
\kappa \,E_g^{-\gamma}\,dE_g|_{_\Gamma} = N(E,t)\, dE|_{_\Gamma}
\label{es}
\end{equation}
in accord with the conservation of the number of particles in
the spectrum. Moreover, the condition (\ref{es}) completely determines
the evolution of the energy spectrum of the surviving fragments
(\ref{espectro}).
Note that in order to compare the modified and unmodified spectra, with
regard to conservation of particle number, one has
to take into account that the corresponding energies are shifted. As follows
from (\ref{es}), the
conservation of the number of particles in the spectrum is
given by,
\begin{equation}
\int_{E_{_{\rm th}}}^{E_{\pi_{\rm th}}} N(E,t) dE =
\int_{E_{g_{\rm th}}}^{E_{g{\pi_{\rm th}}}} \kappa\,E_g^{-\gamma}\ dE_g
\end{equation}
with $E_{_{\rm th}}$ and $E_{\pi_{\rm th}}$ the threshold energies
for photodisintegration, and photopion production processes respectively.
Let us now return to the analysis of Fig. 2
in relation to the depression in the spectrum.
In the case of a nearby iron source, located around 3 Mpc, and
for injection energies below the multivalued region of the
function $E (E_g)$, $E$ is clearly less than $E_g$
and, as a consequence the depression in the
modification factor is apparent.
Then, despite the violence of the photodisintegration process via the
giant dipole resonance, for nearby sources none of the
injected nuclei are completely
disintegrated yielding this unusual depression before the bump.
For a flight distance of 3 Mpc, the composition of the arrival nuclei
changes from $A=50$ (for $\Gamma \approx 10^9$) to $A=13$ (for
$\Gamma \approx 10^{11}$).
However, the most important variation takes place in the region of the bump,
where $A$ runs from 48 to 13, with nuclei of $A=33$ being the most abundant.
For propagation distances greater than 10 Mpc one would expect just
nucleons to arrive for injection energies above $9 \times 10^{20}$ eV. In this
case the function becomes multivalued below the
photodisintegration threshold and then there is no depression at all.
For an iron source located at 3.5 Mpc, the
depression in the spectrum is almost invisible $({\cal O}(1\%))$,
in good agreement with the
results previously obtained by Elbert and Sommers using Monte Carlo
simulation \cite{ES}.
\begin{figure}
\centering
\leavevmode\epsfysize=4.8cm \epsfbox{gapb.ps}\\
\caption{Same as Fig. 4 with spectral index $\gamma = 2.5$.}
\end{figure}
\section{Conclusions}
We have
studied the interaction of ultra high energy nuclei with the MBR.
We have
presented a parametrization of the fractional
energy loss for Lorentz factors up to $10^{11}$
that allows us to analyse the evolution of the energy spectrum for
different nuclei sources.
When considering
an iron source located around
3 Mpc, the spectrum exhibits a depression before a bump not previously
reported.
In the light of this finding it is
tempting to speculate whether the apparent gap in the existing data
is due to the relative weight of the depression
and the bump if a source of iron nuclei is responsible
for the end of the cosmic ray spectrum. This speculation, if true,
reclaims "botton up" models as a
possible scenario for the origin of
the highest energy cosmic rays.
The limited statistics in the observed data make
it impossible to resolve the question definitively at this time, and
we are obliged to present this idea as a hypothesis to be tested by experiment.
The existence of a cutoff or a gap
which might be present in the observed spectrum
is of fundamental
interest in cosmic ray physics, allowing stringent tests of
existing models.
The future Pierre Auger Project \cite{Desrep} should provide
enough statistics for a final verdict on these open questions, and in
particular on the ideas discussed in this paper.
\acknowledgments
Special thanks go to Prof. James Cronin for stimulating discussions.
This research was supported in part by CONICET and FONCYT. L.A.A. thanks FOMEC
for financial support.
\section{Introduction}
The concept of \emph{abelian object} plays a key role in
categorical algebra. In the study of categories of non-abelian
algebraic structures---such as groups, Lie algebras, loops, rings,
crossed modules, etc.---the ``abelian case'' is usually seen as a
basic starting point, often simpler than the general case, or
sometimes even trivial. Typically there are results, known in the abelian case, which
may or may not extend to the surrounding non-abelian setting.
Part of categorical algebra deals with such generalisation issues,
which tend to become more interesting precisely where this
extension is not straightforward. Abstract commutator theory for
instance, which is about \emph{measuring non-abelianness}, would
not exist without a formal interplay between the abelian and the
non-abelian worlds, enabled by an accurate definition of
abelianness.
Depending on the context, several approaches to such a
conceptualisation exist. Relevant to us are those considered
in~\cite{Borceux-Bourn}; see also~\cite{Huq, Smith, Pedicchio} and
the references in~\cite{Borceux-Bourn}. The easiest is probably to
say that an \textbf{abelian object} is an object which admits an
internal abelian group structure. This makes sense as soon as the
surrounding category is \emph{unital}---a condition introduced
in~\cite{Bourn1996}, see below for details---which is a rather
weak additional requirement on a pointed category implying that an
object admits at most one internal abelian group structure. So
that, in this context, ``being abelian'' becomes a property of the
object in question.
The full subcategory of a unital category $\ensuremath{\mathbb{C}} $ determined by the
abelian objects is denoted $\ensuremath{\mathsf{Ab}}(\ensuremath{\mathbb{C}}) $ and called the
\textbf{additive core} of $\ensuremath{\mathbb{C}} $. The category $\ensuremath{\mathsf{Ab}}(\ensuremath{\mathbb{C}}) $ is indeed
additive, and if $\ensuremath{\mathbb{C}} $ is a finitely cocomplete
regular~\cite{Barr} unital category, then~$\ensuremath{\mathsf{Ab}}(\ensuremath{\mathbb{C}}) $ is a
reflective~\cite{Borceux-Bourn} subcategory of $\ensuremath{\mathbb{C}} $. If $\ensuremath{\mathbb{C}}$ is
moreover Barr exact~\cite{Barr}, then~$\ensuremath{\mathsf{Ab}}(\ensuremath{\mathbb{C}}) $ is an abelian
category, and called the \textbf{abelian core} of $\ensuremath{\mathbb{C}} $.
For instance, in the category $\ensuremath{\mathsf{Lie}}_{K}$ of Lie algebras over a
field $K$, the abelian objects are $K$-vector spaces, equipped
with a trivial (zero) bracket; in the category $\ensuremath{\mathsf{Gp}}$ of groups,
the abelian objects are the abelian groups, so that
$\ensuremath{\mathsf{Ab}}(\ensuremath{\mathsf{Gp}})=\ensuremath{\mathsf{Ab}}$; in the category $\ensuremath{\mathsf{Mon}}$ of monoids, the abelian
objects are abelian groups as well: $\ensuremath{\mathsf{Ab}}(\ensuremath{\mathsf{Mon}})=\ensuremath{\mathsf{Ab}}$; etc. In all
cases the resulting commutator theory behaves as expected.
\subsection*{Beyond abelianness: weaker conditions}
The concept of an abelian object has been well studied and
understood. For certain applications, however, it is too strong:
the ``abelian case'' may not just be \emph{simple}, it may be
\emph{too simple}. Furthermore, abelianness may ``happen too
easily''. As explained in~\cite{Borceux-Bourn}, the
Eckmann--Hilton argument implies that any internal monoid in a
unital category is automatically a \emph{commutative} object. For
instance, in the category of monoids any internal monoid is
commutative, so that in particular an internal group is always
abelian: $\ensuremath{\mathsf{Gp}}(\ensuremath{\mathsf{Mon}})=\ensuremath{\mathsf{Ab}}$. Amongst other things, this fact is well
known to account for the abelianness of the higher homotopy
groups.
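For the reader's convenience, here is the Eckmann--Hilton argument in its elementary algebraic form, which is what the categorical version boils down to: given two binary operations $\cdot$ and $\ast$ on the same set, with respective units $e$ and $u$, satisfying the interchange law $(a\ast b)\cdot(c\ast d)=(a\cdot c)\ast(b\cdot d)$, one computes
\begin{gather*}
e = e\cdot e = (u\ast e)\cdot(e\ast u) = (u\cdot e)\ast(e\cdot u) = u\ast u = u,\\
a\cdot b = (a\ast e)\cdot(e\ast b) = (a\cdot e)\ast(e\cdot b) = a\ast b,\\
a\cdot b = (e\ast a)\cdot(b\ast e) = (e\cdot b)\ast(a\cdot e) = b\ast a = b\cdot a,
\end{gather*}
so the two operations coincide and are commutative. In a unital category, the same computation applied to an internal monoid yields the commutativity statement above.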
If we want to capture groups amongst monoids, avoiding abelianness
turns out to be especially difficult. One possibility would be to
consider gregarious objects~\cite{Borceux-Bourn}, because the
``equation''
\begin{center}
commutative + gregarious = abelian
\end{center}
holds in any unital category. But this notion happens to be too
weak, since examples were found of gregarious monoids which are
not groups. On the other hand, as explained above, the concept of
an internal group is too strong, since it gives us abelian groups.
Whence the subject of our present paper: to find out how to
\begin{center}
characterise \emph{non-abelian} groups inside the category of
monoids
\end{center}
in categorical-algebraic terms. That is to say, is there some
weaker concept than that of an abelian object which, when
considered in $\ensuremath{\mathsf{Mon}}$, gives the category~$\ensuremath{\mathsf{Gp}}$?
This question took quite a long time to be answered. As explained
in~\cite{SchreierBook, BM-FMS2}, the study of monoid actions,
where an \textbf{action} of a monoid $B$ on a monoid $X$ is a monoid
homomorphism $B \to \End(X)$ from~$B$ to the monoid of
endomorphisms of~$X$, provided a first solution to this problem: a
monoid $B$ is a group if and only if all split epimorphisms with
codomain~$B$ correspond to monoid actions of~$B$. However, this
solution is not entirely satisfactory, since it makes use of
features which are typical for the category of monoids, and thus
cannot be exported to other categories.
Another approach to this particular question is to consider the
concept of $\ensuremath{\mathcal{S}}$-protomodularity~\cite{SchreierBook, S-proto,
Bourn2014}, which allows one to single out a
protomodular~\cite{Bourn1991} subcategory $\ensuremath{\mathcal{S}}(\ensuremath{\mathbb{C}})$ of a given
category~$\ensuremath{\mathbb{C}}$, depending on the choice of a convenient class $\ensuremath{\mathcal{S}}$
of points in $\ensuremath{\mathbb{C}}$---see below for details. Unlike the category of
monoids, the category of groups is protomodular. And indeed, when
$\ensuremath{\mathbb{C}}=\ensuremath{\mathsf{Mon}}$, the class $\ensuremath{\mathcal{S}}$ of so-called \emph{Schreier
points}~\cite{BM-FMS} does characterise groups in the sense that
$\ensuremath{\mathcal{S}}(\ensuremath{\mathsf{Mon}})=\ensuremath{\mathsf{Gp}}$. A~similar characterisation is obtained through the
notion of $\ensuremath{\mathcal{S}}$-Mal'tsev categories~\cite{Bourn2014}. However, these
characterisations are ``relative'', in the sense that they depend on
the choice of a class $\ensuremath{\mathcal{S}}$. Moreover, the definition of the class
$\ensuremath{\mathcal{S}}$ of Schreier points is ad hoc, given that it again crucially
depends on $\ensuremath{\mathbb{C}}$ being the category of monoids. So the problem is
somehow shifted to another level.
The approach proposed in our present paper is different because it
is \emph{local} and \emph{absolute}, rather than \emph{global} and
\emph{relative}. ``Local'' here means that we consider conditions
defined object by object: \emph{protomodular} objects,
\emph{Mal'tsev} objects, \emph{(strongly) unital} objects and
\emph{subtractive} objects, whereas $\ensuremath{\mathcal{S}}$-protomodularity deals with
the protomodular subcategory $\ensuremath{\mathcal{S}}(\ensuremath{\mathbb{C}})$ as a whole. ``Absolute''
means that there is no class $\ensuremath{\mathcal{S}}$ for the definitions to depend
on.
More precisely, we show in Theorem~\ref{groups = protomodular
monoids} that the notions of a protomodular object and a Mal'tsev
object give the desired characterisation of groups amongst
monoids---whence the title of our paper. Moreover, we find
suitable classes of points which allow us to establish the link
between our absolute approach and the relative approach of
$\ensuremath{\mathcal{S}}$-protomodularity and the $\ensuremath{\mathcal{S}}$-Mal'tsev condition
(Proposition~\ref{proto objs=proto core} and
Proposition~\ref{Mal'tsev objs = Mal'tsev core}).
The following table gives an overview of the classes of objects we
consider, and what they amount to in the category of monoids
$\ensuremath{\mathsf{Mon}}$ and in the category of semi\-rings $\ensuremath{\mathsf{SRng}}$. Here $\ensuremath{\mathsf{GMon}}$
denotes the category of gregarious monoids mentioned above.
\begin{table}[h!]
\caption{Special objects in the categories $\ensuremath{\mathsf{Mon}}$ and $\ensuremath{\mathsf{SRng}}$}
\begin{tabular}{cccccc}
\toprule \txt{all\\ objects} & \txt{unital\\ objects} &
\txt{subtractive\\ objects} & \txt{strongly unital\\ objects} &
\txt{Mal'tsev\\ objects} & \txt{protomodular\\objects}\\\midrule
$\ensuremath{\mathbb{C}}$ & $\ensuremath{\mathsf{U}}(\ensuremath{\mathbb{C}})$ & $\S(\ensuremath{\mathbb{C}})$ & $\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$ & $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ & $\P(\ensuremath{\mathbb{C}})$
\\\midrule
$\ensuremath{\mathsf{Mon}}$ & $\ensuremath{\mathsf{Mon}}$ & $\ensuremath{\mathsf{GMon}}$ & $\ensuremath{\mathsf{GMon}}$ & $\ensuremath{\mathsf{Gp}}$ & $\ensuremath{\mathsf{Gp}}$\\
$\ensuremath{\mathsf{SRng}}$ & $\ensuremath{\mathsf{SRng}}$ & $\ensuremath{\mathsf{Rng}}$ & $\ensuremath{\mathsf{Rng}}$ & $\ensuremath{\mathsf{Rng}}$ & $\ensuremath{\mathsf{Rng}}$\\
\bottomrule
\end{tabular}
\label{overview}
\end{table}
In function of the category $\ensuremath{\mathbb{C}}$ it is possible to separate all
classes of special objects occurring in Table~\ref{overview}.
Indeed, a given category is unital, say, precisely when all of its
objects are unital; while there exist examples of unital
categories which are not subtractive, Mal'tsev categories which
are not protomodular, and so on.
The present paper is the starting point of an exploration of this
new object-wise approach, which is being further developed in
ongoing work. For instance, the article~\cite{GM-ACS} provides a
simple direct proof of a result which implies our
Theorem~\ref{groups = protomodular monoids}, and in~\cite{GM-VdL1}
cocommutative Hopf algebras over an algebraically closed field are
characterised as the protomodular objects in the category of
cocommutative bialgebras.
\subsection*{Example: protomodular objects}
Let us, as an example of the kind of techniques we use, briefly
sketch the definition of a protomodular object. Given an object
$B$, a \textbf{point over $B$} is a pair of morphisms $(f\colon
{A\to B},s\colon{B\to A})$ such that $fs=1_{B}$. A~category with
finite limits is said to be
\textbf{protomodular}~\cite{Bourn1991,Borceux-Bourn} when for every
pullback
\[
\vcenter{\xymatrix@!0@=5em{ C\times_{B}A \ophalfsplitpullback
\ar[r]^-{\pi_A} \ar@<-.5ex>[d]_-{\pi_C} & A \ar@<-.5ex>[d]_-f
\\
C \ar@<-.5ex>[u] \ar[r]_-g & B \ar@<-.5ex>[u]_-s }}
\]
of a point $(f,s)$ over $B$ along some morphism $g$ with codomain
$B$, the morphisms~$\pi_A$ and $s$ are \textbf{jointly strongly
epimorphic}: they do not both factor through a given proper
subobject of $A$. In a pointed context, this condition is
equivalent to the validity of the \emph{split short five
lemma}~\cite{Bourn1991}. This observation gave rise to the notion
of a \textbf{semi-abelian} category---a pointed, Barr exact,
protomodular category with finite
coproducts~\cite{Janelidze-Marki-Tholen}---which plays a
fundamental role in the development of a categorical-algebraic
approach to homological algebra for non-abelian structures; see
for
instance~\cite{Bourn-Janelidze:Torsors,EGVdL,Butterflies,CMM1,RVdL2}.
A point $(f,s)$ satisfying the condition mentioned above (that
$\pi_A$ and $s$ are jointly strongly epimorphic) is called a
\textbf{strong point}. When also all of its pullbacks satisfy this
condition, it is called a \textbf{stably strong point}. We shall say
that $B$ is a \textbf{protomodular object} when all points over $B$
are stably strong points. Writing $\P(\ensuremath{\mathbb{C}})$ for the
full subcategory of $\ensuremath{\mathbb{C}}$ determined by the protomodular objects,
we clearly have that $\P(\ensuremath{\mathbb{C}})=\ensuremath{\mathbb{C}}$ if and only if $\ensuremath{\mathbb{C}}$ is a
protomodular category. In fact, $\P(\ensuremath{\mathbb{C}})$ is \emph{always} a
protomodular category, as soon as it is closed under finite limits
in~$\ensuremath{\mathbb{C}}$. We study some of its basic properties in
Section~\ref{Protomodular objects}, where we also prove one of our
main results: if $\ensuremath{\mathbb{C}}$ is the category of monoids, then $\P(\ensuremath{\mathbb{C}})$ is
the category of groups (Theorem~\ref{groups = protomodular
monoids}). This is one of two answers to the question we set out
to study, the other being a characterisation of groups amongst
monoids as the so-called \emph{Mal'tsev objects} (essentially
Theorem~\ref{Mal'tsev monoids are groups}).
\subsection*{Structure of the text}
Since the concept of a (stably) strong point plays a key role in
our work, we recall its definition and discuss some of its basic
properties in Section~\ref{SSP}. Section~\ref{section S-Mal'tsev
and S-protomodular} recalls the definitions of $\ensuremath{\mathcal{S}}$-Mal'tsev and
$\ensuremath{\mathcal{S}}$-protomodular categories in full detail.
In Section~\ref{SUO} we introduce the concept of \emph{strongly
unital} object. We show that these coincide with the
\emph{gregarious} objects when the surrounding category is
regular. We prove stability properties and characterise rings
amongst semirings as the strongly unital objects
(Theorem~\ref{SU(SRng)=Rng}).
Section~\ref{USO} is devoted to the concepts of \emph{unital} and
\emph{subtractive} object. Our main result here is
Proposition~\ref{SU=SU} which, mimicking Proposition~3
in~\cite{ZJanelidze-Subtractive}, says that an object of a pointed
regular category is strongly unital if and only if it is unital
and subtractive.
In Section~\ref{MO} we introduce \emph{Mal'tsev} objects and prove
that any Mal'tsev object in the category of monoids is a group
(Theorem~\ref{Mal'tsev monoids are groups}).
Section~\ref{Protomodular objects} treats the concept of a
\emph{protomodular} object. Here we prove our paper's main result,
Theorem~\ref{groups = protomodular monoids}: a monoid is a group
if and only if it is a protomodular object, and if and only if it
is a Mal'tsev object. We also explain in which sense the full
subcategory determined by the protomodular objects is a
protomodular core~\cite{S-proto}.
\section{Stably strong points}\label{SSP}
We start by recalling some notions that occur frequently in
categorical algebra, focusing on the concept of a \emph{strong
point}.
\subsection{Jointly strongly epimorphic pairs}
A cospan $(r\colon C\to A, s\colon B\to A)$ in a category $\ensuremath{\mathbb{C}}$ is
said to be \textbf{jointly extremally epimorphic} when it does not
factor through a monomorphism, which means that for any
commutative diagram where $m$ is a monomorphism
\[
\xymatrix@!0@=4em{ & M \ar@{ >->}[d]^- m \\
C \ar[r]_-r \ar[ur] & A & B, \ar[l]^-s \ar[ul]}
\]
the monomorphism $m$ is necessarily an isomorphism. If $\ensuremath{\mathbb{C}}$ is
finitely complete, then it is easy to see that the pair $(r,s)$ is
jointly epimorphic. In fact, in a finitely complete category the
notions of extremal epimorphism and strong epimorphism coincide.
Therefore, we usually refer to the pair $(r,s)$ as being
\textbf{jointly strongly epimorphic}. Recall that, if $\ensuremath{\mathbb{C}}$ is
moreover a regular category~\cite{Barr}, then extremal
epimorphisms and strong epimorphisms coincide with the regular
epimorphisms.
\subsection{The fibration of points}
A \textbf{point} $(f\colon{A\to B},s\colon{B\to A})$ in $\ensuremath{\mathbb{C}}$ is a
split epimorphism $f$ with a chosen splitting $s$. Considering a
point as a diagram in~$\ensuremath{\mathbb{C}}$, we obtain the category of points in
$\ensuremath{\mathbb{C}}$, denoted $\ensuremath{\mathsf{Pt}}(\ensuremath{\mathbb{C}}) $: morphisms between points are pairs $(x,
y) \colon(f,s)\to (f',s')$ of morphisms in $\ensuremath{\mathbb{C}}$ making the diagram
\[
\xymatrix@!0@=4em{B \ar[r]^-{s} \ar[d]_y & A \ar[r]^-{f} \ar[d]^-{x} & B \ar[d]^y \\
B' \ar[r]_{s'} & A' \ar[r]_-{f'} & B'}
\]
commute. If $\ensuremath{\mathbb{C}}$ has pullbacks of split epimorphisms, then the
forgetful functor $\cod \colon {\ensuremath{\mathsf{Pt}}(\ensuremath{\mathbb{C}}) \to \ensuremath{\mathbb{C}}}$, which associates
with every split epimorphism its codomain, is a fibration, usually
called the
\textbf{fibration of points}~\cite{Bourn1991}. Given an object $B$
of~$\ensuremath{\mathbb{C}}$, we denote the fibre over $B$ by $\ensuremath{\mathsf{Pt}}_B(\ensuremath{\mathbb{C}})$. An object in
this category is a point with codomain $B$, and a morphism is of
the form~$(x, 1_B)$.
\subsection{Strong points}
We now assume $\ensuremath{\mathbb{C}}$ to be a finitely complete category.
\begin{definition}
We say that a point $(f\colon{A\to B},s\colon{B\to A})$ is a
\textbf{strong point} when for every pullback
\begin{equation}
\label{strong point diagram} \vcenter{\xymatrix@!0@=5em{
C\times_{B}A \ophalfsplitpullback \ar[r]^-{\pi_A}
\ar@<-.5ex>[d]_-{\pi_C} & A \ar@<-.5ex>[d]_-f
\\
C \ar@<-.5ex>[u]_(.4){\langle 1_{C}, sg \rangle} \ar[r]_-g & B
\ar@<-.5ex>[u]_-s }}
\end{equation}
along any morphism $g \colon {C \to B}$, the pair $(\pi_A, s)$ is
jointly strongly epimorphic.
\end{definition}
Strong points were already considered
in~\cite{MartinsMontoliSobral2}, under the name of \emph{regular
points} (in a regular context), and independently
in~\cite{Bourn-monad}, under the name of \emph{strongly split
epimorphisms}.
Many algebraic categories have been characterised in terms of
properties of strong points (see~\cite{Bourn1996, Borceux-Bourn}),
some of which we recall throughout the text. For instance, by
definition, a finitely complete category is
\textbf{protomodular}~\cite{Bourn1991} precisely when all points in
it are strong. For a pointed category, this condition is
equivalent to the validity of the split short five
lemma~\cite{Bourn1991}. Examples of protomodular categories are
the categories of groups, of rings, of Lie algebras (over a
commutative ring with unit) and, more generally, every
\emph{variety of $\Omega$-groups} in the sense of Higgins
\cite{Higgins}. Protomodularity is also a key ingredient in the
definition of a \emph{semi-abelian
category}~\cite{Janelidze-Marki-Tholen}.
On the other hand, in the category of sets, a point $(f,s)$ is
strong if and only if $f$ is an isomorphism. To see this, it
suffices to pull it back along the unique morphism from the empty
set $\varnothing$.
\subsection{Pointed categories}
In a pointed category, we denote the kernel of a morphism $f$ by
$\ensuremath{\mathrm{ker}}(f)$. In the pointed case, the notion of strong point
mentioned above coincides with the one considered
in~\cite{MRVdL4}:
\begin{proposition}
Let $\ensuremath{\mathbb{C}}$ be a pointed finitely complete category.
\begin{enumerate}
\item A point $(f,s)$ in $\ensuremath{\mathbb{C}}$ is strong if and only if the pair
$(\ensuremath{\mathrm{ker}} (f),s)$ is jointly strongly epimorphic. \item Any split
epimorphism $f$ in a strong point $(f,s)$ is a normal epimorphism.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) If $(f,s)$ is a strong point, then $(\ensuremath{\mathrm{ker}} (f),s)$ is jointly
strongly epimorphic: to see this, it suffices to take the pullback
of $f$ along the unique morphism with domain the zero object.
Conversely, if we take an arbitrary pullback as in~\eqref{strong
point diagram}, then $\ensuremath{\mathrm{ker}} (f)=\pi_A \langle 0, \ensuremath{\mathrm{ker}} (f) \rangle$.
We conclude that $(\pi_A,s)$ is jointly strongly epimorphic
because $(\ensuremath{\mathrm{ker}} (f),s)$ is.
(2) Since $(f,s)$ is a strong point, the pair $(\ensuremath{\mathrm{ker}}(f),s)$ is
jointly strongly epimorphic; thus it is jointly epimorphic. It
easily follows that~$f$ is the cokernel of its kernel~$\ensuremath{\mathrm{ker}}(f)$.
\end{proof}
In a pointed finitely complete context, asking that certain
product projections are strong points gives rise to the notions of
a unital and of a strongly unital category. In fact, when for all
objects $X$, $Y$ in $\ensuremath{\mathbb{C}}$ the point
\[
(\pi_{X}\colon {X\times Y\to X},\quad \langle 1_{X},0 \rangle\colon X\to X\times Y)
\]
is strong, $\ensuremath{\mathbb{C}}$ is said to be a \textbf{unital}
category~\cite{Bourn1996}. The category $\ensuremath{\mathbb{C}}$ is called
\textbf{strongly unital} (\cite{Bourn1996}, see also Definition 1.8.3 and Theorem 1.8.15 in \cite{Borceux-Bourn}) when for every object~$X$ in $\ensuremath{\mathbb{C}}$
the point
\[
(\pi_{1}\colon {X\times X\to X},\quad \Delta_{X}=\langle 1_{X},1_{X}\rangle\colon X\to X\times X)
\]
is strong. Observe that we could equivalently ask the point
$(\pi_2, \Delta_X)$ to be strong. It is well known that every
strongly unital category is necessarily unital~\cite[Proposition
1.8.4]{Borceux-Bourn}.
\begin{example}\label{Examples unital}
As shown in~\cite[Theorem 1.2.15]{Borceux-Bourn}, a variety in the
sense of universal algebra is a unital category if and only if it
is a \textbf{J\'{o}nsson--Tarski variety}. This means that the
corresponding theory contains a unique constant~$0$ and a binary
operation $+$ subject to the equations $0 + x = x= x + 0$.
\end{example}
In particular, the categories of monoids and of semirings are
unital. Moreover, every pointed protomodular category is strongly
unital.
\subsection{Stably strong points}
We are especially interested in those points for which the
property of being strong is pullback-stable.
\begin{definition}
We say that a point $(f,s)$ is
\textbf{stably strong} if every pullback of it along any morphism is
a strong point. More explicitly, for any morphism~$g$, the point
$(\pi_C, \langle 1_C, sg \rangle)$ in Diagram~\eqref{strong point
diagram} is strong.
\end{definition}
Note that a stably strong point is always strong (it suffices to
pull it back along the identity morphism) and that the collection
of stably strong points determines a subfibration of the fibration
of points. In a protomodular category, \emph{all} points are
stably strong (since all points are strong). In the category of
sets, all strong points are stably strong (since isomorphisms are
preserved by pullbacks). Nevertheless, in a finitely complete
category not all strong points are stably strong as can be seen in
the following examples.
\begin{example}
Let $\ensuremath{\mathbb{C}}$ be any pointed non-unital category. (For instance, the
category of Hopf algebras over a field is such~\cite{GM-VdL1}.)
Necessarily then, certain product inclusions are not jointly
strongly epimorphic. Let $(\pi_{X},\langle 1_{X},0 \rangle)\colon
{X\times Y\leftrightarrows X}$ be a product projection which is
not a strong point. It is a pullback of the point $Y
\leftrightarrows 0$, which is obviously strong---but not stably
strong.
\end{example}
\begin{example}\label{only strong}
A variety of universal algebras is said to be \textbf{subtractive}~\cite{Ursini3} when the corresponding
theory contains a unique constant $0$ and a binary operation $s$, called a \textbf{subtraction}, subject to the
equations $s(x,0) = x$ and $s(x,x) = 0$. We write $\ensuremath{\mathsf{Sub}}$ for the subtractive variety of \textbf{subtraction algebras}, which are
triples $(X,s,0)$ where $X$ is a set, $s$ a subtraction on $X$ and $0$ the corresponding constant.
Let $T$ be the subtraction algebra
$$
\begin{array}{c|cc}
s & 0 & a \\
\hline
0 & 0 & 0 \\
a & a & 0
\end{array}
$$
Then $(\pi_1, \Delta_T)\colon {T\times T\leftrightarrows T}$ is a
strong point, since $(\langle 0,1_{T}\rangle, \Delta_T)$ is a
jointly strongly epimorphic pair of arrows. Indeed,
$(a,0)=(s(a,0),s(a,a))=s((a,a),(0,a))$.
Let $X$ be the subtraction algebra
$$
\begin{array}{c|ccc}
s & 0 & u & v \\
\hline
0 & 0 & 0 & 0\\
u & u & 0 & 0 \\
v & v & 0 & 0
\end{array}
$$
and consider the constant map $f\colon X\to T\colon x\mapsto 0$.
The pullback of the point $(\pi_1, \Delta_T)\colon {T\times
T\leftrightarrows T}$ along $f$ gives the point $(\pi_X,\langle
1_X,0\rangle)\colon {X\times T\leftrightarrows X}$.
It is easy to see that this point is not strong: the only way the
pair $(u,a)\in X\times T$ can be written as a difference is
$(u,a)=(s(u,0),s(a,0))=s((u,a),(0,0))$. Alternatively, we can
consider the subalgebra $M=\{(0,0), (0,a),(u,0),(v,0)\}$ of the
product~${X\times T}$. $M$ is strictly smaller than $X\times T$,
since it does not contain the element~$(u,a)$. Note that the
restriction of the subtraction on $X\times T$ to $M$ is given by
$$
\begin{array}{c|cccc}
s & (0,0) & (0,a) & (u,0) & (v,0) \\
\hline
(0,0) & (0,0) & (0,0) & (0,0) & (0,0) \\
(0,a) & (0,a) & (0,0) & (0,a) & (0,a) \\
(u,0) & (u,0) & (u,0) & (0,0) & (0,0) \\
(v,0) & (v,0) & (v,0) & (0,0) & (0,0)
\end{array}
$$
so it does indeed define an operation on $M$. On the other hand,
the two product inclusions $\langle1_{X},0\rangle$ and $\langle
0,1_{T}\rangle$ do factor through $M$.
This allows us to conclude that the point $(\pi_1, \Delta_T)\colon
{T\times T\leftrightarrows T}$ is not stably strong.
\end{example}
\subsection{The regular case}
In the context of regular categories~\cite{Barr}, (stably) strong
points are
\textbf{closed under quotients}: this means that in any commutative
diagram
\[
\xymatrix@!0@=4em{
A \ar@<-.5ex>[d]_f \ar@{->>}[r]^\alpha & A' \ar@<-.5ex>[d]_{f'} \\
B \ar@<-.5ex>[u]_s \ar@{->>}[r]_\beta & B', \ar@<-.5ex>[u]_{s'} }
\]
where $\alpha$ and $\beta $ are regular epimorphisms and $(f,s)$
is (stably) strong, also $(f',s')$ is (stably) strong.
\begin{proposition} \label{stably strong points closed under quotients}
In a finitely complete category, strong points are closed under
quotients and stably strong points are closed under retractions.
In a regular category, stably strong points are closed under
quotients.
\end{proposition}
\begin{proof}
Let us first prove that the quotient of a strong point is always
strong. So let $(f,s)$ be a strong point, and consider the diagram
\[
\xymatrix@=4em@!0{ P \cubepullback \ar@{->}[rr]^{\alpha'}
\ar@<-.5ex>[dd] \ar[dr]^-{\pi_{A}} & & P' \cubepullback
\ar@<-.5ex>[dd]|{\hole} \ar[dr]^-{\pi_{A'}} & \\
& A \ar@<-.5ex>[dd]_(.3)f \ar@{->>}[rr]^(.3)\alpha & & A'
\ar@<-.5ex>[dd]_{f'} \\
C \bottomcubepullback \ar[dr]_{g} \ar@<-.5ex>[uu]
\ar@{->}[rr]|(.47){\hole}|(.53){\hole}^(.7){\beta '} & & C'
\ar@<-.5ex>[uu]|{\hole} \ar[dr]_{g'} & \\
& B \ar@<-.5ex>[uu]_(.7)s \ar@{->>}[rr]_\beta & & B',
\ar@<-.5ex>[uu]_-{s'} }
\]
where $P'$ is the pullback of $f'$ along an arbitrary morphism
$g'$, $C$ is the pullback of~$g'$ along $\beta $, and~$P$ is the
pullback of $f$ along $g$. By pullback cancelation, the upper
square is a pullback too. Since $\alpha$ is a regular epimorphism,
we have that $\alpha\pi_{A}$ and $\alpha s$ are jointly strongly
epimorphic. Then it easily follows that $\pi_{A'}$ and $s'$ are
jointly strongly epimorphic, so that the point $(f', s')$ is a
strong point.
If now $(f,s)$ is stably strong, then the point $P
\leftrightarrows C$ is strong. If $\alpha$ and $\beta$ are
retractions, then so are $\alpha'$ and $\beta'$. If $\alpha$ and
$\beta$ are regular epimorphisms in a regular category, then so
are $\alpha'$ and $\beta'$. In both cases, $P' \leftrightarrows
C'$ is strong as a quotient of $P \leftrightarrows C$. Hence
$(f',s')$ is stably strong.
\end{proof}
As a consequence, in a regular category, a point $(f,s)$ is stably
strong if and only if the point $(\pi_1,\langle 1_A, sf \rangle)$
induced by its kernel pair is stably strong. Equivalently one
could consider the point $(\pi_2,\langle sf, 1_A \rangle)$.
Certain pushouts involving strong points satisfy a stronger
property. Recall from~\cite{Bourn2003} that a \textbf{regular
pushout} in a regular category is a commutative square of regular
epimorphisms
\[
\vcenter{\xymatrix@!0@=4em{A' \ar@{->>}[d]_-{f'} \ar@{->>}[r]^-{\alpha} & A \ar@{->>}[d]^-f\\
B' \ar@{->>}[r]_-{\beta} & B}}
\]
where also the comparison arrow $\langle f',\alpha\rangle\colon
A'\to B'\times_{B}A$ is a regular epimorphism. Every regular
pushout is a pushout.
A \textbf{double split epimorphism} in a category $\ensuremath{\mathbb{C}}$ is a point in
the category of points in $\ensuremath{\mathbb{C}}$, so a commutative diagram
\begin{equation}
\label{double split extension} \vcenter{ \xymatrix@!0@=4em{ D
\ar@<-.5ex>[d]_{g'} \ar@<-.5ex>[r]_{f'} & C \ar@<-.5ex>[d]_g
\ar@<-.5ex>[l]_{s'} \\
A \ar@<-.5ex>[u]_{t'} \ar@<-.5ex>[r]_f & B \ar@<-.5ex>[l]_s
\ar@<-.5ex>[u]_t }}
\end{equation}
where the four ``obvious'' squares commute.
\begin{lemma}\label{Lemma Double}
In a regular category, every double split epimorphism as in
\eqref{double split extension}, in which $(g,t)$ is a stably
strong point, is a regular pushout.
\end{lemma}
\begin{proof}
Take the pullback $A\times_{B}C$ of $f$ and $g$, consider the
comparison morphism $\langle g',f'\rangle\colon {D\to
A\times_{B}C}$ and factor it as a regular epimorphism $e\colon
{D\to M}$ followed by a monomorphism $m\colon {M\to
A\times_{B}C}$. Since $(g,t)$ is a stably strong point, its
pullback $(\pi_{A},\langle1_{A},tf\rangle)$ in the diagram
\[
\xymatrix@!0@C=5em@R=4em{ C \ophalfsplitpullback \ar@<-.5ex>[d]_-g
\ar[r]^-{\langle sg, 1_{C} \rangle} & A\times_{B}C \ophalfsplitpullback \ar[r]^-{\pi_C}
\ar@<-.5ex>[d]_-{\pi_A} & C \ar@<-.5ex>[d]_-g
\\
B \ar@<-.5ex>[u]_(.4)t \ar[r]_-{s} & A \ar@<-.5ex>[u]_(.4){\langle
1_{A},tf \rangle} \ar[r]_-f & B \ar@<-.5ex>[u]_-t }
\]
is a strong point. As a consequence, the pair $(\langle
sg,1_{C}\rangle, \langle1_{A},tf\rangle)$ is jointly strongly
epimorphic. They both factor through the monomorphism $m$ as in
the diagram
\[
\xymatrix@!0@R=4em@C=5em{ & M \ar@{{ >}->}[d]^-{m} & \\
C \ar[ur]^-{es'} \ar[r]_-{\langle sg, 1_{C} \rangle} &
A\times_{B}C & A, \ar[l]^-{\langle 1_{A},tf \rangle} \ar[ul]_{et'}
}
\]
so that $m$ is an isomorphism.
\end{proof}
\begin{lemma}\label{Bourn Lemma}
In a regular category, consider a commutative square of regular
epimorphisms with horizontal kernel pairs
\begin{equation*}\label{SpecialRG}
\vcenter{\xymatrix@!0@=4em{\Eq(g) \ar@<-1ex>[r] \ar@{->>}[d]_-{f''} \ar@<1ex>[r] & A' \ar[l] \ar@{->>}[d]^-{f'} \ar@{->>}[r]^-{g} & A \ar@{->>}[d]^-f\\
\Eq(h) \ar@<-1ex>[r] \ar@<1ex>[r]
& B' \ar[l] \ar@{->>}[r]_-{h} & B. }}
\end{equation*}
If any of the commutative squares on the left is a regular pushout
(and so, in particular, $f''$ is a regular epimorphism), then the
square on the right is also a regular pushout.
\end{lemma}
\begin{proof}
The proof is essentially the same as that of Proposition 3.2 in
\cite{Bourn2003}.
\end{proof}
\begin{proposition}
In a regular category, every regular epimorphism of points
\[
\xymatrix@!0@=3em{ D \ar@<-.5ex>[d] \ar@{->>}[r] & C \ar@<-.5ex>[d] \\
A \ar@<-.5ex>[u] \ar@{->>}[r] & B, \ar@<-.5ex>[u] }
\]
where the point on the left (and hence also the one on the right)
is stably strong, is a regular pushout.
\end{proposition}
\begin{proof}
This follows immediately from Lemma~\ref{Lemma Double} and
Lemma~\ref{Bourn Lemma}.
\end{proof}
\section{$\ensuremath{\mathcal{S}}$-Mal'tsev and $\ensuremath{\mathcal{S}}$-protomodular categories}
\label{section S-Mal'tsev and S-protomodular}
As mentioned in Section~\ref{SSP}, a finitely complete category
$\ensuremath{\mathbb{C}}$ in which all points are (stably) strong defines a
protomodular category. If such an ``absolute'' property fails, one
may think of protomodularity in ``relative'' terms, i.e., with
respect to a class $\ensuremath{\mathcal{S}}$ of stably strong points. We also recall
the absolute and relative notions for the Mal'tsev context.
Recall that a finitely complete category $\ensuremath{\mathbb{C}}$ is called a
\textbf{Mal'tsev category}~\cite{CLP, CPP} when every internal
reflexive relation in $\ensuremath{\mathbb{C}}$ is automatically symmetric or,
equivalently, transitive; thus an equivalence relation.
Protomodular categories are always Mal'tsev
categories~\cite{Bourn1996}. If $\ensuremath{\mathbb{C}}$ is a regular category, then
$\ensuremath{\mathbb{C}}$ is a Mal'tsev category precisely when the composition of any pair of
(effective) equivalence relations $R$ and $S$ on the same object
commutes: $RS=SR$~\cite{CLP, Carboni-Kelly-Pedicchio}. Moreover,
Mal'tsev categories admit a well-known characterisation through
the fibration of points:
\begin{proposition}\cite[Proposition 10]{Bourn1996}\label{Mal'tsev via fibres}
A finitely complete category $\ensuremath{\mathbb{C}}$ is a Mal'\-tsev category if and
only if every fibre $\ensuremath{\mathsf{Pt}}_Y(\ensuremath{\mathbb{C}})$ is (strongly) unital.\hfill \qed
\end{proposition}
The condition that $\ensuremath{\mathsf{Pt}}_Y(\ensuremath{\mathbb{C}})$ is unital means that, for
every pullback of split epimorphisms
\begin{equation}
\label{pb of split epis} \vcenter{\xymatrix@!0@=5em{ A\times_{Y}C
\splitsplitpullback \ar@<-.5ex>[d]_{\pi_A}
\ar@<-.5ex>[r]_(.7){\pi_C} & C \ar@<-.5ex>[d]_g
\ar@<-.5ex>[l]_-{\langle sg,1_C \rangle} \\
A \ar@<-.5ex>[u]_(.4){\langle 1_A,tf \rangle} \ar@<-.5ex>[r]_f & Y
\ar@<-.5ex>[l]_s \ar@<-.5ex>[u]_t }}
\end{equation}
(which is a binary product in $\ensuremath{\mathsf{Pt}}_Y(\ensuremath{\mathbb{C}})$), the morphisms $\langle
1_{A}, tf \rangle$ and $\langle sg, 1_{C} \rangle$ are jointly
strongly epimorphic.
Let $\ensuremath{\mathbb{C}}$ be a finitely complete category, and $\ensuremath{\mathcal{S}}$ a class of
points which is stable under pullbacks along any morphism.
\begin{definition} \label{S-Mal'tsev and S-protomodular categories} Suppose that the full subcategory of
$\ensuremath{\mathsf{Pt}}(\ensuremath{\mathbb{C}})$ whose objects are the points in $\ensuremath{\mathcal{S}}$ is closed in
$\ensuremath{\mathsf{Pt}}(\ensuremath{\mathbb{C}})$ under finite limits. The category $\ensuremath{\mathbb{C}}$ is said to be:
\begin{enumerate}
\item \textbf{$\ensuremath{\mathcal{S}}$-Mal'tsev}~\cite{Bourn2014} if, for every pullback
of split epimorphisms~\eqref{pb of split epis} where the point
$(f,s)$ is in the class $\ensuremath{\mathcal{S}}$, the morphisms $\langle 1_{A}, tf
\rangle$ and $\langle sg, 1_{C} \rangle$ are jointly strongly
epimorphic; \item \textbf{$\ensuremath{\mathcal{S}}$-protomodular}~\cite{SchreierBook,
S-proto, Bourn2014} if every point in $\ensuremath{\mathcal{S}}$ is strong.
\end{enumerate}
\end{definition}
The notion of $\ensuremath{\mathcal{S}}$-protomodular category was introduced to
describe, in categorical terms, some convenient properties of
\emph{Schreier split epimorphisms} of monoids and of semirings.
Such split epimorphisms were introduced in~\cite{BM-FMS} as those
points which correspond to classical monoid actions and, more
generally, to actions in every category of \emph{monoids with
operations}, via a semidirect product construction.
In~\cite{SchreierBook, BM-FMS2} it was shown that, for Schreier
split epimorphisms, relative versions of some properties of all
split epimorphisms in a protomodular category hold, like for
instance the \emph{split short five lemma}.
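By way of illustration, and outside the categorical development, the
following small Python sketch records the familiar semidirect-product
multiplication arising from a monoid action; all names are ours, and
we take $(\ensuremath{\mathbb{N}},\cdot,1)$ acting on $(\ensuremath{\mathbb{N}},+,0)$ by multiplication.
\begin{verbatim}
# Sketch: semidirect product N x| B, with B = (N,*,1) acting on the
# additive monoid N = (N,+,0) by the endomorphisms act(b)(n) = b*n.
def act(b):
    return lambda n: b * n

def mul(x, y):
    (n1, b1), (n2, b2) = x, y
    return (n1 + act(b1)(n2), b1 * b2)   # (n1,b1)*(n2,b2)

p = lambda x: x[1]        # the split epimorphism onto B
s = lambda b: (0, b)      # its section
k = lambda n: (n, 1)      # the kernel inclusion of N
# Schreier-style condition: (n, b) factors as k(n) * s(b)
assert all(mul(k(n), s(b)) == (n, b)
           for n in range(5) for b in range(5))
\end{verbatim}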
In~\cite{S-proto} it is proved that every category of monoids with
operations, equipped with the class $\ensuremath{\mathcal{S}}$ of Schreier points, is
$\ensuremath{\mathcal{S}}$-protomodular, and hence an $\ensuremath{\mathcal{S}}$-Mal'tsev category. Indeed, as
shown in~\cite{S-proto, Bourn2014}, every $\ensuremath{\mathcal{S}}$-protomodular
category is an $\ensuremath{\mathcal{S}}$-Mal'tsev category. Later, in
\cite{MartinsMontoliSH} it was proved that every
J\'{o}nsson--Tarski variety is an $\ensuremath{\mathcal{S}}$-proto\-modular category
with respect to the class $\ensuremath{\mathcal{S}}$ of Schreier points.
A~(non-absolute) example of an $\ensuremath{\mathcal{S}}$-Mal'tsev category which is not
$\ensuremath{\mathcal{S}}$-protomodular, given in \cite{Bourn-quandles}, is the category
of quandles.
The following definition first appeared in~\cite[Definition
6.1]{S-proto} for pointed $\ensuremath{\mathcal{S}}$-protomodular categories, then it
was extended in~\cite{Bourn2014} to $\ensuremath{\mathcal{S}}$-Mal'tsev categories.
\begin{definition}
Let $\ensuremath{\mathbb{C}}$ be a finitely complete category and $\ensuremath{\mathcal{S}}$ a class of
points which is stable under pullbacks along any morphism. An
object $X$ in $\ensuremath{\mathbb{C}}$ is
\textbf{$\ensuremath{\mathcal{S}}$-special} if the point
\begin{equation*}
(\pi_{1}\colon{X\times X\to X},\quad \Delta_{X}=\langle
1_{X},1_{X} \rangle\colon {X\to X\times X})
\end{equation*}
belongs to $\ensuremath{\mathcal{S}}$ or, equivalently, if the point $(\pi_2, \Delta_X)$
belongs to $\ensuremath{\mathcal{S}}$. We write $\ensuremath{\mathcal{S}}(\ensuremath{\mathbb{C}})$ for the full subcategory of
$\ensuremath{\mathbb{C}}$ determined by the $\ensuremath{\mathcal{S}}$-special objects.
\end{definition}
According to Proposition 6.2 in~\cite{S-proto} and its
generalisation~\cite[Proposition 4.3]{Bourn2014} to $\ensuremath{\mathcal{S}}$-Mal'tsev
categories, if $\ensuremath{\mathbb{C}}$ is an $\ensuremath{\mathcal{S}}$-Mal'tsev category, then the
subcategory $\ensuremath{\mathcal{S}}(\ensuremath{\mathbb{C}})$ of $\ensuremath{\mathcal{S}}$-special objects of $\ensuremath{\mathbb{C}}$ is a Mal'tsev
category, called the
\textbf{Mal'tsev core} of $\ensuremath{\mathbb{C}}$ relatively to the class $\ensuremath{\mathcal{S}}$. When
$\ensuremath{\mathbb{C}}$ is $\ensuremath{\mathcal{S}}$-protomodular, $\ensuremath{\mathcal{S}}(\ensuremath{\mathbb{C}})$ is a protomodular category,
called the \textbf{protomodular core} of $\ensuremath{\mathbb{C}}$ relatively to the
class $\ensuremath{\mathcal{S}}$.
Proposition 6.4 in~\cite{S-proto} shows that the protomodular core
of the category $\ensuremath{\mathsf{Mon}}$ of monoids relatively to the class $\ensuremath{\mathcal{S}}$ of
Schreier points is the category $\ensuremath{\mathsf{Gp}}$ of groups; similarly, the
protomodular core of the category $\ensuremath{\mathsf{SRng}}$ of semirings is the
category $\ensuremath{\mathsf{Rng}}$ of rings, also with respect to the class of
Schreier points.
Our main problem in this work is to obtain a categorical-algebraic
characterisation of groups amongst monoids, and of rings amongst
semirings. Based on the previous results, one direction is to look
for a suitable class $\ensuremath{\mathcal{S}}$ of stably strong points in a general
finitely complete category $\ensuremath{\mathbb{C}}$ such that the full subcategory
$\ensuremath{\mathcal{S}}(\ensuremath{\mathbb{C}})$ of $\ensuremath{\mathcal{S}}$-special objects gives the category of groups when
$\ensuremath{\mathbb{C}}$ is the category of monoids and gives the category of rings
when $\ensuremath{\mathbb{C}}$ is the category of semirings: $\ensuremath{\mathcal{S}}(\ensuremath{\mathsf{Mon}})=\ensuremath{\mathsf{Gp}}$ and
$\ensuremath{\mathcal{S}}(\ensuremath{\mathsf{SRng}})=\ensuremath{\mathsf{Rng}}$.
We explore different possible classes in the following sections as
well as the outcome for the particular cases of monoids and
semirings. A first ``obvious'' choice is to consider $\ensuremath{\mathcal{S}}$ to be
the class of \emph{all} stably strong points in $\ensuremath{\mathbb{C}}$. Then an
$\ensuremath{\mathcal{S}}$-special object is precisely what we call a strongly unital
object in the next section. We shall see that the subcategory
$\ensuremath{\mathcal{S}}(\ensuremath{\mathbb{C}})$ of $\ensuremath{\mathcal{S}}$-special objects is the protomodular core (namely
$\ensuremath{\mathsf{Rng}}$) in the case of semirings, but not so in the case of
monoids. Moreover, we propose an alternative ``absolute'' solution
to our main problem, not depending on the choice of a class $\ensuremath{\mathcal{S}}$
of points, and we compare it with this ``relative'' one.
\section{Strongly unital objects}\label{SUO}
The aim of this section is to introduce the concept of a strongly
unital object. We characterise rings amongst semirings as the
strongly unital objects (Theorem~\ref{SU(SRng)=Rng}). We prove
stability properties for strongly unital objects and show that, in
the regular case, they coincide with the \emph{gregarious} objects
of~\cite{Borceux-Bourn}.
Let $\ensuremath{\mathbb{C}}$ be a pointed finitely complete category.
\begin{definition} \label{definition SU}
Given an object $Y$ of $\ensuremath{\mathbb{C}}$, we say that $Y$ is \textbf{strongly
unital} if the point
\begin{equation*}
(\pi_{1}\colon{Y\times Y\to Y},\quad \Delta_{Y}=\langle
1_{Y},1_{Y} \rangle\colon {Y\to Y\times Y})
\end{equation*}
is stably strong.
\end{definition}
Note that we could equivalently ask that the point
$(\pi_{2},\Delta_{Y})$ is stably strong. We write $\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$ for
the full subcategory of $\ensuremath{\mathbb{C}}$ determined by the strongly unital
objects.
\begin{remark}\label{Stably strong not Schreier}
An object $Y$ in $\ensuremath{\mathbb{C}}$ is strongly unital if and only if it is
$\ensuremath{\mathcal{S}}$-special, when $\ensuremath{\mathcal{S}}$ is the class of all stably strong points
in $\ensuremath{\mathbb{C}}$.
\end{remark}
\begin{theorem}
\label{SU(SRng)=Rng} If $\ensuremath{\mathbb{C}}$ is the category $\ensuremath{\mathsf{SRng}}$ of semirings,
then $\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$ is the category $\ensuremath{\mathsf{Rng}}$ of rings. In other words, a
semiring $X$ is a ring if and only if the point
\[
(\pi_{1}\colon{X\times X\to X},\quad \Delta_{X}=\langle
1_{X},1_{X} \rangle\colon {X\to X\times X})
\]
is stably strong in $\ensuremath{\mathsf{SRng}}$.
\end{theorem}
\begin{proof}
If $X$ is a ring, then every point over it is stably strong: by
Proposition~6.1.6 in~\cite{SchreierBook} it is a Schreier point,
and Schreier points of semirings are stably strong by Lemma~6.1.1
combined with Proposition~6.1.8 of~\cite{SchreierBook}. Hence, it
suffices to show that any strongly unital semiring is a ring.
Suppose that the point
\[
(\pi_{1}\colon{X\times X\to X},\quad \Delta_{X}=\langle
1_{X},1_{X} \rangle\colon {X\to X\times X})
\]
is stably strong. Given any element $x \neq 0_X$ of $X$, consider
the pullback of $\pi_1$ along the morphism $x \colon \ensuremath{\mathbb{N}} \to X$
sending $1$ to $x$:
\[
\xymatrix@!0@C=6em@R=4em{ & X \ar@{{ >}->}[dl]_{\langle 0, 1_{X}
\rangle}
\ar@{{ >}->}[d]^{\langle 0, 1_{X} \rangle} \\
\ensuremath{\mathbb{N}} \times X \ophalfsplitpullback \ar[r]^{x \times 1_{X}}
\ar@<-.5ex>[d]_{\pi_1} & X \times X
\ar@<-.5ex>[d]_{\pi_1} \\
\ensuremath{\mathbb{N}} \ar@<-.5ex>[u]_(.4){\langle 1_{\ensuremath{\mathbb{N}}}, x \rangle} \ar[r]_x & X.
\ar@<-.5ex>[u]_{\langle 1_{X}, 1_{X} \rangle} } \] Consider the
element $(1,0_X) \in \ensuremath{\mathbb{N}} \times X$. Since the morphisms $\langle
1_{\ensuremath{\mathbb{N}}}, x \rangle$ and $\langle 0, 1_{X} \rangle$ are jointly
strongly epimorphic, $(1,0_X)$ lies in the subsemiring generated by
their images, hence can be written as a sum of products of elements
of the form $(0, \bar{x})$ and $(n,
nx)$. Using the fact that $0 \in \ensuremath{\mathbb{N}}$ is absorbing for the
multiplication in $\ensuremath{\mathbb{N}}$ and that in every semiring the sum is
commutative and the multiplication is distributive with respect to
the sum, we get that $(1,0_X)$ can be written as
\[
(1, 0_X) = (0, y) + (1, x)
\]
for a certain $y \in X$. Then $y+ x = 0_X$ and hence the element
$x$ is invertible for the sum. Thus we see that $X$ is a ring.
\end{proof}
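In concrete terms, the witness $y$ appearing in the proof is just an
additive inverse: for $X=\ensuremath{\mathbb{Z}}$ one may take $y=-x$, while for
$X=\ensuremath{\mathbb{N}}$ no such $y$ exists when $x\neq 0$. The following Python
lines (illustrative only) record this arithmetic.
\begin{verbatim}
# The decomposition (1, 0) = (0, y) + (1, x) in N x X forces y + x = 0.
def witness(x, X):
    return next((y for y in X if y + x == 0), None)

assert witness(5, range(-10, 10)) == -5   # X = Z: y = -x works
assert witness(5, range(0, 100)) is None  # X = N: no additive inverse
\end{verbatim}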
\begin{remark}
Note that, in particular, $\ensuremath{\mathsf{SU}}(\ensuremath{\mathsf{SRng}})=\ensuremath{\mathsf{Rng}}$ is a protomodular
category, so that $\ensuremath{\mathsf{Rng}}$ is the protomodular core of $\ensuremath{\mathsf{SRng}}$ with
respect to the class $\ensuremath{\mathcal{S}}$ of all stably strong points. As such, it
is necessarily the largest protomodular core of~$\ensuremath{\mathsf{SRng}}$ induced by
some class $\ensuremath{\mathcal{S}}$.
\end{remark}
Recall from~\cite{Borceux-Bourn,Bourn2002} that a \textbf{split
right punctual span} is a diagram of the form
\begin{equation}\label{srps}
\xymatrix@!0@=4em{ X \ar@<.5ex>[r]^s & Z \ar@<.5ex>[l]^f
\ar@<-.5ex>[r]_g & Y \ar@<-.5ex>[l]_t }
\end{equation}
where $fs = 1_X$, $gt = 1_Y$ and $ft = 0$.
\begin{proposition}\label{SU characterisation}
If $\ensuremath{\mathbb{C}}$ is a pointed finitely complete category, then the
following conditions are equivalent:
\begin{tfae}
\item $Y$ is a strongly unital object of $\ensuremath{\mathbb{C}}$; \item for every
morphism $f\colon {X\to Y}$, the point
\[
(\pi_{X}\colon{X\times Y\to X},\quad \langle 1_{X},f \rangle\colon
{X\to X\times Y})
\]
is stably strong; \item for every $f\colon {X\to Y}$, the point
$(\pi_{X}, \langle 1_{X},f \rangle)$ is strong; \item given any
split right punctual span~\eqref{srps}, the map $\langle f, g
\rangle \colon {Z \to X \times Y}$ is a strong epimorphism.
\end{tfae}
\end{proposition}
\begin{proof}
The equivalence of (i), (ii) and (iii) holds since any
pullback of the point $(\pi_1,\Delta_Y)$ is of the form
$(\pi_X,\langle 1_X, f\rangle)$ and any pullback of $(\pi_X,
\langle 1_X,f\rangle)$ is also a pullback of $(\pi_1, \Delta_Y)$.
To prove that (iii) implies (iv), consider a split right punctual
span as in~\eqref{srps}. By assumption, the point $(\pi_{X}\colon
{X\times Y \to X}, \langle 1_X, gs \rangle \colon {X\to X\times
Y})$ is strong. Suppose that $\langle f, g \rangle$ factors
through a monomorphism $m$
\[ \xymatrix@!0@=4em{ X \ar@{=}[dd] \ar@<.5ex>[rr]^s & & Z \ar[dl]_e \ar[dd]^{\langle f, g \rangle} \ar@<.5ex>[ll]^f \ar@<-.5ex>[rr]_g & & Y
\ar@<-.5ex>[ll]_t \ar@{=}[dd] \\
& M \ar@{{ >}->}[dr]^m & & & \\
X \ar@<.5ex>[rr]^{\langle 1_X, gs \rangle} & & X \times Y
\ar@<.5ex>[ll]^{\pi_X} & & Y. \ar[ll]^{\langle 0, 1_Y \rangle} }
\] Both $\langle 1_{X}, gs \rangle$ and $\langle 0, 1_{Y} \rangle$
factor through $m$, indeed $\langle 1_{X}, gs \rangle = mes$ and
$\langle 0, 1_{Y} \rangle = met$. Since $\langle 1_X, gs \rangle$
and $\langle 0, 1_{Y} \rangle$ are jointly strongly epimorphic,
$m$ is an isomorphism.
To prove that (iv) implies (iii), we must show that $\langle 0,
1_{Y} \rangle \colon Y \to X \times Y$ and $\langle 1_{X}, f
\rangle \colon X \to X \times Y$ are jointly strongly epimorphic.
Suppose that they factor through a monomorphism $m = \langle m_1,
m_2 \rangle \colon M \to X \times Y$:
\[
\xymatrix@!0@=4em{ & M \ar@{{ >}->}[d]|{\langle m_1, m_2 \rangle} & \\
X \ar[ur]^a \ar[r]_-{\langle 1_X, f \rangle} & X \times Y & Y.
\ar[l]^-{\langle 0, 1_Y \rangle} \ar[ul]_b }
\]
Then we have $m_1 a = 1_X$, $m_1 b = 0$ and $m_2 b = 1_Y$. Hence
we get a diagram
\[
\xymatrix@!0@=4em{ X \ar@<.5ex>[r]^-a & M \ar@<.5ex>[l]^-{m_1}
\ar@<-.5ex>[r]_-{m_2} & Y \ar@<-.5ex>[l]_-b }
\]
as in~\eqref{srps}. By assumption, the monomorphism $\langle m_1,
m_2 \rangle$ is also a strong epimorphism, so it is an
isomorphism.
\end{proof}
In general, a given point $(\pi_{1},\Delta_Y)$ can be strong
without being stably strong (Example~\ref{only strong}).
Nevertheless, if all such points are strong (so that $\ensuremath{\mathbb{C}}$ is
strongly unital), then they are stably strong (by Propositions
1.8.13 and 1.8.14 in~\cite{Borceux-Bourn} and Proposition~\ref{SU
characterisation}). This gives:
\begin{corollary}
If $\ensuremath{\mathbb{C}}$ is a pointed finitely complete category, then $\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})=\ensuremath{\mathbb{C}}$
if and only if $\ensuremath{\mathbb{C}}$ is strongly unital.\hfill \qed
\end{corollary}
\begin{corollary}
If $\ensuremath{\mathbb{C}}$ is a pointed finitely complete category and $\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$ is
closed under finite limits in $\ensuremath{\mathbb{C}}$, then $\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$ is a strongly
unital category.
\end{corollary}
\begin{proof}
The category $\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$ is obviously pointed. Its inclusion into
$\ensuremath{\mathbb{C}}$ preserves monomorphisms and binary products and it reflects
isomorphisms.
\end{proof}
\begin{proposition}\label{4.8}
If $\ensuremath{\mathbb{C}}$ is a pointed regular category, then $\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$ is closed
under quotients in $\ensuremath{\mathbb{C}}$.
\end{proposition}
\begin{proof}
This follows readily from Proposition~\ref{stably strong points
closed under quotients}.
\end{proof}
When $\ensuremath{\mathbb{C}}$ is a regular unital category, an object $Y$ satisfying
condition (iv) of Proposition~\ref{SU characterisation} is called
a
\textbf{gregarious} object (Definition~1.9.1 and Theorem~1.9.7 in~\cite{Borceux-Bourn}). So, in
that case, $\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$ is precisely the category of gregarious
objects in~$\ensuremath{\mathbb{C}}$.
\begin{example}\label{Greg}
$\ensuremath{\mathsf{SU}}(\ensuremath{\mathsf{Mon}})=\ensuremath{\mathsf{GMon}}$, the category of gregarious monoids. A
monoid~$Y$ is gregarious if and only if for all $y\in Y$ there
exist $u$, $v\in Y$ such that $uyv=1$ (Proposition~1.9.2
in~\cite{Borceux-Bourn}). Counterexample~1.9.3
in~\cite{Borceux-Bourn} provides a gregarious monoid which is not
a group: the monoid $Y$ with two generators $x$,~$y$ and the
relation $xy=1$. Indeed $Y=\{y^{n}x^{m}\mid \text{$n$,
$m\in\ensuremath{\mathbb{N}}$}\}$ and $x^{n}(y^{n}x^{m})y^{m}=1$.
\end{example}
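Since this monoid admits the normal form $y^{n}x^{m}$, gregariousness
can also be checked mechanically. The following Python sketch (an
illustration, encoding $y^{n}x^{m}$ as the pair $(n,m)$) verifies the
identity above for small exponents, as well as the failure of
invertibility.
\begin{verbatim}
# Bicyclic monoid <x, y | xy = 1>: normal forms y^n x^m as pairs
# (n, m), multiplied by cancelling x^m against y^p via xy = 1.
def mul(a, b):
    (n, m), (p, q) = a, b
    k = min(m, p)
    return (n + (p - k), q + (m - k))

e = (0, 0)
for n in range(5):
    for m in range(5):
        u, z, v = (0, n), (n, m), (m, 0)   # u = x^n, v = y^m
        assert mul(mul(u, z), v) == e      # x^n (y^n x^m) y^m = 1
# not a group: y = (1, 0) has no right inverse, since the first
# component of y * (p, q) is always 1 + p >= 1
assert all(mul((1, 0), (p, q)) != e
           for p in range(5) for q in range(5))
\end{verbatim}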
For monoids and the class $\ensuremath{\mathcal{S}}$ of all stably strong points of
monoids, we have $\ensuremath{\mathcal{S}}(\ensuremath{\mathsf{Mon}})=\ensuremath{\mathsf{SU}}(\ensuremath{\mathsf{Mon}})=\ensuremath{\mathsf{GMon}}\neq \ensuremath{\mathsf{Gp}}$ as explained
in Remark~\ref{Stably strong not Schreier}. In particular, there
are in $\ensuremath{\mathsf{Mon}}$ stably strong points which are not Schreier. Since
$\ensuremath{\mathcal{S}}(\ensuremath{\mathsf{Mon}})$ is not protomodular, it is not a protomodular core with
respect to the class $\ensuremath{\mathcal{S}}$. Hence for the case of monoids, such a
class $\ensuremath{\mathcal{S}}$ does not meet our purposes. The major issue here
concerns the closedness of the class $\ensuremath{\mathcal{S}}$ in $\ensuremath{\mathsf{Pt}}(\ensuremath{\mathbb{C}})$ under
finite limits. To avoid this difficulty, in the next sections our
work focuses more on objects rather than classes.
\section{Unital objects and subtractive objects}\label{USO}
It is known that a pointed finitely complete category is strongly
unital if and only if it is unital and
subtractive~\cite[Proposition 3]{ZJanelidze-Subtractive}. Having
introduced the notion of a strongly unital object, we now explore
analogous notions for the unital and subtractive cases. Our aim is
to prove that the equivalence above also holds ``locally'' for
objects in any pointed regular category.
Let $\ensuremath{\mathbb{C}}$ be pointed and finitely complete.
\begin{definition} \label{definition U}
Given an object $Y$ of $\ensuremath{\mathbb{C}}$, we say that $Y$ is \textbf{unital} if
the point
\begin{equation*}
(\pi_{1}\colon{Y\times Y\to Y},\quad \langle 1_Y,0 \rangle\colon
{Y\to Y\times Y})
\end{equation*}
is stably strong.
Note that we could equivalently ask that the point
$(\pi_{2},\langle 0, 1_Y \rangle)$ is stably strong. We write
$\ensuremath{\mathsf{U}}(\ensuremath{\mathbb{C}})$ for the full subcategory of $\ensuremath{\mathbb{C}}$ determined by the unital
objects.
\end{definition}
The following results are proved similarly to the corresponding
ones obtained for strongly unital objects. Recall
from~\cite{Borceux-Bourn,Bourn2002} that a \textbf{split punctual
span} is a diagram of the form
\begin{equation}\label{ps}
\xymatrix@!0@=4em{ X \ar@<.5ex>[r]^s & Z \ar@<.5ex>[l]^f
\ar@<-.5ex>[r]_g & Y \ar@<-.5ex>[l]_t }
\end{equation}
where $fs = 1_X$, $gt = 1_Y$, $ft = 0$ and $gs=0$.
\begin{proposition}\label{U characterisation}
If $\ensuremath{\mathbb{C}}$ is a pointed finitely complete category, then the
following conditions are equivalent:
\begin{tfae}
\item $Y$ is a unital object of $\ensuremath{\mathbb{C}}$; \item for every object $X$,
the point $(\pi_{X}\colon{X\times Y\to X}, \langle 1_{X},0
\rangle\colon {X\to X\times Y})$ is stably strong; \item for every
object $X$, the point $(\pi_{X}, \langle 1_{X},0 \rangle)$ is
strong; \item given any split punctual span~\eqref{ps}, the map
$\langle f, g \rangle \colon {Z \to X \times Y}$ is a strong
epimorphism.\hfill \qed
\end{tfae}
\end{proposition}
Just as any strongly unital category is always unital, we also
have:
\begin{corollary} \label{strongly unital implies unital}
In a pointed finitely complete category, a strongly unital object
is always unital.
\end{corollary}
\begin{proof}
By Propositions~\ref{SU characterisation} and~\ref{U
characterisation}.
\end{proof}
\begin{corollary}\label{5.4}
If $\ensuremath{\mathbb{C}}$ is a pointed finitely complete category, then $\ensuremath{\mathsf{U}}(\ensuremath{\mathbb{C}})=\ensuremath{\mathbb{C}}$
if and only if $\ensuremath{\mathbb{C}}$ is unital.\hfill \qed
\end{corollary}
\begin{examples}
$\ensuremath{\mathsf{Mon}}$ and $\ensuremath{\mathsf{SRng}}$ are not strongly unital, but they are unital,
being J\'onsson--Tarski varieties (see Examples~\ref{Examples
unital}). So, $\ensuremath{\mathsf{U}}(\ensuremath{\mathsf{Mon}})=\ensuremath{\mathsf{Mon}}$ and $\ensuremath{\mathsf{U}}(\ensuremath{\mathsf{SRng}})=\ensuremath{\mathsf{SRng}}$.
\end{examples}
\begin{corollary}
If $\ensuremath{\mathbb{C}}$ is a pointed finitely complete category and $\ensuremath{\mathsf{U}}(\ensuremath{\mathbb{C}})$ is
closed under finite limits in $\ensuremath{\mathbb{C}}$, then $\ensuremath{\mathsf{U}}(\ensuremath{\mathbb{C}})$ is a unital
category.
\end{corollary}
\begin{proof}
Apply Corollary~\ref{5.4} to $\ensuremath{\mathsf{U}}(\ensuremath{\mathbb{C}})$.
\end{proof}
\begin{proposition}
If $\ensuremath{\mathbb{C}}$ is a pointed regular category, then $\ensuremath{\mathsf{U}}(\ensuremath{\mathbb{C}})$ is closed
under quotients in $\ensuremath{\mathbb{C}}$.\hfill \qed
\end{proposition}
\subsection{Subtractive categories, subtractive objects}\label{SC}
We recall the definition of a subtractive category
from~\cite{ZJanelidze-Subtractive}. A relation $r=\langle
r_{1},r_{2}\rangle\colon {R\to X\times Y}$ in a pointed category
is said to be
\textbf{left (right) punctual}~\cite{Bourn2002} if $\langle
1_X,0\rangle\colon {X\to X\times Y}$ (respectively $\langle
0,1_Y\rangle\colon {Y\to X\times Y}$) factors through $r$. A
pointed finitely complete category~$\ensuremath{\mathbb{C}}$ is said to be
\textbf{subtractive} if every left punctual reflexive relation on an object $X$ in
$\ensuremath{\mathbb{C}}$ is right punctual. It is equivalent to asking that right
punctuality implies left punctuality---which is the implication we
shall use to obtain a definition of subtractivity for objects.
\begin{example}
A variety of universal algebras is subtractive in the sense of
Example~\ref{only strong} if and only if it satisfies the categorical
condition of Subsection~\ref{SC} (see \cite{ZJanelidze-Subtractive}).
\end{example}
It is shown in~\cite{ZJanelidze-Snake} that a pointed regular
category $\ensuremath{\mathbb{C}}$ is subtractive if and only if every span $\langle
s_1,s_2 \rangle \colon A \to B\times C$ is \textbf{subtractive}:
written in set-theoretical terms, its induced relation $r=\langle
r_1,r_2 \rangle\colon R\to B\times C$, where $\langle s_1,s_2
\rangle=rp$ for $r$ a monomorphism and $p$ a regular epimorphism,
satisfies the condition
\[
(b,c),\; (b,0)\in R \quad\Rightarrow\quad (0,c)\in R.
\]
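When the objects are finite and the relation is given explicitly,
this condition is directly machine-checkable; the following Python
fragment (an illustration only, with $0$ standing for both base
points) does so.
\begin{verbatim}
# Subtractivity of a finite relation R, given as a set of pairs (b, c).
def is_subtractive(R, zero=0):
    return all((zero, c) in R
               for (b, c) in R if (b, zero) in R)

R = {(0, 0), (1, 0), (1, 1)}
assert not is_subtractive(R)            # (1,1),(1,0) in R, (0,1) not
assert is_subtractive(R | {(0, 1)})
\end{verbatim}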
\begin{proposition}\label{subtraction via spans}
In a pointed regular category, consider a split right punctual
span~\eqref{srps}. The span $\langle g,f \rangle$ is subtractive
if and only if $f\ensuremath{\mathrm{ker}}(g)$ is a regular epimorphism.
\end{proposition}
\begin{proof}
Thanks to the Barr embedding theorem \cite{Barr}, in a regular
context it suffices to give a set-theoretical proof (see
Metatheorem~A.5.7 in~\cite{Borceux-Bourn}, for instance). Consider
the factorisation
\[
\xymatrix@!0@C=4em@R=3em{Z\ar[rr]^-{\langle g,f \rangle} \ar@{>>}[dr]_-{p} & & Y\times X \\
& R \ar@{ >->}[ur]_-{\langle r_1, r_2 \rangle}}
\]
of $\langle g,f\rangle$ as a regular epimorphism $p$ followed by a
monomorphism $\langle r_1, r_2 \rangle$. Then $(y,x)\in R$ if and
only if $y=g(z)$ and $x=f(z)$, for some $z\in Z$.
Suppose that $\langle g, f \rangle$ is subtractive. Given any
$x\in X$, we have $(gs(x),x)\in R$ for $z=s(x)$ and $(gs(x),0)\in
R$ for $z=tgs(x)$. Then $(0,x)\in R$ by assumption, which means
that $0=g(z)$ and $x=f(z)$, for some $z\in Z$. Thus $f\ensuremath{\mathrm{ker}}(g)$ is
a regular epimorphism.
For the converse implication, if $f\ensuremath{\mathrm{ker}}(g)$ is a regular
epimorphism, then every $x\in X$ is of the form $f(z)$ for some $z$
with $g(z)=0$, so that $(0,x)\in R$ for all $x\in X$ and the
subtractivity condition holds trivially.
\end{proof}
This result leads us to the following ``local'' definition:
\begin{definition}\label{subtractive object}
Given an object $Y$ of a pointed regular category $\ensuremath{\mathbb{C}}$, we say
that $Y$ is
\textbf{subtractive} when for every split right punctual span~\eqref{srps}, the morphism $f\ensuremath{\mathrm{ker}}(g)$ is a regular epimorphism.
\end{definition}
We write $\S(\ensuremath{\mathbb{C}})$ for the full subcategory of $\ensuremath{\mathbb{C}}$ determined by
the subtractive objects.
\begin{proposition}
If $\ensuremath{\mathbb{C}}$ is a pointed regular category, then $\ensuremath{\mathbb{C}}$ is subtractive if
and only if all of its objects are subtractive.
\end{proposition}
\begin{proof}
As recalled above, if $\ensuremath{\mathbb{C}}$ is subtractive, then every span is
subtractive. Then every object is subtractive by
Proposition~\ref{subtraction via spans}.
Conversely, consider a right punctual reflexive relation $\langle
r_{1},r_{2}\rangle\colon {R\to X\times X}$. By assumption,
$r_1\ensuremath{\mathrm{ker}}(r_2)$ is a regular epimorphism. In the commutative
diagram between kernels
\[
\xymatrix@!0@C=6em@R=5em{ K \ar@{ |>->}[r]^-{\ensuremath{\mathrm{ker}}(r_2)}
\ar@{>>}[d]_-{r_1 \ensuremath{\mathrm{ker}}(r_2)} \pullback & R \ar@{ >->}[d]^-{\langle
r_1, r_2 \rangle} \ar[r]^-{r_2}
& X \ar@{=}[d] \\
X \ar@{ |>->}[r]_-{\langle 1_X, 0 \rangle} & X\times X \ar[r]_-{\pi_2} & X,}
\]
the left square is necessarily a pullback. So, the regular
epimorphism $r_1 \ensuremath{\mathrm{ker}}(r_2)$ is also a monomorphism, thus an
isomorphism. The morphism $\ensuremath{\mathrm{ker}}(r_2)$ gives the factorisation of
$\langle 1_{X}, 0\rangle$ needed to prove that $R$ is a left
punctual relation.
\end{proof}
\begin{corollary}
If $\ensuremath{\mathbb{C}}$ is a pointed regular category and $\S(\ensuremath{\mathbb{C}})$ is closed under
finite limits in $\ensuremath{\mathbb{C}}$, then $\S(\ensuremath{\mathbb{C}})$ is a subtractive category.
\end{corollary}
\begin{proof}
Apply the above proposition to $\S(\ensuremath{\mathbb{C}})$.
\end{proof}
\begin{proposition}[$\S(\ensuremath{\mathbb{C}})\cap \ensuremath{\mathsf{U}}(\ensuremath{\mathbb{C}})=\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$]\label{SU=SU}
Let $\ensuremath{\mathbb{C}}$ be a pointed regular category. An object $Y$ of $\ensuremath{\mathbb{C}}$ is
strongly unital if and only if it is unital and subtractive.
\end{proposition}
\begin{proof}
We already observed that a strongly unital object is unital
(Corollary~\ref{strongly unital implies unital}). To prove that
$Y$ is subtractive, we consider an arbitrary split right punctual
span such as~\eqref{srps}. In the commutative diagram between
kernels
\[
\xymatrix@!0@C=6em@R=5em{ K \ar@{ |>->}[r]^-{\ensuremath{\mathrm{ker}}(g)} \ar[d]_-{f
\ensuremath{\mathrm{ker}}(g)} \pullback & Z \ar[d]^-{\langle f,g \rangle} \ar[r]^-{g}
& Y \ar@{=}[d] \\
X \ar@{ |>->}[r]_-{\langle 1_X,0 \rangle} & X\times Y \ar[r]_-{\pi_Y} & Y,}
\]
the left square is necessarily a pullback. By Proposition~\ref{SU
characterisation}, $\langle f,g \rangle$ is a regular epimorphism,
hence so is $f\ensuremath{\mathrm{ker}}(g)$.
Conversely, given a subtractive unital object $Y$ in a split right
punctual span~\eqref{srps}, by Proposition~\ref{SU
characterisation} we must show that the middle morphism $\langle
f,g \rangle$ of the diagram above is a regular epimorphism. Let
$mp$ be its factorisation as a regular epimorphism $p$ followed by
a monomorphism $m$. The pair $( \langle 1_X,0 \rangle,\langle 0,
1_Y \rangle)$ being jointly strongly epimorphic and $f\ensuremath{\mathrm{ker}}(g)$
being a regular epimorphism, we see that the pair $(\langle 1_X,0
\rangle f\ensuremath{\mathrm{ker}}(g),\langle 0,1_Y \rangle)$ is jointly strongly
epimorphic; moreover it factors through the monomorphism $m$.
Consequently, $m$ is an isomorphism.
\end{proof}
\begin{corollary}\label{corollary subtractive}
$\S(\ensuremath{\mathsf{Mon}})=\ensuremath{\mathsf{GMon}}$, $\S(\ensuremath{\mathsf{CMon}})=\ensuremath{\mathsf{Ab}}$ and $\S(\ensuremath{\mathsf{SRng}})=\ensuremath{\mathsf{Rng}}$.
\end{corollary}
\begin{proof}
This is a combination of Examples~\ref{Examples unital} with,
respectively, Example~\ref{Greg}; \cite[Example
1.9.4]{Borceux-Bourn} with Proposition~\ref{SU characterisation}
and the remark following Proposition~\ref{4.8}; and
Theorem~\ref{SU(SRng)=Rng}.
\end{proof}
\begin{example}
Groups are (strongly) unital objects in the category $\ensuremath{\mathsf{Sub}}$ of
subtraction algebras (Example~\ref{only strong}). In fact, if for
every $y\in Y$ there is a $y^{*}\in Y$ such that $s(0,y^{*})=y$,
then $Y$ is a unital object; in particular, any group is unital.
To see this, we must prove that for any subtraction algebra $X$,
the pair
\[
(\langle1_{X},0\rangle\colon X\to X\times Y,\quad
\langle0,1_{Y}\rangle\colon Y\to X\times Y)
\]
is jointly strongly epimorphic. This follows from the fact that
\[ s((x,0), (0, y^{*})) = (s(x,0), s(0, y^{*})) = (x,y)
\]
for all $x\in X$ and $y\in Y$. Note that the inclusion $\ensuremath{\mathsf{Gp}}\subset
\ensuremath{\mathsf{SU}}(\ensuremath{\mathsf{Sub}})$ is strict, because the three-element subtraction
algebra
\begin{center}
\begin{tabular}{c|ccccc}
$s$ & $0$ & $1$ & $2$ \\
\hline
$0$ & $0$ & $1$ & $2$ \\
$1$ & $1$ & $0$ & $0$ \\
$2$ & $2$ & $0$ & $0$
\end{tabular}
\end{center}
satisfies the condition on the existence of $y^{*}$. However, it
is not a group, since the unique group of order three has a
different induced subtraction.
\end{example}
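For the reader who wishes to verify the table, the following Python
lines (illustrative; we use the subtraction identities $s(x,0)=x$ and
$s(x,x)=0$) confirm the claims.
\begin{verbatim}
# The three-element subtraction algebra above, s as a dictionary.
s = {(0, 0): 0, (0, 1): 1, (0, 2): 2,
     (1, 0): 1, (1, 1): 0, (1, 2): 0,
     (2, 0): 2, (2, 1): 0, (2, 2): 0}
assert all(s[(x, 0)] == x and s[(x, x)] == 0 for x in range(3))
assert all(s[(0, y)] == y for y in range(3))   # y* = y works
assert s[(1, 2)] != (1 - 2) % 3  # differs from the subtraction of Z/3
\end{verbatim}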
\begin{proposition}
Let $\ensuremath{\mathbb{C}}$ be a pointed regular category. Then $\S(\ensuremath{\mathbb{C}})$ is closed
under quotients in~$\ensuremath{\mathbb{C}}$.
\end{proposition}
\begin{proof}
Suppose that $Y$ is a subtractive object in $\ensuremath{\mathbb{C}}$ and consider a
regular epimorphism $w\colon{Y\to W}$. To prove that $W$ is also
subtractive, consider a split right punctual span
$$
\xymatrix@!0@=4em{ X \ar@<.5ex>[r]^s & Z \ar@<.5ex>[l]^f
\ar@<-.5ex>[r]_g & W; \ar@<-.5ex>[l]_t }
$$
we must prove that $f\ensuremath{\mathrm{ker}}(g)$ is a regular epimorphism. Consider
the following diagram where all squares are pullbacks:
$$
\xymatrix@!0@C=6em@R=4em{& X' \ar@{.>}[ld]_-{s''} \ar@{>>}[r]^-x \ar@{ >->}[d]_-{s'} \pullback & X \ar@{ >->}[d]^-s \\
Z'' \pullback \ar@{>>}[r]^-{z'} \ar[d]_-{\langle f'', g'' \rangle} & Z' \pullback \ar@{>>}[r]^-z \ar[d]_-{\langle f', g' \rangle} & Z \ar[d]^-{\langle f, g \rangle} \\
X'\times Y \ar@{>>}[r]_-{x\times 1_Y} & X\times Y \ar@{>>}[r]_-{1_X\times w} & X\times W.}
$$
Note that from the bottom right pullback we can deduce that the
pullback of $g$ along $w$ is $g'$. Since $f's'=x$, there is an
induced morphism $s''\colon X'\to Z''$ such that $\langle f'', g''
\rangle s''=\langle 1_{X'}, g's' \rangle$ and $z's''=s'$. There is
also an induced morphism $t''\colon Y\to Z''$ such that $\langle
f'', g'' \rangle t''=\langle 0,1_Y \rangle$ and $zz't''=tw$. So,
we get a split right punctual span
$$
\xymatrix@!0@=4em{ X' \ar@<.5ex>[r]^{s''} & Z''
\ar@<.5ex>[l]^{f''}
\ar@<-.5ex>[r]_{g''} & Y, \ar@<-.5ex>[l]_{t''} }
$$
so that $f''\ensuremath{\mathrm{ker}}(g'')$ is a regular epimorphism, by assumption.
Since $g'$ is a pullback of $g$ and $g''=g'z'$, we have the
commutative diagram
\[
\xymatrix@!0@C=6em@R=4em{K'' \pullback \ar@{ |>->}[r]^-{\ensuremath{\mathrm{ker}}(g'')} \ar@{.>}[d]_-{\lambda} & Z'' \ar@{>>}[d]^(.25){z'} \\
K \pullback \ar@{ |>->}[r]_-{\ensuremath{\mathrm{ker}}(g')} \ar[d] \ar@(ur,ul)@/^1.75pc/[rr]|(.5)\hole^(0.75){\ensuremath{\mathrm{ker}}(g)} & Z' \ar[d]^-{g'} \pullback \ar@{>>}[r]^-z & Z \ar[d]^-g \\
0 \ar[r] & Y \ar@{>>}[r]_-w & W}
\]
between their kernels. Finally, the morphism $xf''\ensuremath{\mathrm{ker}}(g'')$ is a
regular epimorphism (since both $x$ and $f''\ensuremath{\mathrm{ker}}(g'')$ are) and
from
\[
xf''\ensuremath{\mathrm{ker}}(g'') = fzz'\ensuremath{\mathrm{ker}}(g'')=fz\ensuremath{\mathrm{ker}}(g')\lambda=f\ensuremath{\mathrm{ker}}(g)\lambda
\]
we conclude that $f\ensuremath{\mathrm{ker}}(g)$ is a regular epimorphism, as desired.
\end{proof}
In the presence of binary coproducts, a pointed regular category
$\ensuremath{\mathbb{C}}$ is subtractive if and only if any split right punctual span
of the form
$$
\xymatrix@!0@=5em{ X \ar@<.5ex>[r]^-{\iota_1} & X+X
\ar@<.5ex>[l]^-{\lgroup 1_X\; 0 \rgroup}
\ar@<-.5ex>[r]_-{\lgroup 1_X\;1_X \rgroup} & X \ar@<-.5ex>[l]_-{\iota_2} }
$$
is such that $\delta_X=\lgroup 1_X\; 0 \rgroup \ensuremath{\mathrm{ker}}(\lgroup 1_X\;
1_X \rgroup)$ is a regular epimorphism (see Theorem~5.1
in~\cite{DB-ZJ-2009}). This result leads us to the following
characterisation, where an extra morphism $f$ appears as in
Proposition~\ref{SU characterisation}, to be compatible with the
pullback-stability in the definitions of unital and strongly
unital objects.
\begin{proposition} In a pointed regular category $\ensuremath{\mathbb{C}}$ with binary coproducts the following conditions are equivalent:
\begin{tfae}
\item an object $Y$ in $\ensuremath{\mathbb{C}}$ is subtractive;
\item for any morphism $f\colon X\to Y$, the split right punctual span
$$ \xymatrix@!0@=5em{ X \ar@<.5ex>[r]^-{\iota_X} & X+Y \ar@<.5ex>[l]^-{\lgroup 1_X\; 0 \rgroup}
\ar@<-.5ex>[r]_-{\lgroup f\; 1_Y \rgroup} & Y \ar@<-.5ex>[l]_-{\iota_Y} }
$$
is such that $\delta_f=\lgroup 1_X\; 0 \rgroup \ensuremath{\mathrm{ker}}(\lgroup f\;1_Y \rgroup)$ is a regular epimorphism.
\end{tfae}
\end{proposition}
\begin{proof}
The implication (i) $\Rightarrow$ (ii) is obvious. Conversely,
given any split right punctual span~\eqref{srps}, we have a
morphism $gs\colon X\to Y$, so for the split right punctual span
$$ \xymatrix@!0@=5em{ X \ar@<.5ex>[r]^-{\iota_X} & X+Y \ar@<.5ex>[l]^-{\lgroup 1_X\; 0 \rgroup}
\ar@<-.5ex>[r]_-{\lgroup gs\; 1_Y \rgroup} & Y \ar@<-.5ex>[l]_-{\iota_Y} }
$$
we have that $\delta_{gs}=\lgroup 1_X\;0 \rgroup \ensuremath{\mathrm{ker}}(\lgroup
gs\;1_Y \rgroup)$ is a regular epimorphism. The induced morphism
$\sigma$ between kernels in the diagram
\[
\xymatrix@!0@C=7em@R=5em{ K \ar@{ |>->}[r]^-{\ensuremath{\mathrm{ker}}(\lgroup gs\;1_Y \rgroup)} \ar@{.>}[d]_-{\sigma} \pullback & X+Y \ar[d]^-{\lgroup s\; t\rgroup} \ar[r]^-{\lgroup gs\;1_Y \rgroup} & Y \ar@{=}[d] \\
K_g \ar@{ |>->}[r]_-{\ensuremath{\mathrm{ker}}(g)} & Z \ar[r]_-{g} & Y}
\]
is such that $f\ensuremath{\mathrm{ker}}(g)\sigma = f\lgroup s\; t \rgroup \ensuremath{\mathrm{ker}}(\lgroup
gs\;1_Y \rgroup)=\delta_{gs}$ is a regular epimorphism;
consequently, $f\ensuremath{\mathrm{ker}}(g)$ is a regular epimorphism as well.
\end{proof}
\section{Mal'tsev objects}\label{MO}
Even though the concept of a strongly unital object is strong
enough to characterise rings amongst semirings as in
Theorem~\ref{SU(SRng)=Rng}, it fails to give us a characterisation
of groups amongst monoids. For that purpose we need a stronger
concept. The aim of the present section is two-fold: first to
introduce Mal'tsev objects, then to prove that any Mal'tsev object
in the category of monoids is a group (Theorem~\ref{Mal'tsev
monoids are groups}). In fact, also the opposite inclusion holds:
groups are precisely the Mal'tsev monoids. This follows from the
results in the next section, where the even stronger concept of a
protomodular object is introduced.
\begin{definition} \label{definition Mal'tsev objects}
We say that an object $Y$ of a finitely complete category $\ensuremath{\mathbb{C}}$ is
a
\textbf{Mal'tsev object} if the category $\ensuremath{\mathsf{Pt}}_Y(\ensuremath{\mathbb{C}})$ is unital.
As explained after Proposition~\ref{Mal'tsev via fibres}, this
means that for every pullback of split epimorphisms over $Y$ as
in~\eqref{pb of split epis}, the morphisms $\langle 1_{A}, tf
\rangle$ and $\langle sg, 1_{C} \rangle$ are jointly strongly
epimorphic.
\end{definition}
We write $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ for the full subcategory of $\ensuremath{\mathbb{C}}$ determined by
the Mal'tsev objects.
\begin{proposition}\label{double split epi Mal'tsev}
Let $\ensuremath{\mathbb{C}}$ be a regular category. For any object $Y$ in $\ensuremath{\mathbb{C}}$, the
following conditions are equivalent:
\begin{tfae}
\item $Y$ is a Mal'tsev object; \item every double split
epimorphism
\[
\xymatrix@!0@=4em{ D \ar@<-.5ex>[d]_{g'} \ar@<-.5ex>[r]_{f'} & C
\ar@<-.5ex>[d]_g
\ar@<-.5ex>[l]_-{s'} \\
A \ar@<-.5ex>[u]_{t'} \ar@<-.5ex>[r]_f & Y \ar@<-.5ex>[l]_s
\ar@<-.5ex>[u]_t }
\]
over $Y$ is a regular pushout; \item every double split
epimorphism over $Y$ as above, in which $f'$ and $g'$ are jointly
monomorphic, is a pullback.
\end{tfae}
\end{proposition}
\begin{proof}
The equivalence between (ii) and (iii) is immediate.
(i) $\Rightarrow$ (ii). Consider a double split epimorphism over
$Y$ as above. We want to prove that the comparison morphism
$\langle g', f' \rangle \colon D \to A\times_Y C$ is a regular
epimorphism. Suppose that $\langle g',f' \rangle =me$ is its
factorisation as a regular epimorphism followed by a monomorphism.
We obtain the commutative diagram
\[
\xymatrix@!0@C=5em@R=4em{ & M \ar@{ >->}[d]^- m \\
A \ar[r]_-{\langle 1_A, tf\rangle} \ar[ur]^-{et'} & A\times_Y C & C. \ar[l]^-{\langle sg,1_C\rangle} \ar[ul]_-{es'}}
\]
By assumption $(\langle 1_A, tf\rangle, \langle sg,1_C\rangle)$ is
jointly strongly epimorphic, which proves that~$m$ is an
isomorphism and, consequently, $\langle g',f' \rangle$ is a
regular epimorphism.
(ii) $\Rightarrow$ (i). Consider a pullback of split epimorphisms
\eqref{pb of split epis} and a monomorphism~$m$ such that $\langle
1_A,tf \rangle$ and $\langle sg,1_C \rangle$ factor through $m$
$$
\xymatrix@!0@C=5em@R=4em{ & M \ar@{ >->}[d]^- m \\
A \ar[r]_-{\langle 1_A, tf\rangle} \ar[ur]^-{a} & A\times_Y C & C. \ar[l]^-{\langle sg,1_C\rangle} \ar[ul]_-{c}}
$$
We obtain a double split epimorphism over $Y$ given by
\[
\xymatrix@!0@=5em{ M \ar@<-.5ex>[d]_{\pi_A m}
\ar@<-.5ex>[r]_{\pi_C m} & C \ar@<-.5ex>[d]_g
\ar@<-.5ex>[l]_-{c} \\
A \ar@<-.5ex>[u]_{a} \ar@<-.5ex>[r]_f & Y, \ar@<-.5ex>[l]_s
\ar@<-.5ex>[u]_t }
\]
whose comparison morphism to the pullback of $f$ and $g$ is
$m\colon {M \to A\times_Y C}$. By assumption, $m$ is a regular
epimorphism, hence it is an isomorphism.
\end{proof}
\begin{proposition} \label{Mal'tsev implies strongly unital}
Let $\ensuremath{\mathbb{C}}$ be a pointed regular category. Every Mal'tsev object in
$\ensuremath{\mathbb{C}}$ is a strongly unital object.
\end{proposition}
\begin{proof}
Let $Y$ be a Mal'tsev object. By Proposition~\ref{SU
characterisation}, given a split right punctual span
\[
\xymatrix@!0@=4em{ X \ar@<.5ex>[r]^s & Z \ar@<.5ex>[l]^f
\ar@<-.5ex>[r]_g & Y \ar@<-.5ex>[l]_t }
\]
we need to prove that the morphism $\langle f, g \rangle
\colon {Z \to X \times Y}$ is a strong epimorphism. Consider the
commutative diagram on the right
\begin{equation*}
\vcenter{\xymatrix@!0@=4em{\Eq(f) \ar@<-1ex>[r]
\ar@<-.5ex>[d]_-{g'} \ar@<1ex>[r]^-{\pi_{1}} & Z \ar[l]
\ar@<-.5ex>[d]_-{g} \ar@{->>}[r]^-{f} & X
\ar@<-.5ex>[d] \\
\Eq(!_{Y}) \ar@<-.5ex>[u]_-{t'} \ar@<-1ex>[r]
\ar@<1ex>[r]^-{\pi_{1}}
& Y \ar[l] \ar@<-.5ex>[u]_-{t} \ar@{->>}[r]_-{!_{Y}} & 0 \ar@<-.5ex>[u]}}
\end{equation*}
and take kernel pairs to the left. Note that the square on the
right is a regular epimorphism of points. Since $Y$ is a Mal'tsev
object, by Proposition~\ref{double split epi Mal'tsev} the double
split epimorphism of first (or second) projections on the left is
a regular pushout. Lemma~\ref{Bourn Lemma} tells us that the
square on the right is a regular pushout as well, which means that
the morphism $\langle f, g \rangle \colon {Z \to X \times Y}$ is a
regular, hence a strong, epimorphism.
\end{proof}
For a pointed finitely complete category $\ensuremath{\mathbb{C}}$, the category
$\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$ obviously contains the zero object. By the following
proposition we see that the zero object is not necessarily a
Mal'tsev object. Hence if $\ensuremath{\mathbb{C}}$ is pointed and regular, but not
unital, then $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is strictly contained in $\ensuremath{\mathsf{SU}}(\ensuremath{\mathbb{C}})$.
\begin{proposition}
If $\ensuremath{\mathbb{C}}$ is a pointed finitely complete category, then the zero
object is a Mal'tsev object if and only if $\ensuremath{\mathbb{C}}$ is unital.
\end{proposition}
\begin{proof}
The zero object $0$ is a Mal'tsev object if and only if, for any
$X$, $Y \in \ensuremath{\mathbb{C}}$, in the diagram
\[
\xymatrix@!0@=5em{X \times Y \splitsplitpullback
\ar@<-.5ex>[d]_{\pi_X} \ar@<-.5ex>[r]_(.7){\pi_Y} & Y
\ar@<-.5ex>[l]_-{\langle 0, 1_{Y} \rangle}
\ar@<-.5ex>[d] \\
X \ar@<-.5ex>[u]_(.4){\langle 1_{X}, 0 \rangle} \ar@<-.5ex>[r] &
0, \ar@<-.5ex>[l] \ar@<-.5ex>[u] }
\]
the morphisms $\langle 1_{X}, 0 \rangle$ and $\langle 0, 1_{Y}
\rangle$ are jointly strongly epimorphic. This happens if and only
if $\ensuremath{\mathbb{C}}$ is unital.
\end{proof}
\begin{remark}
By Proposition~\ref{Mal'tsev via fibres}, $\ensuremath{\mathbb{C}}$ is a Mal'tsev
category if and only if all fibres $\ensuremath{\mathsf{Pt}}_Y(\ensuremath{\mathbb{C}})$ are unital if and
only if they are strongly unital. For a Mal'tsev object $Y$ in a
category $\ensuremath{\mathbb{C}}$ the fibre $\ensuremath{\mathsf{Pt}}_Y(\ensuremath{\mathbb{C}})$ is unital, but not strongly
unital in general. The previous proposition provides a
counterexample: if $\ensuremath{\mathbb{C}}=\ensuremath{\mathsf{Mon}}$ and $Y=0$, then~$Y$ is a Mal'tsev
object, but the category $\ensuremath{\mathsf{Pt}}_Y(\ensuremath{\mathsf{Mon}})=\ensuremath{\mathsf{Mon}}$ is not strongly
unital~\cite[Example 1.8.2]{Borceux-Bourn}.
\end{remark}
Next we see that some well-known properties which hold for
Mal'tsev categories are still true for Mal'tsev objects.
\begin{proposition}
In a finitely complete category, a reflexive graph whose object of
objects is a Mal'tsev object admits at most one structure of
internal category.
\end{proposition}
\begin{proof}
Given a reflexive graph
\[
\xymatrix@!0@=4em{ X_1 \ar@<1ex>[r]^d \ar@<-1ex>[r]_c & X \ar[l]|e
}
\]
where $X$ is a Mal'tsev object, let $m \colon {X_2 \to X_1}$ be a
multiplication, where $X_2$ is the object of composable arrows. If
this multiplication endows the graph with a structure of internal
category, then it must be compatible with the identities, which
means that
\begin{equation} \label{internal category}
m \langle 1_{X_1}, ec \rangle = m \langle ed, 1_{X_1} \rangle =
1_{X_1}.
\end{equation}
Considering the pullback
\[
\xymatrix@!0@=5em{ X_2 \splitsplitpullback \ar@<-.5ex>[d]_{\pi_1}
\ar@<-.5ex>[r]_{\pi_2} & X_1 \ar@<-.5ex>[d]_d
\ar@<-.5ex>[l]_-{\langle ed ,1_{X_1} \rangle} \\
X_1 \ar@<-.5ex>[u]_(.4){\langle 1_{X_1}, ec \rangle}
\ar@<-.5ex>[r]_c & X, \ar@<-.5ex>[l]_e \ar@<-.5ex>[u]_e }
\]
we see that $\langle 1_{X_1}, ec \rangle$ and $\langle ed, 1_{X_1}
\rangle$ are jointly (strongly) epimorphic, because~$X$ is a
Mal'tsev object. Then there is at most one morphism $m$ satisfying
the equalities~\eqref{internal category}.
\end{proof}
\begin{proposition}
In a finitely complete category, any reflexive relation on a
Mal'\-tsev object is transitive.
\end{proposition}
\begin{proof}
The proof is essentially the same as that of~\cite[Proposition
5.3]{S-proto}.
\end{proof}
\begin{example}
Unlike the case of Mal'tsev categories, it is not true that every
internal category whose object of objects is a Mal'tsev object is a groupoid.
Neither is it true that every reflexive relation on a Mal'tsev
object is symmetric. The category $\ensuremath{\mathsf{Mon}}$ of monoids provides
counterexamples. Indeed, as we show below in
Theorems~\ref{Mal'tsev monoids are groups} and~\ref{groups =
protomodular monoids}, the Mal'tsev objects in $\ensuremath{\mathsf{Mon}}$ are
precisely the groups. As a consequence of Propositions 2.2.4 and
3.3.2 in~\cite{SchreierBook}, in $\ensuremath{\mathsf{Mon}}$ an internal category over
a group is a groupoid if and only if the kernel of the domain
morphism is a group. Similarly, a reflexive relation on a group is
symmetric if and only if the kernels of the two projections of the
relation are groups. A~concrete example of a (totally
disconnected) internal category which is not a groupoid is the
following. If $M$ is a commutative monoid and $G$ is a group,
consider the reflexive graph
\[
\xymatrix@!0@=7em{ M \times G \ar@<1ex>[r]^-{\pi_G}
\ar@<-1ex>[r]_-{\pi_G} & G. \ar[l]|-{\langle 0, 1_{G} \rangle} }
\]
It is an internal category by Proposition 3.2.3 in
\cite{SchreierBook}, but in general it is not a groupoid, since
the kernel of $\pi_G$, which is $M$, need not be a group.
\end{example}
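The failure of invertibility here is elementary and can be recorded
computationally; in the sketch below (illustrative only, with
$M=(\ensuremath{\mathbb{N}},+,0)$) an arrow $(m,g)$ over $g$ composes by adding
$M$-components, so $(1,g)$ admits no inverse.
\begin{verbatim}
# Arrows over an object g are pairs (m, g) with m in M = (N, +, 0);
# composition adds the M-components, the identity on g is (0, g).
def compose(a, b):
    (m1, g1), (m2, g2) = a, b
    assert g1 == g2                     # endomorphisms of the same g
    return (m1 + m2, g1)

g = 'g'
assert compose((1, g), (2, g)) == (3, g)
assert all(compose((1, g), (m, g)) != (0, g) for m in range(100))
\end{verbatim}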
\begin{proposition}\label{RS=SR}
In a regular category, any pair of reflexive relations $R$ and $S$
on a Mal'tsev object $Y$ commutes: $RS=SR$.
\end{proposition}
\begin{proof}
The proof of this result is similar to that of Proposition 2.8
in~\cite{Bourn2014}. Consider the double relation $R\square S$ on
$R$ and $S$:
$$
\xymatrix@!0@=7em{R\square S \ar@<-1ex>[d]_{\pi_{12}}
\ar@<1ex>[d]^-{\pi_{34}} \ar@<-1ex>[r]_-{\pi_{24}}
\ar@<1ex>[r]^-{\pi_{13}} &
S \ar@<-1ex>[d]_{s_1} \ar@<1ex>[d]^-{s_2} \ar[l] \\
R \ar@<-1ex>[r]_-{r_1} \ar@<1ex>[r]^-{r_2} \ar[u] & Y. \ar[l] \ar[u]}
$$
In set-theoretical terms, $R\square S$ is given by the subobject
of $Y\times Y\times Y\times Y$ whose elements are quadruples
$(a,b,c,d)$ such that
$$
\begin{array}{ccc} a & \!\!S\!\! & c \\ R & & R \\ b & \!\!S\!\! & d. \end{array}
$$
Let $R\times_Y S$ denote the pullback of $r_2$ and $s_1$, and
$S\times_Y R$ the pullback of $s_2$ and~$r_1$. By
Proposition~\ref{double split epi Mal'tsev}, the comparison
morphisms $\langle \pi_{12},\pi_{24} \rangle \colon R\square S \to
R\times_Y S$ and $\langle \pi_{13},\pi_{34} \rangle \colon
R\square S \to S\times_Y R$ are regular epimorphisms. Applying
Proposition~2.3 in~\cite{Bourn-Gran-Normal-Sections} to these
regular epimorphisms, it easily follows that $SR\leq RS$
and~${RS\leq SR}$.
\end{proof}
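For intuition, the commutation of equivalence relations is easy to
test on finite examples; the sketch below (illustrative only) checks
$RS=SR$ for the congruences modulo $2$ and modulo $3$ on the group
$\ensuremath{\mathbb{Z}}/6$.
\begin{verbatim}
# Composite RS = {(a, c) | a R b and b S c for some b}.
def comp(R, S):
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

Z6 = range(6)
R = {(a, b) for a in Z6 for b in Z6 if (a - b) % 2 == 0}
S = {(a, b) for a in Z6 for b in Z6 if (a - b) % 3 == 0}
assert comp(R, S) == comp(S, R)   # both are all of Z6 x Z6
\end{verbatim}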
\begin{proposition}\label{Mal'tsev objects Mal'tsev cat}
If $\ensuremath{\mathbb{C}}$ is a finitely complete category, then $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})=\ensuremath{\mathbb{C}}$ if and
only if $\ensuremath{\mathbb{C}}$ is a Mal'tsev category.
\end{proposition}
\begin{proof}
By Proposition~\ref{Mal'tsev via fibres}.
\end{proof}
\begin{corollary} \label{Mal'tsev objects Mal'tsev cat 2}
If $\ensuremath{\mathbb{C}}$ is a finitely complete category and $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is closed
under finite limits in $\ensuremath{\mathbb{C}}$, then $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is a Mal'tsev category.
\end{corollary}
\begin{proof}
Apply Proposition~\ref{Mal'tsev objects Mal'tsev cat} to $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$.
\end{proof}
\begin{proposition}\label{Mal'tsev closed for quots}
If $\ensuremath{\mathbb{C}}$ is a regular category, then $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is closed under
quotients in $\ensuremath{\mathbb{C}}$.
\end{proposition}
\begin{proof}
Given a Mal'tsev object $X$ and a regular epimorphism $f\colon
{X\to Y}$, any double split epimorphism over $Y$ may be pulled
back to a double split epimorphism over $X$, which is a regular
pushout by assumption. It is straightforward to check that the
given double split epimorphism over $Y$ is then a regular pushout.
\end{proof}
\begin{example}
As a consequence of Example~\ref{semirings PM} below, in the
category of semi\-rings the Mal'tsev objects are precisely the
rings: $\ensuremath{\mathsf{M}}(\ensuremath{\mathsf{SRng}})=\ensuremath{\mathsf{SU}}(\ensuremath{\mathsf{SRng}})=\ensuremath{\mathsf{Rng}}$.
\end{example}
\begin{theorem} \label{Mal'tsev monoids are groups}
If $\ensuremath{\mathbb{C}}$ is the category $\ensuremath{\mathsf{Mon}}$ of monoids, then $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is
contained in the subcategory $\ensuremath{\mathsf{Gp}}$ of groups. In other words, if
the category $\ensuremath{\mathsf{Pt}}_{M}(\ensuremath{\mathsf{Mon}})$ is unital then the monoid $M$ is a
group.
\end{theorem}
\begin{proof}
Let $M$ be a Mal'tsev object in the category of monoids. Given any
element $m \neq e_{M}$ of $M$, we are going to prove that it is
right invertible. This suffices, since a monoid in which every
element is right invertible is a group.
Consider the pullback diagram
\begin{equation}\label{M(Mon)}
\vcenter{\xymatrix@!0@=5em{ P \splitsplitpullback
\ar@<-.5ex>[r]_(.5){\pi_2} \ar@<-.5ex>[d]_-{\pi_1} & M + M
\ar@<-.5ex>[l]_-{i_2}
\ar@<-.5ex>[d]_-{\lgroup 1_M\;1_M\rgroup} \\
M + \ensuremath{\mathbb{N}} \ar@<-.5ex>[u]_-{i_1} \ar@<-.5ex>[r]_-{\lgroup 1_M\;m\rgroup}
& M \ar@<-.5ex>[l]_-{\iota_M} \ar@<-.5ex>[u]_-{\iota_1} }}
\end{equation}
where $m \colon \ensuremath{\mathbb{N}} \to M$ is the morphism sending $1$ to $m$.
Recall that $M + M$ may be seen as the set of words of the form
\[
\underline{l}_1\sqbullet \overline{r}_1\sqbullet \cdots \sqbullet
\underline{l}_s \sqbullet \overline{r}_s
\]
for $\underline{l}_i$, $\overline{r}_i \in M$, subject to the rule
that adjacent underlined elements may be multiplied with each other,
and likewise adjacent overlined ones, while any letter equal to the
neutral element $e_M$ may be dropped. The two coproduct inclusions can be described as
\[ \iota_1(l) = \underline{l} \qquad \iota_2(r) = \overline{r}
\]
for $l$, $r\in M$. We use essentially the same notations for the
elements of $M+\ensuremath{\mathbb{N}}$, writing a generic element as
$\underline{m}_1\sqbullet \overline{n}_1\sqbullet \cdots \sqbullet
\underline{m}_t\sqbullet \overline{n}_t$.
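In computational terms, purely as an illustration and with our own
encoding of letters, this reduction of words may be described as
follows.
\begin{verbatim}
# Reduced words in M + M: letters (side, x) with side in {'l','r'};
# adjacent letters on the same side are multiplied in M, and the
# neutral element e is dropped.  The output is always reduced.
def reduce_word(word, mul, e):
    out = []
    for side, x in word:
        if x == e:
            continue
        if out and out[-1][0] == side:
            _, y = out.pop()
            z = mul(y, x)
            if z != e:
                out.append((side, z))
        else:
            out.append((side, x))
    return out

from operator import add                  # example: M = (Z, +, 0)
assert reduce_word([('l',2),('r',3),('r',-3),('l',-2)], add, 0) == []
assert reduce_word([('l',2),('r',0),('l',3)], add, 0) == [('l',5)]
\end{verbatim}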
We see that the pullback $P$ consists of pairs
\[
(\underline{m}_1\sqbullet \overline{n}_1\sqbullet \cdots \sqbullet
\underline{m}_t\sqbullet \overline{n}_t,\;\underline{l}_1\sqbullet
\overline{r}_1\sqbullet \cdots \sqbullet \underline{l}_s \sqbullet
\overline{r}_s)\in (M+\ensuremath{\mathbb{N}})\times(M+M)
\]
such that $m_1 m^{n_1} \cdots m_t m^{n_t}=l_1r_1 \cdots l_s r_s$.
We also know that
\begin{align*}
i_1(\underline{m}_1\sqbullet \overline{n}_1\sqbullet \cdots
\sqbullet \underline{m}_t\sqbullet \overline{n}_t) &=
(\underline{m}_1\sqbullet \overline{n}_1\sqbullet \cdots \sqbullet
\underline{m}_t\sqbullet \overline{n}_t,\;
\underline{m_1 m^{n_1} \cdots m_t m^{n_t}}),\\
i_2(\underline{l}_1\sqbullet \overline{r}_1\sqbullet \cdots
\sqbullet \underline{l}_s \sqbullet \overline{r}_s) &=
(\underline{l_1r_1 \cdots l_s r_s},\; \underline{l}_1\sqbullet
\overline{r}_1\sqbullet \cdots \sqbullet \underline{l}_s \sqbullet
\overline{r}_s).
\end{align*}
Note that $(\overline{1}, \overline{m})$ belongs to $P$, where
$\overline{1}$ denotes $1 \in \ensuremath{\mathbb{N}}$ viewed as an element
of~$M+\ensuremath{\mathbb{N}}$. Since by assumption $i_1$ and $i_2$ are jointly
strongly epimorphic, we have
\begin{align*} \label{equation Mal'tsev monoid}
(\overline{1}, \overline{m})=\,&(\underline{m}_1^1\sqbullet
\overline{n}_1^1\sqbullet \cdots \sqbullet
\underline{m}_{t_1}^1\sqbullet \overline{n}_{t_1}^1,
\underline{m_1^1
m^{n_1^1} \cdots m_{t_1}^1 m^{n_{t_1}^1}})\\
&\sqbullet(\underline{l^{1}_1r^{1}_1 \cdots l^{1}_{s_{1}} r^{1}_{s_{1}}},\;
\underline{l}^{1}_1\sqbullet \overline{r}^{1}_1\sqbullet \cdots
\sqbullet
\underline{l}^{1}_{s_{1}} \sqbullet \overline{r}^{1}_{s_{1}})\\
&\;\vdots \\
&\sqbullet(\underline{m}_1^k\sqbullet \overline{n}_1^k\sqbullet
\cdots \sqbullet \underline{m}_{t_k}^k\sqbullet
\overline{n}_{t_k}^k, \underline{m_1^k m^{n_1^k}
\cdots m_{t_k}^k m^{n_{t_k}^k}})\\
&\sqbullet(\underline{l^{k}_1r^{k}_1 \cdots l^{k}_{s_{k}}
r^{k}_{s_{k}}},\; \underline{l}^{k}_1\sqbullet
\overline{r}^{k}_1\sqbullet \cdots \sqbullet
\underline{l}^{k}_{s_{k}} \sqbullet \overline{r}^{k}_{s_{k}})
\end{align*}
for some $m^{i}_{j}$, $l^{i}_{j}$, $r^{i}_{j}\in M$ and
$n^{i}_{j}\in \ensuremath{\mathbb{N}}$. Computing the first component we get that
$\overline{1}$ is equal to
\[
\underline{m}_1^1\sqbullet \overline{n}_1^1\sqbullet \cdots
\sqbullet \underline{m}_{t_1}^1\sqbullet \overline{n}_{t_1}^1
\sqbullet \underline{l^{1}_1r^{1}_1 \cdots l^{1}_{s_{1}}
r^{1}_{s_{1}}} \sqbullet \cdots \sqbullet
\underline{m}_1^k\sqbullet \overline{n}_1^k\sqbullet \cdots
\sqbullet \underline{m}_{t_k}^k\sqbullet \overline{n}_{t_k}^k
\sqbullet \underline{l^{k}_1r^{k}_1 \cdots l^{k}_{s_{k}}
r^{k}_{s_{k}}}.
\]
Since $1$ cannot be written as a sum
$n^{1}_{1}+\cdots+n^{k}_{t_{k}}$ in $\ensuremath{\mathbb{N}}$ unless all but one of the
$n^{i}_{j}$ are zero and the remaining one equals $1$, we see that the
equality above reduces to
$(\overline{1}, \overline{m})$ being equal to
\begin{multline*}
(\underline{l_1r_1 \cdots l_s r_s},\; \underline{l}_1\sqbullet
\overline{r}_1\sqbullet \cdots \sqbullet \underline{l}_s \sqbullet
\overline{r}_s) \sqbullet(\overline{1}, \underline{m})\sqbullet
(\underline{l'_1r'_1 \cdots l'_{s'} r'_{s'}},\;
\underline{l}'_1\sqbullet \overline{r}'_1\sqbullet \cdots
\sqbullet \underline{l}'_{s'} \sqbullet \overline{r}'_{s'}).
\end{multline*}
Equality of the first components gives us
\[
\overline{1}=\underline{l_1r_1 \cdots l_s r_s} \sqbullet
\overline{1}\sqbullet \underline{l'_1r'_1 \cdots l'_{s'} r'_{s'}}
\]
from which we deduce that
\begin{equation}\label{in K}
l_1r_1 \cdots l_s r_s=e_{M}=l'_1r'_1 \cdots l'_{s'} r'_{s'}.
\end{equation}
This means that $\underline{l}_1\sqbullet \overline{r}_1\sqbullet
\cdots \sqbullet \underline{l}_s \sqbullet \overline{r}_s$ and
$\underline{l}'_1\sqbullet \overline{r}'_1\sqbullet \cdots
\sqbullet \underline{l}'_{s'} \sqbullet \overline{r}'_{s'}$ are in
the kernel of $\lgroup 1_M\;1_M\rgroup\colon {M+M\to M}$. Without
loss of generality we may assume that these two products are
written in their reduced form, meaning that no further
simplification is possible, besides perhaps when
$\underline{l}_{1}$, $\overline{r}_{s}$, $\underline{l}'_{1}$ or
$\overline{r}'_{s'}$ happens to be equal to $e_{M}$. Computing the
second component, we see that
\begin{align*}
\overline{m} &= \underline{l}_1\sqbullet \overline{r}_1\sqbullet
\cdots \sqbullet \underline{l}_s \sqbullet \overline{r}_s
\sqbullet \underline{m} \sqbullet \underline{l}'_1\sqbullet
\overline{r}'_1\sqbullet \cdots \sqbullet
\underline{l}'_{s'} \sqbullet \overline{r}'_{s'} \\
&= \underline{l}_1\sqbullet \overline{r}_1\sqbullet \cdots
\sqbullet \underline{l}_s \sqbullet \overline{r}_s \sqbullet
\underline{ml'_1}\sqbullet \overline{r}'_1\sqbullet \cdots
\sqbullet \underline{l}'_{s'} \sqbullet \overline{r}'_{s'}.
\end{align*}
From this equality we can prove that $m$ is right invertible. Indeed, for
such an equality to hold, certain cancellations must be possible
so that the overlined elements can get together on the right. Next
we study four basic cases to which all the others reduce.
\emph{Case $s=s'=1$.} For the equality
\[
\overline{m} = \underline{l}_1\sqbullet \overline{r}_1 \sqbullet
\underline{ml'_1}\sqbullet \overline{r}'_1
\]
to hold, we must have $\overline{r}_1=e_M$ or
$\underline{ml'_1}=e_M$. In the latter situation, $m$ is right
invertible. If, on the other hand, $\overline{r}_1=e_M$, then
$\underline{l}_1=e_M$ by~\eqref{in K}. The equality $\overline{m}
= \underline{ml'_1}\sqbullet \overline{r}'_1$ then implies that
$\underline{ml'_1}=e_{M}$, so that $m$ is again right invertible.
\emph{Case $s=2$, $s'=1$.} For the equality
\[
\overline{m} = \underline{l}_1\sqbullet \overline{r}_1 \sqbullet
\underline{l}_2\sqbullet \overline{r}_2 \sqbullet
\underline{ml'_1}\sqbullet \overline{r}'_1
\]
to hold, we must have one of the ``inner'' elements on the right
side of the equality equal to $e_M$.
\begin{itemize}
\item If $\underline{ml'_1}=e_M$, then $m$ is right invertible.
\item If $\overline{r}_1=e_M$ or $\underline{l}_2=e_M$, then the word $\underline{l}_1\sqbullet \overline{r}_1\sqbullet
\underline{l}_2\sqbullet \overline{r}_2$ is not reduced.
\item If $\overline{r}_2=e_M$, then $\overline{m} = \underline{l}_1\sqbullet
\overline{r}_1 \sqbullet \underline{l_2ml'_1}\sqbullet
\overline{r}'_1$. Since $\overline{r}_1$ is different from $e_M$,
we have that $l_2ml'_1=e_M$, so that $l_2$ admits an inverse on
the right and $l'_1$ admits one on the left. From~\eqref{in K}, we
also know that $l_2$ admits an inverse on the left and $l'_1$
admits one on the right. Thus, they are both invertible elements,
and hence so is $m$.
\end{itemize}
\emph{Case $s=1$, $s'=2$.} For the equality
\[
\overline{m} = \underline{l}_1\sqbullet \overline{r}_1\sqbullet
\underline{ml'_1}\sqbullet \overline{r}'_1 \sqbullet
\underline{l}'_2\sqbullet \overline{r}'_2
\]
to hold, we must have one of the ``inner'' elements on the right
side of the equality equal to $e_M$.
\begin{itemize}
\item If $\underline{ml'_1}=e_M$, then $m$ is right invertible.
\item If $\overline{r}'_1=e_M$ or $\underline{l}'_2=e_M$, then the word $\underline{l'_1}\sqbullet \overline{r}'_1
\sqbullet \underline{l}'_2\sqbullet \overline{r}'_2$ is not
reduced.
\item If $\overline{r}_1=e_M$, then $\underline{l}_1=e_M$ by~\eqref{in K}, so that
$\overline{m} = \underline{ml'_1}\sqbullet \overline{r}'_1
\sqbullet \underline{l}'_2\sqbullet \overline{r}'_2$. This is
impossible, since $\overline{r}'_1$ and $\underline{l}'_2$ are
non-trivial.
\end{itemize}
\emph{Case $s=2$, $s'=2$.} For the equality
\[
\overline{m} = \underline{l}_1\sqbullet \overline{r}_1\sqbullet
\underline{l}_2\sqbullet
\overline{r}_2\sqbullet\underline{ml'_1}\sqbullet \overline{r}'_1
\sqbullet \underline{l}'_2\sqbullet \overline{r}'_2
\]
to hold, we must have one of the ``inner'' elements on the right
side of the equality equal to $e_M$.
\begin{itemize}
\item If $\underline{ml'_1}=e_M$, then $m$ is right invertible.
\item If $\overline{r}_1=e_M$ or $\underline{l}_2=e_M$, then the word
$\underline{l}_1\sqbullet \overline{r}_1\sqbullet
\underline{l}_2\sqbullet \overline{r}_2$ is not reduced.
\item If $\overline{r}'_1=e_M$ or $\underline{l}'_2=e_M$, then the word $\underline{l'_1}\sqbullet \overline{r}'_1
\sqbullet \underline{l}'_2\sqbullet \overline{r}'_2$ is not
reduced.
\item If $\overline{r}_2=e_M$, then $\overline{m} = \underline{l}_1\sqbullet
\overline{r}_1\sqbullet \underline{l_2ml'_1}\sqbullet
\overline{r}'_1 \sqbullet \underline{l}'_2\sqbullet
\overline{r}'_2$. Again, $\underline{l_2ml'_1}=e_{M}$ as in the
second case, and~\eqref{in K} implies that $m$ is invertible.
\end{itemize}
We see that the last case reduces to one of the previous ones and
it is straightforward to check that the same happens for general
$s$, $s'\geq 2$.
\end{proof}
Below, in Theorem~\ref{groups = protomodular monoids}, we shall
prove that groups are precisely the Mal'tsev monoids: $\ensuremath{\mathsf{M}}(\ensuremath{\mathsf{Mon}}) =
\ensuremath{\mathsf{Gp}}$.
\subsection{$\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is a Mal'tsev core}
As we already recalled in Section~\ref{section S-Mal'tsev and
S-protomodular}, if $\ensuremath{\mathbb{C}}$ is an $\ensuremath{\mathcal{S}}$-Mal'tsev category, then the
subcategory of $\ensuremath{\mathcal{S}}$-special objects $\ensuremath{\mathcal{S}}(\ensuremath{\mathbb{C}})$ is a Mal'tsev
category, called the Mal'tsev core of $\ensuremath{\mathbb{C}}$ relative to~$\ensuremath{\mathcal{S}}$. We
now show that the subcategory $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ of Mal'tsev objects is a
Mal'tsev core with respect to a suitable class $\ensuremath{\mathcal{M}}$ of points,
provided that $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is closed under finite limits in $\ensuremath{\mathbb{C}}$.
Let $\ensuremath{\mathbb{C}}$ be a finitely complete category such that $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is
closed under finite limits. We define $\ensuremath{\mathcal{M}}$ as the class of points
$(f,s)$ in $\ensuremath{\mathbb{C}}$ for which there exists a pullback of split
epimorphisms
\begin{equation}
\label{diagram for MM} \vcenter{
\xymatrix@!0@=4em{ A \splitsplitpullback \ar@<-.5ex>[r] \ar@<-.5ex>[d]_-{f} & A' \ar@<-.5ex>[l]_-{a} \ar@<-.5ex>[d]_{f'} \\
X \ar@<-.5ex>[r] \ar@<-.5ex>[u]_-{s} & X', \ar@<-.5ex>[l] \ar@<-.5ex>[u]_{s'}
}}
\end{equation}
for some point $(f',s')$ in $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$. Note that the class $\ensuremath{\mathcal{M}}$ is
obviously stable under pullbacks along split epimorphisms.
Moreover, all points in $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ belong to $\ensuremath{\mathcal{M}}$.
\begin{proposition}
\label{MM-Mal'tsev} Let $\ensuremath{\mathbb{C}}$ be a finitely complete category.
Given any pullback of split epimorphisms with $(f,s)$ a point in
$\ensuremath{\mathcal{M}}$
\[
\xymatrix@!0@=5em{ A\times_{X}C \splitsplitpullback
\ar@<-.5ex>[d]_{\pi_A} \ar@<-.5ex>[r]_(.7){\pi_C} & C
\ar@<-.5ex>[d]_-g
\ar@<-.5ex>[l]_-{\langle sg,1_C \rangle} \\
A \ar@<-.5ex>[u]_(.4){\langle 1_A,tf \rangle} \ar@<-.5ex>[r]_-f &
X, \ar@<-.5ex>[l]_-s \ar@<-.5ex>[u]_-t }
\]
the pair $(\langle 1_A,tf\rangle, \langle sg, 1_C\rangle)$ is
jointly strongly epimorphic.
\end{proposition}
\begin{proof}
Since $(f,s)$ is a pullback of a point in $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ as in
\eqref{diagram for MM}, we see that the pair $(\langle
1_A,tf\rangle a, \langle sg, 1_C\rangle)$ is jointly strongly
epimorphic. It easily follows that also $(\langle 1_A,tf\rangle,
\langle sg, 1_C\rangle)$ is jointly strongly epimorphic, since
$\langle 1_A,tf\rangle a$ factors through $\langle 1_A,tf\rangle$.
\end{proof}
Note that the property above already appeared in
Definition~\ref{S-Mal'tsev and S-protomodular categories}(1).
\begin{proposition} \label{Mal'tsev objs = Mal'tsev core}
If $\ensuremath{\mathbb{C}}$ is a pointed finitely complete category, and the
subcategory $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ of Mal'tsev objects is closed under finite
limits in~$\ensuremath{\mathbb{C}}$, then it coincides with the subcategory $\ensuremath{\mathcal{M}}(\ensuremath{\mathbb{C}})$ of
$\ensuremath{\mathcal{M}}$-special objects of~$\ensuremath{\mathbb{C}}$.
\end{proposition}
\begin{proof}
If $X$ is a Mal'tsev object, it is obviously $\ensuremath{\mathcal{M}}$-special, since
the point
\begin{equation*}
(\pi_{2}\colon{X\times X\to X},\quad \Delta_{X}=\langle
1_{X},1_{X} \rangle\colon {X\to X\times X})
\end{equation*}
belongs to the subcategory $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$, which is closed under binary
products.
Conversely, suppose that $X$ is $\ensuremath{\mathcal{M}}$-special. Then there is a
point $(f',s')$ in $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ and a point $X\leftrightarrows B'$ in
$\ensuremath{\mathbb{C}}$ such that the square
\[
\xymatrix@!0@=5em{ X\times X \splitsplitpullback \ar@<-.5ex>[r] \ar@<-.5ex>[d]_{\pi_1} & A' \ar@<-.5ex>[l] \ar@<-.5ex>[d]_{f'} \\
X \ar@<-.5ex>[u]_(.4){\langle 1_{X},1_{X} \rangle} \ar@<-.5ex>[r] & B' \ar@<-.5ex>[l] \ar@<-.5ex>[u]_{s'} }
\]
is a pullback. But then $X$, which is the kernel of~$\pi_1$, is
also the kernel of $f'$, and hence it belongs to $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$.
\end{proof}
Strictly speaking, we cannot apply Proposition~4.3 in
\cite{Bourn2014} to conclude that $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is the Mal'tsev core of
$\ensuremath{\mathbb{C}}$ relative to $\ensuremath{\mathcal{M}}$, since the class $\ensuremath{\mathcal{M}}$ we are considering
does not satisfy all the conditions of Definition~\ref{S-Mal'tsev
and S-protomodular categories}. Indeed, our class $\ensuremath{\mathcal{M}}$ is not
stable under arbitrary pullbacks, nor need it be closed in $\ensuremath{\mathsf{Pt}}(\ensuremath{\mathbb{C}})$
under finite limits, in general. However, all the arguments of the
proof of Proposition~4.3 in~\cite{Bourn2014} are still applicable
to our context, since, by definition of the class~$\ensuremath{\mathcal{M}}$, we know
that every point between objects in $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ belongs to~$\ensuremath{\mathcal{M}}$. So,
we can conclude that, if $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is closed in $\ensuremath{\mathbb{C}}$ under finite
limits, then it is a Mal'tsev category, being the Mal'tsev core of
$\ensuremath{\mathbb{C}}$ relative to the class $\ensuremath{\mathcal{M}}$. Observe that we could also
conclude that $\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is a Mal'tsev category simply by
Corollary~\ref{Mal'tsev objects Mal'tsev cat 2}.
\section{Protomodular objects}\label{Protomodular objects}
In this final section we introduce the (stronger) concept of a
protomodular object and prove our paper's main result,
Theorem~\ref{groups = protomodular monoids}: a monoid is a group
if and only if it is a protomodular object, and if and only if it
is a Mal'tsev object.
\begin{definition} \label{definition protomodular objects}
Given an object $Y$ of a finitely complete category $\ensuremath{\mathbb{C}}$, we say
that~$Y$ is
\textbf{protomodular} if every point with codomain $Y$ is stably
strong.
We write $\P(\ensuremath{\mathbb{C}})$ for the full subcategory of $\ensuremath{\mathbb{C}}$ determined by
the protomodular objects.
\end{definition}
Obviously, every protomodular object is strongly unital. Hence it
is also unital and subtractive (Proposition~\ref{SU=SU}). We also
have:
\begin{proposition}\label{PM then Mal}
Let $\ensuremath{\mathbb{C}}$ be a finitely complete category. Every protomodular
object is a Mal'tsev object.
\end{proposition}
\begin{proof}
Let $Y$ be a protomodular object and consider the following
pullback of split epimorphisms:
\[
\xymatrix@!0@=5em{ A\times_{Y}C \splitsplitpullback
\ar@<-.5ex>[d]_{\pi_A} \ar@<-.5ex>[r]_(.7){\pi_C} & C
\ar@<-.5ex>[d]_g
\ar@<-.5ex>[l]_-{\langle sg,1_C \rangle} \\
A \ar@<-.5ex>[u]_(.4){\langle 1_A,tf \rangle} \ar@<-.5ex>[r]_f &
Y. \ar@<-.5ex>[l]_s \ar@<-.5ex>[u]_t }
\]
Since $Y$ is protomodular, the point $(g, t)$ is stably strong
and, consequently, also $(\pi_A, \langle 1_A,tf\rangle)$ is a
strong point. Moreover, the pullback of $s$ along $\pi_A$ is
precisely $\langle sg, 1_C\rangle$, so that the pair $(\langle
1_A,tf\rangle, \langle sg, 1_C\rangle)$ is jointly strongly
epimorphic, as desired. Observe that this proof is a simplified
version of that of Theorem~3.2.1 in~\cite{S-proto}.
\end{proof}
Note that, in the regular case, the above result follows from
Proposition~\ref{double split epi Mal'tsev} via Lemma~\ref{Lemma
Double}.
The inclusion $\P(\ensuremath{\mathbb{C}})\subset \ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})$ is strict, in general, by the
following proposition, Proposition~\ref{Mal'tsev objects Mal'tsev
cat} and the fact that there exist Mal'tsev categories which are
not protomodular.
\begin{proposition}\label{proto objects proto cat}
If $\ensuremath{\mathbb{C}}$ is a finitely complete category, then $\P(\ensuremath{\mathbb{C}})=\ensuremath{\mathbb{C}}$ if and
only if $\ensuremath{\mathbb{C}}$ is protomodular.
\end{proposition}
\begin{proof}
By definition, a finitely complete category is protomodular if and
only if all points in it are strong. When this happens,
automatically all of them are stably strong.
\end{proof}
\begin{corollary}
If $\ensuremath{\mathbb{C}}$ is a finitely complete category and $\P(\ensuremath{\mathbb{C}})$ is closed
under finite limits in $\ensuremath{\mathbb{C}}$, then $\P(\ensuremath{\mathbb{C}})$ is a protomodular
category.
\end{corollary}
\begin{proof}
Apply Proposition~\ref{proto objects proto cat} to $\P(\ensuremath{\mathbb{C}})$.
\end{proof}
Observe that this hypothesis is satisfied when $\ensuremath{\mathbb{C}}$ is the
category $\ensuremath{\mathsf{Mon}}$ of monoids, or the category $\ensuremath{\mathsf{SRng}}$ of semirings,
as can be seen as a consequence of Example~\ref{semirings PM} and
Theorem~\ref{groups = protomodular monoids} below.
\begin{proposition}
If $\ensuremath{\mathbb{C}}$ is regular, then $\P(\ensuremath{\mathbb{C}})$ is closed under quotients in
$\ensuremath{\mathbb{C}}$.
\end{proposition}
\begin{proof}
This follows immediately from Proposition~\ref{stably strong
points closed under quotients}.
\end{proof}
\begin{example} $\P(\ensuremath{\mathsf{SRng}})= \ensuremath{\mathsf{M}}(\ensuremath{\mathsf{SRng}})=\ensuremath{\mathsf{SU}}(\ensuremath{\mathsf{SRng}})=\S(\ensuremath{\mathsf{SRng}})=\ensuremath{\mathsf{Rng}}$. \label{semirings PM}
If $X$ is a protomodular semi{\-}ring, then it is obviously a
strongly unital semiring, thus a ring by
Theorem~\ref{SU(SRng)=Rng}. We already mentioned that if $X$ is a
ring, then every point over it in $\ensuremath{\mathsf{SRng}}$ is stably strong, since
it is a Schreier point by~\cite[Proposition~6.1.6]{SchreierBook}.
In particular, the category $\P(\ensuremath{\mathsf{SRng}})=\ensuremath{\mathsf{Rng}}$ is closed under
finite limits and is protomodular. Thanks to Propositions
\ref{PM then Mal} and~\ref{Mal'tsev implies strongly unital}, we
also have that $\ensuremath{\mathsf{M}}(\ensuremath{\mathsf{SRng}}) = \ensuremath{\mathsf{Rng}}$.
\end{example}
\begin{theorem} \label{groups = protomodular monoids}
If $\ensuremath{\mathbb{C}}$ is the category $\ensuremath{\mathsf{Mon}}$ of monoids, then
$\P(\ensuremath{\mathbb{C}})=\ensuremath{\mathsf{M}}(\ensuremath{\mathbb{C}})=\ensuremath{\mathsf{Gp}}$, the category of groups. In other words, the
following conditions are equivalent, for any monoid $M$:
\begin{tfae}
\item $M$ is a group; \item $M$ is a Mal'tsev object, i.e.,
$\ensuremath{\mathsf{Pt}}_{M}(\ensuremath{\mathsf{Mon}})$ is a unital category; \item $M$ is a protomodular
object, i.e., all points over $M$ in the category of monoids are
stably strong.
\end{tfae}
\end{theorem}
\begin{proof}
If $M$ is a group, then every point over it is stably strong: by
Proposition~3.4 in \cite{BM-FMS2} it is a Schreier point, and
Schreier points are stably strong by Lemma 2.1.6 and Proposition
2.3.4 in \cite{SchreierBook}. This proves that (i) implies (iii).
(iii)~implies (ii) by Proposition~\ref{PM then Mal}, and (ii)
implies (i) by Theorem~\ref{Mal'tsev monoids are groups}.
\end{proof}
\begin{remark}
Note that, in particular, $\P(\ensuremath{\mathsf{Mon}})$ is closed under finite limits
in the category $\ensuremath{\mathsf{Mon}}$.
\end{remark}
\begin{remark}
The proof of Theorem~\ref{Mal'tsev monoids are groups} may be
simplified to obtain a direct proof that (iii) implies (i) in
Theorem~\ref{groups = protomodular monoids}. Instead of the
pullback diagram~\eqref{M(Mon)}, we may consider the simpler
pullback of $\lgroup 1_{M}\;1_M\rgroup\colon M+M\to M$ along
$m\colon{\ensuremath{\mathbb{N}}\to M}$. This idea is further simplified and at the same time strengthened in the article~\cite{GM-ACS}.
\end{remark}
\begin{remark}
As recalled in Example~\ref{Greg}, there are gregarious monoids
that are not groups. Hence, in $\ensuremath{\mathsf{Mon}}$, the subcategory $\P(\ensuremath{\mathsf{Mon}})$
is strictly contained in $\ensuremath{\mathsf{SU}}(\ensuremath{\mathsf{Mon}})$.
\end{remark}
\begin{example}
In the category $\ensuremath{\mathsf{Cat}}_{X}(\ensuremath{\mathbb{C}})$ of internal categories over a fixed
base object~$X$ in a finitely complete category $\ensuremath{\mathbb{C}}$, any internal
groupoid over~$X$ is a protomodular object. This follows from
results in~\cite{Bourn2014}: any pullback of any split epimorphism
over such an internal groupoid ``has a fibrant splitting'', which
implies that it is a strong point. So, over a given internal
groupoid over~$X$, all points are stably strong, which means that
this internal groupoid is a protomodular object.
\end{example}
Similarly to the Mal'tsev case, we also have:
\begin{proposition}
If $\ensuremath{\mathbb{C}}$ is a pointed finitely complete category, then the zero
object is protomodular if and only if $\ensuremath{\mathbb{C}}$ is unital.
\end{proposition}
\begin{proof}
The zero object $0$ is protomodular if and only if every point
over it is stably strong. This means that, for any $X$, $Y \in
\ensuremath{\mathbb{C}}$, in the diagram
\[
\xymatrix@!0@=5em{ X \ar@{{ |>}->}[r]^-{\langle 1_{X},0 \rangle}
\ar@{=}[dr] & X \times Y \halfsplitpullback
\ar@<-.5ex>[r]_(.7){\pi_Y}
\ar[d]^(.5){\pi_X} & Y \ar@<-.5ex>[l]_-{\langle 0,1_{Y} \rangle} \ar[d] \\
& X \ar@<-.5ex>[r] & 0, \ar@<-.5ex>[l] }
\]
the morphisms $\langle 1_{X}, 0 \rangle$ and $\langle 0, 1_{Y}
\rangle$ are jointly strongly epimorphic. This happens if and only
if $\ensuremath{\mathbb{C}}$ is unital.
\end{proof}
\begin{proposition}\label{PM via sum}
If $\ensuremath{\mathbb{C}}$ is a regular category with binary coproducts, then the
following conditions are equivalent:
\begin{tfae}
\item $Y$ is a protomodular object; \item for every morphism
$f\colon {X\to Y}$, the point
\[
(\lgroup f\;1_{Y}\rgroup\colon{X+Y\to Y},\quad \iota_{Y}\colon
{Y\to X+Y})
\]
is stably strong.
\end{tfae}
\end{proposition}
\begin{proof}
This follows from Proposition~\ref{stably strong points closed
under quotients} applied to the morphism of points
\[
\xymatrix@!0@=5em{ X+Y \ar@{->>}[r]^-{\lgroup 1_{X}\; s\rgroup}
\ar@<-.5ex>[d]_-{\lgroup f\; 1_{Y}\rgroup} & X
\ar@<-.5ex>[d]_f \\
Y \ar@{=}[r] \ar@<-.5ex>[u]_{\iota_{Y}} & Y, \ar@<-.5ex>[u]_s
}
\]
for any given point $(f\colon X\to Y,s\colon Y\to X)$.
\end{proof}
\subsection{$\P(\ensuremath{\mathbb{C}})$ is a protomodular core}
Similarly to what we did for Mal'tsev objects, we now show that
the subcategory $\P(\ensuremath{\mathbb{C}})$ of protomodular objects is a protomodular
core with respect to a suitable class $\ensuremath{\mathcal{P}}$ of points, provided
that $\P(\ensuremath{\mathbb{C}})$ is closed under finite limits in $\ensuremath{\mathbb{C}}$.
Let $\ensuremath{\mathbb{C}}$ be a finitely complete category such that $\P(\ensuremath{\mathbb{C}})$ is
closed under finite limits. We define the class $\ensuremath{\mathcal{P}}$ in the
following way: a point $(f,s)$ belongs to~$\ensuremath{\mathcal{P}}$ if and only if it
is the pullback
\begin{equation*}
\label{diagram for PP} \vcenter{
\xymatrix@!0@=4em{ A \ophalfsplitpullback \ar[r] \ar@<-.5ex>[d]_-{f} & A' \ar@<-.5ex>[d]_{f'} \\
X \ar[r] \ar@<-.5ex>[u]_-{s} & X' \ar@<-.5ex>[u]_{s'}
}}
\end{equation*}
of some point $(f',s')$ in $\P(\ensuremath{\mathbb{C}})$. Note that $\ensuremath{\mathcal{P}}$ is a class of
strong points, since each of its elements is a pullback of a stably
strong point (the codomain $X'$ being a protomodular object). The class $\ensuremath{\mathcal{P}}$ is
also a pullback-stable class since any pullback of a point $(f,s)$
in~$\ensuremath{\mathcal{P}}$ is also a pullback of a point in $\P(\ensuremath{\mathbb{C}})$. The class $\ensuremath{\mathcal{P}}$
is not closed under finite limits in~$\ensuremath{\mathsf{Pt}}(\ensuremath{\mathbb{C}})$, in general. So,
strictly speaking, it does not give rise to an $\ensuremath{\mathcal{S}}$-protomodular
category. However, as we observed for the Mal'tsev case, the fact
(which follows immediately from the definition of $\ensuremath{\mathcal{P}}$) that all
points in $\P(\ensuremath{\mathbb{C}})$ belong to $\ensuremath{\mathcal{P}}$ allows us to apply the same
arguments as in the proof of Proposition~6.2 in~\cite{S-proto}
(and its generalisation to the non-pointed case, given in
\cite{Bourn2014}) to conclude that $\P(\ensuremath{\mathbb{C}})$ is a protomodular
category. Indeed, as we now show, it is the protomodular core
$\ensuremath{\mathcal{P}}(\ensuremath{\mathbb{C}})$ of $\ensuremath{\mathbb{C}}$ relative to the class of points $\ensuremath{\mathcal{P}}$. In other
words, it is the category of $\ensuremath{\mathcal{P}}$-special objects of $\ensuremath{\mathbb{C}}$.
\begin{proposition}
\label{proto objs=proto core} If $\ensuremath{\mathbb{C}}$ is a pointed finitely
complete category, and the subcategory $\P(\ensuremath{\mathbb{C}})$ of protomodular
objects is closed under finite limits in $\ensuremath{\mathbb{C}}$, then it coincides
with the protomodular core $\ensuremath{\mathcal{P}}(\ensuremath{\mathbb{C}})$ consisting of the $\ensuremath{\mathcal{P}}$-special
objects of~$\ensuremath{\mathbb{C}}$.
\end{proposition}
\begin{proof}
If $X$ is a protomodular object, it is obviously $\ensuremath{\mathcal{P}}$-special,
since the point
\begin{equation*}
(\pi_{1}\colon{X\times X\to X},\quad \Delta_{X}=\langle
1_{X},1_{X} \rangle\colon {X\to X\times X})
\end{equation*}
belongs to the subcategory $\P(\ensuremath{\mathbb{C}})$, which is closed under binary
products.
Conversely, suppose that $X$ is $\ensuremath{\mathcal{P}}$-special. Then the point
$(\pi_1,\Delta_X)$ is a pullback of a point $(f',s')$ in $\P(\ensuremath{\mathbb{C}})$
\[
\xymatrix@!0@=5em{ X\times X \ophalfsplitpullback \ar[r]^-{h'} \ar@<-.5ex>[d]_{\pi_1} & A' \ar@<-.5ex>[d]_{f'} \\
X \ar@<-.5ex>[u]_(.4){\langle 1_{X},1_{X} \rangle} \ar[r]_-h & B'. \ar@<-.5ex>[u]_{s'} }
\]
But then $X$, which is the kernel of~$\pi_1$, is also the kernel
of $f'$, and hence it belongs to $\P(\ensuremath{\mathbb{C}})$.
\end{proof}
\section*{Acknowledgements}
We are grateful to Dominique Bourn for proposing the problem that
led to the present paper. We would also like to thank Alan Cigoli
and Xabier Garc\'ia-Mart\'inez for fruitful discussions, and the referee for careful comments and suggestions on the text.
|
1,477,468,750,945 | arxiv | \section{Introduction}
The growing popularity of mobile devices and of applications that require bandwidth-hungry services has fueled an increase in mobile traffic which, in many cases, limits the ability of systems to offer high-quality communications.
For example, during peak demand hours, or during a popular event in a specific area, communication systems become congested and service quality is critically impaired. In addition, applications that require operations in areas without infrastructure, called \textit{tactical field operations}, raise the need for a fast and reliable reaction from the communication systems.
In such situations, it is desirable to have helpers to offload traffic from congested networks or areas without infrastructure \cite{zhao2019caching}, \cite{baek2018design}.
Unmanned Aerial Vehicles (UAVs) can fly over and serve congested network areas or specific areas that urgently require specific information. However, UAVs have a limited flight time. Therefore, a UAV may not have the energy resources to visit all the areas. In this paper, our goal is to design an optimal trajectory in order to serve the areas of higher urgency within a certain time.
Recently, research on UAVs that act as small base stations or caching helpers has attracted a lot of interest \cite{zhao2019caching,baek2018design,zeng2016wireless}. The authors in \cite{cao2018mobile} consider a UAV that flies from one location to another and accomplishes a certain amount of computation tasks.
UAVs are used as small cells with caches in \cite{lakiotakis2019joint}, where caching and UAV placement strategies are provided under limited UAV battery budget constraints. The authors in \cite{xu2018overcoming} consider a UAV transmitting files to a group of ground terminals (GTs). Based on device-to-device (D2D) communications, GTs can share the files received from the UAV with their adjacent GTs upon request. In \cite{chen2017caching}, a proactive caching technique is considered, and the authors propose a solution for UAV deployment and caching content placement in order to maximize the quality-of-service (QoS). In \cite{samir2019trajectory}, the authors consider UAV deployment for data delivery in vehicular networks. \emph{To the best of our knowledge, there is no work that considers a UAV that flies over multiple areas of high importance and serves as many of them as possible within a certain time.}
The importance of each area is expressed with a score. We formulate an optimization problem whose solution provides a trajectory for the UAV for which the collected score is maximized. By drawing an analogy with the orienteering problem \cite{golden1987orienteering}, we prove that the problem is NP-hard. We provide a greedy algorithm that finds an approximate solution to the optimization problem in a scalable manner.
\section{System Model and Problem Formulation}
We consider a UAV that flies over multiple geographical areas and collects the corresponding scores. Location $i$ has a score, denoted by $\lambda_{i}$\footnote{E.g., the score can play the role of the user demand in that location.}.
Our goal is to design a trajectory that maximizes the scores collected by the UAV.
\textbf{Trajectory selection.}
We consider that each location $i$ in the set of locations $\mathcal{I}$ may be the barycenter (or centroid) of the corresponding area, or some central hotspot point. Let $x_i$ denote its coordinates on the plane.
The UAV forms a \emph{trajectory} by visiting a subset of locations in a specified order.
For two locations $i,j\in \mathcal{I}$, let $d_{ij}=d_{ji}\propto\|x_i-x_j\|_2$ be a \emph{distance} that measures the amount of time it takes the UAV to move from one location to the other (for example, the Euclidean distance between $x_i$ and $x_j$ divided by the maximum velocity of the UAV).
Consider an undirected complete graph $G=(\mathcal{I},E,\boldsymbol d)$, where $E=\left\{\{i,j\}:i,j \in \mathcal{I}, i\neq j\right\}$ is the set of links connecting the locations, and for each link $\{i,j\}$ we have an associated distance $d_{ij}$ (or $d_{ji}$).
A trajectory $\mathcal{T}$ is a tour on graph $G$, i.e., an ordered list of nodes $\mathcal{T}\triangleq(i_1,i_2, \dots, i_k, i_1)$, such that the UAV visits the nodes in the described order and each node is visited once, except $i_1$. An example of our system model is shown in Fig.~\ref{fig:drone2}, where $\mathcal{T} = (6,4,2,3,6)$.
\begin{figure}
\centering
\includegraphics[scale=0.335]{drone2}
\caption{Illustration of our system model with six areas.}
\label{fig:drone2}
\end{figure}
For each ordered pair of nodes $(i,j)$, we introduce a flow variable $f_{ij}\in\{0,1\}$, where $f_{ij}=1$ if and only if the UAV travels directly from node $i$ to node $j$ along the trajectory $\mathcal{T}$.
The flow that enters node $i$ must equal the flow that goes out:
\sum_{j}f_{ji} =\sum_{j}f_{ij}=\left\{\begin{array}{ll}\label{eq: flow}
1\text{,} & \text{if } i\in \mathcal{T} \\
0\text{,} & \text{if } i\notin \mathcal{T}
\end{array}
\right.,\quad \forall i\in \mathcal{I}.
\end{align}
However, including only the constraints in (\ref{eq: flow}), we may produce solutions that consist of disconnected tours. Let $s\in \mathcal{I}$ denote the node at which the trajectory starts and ends; this is enforced by including $\sum_j f_{sj}=1$ as a constraint. In order to create solutions that do not contain disconnected tours, we introduce the following \emph{subtour elimination constraints}:
\begin{align}\label{cons: subtour}
\sum_{i,j \in \mathcal{S}\text{, } i\neq j} f_{ij} \leq |\mathcal{S}| - 1 \text{, } \forall \mathcal{S} \subseteq \mathcal{I}\setminus\{s\}\text{, } \mathcal{S}\neq\emptyset\text{.}
\end{align}
For each nonempty subset $\mathcal{S}$ of $\mathcal{I}\setminus\{s\}$, (\ref{cons: subtour}) ensures that the number of selected edges with both endpoints in $\mathcal{S}$ is at most $|\mathcal{S}| - 1$, so that no cycle avoiding $s$ can be formed. Hence, (\ref{cons: subtour}) eliminates solutions with two or more disconnected subtours, while the single tour through $s$ remains feasible.
Note that any non-zero integer flow $\boldsymbol f$ that satisfies \eqref{eq: flow} and \eqref{cons: subtour} is a tour, i.e., a path on graph $G$ that starts and ends at node $s$.
The total time of the trajectory $\mathcal{T}$ is denoted by $D$ and is equal to the sum of all traversed distances, $D=\sum_{(i,j)} d_{ij}f_{ij}$.
A trajectory is called \emph{feasible} if its total time is no larger than a specified limit $D_{\text{max}}$:
\begin{align}\label{cons: deadline}
D = \sum_{(i,j)} d_{ij}f_{ij} \leq D_{\text{max}}.
\end{align}
\end{align}
Our target is to compute a UAV trajectory that passes through the areas with the highest scores in time less than $D_{\text{max}}$. To this end, we formulate the UAV Trajectory Design (UTD) problem as:
\begin{subequations}\label{optproblem: trajdesign}
\begin{align}
\max_{\boldsymbol{f}} & \sum_{(i,j)}f_{ij} \lambda_i \label{eq:obj1}\\
\text{s.~t.}&\text{ }(\ref{eq: flow}), (\ref{cons: subtour}), (\ref{cons: deadline}), \\
& \text{ }\sum_{j}f_{ij} \leq 1\text{, } \forall i \in \mathcal{I} \label{eq: singleflowcon}\text{,} \\
&\text{ } \boldsymbol f\in \{0,1\}^{|\mathcal{I}|\times |\mathcal{I}|} \text{.}\label{eq:cst4}
\end{align}
\end{subequations}
Constraint (\ref{eq: singleflowcon}) ensures that at most one unit of flow can enter and exit each node $i$.
\subsection{Strategic Content Placement Use Case}
In this subsection, we discuss the potential application of the trajectory design to the content caching problem. We assume that each area contains users that request popular file content.
Furthermore, we assume that the UAV carries a cache in which we can store popular file content and deliver the files to the users. However, the capacity of the cache is limited, and the flight time of the UAV is limited as well. Therefore, we should jointly decide the trajectory and the file placement. Then, in problem \eqref{optproblem: trajdesign}, the score would be the product of the demand and the popularities of the cached files at each visited location.
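For instance, denoting by $\mathcal{F}$ the set of cached files, by $q_{i}$ the content demand in area $i$, and by $p_{i,f}$ the popularity of file $f$ in area $i$ (this notation is introduced here only for illustration), the score of area $i$ could take the form $\lambda_{i}=q_{i}\sum_{f\in \mathcal{F}}p_{i,f}$.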
\section{Algorithm for UAV Trajectory Design}
\subsection{Orienteering}
First, we characterize the complexity of this problem by drawing an analogy with the \emph{Orienteering Problem} (OP) \cite{golden1987orienteering}, a sport in which starting and ending points are specified in a forest, along with other locations (checkpoints) with associated scores for visiting. Boy scouts must travel from the starting to the ending point before a certain deadline expires and, on their way, they seek to visit a subset of the locations that maximizes the total collected score. Consider, in the OP, that $\lambda_{i}$ is the reward collected from checkpoint $i$, $d_{ij}$ is the travel distance between checkpoints $i$ and $j$, and $D_{\text{max}}$ is the total time frame available for waypoint collection. Then, there is a 1-1 mapping between the UTD and the OP problem. Since the OP is NP-hard, and in particular APX-hard, we get the following result.\\
\noindent\textbf{Corollary 1.} \textit{The UTD problem in \eqref{optproblem: trajdesign} is NP-hard, and in particular APX-hard.\footnote{
It belongs to APX, since it admits a polynomial-time $(2+\varepsilon)$-approximation algorithm \cite{chekuri2012improved}, but, being APX-hard, it admits no Polynomial-Time Approximation Scheme (PTAS) unless P$=$NP.
}}\\
\noindent\textbf{Remark. }The authors in \cite{golden1987orienteering} first defined the Orienteering problem and showed that it is NP-hard via a reduction from the \emph{Travelling Salesman Problem}. The work in \cite{blum2007approximation} shows that Orienteering is APX-hard, i.e., any polynomial-time algorithm will fail to approximate the optimum within $\frac{1481}{1480}$ (unless P=NP).
It also provided a 4--approximation using dynamic programming to compute min-excess paths, i.e., paths that achieve a targeted prize while introducing a minimum amount of excess cost. The work in \cite{friggstad2017compact} provides a 3--approximation for the rooted Orienteering problem, based on Linear Programming relaxation and rounding.
Improved guarantees are also given in \cite{chekuri2004maximum,chekuri2012improved}, where a 2--approximation guarantee is obtained using $k$-TSP techniques.
\vspace{1mm}
\subsection{Subtour elimination: lazy constraints approach}
Note that if the number of nodes is $n$, then there are $2^{n-1}-1$ nonempty subsets $\mathcal{S}$ of $\mathcal{I}\setminus\{s\}$. In order to avoid constructing an exponential number of constraints upfront, resulting in a formulation that is complex even to state, we include the constraints in ($\ref{cons: subtour}$) in a \textit{lazy fashion}. More specifically, we relax all subtour elimination constraints (SECs) \eqref{cons: subtour} and solve the remaining Integer Linear Program (ILP) by using the Gurobi solver\footnote{Note that in order to obtain an optimal solution from the solver, we do not restrict its runtime.}. When the solver finds a feasible solution that satisfies the other constraints, we inspect the selected edges and determine whether the found solution contains disconnected subtours or not. If the selected edges form a single tour through $s$, the found solution has no subtours, hence it satisfies the subtour elimination constraints (even though we did not impose them) and the optimization problem is solved. Otherwise, we add the subtour elimination constraints that are violated and solve the problem again. We repeat until the found solution has no subtours. \\
\noindent\textbf{Theorem 1.} \textit{The lazy constraints approach is optimal.}
\begin{proof}
At each iteration, we find an optimal solution of the relaxed problem; we denote its objective value by $c_{\text{rel}}$. Since the relaxation only removes constraints from a maximization problem, $c_{\text{rel}}$ is an upper bound on the optimal value $c_\text{opt}$ of the original problem, i.e., $c_\text{rel}\geq c_\text{opt}$. After the last iteration of the SEC approach, the optimal solution of the relaxed problem is actually feasible in the original problem, hence $c_\text{rel}\leq c_\text{opt}$. Therefore, we conclude that $c_\text{rel}=c_\text{opt}$.
\end{proof}
We note that this approach provides no guarantee that we will not eventually have to add all subtour elimination constraints (and hence that it may require exponentially many iterations); however, experience shows that it can be quite efficient in some problems \cite{pferschy2017generating}.
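To make the lazy separation concrete, we sketch below a minimal implementation in Gurobi's Python interface (the function and variable names are ours, and the snippet is an illustration of the approach rather than the exact code used in the experiments; it assumes a travel-time matrix \texttt{d}, a score vector \texttt{lam}, and the depot \texttt{s}).
\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def solve_utd(d, lam, D_max, s=0):
    # d[i][j]: travel time, lam[i]: score, s: start/end node
    n = len(lam)
    model = gp.Model("UTD")
    model.Params.LazyConstraints = 1
    f = model.addVars(n, n, vtype=GRB.BINARY, name="f")
    model.addConstrs(f[i, i] == 0 for i in range(n))
    for i in range(n):  # flow conservation, at most one visit per node
        model.addConstr(gp.quicksum(f[j, i] for j in range(n))
                        == gp.quicksum(f[i, j] for j in range(n)))
        model.addConstr(gp.quicksum(f[i, j] for j in range(n)) <= 1)
    model.addConstr(gp.quicksum(f[s, j] for j in range(n)) == 1)
    model.addConstr(gp.quicksum(d[i][j] * f[i, j]  # flight-time budget
                    for i in range(n) for j in range(n)) <= D_max)
    model.setObjective(gp.quicksum(lam[i] * f[i, j]
                       for i in range(n) for j in range(n)), GRB.MAXIMIZE)

    def cb(mod, where):
        # separate violated SECs only when an integer solution is found
        if where != GRB.Callback.MIPSOL:
            return
        val = mod.cbGetSolution(f)
        succ = {i: j for (i, j) in val if val[i, j] > 0.5}
        seen = set()
        for start in succ:  # walk the disjoint cycles of the incumbent
            if start in seen:
                continue
            cyc, node = [], start
            while node not in seen:
                seen.add(node)
                cyc.append(node)
                node = succ[node]
            if s not in cyc:  # a subtour avoiding the depot: cut it
                mod.cbLazy(gp.quicksum(f[i, j] for i in cyc
                           for j in cyc if i != j) <= len(cyc) - 1)

    model.optimize(cb)
    return [(i, j) for (i, j) in f if f[i, j].X > 0.5]
\end{verbatim}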
\subsection{UAV trajectory algorithm}
\begin{algorithm}[!t]
\small
\caption{UAV trajectory design algorithm}\label{alg}
\textbf{Input}: graph $G=(\mathcal{I},E,\bm{d})$, tour budget $D_{\text{max}}$, start/end node $s$ \\
\textbf{Output}: trajectory $\mathcal{T}$\\
$t_d\leftarrow 0$ //traversed time\\
next\_node $\leftarrow \infty$ \\
$\mathcal{T}\leftarrow \left(s,s\right)$ //we start and end at the node $s$ \\
next\_segment $\leftarrow \infty$\\
\Do{\text{next\_node} $\neq \emptyset$}
{\If{$(\text{next\_node}\leq |\mathcal{I}|)$}
{$\ell \leftarrow |\mathcal{T}|$\\
$m\leftarrow$ next\_segment\\
$t_d\leftarrow t_d + d_{\mathcal{T}_m,\text{next\_node}} + d_{\text{next\_node},\mathcal{T}_{m+1}} - d_{\mathcal{T}_m,\mathcal{T}_{m+1}}$ \\
$\mathcal{T}_{m+2:\ell+1}\leftarrow \mathcal{T}_{m+1:\ell}$ \\
$\mathcal{T}_{m+1} \leftarrow \text{next\_node}$
}
\For{$j=1,\ldots,|\mathcal{T}|-1$}{
$\ell_c\leftarrow \emptyset$, $\ell_e\leftarrow \emptyset$\\
\For{$\forall i \in \mathcal{I}$}
{\If{($i\notin \mathcal{T}$)}
{$i_{1}\leftarrow \mathcal{T}_{j}$\\
$i_{2}\leftarrow i$\\
$i_{3} \leftarrow \mathcal{T}_{j+1}$\\
\If{$(t_d+d_{i_1,i_2}+d_{i_2,i_3}-d_{i_1,i_3}\leq D_{\text{max}})$}
{$\ell_c \leftarrow \ell_c \cup \{i\} $ //local candidates \\
$\ell_e \leftarrow \ell_e \cup \left\{d_{i_1,i_2}+d_{i_2,i_3}-d_{i_1,i_3}\right\}$ //extra travel time of the insertion
}}
}
\eIf{$(\ell_c = \emptyset)$}{
$g_{c_{j,:}} \leftarrow \left(|\mathcal{I}|+2,\,0\right)$ // no feasible candidate for this segment
}
{$u_{k} \leftarrow \frac{\lambda_{k}}{\ell_{e_{k}}}\text{, } \forall k \in \ell_c$ // utility of each candidate \\
$\ell_{c_{max}} \leftarrow \text{the candidate node with the maximum utility value}$\\
$g_{c_{j,:}}\leftarrow \left(\ell_{c_{max}},\,\max(u)\right) $ //store the best node and its utility for segment $j$}
next\_node $\leftarrow \emptyset$, next\_segment $\leftarrow \emptyset$\\
\If{$(\max(g_{c_{:,2}})>0)$}
{ //we found some candidates\\
$u \leftarrow g_{c_{:,2}}$ //utilities of the best candidates of all segments\\
next\_segment $\leftarrow$ \text{the segment with the maximum utility}\\
next\_node $\leftarrow g_{c_{\text{next\_segment},1}} $}
\textbf{return} $\mathcal{T}$
\end{algorithm}
\begin{table*}[t!] \caption{The case with $80$ nodes. Execution time. Optimal solution vs greedy algorithm.}
\centering
\begin{tabular}{ | c | c | c | c | c | c | c | c | c |}
\hline
$D_{\text{max}}$ (min) & 2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 \\
\hline
Solver & 103.22 sec & 377.95 sec & $>2$h & $>2$h & $>2$h & $>2$h & $>2$h & 41 sec \\
\hline
Greedy & 0.35 sec & 0.27 sec & 0.25 sec & 0.25 sec & 0.25 sec & 0.39 sec & 0.31 sec & 0.31 sec\\
\hline
\end{tabular}
\label{table: extime}
\end{table*}%
\begin{figure*}[t!]
\begin{center}
\begin{tabular}{ccc}
\begin{subfigure}{0.235\textwidth}\centering\includegraphics[scale=0.25]{CollectedScore}\caption{Collected score by the UAV.}\label{Fig: Score}\end{subfigure}&
\begin{subfigure}{0.248\textwidth}\centering\includegraphics[scale=0.25]{ExecutionTime}\caption{Execution time.}\label{Fig: ExTime}\end{subfigure}
\begin{subfigure}{0.248\textwidth}\centering\includegraphics[scale=0.25]{VisitedNodes}\caption{Number of visited nodes.}\label{Fig: VisitedNodes}\end{subfigure}
\begin{subfigure}{0.23\textwidth}\centering\includegraphics[scale=0.25]{TotalJourney}\caption{Total journey time.}\label{Fig: TotalJourney}\end{subfigure}\\[2\tabcolsep]
\end{tabular}
\end{center}
\caption{The case with $60$ nodes. Optimal solutions vs greedy algorithm.}\label{Fig: results}
\end{figure*}
Even with the lazy constraints approach, the OP is APX-hard, and as the problem instance grows, the solver may take too long to return a solution (if ever). In order to obtain a solution in reasonable time, we propose a heuristic that is described in Algorithm \ref{alg}. Our approach is inspired by a recent study that proposes touristic itineraries on Google maps \cite{friggstad2018orienteering}. Although our algorithm provides no guarantees, it follows the ideas of the knapsack relaxation, i.e., greedily adding waypoints that maximize the efficiency ratio $\frac{\text{reward}}{\text{added time}}$.
The algorithm builds a trajectory by progressively adding waypoints considering: (i) the feasibility of the tour (step 21), and (ii) the cost efficiency of a waypoint addition, by means of the ratio reward/added travel time (step 23). Specifically, we begin with the origin and add the waypoint $a$ that maximizes the ratio reward/distance (step 28). At this point the trajectory is simply origin $\rightarrow$ $a$ $\rightarrow$ origin. Next, for every hop in the trajectory, we find the node that, if inserted in that hop, maximizes the ratio reward/added distance (step 34). Hence the trajectory could become $o\rightarrow b \rightarrow a \rightarrow o$ or $o\rightarrow a\rightarrow b \rightarrow o$. Specifically for the second step, there is symmetry and both solutions are equal. But for the following steps, every hop results in a possibly different maximal waypoint, and we must select the best one. At every step of the way, a waypoint can be added only if the new total travel time does not exceed our budget constraint. When no such node can be found (step 36), our heuristic has converged; a compact Python rendering is sketched below.
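For concreteness, the following sketch renders the insertion heuristic in Python (names are ours; it mirrors Algorithm~\ref{alg} without being a literal transcription, and it assumes $d[i][i]=0$).
\begin{verbatim}
def greedy_trajectory(d, lam, D_max, s):
    # d[i][j]: travel time, lam[i]: score of node i,
    # D_max: flight-time budget, s: start/end node
    tour, t_d, visited = [s, s], 0.0, {s}
    while True:
        best = None  # (utility, position, node, extra time)
        for pos in range(len(tour) - 1):
            a, b = tour[pos], tour[pos + 1]
            for i in range(len(lam)):
                if i in visited:
                    continue
                extra = d[a][i] + d[i][b] - d[a][b]
                if t_d + extra > D_max:
                    continue  # insertion would violate the budget
                u = lam[i] / extra if extra > 0 else float("inf")
                if best is None or u > best[0]:
                    best = (u, pos, i, extra)
        if best is None:  # no feasible insertion remains: converged
            break
        _, pos, i, extra = best
        tour.insert(pos + 1, i)  # insert i between tour[pos], tour[pos+1]
        visited.add(i)
        t_d += extra
    return tour, t_d
\end{verbatim}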
\section{Simulation Results}
We consider that the velocity of the UAV is equal to $70$ km/h\footnote{ \url{https://www.drone-world.com/dji-phantom-4-specs/}.} and generate $100$ different topologies with $50$ nodes each. The location of each node is randomly generated according to a normal distribution that takes values in $[-1,1]$. For each topology, we generate a score $\lambda_{i}$ for each node. Each score takes values according to a normal distribution in $[0,10]$. We consider that the UAV always starts from the location point $(0,0)$. Optimal and suboptimal trajectories are designed by the solver and the algorithm, respectively, for each topology. Then, we take the average of the collected score, execution time, number of visited nodes, and total journey time over the topologies. We repeat for different values of $D_{\text{max}}$. We obtain the optimal solution by using the \textit{Gurobi} software.
In Fig. \ref{Fig: Score}, we compare the collected score for the solutions provided by the solver and by the algorithm. We observe that the score collected by the greedy algorithm is very close to the optimal one. The algorithm utilizes the available budget in an efficient way, as shown in Fig. \ref{Fig: VisitedNodes} and Fig. \ref{Fig: TotalJourney}.
The algorithm needs less than $1$ sec to provide an approximate solution, as shown in Fig. \ref{Fig: ExTime}, which is important when we have a large system to solve or need to run the routine multiple times within another algorithm. On the other hand, the solver needs more than $1$ min to provide an optimal solution, and as $D_{\text{max}}$ increases its runtime increases dramatically. However, we observe that the runtime of the solver decreases after a certain point, as shown in Fig. \ref{Fig: ExTime}. The constraint that affects the number of trajectory options is \eqref{cons: deadline}, i.e., the flight time budget of the UAV. As the flight time budget increases, the number of nodes that can be visited without violating the constraint approaches the number of nodes that cannot be visited, as shown in Fig. \ref{Fig: VisitedNodes}. Therefore, the trajectory options increase and the solver needs more time to provide the optimal solution. However, when $D_{\text{max}}$ is greater than $6$ min, we observe that the number of visited nodes is greater than the number of non-visited ones. For example, for $D_{\text{max}}=8$ min, the UAV visits $35$ nodes out of $60$ nodes in total, as shown in Fig. \ref{Fig: VisitedNodes}. Therefore, it is now easier for the solver to find an optimal solution. To give a better intuition on this, consider that the flight time budget is infinite. Then, the solution is trivial: visit all the nodes without taking the order into account, since the order does not affect the value of the objective function.
Additional results are provided in Table \ref{table: extime}, for a larger topology with $80$ nodes. We see that in some cases, the solver needs more than $2$ h to provide the solution\footnote{We set up the program to stop after a waiting period of $2$ h.}. On the other hand, our proposed algorithm can provide an approximate solution in reasonable time for an arbitrary number of nodes.
\section{Conclusions}
In this paper, we study the trajectory design problem of a UAV that flies over multiple areas and collects the corresponding scores. We formulate an optimization problem in order to maximize the collected score over multiple geographical locations. We show that the problem is equivalent to the Orienteering Problem from operations research, and therefore it is APX-hard. We then provide a fast heuristic algorithm and a simplified MIP approach, and compare their performance. Simulation results show that the algorithm performs well and provides solutions for the cases where the solver becomes impractical. The proposed UAV trajectory design problem can be applied to tactical network and strategic content caching applications.
\bibliographystyle{IEEEtran}
|
1,477,468,750,946 | arxiv | \section{Introduction}
Fractional wave equations, describing the disturbance propagation in a
viscoelastic or non-local material, are obtained through the system of
equations consisting of: the equation of motion corresponding to a one-dimensional
deformable body
\begin{equation}
\partial _{x}\sigma (x,t)=\rho \,\partial _{tt}u(x,t), \label{eq-motion}
\end{equation}
where $u$ and $\sigma $ are displacement and stress, assumed as functions of
space $x\in
\mathbb{R}
$ and time $t>0,$ with $\rho $ being the constant material density; the strain for
small local deformations
\begin{equation}
\varepsilon (x,t)=\partial _{x}u(x,t), \label{strejn}
\end{equation}
and the constitutive equation connecting stress and strain, which can model
either hereditary or non-local material properties.
The aim is to investigate the energy-conserving properties of the wave equations
obtained in this way, namely hereditary and non-local wave equations. Hereditary
materials are modelled by the fractional-order constitutive equations of a
viscoelastic body, including the distributed-order model containing fractional
differentiation orders up to the first order, as well as the fractional Burgers
models containing differentiation orders up to the second order.
Energy dissipation is expected for hereditary wave equations, since the
thermodynamical requirements on the model parameters impose dissipativity of
such constitutive models. On the other hand, non-local materials, modelled
by the non-local Hooke law and the fractional Eringen stress gradient model, are
not expected to dissipate energy.
Hereditary effects in a viscoelastic body are modelled either by the
distributed-order constitutive equation
\begin{equation}
\int_{0}^{1}\phi _{\sigma }(\alpha )\,{}_{0}\mathrm{D}_{t}^{\alpha }\sigma
(x,t)\,\mathrm{d}\alpha =\int_{0}^{1}\phi _{\varepsilon }(\alpha )\,{}_{0}
\mathrm{D}_{t}^{\alpha }\varepsilon (x,t)\,\mathrm{d}\alpha ,
\label{const-eq}
\end{equation}
where $\phi _{\sigma }$ and $\phi _{\varepsilon }$ are constitutive
functions or distributions and where fractional differentiation orders do
not exceed the first order, or by the thermodynamically consistent
fractional Burgers models where fractional differentiation orders are up to
the second order. The fractional Burgers models are represented by unified
models belonging to two classes: the first class is represented by the
unified constitutive equation
\begin{equation}
\left( 1+a_{1}\,{}_{0}\mathrm{D}_{t}^{\alpha }+a_{2}\,{}_{0}\mathrm{D}
_{t}^{\beta }+a_{3}\,{}_{0}\mathrm{D}_{t}^{\gamma }\right) \sigma \left(
x,t\right) =\left( b_{1}\,{}_{0}\mathrm{D}_{t}^{\mu }+b_{2}\,{}_{0}\mathrm{D}
_{t}^{\mu +\eta }\right) \varepsilon \left( x,t\right) , \label{UCE-1-5}
\end{equation}
while the second one is represented by
\begin{equation}
\left( 1+a_{1}\,{}_{0}\mathrm{D}_{t}^{\alpha }+a_{2}\,{}_{0}\mathrm{D}
_{t}^{\beta }+a_{3}\,{}_{0}\mathrm{D}_{t}^{\beta +\eta }\right) \sigma
\left( x,t\right) =\left( b_{1}\,{}_{0}\mathrm{D}_{t}^{\beta }+b_{2}\,{}_{0}
\mathrm{D}_{t}^{\beta +\eta }\right) \varepsilon \left( x,t\right) ,
\label{UCE-6-8}
\end{equation}
where $a_{1},a_{2},a_{3},b_{1},b_{2}>0,$ $\alpha ,\beta ,\mu \in \left[ 0,
1\right] ,$ with $\alpha \leq \beta ,$ $\gamma \in \left[ 0,2\right] ,$ and
$\eta \in \left\{ \alpha ,\beta \right\} .$ The operator of Riemann-Liouville
fractional derivative ${}_{0}\mathrm{D}_{t}^{\xi }$ of order $\xi \in \left[
n,n+1\right] ,$ $n\in
\mathbb{N}
_{0},$ used in constitutive models (\ref{const-eq}), (\ref{UCE-1-5}), and
(\ref{UCE-6-8}), is defined by
\begin{equation*}
{}_{0}\mathrm{D}_{t}^{\xi }y\left( t\right) =\frac{\mathrm{d}^{n+1}}{\mathrm{
d}t^{n+1}}\left( \frac{t^{-\left( \xi -n\right) }}{\Gamma \left( 1-\left(
\xi -n\right) \right) }\ast _{t}y\left( t\right) \right) ,\;\;t>0,
\end{equation*}
see \cite{TAFDE}, where $\ast _{t}$ denotes the convolution in time:
$f\left( t\right) \ast _{t}g\left( t\right) =\int_{0}^{t}f\left( t^{\prime
}\right) g\left( t-t^{\prime }\right) \mathrm{d}t^{\prime },$ $t>0.$
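As a quick illustration of the definition, for $\xi \in \left( 0,1\right) $
(so that $n=0$) and $y\left( t\right) =t$ one obtains
\begin{equation*}
{}_{0}\mathrm{D}_{t}^{\xi }t=\frac{\mathrm{d}}{\mathrm{d}t}\left( \frac{t^{-\xi }}{\Gamma \left( 1-\xi \right) }\ast _{t}t\right) =\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{t^{2-\xi }}{\Gamma \left( 3-\xi \right) }=\frac{t^{1-\xi }}{\Gamma \left( 2-\xi \right) },\;\;t>0,
\end{equation*}
which reduces to the classical first derivative $\frac{\mathrm{d}}{\mathrm{d}t}t=1$ in the limit $\xi \rightarrow 1$ and to the identity in the limit $\xi \rightarrow 0$.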
Non-locality effects in a material are described either by the non-local
Hooke law
\begin{equation}
\sigma (x,t)=\frac{E}{\ell ^{1-\alpha }}\frac{|x|^{-\alpha }}{2\Gamma
(1-\alpha )}\ast _{x}\varepsilon (x,t),\;\;\alpha \in \left( 0,1\right) ,
\label{nl-Huk}
\end{equation}
or by the fractional Eringen constitutive equation
\begin{equation}
\sigma \left( x,t\right) -\ell ^{\alpha }\,\mathrm{D}_{x}^{\alpha }\sigma
\left( x,t\right) =E\,\varepsilon \left( x,t\right) ,\;\;\alpha \in \left(
1,3\right) , \label{frac-Eringen}
\end{equation}
where $E$ is the Young modulus, $\ell $ is the non-locality parameter, and $\mathrm{D
}_{x}^{\alpha }$ is defined as
\begin{eqnarray}
\mathrm{D}_{x}^{\alpha }y\left( x\right) &=&\frac{\left\vert x\right\vert
^{1-\alpha }}{2\Gamma \left( 2-\alpha \right) }\ast _{x}\frac{\mathrm{d}^{2}
}{\mathrm{d}x^{2}}y\left( x\right) ,\;\;\text{for}\;\;\alpha \in \left(
1,2\right) , \label{Dx-1} \\
\mathrm{D}_{x}^{\alpha }y\left( x\right) &=&\frac{\left\vert x\right\vert
^{2-\alpha }\func{sgn}x}{2\Gamma \left( 3-\alpha \right) }\ast _{x}\frac{
\mathrm{d}^{3}}{\mathrm{d}x^{3}}y\left( x\right) ,\;\;\text{for}\;\;\alpha
\in \left( 2,3\right) , \label{Dx-2}
\end{eqnarray}
with $\ast _{x}$ denoting the convolution in space: $f\left( x\right) \ast
_{x}g\left( x\right) =\int_{
\mathbb{R}
}f\left( x^{\prime }\right) g\left( x-x^{\prime }\right) \mathrm{d}x^{\prime
},$ $x\in
\mathbb{R}
.$
The Cauchy problem on the real line $x\in \mathbb{R}$ and $t>0$ is
considered, so the system of governing equations (\ref{eq-motion}), (\ref
{strejn}), and one of the constitutive equations (\ref{const-eq}), or (\ref
{UCE-1-5}), or (\ref{UCE-6-8}), or (\ref{nl-Huk}), or (\ref{frac-Eringen})
is subject to initial and boundary conditions:
\begin{gather}
u(x,0)=u_{0}(x),\;\;\frac{\partial }{\partial t}u(x,0)=v_{0}(x),
\label{ic-u} \\
\sigma (x,0)=0,\;\;\varepsilon (x,0)=0,\;\;\partial _{t}\sigma
(x,0)=0,\;\;\partial _{t}\varepsilon (x,0)=0, \label{ic-sigma-eps} \\
\lim_{x\rightarrow \pm \infty }u(x,t)=0,\;\;\lim_{x\rightarrow \pm \infty
}\sigma (x,t)=0, \label{bc}
\end{gather}
where $u_{0}$ is the initial displacement and $v_{0}$ is the initial
velocity. The initial conditions (\ref{ic-sigma-eps}) are needed for the
hereditary constitutive equations: the distributed-order constitutive equation
(\ref{const-eq}) needs (\ref{ic-sigma-eps})$_{1,2}$ and the fractional Burgers
models (\ref{UCE-1-5}), (\ref{UCE-6-8}) require all initial conditions (\ref
{ic-sigma-eps}), while the non-local constitutive models (\ref{nl-Huk}) and (\ref
{frac-Eringen}) do not need any of the initial conditions (\ref{ic-sigma-eps}).
The distributed-order constitutive model (\ref{const-eq}) generalizes
integer and fractional order constitutive models of linear viscoelasticity
having differentiation orders up to the first order, since it reduces to the
linear fractional model
\begin{equation}
\sum_{i=1}^{n}a_{i}\,{}_{0}\mathrm{D}_{t}^{\alpha _{i}}\sigma
(x,t)=\sum_{j=1}^{m}b_{j}\,{}_{0}\mathrm{D}_{t}^{\beta _{j}}\varepsilon
(x,t), \label{gen-lin}
\end{equation}
with model parameters $a_{i},b_{j}>0$ and $\alpha _{i},\beta _{j}\in \left[
0,1\right] ,$ $i=1,\ldots ,n$, $j=1,\ldots ,m,$ if the constitutive
distributions $\phi _{\sigma }$ and $\phi _{\varepsilon }$ in (\ref{const-eq})
are chosen as
\begin{equation*}
\phi _{\sigma }(\alpha )=\sum_{i=1}^{n}a_{i}\,\delta (\alpha -\alpha
_{i}),\;\;\phi _{\varepsilon }(\alpha )=\sum_{j=1}^{m}b_{j}\,\delta (\alpha
-\beta _{j}),
\end{equation*}
where $\delta $ denotes the Dirac delta distribution. Moreover, the
power-type distributed-order model
\begin{equation}
\int_{0}^{1}a^{\alpha }\,{}_{0}\mathrm{D}_{t}^{\alpha }\sigma (x,t)\,\mathrm{
d}\alpha =E\int_{0}^{1}b^{\alpha }\,{}_{0}\mathrm{D}_{t}^{\alpha
}\varepsilon (x,t)\,\mathrm{d}\alpha , \label{DOCE}
\end{equation}
is obtained from (\ref{const-eq}) as the genuine distributed-order model, if
constitutive functions $\phi _{\sigma }$ and $\phi _{\varepsilon }$ in (\ref
{const-eq}) are chosen as
\begin{equation*}
\phi _{\sigma }(\alpha )=a^{\alpha },\;\;\phi _{\varepsilon }(\alpha
)=E\,b^{\alpha },
\end{equation*}
with model parameters $E,a,b>0$ ensuring dimensional homogeneity.
Thermodynamical consistency of the linear fractional constitutive equation (\ref
{gen-lin}) is examined in \cite{AKOZ}, where it is shown that there are four
cases of (\ref{gen-lin}) in which the restrictions on model parameters guarantee
its thermodynamical consistency, while the power-type distributed-order model
(\ref{DOCE}) is considered in \cite{a-2003} and revisited in \cite{AKOZ},
where the conditions $E>0$ and $0\leq a\leq b,$ guaranteeing the model's
thermodynamical consistency, are obtained. Four cases of thermodynamically
acceptable models corresponding to (\ref{gen-lin}) are given in Appendix \ref
{LFMS}.
Fractional wave equations, corresponding to the system of governing
equations (\ref{eq-motion}), (\ref{strejn}), and distributed-order
constitutive model (\ref{const-eq}), are considered for the Cauchy problem
in \cite{KOZ19}, generalizing the results of \cite{KOZ10,KOZ11}, where
respectively fractional Zener model and its generalization
\begin{gather}
\left( 1+a\,{}_{0}\mathrm{D}_{t}^{\alpha }\right) \sigma (x,t)=E\left(
1+b\,{}_{0}\mathrm{D}_{t}^{\alpha }\right) \varepsilon (x,t),\;\;0\leq a\leq
b,\;\alpha \in \left[ 0,1\right] , \label{FZM} \\
\sum_{i=1}^{n}a_{i}\,{}_{0}\mathrm{D}_{t}^{\alpha _{i}}\sigma
(x,t)=\sum_{i=1}^{n}b_{i}\,{}_{0}\mathrm{D}_{t}^{\alpha _{i}}\varepsilon
(x,t),\;\;0\leq \alpha _{1}\leq \ldots \leq \alpha _{n}<1,\;\frac{a_{1}}{
b_{1}}\geq \ldots \geq \frac{a_{n}}{b_{n}}\geq 0, \notag
\end{gather}
are considered as special cases of (\ref{gen-lin}). Considering the wave
propagation speed, it is found in \cite{KOZ19} that the finite wave speed,
as well as the infinite one, is a property of both solid-like and fluid-like
materials. Solid-like and fluid-like materials are distinguished in the creep
test, representing the deformation response of a material to a suddenly applied
and subsequently constant stress: the deformation of the first type of materials
is bounded for large time, contrary to the second type of materials, whose
deformation is unbounded for large time.
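For instance, in the creep test with $\sigma \left( x,t\right) =\sigma _{0}$ for $t>0$, the fractional Zener model (\ref{FZM}) predicts a deformation tending to the finite value $\frac{\sigma _{0}}{E}$ as $t\rightarrow \infty $, illustrating the solid-like behavior, while fluid-like models predict a deformation growing without bound.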
Eight thermodynamically consistent fractional Burgers models, formulated in
\cite{OZ-1}, all describing fluid-like material behavior, are divided into
two classes. The first class, represented by (\ref{UCE-1-5}), contains five
models, such that the highest fractional differentiation order of strain is
$\mu +\eta \in \left[ 1,2\right] ,$ with $\eta \in \left\{ \alpha ,\beta
\right\} ,$ while the highest fractional differentiation order of stress is
either $\gamma \in \left[ 0,1\right] $ in the case of Model I, with $0\leq
\alpha \leq \beta \leq \gamma \leq \mu \leq 1$ and $\eta \in \left\{ \alpha
,\beta ,\gamma \right\} ,$ or $\gamma \in \left[ 1,2\right] $ in the case of
Models II - V, with $0\leq \alpha \leq \beta \leq \mu \leq 1$ and $\left(
\eta ,\gamma \right) \in \left\{ \left( \alpha ,2\alpha \right) ,\left(
\alpha ,\alpha +\beta \right) ,\left( \beta ,\alpha +\beta \right) ,\left(
\beta ,2\beta \right) \right\} $. Note that the fractional differentiation
order of stress is less than the differentiation order of strain, regardless
of the interval $\left[ 0,1\right] $ or $\left[ 1,2\right] .$ The second
class, represented by (\ref{UCE-6-8}), contains three models, such that
$0\leq \alpha \leq \beta \leq 1$ and $\beta +\eta \in \left[ 1,2\right] ,$
with $\eta =\alpha ,$ in the case of Model VI; $\eta =\beta $ in the case of
Model VII; and $\alpha =\eta =\beta ,$ $\bar{a}_{1}=a_{1}+a_{2},$ and $\bar{a
}_{2}=a_{3},$ in the case of Model VIII. Note that, considering the interval
$\left[ 0,1\right] ,$ the highest fractional differentiation orders of stress
and strain are equal, which also holds true for the orders from the interval
$\left[ 1,2\right] .$ The explicit forms of Models I - VIII, along with the
corresponding thermodynamical restrictions, can be found in Appendix \ref
{FBMS}.
The fractional Burgers wave equation, represented by the governing equations
(\ref{eq-motion}), (\ref{strejn}), and either (\ref{UCE-1-5}) or
(\ref{UCE-6-8}), is solved for the Cauchy problem in \cite{OZO}. The wave
propagation speed is found to be infinite for models belonging to the first
class, given by (\ref{UCE-1-5}), contrary to the case of models of the
second class (\ref{UCE-6-8}), which yield finite wave propagation speed.
Moreover, numerical examples indicate that the displacement, obtained as the
fundamental solution of the fractional Burgers wave equation, may display a
jump from a finite value to zero at the wave front.
The non-local Hooke law (\ref{nl-Huk}) is introduced in \cite{A-S-09}
through the non-local strain measure and used with the classical Hooke law
as a constitutive equation for modeling wave propagation in non-local media,
while in \cite{AJOPZ} the constitutive equation including both memory and
non-local effects is constructed using the fractional Zener model (\ref{FZM})
and the non-local Hooke law (\ref{nl-Huk}), and is further used to describe
wave propagation in a non-local viscoelastic material. The tools of microlocal
analysis are employed in \cite{HOZ16} to investigate properties of this
memory and non-local type fractional wave equation.
Generalizing the integer-order Eringen stress gradient non-local
constitutive law, the fractional Eringen model (\ref{frac-Eringen}) is
postulated in \cite{CZAS}, where the optimal values of non-locality
parameter and order of fractional differentiation are obtained with respect
to the Born-K\'{a}rm\'{a}n model of lattice dynamics. Further, wave
propagation, as well as propagation of singularities, in non-local material
described by the fractional Eringen model (\ref{frac-Eringen}) is analyzed
in \cite{HOZ18}.
The energy estimates for proving existence and uniqueness of the solution to
the three-dimensional wave equation corresponding to a material of fractional
Zener type using the Galerkin method are considered in
\cite{OparnicaSuli,Saedpanah}, while the three-dimensional wave equation,
posed as a singular kernel integrodifferential equation with the kernel being
the relaxation modulus unbounded at the initial time, is analyzed in
\cite{Carillo2019}.
The positivity of Green's functions corresponding to a
three-dimensional integrodifferential wave equation, which has a completely
monotonic relaxation modulus as a kernel, is established in \cite{Ser-1},
while the exponential energy decay of non-linear viscoelastic wave equation
under the potential well is analyzed in \cite{YWangYWang} assuming Dirichlet
boundary conditions.
In the case of the one-dimensional wave equation, written as an
integrodifferential equation including the relaxation modulus assumed to be a
wedge continuous function, the solution existence and uniqueness analysis is
performed in \cite{Carillo2019a}, while \cite{Carillo2015} aimed to
underline the similarities between a rigid heat conductor having a heat flux
relaxation function singular at the origin and a viscoelastic material having
a relaxation modulus unbounded at the origin. In \cite{Hanyga2013,Hanyga2019}
one-dimensional wave propagation characteristics, such as wave propagation
speed and wave attenuation, are investigated without and with the Newtonian
viscosity component present in the completely monotonic relaxation modulus.
The extensive overview of wave propagation problems in viscoelastic
materials can be found in \cite{APSZ-2,Holm-book,Mai-10}.
In \cite{Wu} the transient effects, i.e., short-lived seismic wave
propagation through viscoelastic subsurface media, are considered and
asymptotic expansions of the solutions are obtained via the Buchen-Mainardi
algorithm introduced in \cite{BuchenMainardi}. The same method is used in
\cite{ColombaroGiustiMainardi1} in the case of waves in fractional Maxwell
and Kelvin-Voigt viscoelastic materials. Dispersion, attenuation, wave
fronts, and asymptotic behavior of solution to viscoelastic wave equation
near the wave front are studied in \cite{Han7,Han8,Han6}.
The survey of acoustic wave equations aiming to describe the frequency
dependent attenuation and scattering of acoustic disturbance propagation
through complex media displaying viscous dissipation is presented in
\cite{Cai2018}, while the frequency responses of viscoelastic materials are
reviewed in \cite{Makris}.
The existence and uniqueness of solutions to the three-dimensional wave equation
with the Eringen model as a constitutive equation are studied in
\cite{EvgrafovBellido}, where it is found that the problem is in general
ill-posed in the case of smooth kernels and well-posed in the case of singular,
non-smooth kernels. Considering longitudinal and shear wave propagation
in a non-local medium, the influence of geometric non-linearity is
investigated in \cite{MalkhanovErofeevLeontieva}. Combining viscoelastic and
non-locality characteristics of the medium, wave propagation and wave
decay are studied in \cite{Silling} under a source positioned at the
end of a semi-infinite medium.
\section{Hereditary fractional wave equations expressed through relaxation
modulus and creep compliance}
Relaxation modulus and creep compliance, representing material properties in
stress relaxation and creep tests, are used in order to formulate the fractional
wave equation corresponding to the system of governing equations
(\ref{eq-motion}), (\ref{strejn}), and (\ref{const-eq}), or (\ref{UCE-1-5}), or
(\ref{UCE-6-8}).
Relaxation modulus $\sigma _{sr}$ (creep compliance $\varepsilon _{cr}$) is
the stress (strain) history function obtained as a response to the strain
(stress) assumed as the Heaviside step function $H.$ According to the
material behavior in stress relaxation and creep tests at the initial
time-instant, one distinguishes materials having either finite or infinite
glass modulus $\sigma _{sr}^{\left( g\right) }=\sigma _{sr}\left( 0\right) ,$
implying the finite or zero value of the glass compliance $\varepsilon
_{cr}^{\left( g\right) }=\varepsilon _{cr}\left( 0\right) .$ The wave
propagation speed, obtained as
\begin{equation*}
c=\sqrt{\sigma _{sr}^{\left( g\right) }}=\frac{1}{\sqrt{\varepsilon
_{cr}^{\left( g\right) }}}
\end{equation*}
in \cite{KOZ19} for the distributed-order constitutive model (\ref{const-eq})
and in \cite{OZO} for the fractional Burgers models (\ref{UCE-1-5}) and
(\ref{UCE-6-8}), is the implication of these material properties. On the
other hand, according to the material behavior in stress relaxation and
creep tests for large time, one distinguishes fluid-like materials, having the
equilibrium compliance $\varepsilon _{cr}^{\left( e\right)
}=\lim_{t\rightarrow \infty }\varepsilon _{cr}\left( t\right) $ infinite and
therefore the equilibrium modulus $\sigma _{sr}^{\left( e\right)
}=\lim_{t\rightarrow \infty }\sigma _{sr}\left( t\right) $ zero, from
solid-like materials, having both equilibrium compliance and equilibrium
modulus finite. The overview of asymptotic properties for viscoelastic
materials described by constitutive models (\ref{const-eq}), (\ref{UCE-1-5}),
and (\ref{UCE-6-8}) is presented in Table \ref{tbl}. \input{tbl.tex}
In order to express the constitutive equations (\ref{const-eq}),
(\ref{UCE-1-5}), and (\ref{UCE-6-8}) either in terms of the relaxation modulus,
or in terms of the creep compliance, the Laplace transform with respect to
time
\begin{equation*}
\tilde{f}\left( s\right) =\mathcal{L}\left[ f\left( t\right) \right] \left(
s\right) =\int_{0}^{\infty }f\left( t\right) \mathrm{e}^{-st}\,\mathrm{d}t,\;\;\func{Re}s>0,
\end{equation*}
is applied to (\ref{const-eq}), (\ref{UCE-1-5}), and (\ref{UCE-6-8}), so
that
\begin{equation}
\Phi _{\sigma }(s)\tilde{\sigma}\left( x,s\right) =\Phi _{\varepsilon }(s)\tilde{\varepsilon}\left( x,s\right) ,\;\;\func{Re}s>0, \label{CEs-LT}
\end{equation}
is obtained assuming zero initial conditions (\ref{ic-sigma-eps}), with
\begin{equation}
\Phi _{\sigma }(s)=\int_{0}^{1}\phi _{\sigma }(\alpha )s^{\alpha }\,\mathrm{d}\alpha ,\;\;\Phi _{\varepsilon }(s)=\int_{0}^{1}\phi _{\varepsilon }(\alpha
)s^{\alpha }\,\mathrm{d}\alpha , \label{fiovi}
\end{equation}
in the case of the distributed-order constitutive model (\ref{const-eq}), reducing
to
\begin{equation}
\Phi _{\sigma }(s)=\sum_{i=1}^{n}a_{i}\,s^{\alpha _{i}},\;\;\Phi
_{\varepsilon }(s)=\sum_{j=1}^{m}b_{j}\,s^{\beta _{j}},\;\;\text{and}\;\;\Phi _{\sigma }(s)=\frac{as-1}{\ln \left( as\right) },\;\;\Phi
_{\varepsilon }(s)=E\frac{bs-1}{\ln \left( bs\right) }, \label{fiovi-lin}
\end{equation}
for the linear fractional constitutive equation (\ref{gen-lin}) and the
power-type distributed-order model (\ref{DOCE}), respectively, as well as with
\begin{gather}
\Phi _{\sigma }(s)=1+a_{1}s^{\alpha }+a_{2}\,s^{\beta }+a_{3}\,s^{\gamma
},\;\;\Phi _{\varepsilon }(s)=b_{1}\,s^{\mu }+b_{2}\,s^{\mu +\eta },
\label{Burgers1-fiovi} \\
\Phi _{\sigma }(s)=1+a_{1}s^{\alpha }+a_{2}\,s^{\beta }+a_{3}\,s^{\beta
+\eta },\;\;\Phi _{\varepsilon }(s)=b_{1}\,s^{\beta }+b_{2}\,s^{\beta +\eta
}, \label{Burgers2-fiovi}
\end{gather}
in the case of the fractional Burgers models of the first and second
class, given by (\ref{UCE-1-5}) and (\ref{UCE-6-8}), respectively.
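The closed forms in (\ref{fiovi-lin}) for the power-type model follow from the
elementary integral $\int_{0}^{1}\left( as\right) ^{\alpha }\,\mathrm{d}\alpha =\frac{as-1}{\ln \left( as\right) },$ which the following sketch
verifies by quadrature for real $s>0$ (the value of $a$ is an arbitrary
illustration).
\begin{verbatim}
# Hedged sketch: numerical check of the power-type distributed-order symbol
# Phi_sigma(s) = int_0^1 a^alpha s^alpha d(alpha) = (a*s - 1)/ln(a*s), s > 0.
import numpy as np
from scipy.integrate import quad

a = 0.7                                   # illustrative parameter, a > 0
for s in [0.5, 2.0, 10.0]:
    num, _ = quad(lambda al: (a * s)**al, 0.0, 1.0)
    closed = (a * s - 1.0) / np.log(a * s)
    print(f"s={s}: quadrature={num:.10f}, closed form={closed:.10f}")
# The two columns agree to quadrature accuracy; for complex s the same
# identity is used with the principal branch of the logarithm.
\end{verbatim}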
The Laplace transforms of relaxation modulus and creep compliance,
\begin{equation}
\tilde{\sigma}_{sr}\left( s\right) =\frac{1}{s}\frac{\Phi _{\varepsilon }(s)}{\Phi _{\sigma }(s)}\;\;\text{and}\;\;\tilde{\varepsilon}_{cr}\left(
s\right) =\frac{1}{s}\frac{\Phi _{\sigma }(s)}{\Phi _{\varepsilon }(s)},
\label{sr-cr}
\end{equation}
are respectively obtained by using the Laplace transform of constitutive
equation (\ref{CEs-LT}) for $\tilde{\varepsilon}\left( x,s\right) =\mathcal{L}\left[ H\left( t\right) \right] \left( s\right) =\frac{1}{s}$ and
$\tilde{\sigma}\left( x,s\right) =\mathcal{L}\left[ H\left( t\right) \right]
\left( s\right) =\frac{1}{s},$ so that (\ref{sr-cr}) used in (\ref{CEs-LT})
yields the Laplace transform of constitutive equation (\ref{CEs-LT}) expressed
either in terms of relaxation modulus, or in terms of creep compliance, as
\begin{equation}
\frac{1}{s}\tilde{\sigma}\left( x,s\right) =\tilde{\sigma}_{sr}\left(
s\right) \tilde{\varepsilon}\left( x,s\right) \;\;\text{or}\;\;\frac{1}{s}\tilde{\varepsilon}\left( x,s\right) =\tilde{\varepsilon}_{cr}\left(
s\right) \tilde{\sigma}\left( x,s\right) , \label{CEs-LT-1}
\end{equation}
providing six equivalent forms of the hereditary constitutive equation:
three expressed in terms of relaxation modulus,
\begin{gather}
\int_{0}^{t}\sigma \left( x,t^{\prime }\right) \mathrm{d}t^{\prime }=\sigma
_{sr}\left( t\right) \ast _{t}\varepsilon \left( x,t\right) ,
\label{int-sigma-eps-1} \\
\sigma \left( x,t\right) =\sigma _{sr}^{\left( g\right) }\varepsilon \left(
x,t\right) +\dot{\sigma}_{sr}\left( t\right) \ast _{t}\varepsilon \left(
x,t\right) , \label{sigma-konv} \\
\sigma \left( x,t\right) =\sigma _{sr}\left( t\right) \ast _{t}\partial
_{t}\varepsilon \left( x,t\right) , \label{sigma-konv-1}
\end{gather}
obtained by the Laplace transform inversion in (\ref{CEs-LT-1})$_{1},$ and
three expressed in terms of creep compliance,
\begin{gather}
\int_{0}^{t}\varepsilon \left( x,t^{\prime }\right) \mathrm{d}t^{\prime
}=\varepsilon _{cr}\left( t\right) \ast _{t}\sigma \left( x,t\right) ,
\label{int-sigma-eps-2} \\
\varepsilon \left( x,t\right) =\varepsilon _{cr}^{\left( g\right) }\sigma
\left( x,t\right) +\dot{\varepsilon}_{cr}\left( t\right) \ast _{t}\sigma
\left( x,t\right) , \label{eps-konv} \\
\varepsilon \left( x,t\right) =\varepsilon _{cr}\left( t\right) \ast
_{t}\partial _{t}\sigma \left( x,t\right) , \label{eps-konv-1}
\end{gather}
obtained by the Laplace transform inversion in (\ref{CEs-LT-1})$_{2},$ with
$\dot{f}\left( t\right) =\frac{\mathrm{d}}{\mathrm{d}t}f\left( t\right) $ and
by using $\frac{\mathrm{d}}{\mathrm{d}t}\left( f\left( t\right) \ast
_{t}g\left( t\right) \right) =f\left( 0\right) g\left( t\right) +\dot{f}\left( t\right) \ast _{t}g\left( t\right) ,$ along with the initial
conditions on stress and strain (\ref{ic-sigma-eps}).
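Note that (\ref{sr-cr}) implies $\tilde{\sigma}_{sr}\left( s\right) \tilde{\varepsilon}_{cr}\left( s\right) =\frac{1}{s^{2}},$ i.e., $\sigma
_{sr}\left( t\right) \ast _{t}\varepsilon _{cr}\left( t\right) =t,$ since
$\mathcal{L}\left[ t\right] \left( s\right) =\frac{1}{s^{2}}.$ A minimal
symbolic sketch, using the fractional Zener symbols purely as an example (the
cancellation is model independent):
\begin{verbatim}
# Hedged sketch: the interrelation (sr-cr) gives sigma_sr~ * eps_cr~ = 1/s^2,
# hence sigma_sr(t) convolved with eps_cr(t) equals t (since L[t](s) = 1/s^2).
import sympy as sp

s = sp.symbols('s', positive=True)
E, a, b, alpha = sp.symbols('E a b alpha', positive=True)

Phi_sigma = 1 + a * s**alpha              # fractional Zener symbols, as in (FZM)
Phi_eps   = E * (1 + b * s**alpha)

sigma_sr_L = Phi_eps / (s * Phi_sigma)    # Laplace transform of relaxation modulus
eps_cr_L   = Phi_sigma / (s * Phi_eps)    # Laplace transform of creep compliance

print(sp.simplify(sigma_sr_L * eps_cr_L - 1 / s**2))   # prints 0
# The cancellation holds for any admissible pair (Phi_sigma, Phi_eps) and
# underlies the six equivalent constitutive forms above.
\end{verbatim}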
Therefore, the equivalent forms of the hereditary fractional wave equation
expressed in terms of relaxation modulus,
\begin{gather}
\rho \,\partial _{t}u\left( x,t\right) =\rho \,v_{0}\left( x\right) +\sigma
_{sr}\left( t\right) \ast _{t}\partial _{xx}u\left( x,t\right) , \notag \\
\rho \,\partial _{tt}u\left( x,t\right) =\sigma _{sr}^{\left( g\right)
}\,\partial _{xx}u\left( x,t\right) +\dot{\sigma}_{sr}\left( t\right) \ast
_{t}\partial _{xx}u\left( x,t\right) , \label{FWE-sigma-g} \\
\rho \,\partial _{tt}u\left( x,t\right) =\sigma _{sr}\left( t\right) \ast
_{t}\partial _{txx}u\left( x,t\right) , \label{FWE-sigma}
\end{gather}
are respectively obtained by differentiation of (\ref{int-sigma-eps-1}),
(\ref{sigma-konv}), and (\ref{sigma-konv-1}) with respect to the spatial
coordinate and by the subsequent use of the equation of motion (\ref{eq-motion})
and strain (\ref{strejn}) in such obtained expressions, including the
initial condition (\ref{ic-u})$_{2},$ while the equivalent forms of the
hereditary fractional wave equation expressed in terms of creep compliance,
\begin{gather}
\rho \,\varepsilon _{cr}\left( t\right) \ast _{t}\partial _{tt}u\left(
x,t\right) =\int_{0}^{t}\partial _{xx}u\left( x,t^{\prime }\right) \mathrm{d}t^{\prime }, \notag \\
\rho \varepsilon _{cr}^{\left( g\right) }\,\partial _{tt}u\left( x,t\right)
+\rho \,\dot{\varepsilon}_{cr}\left( t\right) \ast _{t}\partial _{tt}u\left(
x,t\right) =\partial _{xx}u\left( x,t\right) , \label{FWE-epsilon-g} \\
\rho \,\varepsilon _{cr}\left( t\right) \ast _{t}\partial _{ttt}u\left(
x,t\right) =\partial _{xx}u\left( x,t\right) , \notag
\end{gather}
are respectively obtained by differentiation of (\ref{int-sigma-eps-2}),
(\ref{eps-konv}), and (\ref{eps-konv-1}) with respect to the spatial
coordinate and by the subsequent use of the equation of motion (\ref{eq-motion})
and strain (\ref{strejn}) in such obtained expressions.
\section{Relaxation modulus and creep compliance}
Starting from the distributed-order viscoelastic model (\ref{const-eq}),
having differentiation orders below one, the conditions for the
relaxation modulus to be completely monotonic and simultaneously the creep
compliance to be a Bernstein function are derived by means of the Laplace
transform method. It is shown that these conditions for relaxation modulus
and creep compliance in the cases of linear fractional models (\ref{gen-lin})
and the power-type distributed-order model (\ref{DOCE}) are equivalent to the
thermodynamical requirements, implying four thermodynamically acceptable
cases of linear fractional models (\ref{gen-lin}), listed in Appendix
\ref{LFMS}, and the power-type model (\ref{DOCE}), with $E>0$ and $0\leq a\leq
b.$ These properties of creep compliance and relaxation modulus are proved
to be of crucial importance in establishing dissipativity of the hereditary
fractional wave equation. Recall that a completely monotonic function is a
positive, monotonically decreasing, convex function, or more precisely a
function $f$ satisfying $\left( -1\right) ^{n}f^{\left( n\right) }\left(
t\right) \geq 0,$ $n\in \mathbb{N}_{0},$ while a Bernstein function is a
non-negative, monotonically increasing, concave function, or more precisely a
non-negative function having a completely monotonic first derivative.
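As an elementary illustration of these definitions (textbook examples, not
tied to a specific viscoelastic model), $f\left( t\right) =\mathrm{e}^{-t}$ is
completely monotonic and $g\left( t\right) =1-\mathrm{e}^{-t}$ is a Bernstein
function, which may be checked symbolically:
\begin{verbatim}
# Hedged sketch: sign checks behind the definitions of completely monotonic
# (CM) and Bernstein functions, on the textbook examples exp(-t) and 1-exp(-t).
import sympy as sp

t = sp.symbols('t', positive=True)
f = sp.exp(-t)        # candidate CM function
g = 1 - sp.exp(-t)    # candidate Bernstein function

for n in range(5):
    print(n, sp.simplify((-1)**n * sp.diff(f, t, n)))  # exp(-t) >= 0 each time

# g is Bernstein: g(0+) = 0 >= 0 and its first derivative is CM.
print("g' =", sp.diff(g, t))   # exp(-t), completely monotonic
\end{verbatim}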
The responses in creep and stress relaxation tests of thermodynamically
consistent fractional Burgers models (\ref{UCE-1-5}) and (\ref{UCE-6-8}) are
examined in \cite{OZ-2}, where it is found that the requirements for
relaxation modulus to be completely monotonic and creep compliance to be
Bernstein function are more restrictive than the thermodynamical
requirements. Conditions guaranteeing the thermodynamical consistency of
fractional Burgers models and narrower conditions guaranteeing monotonicity
properties of relaxation modulus and creep compliance are given in Appendix
\ref{FBMS}.
The relaxation modulus, corresponding to the distributed-order viscoelastic
model (\ref{const-eq}), takes the form
\begin{eqnarray}
\sigma _{sr}\left( t\right) &=&\sigma _{sr}^{\left( e\right) }+\frac{1}{\pi }\int_{0}^{\infty }\frac{K\left( \rho \right) }{\left\vert \Phi _{\sigma
}\left( \rho \mathrm{e}^{\mathrm{i}\pi }\right) \right\vert ^{2}}\frac{\mathrm{e}^{-\rho t}}{\rho }\mathrm{d}\rho ,\;\;\text{with}
\label{sigma-sr-eq} \\
\sigma _{sr}^{\left( e\right) } &=&\lim_{t\rightarrow \infty }\sigma
_{sr}\left( t\right) =\lim_{s\rightarrow 0}\left( s\tilde{\sigma}_{sr}\left(
s\right) \right) =\lim_{s\rightarrow 0}\frac{\Phi _{\varepsilon }(s)}{\Phi
_{\sigma }(s)},\;\;\text{and} \label{sigma-sr-e} \\
K\left( \rho \right) &=&\func{Re}\Phi _{\sigma }\left( \rho \mathrm{e}^{\mathrm{i}\pi }\right) \func{Im}\Phi _{\varepsilon }\left( \rho \mathrm{e}^{\mathrm{i}\pi }\right) -\func{Im}\Phi _{\sigma }\left( \rho \mathrm{e}^{\mathrm{i}\pi }\right) \func{Re}\Phi _{\varepsilon }\left( \rho \mathrm{e}^{\mathrm{i}\pi }\right) , \label{K}
\end{eqnarray}
where functions $\Phi _{\sigma }$ and $\Phi _{\varepsilon }$ are defined by
(\ref{fiovi}), while the creep compliance may be represented either by
\begin{eqnarray}
\varepsilon _{cr}\left( t\right) &=&\varepsilon _{cr}^{\left( e\right) }-\frac{1}{\pi }\int_{0}^{\infty }\frac{K\left( \rho \right) }{\left\vert \Phi
_{\varepsilon }\left( \rho \mathrm{e}^{\mathrm{i}\pi }\right) \right\vert
^{2}}\frac{\mathrm{e}^{-\rho t}}{\rho }\mathrm{d}\rho ,\;\;\text{with}
\label{eps-cr-eq} \\
\varepsilon _{cr}^{\left( e\right) } &=&\lim_{t\rightarrow \infty
}\varepsilon _{cr}\left( t\right) =\lim_{s\rightarrow 0}\left( s\tilde{\varepsilon}_{cr}\left( s\right) \right) =\lim_{s\rightarrow 0}\frac{\Phi
_{\sigma }(s)}{\Phi _{\varepsilon }(s)}, \label{eps-cr-e}
\end{eqnarray}
for solid-like materials, or by
\begin{equation}
\varepsilon _{cr}\left( t\right) =\frac{1}{\pi }\int_{0}^{\infty }\frac{K\left( \rho \right) }{\left\vert \Phi _{\varepsilon }\left( \rho \mathrm{e}^{\mathrm{i}\pi }\right) \right\vert ^{2}}\frac{1-\mathrm{e}^{-\rho t}}{\rho
}\mathrm{d}\rho , \label{eps-cr}
\end{equation}
for fluid-like materials, where the function $K$ is given by (\ref{K}). The
calculation of relaxation modulus (\ref{sigma-sr-eq}) and creep compliances
(\ref{eps-cr-eq}) and (\ref{eps-cr}) is performed in Appendix \ref{sr-cr-calc}.
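For a fractional Zener model with illustrative data $E=a=1,$ $b=2,$ $\alpha
=0.5$ (assumed here only for the purpose of the sketch), the representation
(\ref{sigma-sr-eq}) can be evaluated directly, with $K$ computed from its
definition (\ref{K}) by complex arithmetic at $s=\rho \mathrm{e}^{\mathrm{i}\pi };$ up to quadrature tolerance, the values decrease monotonically from
$\sigma _{sr}^{\left( g\right) }=Eb/a$ towards $\sigma _{sr}^{\left( e\right)
}=E.$
\begin{verbatim}
# Hedged sketch: evaluating the integral representation (sigma-sr-eq) for a
# fractional Zener model with illustrative data; K is computed from its
# definition (K) using complex arithmetic at s = rho*exp(i*pi).
import numpy as np
from scipy.integrate import quad

E, a, b, alpha = 1.0, 1.0, 2.0, 0.5

def Phi_sigma(s): return 1.0 + a * s**alpha
def Phi_eps(s):   return E * (1.0 + b * s**alpha)

def K(rho):
    s = rho * np.exp(1j * np.pi)
    return (Phi_sigma(s).conjugate() * Phi_eps(s)).imag  # Re*Im - Im*Re

def sigma_sr(t, sigma_e=E):
    integrand = lambda rho: K(rho) * np.exp(-rho * t) / (
        abs(Phi_sigma(rho * np.exp(1j * np.pi)))**2 * rho)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return sigma_e + val / np.pi

for t in [1e-4, 0.1, 1.0, 10.0, 1e4]:
    print(f"t={t:>8}: sigma_sr={sigma_sr(t):.4f}")
# Expected: values decrease monotonically from about E*b/a = 2 (glass modulus)
# towards the equilibrium modulus E = 1, as complete monotonicity requires.
\end{verbatim}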
The equilibrium modulus $\sigma _{sr}^{\left( e\right) }$ has either zero or
finite non-zero value, as seen from Table \ref{tbl}, hence the relaxation
modulus (\ref{sigma-sr-eq}) has the same form regardless of the material
type, while the equilibrium compliance $\varepsilon _{cr}^{\left( e\right) }$
has either a finite value for solid-like materials (power-type
distributed-order constitutive equation (\ref{DOCE}) and Cases I
(\ref{Case 1}) and II (\ref{Case 2})), or an infinite value for fluid-like
materials (Cases III (\ref{Case 3}) and IV (\ref{Case 4})), as summarized in
Table \ref{tbl}, implying the need for expressing the creep compliance either
in the form (\ref{eps-cr-eq}), or in the form (\ref{eps-cr}).
The function $K,$ calculated by (\ref{K}), for linear fractional models
(\ref{gen-lin}) and the power-type distributed-order model (\ref{DOCE}) takes the
respective forms
\begin{eqnarray}
K\left( \rho \right) &=&-\sum_{i=1}^{n}\sum_{j=1}^{m}a_{i}b_{j}\,\rho
^{\alpha _{i}+\beta _{j}}\sin \frac{\left( \alpha _{i}-\beta _{j}\right) \pi
}{2}\;\;\text{and} \label{K-gen-lin} \\
K\left( \rho \right) &=&E\pi \frac{a\rho +1}{\left\vert \ln \left( a\rho
\right) +\mathrm{i}\pi \right\vert ^{2}}\frac{b\rho +1}{\left\vert \ln
\left( b\rho \right) +\mathrm{i}\pi \right\vert ^{2}}\ln \frac{b}{a},
\label{K-PTDO}
\end{eqnarray}
obtained by the substitution $s=\rho \mathrm{e}^{\mathrm{i}\pi }$ in
(\ref{fiovi-lin}). By requiring non-negativity of the function $K,$ the
conditions on model parameters guaranteeing that the relaxation modulus
(\ref{sigma-sr-eq}) is completely monotonic and that the creep compliances
(\ref{eps-cr-eq}) and (\ref{eps-cr}) are Bernstein functions are derived,
since the non-negativity of $K$ implies
\begin{equation*}
\sigma _{sr}\left( t\right) \geq 0\;\;\text{and}\;\;\left( -1\right) ^{k}\frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}\sigma _{sr}\left( t\right) =\frac{1}{\pi }\int_{0}^{\infty }\frac{K\left( \rho \right) }{\left\vert \Phi _{\sigma
}\left( \rho \mathrm{e}^{\mathrm{i}\pi }\right) \right\vert ^{2}}\rho ^{k-1}\mathrm{e}^{-\rho t}\mathrm{d}\rho \geq 0,\;\;k\in \mathbb{N},\;t>0,
\end{equation*}
for the relaxation modulus (\ref{sigma-sr-eq}) and
\begin{equation*}
\varepsilon _{cr}\left( t\right) \geq 0\;\;\text{and}\;\;\left( -1\right)
^{k}\frac{\mathrm{d}^{k}}{\mathrm{d}t^{k}}\dot{\varepsilon}_{cr}\left(
t\right) =\frac{1}{\pi }\int_{0}^{\infty }\frac{K\left( \rho \right) }{\left\vert \Phi _{\varepsilon }\left( \rho \mathrm{e}^{\mathrm{i}\pi
}\right) \right\vert ^{2}}\rho ^{k}\mathrm{e}^{-\rho t}\mathrm{d}\rho \geq
0,\;\;k\in \mathbb{N}_{0},\;t>0,
\end{equation*}
with $\dot{\varepsilon}_{cr}\left( t\right) =\frac{\mathrm{d}}{\mathrm{d}t}\varepsilon _{cr}\left( t\right) ,$ for the creep compliances
(\ref{eps-cr-eq}) and (\ref{eps-cr}). Note that $\dot{\varepsilon}_{cr}\left(
t\right) \geq 0$ in the case of (\ref{eps-cr-eq}) implies that the creep
compliance $\varepsilon _{cr}\left( t\right) $ monotonically increases from
$\varepsilon _{cr}^{\left( g\right) }=\lim_{s\rightarrow \infty }\frac{\Phi
_{\sigma }(s)}{\Phi _{\varepsilon }(s)}$ to $\varepsilon _{cr}^{\left(
e\right) }=\lim_{s\rightarrow 0}\frac{\Phi _{\sigma }(s)}{\Phi _{\varepsilon
}(s)}$ for $t>0,$ thus being a non-negative function, since $\Phi _{\sigma }$
and $\Phi _{\varepsilon }$ are non-negative functions.
By requiring non-negativity of the function $K,$ given by (\ref{K-gen-lin}), one
reobtains all four cases of the linear fractional model (\ref{gen-lin}), listed
in Appendix \ref{LFMS} along with the explicit forms of the corresponding
function $K,$ since by (\ref{K-gen-lin}) the function $K$ is, up to
multiplication by a positive function, exactly the loss modulus, see
\cite[Eq. (2.9)]{AKOZ}, whose non-negativity requirement for all (positive)
frequencies yields the four thermodynamically consistent classes of linear
fractional models (\ref{gen-lin}). In the case of the function $K$ given by
(\ref{K-PTDO}), the thermodynamical requirements $E>0$ and $0\leq a\leq b$
guarantee non-negativity of the function $K.$
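A quick numerical scan (with $E=1$ and several illustrative pairs $\left(
a,b\right) $) of the function $K$ computed from definition (\ref{K}) with the
power-type symbols of (\ref{fiovi-lin}) confirms this: $K\geq 0$ precisely for
$0\leq a\leq b.$
\begin{verbatim}
# Hedged sketch: non-negativity of K for the power-type model (DOCE), computed
# from definition (K) with Phi's of (fiovi-lin), at s = rho*exp(i*pi).
import numpy as np

E = 1.0
s_of = lambda rho: rho * np.exp(1j * np.pi)

def K(rho, a, b):
    s = s_of(rho)
    Ps = (a * s - 1.0) / np.log(a * s)        # Phi_sigma, principal log branch
    Pe = E * (b * s - 1.0) / np.log(b * s)    # Phi_eps
    return (np.conj(Ps) * Pe).imag            # Re*Im - Im*Re

rho = np.logspace(-6, 6, 2001)
for a, b in [(0.5, 2.0), (1.0, 1.0), (2.0, 0.5)]:
    kmin = K(rho, a, b).min()
    print(f"a={a}, b={b}: min K = {kmin:+.3e}",
          "(>= 0)" if kmin >= -1e-12 else "(negative: excluded)")
# For a <= b the minimum is non-negative (identically zero for a = b), while
# a > b produces negative values, matching the restriction 0 <= a <= b.
\end{verbatim}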
The relaxation modulus (\ref{sigma-sr-eq}) and creep compliances
(\ref{eps-cr-eq}) and (\ref{eps-cr}) are obtained in Appendix \ref{sr-cr-calc}
under the following assumptions.
\begin{itemize}
\item[$\left( A1\right) $] Functions $\Phi _{\sigma }$ and $\Phi
_{\varepsilon },$ given by (\ref{fiovi}), except for $s=0,$ have no other
branching points and also $\Phi _{\sigma }(s)\neq 0$ and $\Phi _{\varepsilon
}(s)\neq 0$ for $s\in \mathbb{C},$ implying the nonexistence of poles of the
functions $\frac{\Phi _{\sigma }(s)}{\Phi _{\varepsilon }(s)}$ and
$\frac{\Phi _{\varepsilon }(s)}{\Phi _{\sigma }(s)}$ in the complex plane.

\item[$\left( A2\right) $] In order to obtain the relaxation modulus
(\ref{sigma-sr-eq}), functions $\Phi _{\sigma }$ and $\Phi _{\varepsilon }$
(\ref{fiovi}) should satisfy
\begin{equation*}
\frac{1}{R}\left\vert \frac{\Phi _{\varepsilon }\left( R\mathrm{e}^{\mathrm{i}\frac{\pi }{2}}\right) }{\Phi _{\sigma }(R\mathrm{e}^{\mathrm{i}\frac{\pi }{2}})}\right\vert \rightarrow 0\;\;\text{and therefore}\;\;\left\vert \frac{\Phi _{\varepsilon }(R\mathrm{e}^{\mathrm{i}\varphi })}{\Phi _{\sigma }(R\mathrm{e}^{\mathrm{i}\varphi })}\right\vert \mathrm{e}^{Rt\cos \varphi
}\rightarrow 0,\;\;\text{as}\;\;R\rightarrow \infty ,
\end{equation*}
for $\varphi \in \left( -\pi ,-\frac{\pi }{2}\right) \cup \left( \frac{\pi }{2},\pi \right) .$

\item[$\left( A3\right) $] In order to obtain the creep compliance
(\ref{eps-cr-eq}), functions $\Phi _{\sigma }$ and $\Phi _{\varepsilon }$
(\ref{fiovi}) should satisfy
\begin{equation*}
\frac{1}{R}\left\vert \frac{\Phi _{\sigma }\left( R\mathrm{e}^{\mathrm{i}\frac{\pi }{2}}\right) }{\Phi _{\varepsilon }(R\mathrm{e}^{\mathrm{i}\frac{\pi }{2}})}\right\vert \rightarrow 0\;\;\text{and therefore}\;\;\left\vert
\frac{\Phi _{\sigma }(R\mathrm{e}^{\mathrm{i}\varphi })}{\Phi _{\varepsilon
}(R\mathrm{e}^{\mathrm{i}\varphi })}\right\vert \mathrm{e}^{Rt\cos \varphi
}\rightarrow 0,\;\;\text{as}\;\;R\rightarrow \infty ,
\end{equation*}
for $\varphi \in \left( -\pi ,-\frac{\pi }{2}\right) \cup \left( \frac{\pi }{2},\pi \right) .$

\item[$\left( A4\right) $] In order to obtain the creep compliance
(\ref{eps-cr}), functions $\Phi _{\sigma }$ and $\Phi _{\varepsilon }$
(\ref{fiovi}) should satisfy
\begin{equation*}
\left\vert \frac{\Phi _{\sigma }\left( R\mathrm{e}^{\mathrm{i}\frac{\pi }{2}}\right) }{\Phi _{\varepsilon }(R\mathrm{e}^{\mathrm{i}\frac{\pi }{2}})}\right\vert \rightarrow 0\;\;\text{and therefore}\;\;R\left\vert \frac{\Phi
_{\sigma }(R\mathrm{e}^{\mathrm{i}\varphi })}{\Phi _{\varepsilon }(R\mathrm{e}^{\mathrm{i}\varphi })}\right\vert \mathrm{e}^{Rt\cos \varphi }\rightarrow
0,\;\;\text{or}\;\;p_{0}=0,\;\;\text{as}\;\;R\rightarrow \infty ,
\end{equation*}
for $\varphi \in \left( -\pi ,-\frac{\pi }{2}\right) \cup \left( \frac{\pi }{2},\pi \right) ,$ as well as
\begin{equation*}
r\left\vert \frac{\Phi _{\sigma }(r\mathrm{e}^{\mathrm{i}\varphi })}{\Phi
_{\varepsilon }(r\mathrm{e}^{\mathrm{i}\varphi })}\right\vert \rightarrow
0,\;\;\text{as}\;\;r\rightarrow 0,
\end{equation*}
for $\varphi \in \left( -\pi ,\pi \right) .$
\end{itemize}
Assumption $\left( A1\right) $ is satisfied for linear fractional models
(\ref{gen-lin}) as well as for the power-type model (\ref{DOCE}), due to the
fractional differentiation orders belonging to the interval between zero and
one. For thermodynamically acceptable cases of linear fractional models
(\ref{gen-lin}), listed in Appendix \ref{LFMS}, and for the power-type model
(\ref{DOCE}) assumption $\left( A2\right) $ is satisfied, since either
$\left\vert \frac{\Phi _{\varepsilon }(R\mathrm{e}^{\mathrm{i}\varphi })}{\Phi _{\sigma }(R\mathrm{e}^{\mathrm{i}\varphi })}\right\vert \sim C$ or
$\left\vert \frac{\Phi _{\varepsilon }(R\mathrm{e}^{\mathrm{i}\varphi })}{\Phi _{\sigma }(R\mathrm{e}^{\mathrm{i}\varphi })}\right\vert \sim \frac{C}{R^{\delta }},$ as $R\rightarrow \infty ,$ with $C$ being a constant and
$\delta \in \left( 0,1\right) ,$ see Table \ref{tbl-1}. As already
anticipated, constitutive equations corresponding to the solid-like
materials (power-type distributed-order constitutive equation (\ref{DOCE})
and Cases I (\ref{Case 1}) and II (\ref{Case 2})) satisfy assumption $\left(
A3\right) ,$ while constitutive equations corresponding to the fluid-like
materials (Cases III (\ref{Case 3}) and IV (\ref{Case 4})) satisfy
assumption $\left( A4\right) ,$ see Table \ref{tbl-1}. \input{tbl-1.tex}
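These asymptotics are easy to inspect numerically; the sketch below (with
illustrative fractional Zener data) evaluates $\left\vert \Phi _{\varepsilon
}(R\mathrm{e}^{\mathrm{i}\varphi })/\Phi _{\sigma }(R\mathrm{e}^{\mathrm{i}\varphi })\right\vert $ along a ray in the left half-plane and shows
saturation at the constant $C=Eb/a,$ so that the expression in assumption
$\left( A2\right) $ indeed tends to zero.
\begin{verbatim}
# Hedged sketch: checking assumption (A2) numerically for the fractional Zener
# model: |Phi_eps/Phi_sigma| ~ E*b/a = const as R -> infinity, hence
# (1/R)*|Phi_eps/Phi_sigma| -> 0. Parameters are illustrative.
import numpy as np

E, a, b, alpha = 1.0, 1.0, 2.0, 0.5
phi = 3 * np.pi / 4                      # a ray in (pi/2, pi)

for R in [1e2, 1e4, 1e6, 1e8]:
    s = R * np.exp(1j * phi)
    ratio = abs(E * (1 + b * s**alpha) / (1 + a * s**alpha))
    print(f"R={R:.0e}: |Phi_eps/Phi_sigma|={ratio:.6f}, ratio/R={ratio/R:.2e}")
# The ratio saturates at E*b/a = 2 while ratio/R decays like 1/R, so the arcs
# at infinity in the inversion contour give no contribution.
\end{verbatim}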
\section{Energy dissipation for hereditary materials}
A priori energy estimates stating that the kinetic energy at an arbitrary
time-instant is less than the initial kinetic energy are derived in order to
show the dissipativity of the hereditary fractional wave equations. The
material properties at the initial time-instant, distinguishing materials with
finite and infinite wave propagation speed, prove to have a decisive role in
choosing the form of the fractional wave equation and the form of the energy
estimates as well. In proving dissipativity properties of the hereditary
fractional wave equations, the key point is that the relaxation modulus is a
completely monotonic function. Similarly, the energy estimate involving the
creep compliance is based on the fact that the creep compliance is a Bernstein
function.
\subsection{Materials having finite glass modulus}
The energy estimate for the fractional wave equation expressed in terms of
relaxation modulus (\ref{FWE-sigma-g}) corresponds to materials that have a
finite glass modulus and thus a finite wave speed as well, i.e., materials
described by the power-type distributed-order model (\ref{DOCE}), Case I
(\ref{Case 1}), and Case III (\ref{Case 3}) of the linear constitutive model
(\ref{gen-lin}), as well as materials described by the fractional Burgers
models VI - VIII (\ref{Model 6}), (\ref{Model 7}), (\ref{Model 8}).
Namely, by multiplying the fractional wave equation (\ref{FWE-sigma-g}) by
$\partial _{t}u$ and by subsequent integration with respect to the spatial
coordinate along the whole domain $\mathbb{R}$ and with respect to time over
the interval $\left[ 0,t\right] ,$ where $t>0$
is an arbitrary time-instant, one has
\begin{equation}
\frac{1}{2}\rho \left\Vert \partial _{t}u\left( \cdot ,t\right) \right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\sigma _{sr}^{\left( g\right) }\left\Vert \partial
_{x}u\left( \cdot ,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}=\frac{1}{2}\rho \left\Vert v_{0}\left( \cdot \right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\int_{0}^{t}\!\!\int_{\mathbb{R}}\left( \dot{\sigma}_{sr}\left( t^{\prime }\right) \ast _{t^{\prime }}\partial _{xx}u\left(
x,t^{\prime }\right) \right) \,\partial _{t^{\prime }}u\left( x,t^{\prime
}\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime }, \label{ee-sigma-g-skoro}
\end{equation}
where the change of kinetic energy (per unit square) of a viscoelastic
(infinite) body is obtained as
\begin{eqnarray}
\rho \int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{t^{\prime }t^{\prime
}}u\left( x,t^{\prime }\right) \,\partial _{t^{\prime }}u\left( x,t^{\prime
}\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } &=&\frac{1}{2}\rho
\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{t^{\prime }}\left( \partial
_{t^{\prime }}u\left( x,t^{\prime }\right) \right) ^{2}\,\mathrm{d}x\,
\mathrm{d}t^{\prime }=\frac{1}{2}\rho \int_{0}^{t}\partial _{t^{\prime
}}\left\Vert \partial _{t^{\prime }}u\left( \cdot ,t^{\prime }\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\,\mathrm{d}t^{\prime } \notag \\
&=&\frac{1}{2}\rho \left\Vert \partial _{t}u\left( \cdot ,t\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}-\frac{1}{2}\rho \left\Vert v_{0}\left( \cdot \right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}, \label{kin-en}
\end{eqnarray}
using the initial condition (\ref{ic-u})$_{2},$ while the potential energy
(per unit square) of a viscoelastic (infinite) body follows from
\begin{eqnarray}
\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{xx}u\left( x,t^{\prime }\right)
\,\partial _{t^{\prime }}u\left( x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } &=&\int_{0}^{t}\left( \left[ \partial _{x}u\left( x,t^{\prime
}\right) \,\partial _{t^{\prime }}u\left( x,t^{\prime }\right) \right]
_{x\rightarrow -\infty }^{x\rightarrow \infty }-\int_{\mathbb{R}}\partial
_{x}u\left( x,t^{\prime }\right) \,\partial _{xt^{\prime }}u\left(
x,t^{\prime }\right) \,\mathrm{d}x\right) \,\mathrm{d}t^{\prime } \notag \\
&=&-\frac{1}{2}\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{t^{\prime
}}\left( \partial _{x}u\left( x,t^{\prime }\right) \right) ^{2}\,\mathrm{d}x\,\mathrm{d}t^{\prime }=-\frac{1}{2}\int_{0}^{t}\partial _{t^{\prime
}}\left\Vert \partial _{x}u\left( \cdot ,t^{\prime }\right) \right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}\,\mathrm{d}t^{\prime } \notag \\
&=&-\frac{1}{2}\left\Vert \partial _{x}u\left( \cdot ,t\right) \right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\left\Vert \varepsilon \left( x,0\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}=-\frac{1}{2}\left\Vert \partial _{x}u\left( \cdot ,t\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}, \label{pot-en}
\end{eqnarray}
using the initial condition (\ref{ic-sigma-eps})$_{2}$ and integration by
parts along with the boundary conditions (\ref{bc})$_{2}$ combined with the
constitutive equation (\ref{sigma-konv}) and strain (\ref{strejn}) yielding
$\lim_{x\rightarrow \pm \infty }\partial _{x}u\left( x,t\right) =0.$
The last term on the right-hand-side of (\ref{ee-sigma-g-skoro}) is
transformed as
\begin{eqnarray*}
&&\int_{0}^{t}\!\!\int_{\mathbb{R}}\left( \dot{\sigma}_{sr}\left( t^{\prime
}\right) \ast _{t^{\prime }}\partial _{xx}u\left( x,t^{\prime }\right)
\right) \,\partial _{t^{\prime }}u\left( x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad =\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{x}\left( \dot{\sigma}_{sr}\left( t^{\prime }\right) \ast _{t^{\prime }}\partial _{x}u\left(
x,t^{\prime }\right) \right) \,\partial _{t^{\prime }}u\left( x,t^{\prime
}\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad =\int_{0}^{t}\left( \left[ \left( \dot{\sigma}_{sr}\left( t^{\prime
}\right) \ast _{t^{\prime }}\partial _{x}u\left( x,t^{\prime }\right)
\right) \,\partial _{t^{\prime }}u\left( x,t^{\prime }\right) \right]
_{x\rightarrow -\infty }^{x\rightarrow \infty }-\int_{\mathbb{R}}\left( \dot{\sigma}_{sr}\left( t^{\prime }\right) \ast _{t^{\prime }}\partial
_{x}u\left( x,t^{\prime }\right) \right) \,\partial _{t^{\prime }x}u\left(
x,t^{\prime }\right) \,\mathrm{d}x\right) \,\mathrm{d}t^{\prime } \\
&&\qquad =\int_{0}^{t}\!\!\int_{\mathbb{R}}\left( \left( -\dot{\sigma}_{sr}\left( t^{\prime }\right) \right) \ast _{t^{\prime }}\partial
_{x}u\left( x,t^{\prime }\right) \right) \,\partial _{t^{\prime }}\left(
\partial _{x}u\left( x,t^{\prime }\right) \right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad =\int_{\mathbb{R}}\left( \left[ \left( \left( -\dot{\sigma}_{sr}\left( t^{\prime }\right) \right) \ast _{t^{\prime }}\partial
_{x}u\left( x,t^{\prime }\right) \right) \,\partial _{x}u\left( x,t^{\prime
}\right) \right] _{t^{\prime }=0}^{t^{\prime }=t}-\int_{0}^{t}\partial
_{t^{\prime }}\left( \left( -\dot{\sigma}_{sr}\left( t^{\prime }\right)
\right) \ast _{t^{\prime }}\partial _{x}u\left( x,t^{\prime }\right) \right)
\,\partial _{x}u\left( x,t^{\prime }\right) \,\mathrm{d}t^{\prime }\right) \,\mathrm{d}x \\
&&\qquad =\int_{\mathbb{R}}\left( \left( -\dot{\sigma}_{sr}\left( t\right)
\right) \ast _{t}\partial _{x}u\left( x,t\right) \right) \,\partial
_{x}u\left( x,t\right) \,\mathrm{d}x-\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{t^{\prime }}\left( \left( -\dot{\sigma}_{sr}\left( t^{\prime
}\right) \right) \ast _{t^{\prime }}\partial _{x}u\left( x,t^{\prime
}\right) \right) \,\partial _{x}u\left( x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime }
\end{eqnarray*}
after partial integration with respect to the spatial coordinate and time,
using the previously derived boundary condition $\lim_{x\rightarrow \pm \infty
}\partial _{x}u\left( x,t\right) =0,$ so that (\ref{ee-sigma-g-skoro}) reads
\begin{eqnarray}
&&\frac{1}{2}\rho \left\Vert \partial _{t}u\left( \cdot ,t\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\sigma _{sr}^{\left( g\right) }\left\Vert \partial
_{x}u\left( \cdot ,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{t^{\prime }}\left(
\left( -\dot{\sigma}_{sr}\left( t^{\prime }\right) \right) \ast _{t^{\prime
}}\partial _{x}u\left( x,t^{\prime }\right) \right) \,\partial _{x}u\left(
x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \notag \\
&&\qquad \qquad \qquad =\frac{1}{2}\rho \left\Vert v_{0}\left( \cdot \right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\int_{\mathbb{R}}\left( \left( -\dot{\sigma}_{sr}\left(
t\right) \right) \ast _{t}\partial _{x}u\left( x,t\right) \right) \,\partial
_{x}u\left( x,t\right) \,\mathrm{d}x. \label{ee-sigma-g-skoro-1}
\end{eqnarray}
Using Lemma 1.7.2 in \cite{Siskova-phd}, see also \cite[Eq. (9)]{Zacher},
stating that
\begin{equation}
\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{t^{\prime }}\left( k\left(
t^{\prime }\right) \ast _{t^{\prime }}u\left( x,t^{\prime }\right) \right)
\,u\left( x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime }\geq
\frac{1}{2}k\left( t\right) \ast _{t}\left\Vert u\left( \cdot ,t\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\int_{0}^{t}k\left( t^{\prime }\right) \left\Vert
u\left( \cdot ,t^{\prime }\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\,\mathrm{d}t^{\prime }, \label{lema}
\end{equation}
provided that $k$ is a positive decreasing function for $t>0,$ the third
term on the left-hand-side of (\ref{ee-sigma-g-skoro-1}) is estimated by
\begin{eqnarray*}
&&\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{t^{\prime }}\left( \left( -\dot{\sigma}_{sr}\left( t^{\prime }\right) \right) \ast _{t^{\prime
}}\partial _{x}u\left( x,t^{\prime }\right) \right) \,\partial _{x}u\left(
x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad \qquad \geq \frac{1}{2}\left( -\dot{\sigma}_{sr}\left( t\right)
\right) \ast _{t}\left\Vert \partial _{x}u\left( \cdot ,t\right) \right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\int_{0}^{t}\left( -\dot{\sigma}_{sr}\left(
t^{\prime }\right) \right) \left\Vert \partial _{x}u\left( \cdot ,t^{\prime
}\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\,\mathrm{d}t^{\prime },
\end{eqnarray*}
since $-\dot{\sigma}_{sr}$ is completely monotonic and thus a positive,
decreasing function for $t>0,$ while the second term on the right-hand-side
of (\ref{ee-sigma-g-skoro-1}) is estimated by
\begin{eqnarray*}
&&\int_{\mathbb{R}}\left( \left( -\dot{\sigma}_{sr}\left( t\right) \right)
\ast _{t}\partial _{x}u\left( x,t\right) \right) \,\partial _{x}u\left(
x,t\right) \,\mathrm{d}x \\
&&\qquad \qquad =\int_{0}^{t}\left( -\dot{\sigma}_{sr}\left( t-t^{\prime
}\right) \right) \int_{\mathbb{R}}\partial _{x}u\left( x,t^{\prime }\right)
\,\partial _{x}u\left( x,t\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad \qquad \leq \int_{0}^{t}\left( -\dot{\sigma}_{sr}\left( t-t^{\prime
}\right) \right) \int_{\mathbb{R}}\left( \frac{\left( \partial _{x}u\left(
x,t^{\prime }\right) \right) ^{2}}{2}+\frac{\left( \partial _{x}u\left(
x,t\right) \right) ^{2}}{2}\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad \qquad \leq \frac{1}{2}\left( -\dot{\sigma}_{sr}\left( t\right)
\right) \ast _{t}\left\Vert \partial _{x}u\left( \cdot ,t\right) \right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\left( \sigma _{sr}^{\left( g\right) }-\sigma
_{sr}\left( t\right) \right) \left\Vert \partial _{x}u\left( \cdot ,t\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2},
\end{eqnarray*}
transforming (\ref{ee-sigma-g-skoro-1}) into
\begin{equation}
\frac{1}{2}\rho \left\Vert \partial _{t}u\left( \cdot ,t\right) \right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\sigma _{sr}\left( t\right) \left\Vert \partial
_{x}u\left( \cdot ,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\int_{0}^{t}\left( -\dot{\sigma}_{sr}\left(
t^{\prime }\right) \right) \left\Vert \partial _{x}u\left( \cdot ,t^{\prime
}\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\,\mathrm{d}t^{\prime }\leq \frac{1}{2}\rho \left\Vert
v_{0}\left( \cdot \right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}. \label{ee-sigma-g}
\end{equation}
The energy estimate (\ref{ee-sigma-g}) clearly indicates the dissipativity
of the fractional wave equation (\ref{FWE-sigma-g}), since the kinetic energy
at any time-instant $t>0$ is less than the kinetic energy at the initial
time-instant $t=0,$ due to the positive terms on the left-hand-side of the
energy estimate (\ref{ee-sigma-g}).
\subsection{Materials having infinite glass modulus}
The energy estimate for the fractional wave equation expressed in terms of
relaxation modulus (\ref{FWE-sigma}) corresponds to materials that have an
infinite glass modulus and thus infinite wave speed as well, i.e., materials
described by Case II (\ref{Case 2}) and Case IV (\ref{Case 4}) of the linear
constitutive model (\ref{gen-lin}), as well as materials described by the
fractional Burgers models I - V (\ref{Model 1}), (\ref{Model 2}),
(\ref{Model 3}), (\ref{Model 4}), (\ref{Model 5}).
Namely, by multiplying the fractional wave equation (\ref{FWE-sigma}) by
$\partial _{t}u$ and by subsequent integration with respect to the spatial
coordinate along the whole domain $\mathbb{R}$ and with respect to time over
the interval $\left[ 0,t\right] ,$ one has
\begin{equation}
\frac{1}{2}\rho \left\Vert \partial _{t}u\left( \cdot ,t\right) \right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}=\frac{1}{2}\rho \left\Vert v_{0}\left( \cdot \right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\int_{0}^{t}\!\!\int_{\mathbb{R}}\left( \sigma _{sr}\left(
t^{\prime }\right) \ast _{t^{\prime }}\partial _{t^{\prime }xx}u\left(
x,t^{\prime }\right) \right) \,\partial _{t^{\prime }}u\left( x,t^{\prime
}\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime }, \label{ee-sigma-skoro}
\end{equation}
where the change of kinetic energy is obtained according to (\ref{kin-en}).
The second term on the right-hand-side of (\ref{ee-sigma-skoro}) transforms
into
\begin{eqnarray*}
&&\int_{0}^{t}\!\!\int_{\mathbb{R}}\left( \sigma _{sr}\left( t^{\prime
}\right) \ast _{t^{\prime }}\partial _{t^{\prime }xx}u\left( x,t^{\prime
}\right) \right) \,\partial _{t^{\prime }}u\left( x,t^{\prime }\right) \,
\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad =\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{x}\left( \sigma
_{sr}\left( t^{\prime }\right) \ast _{t^{\prime }}\partial _{t^{\prime
}x}u\left( x,t^{\prime }\right) \right) \,\partial _{t^{\prime }}u\left(
x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad =\int_{0}^{t}\left( \left[ \left( \sigma _{sr}\left( t^{\prime
}\right) \ast _{t^{\prime }}\partial _{t^{\prime }x}u\left( x,t^{\prime
}\right) \right) \,\partial _{t^{\prime }}u\left( x,t^{\prime }\right)
\right] _{x\rightarrow -\infty }^{x\rightarrow \infty }-\int_{\mathbb{R}}\left( \sigma _{sr}\left( t^{\prime }\right) \ast _{t^{\prime }}\partial
_{t^{\prime }x}u\left( x,t^{\prime }\right) \right) \,\partial _{t^{\prime
}x}u\left( x,t^{\prime }\right) \,\mathrm{d}x\right) \,\mathrm{d}t^{\prime }
\\
&&\qquad =-\int_{0}^{t}\!\!\int_{\mathbb{R}}\left( \sigma _{sr}\left(
t^{\prime }\right) \ast _{t^{\prime }}\partial _{t^{\prime }x}u\left(
x,t^{\prime }\right) \right) \,\partial _{t^{\prime }x}u\left( x,t^{\prime
}\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime },
\end{eqnarray*}
after the partial integration with respect to spatial coordinate, using the
boundary condition (\ref{bc})$_{2}$ yielding $\lim_{x\rightarrow \pm \infty
}\sigma _{sr}\left( t\right) \ast _{t}\partial _{tx}u\left( x,t\right) =0,$
obtained by combining the constitutive equation (\ref{sigma-konv-1}) and
strain (\ref{strejn}), so that (\ref{ee-sigma-skoro}) reads
\begin{equation}
\frac{1}{2}\rho \left\Vert \partial _{t}u\left( \cdot ,t\right) \right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}+\int_{0}^{t}\!\!\int_{\mathbb{R}}\left( \sigma _{sr}\left(
t^{\prime }\right) \ast _{t^{\prime }}\partial _{t^{\prime }x}u\left(
x,t^{\prime }\right) \right) \,\partial _{t^{\prime }x}u\left( x,t^{\prime
}\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime }=\frac{1}{2}\rho \left\Vert
v_{0}\left( \cdot \right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}. \label{ee-sigma}
\end{equation}
The energy estimate (\ref{ee-sigma}) clearly indicates the dissipativity of the
fractional wave equation (\ref{FWE-sigma}), since the kinetic energy at any
time-instant $t>0$ is less than the kinetic energy at the initial time-instant
$t=0,$ due to the positivity of the second term on the left-hand-side of
(\ref{ee-sigma}), thanks to the relaxation modulus $\sigma _{sr}$ being
completely monotonic and consequently a kernel of positive type,
satisfying
\begin{equation*}
\int_{0}^{t}\!\!\int_{0}^{t^{\prime }}\sigma _{sr}\left( t^{\prime
}-t^{\prime \prime }\right) \,\partial _{t^{\prime \prime }x}u\left(
x,t^{\prime \prime }\right) \,\partial _{t^{\prime }x}u\left( x,t^{\prime
}\right) \,\mathrm{d}t^{\prime \prime }\,\mathrm{d}t^{\prime }\geq 0,
\end{equation*}
as also used in \cite{Saedpanah}.
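The positive-type property admits a transparent discrete check: sampling a
completely monotonic kernel on a time grid yields a positive semi-definite
symmetric matrix, so the discretized double convolution integral is a
non-negative quadratic form. A minimal sketch, with the illustrative
completely monotonic kernel $\mathrm{e}^{-t}$ standing in for $\sigma _{sr}:$
\begin{verbatim}
# Hedged sketch: a completely monotonic kernel is of positive type. On a grid,
# the double integral int_0^t int_0^t' sigma(t'-t'') v(t'') v(t') dt'' dt'
# equals (1/2) (w*v)^T M (w*v) with M_ij = sigma(|t_i - t_j|) and trapezoid-like
# weights w, a positive semi-definite form. exp(-t) is an illustrative kernel.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)
w = np.gradient(t)                             # quadrature weights
M = np.exp(-np.abs(t[:, None] - t[None, :]))   # sigma(|t_i - t_j|), CM kernel

print("min eigenvalue of kernel matrix:", np.linalg.eigvalsh(M).min())  # >= 0

for _ in range(3):
    v = rng.standard_normal(t.size)            # arbitrary sampled history
    Q = 0.5 * (w * v) @ M @ (w * v)            # discrete double integral
    print("quadratic form value:", Q)          # non-negative for every v
\end{verbatim}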
\subsection{Energy estimates using fractional wave equation (\protect\ref{FWE-epsilon-g})}
The energy estimate for the fractional wave equation expressed in terms of creep
compliance (\ref{FWE-epsilon-g}) corresponds to all materials described by
the power-type distributed-order model (\ref{DOCE}), as well as to all
materials described by the fractional Burgers models, since for all of these
models the glass compliance has a finite value, either zero or non-zero.
Multiplying the fractional wave equation (\ref{FWE-epsilon-g}) by $\partial
_{t}u$ and by subsequent integration with respect to the spatial coordinate
along the whole domain $\mathbb{R}$ and with respect to time over the
interval $\left[ 0,t\right] ,$ one has
\begin{eqnarray}
&&\frac{1}{2}\rho \,\varepsilon _{cr}^{\left( g\right) }\left\Vert \partial
_{t}u\left( \cdot ,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\left\Vert \partial _{x}u\left( \cdot ,t\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2} \notag \\
&&\qquad \qquad \qquad +\rho \int_{0}^{t}\!\!\int_{\mathbb{R}}\left( \dot{\varepsilon}_{cr}\left( t^{\prime }\right) \ast _{t^{\prime }}\partial
_{t^{\prime }t^{\prime }}u\left( x,t^{\prime }\right) \right) \,\partial
_{t^{\prime }}u\left( x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime }=\frac{1}{2}\rho \,\varepsilon _{cr}^{\left( g\right) }\left\Vert
v_{0}\left( \cdot \right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}, \label{ee-epsilon-g-skoro}
\end{eqnarray}
where the changes of kinetic and potential energy are obtained according to
(\ref{kin-en}) and (\ref{pot-en}), respectively. The last term on the
left-hand-side of (\ref{ee-epsilon-g-skoro}) is calculated as
\begin{eqnarray*}
&&\int_{0}^{t}\!\!\int_{\mathbb{R}}\left( \dot{\varepsilon}_{cr}\left(
t^{\prime }\right) \ast _{t^{\prime }}\partial _{t^{\prime }t^{\prime
}}u\left( x,t^{\prime }\right) \right) \,\partial _{t^{\prime }}u\left(
x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad =\int_{0}^{t}\!\!\int_{\mathbb{R}}\left( \partial _{t^{\prime
}}\left( \dot{\varepsilon}_{cr}\left( t^{\prime }\right) \ast _{t^{\prime
}}\partial _{t^{\prime }}u\left( x,t^{\prime }\right) \right) -v_{0}\left(
x\right) \dot{\varepsilon}_{cr}\left( t^{\prime }\right) \right) \,\partial
_{t^{\prime }}u\left( x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad =\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{t^{\prime }}\left(
\dot{\varepsilon}_{cr}\left( t^{\prime }\right) \ast _{t^{\prime }}\partial
_{t^{\prime }}u\left( x,t^{\prime }\right) \right) \,\partial _{t^{\prime
}}u\left( x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime
}-\int_{0}^{t}\!\!\int_{\mathbb{R}}v_{0}\left( x\right) \,\dot{\varepsilon}_{cr}\left( t^{\prime }\right) \,\partial _{t^{\prime }}u\left( x,t^{\prime
}\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime }
\end{eqnarray*}
using $f\left( t\right) \ast _{t}\dot{g}\left( t\right) =\frac{\mathrm{d}}{\mathrm{d}t}\left( f\left( t\right) \ast _{t}g\left( t\right) \right)
-f\left( t\right) g\left( 0\right) ,$ transforming (\ref{ee-epsilon-g-skoro})
into
\begin{eqnarray}
&&\frac{1}{2}\rho \,\varepsilon _{cr}^{\left( g\right) }\left\Vert \partial
_{t}u\left( \cdot ,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\left\Vert \partial _{x}u\left( \cdot ,t\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\rho \int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{t^{\prime
}}\left( \dot{\varepsilon}_{cr}\left( t^{\prime }\right) \ast _{t^{\prime
}}\partial _{t^{\prime }}u\left( x,t^{\prime }\right) \right) \,\partial
_{t^{\prime }}u\left( x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \notag \\
&&\qquad \qquad \qquad =\frac{1}{2}\rho \,\varepsilon _{cr}^{\left( g\right)
}\left\Vert v_{0}\left( \cdot \right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\rho \int_{0}^{t}\!\!\int_{\mathbb{R}}v_{0}\left( x\right) \,\dot{\varepsilon}_{cr}\left( t^{\prime }\right) \,\partial _{t^{\prime
}}u\left( x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime }.
\label{ee-epsilon-g-skoro-1}
\end{eqnarray}
The last term on the left-hand-side of (\ref{ee-epsilon-g-skoro-1}) is
estimated as
\begin{eqnarray*}
&&\int_{0}^{t}\!\!\int_{\mathbb{R}}\partial _{t^{\prime }}\left( \dot{\varepsilon}_{cr}\left( t^{\prime }\right) \ast _{t^{\prime }}\partial
_{t^{\prime }}u\left( x,t^{\prime }\right) \right) \,\partial _{t^{\prime
}}u\left( x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad \qquad \geq \frac{1}{2}\dot{\varepsilon}_{cr}\left( t\right) \ast
_{t}\left\Vert \partial _{t}u\left( \cdot ,t\right) \right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\int_{0}^{t}\dot{\varepsilon}_{cr}\left( t^{\prime
}\right) \left\Vert \partial _{t^{\prime }}u\left( \cdot ,t^{\prime }\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\,\mathrm{d}t^{\prime },
\end{eqnarray*}
according to (\ref{lema}), since $\dot{\varepsilon}_{cr}$ is completely
monotonic, while the second term on the right-hand-side of
(\ref{ee-epsilon-g-skoro-1}) is estimated by
\begin{eqnarray*}
&&\int_{0}^{t}\!\!\int_{\mathbb{R}}v_{0}\left( x\right) \,\dot{\varepsilon}_{cr}\left( t^{\prime }\right) \,\partial _{t^{\prime }}u\left( x,t^{\prime
}\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad \qquad =\int_{0}^{t}\dot{\varepsilon}_{cr}\left( t^{\prime }\right)
\int_{\mathbb{R}}v_{0}\left( x\right) \,\,\partial _{t^{\prime }}u\left(
x,t^{\prime }\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad \qquad \leq \int_{0}^{t}\dot{\varepsilon}_{cr}\left( t^{\prime
}\right) \int_{\mathbb{R}}\left( \frac{\left( v_{0}\left( x\right) \right)
^{2}}{2}+\frac{\left( \partial _{t^{\prime }}u\left( x,t^{\prime }\right)
\right) ^{2}}{2}\right) \,\mathrm{d}x\,\mathrm{d}t^{\prime } \\
&&\qquad \qquad \leq \frac{1}{2}\left( \varepsilon _{cr}\left( t\right)
-\varepsilon _{cr}^{\left( g\right) }\right) \left\Vert v_{0}\left( \cdot
\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\int_{0}^{t}\dot{\varepsilon}_{cr}\left( t^{\prime
}\right) \left\Vert \partial _{t^{\prime }}u\left( \cdot ,t^{\prime }\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\,\mathrm{d}t^{\prime },
\end{eqnarray*}
transforming (\ref{ee-epsilon-g-skoro-1}) into
\begin{equation*}
0\leq \frac{1}{2}\rho \,\varepsilon _{cr}^{\left( g\right) }\left\Vert
\partial _{t}u\left( \cdot ,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\rho \,\dot{\varepsilon}_{cr}\left( t\right) \ast
_{t}\left\Vert \partial _{t}u\left( \cdot ,t\right) \right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}\left\Vert \partial _{x}u\left( \cdot ,t\right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\leq \frac{1}{2}\rho \,\varepsilon _{cr}\left( t\right)
\left\Vert v_{0}\left( \cdot \right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2},
\end{equation*}
or, equivalently, to
\begin{equation}
0\leq \frac{1}{2}\rho \,\frac{1}{\varepsilon _{cr}\left( t\right) }\partial
_{t}\left( \varepsilon _{cr}\left( t\right) \ast _{t}\left\Vert \partial
_{t}u\left( \cdot ,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\right) +\frac{1}{2\varepsilon _{cr}\left( t\right) }\left\Vert
\partial _{x}u\left( \cdot ,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\leq \frac{1}{2}\rho \,\left\Vert v_{0}\left( \cdot \right)
\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}, \label{ee-epsilon-g}
\end{equation}
using $\dot{f}\left( t\right) \ast _{t}g\left( t\right) =\frac{\mathrm{d}}{\mathrm{d}t}\left( f\left( t\right) \ast _{t}g\left( t\right) \right)
-f\left( 0\right) g\left( t\right) .$
The energy estimate (\ref{ee-epsilon-g}) is not appropriate for showing
dissipativity of the fractional wave equation (\ref{FWE-epsilon-g}), since
one cannot identify the kinetic energy on the left-hand-side of
(\ref{ee-epsilon-g}), although it appears on the right-hand-side of
(\ref{ee-epsilon-g}).
\section{Energy conservation for non-local materials}
A priori energy estimates yield the conservation law for both of the
examined non-local fractional wave equations, stating that the sum of
kinetic energy and non-local potential energy does not change in time.
Non-local potential energy is proportional to the square of the fractional
strain, obtained by convolving the classical strain with the constitutive
model dependent non-locality kernel, i.e., the non-local potential energy at a
particular point depends on the square of the strain at all other points,
weighted by the non-locality kernel.
\subsection{Materials described by the non-local Hooke law}
Eliminating stress and strain from the equation of motion (\ref{eq-motion}),
the non-local Hooke law (\ref{nl-Huk}), and strain (\ref{strejn}), the non-local
Hooke-type wave equation is obtained in the form
\begin{equation}
\rho \,\partial _{tt}u(x,t)=\frac{E}{\ell ^{1-\alpha }}\,\frac{|x|^{-\alpha }}{2\Gamma (1-\alpha )}\ast _{x}\partial _{xx}u(x,t),\;\;\alpha \in \left(
0,1\right) , \label{AS}
\end{equation}
transforming into
\begin{equation}
\rho \,\partial _{tt}\hat{u}(\xi ,t)=-E\frac{\sin \frac{\alpha \pi }{2}}{\ell ^{1-\alpha }}|\xi |^{1+\alpha }\hat{u}(\xi ,t), \label{AS-ft}
\end{equation}
after application of the Fourier transform with respect to the spatial
coordinate
\begin{equation*}
\hat{f}\left( \xi \right) =\mathcal{F}\left[ f\left( x\right) \right] \left(
\xi \right) =\int_{\mathbb{R}}f\left( x\right) \mathrm{e}^{-\mathrm{i}\xi x}\,\mathrm{d}x,\;\;\xi \in \mathbb{R},
\end{equation*}
where $\mathcal{F}\left[ \frac{|x|^{-\alpha }}{2\Gamma (1-\alpha )}\right]
\left( \xi \right) =\frac{\sin \frac{\alpha \pi }{2}}{|\xi |^{1-\alpha }}$
is used along with other well-known properties of the Fourier transform.
Multiplying the non-local Hooke-type wave equation in the Fourier domain
(\ref{AS-ft}) with $\partial _{t}\hat{u}$ and by subsequent integration over the
whole domain $\mathbb{R},$ one obtains
\begin{equation}
\partial _{t}\left( \frac{1}{2}\rho \left\Vert \partial _{t}\hat{u}(\cdot
,t)\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}E\frac{\sin \frac{\alpha \pi }{2}}{\ell ^{1-\alpha }}\left\Vert |\xi |^{\frac{1+\alpha }{2}}\hat{u}(\xi ,t)\right\Vert
_{L^{2}\left(
\mathbb{R}
\right) }^{2}\right) =0, \label{ZO-Htajp}
\end{equation}
yielding the conservation law
\begin{gather}
\partial _{t}\left( \frac{1}{2}\rho \left\Vert \partial _{t}u(\cdot
,t)\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}E\frac{\sin \frac{\alpha \pi }{2}}{\ell ^{1-\alpha }}\left\Vert \left( -\Delta \right) ^{\frac{1+\alpha }{4}}u\left( \cdot
,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\right) =0,\;\;\text{i.e.,} \notag \\
\frac{1}{2}\rho \left\Vert \partial _{t}u(\cdot ,t)\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}E\frac{\sin \frac{\alpha \pi }{2}}{\ell ^{1-\alpha }}\left\Vert \left( -\Delta \right) ^{\frac{1+\alpha }{4}}u\left( \cdot
,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}=\mathrm{const.}, \label{ZO-AS}
\end{gather}
by the Parseval identity $\left\Vert f\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}=\left\Vert \hat{f}\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2},$ as well as by the Fourier transform of the fractional Laplacian
(in one dimension) $\mathcal{F}\left[ \left( -\Delta \right) ^{s}f\left(
x\right) \right] \left( \xi \right) =|\xi |^{2s}\hat{f}\left( \xi \right) ,$
with $s\in \left( 0,1\right) ,$ since $\frac{1+\alpha }{2}\in \left( \frac{1}{2},1\right) $. The fractional strain, being proportional to $\left(
-\Delta \right) ^{\frac{1+\alpha }{4}}u$ in (\ref{ZO-AS}), has a lower
differentiation order than the classical strain $\partial _{x}u,$ since
$\frac{1+\alpha }{4}\in \left( \frac{1}{4},\frac{1}{2}\right) $.
However, the conservation law (\ref{ZO-AS}) may also take another form
\begin{gather}
\partial _{t}\left( \frac{1}{2}\rho \left\Vert \partial _{t}u(\cdot
,t)\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}E\frac{\sin \frac{\alpha \pi }{2}}{2\ell ^{1-\alpha
}\Gamma \left( 1-\frac{1+\alpha }{2}\right) \cos \frac{\left( 1+\alpha
\right) \pi }{4}}\left\Vert \frac{\mathrm{sgn\,}x}{|x|^{\frac{1+\alpha }{2}}}\ast _{x}\partial _{x}u\left( x,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}\right) =0,\;\;\text{i.e.,} \notag \\
\frac{1}{2}\rho \left\Vert \partial _{t}u(\cdot ,t)\right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}+\frac{1}{2}E\frac{c_{\alpha }}{\ell ^{1-\alpha }}\left\Vert
\frac{\mathrm{sgn\,}x}{|x|^{\frac{1+\alpha }{2}}}\ast _{x}\partial
_{x}u\left( x,t\right) \right\Vert _{L^{2}\left(
\mathbb{R}
\right) }^{2}=\mathrm{const.}, \label{ZO-AS-1}
\end{gather}
where $c_{\alpha }=\frac{\sin \frac{\alpha \pi }{2}}{2\Gamma \left( 1-\frac{1+\alpha }{2}\right) \cos \frac{\left( 1+\alpha \right) \pi }{4}}$ is a
positive constant, if the term $|\xi |^{\frac{1+\alpha }{2}}\hat{u}(\xi ,t)$
in (\ref{ZO-Htajp}) is rewritten as
\begin{equation*}
|\xi |^{\frac{1+\alpha }{2}}\hat{u}(\xi ,t)=-\mathrm{i}\frac{\mathrm{sgn\,}\xi }{|\xi |^{1-\frac{1+\alpha }{2}}}\left( \mathrm{i}\xi \,\hat{u}(\xi ,t)\right) =\frac{1}{2\Gamma \left( 1-\frac{1+\alpha }{2}\right) \cos \frac{\left( 1+\alpha \right) \pi }{4}}\mathcal{F}\left[ \frac{\mathrm{sgn\,}x}{|x|^{\frac{1+\alpha }{2}}}\right] \left( \xi \right) \,\mathcal{F}\left[ \partial _{x}u\left( x,t\right) \right] \left( \xi \right) ,
\end{equation*}
where the Fourier transform $\mathcal{F}\left[ \frac{\mathrm{sgn\,}x}{|x|^{\beta }}\right] \left( \xi \right) =-2\mathrm{i}\Gamma \left( 1-\beta \right) \cos \frac{\beta \pi }{2}\frac{\mathrm{sgn\,}\xi }{|\xi |^{1-\beta }},$ with $\beta \in \left( 0,1\right) ,$ is used.
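Again for illustration only, the sketch below checks the sine form of this
transform numerically and confirms that $c_{\alpha }$ is positive on
$\alpha \in (0,1)$ (the values of $\beta$, $\xi$ and $\alpha$ are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Sine form of F[sgn(x)/|x|^b](xi), xi > 0:
# integral_0^inf x^(-b) sin(xi*x) dx = Gamma(1-b)*cos(b*pi/2)*xi^(b-1).
b, xi = 0.75, 3.0                # b plays the role of (1+alpha)/2, arbitrary xi
I1, _ = quad(lambda x: np.sin(xi * x), 0, 1, weight='alg', wvar=(-b, 0))
I2, _ = quad(lambda x: x**(-b), 1, np.inf, weight='sin', wvar=xi)
print(I1 + I2, gamma(1 - b) * np.cos(b * np.pi / 2) * xi**(b - 1))  # agree

# positivity of c_alpha on alpha in (0,1): Gamma((1-alpha)/2) > 0 and
# cos((1+alpha)*pi/4) > 0, since (1+alpha)*pi/4 < pi/2
for alpha in (0.1, 0.5, 0.9):
    c_alpha = np.sin(alpha * np.pi / 2) / (
        2 * gamma(1 - (1 + alpha) / 2) * np.cos((1 + alpha) * np.pi / 4))
    print(alpha, c_alpha)
\end{verbatim}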
The energy estimates (\ref{ZO-AS}) and (\ref{ZO-AS-1}) clearly indicate the
energy conservation property of the non-local Hooke-type wave equation (\ref{AS}), if the potential energy is reinterpreted to be proportional to the
square of fractional strain, expressed either in terms of the fractional
Laplacian, or in terms of the classical strain convoluted by the non-locality
kernel of power type.
\subsection{Materials described by the fractional Eringen model}
The fractional Eringen wave equation
\begin{eqnarray}
&&\rho \,\partial _{tt}u(x,t)=E\,H_{\alpha }\left( x\right) \ast _{x}\partial _{xx}u\left( x,t\right) ,\;\;\alpha \in \left( 1,3\right) ,\;\;\text{with} \label{EringenWE} \\
&&H_{\alpha }\left( x\right) =\frac{1}{\pi }\int_{0}^{\infty }\frac{\cos \left( \xi x\right) }{1+\left( \ell \xi \right) ^{\alpha }\left\vert \cos \frac{\alpha \pi }{2}\right\vert }\mathrm{d}\xi =\mathcal{F}^{-1}\left[ \frac{1}{1+\left( \ell \left\vert \xi \right\vert \right) ^{\alpha }\left\vert \cos \frac{\alpha \pi }{2}\right\vert }\right] \left( x\right) , \notag
\end{eqnarray}
is found as the inverse Fourier transform of
\begin{equation}
\rho \,\partial _{tt}\hat{u}(\xi ,t)=-E\frac{\xi ^{2}}{1+\left( \ell \left\vert \xi \right\vert \right) ^{\alpha }\left\vert \cos \frac{\alpha \pi }{2}\right\vert }\hat{u}(\xi ,t), \label{Eringen-ft}
\end{equation}
obtained by eliminating $\hat{\sigma}$ and $\hat{\varepsilon}$ from the
system of equations in the Fourier domain
\begin{gather*}
\mathrm{i}\xi \,\hat{\sigma}(\xi ,t)=\rho \,\partial _{tt}\hat{u}(\xi ,t),\;\;\hat{\varepsilon}\left( \xi ,t\right) =\mathrm{i}\xi \,\hat{u}(\xi ,t), \\
\left( 1+\left( \ell \left\vert \xi \right\vert \right) ^{\alpha }\left\vert \cos \frac{\alpha \pi }{2}\right\vert \right) \hat{\sigma}(\xi ,t)=E\,\hat{\varepsilon}\left( \xi ,t\right) ,
\end{gather*}
respectively consisting of the Fourier transforms of the equation of motion (\ref{eq-motion}), strain (\ref{strejn}), and fractional Eringen model (\ref{frac-Eringen}), where the Fourier transform of both (\ref{Dx-1}) and (\ref{Dx-2}), yielding $\mathcal{F}\left[ \mathrm{D}_{x}^{\alpha }f\left( x\right) \right] \left( \xi \right) =-\left\vert \xi \right\vert ^{\alpha }\left\vert \cos \frac{\alpha \pi }{2}\right\vert \hat{f}\left( \xi \right) ,$ is used.
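Equation (\ref{Eringen-ft}) implies the dispersion relation
$\omega (\xi )^{2}=(E/\rho )\,\xi ^{2}/(1+(\ell |\xi |)^{\alpha }|\cos \frac{\alpha \pi }{2}|)$;
for illustration only, a short Python sketch with arbitrarily chosen
material parameters evaluates it together with the corresponding phase
velocity:
\begin{verbatim}
import numpy as np

# Dispersion relation implied by (Eringen-ft):
# omega(xi)^2 = (E/rho) * xi^2 / (1 + (ell*|xi|)^alpha * |cos(alpha*pi/2)|).
rho, E, ell, alpha = 1.0, 1.0, 1.0, 1.5        # arbitrary values, alpha in (1,3)
xi = np.linspace(0.01, 50.0, 500)
omega = np.sqrt(E / rho) * xi / np.sqrt(
    1 + (ell * xi)**alpha * abs(np.cos(alpha * np.pi / 2)))
phase_velocity = omega / xi                     # decays like |xi|^(-alpha/2)
print(phase_velocity[0], phase_velocity[-1])    # the non-local model is dispersive
\end{verbatim}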
Multiplying the fractional Eringen wave equation in Fourier domain (\ref{Eringen-ft}) with $\partial _{t}\hat{u}$ and by subsequent integration over
the whole domain $\mathbb{R},$ one obtains
\begin{eqnarray}
&&\partial _{t}\left( \frac{1}{2}\rho \left\Vert \partial _{t}\hat{u}(\cdot ,t)\right\Vert _{L^{2}\left( \mathbb{R}\right) }^{2}+\frac{1}{2}E\left\Vert \hat{h}_{\alpha }\left( \xi \right) \left( \mathrm{i}\xi \,\hat{u}(\xi ,t)\right) \right\Vert _{L^{2}\left( \mathbb{R}\right) }^{2}\right) =0,\;\;\text{with} \label{Eringen-CL-ft} \\
&&\hat{h}_{\alpha }\left( \xi \right) =-\mathrm{i}\frac{\mathrm{sgn\,}\xi }{\sqrt{1+\left( \ell \left\vert \xi \right\vert \right) ^{\alpha }\left\vert \cos \frac{\alpha \pi }{2}\right\vert }}, \notag
\end{eqnarray}
so that the conservation law
\begin{gather}
\partial _{t}\left( \frac{1}{2}\rho \left\Vert \partial _{t}u(\cdot ,t)\right\Vert _{L^{2}\left( \mathbb{R}\right) }^{2}+\frac{1}{2}E\left\Vert h_{\alpha }\left( x\right) \ast _{x}\partial _{x}u\left( x,t\right) \right\Vert _{L^{2}\left( \mathbb{R}\right) }^{2}\right) =0,\;\;\text{i.e.,} \notag \\
\frac{1}{2}\rho \left\Vert \partial _{t}u(\cdot ,t)\right\Vert _{L^{2}\left( \mathbb{R}\right) }^{2}+\frac{1}{2}E\left\Vert h_{\alpha }\left( x\right) \ast _{x}\partial _{x}u\left( x,t\right) \right\Vert _{L^{2}\left( \mathbb{R}\right) }^{2}=\mathrm{const}. \label{ZO-Eringen}
\end{gather}
follows from (\ref{Eringen-CL-ft}) by the Parseval identity and the inverse
Fourier transform of $\hat{h}_{\alpha },$ given by
\begin{equation*}
h_{\alpha }\left( x\right) =\frac{1}{\pi }\int_{0}^{\infty }\frac{\sin
\left( \xi x\right) }{\sqrt{1+\left( \ell \xi \right) ^{\alpha }\left\vert
\cos \frac{\alpha \pi }{2}\right\vert }}\mathrm{d}\xi .
\end{equation*}
The energy estimate (\ref{ZO-Eringen}) clearly indicates the energy
conservation property of the fractional Eringen wave equation (\ref{EringenWE}), if the potential energy is again reinterpreted to be
proportional to the square of fractional strain, expressed in terms of the
classical strain convoluted by the non-locality kernel $h_{\alpha }.$
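The kernel $h_{\alpha }$ has no elementary closed form, but it can be
evaluated pointwise by oscillatory quadrature; a minimal sketch (arbitrary
parameters, illustration only) using SciPy's Fourier-integral routine:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Pointwise evaluation of the kernel h_alpha by oscillatory (QAWF) quadrature.
ell, alpha = 1.0, 1.5                          # arbitrary values, alpha in (1,3)
c = abs(np.cos(alpha * np.pi / 2))

def h(x):
    val, _ = quad(lambda q: 1.0 / np.sqrt(1.0 + (ell * q)**alpha * c),
                  0, np.inf, weight='sin', wvar=x)
    return val / np.pi

print([h(x) for x in (0.5, 1.0, 2.0)])         # kernel values at a few points
\end{verbatim}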
\section{Conclusion}
Energy dissipation and conservation properties of fractional wave equations,
respectively corresponding to hereditary and non-local materials, are
considered by employing the method of a priori energy estimates. More
precisely, in the case of hereditary fractional wave equations it is
obtained that the kinetic energy at arbitrary time-instant is less than the
initial kinetic energy, while in the case of non-local fractional wave
equations it is obtained that the sum of kinetic energy and non-local
potential energy does not change in time, with the non-local potential
energy being proportional to the square of fractional strain, obtained by
convoluting the classical strain with the constitutive model dependent
non-locality kernel.
Hereditary fractional models of viscoelastic material having differentiation
orders below the first order are represented by the distributed-order
viscoelastic model (\ref{const-eq}), more precisely by the linear fractional
model (\ref{gen-lin}) and power-type distributed-order model (\ref{DOCE}),
while thermodynamically consistent fractional Burgers models (\ref{UCE-1-5})
and (\ref{UCE-6-8}) represent constitutive models having differentiation
orders up to the second order. In order to formulate the hereditary wave
equation, in addition to the equation of motion (\ref{eq-motion}) and strain
(\ref{strejn}), hereditary constitutive model expressed in terms of material
response in stress relaxation and creep test is used, leading to six
equivalent forms of the hereditary wave equation, three of them expressed in
terms of relaxation modulus and the other three expressed in terms of creep
compliance. It is found that the hereditary wave equation expressed in terms
of relaxation modulus, either as (\ref{FWE-sigma-g}) for materials having
finite glass modulus and thus finite wave speed as well, or as (\ref{FWE-sigma}) for materials having infinite glass modulus and thus infinite
wave speed as well, leads to the physically meaningful energy estimates
either (\ref{ee-sigma-g}) or (\ref{ee-sigma}) corresponding to energy
dissipation. Therefore, the form of the energy estimate depends on the material
properties at the initial time-instant defining the wave propagation speed,
rather than the material properties for large time distinguishing the solid- and
fluid-like materials. The energy estimate (\ref{ee-epsilon-g}), implied by
the hereditary wave equation expressed in terms of creep compliance (\ref{FWE-epsilon-g}), did not prove to have physical meaning.
The monotonicity properties of the relaxation modulus, being a completely monotonic
function, and of the creep compliance, being a Bernstein function, are the key point in
proving dissipativity properties of the hereditary fractional wave
equations. It is shown that the requirement for the relaxation modulus to be
completely monotonic, i.e., for the creep compliance to be a Bernstein function, is
equivalent to the thermodynamical conditions for the linear fractional model
(\ref{gen-lin}) and power-type distributed-order model (\ref{DOCE}), while in
the case of the fractional Burgers models these monotonicity requirements
are more restrictive than the thermodynamical requirements, as found in \cite{OZ-2}.
Non-local Hooke and Eringen fractional wave equations, given by (\ref{AS})
and (\ref{EringenWE}), are respectively obtained by coupling the non-local
constitutive models of Hooke- and Eringen-type, (\ref{nl-Huk}) and (\ref{frac-Eringen}), with the equation of motion (\ref{eq-motion}) and strain
(\ref{strejn}). A priori energy estimates (\ref{ZO-AS}) and (\ref{ZO-AS-1})
for the non-local Hooke wave equation and energy estimate (\ref{ZO-Eringen}) for the
fractional Eringen wave equation imply the energy conservation, with the
reinterpreted notion of the potential energy, being at a particular point
dependent on the square of strain at all other points weighted by the model
dependent non-locality kernel. In particular, in the energy estimate (\ref{ZO-AS}) the non-local potential energy is proportional to the square of the fractional
strain, represented by the action of the fractional Laplacian on the
displacement.
\section{Introduction}\label{intro}
In this paper, we consider the integer programming (IP) problem with linear inequality constraints:
\begin{equation}\label{IP}
\min_x \{\sum\limits_{i=1}^{n} x_i: ~Ax\geq b,~ x\in\{0, 1\}^n\},
\end{equation}
where $A\in R^{m\times n}$ is a given nonnegative matrix and $b\in R^m$ is a given vector. Problem (\ref{IP}) arises from many combinatorial optimization problems, such as the minimum vertex cover \cite{Chen2016}, the maximum independent set, and the max-cut problems \cite{BILLIONNET2010}, etc. These problems have been proven to be NP-hard, and many exact and heuristic or approximation algorithms have been proposed, including the cutting-plane method \cite{GOMORY1958}, branch and bound \cite{DAKIN1965} and local branching \cite{Lodi2010} algorithms, relaxation induced neighbourhood search \cite{Danna2005}, and the objective scaling ensemble approach \cite{Zhang2020}.
It is worth noting that most of the above mentioned algorithms are based on the corresponding linear programming relaxation. Therefore, it is natural to ask under which conditions the integer programming problem \eqref{IP} is equivalent to its corresponding linear programming relaxation problem:
\begin{eqnarray*}\nonumber
\min_x \{\sum\limits_{i=1}^{n} x_i: ~Ax\geq b,~ 0\leq x\leq1\}.
\end{eqnarray*}
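For instance, the following Python sketch (toy data, not from this paper) solves the relaxation with an LP solver and exhibits a fractional optimum, showing that integrality may fail without further conditions:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# LP relaxation of (IP): min sum(x) s.t. A x >= b, 0 <= x <= 1 (toy data).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])
res = linprog(c=np.ones(3), A_ub=-A, b_ub=-b, bounds=[(0, 1)] * 3,
              method='highs')
print(res.x, res.fun)   # fractional optimum x = (0.5, 0.5, 0.5), value 1.5,
                        # while the best integral solution has value 2
\end{verbatim}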
At present, there are some theories ensuring that the above linear programming problem has an integral solution.
The first well-known theory is total unimodularity ($TUM$ for short), under which the following holds.
\begin{theorem} (\cite{HOFFMAN1976})
If $A$ is totally unimodular and $b$ is an integer vector, then the vertices of the polyhedron $P = \{x:~Ax \leq b,~ 0\leq x\leq1\}$ are integral.
\end{theorem}
Another well-known theory is total dual integrality ($TDI$ for short) proposed by Edmonds and Giles \cite{Edmonds1977}, which is a weaker sufficient condition than TUM.
\begin{theorem}(\cite{Edmonds1977})
If the linear system $Ax \leq b$ is TDI and $b$ is integer valued, then $P = \{x: ~Ax \leq b,~ 0\leq x\leq1\}$ is an integral polyhedron.
\end{theorem}
Note that the linear programming relaxation of problem \eqref{IP} can be written in the form of a linear complementarity problem (LCP):
\begin{equation}\label{lcp}
\begin{array}{l}
q+Mz\geq0,\\
z^T(q+Mz)=0,\\
z\geq0,
\end{array}
\end{equation}
where $z\in R^n$ is the vector of variables.
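Whether a candidate $z$ solves a given LCP $(q,M)$ is straightforward to verify; a minimal sketch (illustrative data):
\begin{verbatim}
import numpy as np

# Verifying that a candidate z solves the LCP (lcp) for given (q, M).
def is_lcp_solution(q, M, z, tol=1e-9):
    w = q + M @ z
    return bool((w >= -tol).all() and (z >= -tol).all() and abs(z @ w) <= tol)

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, 1.0])
print(is_lcp_solution(q, M, np.array([0.5, 0.0])))   # True: q + Mz = (0, 1.5)
\end{verbatim}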
Assuming that the solution set of the LCP is non-empty, Chandrasekaran et al. \cite{Chandrasekaran1998} characterized the class $I$ of integral matrices $M$ for which the corresponding LCP has an integer solution for each integral vector $q$.
Utilizing TDI, Dubey and Neogy \cite{Dubey2018} obtained some new conditions for the existence of an integer solution to the LCP with a hidden $Z$-matrix and hidden $K$-matrix. These results extend $TUM$ and $TDI$.
In this paper, we wish to establish a sufficient and necessary condition other than $TUM$ and $TDI$, such that the integer program \eqref{IP} is solvable by linear programming relaxation.
We consider providing an adjustable weight vector $c$ with $0<c_i\leq1$, $i=1,2,\dots,n$,
such that the optimal solution of the weighted linear programming relaxation problem
\begin{eqnarray}\label{wlp}
\min_x\{c^Tx:Ax\geq b, 0\leq x\leq1\},
\end{eqnarray}
is an optimal solution of problem \eqref{IP}.
Let the $l_0-$norm $\|x\|_0$ be defined as the number of nonzero elements in the vector $x$. In Section \ref{chaper2} of this paper, we will prove that problem \eqref{IP} can be written equivalently as the $l_0-$norm minimization problem
\begin{equation}\label{la1}
\min_x\{\|x\|_0:Ax\geq b,0\leq x\leq 1\}.
\end{equation}
Therefore, we expect to establish a sufficient condition through the sparse optimization approach, such that problems \eqref{la1} and \eqref{wlp} have the same unique optimal solution, which thus provides an optimal solution to problem \eqref{IP}.
Problem \eqref{la1} is a sparse optimization problem. In this field, the problem
\begin{equation}\label{wo}
\min\limits_{x,y}\{\|x\|_0: A_1x+A_2y=b\}
\end{equation}
has been studied extensively, seeking conditions for an optimal solution of $\min\limits_{x,y}\{\|x\|_1: A_1x+A_2y=b\}$ to be an optimal solution of the above problem. The well-known proposed conditions are the partial restricted isometry property (PRIP) with parameter $\delta^r_{s-r}$ \cite{Bandeira2013}, and the partial null space property (PNSP) of order $s-r$ \cite{Bandeira2013}.
In \cite{Zhao2014,zhao2017,Zhao2018}, the authors provided the $k$-th order range space property ($k$-RSP) of the constraint matrix $A$ and a given matrix $W$, under which the $l_0$-norm minimization problem has the same unique optimal solution as the problem $\min\limits_{x,y}\{\|W_1x+W_2y\|_1: A_1x+A_2y=b\}$. The condition was further extended to the $l_0$-norm minimization problem with non-negative constraints \cite{zhao2012}.
Another condition is the $s$-goodness of the constraint matrix $A$ \cite{Juditsky2011}, under which the unique optimal solution of the $l_1$-norm minimization problem $\min\limits_{x,y}\{\|x+y\|_1: A_1x+A_2y=b\}$ is exactly an optimal solution of problem \eqref{wo}. In \cite{Kong2014}, the condition was extended to the problem
\begin{equation}
\min\limits_{x,y}\{\|x\|_1: A_1x+A_2y=b\},
\end{equation}
and the partial $s$-goodness of the constraint matrix $(A_1, A_2)$ was established, guaranteeing the exact partial $s$-sparse optimal solution via partial $l_1$-norm minimization.
In this paper, we propose the nonnegative partial $s$-goodness condition for problems \eqref{wlp} and \eqref{la1}, such that they have the same unique optimal solution. Specifically, we propose a definition of nonnegative partial $s$-goodness and its characterization via two quantities $\gamma_{s,K}(\cdot)$ and $\hat{\gamma}_{s,K}(\cdot)$ with respect to the constraint matrices and the coefficients of the objective function. On this basis, we give a computable upper bound of $\hat{\gamma}_{s,K}(\cdot)$, and thus obtain verifiable sufficient conditions for nonnegative partial $s$-goodness. Through these, we provide conditions under which an optimal solution of problem \eqref{IP} can be obtained from problem \eqref{wlp}.
This paper is organized as follows. Section \ref{chaper2} gives an example of problem \eqref{IP}, and proves the equivalence between problems (\ref{IP}) and (\ref{la1}). In Section \ref{chaper3}, we define nonnegative partial $s$-goodness and its characterization for the constraint matrix $A^\prime$ and the coefficient vector $c$ of the objective function in problem \eqref{wlp}. Moreover, we derive a necessary condition and a sufficient condition for $(A^\prime,c)$ to be nonnegative partially $s$-good, and discuss an efficiently computable upper bound of $\hat{\gamma}_{s,K}(\cdot)$ in Section \ref{chaper4}.
In Section \ref{chaper5}, we give a heuristic algorithm and three examples to show the feasibility of the proposed theory. Finally, Section \ref{chaper6} concludes this paper.
\section{Problem reformulation}\label{chaper2}
In this section, we prepare some preliminary work for the consequent research. First, we give an example of the considered problem \eqref{IP}. Then, we show that the considered integer programming problem \eqref{IP} can be converted equivalently to an $l_0$-norm minimization problem.
\subsection{An example of the considered integer programming problem}\label{chaper21}
We take the maximum independent set problem as an example to show that it is a special form of problem \eqref{IP}.
Given an undirected graph $G=(V,E)$, where $V=\{1,\cdots,n\}$ is the set of vertices and $E$ is the set of edges, the maximum independent set problem can be formulated as
\begin{equation}\label{MIS}
\begin{array}{cl}
\max\limits_x & \sum\limits_{i=1}^{n}x_i\\
s.t. &Ax\leq 1\\
&x\in\{0,1\}^n,
\end{array}\end{equation}
where $A$ is the edge--vertex incidence matrix of graph $G$ (each row corresponds to an edge and has exactly two entries equal to 1), so that the constraints read $x_i+x_j\leq1$ for every edge $(i,j)\in E$.
Let $\tilde{x}_i=1-x_i$, $i=1, 2, \cdots, n$. Then problem (\ref{MIS}) can be written equivalently as
\begin{equation*}
\begin{array}{cl}
\min\limits_{\tilde{x}}&\sum\limits_{i=1}^{n}\tilde{x}_i\\
s.t. &A\tilde{x}\geq 1\\
&\tilde{x}\in\{0,1\}^n,
\end{array}\end{equation*}
which is a special form of problem \eqref{IP}.
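As an illustration (the data below are chosen here, not taken from the paper), the transformation can be carried out on a path graph with three vertices, interpreting $A$ as the edge--vertex incidence matrix:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Path graph 1-2-3; A is the edge--vertex incidence matrix (one row per edge).
A = np.array([[1.0, 1.0, 0.0],     # edge (1,2)
              [0.0, 1.0, 1.0]])    # edge (2,3)
# LP relaxation of the transformed problem: min sum(x_tilde), A x_tilde >= 1.
res = linprog(c=np.ones(3), A_ub=-A, b_ub=-np.ones(2),
              bounds=[(0, 1)] * 3, method='highs')
x_tilde = res.x
print(x_tilde, 1.0 - x_tilde)      # x_tilde = (0,1,0); x = (1,0,1) is the
                                   # maximum independent set {1, 3}
\end{verbatim}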
\subsection{Equivalence between integer programming problem and $l_0$-norm minimization problem}\label{chaper22}
In this subsection, we show the equivalence between optimal solution of the $l_0$-norm minimization problem \eqref{la1} and the integer programming problem \eqref{IP} after a certain operation. First, we have the following result.
\begin{theorem}\label{thm21}
For any optimal solution $x^*$ of the $l_0$-norm minimization problem $\eqref{la1}$, $\hat{x}^*=(\lceil x_1^* \rceil, \lceil x_2^* \rceil,$ $\cdots, \lceil x_n^* \rceil)^T$ is an optimal solution of both the integer programming problem $\eqref{IP}$ and the $l_0$-norm minimization problem $\eqref{la1}$.
\end{theorem}
\begin{proof} For any optimal solution $x^*$ of problem $\eqref{la1}$, let $\hat{x}^*=(\lceil x_1^* \rceil, \lceil x_2^* \rceil, \cdots,$ $\lceil x_n^* \rceil)^T\in\{0,1\}^n$. By noting that $\hat{x}^*\geq x^*$ and $A\geq 0$, we have $A\hat{x}^*\geq Ax^* \geq b$. Hence, $\hat{x}^*$ is a feasible solution of both problems $\eqref{IP}$ and $\eqref{la1}$. Further, since $\sum\limits_{i=1}^{n}|x^*_i|_0=\sum\limits_{i=1}^{n}|\lceil x_i^* \rceil|_0=\sum\limits_{i=1}^{n}|\hat{x}^*_i|_0$, $\hat{x}^*$ is an optimal solution of problem $\eqref{la1}$.
Next, for any $x\in \{x\in \{0,1\}^n: Ax\ge b\}$, $x$ is also a feasible solution of problem $\eqref{la1}$. Since $\sum\limits_{i=1}^{n}x_i=\sum\limits_{i=1}^{n}|x_i|_0$, and by $\sum\limits_{i=1}^{n}|x^*_i|_0=\sum\limits_{i=1}^{n}|\hat{x}^*_i|_0$, we can obtain that $\sum\limits_{i=1}^{n}x_i \geq \sum\limits_{i=1}^{n}|\hat{x}^*_i|_0=\sum\limits_{i=1}^{n}\hat{x}^*_i$. Hence $\hat{x}^*$ is also an optimal solution of problem $\eqref{IP}$. $\hfill\square$
\end{proof}
Theorem \ref{thm21} implies that the following corollary holds.
\begin{corollary}
If problem $\eqref{la1}$ has a unique integer optimal solution $\hat{x}^*$, then $\hat{x}^*$ is also an optimal solution of the integer programming problem $\eqref{IP}$.
\end{corollary}
Conversely, we next prove that an optimal solution of problem $\eqref{IP}$ is also an optimal solution of problem $\eqref{la1}$.
\begin{theorem}\label{thm22}
If $x$ is an optimal solution of problem $\eqref{IP}$, then $x$ is also an optimal solution of problem $\eqref{la1}$.
\end{theorem}
\begin{proof}
If $x$ is an optimal solution of problem $\eqref{IP}$, then $x$ is a feasible solution of problem $\eqref{la1}$, and satisfies that $\sum\limits_{i=1}^{n}x_i=\sum\limits_{i=1}^{n}|x_i|_0$. For any optimal solution $x^*$ of problem $\eqref{la1}$, by Theorem \ref{thm21}, $\hat{x}^*=(\lceil x_1^* \rceil ,\lceil x_2^* \rceil ,\cdots,\lceil x_n^* \rceil)^T$ is an optimal solution of problems $\eqref{IP}$ and $\eqref{la1}$. Hence, $\sum\limits_{i=1}^{n}x_i=\sum\limits_{i=1}^{n}\hat{x}^*_i=\sum\limits_{i=1}^{n}|\lceil x_i^* \rceil|_0=\sum\limits_{i=1}^{n}|x^*_i|_0$. So $x$ is also an optimal solution of problem $\eqref{la1}$. $\hfill\square$
\end{proof}
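Theorems \ref{thm21} and \ref{thm22} can be illustrated on a tiny instance (data chosen here for illustration): an $l_0$-minimizer of $\eqref{la1}$ may be fractional, while its componentwise ceiling remains feasible, keeps the $l_0$-norm, and solves $\eqref{IP}$.
\begin{verbatim}
import numpy as np
from itertools import product

# A fractional l0-minimizer of (la1) whose ceiling solves (IP) (toy data).
A = np.array([[2.0, 1.0]])
b = np.array([1.0])
x_star = np.array([0.5, 0.0])          # feasible, fractional, ||x*||_0 = 1
assert (A @ x_star >= b).all()
x_hat = np.ceil(x_star)                # (1, 0): feasible since A >= 0
assert (A @ x_hat >= b).all()
assert np.count_nonzero(x_hat) == np.count_nonzero(x_star)   # same l0-norm

# optimal value of (IP) by enumerating {0,1}^2: equals sum(x_hat) = 1
feas = [x for x in product((0, 1), repeat=2)
        if (A @ np.array(x, dtype=float) >= b).all()]
print(min(map(sum, feas)), int(x_hat.sum()))                 # both print 1
\end{verbatim}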
By introducing slack variables, letting
$A^{\prime}: =[A_1,A_2]\in R^{(m+n)\times(m+2n)}$, where $A_1=\big(\begin{array}{c}A \\ I\end{array}\big)\in R^{(m+n)\times n}$,
$A_2 = \big(\begin{array}{cc}-I&0\\0&I\end{array}\big)\in R^{(m+n)\times (m+n)}$, $b^\prime=\big(\begin{array}{c} b \\ 1 \end{array}\big)
\in R^{m+n}$, problem (\ref{la1}) can be rewritten in the form
\begin{equation}\label{la2}
\min\limits_{x,y}\{\|x\|_0: A_1x+A_2y=b^\prime,x\geq 0,y\geq0\}.
\end{equation}
Correspondingly, the weighted linear programming problem (\ref{wlp}) can be written in the form of the partially weighted linear programming problem
\begin{equation}\label{la3}
\min\limits_{x,y}\{c^Tx: A_1x+A_2y=b^\prime,x\geq 0,y\geq0\}.
\end{equation}
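A minimal sketch (illustrative data and weights, not from the paper) of assembling $A_1$, $A_2$, $b^\prime$ and solving \eqref{la3} with an LP solver:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Assemble A1, A2, b' from (A, b) and solve (la3) for a chosen weight c.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
m, n = A.shape
A1 = np.vstack([A, np.eye(n)])                               # (m+n) x n
A2 = np.block([[-np.eye(m), np.zeros((m, n))],
               [np.zeros((n, m)), np.eye(n)]])               # (m+n) x (m+n)
b_prime = np.concatenate([b, np.ones(n)])
c = np.array([1.0, 0.5, 1.0])                                # adjustable weights
obj = np.concatenate([c, np.zeros(m + n)])                   # slack part costless
res = linprog(c=obj, A_eq=np.hstack([A1, A2]), b_eq=b_prime,
              bounds=[(0, None)] * (m + 2 * n), method='highs')
print(res.x[:n])                   # x = (0, 1, 0): a 1-sparse optimal solution
\end{verbatim}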
Then, to study the equivalence between the integer programming problem \eqref{IP} and the weighted linear programming problem \eqref{wlp}, we turn to derive conditions under which problems \eqref{la2} and \eqref{la3} have the same optimal solutions. In the sequel, we adapt the concept of $s$-goodness \cite{Juditsky2011} to problem \eqref{la2}, such that problems \eqref{la2} and \eqref{la3} have the same unique optimal solution. Through this optimal solution, we can obtain the optimal solution of problem \eqref{IP}.
\section{ Nonnegative partial $s$-goodness}\label{chaper3}
Since problem \eqref{la2} is NP-hard, we are interested in establishing some conditions under which problems \eqref{la2} and \eqref{la3} have the same unique optimal solution.
Firstly, we give the following definition of nonnegative partial $s$-goodness of the matrix $A^{\prime}$ and the weight vector $c$, where $A^\prime=(A_1, A_2)$.
\subsection{Definition of nonnegative partial $s$-goodness}\label{dnps}
\begin{definition}\label{dingyi1}
Let $A^{\prime}$ be an $(m+n)\times (m+2n)$ matrix, $s$ be an integer with $0\le s \leq n$, and $0< c_i\leq 1$, $i=1, 2, \ldots, n$. We say that $(A^{\prime}, c)$ is nonnegative partially $s$-good with respect to the columns of $A_1$, if for any pair of vectors $w^1\geq0\in R^n$, $w^2\geq0\in R^{m+n}$ such that $w^1$ has at most $s$ nonzero elements, $(w^1, w^2)^T$ is the unique optimal solution to the optimization problem
\begin{equation}\label{EQ}
\min_{x, y}\big\{c^Tx:A_1x+A_2y=A_1w^1+A_2w^2,x\geq 0,y\geq0\big\}.
\end{equation}
\end{definition}
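For one fixed pair $(w^1,w^2)$, the condition in Definition \ref{dingyi1} can be tested by solving \eqref{EQ}; the sketch below (illustrative data) checks whether the given pair is recovered. This is only a necessary check for a single pair, since nonnegative partial $s$-goodness quantifies over all $s$-sparse $w^1$, and a solver returning the pair does not by itself certify uniqueness.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Solve (EQ) for one pair (w1, w2) and test whether the pair is recovered.
def check_pair(A1, A2, c, w1, w2):
    n = A1.shape[1]
    rhs = A1 @ w1 + A2 @ w2
    obj = np.concatenate([c, np.zeros(A2.shape[1])])
    res = linprog(c=obj, A_eq=np.hstack([A1, A2]), b_eq=rhs,
                  bounds=[(0, None)] * (n + A2.shape[1]), method='highs')
    sol = res.x
    return np.allclose(sol[:n], w1) and np.allclose(sol[n:], w2)

A1 = np.vstack([np.eye(2), np.eye(2)])            # A = I, so A1 = [A; I]
A2 = np.block([[-np.eye(2), np.zeros((2, 2))],
               [np.zeros((2, 2)), np.eye(2)]])
c = np.array([1.0, 1.0])
w1 = np.array([1.0, 0.0])                         # 1-sparse
w2 = np.array([0.0, 0.0, 0.0, 1.0])
print(check_pair(A1, A2, c, w1, w2))              # True for this pair
\end{verbatim}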
For convenience of description, we say $(A^{\prime}, c)$ is nonnegative partially $s$-good to mean that $(A^{\prime}, c)$ is nonnegative partially $s$-good with respect to the columns of $A_1$. Moreover, without loss of generality, for $s\in \{0,1,2,\cdots,n\}$, we write $w^1\in R^n$ with $\|w^1\|_0\leq s$ to mean that the number of nonzero entries of $w^1$ is no more than $s$. Meanwhile, in this paper, a vector is said to be $s$-sparse when it contains at most $s$ nonzero components.
It is obvious that, if the partially weighted linear problem \eqref{la3} has multiple optimal solutions, then $(A^{\prime},c)$ is not nonnegative partially $s$-good. However, we want to recover the optimal solution of problem \eqref{la2} from problem \eqref{la3}, in which $c$ is not fixed. So we adjust the coefficient $c$ in problem \eqref{la3}, such that one of the optimal solutions is the unique optimal solution of problem \eqref{la3}.
Based on Definition \ref{dingyi1}, and taking a step closer to our goal, we obtain the following results, which characterize the consistency of solutions to problems \eqref{la2} and \eqref{la3}.
\begin{theorem}\label{thmknownS}
For any optimal solution $(w^1, w^2)^T$ of problem $(\ref{la2})$, where $w^1$ is an $s$-sparse vector, if $(A^\prime, c)$ is nonnegative partially $s$-good, then $(w^1,w^2)^T$ is the unique optimal solution to the partially weighted linear programming problem $(\ref{la3})$.
\end{theorem}
\begin{proof}
For any optimal solution $(w^1, w^2)^T$ of problem $(\ref{la2})$, where $w^1$ is an $s$-sparse vector, it holds that $A_1w^1+A_2w^2=b^\prime$ and $w^1\geq 0$, $w^2\geq 0$. That is, $(w^1,w^2)^T$ is a feasible solution of problem $(\ref{la3})$. If $(A^\prime, c)$ is nonnegative partially $s$-good, then according to Definition \ref{dingyi1}, $(w^1,w^2)^T$ with $\|w^1\|_0\leq s$ is the unique optimal solution of problem $(\ref{la3})$. $\hfill\square$
\end{proof}
Under the nonnegative partial $s$-goodness condition, Theorem \ref{thmknownS} shows that an optimal solution of problem $(\ref{la2})$ is a unique optimal solution of problem $(\ref{la3})$. In the above theorem, there is no requirement for the uniqueness of the solution of the $l_0$-norm minimization problem $(\ref{la2})$, while only the uniqueness of the partially weighted linear problem $(\ref{la3})$ is required.
Next, we give a stronger result that both problems $\eqref{la2}$ and $\eqref{la3}$ have a unique optimal solution.
\begin{theorem}\label{thm32}
Given an integer $0\leq s\leq n$, let $(w^1,w^2)^T$ with $\|w^1\|_0\le s$ be a feasible solution to problem $(\ref{la3})$. If $(A^\prime, c)$ is nonnegative partially $s$-good, then $(w^1,w^2)^T$ is the unique optimal solution to both the partially weighted linear programming problem $(\ref{la3})$ and the $l_0$-norm minimization problem $(\ref{la2})$.
\end{theorem}
\begin{proof}
Suppose $(A^\prime,c)$ is nonnegative partially $s$-good, and $(w^1,w^2)^T$ with $||w^1||_0\le s$ is a feasible solution to problem $(\ref{la3})$. Then by Definition \ref{dingyi1}, $(w^1,w^2)^T$ is the unique optimal solution to problem $(\ref{la3})$.
Next, we prove that $(w^1,w^2)^T$ is also the unique optimal solution to the $l_0$-norm minimization problem (\ref{la2}). Suppose $(x^1,y^1)^T$ is another optimal solution to problem (\ref{la2}), so that $\|x^1\|_0\leq \|w^1\|_0\leq s$. Then $A_1x^1+A_2y^1=b^\prime=A_1w^1+A_2w^2$ and $x^1\geq 0$, $y^1\geq 0$. By Definition \ref{dingyi1},
$(x^1, y^1)^T$ is also a unique optimal solution to problem (\ref{la3}), and then we can get $(x^1,y^1)^T=(w^1,w^2)^T$. Hence $(w^1, w^2)^T$ is the unique optimal solution of the $l_0-$norm minimization problem (\ref{la2}). $\hfill\square$
\end{proof}
According to Section \ref{chaper22} and Theorems \ref{thmknownS} and \ref{thm32}, we can immediately get that an optimal solution of the integer programming problem \eqref{IP} can be recovered from problem $(\ref{la3})$, as in the following corollary.
\begin{corollary}
Suppose $(A^\prime, c)$ is nonnegative partially $s$-good, and let $(w^1, w^2)^T$ with $\|w^1\|_0\le s$ be an optimal solution to the partially weighted linear programming problem $(\ref{la3})$. Then $\lceil w^1 \rceil$ is an optimal solution of the integer programming problem \eqref{IP}.
\end{corollary}
It seems difficult to completely characterize the nonnegative partial $s$-goodness of the constraint matrix $A^\prime$ and the coefficient vector $c$ of the objective function in \eqref{wlp}. In the next subsection, we utilize two quantities to characterize nonnegative partial $s$-goodness.
\subsection{Two quantities of nonnegative partial $s$-goodness}\label{chaper31}
In this section, we introduce two quantities: $\gamma_{s,K}\big(A^\prime,c,\beta\big)$ and $\hat\gamma_{s,K}\big(A^\prime,c,\beta\big)$, where $K:=\{1,2,\cdots,n\}$ is the index set of $x$, i.e., the index set of the columns of matrix $A_1$.
In particular, for a vector $\theta\in R^{m+n}$, let $\|\cdot\|_*$ be the dual norm of $\|\cdot\|$ specified by $\|\theta\|_*=\max\limits_{d}\{d^T\theta:\|d\|\leq1\}$. In this paper, we consider the dual norm of $\|\cdot\|_1$.
\begin{definition}\label{def2}
Let $A^\prime\in R^{(m+n)\times (m+2n)}$, let $s$ be an integer with $0\leq s\leq n$, let $0< c_i\leq 1$, $i=1, 2, \dots,n$, and let $\beta\in [0,\infty]$.
We define $\gamma_{s,K}\big(A^\prime,c,\beta\big),\ \hat\gamma_{s,K}\big(A^\prime,c,\beta\big)$ as follows:
(1) $\gamma_{s,K}\big(A^\prime,c,\beta\big)$ is the infimum of $\gamma>0$ such that for every pair of vectors $z^1\in R^n,z^2\in R^{m+n}$, where $z^1\in R^n$ has $s$ nonzero entries, each is equal to 1, there exists a vector $\theta\in R^{m+n}$ such that
\begin{eqnarray} \label{gamma1}
\| \theta\|_*\leq\beta,~ \big(A_1^{ T}\theta\big)_i\left\{\begin{array}{cc}=c_iz^1_i,& ~if~ z^1_i=1;\\
\in [-\gamma,\gamma],&~if~ z^1_i=0,\end{array}\right.
~\text{and}~ \big(A_2^{ T}\theta\big)_i\left\{\begin{array}{cc}
=0,&~if~ z^2_i\neq 0;\\
\leq 0,&~if~ z^2_i=0.
\end{array}\right.
\end{eqnarray}
If for some pair of vectors $z^1\in R^n,z^2\in R^{m+n}$ as above, there exists no $\theta$ with $\| \theta\|_*\leq\beta$ such that $A_1^{ T}\theta$ coincides with $c\circ z^1$ on the support set of $z^1$ and $A_2^{ T}\theta$ coincides with $0$ on the support set of $z^2$, then we set $\gamma_{s,K}\big(A^\prime,c,\beta\big)=+\infty$.
(2) $\hat\gamma_{s,K}\big(A^\prime,c,\beta\big)$ is the infimum of $\gamma>0$ such that for every pair of vectors $z^1\in R^n,z^2\in R^{m+n}$, where $z^1\in R^n$ has $s$ nonzero entries, each is equal to 1, there exists a vector $\hat{\theta}\in R^{m+n}$ such that
\begin{eqnarray} \label{gamma2}
\|\hat{\theta}\|_*\leq\beta,~\|\big(A_1^{T}\hat{\theta}\big)-c\circ z^1\|_{\infty}\leq \gamma~
\text{and}~ \big(A_2^{ T}\hat{\theta}\big)_i\left\{\begin{array}{cc}
=0,&~if~ z^2_i\neq 0;\\
\leq 0,&~if~ z^2_i=0,
\end{array}\right.
\end{eqnarray}
where $c\circ z^1$ denotes the entry-wise product of the two vectors.
\end{definition}
Furthermore, when $\beta=\infty$, we write $\gamma_{s,K}\big(A^\prime, c\big)$ and $\hat{\gamma}_{s,K}\big(A^\prime, c\big)$ instead of $\gamma_{s,K}\big(A^\prime, c, \infty\big)$ and $\hat{\gamma}_{s,K}\big(A^\prime,c,\infty\big)$, respectively.
\begin{remark}\label{re1}
Obviously, the set of admissible values of $\gamma$ is closed. Thus, if $\gamma_{s,K}\big(A^\prime,c,\beta)<+\infty$, then for every pair of vectors $z^1\in R^n,z^2\in R^{m+n}$, where $z^1\in R^n$ has $s$ nonzero entries, each is equal to 1, there exists a vector $\theta\in R^{m+n}$ such that
\begin{equation}\label{B}
\begin{array}{ll}
\| \theta\|_*\leq\beta, & \big(A_1^{ T}\theta\big)_i\left\{\begin{array}{cc}=c_iz^1_i, & ~if~ z^1_i=1;\\
\in [-\gamma_{s,K}\big(A^\prime,c,\beta),\gamma_{s,K}\big(A^\prime,c,\beta)],&~if~ z^1_i=0,\end{array}\right.
\\
\\
&\big(A_2^{ T}\theta\big)_i\left\{\begin{array}{cc}
=0,&~if~ z^2_i\neq0;\\
\leq 0,&~if~ z^2_i=0.\end{array}\right.
\end{array}
\end{equation}
Similarly, for every pair of vectors $z^1\in R^n$, $z^2\in R^{m+n}$, where $z^1\in R^n$ has $s$ nonzero entries, each is equal to 1, there exists a vector $\theta\in R^{m+n}$ such that
\begin{equation}\label{C}
\|\hat{\theta}\|_*\leq\beta, ~\|\big(A_1^{T}\hat{\theta}\big)-c\circ z^1\|_{\infty}\leq \hat{\gamma}_{s,K}\big(A^\prime,c,\beta\big)~\text{and}~ \big(A_2^{ T}\hat{\theta}\big)_i\left\{\begin{array}{cc}
=0,&~if~ z^2_i\neq 0;\\
\leq 0,&~if~ z^2_i=0.
\end{array}\right.
\end{equation}
\end{remark}
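For the $l_1$-norm choice above (so that $\|\cdot\|_*=\|\cdot\|_\infty$) and a fixed pair $(z^1,z^2)$, the smallest $\gamma$ in \eqref{C} is the optimal value of a linear program; $\hat{\gamma}_{s,K}(A^\prime,c,\beta)$ is then the maximum of this value over all admissible pairs. A minimal sketch (an illustrative encoding, not part of the theory):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Smallest gamma in (C) for a fixed pair (z1, z2), with ||.||_* = ||.||_inf:
# minimize gamma over (theta, gamma) subject to
#   |(A1^T theta - c o z1)_j| <= gamma for all j,
#   (A2^T theta)_i = 0 on supp(z2), <= 0 elsewhere, ||theta||_inf <= beta.
def gamma_pair(A1, A2, c, z1, z2, beta):
    mn, n = A1.shape
    I2 = np.flatnonzero(z2 != 0)
    I2c = np.flatnonzero(z2 == 0)
    obj = np.concatenate([np.zeros(mn), [1.0]])       # variables (theta, gamma)
    A_ub = np.vstack([np.hstack([A1.T, -np.ones((n, 1))]),
                      np.hstack([-A1.T, -np.ones((n, 1))]),
                      np.hstack([A2.T[I2c], np.zeros((len(I2c), 1))])])
    b_ub = np.concatenate([c * z1, -c * z1, np.zeros(len(I2c))])
    A_eq = np.hstack([A2.T[I2], np.zeros((len(I2), 1))]) if len(I2) else None
    b_eq = np.zeros(len(I2)) if len(I2) else None
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(-beta, beta)] * mn + [(0, None)], method='highs')
    return res.fun if res.success else np.inf
\end{verbatim}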
Before characterizing nonnegative partial $s$-goodness of $(A^{\prime}, c)$ more specifically, we need to give some basic properties of $\gamma_{s,K}\big(A^\prime,c,\beta\big)$ and $\hat{\gamma}_{s,K}\big(A^\prime,c,\beta\big)$, such as convexity and monotonicity.
Since nonnegative partial $s$-goodness of $(A^\prime,c)$ requires $\gamma_{s,K}(A^\prime,c)<\infty$, we assume it holds without loss of generality in the sequel.
\begin{lemma}\label{lam2}
$\gamma_{s,K}(A^\prime,c,\beta)$ and $\hat{\gamma}_{s,K}(A^\prime,c,\beta)$ are convex nonincreasing function of $\beta\in [0,+\infty]$.
\end{lemma}
\begin{proof}
Here, we only need to prove that $\gamma_{s,K}\big(A^\prime,c,\beta)$ is a convex nonincreasing function with respect to $\beta\in [0,+\infty]$. The property for $\hat{\gamma}_{s,K}(A^\prime,c,\beta)$ can be proved similarly.
Firstly, for the given $A^\prime, c$ and $s$, we show that $\gamma_{s,K}\big(A^\prime, c, \beta)$ is a nonincreasing function of $\beta$. For any $ \beta_2\ge \beta_1$, according to the definition of $\gamma_{s,K}\big(A^\prime,c,\beta\big)$ and Remark \ref{re1},
for every pair of vectors $z^1\in R^n,z^2\in R^{m+n}$, where $z^1\in R^n$ has $s$ nonzero entries, each is equal to 1, there exists a vector $\theta\in R^{m+n}$ such that
\begin{equation*}
\begin{array}{ll}
\| \theta\|_*\leq\beta_1 ~,&\big(A_1^{T}\theta\big)_i\left\{\begin{array}{cc}=c_iz^1_i,& ~if~ z^1_i=1;\\
\in [-\gamma_{s,K}\big(A^\prime,c,\beta_1),\gamma_{s,K}\big(A^\prime,c,\beta_1)],&~if~ z^1_i=0,\end{array}\right.
\\
\\
&\big(A_2^{ T}\theta\big)_i\left\{\begin{array}{cc}
=0,&~if~ z^2_i\neq0;\\
\leq 0,&~if~ z^2_i=0.\end{array}\right.
\end{array}
\end{equation*}
Since $\beta_2\ge\beta_1$, the $\theta$ in the above equation also satisfies that
\begin{equation*}
\begin{array}{ll}
\| \theta\|_*\leq \beta_2 ~,&\big(A_1^{T}\theta\big)_i\left\{\begin{array}{cc}=c_iz^1_i,& ~if~ z^1_i=1;\\
\in [-\gamma_{s,K}\big(A^\prime,c,\beta_1),\gamma_{s,K}\big(A^\prime,c,\beta_1)],&~if~ z^1_i=0,\end{array}\right.
\\
\\&\big(A_2^{ T}\theta\big)_i\left\{\begin{array}{cc}
=0,&~if~ z^2_i\neq0;\\
\leq 0,&~if~ z^2_i=0.\end{array}\right.
\end{array}
\end{equation*}
Hence by the definition of $\gamma_{s,K}\big(A^\prime,c,\beta_2\big)$, $\gamma_{s,K}\big(A^\prime,c,\beta_1\big)\geq\gamma_{s,K}\big(A^\prime,c,\beta_2\big)$.
Next, we prove that $\gamma_{s,K}(A^\prime,c,\beta)$ is a convex function of $\beta$. That is to say, for any $\beta_1$, $\beta_2\in[0,+\infty]$, for any $\alpha\in[0,1]$, we need to prove that
\begin{equation}\label{eq9}
\gamma_{s,K}\big(A^\prime,c,\alpha\beta_1+(1-\alpha)\beta_2)\leq\alpha \gamma_{s,K}(A^\prime,c,\beta_1)+(1-\alpha)\gamma_{s,K}(A^\prime,c,\beta_2).
\end{equation}
Note that, the above inequality (\ref{eq9}) obviously holds if one of $\beta_1$ and $\beta_2$ is $+\infty$. Therefore, we only need to verify that for $\beta_1, \beta_2\in [0,+\infty)$, the inequality (\ref{eq9}) still holds. By the definition of $\gamma_{s,K}\big(A^\prime,c,\beta)$, it is easy to know that for every pair of vectors $z^1\in R^n,z^2\in R^{m+n}$, where $z^1\in R^n$ has $s$ nonzero entries, each is equal to 1, there exists a vector $\theta_\ell\in R^{m+n}$, $\ell\in\{1,2\}$ such that
\begin{equation*}
\begin{array}{ll}
\|\theta_{\ell}\|_*\leq\beta_{\ell} ~,&(A_1^{T}\theta_\ell)_{i}\left\{\begin{array}{cc}=c_iz^1_i,& ~if~ z^1_i=1;\\
\in [-\gamma_{s,K}\big(A', c, \beta_{\ell}), \gamma_{s,K}\big(A', c, \beta_{\ell})], &~if~ z^1_i=0,\end{array}\right.
\\
\\&\big(A_2^{ T}\theta_\ell\big)_i\left\{\begin{array}{cc}
=0,&~if~ z^2_i\neq0;\\
\leq 0,&~if~ z^2_i=0.\end{array}\right.
\end{array}
\end{equation*}
Clearly, for any $\alpha\in[0,1]$, we can easily get $$\|\alpha\theta_1+(1-\alpha)\theta_2\|_*\leq\alpha\beta_1+(1-\alpha)\beta_2.$$
Moreover,
\begin{equation*}
\begin{array}{ll}
&[A_1^T(\alpha\theta_1+(1-\alpha)\theta_2)]_i\left\{\begin{array}{cc}=c_iz^1_i,& ~if~ z^1_i=1;\\
\in [-\varrho,\varrho],&~if~ z^1_i=0,\end{array}\right.
\\
\\&[A_2^{T}(\alpha\theta_1+(1-\alpha)\theta_2)\big]_i\left\{\begin{array}{cc}
=0,&~if~ z^2_i\neq0;\\
\leq 0,&~if~ z^2_i=0,\end{array}\right.
\end{array}
\end{equation*}
where $\varrho=\alpha\gamma_{s,K}(A^\prime, c, \beta_1)+(1-\alpha)\gamma_{s,K}(A^\prime, c,\beta_2)$. Hence, by the definition of $\gamma_{s,K}(\cdot)$,
it holds that
$$\gamma_{s,K}\big(A^\prime, c,\alpha\beta_1+(1-\alpha)\beta_2)\leq\alpha \gamma_{s,K}(A^\prime, c,\beta_1)+(1-\alpha)\gamma_{s,K}(A^\prime, c,\beta_2). \qquad\square $$
\end{proof}
By Definition \ref{def2} and Lemma \ref{lam2}, the set of admissible values of $\gamma$ is closed and has an infimum $\gamma_{s,K}(A^\prime, c,\beta)$. Namely, for the given $A^\prime, c$ and $s$, if $\beta$ is large enough, we can set $\gamma_{s,K}(A^\prime,c,\beta)=\gamma_{s,K}(A^\prime,c)$. In the same way, for the given $A^\prime, c$ and $s$, if $\beta$ is large enough, we can set $\hat{\gamma}_{s,K}(A^\prime,c,\beta)=\hat{\gamma}_{s,K}(A^\prime,c)$.
From the definitions of $\gamma_{s,K}(A^\prime,c,\beta)$ and $\hat{\gamma}_{s,K}(A^\prime,c,\beta)$, it is obvious that $s$ is another important parameter of $\gamma$. Next, we give a property of $\gamma_{s,K}(A^\prime,c,\beta)$ and $\hat{\gamma}_{s,K}(A^\prime,c,\beta)$ with respect to $s$.
\begin{lemma}\label{lam1}
$\gamma_{s,K}(A^\prime,c,\beta)$ and $\hat{\gamma}_{s,K}(A^\prime,c,\beta)$ are monotonically nondecreasing functions of the parameter $s$.
\end{lemma}
\begin{proof}
Firstly, we prove that $\gamma_{s,K}(A^\prime,c,\beta)$ is a monotonically nondecreasing function of the parameter $s$. Let $\gamma_{s,K}(A^\prime,c,\beta)<\infty$. According to the definition of $\gamma_{s,K}(A^\prime,c,\beta)$ and Remark \ref{re1}, for every pair of vectors $z^1\in R^n$, $z^2\in R^{m+n}$, where $z^1\in R^n$ has $s$ nonzero entries, each equal to 1, there exists a vector $\theta\in R^{m+n}$ such that
\begin{equation}\label{gammas}
\begin{array}{ll}
\| \theta\|_*\leq\beta ~,&\big(A_1^{T}\theta\big)_i\left\{\begin{array}{cc}=c_iz^1_i,& ~if~ z^1_i=1;\\
\in [-\gamma_{s,K}\big(A^\prime,c,\beta),\gamma_{s,K}\big(A^\prime,c,\beta)],&~if~ z^1_i=0,\end{array}\right.
\\
\\
&\big(A_2^{ T}\theta\big)_i\left\{\begin{array}{cc}
=0,&~if~ z^2_i\neq0;\\
\leq 0,&~if~ z^2_i=0.\end{array}\right.
\end{array}
\end{equation}
Then, let $t_s(z^1, z^2)$ be the minimal value of the optimization problem
\begin{equation}
\begin{array}{cl}\label{ops}
\min\limits_{\theta} & \|(A_1^T\theta)\|_\infty \\
s.t. & \|\theta\|_*\leq\beta \\
& (A_1^T\theta)_i=c_iz^1_i, ~i\in I_1\\
& (A_2^{T}\theta)_i=0,~ i\in I_2 \\
& (A_2^{T}\theta)_i\leq 0, ~i\in \bar{I}_2,
\end{array}
\end{equation}
where $I_1=\{i: ~z^1_i=1\}$, $I_2=\{i: ~z^2_i\neq0\}$ and $\bar{I}_2=\{i:~z^2_i=0\}$. Obviously, $t_s(z^1, z^2)\le \gamma_{s,K}(A^\prime,c,\beta)$.
Let $s^\prime=s-1<s$. For every pair of vectors $z^{1\prime}\in R^n$, $z^2\in R^{m+n}$, where $z^{1\prime}\in R^n$ has $s-1$ nonzero entries, each equal to 1, we can construct a pair of vectors $z^1\in R^n$, $z^2\in R^{m+n}$, where $z^1$ is obtained from $z^{1\prime}$ by changing one entry with value $0$ to $1$. According to \eqref{ops}, it is obvious that
$$t_{s'}(z^{1\prime}, z^2)\le t_s(z^1, z^2)\le \gamma_{s,K}(A^\prime,c,\beta). $$
Hence, $\gamma_{s', K}(A^\prime, c, \beta)\le \gamma_{s, K}(A^\prime,c,\beta)$.
Using an argument similar to the above,
we can also show that $\hat{\gamma}_{s^\prime,K}(A^\prime,c,\beta)\leq \hat{\gamma}_{s,K}(A^\prime,c,\beta)$, i.e., $\hat{\gamma}_{s,K}(A^\prime, c, \beta)$ is a monotonically nondecreasing function of $s$. $\hfill\square$
\end{proof}
\begin{remark} According to Lemma \ref{lam1},
$\gamma_{s,K}(A^\prime,c,\beta)$ and $\hat{\gamma}_{s,K}(A^\prime,c,\beta)$ are nondecreasing functions of $s$, hence for all $s'\le s$, Remark \ref{re1} holds.
That means, in Remark \ref{re1} for every pair of vectors $z^1\in R^n$, $z^2\in R^{m+n}$ with $\|z^1\|_0\le s$, there exists $\theta$ with $\|\theta\|_*\leq\beta$ such that Equations \eqref{gamma1} and \eqref{gamma2} hold.
\end{remark}
\subsection{Sufficient condition and necessary condition of nonnegative partial $s$-goodness}\label{chaper32}
In this subsection, via $\gamma_{s,K}(A^\prime,c)$ we propose a sufficient condition and a necessary condition for nonnegative partial $s$-goodness of the constraint matrix $A^\prime$ and the coefficient $c$ of the objective function.
\begin{theorem}\label{EPI1}
Given $A^\prime\in R^{(m+n)\times (m+2n)}$, an integer $s$ with $0\leq s\leq n$, and $0< c\leq 1$, we have:
$(a)$ if $(A^\prime,c)$ is nonnegative partially $s$-good, then $\gamma_{s,K}(A^\prime,c)\leq\max\limits_{0< i\leq n}c_i$;
$(b)$ if $\gamma_{s,K}(A^\prime,c)<\min\limits_{0< i\leq n} c_i$, then $(A^\prime,c)$ is nonnegative partially $s$-good.
\end{theorem}
\proof
$(a)$ Suppose $(A^\prime, c)$ is nonnegative partially $s$-good.
For any given $w=(w^1, w^2)^T\geq 0$ in $R^{n}\times R^{m+n}$ with $\|w^1\|_0\leq s$,
let $I_1=\{i:~w^1_i>0\}$, $\bar{I}_1=\{i:~w^1_i=0\}$, $I_2=\{i:~w^2_i>0\}$ and $\bar{I}_2=\{i:~w^2_i=0\}$.
By Definition \ref{dingyi1}, $w$ is the unique optimal solution to problem ($\ref{EQ}$). According to the optimality condition, there exists $\theta\in R^{m+n}$ such that $f_\theta(x, y)=\sum\limits_{i=1}^{n}c_i|x_i|-\theta^T(A_1x+A_2y-A_1w^1-A_2w^2)$ attains its minimum value at $(x,y)^T=(w^1,w^2)^T$, i.e., $0\in\partial f_\theta(w^1, w^2)$. This implies that
\begin{eqnarray*}
\begin{array}{ll}
(A_1^T\theta)_i\left\{\begin{array}{cc}=c_i,& ~ i\in I_1;\\
\in[-\max\limits_{0< i\leq n}c_i, \max\limits_{0< i\leq n}c_i],& ~i\in \bar{I}_1,\end{array}\right.
~and~(A_2^{T}\theta)_i\left\{\begin{array}{cc}
=0,& ~i\in I_2;\\
\leq 0,&~i\in \bar{I}_2.\end{array}\right.
\end{array}
\end{eqnarray*}
Since $w\geq 0$, for the optimization problem
\begin{eqnarray*}
\min\limits_{\theta,
\gamma}\Bigg\{\gamma:(A_1^T\theta)_i\left\{\begin{array}{cc}=c_i,&~i\in I_1;\\
\in [-\gamma,\gamma],&~i\in \bar{I}_1,\end{array}\right.
~and~(A_2^{T}\theta)_i\left\{\begin{array}{cc}
=0,&~ i\in I_2;\\
\leq 0,&~i\in \bar{I}_2,\end{array}\right. \Bigg \}
\end{eqnarray*}
the optimal value $\gamma\leq\max\limits_{0< i\leq n}c_i$. By Definition \ref{def2}, $\gamma_{s,K}(A^\prime, c)$ is the infimum of $\gamma$, thus $\gamma_{s,K}(A^\prime,c)\leq\max\limits_{0< i\leq n}c_i$.
$(b)$ Suppose $\gamma_{s,K}(A^\prime,c)<\min\limits_{0< i\leq n} c_i$, next we prove $(A^\prime,c)$ is nonnegative partially $s$-good. That is, for a vector $w=(w^1,w^2)^T\geq0$ with $A_1w^1+A_2w^2=b^\prime$ and $\|w^1\|_0\leq s$, we need to prove that $w$ is the unique optimal solution to problem ($\ref{EQ}$).
First, we consider the special case $(w^1,w^2)^T=(0, w^2)^T$. Obviously, $(x,y)^T=(0,w^2)^T$ is the unique optimal solution to problem ($\ref{EQ}$): when $w^1=0$, we have $A_2y=A_2w^2$, and since $A_2 = \big(\begin{array}{ll}-I&0\\0&I\end{array}\big)$ is injective, it is easy to see that $y=w^2$ is unique.
Now, suppose $\|w^1\|_0=s^\prime$, $0\leq s^\prime\le s$, and its support set is $I_1$. Meanwhile, let $I_2$ be the support index set of $w^2$.
According to Lemma \ref{lam1}, we have $\gamma:=\gamma_{s^{\prime},K}(A^\prime,c)\leq\gamma_{s,K}(A^\prime,c)$. Since $\gamma_{s,K}(A^\prime,c)<\min\limits_{0< i\leq n}c_i$, we get that $\gamma <\min\limits_{0< i\leq n}c_i$. Moreover, by the definition of $\gamma_{s,K}(\cdot)$, there exists $\theta\in R^{m+n}$ such that
\begin{eqnarray}\label{eq16}
\begin{array}{ll}
\|\theta\|_*\leq\beta,~
(A_1^T\theta)_i\left\{\begin{array}{cc}=c_isign(w^1_i),& ~ i\in I_1;\\
\in [-\gamma,\gamma],& ~i\in \bar{I}_1,\end{array}\right.
~(A_2^{T}\theta)_i\left\{\begin{array}{cc}
=0,& ~i\in I_2;\\
\leq 0,&~i\in \bar{I}_2,\end{array}\right.
\end{array}
\end{eqnarray}
where $I_1=\{i:~w^1_i>0\}$, $\bar{I}_1=\{i:~w^1_i=0\}$, $I_2=\{i:~w^2_i>0\}$ and $\bar{I}_2=\{i:~w^2_i=0\}$. Furthermore by (a), there exists $\theta\in R^{m+n}$ satisfying Eq. \eqref{eq16} which is the optimal Lagrange multiplier of problem (\ref{EQ}). Then for any feasible solution $(x, y)^T$ of problem (\ref{EQ}), it holds that
\begin{equation}\label{f}
\begin{aligned}
f(x,y)&=c^Tx-\theta^T(A_1x+A_2y-A_1w^1-A_2w^2)\\
&=c^Tx-(A_1^T\theta)^T(x-w^1)-(A_2^T\theta)^T(y-w^2)\\
&=\sum_{i\in I_1}c_iw^1_i+\sum_{i\in \bar{I}_1}(c_i-(A_1^T\theta)_i)x_i-\sum_{i\in \bar{I}_2}(A_2^{T}\theta)_iy_i\\
&\geq\sum_{i\in I_1}c_iw^1_i+\sum_{i\in \bar{I}_1}(c_i-(A_1^T\theta)_i)x_i\\
&\geq c^Tw^1.
\end{aligned}
\end{equation}
According to \eqref{f}, it is obvious that the minimum value of the Lagrange function can be attained at $x=w^1$. Further, since $(x,y)$ and $(w^1,w^2)$ have the relationship $A_1x+A_2y=A_1w^1+A_2w^2$, $A_1x=A_1w^1$ and $A_2=\big(\begin{array}{ll}-I&0\\0&I\end{array}\big)$ is an injective matrix, it is easy to show that $A_2y=A_2w^2$. Therefore, $(x,y)^T=(w^1,w^2)^T$ is an optimal solution of problem (\ref{EQ}).
Next, it is necessary to prove that this optimal solution is unique. Suppose $(\tilde{x},\tilde{y})^T$ is another optimal solution of problem (\ref{EQ}). That is, \begin{equation*}
\begin{aligned}
f(\tilde{x},\tilde{y})-f(w^1,w^2)=\sum\limits_{i\in \bar{I}_1}(c_i-(A_1^T\theta)_i)\tilde{x}_i-\sum\limits_{i\in \bar{I}_2}(A_2^{T}\theta)_i\tilde{y}_i=0.
\end{aligned}
\end{equation*}
By the assumption that $\gamma<\min\limits_{0< i\leq n}c_i$, and
from Eq. \eqref{eq16}, $|(A_1^T\theta)_i|< \min\limits_{0< i\leq n} c_i$ for all $i\in \bar {I}_1$, which means that $\tilde{x}_i=0$ and $\sum\limits_{i\in \bar{I}_2}(A_2^{T}\theta)_i\tilde{y}_i=0$.
Therefore, $\tilde{x}_i=w^1_i=0$ for all $i\in \bar{I}_1$. Hence, $\|\tilde{x}-w^1\|_0\leq s$.
Further, for the vector $\tilde{x}-w^1$, define
\begin{equation*}
\begin{aligned}
h(\tilde{x}-w^1, \tilde{y}-w^2):=\sum\limits_{i=1}^{n}c_i|(\tilde{x}-w^1)_i|-\tilde\theta^T(A_1(\tilde{x}-w^1)+A_2(\tilde{y}-w^2)).
\end{aligned}
\end{equation*}
Similar to the proof of part (a) in Theorem \ref{EPI1}, there exists $\tilde\theta\in R^{m+n}$ such that
\begin{eqnarray*}
\begin{array}{ll}
&(A_1^T\tilde\theta)_i\left\{\begin{array}{cc}=c_isign((\tilde{x}-w^1)_i),& ~ if ~ (\tilde{x}-w^1)_i\neq 0;\\
\in [-\max\limits_{0< i\leq n}c_i, \max\limits_{0< i\leq n}c_i],& ~if~ (\tilde{x}-w^1)_i= 0,\end{array}\right.
\end{array}
\end{eqnarray*}
and
\begin{eqnarray*}
\begin{array}{ll}
(A_2^{T}\tilde\theta)_i\left\{\begin{array}{cc}
=0,& ~if ~(\tilde{y}-w^2)_i\neq 0;\\
\leq 0,&~if~ (\tilde{y}-w^2)_i= 0.\end{array}\right.
\end{array}
\end{eqnarray*}
Therefore, for the $\tilde{\theta}$ in the function $h(\tilde{x}-w^1, \tilde{y}-w^2)$, we have
\begin{equation*}
\begin{aligned}
0&=(A_1^T\tilde{\theta})^T(\tilde{x}-w^1)+(A_2^T\tilde{\theta})^T(\tilde{y}-w^2)\\
&=\sum\limits_{i\in I_1}(A_1^T\tilde{\theta})^T_i(\tilde{x}_i-w^1_i)+\sum\limits_{i\in I_2}(A_2^T\tilde{\theta})^T_i(\tilde{y}_i-w^2_i),
\end{aligned}
\end{equation*}
and then we can get $\tilde{x}_i=w^1_i$ for all $i\in I_1$. This combined with the fact $A_1\tilde{x}+A_2\tilde{y} =A_1w^1+A_2w^2$ can lead to $A_2\tilde{y} = A_2w^2$. Further, since $A_2=\big(\begin{array}{ll}-I&0\\0&I\end{array}\big)$ is an injective matrix, we have $\tilde{y} =w^2$ and then $(\tilde{x},\tilde{y})^T=(w^1,w^2)^T$.$\hfill\square$
Below, we show the relationship between $\gamma_{s,K}(\cdot)$ and $\hat\gamma_{s,K}(\cdot)$.
\begin{proposition}\label{pro1}
For arbitrary $\beta\in[0,\infty]$, if $\hat{\gamma}:=\hat{\gamma}_{s,K}(A^\prime,c,\beta)<\frac{1}{2}\min\limits_{0< i\leq n}c_i$, then
\begin{eqnarray}\label{eqpro1}
\gamma_{s,K}(A^\prime,c,\frac{\min\limits_{0<i\leq n}c_i}{\min\limits_{0<i\leq n}c_i-\hat\gamma}\beta)\leq\frac{\min\limits_{0<i\leq n}c_i}{\min\limits_{0<i\leq n}c_i-\hat\gamma}\hat\gamma<\min\limits_{0<i\leq n}c_i.
\end{eqnarray}
\end{proposition}
\begin{proof}
Suppose $\hat\gamma:=\hat\gamma_{s,K}(A^\prime,c,\beta)<\frac{1}{2}\min\limits_{0< i\leq n}c_i$. Now let $I_1$ be an $s$-element subset of $\{1, 2, \dots, n\}$, $\bar{I}_1:=\{1, 2, \dots, n\}\setminus I_1$, and let $I_2$ be a subset of $\{n+1, n+2, \dots, m+2n\}$, $\bar{I}_2:=\{n+1, n+2, \dots, m+2n\}\setminus I_2$.
For the $I_1$, $\bar{I}_1$ and $I_2$, $\bar{I}_2$, we define a closed convex set $\Pi_{I_1}$ in $R^n$ as
\begin{small}
\begin{eqnarray*}
\begin{array}{ll}
\Pi_{I_1}=\left\{\tau^\prime\in R^{n}:\exists \theta\in R^{m+n},\|\theta\|_*\leq\beta,(A_1^{T}\theta)_i\left\{\begin{array}{cc}=c_i\tau^\prime_i,~i\in I_1;\\ \in [-\hat\gamma,
\hat\gamma],~i\in\bar{I}_1,\end{array}\right.
(A_2^{ T}\theta)_i\left\{\begin{array}{cc}
=0,&~i\in I_2;\\
\leq 0,&~i\in \bar{I_2}
\end{array}\right.\right\}.
\end{array}
\end{eqnarray*}
\end{small}
Similar to the proof of Proposition 2.1 in \cite{Juditsky2011}, we claim that $\Pi_{I_1}$ contains the $\|\cdot\|_\infty$-ball $B$ centered at the origin with radius $\frac{\min\limits_{0<i\leq n}c_i-\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}$. The proof is as follows.
Define a subspace $L_{I_1}:=\{\tau^\prime\in R^n:\tau^\prime_i=0,i\in \bar{I}_{1}\}$ and let $L^{\perp}_{I_1}:=\{\tau^\prime\in R^n:\tau^\prime_i=0,i\in I_{1}\}$ be the orthogonal
complement of $L_{I_1}$. Let $P$ be the projection of $\Pi_{I_1}$ onto $L_{I_1}$ and $P^\prime$ be the projection of $\Pi_{I_1}$ onto $L^{\perp}_{I_1}$. Note that $\Pi_{I_1}$ is the direct sum of $P$ and $P^\prime$. Thus, $P$ is a closed convex set. Obviously, $L_{I_1}$ can be naturally identified with $R^s$.
Hence, the claim above can be stated more precisely as: the image $\bar{P}\subset R^s$ of $P$ contains the $\|\cdot\|_\infty$-ball $B_s$ in $R^s$ centered at the origin with radius $\frac{\min\limits_{0<i\leq n}c_i-\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}$.
Next, we prove that $B_s\subseteq \bar{P}$. Suppose, for contradiction, that $\bar{P}$ does not contain $B_s$. Since $P$ is a closed convex set, $\bar{P}$ is also a closed convex set. According to the separating hyperplane theorem, for $\nu\in B_s\setminus \bar{P}$ and $\bar{\nu}\in \bar{P}$, there exists $u\in R^s$ with $\|u\|_1=1$, such that
\begin{eqnarray}\label{SHT}
\begin{array}{ll}
u^T\nu>\max\limits_{\bar{\nu}\in \bar{P}}~ u^T\bar{\nu}.
\end{array}
\end{eqnarray}
In the following, we prove that there does not exist $u$ such that the inequality \eqref{SHT} holds.
First, define $\bar{z}^1\in R^n$ with $\|\bar{z}^1\|_0=s$ and $\bar{z}^2\in R^{m+n}$ as
\begin{eqnarray*}
\begin{array}{ll}
\bar{z}^1_i=\left\{\begin{array}{cc}1,~i\in I_1;\\ 0,~i\in\bar{I}_1,\end{array}\right.
~\bar{z}^2_i\left\{\begin{array}{cc}\neq0,~i\in I_2;\\ =0,~i\in\bar{I}_2.\end{array}\right.
\end{array}
\end{eqnarray*}
By the definition of $\hat{\gamma}_{s,K}(A^\prime, c, \beta)$, for the given $\bar{z}^1\in R^n$, $\bar{z}^2\in R^{m+n}$, there exists $\bar{\theta}\in R^{m+n}$ such that
\begin{eqnarray*}
\begin{array}{ll}
\|\bar{\theta}\|_*\leq\beta,~\|\big(A_1^{T}\bar{\theta}\big)-c\circ\bar{z}^1\|_\infty\leq \hat\gamma,~ \big(A_2^{ T}\bar{\theta}\big)_i\left\{\begin{array}{cc}
=0,&~if~ \bar{z}^2_i\neq 0;\\
\leq 0,&~if~ \bar{z}^2_i=0.
\end{array}\right.
\end{array}
\end{eqnarray*}
Then for the $u$ with $\|u\|_1=1$ in \eqref{SHT}, combining the above inequality with the definitions of $\Pi_{I_1}$ and $\bar{P}$, there exists a vector $\nu^\prime\in\bar{P}$ such that $$|c_i\nu^\prime_i-c_isign(u_i)|\leq\hat{\gamma}, ~i\in I_1.$$
Thus,
$$sign(u_i)-\frac{\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}\leq\nu^\prime_i\leq sign(u_i)+\frac{\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}, ~i\in I_1.$$ Since $\hat{\gamma}<\frac{1}{2}\min\limits_{0<i\leq n}c_i$, the above inequalities imply that the sign of $\nu_i^\prime$ is the same as that of $u_i$ for all $i$. Moreover, according to the definition of $\hat{\gamma}_{s,K}(A^\prime, c, \beta)$, we can get $$1-\frac{\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}\leq\nu^\prime_i\leq 1+\frac{\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}, ~i\in I_1.$$
Hence, $\nu^\prime>0$, $u>0$, and
$$\nu^\prime_i\geq\frac{\min\limits_{0<i\leq n}c_i-\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}, i\in I_1.$$
So
\begin{eqnarray*}
u^T\nu^\prime\geq\sum\limits_{i=1}^{s}|u_i|\frac{\min\limits_{0<i\leq n}c_i-\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}=\frac{\min\limits_{0<i\leq n}c_i-\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}.
\end{eqnarray*}
Further, for the above given $\nu\in B_s$ and $\|u\|_1=1$, we have
\begin{eqnarray*}
\frac{\min\limits_{0<i\leq n}c_i-\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}\geq\|\nu\|_\infty=\|u\|_1\|\nu\|_\infty\geq u^T\nu>u^T\nu^\prime\geq\frac{\min\limits_{0<i\leq n}c_i-\hat{\gamma}}{\min\limits_{0<i\leq n}c_i},
\end{eqnarray*}
which is a contradiction. So the claim holds.
Through the above proof, we can conclude that, for any $z=(z^1,z^2)^T\in R^n\times R^{m+n}$ with $z^1_i=1$ for $i\in I_1$, and $z^1_i=0$ else, there exists $\tau^\prime\in\Pi_{I_1}$ such that $$\tau^\prime_i=(\frac{\min\limits_{0<i\leq n}c_i-\hat{\gamma}}{\min\limits_{0<i\leq n}c_i}) z_i^1, ~i\in I_1.$$
Further, by the definition of $\Pi_{I_1}$, there exists $\hat\theta\in R^{m+n}$ with $\|\hat\theta\|_*\leq \frac{\min\limits_{0<i\leq n}c_i}{\min\limits_{0<i\leq n}c_i-\hat{\gamma}}\beta$ such that
\begin{eqnarray*}
\begin{array}{ll}
&(A_1^T\hat\theta)_i\left\{\begin{array}{cc}=\frac{\min\limits_{0<i\leq n}c_i}{\min\limits_{0<i\leq n}c_i-\hat\gamma}c_i\tau^\prime_i=c_iz^1_i,& ~ i\in I_1;\\
\in [-\frac{\min\limits_{0<i\leq n}c_i}{\min\limits_{0<i\leq n}c_i-\hat\gamma}\hat\gamma,\frac{\min\limits_{0<i\leq n}c_i}{\min\limits_{0<i\leq n}c_i-\hat\gamma}\hat\gamma],& ~ i\in\bar{ I}_1,\end{array}\right.\\
\end{array}
\end{eqnarray*}
and
\begin{eqnarray*}
\begin{array}{ll}
&(A_2^{T}\hat\theta)_i\left\{\begin{array}{cc}
=0,& ~i\in I_2;\\
\leq 0,&~ i\in\bar{I}_2.\end{array}\right.
\end{array}
\end{eqnarray*}
So by the definition of $\gamma_{s,K}(A^\prime,c,\beta)$, and since $\hat{\gamma}<\frac{1}{2}\min\limits_{0<i\leq n}c_i$, we can obtain
\begin{eqnarray*}
\gamma_{s,K}(A^\prime,c,\frac{\min\limits_{0<i\leq n}c_i}{\min\limits_{0<i\leq n}c_i-\hat\gamma}\beta)\leq\frac{\min\limits_{0<i\leq n}c_i}{\min\limits_{0<i\leq n}c_i-\hat\gamma}\hat\gamma<\min\limits_{0<i\leq n}c_i. \qquad \hfill\square
\end{eqnarray*}
\end{proof}
Based on Proposition \ref{pro1}, Theorem \ref{EPI1} can be equivalently written as:
\begin{theorem}\label{thm8}
Given $A^\prime\in R^{(m+n)\times (m+2n)}$, an integer $s$ with $0\leq s\leq n$, and $0< c\leq 1$, if $\hat{\gamma}_{s,K}(A^\prime,c)<\frac{1}{2}\min\limits_{0< i\leq n} c_i$, then $(A^\prime,c)$ is nonnegative partially $s$-good.
\end{theorem}
According to Theorem \ref{EPI1}, to show that $(A^\prime, c)$ is nonnegative partially $s$-good, we need to compare the magnitude of $\gamma_{s,K}(A^\prime, c)$ with $\min\limits_{0< i\leq n}c_i$.
Now, by Theorem \ref{thm8} and the fact that $\hat{\gamma}_{s,K}(A^\prime, c, \beta)$ is weaker than $\gamma_{s,K}(A^\prime, c, \beta)$, we focus on $\hat\gamma_{s,K}(\cdot)$, whose specific representation is presented in the next subsection.
\subsection{Specific representation of $\hat{\gamma}_{s,K}(\cdot)$}\label{chaper33}
$\hat{\gamma}_{s,K}(A^\prime, c, \beta)$ is given in Definition \ref{def2}, and is essentially obtained from the optimality condition of problem (\ref{EQ}).
In this subsection, we give a specific representation of $\hat{\gamma}_{s,K}(A^\prime, c, \beta)$ in more detail.
\begin{theorem}\label{EPI2}
Consider the polytope
\begin{equation*}
P_s=\{\tau\in R_+^{m+2n}: \tau=(\tau^1,\tau^2)^T, \tau^1\in R^n, \tau^2\in R^{m+n},~\|\tau^1\|_1\leq s,~\|\tau^1\|_\infty \leq 1\},
\end{equation*}
we have
\begin{equation}\label{F}
\hat{\gamma}_{s,K}(A^\prime,c,\beta)=\max_{\tau, x}\{\sum_{i=1}^{n} \tau^1_ic_ix_i-\beta\|A_1x\|_1:\tau\in P_s,\|x\|_1\leq1,~x\geq0\}.
\end{equation}
Particularly,
\begin{equation}\label{G}
\hat{\gamma}_{s,K}(A^\prime,c)=\max\limits_{\tau, x}\{\sum_{i=1}^{n} \tau^1_ic_ix_i:~\tau\in P_s,~\|x\|_1\leq 1,~A_1x=0,~x\geq0\}.
\end{equation}
\end{theorem}
\begin{proof}
For any vector $y\ge 0$, let $I_2(y)$ be its support set. According to Definition \ref{def2}, define
$$B_\beta(y)=\{\theta\in R^{m+n}:~\|\theta\|_*\leq\beta,~(A_2^{T}\theta)_i=0 \text{ for } i\in I_2(y),~(A_2^{T}\theta)_i\leq0 \text{ otherwise}\},$$
$$B=\{\nu\in R^{n}:~\|\nu\|_\infty\leq 1\}.$$
Then, $\hat{\gamma}_{s,K}(A^\prime,c,\beta)$ is the smallest $\gamma$ such that the set $C_{1,\gamma,\beta}:=A_1^{T}B_\beta(y)+\gamma B\subseteq R^n$ is closed, convex, and contains every vector with $s$ nonzero entries whose values are selected from $c_i$, $i=1, 2, \cdots, n$. This is equivalent to saying that $C_{1,\gamma,\beta}$ contains the convex hull of these vectors.
Let $\hat{c}=(c^T, 0, \cdots, 0)^T\in R^{m+2n}$. Then $C_{1,\gamma,\beta}$ contains the projection of $\hat{c}\circ P_s$ onto the $R^n$ space. Thus, $\gamma$ satisfies $\hat{c}\circ P_s\subseteq C_{1,\gamma,\beta}\times R^{m+n}$ if and only if for any pair of $(x,y)^T\geq0$ with $x\in R^n$ and $y\in R^{m+n}$,
\begin{equation}\label{UP}
\begin{aligned}
&\max_{\tau\in P_s}\sum_{i=1}^{n} \tau^1_ic_ix_i \leq\max_{\eta\in C_{1,\gamma,\beta}}\{\sum_{i=1}^{n} \eta_ix_i\}\\
&=\max_{\theta,\nu}\{<x,A_1^{T}\theta>+\gamma <x,\nu>: ~\theta\in B_\beta(y),~ \|\nu\|_\infty\leq 1\}\\
&\le\max_{\theta,\nu}\{<x,A_1^{T}\theta>+\gamma <x,\nu>: ~\|\theta\|_*\leq \beta,~ \|\nu\|_\infty\leq 1\}\\
&=\max_{\theta,\nu}\{<A_1x,\theta>+\gamma <x,\nu>: ~\|\theta\|_*\leq \beta,~ \|\nu\|_\infty\leq 1\}\\
&=\beta\|A_1x\|_1+\gamma\|x\|_1.
\end{aligned}
\end{equation}
That is, $\hat{c}\circ P_s\subseteq C_{1,\gamma,\beta}\times R^{m+n}$ if and only if $\gamma$ satisfies that
\begin{equation*}
\begin{aligned}
\max_{\tau\in P_s}~ \{\sum_{i=1}^{n} \tau^1_ic_ix_i-\beta\|A_1x\|_1:~x\geq0\}\leq \gamma\|x\|_1,
\end{aligned}
\end{equation*}
namely,
\begin{equation*}
\begin{aligned}
\max_{\tau, x}\{\sum_{i=1}^{n} \tau^1_ic_ix_i-\beta\|A_1x\|_1:\tau\in P_s,\|x\|_1\leq1,~x\geq0\}\leq\gamma.
\end{aligned}
\end{equation*}
Therefore Eq. \eqref{F} holds, since $\hat{\gamma}_{s,K}(A^\prime,c,\beta)$ is the smallest such $\gamma$. Finally, Eq. \eqref{G} holds because Eq. \eqref{F} can be regarded as a penalized version of Eq. \eqref{G}. $\hfill\square$
\end{proof}
For $c\circ x\geq0\in R^{n}$, we define the sum of the $s$ largest entries of $c\circ x$ as $$\|c\circ x\|_{s,K,1}:= \max\limits_{\tau\in P_s}\sum\limits_{i=1}^{n} \tau^1_ic_ix_i.$$
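This quantity is straightforward to evaluate numerically; a minimal sketch (in Python, assuming the nonnegative vector $c\circ x$ is given) is:
\begin{verbatim}
import numpy as np

def s_largest_sum(v, s):
    # ||v||_{s,K,1} for a nonnegative vector v: the sum of its s largest
    # entries, which equals max over tau in P_s of <tau^1, v>.
    return float(np.sort(np.asarray(v, dtype=float))[::-1][:s].sum())

print(s_largest_sum([0.3, 0.9, 0.1, 0.5], 2))   # prints 1.4
\end{verbatim}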
Then by taking Theorems \ref{thm8} and \ref{EPI2} into consideration, we can obtain the following result:
\begin{corollary}\label{CO1}
Given a matrix $A_1$, $\hat\gamma_{s,K}(A^\prime,c)$ is the least upper bound on $\|c\circ x\|_{s,K,1}:=\max\limits_{\tau\in P_s}\sum\limits_{i=1}^{n} \tau^1_ic_ix_i$ over $x\geq0$, $x\in Ker(A_1)$ and $\|x\|_1\leq1$. As a result, if the maximum of $\|c\circ x\|_{s,K,1}$ over $x\in Ker(A_1)$ and $\|x\|_1\leq1$ is less than $\frac{1}{2}\min\limits_{0<i\leq n}c_i$, then $(A^\prime,c)$ is nonnegative partially $s$-good.
\end{corollary}
Equations \eqref{F} and \eqref{G} provide the specific forms of $\hat\gamma_{s, K}(\cdot)$. Thus, we can judge the nonnegative partial $s$-goodness of $(A^\prime,c)$ according to Theorem \ref{thm8}, as long as Eqs. \eqref{F} and \eqref{G} can be calculated. However, in Eqs. \eqref{F} and \eqref{G}, the calculation of $\hat\gamma_{s,K}(\cdot)$ is complicated, and sometimes it is not easy to directly calculate the specific value of $\hat\gamma_{s,K}(A^\prime,c,\beta)$.
To make up for this shortcoming, in what follows we give an efficiently computable upper bound of $\hat\gamma_{s,K}(A^\prime,c,\beta)$ to estimate its value.
\section{ Efficient bounding of $\hat\gamma_{s,K}(\cdot)$}\label{chaper4}
From the previous sections, we have shown that $\hat\gamma_{s,K}(A^\prime,c,\beta)$ plays an important role in distinguishing whether $(A^\prime, c)$ is nonnegative partially $s$-good. However, it is still not easy to determine the exact value of $\hat\gamma_{s,K}(A^\prime,c,\beta)$ according to Eqs. \eqref{G} and \eqref{F}. In this section, we introduce an efficiently computable upper bound on the value of $\hat\gamma_{s,K}(A^\prime,c,\beta)$.
Since $\hat\gamma_{s,K}(A^\prime,c,\beta)\geq\hat\gamma_{s,K}(A^\prime,c)$ for any $\beta>0$, we will use Eq. $\eqref{G}$ to calculate an upper bound of $\hat\gamma_{s,K}(A^\prime,c)$. The difficulty lies in handling the linear constraint $A_1x=0$, which will be treated via Lagrangian relaxation.
Since $\hat\gamma_{s,K}(A^\prime,c)>0$, we only consider the case where the entries of $x$ are not all $0$. For any matrix $Q=[q_1,\cdots,q_{n}]\in R^{(m+n)\times n}$ with $A_2^T Q\leq 0$,
we have
\begin{equation*}
\begin{array}{ll}
(Q^TA_1x)_i&=\sum\limits_{\ell=1}^{m+n}\sum\limits_{j=1}^{n}q_{\ell, i} a_{\ell,j}x_j=0,~i=1,2,\dots,n,
\end{array}
\end{equation*}
where $a_{\ell,j}$ is the element of matrix $A_1$ in the $\ell$-th row and $j$-th column, and $q_{\ell,i}$ is the element of matrix $Q$ in the $\ell$-th row and $i$-th column.
Let $C\in R^{n\times n}$ be the diagonal matrix $diag(c_1, c_2, \cdots, c_n)$. For the above $Q$ and by Eq. $\eqref{G}$, we can get
\begin{eqnarray*}
\begin{aligned}
&\hat\gamma_{s,K}(A^\prime,c)\\
&=\max\limits_{\tau, x}\{\sum_{i=1}^{n} \tau^1_ic_ix_i:~\tau\in P_s,~\|x\|_1\leq 1,A_1x=0,~x\geq0\}\\
&\le\max\limits_{\tau\in P_s\atop x\geq0}\left\{\sum_{i=1}^{n} \tau^1_i(c_ix_i-\sum\limits_{\ell=1}^{m+n}\sum\limits_{j=1}^{n}q_{\ell, i} a_{\ell,j}x_j): ~\|x\|_1\leq 1,Q^TA_1 x=0,A_2^T Q\leq 0\right\}\\
&=\max\limits_{\tau\in P_s\atop x\geq0}\left\{\tau^{1T}(Cx-Q^TA_1x): ~ \|x\|_1\leq 1,~Q^TA_1 x=0,A_2^T Q\leq 0\right\}\\
&\le\max\limits_{\tau\in P_s\atop x\geq0}\left\{\tau^{1T}(Cx-Q^TA_1x): ~ \|x\|_1\leq 1,A_2^T Q\leq 0\right\}\\
&=\max\limits_{\tau\in P_s\atop x\geq0}\left\{\tau^{1T}(C-Q^TA_1)x:~\|x\|_1\leq 1,A_2^T Q\leq 0\right\}.
\end{aligned}
\end{eqnarray*}
Hence we can solve the problem
\begin{eqnarray}\label{up2}
\begin{aligned}
\max\limits_{\tau, x}\left\{\tau^{1T}(C-Q^TA_1)x:~x\geq0,\|x\|_1\leq 1,\tau\in P_s,A_2^T Q\leq 0\right\}
\end{aligned}
\end{eqnarray}
to obtain an upper bound of $\hat\gamma_{s,K}(A^\prime,c)$, which is linear in $x$.
In the above problem, the feasible region for $x$ is the convex hull of the origin and the $n$ points $e_i$, $i=1, 2, \cdots, n$, where $e_i$ is the $n$-dimensional vector with the $i$-th component being $1$ and the remaining components being $0$. Therefore, the above problem can be rewritten as
\begin{equation}\label{upp}
\begin{aligned}
&\max\limits_{\tau, x}\left\{\tau^{1T}(C-Q^TA_1)x:~x\geq0,\|x\|_1\leq 1,\tau\in P_s,A_2^T Q\leq 0\right\}\\
&\leq\max\limits_{\tau, 0< j\leq n}\left\{|\tau^{1T}(C-Q^TA_1)e_j|:~ \tau\in P_s,A_2^T Q\leq 0\right\}\\
&=\max_{0< j\leq n}\left\{\max_{\tau\in P_s}|\tau^{1T}(C-Q^TA_1)e_j|:~A_2^T Q\leq 0\right\}\\
&=\max\limits_{0< j\leq n} \left\{\|(C-Q^TA_1)e_j\|_{s,K,1}:~A_2^T Q\leq 0\right\}.
\end{aligned}
\end{equation}
Define $g_{A_1,C,s,K}(Q)$ as $\max\limits_{0< j\leq n} \|(C-Q^TA_1)e_j\|_{s,K,1}$, and let
\begin{equation*}
\eta_{s,K}(A_1,C,\infty):=\left\{
\begin{array}{cl} \min\limits_{Q} & g_{A_1,C,s,K}(Q)\\
\text{s.t.}&~A_2^T Q\leq 0.
\end{array}
\right.
\end{equation*}
Then
\begin{equation*}
\hat\gamma_{s,K}(A^\prime,c)\leq\eta_{s,K}(A_1,C,\infty).
\end{equation*}
Since $g_{A_1,C,s,K}(Q)$ is easy to compute, $\eta_{s,K}(A_1,C,\infty)$ is also easy to compute.
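For concreteness, a small sketch of $g_{A_1,C,s,K}(Q)$ might look as follows; summing the $s$ largest entries in absolute value is our assumed extension of $\|\cdot\|_{s,K,1}$ to sign-indefinite columns:
\begin{verbatim}
import numpy as np

def g(A1, c, Q, s):
    # g_{A1,C,s,K}(Q) = max_j ||(C - Q^T A1) e_j||_{s,K,1}, where the s
    # largest entries are taken in absolute value (an assumed extension
    # of the norm to sign-indefinite columns).
    M = np.abs(np.diag(c) - Q.T @ A1)
    return max(np.sort(M[:, j])[-s:].sum() for j in range(M.shape[1]))
\end{verbatim}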
Further, from \eqref{UP} we have
\begin{equation*}
\begin{aligned}
\max_{\tau\in P_s}\sum_{i=1}^{n} \tau^1_ic_ix_i &\leq\max_{\theta,\nu}\{<A_1x,\theta>+\gamma <x,\nu>: \|\theta\|_*\leq \beta,\|\nu\|_\infty\leq 1\}\\
&=\max_{\theta}\{<A_1x,\theta>:\|\theta\|_*\leq \beta\}+\gamma\|x\|_1.
\end{aligned}
\end{equation*}
Thus, $\gamma$ satisfies that
\begin{equation*}
\begin{aligned}
\max_{\tau\in P_s\atop x\geq0}\left\{\sum_{i=1}^{n} \tau^1_ic_ix_i -\max_{\theta}<A_1x,\theta>: \|\theta\|_*\leq \beta,\|x\|_1\leq1\right\}\leq\gamma.
\end{aligned}
\end{equation*}
Note that $\hat\gamma_{s,K}(A^\prime, c, \beta)$ is the infimum of $\gamma$, hence,
\begin{equation*}
\begin{aligned}
\hat\gamma_{s,K}(A^\prime, c, \beta)=\max\limits_{\tau\in P_s,x\geq0}\left\{\sum_{i=1}^{n} \tau^1_ic_ix_i -\max_{\theta}<A_1x,\theta>:\|\theta\|_*\leq \beta,\|x\|_1\leq1\right\}.
\end{aligned}
\end{equation*}
For any matrix $Q=[q_1,\cdots,q_{n}]\in R^{(m+n)\times n}$ with $A_2^T Q\leq 0$, $\|q_{i}\|_*\leq\beta$ for all $i$, and $Q^TA_1x=0$, we can get
\begin{equation*}
\begin{aligned}
&\hat\gamma_{s,K}(A^\prime,c,\beta)\\
&=\max_{\tau\in P_s\atop {x\geq0 \atop \|x\|_1\leq1}}\left\{\sum_{i=1}^{n} \tau^1_ic_ix_i -\max_{\theta}<A_1x,\theta>:\|\theta\|_*\leq\beta\right\}\\
&\leq\max_{\tau\in P_s\atop {x\geq0 \atop \|x\|_1\leq1}}\left\{\sum_{i=1}^{n} \tau^1_ic_ix_i -<A_1x,q_{i}>: \|q_i\|_*\leq \beta, Q^TA_1x=0,A_2^T Q\leq 0\right\}\\
&=\max_{\tau\in P_s\atop {x\geq0 \atop \|x\|_1\leq1}}\left\{\sum_{i=1}^{n}\tau^1_i(c_ix_i-\sum\limits_{\ell=1}^{m+n}\sum\limits_{j=1}^{n}q_{\ell, i} a_{\ell,j}x_j): \|q_i\|_*\!\leq\! \beta, Q^TA_1x=0,A_2^T Q\leq 0\right\}\\
&=\max_{\tau\in P_s\atop {x\geq0 \atop \|x\|_1\leq1}}\left\{\tau^{1T}(Cx-Q^TA_1x):\! \|q_i\|_*\leq \beta,
Q^TA_1x=0,A_2^T Q\leq 0\right\}\\
&\leq\max_{\tau\in P_s\atop {x\geq0 \atop \|x\|_1\leq1}}\left\{\tau^{1T}(C-Q^TA_1)x: \|q_i\|_*\leq \beta,A_2^T Q\leq 0\right\},
\end{aligned}
\end{equation*}
where $q_i$, $i=1, \dots, n$, is the $i$-th column of matrix $Q$, $a_\ell$, $\ell=1, \dots, n$, is the $\ell$-th column of matrix $A_1$.
Similar to \eqref{upp}, we can solve the following problem
$$\max_{\tau\in P_s\atop x\geq0} ~\left\{\tau^{1T}(C-Q^TA_1)x: \|x\|_1\leq1,\|q_i\|_*\leq \beta,A_2^T Q\leq 0\right\}.$$
Moreover, it is easy to change the above upper bound of $\hat\gamma_{s,K}(A^\prime,c,\beta)$ to $\eta_{s,K}(A_1,C,\beta)$, which is defined as
\begin{equation}\label{L}
\begin{aligned}
\min\limits_{Q} & \max\limits_{0< j\leq n}~\|(C-Q^TA_1)e_j\|_{s,K,1}\\
\text{s.t.}~ &\|q_i\|_*\leq\beta,~0< i\leq n,\\
&A_2^T Q\leq 0,
\end{aligned}
\end{equation}
where $q_i$ is the $i$-th column of matrix $Q$.
Obviously, problem $\eqref{L}$ is a convex program and is solvable. Similar to the properties of $\hat\gamma_{s,K}(A^\prime,c,\beta)$, $\eta_{s,K}(A_1,C,\beta)$ is a nondecreasing function of $s$ and a nonincreasing function of $\beta$. Thus we can get an upper bound on $\hat\gamma_{s,K}(A^\prime, c , \beta)$ by calculating the least upper bound of $g_{A_1,C,s,K}(Q)$ with respect to $Q$, i.e., by solving Eq. \eqref{L}.
In addition, according to the definition of $\|\cdot\|_{s,K,1}$, given positive integers $s$ and $t$, we have $\|\cdot\|_{st,K,1}\leq t\|\cdot\|_{s,K,1}$, and
\begin{equation}\label{eta}
\eta_{s,K}(A_1, C, \beta)\leq s\eta_{1,K}(A_1, C, \beta).
\end{equation}
So, the upper bound $\eta_{s,K}(A_1, C, \beta)$ of $\hat{\gamma}_{s,K}(A^\prime, c, \beta)$ can be replaced by $s\eta_{1,K}(A_1, C, \beta)$.
This property allows us to reduce the calculation of $\eta_{s,K}(A_1, C)$ to $\eta_{1,K}(A_1, C)$, which greatly reduces the amount of calculation.
Let $\mathbf{Q}=\{Q: \|q_i\|_*\leq\beta,i=1,\dots,n, ~A_2^T Q\leq 0\}$. According to the definition of $\eta_{s,K}(A_1, C, \beta)$, we have
\begin{equation*}
\begin{array}{lll}
\eta_{1,K}(A_1, C, \beta)&=\min\limits_{Q\in \mathbf{Q}}\max\limits_{0<j\leq n}\|(C-Q^TA_1)e_j\|_{\infty}\\
&=\min\limits_{Q\in \mathbf{Q}}\max\limits_{0<j\leq n}\left\|\left(\begin{array}{ll}|c_1-q_1^T a_1 |,~ \quad |- q_1^ T a_2|, ~&\dots,~ \quad |-q_1^T a_n |\\|-q^T_2a_1|,~\qquad |c_2-q^T_2a_2|, ~&\dots, ~\quad|-q^T_2a_n|\\ &\ddots\\|-q^T_na_1|,~\qquad|-q^T_na_2|, ~&\dots, ~\quad |c_n-q^T_na_n| \end{array}\right)e_j\right\|_{\infty}\\
&=\min\limits_{Q\in \mathbf{Q}}\max\limits_{0<j\leq n}\|(C-A_1^T Q)e_j\|_{\infty}\\
&=\min\limits_{Q\in \mathbf{Q}}\max\limits_{0<j\leq n}\left\|\left(\begin{array}{ll}|c_1- a_1^T q_1 |,~ \quad |- a_1^ T q_2|, ~&\dots,~ \quad |- a_1^T q_n |\\|-a_2^T q_1|,~\qquad |c_2-a_2^T q_2|, ~&\dots, ~\quad|-a_2^T q_n|\\ &\ddots\\|-a_n^Tq_1|,~\qquad|-a_n^Tq_2|, ~&\dots, ~\quad |c_n-a_n^Tq_n| \end{array}\right)e_j\right\|_{\infty}\\
&=\min\limits_{Q\in \mathbf{Q}}\max\limits_{0<j\leq n}\left\|\left(\begin{array}{ll}0\\0\\ \vdots \\c_j\\ \vdots\\0 \end{array}\right)-\left(\begin{array}{ll}a_1^T q_j\\a_2^T q_j \\ \vdots \\a_j^T q_j\\ \vdots \\a_n^T q_j \end{array}\right)\right\|_{\infty}\\
&=\min\limits_{Q\in \mathbf{Q}}\max\limits_{0<j\leq n}\|C_j-A_1^T q_j\|_{\infty},
\end{array}
\end{equation*}
which is equivalent to solving $n$ convex optimization problems of dimension $m+n$:
\begin{equation}\label{eta1}
\begin{array}{ll}
\eta^j=\min\limits_{q_j}\left\{\|C_j-A_1^Tq_j\|_{\infty}:~\|q_j\|_*\leq\beta,~A_2^Tq_j\leq 0\right\}.
\end{array}
\end{equation}
Obviously, here $\eta_{1,K}(A_1, C, \beta)=\max\limits_{0<j\leq n}\eta^j$.
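As an illustration, the $n$ problems \eqref{eta1} can be solved with any off-the-shelf convex solver; the sketch below uses the CVXPY package and, for concreteness only, takes $\|\cdot\|_*$ to be the $\ell_2$ norm (the actual dual norm depends on the setting):
\begin{verbatim}
import numpy as np
import cvxpy as cp

def eta_1K(A1, c, A2, beta):
    # Solves eta^j = min ||C_j - A1^T q_j||_inf subject to
    # ||q_j||_* <= beta and A2^T q_j <= 0, and returns max_j eta^j.
    mn, n = A1.shape
    vals = []
    for j in range(n):
        Cj = c[j] * np.eye(n)[:, j]        # j-th column of C = diag(c)
        q = cp.Variable(mn)
        prob = cp.Problem(cp.Minimize(cp.norm(Cj - A1.T @ q, 'inf')),
                          [cp.norm(q, 2) <= beta, A2.T @ q <= 0])
        vals.append(prob.solve())
    return max(vals)
\end{verbatim}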
In Theorem \ref{thm8}, the sufficient condition for nonnegative partial $s$-goodness is $\hat\gamma_{s,K}(A^\prime,$ $c)<\frac{1}{2}\min\limits_{0< i\leq n}c_i$. Note that, $\hat\gamma_{s,K}(A^\prime,c)$ takes the value of $\hat\gamma_{s,K}(A^\prime,c,\beta)$ for a large enough $\beta$.
Similarly, $\eta_{s,K}(A_1, C)$ takes the value of $\eta_{s,K}(A_1, C, \beta)$ for the above large enough $\beta$.
Given $A^\prime$, $c$ and $s$, suppose $\hat\gamma_{s,K}(A^\prime,c)<\frac{1}{2}\min\limits_{0< i\leq n}c_i$. Then for every pair of vectors $z^1\in R^n$ and $z^2\in R^{m+n}$, where $z^1\in R^n$ has $s$ nonzero entries, each equal to 1, there exists a vector $\theta\in R^{m+n}$ such that
\begin{eqnarray*}
\|\big(A_1^{T}\theta\big)-c\circ z^1\|_{\infty}\leq \hat\gamma_{s,K}(A^\prime,c),~
\text{and}~ \big(A_2^{ T}\theta\big)_i\left\{\begin{array}{cc}
=0,&~if~ z^2_i\neq 0;\\
\leq 0,&~if~ z^2_i=0.
\end{array}\right.
\end{eqnarray*}
Next, we show that under these assumptions $\theta$ is bounded, i.e., there exists $\bar{\beta}$ such that $\|\theta\|_{*}<\bar{\beta}$.
\begin{proposition}\label{pro2}
Suppose $\{u: \|u\|_1\leq \rho, u\in R^{m+n}\}\subseteq
\mathcal{R}(A_1)$, where $\mathcal{R}(A_1)=\{A_1d: \|d\|_1\leq1, d\in R^n\}$. For every $s \leq n$, if $\beta\geq\bar{\beta}=\frac{1}{\rho}(\max\limits_{0< i\leq n}c_i+\frac{1}{2}\min\limits_{0< i\leq n}c_i)$ and $\hat\gamma_{s,K}(A^\prime,c)<\frac{1}{2}\min\limits_{0< i\leq n}c_i$, then $\hat\gamma_{s,K}(A^\prime,c)=\hat\gamma_{s,K}(A^\prime,c,\beta)$.
\end{proposition}
\begin{proof}
According to the definition of $\hat\gamma_{s,K}(A^\prime,c, \beta)$, we have $\|A_1^T\theta\|_{\infty}<\max\limits_{0< i\leq n}c_i+\frac{1}{2}\min\limits_{0< i\leq n}c_i$. Hence,
\begin{equation*}
\begin{array}{lll}
\max\limits_{0< i\leq n}c_i+\frac{1}{2}\min\limits_{0< i\leq n}c_i&>\|A_1^T\theta\|_{\infty}\\
&=\max\limits_{d}\{d^TA_1^T\theta: \|d\|_1\leq1, d\in R^n\}\\
&=\max\limits_{u}\{u^T\theta: u=A_1d, \|d\|_1\leq1, d\in R^n\}\\
&\geq\max\limits_{u}\{u^T\theta: \|u\|_1\leq\rho\}\\
&=\rho\|\theta\|_{*}.\\
\end{array}
\end{equation*}
Then
$$\|\theta\|_{*}<\frac{1}{\rho}(\max\limits_{0< i\leq n}c_i+\frac{1}{2}\min\limits_{0< i\leq n}c_i),$$
and we define
$$\bar{\beta}=\frac{1}{\rho}(\max\limits_{0< i\leq n}c_i+\frac{1}{2}\min\limits_{0< i\leq n}c_i).$$
The proposition is proven.$\hfill\square$
\end{proof}
According to Proposition \ref{pro2}, if we can find a lower bound $\bar\beta$ of $\beta$ such that $\hat\gamma_{s,K}(A^\prime,c, \bar\beta)<\frac{1}{2}\min\limits_{0< i\leq n}c_i$, then $\hat\gamma_{s,K}(A^\prime,c, \beta)\leq\hat\gamma_{s,K}(A^\prime,c, \bar\beta)<\frac{1}{2}\min\limits_{0< i\leq n}c_i$ for all $\beta\ge\bar{\beta}$, since $\hat\gamma_{s,K}(A^\prime,c,\beta)$ is a nonincreasing function of $\beta$. The same applies to $\eta_{s,K}(A_1, C, \beta)$.
\section{Heuristic algorithm and examples}\label{chaper5}
In this section, we give three examples to illustrate the proposed non-negative partial $s$-goodness condition for the equivalence between the integer programming problem \eqref{IP} and the weighted linear programming problem \eqref{la3}. To this aim, according to Theorem \ref{thm8}, first we should verify that $(A^\prime,c)$ is nonnegative partially $s$-good. Then, according to Definition \ref{dingyi1}, the partially weighted linear programming problem \eqref{la3} has the unique optimal solution. Further, according to Theorem \ref{thmknownS} or Theorem \ref{thm32}, the optimal solution of problem \eqref{la3} is also the optimal solution of the $l_0$-norm minimization problem \eqref{la2}. Meanwhile, according to Theorem \ref{thm21}, it is also an optimal solution of problem \eqref{IP}.
The main idea of verifying the nonnegative partial $s$-goodness of $(A^\prime,c)$ is as follows. Given the $\bar{\beta}$ in Proposition \ref{pro2}, for $\beta=\bar{\beta}$ and an arbitrary $s$, combining Theorem \ref{thm8} and the definition of $\eta_{s,K}(A_1, C, \beta)$, it is obvious that $\eta_{s,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$ is a sufficient condition for $(A^\prime, c)$ to be nonnegative partially $s$-good.
Let $s^*(A^\prime, c)$ be a lower bound on the largest $s$ such that $\eta_{s,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$, which will be obtained from Eqs. \eqref{eta} and \eqref{eta1}.
In other words, given $s^*(A^\prime, c)$ and $\beta\geq\bar{\beta}$, the value of $\eta_{s^*(A^\prime, c),K}(A_1, C, \beta)$ must be less than $ \frac{1}{2}\min\limits_{0<i\leq n}c_i$.
According to Theorems \ref{thmknownS} and \ref{thm32}, the partially weighted linear programming problem \eqref{la3} must have a unique optimal solution. However, in some cases the optimal solution of problem \eqref{la3} may not be unique, and we need to adjust $c$ such that problem \eqref{la3} has a unique optimal solution.
To better understand the role of the weight $c$, let us divide a non-negative optimal solution $(x^*,y^*)^T$ with $\|x^*\|_0\leq s$ of problem \eqref{la3} into the following three cases:
$Case~1:$ problem \eqref{la3} has the unique optimal solution $(x^*,y^*)^T$ with $\|x^*\|_0\leq s$;
$Case~2:$ problem \eqref{la3} has multiple optimal solutions $(x^*,y^*)^T$ with $\|x^*\|_0\leq s$, and the vectors $x^*$ have the same sparsity;
$Case~3:$ problem \eqref{la3} has multiple optimal solutions $(x^*,y^*)^T$ with $\|x^*\|_0\leq s$, and the vectors $x^*$ have different sparsity.
Clearly, by Definition \ref{dingyi1}, it is natural that $(A^\prime, c)$ is nonnegative partially $s$-good in $Case~1$. Example \ref{ex1} in this section will illustrate this case.
$Case~2$ shows that there are multiple optimal solutions of problems \eqref{la2} and \eqref{la3}. At this point, we may adjust $c$ to a suitable value, such that one of the optimal solutions of problem \eqref{la3} is a unique optimal solution. Then we go to verify that $(A^\prime, c)$ is nonnegative partially $s$-good. Example \ref{ex2} in this section will illustrate this case.
Note that in $Case~2$, all optimal solutions of problem \eqref{la3} are optimal solutions of problem \eqref{la2}.
$Case~3$ is a very special case, in which problem \eqref{la3} has multiple optimal solutions $(x^*,y^*)^T$, and the vectors $x^*$ have different sparsity. Firstly, supposing that $\eta_{s,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$, we can obtain $s^*(A^\prime, c)$. Next, we choose one of the optimal solutions of problem \eqref{la3} which satisfies $\|x^*\|_0\leq s^*(A^\prime, c)$. If such a solution cannot be found, then we should find another $s^*(A^\prime, c)$; otherwise, we may adjust $c$ to a suitable value such that this solution is unique, and then we continue to verify that $(A^\prime, c)$ is nonnegative partially $s$-good. Example \ref{ex3} in this section will illustrate this case.
The above idea of verifying the nonnegative partial $s$-goodness of $(A^\prime,c)$ can be organized in the following steps. Initially, let $c_i=1,~i=1,2, \dots, n$ and let $\beta=\bar{\beta}=\frac{1}{\rho}(\max\limits_{0< i\leq n}c_i+\frac{1}{2}\min\limits_{0< i\leq n}c_i)$ according to Proposition \ref{pro2}.
$Step~ 1$: According to Eq. \eqref{eta1}, we calculate the value of $\eta_{1,K}(A_1, C, \bar\beta)$. If $\eta_{1,K}(A_1,$ $C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$, then go to $Step~ 2$; otherwise (this will happen in $Cases~2$ or $3$), we should use $Step~ 5$ to update $c$, such that $\eta_{1,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$, and go to $Step~ 2$.
$Step~ 2$: Since $s$ is not known at present, we suppose $\eta_{s,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$. Then according to \eqref{eta},
$$s^*(A^\prime, c)=\lfloor\frac{\frac{1}{2}\min\limits_{0<i\leq n}c_i}{\eta_{1,K}(A_1, C, \bar\beta)}\rfloor.$$
$Step~ 3$: Consider an optimal solution $(x^*,y^*)$ of problem \eqref{la3}. If the solution of problem \eqref{la3} is unique and $\|x^*\|_0=s$, then we compare $s$ with $s^*(A^\prime, c)$. If $s=s^*(A^\prime, c)$, then we verify whether $s^*(A^\prime, c)\eta_{1,K}(A_1, C,$ $ \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$. When it holds, go to $Step~ 4$.
Otherwise, such as $Case~3$, not all solutions satisfy $s\eta_{1,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$. So we choose a solution with $\|x^*\|_0=s^*(A^\prime, c)$, and use $Step~ 5$ to update $c$, such that this optimal solution of problem \eqref{la3} is the unique optimal solution. Next, we verify that whether $s^*(A^\prime, c)\eta_{1,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$. When it holds, go to $Step~ 4$; otherwise, update $c$ again.
$Step~ 4$: According to Theorem \ref{thm8}, this implies that $(A^\prime, c)$ is nonnegative partially $s$-good. Stop the algorithm.
$Step~ 5$: Update $c$ as follows: for the maximum component $x_i^*$ in $Step~ 3$, select $c_i$ such that $0<c_i\leq \bar\beta$. For the minimum component $x_j^*$ in $Step~ 3$, select $c_j$ from $(\bar\beta, \frac{3}{2}\bar\beta)$. The other components in $c$ are randomly selected from $[c_i, c_j]$, i.e., $c_i$ and $c_j$ are the minimum and maximum components of $c$ respectively. Let $\bar{\beta}=\frac{1}{\rho}(\max\limits_{0< i\leq n}c_i+\frac{1}{2}\min\limits_{0< i\leq n}c_i)$.
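As a summary, the steps above can be organized into the following sketch (in Python); here \texttt{eta\_1K} is the routine sketched earlier, \texttt{solve\_wlp} is a hypothetical solver for problem \eqref{la3} returning an optimal $x^*$ together with a uniqueness flag, and the weight re-drawing in Step 5 is only illustrative:
\begin{verbatim}
import numpy as np

def verify_s_goodness(A1, A2, c, rho, eta_1K, solve_wlp, max_rounds=20):
    # Heuristic sketch of Steps 1-5; eta_1K and solve_wlp are stand-ins.
    rng = np.random.default_rng(0)
    for _ in range(max_rounds):
        beta_bar = (c.max() + 0.5 * c.min()) / rho       # Proposition bound
        eta1 = eta_1K(A1, c, A2, beta_bar)               # Step 1
        x_star, unique = solve_wlp(A1, A2, c)            # Step 3
        if eta1 < 0.5 * c.min():
            s_star = int(np.floor(0.5 * c.min() / eta1)) # Step 2
            if unique and np.count_nonzero(x_star) <= s_star:
                return True, c, s_star                   # Step 4: s-good
        # Step 5 (illustrative): small weight on argmax(x*), large on argmin.
        lo, hi = 0.9 * beta_bar, 1.25 * beta_bar
        c = rng.uniform(lo, hi, size=len(c))
        c[np.argmax(x_star)] = lo
        c[np.argmin(x_star)] = hi
    return False, c, None
\end{verbatim}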
Below are three examples we provide. Example \ref{ex1} does not comply with $TUM$, Example \ref{ex2} does not comply with $TDI$, and Example \ref{ex3} is neither $TUM$ nor $TDI$ compliant.
\begin{example}\label{ex1}
Let
\begin{eqnarray*}
A=\left(\begin{array}{lll}
1 & 2 &~ 0 \\
0 & 1 &~ 1 \\
1 & 0 & 2
\end{array}\right), \quad b=\left(\begin{array}{l}
1 \\
1 \\
1
\end{array}\right).
\end{eqnarray*}
For $c=1$, given $\bar\beta=0.563$, according to Eq. \eqref{eta1} we can calculate that the value of $\eta_{1,K}(A_1,C, \bar\beta)$ is $0.2188$. Suppose $\eta_{s,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$; then according to \eqref{eta}, $s^*(A^\prime, c)=\lfloor\frac{0.5}{0.2188}\rfloor=2$. For $c=1$, problem \eqref{la3} has an optimal solution $\left(x^{1}, y^{1}\right)^T=\left(\left(0, \frac{1}{2}, \frac{1}{2}\right),\left(0,0,0, 1,\frac{1}{2},\frac{1}{2}\right)\right)^{T}$ and $\|x^1\|_0\leq2$. According to \eqref{eta}, $\eta_{s,K}(A_1, C, \bar\beta)\leq2\eta_{1,K}(A_1,C, \bar\beta)=0.4376 < \frac{1}{2}$. Then, by Theorem \ref{thm8}, $(A^\prime,c)$ is nonnegative partially $s$-good. Hence, according to Definition \ref{dingyi1}, $(x^1,y^1)^T$ is the unique optimal solution of problem \eqref{la3}. Next, according to Theorem \ref{thm32}, $(x^1,y^1)^T$ is an optimal solution of problem \eqref{la2}. So by Theorem \ref{thm21}, $(0,1,1)^T$ is an optimal solution of problems \eqref{IP} and \eqref{la1}.
\end{example}
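To check the $TUM$ claim for Example \ref{ex1} concretely, one can brute-force all square submatrices; the following sketch does so (here even the $1\times 1$ submatrix $(2)$ already violates total unimodularity):
\begin{verbatim}
import numpy as np
from itertools import combinations

def is_tum(A):
    # Brute-force total unimodularity check: every square submatrix
    # must have determinant in {-1, 0, 1}.
    A = np.asarray(A)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

print(is_tum(np.array([[1, 2, 0], [0, 1, 1], [1, 0, 2]])))  # False
\end{verbatim}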
It must be remarked that problem (\ref{la3}) sometimes has multiple nonnegative optimal solutions when $c=1$, with the vectors $x$ having the same sparsity. This situation contradicts Definition \ref{dingyi1}. So the coefficient $c$ must be adjusted such that problem (\ref{la3}) has a unique optimal solution. The following Example \ref{ex2} shows that this can be achieved.
\begin{example}\label{ex2}
Let
\begin{eqnarray*}
A=\left(\begin{array}{lll}
1 & 0 & 0 \\
1 & 1 & 0 \\
0 & 1 & 1
\end{array}\right),~ b=\left(\begin{array}{l}
0 \\
\frac{3}{2} \\
\frac{1}{2}
\end{array}\right).
\end{eqnarray*}
For $c=1$, given $\bar\beta=0.5$, according to Eq. \eqref{eta1} we can calculate that the value of $\eta_{1,K}(A_1,C, \bar\beta)$ is $0.5$. For $c=1$, problem \eqref{la3} has two optimal solutions, which are $\left(x^{1}, y^{1}\right)^T=\left(\left(1, \frac{1}{2}, 0\right),\left(1,0,0,0,\frac{1}{2},1\right)\right)^{T}$ and $\left(x^{2}, y^{2}\right)^T=\left(\left(\frac{3}{4}, \frac{3}{4}, 0\right), \left(\frac{3}{4}, 0, \frac{1}{4}, \frac{1}{4}, \frac{1}{4},1\right)\right)^{T}$, with $c^Tx^1=c^Tx^2$, and both vectors $x^1$ and $x^2$ are $2$-sparse.
Next, let $c=(0.5,0.7,0.8)$. Given $\bar\beta=0.7$, we have $\eta_{1,K}(A_1,C, \bar\beta)=0.1$. Suppose $\eta_{s,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$; then according to \eqref{eta}, $s^*(A^\prime, c)=\lfloor\frac{0.25}{0.1}\rfloor=2$. For $c=(0.5,0.7,0.8)$,
problem \eqref{la3} has an optimal solution
$\left(x^{1}, y^{1}\right)^T=\left(\left(1, \frac{1}{2}, 0\right), \left(1, 0, 0, 0,\frac{1}{2},1\right)\right)^{T}$, and $\|x^1\|_0\leq2$. According to \eqref{eta}, $\eta_{s,K}(A_1, C, \bar\beta)\leq2\eta_{1,K}(A_1,C,\bar\beta)=0.2 < \frac{1}{2}\min\limits_{0<i\leq n}c_i=0.25$. Then, by Theorem \ref{thm8}, $(A^\prime,c)$ is nonnegative partially $s$-good. Hence, according to Definition \ref{dingyi1}, $(x^1,y^1)^T$ is the unique optimal solution of problem \eqref{la3}. Furthermore, according to Theorem \ref{thmknownS}, $(x^1,y^1)^T$ is an optimal solution of problem \eqref{la2}. Since $\|x^1\|_0=\|x^2\|_0$, it is natural that $(x^2,y^2)^T$ is also an optimal solution to problem \eqref{la2}. So by Theorem \ref{thm21}, $(1, 1, 0)^T$ is an optimal solution of problems \eqref{IP} and \eqref{la1}.
\end{example}
Different from Example \ref{ex2}, another situation to be aware of is that problem (\ref{la3}) may have multiple nonnegative optimal solutions whose vectors $x$ have different sparsities when $c=1$. This situation also contradicts Definition \ref{dingyi1}. So the coefficient $c$ must be adjusted as well, such that problem (\ref{la3}) has a unique optimal solution. The following Example \ref{ex3} shows that this can be achieved.
\begin{example}\label{ex3}
Let
\begin{eqnarray*}
A=\left(\begin{array}{lll}
1 & 2 &~ 0 \\
0 & 1 &~ 1 \\
2 & 0 & 1
\end{array}\right), \quad b=\left(\begin{array}{l}
0 \\
\frac{1}{2} \\
\frac{1}{3}
\end{array}\right).
\end{eqnarray*}
For $c=1$, given $\bar\beta=0.375$, according to Eq. \eqref{eta1} we can calculate that the value of $\eta_{1,K}(A_1,C, \bar\beta)$ is $0.2917$. Suppose $\eta_{s,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$; then according to \eqref{eta}, $s^*(A^\prime, c)=\lfloor\frac{0.5}{0.2917}\rfloor=1$. For $c=1$, problem \eqref{la3} has two optimal solutions: $\left(x^{1}, y^{1}\right)^T=\left(\left(0, \frac{1}{6}, \frac{1}{3}\right), \left(\frac{1}{3},0,0,1,\frac{5}{6},\frac{2}{3}\right)\right)^{T}$ with $\|x^1\|_0=2$, and $\left(x^{2}, y^{2}\right)^T=\left(\left(0, 0, \frac{1}{2}\right),(0, 0,\frac{1}{6}, 1, 1, \frac{1}{2})\right)^{T}$ with $\|x^2\|_0=1$, where $c^Tx^1=c^Tx^2$. Clearly, $\|x^1\|_0>s^*(A^\prime, c)$ and $\|x^2\|_0=s^*(A^\prime, c)$.
Next, let $c=(0.5,0.35,0.3)$. Given $\bar\beta=0.7$, we have $\eta_{1,K}(A_1,C, \bar\beta)=0.1$. Suppose $\eta_{s,K}(A_1, C, \bar\beta) < \frac{1}{2}\min\limits_{0<i\leq n}c_i$; then according to \eqref{eta}, $s^*(A^\prime, c)=\lfloor\frac{0.15}{0.1}\rfloor=1$. For $c=(0.5,0.35,0.3)$, problem \eqref{la3} has an optimal solution $\left(x^{2}, y^{2}\right)^T=\left(\left(0, 0, \frac{1}{2}\right),\left(0, 0,\frac{1}{6}, 1, 1, \frac{1}{2}\right)\right)^{T}$ with $\|x^2\|_0=1$. According to \eqref{eta}, $\eta_{s,K}(A_1, C, \bar\beta)\leq\eta_{1,K}(A_1,C, \bar\beta)=0.1 < \frac{1}{2}\min\limits_{0<i\leq n}c_i=0.15$. Hence, by Theorem \ref{thm8}, $(A^\prime,c)$ is nonnegative partially $s$-good, and according to Definition \ref{dingyi1}, $(x^2,y^2)^T$ is the unique optimal solution of problem \eqref{la3}. Furthermore, according to Theorem \ref{thm32}, $(x^2,y^2)^T$ is an optimal solution of problem \eqref{la2}. So by Theorem \ref{thm21}, $(0,0,1)^T$ is an optimal solution of problems \eqref{IP} and \eqref{la1}.
\end{example}
\section{Conclusion}\label{chaper6}
In this paper, we studied the equivalence of a 0-1 linear program to a weighted linear programming problem. Firstly, we proved the equivalence between the integer programming problem and a sparse minimization problem. Next, we defined the nonnegative partial $s$-goodness of the constraint matrix and the weight vector in the objective function of the weighted linear programming problem. Utilizing two quantities $\gamma_{s,K}(\cdot)$ and $\hat{\gamma}_{s,K}(\cdot)$ associated with nonnegative partial $s$-goodness, we proposed a necessary condition and a sufficient condition for the constraint matrix and weight vector to be nonnegative partially $s$-good. Since it is difficult to calculate the two quantities, we further provided an efficiently computable upper bound of $\hat{\gamma}_{s, K}(A^\prime, c, \beta)$, so that the above sufficient condition is verifiable. It is worth mentioning that the objective coefficient $c$ of the weighted linear programming problem is not fixed: when the weighted linear programming problem has multiple optimal solutions, we may adjust $c$ so that it has a unique optimal solution. At the end, we provided three examples to illustrate the theory in this article.
\bibliographystyle{spmpsci}
\section{More Experiments and Detailed Experiment Setup}
\label{sec:more-experiments}
\subsection{Experiments setup}
In this section, we introduce the experiment setup in detail.
\paragraph{Small Synthetic Example} We generate the dataset in the following way: we first set up a random matrix $X\in\mathbb{R}^{N\times d}$ (samples), where $N$ is the number of samples and $d$ is the input dimension, and a label vector $Y\in\mathbb{R}^{N}$. Each entry in $X$ or $Y$ follows a uniform distribution with support $[-1,1]$, and all entries are independent of each other. Then we normalize the dataset $X$ such that each row of $X$ has norm $1$, and denote the normalized dataset as $\hat X = [\hat x_1,\dots,\hat x_N]^T$. Then we compute the smallest singular value of the matrix $[\hat x_1^{\otimes 2},\dots,\hat x_N^{\otimes 2}]^T$, and we feed the normalized dataset $\hat X$ into the two-layer network (Section \ref{sec:prelim_architecture}) with $r$ hidden neurons. We select all the parameters as shown in Theorem \ref{thm:main-theorem-twolayer}, and plot the value of $f(\cdot)$.
In our experiment for the small artificial random dataset, we choose $N = 300,d = 100$, and $r = 300$.
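A minimal sketch of this data-generation step (with the parameters above) is:
\begin{verbatim}
import numpy as np

N, d, r = 300, 100, 300
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(N, d))                # samples
Y = rng.uniform(-1, 1, size=N)                     # labels
X_hat = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm rows

# Rows are the flattened outer products x_i x_i^T (i.e., x_i tensor x_i).
T = np.stack([np.outer(x, x).ravel() for x in X_hat])
print(np.linalg.svd(T, compute_uv=False).min())    # smallest singular value
\end{verbatim}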
\paragraph{MNIST experiments}
For MNIST, we use a squared loss between the network's prediction and the true label (which is an integer in $\{0,1,...,9\}$).
For the two-layer network structure, we first normalize the samples in the MNIST dataset to have norm 1. Then we set up a two-layer network with quadratic activation and $r = 3000$ hidden neurons (note that although our theory suggests choosing $r = 2d+2$, having a larger $r$ increases the number of decreasing directions and helps optimization algorithms in practice). For these experiments, we use the Adam optimizer \citep{kingma2014adam} with batch size 128, initial learning rate 0.003, and decay the learning rate by a factor of 0.3 every 15 epochs (we find that the learning rate decay is crucial for getting high accuracy).
We run the two-layer network in two settings, one for the original MNIST data, and one for the MNIST data with a small Gaussian noise (0.01 standard deviation per coordinate). The perturbation is added in order for the conditions in Theorem~\ref{thm:main-theorem-twolayer} to hold.
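For reference, a minimal PyTorch sketch of this training setup is given below; the alternating sign pattern for $a$ and the data loader are illustrative assumptions rather than the exact configuration used in our code:
\begin{verbatim}
import torch
import torch.nn as nn

class TwoLayerQuadratic(nn.Module):
    # f(x) = sum_i a_i (w_i^T x)^2 with trainable W and fixed signs a_i.
    def __init__(self, d, r):
        super().__init__()
        self.lin = nn.Linear(d, r, bias=False)
        a = torch.ones(r); a[r // 2:] = -1.0       # assumed sign pattern
        self.register_buffer('a', a)
    def forward(self, x):
        return (self.lin(x) ** 2 * self.a).sum(dim=1)

model = TwoLayerQuadratic(d=784, r=3000)
opt = torch.optim.Adam(model.parameters(), lr=0.003)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=15, gamma=0.3)
loss_fn = nn.MSELoss()

def train_epoch(loader):
    for xb, yb in loader:              # batch size 128 in the experiments
        opt.zero_grad()
        loss = loss_fn(model(xb), yb.float())
        loss.backward()
        opt.step()
    sched.step()                       # decay lr by 0.3 every 15 epochs
\end{verbatim}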
For the three-layer network structure, we first normalize the samples in the MNIST dataset to have norm 1. Then we apply PCA to project them onto a 100-dimensional subspace. We use $D = [x_1,\dots,x_n]$ to denote this dataset after PCA. Note that the original two-layer analysis may not apply to this setting, since now the matrix $X = [x_1^{\otimes 2},\dots, x_n^{\otimes 2}]$ does not have full column rank ($60000 > 100^2$). We then add a small Gaussian perturbation $\tilde D\sim \mathcal{N}(0,\sigma_1^2)$ to the sample matrix $D$ and denote the perturbed matrix by $\bar D = [\bar x_1,\dots,\bar x_n]$. We then randomly select a matrix $Q\sim \mathcal{N}(0,\sigma_2^2)^{k\times d}$ and compute the random features $z_j = (Q\bar x_j)^2$, where $(\cdot)^2$ denotes the element-wise square. Then we feed these features into the two-layer network with $r$ hidden neurons. Note that this is equivalent to our three-layer network structure in Section \ref{sec:prelim_architecture}. In our experiments, $k = 750, r = 3000, \sigma_1 = 0.05, \sigma_2 = 0.15$.
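A sketch of this preprocessing pipeline (with a stand-in for the PCA-projected data) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k, sigma1, sigma2 = 750, 0.05, 0.15
D = rng.normal(size=(60000, 100))                  # stand-in for PCA'd data
D_bar = D + rng.normal(0.0, sigma1, size=D.shape)  # perturbed samples
Q = rng.normal(0.0, sigma2, size=(k, D.shape[1]))  # random first layer
Z = (D_bar @ Q.T) ** 2                             # z_j = (Q xbar_j)^2
# Each row of Z is then fed into the two-layer quadratic network above.
\end{verbatim}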
\paragraph{MNIST with random labels} These experiments have exactly the same set-up as the original MNIST experiments, except that the labels are replaced by a random number in \{0,1,2,...,9\}.
\subsection{Experiment Results}
In this section, we give detailed experiment results with bigger plots. For all the training loss graphs, we record the training loss every 5 iterations. Then, for the $i$-th recorded loss, we average the recorded losses from the $(i-19)$-th to the $i$-th and set this as the average loss at the $(5i)$-th iteration. Then we take the logarithm of the loss and generate the training loss graphs.
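The smoothing just described corresponds to the following sketch:
\begin{verbatim}
import numpy as np

def smoothed_log_loss(recorded):
    # recorded[i] is the loss at iteration 5*(i+1); each point is the
    # average of (up to) the 20 most recent records, then log is taken.
    recorded = np.asarray(recorded, dtype=float)
    return np.array([np.log(recorded[max(0, i - 19):i + 1].mean())
                     for i in range(len(recorded))])
\end{verbatim}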
\paragraph{Small Synthetic Example}
\begin{figure}[h!]
\centering
\includegraphics[width=5in]{randomsample.png}
\caption{Synthetic Example}
\label{fig:random-sample-large}
\end{figure}
As we can see in Figure~\ref{fig:random-sample-large} the loss converges to 0 quickly.
\paragraph{MNIST experiments with original labels}
\begin{figure}[ht!]
\centering
\includegraphics[width=5in]{original55000.png}
\caption{Two-layer network on original MNIST}
\label{fig:2-layer-original-large}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=5in]{perturbed25000_noise001.png}
\caption{Two-layer network on MNIST, with noise std 0.01}
\label{fig:2-layer-perturbed-large}
\end{figure}
First we compare Figure~\ref{fig:2-layer-original-large} and Figure~\ref{fig:2-layer-perturbed-large}. In Figure~\ref{fig:2-layer-original-large}, we optimize the two-layer architecture with the original input/labels. Here the loss decreases to a small value ($\sim 0.1$), but the decrease becomes slower afterwards. This is likely because for the matrix $X$ defined in Theorem~\ref{thm:main-theorem-twolayer}, some of the directions have very small singular values, which makes it much harder to correctly optimize along those directions. In Figure~\ref{fig:2-layer-perturbed-large}, after adding the perturbation the smallest singular value of the matrix $X$ improves, and as we can see the loss decreases geometrically to a very small value ($<1e-5$).
A surprising phenomenon is that even though we offer no generalization guarantees, the network trained as in Figure~\ref{fig:2-layer-original-large} has an MSE of 1.21 on the test set, which is much better than a random guess (recall the range of labels is 0 to 9). This is likely due to some implicit regularization effect \citep{gunasekar2017implicit, li2018algorithmic}.
For three-layer networks, in Figure~\ref{fig:3-layer-moise-large} we can see that even though we are using only the top 100 PCA directions, the three-layer architecture can still drive the training error to a very low level.
\begin{figure}[ht!]
\centering
\includegraphics[width=5in]{pca55000_noise.png}
\caption{Three-layer network with top 100 PCA directions of MNIST, 0.05 noise per direction}
\label{fig:3-layer-moise-large}
\end{figure}
\paragraph{MNIST with random label}
When we try to fit random labels, the original MNIST input does not work well. We believe this is again because there are many small singular values of the matrix $X$ in Theorem~\ref{thm:main-theorem-twolayer}, so the data does not have enough effective dimensions to fit random labels. The reason that it was still able to fit the original labels to some extent (as in Figure~\ref{fig:2-layer-original-large}) is likely that the original label is correlated with some features of the input, so the original label is less likely to fall into the subspace with smaller singular values. A similar phenomenon was found in \cite{arora2019fine}.
Once we add the perturbation, for two-layer networks we can fit the random labels to very high accuracy, as in Figure~\ref{fig:2-layer-perturbed-random-sample-large}. The performance of the three-layer network in Figure~\ref{fig:3-layer-random-label-large} is also similar to Figure~\ref{fig:3-layer-moise-large}.
\begin{figure}[ht!]
\centering
\includegraphics[width=5in]{perturbed35000_noise001_randomlabel.png}
\caption{Two-layer network on MNIST, with noise std 0.01, random labels}
\label{fig:2-layer-perturbed-random-sample-large}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=5in]{pcarandomlabel50000_variance005.png}
\caption{Three-layer network with top 100 PCA directions of MNIST, 0.05 noise per direction, random labels}
\label{fig:3-layer-random-label-large}
\end{figure}
\section{Detailed Description of Perturbed Gradient Descent}
\label{sec:algdetail}
In this section we give the pseudo-code of the Perturbed Gradient Descent (PGD) algorithm of \cite{jin2017escape}; see Algorithm~\ref{alg:pgd}. The algorithm is quite simple: it runs standard gradient descent, except that if the loss has not decreased for a long enough time, it adds a perturbation. The perturbation allows the algorithm to escape saddle points. Note that we only use the PGD algorithm to find a second-order stationary point. Many other algorithms, including stochastic gradient descent and accelerated gradient descent, are also known to find a second-order stationary point efficiently; any of them could be used in our analysis.
\begin{algorithm}[!ht]
\caption{Perturbed Gradient Descent}
\label{alg:pgd}
\begin{algorithmic}[1]
\Require $x_0,\ell,\rho,\varepsilon,c,\delta,\Delta_f$.
\State $\chi\leftarrow 3\max\left\{\log\left(\frac{d\ell \Delta_f}{c\varepsilon^2 \delta}\right),4\right\},\eta\leftarrow\frac{c}{\ell},r\leftarrow\frac{\sqrt{c}\varepsilon}{\chi^2\ell},g_{\text{thres}}\leftarrow \frac{\sqrt{c}\varepsilon}{\chi^2},f_{\text{thres}}\leftarrow\frac{c\sqrt{\varepsilon^3}}{\chi^3\sqrt{\rho}},t_{\text{thres}}\leftarrow \frac{\chi\ell}{c^2\sqrt{\rho\varepsilon}}$
\State $t_{\text{noise}} \leftarrow -t_{\text{thres}} - 1$
\For{$t=0,1,\dots$}
\If{$||\nabla f(x_t)||\le g_{\text{thres}}$ and $t - t_{\text{noise}} > t_{\text{thres}}$}
\State $\tilde x_t \leftarrow x_t, t_{\text{noise}}\leftarrow t$
\State $x_t \leftarrow \tilde x_t + \xi_t$, where $\xi_t$ is drawn uniformly from $\mathbb B_0(r)$.
\EndIf
\If{$t - t_{\text{noise}} = t_{\text{thres}}$ and $f(x_t) - f(\tilde x_{t_{\text{noise}}}) > -f_{\text{thres}}$}
\State\Return $\tilde x_{t_{\text{noise}}}$
\EndIf
\State $x_{t+1}\leftarrow x_t - \eta\nabla f(x_t)$
\EndFor
\end{algorithmic}
\end{algorithm}
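For concreteness, here is a direct Python transcription of Algorithm~\ref{alg:pgd} (a sketch rather than the authors' code; \texttt{f} and \texttt{grad} are user-supplied callables, and \texttt{max\_iter} is an illustrative safeguard):
\begin{verbatim}
import numpy as np

def pgd(f, grad, x0, ell, rho, eps, c, delta, Delta_f,
        max_iter=100_000, seed=0):
    rng = np.random.default_rng(seed)
    d = x0.size
    chi = 3 * max(np.log(d * ell * Delta_f / (c * eps**2 * delta)), 4)
    eta = c / ell
    r = np.sqrt(c) * eps / (chi**2 * ell)
    g_thres = np.sqrt(c) * eps / chi**2
    f_thres = c * np.sqrt(eps**3) / (chi**3 * np.sqrt(rho))
    t_thres = int(np.ceil(chi * ell / (c**2 * np.sqrt(rho * eps))))
    x, x_tilde, t_noise = x0.copy(), None, -t_thres - 1
    for t in range(max_iter):
        if np.linalg.norm(grad(x)) <= g_thres and t - t_noise > t_thres:
            x_tilde, t_noise = x.copy(), t
            xi = rng.normal(size=d)            # sample uniformly from B_0(r):
            xi *= r * rng.uniform() ** (1 / d) / np.linalg.norm(xi)
            x = x_tilde + xi
        if t - t_noise == t_thres and f(x) - f(x_tilde) > -f_thres:
            return x_tilde                     # approx. second-order stationary
        x = x - eta * grad(x)
    return x
\end{verbatim}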
\section{Gradient and Hessian of the Cost Function}
Before we prove any of our main theorems, we first compute the gradient and Hessian of the functions $f(W)$ and $g(W)$. In our training process, we need to compute the gradient of function $g(W)$, and in the analysis for the smoothness and Hessian Lipschitz constants, we need both the gradient and Hessian.
Recall that given the samples and their corresponding labels $\{(x_j,y_j)\}_{j\le n}$, we define the cost function of the neural network with parameters $W = [w_1,\dots,w_r]\in\mathbb{R}^{d\times r}$,
\[f(W) = \frac{1}{4n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right)^2.\]
Given the above form of the cost function, we can write out the gradient and the Hessian with respect to $W$. We have the following gradient,
\begin{align*}
\frac{\partial f(W)}{\partial w_k} =& \frac{1}{4n}\sum_{j=1}^n 2\left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right) \cdot 2 a_k (w_k^Tx_j) x_j\\
=& \frac{a_k}{n}\sum_{j=1}^n \left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right)x_jx_j^Tw_k.
\end{align*}
and the Hessian blocks are given by $\frac{\partial^2 f(W)}{\partial w_{k_1}\partial w_{k_2}} =$
\[ \left\{\begin{aligned}
\frac{a_{k_1}}{n}\sum_{j=1}^n \left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right)x_jx_j^T + \frac{2a_{k_1}a_{k_2}}{n}\sum_{j=1}^n(x_j^Tw_{k_1})(x_j^Tw_{k_2})x_jx_j^T&,\ \text{if}\ k_1 = k_2 \\
\frac{2a_{k_1}a_{k_2}}{n}\sum_{j=1}^n(x_j^Tw_{k_1})(x_j^Tw_{k_2})x_jx_j^T &,\ \text{if}\ k_1 \neq k_2
\end{aligned}\right.\]
In the above computation, $\frac{\partial f(W)}{\partial w_k}$ is a column vector and $\frac{\partial^2 f(W)}{\partial w_{k_1}\partial w_{k_2}}$ is a square matrix whose rows correspond to derivatives with respect to the entries of $w_{k_2}$ and whose columns correspond to derivatives with respect to the entries of $w_{k_1}$. Then, given the above formula, we can write out the quadratic form of the Hessian with respect to the parameters $Z = [z_1,z_2,\dots,z_r]\in \mathbb{R}^{d\times r}$,
\begin{align*}
&\nabla^2 f(W)(Z,Z)\\
=& \sum_{k=1}^r z_k^T\left(\frac{a_{k}}{n}\sum_{j=1}^n \left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right)x_jx_j^T\right)z_k \\
&\quad + \sum_{1\le k_1,k_2\le r}w_{k_2}^T\left(\frac{2a_{k_1}a_{k_2}}{n}\sum_{j=1}^n(x_j^Tw_{k_1})(x_j^Tw_{k_2})x_jx_j^T\right)w_{k_1} \\
=& \sum_{k=1}^r z_k^T\left(\frac{a_{k}}{n}\sum_{j=1}^n \left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right)x_jx_j^T\right)z_k + \frac{2}{n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i w_i^Tx_jx_j^Tz_i\right)^2.
\end{align*}
In order to train this neural network in polynomial time, we need to add a small regularizer to the original cost function $f(W)$. Let
\[g(W) = f(W) + \frac{\gamma}{2}||W||_F^2,\]
where $\gamma$ is a constant. Then we can directly get the gradient and the Hessian of $g(W)$ from those of $f(W)$. We have
\begin{align*}
\nabla_{w_k} g(W) =& \frac{a_k}{n}\sum_{j=1}^n \left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right)x_jx_j^Tw_k + \gamma w_k \\
\nabla^2_W g(W)(Z,Z) =& \sum_{k=1}^r z_k^T\left(\frac{a_{k}}{n}\sum_{j=1}^n \left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right)x_jx_j^T\right)z_k\\
&\quad+ \frac{2}{n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i w_i^Tx_jx_j^Tz_i\right)^2 + \gamma ||Z||_F^2.
\end{align*}
For simplicity, we can use $x_j^TWAW^Tx_j-y_j$ to denote $\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j$, where $A$ is a diagonal matrix with $A_{ii} = a_i$. Then we have
\begin{align*}
\nabla_W g(W) =& \frac{1}{n}\sum_{j=1}^n \left(x_j^TWAW^Tx_j-y_j\right)x_jx_j^TWA + \gamma W \\
\nabla^2_W g(W)(Z,Z) =& \sum_{k=1}^r z_k^T\left(\frac{a_{k}}{n}\sum_{j=1}^n \left(x_j^TWAW^Tx_j-y_j\right)x_jx_j^T\right)z_k\\
&\quad+ \frac{2}{n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i w_i^Tx_jx_j^Tz_i\right)^2 + \gamma ||Z||_F^2.
\end{align*}
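These formulas are straightforward to check numerically; below is a small finite-difference sanity check for $\nabla_W g(W)$ (a sketch with illustrative dimensions, assuming \texttt{numpy}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, r, gamma = 8, 4, 6, 0.1
X = rng.normal(size=(d, n)); y = rng.normal(size=n)
a = np.concatenate([np.ones(r // 2), -np.ones(r // 2)]); A = np.diag(a)

def g(W):
    delta = ((X.T @ W) ** 2) @ a - y   # network output minus label
    return (delta ** 2).sum() / (4 * n) + 0.5 * gamma * (W ** 2).sum()

def grad_g(W):
    delta = ((X.T @ W) ** 2) @ a - y
    # (1/n) sum_j delta_j x_j x_j^T W A + gamma W
    return (X * delta) @ X.T @ W @ A / n + gamma * W

W = rng.normal(size=(d, r)); E = rng.normal(size=(d, r)); h = 1e-6
fd = (g(W + h * E) - g(W - h * E)) / (2 * h)   # directional derivative
print(abs(fd - (grad_g(W) * E).sum()))         # should be tiny
\end{verbatim}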
\section{Omitted Proofs for Section~\ref{sec:proof-sketch-twolayer}}
\label{sec:twolayerformal}
In this section, we will give a formal proof of Theorem \ref{thm:main-theorem-twolayer}. We will follow the proof sketch in Section \ref{sec:proof-sketch-twolayer}. First, in Section~\ref{subsec:landscapeproof} we prove Lemma~\ref{lem:geo-property}, which gives the optimization landscape for the two-layer neural network with \emph{large enough} width; then in Section~\ref{subsec:2layeralg} we show that the training process on the function with regularization ends in polynomial time.
\subsection{Optimization landscape of two-layer neural net}\label{subsec:landscapeproof}
In this part we will prove the optimization landscape result (Lemma \ref{lem:geo-property}) for the 2-layer neural network. First we recall Lemma \ref{lem:geo-property}.
\lemoptlandscape*
For simplicity, we will use $\delta_j(W) = \sum_{i=1}^r a_i(w_i^T x_j)^2 - y_j$ to denote the error between the output of the neural network and the label $y_j$. Consider the matrix $M = \frac{1}{n}\sum_{j=1}^n\delta_j x_jx_j^T$. To show that every $\varepsilon$-second-order stationary point $W$ of $f$ has small function value $f(W)$, we need the following two lemmas.
Generally speaking, the first lemma shows that, when the network is \emph{large enough}, any point with an \emph{almost positive semidefinite Hessian} leads to a small spectral norm of the matrix $M$.
\lemsmallesteigenvalue*
\begin{proof}
First note that the equation
\[\lambda_{\min}(\nabla^2 f(W)) = -\max_i |\lambda_i (M)|\]
is equivalent to
\[\min_{||Z||_F = 1}\nabla^2 f(W)(Z,Z) = -\max_{||z||_2 = 1} |z^T Mz|,\]
and we will give a proof of the equivalent form.
First, we show that
\[\min_{||Z||_F = 1}\nabla^2 f(W)(Z,Z) \ge -\max_{||z||_2 = 1} |z^T Mz|.\]
Intuitively, this is because $\nabla^2 f(W)$ is the sum of two terms, one of them is always positive semidefinite, and the other term is equivalent to a weighted combination of the matrix $M$ applied to different columns of $Z$.
\begin{align*}
&\nabla^2 f(W)(Z,Z)\\
=& \sum_{k=1}^r z_k^T\left(\frac{a_{k}}{n}\sum_{j=1}^n \left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right)x_jx_j^T\right)z_k + \frac{2}{n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i w_i^Tx_jx_j^Tz_i\right)^2 \\
=& \sum_{k=1}^r a_k z_k^T M z_k + \frac{2}{n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i w_i^Tx_jx_j^Tz_i\right)^2 \\
\ge & \sum_{k=1}^r a_k z_k^T M z_k \\
\ge & -\sum_{k=1}^r \max_i|\lambda_{i}(M)|\cdot ||z_k||_2^2 \\
=& -\max_i|\lambda_{i}(M)|\cdot ||Z||_F^2.
\end{align*}
Then we have
\[\min_{||Z||_F = 1}\nabla^2 f(W)(Z,Z) \ge \min_{||Z||_F = 1}(-\max_i|\lambda_{i}(M)|\cdot ||Z||_F^2) = -\max_i|\lambda_{i}(M)| = -\max_{||z||_2 = 1} |z^T Mz|.\]
For the other side, we show that
\[\min_{||Z||_F = 1}\nabla^2 f(W)(Z,Z) \le -\max_{||z||_2 = 1} |z^T Mz|\]
by showing that there exists $Z,||Z||_F = 1$ such that $\nabla^2 f(W)(Z,Z) = -\max_{||z||_2 = 1} |z^T Mz|$.
First, let $z_0 = \arg\max_{||z||_2 = 1} |z^T Mz|$. Recall that for simplicity, we assume that $r$ is an even number and $a_i = 1$ for all $i \le \frac{r}{2}$ and $a_i = -1$ for all $i \ge \frac{r+2}{2}$. If $z_0^T Mz_0 < 0$, there exists $u\in\mathbb{R}^{r}$ such that
\begin{enumerate}
\item $||u||_2 = 1$,
\item $u_i = 0$ for all $i \ge \frac{r+2}{2}$,
\item $\sum_{i=1}^r a_i u_i w_i = \textbf{0}$,
\end{enumerate}
since constraints 2 and 3 form a homogeneous linear system, where constraint 2 contributes $\frac{r}{2}$ equations and constraint 3 contributes $d$ equations. The total number of variables is $r$, and we have $r > \frac{r}{2} + d$ since we assume that $r \ge 2d+2$. Then there must exist $u\neq \textbf{0}$ that satisfies constraints 2 and 3, and we normalize this $u$ to have norm $||u||_2 = 1$.
Then, let $Z = z_0u^T$, we have $||Z||_F^2 = ||z_0||_2^2\cdot ||u||_2^2 = 1$ and
\begin{align*}
\nabla^2 f(W)(Z,Z) =& \sum_{k=1}^r a_k z_k^T M z_k + \frac{2}{n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i w_i^Tx_jx_j^Tz_i\right)^2\\
=& \sum_{k=1}^r a_k u_k^2 z_0^T M z_0 + \frac{2}{n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i u_i w_i^Tx_jx_j^Tz_0\right)^2\\
=& z_0^TMz_0 + \frac{2}{n}\sum_{j=1}^n\left(\sum_{i=1}^r\textbf{0}^Tx_jx_j^Tz_0\right)^2\\
=& -\max_{||z||_2 = 1} |z^T Mz|,
\end{align*}
where the third equality comes from the fact that $||u||_2^2 = \sum_{i=1}^r u_i^2 = 1$, $u_i = 0$ for all $i > \frac{r}{2}$, and $\sum_{i=1}^r a_i u_i w_i = \textbf{0}$. The proof for the case when $z_0^T Mz_0 > 0$ is symmetric, except we use the second half of the coordinates (where $a_i = -1$).
\end{proof}
The next step is to connect the matrix $M$ to the loss function. In particular, we will show that if the spectral norm of $M$ is small, the loss is also small.
\lemspectralnormandvalue*
\begin{proof}
We know that the function value $f(W) = \frac{1}{4n}\sum_{j=1}^n \delta_j^2 = \frac{1}{4n}||\delta||_2^2$, where $\delta \in\mathbb{R}^n$ is the vector whose $j$-th element is $\delta_j$. Because $X = [x_1^{\otimes 2},\dots,x_n^{\otimes 2}]\in \mathbb{R}^{d^2 \times n}$ has full column rank and its smallest singular value is at least $\sigma$, we know that for any $v\in\mathbb{R}^n$,
\[||Xv||_2 \ge \sigma_{\min}(X)\cdot ||v||_2 \ge \sigma ||v||_2.\]
Since $M = \frac{1}{n}\sum_{j=1}^n \delta_j x_jx_j^T$ is a symmetric matrix, $M$ has $d$ real eigenvalues, and we use $\lambda_1,\dots,\lambda_d$ to denote these eigenvalues. We assume that the spectral norm of the matrix $M$ is upper bounded by $\lambda$, which means that $|\lambda_i| \le \lambda$ for all $1\le i\le d$, and hence
\[||M||_F^2 = \sum_{i=1}^d \lambda_i^2 \le \sum_{i=1}^d \lambda^2 = d\lambda^2.\]
Then we can conclude that
\[||M||_F^2 = ||\frac{1}{n}\sum_{j=1}^n \delta_j x_jx_j^T||_F^2 = \frac{1}{n^2}||X\delta||_2^2 \ge \frac{1}{n^2}\sigma^2 ||\delta||_2^2,\]
where the second equality comes from the fact that reshaping a matrix into a vector preserves the Frobenius norm.
Then combining the previous argument, we have
\[f(W) = \frac{1}{4n}||\delta||_2^2 \le \frac{n}{4\sigma^2}||M||_F^2 \le \frac{nd\lambda^2}{4\sigma^2}.\]
\end{proof}
Lemma~\ref{lem:geo-property} follows immediately from Lemma~\ref{lem:smallesteigenvalue} and Lemma~\ref{lem:spectralnormandfuncvalue}.
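Both lemmas are easy to verify numerically on small instances. The following sketch (with illustrative dimensions, chosen so that $n \le d^2$ and $X$ has full column rank) checks the bound of Lemma~\ref{lem:spectralnormandfuncvalue}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
d, n, r = 4, 10, 10
xs = rng.normal(size=(d, n)); y = rng.normal(size=n)
a = np.concatenate([np.ones(r // 2), -np.ones(r // 2)])
W = rng.normal(size=(d, r))

delta = ((xs.T @ W) ** 2) @ a - y        # residuals delta_j
M = (xs * delta) @ xs.T / n              # (1/n) sum_j delta_j x_j x_j^T
lam = np.abs(np.linalg.eigvalsh(M)).max()          # spectral norm of M
X = np.stack([np.outer(x, x).ravel() for x in xs.T], axis=1)
sigma = np.linalg.svd(X, compute_uv=False).min()   # sigma_min of X
f = (delta ** 2).sum() / (4 * n)
print(f <= n * d * lam ** 2 / (4 * sigma ** 2))    # True
\end{verbatim}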
\subsection{Training guarantee of the two-layer neural net}
\label{subsec:2layeralg}
Recall that in order to derive the time complexity for the training procedure, we add a regularizer to the function $f$. More concretely,
\[g(W) = f(W) + \frac{\gamma}{2}||W||_F^2,\]
where $\gamma$ is a constant that we choose in Theorem~\ref{thm:main-theorem-twolayer}.
To analyze the running time of the PGD algorithm, we first bound the smoothness and Hessian Lipschitz parameters when the Frobenius norm of $W$ is bounded.
\begin{restatable}{lemma}{lemsmoothness}\label{lem:smoothness-lipschitz-hessian}
In the set $\{W:||W||_F^2 \le \Gamma\}$, if we have $||x_j||_2 \le B$ and $|y_j| \le Y$ for all $j \le n$, then
\begin{enumerate}
\item $g(W)$ is $(3B^4\Gamma+YB^2+\gamma)$-smooth.
\item $g(W)$ has a $6B^4\Gamma^{\frac{1}{2}}$-Lipschitz Hessian.
\end{enumerate}
\end{restatable}
\begin{proof}
We first bound the smoothness constant. We have
\begin{align*}
&||\nabla g(U) - \nabla g(V)||_F \\
=& ||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TUA + \gamma U - \frac{1}{n}\sum_{j=1}^n \left(x_j^TVAV^Tx_j-y_j\right)x_jx_j^TVA - \gamma V||_F \\
\le& ||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TUA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TVAV^Tx_j-y_j\right)x_jx_j^TVA||_F + \gamma ||U-V||_F.
\end{align*}
Next we bound the first term. We have
\begin{align*}
& ||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TUA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TVAV^Tx_j-y_j\right)x_jx_j^TVA||_F \\
=& ||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TUA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TVA \\
&\quad + \frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TVA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TUAV^Tx_j-y_j\right)x_jx_j^TVA \\
&\quad + \frac{1}{n}\sum_{j=1}^n \left(x_j^TUAV^Tx_j-y_j\right)x_jx_j^TVA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TVAV^Tx_j-y_j\right)x_jx_j^TVA||_F\\
\le& ||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TUA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TVA||_F \\
&\quad + ||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TVA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TUAV^Tx_j-y_j\right)x_jx_j^TVA||_F \\
&\quad + ||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAV^Tx_j-y_j\right)x_jx_j^TVA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TVAV^Tx_j-y_j\right)x_jx_j^TVA||_F.\\
\end{align*}
The first term can be bounded by
\begin{align*}
&||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TUA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TVA||_F \\
\le& ||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j\right)x_jx_j^TUA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j\right)x_jx_j^TVA||_F\\
&\quad+ ||\frac{1}{n}\sum_{j=1}^n y_jx_jx_j^TUA - y_jx_jx_j^TVA||_F \\
\le& ||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j\right)x_jx_j^T||_F||(U-V)A||_F + YB^2||(U-V)A||_F \\
\le& B^4\Gamma ||U-V||_F + YB^2||U-V||_F.
\end{align*}
Similarly, we can show that
\[||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^TVA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TUAV^Tx_j-y_j\right)x_jx_j^TVA||_F \le B^4\Gamma ||U-V||_F,\]
and
\[||\frac{1}{n}\sum_{j=1}^n \left(x_j^TUAV^Tx_j-y_j\right)x_jx_j^TVA - \frac{1}{n}\sum_{j=1}^n \left(x_j^TVAV^Tx_j-y_j\right)x_jx_j^TVA||_F \le B^4\Gamma ||U-V||_F.\]
Then, we have
\[||\nabla g(U) - \nabla g(V)||_F \le (3B^4\Gamma+YB^2+\gamma) ||U-V||_F.\]
Then we bound the Hessian Lipschitz constant. We have
\begin{align*}
&|\nabla^2 g(U)(Z,Z) - \nabla^2 g(V)(Z,Z)|\\
=& |\sum_{k=1}^r z_k^T\left(\frac{a_{k}}{n}\sum_{j=1}^n \left(x_j^TUAU^Tx_j-y_j\right)x_jx_j^T\right)z_k + \frac{2}{n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i u_i^Tx_jx_j^Tz_i\right)^2 + \gamma ||Z||_F^2 \\
&\quad - \sum_{k=1}^r z_k^T\left(\frac{a_{k}}{n}\sum_{j=1}^n \left(x_j^TVAV^Tx_j-y_j\right)x_jx_j^T\right)z_k - \frac{2}{n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i v_i^Tx_jx_j^Tz_i\right)^2 - \gamma ||Z||_F^2| \\
\le&\sum_{k=1}^r|z_k^T\left(\frac{a_{k}}{n}\sum_{j=1}^n \left(x_j^T(UAU^T-VAV^T)x_j\right)x_jx_j^T\right)z_k|\\
&\quad+ \frac{2}{n}\sum_{j=1}^n|\left(\sum_{i=1}^r a_i u_i^Tx_jx_j^Tz_i\right)^2 - \left(\sum_{i=1}^r a_i v_i^Tx_jx_j^Tz_i\right)^2|.
\end{align*}
First we have
\begin{align*}
&|z_k^T\left(\frac{a_{k}}{n}\sum_{j=1}^n \left(x_j^T(UAU^T-VAV^T)x_j\right)x_jx_j^T\right)z_k|\\
\le& \frac{1}{n}\sum_{j=1}^n||\left(x_j^T(UAU^T-VAV^T)x_j\right)x_jx_j^T||_F||z_k||_2^2 \\
\le& \frac{1}{n}\sum_{j=1}^n||\left(x_j^T(UAU^T-UAV^T + UAV^T - VAV^T)x_j\right)x_jx_j^T||_F||z_k||_2^2 \\
\le& 2B^4\Gamma^{\frac{1}{2}}||U-V||_F||z_k||_2^2.
\end{align*}
So we can bound the first term by
\begin{align*}
&\sum_{k=1}^r|z_k^T\left(\frac{a_{k}}{n}\sum_{j=1}^n \left(x_j^T(UAU^T-VAV^T)x_j\right)x_jx_j^T\right)z_k|\\
\le&\sum_{k=1}^r2B^4\Gamma^{\frac{1}{2}}||U-V||_F||z_k||_2^2 = 2B^4\Gamma^{\frac{1}{2}}||U-V||_F||Z||_F^2.
\end{align*}
Then for the second term, note that
\[\sum_{i=1}^r a_i u_i^Tx_jx_j^Tz_i = \langle UA, x_jx_j^TZ\rangle,\]
and we have
\begin{align*}
&\frac{2}{n}\sum_{j=1}^n|\left(\sum_{i=1}^r a_i u_i^Tx_jx_j^Tz_i\right)^2 - \left(\sum_{i=1}^r a_i v_i^Tx_jx_j^Tz_i\right)^2| \\
=& \frac{2}{n}\sum_{j=1}^n |\langle UA, x_jx_j^TZ\rangle^2 - \langle VA, x_jx_j^TZ\rangle^2| \\
=& \frac{2}{n}\sum_{j=1}^n |\langle (U-V)A, x_jx_j^TZ\rangle\langle (U+V)A, x_jx_j^TZ\rangle| \\
\le& \frac{2}{n}\sum_{j=1}^n||(U-V)A||_F||x_jx_j^TZ||_F||(U+V)A||_F||x_jx_j^TZ||_F \\
\le& 4B^4\Gamma^{\frac{1}{2}}||U-V||_F||Z||_F^2,
\end{align*}
where the first inequality comes from the Cauchy-Schwarz inequality. Combining with the previous computation, we have
\[|\nabla^2 g(U)(Z,Z) - \nabla^2 g(V)(Z,Z)| \le 6B^4\Gamma^{\frac{1}{2}}||U-V||_F||Z||_F^2.\]
\end{proof}
We also have the following theorem showing the convergence of Perturbed Gradient Descent (Algorithm \ref{alg:pgd}).
\thmpgdconvergence*
Then, based on the convergence result in \cite{jin2017escape} and the previous lemmas, we have the following main theorem for the 2-layer neural network with quadratic activations.
\thmmainthmtwolayer*
\begin{proof}[Proof of Theorem \ref{thm:main-theorem-twolayer}]
We first show that during the training process, if the constant $c \le 1$, the objective function value satisfies
\[g(W_t) \le g(W_{\text{ins}}) + \frac{3c\varepsilon^2}{2\chi^4},\]
where we choose the smoothness constant $\ell \ge 1$ to be the smoothness for the region $g(W) \le g(W_{\text{ins}}) + \frac{3c\varepsilon^2}{2\chi^4}$.
In the PGD algorithm (Algorithm~\ref{alg:pgd}), we say a point is in a perturbation phase if $t-t_{\text{noise}} < t_{\text{thres}}$. A point $x_t$ is the beginning of a perturbation phase if it reaches line 5 of Algorithm~\ref{alg:pgd} and a perturbation is added to it.
We use induction to show that the following properties hold.
\begin{enumerate}
\item If time $t$ is not in the perturbation phase, then $g(W_t) \le g(W_{\text{ins}})$.
\item If time $t$ is in a perturbation phase, then $g(W_t) \le g(W_{\text{ins}}) + \frac{3c\varepsilon^2}{2\chi^4\ell}$. Moreover, if $t$ is the beginning of a perturbation phase, then $g(\tilde W_t) \le g(W_{\text{ins}})$.
\end{enumerate}
First we show that at time $t=0$, the property holds. If $t=0$ is not the beginning of a perturbation phase, then the inequality holds trivially by initialization.
If $t=0$ is the beginning of a perturbation phase, then we know that $g(\tilde W_0) = g(W_{\text{ins}})$ from the definition of the algorithm, then
\begin{align}
g(W_0) =& g(\tilde W_0 + \xi_0)\label{equ:perturb} \\
\le& g(\tilde W_0) + ||\xi_0||_F||\nabla g(\tilde W_0)||_F + \frac{\ell}{2}||\xi_0||_F^2\nonumber \\
\le& g(\tilde W_0) + r\cdot g_{\text{thres}} + \frac{\ell}{2}r^2\nonumber \\
\le& g(\tilde W_0) + \frac{\sqrt{c}\varepsilon}{\chi^2\ell} \cdot \frac{\sqrt{c}\varepsilon}{\chi^2} +\frac{\ell}{2} \frac{\sqrt{c}\varepsilon}{\chi^2\ell}\cdot \frac{\sqrt{c}\varepsilon}{\chi^2\ell}\nonumber \\
=& g(W_{\text{ins}}) + \frac{3c\varepsilon^2}{2\chi^4\ell}.\nonumber
\end{align}
Now we do the induction: assuming the two properties hold for time $t$, we will show that they also hold at time $t+1$. We break the proof into 3 cases:
{\bf Case 1}: $t+1$ is not in a perturbation phase. In this case, the algorithm does not add a perturbation on $W_{t+1}$, and we have
\begin{align}
g(W_{t+1}) =& g(W_{t} - \eta \nabla g(W_{t}))\label{equ:smoothness} \\
\le& g(W_{t}) - \langle \eta \nabla g(W_{t}), \nabla g(W_t)\rangle + \frac{\ell}{2}||\eta \nabla g(W_t)||_F^2\nonumber \\
\le& g(W_t) - \frac{\eta}{2}||\nabla g(W_t)||_F^2\nonumber\\
\le& g(W_t). \nonumber
\end{align}
If $t$ is not in a perturbation phase, then from the induction hypothesis, we have
\[g(W_{t+1}) \le g(W_t) \le g(W_{\text{ins}}),\]
otherwise if $t$ is in a perturbation phase, since $t+1$ is not in a perturbation phase, $t$ must be at the end of the phase. By design of the algorithm we have:
\[g(W_{t+1}) \le g(W_t) \le g(\tilde W_{t_{\text{noise}}}) - f_{\text{thres}} \le g(W_{\text{ins}}).\]
{\bf Case 2}: $t+1$ is in a perturbation phase, but not at the beginning.
Using the same reasoning as \eqref{equ:smoothness}, we know
\[g(W_{t+1}) \le g(W_t) \le g(W_{\text{ins}}).\]
{\bf Case 3}: $t+1$ is at the beginning of a perturbation phase. First we know that
\[g(W_{t}) \le g(W_{\text{ins}}),\]
since $t$ is either not in a perturbation phase or at the end of one, we have $g(\tilde W_{t+1}) \le g(W_{\text{ins}})$. By the same computation as in \eqref{equ:perturb}, we have
\[g(W_{t+1}) \le g(W_{\text{ins}}) + \frac{3c\varepsilon^2}{2\chi^4\ell}.\]
This finishes the induction.
Since we choose $\ell \ge 1$, we can choose the other parameters such that $g(W_{t+1}) \le g(W_{\text{ins}}) + \frac{3c\varepsilon^2}{2\chi^4} \le g(W_{\text{ins}}) + 1$. Then since
\[g(W) = f(W) + \frac{\gamma}{2}||W||_F^2,\]
we know that during the training process, we have $||W||_F^2 \le \frac{2(g(W_{\text{ins}}) + 1)}{\gamma}$. Since we train from $W_{\text{ins}} = 0$, we have $||W||_F^2 \le \frac{2(f(0) + 1)}{\gamma}$. From Lemma \ref{lem:smoothness-lipschitz-hessian}, we know that
\begin{enumerate}
\item $g(W)$ is $(3B^4\frac{2(f(0) + 1)}{\gamma}+YB^2+\gamma)$-smooth.
\item $g(W)$ has a $6B^4\sqrt{\frac{2(f(0) + 1)}{\gamma}}$-Lipschitz Hessian.
\end{enumerate}
As we choose $\gamma = (6B^4\sqrt{2(f(0) + 1)})^{2/5}\cdot \varepsilon^{2/5}$, we know that $\rho = (6B^4\sqrt{2(f(0) + 1)})^{4/5}\cdot \varepsilon^{-1/5}$ is an upper bound on the Lipschitz Hessian constant.
When PGD stops, we know that
\[\lambda_{\min}(\nabla^2 g(W)) \ge -\sqrt{\rho\varepsilon} = -(6B^4\sqrt{2(f(0) + 1)})^{2/5}\cdot \varepsilon^{2/5},\]
and we have
\[\lambda_{\min}(\nabla^2 f(W)) \ge \lambda_{\min}(\nabla^2 g(W)) - \gamma \ge -2(6B^4\sqrt{2(f(0) + 1)})^{2/5}\cdot \varepsilon^{2/5}.\]
From Lemma \ref{lem:smallesteigenvalue}, we know that the spectral norm of matrix $M$ is bounded by $2(6B^4\sqrt{2(f(0) + 1)})^{2/5}\cdot \varepsilon^{2/5}$, and from Lemma \ref{lem:spectralnormandfuncvalue}, we know that
\[f(W) \le \frac{nd\cdot 4(6B^4\sqrt{2(f(0) + 1)})^{4/5}\cdot \varepsilon^{4/5}}{4\sigma^2} = \frac{nd\cdot (6B^4\sqrt{2(f(0) + 1)})^{4/5}\cdot \varepsilon^{4/5}}{\sigma^2}. \]
The running time follows directly from the convergence theorem of Perturbed Gradient Descent (Theorem \ref{thm:pgd-convergence}) and the previous argument that the training trajectory does not escape from the set $\{W:||W||_F^2 \le \frac{2(g(W_{\text{ins}}) + 1)}{\gamma}\}$.
Then, in order to get the error to be smaller than $\varepsilon$, we choose
\[\varepsilon' = \left(\frac{\sigma^2\varepsilon}{nd}\right)^{5/4}\frac{1}{6B^4\sqrt{2(f(0) + 1)}},\]
and the total running time should be
\[O\left(\frac{B^8\ell(nd)^{5/2}(f(0)+1)^2}{\sigma^{5}\varepsilon^{5/2}}\log^4\left(\frac{Bnrd\ell\Delta(f(0)+1)}{\varepsilon^2\delta\sigma}\right)\right).\]
Besides, our parameters $\rho$ and $\gamma$ are chosen to be
\[\rho = (6B^4\sqrt{2(f(0) + 1)})^{4/5}\cdot \varepsilon'^{-1/5} = (6B^4\sqrt{2(f(0) + 1)})\left(\frac{nd}{\sigma^2\varepsilon}\right)^{\frac{1}{4}},\]
and
\[\gamma = (6B^4\sqrt{2(f(0) + 1)})^{2/5}\cdot \varepsilon'^{2/5} = \left(\frac{\sigma^2\varepsilon}{nd}\right)^{\frac{1}{2}}.\]
\end{proof}
\section{Omitted Proofs for Section~\ref{sec:proof-sketch-random-feature}}
\label{sec:threelayerformal}
In this section, we give the proof of the main results for our three-layer neural network (Theorems \ref{thm:main-theorem-random-feature-with-perturbation} and \ref{thm:deterministic}). Our proof mostly uses the leave-one-out distance to bound the smallest singular value of the relevant matrices, which is a common approach in random matrix theory (e.g., in \cite{rudelson2009smallest}). However, the matrices we are interested in involve high-order tensor powers that have many correlated entries, so we need to rely on tools such as
anti-concentration for polynomials in order to bound the leave-one-out distance.
First, in Section \ref{subsec:pre-random-feature}, we introduce some more notations and definitions, and present some well-known results that will help us present the proofs. In Section \ref{subsec:proof-random-feature-perturb}, we prove Theorem \ref{thm:main-theorem-random-feature-with-perturbation}, which focuses on the smoothed analysis setting. Finally, in Section \ref{subsec:proof-deterministic} we prove Theorem \ref{thm:deterministic}, where we give a deterministic condition on the input.
\subsection{Preliminaries}\label{subsec:pre-random-feature}
\paragraph{Representations of symmetric tensors}
Throughout this section, we use $T_d^p$ to denote the space of $p$-th order tensors on $d$ dimensions. That is, $T_d^p = (\mathbb{R}^d)^{\otimes p}$. A tensor $T \in T_d^p$ is symmetric if $T(i_1,i_2,...,i_p) = T(i_{\pi(1)},i_{\pi(2)}, ..., i_{\pi(p)})$ for every permutation $\pi:[p]\to[p]$. We use $X_d^p$ to denote the space of all symmetric tensors in $T_d^p$.
The dimension of $X_d^p$ is $D_d^p=\binom{p+d-1}{p}$.
Let $\bar{X}_d^p=\left\{x\in X_d^p\,\Big|\,\Vert x\Vert_2=1\right\}$ be the set of unit tensors in $X^p_d$ (as a sub-metric space of $T^p_d$). For $\mathbb{R}^d$, let $\{e_i\mid i=1,2,\dots, d\}$ be its standard orthonormal basis. For simplicity of notation we use $S_p$ to denote the group of bijections (permutations) $[p]\to[p]$, and $I^p_d$ to denote the set of integer indices $I^p_d=\{(i_1,i_2,\dots, i_d)\in \mathbb{N}^d\mid\sum\limits_{j=1}^d i_j=p\}$. We can make $X_d^p$ isomorphic (as a vector space over $\mathbb{R}$) to the Euclidean space $\mathbb{R}^{I^p_d}$ (note $|I^p_d|=D_d^p$) by choosing the basis $\{s_{(i_1,\dots,i_d)}\mid (i_1,\dots,i_d)\in I^p_d\}$, where
\[s_{(i_1,i_2,\dots, i_d)}=\frac{1}{\prod\limits_{j=1}^d{i_j!}}\sum\limits_{\sigma\in S_p}e_{j_{\sigma(1)}}\otimes e_{j_{\sigma(2)}}\otimes\cdots\otimes e_{j_{\sigma(p)}}\]
with $(j_1, j_2, \dots, j_p)=(1^{(i_1)}\circ2^{(i_2)}\circ\cdots \circ d^{(i_d)})$, and $(1^{(i_1)}\circ2^{(i_2)}\circ\cdots \circ d^{(i_d)})$ means a length-$p$ string with $i_1$ 1's, $i_2$ 2's and so on; let the isomorphism be $\phi^p_d$. We call the image of a symmetric tensor through $\phi^p_d$ its \emph{reduced vectorized form}, and we can define a new norm on $X_d^p$ by $\Vert x\Vert_{\text{rv}}=\Vert \phi^p_d(x)\Vert_2$. For example, for $p=d=2$ the basis is $s_{(2,0)}=e_1\otimes e_1$, $s_{(0,2)}=e_2\otimes e_2$ and $s_{(1,1)}=e_1\otimes e_2+e_2\otimes e_1$, so the symmetric matrix $\begin{pmatrix} a & b \\ b & c\end{pmatrix}$ has reduced vectorized form $(a,c,b)$: its rv-norm is $\sqrt{a^2+b^2+c^2}$, while its 2-norm is $\sqrt{a^2+2b^2+c^2}$.
Given the definition of \emph{reduced vectorized form} and the norm $\Vert\cdot\Vert_{\text{rv}}$, we have the following lemma that bridges between the norm $\Vert\cdot\Vert_{\text{rv}}$ and the original 2-norm.
\begin{lemma}\label{lem:rv-metric}
For any $x\in X_d^p$,
\begin{equation*}
\Vert x\Vert_{\text{rv}} \geq \frac{1}{\sqrt{p!}}\Vert x\Vert_2.
\end{equation*}
\end{lemma}
\begin{proof}
We can expand $x$ as $x=\sum\limits_{i\in I_d^p} x_i s_i$. Then $\Vert x\Vert_{\text{rv}}=\sqrt{\sum\limits_{i\in I_d^p} x_i^2}$ and $\Vert x\Vert_2=\sqrt{\sum\limits_{i\in I_d^p} x_i^2 \Vert s_i\Vert_2^2}$, as the $\{s_i\}$ are orthogonal. Notice that for $i=(i_1,i_2,\dots, i_d)$ we have $\Vert s_i\Vert_2^2 = \frac{p!}{\prod\limits_{j=1}^d i_j!}\leq p!$, and therefore
\[\Vert x\Vert_{2} \leq \sqrt{\sum\limits_{i\in I_d^p} x_i^2\, p!} =\sqrt{p!}\, \Vert x\Vert_{\text{rv}}.\]
\end{proof}
\paragraph{$\varepsilon$-net} Part of our proof uses $\varepsilon$-nets to do a covering argument. Here we give its definition.
\begin{definition}[$\varepsilon$-Net]
Given a metric space $(X,d)$. A finite set $N\subseteq {\mathcal{P}}$ is called an $\varepsilon$-net for ${\mathcal{P}}\subset X$ if for every $\boldsymbol{x}\in{\mathcal{P}}$, there exists $\pi(\boldsymbol{x})\in N$ such that $d(\boldsymbol{x}, \pi(\boldsymbol{x}))\le\varepsilon$. The smallest cardinality of an $\varepsilon$-net for ${\mathcal{P}}$ is called the covering number:
$\mathcal{N}({\mathcal{P}},\varepsilon) = \inf\{|N|:N \text{ is an $\varepsilon$-net of ${\mathcal{P}}$}\}$.
\end{definition}
Then we give an upper bound on the size of an $\varepsilon$-net of a set $K\subseteq \mathbb{R}^d$. First, we need the definition of the Minkowski sum.
\begin{definition}[Minkowski sum]
Let $A,B\subseteq \mathbb{R}^d$ be two subsets of $\mathbb{R}^d$; then their Minkowski sum is defined as
\[A + B := \{a+b:a\in A,b\in B\}.\]
\end{definition}
Then the covering number can be bounded by a volume argument. This is well known, and the proof can be found in \cite{vershynin2018high} (Proposition 4.2.12).
\begin{proposition}[Covering number]\label{prop:covering-number}
Given a set $K\subseteq \mathbb{R}^d$ and the corresponding metric $d(x,y) := \Vert x-y\Vert_2$, for any $\varepsilon > 0$ we have
\[\mathcal{N}(K,\varepsilon)\le \frac{|K+\mathbb B_2^d(\varepsilon / 2)|}{|\mathbb B_2^d(\varepsilon / 2)|},\]
where $|\cdot|$ denotes the volume of a set.
\end{proposition}
Then with the help of the previous proposition, we can now bound the covering number of symmetric tensors with unit length.
\begin{lemma}[Covering number of $\bar{X}_d^p$]\label{lem:eps-net}
There exists an $\varepsilon$-net of $\bar{X}_d^p$ with size $O\left(\left(1+\frac{2\sqrt{p!}}{\varepsilon}\right)^{D^p_d}\right)$, i.e.
\[\mathcal{N}(\bar{X}_d^p,\varepsilon) \le O\left(\left(1+\frac{2\sqrt{p!}}{\varepsilon}\right)^{D^p_d}\right).\]
\end{lemma}
\begin{proof}
Recall that $\phi^p_d(\cdot):\mathbb{R}^{d^p}\to\mathbb{R}^{D^p_d}$ is a bijection between the symmetric tensors in $\mathbb{R}^{d^p}$ and vectors in $\mathbb{R}^{D^p_d}$. We first show that an $\frac{\varepsilon}{\sqrt{p!}}$-net for the image $\phi^p_d(\bar{X}_d^p)$ implies an $\varepsilon$-net for the unit symmetric tensors $\bar{X}_d^p$.
Suppose that the $\frac{\varepsilon}{\sqrt{p!}}$-net for the image $\phi^p_d(\bar{X}_d^p)$ is denoted as $N\subset \phi^p_d(\bar{X}_d^p)$, and for any $x\in\phi^p_d(\bar{X}_d^p)$, there exists $\pi(x)\in N$ such that $||\pi(x) - x||_2 \le \frac{\varepsilon}{\sqrt{p!}}$. Then we know that $\left(\phi^p_d\right)^{-1}(N)$ is an $\varepsilon$-net for the unit symmetric tensors $\bar{X}_d^p$, because for any $x'\in\bar{X}_d^p$, we have
\begin{align*}
\Vert x' - \left(\phi^p_d\right)^{-1}(\pi(\phi^p_d(x'))) \Vert_2 \le& \sqrt{p!}\Vert\phi^p_d(x') - \pi(\phi^p_d(x'))\Vert_2 \\
\le& \sqrt{p!}\cdot \frac{\varepsilon}{\sqrt{p!}}\\
=& \varepsilon,
\end{align*}
where the first inequality comes from Lemma \ref{lem:rv-metric}.
Next, we bound the covering number for the set $\phi^p_d(\bar{X}_d^p)$. First note that the set satisfies $\phi^p_d(\bar{X}_d^p)\subset\mathbb{R}^{D^p_d}$, and from Proposition \ref{prop:covering-number}, we have
\begin{align*}
\mathcal{N}\left(\phi^p_d(\bar{X}_d^p),\frac{\varepsilon}{\sqrt{p!}}\right) \le& \frac{\bigg|\phi^p_d(\bar{X}_d^p)+\mathbb B_2^{D^p_d}(\frac{\varepsilon}{2\sqrt{p!}})\bigg|}{\bigg|\mathbb B_2^{D^p_d}(\frac{\varepsilon}{2\sqrt{p!}})\bigg|}\\
\le& \frac{\bigg|\mathbb B_2^{D^p_d}(1)+\mathbb B_2^{D^p_d}(\frac{\varepsilon}{2\sqrt{p!}})\bigg|}{\bigg|\mathbb B_2^{D^p_d}(\frac{\varepsilon}{2\sqrt{p!}})\bigg|}\\
=&\left(1+\frac{2\sqrt{p!}}{\varepsilon}\right)^{D^p_d},
\end{align*}
where the first inequality comes from Proposition \ref{prop:covering-number} and the second inequality comes from the fact that $||\phi^p_d(x)||_2 \le ||x||_2$.
\end{proof}
\paragraph{Leave-one-out Distance} Another main ingredient in our proof is \emph{Leave-one-out distance}. This is a notion that is closely related to the smallest singular value, but usually much easier to compute and bound. It has been widely used in random matrix theory, for example in \cite{rudelson2009smallest}.
\begin{definition}[Leave-one-out distance]
For a set of vectors $V=\{v_1,v_2\cdots v_n\}$, their leave-one-out distance is defined as
\[l(V)=\min\limits_{1\leq i\leq n}\inf\limits_{a_1,a_2,\dots, a_n\in \mathbb{R}}\Vert v_i-\sum\limits_{j\neq i}a_jv_j\Vert_2.\]
For a matrix $M$, its leave-one-out distance $l(M)$ is the leave-one-out distance of its columns.
\end{definition}
The leave-one-out distance is connected with the smallest singular value by the following lemma:
\begin{lemma}[Leave-one-out distance and smallest singular value]\label{lem:loo-distance-and-singular}
For a matrix $M\in \mathbb{R}^{m\times n}$ with $m\geq n$, let $l(M)$ denote the leave-one-out distance for the columns of $M$, and $\sigma_{\min}(M)$ denote the smallest singular value of $M$; then
\begin{equation}\nonumber
\frac{l(M)}{\sqrt{n}}\leq\sigma_{\min}(M)\leq l(M).
\end{equation}
\end{lemma}
We give the proof for completeness.
\begin{proof}
For any $x\in \mathbb{R}^n\backslash \{0\}$, let $r(x)=\mathop{\text{argmax}}\limits_{i\in[n]}|x_i|$, then $|x_{r(x)}|>0$ for $x\neq 0$.
Because $l(M)=\min\limits_{i\in [n]}\inf\limits_{x\in \mathbb{R}^n, x_i=1}\Vert Mx\Vert_2$, we have
\begin{align*}
\sigma_{\min}(M)=&\inf\limits_{x\in \mathbb{R}^n\backslash \{0\}}\frac{\Vert Mx\Vert_2}{\Vert x\Vert_2}\\
=&\min\limits_{i\in[n]}\inf\limits_{x\in \mathbb{R}^n\backslash \{0\}, r(x)=i}\frac{\Vert M\frac{x}{x_i}\Vert_2}{\Vert \frac{x}{x_i}\Vert_2}\\
=&\min\limits_{i\in[n]}\inf\limits_{x'\in \mathbb{R}^n\backslash \{0\}, x'_i=1}\frac{\Vert Mx'\Vert_2}{\Vert x'\Vert_2}.
\end{align*}
Because of the inequalities $\Vert x'\Vert_2\geq |x'_i|=1$ and $\Vert x'\Vert_2=\sqrt{\sum\limits_{j\in [n]}x_j'^2}\leq \sqrt{n}|x'_i|=\sqrt{n}$, we have $\frac{l(M)}{\sqrt{n}}\leq\sigma_{\min}(M)\leq l(M)$.
\end{proof}
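A quick numerical illustration of this lemma (our own sketch, assuming \texttt{numpy}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 5
M = rng.normal(size=(m, n))

def leave_one_out(M):
    dists = []
    for i in range(M.shape[1]):
        rest = np.delete(M, i, axis=1)
        # distance from column i to the span of the others
        # equals the least-squares residual
        coef, *_ = np.linalg.lstsq(rest, M[:, i], rcond=None)
        dists.append(np.linalg.norm(M[:, i] - rest @ coef))
    return min(dists)

l = leave_one_out(M)
s = np.linalg.svd(M, compute_uv=False).min()
print(l / np.sqrt(n) <= s <= l)   # True
\end{verbatim}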
\paragraph{Anti-concentration}
To make use of the random Gaussian noise added in the smoothed analysis setting, we rely on the following anti-concentration result by \cite{carbery2001distributional}:
\begin{restatable}[Anti-concentration (\cite{carbery2001distributional})]{proposition}{thmanticoncentration}\label{prop:anti-concentration}
For a multivariate polynomial $f(x)=f(x_1,x_2,\dots, x_n)$ of degree $p$, let $x\sim \mathcal{N}(0,1)^n$ follow the standard normal distribution, and suppose $\mathrm{Var}[f]\geq 1$. Then for any $t\in \mathbb{R}$ and $\varepsilon>0$,
\begin{equation}
\Pr_x[|f(x)-t|\leq\varepsilon]\leq O(p)\varepsilon^{1/p}
\end{equation}
\end{restatable}
\paragraph{Gaussian moments}
To apply the anti-concentration result, we need to give lower bound of the variance of a polynomial when the variables follow standard Gaussian distribution $\mathcal{N}(0,1)$. Next, we will show some definitions, propositions, and lemmas that will help us to give lower bound for variance of polynomials.
\begin{proposition}[Gaussian moments]
If $x\sim \mathcal{N}(0,1)$ is a Gaussian variable, then for $p\in \mathbb{N}$, $\mathbb{E}_x[x^{2p}]=\frac{(2p)!}{2^p(p!)}\leq 2^pp!$ and $\mathbb{E}_x[x^{2p+1}]=0$.
\end{proposition}
\begin{definition}[Hermite polynomials]
In this paper, we use the normalized Hermite polynomials, which are univariate polynomials forming an orthonormal polynomial basis under the standard normal distribution. Specifically, they are defined by the following equality:
\begin{equation*}
H_n(x)=\frac{(-1)^ne^{\frac{x^2}{2}}}{\sqrt{n!}}\left(\frac{d^n e^{-\frac{x^2}{2}}}{dx^n}\right)
\end{equation*}
\end{definition}
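The orthonormality of these polynomials under $\mathcal{N}(0,1)$, used in the proofs below, can be checked by Monte Carlo. A sketch assuming \texttt{scipy} is available (\texttt{scipy.special.hermitenorm} gives the probabilists' Hermite polynomials $He_n$, so $H_n = He_n/\sqrt{n!}$):
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import hermitenorm

rng = np.random.default_rng(2)
x = rng.normal(size=2_000_000)

def H(n, x):
    return hermitenorm(n)(x) / np.sqrt(factorial(n))

for n in range(4):
    for m in range(4):
        est = (H(n, x) * H(m, x)).mean()   # E[H_n(x) H_m(x)], x ~ N(0,1)
        print(n, m, round(est, 2))         # ~1 if n == m else ~0
\end{verbatim}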
The Hermite polynomials in the above definition form an orthonormal basis of polynomials under the standard normal distribution. For a polynomial $f: \mathbb{R}^n\to \mathbb{R}$, let $f(x)=\sum\limits_{i\in I_n^{\leq p}} f^M_i \prod\limits_{j=1}^n x_j^{i_j}$ and $f(x)=\sum\limits_{i\in I_n^{\leq p}} f^H_i \prod\limits_{j=1}^n H_{i_j}(x_j)$ be its expansions in the basis of monomials and Hermite polynomials respectively ($H_k$ is the Hermite polynomial of order $k$). Let the index set $I_n^{\leq p}=\bigcup\limits_{j=0}^p I_n^j$. We have the following propositions; they are well known and easy to prove, and we include the proofs here for completeness.
\begin{proposition}
For $i\in I_n^p$, $f^M_i=\left(\prod\limits_{j=1}^n \frac{1}{\sqrt{i_j!}}\right) f^H_i$.\label{pro1}
\end{proposition}
\begin{proof}
Consider $i=(i_1,i_2,\dots, i_n)\in I^p_n$; in the monomial expansion, the coefficient of the monomial $M_i=\prod\limits_{j=1}^n x_j^{i_j}$ is $f^M_i$. In the Hermite expansion, since $H_n(x)$ is an order-$n$ polynomial, if the term
$\prod\limits_{j=1}^n H_{i'_j}(x_j)$ contains the monomial $M_i$, we must have $i'_j\geq i_j$; therefore for $i\in I_n^p$ the only term in the Hermite expansion that contains $M_i$ is
$f^H_i\prod\limits_{j=1}^n H_{i_j}(x_j)$ (with $M_i$ as its highest-order monomial). The coefficient of $x_j^{i_j}$ in $H_{i_j}(x_j)$ is $\frac{1}{\sqrt{i_j!}}$, and therefore $f^M_i=\left(\prod\limits_{j=1}^n \frac{1}{\sqrt{i_j!}}\right) f^H_i$.
\end{proof}
\begin{proposition}
For $x\sim \mathcal{N}(0,1)^n$, $E_x[f]=f^H_{0^n}$, $E_x[f^2]=\sum\limits_{i\in I_n^{\leq p}}(f^H_i)^2$ ($0^n$ refers to the index $(0,0,0\cdots 0)\in I_n^0$).\label{pro2}
\end{proposition}
\begin{proof}
Firstly, let $w(x)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ be the PDF of $\mathcal{N}(0,1)$, then
\begin{align*}
\int\limits_{-\infty}^{\infty} H_n(x)w(x)dx & = \frac{(-1)^n}{\sqrt{2\pi n!}} \int\limits_{-\infty}^{\infty}\left[\frac{d^n e^{-x^2/2}}{dx^n}\right]dx\\
& = \left\{
\begin{matrix}
0 & n\geq 1\\
1 & n=0
\end{matrix}
\right.,
\end{align*}
as a result of $\frac{d^n e^{-x^2/2}}{dx^n}\rightarrow 0$ when $x\rightarrow \pm\infty$ for $n\geq 0$. Besides,
\begin{align*}
\int\limits_{-\infty}^{\infty} H_n(x)H_m(x)w(x)dx & = \delta_{nm}
\end{align*}
for its well-known orthogonality under the Gaussian distribution (with $\delta_{nm}=\mathbb{I}[n=m]$ the Kronecker delta). Therefore,
\begin{align*}
E_x[f] & =\sum\limits_{i\in I_n^{\leq p}}f_i^H \prod\limits_{j\in [n]}\int\limits_{-\infty}^{\infty} H_{i_j}(x_j) w(x_j) dx_j \\
& =\sum\limits_{i\in I_n^{\leq p}}f_i^H \prod\limits_{j\in [n]}\mathbb{I}[i_j=0]\\
& =f_{0^n}^H,
\end{align*}
\begin{align*}
E_x[f^2] & =\sum\limits_{i,i'\in I_n^{\leq p}}f_i^Hf_{i'}^H \prod\limits_{j\in [n]}\int\limits_{-\infty}^{\infty} H_{i_j}(x_j)H_{i'_j}(x_j) w(x_j) dx_j \\
& =\sum\limits_{i,i'\in I_n^{\leq p}}f_i^Hf_{i'}^H \prod\limits_{j\in [n]}\mathbb{I}[i_j=i'_j]\\
& =\sum\limits_{i\in I_n^{\leq p}} (f_i^H)^2.
\end{align*}
\end{proof}
Then, we have the following lemma that lower bounds the variance of a polynomial with some structure. Given the following lemma, we can apply the anti-concentration results in the proof of Theorem \ref{thm:main-theorem-random-feature-with-perturbation} and \ref{thm:deterministic}.
\begin{lemma}[Variance]\label{lem:variance}
Let $f(x)=f(x_1,x_2,\dots, x_d)$ be a homogeneous multivariate polynomial of degree $p$; then there is a symmetric tensor $M\in X_d^p$ such that $f(x)=\langle M,x^{\otimes p}\rangle$. For all $x_0\in \mathbb{R}^d$, when $x\sim \mathcal{N}(0,1)^d$,
\begin{equation*}
\mathrm{Var}_x[f(x_0+x)]\geq \Vert M\Vert_{\text{rv}}^2
\end{equation*}
\end{lemma}
\begin{proof} We can view $f(x_0+x)$ as a polynomial with respect to $x$, and let $f^M_i$ and $f^H_i$ be the coefficients of its expansions in the monomial basis and the Hermite polynomial basis respectively (with variable $x$). Note that the reduced vectorized form $(c_i \mid i\in I_d^p)$ of $M$ satisfies $f^M_i = \frac{p!}{\prod_{j} i_j!}\, c_i$, so in particular $|f^M_i| \geq |c_i|$. From Propositions \ref{pro1} and \ref{pro2}, we have
\begin{align*}
\mathrm{Var}[f(x_0+x)]=&\mathbb{E}[f(x_0+x)^2]-\mathbb{E}[f(x_0+x)]^2\\
=&\sum_{i\in I_d^{\leq p}\backslash 0^d}(f^H_i)^2\\
\ge& \sum_{i\in I_d^{p}}(f^H_i)^2\geq\sum_{i\in I_d^{p}} (f^M_i)^2 \geq \sum_{i\in I_d^{p}} c_i^2\\
=&\Vert M\Vert_{\text{rv}}^2.
\end{align*}
\end{proof}
We also need a variance bound for two sets of random variables.
\begin{lemma}\label{lem:variance2}
Let $f(x)=f(x_1,x_2,\dots, x_d)$ be a homogeneous multivariate polynomial of degree $2p$; then there is a symmetric tensor $M\in X_d^{2p}$ such that $f(x)=\langle M,x^{\otimes 2p}\rangle$. For all $u_0, v_0 \in \mathbb{R}^d$, when $u,v\sim \mathcal{N}(0,I_d)$, we have
\begin{equation*}
\mathrm{Var}_{u,v}[\langle M, (u_0+u)^{\otimes p}\otimes(v_0+v)^{\otimes p}\rangle]\geq \frac{1}{(2p)!}\Vert M\Vert_{\text{rv}}^2
\end{equation*}
\end{lemma}
\begin{proof} The proof is similar to Lemma~\ref{lem:variance}. We can view $\langle M, (u_0+u)^{\otimes p}\otimes(v_0+v)^{\otimes p}\rangle$ as a degree-$2p$ polynomial $g$ over $2d$ variables $(u,v)$. Therefore, by Lemma~\ref{lem:variance}, the variance is at least $\|g\|_{\text{rv}}^2$. Note that every element (monomial in the expansion) of $M$ corresponds to at least one element of $g$, and the ratio of the coefficients in the corresponding rv-basis is bounded by $(2p)!$; therefore $\|g\|_{\text{rv}} \ge \frac{1}{(2p)!} \|M\|_{\text{rv}}$, and the lemma follows from Lemma~\ref{lem:variance}.
\end{proof}
\subsection{Proof of Theorem \ref{thm:main-theorem-random-feature-with-perturbation}}\label{subsec:proof-random-feature-perturb}
In this section, we give the formal proof of Theorem \ref{thm:main-theorem-random-feature-with-perturbation}. First recall the setting of Theorem \ref{thm:main-theorem-random-feature-with-perturbation}: we add a small independent Gaussian perturbation $\tilde{x}\sim \mathcal{N}(0,v)^d$ on each sample $x$, and denote $\bar{x}=x+\tilde{x}$. The output of the first layer is $\{z_j\}$ where $z_j(i) = (r_i^\top \bar{x}_j)^p$.
Our goal is to prove that the $z_j$'s satisfy the conditions required by Theorem~\ref{thm:main-theorem-twolayer}; in particular, that the matrix $Z = [z_1^{\otimes 2}, ..., z_n^{\otimes 2}]$ has full column rank, with a lower bound on its smallest singular value. To do that, note that if we let $\bar{X}=[\bar{x}_1^{\otimes 2p},\bar{x}_2^{\otimes 2p},\dots, \bar{x}_n^{\otimes 2p}]$ be the order-$2p$ perturbed data matrix, and $Q$ be the matrix whose $i$-th row is equal to $r_i^{\otimes p}$, then we can write $Z = (Q\otimes Q) \bar X$.
We first show an auxiliary lemma which helps us to bound the smallest singular value of the output matrix $(Q\otimes Q)\bar X$, and then we present our proof for Lemma \ref{lem:smallest-singular-perturb}.
Generally speaking, the proof of Lemma \ref{lem:smallest-singular-perturb} consists of the lower bound of the \emph{Leave-one-out distance} by the anti-concentration property of polynomials and the use of Lemma \ref{lem:loo-distance-and-singular} to bridge the \emph{Leave-one-out distance} and the smallest singular value.
\begin{lemma}\label{lem:projection}
Let $M$ be a $k$-dimensional subspace of the space of symmetric tensors $X_d^p$, and let $\mbox{Proj}_M$ be the projection onto $M$. For any $x\in \mathbb{R}^d$ with perturbation $\tilde{x}\sim \mathcal{N}(0,v)^d$ and $\bar{x}=x+\tilde{x}$,
\begin{equation*}
\Pr\left\{\Vert \mbox{Proj}_M \bar{x}^{\otimes p}\Vert_2 < \left(\frac{k}{(2p)!}\right)^{1/4}v^{\frac{p}{2}}\varepsilon\right\}< O(p)\varepsilon^{1/p}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $m_1,m_2,\dots, m_k\in X_d^p$ be an orthonormal (in $T_d^p$ as a Euclidean space) basis that spans $M$, where each $m_i$ is symmetric. Then $\Vert \mbox{Proj}_M \bar{x}^{\otimes p}\Vert_2 = \sqrt{\sum\limits_{i=1}^k \langle m_i,\bar{x}^{\otimes p}\rangle^2}$. Let $g(x)=\sum\limits_{i=1}^k \langle m_i,x^{\otimes p}\rangle^2=\langle \sum\limits_{i=1}^k m_i^{\otimes 2}, x^{\otimes 2p}\rangle$; then $g(x)$ is a homogeneous polynomial of order $2p$. For any initial value $x$, if $\bar{x} = x+\tilde{x}$, then $\frac{1}{\sqrt{v}} \bar{x} = \frac{1}{\sqrt{v}} x + \frac{1}{\sqrt{v}}\tilde{x}$ is a vector whose random part satisfies $\frac{1}{\sqrt{v}}\tilde{x}\sim \mathcal{N}(0,1)^d$. Therefore by Lemma
\ref{lem:variance}
\begin{equation*}
\mathrm{Var}_x\left[g\left(\frac{1}{\sqrt{v}} \bar{x}\right)\right]\ge \Vert \sum\limits_{i=1}^k m_i^{\otimes 2}\Vert_{\text{rv}}^2\geq \frac{1}{(2p)!} \Vert \sum\limits_{i=1}^k m_i^{\otimes 2}\Vert_{2}^2=\frac{k}{(2p)!} .
\end{equation*}
Hence from Proposition \ref{prop:anti-concentration} we know that, when $\tilde{x}\sim \mathcal{N}(0,v\text{I})$,
\begin{equation*}
\Pr\left\{\Vert \mbox{Proj}_M \bar{x}^{\otimes p}\Vert_2<\left(\frac{k}{(2p)!}\right)^{1/4}v^{p/2}\varepsilon\right\}=\Pr\left\{\bigg|\sqrt{\frac{(2p)!}{k}}g(\frac{\bar{x}}{\sqrt{v}})\bigg|<\varepsilon^2\right\}\leq O(p)\varepsilon^{1/p}.
\end{equation*}
\end{proof}
\begin{lemma}\label{lem:projection2}
Let $M$ be a $k$-dimensional subspace of the space of symmetric tensors $X_d^{2p}$, and let $\mbox{Proj}_M$ be the projection onto $M$. For any $x,y\in \mathbb{R}^d$ with perturbations $\tilde{x},\tilde{y}\sim \mathcal{N}(0,v)^d$, $\bar{x}=x+\tilde{x}$, and $\bar{y}=y+\tilde{y}$, we have
\begin{equation*}
\Pr\left\{\Vert \mbox{Proj}_M (\bar{x}^{\otimes p}\otimes \bar{y}^{\otimes p})\Vert_2 < \left(\frac{k}{((4p)!)^2}\right)^{1/4}v^p\varepsilon\right\}< O(p)\varepsilon^{1/{2p}}.
\end{equation*}
\end{lemma}
\begin{proof}
The proof here is similar to that of Lemma \ref{lem:projection}. Let $m_1,m_2,\dots, m_k\in X_d^{2p}$ be an orthonormal (in $T_d^{2p}$ as a Euclidean space) basis that spans $M$, where each $m_i$ is symmetric. Then $\Vert \mbox{Proj}_M (\bar{x}^{\otimes p}\otimes \bar{y}^{\otimes p})\Vert_2 = \sqrt{\sum\limits_{i=1}^k \langle m_i,(\bar{x}^{\otimes p}\otimes \bar{y}^{\otimes p})\rangle^2}$. Let $g(x,y)=\sum\limits_{i=1}^k \langle m_i,(x^{\otimes p}\otimes y^{\otimes p})\rangle^2=\langle \sum\limits_{i=1}^k m_i^{\otimes 2}, (x^{\otimes p}\otimes y^{\otimes p}\otimes x^{\otimes p}\otimes y^{\otimes p})\rangle = \langle \sum\limits_{i=1}^k m_i^{(2)}, (x^{\otimes 2p}\otimes y^{\otimes 2p})\rangle$ for some tensors $m_i^{(2)}$;
then $g(x,y)$ is a homogeneous polynomial of order $4p$. Notice that $\Vert m_i^{(2)}\Vert_2=\Vert m_i^{\otimes 2}\Vert_2$ by a change of coordinates. For any initial values $x$ and $y$, if $\bar{x} = x+\tilde{x}$ and $\bar{y} = y+\tilde{y}$, then $\frac{1}{\sqrt{v}} \bar{x}$ and $\frac{1}{\sqrt{v}} \bar{y}$ are vectors whose random parts satisfy $\frac{1}{\sqrt{v}}\tilde{x},\frac{1}{\sqrt{v}} \tilde{y}\sim \mathcal{N}(0,1)^d$. Therefore by Lemma \ref{lem:variance2},
\begin{align*}
\mathrm{Var}_{x,y}\left[g\left(\frac{1}{\sqrt{v}} \bar{x},\frac{1}{\sqrt{v}} \bar{y}\right)\right]\ge& \frac{1}{(4p)!}\Vert \sum\limits_{i=1}^k m_i^{(2)}\Vert_{\text{rv}}^2 \\
\geq& \frac{1}{((4p)!)^2} \Vert \sum\limits_{i=1}^k m_i^{( 2)}\Vert_{2}^2\\
=&\frac{1}{((4p)!)^2} \Vert \sum\limits_{i=1}^k m_i^{\otimes 2}\Vert_{2}^2\\
=&\frac{k}{((4p)!)^2} .
\end{align*}
Hence from Proposition \ref{prop:anti-concentration} we know that, when $\tilde{x},\tilde{y}\sim \mathcal{N}(0,v\text{I})$,
\begin{align*}
&\Pr\left\{\Vert \mbox{Proj}_M (\bar{x}^{\otimes p}\otimes \bar{y}^{\otimes p})\Vert_2<\left(\frac{k}{((4p)!)^2}\right)^{1/4}v^{p}\varepsilon\right\}\\
=&\Pr\left\{\bigg|\frac{(4p)!}{\sqrt{k}}g\left(\frac{\bar{x}}{\sqrt{v}},\frac{\bar{y}}{\sqrt{v}}\right)\bigg|<\varepsilon^2\right\}\leq O(p)\varepsilon^{1/{2p}}.
\end{align*}
\end{proof}
Then we can show Lemma \ref{lem:smallest-singular-perturb} as follows.
\lemsmallestsingluarperturb*
In fact, we show a more formal version, which also makes the dependence on $p$ explicit.
\begin{restatable}[Smallest singular value for $(Q\otimes Q)\bar X$ with pertubation]{lemma}{lemsingularlbwithperturb}\label{lem:singular-lb-withperturb}
With $Q$ being the $k\times d^p$ matrix defined as $Q=[r_1^{\otimes p}, r_2^{\otimes p}\cdots r_k^{\otimes p}]^T$ $(r_i\sim \mathcal{N}(\mathbf 0, \text{I}))$, with the perturbed $\bar{X}=[\bar{x}_1^{\otimes 2p},\bar{x}_2^{\otimes 2p}\cdots \bar{x}_n^{\otimes 2p}]$ $(\bar{x}_i=x_i+\tilde{x}_i)$, and with $Z=(Q\otimes Q)\bar{X}$, when $\tilde{x}_i$ is drawn from the i.i.d. Gaussian distribution $\mathcal{N}(\mathbf 0, v\text{I})$, for $2\sqrt{n}\leq k\leq \frac{D^{2p}_d}{D_d^p\binom{2p}{p}}=O_p(d^p)$, with overall probability $\geq 1-O(p\delta)$,
the smallest singular value
\begin{equation}
\sigma_{\min}(Z)\geq\left(\frac{[D_d^{2p}-kD_d^p\binom{2p}{p}][{k+1\choose 2}-n]}{[(4p)!]^3}\right)^{1/4}\frac{v^{p}\delta^{4p}}{n^{2p+1/2}k^{4p}}\label{here1}
\end{equation}
\end{restatable}
\begin{proof}
First, we show that with high probability, the projection of rows of $Q\otimes Q$ in the space of degree $2p$ symmetric polynomials (in this proof we abuse the notation $\mbox{Proj}_{X^{2p}_d}(Q\otimes Q)$ to denote the matrix with rows being the projection of rows of $Q\otimes Q$ onto the space in question) has rank $k_2 := {k+1 \choose 2}$, and moreover give a bound on $\sigma_{k_2}(\mbox{Proj}_{X^{2p}_d}(Q\otimes Q))$.
We do this by bounding the leave-one-out distance of the rows of $\mbox{Proj}_{X^{2p}_d}(Q\otimes Q)$; note that we only consider rows $(i,j)$, given as $\mbox{Proj}_{X^{2p}_d}(r_i^{\otimes p}\otimes r_j^{\otimes p})$, where $1\le i\le j \le k$ (this is because the $(i,j)$-th and $(j,i)$-th rows of $\mbox{Proj}_{X^{2p}_d}(Q\otimes Q)$ are clearly equal).
The main difficulty here is that different rows of $\mbox{Proj}_{X^{2p}_d}(Q\otimes Q)$ can be correlated. We solve this problem using a technique similar to \cite{ma2016polynomial}.
For any $1\le i\le j \le k$, fix the randomness for $r_l$ where $l\ne i,j$. Consider the subspace $S_{(i,j)} := \mbox{span}\{\mbox{Proj}_{X^{2p}_d}(r_l^{\otimes p} \otimes x^{\otimes p}), x\in \mathbb{R}^d, l\ne i,j\}$. The dimension of this subspace is bounded by $k\cdot D^p_d \cdot {2p\choose p}$ (as there are ${2p\choose p}$ ways to place $p$ copies of $r_l$ and $p$ copies of $x$). Note that any other row of $\mbox{Proj}_{X^{2p}_d}(Q\otimes Q)$ must be in this subspace.
Now by Lemma~\ref{lem:projection2}, the projection of row $(i,j)$ onto the orthogonal complement of $S_{(i,j)}$ has norm less than $\left(\frac{D_{d}^{2p}-kD_d^p{2p\choose p}}{((4p)!)^2}\right)^{1/4}\varepsilon$ with probability at most $O(p)\varepsilon^{1/2p}$. Thus by a union bound over all the rows, with probability at least $1-O(p\delta)$, the leave-one-out distance is at least
\begin{equation*}
l(\mbox{Proj}_{X^{2p}_d}(Q\otimes Q))\geq \left(\frac{D_d^{2p}-kD_d^p{2p\choose p}}{((4p)!)^2}\right)^{1/4}\left(\frac{\delta}{\binom{k+1}{2}}\right)^{2p},
\end{equation*}
and by Lemma \ref{lem:loo-distance-and-singular} the minimal absolute singular value $\sigma_{\min}(\mbox{Proj}_{X^{2p}_d}(Q\otimes Q))\geq \frac{l\left(\mbox{Proj}_{X^{2p}_d}(Q\otimes Q)\right)}{\sqrt{\binom{k+1}{2}}}$.
Next, let $V(Q\otimes Q)$ be the row space of $\mbox{Proj}_{X^{2p}_d}(Q\otimes Q)$, which as we just showed has dimension ${k+1\choose 2}$. We wish to show that the projections of the columns of $\bar X$ onto $V(Q\otimes Q)$ have a large leave-one-out distance, and thus $(Q\otimes Q)\bar X$ has a large minimal singular value.
Actually for each $i$, the subspace (which for simplicity will be denoted as $V_{-i}(Q\otimes Q)$) of $V(Q\otimes Q)$ orthogonal to $\mbox{span}\{\bar{x}_j^{\otimes 2p}\mid j\neq i\}$ has dimension ${k+1\choose 2}-n+1$ almost surely. Conditioning on $\{\bar{x}_j\mid j\neq i\}$ and applying Lemma \ref{lem:projection} (with order $2p$) to $\bar{x}_i$, by a union bound, with probability $1-O(p)\tau^{1/2p}n=1-O(p\delta)$, for all $i$,
\begin{equation*}
\Vert P_{V_{-i}(Q\otimes Q)} (\bar{x}_i^{\otimes 2p})\Vert_2 \geq \left(\frac{{k+1\choose 2}-n}{(4p)!}\right)^{1/4}v^{p}\tau,
\end{equation*}
thus with probability $1-O(p\delta)$, for any vector $c\in \mathbb{R}^n$ with $\Vert c\Vert_2=1$, letting $i^*=\mathop{\mathrm{argmax}}_{i}|c_i|$ (so that $|c_{i^*}|\geq \frac{1}{\sqrt{n}}$), we have
\begin{align*}
\Vert(Q\otimes Q)\bar{X}c\Vert_2 \geq& \sigma_{\min}(\mbox{Proj}_{X^{2p}_d}(Q\otimes Q))\,|c_{i^*}|\, \Vert \mbox{Proj}_{V(Q\otimes Q)} \bar{X}\tfrac{c}{|c_{i^*}|}\Vert_2 \\
\geq& \frac{\sigma_{\min}(\mbox{Proj}_{X^{2p}_d}(Q\otimes Q))}{\sqrt{n}}\Vert \mbox{Proj}_{V_{-{i^*}}(Q\otimes Q)} (\bar{x}_{i^*}^{\otimes 2p})\Vert_2 \\
\geq& \left(\frac{[D_d^{2p}-kD_d^p\binom{2p}{p}][{k+1\choose 2}-n]}{[(4p)!]^3}\right)^{1/4}\frac{v^{p}\delta^{4p}}{n^{2p+1/2}k^{4p}}.
\end{align*}
This proves Lemma \ref{lem:singular-lb-withperturb}.
\end{proof}
A minor requirement on the $z_j$'s is that they all have bounded norm. This is much easier to prove.
\begin{restatable}[Norm upper bound for $Q\bar x^{\otimes p}$]{lemma}{lemnormupperboundperturb}\label{lem:norm-upperbound-perturb}
Suppose that $||x_j||_2 \le B$ for all $j\in [n]$ and $\bar x_j = x_j + \tilde x_j$ where $\tilde x_j\sim \mathcal{N}(\mathbf 0, v\text{I})$. Following the previous notation, $Q = [r_1^{\otimes p},\dots,r_k^{\otimes p}]^T\in\mathbb{R}^{k\times d^{p}}$. Then with probability at least $1-\frac{\delta}{\sqrt{2\pi\ln((k+n)d\delta^{-1/2})}(k+n)d}$, for all $i\in [n]$, we have
\[||Q\bar x_i^{\otimes p}||_2 \le \sqrt{k}\left(2(B + 2\sqrt{vd\ln ((k+n)d\delta^{-1/2})})\sqrt{d\ln ((k+n)d\delta^{-1/2})}\right)^p.\]
\end{restatable}
\begin{proof}
First, for a standard normal random variable $N\sim\mathcal{N}(0,1)$, we have
\[\Pr\{|N| \ge x\} \le \frac{\sqrt{2}}{\sqrt{\pi}x}e^{-\frac{x^2}{2}}.\]
Then, applying the union bound, we have that with probability at least $1-\frac{\delta}{\sqrt{2\pi\ln((k+n)d\delta^{-1/2})}(k+n)d}$, for all $l\in [k], i\in [d], j\in[n], \ell\in [d]$ (assuming $\delta<1$), we have
\[|(r_l)_i| \le 2\sqrt{\ln ((k+n)d\delta^{-1/2})}, |(\tilde x_j)_{\ell}| \le 2\sqrt{v\ln ((k+n)d\delta^{-1/2})}.\]
Then for all $j\in [n]$, we have
\[||\bar x_j||_2 \le ||x_j||_2 + ||\tilde x_j||_2 \le B + 2\sqrt{vd\ln ((k+n)d\delta^{-1/2})}.\]
If for all $i\in [d], l\in [k]$, $|(r_l)_i| < 2\sqrt{\ln ((k+n)d\delta^{-1/2})}$, then for any $\bar{x}$ such that
\[||\bar{x}|| \le B + 2\sqrt{vd\ln ((k+n)d\delta^{-1/2})}\]
and any $l\in [k]$, we have
\begin{align*}
|\left((r_l)^{\otimes p}\right)^T\bar{x}^{\otimes p}|
=& |(r_l^T\bar{x})^p|\\
\le& (||r_l|| \cdot ||\bar{x}||)^p\\
\le& \left(2(B + 2\sqrt{vd\ln ((k+n)d\delta^{-1/2})})\sqrt{d\ln ((k+n)d\delta^{-1/2})}\right)^p.
\end{align*}
Then we have
\[||Q\bar{x}^{\otimes p}||_2 \le \sqrt{k}\left(2(B + 2\sqrt{vd\ln ((k+n)d\delta^{-1/2})})\sqrt{d\ln ((k+n)d\delta^{-1/2})}\right)^p.\]
\end{proof}
Then, combining the previous lemmas, which lower bound the smallest singular value (Lemma \ref{lem:singular-lb-withperturb}) and upper bound the norm (Lemma \ref{lem:norm-upperbound-perturb}) of the outputs of the random feature layer, with Theorem \ref{thm:main-theorem-twolayer}, we have the following Theorem \ref{thm:main-theorem-random-feature-with-perturbation}.
\thmrandomfeaturewithperturb*
\begin{proof}
From the above lemmas, we know that with probability at least $1-O(p\delta)$, after the random featuring, the following hold:
\begin{enumerate}
\item $\sigma_{\min}((Q\otimes Q)\bar{X})\geq\left(\frac{[D_d^{2p}-kD_d^p\binom{2p}{p}][{k+1\choose 2}-n]}{[(4p)!]^3}\right)^{1/4}\frac{v^{p}\delta^{4p}}{p^{4p}n^{2p+1/2}k^{4p}}$
\item $\Vert Q\bar{x}_j^{\otimes p}\Vert_2 \le \sqrt{k}\left(2(B + 2\sqrt{vd\ln ((k+n)d\delta^{-1/2})})\sqrt{d\ln ((k+n)d\delta^{-1/2})}\right)^p$ for all $j\in[n]$.
\end{enumerate}
Thereby, considering the PGD algorithm on $W$: since the random feature layer outputs $[(r_i^T \bar{x}_j)^p]=Q[\bar{x}_j^{\otimes p}]$, so that $[(r_i^T \bar{x}_j)^{2p}]=(Q\otimes Q)\bar{X}$, the singular value condition and norm condition above allow us to apply Theorem \ref{thm:main-theorem-twolayer} and obtain the result in the theorem.
\end{proof}
\subsection{Proof of Theorem \ref{thm:deterministic}}\label{subsec:proof-deterministic}
In this section, we show the proof of Theorem \ref{thm:deterministic}. In the setting of Theorem \ref{thm:deterministic}, we do not add perturbation onto the samples, and the only randomness is the randomness of parameters in the random feature layer.
Recall that $Q\in\mathbb{R}^{k\times d^p}$ is defined as $Q=[r_1^{\otimes p}, r_2^{\otimes p}\cdots r_k^{\otimes p}]^T$. We show that when each $r_i$ is sampled i.i.d. from the normal distribution $\mathcal{N}(0,1)^d$ and $k$ is large enough, with high probability $Q$ has robustly full column rank (i.e., its smallest singular value, restricted to symmetric tensors, is bounded away from zero). Let $N_{\varepsilon}$ and $N_{\sigma}$ be an $\varepsilon$-net and a $\sigma$-net of $\bar{X}_d^p$ with sizes $Z_{\varepsilon}$ and $Z_{\sigma}$, respectively.
The following lemmas (Lemmas \ref{prop1_L5}, \ref{prop2_L5} and \ref{prop3_L5}) apply the standard $\varepsilon$-net argument and lead to a lower bound on the smallest singular value of the matrix $Q$ (Lemma \ref{lem:leastSVQ}). We then derive the smallest singular value of the matrix $(Q\otimes Q)X$ (Lemma \ref{lem:singular-lb-noperturb}).
Note that unlike the $Q$ matrix in the previous section, the $Q$ matrix in this section has more rows than columns (when restricted to symmetric tensors), so it has full column rank, whereas the $Q$ matrix in the previous section has full row rank. This is why we cannot reuse the same approach to bound the smallest singular value of $Q$.
\begin{lemma}\label{prop1_L5}
For some constant $C$, with probability at least $1-Z_{\varepsilon}\left(Cp\eta^{1/p}\right)^k$, for all $c\in N_\varepsilon$, we have
\[\Vert Qc\Vert_2^2 \geq \frac{\eta^2}{p!}.\]
\end{lemma}
\begin{proof}
For any $c\in \bar{X}^p_d$, by Lemma \ref{lem:rv-metric}, $\Vert c\Vert_{\text{rv}}\geq \frac{1}{\sqrt{p!}}$.
Let $f(r)=c^T r^{\otimes p}$, then $f$ is a polynomial of degree $p$ with respect to $r$, and therefore by Lemma \ref{lem:variance},
\[\mathop{\text{Var}}\limits_{r\sim \mathcal{N}(0,1)^d}[f(r)]\geq \Vert c\Vert_{\text{rv}}^2\geq\frac{1}{p!}.\]
Thus by Proposition \ref{prop:anti-concentration},
\begin{equation*}
\Pr\limits_{r\sim \mathcal{N}(0,1)^d}\left\{\big|f(r)\big|< \frac{\eta}{\sqrt{p!}}\right\}\leq O(p)\eta^{1/p}.
\end{equation*}
Therefore, as $\Vert Qc\Vert_2^2=\sum\limits_{i=1}^k f(r_i)^2$,
\begin{align*}
\Pr\limits_{r_1,r_2\cdots r_k\sim \mathcal{N}(0,1)^d}\left\{\Vert Qc\Vert_2^2<\frac{\eta^2}{p!}\right\}\leq & \Pr\limits_{r_1,r_2\cdots r_k\sim \mathcal{N}(0,1)^d}\left\{\forall r_i: |f(r_i)|<\frac{\eta}{\sqrt{p!}}\right\}\\
\leq& \left(O(p)\eta^{1/p}\right)^k.
\end{align*}
Therefore, for some constant $C$, for each $c\in \bar{X}^p_d$, the probability that $\Vert Qc \Vert_2^2<\frac{\eta^2}{p!}$ is at most $\left(Cp\eta^{1/p}\right)^k$. Thus, by the union bound, the probability that this happens for some $c\in N_\varepsilon$ is at most $Z_\varepsilon\left(Cp\eta^{1/p}\right)^k$, and thereby the proof is completed.
\end{proof}
\begin{lemma}\label{prop2_L5}
For $\tau>0$, with probability $1-O\left(Z_{\sigma}\left(\frac{\sqrt{k}}{\tau}\right)^{1/p}ke^{-\frac{1}{2}\left(\frac{\tau}{\sqrt{k}}\right)^{2/p}}\right)$,
for each $c\in N_\sigma$, $\Vert Qc\Vert_2\leq \tau$.
\end{lemma}
\begin{proof}
For any $c\in \bar{X}^p_d$,
\begin{align*}
\Pr\limits_{Q}\left\{\Vert Qc\Vert_2^2> \tau^2\right\}\leq& \Pr\limits_{r_1,r_2\cdots r_k\sim \mathcal{N}(0,1)^d}\left\{\exists i: |c^T r_i^{\otimes p}|>\frac{\tau}{\sqrt{k}}\right\}\\
\leq& k\Pr\limits_{r\sim \mathcal{N}(0,1)^d}\left\{|c^T r^{\otimes p}|>\frac{\tau}{\sqrt{k}}\right\}.
\end{align*}
Furthermore,
\begin{align*}
\Pr\limits_{r\sim \mathcal{N}(0,1)^d}\left\{|c^T r^{\otimes p}|>\frac{\tau}{\sqrt{k}}\right\} \leq& \Pr\limits_{r\sim \mathcal{N}(0,1)^d}\left\{\Vert c\Vert_2 \Vert r\Vert_2^p>\frac{\tau}{\sqrt{k}}\right\}\\
=&\Pr\limits_{r\sim \mathcal{N}(0,1)^d}\left\{\Vert r\Vert_2>\left(\frac{\tau}{\sqrt{k}}\right)^{1/p}\right\}\\
\leq& O\left(\left(\frac{\sqrt{k}}{\tau}\right)^{1/p}e^{-\frac{1}{2}\left(\frac{\tau}{\sqrt{k}}\right)^{2/p}}\right).
\end{align*}
Therefore for the $\sigma$-net $N_{\sigma}$, with a union bound we know with probability at least
\[1-O\left(Z_{\sigma}\left(\frac{\sqrt{k}}{\tau}\right)^{1/p}ke^{-\frac{1}{2}\left(\frac{\tau}{\sqrt{k}}\right)^{2/p}}\right),
\]
for all $c\in N_{\sigma}$, $\Vert Qc\Vert_2^2\leq\tau^2$.
\end{proof}
\begin{lemma}\label{prop3_L5}
For $\sigma<1$, $\tau>0$, with probability at least $ 1-O\left(Z_{\sigma}\left(\frac{\sqrt{k}}{\tau}\right)^{1/p}ke^{-\frac{1}{2}\left(\frac{\tau}{\sqrt{k}}\right)^{2/p}}\right)$, we have
for each $c\in \bar{X}_d^p$, $\Vert Qc\Vert_2\leq \frac{\tau}{1-\sigma}$.
\end{lemma}
\begin{proof}
We first show that, given $N_\sigma$, for each $c\in\bar{X}^p_d$ we can find $c_1,c_2,c_3\cdots\in N_\sigma$ and $a_1,a_2,a_3\cdots\in\mathbb{R}$ such that
\begin{equation*}
c=\sum\limits_{i\geq 1}a_ic_i,
\end{equation*}
and that $a_1=1$, $0\leq a_i\leq \sigma a_{i-1}$ ($i\geq 2$). Thus $a_i\leq \sigma^{i-1}$.
In fact, we can construct the sequence by induction. Let $I: \bar{X}_d^p\to N_{\sigma}$ be the map defined by
\[I(x)=\mathop{\text{argmin}}\limits_{y\in N_{\sigma}}\Vert y-x\Vert_2.\]
We take $c_1=I(c)$, $a_1=1$, and recursively
\[a_i=\bigg\Vert c-\sum\limits_{j=1}^{i-1}a_jc_j\bigg\Vert_2,\quad c_i=I\left(\frac{c-\sum\limits_{j=1}^{i-1}a_jc_j}{a_i}\right).\]
By definition, for any $c\in \bar{X}_d^p$, $\Vert c-I(c)\Vert_2\leq\sigma$, and therefore
\[\bigg\Vert\frac{c-\sum_{j=1}^{i-1}a_jc_j}{a_i}-c_i\bigg\Vert_2\leq\sigma,\]
which shows that $0\leq a_{i+1}=\Vert c-\sum\limits_{j=1}^{i}a_jc_j\Vert_2\leq\sigma a_i$, and by induction $a_i\leq \sigma^{i-1}$.
We know from Lemma \ref{prop2_L5} that with probability at least $ 1-O\left(Z_{\sigma}\left(\frac{\sqrt{k}}{\tau}\right)^{1/p}ke^{-\frac{1}{2}\left(\frac{\tau}{\sqrt{k}}\right)^{2/p}}\right)$, for all $c_i\in N_\sigma$, $\Vert Qc_i\Vert_2\leq \tau$, and therefore
\begin{equation*}
\Vert Qc\Vert_2\leq\sum\limits_{i\geq 1}a_i\Vert Qc_i\Vert_2 \leq \sum\limits_{i\geq 1}\sigma^{i-1}\tau = \frac{\tau}{1-\sigma}.
\end{equation*}
\end{proof}
\begin{lemma}[Least singular value of $Q$]\label{lem:leastSVQ}
If $Q$ is the $k\times d^p$ matrix defined as $Q=[r_1^{\otimes p}, r_2^{\otimes p}\cdots r_k^{\otimes p}]^T$ with $r_i$ drawn i.i.d. from the Gaussian distribution $\mathcal{N}(0,\text{I})$, then there exists a constant $G_0>0$ such that for $k=\alpha pD_d^p$ ($\alpha>1$), with probability at least $1-o(1)\delta$, the rows of $Q$ span $X_d^p$, and for all $c\in\bar{X}_d^p$,
\begin{equation*}
\Vert Qc\Vert_2\geq \Omega\left(\frac{\delta^{\left(\frac{1}{(\alpha-1)D_d^p}\right)}}{\left(p^p\sqrt{p!}\right)^{\frac{\alpha}{\alpha-1}}\left(k(G_0 p\ln pD_d^p)^p\right)^{\frac{1}{2(\alpha-1)}}}\right)=\Omega_p\left(\frac{\delta^{\left(\frac{1}{(\alpha-1)D_d^p}\right)}}{k^{\frac{p+1}{2(\alpha-1)}}}\right),
\end{equation*}
where $\Omega_p$ is the big-$\Omega$ notation that treats $p$ as a constant.
\end{lemma}
\begin{proof}
We show that with high probability, for all $c\in \bar{X}^p_d$, $\Vert Qc\Vert^2_2 = \sum\limits_{i=1}^k \left([r_i^{\otimes p}]^T c\right)^2$ is large. To do this we will adopt an $\varepsilon$-net argument over all possible $c$.
First, we take the parameters
\[\sigma=\frac{1}{10},\quad \tau=\sqrt{k \left(2\log\frac{Z_\sigma k}{\delta}\right)^p},\quad\text{and}\quad \varepsilon=c_0 \frac{\delta^{\left(\frac{1}{(\alpha-1)D_d^p}\right)}}{\left(\tau p^p\sqrt{p!}\right)^{\frac{\alpha}{\alpha-1}}},\]
for a small constant $c_0$ such that $c_0 C^pD^{\frac{1}{(\alpha-1)D}} \ll 1$, and $\eta=\frac{20}{9}\varepsilon\tau\sqrt{p!}$. From Lemmas \ref{prop1_L5} and \ref{prop3_L5}, we know that with probability at least
\begin{align*}
&1-Z_\varepsilon\left(Cp\eta^{1/p}\right)^{k}-O\left(Z_{\sigma}\left(\frac{\sqrt{k}}{\tau}\right)^{1/p}ke^{-\frac{1}{2}\left(\frac{\tau}{\sqrt{k}}\right)^{2/p}}\right)\\
=& 1-O\left(c_0^{(\alpha-1)D_d^p}2^{D_d^p}C^kD\delta\right)-O\left(\frac{\delta}{\sqrt{2\log\frac{Z_\sigma k}{\delta}}}\right)\\
=&1-o(1)\delta,
\end{align*}
the following holds true:
\begin{enumerate}
\item $\forall c_i\in N_\varepsilon$, $\Vert Qc_i\Vert_2\geq\frac{\eta}{\sqrt{p!}}$;
\item $\forall c\in\bar{X}_d^p$, $\Vert Qc\Vert_2\leq \frac{\tau}{1-\sigma}=\frac{\eta}{2\varepsilon\sqrt{p!}}$.
\end{enumerate}
Therefore, for any $c\in \bar{X}_d^p$, letting $i^*=\mathop{\text{argmin}}\limits_{i:c_i\in N_{\varepsilon}}\Vert c-c_i\Vert_2$, we know
\begin{align*}
\Vert Qc\Vert_2\geq& \Vert Qc_{i^*}\Vert_2 - \Vert Q(c-c_{i^*})\Vert_2\\
\geq& \frac{\eta}{\sqrt{p!}}- \Vert c-c_{i^*}\Vert_2\bigg\Vert Q\frac{c-c_{i^*}}{\Vert c-c_{i^*}\Vert_2}\bigg\Vert_2\\
\geq& \frac{\eta}{\sqrt{p!}}- \varepsilon\frac{\eta}{2\varepsilon\sqrt{p!}}\\
=&\frac{\eta}{2\sqrt{p!}},
\end{align*}
and by definition we know that $\lambda_{\min}(Q)\geq \frac{\eta}{2\sqrt{p!}}$. By Lemma \ref{lem:eps-net}, with $\log Z_\sigma=O(p\ln pD_d^p)\leq G_0 p\ln pD_d^p$ for some constant $G_0$, this gives us the lemma.
\end{proof}
\begin{restatable}[Smallest singular value for $(Q\otimes Q)X$ without perturbation]{lemma}{lemsingularlbnoperturb}\label{lem:singular-lb-noperturb}
With $Q$ being the $k\times d^p$ matrix defined as $Q=[r_1^{\otimes p}, r_2^{\otimes p}\cdots r_k^{\otimes p}]^T$, $X$ being the $d^{2p}\times n$ matrix defined as $X = [x_1^{\otimes 2p},\dots,x_n^{\otimes 2p}]\in \mathbb{R}^{d^{2p} \times n}$, and $Z=(Q\otimes Q)X$, for $k=\alpha pD_d^p$ ($\alpha>1$), when the $r_i$ are drawn i.i.d. from the Gaussian distribution $\mathcal{N}(\mathbf 0, \text{I})$, there exists a constant $G_0>0$ such that with probability $\geq 1-o(1)\delta$, the smallest singular value of $Z$ satisfies \begin{equation}\sigma_{\min}(Z)\geq \Omega\left(\frac{\delta^{\left(\frac{2}{(\alpha-1)D_d^p}\right)}\sigma_{\min}(X)}{\left(p^p\sqrt{p!}\right)^{\frac{2\alpha}{\alpha-1}}\left[k(G_0p\ln pD_d^p)^p\right]^{\frac{1}{(\alpha-1)}}}\right)=\Omega_p\left(\frac{\delta^{\left(\frac{2}{(\alpha-1)D_d^p}\right)}}{k^{\frac{p+1}{(\alpha-1)}}}\right)\sigma_{\min}(X)
\end{equation}
(where $\Omega_p$ is the big-$\Omega$ notation that treats $p$ as a constant). Furthermore, for $k=\Omega(p^2 D^p_d)$, with high probability $1-\delta$, $\sigma_{\min}(Z)\geq \Omega(\frac{\sigma_{\min}(X)}{k})$ (if $\delta$ is not exponentially small).
\end{restatable}
\begin{proof}
From Lemma \ref{lem:leastSVQ}, with probability $\geq 1-o(1)\delta$, for all $c\in\bar{X}_d^p$,
\[\Vert Qc\Vert_2\geq\Delta= \Omega\left(\frac{\delta^{\left(\frac{1}{(\alpha-1)D_d^p}\right)}}{\left(p^p\sqrt{p!}\right)^{\frac{\alpha}{\alpha-1}}\left(k(G_0 p\ln pD_d^p)^p\right)^{\frac{1}{2(\alpha-1)}}}\right).\]
Then, from linear algebra, we know for all $s\in\bar{X}_d^p\otimes \bar{X}_d^p$,
$\Vert(Q\otimes Q)s\Vert_2\geq \Delta^2$: indeed, for $s=c_1\otimes c_2$ with $c_1,c_2\in\bar{X}_d^p$ we have $(Q\otimes Q)s=(Qc_1)\otimes(Qc_2)$, whose norm is $\Vert Qc_1\Vert_2\Vert Qc_2\Vert_2\geq\Delta^2$. As $\bar{X}_d^{2p}\subset\bar{X}_d^p\otimes \bar{X}_d^p$,
\begin{align*}
\sigma_{\min}((Q\otimes Q)X)=&\inf\limits_{u\in \mathbb{R}^n, \Vert u\Vert_2=1}\Vert (Q\otimes Q)Xu\Vert_2\\
=&\inf\limits_{u\in \mathbb{R}^n, \Vert u\Vert_2=1}\bigg\Vert (Q\otimes Q)\frac{Xu}{\Vert Xu\Vert_2}\bigg\Vert_2\Vert Xu\Vert_2\geq \Delta^2\inf\limits_{u\in \mathbb{R}^n, \Vert u\Vert_2=1}\Vert Xu\Vert_2=\Delta^2 \sigma_{\min}(X),
\end{align*}
which gives us Lemma \ref{lem:singular-lb-noperturb}.
\end{proof}
Besides the lower bound for the smallest singular value, we also need the following lemma to show that with high probability, the norm is upper bounded.
\begin{restatable}[Norm upper bound for $Qx^{\otimes p}$]{lemma}{lemnormupperbound}\label{lem:norm-upperbound}
Suppose that $||x_i||_2 \le B$ for all $i\in [n]$, and $Q = [r_1^{\otimes p},\dots,r_k^{\otimes p}]^T\in\mathbb{R}^{k\times d^{p}}$. Then with probability at least $1-\frac{\delta}{\sqrt{2\pi\ln(kd\delta^{-1/2})}kd}$, for all $i\in [n]$, we have
\[||Qx_i^{\otimes p}||_2 \le \sqrt{k}\left(2B\sqrt{d\ln (kd\delta^{-1/2})}\right)^p.\]
\end{restatable}
\begin{proof}
First, for a standard normal random variable $N\sim\mathcal{N}(0,1)$, we have
\[\Pr\{|N| \ge x\} \le \frac{\sqrt{2}}{\sqrt{\pi}x}e^{-\frac{x^2}{2}}.\]
Then, applying the union bound, we have
\begin{align*}
\Pr\left\{\exists i\in [d], j\in [k], |(r_j)_i| \ge 2\sqrt{\ln (kd\delta^{-1/2})}\right\} \le& kd\frac{\sqrt{2}}{\sqrt{\pi}2\sqrt{\ln (kd\delta^{-1/2})}}\exp{(-2\ln(kd\delta^{-1/2}))}\\
=& \frac{\delta}{\sqrt{2\pi\ln(kd\delta^{-1/2})}kd}.
\end{align*}
If for all $i\in [d], j\in [k]$, $|(r_j)_i| < 2\sqrt{\ln (kd\delta^{-1/2})}$, then for any $x$ such that $||x|| \le B$ and any $k_0\in [k]$, we have
\begin{align*}
|\left((r_{k_0})^{\otimes p}\right)^Tx^{\otimes p}| =&|(r_{k_0}^Tx)^p|\\
\le& (||r_{k_0}|| \cdot ||x||)^p\\
\le& (2B\sqrt{d\ln (kd\delta^{-1/2})})^p.
\end{align*}
Then we have
\[||Qx^{\otimes p}||_2 \le \sqrt{k}\left(2B\sqrt{d\ln (kd\delta^{-1/2})}\right)^p.\]
\end{proof}
Then, combining the previous lemmas with Theorem \ref{thm:main-theorem-twolayer}, we obtain the following Theorem \ref{thm:deterministic}.
\thmdeterministic*
\begin{proof}
From the above lemmas, we know that, each with probability $1-o(1)\delta$, after the random feature layer the following hold:
\begin{enumerate}
\item There exists a constant $G_0$ such that $\sigma_{\min}((Q\otimes Q)X)\geq\frac{\delta^{\left(\frac{2}{(\alpha-1)D_d^p}\right)}\sigma_{\min}(X)}{\left(p^p\sqrt{p!}\right)^{\frac{2\alpha}{\alpha-1}}\left[k(G_0p\ln pD_d^p)^p\right]^{\frac{1}{(\alpha-1)}}}$
\item $\Vert Qx_j^{\otimes p}\Vert_2 \le \sqrt{k}(2B\sqrt{d\ln (kd\delta^{-1/2})})^p$ for all $j\in[n]$.
\end{enumerate}
Thereby, considering the PGD algorithm on $W$: the random feature layer outputs $[(r_i^T x_j)^p]=Q[x_j^{\otimes p}]$, so that $[(r_i^T x_j)^{2p}]=(Q\otimes Q)X$; given the singular value condition and the norm condition above, Theorem \ref{thm:main-theorem-twolayer} yields the result in the theorem.
\end{proof}
\section{Introduction}
In deep learning, highly non-convex objectives are optimized by simple algorithms such as stochastic gradient descent. However, it was observed that neural networks are able to fit the training data perfectly, even when the data/labels are randomly corrupted \citep{zhang2016understanding}. Recently, a series of works (\citet{du2019gradient, allen2019convergence, chizat2018global, jacot2018neural}; see more references in Section~\ref{sec:related}) developed a theory of neural tangent kernels (NTK) that explains the success of training neural networks through overparametrization. Several results showed that if the number of neurons at each layer is much larger than the number of training samples, networks of different architectures (multilayer/recurrent) can all fit the training data perfectly.
However, if one considers the number of parameters required by the current theoretical analysis, these networks are highly overparametrized. Consider fully connected networks for example. If a two-layer network has a hidden layer with $r$ neurons, the number of parameters is at least $rd$ where $d$ is the dimension of the input. For deeper networks, if it has two consecutive hidden layers of size $r$, then the number of parameters is at least $r^2$. All of the existing works require the number of neurons $r$ per layer to be at least the number of training samples $n$ (in fact, most of them require $r$ to be a polynomial of $n$). In these cases, the number of parameters can be at least $nd$ or even $n^2$ for deeper networks, which is much larger than the number of training samples $n$. Therefore, a natural question is whether neural networks can fit the training data in the mildly overparametrized regime - where the number of (trainable) parameters is only a constant factor larger than the number of training samples. To achieve this, one would want to use a small number of neurons in each layer - $n/d$ for a two-layer network and $\sqrt{n}$ for a three-layer network. \citet{yun2018small} showed such networks have enough capacity to memorize any training data. In this paper we show that with polynomial activation functions, simple optimization algorithms are guaranteed to find a solution that memorizes the training data.
\subsection{Our Results}
In this paper, we give network architectures (with polynomial activations) such that every hidden layer has size much smaller than the number of training samples $n$, the total number of parameters is linear in $n$, and simple optimization algorithms on such neural networks can fit any training data.
We first give a warm-up result that works when the number of training samples is roughly $d^2$ (where $d$ is the input dimension).
\begin{theorem}[Informal]\label{thm:main1:informal} Suppose there are $n \le {d+1 \choose 2}$ training samples in general position, there exists a two-layer neural network with quadratic activations, such that the number of neurons in the hidden layer is $2d+2$, the total number of parameters is $O(d^2)$, and perturbed gradient descent can fit the network to any output.
\end{theorem}
Here ``in general position'' will be formalized later as a deterministic condition that is true with probability 1 for random inputs, see Theorem~\ref{thm:main-theorem-twolayer} for details.
In this case, the number of hidden neurons is only roughly the square root of the number of training samples, so the weights for these neurons need to be trained carefully in order to fit the data. Our analysis relies on an analysis of optimization landscape - we show that every local minimum for such neural network must also be globally optimal (and has 0 training error). As a result, the algorithm can converge from an arbitrary initialization.
Of course, the result above is limited as the number of training samples cannot be larger than $O(d^2)$. We can extend the result to handle a larger number of training samples:
\begin{theorem}[Informal]\label{thm:main2:informal} Suppose the number of training samples satisfies $n \le d^p$ for some constant $p$. If the training samples are in general position, there exists a three-layer neural network with polynomial activations, such that the number of neurons $r$ in each layer is $O_p(\sqrt{n})$, and perturbed gradient descent on the middle layer can fit the network to any output.
\end{theorem}
Here $O_p$ considers $p$ as a constant and hides constant factors that only depend on $p$. We consider ``in general position'' in the smoothed analysis framework \citep{spielman2004smoothed} - given arbitrary inputs $x_1,x_2,...,x_n \in \mathbb{R}^d$, fix a perturbation radius $\sqrt{v}$; the actual inputs are $\bar{x}_j = x_j+\tilde{x}_j$ where $\tilde{x}_j\sim N(0, vI)$. The guarantee of the training algorithm will depend inverse polynomially on the perturbation $v$ (note that the architecture - in particular the number of neurons - is independent of $v$). The formal result is given in Theorem~\ref{thm:main-theorem-random-feature-with-perturbation} in Section~\ref{sec:proof-sketch-random-feature}. Later we also give a deterministic condition for the inputs, and prove a slightly weaker result (see Theorem~\ref{thm:deterministic}).
\subsection{Related Works}
\label{sec:related}
\paragraph{Neural Tangent Kernel} Many results in the framework of neural tangent kernels show that networks of different architectures can all memorize the training data, including two-layer \citep{du2019gradient}, multi-layer \citep{du2018gradient2, allen2019convergence, zou2019improved}, and recurrent neural networks \citep{allen2018convergence}. However, all of these works require the number of neurons in each layer to be at least quadratic in the number of training samples. \citet{oymak2019towards} improved the number of neurons required for two-layer networks, but their bound is still larger than the number of training samples. There are also more works on NTK covering generalization guarantees (e.g., \citet{allen2018learning}), fine-grained analysis for specific inputs \citep{arora2019fine} and empirical performance \citep{arora2019exact}, but they are not directly related to our results.
\paragraph{Representation Power of Neural Networks} For standard neural networks with ReLU activations, \cite{yun2018small} showed that networks of similar size as Theorem~\ref{thm:main2:informal} can memorize any training data. Their construction is delicate and it is not clear whether gradient descent can find such a memorizing network efficiently.
\paragraph{Matrix Factorizations} Since the activation function of our two-layer net is quadratic, training the network is very similar to the matrix factorization problem. Many existing works analyzed the optimization landscape and implicit bias of problems related to matrix factorization in various settings \citep{bhojanapalli2016global, ge2016matrix, ge2017no, park2016non, gunasekar2017implicit, li2018algorithmic, arora2019implicit}. In this line of work, \citet{du2018power} is the most similar to our two-layer result, where they showed how gradient descent can learn a two-layer neural network that represents any positive semidefinite matrix. However, positive semidefinite matrices cannot be used to memorize arbitrary data, while our two-layer network can represent an arbitrary matrix.
\paragraph{Interpolating Methods} Of course, simply memorizing the data may not be useful in machine learning. However, several recent works \citep{belkin2018overfitting, belkin2019does, liang2019risk, mei2019generalization} showed that learning regimes that interpolate/memorize the data can also have generalization guarantees. Proving generalization for our architectures is an interesting open problem.
\section{Preliminaries}
In this section, we introduce notations, the two neural network architectures used for Theorem~\ref{thm:main1:informal} and \ref{thm:main2:informal}, and the perturbed gradient descent algorithm.
\subsection{Notations}
We use $[n]$ to denote the set $\{1,2,...,n\}$.
For a vector $x$, we use $\|x\|_2$ to denote its $\ell_2$ norm, and sometimes $\|x\|$ as a shorthand.
For a matrix $M$, we use $\|M\|_{F}$ to denote its Frobenius norm, $\|M\|$ to denote its spectral norm. We will also use $\lambda_i(M)$ and $\sigma_i(M)$ to denote the $i$-th largest eigenvalue and singular value of matrix $M$, and $\lambda_{\min}(M)$, $\sigma_{\min}(M)$ to denote the smallest eigenvalue/singular value.
For the results of three-layer networks, our activation is going to be $x^p$, where $p$ is considered as a small constant. We use $O_p()$, $\Omega_p()$ to hide factors that only depend on $p$.
For vectors $x,y\in \mathbb{R}^d$, the tensor product is denoted by $(x\otimes y)\in \mathbb{R}^{d^2}$. We use $x^{\otimes p} \in \mathbb{R}^{d^p}$ as a shorthand for the $p$-th power of $x$ in terms of the tensor product. For two matrices $M,N\in \mathbb{R}^{d_1\times d_2}$, we use $M\otimes N \in \mathbb{R}^{d_1^2\times d_2^2}$ to denote the Kronecker product of the two matrices.
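For example, for $x=(x_1,x_2)\in\mathbb{R}^2$ we have $x^{\otimes 2} = (x_1^2, x_1x_2, x_2x_1, x_2^2)\in\mathbb{R}^4$; in general, $(x^{\otimes p})^Ty^{\otimes p} = (x^Ty)^p$ for any $x,y\in\mathbb{R}^d$, an identity used repeatedly in our proofs.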
\subsection{Network Architectures}
\label{sec:prelim_architecture}
\begin{figure}[tb]
\def1.8cm{1.8cm}
\def3.6cm{3.6cm}
\centering
\subfloat[2-layer neural net structure]{
\begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=1.8cm]
\tikzstyle{every pin edge}=[<-,shorten <=1pt]
\tikzstyle{neuron}=[circle,fill=black!25,minimum size=12pt,inner sep=0pt]
\tikzstyle{input neuron}=[neuron, fill=green!50];
\tikzstyle{output neuron}=[neuron, fill=red!50];
\tikzstyle{hidden neuron}=[neuron, fill=blue!50];
\tikzstyle{annot} = [text width=4em, text centered]
\foreach \name / \y in {1,...,4}
\node[input neuron] (I-\name) at (0,-\y) {};
\foreach \name / \y in {1,...,5}
\path[yshift=0.5cm]
node[hidden neuron] (H-\name) at (1.8cm,-\y cm) {};
\node[output neuron, right of=H-3] (O) {};
\foreach \source in {1,...,4}
\foreach \dest in {1,...,5}
\path[line width=0.3mm] (I-\source) edge (H-\dest);
\foreach \source in {1,...,5}
\path (H-\source) edge (O);
\node[annot,above of=H-1, node distance=1cm] (hl) {Hidden layer};
\node[annot,left of=hl] {Input layer};
\node[annot,right of=hl] {Output layer};
\end{tikzpicture}
}
\quad
\subfloat[3-layer neural net structure]{
\begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=1.8cm]
\tikzstyle{every pin edge}=[<-,shorten <=1pt]
\tikzstyle{neuron}=[circle,fill=black!25,minimum size=12pt,inner sep=0pt]
\tikzstyle{input neuron}=[neuron, fill=green!50];
\tikzstyle{output neuron}=[neuron, fill=red!50];
\tikzstyle{hidden neuron 2}=[neuron, fill=blue!50];
\tikzstyle{hidden neuron 1}=[neuron, fill=purple!50];
\tikzstyle{annot} = [text width=4em, text centered]
\foreach \name / \y in {1,...,4}
\node[input neuron] (I-\name) at (0,-\y) {};
\foreach \name / \y in {1,...,5}
\path[yshift=0.5cm]
node[hidden neuron 1] (H1-\name) at (1.8cm,-\y cm) {};
\foreach \name / \y in {1,...,5}
\path[yshift=0.5cm]
node[hidden neuron 2] (H2-\name) at (3.6cm,-\y cm) {};
\node[output neuron,right of=H2-3] (O) {};
\foreach \source in {1,...,4}
\foreach \dest in {1,...,5}
\path (I-\source) edge (H1-\dest);
\foreach \source in {1,...,5}
\foreach \dest in {1,...,5}
\path[line width=0.3mm] (H1-\source) edge (H2-\dest);
\foreach \source in {1,...,5}
\path (H2-\source) edge (O);
\node[annot,above of=H1-1, node distance=1cm] (hl) {Hidden layer 1};
\node[annot,above of=H2-1, node distance=1cm] (hr) {Hidden layer 2};
\node[annot,left of=hl] {Input layer};
\node[annot,right of=hr] {Output layer};
\end{tikzpicture}
}
\caption{Neural Network Architectures. The trained layer is in bold face. The activation function after the trained parameters is $x^2$ (blue neurons). The activation function before the trained parameters is $x^p$ (purple neurons).}
\label{fig:architecture}
\end{figure}
In this section, we introduce the neural net architectures we use. As we discussed, Theorem~\ref{thm:main1:informal} uses a two-layer network (see Figure~\ref{fig:architecture} (a)) and Theorem~\ref{thm:main2:informal} uses a three-layer network (see Figure~\ref{fig:architecture} (b)).
\paragraph{Two-layer Neural Network} For the two-layer neural network, suppose the input samples $x$ are in $\mathbb{R}^d$, and the hidden layer has $r$ hidden neurons (for simplicity, we assume $r$ is even; in Theorem~\ref{thm:main-theorem-twolayer} we will show that $r = 2d+2$ is enough). The activation function of the hidden layer is $\sigma(x) = x^2$.
We use $w_i\in\mathbb{R}^d$ to denote the input weight of hidden neuron $i$. These weight vectors are collected as a weight matrix $W = [w_1,w_2,\dots,w_r] \in \mathbb{R}^{d\times r}$. The output layer has only 1 neuron, and we use $a_i\in\mathbb{R}$ to denote its input weight from hidden neuron $i$. There is no nonlinearity at the output layer. For simplicity, we fix the parameters $a_i,i\in [r]$ in the way that $a_i = 1$ for all $1\le i\le \frac{r}{2}$ and $a_i = -1$ for all $\frac{r}{2}+1 \le i \le r$. Given $x$ as the input, the output of the neural network is
\[y = \sum_{i=1}^r a_i(w_i^Tx)^2.\]
If the training samples are $\{(x_j,y_j)\}_{j\le n}$, we define the empirical risk of the neural network with parameters $W$ to be
\[f(W) = \frac{1}{4n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right)^2.\]
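For concreteness, the following NumPy sketch (with illustrative variable names, not taken from any released code) evaluates $f(W)$ for this architecture:
\begin{verbatim}
import numpy as np

def two_layer_loss(W, X, y):
    # W: (d, r) weights; X: (n, d) inputs; y: (n,) labels; r is even.
    r = W.shape[1]
    a = np.concatenate([np.ones(r // 2), -np.ones(r // 2)])
    H = (X @ W) ** 2          # entry (j, i) equals (w_i^T x_j)^2
    residual = H @ a - y      # the residual delta_j(W) for each sample
    return np.sum(residual ** 2) / (4 * len(y))
\end{verbatim}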
\paragraph{Three-layer neural network} For Theorem~\ref{thm:main2:informal}, we use a more complicated, three-layer neural network.
In this network, the first layer has a polynomial activation $\tau(x) = x^p$,
and the next two layers are the same as the two-layer network.
We use $R = [r_1,\dots, r_k]^T\in\mathbb R^{k\times d}$
to denote the weight parameter of the first layer. The first hidden layer has $k$ neurons with activation $\tau(x) = x^p$, where $p$ is the parameter in Theorem~\ref{thm:main2:informal}. Given input $x$, the output of the first hidden layer is denoted as $z$, and satisfies $z_i = (r_i^T x)^p$.
The second hidden layer has $r$ neurons (again we will later show $r = 2k+2$ is enough).
The weight matrix for the second layer is denoted as $W = [w_1,\dots, w_{r}]\in \mathbb{R}^{k\times r}$, where each $w_i\in \mathbb{R}^k$ is the weight for a neuron in the second hidden layer. The activation for the second hidden layer is $\sigma(x) = x^2$. The third layer has weight $a$ and is initialized the same way as before, where $a_1 = a_2=\cdots = a_{r/2} = 1$, and $a_{r/2+1} = \cdots = a_{r} = -1$. The final output $y$ can be computed as
\[
y = \sum_{i=1}^{r} a_i (w_i^T z)^2.
\]
Given inputs $(x_1,y_1), ..., (x_n,y_n)$, suppose $z_i$ is the output of the first hidden layer for $x_i$, the empirical loss is defined as:
\[f(W) = \frac{1}{4n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i(w_i^Tz_j)^2 - y_j\right)^2.\]
Note that only the second-layer weight $W$ is trainable. The first layer with weights $R$ acts like a random feature layer that maps $x_i$'s into a new representation $z_i$'s.
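As a sketch (again with illustrative names), the forward pass of the three-layer network reduces to the two-layer loss above applied to the frozen random features:
\begin{verbatim}
def three_layer_loss(W, R, X, y, p):
    # R: (k, d) frozen random weights; W: (k, r) trainable; X: (n, d).
    Z = (X @ R.T) ** p        # (z_j)_i = (r_i^T x_j)^p
    return two_layer_loss(W, Z, y)
\end{verbatim}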
\subsection{Second order stationary points and perturbed gradient descent}
Gradient descent converges to a global optimum of a convex function. However, for non-convex objectives, gradient descent is only guaranteed to converge to a first-order stationary point - a point with 0 gradient, which can be a local/global optimum or a saddle point. Our result requires an algorithm that can find a second-order stationary point - a point with 0 gradient and positive semidefinite Hessian. Many algorithms were known to achieve such a guarantee \citep{ge2015escaping, sun2015nonconvex, carmon2018accelerated, agarwal2017finding, jin2017escape, jin2017accelerated}. As we require some additional properties of the algorithm (see Section~\ref{sec:proof-sketch-twolayer}), we will adapt Perturbed Gradient Descent (PGD, \citealp{jin2017escape}). See Section~\ref{sec:algdetail} for a detailed description of the algorithm. Here we give the guarantee of PGD that we need. The PGD algorithm requires the function and its gradient to be Lipschitz:
\begin{definition}[Smoothness and Hessian Lipschitz]
A differentiable function $f(\cdot)$ is $\ell$\text{-smooth} if:
\[\forall x_1,x_2,\ ||\nabla f(x_1) - \nabla f(x_2)|| \le \ell ||x_1 - x_2||.\]
A twice-differentiable function $f(\cdot)$ is $\rho$\text{-Hessian Lipschitz} if:
\[\forall x_1,x_2,\ ||\nabla^2 f(x_1) - \nabla^2 f(x_2)|| \le \rho ||x_1 - x_2||.\]
\end{definition}
Under these assumptions, we will consider the following approximation of an exact second-order stationary point:
\begin{definition}[$\varepsilon$-second-order stationary point]\label{def:stationary-point}
For a $\rho$-Hessian Lipschitz function $f(\cdot)$, we say that $x$ is an $\varepsilon$\textbf{-second-order stationary point} if:
\[||\nabla f(x)||\le \varepsilon, \text{and }\lambda_{\min}(\nabla^2f(x)) \ge -\sqrt{\rho \varepsilon}.\]
\end{definition}
\cite{jin2017escape} showed that PGD converges to an $\varepsilon$-second-order stationary point efficiently:
\begin{restatable}[Convergence of PGD (\cite{jin2017escape})]{theorem}{thmpgdconvergence}\label{thm:pgd-convergence}
Assume that $f(\cdot)$ is $\ell$-smooth and $\rho$-Hessian Lipschitz. Then there exists an absolute constant $c_{\text{max}}$ such that, for any $\delta > 0, \varepsilon
\le \frac{\ell^2}{\rho},\Delta_f \ge f(x_0) - f^*$, and constant $c \le c_{\text{max}}$, $PGD(x_0,\ell,\rho,\varepsilon,c,\delta,\Delta_f)$ will output an $\varepsilon$-second-order stationary point with probability $1-\delta$, and terminate in the following number of iterations:
\[O\left(\frac{\ell(f(x_0) - f^*)}{\varepsilon^2}\log^4\left(\frac{d\ell\Delta_f}{\varepsilon^2\delta}\right)\right).\]
\end{restatable}
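For intuition, we include a minimal sketch of the PGD loop below; the thresholds and the exact perturbation/termination rules of \cite{jin2017escape} are simplified here, so this should be read as an assumption-laden illustration rather than the precise algorithm of Section~\ref{sec:algdetail}:
\begin{verbatim}
import numpy as np

def pgd(f, grad, x0, eta, g_thres, radius, t_thres, f_thres, T):
    # Gradient descent plus occasional uniform-ball perturbations
    # to escape saddle points, in the spirit of Jin et al. (2017).
    x, t_noise = x0.copy(), -t_thres - 1
    for t in range(T):
        if np.linalg.norm(grad(x)) <= g_thres and t - t_noise > t_thres:
            x_before, t_noise = x.copy(), t
            xi = np.random.randn(*x.shape)
            xi /= np.linalg.norm(xi)
            xi *= radius * np.random.rand() ** (1.0 / x.size)
            x = x + xi        # uniform point in the ball B(x, radius)
        if t - t_noise == t_thres and f(x) - f(x_before) > -f_thres:
            return x_before   # no sufficient decrease after perturbing
        x = x - eta * grad(x)
    return x
\end{verbatim}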
\section{Warm-up: Two-layer Net for Fitting Small Training Set}\label{sec:proof-sketch-twolayer}
In this section, we show how the two-layer neural net in Section~\ref{sec:prelim_architecture} trained with perturbed gradient descent can fit any small training set (Theorem \ref{thm:main1:informal}). Our result is based on a characterization of optimization landscape: for small enough $\varepsilon$, every $\varepsilon$-second-order stationary point achieves near-zero training error. We then combine such a result with PGD to show that simple algorithms can always memorize the training data. Detailed proofs are deferred to Section~\ref{sec:twolayerformal} in the Appendix.
\subsection{Optimization landscape of two-layer neural network}
Recall that the two-layer network we consider has $r$ hidden units with bottom layer weights $w_1,w_2,...,w_r$, and the weight for the top layer is set to $a_i = 1$ for $1\le i\le r/2$, and $a_i = -1$ for $r/2+1\le i \le r$. For a set of input data $\{(x_1,y_1),(x_2,y_2),...,(x_n,y_n)\}$, the objective function is defined as \[f(W) = \frac{1}{4n}\sum_{j=1}^n\left(\sum_{i=1}^r a_i(w_i^Tx_j)^2 - y_j\right)^2.\]
With these definitions, we will show that when a point is an approximate second-order stationary point (in fact, we just need it to have an almost positive semidefinite Hessian) it must also have low loss:
\begin{restatable}[Optimization Landscape]{lemma}{lemoptlandscape}\label{lem:geo-property}
Given training data $\{(x_1,y_1),(x_2,y_2),...,(x_n,y_n)\}$, suppose the matrix $X = [x_1^{\otimes 2},\dots,x_n^{\otimes 2}]\in \mathbb{R}^{d^2 \times n}$ has full column rank and the smallest singular value is at least $\sigma$. Also suppose that the number of hidden neurons satisfies $r\ge 2d+2$. Then if $\lambda_{\min}\nabla^2 f(W) \ge -\varepsilon$, the function value is bounded by $f(W) \le \frac{nd\varepsilon^2}{4\sigma^2}$.
\end{restatable}
For simplicity, we will use $\delta_j(W) = \sum_{i=1}^r a_i(w_i^T x_j)^2 - y_j$ to denote the {\em residual} for $j$-th data point: the difference between the output of the neural network and the label $y_j$. We will also combine these residuals into a matrix $M(W) := \frac{1}{n}\sum_{j=1}^n\delta_j(W) x_jx_j^T$. Intuitively, we first show that when $M(W)$ is large, the smallest eigenvalue of $\nabla^2 f(W)$ is very negative.
\begin{restatable}{lemma}{lemsmallesteigenvalue}\label{lem:smallesteigenvalue}
When the number of the hidden neurons $r \ge 2d+2$, we have
\[\lambda_{\min}\nabla^2 f(W) = -\max_i |\lambda_i(M)|,\]
where $\lambda_{\min}\nabla^2 f(W)$ denotes the smallest eigenvalue of the matrix $\nabla^2 f(W)$ and $\lambda_i (M)$ denotes the $i$-th eigenvalue of the matrix $M$.
\end{restatable}
Then we complete the proof by showing if the objective function is large, $M(W)$ is large.
\begin{restatable}{lemma}{lemspectralnormandvalue}\label{lem:spectralnormandfuncvalue}
Suppose the matrix $X = [x_1^{\otimes 2},\dots,x_n^{\otimes 2}]\in \mathbb{R}^{d^2 \times n}$ has full column rank and the smallest singular value is at least $\sigma$. Then if the spectral norm of the matrix $M = \frac{1}{n}\sum_{j=1}^n \delta_j x_jx_j^T$ is upper bounded by $\lambda$, the function value is bounded by
\[f(W) \le \frac{nd\lambda^2}{4\sigma^2}.\]
\end{restatable}
Combining the two lemmas, we know $f(W)$ is bounded when the point has almost positive Hessian, therefore every $\varepsilon$-second-order stationary point must be near-optimal.
\subsection{Optimizing the two-layer neural net}
In this section, we show how to use PGD to train our two-layer neural network.
Given the property of the optimization landscape for $f(W)$, it is natural to directly apply PGD to find a second-order stationary point. However, this is not enough since the function $f$ does not have bounded smoothness constant and Hessian Lipschitz constant (its Lipschitz parameters depend on the norm of $W$), and without further constraints, PGD is not guaranteed to converge in polynomial time. In order to control the Lipschitz parameters, we note that these parameters are bounded when the norm of $W$ is bounded (see Lemma~\ref{lem:smoothness-lipschitz-hessian} in appendix). Therefore we add a small regularizer term to control the norm of $W$. More concretely, we optimize the following objective
\[g(W) = f(W) + \frac{\gamma}{2}||W||_F^2.\]
We want to use this regularizer term to show that: 1. the optimization landscape is preserved: for appropriate $\gamma$, any $\varepsilon$-second-order stationary point of $g(W)$ will still give a small $f(W)$; and 2. During the training process of the 2-layer neural network, the norm of $W$ is bounded, therefore the smoothness and Hessian Lipschitz parameters are bounded. Then, the proof of Theorem \ref{thm:main1:informal} just follows from the combination of Theorem \ref{thm:pgd-convergence} of PGD and the result of the geometric property.
The first step is simple as the regularizer only introduces a term $\gamma I$ to the Hessian, which increases all the eigenvalues by $\gamma$. Therefore any $\varepsilon$-second-order stationary point of $g(W)$ will also lead to the fact that $|\lambda_{\min}\nabla^2 f(W)|$ is small, and hence $f(W)$ is small by Lemma~\ref{lem:geo-property}.
For the second step, note that in order to show the training process using PGD will not escape from the area $\{W:||W||_F^2 \le \Gamma\}$ with some $\Gamma$, it suffices to bound the function value $g(W)$ by $\gamma \Gamma/2$, which implies $\|W\|_F^2 \le \frac{2}{\gamma} g(W) \le \Gamma$. To bound the function value we use properties of PGD: for a gradient descent step, since the function is smooth in this region, the function value always decreases; for a perturbation step, the function value can increase, but cannot increase by too much.
Using mathematical induction, we can show that the function value of $g$ is smaller than some fixed value (related to the random initialization but not to the time $t$), and that the iterates will not escape the set $\{W:||W||_F^2 \le \Gamma\}$ for appropriate $\Gamma$.
Using PGD on function $g(W)$, we have the following main theorem for the 2-layer neural network.
\begin{restatable}[Main theorem for 2-layer NN]{theorem}{thmmainthmtwolayer}\label{thm:main-theorem-twolayer}
Suppose the matrix $X = [x_1^{\otimes 2},\dots,x_n^{\otimes 2}]\in \mathbb{R}^{d^2 \times n}$ has full column rank and the smallest singular value is at least $\sigma$. Also assume that we have $||x_j||_2 \le B$ and $|y_j| \le Y$ for all $j \le n$. We choose the width of the neural network $r\ge 2d+2$, and we choose $\rho = (6B^4\sqrt{2(f(0) + 1)})\left(nd/(\sigma^2\varepsilon)\right)^{1/4}$, $\gamma = \left(\sigma^2\varepsilon/nd\right)^{1/2}$, and $\ell = \max\{(3B^4\frac{2(f(0) + 1)}{\gamma}+YB^2+\gamma),1\}$. Then there exists an absolute constant $c_{\text{max}}$ such that, for any $\delta > 0,\Delta \ge f(0) + 1$, and constant $c \le c_{\text{max}}$, $PGD(0,\ell,\rho,\varepsilon,c,\delta,\Delta)$ on $W$ will output a parameter $W^*$ such that with probability $1-\delta$, $f(W^*) \le \varepsilon$ when the algorithm terminates in the following number of iterations:
\[O\left(\frac{B^8\ell(nd)^{5/2}(f(0)+1)^2}{\sigma^{5}\varepsilon^{5/2}}\log^4\left(\frac{Bnrd\ell\Delta(f(0)+1)}{\varepsilon^2\delta\sigma}\right)\right).\]
\end{restatable}
\section{Three-Layer Net for Fitting Larger Training Set}\label{sec:proof-sketch-random-feature}
In this section, we show how a three-layer neural net can fit a larger training set (Theorem \ref{thm:main2:informal}). The main limitation of the two-layer architecture in the previous section is that the activation functions are quadratic. No matter how many neurons the hidden layer has, the whole network only captures a quadratic function over the input, and cannot fit an arbitrary training set of size much larger than $d^2$. On the other hand, if one replaces the quadratic activation with other functions, it is known that even two-layer neural networks can have bad local minima \citep{safran2018spurious}.
To address this problem, the three-layer neural net in this section uses the first-layer as a random mapping of the input. The first layer is going to map inputs $x_i$'s into $z_i$'s of dimension $k$ (where $k = \Theta(\sqrt{n})$). If $z_i$'s satisfy the requirements of Theorem~\ref{thm:main-theorem-twolayer}, then we can use the same arguments as the previous section to show perturbed gradient descent can fit the training data.
We prove our main result in the smoothed analysis setting, which is a popular approach for going beyond worst-case. Given any worst-case input $\{x_1,x_2,...,x_n\}$, in the smoothed analysis framework, these inputs will first be slightly perturbed before given to the algorithm. More specifically, let $\bar x_j = x_j + \tilde x_j$, where $\tilde x_j\in \mathbb R^{d}$ is a random Gaussian vector following the distribution of $\mathcal{N}(\mathbf 0, v \mathbf I)$. Here the amount of perturbation is controlled by the variance $v$. The final running time for our algorithm will depend inverse polynomially on $v$. Note that on the other hand, the network architecture and the number of neurons/parameters in each layer does not depend on $v$.
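In code, this input model is a single line (a sketch, where \texttt{X} stores the rows $x_j^T$ and \texttt{v} is the perturbation variance):
\begin{verbatim}
X_bar = X + np.sqrt(v) * rng.standard_normal(X.shape)  # x_j + N(0, vI)
\end{verbatim}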
Let $\{z_1,z_2,...,z_n\}$ denote the output of the first layer, with $(z_j)_i = (r_i^T \bar x_j)^p$ $(j = 1,2,...,n)$. We first show that the $\{z_j\}$'s satisfy the requirement of Theorem~\ref{thm:main-theorem-twolayer}:
\begin{restatable}{lemma}{lemsmallestsingluarperturb}\label{lem:smallest-singular-perturb}
Suppose $k \le O_p(d^p)$ and ${k+1 \choose 2} > n$, let $\bar{x}_j = x_j +\tilde{x}_j$ be the perturbed input in the smoothed analysis setting, where $\tilde{x}_j \sim \mathcal{N}(\mathbf 0, v \mathbf I)$, let $\{z_1,z_2,...,z_n\}$ be the output of the first layer on the perturbed input ($(z_j)_i = (r_i^T \bar x_j)^p$). Let $Z\in \mathbb{R}^{k^2\times n}$ be the matrix whose $j$-th column is equal to $z_j^{\otimes 2}$, then with probability at least $1-\delta$, the smallest singular value of $Z$ is at least $\Omega_p(v^p\delta^{4p}/n^{2p+1/2}k^{4p})$.
\end{restatable}
This lemma shows that the output of the first layer ($z_j$'s) satisfies the requirements of Theorem~\ref{thm:main-theorem-twolayer}. With this lemma, we can prove the main theorem of this section:
\begin{restatable}[Main theorem for 3-layer NN]{theorem}{thmrandomfeaturewithperturb}\label{thm:main-theorem-random-feature-with-perturbation}
Suppose the original inputs satisfy $\|x_j\|_2\le 1, |y_j|\le 1$, inputs $\bar x_j = x_j+\tilde{x}_j$ are perturbed by $\tilde{x}_j\sim \mathcal{N}(0,vI)$, with probability $1-\delta$ over the random initialization, for $k = 2\lceil \sqrt{n} \rceil$, perturbed gradient descent on the second layer weights achieves a loss $f(W^*) \le \epsilon$ in $O_p(1)\cdot \frac{(n/v)^{O(p)}}{\epsilon^{5/2}} \log^4(n/\epsilon)$ iterations.
\end{restatable}
Using different tools, we can also prove a similar result without the smoothed analysis setting:
\begin{restatable}{theorem}{thmdeterministic}\label{thm:deterministic} Suppose the matrix $X = [x_1^{\otimes 2p},...,x_n^{\otimes 2p}]\in \mathbb{R}^{d^{2p}\times n}$ has full column rank, and smallest singular value at least $\sigma$. Choose $k = O_p(d^p)$; then with high probability perturbed gradient descent on the second layer weights achieves a loss $f(W^*) \le \epsilon$ in $O_p(1)\cdot \frac{(n)^{O(p)}}{\sigma^5\epsilon^{5/2}} \log^4(n/\epsilon)$ iterations.
\end{restatable}
When the number of samples $n$ is smaller than $d^{2p}/(2p)!$, one can choose $k = O_p(d^p)$; in this regime the result of Theorem~\ref{thm:deterministic} is close to Theorem~\ref{thm:main-theorem-random-feature-with-perturbation}. However, if $n$ is only slightly larger, say $n = d^{2p}$, one may need to choose $k = O_p(d^{p+1})$, which gives a sub-optimal number of neurons and parameters.
\section{Experiments}
In this section, we validate our theory using experiments. Detailed parameters of the experiments as well as more results are deferred to Section~\ref{sec:more-experiments} in the Appendix.
\paragraph{Small Synthetic Example} We first run gradient descent on a small synthetic dataset, which fits into the setting of Theorem~\ref{thm:main-theorem-twolayer}.
Our training set, including the samples and the labels, is generated from a fixed normalized uniform distribution (random samples from a hypercube, normalized to have norm 1). As shown in Figure \ref{fig:random-sample}, simple gradient descent can already memorize the training set.
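For reference, the data-generating step can be sketched as follows; the sizes and seed below are placeholders, not the parameters reported in Section~\ref{sec:more-experiments}, and the scalar labels are simply drawn uniformly:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def normalized_uniform(n, d):
    V = rng.uniform(-1.0, 1.0, size=(n, d))  # sample from a hypercube
    return V / np.linalg.norm(V, axis=1, keepdims=True)

X = normalized_uniform(50, 10)         # placeholder sizes
y = rng.uniform(-1.0, 1.0, size=50)    # placeholder labels
\end{verbatim}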
\begin{figure}[ht!]
\centering
\includegraphics[width=2.5in]{randomsample_small.png}
\caption{Training loss for random sample experiment}
\label{fig:random-sample}
\end{figure}
\paragraph{MNIST Experiment} We also show how our architectures (both two-layer and three-layer) can be used to memorize MNIST. For MNIST, we use a squared loss between the network's prediction and the true label (which is an integer in $\{0,1,...,9\}$). For the two-layer experiment, we use the original MNIST dataset, with a small Gaussian perturbation added to the data to make sure the condition in Theorem~\ref{thm:main-theorem-twolayer} is satisfied. For the three-layer experiment, we use PCA to project MNIST images to 100 dimensions (so the two-layer architecture will no longer be able to memorize the training set). See Figure~\ref{fig:original-label} for the results. In this part, we use ADAM as the optimizer to improve convergence speed, but as we discussed earlier, our main result is on the optimization landscape and the algorithm is flexible.
\begin{figure}[!th]
\centering
\subfloat[Two-layer network with perturbation on input]{\includegraphics[width=2.5in]{original55000_small.png}}
\subfloat[Three-layer network on top 100 PCA directions]{\includegraphics[width=2.5in]{pca35000_500_small.png}}
\caption{MNIST with original label}
\label{fig:original-label}
\end{figure}
\paragraph{MNIST with random label} We further test our results on MNIST with random labels to verify that our result does not use any potential structure in the MNIST datasets. The setting is exactly the same as before. As shown in Figure \ref{fig:random-label}, the training loss can also converge.
\begin{figure}[!th]
\centering
\subfloat[Two-layer network with perturbation on input]{\includegraphics[width=2.5in]{perturbed35000_noise001_randomlabel_small.png}}
\subfloat[Three-layer network on top 100 PCA directions]{\includegraphics[width=2.5in]{pcarandomlabel50000_variance005_small.png}}
\caption{MNIST with random label}
\label{fig:random-label}
\end{figure}
\section{Conclusion}
In this paper, we showed that even a mildly overparametrized neural network can be trained to memorize the training set efficiently. The number of neurons and parameters in our results are tight (up to constant factors) and matches the bounds in \cite{yun2018small}. There are several immediate open problems, including generalizing our result to more standard activation functions and providing generalization guarantees. More importantly, we believe that the mildly overparametrized regime is more realistic and interesting compared to the highly overparametrized regime. We hope this work would serve as a first step towards understanding the mildly overparametrized regime for deep learning.
\newpage
\label{sec:int}
Consider an algebraic $k$-group $G$ acting on a $k$-variety $X$, where
$k$ is a field. If $X$ is normal and $G$ is smooth, connected and affine,
then $X$ is covered by open $G$-stable quasi-projective subvarieties;
moreover, any such variety admits a $G$-equivariant immersion in
the projectivization of some finite-dimensional $G$-module.
This fundamental result, due to Sumihiro (see
\cite[Thm.~1, Lem.~8]{Sumihiro} and
\cite[Thm.~2.5, Thm.~3.8]{Sumihiro-II}), has many applications.
For example, it yields that $X$ is covered by $G$-stable affine
opens when $G$ is a split $k$-torus; this is the starting
point of the classification of toric varieties (see \cite{CLS})
and more generally, of normal varieties with a torus action
(see e.g. \cite{AHS, Langlois, LS}).
Sumihiro's theorem does not extend directly to actions of arbitrary
algebraic groups. For example, a non-trivial abelian variety $A$,
acting on itself by translations, admits no equivariant embedding
in the projectivization of a finite-dimensional $A$-module, since $A$
acts trivially on every such module. Also, an example of Hironaka (see
\cite{Hironaka}) yields a smooth complete threefold equipped with an
involution $\sigma$ and which is not covered by $\sigma$-stable
quasi-projective opens. Yet a generalization of Sumihiro's
theorem was obtained in \cite{Brion10} for actions of smooth connected
algebraic groups over an algebraically closed field. The purpose
of this article is to extend this result to an arbitrary field.
More specifically, for any connected algebraic group $G$, we will prove:
\begin{theorem*}\label{thm:cover}
Every normal $G$-variety is covered by $G$-stable quasi-projective opens.
\end{theorem*}
\begin{theorem*}\label{thm:model}
Every normal quasi-projective $G$-variety admits a $G$-equivariant
immersion in the projectivization of a $G$-linearized vector bundle
on an abelian variety, quotient of $G$ by a normal subgroup scheme.
\end{theorem*}
See the beginning of \S \ref{subsec:fp} for unexplained
notation and conventions. Theorem \ref{thm:cover} is proved in
\S \ref{subsec:cover}, and Theorem \ref{thm:model} in \S
\ref{subsec:model}.
Theorem \ref{thm:cover} also follows from a result of Olivier Benoist
asserting that every normal variety contains finitely many maximal
quasi-projective open subvarieties (see \cite[Thm.~9]{Benoist}),
as pointed out by W\l odarczyk (see \cite[Thm.~D]{Wlodarczyk})
who had obtained an earlier version of the above result under more
restrictive assumptions.
When $G$ is affine, any abelian variety quotient of $G$ is trivial,
and hence the $G$-linearized vector bundles occurring in Theorem
\ref{thm:model} are just the finite-dimensional $G$-modules. Thus,
Theorems \ref{thm:cover} and \ref{thm:model} give back Sumihiro's results.
Also, for a smooth connected algebraic group $G$ over a perfect field $k$,
there exists a unique exact sequence of algebraic groups
$1 \to H \to G \to A \to 1$,
where $H$ is smooth, connected and affine, and $A$ is an abelian variety
(Chevalley's structure theorem, see \cite{Conrad,Milne} for modern
proofs). Then the $G$-linearized vector bundles occurring in
Theorem \ref{thm:model} are exactly the homogeneous vector bundles
$G \times^{H'} V$ on $G/H'$, where $H' \triangleleft G$ is a normal
subgroup scheme containing $H$ (so that $G/H'$ is an abelian variety,
quotient of $G/H = A$) and $V$ is a finite-dimensional $H'$-module.
The vector bundles on an abelian variety $A$ which are
$G$-linearizable for some algebraic group $G$ with quotient $A$
are exactly the homogeneous, or translation-invariant, bundles;
over an algebraically closed field, they have been classified by
Miyanishi (see \cite[Thm.~2.3]{Miyanishi}) and Mukai (see
\cite[Thm.~4.17]{Mukai}).
We now present some applications of Theorems \ref{thm:cover} and
\ref{thm:model}. First, as a straightforward consequence of Theorem
\ref{thm:model}, \emph{every normal quasi-projective $G$-variety $X$
admits an equivariant completion}, i.e., $X$ is isomorphic
to a $G$-stable open of some complete $G$-variety. When $G$
is smooth and linear, this holds for any normal $G$-variety $X$
(not necessarily quasi-projective), by a result of Sumihiro
again; see \cite[Thm.~3]{Sumihiro}, \cite[Thm.~4.13]{Sumihiro-II}.
We do not know whether this result extends to an arbitrary
algebraic group $G$.
Another direct consequence of Theorems \ref{thm:cover} and
\ref{thm:model} refines a classical result of Weil:
\begin{corollary*}\label{cor:bir}
Let $X$ be a geometrically integral variety equipped with
a birational action of a smooth connected algebraic group $G$.
Then $X$ is $G$-birationally isomorphic to a normal projective
$G$-variety.
\end{corollary*}
(Again, see the beginning of Subsection \ref{subsec:fp} for unexplained
notation and conventions). More specifically, Weil showed that
$X$ is $G$-birationally isomorphic to a normal $G$-variety $X'$
(see \cite[Thm.~p.~355]{Weil}). That $X'$ may be chosen projective
follows by combining Theorems \ref{thm:cover} and \ref{thm:model}.
If $\car(k) = 0$, then we may assume in addition that $X'$ is
\emph{smooth} by using equivariant resolution of singularities
(see \cite[Thm.~3.36, Prop.~3.9.1]{Kollar}). The existence of
such smooth projective ``models'' fails over any
imperfect field (see e.g. \cite[Rem.~5.2.3]{Brion17}); one may
ask whether \emph{regular} projective models exist
in that setting.
Finally, like in \cite{Brion10}, we may reformulate Theorem \ref{thm:model}
in terms of the Albanese variety, if $X$ is geometrically integral:
then $X$ admits a universal morphism to a torsor $\Alb^1(X)$ under
an abelian variety $\Alb^0(X)$ (this is proved in \cite[Thm.~5]{Serre}
when $k$ is algebraically closed, and extended to an arbitrary
field $k$ in \cite[App.~A]{Wittenberg}).
\begin{corollary*}\label{cor:alb}
Let $X$ be a geometrically integral variety equipped with
an action $\alpha$ of a smooth connected algebraic group $G$.
Then $\alpha$ induces an action $\Alb^1(\alpha)$ of $\Alb^0(G)$
on $\Alb^1(X)$. If $X$ is normal and quasi-projective, and $\alpha$
is almost faithful, then $\Alb^1(\alpha)$ is almost faithful as well.
\end{corollary*}
This result is proved in \S \ref{subsec:cor}. For a faithful action
$\alpha$, it may happen that $\Alb^1(\alpha)$ is not faithful, see
Remark \ref{rem:pr3}.
The proofs of Theorems \ref{thm:cover} and \ref{thm:model} follow the
same lines as those of the corresponding results of \cite{Brion10},
which are based in turn on the classical proof of the projectivity
of abelian varieties, and its generalization by Raynaud to the
quasi-projectivity of torsors (see \cite{Raynaud} and also
\cite[Chap.~6]{BLR}). But many arguments of \cite{Brion10} require
substantial modifications, since the irreducibility and normality
assumptions on $X$ are not invariant under field extensions.
Also, note that non-smooth subgroup schemes occur inevitably in
Theorem \ref{thm:model} when $\car(k) = p > 0$:
for example, the above subgroup schemes $H' \subset G$ obtained
as pull-backs of $p$-torsion subgroup schemes of $A$ (see
Remark \ref{rem:fin} (ii) for additional examples). Thus, we devote
a large part of this article to developing techniques of algebraic
transformation groups over an arbitrary field.
Along the way, we obtain a generalization of Sumihiro's theorem
in another direction: any normal quasi-projective variety equipped
with an action of an affine algebraic group $G$ - not necessarily
smooth or connected - admits an equivariant immersion in the
projectivization of a finite-dimensional $G$-module (see Corollary
\ref{cor:GH}).
This article is the third in a series devoted to the structure
and actions of algebraic groups over an arbitrary field (see
\cite{Brion15, Brion17}). It replaces part of the unsubmitted
preprint \cite{Brion14}; the remaining part, dealing with
semi-normal varieties, will be developed elsewhere.
\section{Preliminaries}
\label{sec:prel}
\subsection{Functorial properties of algebraic group actions}
\label{subsec:fp}
Throughout this article, we fix a field $k$ with algebraic
closure $\bar{k}$ and separable closure $k_s \subset \bar{k}$.
Unless otherwise specified, we consider separated schemes over
$k$; morphisms and products of schemes are understood
to be over $k$. The structure map of such a scheme $X$ is denoted
by $q = q_X : X \to \Spec(k)$, and the scheme obtained by base change
under a field extension $k'/k$ is denoted by $X \otimes_k k'$, or just
by $X_{k'}$ if this yields no confusion. A \emph{variety} is an integral
scheme of finite type over $k$.
Recall that a \emph{group scheme} is a scheme $G$ equipped
with morphisms
\[ \mu = \mu_G : G \times G \longrightarrow G, \quad
\iota = \iota_G: G \longrightarrow G \] and
with a $k$-rational point $e = e_G \in G(k)$ such that for
any scheme $S$, the set of $S$-points $G(S)$ is a group with
multiplication map $\mu(S)$, inverse map $\iota(S)$ and
neutral element $e \circ q_S \in G(S)$.
This is equivalent to the commutativity of the following diagrams:
\[
\xymatrix{
G \times G \times G \ar[r]^-{\mu \times \id}\ar[d]_{\id \times \mu}
& G \times G \ar[d]^{\mu} \\
G \times G \ar[r]^-{\mu} & G \\}
\]
(i.e., $\mu$ is associative),
\[
\xymatrix{
G \ar[r]^-{e \circ q \times \id} \ar[dr]_{\id} &
G \times G \ar[d]^{\mu} &
\ar[l]_-{\id \times e \circ q} \ar[dl]^{\id} G \\
& G \\
}
\]
(i.e., $e$ is the neutral element), and
\[
\xymatrix{
G \ar[r]^-{\id \times \iota} \ar[dr]_{e \circ q} &
G \times G \ar[d]^{\mu} &
\ar[l]_-{\iota \times \id} \ar[dl]^{e \circ q} G \\
& G \\
}
\]
(i.e., $\iota$ is the inverse map). We denote for simplicity
$\mu(g,h)$ by $g h$, and $\iota(g)$ by $g^{-1}$.
An \emph{algebraic group} is a group scheme of finite type
over $k$.
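For instance, the multiplicative group $\bG_m = \Spec(k[t,t^{-1}])$ is an affine algebraic group: for any scheme $S$, one has $\bG_m(S) = \Gamma(S,\mathcal{O}_S)^{\times}$, the group of invertible global functions on $S$, with $\mu$, $\iota$ and $e$ given by multiplication, inversion and the constant function $1$.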
Given a group scheme $G$, a \emph{$G$-scheme} is a scheme $X$
equipped with a \emph{$G$-action}, i.e., a morphism
$\alpha : G \times X \to X$ such that for any scheme $S$, the map
$\alpha(S)$ defines an action of the group $G(S)$ on the set $X(S)$.
Equivalently, the following diagrams are commutative:
\[
\xymatrix{
G \times G \times X \ar[r]^-{\mu \times \id_X}\ar[d]_{\id_G \times \alpha}
& G \times X \ar[d]^{\alpha} \\
G \times X \ar[r]^-{\alpha} & X \\}
\]
(i.e., $\alpha$ is ``associative''), and
\[
\xymatrixcolsep{3pc}\xymatrix{
X \ar[r]^-{e \circ q \times \id_X}\ar[dr]_{\id_X} &
G \times X \ar[d]^{\alpha} \\
& X \\
}
\]
(i.e., the neutral element acts via the identity). We denote for
simplicity $\alpha(g,x)$ by $g \cdot x$.
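For example, any group scheme $G$ is a $G$-scheme via the action by left translations, $\alpha = \mu$; in this case, the first diagram above is just the associativity of $\mu$, and the second one expresses that $e$ is the neutral element.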
The \emph{kernel} of $\alpha$ is the group functor that assigns
to any scheme $S$, the subgroup of $G(S)$ consisting of those
$g \in G(S)$ that act trivially on the $S$-scheme $X \times S$
(i.e., $g$ acts trivially on the set $X(S')$ for any $S$-scheme $S'$).
By \cite[II.1.3.6]{DG}, this group functor is represented by
a closed normal subgroup scheme $\Ker(\alpha) \triangleleft G$.
Also, note that the formation of $\Ker(\alpha)$ commutes with
base change by field extensions. We say that $\alpha$ is
\emph{faithful} (resp.~\emph{almost faithful}) if its kernel is trivial
(resp.~finite); then $\alpha_{k'}$ is faithful (resp.~almost faithful)
for any field extension $k'/k$.
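As a simple illustration, fix an integer $n \geq 1$ and let
$G = \bG_m$ act on $X = \bA^1_k = \Spec(k[x])$ via
\[ \alpha^{\#}(x) = t^n \otimes x, \quad \text{i.e.,} \quad
g \cdot x = g^n x \]
for any scheme $S$ and any $g \in \bG_m(S)$, $x \in \bA^1(S)$.
The kernel of this action is the subgroup scheme
$\mu_n = \Spec(k[t]/(t^n - 1))$ of $n$th roots of unity; thus the
action is almost faithful for every $n$, and faithful exactly
when $n = 1$.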
A \emph{morphism of group schemes} is a morphism $f: G \to H$,
where of course $G$, $H$ are group schemes, and
$f(S) : G(S) \to H(S)$ is a group homomorphism for any scheme $S$.
Equivalently, the diagram
\[
\xymatrix{
G \times G \ar[r]^-{\mu_G}\ar[d]_{f \times f}
& G \ar[d]^{f} \\
H \times H \ar[r]^-{\mu_H} & H \\}
\]
commutes.
Consider a morphism of group schemes $f: G \to H$,
a scheme $X$ equipped with a $G$-action $\alpha$, a scheme $Y$
equipped with an $H$-action $\beta$ and a morphism (of schemes)
$\varphi : X \to Y$. We say that
\emph{$\varphi$ is equivariant relative to $f$} if we have
$\varphi(g \cdot x) = f(g) \cdot \varphi(x)$ for any scheme
$S$ and any $g \in G(S)$, $x \in X(S)$. This amounts to the
commutativity of the diagram
\[
\xymatrix{
G \times X \ar[r]^-{\alpha}\ar[d]_{f \times \varphi}
& X \ar[d]^{\varphi} \\
H \times Y \ar[r]^-{\beta} & Y. \\}
\]
We now recall analogues of some of these notions in birational geometry.
A \emph{birational action} of a smooth connected algebraic group
$G$ on a variety $X$ is a rational map
\[ \alpha : G \times X \dasharrow X \]
which satisfies the ``associativity'' condition on some open dense
subvariety of $G \times G \times X$, and such that the rational map
\[ G \times X \dasharrow G \times X, \quad
(g,x) \longmapsto (g, \alpha(g,x)) \]
is birational as well. We say that two varieties $X$, $Y$ equipped
with birational actions $\alpha$, $\beta$ of $G$ are
$G$-\emph{birationally isomorphic} if there exists a birational
map $\varphi : X \dasharrow Y$ which satisfies the equivariance
condition on some open dense subvariety of $G \times X$.
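For instance, every action $\alpha : G \times X \to X$ of a smooth
connected algebraic group $G$ on a variety $X$ defines a birational
action, since the morphism $(g,x) \mapsto (g,\alpha(g,x))$ is an
automorphism of $G \times X$.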
Returning to the setting of actions of group schemes,
recall that a vector bundle $\pi : E \to X$ on a
$G$-scheme $X$ is said to be $G$-\emph{linearized} if $E$
is equipped with an action of $G \times \bG_m$ such that
$\pi$ is equivariant relative to the first projection
$\pr_G : G \times \bG_m \to G$, and $\bG_m$ acts on $E$ by
multiplication on fibers. For a line bundle $L$, this is equivalent
to the corresponding invertible sheaf $\cL$ (consisting of local
sections of the dual line bundle) being $G$-linearized in the sense
of \cite[Def.~1.6]{MFK}.
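For instance, let $V$ be a finite-dimensional $G$-module, with the
convention that $\bP(V)$ parametrizes lines in $V$. Then the
tautological line bundle
\[ L := \{ ([v],w) \in \bP(V) \times \bV(V) \ : \ w \in k v \}
\longrightarrow \bP(V) \]
is stable under the diagonal action of $G \times \bG_m$ on
$\bP(V) \times \bV(V)$, where $\bG_m$ acts on $\bV(V)$ by scalar
multiplication; this equips $L$ with a $G$-linearization in the
above sense.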
Next, we present some functorial properties of these notions, which
follow readily from their definitions via commutative diagrams.
Denote by $Sch_k$ the category of schemes over $k$. Let $\cC$ be
a full subcategory of $Sch_k$ such that $\Spec(k) \in \cC$ and
$X \times Y \in \cC$ for all $X,Y \in \cC$. Let
$F : \cC \to Sch_{k'}$ be a (covariant) functor,
where $k'$ is a field. Following \cite[II.1.1.5]{DG}, we say that
\emph{$F$ commutes with finite products} if
$F(\Spec(k)) = \Spec(k')$ and the map
\[ F(\pr_X) \times F(\pr_Y) :
F(X \times Y) \longrightarrow F(X) \times F(Y) \]
is an isomorphism for all $X,Y \in \cC$, where
$\pr_X: X \times Y \to X$, $\pr_Y : X \times Y \to Y$ denote
the projections.
Under these assumptions, $F(G)$ is equipped with a $k'$-group
scheme structure for any $k$-group scheme $G \in \cC$. Moreover,
for any $G$-scheme $X \in \cC$, we obtain an $F(G)$-scheme
structure on $F(X)$. If $f : G \to H$ is a morphism of
$k$-group schemes and $G,H \in \cC$, then the morphism
$F(f): F(G) \to F(H)$ is a morphism of $k'$-group schemes.
If in addition $Y \in \cC$ is an $H$-scheme and
$\varphi : X \to Y$ an equivariant morphism relative to $f$,
then the morphism $F(\varphi) : F(X) \to F(Y)$ is equivariant
relative to $F(f)$.
Also, if $F_1 : \cC \to Sch_{k_1}$, $F_2 : \cC \to Sch_{k_2}$
are two functors commuting with finite products, and
$T : F_1 \to F_2$ is a morphism of functors,
then $T$ induces morphisms of group schemes
$T(G) : F_1(G) \to F_2(G)$, and equivariant morphisms
$T(X) : F_1(X) \to F_2(X)$ relative to $T(G)$, for all $G,X$ as above.
Consider again a functor $F : \cC \to Sch_{k'}$ commuting
with finite products. We say that
\emph{$F$ preserves line bundles} if for any line bundle
$\pi : L \to X$, where $X \in \cC$, we have that $L \in \cC$
and $F(\pi) : F(L) \to F(X)$ is a line bundle; in addition,
we assume that $\bG_{m,k} \in \cC$ and $F(\bG_{m,k}) \cong \bG_{m,k'}$
compatibly with the action of $\bG_{m,k}$ on $L$ by multiplication
on fibers, and the induced action of $F(\bG_{m,k})$ on $F(L)$.
Under these assumptions, for any $G$-scheme $X \in \cC$ and
any $G$-linearized line bundle $L$ on $X$, the line bundle
$F(L)$ on $F(X)$ is equipped with an $F(G)$-linearization.
\begin{examples}\label{ex:fp}
(i) Let $h : k \to k'$ be a homomorphism of fields. Then the
base change functor
\[ F : Sch_k \longrightarrow Sch_{k'},
\quad X \longmapsto X \otimes_h k' := X \times_{\Spec(k)} \Spec(k') \]
commutes with finite products and preserves line bundles. Also,
assigning to a $k$-scheme $X$ the projection
\[ \pr_X : X \otimes_h k' \longrightarrow X \]
yields a morphism of functors from $F$ to the identity of
$Sch_k$. As a consequence, $G \otimes_h k'$ is a
$k'$-group scheme for any $k$-group scheme $G$, and $\pr_G$ is a
morphism of group schemes. Moreover, for any $G$-scheme $X$,
the scheme $X \otimes_h k'$ comes with an action of
$G \otimes_h k'$ such that $\pr_X$ is equivariant; also,
every $G$-linearized line bundle $L$ on $X$ yields a
$G \otimes_h k'$-linearized line bundle $L \otimes_h k'$ on
$X \otimes_h k'$. This applies for instance to the Frobenius
twist $X \mapsto X^{(p)}$ in characteristic $p > 0$
(see Subsection \ref{subsec:ifm} for details).
\medskip
\noindent
(ii) Let $k'/k$ be a finite extension of fields, and $X'$
a quasi-projective scheme over $k'$. Then the Weil restriction
$\R_{k'/k}(X')$ is a quasi-projective scheme over $k$ (see
\cite[7.6]{BLR} and \cite[A.5]{CGP} for details on Weil restriction).
The assignment $X' \mapsto \R_{k'/k}(X')$ extends to a functor
\[ \R_{k'/k} : Sch^{qp}_{k'} \longrightarrow Sch_k^{qp}, \]
where $Sch_k^{qp}$ denotes the full subcategory of $Sch_k$
with objects being the quasi-projective schemes.
By \cite[A.5.2]{CGP}, $\R_{k'/k}$ commutes with finite products,
and hence so does the functor
\[ F : Sch_k^{qp} \longrightarrow Sch_k^{qp}, \quad
X \longmapsto \R_{k'/k}(X_{k'}). \]
Since every algebraic group $G$ is quasi-projective (see
e.g.~\cite[A.3.5]{CGP}), we see that $\R_{k'/k}(G_{k'})$ is equipped
with a structure of $k$-group scheme. Moreover, for any
quasi-projective $G$-scheme $X$, we obtain an
$\R_{k'/k}(G_{k'})$-scheme structure on $\R_{k'/k}(X_{k'})$.
The adjunction morphism
\[ j_X : X \longrightarrow \R_{k'/k}(X_{k'}) = F(X) \]
is a closed immersion by \cite[A.5.7]{CGP}, and extends to
a morphism of functors from the identity of $Sch_k^{qp}$ to
the endofunctor $F$. As a consequence, for any quasi-projective
$G$-scheme $X$, the morphism $j_X$ is equivariant relative
to $j_G$.
Note that $F$ does not preserve line bundles (unless $k'= k$),
since the algebraic $k$-group $\R_{k'/k}(\bG_{m,k'})$ has
dimension $[k':k]$.
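As a concrete instance, take $k = \mathbb{R}$ and $k' = \mathbb{C}$.
Then $\R_{\mathbb{C}/\mathbb{R}}(\bG_{m,\mathbb{C}})$ has $A$-points
$(A \otimes_{\mathbb{R}} \mathbb{C})^*$ for any $\mathbb{R}$-algebra
$A$; writing elements as $x + iy$, it is the open subscheme of
$\bA^2_{\mathbb{R}}$ where $x^2 + y^2$ is invertible, of dimension
$2 = [\mathbb{C}:\mathbb{R}]$.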
\medskip
\noindent
(iii) Let $X$ be a scheme, locally of finite type over $k$.
Then there exists an \'etale scheme $\pi_0(X)$ and a morphism
\[ \gamma = \gamma_X : X \longrightarrow \pi_0(X), \]
such that every morphism $f :X \to Y$, where $Y$ is \'etale,
factors uniquely through $\gamma$. Moreover, $\gamma$ is faithfully
flat, and its fibers are exactly the connected components of $X$.
The formation of $\gamma$ commutes with field extensions
and finite products (see \cite[I.4.6]{DG} for these results).
In particular, $X$ is connected if and only if $\pi_0(X) = \Spec(K)$
for some finite separable field extension $K/k$. Also, $X$ is
geometrically connected if and only if $\pi_0(X) = \Spec(k)$.
As a well-known consequence, for any group scheme $G$,
locally of finite type, we obtain a group scheme structure on
the \'etale scheme $\pi_0(G)$ such that $\gamma_G$ is a morphism
of group schemes; its kernel is the neutral component $G^0$.
Moreover, any action of $G$ on a scheme of finite type $X$
yields an action of $\pi_0(G)$ on $\pi_0(X)$ such that $\gamma_X$
is equivariant relative to $\gamma_G$. In particular,
every connected component of $X$ is stable under $G^0$.
\medskip
\noindent
(iv) Consider a connected scheme of finite type $X$,
and the morphism $\gamma_X : X \to \Spec(K)$ as in (iii).
Note that the degree $[K:k]$ is the number of geometrically connected
components of $X$. Also, we may view $X$ as a $K$-scheme; then it
is geometrically connected.
Given a $k$-scheme $Y$, the map
\[ \iota_{X,Y} := \id_X \times \pr_Y : X \times_K Y_K
\longrightarrow X \times_k Y \]
is an isomorphism of $K$-schemes, where $X \times_k Y$ is viewed as
a $K$-scheme via $\gamma_X \circ \pr_X$. Indeed, considering open affine
coverings of $X$ and $Y$, this boils down to the assertion that the map
\[ R \otimes_k S \longrightarrow R \otimes_K (S \otimes_k K),
\quad r \otimes s \longmapsto r \otimes (s \otimes 1) \]
is an isomorphism of $K$-algebras for any $K$-algebra $R$ and
any $k$-algebra $S$.
Also, note that the projection $\pr_X : X_K \to X$
has a canonical section, namely, the adjunction map
$\sigma_X : X \to X_K$. Indeed, considering an open affine
covering of $X$, this reduces to the fact that the inclusion map
\[ R \longrightarrow R \otimes_k K,
\quad r \longmapsto r \otimes 1 \]
has a retraction given by $r \otimes z \mapsto z r$.
Thus, $\sigma_X$ identifies the $K$-scheme $X$ with a connected
component of $X_K$.
For any $k$-scheme $Y$, the above map $\iota_{X,Y}$ is compatible
with $\sigma_X$ in the sense that the diagram
\[
\xymatrixcolsep{4pc}\xymatrix{
X \times_K Y_K \ar[r]^-{\iota_{X,Y}} \ar[d]_{\sigma_X \times \id_Y} &
X \times_k Y \ar[d]^{\sigma_{X \times_k Y}}\\
X_K \times_K Y_K \ar[r]^-{\pr_X \times \id_{Y_K}}
& (X \times_k Y)_K \\
}
\]
commutes, with the horizontal maps being isomorphisms. Indeed,
this follows from the identity
$z r \otimes s \otimes 1 = r \otimes s \otimes z$
in $R \otimes_K (S \otimes_k K)$ for any $R$ and $S$ as above, and
any $z \in K$, $r \in R$, $s \in S$.
Given a morphism of $k$-schemes $f : X' \to X$, we may also view
$X'$ as a $K$-scheme via the composition $X' \to X \to \Spec(K)$.
Then the diagram
\[
\xymatrix{
X' \ar[r]^f \ar[d]_{\sigma_{X'}} & X \ar[d]^{\sigma_X} \\
X'_K \ar[r]^{f_K} & X_K \\
}
\]
commutes, as may be checked by a similar argument.
In particular, if $X$ is equipped with an action of an algebraic
$k$-group $G$, then $G_K$ acts on the $K$-scheme $X$ through the
morphism $\pr_G : G_K \to G$; moreover, $X$ is stable under the
induced action of $G_K$ on $X_K$, since the diagram
\[
\xymatrixcolsep{4pc}\xymatrix{
G_K \times_K X \ar[r]^-{\iota_{X,G}} \ar[d]_{\id_{G_K} \times \sigma_X} &
G \times_k X \ar[r]^-{\alpha} \ar[d]_{\sigma_{G \times_k X}}
& X \ar[d]^{\sigma_X} \\
G_K \times_K X_K \ar[r]^-{\id_{G_K} \times \pr_X}
& (G \times_k X)_K \ar[r]^-{\alpha_K} & X_K \\
}
\]
commutes.
When $X$ is a normal $k$-variety, the above field $K$ is
the separable algebraic closure of $k$ in the function field
$k(X)$. (Indeed, $K$ is a subfield of $k(X)$ as $\gamma_X$ is
faithfully flat; hence $K \subset L$, where $L$ denotes the
separable algebraic closure of $k$ in $k(X)$. On the other hand,
$L \subset \cO(X)$ as $L \subset k(X)$ is integral over $k$.
This yields a morphism $X \to \Spec(L)$, and hence a homomorphism
$L \to K$ in view of the universal property of $\gamma_X$.
Thus, $L = K$ for degree reasons.) Since the $K$-scheme $X$
is geometrically connected, we see that $X \otimes_K K_s$
is a normal $K_s$-variety. In particular, $X$ is geometrically
irreducible as a $K$-scheme.
\end{examples}
\subsection{Norm and Weil restriction}
\label{subsec:nwr}
Let $k'/k$ be a finite extension of fields, and $X$ a $k$-scheme.
Then the projection
\[ \pr_X : X_{k'} \longrightarrow X \]
is finite and the sheaf of $\cO_X$-modules $(\pr_X)_*(\cO_{X_{k'}})$ is
locally free of rank $[k':k] =: n$. Thus, we may assign to any line bundle
\[ \pi : L' \longrightarrow X_{k'}, \]
its \emph{norm} $\N(L')$; this is a line bundle on $X$, unique up to
unique isomorphism (see \cite[II.6.5.5]{EGA}). Assuming that $X$ is
quasi-projective, we now obtain an interpretation of $\N(L')$ in terms
of Weil restriction:
\begin{lemma}\label{lem:nwr}
Keep the above notation, and the notation of Example \ref{ex:fp} (ii).
\begin{enumerate}
\item[{\rm (i)}]
The map $\R_{k'/k}(\pi) : \R_{k'/k}(L') \to \R_{k'/k}(X_{k'})$
is a vector bundle of rank $n$.
\item[{\rm (ii)}]
We have an isomorphism of line bundles on $X$
\[ \N(L') \cong j_X^* \det \R_{k'/k}(L'). \]
\item[{\rm (iii)}]
If $X$ is equipped with an action of an algebraic group $G$
and $L'$ is $G_{k'}$-linearized, then $\N(L')$ is $G$-linearized.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Let $E := \R_{k'/k}(L')$ and $X' := \R_{k'/k}(X_{k'})$.
Consider the $\bG_{m,k'}$-torsor
\[ \pi^{\times} : L'^{\times} \longrightarrow X_{k'} \]
associated with the line bundle $L'$. Recall that
$L' \cong (L'^{\times} \times \bA^1_{k'})/\bG_{m,k'}$, where $\bG_{m,k'}$
acts simultaneously on $L'^{\times}$ and on $\bA^1_{k'}$
by multiplication. Using \cite[A.5.2, A.5.4]{CGP}, it follows that
\[
E \cong
(\R_{k'/k}(L'^{\times}) \times \R_{k'/k}(\bA^1_{k'}))/\R_{k'/k}(\bG_{m,k'}).
\]
This is the fiber bundle on $X'$ associated with the
$\R_{k'/k}(\bG_{m,k'})$-torsor $\R_{k'/k}(L'^{\times}) \to X'$ and
the $\R_{k'/k}(\bG_{m,k'})$-scheme $\R_{k'/k}(\bA^1_{k'})$. Moreover,
$\R_{k'/k}(\bA^1_{k'})$ is the affine space $\bV(k')$ associated with
the $k$-vector space $k'$ on which $\R_{k'/k}(\bG_{m,k'})$ acts
linearly, and $\bG_{m,k}$ (viewed as a subgroup scheme of
$\R_{k'/k}(\bG_{m,k'})$ via the adjunction map) acts by scalar
multiplication. Indeed, for any $k$-algebra $A$, we have
$\R_{k'/k}(\bA^1_{k'})(A)= A \otimes_k k' = A_{k'}$ on which
$\R_{k'/k}(\bG_{m,k'})(A) = A_{k'}^*$ and its subgroup
$\bG_{m,k}(A) = A^*$ act by multiplication.
(ii) The determinant of $E$ is the line bundle associated with the above
$\R_{k'/k}(\bG_{m,k'})$-torsor and the
$\R_{k'/k}(\bG_{m,k'})$-module $\bigwedge^n(k')$
(the top exterior power of the $k$-vector space $k'$).
To describe the pull-back of this line bundle under
$j_X: X \to X'$, choose a Zariski open covering $(U_i)_{i \in I}$
of $X$ such that the $(U_i)_{k'}$ cover $X_{k'}$ and the pull-back
of $L'$ to each $(U_i)_{k'}$ is trivial (such a covering exists by
\cite[IV.21.8.1]{EGA}). Also, choose trivializations
\[ \eta_i : L'_{(U_i)_{k'}} \stackrel{\cong}{\longrightarrow}
(U_i)_{k'} \times_{k'} \bA^1_{k'}. \]
This yields trivializations
\[ \R_{k'/k}(\eta_i) : E_{U'_i} \stackrel{\cong}{\longrightarrow}
U'_i \times_k \bV(k'), \]
where $U'_i := \R_{k'/k}((U_i)_{k'})$. Note that the $U'_i$ do not
necessarily cover $X'$, but the $j_X^{-1}(U'_i) = U_i$ do cover $X$.
Thus, $j_X^*(E)$ is equipped with trivializations
\[ j_X^*(E)_{U_i} \stackrel{\cong}{\longrightarrow}
U_i \times_k \bV(k'). \]
Consider the $1$-cocycle
$(\omega_{ij} := (\eta_i \eta_j^{-1})_{(U_i)_{k'} \cap (U_j)_{k'}})_{i,j}$
with values in $\bG_{m,k'}$. Then the line bundle
$j_X^*(\det(E)) = \det(j_X^*(E))$ is defined by the $1$-cocycle
$(\det(\omega_{ij}))_{i,j}$ with values in $\bG_{m,k}$,
where $\det(\omega_{ij})$ denotes the determinant of
the multiplication by $\omega_{ij}$ in the $\cO(U_i \cap U_j)$-algebra
$\cO(U_i \cap U_j) \otimes_k k'$.
It follows that $j_X^*(\det(E)) \cong \N(L')$ in view of the definition
of the norm (see \cite[II.6.4, II.6.5]{EGA}).
(iii) By Example \ref{ex:fp} (ii), $\R_{k'/k}(X_{k'})$
is equipped with an action of $\R_{k'/k}(G_{k'})$; moreover, $j_X$
is equivariant relative to $j_G : G \to \R_{k'/k}(G_{k'})$. Also,
the action of $G_{k'} \times \bG_{m,k'}$ on $L'$ yields an action
of $\R_{k'/k}(G_{k'}) \times \R_{k'/k}(\bG_{m,k'})$ on $E$ such that
$\bG_{m,k} \subset \R_{k'/k}(\bG_{m,k'})$ acts by scalar
multiplication on fibers. Thus, the vector bundle $E$ is equipped
with a linearization relative to $\R_{k'/k}(G_{k'})$, which induces
a linearization of its determinant. This yields the assertion in view of (ii).
\end{proof}
\subsection{Iterated Frobenius morphisms}
\label{subsec:ifm}
In this subsection, we assume that $\car(k) = p > 0$.
Then every $k$-scheme $X$ is equipped with the
\emph{absolute Frobenius endomorphism} $F_X$: it induces
the identity on the underlying topological space,
and the homomorphism of sheaves of algebras
$F_X^{\#} : \cO_X \to (F_X)_*(\cO_X) = \cO_X$
is the $p$th power map, $f \mapsto f^p$. Note that $F_X$
is not necessarily a morphism of $k$-schemes, as the
structure map $q_X : X \to \Spec(k)$ lies in a commutative
diagram
\[
\xymatrix{
X \ar[r]^-{F_X} \ar[d]_{q_X} & X \ar[d]^{q_X} \\
\Spec(k) \ar[r]^-{F_k} & \Spec(k), \\}
\]
where $F_k := F_{\Spec(k)}$. We may form the commutative diagram
\[
\xymatrix{X \ar[d]_{F_{X/k}} \ar[dr]^{F_X} \\
X^{(p)} \ar[r]^-{\pr_X} \ar[d]_{q_{X^{(p)}}} & X \ar[d]^{q_X} \\
\Spec(k) \ar[r]^-{F_k} & \Spec(k), \\}
\]
where the square is cartesian and $q_{X^{(p)}} \circ F_{X/k} = q_X$.
In particular, $F_{X/k} : X \to X^{(p)}$
is a morphism of $k$-schemes: the \emph{relative Frobenius morphism}.
The underlying topological space of $X^{(p)}$ may be identified with
that of $X$; then $\cO_{X^{(p)}} = \cO_X \otimes_{F_k} k$ and
the morphism $F_{X/k}$ induces the identity on topological spaces,
while $F_{X/k}^{\#} : \cO_X \otimes_{F_k} k \to \cO_X$ is given by
$f \otimes z \mapsto z f^p$.
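To spell this out in coordinates (a routine verification), let
$X = \Spec(k[x]/(f))$ with $f = \sum_i a_i x^i$. Then
$X^{(p)} = \Spec(k[x]/(f^{(p)}))$, where $f^{(p)} := \sum_i a_i^p x^i$,
and $F_{X/k}$ is given on coordinate rings by
\[ F_{X/k}^{\#} : k[x]/(f^{(p)}) \longrightarrow k[x]/(f), \quad
x \longmapsto x^p, \]
which is well defined since $f^{(p)}(x^p) = f(x)^p$.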
The assignment $X \mapsto X^{(p)}$ extends to a covariant endofunctor
of the category of schemes over $k$,
which commutes with products and field extensions; moreover,
the assignment $X \mapsto F_{X/k}$ extends to a morphism of functors
(see e.g. \cite[VIIA.4.1]{SGA3}). In view of Subsection \ref{subsec:fp},
it follows that for any $k$-group scheme $G$, there is a canonical
$k$-group scheme structure on $G^{(p)}$ such that
$F_{G/k} : G \to G^{(p)}$ is a morphism of group schemes.
Its kernel is called the \emph{Frobenius kernel} of $G$;
we denote it by $G_1$. Moreover, for any $G$-scheme $X$,
there is a canonical $G^{(p)}$-scheme structure on $X^{(p)}$
such that $F_{X/k}$ is equivariant relative to $F_{G/k}$.
By \cite[XV.1.1.2]{SGA5}, the morphism $F_{X/k}$ is integral, surjective
and radicial; equivalently, $F_{X/k}$ is a universal homeomorphism
(recall that a morphism of schemes is \emph{radicial}
if it is injective and induces purely inseparable extensions of residue
fields). Thus, $F_{X/k}$ is finite if $X$ is of finite type over $k$; then
$X^{(p)}$ is of finite type over $k$ as well, since it is obtained from
$X$ by the base change $F_k : \Spec(k) \to \Spec(k)$. In particular,
for any algebraic group $G$, the Frobenius kernel $G_1$ is finite
and radicial over $\Spec(k)$. Equivalently, $G_1$ is an
\emph{infinitesimal group scheme}.
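Standard examples: for $G = \bG_m$, the relative Frobenius morphism
is $t \mapsto t^p$ and $G_1 = \mu_p = \Spec(k[t]/(t^p - 1))$; for
$G = \bG_a$, one gets $G_1 = \alpha_p = \Spec(k[x]/(x^p))$. Both are
infinitesimal of order $p$.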
Next, let $\cL$ be an invertible sheaf on $X$, and
$f : L \to X$ the corresponding line bundle. Then
$f^{(p)} : L^{(p)} \to X^{(p)}$ is a line bundle, and there is
a canonical isomorphism
\[ F_{X/k}^*(L^{(p)}) \cong L^{\otimes p} \]
(see \cite[XV.1.3]{SGA5}). If $X$ is a $G$-scheme and $L$ is
$G$-linearized, then $L^{(p)}$ is $G^{(p)}$-linearized as well,
in view of Example \ref{ex:fp} (i). Also, note that
\emph{$L$ is ample if and only if $L^{(p)}$ is ample}.
Indeed, $L^{(p)}$ is the base change of $L$ under $F_k$, and hence
is ample if so is $L$ (see \cite[II.4.6.13]{EGA}).
Conversely, if $L^{(p)}$ is ample, then so is $F_{X/k}^*(L^{(p)})$
as $F_{X/k}$ is affine (see e.g. \cite[II.5.1.12]{EGA}); thus,
$L$ is ample as well.
We now extend these observations to the
\emph{iterated relative Frobenius morphism}
\[ F^n_{X/k} : X \longrightarrow X^{(p^n)}, \]
where $n$ is a positive integer. Recall from
\cite[VIIA.4.1]{SGA3} that $F^n_{X/k}$ is defined inductively by
$F^1_{X/k} = F_{X/k}$, $X^{(p^n)} = (X^{(p^{n-1})})^{(p)}$ and
$F^n_{X/k}$ is the composition
\[
\xymatrixcolsep{5pc}\xymatrix{
X \ar[r]^{F_{X/k}} &
X^{(p)} \ar[r]^-{F_{X^{(p)}/k}} &
X^{(p^2)} \to \cdots \to
X^{(p^{n-1})} \ar[r]^-{F_{X^{(p^{n-1})}/k}} &
X^{(p^n)}.
}
\]
This yields readily:
\begin{lemma}\label{lem:ifm}
Let $X$ be a scheme of finite type, $L$ a line bundle on $X$,
and $G$ an algebraic group.
\begin{enumerate}
\item[{\rm (i)}] The scheme $X^{(p^n)}$ is of finite type, and
$F^n_{X/k}$ is finite, surjective and radicial.
\item[{\rm (ii)}] $F^n_{G/k} : G \to G^{(p^n)}$ is a morphism of
algebraic groups, and its kernel (the $n$th Frobenius kernel $G_n$)
is infinitesimal.
\item[{\rm (iii)}] $L^{(p^n)}$ is a line bundle on $X^{(p^n)}$,
and we have a canonical isomorphism
\[ (F^n_{X/k})^*(L^{(p^n)}) \cong L^{\otimes p^n}. \]
Moreover, $L$ is ample if and only if $L^{(p^n)}$ is ample.
\item[{\rm (iv)}] If $X$ is a $G$-scheme, then $X^{(p^n)}$ is a
$G^{(p^n)}$-scheme and $F^n_{X/k}$ is equivariant relative to
$F^n_{G/k}$. If in addition $L$ is $G$-linearized, then
$L^{(p^n)}$ is $G^{(p^n)}$-linearized.
\end{enumerate}
\end{lemma}
\begin{remarks}\label{rem:ifm}
(i) If $X$ is the affine space $\bA^d_k$, then
$X^{(p^n)} \cong \bA^d_k$ for all $n \geq 1$. More generally,
if $X \subset \bA^d_k$ is the zero subscheme of
$f_1,\ldots,f_m \in k[x_1,\ldots,x_d]$, then
$X^{(p^n)} \subset \bA^d_k$ is the zero subscheme of
$f_1^{(p^n)}, \ldots, f_m^{(p^n)}$, where each $f_i^{(p^n)}$
is obtained from $f_i$ by raising all the coefficients to the
$p^n$th power.
\medskip
\noindent
(ii) Some natural properties of $X$ are not preserved
under Frobenius twist $X \mapsto X^{(p)}$. For example, assume
that $k$ is imperfect and choose $a \in k \setminus k^p$, where
$p := \car(k)$. Let $X := \Spec(K)$, where
$K$ denotes the field $k(a^{1/p}) \cong k[x]/(x^p -a)$. Then
$X^{(p^n)} \cong \Spec(k[x]/(x^p -a^{p^n}))
\cong \Spec(k[y]/(y^p))$ is non-reduced for all $n \geq 1$.
This can be partially remedied by replacing $X^{(p^n)}$
with the scheme-theoretic image of $F^n_{X/k}$; in the above
example, one easily checks that this image is geometrically
reduced for all $n \geq 1$. But given a normal variety $X$, it may happen
that $F^n_{X/k}$ is an epimorphism and $X^{(p^n)}$ is non-normal
for any $n \geq 1$. For example, take $k$ and $a$ as above
and let $X \subset \bA^2_k = \Spec(k[x,y])$ be the zero subscheme
of $y^{\ell} - x^p + a$, where $\ell$ is a prime and $\ell \neq p$.
Then $X$ is a regular curve: indeed, by the jacobian criterion,
$X$ is smooth away from the closed point $P := (a^{1/p},0)$; also,
the maximal ideal of $\cO_{X,P}$ is generated by the image of $y$,
since the quotient ring
$k[x,y]/(y^{\ell} - x^p +a, y) \cong k[x]/(x^p - a)$ is a field.
Moreover, $X^{(p^n)} \subset \bA^2_k$ is the zero subscheme of
$y^{\ell} - x^p + a^{p^n}$, and hence is not regular at the point
$(a^{p^{n-1}},0)$. Also, $F^n_{X/k}$ is an epimorphism as
$X^{(p^n)}$ is integral.
\end{remarks}
\subsection{Quotients by infinitesimal group schemes}
\label{subsec:qi}
Throughout this subsection, we still assume that $\car(k) = p > 0$.
Recall from \cite[VIIA.8.3]{SGA3} that for any algebraic group
$G$, there exists a positive integer $n_0$ such that the quotient
group scheme $G/G_n$ is smooth for $n \ge n_0$. In particular,
for any infinitesimal group scheme $I$, there exists a positive
integer $n_0$ such that the $n$th Frobenius kernel $I_n$ is
the whole $I$ for $n \geq n_0$. The smallest such integer is called
the \emph{height} of $I$; we denote it by~$h(I)$.
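For instance, $h(\mu_p) = h(\alpha_p) = 1$, while the kernel
$\alpha_{p^n} := \Ker(F^n_{\bG_a/k}) = \Spec(k[x]/(x^{p^n}))$
has height $n$.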
\begin{lemma}\label{lem:quot}
Let $X$ be a scheme of finite type equipped with an action $\alpha$
of an infinitesimal group scheme $I$.
\begin{enumerate}
\item[{\rm (i)}] There exists a categorical quotient
\[ \varphi = \varphi_{X,I} : X \longrightarrow X/I, \]
where $X/I$ is a scheme of finite type and $\varphi$ is a finite, surjective,
radicial morphism.
\item[{\rm (ii)}] For any integer $n \geq h(I)$, the relative Frobenius
morphism $F^n_{X/k} : X \to X^{(p^n)}$ factors uniquely as
\[ X \stackrel{\varphi}{\longrightarrow} X/I
\stackrel{\psi}{\longrightarrow} X^{(p^n)}. \]
Moreover, $\psi = \psi_{X,I}$ is finite, surjective and radicial as well.
\item[{\rm (iii)}] Let $n \geq h(I)$ and $L$ a line bundle on $X$.
Then $M := \psi^*(L^{(p^n)})$ is a line bundle on $X/I$, and
$\varphi^*(M) \cong L^{\otimes p^n}$. Moreover, $L$ is ample
if and only if $M$ is ample.
\item[{\rm (iv)}] If $X$ is a normal variety, then so is $X/I$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Observe that the morphism
\[ \gamma := (\pr_I, \alpha) : I \times X \longrightarrow I \times X,
\quad (g,x) \longmapsto (g, g \cdot x) \]
is an automorphism of the $I$-scheme $I \times X$ and satisfies
$\pr_X \circ \gamma = \alpha$. As $I$ is infinitesimal, the morphism $\pr_X$ is finite,
locally free and bijective; thus, so is $\alpha$. In view of
\cite[V.4.1]{SGA3}, it follows that the categorical quotient $\varphi$
exists and is integral and surjective. The remaining assertions
will be proved in (ii) next.
(ii) By Lemma \ref{lem:ifm} (iv), $F^n_{X/k}$ is $I$-invariant for
any $n \geq h(I)$. Since $\varphi$ is a categorical quotient, this yields
the existence and uniqueness of $\psi$. As $F^n_{X/k}$ is universally
injective, so is $\varphi$; equivalently, $\varphi$ is radicial.
In view of (i), it follows that $\varphi$ is a universal homeomorphism.
As $F^n_{X/k}$ is a universal homeomorphism as well, so is $\psi$.
Recall from Lemma \ref{lem:ifm} that $X^{(p^n)}$ is of finite type and
$F^n_{X/k}$ is finite. As a consequence, $\varphi$ and $\psi$ are finite,
and $X/I$ is of finite type.
(iii) The first assertion follows from Lemma \ref{lem:ifm} (iii).
If $L$ is ample, then so is $L^{(p^n)}$ by that lemma; thus,
$M$ is ample as $\psi$ is affine. Conversely, if $M$ is ample,
then so is $L$ as $\varphi$ is affine.
(iv) Note that $X/I$ is irreducible, since $\varphi$ is a homeomorphism.
Using again the affineness of $\varphi$, we may thus assume that $X/I$,
and hence $X$, are affine. Then the assertion follows by a standard
argument of invariant theory. More specifically, let $X = \Spec(R)$,
then $R$ is an integral domain and $X/I = \Spec(R^I)$, where
$R^I \subset R$ denotes the subalgebra of invariants, consisting of
those $f \in R$ such that $\alpha^{\#}(f) = \pr_X^{\#}(f)$ in
$\cO(I \times X)$. Thus, $R^I$ is a domain. We check that it is normal:
if $f \in \Frac(R^I)$ is integral over $R^I$, then $f \in \Frac(R)$ is
integral over $R$, and hence $f \in R$. To complete the proof,
it suffices to show that $f$ is invariant. But $f = \frac{f_1}{f_2}$
where $f_1, f_2 \in R^I$ and $f_2 \neq 0$; this yields
\[ 0 = \alpha^{\#}(f_1) - \pr_X^{\#}(f_1)
= \alpha^{\#}(f f_2) - \pr_X^{\#}(f f_2)
= (\alpha^{\#}(f) - \pr_X^{\#}(f)) \pr_X^{\#}(f_2) \]
in $\cO(I \times X) \cong \cO(I) \otimes_k R$.
Via this isomorphism, $\pr_X^{\#}(f_2)$ is identified
with $1 \otimes f_2$, which is not a zero divisor in
$\cO(I) \otimes_k R$ (since its image in
$\cO(I) \otimes_k \Frac(R)$ is invertible). Thus,
$\alpha^{\#}(f) - \pr_X^{\#}(f) = 0$ as desired.
\end{proof}
\begin{remark}\label{rem:qi}
With the notation of Lemma \ref{lem:quot}, we may identify
the underlying topological space of $X/I$ with that of $X$,
since $\varphi$ is radicial. Then the structure sheaf $\cO_{X/I}$
is just the sheaf of invariants $\cO_X^I$. As a consequence,
$\varphi_{X,I}$ is an epimorphism.
\end{remark}
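As a minimal illustration of Lemma \ref{lem:quot} (a standard
computation), let $I = \alpha_p = \Spec(k[t]/(t^p))$ act on
$X = \bA^1_k = \Spec(k[x])$ by translations, i.e., by restricting
the action of $\bG_a$ on itself. The invariance condition
$f(x + t) = f(x)$ in $k[x] \otimes_k k[t]/(t^p)$ forces the
coefficients of $t, \ldots, t^{p-1}$ to vanish, and the invariant
subalgebra is $k[x]^I = k[x^p]$. Thus $X/I \cong \bA^1_k$, with
$\varphi_{X,I}$ given by $x \mapsto x^p$; in this example,
$\psi_{X,I} : X/I \to X^{(p)} \cong \bA^1_k$ is an isomorphism.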
\begin{lemma}\label{lem:prod}
Let $X$ (resp. $Y$) be a scheme of finite type equipped with
an action of an infinitesimal algebraic group $I$ (resp.~$J$).
Then the morphism
$\varphi_{X,I} \times \varphi_{Y,J} : X \times Y \to X/I \times Y/J$
factors uniquely through an isomorphism
\[ (X \times Y)/(I \times J) \stackrel{\cong}{\longrightarrow}
X/I \times Y/J. \]
\end{lemma}
\begin{proof}
Since $\varphi_{X,I} \times \varphi_{Y,J}$ is invariant under
$I \times J$, it factors uniquely through a morphism
$f : (X \times Y)/(I \times J) \to X/I \times Y/J$. To show that
$f$ is an isomorphism, we may assume by Remark \ref{rem:qi}
that $X$ and $Y$ are affine. Let $R := \cO(X)$ and
$S := \cO(Y)$; then we are reduced to showing that the natural
map
\[ f^{\#} : R^I \otimes S^J \longrightarrow (R \otimes S)^{I \times J} \]
is an isomorphism. Here and later in this proof, all tensor products
are taken over $k$.
Clearly, $f^{\#}$ is injective. To show the surjectivity, we consider first
the case where $J$ is trivial. Choose a basis $(s_a)_{a \in A}$ of the
$k$-vector space $S$. Let $u \in R \otimes S$ and write
$u = \sum_{a \in A} r_a \otimes s_a$, where the $r_a \in R$ are unique.
Then $u \in (R \otimes S)^I$ if and only if
$\alpha^*(u) = \pr_X^*(u)$ in $\cO(I \times X \times Y)$, i.e.,
\[ \sum_{a \in A} \alpha^*(r_a) \otimes s_a =
\sum_{a \in A} \pr_X^*(r_a) \otimes s_a \]
in $\cO(I) \otimes R \otimes S$. As the $s_a$ are linearly independent
over $\cO(I) \otimes R$, this yields $\alpha^*(r_a) = \pr_X^*(r_a)$,
i.e., $r_a \in R^I$, for all $a \in A$. In turn, this yields
$(R \otimes S)^I = R^I \otimes S$.
In the general case, we use the equality
\[ (R \otimes S)^{I \times J} = (R \otimes S)^I \cap (R \otimes S)^J \]
of subspaces of $R \otimes S$. In view of the above step, this yields
\[ (R \otimes S)^{I \times J} = (R^I \otimes S) \cap (R \otimes S^J). \]
Choose decompositions of $k$-vector spaces
$R = R^I \oplus V$ and $S = S^J \oplus W$; then we obtain a
decomposition
\[ R \otimes S = (R^I \otimes S^J) \oplus (R^I \otimes W)
\oplus (V \otimes S^J) \oplus (V \otimes W), \]
and hence the equality
\[ (R^I \otimes S) \cap (R \otimes S^J) = R^I \otimes S^J. \]
\end{proof}
\begin{lemma}\label{lem:equiv}
Let $G$ be an algebraic group, $X$ a $G$-scheme of finite type
and $n$ a positive integer. Then there exists a unique action of $G/G_n$
on $X/G_n$ such that the morphism $\varphi_{X,G_n} : X \to X/G_n$
(resp.~$\psi_{X,G_n} : X/G_n \to X^{(p^n)}$) is equivariant relative to
$\varphi_{G,G_n} : G \to G/G_n$
(resp.~$\psi_{G,G_n} : G/G_n \to G^{(p^n)}$).
\end{lemma}
\begin{proof}
Denote as usual by $\alpha : G \times X \to X$ the action
and write for simplicity
$\varphi_X := \varphi_{X,G_n}$ and $\varphi_G := \varphi_{G,G_n}$.
Then the map
$\varphi_X \circ \alpha : G \times X \to X/G_n$
is invariant under the natural action of $G_n \times G_n$, since
we have for any scheme $S$ and any $u,v \in G_n(S)$,
$g \in G(S)$, $x \in X(S)$ that $(ug)(vx) = u (g v g^{-1}) g x$
and $g v g^{-1} \in G_n(S)$. Also, the map
\[ \varphi_G \times \varphi_X : G \times X
\longrightarrow G/G_n \times X/G_n \]
is the categorical quotient by $G_n \times G_n$ in view
of Lemma \ref{lem:prod}. Thus, there exists a unique morphism
$\beta : G/G_n \times X/G_n \to X/G_n$
such that the following diagram commutes:
\[
\xymatrix{G \times X \ar[r]^{\alpha} \ar[d]_{\varphi_G \times \varphi_X}
& X \ar[d]^{\varphi_X} \\
G/G_n \times X/G_n \ar[r]^-{\beta} & X/G_n. \\
}
\]
We have in particular
$\beta(e_{G/G_n},\varphi_X(x)) = \varphi_X(x)$
for any schematic point $x$ of $X$. As $\varphi_X$
is an epimorphism (Remark \ref{rem:qi}), it follows that
$\beta(e_{G/G_n}, z) = z$ for any schematic point $z$ of
$X/G_n$. Likewise, we obtain
$\beta(x,\beta(y,z)) = \beta(xy,z)$ for any schematic points
$x,y$ of $G/G_n$ and $z$ of $X/G_n$, by using the fact that
\[ \varphi_G \times \varphi_G \times \varphi_X :
G \times G \times X \longrightarrow
G/G_n \times G/G_n \times X/G_n \]
is an epimorphism (as follows from Lemma \ref{lem:prod}
and Remark \ref{rem:qi} again). Thus, $\beta$ is the desired
action.
\end{proof}
\begin{lemma}\label{lem:lin}
Let $G$ be a connected affine algebraic group, $X$ a normal
$G$-variety, and $L$ a line bundle on $X$. Then
$L^{\otimes m }$ is $G$-linearizable for some positive
integer $m$ depending only on $G$.
\end{lemma}
\begin{proof}
If $G$ is smooth, then the assertion is that of
\cite[Thm.~2.14]{Brion15}. For an arbitrary $G$, we may choose
a positive integer $n$ such that $G/G_n$ is smooth.
In view of Lemmas \ref{lem:quot} and \ref{lem:equiv},
the categorical quotient $X/G_n$ is a normal $G/G_n$-variety equipped
with a $G$-equivariant morphism $\varphi : X \to X/G_n$ and with a line
bundle $M$ such that $\varphi^*(M) \cong L^{\otimes p^n}$. The line
bundle $M^{\otimes m}$ is $G/G_n$-linearizable for some positive
integer $m$, and hence $L^{\otimes p^n m}$ is $G$-linearizable.
\end{proof}
\subsection{$G$-quasi-projectivity}
\label{subsec:Gqp}
We say that a $G$-scheme $X$ is \emph{$G$-quasi-projective}
if it admits an ample $G$-linearized line bundle; equivalently,
$X$ admits an equivariant immersion in the projectivization
of a finite-dimensional $G$-module. If in addition the $G$-action
on $X$ is almost faithful, then $G$ must be affine, since it acts
almost faithfully on a projective space.
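For instance, every affine $G$-scheme $X$ of finite type is
$G$-quasi-projective: the trivial line bundle $X \times \bA^1$ is
ample (as $X$ is affine) and carries an obvious $G$-linearization,
with $G$ acting trivially on the second factor.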
By the next lemma, being $G$-quasi-projective is invariant
under field extensions (this fact should be well-known, but
we could not locate any reference):
\begin{lemma}\label{lem:field}
Let $G$ be an algebraic $k$-group, $X$ a $G$-scheme over $k$,
and $k'/k$ a field extension. Then $X$ is $G$-quasi-projective
if and only if $X_{k'}$ is $G_{k'}$-quasi-projective.
\end{lemma}
\begin{proof}
Assume that $X$ has an ample $G$-linearized line bundle $L$.
Then $L_{k'}$ is an ample line bundle on $X_{k'}$
(see \cite[II.4.6.13]{EGA}), and is $G_{k'}$-linearized
by Example \ref{ex:fp} (i).
For the converse, we adapt a classical specialization argument
(see \cite[IV.9.1]{EGA}). Assume that $X_{k'}$ has an ample
$G_{k'}$-linearized line bundle $L'$. Then there exists
a finitely generated subextension $k''/k$ of $k'/k$ and a line bundle
$L''$ on $X_{k''}$ such that $L' \cong L'' \otimes_{k''} k'$; moreover,
$L''$ is ample in view of \cite[VIII.5.8]{SGA1}.
We may further assume (possibly by enlarging $k''$)
that $L''$ is $G_{k''}$-linearized. Next, there exists a finitely
generated $k$-algebra $R \subset k''$ and an ample line bundle
$M$ on $X_R$ such that $L'' \cong M \otimes_R k''$ and
$M$ is $G_R$-linearized. Choose a maximal ideal
$\fm \subset R$, with quotient field $K := R/\fm$. Then $K$ is a finite
extension of $k$; moreover, $X_K$ is equipped with an ample
$G_K$-linearized line bundle $M_K := M \otimes_R K$. Consider
the norm $L := \N(M_K)$; then $L$ is an ample line bundle on $X$
in view of \cite[II.6.6.2]{EGA}. Also, $L$ is equipped with a
$G$-linearization by Lemma \ref{lem:nwr}.
\end{proof}
Also, $G$-quasi-projectivity is invariant under Frobenius twists
(Lemma \ref{lem:ifm}) and quotients by infinitesimal group
schemes (Lemmas \ref{lem:quot} and \ref{lem:equiv}). We will
obtain a further invariance property of quasi-projectivity
(Proposition \ref{prop:GH}). For this, we need some preliminary
notions and results.
Let $G$ be an algebraic group, $H \subset G$ a subgroup scheme,
and $Y$ an $H$-scheme. The \emph{associated fiber bundle}
is a $G$-scheme $X$ equipped with a $G \times H$-equivariant
morphism $\varphi : G \times Y \to X$ such that the square
\[
\xymatrix{
G \times Y \ar[r]^-{\pr_G} \ar[d]_{\varphi} & G \ar[d]_f \\
X \ar[r]^-{\psi} & G/H, \\
}
\]
is cartesian, where $f$ denotes the quotient morphism,
$\pr_G$ the projection, and $G \times H$ acts on $G \times Y$ via
$(g,h) \cdot (g',y) = (gg' h^{-1}, h \cdot y)$ for any
scheme $S$ and any $g,g' \in G(S)$, $h \in H(S)$, $y \in Y(S)$.
Then $\varphi$ is an $H$-torsor, since so is $f$. Thus, the triple
$(X,\varphi,\psi)$ is uniquely determined; we will denote $X$ by
$G \times^H Y$. Also, note that $\psi$ is faithfully flat and
$G$-equivariant; its fiber at the base point $f(e_G) \in (G/H)(k)$
is isomorphic to $Y$ as an $H$-scheme.
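For instance, if $Y = \Spec(k)$ with its trivial $H$-action, then
the defining square is satisfied by $X = G/H$, $\varphi = f$ and
$\psi = \id_{G/H}$; thus $G \times^H \Spec(k) = G/H$.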
Conversely, if $X$ is a $G$-scheme equipped with an equivariant
morphism $\psi: X \to G/H$, then $X = G \times^H Y$, where
$Y$ denotes the fiber of $\psi$ at the base point of $G/H$.
Indeed, form the cartesian square
\[
\xymatrix{
X' \ar[r]^-{\eta} \ar[d]_{\varphi} & G \ar[d]_f \\
X \ar[r]^-{\psi} & G/H. \\
}
\]
Then $X'$ is a $G$-scheme, and $\eta$ an equivariant morphism
for the $G$-action on itself by left multiplication.
Moreover, we may identify $Y$ with the fiber of $\eta$
at $e_G$. Then $X'$ is equivariantly isomorphic to $G \times Y$
via the maps
$G \times Y \to X'$, $(g,y) \mapsto g \cdot y$ and
$X' \to G \times Y$,
$z \mapsto (\eta(z), \eta(z)^{-1} \cdot z)$, and this
identifies $\eta$ with $\pr_G : G \times Y \to G$.
The associated fiber bundle need not exist in general, as follows
from Hironaka's example mentioned in the introduction (see
\cite[p.~367]{BB} for details). But it does exist when the $H$-action
on $Y$ extends to a $G$-action $\alpha : G \times Y \to Y$:
just take $X = G/H \times Y$ equipped with the diagonal action
of $G$ and with the maps
\[ f \times \alpha : G \times Y \longrightarrow G/H \times Y,
\quad \pr_{G/H} : G/H \times Y \longrightarrow G/H. \]
A further instance in which the associated fiber bundle exists
is given by the following result, which follows from
\cite[Prop.~7.1]{MFK}:
\begin{lemma}\label{lem:ass}
Let $G$ be an algebraic group, $H \subset G$ a subgroup scheme,
and $Y$ an $H$-scheme equipped with an ample $H$-linearized
line bundle $M$. Then the associated fiber bundles
$X := G \times^H Y$ and $L := G \times^H M$ exist. Moreover,
$L$ is a $G$-linearized line bundle on $X$, and is ample relative
to $\psi$. In particular, $X$ is quasi-projective.
\end{lemma}
In particular, the associated fiber bundle $G \times^H V$ exists
for any finite-dimensional $H$-module $V$, viewed as an
affine space. Then $G\times^H V$ is a $G$-linearized vector
bundle on $G/H$, called the \emph{homogeneous vector bundle}
associated with the $H$-module $V$.
We now come to a key technical result:
\begin{proposition}\label{prop:GH}
Let $G$ be an algebraic group, $H \subset G$ a subgroup scheme
such that $G/H$ is finite, and $X$ a $G$-scheme. If $X$ is
$H$-quasi-projective, then it is $G$-quasi-projective as well.
\end{proposition}
\begin{proof}
We first reduce to the case where \emph{$G$ is smooth}.
For this, we may assume that $\car(k) = p > 0$.
Choose a positive integer $n$ such that $G/G_n$ is smooth;
then we may identify $H/H_n$ with a subgroup scheme of
$G/G_n$, and the quotient $(G/G_n)/(H/H_n)$ is finite.
By Lemma \ref{lem:ifm}, $X^{(p^n)}$ is a $G/G_n$-scheme
of finite type and admits an ample $H/H_n$-linearized line bundle.
If $X^{(p^n)}$ admits an ample $G/G_n$-linearized line bundle $M$,
then $(F^n_{X/k})^*(M)$ is an ample $G$-linearized line bundle on $X$,
in view of Lemma \ref{lem:ifm} again. This yields the desired reduction.
Next, let $M$ be an ample $H$-linearized line bundle on $X$.
By Lemma \ref{lem:ass}, the associated fiber bundle
$G \times^H X = G/H \times X$ is equipped with the $G$-linearized
line bundle $L := G \times^H M$. The projection
$\pr_X : G/H \times X \to X$ is finite, \'etale of degree
$n := [G:H]$, and $G$-equivariant. As a consequence,
$E := (\pr_X)_*(L)$ is a $G$-linearized vector bundle of rank
$n$ on $X$; thus, $\det(E)$ is $G$-linearized as well. To complete
the proof, it suffices to show that $\det(E)$ is ample.
For this, we may assume that $k$ is algebraically closed
by using \cite[VIII.5.8]{SGA1} again. Then there exist lifts
$e = g_1, \ldots, g_n \in G(k)$ of the distinct $k$-points of $G/H$.
This identifies $G/H \times X$ with the disjoint union of
$n$ copies of $X$; the pull-back of $\pr_X$ to the $i$th copy is
the identity of $X$, and the pull-back of $L$ is $g_i^*(M)$. Thus,
$E \cong \oplus_{i = 1}^n g_i^*(M)$, and hence
$\det(E) \cong \otimes_{i=1}^n g_i^*(M)$ is ample indeed.
\end{proof}
\begin{remark}\label{rem:GH}
Given $G$, $H$, $X$ as in Proposition \ref{prop:GH} and an
$H$-linearized ample line bundle $M$ on $X$, it may well happen
that no non-zero tensor power of $M$ is $G$-linearizable. This holds
for example when $G$ is the constant group of order $2$ acting on
$X = \bP^1 \times \bP^1$ by exchanging the two factors, $H$ is trivial,
and $M$ has bi-degree $(m_1,m_2)$ with $m_1 > m_2 \ge 1$: indeed,
the involution pulls back $M^{\otimes N}$, of bi-degree $(N m_1, N m_2)$,
to a line bundle of bi-degree $(N m_2, N m_1)$, whereas any
$G$-linearized line bundle is isomorphic to its pull-backs under
the elements of $G(k)$.
\end{remark}
\begin{corollary}\label{cor:GH}
Let $G$ be an affine algebraic group, and $X$ a normal
quasi-projective $G$-variety. Then $X$ is $G$-quasi-projective.
\end{corollary}
\begin{proof}
Choose an ample line bundle $L$ on $X$. By Lemma \ref{lem:lin},
some positive power of $L$ admits a $G^0$-linearization.
This yields the assertion in view of Proposition \ref{prop:GH}.
\end{proof}
\section{Proofs of the main results}
\label{subsec:pr}
\subsection{The theorem of the square}
\label{subsec:ts}
Let $G$ be a group scheme with multiplication map
$\mu$, and $X$ a $G$-scheme with action map $\alpha$.
For any line bundle $L$ on $X$, denote by $L_G$ the
line bundle on $G \times X$ defined by
\[ L_G := \alpha^*(L) \otimes \pr_X^*(L)^{-1},\]
where $\pr_X : G \times X \to X$ stands for the projection.
Next, denote by $L_{G \times G}$ the line bundle on
$G \times G \times X$ defined by
\[ L_{G \times G} := (\mu \times \id_X)^*(L_G)
\otimes (\pr_1 \times \id_X)^*(L_G)^{-1}
\otimes (\pr_2 \times \id_X)^*(L_G)^{-1}, \]
where $\pr_1, \pr_2 : G \times G \to G$ denote the two projections.
Then $L$ is said to \emph{satisfy the theorem of the square}
if there exists a line bundle $M$ on $G \times G$ such that
\[ L_{G \times G} \cong \pr_{G \times G}^*(M), \]
where $\pr_{G \times G} : G \times G \times X \to G \times G$
denotes the projection.
By \cite[p.~159]{BLR}, $L$ satisfies the theorem of the square
if and only if the polarization morphism
\[ G \longrightarrow \Pic_X, \quad
g \longmapsto g^*(L) \otimes L^{-1} \]
is a homomorphism of group functors, where $\Pic_X$
denotes the Picard functor that assigns to any scheme
$S$, the commutative group $\Pic(X \times S)/\pr_S^*\Pic(S)$.
In particular, the line bundle
$(gh)^*(L) \otimes g^*(L)^{-1} \otimes h^*(L)^{-1} \otimes L$
is trivial for any $g,h \in G(k)$; this is the original formulation
of the theorem of the square.
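For example, when $G = X = A$ is an abelian variety acting on itself
by translations, this reads
\[ \tau(a+b)^*(L) \otimes L \cong \tau(a)^*(L) \otimes \tau(b)^*(L)
\quad \text{for all } a, b \in A(k), \]
where $\tau(a)$ denotes the translation by $a$; this holds for every
line bundle $L$ on $A$ by the classical theorem of the square
(see \cite{Mumford}).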
\begin{proposition}\label{prop:ts}
Let $G$ be a connected algebraic group, $X$ a normal, geometrically
irreducible $G$-variety, and $L$ a line bundle on $X$.
Then $L^{\otimes m}$ satisfies the theorem of the square
for some positive integer $m$ depending only on $G$.
\end{proposition}
\begin{proof}
By a generalization of Chevalley's structure theorem due
to Raynaud (see \cite[IX.2.7]{Raynaud} and also
\cite[9.2 Thm.~1]{BLR}),
there exists an exact sequence of algebraic groups
\[ 1 \longrightarrow H \longrightarrow G
\stackrel{f}{\longrightarrow} A \longrightarrow 1, \]
where $H$ is affine and connected, and $A$ is an abelian variety.
(If $G$ is smooth, then there exists a smallest such subgroup scheme
$H = H(G)$; if in addition $k$ is perfect, then $H(G)$ is smooth as well).
We choose such a subgroup scheme $H \triangleleft G$.
In view of Lemma \ref{lem:lin}, there exists a positive integer $m$
such that $L^{\otimes m}$ is $H$-linearizable. Replacing $L$ with
$L^{\otimes m}$, we may thus assume that $L$ is equipped with an
$H$-linearization. Then $L_G$ is also $H$-linearized for the
action of $H$ on $G \times X$ by left multiplication on $G$,
since $\alpha$ is $G$-equivariant for that action, and
$\pr_X$ is $G$-invariant. As the map
$f \times \id_X : G \times X \to A \times X$ is an $H$-torsor
relative to the above action, there exists a line bundle
$L_A$ on $A \times X$, unique up to isomorphism, such that
\[ L_G = (f \times \id_X)^*(L_A) \]
(see \cite[p.~32]{MFK}). The diagram
\[
\xymatrixcolsep{4pc}\xymatrix{
G \times G \times X \ar[r]^-{\mu_G \times \id_X} \ar[d]_{f \times f \times \id_X}
& G \times X \ar[d]^{f \times \id_X} \\
A \times A \times X \ar[r]^-{\mu_A\times \id_X} & A \times X \\}
\]
commutes, since $f$ is a morphism of algebraic groups; thus,
\[ (\mu_G \times \id_X)^*(L_G) \cong
(f \times f \times \id_X)^* (\mu_A \times \id_X)^*(L_A). \]
Also, for $i = 1,2$, the diagrams
\[
\xymatrixcolsep{4pc}\xymatrix{
G \times G \times X \ar[r]^-{\pr_i \times \id_X} \ar[d]_{f \times f \times \id_X}
& G \times X \ar[d]^{f \times \id_X} \\
A \times A \times X \ar[r]^-{\pr_i \times \id_X} & A \times X \\}
\]
commute as well, and hence
\[ (\pr_i \times \id_X)^*(L_G) \cong
(f \times f \times \id_X)^* (\pr_i \times \id_X)^*(L_A). \]
This yields an isomorphism
\[ L_{G \times G} \cong (f \times f \times \id_X)^*(L_{A \times A}), \]
where we set
\[ L_{A \times A} := (\mu_A \times \id_X)^*(L_A)
\otimes (\pr_1 \times \id_X)^*(L_A)^{-1}
\otimes (\pr_2 \times \id_X)^*(L_A)^{-1}. \]
Note that the line bundle $L_A$ on $A \times X$ is equipped
with a rigidification along $e_A \times X$, i.e., with an isomorphism
\[ \cO_X \stackrel{\cong}{\longrightarrow}
(e_A \times \id_X)^*(L_A). \]
Indeed, recall that
$(f \times \id_X)^*(L_A) = L_G = \alpha^*(L) \otimes \pr_X^*(L)^{-1}$,
that $(f \times \id_X) \circ (e_G \times \id_X) = e_A \times \id_X$
(as $f(e_G) = e_A$), and that
$\alpha \circ (e_G \times \id_X) = \id_X = \pr_X \circ (e_G \times \id_X)$;
hence $(e_A \times \id_X)^*(L_A) \cong L \otimes L^{-1} \cong \cO_X$.
As
$(\mu_A \times \id_X) \circ (e_A \times \id_{A \times X})
= \pr_2 \circ (e_A \times \id_{A \times X})$ and
$(\pr_1 \times \id_X) \circ (e_A \times \id_{A \times X})
= e_A \times \id_X$,
it follows that $L_{A \times A}$ is equipped with a rigidification
along $e_A \times A \times X$. Likewise, $L_{A \times A}$
is equipped with a rigidification along $A \times e_A \times X$.
The assertion now follows from the lemma below, a version of the
classical theorem of the cube (see \cite[III.10]{Mumford}).
\end{proof}
\begin{lemma}\label{lem:cube}
Let $X$, $Y$ be proper varieties equipped with $k$-rational points
$x$, $y$. Let $Z$ be a geometrically connected scheme of finite type,
and $L$ a line bundle on $X \times Y \times Z$. Assume that the
pull-backs of $L$ to $x \times Y \times Z$ and $X \times y \times Z$
are trivial. Then $L \cong \pr_{X \times Y}^*(M)$ for some line bundle
$M$ on $X \times Y$.
\end{lemma}
\begin{proof}
By our assumptions on $X$ and $Y$, we have $\cO(X) = k = \cO(Y)$.
Choose rigidifications
\[ \cO_{Y \times Z} \stackrel{\cong}{\longrightarrow}
(x \times \id_Y \times \id_Z)^*(L), \quad
\cO_{X \times Z} \stackrel{\cong}{\longrightarrow}
(\id_X \times y \times \id_Z)^*(L). \]
We may assume that these rigidifications induce the same isomorphism
\[ \cO_Z \stackrel{\cong}{\longrightarrow}
(x \times y \times \id_Z)^*(L),
\]
since their pull-backs to $Z$ differ by a unit in
$\cO(Z) = \cO(Y \times Z) = \cO(X \times Z)$.
By \cite[II.15]{Murre} together with \cite[Thm.~2.5]{Kleiman},
the Picard functor $\Pic_X$ is represented by a commutative group
scheme, locally of finite type, and likewise for
$\Pic_Y$, $\Pic_{X \times Y}$. Also, we may view
$\Pic_{X \times Y}(Z)$ as the group of isomorphism classes of
line bundles on $X \times Y \times Z$, rigidified along
$x \times y \times Z$, and likewise for $\Pic_X(Z)$, $\Pic_Y(Z)$
(see e.g. \cite[Lem.~2.9]{Kleiman}).
Thus, $L$ defines a morphism of schemes
\[ \varphi : Z \longrightarrow \Pic_{X \times Y}, \quad
z \longmapsto (\id_{X \times Y} \times z)^*(L). \]
Denote by $N$ the kernel of the morphism of group schemes
\[ \pr_X^* \times \pr_Y^* : \Pic_{X \times Y}
\longrightarrow \Pic_X \times \Pic_Y. \]
Then $\varphi$ factors through $N$ in view of the rigidifications
of $L$. We now claim that $N$ is \'etale. To check this,
it suffices to show that the differential of
$\pr_X^* \times \pr_Y^*$ at the origin is an isomorphism.
But we have
\[ \Lie(\Pic_{X \times Y}) \cong H^1(X \times Y, \cO_{X \times Y})
\cong (H^1(X, \cO_X) \otimes \cO(Y)) \oplus
(\cO(X) \otimes H^1(Y,\cO_Y)), \]
where the first isomorphism follows from \cite[Thm.~5.11]{Kleiman},
and the second one from the K\"unneth formula. Thus,
\[ \Lie(\Pic_{X \times Y}) \cong H^1(X, \cO_X) \oplus H^1(Y,\cO_Y)
\cong \Lie(\Pic_X) \oplus \Lie(\Pic_Y). \]
Moreover, these isomorphisms identify the differential of
$\pr_X^* \times \pr_Y^*$ at the origin with the identity.
This implies the claim.
Since $Z$ is geometrically connected, it follows from the claim
that $\varphi$ factors through a $k$-rational point of $N$.
By the definition of the Picard functor, this means that
\[ L \cong \pr_{X \times Y}^*(M) \otimes \pr_Z^*(M') \]
for some line bundles $M$ on $X \times Y$ and $M'$ on $Z$.
Using again the rigidifications of $L$, we see that
$M'$ is trivial.
\end{proof}
\subsection{Proof of Theorem \ref{thm:cover}}
\label{subsec:cover}
Let $X$ be a normal $G$-variety, where $G$ is a connected
algebraic group. We first reduce to the case where $G$ is
\emph{smooth}; for this, we may assume that $\car(k) > 0$.
By Lemmas \ref{lem:quot} and \ref{lem:equiv},
there is a finite $G$-equivariant morphism
$\varphi : X \to X/G_n$ for all $n \geq 1$, where $X/G_n$
is a normal $G/G_n$-variety. Since $G/G_n$ is smooth
for $n \gg 0$, this yields the desired reduction.
Consider an open affine subvariety $U$ of $X$. Then the
image $G \cdot U = \alpha(G \times U)$ is open in $X$ (since
$\alpha$ is flat), and $G$-stable. Clearly, $X$ is covered by
opens of the form $G \cdot U$ for $U$ as above; thus, it suffices
to show that $G \cdot U$ is quasi-projective. This follows from
the next proposition, a variant of a result of Raynaud on
the quasi-projectivity of torsors (see \cite[V.3.10]{Raynaud}
and also \cite[6.4 Prop.~2]{BLR}).
\begin{proposition}\label{prop:ray}
Let $G$ be a smooth connected algebraic group, $X$ a normal $G$-variety,
and $U \subset X$ an open affine subvariety. Assume that $X = G \cdot U$
and let $D$ be an effective Weil divisor on $X$ with support
$X \setminus U$. Then $D$ is an ample Cartier divisor.
\end{proposition}
\begin{proof}
By our assumptions on $G$, the action map
$\alpha : G \times X \to X$ is smooth and its fibers are geometrically
irreducible; in particular, $G \times X$ is normal. Also,
the Weil divisor $G \times D$ on $G \times X$ contains no fiber
of $\alpha$ in its support, since $X = G \cdot U$. In view of the
Ramanujam-Samuel theorem (see \cite[IV.21.14.1]{EGA}), it follows
that $G \times D$ is a Cartier divisor. As $D$ is the pull-back
of $G \times D$ under $e_G \times \id_X$, we see that $D$ is Cartier.
To show that $D$ is ample, we may replace $k$ with any separable
field extension, since normality is preserved under such extensions.
Thus, we may assume that $k$ is \emph{separably closed}.
By Proposition \ref{prop:ts}, there exists a positive integer $m$
such that the line bundle on $X$ associated with $mD$ satisfies
the theorem of the square. Replacing $D$ with $mD$, we see that
the divisor $gh \cdot D - g \cdot D - h \cdot D + D$ is principal
for all $g, h \in G(k)$. In particular, we have isomorphisms
\[ \cO_X(2D) \cong \cO_X(g \cdot D + g^{-1} \cdot D) \]
for all $g \in G(k)$.
We now adapt an argument from \cite[p.~154]{BLR}. In view of the
above isomorphism, we have global sections $s_g \in H^0(X,\cO_X(2D))$
($g \in G(k)$) such that $X_{s_g} = g \cdot U \cap g^{-1} \cdot U$
is affine. Thus, it suffices to show that $X$ is covered by the
$g \cdot U \cap g^{-1} \cdot U$, where $g \in G(k)$. In turn, it suffices
to check that every closed point $x \in X$ lies in
$g \cdot U \cap g^{-1} \cdot U$ for some $g \in G(k)$.
Denote by $k'$ the residue field of $x$; this is a finite
extension of $k$. Consider the orbit map
\[ \alpha_x : G_{k'} \longrightarrow X_{k'}, \quad
g \longmapsto g \cdot x. \]
Then $V := \alpha_x^{-1}(U_{k'})$ is open in $G_{k'}$, and
non-empty as $X = G \cdot U$. Since $G$ is geometrically irreducible,
$V \cap V^{-1}$ is open and dense in $G_{k'}$. As
$\pr_G : G_{k'} \to G$ is finite and surjective, there exists
a dense open subvariety $W$ of $G$ such that
$W_{k'} \subset V \cap V^{-1}$. Also, since $G$ is smooth, $G(k)$
is dense in $G$, and hence $W(k)$ is non-empty. Moreover,
$x \in g \cdot U \cap g^{-1} \cdot U$ for any $g \in W(k)$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:model}}
\label{subsec:model}
It suffices to show that $X$ is $G$-equivariantly isomorphic to
$G \times^H Y$ for some subgroup scheme $H\subset G$ such
that $G/H$ is an abelian variety, and some $H$-quasi-projective
closed subscheme $Y \subseteq X$. Indeed, we may then view
$Y$ as an $H$-stable subscheme of the projectivization $\bP(V)$
of some finite-dimensional $H$-module $V$. Hence $X$ is
a $G$-stable subscheme of the projectivization $\bP(E)$,
where $E$ denotes the homogeneous vector bundle
$G \times^H V \to G/H$.
Next, we reduce to the case where $G$ is \emph{smooth},
as in the proof of Theorem \ref{thm:cover}.
We may of course assume that $\car(k) > 0$.
Choose a positive integer $n$ such that $G/G_n$ is smooth
and recall from Lemmas \ref{lem:quot} and \ref{lem:equiv} that
$X/G_n =: X'$ is a normal quasi-projective variety equipped
with an action of $G/G_n =: G'$ such that the quotient morphism
$\varphi : X \to X'$ is equivariant. Assume that there exists an
equivariant isomorphism $X' \cong G' \times^{H'} Y'$ satisfying
the above conditions. Let $H \subset G$ (resp. $Y \subset X$)
be the subgroup scheme (resp. the closed subscheme)
obtained by pulling back $H' \subset G'$ (resp. $Y' \subset X'$).
Then $G/H \cong G'/H'$, and hence the composition
$X \to X' \to G'/H'$ is a $G$-equivariant morphism with fiber
$Y$ at the base point. This yields a $G$-equivariant isomorphism
$X \cong G \times^H Y$, where $G/H$ is an abelian variety.
Moreover, $Y$ is $H$-quasi-projective, since it is equipped with
a finite $H$-equivariant morphism to $Y'$, and the latter is
$H$-quasi-projective. This yields the desired reduction.
Replacing $G$ with its quotient by the kernel of the action, we may
further assume that \emph{$G$ acts faithfully on $X$}.
We now use the notation of the proof of Proposition \ref{prop:ts};
in particular, we choose a normal connected affine subgroup scheme
$H \triangleleft G$ such that $G/H$ is an abelian variety, and an ample
$H$-linearized line bundle $L$ on $X$. Recall that the line bundle
$L_G = \alpha^*(L) \otimes \pr_X^*(L^{-1})$
satisfies $L_G = (f \times \id_X)^*(L_A)$ for a line bundle $L_A$ on
$A \times X$, rigidified along $e_A \times X$. Since the Picard
functor $\Pic_A$ is representable, this yields a morphism of schemes
\[ \varphi : X \longrightarrow \Pic_A, \quad
x \longmapsto (\id_A \times x)^*(L_A). \]
We first show that \emph{$\varphi$ is $G$-equivariant} relative
to the given $G$-action on $X$, and the $G$-action on $\Pic_A$ via
the morphism $f: G \to A$ and the $A$-action on $\Pic_A$ by
translation. Since $G \times X$ is reduced (as $G$ is smooth and
$X$ is reduced), it suffices to check the equivariance on points
with values in fields. So let $k'/k$ be a field extension, and
$g \in G(k')$, $x \in X(k')$. Then
$\varphi(x) \in \Pic_A(k') = \Pic(A_{k'})$.
Moreover, by \cite[p.~32]{MFK}, the pull-back map $f_{k'}^*$
identifies $\Pic(A_{k'})$ with
the group $\Pic^{H_{k'}}(G_{k'})$ of $H_{k'}$-linearized
line bundles on $G_{k'}$; also,
$f_{k'}^* \varphi_{k'}(x) = (\id_{G_{k'}} \times x)^*(L_{G_{k'}})$
in $\Pic^{H_{k'}}(G_{k'})$. Thus,
\[ f_{k'}^* \varphi_{k'}(x) = \alpha_x^*(L_{k'}), \]
where $\alpha_x : G_{k'} \to X_{k'}$ denotes the orbit map.
We have $\alpha_{g \cdot x} = \alpha_x \circ \rho(g)$,
where $\rho(g)$ denotes the right multiplication by $g$ in
$G_{k'}$. Hence
\[ f_{k'}^* \varphi_{k'}(g \cdot x) = \rho(g)^* f_{k'}^* \varphi_{k'}(x). \]
Also, since $f$ is $G$-equivariant, we have
$f_{k'} \circ \rho(g) = \tau(f_{k'}(g)) \circ f_{k'}$,
where $\tau(a)$ denotes the translation by $a \in A(k')$ in the abelian
variety $A_{k'}$. This yields the equality
\[ f_{k'}^* \varphi_{k'}(g \cdot x) =
f_{k'}^* \tau(f_{k'}(g))^* \varphi_{k'}(x) \]
in $\Pic^{H_{k'}}(G_{k'})$, and hence the desired equality
\[ \varphi_{k'}(g \cdot x) = \tau(f_{k'}(g))^* \varphi_{k'}(x) \]
in $\Pic(A_{k'})$.
Next, we show that \emph{$\varphi(x)$ is ample for any $x \in X$}.
By \cite[XI.1.11.1]{Raynaud}, it suffices to show that the line
bundle $f_{k'}^*\varphi(x)$ on $G_{k'}$ is ample, where $k'$ is as above;
equivalently, $\alpha_x^*(L)$ is ample. The orbit map $\alpha_x$
(viewed as a morphism from $G$ to the orbit of $x$)
may be identified with the quotient map by the isotropy subgroup
scheme $G_{k',x} \subset G_{k'}$. This subgroup scheme is affine (see
e.g. \cite[Prop.~3.1.6]{Brion17}) and hence so is the morphism
$\alpha_x$. As $L$ is ample, this yields the assertion.
Now recall the exact sequence of group schemes
\[ 0 \longrightarrow \hat{A} \longrightarrow \Pic_A
\longrightarrow \NS_A \longrightarrow 0, \]
where $\hat{A} = \Pic^0_A$ denotes the dual abelian variety,
and $\NS_A = \pi_0(\Pic_A)$ the N\'eron-Severi group scheme;
moreover, $\NS_A$ is \'etale.
\emph{If $X$ is geometrically irreducible}, it follows that the base
change $\varphi_{k_s} : X_{k_s} \to \Pic_{A_{k_s}}$ factors
through a unique coset $C = \hat{A}_{k_s} \cdot M$, where
$M$ is an ample line bundle on $A_{k_s}$. We then have an
$A_{k_s}$-equivariant isomorphism $C \cong A_{k_s}/K(M)$, where
$K(M)$ is a finite subgroup scheme of $A_{k_s}$. So there exists
a finite Galois extension $k'/k$ and a $G_{k'}$-equivariant
morphism of $k'$-schemes $\varphi': X_{k'} \to A_{k'}/F$,
where $F$ is a finite subgroup scheme of $A_{k'}$. As $F$ is
contained in the $n$-torsion subgroup scheme $A_{k'}[n]$
for some positive integer $n$, and
$A_{k'}/A_{k'}[n] \stackrel{\cong}{\longrightarrow} A_{k'}$
via the multiplication by $n$ in $A_{k'}$, we obtain a
morphism of $k'$-schemes $\varphi'': X_{k'} \to A_{k'}$
which satisfies the equivariance property
\[ \varphi''(g \cdot x) = \tau(n f(g)) \cdot \varphi''(x) \]
for all schematic points $g \in G_{k'}$, $x \in X_{k'}$.
The Galois group $\Gamma_{k'} := \Gal(k'/k)$ acts on
$G_{k'}$ and $A_{k'}$; replacing $\varphi''$ with the sum
of its $\Gamma_{k'}$-conjugates (and $n$ with $n [k':k]$),
we may assume that $\varphi''$ is $\Gamma_{k'}$-equivariant.
Thus, $\varphi''$ descends to a morphism
$\psi :X \to A$ such that
$\psi(g \cdot x) = \tau(n f(g)) \cdot \psi(x)$
for all schematic points $g \in G$, $x \in X$. We may view
$\psi$ as a $G$-equivariant morphism to $A/A[n]$, or equivalently,
to $G/H'$, where $H' \subset G$ denotes the pull-back
of $A[n] \subset A$ under $f$. Since $H'/H$ is finite, we see that
$G/H'$ is an abelian variety and $H'$ is affine. Moreover, $\psi$
yields a $G$-equivariant isomorphism $X \cong G\times^{H'} Y$
for some closed $H'$-stable subscheme $Y \subset X$.
By Corollary \ref{cor:GH}, $X$ is $H'$-quasi-projective; hence so
is $Y$. This completes the proof in this case.
Finally, we consider the \emph{general case}, where $X$ is not
necessarily geometrically irreducible. By Example \ref{ex:fp} (iv),
we may view $X$ as a geometrically irreducible $K$-variety,
where $K$ denotes the separable algebraic closure of $k$ in
$k(X)$. Moreover, $G_K$ acts faithfully on $X$ via $\pr_G : G_K \to G$.
Also, $X$ is quasi-projective over $K$ in view of
\cite[II.6.6.5]{EGA}. So the preceding step yields a $G_K$-equivariant
morphism $X \to G_K/H'$ for some normal affine $K$-subgroup scheme
$H' \triangleleft G_K$ such that $A' = G_K/H'$ is an abelian
variety. On the other hand, we have an exact sequence of $K$-group
schemes
\[ 1 \longrightarrow H_K \longrightarrow G_K
\stackrel{f_K}{\longrightarrow} A_K \longrightarrow 1, \]
where $H_K$ is affine and $A_K$ is an abelian variety. Consider
the subgroup scheme $H_K \cdot H' \subset G_K$ generated by $H_K$
and $H'$. Then $H_K \cdot H'/H' \cong H_K/H_K \cap H'$ is
affine (as a quotient group of $H_K$) and proper (as a subgroup
scheme of $G_K/H_K = A_K$), hence finite. Thus, $H_K \cdot H'$
is affine, and the natural map $G_K/H' \to G_K/H_K \cdot H'$ is
an isogeny of abelian varieties. Replacing $H'$ with $H_K \cdot H'$,
we may therefore assume that $H_K \subset H'$. Then the finite subgroup
scheme $H'/H_K \subset A_K$ is contained in $A_K[n]$ for some
positive integer $n$. This yields a $G_K$-equivariant morphism
$X \to A_K/A_K[n]$, and hence a $G$-equivariant morphism
$X \to A/A[n]$ by composing with
$\pr_{A/A[n]}: A_K/A_K[n] \to A/A[n]$. Arguing as at the end of the
preceding step completes the proof.
\begin{remarks}\label{rem:fin}
(i) Consider a smooth connected algebraic group $G$
and an affine subgroup scheme $H$ such that $G/H$ is an
abelian variety. Then the quotient map $G \to G/H$ is a
morphism of algebraic groups (see e.g. \cite[Prop.~4.1.4]{Brion17}).
In particular, $H$ is normalized by $G$. But in positive characteristics,
this does not extend to an arbitrary connected algebraic group $G$.
Consider indeed a non-trivial abelian variety $A$; then we
may choose a non-trivial infinitesimal subgroup $H \subset A$, and
form the semi-direct product $G := H \ltimes A$, where $H$ acts on
$A$ by translation. So $H$ is identified with a non-normal subgroup
of $G$ such that the quotient $G/H = A$ is an abelian variety.
Also, recall that a smooth connected algebraic group $G$
admits a smallest (normal) subgroup scheme with quotient
an abelian variety. This also fails for non-smooth
algebraic groups, in view of \cite[Ex.~4.3.8]{Brion17}.
\medskip
\noindent
(ii) With the notation and assumptions of Theorem \ref{thm:model},
we have seen that $X$ is an associated fiber bundle $G \times^H Y$
for some subgroup scheme $H \subset G$ such that $G/H$ is
an abelian variety, and some closed $H$-quasi-projective subscheme
$Y \subset X$. If $G$ acts almost faithfully on $X$, then the
$H$-action on $Y$ is almost faithful as well; thus, $H$ is affine.
Note that the pair $(H,Y)$ is not uniquely determined by $(G,X)$,
since $H$ may be replaced with any subgroup scheme $H' \subset G$
such that $H' \supset H$ and $H'/H$ is finite. So one may rather
ask whether there exists such a pair $(H,Y)$ with a smallest
subgroup scheme $H$, i.e., the corresponding morphism
$\psi : X \to G/H$ is universal among all such morphisms.
The answer to this question is generally negative (see
\cite[Ex.~5.1]{Brion10}); yet one can show that it is positive
in the case where $G$ is smooth and $X$ is almost homogeneous
under $G$.
Even under these additional assumptions, there may exist no pair
$(H,Y)$ with $H$ smooth or $Y$ geometrically reduced. Indeed, assume
that $k$ is imperfect; then as shown by Totaro (see \cite{Totaro}),
there exist non-trivial \emph{pseudo-abelian varieties}, i.e.,
smooth connected non-proper algebraic groups such that every
smooth connected normal affine subgroup is trivial. Moreover,
every pseudo-abelian variety $G$ is commutative. Consider the
$G$-action on itself by multiplication; then the above
associated fiber bundles are exactly the bundles of the form
$G \times^H H$, where $H \subset G$ is an affine subgroup scheme
(acting on itself by multiplication) such that $G/H$ is an abelian
variety. There exists a smallest such subgroup scheme (see
\cite[9.2 Thm.~1]{BLR}), but no smooth one.
For a similar example with a projective variety, just
replace $G$ with a normal projective equivariant completion
(which exists in view of \cite[Thm.~5.2.2]{Brion17}).
To obtain an explicit example, we recall a construction
of pseudo-abelian varieties from \cite[Sec.~6]{Totaro}.
Let $k$ be an imperfect field of characteristic $p$, and
$U$ a smooth connected unipotent group of exponent $p$.
Then there exists an exact sequence of commutative algebraic groups
\[ 0 \longrightarrow \alpha_p \longrightarrow H
\longrightarrow U \longrightarrow 0, \]
where $H$ contains no non-trivial smooth connected subgroup scheme.
Next, let $A$ be an elliptic curve which is supersingular,
i.e., its Frobenius kernel is $\alpha_p$. Then
$G := A \times^{\alpha_p} H$
is a pseudo-abelian variety, and lies in two exact sequences
\[ 0 \longrightarrow A \longrightarrow G
\longrightarrow U \longrightarrow 0,
\quad
0 \longrightarrow H \longrightarrow G
\longrightarrow A^{(p)} \longrightarrow 0, \]
since $H/\alpha_p \cong U$ and $A/\alpha_p \cong A^{(p)}$.
We claim that $H$ is the smallest subgroup scheme $H' \subset G$
such that $G/H'$ is an abelian variety. Indeed, $H' \subset H$
and $H'/H$ is finite, hence $\dim(H') = \dim(H) = \dim(U)$. If
$H' \cap \alpha_p$ is trivial, then the natural map $H' \to U$
is an isomorphism. Thus, $H'$ is smooth, a contradiction.
Hence $H' \supset \alpha_p$, so that the natural map
$H'/\alpha_p \to U$ is an isomorphism; we conclude that $H' = H$.
In particular, taking for $U$ a $k$-form of the additive group,
we obtain a pseudo-abelian surface $G$. One may easily check
that $G$ admits a unique normal equivariant completion $X$; moreover,
the surface $X$ is projective, regular and geometrically integral,
its boundary $X \setminus G$ is a geometrically irreducible curve,
homogeneous under the action of $A \subset G$, and
$X \cong G \times^H Y$, where $Y$ (the schematic closure
of $H$ in $X$) is not geometrically reduced. Also, the projection
\[ \psi : X \longrightarrow G/H = A^{(p)} \]
is the Albanese morphism of $X$, and satisfies
$\psi_*(\cO_X) = \cO_{A^{(p)}}$.
\end{remarks}
\subsection{Proof of Corollary \ref{cor:alb}}
\label{subsec:cor}
Recall from \cite{Wittenberg} that there exists an abelian variety
$\Alb^0(X)$, a torsor $\Alb^1(X)$ under $\Alb^0(X)$, and a morphism
\[ a_X : X \longrightarrow \Alb^1(X) \]
satisfying the following universal property: for any morphism
$f : X \to A^1$, where $A^1$ is a torsor under an abelian variety $A^0$,
there exists a unique morphism $f^1 : \Alb^1(X) \to A^1$
such that $f = f^1 \circ a_X$, and a unique morphism of abelian
varieties $f^0 : \Alb^0(X) \to A^0$ such that $f^1$ is equivariant
relative to $f^0$. We then say that $a_X$ is the
\emph{Albanese morphism}; of course, $\Alb^1(X)$ will be the Albanese
torsor, and $\Alb^0(X)$ the Albanese variety.
When $X$ is equipped with a $k$-rational point $x$, we may identify
$\Alb^1(X)$ with $\Alb^0(X)$ by using the $k$-rational point $a_X(x)$
as a base point. This identifies $a_X$ with the universal morphism from
the pointed variety $(X,x)$ to an abelian variety, which sends $x$ to
the neutral element.
By the construction in \cite[App.~A]{Wittenberg} via Galois descent,
the formation of the Albanese morphism commutes with separable
algebraic field extensions. Also, the formation of this morphism
commutes with finite products of pointed, geometrically integral
varieties (see e.g.~\cite[Cor.~4.1.7]{Brion17}). Using Galois descent
again, it follows that the formation of the Albanese morphism commutes
with finite products of arbitrary geometrically integral varieties.
In view of the functorial considerations of Subsection \ref{subsec:fp},
for any such variety $X$ equipped with an action $\alpha$ of
a smooth connected algebraic group $G$, we obtain a morphism
of abelian varieties
\[ \Alb^0(\alpha) : \Alb^0(G) \longrightarrow \Alb^0(X) \]
such that $a_X$ is equivariant relative to the morphism of algebraic groups
\[ \Alb^0(\alpha) \circ a_G : G \longrightarrow \Alb^0(X). \]
Also, by Remark \ref{rem:fin} (i) and \cite[Thm.~4.3.4]{Brion17},
the Albanese morphism $a_G : G \to \Alb^0(G)$ can be identified
with the quotient morphism by the smallest affine subgroup scheme
$H \subset G$ such that $G/H$ is an abelian variety.
Assume in addition that $X$ is normal and quasi-projective, and $\alpha$
is almost faithful. Then, as proved in Subsection \ref{subsec:model},
there exists a $G$-equivariant morphism $\psi : X \to G/H'$, where
$H' \subset G$ is an affine subgroup scheme such that $G/H'$
is an abelian variety; in particular, $H' \supset H$ and $H'/H$ is finite.
This yields an $\Alb^0(G)$-equivariant morphism of abelian varieties
\[ \psi^0 : \Alb^0(X) \longrightarrow G/H', \]
where $\Alb^0(G) = G/H$ acts on $G/H'$ via the quotient morphism
$G/H \to G/H'$. Since the latter action is almost faithful, so is the action
of $\Alb^0(G)$ on $\Alb^0(X)$, or equivalently on $\Alb^1(X)$.
\begin{remark}\label{rem:pr3}
Keep the notation and assumptions of Corollary \ref{cor:bir}, and
assume in addition that $\alpha$ is faithful. Then the kernel
of the induced action $\Alb^1(\alpha)$ (or equivalently, of
$\Alb^0(\alpha)$) can be arbitrarily large, as shown by the following
example from classical projective geometry.
Let $C$ be a smooth projective curve of genus $1$; then $C$ is a torsor
under an elliptic curve $G$. Let $n$ be a positive integer and
consider the $n$th symmetric product $X := C^{(n)}$. This is a smooth
projective variety of dimension $n$, equipped with a faithful
action of $G$. We may view $X$ as the scheme of effective Cartier
divisors of degree $n$ on $C$; this defines a morphism
\[ f : X \longrightarrow \Pic^n(C), \]
where $\Pic^n(C)$ denotes the Picard scheme of line bundles of
degree $n$ on $C$. The elliptic curve $G$ also acts on $\Pic^n(C)$,
and $f$ is equivariant; moreover, the latter action is transitive
(over $\bar{k}$) and its kernel is the $n$-torsion subgroup scheme
$G[n] \subset G$, of order $n^2$. Thus, we may view $\Pic^n(C)$
as a torsor under $G/G[n]$. Also, $f$ is a projective bundle, with fiber
at a line bundle $L$ over a field extension $k'/k$ being the projective
space $\bP H^0(C_{k'},L)$. It follows that $f$ is the Albanese morphism
$a_X$. In particular, $\Alb^0(X) \cong G/G[n]$.
\end{remark}
\medskip
\noindent
{\bf Acknowledgements}. Many thanks to St\'ephane Druel and
Philippe Gille for very helpful discussions on an earlier version of
this paper, and to Bruno Laurent and an anonymous referee
for valuable comments on the present version.
Special thanks are due to Olivier Benoist for pointing out that
Theorem \ref{thm:cover} follows from his prior work \cite{Benoist},
and for an important improvement in an earlier version of
Proposition \ref{prop:ray}.
\section{Introduction}
Charge transport in granular conductors is an intense area of research, wherein the electronic properties of granular conductors can be tuned at the nanoscale by varying grain size and inter-grain separation \cite{murray,gaponenko,mowbray,bimberg,stangl}. Grain sizes in a granular conductor are known to vary from a few to hundreds of nanometers, resulting in characteristics originating from quantization of confined electron states at the few-nanometer scale and collective properties of coupled grains at the hundreds-of-nanometer scale, thus opening the possibility of a variety of novel and improved applications ranging from electronic to optoelectronic \cite{bimberg,stangl}. Ultra-thin films grown by physical vapour deposition constitute an important method of preparation of such granular conductors \cite{barr,wu,morris}. It is expected that as the deposition parameters are changed such that a continuous film transforms into an island-like configuration, the electrical transport may correspondingly change due to quantum confinement effects arising from an interplay of opposing length scales and energy scales. Such an effect has, in fact, been observed in systems ranging from self-assembled quantum dots \cite{yakimov,murray,gaponenko,mowbray,bimberg,stangl} and granular metal films \cite{kubo,gorkov,chuang,zabrodskii,khondaker} to nanocluster-assembled films \cite{bansal,tejal,praveen}. Additionally, charge conduction in a disordered metal is known to differ significantly from that in a pure metal, wherein mechanisms like inelastic electron scattering from impurities and defects contribute significantly to charge transport apart from the ``pure'' electron-phonon scattering observed in pure metals \cite{gershenzon,bergmann,pritsina}. So it is expected that charge transport in granular metal films, where the disorder in a given film changes as the thickness is varied, would also be affected by such mechanisms.\\
Palladium (Pd) metal offers an interesting possibility with respect to charge transport in granular metals, since Pd is additionally well known for its catalytic activity, especially toward hydrogen (H$_2$) gas, which it selectively absorbs \cite{flanagan,kay,zuhner,bohmholdt,sakamoto,wolf,barr,wu,morris,favier,dankert,lee,ramanathan,zeng,walter,yang,jiang,cabrera,xu,krishnan,kumar,raviprakash,mitra,feng,dawson}, thus providing an extra parameter that can be tuned to control the charge transport mechanism in nano-sized grains of Pd films. In bulk Pd at room temperature, incoming H$_2$ molecules are physisorbed on the Pd surface and dissociate into hydrogen (H) atoms due to the high reactivity of Pd atoms toward H atoms \cite{kay}; the H atoms then diffuse into the Pd lattice until they reach the octahedral sites of face-centred cubic (fcc) Pd \cite{zuhner,bohmholdt}. The diffusion is enhanced at grain boundaries and dislocations since they provide high-diffusivity paths \cite{sakamoto,lee}. The random occupation of H in the Pd lattice results in a solid solution of Pd and H called the $\alpha$-phase (PdH$_{\alpha}$), which persists up to H$_2$ concentrations of $\sim$ 15,000 ppm \cite{narehood}. In the $\alpha$-phase, the Pd lattice expands by 0.15$\%$ due to incorporation of H atoms in the interstitial sites of Pd. Above $\sim$ 15,000 ppm H$_2$ concentration, a phase transformation from the $\alpha$ phase to the $\beta$ phase (PdH$_{\beta}$) takes place, increasing the lattice constant from 3.895 \AA~(maximum for the $\alpha$ phase) to 4.025 \AA~(minimum for the $\beta$ phase), a 3.4$\%$ increase in the lattice size \cite{lewis,flanagan}. The PdH$_x$ structures formed above 15,000 ppm H$_2$ concentration efficiently scatter conduction electrons, leading to an increase in the resistance of the material. In continuous Pd thin films grown by various methods like sputtering, thermal evaporation and pulsed laser deposition, an increase in resistance with exposure has, in fact, been reported \cite{cabrera,krishnan,kumar,raviprakash}. It was also found by Lee et al. \cite{lee} that, in ultra-thin film form, the $\alpha$ to $\beta$ structural transition is hysteretic, with the width of the hysteresis loop strongly dependent on the film thickness. However, for ultra-thin films of 5 nm thickness, no hysteresis was found.\\
If the ultra-thin Pd films are made discontinuous, with the size of each island in the nanometer range and the inter-island separation less than a nanometer, then charge transport may happen via tunneling \cite{praveen,coutts,simmons}. It was found that if such discontinuous films are near the percolation threshold, then H$_2$ exposure has two effects on the charge transport: (i) H$_2$ adsorbed on the surface increases the work function of the islands, leading to an increase in the resistance of the film, and (ii) expansion of the Pd islands decreases the inter-island separation, thereby decreasing the effective resistance of the assembly \cite{barr,wu,morris}. Using this process of hydrogen absorption-induced lattice expansion (HAILE), a decrease in resistance has been observed in a variety of Pd configurations ranging from thin films \cite{dankert,lee,xu,ramanathan,wu} to nanowires \cite{zeng,walter,yang}, nanofibres \cite{jiang}, nanoclusters \cite{shin} etc. However, a fundamental understanding of the relationship between film thickness, grain size, resistivity and sensitivity of ultra-thin Pd films is far from complete. In this investigation, we have undertaken a systematic study of the electronic properties of ultra-thin Pd films with mass-equivalent thickness in the range 2 nm to 6 nm, and of their change after exposure to H$_2$ gas, via measurement of the change in their resistive response. Before exposure to H$_2$, temperature dependent resistance measurements showed the 6 nm, 5 nm and 4 nm thin films to be metallic (dR/dT $>$ 0), with the 3 nm thin film exhibiting a metal-insulator transition at $\sim$ 19.5 K. In contrast, the 2 nm films were found to be insulating, and Mott's variable range hopping mechanism was found to govern the charge transport in these films. Importantly, all the films that were metallic at room temperature (thickness 6 - 3 nm) were found to exhibit a decrease in resistance on exposure to H$_2$ due to the HAILE phenomenon. However, the 2 nm thick films that were insulating at room temperature showed an initial increase in resistance upon exposure to hydrogen, followed by a subsequent decrease. We found that two time constants, rather than one, were needed to describe the decay of the resistance to $\sim$ 83$\%$ of its initial value. In order to explain the existence of two time constants in our ultra-thin Pd films, we propose a model employing percolation-induced opening up of new conduction pathways.
\section{Experimental Details}
Gold, chromium and palladium wires used in the thermal evaporator were purchased from M/s. Goodfellow Cambridge Ltd., U.K. Pd films consisting of nanoscale islands were deposited on a pre-fabricated structure, as shown in Fig. \textbf{\ref{AFM} (a)}, using a commercial electron beam evaporator (Tectra e-flux) attached to an ultra-high vacuum (UHV) chamber with turbo molecular pumps. The base pressure achieved in the chamber prior to deposition was $\sim$ 3 $\times$ 10$^{-7}$ mbar. The films were deposited at a rate of 0.2 \AA/s at a chamber pressure of $\sim$ 2 $\times$ 10$^{-6}$ mbar. The thicknesses and deposition rate of the films were monitored using a quartz crystal microbalance (Inficon) in the evaporation chamber. The reported thicknesses are nominal values as displayed by the microbalance, since a true thickness is poorly defined for a quasi-continuous nano-island film. The pre-fabricated substrates used for the deposition of the nano-island films, shown in Fig. \ref{AFM} (a), were prepared as follows: first, pre-cleaned glass slides were cut into 1 cm $\times$ 3 mm pieces and the centre of each glass slide was shadow-masked using a thin copper wire of 100 $\mu$m thickness. These shadow-masked glass slides were loaded into a thermal evaporator to deposit gold layers of 50 nm thickness, to be used as contact pads. Prior to deposition of the gold layers, a thin wetting layer of chromium was also deposited. After making the contact pads, the shadow mask was removed and a channel region of 80 $\mu$m width was obtained. Pd nano-island films were deposited on the channel regions of these structures. Morphological characterisation of the as-deposited Pd films was conducted using a Bruker Multimode 8 Atomic Force Microscope (AFM), while the crystallinity of the grown films was investigated using an FEI Tecnai F30 high-resolution transmission electron microscope (HRTEM) operated at 300 keV.\\
Resistance (R) vs. temperature (T) measurements were carried out in the closed-cycle refrigerator of a Nanomagnetics Hall effect measurement system. Before measuring R-T for a given film, it was ensured that the current-voltage (I-V) curves of the film were linear in the $\pm$ 100 mV range, both at 300 K and at 2.1 K. The base temperature achievable in this system is 3.6 K, with a resolution of $\pm$ 0.2 K. The change of transport mechanism in the Pd ultra-thin films upon exposure to hydrogen gas was studied using an in-house built set-up \cite{suresh}. The H$_2$ gas used in the measurement set-up was air-balanced. Both the H$_2$ gas flow and the air flow towards the measurement chamber were regulated using Alicat Scientific mass flow controllers (MFCs) to achieve the desired dilution of the H$_2$ gas with synthetic air. The change of resistance of the Pd films upon H$_2$ sorption was measured using a Keithley 2700 digital multimeter cum data-acquisition system.
\section{Results and Discussion}
\subsection{HRTEM measurements}
\begin{figure}[h!]
\centering
\includegraphics[width=0.94\textwidth]{Fig2.jpg}
\caption{(a) and (c): HRTEM images of a 3 nm and a 5 nm thick Pd film respectively, at a 2 nm scale. (b) and (d) show the corresponding SAED patterns of the images in (a) and (c).}
\label{TEM}
\end{figure}
To characterize the nano-island films for crystallinity, we performed HRTEM on each of them. Figs. \ref{TEM} (a) and (c) show representative HRTEM images measured on a 3 nm and a 5 nm thin film respectively, wherein lattice planes corresponding to the formation of a uniform lattice are clearly seen. Two sets of lattice planes have been marked: one corresponding to Pd (111) \cite{navaladian}, where the distance between the lattice planes is measured to be $\approx$ 2.238 \AA~on average, and the other corresponding to Pd (200) \cite{navaladian,du}, where the distance between the planes is measured to be $\approx$ 1.9 \AA. The polycrystalline nature of the films, with short-range ordering, is evident in the ring structure of the corresponding selected area electron diffraction (SAED) patterns. The SAED rings are indexed to the (111), (200), (220), (311), (222) and (331) lattice planes and confirm the fcc crystal structure of metallic Pd (JCPDS file No. 87-0638).
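The indexing can be cross-checked against the expected interplanar spacings of a cubic lattice, $d_{hkl} = a/\sqrt{h^2+k^2+l^2}$, with $a \approx 3.89$ \AA~for fcc Pd. A minimal sketch of this check (Python with numpy assumed; the lattice constant is the tabulated bulk value, not a fitted one) is:
\begin{verbatim}
import numpy as np

a = 3.89  # bulk fcc Pd lattice constant in Angstrom (JCPDS 87-0638)

def d_spacing(h, k, l):
    """Interplanar spacing of the (hkl) plane of a cubic lattice."""
    return a / np.sqrt(h**2 + k**2 + l**2)

for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]:
    print(hkl, round(d_spacing(*hkl), 3), "Angstrom")
# (1,1,1) -> 2.246 A, close to the measured ~2.238 A
# (2,0,0) -> 1.945 A, close to the measured ~1.9 A
\end{verbatim}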
\subsection{AFM measurements}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{Fig3.jpg}
\caption{(Colour online): (a) Schematic of the Pd nano-island thin film assembly for resistance measurements. AFM topographs obtained on (b) 2 nm, (c) 3 nm, (d) 4 nm, (e) 5 nm and (f) 6 nm thick films. The inset of each shows a log-normal distribution of nano-island sizes.}
\label{AFM}
\end{figure}
To characterize the surface morphologies of the grown films, we performed AFM measurements on them. The main panels of Figs. \ref{AFM} (b)-(f) show the surface morphologies of the 2 nm, 3 nm, 4 nm, 5 nm and 6 nm thick Pd films respectively. It can be seen that Pd forms a randomly connected, non-uniform distribution of grains. The size distribution of grains in each film was calculated using the Gwyddion, ImageJ and Scanning Probe Image Processor software packages, all of which gave consistent results. The average grain size in each film was obtained by fitting a log-normal distribution to the histograms obtained from the AFM images, shown as insets to the main panels of Figs. \ref{AFM} (b)-(f).
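A sketch of this size-extraction step is given below (Python with numpy/scipy assumed). The array \texttt{diameters} is a placeholder for the grain sizes exported from the AFM software; the fit returns the log-normal parameters from which the average grain size follows:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lognormal(d, A, mu, sigma):
    """Log-normal distribution with amplitude A."""
    return (A / (d * sigma * np.sqrt(2 * np.pi))
            * np.exp(-(np.log(d) - mu)**2 / (2 * sigma**2)))

# placeholder for grain diameters (nm) extracted from an AFM topograph
diameters = np.random.lognormal(mean=np.log(20.0), sigma=0.3, size=500)

counts, edges = np.histogram(diameters, bins=30)
centers = 0.5 * (edges[1:] + edges[:-1])
popt, _ = curve_fit(lognormal, centers, counts,
                    p0=[counts.sum(), np.log(20.0), 0.3])
A, mu, sigma = popt
print("average grain size ~ %.1f nm" % np.exp(mu + sigma**2 / 2))
\end{verbatim}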
\subsection{Charge transport mechanism without hydrogen exposure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.94\textwidth]{Fig4.jpg}
\caption{(Colour online): (a) Variation of the calculated average grain size with film thickness. (b) Room temperature resistance of the thin films as a function of thickness.}
\label{RT-R}
\end{figure}
The average particle size, \textit{r}, of the grown films, obtained from the log-normal distribution curves, is plotted in Fig. \ref{RT-R} (a) as a function of thickness. It can be seen that as the thickness decreases from 6 nm, \textit{r} decreases down to a film thickness of 4 nm, below which the average size starts to increase. E-beam evaporation is a physical vapour deposition technique that involves condensation of a vapourised material on a substrate, wherein various microscopic processes govern the final morphology of the grown films/nanoparticles \cite{venables}. It is known that the ratio of the bulk cohesive energy, E$_c$, of the vapourised material to the adsorption energy on the substrate, E$_A$, determines whether highly coalesced clusters or thin films form. In the case of pure metal deposition, the metallic clusters are known to form in a drop-like fashion, analogous to the condensation of water vapour on a substrate \cite{beysens}. In this process, drops of metal nanoparticles keep coalescing with each other until E$_A$ wins over E$_c$ and it becomes energetically favourable for the droplets to flatten out on the substrate. The observed decrease of the average particle size with thickness from 6 nm to 4 nm then suggests the importance of E$_c$ in this range of film thicknesses, while the increase in \textit{r} below 4 nm suggests a transformation of the growth process to one dominated by surface adsorption.\\
In order to understand the effect of the film thickness on the electronic state of the resultant system, the room temperature resistance \textit{R} of the films is plotted as a function of thickness (d) in Fig. \ref{RT-R} (b). The black filled circles are the experimental data while the red continuous curve is a fit to the equation:
\begin{equation} \label{eqn:RT-T}
R(d) = \frac{A}{1-\exp(-Bd)}
\end{equation}
where \textit{A} is a constant and \textit{B} has a functional dependence on the thickness \textit{d}. It can be seen that the data points are fitted by equation (\ref{eqn:RT-T}) rather well. It is well known that the room temperature resistance of a bulk metal is dominated by scattering, primarily due to phonons. If the thickness of the bulk metal is decreased to a thin film configuration, then other contributions are known to affect the resistance of such thin films. Primary amongst them is scattering from grain boundaries, arising from the decrease in grain size with decreasing film thickness \cite{mayadas,mayadas1}. However, it is known that if the film is very thin, then the contribution to the resistance is dominated by surface scattering \cite{fuchs,sondheimer,zhang1}. In the limit of the film thickness reaching a few nanometers, electron confinement effects are known to alter the resistance of nanostructures drastically \cite{zhang1,murray,gaponenko,mowbray,bimberg,stangl}, wherein the metallic nature of transport may even change to that akin to insulators \cite{praveen}. The surface scattering contribution is known to vary as $(1-\exp(-\kappa t H))^{-1}$ \cite{fuchs,sondheimer,zhang1}, where $\kappa$ = \textit{h}/$\lambda$, $\lambda$ is the mean free path, \textit{t} is the thickness of the film and \textit{H} is a function of thickness. So the constant \textit{B} in equation \ref{eqn:RT-T} is the product of $\kappa$ and \textit{H}. The reasonable fit of the resistance to the surface scattering model then implies that surface scattering dominates charge transport in these Pd thin films \cite{dutta,heiman,lacy}. It is worth noting that while the 6 nm, 5 nm, 4 nm and 3 nm thick films are metallic at room temperature, the 2 nm film exhibits insulating behaviour, displaying room-temperature electron confinement effects (Fig. \ref{RT-d}). We would like to add that in order to make quantitative estimates about the charge transport, it is necessary to use the exact resistivity expression of Mayadas and Shatzkes \cite{mayadas,mayadas1}, involving contributions from both grain boundary and surface scattering. The solution of that expression involves a numerical integration which we have not yet developed.\\
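The fit of Fig. \ref{RT-R} (b) to equation (\ref{eqn:RT-T}) can be sketched as follows (Python with scipy assumed; $B$ is treated as a constant for the fit even though it has a weak thickness dependence, and the data arrays below are illustrative placeholders rather than the measured values):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def R_surface(d, A, B):
    """Surface-scattering form of eq. (1): R(d) = A / (1 - exp(-B d))."""
    return A / (1.0 - np.exp(-B * d))

# placeholder room-temperature data: thickness (nm) and resistance (Ohm)
d = np.array([3.0, 4.0, 5.0, 6.0])
R = R_surface(d, 25.0, 0.4) + 0.3 * np.random.randn(d.size)

popt, pcov = curve_fit(R_surface, d, R, p0=[20.0, 0.5])
print("A = %.1f Ohm, B = %.2f nm^-1" % tuple(popt))
\end{verbatim}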
In order to understand the details of electrical transport in the thin films, the temperature dependence of their resistance was measured. Fig. \ref{RT-d} (a) plots the temperature dependence of the resistance of the 6 nm thick film. It can be seen that \textit{R} decreases linearly with temperature down to $\sim$ 50 K, below which it saturates to the residual resistance at the lowest measured temperatures. This is typical of disordered metallic systems, where the resistance decreases monotonically with temperature, limited by impurity or defect scattering at low temperatures \cite{zhai,mayadas,mayadas1,dutta,heiman,lacy}. A similar behaviour is observed in the 5 nm thick film as well as the 4 nm thick film, as shown in Figs. \ref{RT-d} (b) and (c). According to Matthiessen's rule:
\begin{equation}\label{eqn:matthieseen}
R(T) = R_0 + R_1(T)
\end{equation}
where R$_0$ is the residual resistance arising due to scattering from defects and impurities in the system, while R$_1$(T) corresponds to the intrinsic, temperature dependent resistance of the metal. Several mechanisms are known to affect the temperature dependent component $R_1$, namely elastic electron scattering, inelastic electron-phonon scattering, inelastic electron-impurity scattering, elastic electron-impurity scattering etc., giving rise to a host of interference processes \cite{gershenzon,bergmann,pritsina}. In bulk Pd, R$_1$(T) has been found to follow a T$^{1.5}$ behaviour from 300 K to $\sim$ 13 K \cite{white,kemp,matula}, below which the temperature dependence transforms to T$^2$ \cite{white,schindler}.
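One way to check the exponent of the intrinsic term is to fit $R(T) = R_0 + AT^n$ directly. A minimal sketch (Python with scipy assumed; the $R(T)$ trace below is a synthetic placeholder generated with $n = 1.5$, not measured data) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def matthiessen(T, R0, A, n):
    """R(T) = R0 + A*T^n; the exponent n distinguishes scattering regimes."""
    return R0 + A * T**n

# placeholder R(T) trace; bulk Pd is reported to follow n ~ 1.5 above ~13 K
T = np.linspace(20.0, 300.0, 100)
R = matthiessen(T, 2.0, 1e-3, 1.5) + 1e-3 * np.random.randn(T.size)

popt, _ = curve_fit(matthiessen, T, R, p0=[1.0, 1e-3, 1.0])
print("fitted exponent n = %.2f" % popt[2])
\end{verbatim}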
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{Fig5.jpg}
\caption{(Colour online): Temperature variation of resistance for the (a) 6 nm, (b) 5 nm, (c) 4 nm, (d) 3 nm and (e) 2 nm thin film. Red curves in (a)-(d) are straight-line fits. See text for details.}
\label{RT-d}
\end{figure}
Black filled circles in all the graphs of Fig. \ref{RT-d} correspond to data points, while the red lines in Figs. \ref{RT-d} (a)-(d) are straight-line fits to equation 3 below:
\begin{equation}\label{eqn:mattheisen1}
R(T) = R_0 + \alpha T
\end{equation}
where R$_0$ is the residual resistance at 0 K and $\alpha$ is the coefficient of the linear temperature dependence. As can be observed from Fig. \ref{RT-d}, a linear fit characterizes the 4-6 nm films in the higher temperature domain very well. This linear regime, though, progressively shrinks in range below room temperature and is observed only down to $\sim$ 150 K in the 3 nm film. Even though the resistance decreases with decreasing temperature, characterizing metallic behaviour, the observed linear temperature dependence of the resistance is different from the T$^{1.5}$ behaviour observed in bulk metallic Pd \cite{white,kemp,matula}. However, thin films of Pd in the thickness range 15-40 nm have been observed to have a linear temperature dependence in the temperature range 300 K - 100 K \cite{satrapinski}. It is to be noted that the range of linear temperature dependence is larger in the 6 nm to 4 nm films, extending from 300 K to $\sim$ 50 K. In contrast, for the 3 nm thick film, the linear variation of resistance with temperature occurs over the narrower range of 300 K to $\sim$ 150 K, below which the temperature dependence of the resistance becomes weaker. At $\sim$ 19.5 K, the resistance of the 3 nm film reaches a minimum, below which it starts to increase (dR/dT $<$ 0), signaling a ``metal-insulator'' transition at 19.5 K. So, even though the 3 nm film shows metallic behaviour at room temperature, charge localization effects become dominant at lower temperatures ($<$ 19.5 K).
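Extraction of R$_0$ and $\alpha$ from the linear window of each trace amounts to a first-degree polynomial fit. A sketch (Python with numpy assumed; the trace below is a placeholder, and the window limits must be chosen per film as discussed above) is:
\begin{verbatim}
import numpy as np

# placeholder R(T) trace in the window where R varies linearly with T
T = np.linspace(100.0, 300.0, 50)                    # temperature (K)
R = 5.0 + 0.02 * T + 0.05 * np.random.randn(T.size)  # resistance (Ohm)

# fit of eq. (3): slope = alpha, intercept = R0
alpha, R0 = np.polyfit(T, R, 1)
print("R0 = %.2f Ohm, alpha = %.4f Ohm/K" % (R0, alpha))
\end{verbatim}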
\begin{figure}[h!]
\centering
\includegraphics[width=0.94\textwidth]{Fig6.jpg}
\caption{(Colour online): Thickness variation of (a) residual resistance R$_0$ and (b) temperature coefficient of resistance $\alpha$. (c) lnW vs. lnT plot for the 2 nm thick film. Obtained exponent p = 0.35.}
\label{R0}
\end{figure}
The temperature dependence of the resistance of the 2 nm thick film is completely different from that of all the other films, in that the resistance increases as the temperature is lowered over the entire range from 300 K to 3.5 K (dR/dT $<$ 0), suggesting insulator-like charge transport in this film. Since the room temperature resistance is also that of an insulator ($\sim$ 43 k$\Omega$), this implies that the thin film has become discontinuous at this thickness of 2 nm. From Fig. \ref{RT-R} (a), we found the average particle size to be at a higher value of $\sim$ 20 nm for the 2 nm film, compared to the 6 nm thick film. This observation then suggests that the transformation of the growth process, from cohesive-energy dominated for the 6 nm film to surface-adsorption dominated for the 2 nm film, happened in such a way that the inter-particle separation increased in the thinner 2 nm film compared to the thicker metallic 6 nm film, so as to give an insulating mode of charge transport for the 2 nm thick film.\\
From the fits to equation 3, the values of the residual resistance R$_0$ as well as of the temperature coefficient of resistance $\alpha$ were obtained for all the films. Figs. \ref{R0} (a) and (b) plot R$_0$ and $\alpha$ as functions of film thickness, where the symbols denote the values extracted from the fits using equation 3. It can be seen that $\alpha$ increases as the film thickness decreases from 6 nm to 4 nm, but decreases for the 3 nm film. It is to be noted that for the 3 nm film, the range of the straight-line fit had shrunk from 300 K to $\sim$ 150 K (cf. Fig. \ref{RT-d}(d)). The non-monotonic variation of $\alpha$ with film thickness suggests that the above-mentioned processes contributing to $\alpha$, namely elastic electron scattering, inelastic electron-phonon scattering, inelastic electron-impurity scattering, elastic electron-impurity scattering etc. \cite{gershenzon,bergmann,pritsina}, together behave non-monotonically with thickness, one process dominating over another and vice versa as the thickness of the films is varied, the net result being a non-monotonic variation of $\alpha$ with thickness. On the other hand, the temperature independent component R$_0$ (shown by green filled diamonds in Fig. \ref{R0} (a)) is seen to increase monotonically with a decrease in film thickness. It is known that several factors like impurities, defects, grain boundary scattering and surface scattering contribute to R$_0$ \cite{zhai,mayadas,mayadas1,dutta,heiman,lacy}. It is expected that factors like impurities and defects, which arise in a given process of making a thin film, do not vary as much as the surface contribution, whose fraction keeps increasing as the film thickness is reduced. From Fig. \ref{RT-R} (b), it was found that surface scattering contributes predominantly to the room temperature resistance as the film thickness is varied. So the monotonic increase of R$_0$ with decreasing film thickness in Fig. \ref{R0} (a) is ascribed predominantly to surface scattering; however, contributions from grain boundary scattering cannot be completely neglected, and we attribute the increase of R$_0$ to a combination of predominantly surface scattering together with grain boundary scattering.\\
From Fig. \ref{RT-d} (e), it is clear that the 2 nm film is insulating at all temperatures, with the resistance increasing as the temperature decreases, resulting in a negative dR/dT. While studying charge transport in disordered insulators, Mott \cite{mott,mott1} found that at low temperatures, charge transport happens via electrons hopping between localized sites near the Fermi level, such that the hopping probability is maximized at an optimal distance ``r''. Assuming a constant density of states at the Fermi level, Mott found the resistance to vary with temperature as:
\begin{equation}\label{eqn:Mott}
R(T) = R_0 \exp\left[\left(\frac{T_0}{T}\right)^p\right]
\end{equation}
where
\begin{equation}\label{eqn:exponent}
p = \frac{1}{D+1}
\end{equation}
is called the hopping exponent and \textit{D} is the space dimensionality of the solid, so that \textit{p} equals 1/4 for a 3-dimensional solid and 1/3 for a 2-dimensional solid.\\
In order to estimate the hopping exponent, \textit{p}, it is convenient to calculate the logarithmic derivative \cite{khondaker,praveen}:
\begin{equation}\label{eqn:lnW}
W = -\frac{\partial \ln R(T)}{\partial \ln T} = p\left(\frac{T_0}{T}\right)^p
\end{equation}
from which \textit{p} can easily be obtained, since $\ln W = A - p\ln T$ with $A = \ln p + p\ln T_0$.\\
To understand whether Mott's variable range hopping is the primary mechanism of charge transport in the 2 nm thick film, we plotted lnW as a function of lnT, as shown in Fig. \ref{R0} (c). Black filled circles are the data points, while the red line is a straight-line fit to the data points. From the fit, the value of the hopping exponent was obtained as 0.35, extremely close to the expected value of 0.33 for a two-dimensional film, thus confirming Mott's variable range hopping as the main mechanism of charge transport in the 2 nm thick insulating Pd film.\\
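The logarithmic-derivative analysis of equations (\ref{eqn:Mott})-(\ref{eqn:lnW}) can be sketched as follows (Python with numpy assumed; the $R(T)$ trace is generated from equation (\ref{eqn:Mott}) with $p = 1/3$ as a placeholder, so the fit should recover $p \approx 0.33$):
\begin{verbatim}
import numpy as np

# placeholder R(T) trace for an insulating film, generated from eq. (4)
T = np.linspace(4.0, 300.0, 200)
T0, p_true = 500.0, 1.0 / 3.0
R = 1e3 * np.exp((T0 / T)**p_true)

# W = -dlnR/dlnT (eq. 6); then ln W = A - p ln T, so the slope gives -p
W = -np.gradient(np.log(R), np.log(T))
slope, intercept = np.polyfit(np.log(T), np.log(W), 1)
print("hopping exponent p = %.2f" % -slope)   # ~0.33 for a 2D film
\end{verbatim}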
From equation \ref{eqn:mattheisen1}, it is clear that the temperature coefficient $\alpha$ is a direct measure of dR(T)/dT, so a positive value of dR(T)/dT is an indicator of metallicity. However, the non-monotonic variation of $\alpha$ observed in Fig. \ref{R0} (b) above indicates that the metallicity of each film of thickness 6 nm to 3 nm is not of the pure-Pd-metal kind, where the electron-phonon interaction is the only dominant mechanism and behaves monotonically with film thickness \cite{pritsina}. So other mechanisms, like inelastic electron-impurity scattering arising from structural disorder in the thin Pd films, are expected to play a role in the charge transport mechanism of such films. Additionally, the metal-insulator transition at $\sim$ 19.5 K in the 3 nm thick film is a pointer to the fact that the processes governing charge transport in such ultra-thin films are non-trivial, do not correspond to those of bulk Pd metal, and are likely determined by structural defects/impurities, e.g., the grain-size distribution. In order to further probe the nature of charge transport in such ultra-thin Pd films, we decided to identify and vary alternative parameters that could affect the resistance of the films. Since Pd is known to selectively absorb H$_2$ gas, we decided to investigate the changes in the room temperature resistance of each Pd film on exposure to H$_2$.
\subsection{Charge transport mechanism under hydrogen exposure}
Pd metal's catalytic activity on H$_2$ gas and its selectivity to H$_2$ gas absorption are well known. Barr and Dankert \cite{barr,dankert} proposed a novel possibility with respect to charge transport in nano-island Pd films using the phenomenon of HAILE, wherein Pd islands that are close to the percolation threshold swell due to H atom absorption, closing the gaps between islands and thereby decreasing the resistance. This phenomenon is expected to work only for H$_2$ exposures of the order of 10,000-20,000 ppm, since the $\alpha$ to $\beta$ transition leading to the volume expansion of the lattice is known to happen at those concentrations of H$_2$ \cite{lewis,wolf,flanagan}. However, in scenarios where the inter-island gap is small enough that it can be closed by the lattice expansion of PdH$_x$ at the $\alpha$ $\rightarrow$ $\beta$ transition, this limitation can be overcome \cite{favier,dankert,ramanathan,xu}. Since our ultra-thin films in the 6 nm to 3 nm range are metallic at room temperature (with resistances of the order of tens of ohms), they are likely close to the percolation threshold, and HAILE-assisted charge transport may be possible in such films. So we investigated the resistive response of all the thin films to H$_2$ gas exposure at low concentrations. For each film thickness, three samples were measured at a given H$_2$ concentration.
\begin{figure}[h!]
\centering
\includegraphics[width=0.94\textwidth]{Fig7.jpg}
\caption{(Colour online): Resistance decrease in the 4 nm thick film at a H$_2$ concentration of (a) 5000 ppm and (b) 1000 ppm. Red continuous curves are fits to equation \ref{eqn:tau}. See text for details.}
\label{R5000ppm}
\end{figure}
Figure \ref{R5000ppm} (a) shows the resistive change in a 4 nm thin film exposed to a H$_2$ concentration of 5000 ppm, corresponding to 0.5$\%$ H$_2$, shown as black solid circles. The 4 nm thin Pd film was initially kept in an atmosphere of synthetic air, in which the resistance of the film was $\sim$ 32.7 $\Omega$. As is immediately apparent from the graph, on exposure to H$_2$ the resistance decreases, likely arising from lattice expansion of the Pd islands, i.e., joining up of the islands. So our nano-sized island films can detect a low concentration of 5000 ppm H$_2$, lower than the 10,000-20,000 ppm range normally expected for exploiting the HAILE mechanism. Studies of the effect of grain size on the $\alpha$ $\rightarrow$ $\beta$ transition in thin films of thickness 2 nm to 8 nm \cite{narehood,eastman,pundt,suleiman} found that the lattice constants corresponding to PdH$_{\alpha_{max}}$ as well as PdH$_{\beta_{min}}$ increase monotonically as a function of size. Hence, the observed decrease in resistance at the lower concentration (5000 ppm) of H$_2$ suggests that the lattice expansion in the 4 nm thick film is of the correct magnitude for the HAILE mechanism to be operative under H$_2$ exposure.\\
In order to check whether the films could detect an even lower concentration of H$_2$, we exposed the same 4 nm thick film to a H$_2$ concentration of 1000 ppm, as shown in Fig. \ref{R5000ppm} (b). From the observed resistance drop from the starting value of $\sim$ 32.2 $\Omega$, it can be seen that the 4 nm thick film can detect H$_2$ even at the much lower concentration of 1000 ppm. So our nano-island films also show promise as low-concentration H$_2$ gas detectors. However, the films do not regain the baseline resistance value upon withdrawal of H$_2$, due to the very strong stiction of Pd on the glass substrate used. We are now in the process of overcoming this hurdle by coating the glass substrates with a self-assembled monolayer that helps reduce the stiction of Pd on glass \cite{xu}.\\
Red solid lines in Fig. \ref{R5000ppm} are fits to an expression of the form:
\begin{equation}\label{eqn:tau}
R(t) = R_a + R_b\exp(-t/\tau_1) + R_c\exp(-t/\tau_2)
\end{equation}
where R$_a$, R$_b$ and R$_c$ are constants and $\tau_1$, $\tau_2$ are time constants defined in the usual sense, denoting the time taken by the decaying resistance to fall to 1/e of its initial value. It is interesting to note that the system requires two time constants to reach $\sim$ 83$\%$ of the starting resistance value (it takes a very long time to reach saturation due to strong stiction). The values of $\tau_1$ and $\tau_2$ obtained for a 5000 ppm exposure of H$_2$ are $\sim$ 290 s and 2780 s respectively. The two time constants increase to $\sim$ 500 s and 7000 s respectively for the lower concentration of 1000 ppm of H$_2$.
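The two-time-constant fit of equation (\ref{eqn:tau}) can be sketched as follows (Python with scipy assumed; the transient below is a synthetic placeholder built with the 5000 ppm values $\tau_1 \sim 290$ s and $\tau_2 \sim 2780$ s, and the starting guesses matter since $\tau_1 \ll \tau_2$):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, Ra, Rb, Rc, tau1, tau2):
    """Eq. (7): two-time-constant relaxation of the film resistance."""
    return Ra + Rb * np.exp(-t / tau1) + Rc * np.exp(-t / tau2)

# placeholder transient: time (s) and resistance (Ohm) after H2 is let in
t = np.linspace(0.0, 4000.0, 800)
R = (double_exp(t, 28.0, 2.5, 2.2, 290.0, 2780.0)
     + 0.02 * np.random.randn(t.size))

p0 = [R[-1], 1.0, 1.0, 100.0, 1000.0]   # guesses with tau1 << tau2
popt, _ = curve_fit(double_exp, t, R, p0=p0)
print("tau1 = %.0f s, tau2 = %.0f s" % (popt[3], popt[4]))
\end{verbatim}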
\subsubsection{Percolation induced enhanced conductive pathway model}
\begin{figure}[h!]
\centering
\includegraphics[width=0.94\textwidth]{Fig8.jpg}
\caption{(Colour online): Schematic representation of enhanced conducting pathways upon H$_2$ exposure. Contact pads in each schematic are shown by yellow rectangles: (a) before H$_2$ exposure, where grey spheres represent disconnected palladium nano-islands and blue spheres represent at least one percolative path. (b) After H$_2$ exposure, at least one new percolative path opens at the surface, resulting in the smaller time constant $\tau_1$. (c) Opening of more percolative paths as time evolves, resulting in the second time constant $\tau_2$.}
\label{percolationmodel}
\end{figure}
It is known that, in general, two different time constants in a system arise from two different mechanisms. For example, Ji et al. \cite{ji} studied the H$_2$S gas sensing properties of the metal-oxide sensor SnO$_2$ and found that apart from reacting with the adsorbed oxygen anions, H$_2$S also chemisorbs onto SnO$_2$ to produce SnS$_2$; the two different mechanisms result in different time constants. Similarly, Wang et al. \cite{wang} studied the gas sensing properties of Fe$_2$O$_3$ samples of different phases, namely $\alpha$-Fe$_2$O$_3$ and $\beta$-Fe$_2$O$_3$, and found different gas sensing mechanisms for the two phases and, consequently, two different time constants. However, in our case the sensing material comprises a single element, Pd, and the sensing gas, H$_2$, is a simple diatomic molecule of a single element. In such a case, the only possibility of chemisorption of H$_2$ with Pd is the formation of the compound PdH$_x$. It is true that PdH$_x$ undergoes a series of transformations between PdH$_{\alpha}$ and PdH$_{\beta}$ as a function of the H$_2$ gas concentration. However, at the low concentrations of 1000-5000 ppm of H$_2$ gas reported in this work, the resultant structure is of the PdH$_{\alpha}$ type. So different mechanisms arising from a phase change from PdH$_{\alpha}$ to PdH$_{\beta}$ do not seem likely.\\
It is known that different facets of a Pd crystal have different surface energies, resulting in differences in gas sensing and, consequently, different time constants \cite{johnson,zalineeva}. However, from the HRTEM images of Fig. \ref{TEM}, we do not see the formation of any nanocrystal. So it is also unlikely that the two time constants observed in our nanofilms arise from different H$_2$ loading/unloading time scales associated with different facets of a Pd nanocrystal. In order to understand the presence of two time constants in the system, we propose a model based on enhanced conducting pathways in Pd nano-islands upon H$_2$ exposure, shown schematically in Fig. \ref{percolationmodel}. In hydrogen sensors made from palladium mesowire arrays by Favier et al. \cite{favier}, two kinds of sensors were reported: ``Mode I'' sensors that were conductive in the absence of hydrogen and ``Mode II'' sensors that were insulating in the absence of hydrogen (with resistance $>$ 10 M$\Omega$). In both kinds of sensors, hydrogen exposure led to a decrease in the resistance. In order to explain this phenomenon, the authors proposed a mechanism of hydrogen-induced dilation of palladium grains that resulted in a decrease in resistance. We have built our model on this proposition. Our 6 nm, 5 nm, 4 nm and 3 nm thick films show conductive behaviour at room temperature, while the 2 nm thick film is resistive at room temperature. So the 6 nm - 3 nm thick films belong to the ``Mode I'' category while the 2 nm thick film belongs to the ``Mode II'' category. Grey spheres in Fig. \ref{percolationmodel} represent nano-sized palladium islands of varying sizes (cf. Figs. \ref{AFM} and \ref{RT-R} above). Since the 6 nm - 3 nm thin films are initially conducting in the absence of hydrogen, at least one percolative path must exist in the nano-island assembly before any H$_2$ exposure, as shown by the connected blue spheres in Fig. \ref{percolationmodel} (a). The charge transport in such initially conductive channels may be through wavefunction-overlap metallic transport. As soon as the films are exposed to H$_2$ gas, the HAILE process opens at least one new percolative path in the nano-island assembly (shown by the brown spheres in Fig. \ref{percolationmodel} (b)) between islands that were initially not touching each other, similar to the ``Mode I'' category of Favier et al. \cite{favier}. Since the paths are expected to form a net-like array \cite{zeng}, they are shown as both horizontal and vertical paths. Once contact between grains is made, charge transport may happen via the wavefunction-overlap metallic conduction mechanism. The opening up of the first new percolative path reduces the resistance of the assembly and is a fast process whose time constant is governed by $\tau_1$. As time progresses, more and more new percolative paths open up, creating a parallel network of resistors and decreasing the net resistance of the assembly, as shown in Fig. \ref{percolationmodel} (c). The longer time constant $\tau_2$ is governed by the slow opening of the many percolative paths, one after the other. Hence the system takes a long time to equilibrate. This phenomenon can also explain the extremely short time constants achieved in mesowire arrays \cite{zeng,walter,yang}, where the opening up of percolative paths is expected to happen simultaneously.\\
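A toy version of this picture already reproduces a fast initial drop followed by a slow tail. In the sketch below (Python with numpy assumed; all rates and resistances are illustrative, not fitted to the data), identical percolative paths of resistance $R_p$ open in parallel, one quickly and the rest on a slower time scale:
\begin{verbatim}
import numpy as np

Rp = 35.0                            # resistance of one path (Ohm)
t = np.linspace(0.0, 4000.0, 4001)   # time after H2 exposure (s)

# expected number of open paths: one pre-existing path, a fast first
# opening (tau1-like) and a slow cascade of further openings (tau2-like)
n_open = (1.0 + (1.0 - np.exp(-t / 300.0))
          + 4.0 * (1.0 - np.exp(-t / 2800.0)))

# identical resistors in parallel: R_net = Rp / (number of open paths)
R_net = Rp / n_open
print("R(0) = %.1f Ohm, R(4000 s) = %.1f Ohm" % (R_net[0], R_net[-1]))
\end{verbatim}
The resulting $R(t)$ decays with an effectively short and an effectively long time scale, mirroring the $\tau_1$ and $\tau_2$ extracted from the fits.\\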
Since the lattice expansion from PdH$_0$ to PdH$_{\alpha_{max}}$ occurs monotonically with H$_2$ concentration \cite{narehood,eastman,pundt,suleiman}, lower concentrations of H$_2$ would require longer times for the HAILE mechanism to set in, and consequently longer time constants. This is exactly what is observed in the 4 nm thick Pd film exposed to the lower concentration of 1000 ppm of H$_2$: the obtained time constants $\tau_1$ and $\tau_2$ were found to be higher than those corresponding to the higher exposure of 5000 ppm. While comparing the isotherms of a 3.8 nm and a 6 nm Pd cluster during H$_2$ loading and unloading, Pundt et al. \cite{pundt} found an enhanced solubility of H$_2$ in the low-concentration region (below PdH$_{\alpha_{max}}$). Simultaneously, the lattice constant of the PdH$_{\alpha_{max}}$ structure was found to increase compared to bulk PdH$_{\alpha_{max}}$, the increase being larger in the 6 nm cluster than in the 3.8 nm cluster. Pundt et al. \cite{pundt} attributed this observation to additional sorption of H$_2$ at sub-surface sites. As discussed earlier, surface scattering has a predominant effect on the resistance of our ultra-thin films (see the discussion of Fig. \ref{RT-R}). This observation then suggests that the enhanced detection range, down to the lower concentration of 1000 ppm H$_2$, observable in our ultra-thin films may be due to extra H$_2$ absorption below the surface, forming \textit{H} atom layers below the surface, as shown in the schematic above.
\begin{figure}[h!]
\centering
\includegraphics[width=1.1\textwidth]{Fig9.jpg}
\caption{(Colour online): Variation of sensitivity in thin Pd films of thickness (a) 6 nm, (b) 5 nm, (c) 4 nm, (d) 3 nm and (e) 2 nm for a H$_2$ exposure of 5000 ppm concentration. Red continuous curves in each of (a)-(e) are fits used to estimate the time constants $\tau_1$ and $\tau_2$.}
\label{tau-d}
\end{figure}
It is to be observed that the 2 nm thick film, which is insulating at room temperature (cf. Figs. \ref{RT-R}-\ref{R0}), shows an initial increase in resistance after exposure to H$_2$ till $\sim$ 15 s, after which the resistance starts to decrease again. A similar behaviour was also observed by Xu et al. \cite{xu} in a 3 nm thick film grown on a glass substrate and exposed to 2$\%$ H$_2$. They ascribed the said increase in resistance to PdH$_x$ hydride formation, since at H$_2$ concentrations of 20,000 ppm and above, bulk PdH$_x$ transforms to a hydride lattice structure \cite{lewis,flanagan,wolf,barr,wu,morris}. However, at concentrations below 10,000 ppm, it is known that in bulk Pd, the PdH$_{x}$ structures are in the form of a solid solution \cite{lewis,flanagan,wolf,barr,wu,morris}. Hence, at a 5000 ppm concentration of H$_2$, the resultant PdH$_x$ structure should be amorphous, so an increase in resistance could not plausibly arise from scattering of electrons in an already disordered amorphous structure. Moreover, as already observed above, the 2 nm thick films were found to be insulating down to the lowest measured temperatures, and the transport in such films was found to be due to Mott's variable range hopping mechanism. An initial increase in resistance in these films suggests that the gap between islands is too large to be closed at a 5000 ppm concentration of H$_2$ exposure. The initial rise in resistance in this film is then ascribed to the increase in the work function of the granular Pd film due to surface adsorption of the H$_2$ molecules \cite{barr,wu,morris}, with the charge transferred via Mott's variable range hopping. This phenomenon is counteracted by the HAILE mechanism, which tends to decrease the resistance of the system due to swelling of the Pd islands on H atom adsorption. With the passage of time, as more and more H atoms get adsorbed onto the Pd surface, the catalytic activity of the Pd atoms decreases (due to unavailability of free Pd sites), forcing the dissociated H atoms to move from the surface into the grain. These H atoms form additional PdH$_{x}$ structures below the surface and result in additional expansion of the Pd islands due to swelling. At some point in time, the work-function increase of the surface would exactly balance the HAILE process, beyond which the HAILE phenomenon would win out, resulting in the subsequent decrease in resistance, as observed. For the thickest 6 nm film, since the free surface area is the least (compared to that of all the other films), the work-function increase due to the surface would always lose out to the HAILE-induced expansion and, hence, only a resistance decrease is observed.
\begin{table}
\begin{tabular}{|c |c |c |}
\hline
\makecell{Thickness\\ (nm)} & \makecell{Time-constant $\tau_1$\\ (s)} & \makecell{Time-constant $\tau_2$\\ (s)} \\
\hline
6 & 20.7 $\pm$ 0.08 & -- \\ \hline
5 & 120.2 $\pm$ 0.6 & 1081.8 $\pm$ 6.2 \\ \hline
4 & 293.7 $\pm$ 1.2 & 2782.3 $\pm$ 25.9 \\ \hline
3 & 340.3 $\pm$ 2 & 2883.9 $\pm$ 73.8 \\ \hline
2 & 78.5 $\pm$ 0.2 & 1610.6 $\pm$ 2.8 \\
\hline
\end{tabular}
\caption{Variation of the two time-constants $\tau_1$ and $\tau_2$ with thickness.}
\label{Table1}
\end{table}
Our model proposes the existence of two time constants $\tau_1$ and $\tau_2$ of differing values, such that $\tau_1$ is much smaller than $\tau_2$, since the former signals the switching on of the first new percolative path while the latter arises from the subsequent opening up of many new percolative paths. In order to see how the time constants evolve as the thickness of the films is varied systematically from 6 nm down to 2 nm, we exposed each film to a H$_2$ concentration of 5000 ppm, as shown in Figs. \ref{tau-d} (a)-(e). Black curves in each graph correspond to the data points, while the red curves are fits corresponding to equation \ref{eqn:tau}, with $R$, $R_a$, $R_b$ and $R_c$ replaced by their fractional-change counterparts $\Delta R$, $\Delta R_a$, $\Delta R_b$ and $\Delta R_c$ respectively. The results of the fits are given in Table \textbf{\ref{Table1}}. It is to be noted that for all films barring the 6 nm thick film, the waiting time was 4000 s, so a reasonable value of the longer time constant $\tau_2$ could be obtained. A comparable value of $\tau_2$ could not be obtained for the shorter waiting time of 600 s used for the 6 nm film. From Table \ref{Table1}, two observations can be made for the films that are metallic at room temperature: (i) both time constants $\tau_1$ and $\tau_2$ systematically increase with decreasing film thickness, and (ii) $\tau_1$ $<$ $\tau_2$ for each film, with $\tau_1$ an order of magnitude smaller than $\tau_2$.\\
From Fig. \ref{RT-R} (a), it was found that the average grain size \textit{r} varied non-monotonically with thickness: \textit{r} decreased with decreasing thickness from 6 nm to 4 nm but increased below 4 nm. This variation is exactly contrasted by the behaviour of the temperature coefficient of resistance $\alpha$ with thickness, wherein $\alpha$ increased with decreasing thickness down to 4 nm but decreased below it (see Fig. \ref{R0} (b)). Since various scattering mechanisms, like inelastic electron-impurity scattering, elastic electron scattering etc. \cite{gershenzon,bergmann,pritsina}, govern the behaviour of $\alpha$, the above observation suggests that at room temperature, as the thickness is reduced from 6 nm to 4 nm, the average size \textit{r} decreases while the scattering increases; below 4 nm, \textit{r} increases while the scattering decreases. When hydrogen is introduced at the low concentration of 5000 ppm, the resultant PdH$_{\alpha}$ structure is a disordered solid solution \cite{zuhner,bohmholdt,narehood}. It is expected that the formation of the PdH$_{\alpha}$ phase would be affected by the underlying state of order of the Pd films; consequently, the opening of the percolative paths governing the time constants would depend strongly on this order. That the time constants vary monotonically with decreasing film thickness indicates that the contrasting non-monotonic variations of size and $\alpha$ negate each other to produce a monotonic variation of the $\tau$'s. This further implies that, even though all the films between 6 nm and 3 nm have a positive dR/dT at room temperature, the inter-grain separation increases as the thickness decreases, requiring a longer time constant for the HAILE mechanism to set in with decreasing film thickness. For the 3 nm thick film, the increased inter-grain distance is of such a magnitude that room temperature provides enough thermal energy for conduction. However, as the temperature is reduced, lowering the associated thermal energy, a point is reached where this energy is insufficient for conduction, resulting in the metal-insulator transition, as observed (cf. Fig. \ref{RT-d} (d)).\\
For the 2 nm film, in contrast, the inter-grain separation is such that even room-temperature thermal energy is insufficient for conduction and the resultant state is insulating. As mentioned above, the initial increase in resistance of the 2 nm film may possibly arise from an increase in the work function of the Pd film due to surface adsorption of the incoming H$_2$ molecules at the available Pd sites \cite{barr,wu,morris,sun}. The subsequent decrease in the time constants $\tau_1$ and $\tau_2$ in these films then suggests that the sub-surface absorption of H atoms, taking place after the H atoms were pushed below the surface, resulted in a state where the atoms were at small enough distances that the HAILE mechanism set in, leading to fast formation of the first percolation pathway. The observation of $\tau_1$ being an order of magnitude smaller than $\tau_2$ can be understood from the fact that $\tau_1$ corresponds to the time when the first percolative path opens, which is a fast process, while $\tau_2$ corresponds to the subsequent opening up of newer percolative paths, which is a slower process.
\section{Conclusion:}
To conclude, the charge transport mechanism of ultra-thin films of palladium of thicknesses varying from 6 nm down to 2 nm has been investigated by studying the temperature dependence of their resistance. It was found that the films of thicknesses from 6 nm to 4 nm were metallic over the entire temperature range measured. The 3 nm film was found to undergo a metal-insulator transition at a temperature of 19.5 K, while the thinnest film, of 2 nm, was insulating at all measured temperatures, following Mott's variable-range hopping mechanism of charge transport. The effect of hydrogen exposure on the charge transport mechanism of the ultra-thin films was further investigated by exposing the films to a low concentration ($\sim$ 5000 ppm) of hydrogen gas. It was found that at room temperature, all the metallic films exhibited a decrease of resistance on H$_2$ exposure, which has been ascribed to the hydrogen-induced lattice expansion phenomenon. The insulating film, on the other hand, exhibited an initial increase of resistance on H$_2$ exposure, followed by a decrease upon further exposure. All the films exhibited two time-constants, rather than a single one, in returning to the starting value of resistance. In order to explain the presence of two time-constants in the system, we proposed a model employing enhanced conducting pathways upon H$_2$ exposure, wherein the smaller time-constant is ascribed to the opening of the first new percolative channel and is a fast process. The higher time-constant corresponds to the slow opening up of parallel percolative channels in the ultra-thin films. Thus, the amount of H$_2$ gas at room temperature is an extra parameter that could be used to tune the charge transport mechanism in ultra-thin films of Pd, which may have a bearing on the use of ultra-thin Pd films for low-concentration hydrogen sensing. This study can also have implications for liquid metals, whose properties can be tuned by adding other materials into their bulk or surface \cite{kalantar,castro}. For instance, it is known that liquid metals can dissolve other elements or molecules, which upon dissolution can act as precursors in the liquid environment, thereby generating new products. Our study of the effect of dissolution of low-concentration hydrogen into ultra-thin films of palladium can shed light on how this dissolution happens, whether surface and sub-surface absorption is important, and whether the opening of percolation paths similar to that described above has any consequence for the dissolution process.
\section*{Conflicts of Interest}
There are no conflicts of interest to declare.
\section*{Acknowledgement:}
D.K.S. acknowledges financial support from SERB, DST, Govt. of India (Grant No. YSS/2015/001743). P.S.G. thanks ISRO RESPOND (Grant No. ISRO/RES/3/762/17-18) for financial support. D.J.-N. acknowledges financial support from SERB-DST and ISRO RESPOND, Govt. of India (Grant Nos YSS/2015/001743, ISRO/RES/3/704/16-17 and ISRO/RES/3/762/17-18). JM acknowledges financial support from SERB, Govt. of India (SR/52/CMP-0139/2012), UGC-UKIERI (184-26/2014(IC),184-16/2017(IC)) and the Royal Academy of Engineering, Newton Bhabha Fund, UK.
\section{Introduction}
Spacecraft position and attitude estimation is essential to on-orbit operations \cite{nanjangud2018robotics}, e.g., formation flying, rendezvous, docking, servicing and space debris removal \cite{taylor2018remove}. These rely on precise and robust estimation of the relative pose and trajectory of object targets in close-proximity under harsh lighting conditions and against highly textured background (i.e. Earth). As surveyed in \cite{opromolla2017review}, according to the specific operation scenario, the targets may be either: (i) cooperative if they use a dedicated radio-link, fiducial markers or retro-reflectors to aid pose determination or (ii) non-cooperative with either unknown or known geometry. Recently, the latter has been gaining more interest from both the research community and space agencies, due mainly to the accumulation of inactive satellites and space debris in low Earth orbit \cite{forshaw2016removedebris}, but also to military space operations. For instance, this year ESA opened a competition \cite{ESAChallenge} to estimate the pose of a known spacecraft from a single image using supervised learning. This paper addresses this problem.
\par
The main limitation of deep learning (DL) is that it needs a lot of data, which is especially costly in space. Therefore, as our first contribution, we propose a visual simulator built on Unreal Engine 4, named URSO, which allows obtaining photorealistic images and depth masks of commonly used spacecrafts orbiting the Earth, as seen in Fig. \ref{fig:teaser}.
Secondly, we carried out an extensive experimental study of a DL-based pose estimation framework on datasets obtained from URSO, where we investigate the performance impact of several aspects of the architecture and training configuration. Among our findings, we conclude that data augmentation with random camera orientation perturbations is quite effective at combating overfitting, and we present a probabilistic orientation estimation via soft classification that performs significantly better than direct orientation regression and can further model uncertainty due to orientation ambiguity as a Gaussian mixture. Moreover, our best solution achieved \nth{3} place on the synthetic dataset and \nth{2} place on the real dataset of the ESA pose estimation challenge \cite{ESAChallenge}. We also demonstrate qualitatively how models trained on URSO data can generalize to real images from space through our augmentation pipeline.
\begin{figure}[t]
\centering
\begin{tabular}{@{}c@{ }c@{}}
\includegraphics[width=40.5mm,height=30.5mm]{images/fig1/half_marble.png} &
\includegraphics[scale=0.09]{images/fig1/499_rgb.png}\\
\includegraphics[scale=0.09]{images/fig1/279_rgb.png} &
\includegraphics[scale=0.09]{images/fig1/4603_rgb.png}\\
\end{tabular}
\caption{ Example of frames synthesized by URSO of a soyuz model. For videos and the datasets used in this work, refer to: {\small \url{https://pedropro.github.io/project/urso/}}}
\vspace*{-1mm}
\label{fig:teaser}
\end{figure}
\section{Related Work}
Previous monocular solutions \cite{naasz2010flight,kelsey2006vision,liu2014relative,petit2013robust,petit20153d, capuano2019robust} to spacecraft tracking and pose estimation rely on model-based approaches (e.g. \cite{DrummondCipolla}) that align a wireframe model of the object to an edge image (typically given by a Canny detector) of the real object based on heuristics. However objects are more than just a collection of edges and geometric primitives. Convolutional Neural Networks (CNNs) can learn more complex and meaningful features to the task at hand while ignoring background features (e.g. clouds) based on context.
Despite the maturity of DL in many computer vision tasks, only recently \cite{kendall2015posenet,xiang2018posecnn,kehl2017ssd,do2018deep,hu2019segpose,tekin2018real,rad2017bb8} has DL become common in pose estimation problems.
Kendall \textit{et al.} \cite{kendall2015posenet} first proposed adapting and training GoogLeNet on Structure-from-Motion models for camera relocalization. Their network was trained to regress a quaternion by minimizing the $L_2$ loss between quaternions. Moreover, they extended their method to model uncertainty by using Monte Carlo sampling with dropout \cite{kendall2016modelling}. Kehl \textit{et al.} \cite{kehl2017ssd} proposed a DL solution for detection and pose estimation of multiple objects based on hard viewpoint classification, where ambiguous views are manually removed a-priori. On the other hand, Xiang \textit{et al.} \cite{xiang2018posecnn} proposed a model based on a segmentation network for handling multiple objects. While object locations are estimated using Hough voting on the network image output, their orientations are estimated through quaternion regression following ROI pooling. To account for object symmetries, a loss function based on ICP is used, but this is prone to local minima and it requires a depth map. To handle more than one object instance per class, Thanh-Toan \textit{et al.} \cite{do2018deep} extended Mask-RCNN to pose estimation by simply adding a head branch, which regresses orientation as the angle-axis vector. Although this minimal parameterization avoids the quaternion normalization, they still employed an $L_2$ loss function. Mahendran \textit{et al.} \cite{mahendran20173d} also regress the angle-axis vector, but they minimize directly the geodesic loss. Su \textit{et al.} \cite{su2015render} performed fine-grained hard viewpoint classification. Hara \textit{et al.} \cite{hara2017designing} compared regressing the azimuth using either the $L_2$ loss or the angular difference loss versus hard classification with the mean-shift algorithm to retrieve a continuous value. DL has also been successfully applied to visual odometry \cite{wang2017deepvo,Zhou_2018_ECCV}. While Wang \textit{et al.} \cite{wang2017deepvo} simply regress Euler angles, Zhou \textit{et al.} \cite{Zhou_2018_ECCV} regress simultaneously multiple (i.e. 64) pose hypotheses with the angle-axis representation and then average them, since pose updates in visual odometry are usually small. There is a large body of work on pose estimation from RGB-D images, which was recently comprehensively evaluated in \cite{hodan2018bop}, where typically ICP is used for pose refinement. In their benchmark, \cite{hodan2018bop} concluded that learning-based solutions are still not on par with point-cloud-based methods \cite{hinterstoisser2016going} in terms of precision. But more recently, \cite{hu2019segpose,tekin2018real,rad2017bb8} have advanced the state of the art by refraining from estimating pose directly, instead using CNNs to regress the 2D projections of predefined 3D keypoints and finally estimating pose using robust P$n$P solutions, e.g., embedded in RANSAC. Approaches such as \cite{hu2019segpose} however need further work to handle very small or far-away objects, as they rely on coarse segmentation grids.
\par
Sharma \textit{et al.} \cite{sharma2018pose} were the first to propose using CNNs for spacecraft pose estimation, based on hard viewpoint classification, but later they \cite{sharma2019pose} proposed doing position estimation based on bounding box detection and orientation estimation based on soft classification. Although the approach to position estimation fails when part of the object is outside the field of view, the orientation estimation has its merits. Two head branches are used for orientation estimation: one does hard classification, given a set of pre-defined quaternions, to find the $N$ closest quaternions, then a second branch estimates the weights for these $N$ quaternions, and the final orientation is given by the weighted average quaternion. Our method for orientation estimation is similar to this approach; however, our framework does not require two orientation branches, provides intuitive regularization parameters and can handle multiple hypotheses due to perceptual aliasing. \par
The same work \cite{sharma2019pose} introduced the dataset used in the ESA challenge, which is just made of montages of real images with basic OpenGL renderings of a satellite. On the other hand, two image simulation tools have so far been specifically developed to support vision-based navigation in space scenes (e.g. Martian surface, asteroid landing): the early PANGU \cite{parkes2004planet} used by ESA and the more comprehensive Airbus internal simulator: SurRender \cite{brochard2018scientific}, which supports ray tracing and very large datasets. Nevertheless, state-of-the-art game engines (e.g. UE4), widely used in autonomous driving \cite{Dosovitskiy17} and robotics \cite{shah2018airsim,martinez2018unrealrox} offer far more resources to develop complex and photorealistic environments, but these have been criticized in \cite{brochard2018scientific} for being designed for human vision and lacking the photometric accuracy of actual sensors. We point out that recent efforts have been made in the source-available UE4 to implement physically-based shading models and cameras.
\begin{figure}[t]
\centering
\includegraphics[scale=0.49]{images/fig1/pipeline.pdf}
\caption{Simplified overview of the network architecture proposed in this work.}
\label{fig:pipeline}
\end{figure}
\section{Pose Estimation Framework}
\label{sec:method}
Our network architecture, depicted in Fig. \ref{fig:pipeline}, is aimed at simplicity rather than efficiency to perform a first ablation study. We adopted the ResNet architectures with pre-trained weights as the network backbone, due to their low number of pooling layers and good accuracy-complexity trade-off \cite{canziani2016analysis}. The last fully-connected layer and the global average pooling layer of the original network were removed to keep spatial feature resolution, leaving effectively only one pooling layer at the second layer. The global pooling layer was replaced by one extra 3$\times$3 convolution with stride of 2 (bottleneck layer) to compress the CNN features, since our task branches are fully-connected to the input tensor. For lower space complexity, one could use instead a Region Proposal Network as in \cite{xiang2018posecnn,do2018deep,he2017mask}, but this complicates our end-to-end pose estimation. As a drawback, our network does not handle multiple objects \textit{per se}.
\par
Our 3D location estimation is a simple regression branch with two fully-connected layers, but instead of minimizing the absolute Euclidean distance, we minimize the relative error, corresponding to the first term of our total loss function:
\begin{equation}
\label{eq:loc_total}
L_{\textrm{total}} = \beta_1 \sum_{i}^{m}\frac{\|t^{(i)}-t^{(i)}_{gt}\|_2}{\|t^{(i)}_{gt}\|_2} + \beta_2 L_{\textrm{ori}}
\end{equation}
where $t^{(i)}$ and $t^{(i)}_{gt}$ are respectively the estimated and ground-truth translation vectors. The sole advantage of minimizing the relative error is that the fine-tuned loss weights $\{\beta_1,\beta_2\}$ in our experiments generalize better to other datasets, as this loss does not depend on the translation scale. To avoid having to fine-tune loss weights, we have also experimented, in Section \ref{sec:experiments}, with instead regressing three virtual 3D keypoints and then estimating pose using a closed-form solution \cite{arun1987least}.
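A minimal PyTorch sketch of this relative location term, assuming batched tensors of shape $(m,3)$, is:
\begin{verbatim}
import torch

def relative_location_loss(t_pred, t_gt):
    # ||t - t_gt|| / ||t_gt||, summed over the batch. Being
    # scale-independent, the tuned weights transfer across datasets.
    return (torch.norm(t_pred - t_gt, dim=1)
            / torch.norm(t_gt, dim=1)).sum()
\end{verbatim}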
\subsection{Direct Orientation Regression}
\label{sec:ori_reg}
While several works \cite{kendall2015posenet,do2018deep,wang2017deepvo} have used the $L_2$ or $L_1$ loss to regress orientation, these losses do not correctly represent the actual angular distance for any orientation representation. The quaternion parameterization, for example, is non-injective: $q$ and $-q$ represent the same rotation. While one can map quaternions to lie only on one hemisphere as in \cite{kendall2017geometric}, $L_2$ distances to quaternions near the equator will still not express the geodesic distance. Therefore we have experimented with minimizing directly either:
$L_{\alpha}=\arccos(|{q^{(i)}}^\top q_{gt}^{(i)}|)$
or:
$L_{\cos{\alpha}}=1-|{q^{(i)}}^\top q_{gt}^{(i)}|$ to regress a unit quaternion $q^{(i)}$, subject to a normalization layer. One possible issue with the first expression is that the derivative of $\cos^{-1}(x)$ is infinite at $x=1$, but this can be easily solved by scaling down $x$.
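Both variants reduce to a few lines; the sketch below clamps the dot product, one common way of realizing the scaling mentioned above, to keep the $\cos^{-1}$ gradient finite:
\begin{verbatim}
import torch

def loss_alpha(q, q_gt, eps=1e-7):
    # Geodesic angle between unit quaternions; |dot| absorbs the
    # q / -q ambiguity, clamping keeps the acos gradient finite.
    d = torch.abs(torch.sum(q * q_gt, dim=1)).clamp(max=1.0 - eps)
    return torch.acos(d).mean()

def loss_cos_alpha(q, q_gt):
    return (1.0 - torch.abs(torch.sum(q * q_gt, dim=1))).mean()
\end{verbatim}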
\subsection{Probabilistic Orientation Soft Classification}
Alternatively, we propose to do continuous orientation estimation via classification with soft assignment coding \cite{liu2011defense}. The key idea is to encode each label ($q_{gt}$) as a Gaussian random variable in a discrete orientation output space (represented in Fig. \ref{fig:pipeline}), so that the network learns to output probability mass functions. To this end, a 3D histogram is used as the network output, where each bin maps to a combination of discrete Euler angles specified by the quantization step. Special care is taken to avoid redundant bins at the \textit{Gimbal lock} configurations and at the borders. Let $Q=\{b_1,..,b_N\}$ be the quaternions corresponding to the histogram bins; then, during training, each bin is encoded with the soft assignment function:
\begin{equation}
\label{eq:soft_encoding}
f(b_i,q_{gt}) = \frac{K(b_i,q_{gt})}{\sum_{j}^{N}K(b_j,q_{gt})}
\end{equation}
where the kernel function $K(x,y)$ uses the normalized angular difference between two quaternions:
\begin{equation}
\label{eq:kernel_fx}
K(x,y)= e^{-\frac{\big(\frac{2\cos^{-1}(|x^\top y |)}{\pi}\big)^2}{2 \sigma^2}}
\quad \textrm{and} \quad
\sigma^2 = \frac{\big(\frac{\Delta}{M}\big)^2}{12}
\end{equation}
and the variance $ \sigma^2$ is given by the quantization error approximation, where $\Delta/M$ represents the quantization step, $\Delta$ is the smoothing factor that controls the Gaussian width and $M$ is the number of bins per dimension (i.e. Euler angle).
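As a minimal sketch, the encoding of (\ref{eq:soft_encoding}) with the kernel of (\ref{eq:kernel_fx}) amounts to evaluating the kernel at every bin quaternion and normalizing; here \texttt{bins} is an $N\times4$ array of the histogram quaternions, whose construction from the Euler-angle grid is omitted:
\begin{verbatim}
import numpy as np

def encode(q_gt, bins, delta, M):
    sigma2 = (delta / M) ** 2 / 12.0  # quantization-error variance
    d = 2.0 * np.arccos(np.clip(np.abs(bins @ q_gt), 0, 1)) / np.pi
    k = np.exp(-d ** 2 / (2.0 * sigma2))  # kernel K(b_i, q_gt)
    return k / k.sum()                    # soft assignments
\end{verbatim}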
At test time, given the bin activations $\{a_1,..,a_N\}$ and the respective quaternions, in one hemisphere, we can fit a quaternion by minimizing the weighted least squares:
\begin{equation}
\label{eq:quat_avg}
\hat{q} = \operatorname*{argmin}_q \sum_{i}^{N} w_i \left(1-({b_i}^\top q)^2\right)
\end{equation}
where the weight $w_i$ is set to the bin activation $a_i$, and the optimal solution is given by the eigenvector of the matrix $\sum_{i}^{N} w_i ({b_i}{b_i}^\top)$ associated with its largest eigenvalue \cite{markley2007averaging}. This solution was also employed in \cite{sharma2019pose}.
\par
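In code, the weighted average of (\ref{eq:quat_avg}) reduces to a single symmetric eigendecomposition; the sketch below assumes \texttt{bins} holds the $N$ hemisphere quaternions and \texttt{w} the corresponding activations:
\begin{verbatim}
import numpy as np

def average_quaternion(bins, w):
    M = (w[:, None] * bins).T @ bins  # 4x4 sum_i w_i b_i b_i^T
    _, vecs = np.linalg.eigh(M)       # ascending eigenvalues
    return vecs[:, -1]                # max-eigenvalue eigenvector
\end{verbatim}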
\subsection{Multimodal Orientation Estimation}
\label{multimodal_EM}
When there are ambiguous views in the training-set, this results in one-to-many mappings; therefore the optimal network that minimizes the cross-entropy loss, given the soft assignments in (\ref{eq:soft_encoding}), will output a multimodal distribution.
To extract multiple orientation hypothesis from such network's output, we propose an Expectation-Maximization (EM) framework to fit a Gaussian Mixture model $\Theta=\{\theta_1,...,\theta_K\}$ with means $\{q_1,...,q_K\}$.
As the E step, for every model $\theta_j$ and bin we compute the membership:
\begin{equation}
\label{eq:E_step}
p(\theta_j|b_i) = \frac{p(b_i|\theta_j) p(\theta_j)}{\sum_{k}^{K}p(b_i|\theta_k) p(\theta_k)}
\end{equation}
where $p(b_i|\theta_j)= K(b_i,q_j)$ with $\sigma_j$ initialized as in (\ref{eq:kernel_fx}) and the priors $p(\theta_j)$ as equiprobable. These are then updated in the M step:
\begin{equation}
\label{eq:prior}
p(\theta_j) = \sum_{i}^{N} a_i p(\theta_j|b_i) \textrm{ {\small and} }
\sigma_{j} = \sum_{i}^{N} w_{ji} \Big(\frac{2\cos^{-1}(|b_i^\top q_j |)}{\pi}\Big)^2
\end{equation}
where $q_j$ is firstly obtained by solving (\ref{eq:quat_avg}) with the weights:
$
w_{ji} = \frac{a_i p(\theta_j|b_i)} {p(\theta_j)}
$
. The model means are initialized as the $K$ bins with strongest activations after non-maximum suppression.
To find the optimal number of models, we increase $K$ until the log-likelihood stops increasing by more than a threshold.
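A compact sketch of this EM loop, for a fixed $K$ with the means pre-initialized from the strongest non-maximum-suppressed bins, could read as follows (the mean update reuses the weighted quaternion average above):
\begin{verbatim}
import numpy as np

def kernel(bins, q, sigma2):
    d = 2.0 * np.arccos(np.clip(np.abs(bins @ q), 0, 1)) / np.pi
    return np.exp(-d ** 2 / (2.0 * sigma2))

def em_modes(bins, a, q_init, sigma2_init, n_iter=20):
    K = len(q_init)
    q, s2 = list(q_init), [sigma2_init] * K
    prior = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E step: memberships p(theta_j | b_i)
        lik = np.stack([kernel(bins, q[j], s2[j]) for j in range(K)])
        post = lik * prior[:, None]
        post /= post.sum(axis=0, keepdims=True) + 1e-12
        # M step: priors, means and widths
        for j in range(K):
            prior[j] = np.sum(a * post[j])
            w = a * post[j] / (prior[j] + 1e-12)
            M = (w[:, None] * bins).T @ bins
            q[j] = np.linalg.eigh(M)[1][:, -1]
            d = 2.0 * np.arccos(
                np.clip(np.abs(bins @ q[j]), 0, 1)) / np.pi
            s2[j] = np.sum(w * d ** 2)
    return q, prior, s2
\end{verbatim}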
\section{URSO: Unreal Rendered Spacecraft On Orbit}
Our simulator leverages Unreal Engine 4 (UE4) features to render realistic images, e.g., physically based materials, bloom and lens flare. Lighting in our environment is simply made of a directional light and a spotlight to simulate respectively sunlight and Earth albedo. Ambient lighting was disabled, and to simulate the sun we used a body of emissive material with UE4 bloom scatter convolution. Earth was modelled as a high polygonal sphere textured with $21600 \times 10800$ Earth and cloud images from the Blue Marble Next Generation collection \cite{BlueMarble}. This is further masked to obtain specular reflections from the ocean surface. Additionally, a third-party asset is used to model the atmospheric scattering. Our scene includes Soyuz and Dragon spacecraft models with geometry imported from 3D model repositories \cite{Turbosquid}. \par
To generate datasets, we sample randomly $5000$ viewpoints around the day side of the Earth from low Earth orbit altitude. The Earth rotation, camera orientation and target object pose are all randomized. Specifically, the target object is placed randomly within the camera viewing frustum and an operating range between [10,40] m. Our interface uses UnrealCV plugin \cite{qiu2017unrealcv}, which allows obtaining an RGB image and depth map for each viewpoint. Images were rendered at a resolution of 1080$\times$960 pixels by a virtual camera with a 90$^\circ$ horizontal FOV and auto-exposure.
\section{Data Augmentation and Sim-to-Real Transfer}
Typical image transformations (e.g. cropping, flipping) have to be considered carefully, as these may change the object nature and the camera intrinsic parameters, which, in our case, are embedded in the network. One can do random in-plane rotation, since there is no concept of up and down in space, but the object may get out of bounds due to the aspect ratio; therefore this was only done for the ESA \& Stanford dataset, where the satellite is always nearly centered. Additionally, we can cause small random perturbations to the camera orientation by warping the images as shown in Fig. \ref{fig:aug}. We do this during training and accordingly update the pose labels by repeating the encoding in (\ref{eq:soft_encoding}). To generalize the learned models to real data, we convert the images to grayscale, change the image exposure and contrast, add AWG noise, blur the images and drop out patches, as shown in Fig. \ref{fig:aug}. The motivation to use the latter is that it can help disentangle features from our mock-up that do not match the real object, and it can improve robustness to occlusions and shadows.
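The orientation perturbation itself can be implemented as a homography warp: a small random rotation $R$ of the camera maps pixels through $H = K R K^{-1}$, and the same $R$ rotates the orientation label. A minimal OpenCV sketch, where the intrinsic matrix \texttt{K} is assumed given:
\begin{verbatim}
import numpy as np
import cv2

def perturb(image, K, max_deg=10.0):
    axis = np.random.randn(3)
    axis *= np.deg2rad(np.random.uniform(0, max_deg)) \
            / np.linalg.norm(axis)
    R, _ = cv2.Rodrigues(axis)        # small random rotation
    H = K @ R @ np.linalg.inv(K)      # induced image homography
    warped = cv2.warpPerspective(image, H, image.shape[1::-1])
    return warped, R                  # R also updates the pose label
\end{verbatim}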
\begin{figure}[tb]
\centering
\begin{tabular}{@{}c@{ }c@{ }c@{}}
\includegraphics[scale=0.18]{images/aug/warped/1.png}&
\includegraphics[scale=0.44]{images/aug/sim2real/1.png}&
\includegraphics[scale=0.44]{images/aug/sim2real/3.png}\\[-5pt]
\scriptsize{(a)} & \scriptsize{(b)} & \scriptsize{(c)} \\
\end{tabular}
\begin{tabular}{@{}c@{ }c@{}}
\includegraphics[scale=0.205]{images/aug/clean.jpg} &
\includegraphics[scale=0.195]{images/aug/thrusters.jpg} \\
\scriptsize{(d)} & \scriptsize{(e)} \\[-5pt]
\end{tabular}
\caption{Image augmentation and sim-to-real examples. (a) Image warped due to camera orientation perturbation, (b) and (c) Images after our sim-to-real post-processing. (d) and (e) show real images (5 seconds apart) of a soyuz with overlayed estimated pose after training with data augmentation. Notice the thrusters in action on (e).}
\label{fig:aug}
\end{figure}
\section{Experiments}
\label{sec:experiments}
We conducted experiments on datasets captured using URSO and the ESA \& Stanford's benchmark dataset \cite{ESAChallenge}, named SPEED. The latter contains both synthetic and real images with 1920$\times$1200 px, generated in \cite{sharma2019pose}, of a mock-up model of one satellite used in a flight mission, named PRISMA \cite{d2012spaceborne}. The testing set contains 300 real images and 2998 synthetic images, whereas the training-set contains 12000 synthetic images and only 5 real images. All images are in grayscale. The labels of the testing set are not provided, instead the methods are evaluated by the submission server based on a subset of the testing-set.
As for URSO, we collected one dataset for the dragon spacecraft and two datasets for the soyuz model with different operating ranges: \textit{soyuz\_easy} with [10-20] m and \textit{soyuz\_hard} with [10-40] m. As an exception, low ambient light was enabled on \textit{soyuz\_easy}. We have noticed that training on \textit{soyuz\_easy} converges faster, therefore our first experiments in this section use this dataset. All three datasets contain 5000 images, of which 10\% were held out for testing and another 10\% for validation. Performance is reported as the mean absolute location error, the mean angular error, and the metric used by the ESA challenge server, referred to as \textit{ESA Error}, which is the sum of the mean relative location error, as in (\ref{eq:loc_total}), and the mean angular error.
\subsection{Implementation and Training Details}
Networks were trained on one NVIDIA GTX 2080 Ti, using SGD with a momentum of 0.9, a weight decay regularization of 0.0001 and a batch size of 4 images. Training starts with weights from the backbone of Mask R-CNN trained on the COCO dataset, since we use high image resolutions. The learning rate ($lr$) was scheduled using step decay depending on the model convergence, which we have found to depend highly on the orientation estimation method, the number of orientation bins, the augmentation pipeline and the dataset. By default, unless explicitly stated, we used: ResNet-50 with a bottleneck width of 32 filters, orientation soft classification with 16 bins per Euler angle, camera rotation perturbations with a maximum magnitude of 10$^\circ$ to augment the dataset, and images resized to half their original size. Training a model with this default configuration on \textit{soyuz\_easy} converges after 30 epochs with $lr=0.001$ plus 5 epochs with $lr=0.0001$, whereas orientation regression takes approximately half the number of iterations.
\subsection{Results}
\label{sec:results}
First, results from fine-tuning the parameters of our probabilistic orientation estimation based on soft classification are shown in Table \ref{tab:ori_tuning} for \textit{soyuz\_easy}.
\begin{minipage}{.45\linewidth}
\centering
\vspace{10pt}
\scriptsize{
\begin{tabular}{llll}
\hline
& &\multicolumn{2}{l}{ Angular error}\\
\hline
$\Delta$ & $\#$Bins & Train & Test \\
\hline
3 & 16 & 6.5$^{\circ}$ & 55.1$^{\circ}$ \\
6 & 16 & 5.3$^{\circ}$ & 8.6$^{\circ}$ \\
9 & 16 & 8.0$^{\circ}$ & 10.3$^{\circ}$ \\ \hline
6 & 4 & 11.8$^{\circ}$ & 20.0$^{\circ}$ \\
6 & 8 & 8.9$^{\circ}$ & 11.9$^{\circ}$ \\
6 & 24 & 3.1$^{\circ}$ & 7.4$^{\circ}$ \\
\hline
\end{tabular}}
\captionof{table}{Impact of orientation soft classification parameters. $\#$Bins is the number of bins per dimension.}
\label{tab:ori_tuning}
\end{minipage}
\hfill
\begin{minipage}{.45\linewidth}
\vspace{15pt}
\centering
\scriptsize{
\begin{tabular}{lll}
\hline
&\multicolumn{2}{l}{ Angular error}\\
\hline
Method & Train & Test \\
\hline
Regress$_1$ & 6.7$^{\circ}$ & 13.5$^{\circ}$ \\
Regress$_2$ & 6.9$^{\circ}$ & 13.4$^{\circ}$ \\
Regress$_3$ & 9.0$^{\circ}$ & 20.0$^{\circ}$ \\
Class & 5.3$^{\circ}$ & 8.0$^{\circ}$ \\
\hline
\end{tabular}}
\captionof{table}{Orientation error for each method. Regress$_3$ uses regression of 3D points, whereas Regress$_1$ and Regress$_2$ correspond to the best $\beta$ ratio in Fig. \ref{fig:3}.}
\vspace{10pt}
\label{tab:ori_overfit}
\end{minipage}
\begin{figure}[b]
\vspace{-20pt}
\centering
\includegraphics[scale=0.55]{images/loss_weights.pdf}
\vspace{-40pt}
\caption{Test errors vs ratio of loss weights. Regress$_1$ and Regress$_2$ regress orientation respectively using the $L_{\alpha}$ and $L_{\cos{\alpha}}$ from Section \ref{sec:ori_reg}.}
\label{fig:3}
\end{figure}
As one can see, $\Delta$, which scales the Gaussian width, acts as a regularizer: when it is too small, it leads to overfitting, whereas when it is too high, precision is decreased, leading to underfitting. Increasing the number of bins per dimension of the discrete orientation space improves the precision, but the number of network parameters grows cubically. Furthermore, similarly to $\Delta$, it can lead to overfitting, since bins will be activated less often during training.
Fig. \ref{fig:3} evaluates this method against regressing orientation on \textit{soyuz\_easy}, for different ratios of loss weights. Interestingly, for the three alternatives, using the network only for orientation estimation by setting $\beta_1=0$ in (\ref{eq:loc_total}) yields higher orientation error than performing both tasks simultaneously. The same cannot be said about the location error, which grows with $\beta_2$. Table \ref{tab:ori_overfit} compares the orientation errors of the train and test sets between these methods plus regressing instead three 3D keypoints. We can see that all three regression alternatives are outperformed and suffer from more overfitting on this dataset than the classification approach. It is worth noting that we have experimented with the adaptive weighting based on the Laplace likelihood in \cite{kendall2017geometric} but achieved poor results. Moreover, optimal loss weights are subject to the importance assigned to the specific tasks. \par
To demonstrate multimodal orientation estimation, we collected, via URSO, a dataset for the symmetrical marker shown in Fig. \ref{fig:multimodal_ori}. As shown in this figure, after training, the network learns to output two modes representing the two possible solutions. Using our unimodal estimation method naively on this dataset results in the error distribution labeled \textit{Top-1 errors} in Fig. \ref{fig:multimodal_ori}, whereas if we use the multimodal EM algorithm proposed in Section \ref{multimodal_EM} and score the best of two hypotheses (\textit{Top-2} errors), we see that this method frequently finds the right solution.
Fig. \ref{fig:bottleneck} shows how feature compression in the bottleneck layer degrades performance and controls the network size. Similarly, for both tasks, performance changes significantly from using 8 to 128 convolutional filters.
\begin{figure}[th]
\centering
\begin{tabular}{@{}c@{ }c@{}}
\includegraphics[scale=0.15]{images/marker.png} &
\includegraphics[scale=0.45]{images/err_hist.pdf}
\vspace{-5pt}
\end{tabular}
\begin{tabular}{@{}c@{}}
\includegraphics[scale=0.47]{images/weights.png}
\end{tabular}
\caption{Multimodal orientation estimation experiment with a symmetrical marker, shown on the \textit{top-left}. Histograms of angular errors (deg) are shown on the \textit{top-right} for the testing set: Top-1 error corresponds to our single-hypothesis estimation method, whereas Top-2 error is scored as the hypothesis with smallest error from the top 2 hypothesis estimated by our EM framework. The \textit{bottom} image shows on the top row the encoded label of the left frame, whereas the bottom row shows the respective network output after training.}
\label{fig:multimodal_ori}
\end{figure}
\begin{figure}[h]
\begin{tabular}{@{}c@{ }c@{}}
\includegraphics[scale=0.47]{images/bottleneck.pdf}
\end{tabular}
\caption{Bottleneck width and size of branch input layers \textit{vs.} performance and complexity in terms of number of parameters on \textit{soyuz\_easy}.}
\label{fig:bottleneck}
\end{figure}
\begin{minipage}{.4\linewidth}
\vspace{7pt}
\centering
\scriptsize{
\begin{tabular}{llll}
\hline
Network & Loc. err. & Ori. err \\
\hline
ResNet-18 & 1.7 m & 19.9$^{\circ}$ \\
ResNet-34 & 1.4 m & 20.0$^{\circ}$ \\
ResNet-50 & 1.1 m & 13.0$^{\circ}$ \\
ResNet-101 & 1.0 m & 12.2$^{\circ}$ \\
\hline
\end{tabular}}
\vspace{10pt}
\captionof{table}{Impact of architecture depth on \textit{soyuz\_hard}.}
\vspace{10pt}
\label{tab:depth}
\end{minipage}%
\hfill
\begin{minipage}{.46\linewidth}
\vspace{5pt}
\centering
\scriptsize{
\begin{tabular}{lll}
\hline
Resolution & Loc. err. & Ori. err\\
\hline
320$\times$240 & 1.6 m & 24.9$^{\circ}$ \\
640$\times$480 & 1.1 m & 13.0$^{\circ}$ \\
1280$\times$960 & 1.3 m & 10.7$^{\circ}$ \\
\hline
\end{tabular}}
\vspace{10pt}
\captionof{table}{Impact of image resolution on \textit{soyuz\_hard}.}
\label{tab:resolution}
\end{minipage}
\begin{minipage}{.4\linewidth}
\vspace{-5pt}
\centering
\scriptsize{
\begin{tabular}{llll}
\hline
Aug. & Loc err. & Ori err. \\
\hline
None & 1.06 m & 19.5$^{\circ}$ \\
Rotation & 0.56 m & 8.0$^{\circ}$ \\
\hline
\end{tabular}}
\vspace{10pt}
\captionof{table}{Impact of applying rotation perturbations on \textit{soyuz easy}}
\vspace{20pt}
\label{tab:aug}
\end{minipage}%
\hfill
\begin{minipage}{.5\linewidth}
\vspace{-5pt}
\centering
\scriptsize{
\begin{tabular}{llll}
\hline
Dataset & Loc err. & Ori err. \\
\hline
SPEED & 0.17 m & 4.0$^{\circ}$ \\
Soyuz hard & 0.8 m & 7.7$^{\circ}$ \\
Dragon hard & 0.9 m & 13.9$^{\circ}$\\
\hline
\end{tabular}}
\vspace{10pt}
\captionof{table}{Results per dataset obtained with 24 bins per orientation dimension and 128 bottleneck filters.}
\vspace{10pt}
\label{tab:final}
\end{minipage}
Beyond 128 features, the performance gain incurs a large memory footprint. Performance does not seem to be very sensitive to the size of the head input layers. \par
The impact of the architecture depth is shown in Table ~\ref{tab:depth}. ResNet with 50 layers is significantly better than its shallower counterparts; however, adding more layers does not seem to improve the performance much further. Table \ref{tab:resolution} shows that orientation estimation is quite sensitive to the input image resolution. The same is not clear for localization.
\begin{figure}[b]
\centering
\vspace{-10pt}
\begin{tabular}{@{}c@{ }c@{}}
\includegraphics[scale=0.55]{images/err_dists.pdf}
\end{tabular}
\caption{Test-set errors distributed by object distance, for the models reported in Table \ref{tab:final}.}
\label{fig:err_dists}
\end{figure}
In terms of data augmentation, as reported in Table \ref{tab:aug}, rotation perturbations prove to be an effective technique to augment the dataset and our sim-to-real augmentation is essential to apply models learned on URSO to real footage as shown in {\scriptsize \url{https://youtu.be/x8IbxmOz730}}, particularly to deal with the lighting changes in Fig. \ref{fig:aug}. Furthermore, as shown in Table \ref{tab:final_esa_results}, we achieved \nth{2} place on the real dataset just by using our sim-to-real augmentation pipeline with the 5 real images provided. Table \ref{tab:final} compares performance between the three datasets using an increased bottleneck width and orientation output resolution. As we can see, SPEED with better lighting conditions is the easiest dataset and \textit{dragon\_hard} is the most challenging dataset due to viewpoint ambiguity, as shown in Fig. \ref{fig:err_dists} and Fig. \ref{fig:final_examples}.a.
\begin{table}[t]
\vspace{10pt}
\centering
\scriptsize{
\begin{tabular}{lll}
\hline
Team & Real err. & Synthetic err. \\
\hline
UniAdelaide & 0.3752 & 0.0095 \\
EPFL\_cvlab & 0.1140 & 0.0215 \\ \hline
Triple ensemble (ours) & 0.1555 & 0.0571 \\
Best model $\dagger$ (ours) & 0.1630 & 0.0604 \\
\hline
Top 10 average & 1.3848 & 0.1515 \\
\hline
\end{tabular}}
\caption{ESA pose estimation final scores of top 3 teams. Results for $\dagger$ were obtained for 20 $\%$ of the full test set. For the complete leaderboard, refer to \cite{ESAChallenge}.}
\label{tab:final_esa_results}
\end{table}
Table \ref{tab:final_esa_results} summarizes the results of the ESA pose estimation challenge. Our best single model used a bottleneck width of 800 filters and 64 bins per orientation dimension and was trained for a total of 500 epochs, whereas our second best model, using 512 bottleneck filters and 32$\times$32$\times$32 orientation bins, achieved respectively 0.144 and 0.067 on the real and synthetic sets. To combine the higher precision of the best model with the lower overfitting risk of the second, we used a triple ensemble: an average of the results (using quaternion averaging) of this last model plus two models with 64$\times$64$\times$64 bins, picked at different training epochs. Our accuracy comes with a very large number of parameters (around 500M) and it is still far from the scores of the top 2 teams, which rely on 2D keypoint regression solutions, image cropping+zooming and robust P$n$P. As shown in Fig. \ref{fig:err_dists}, gross errors start appearing after 20 m; therefore we could also benefit from running the models a second time on zoomed images, since we only used half the original size.
\begin{figure}[tb]
\centering
\begin{tabular}{@{}c@{\quad}c@{}}
\includegraphics[scale=0.32]{images/fig_last/i4.png}&
\includegraphics[scale=0.32]{images/fig_last/i20.png}\\[-5pt]
\scriptsize{(a)} & \scriptsize{(b)} \\
\includegraphics[scale=0.32]{images/fig_last/i14.png}&
\includegraphics[scale=0.32]{images/fig_last/i10.png} \\[-5pt]
\scriptsize{(c)} & \scriptsize{(d)} \\
\includegraphics[scale=0.32]{images/fig_last/i2.png}&
\includegraphics[scale=0.32]{images/fig_last/i16.png}\\[-5pt]
\scriptsize{(e)} & \scriptsize{(f)} \\[-5pt]
\end{tabular}
\caption{Failure and success cases from our testing sets with predicted and groundtruth poses, and orientation weights. Predicted and labeled 2D position are shown respectively as green and red dots. Predicted and labeled orientations are shown in the polar plots as Euler angles. (a) Incorrect orientation due to an ambiguous view. Notice how the respective distribution of weights is more spread out. (b) Poor orientation estimation due to poor lighting. (c) and (d) Good results under challenging background. }
\label{fig:final_examples}
\end{figure}
\subsection{Conclusion and Future Work}
This paper proposed both a simulator and a DL framework for spacecraft pose estimation. Experiments with this framework reveal the impact of several network hyperparameters and training choices, and attempt to answer open questions, such as: what is the best way to estimate orientation?
We conclude that estimating orientation based on soft classification gives better results than direct regression and furthermore it provides the means to model uncertainty. This information is useful not only to make decisions but it can be used for filtering the pose if a temporal sequence is provided.
A promising direction is to address tracking using Recurrent Neural Networks and video sequences generated using URSO. As future work, we also plan to extend URSO to SLAM to address targets with unknown geometry.
\par
The architecture proposed in this work is not scalable in terms of image and orientation resolution. Future work should consider how to replace the dense connections without sacrificing performance, e.g., by pruning the last layer connections. Additionally, the results reported in this work were obtained using a dedicated network for each dataset. It may be beneficial to share the same backbone across datasets in terms of efficiency and performance.
\subsection{Acknowledgments}
This work is supported by grant EP/R026092 (FAIR-SPACE Hub) through UKRI under the Industry Strategic Challenge Fund (ISCF) for Robotics and AI Hubs in Extreme and Hazardous Environments. The authors are also grateful for the feedback and discussions with
Peter Blacker, Angadh Nanjangud and Zhou Hao.
\bibliographystyle{ieeetr}
\section{Introduction} \label{sec:Intro}
Dynamic contrast-enhanced (DCE) MRI involves the administration of a $T_1$-shortening Gadolinium-based contrast agent (CA), followed by the acquisition of successive $T_1$-weighted images as the contrast bolus enters and subsequently leaves the organ \cite{Sourbron2013}. In DCE-MRI, changes in CA concentration are derived from changes in signal intensity over time, then regressed to estimate pharmacokinetic (PK) parameters related to vascular permeability and tissue perfusion \cite{Lebel2014}.
Since perfusion and permeability are typically affected in the presence of vascular and cellular irregularities, DCE imaging has been considered a promising tool for clinical diagnostics of brain tumours, multiple sclerosis lesions, and neurological disorders where disruption of the blood-brain barrier (BBB) occurs \cite{Oconnor2012,Heye2016}.
Despite its effectiveness in quantitative assessment of microvascular properties, conventional DCE-MRI is challenged by suboptimal image acquisition that severely restricts the spatiotemporal resolution and volume coverage \cite{Guo2016,Guo2017}. The shortest possible scanning time often leads to limited spatial resolution, hampering the detection of small image features and accurate delineation of tumor boundaries. Low temporal resolution hinders accurate fitting of PK parameters. Furthermore, volume coverage is usually inadequate to cover the known pathology, for instance in the case of multiple metastatic lesions \cite{Guo2017}. Facing such severe constraints, DCE imaging can significantly benefit from undersampled acquisitions.
So far, existing works in
\cite{Lebel2014,Guo2016,Zhang2015} have proposed compressed sensing and parallel imaging based reconstruction schemes to accelerate DCE-MRI acquisitions, mainly targeting better spatial resolution and volume coverage while retaining the same temporal resolution. These methods are referred to as indirect methods \cite{Guo2017} because they are based on the reconstruction of dynamic DCE image series first, followed by a separate step for fitting the PK parameters on a voxel-by-voxel level using a tracer kinetic model \cite{Sourbron2013}. More recently, a model-based direct reconstruction method \cite{Guo2017} has been proposed to directly estimate PK parameters from undersampled (k,t)-space data. The direct reconstruction method generally poses the estimation of PK maps as an error minimization problem. This approach has been shown to produce superior PK parameter maps and allows for higher acceleration compared to indirect methods. However, the main drawback of this method is that parameter reconstruction of an entire volume requires considerable computation time.
Motivated by the recent advances of deep learning in medical imaging, in this paper, we present a novel deep learning based approach to directly estimate PK parameters from undersampled DCE-MRI data. First, our proposed network takes the corrupted image-time series as input and \textit{residual} parameter maps, which represent deviations from a kinetic model fitting on fully-sampled image-time series, as output, and aims at learning a nonlinear mapping between them.
Our motivation for learning the \textit{residual} PK maps is based on the observation that residual maps are sparser and topologically less complex than the target parameter maps. Second, we propose the \textit{forward physical model loss}, a custom loss function in which we exploit the physical relation between true contrast agent kinetics and measured time-resolved DCE signals when training our network.
Third, we validate our method experimentally on human \textit{in vivo} brain DCE-MRI dataset.
We demonstrate the superior performance of our method in terms of parameter reconstruction accuracy and significantly faster estimation of parameters during testing, taking approximately 1.5 seconds on an entire 3D test volume. To the best of our knowledge, we present the first work leveraging the machine learning algorithms -- specifically deep learning -- to directly estimate PK parameters from undersampled DCE-MRI time-series.
\section{Methods} \label{sec:Methods}
We treat the parameter inference from undersampled data in DCE imaging as a mapping problem between the corrupted intensity-time series and \textit{residual} parameter maps where the underlying mapping is learned using deep convolutional neural networks (CNNs). We provide a summary of general tracer kinetic models applied in DCE-MRI in Sec. \ref{sec:DCE-MRI}, formulate the forward physical model relating the PK parameters to undersampled data in Sec. \ref{sec:pyhsical-model}, finally describe our proposed deep learning methodology for PK parameter inference in Sec. \ref{sec:deep-cnn}.
\subsection{Tracer Kinetic Modeling in DCE-MRI} \label{sec:DCE-MRI}
\vspace{-1mm}
Tracer kinetic modeling aims at providing a link between the tissue signal enhancement and the physiological or so-called pharmacokinetic parameters, including the fractional plasma volume ($v_{\text{p}}$), the fractional interstitial volume ($v_{\text{e}}$), and the volume transfer rate ($K^\text{trans}$) at which contrast agent (CA) is delivered to the extravascular extracellular space (EES).
\begin{figure}[t!]
\centering
\includegraphics[width=0.92\columnwidth]{fig1_real.png}
\caption{Computational steps in the forward model and the conventional pipeline of PK parameter estimation in DCE-MRI. \label{fig:DCEmodel}}
\end{figure}
One of the well-established tracer kinetic models is the Patlak model \cite{Patlak1983}. This model describes a highly perfused two-compartment tissue, ignoring backflux from the EES into the blood plasma compartment. The CA concentration in the tissue is determined by,
\begin{equation}
C(\mathbf{r},t) = v_p(\mathbf{r})C_p(t) + K^{\text{trans}} (\mathbf{r})\int_0^{t} C_p(\tau) d\tau,
\label{Patlak-model-eqn}
\end{equation}
where $\mathbf{r} \in (x,y,z)$ represent image domain spatial coordinates, $C(\mathbf{r},t)$ is the CA concentration over time, and $C_p(t)$ denotes the arterial input function (AIF) which is usually measured from voxels in a feeding artery.
In this work, we specifically employ the Patlak model for tracer pharmacokinetic modeling and estimation of ground truth tissue parameters. This model is a perfect match for our DCE dataset because it is often applied when the temporal resolution is too low to measure the cerebral blood flow, and it has been commonly used to measure BBB leakage with DCE-MRI in acute brain stroke and dementia \cite{Heye2016,Sourbron2013}. An attractive feature of the Patlak model is that the model equation in (\ref{Patlak-model-eqn}) can be linearized and fitted using linear least squares, which has a closed-form solution; hence parameter estimation is fast \cite{Sourbron2013}.
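As a minimal sketch, both directions of the Patlak model, generating $C(\mathbf{r},t)$ from $(K^{\text{trans}}, v_p)$ via (\ref{Patlak-model-eqn}) and recovering them by linear least squares, fit in a few lines per voxel, with trapezoidal integration of the AIF:
\begin{verbatim}
import numpy as np

def cp_integral(cp, t):
    # Running trapezoidal integral of the AIF, int_0^t Cp
    return np.concatenate(([0.0],
        np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))

def patlak_forward(ktrans, vp, cp, t):
    # Patlak model: C(t) = vp*Cp(t) + Ktrans * int_0^t Cp
    return vp * cp + ktrans * cp_integral(cp, t)

def patlak_fit(c, cp, t):
    A = np.stack([cp_integral(cp, t), cp], axis=1)  # [int Cp, Cp]
    ktrans, vp = np.linalg.lstsq(A, c, rcond=None)[0]
    return ktrans, vp
\end{verbatim}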
\subsection{Forward Physical Model: From PK Parameters to Undersampled Data}
\label{sec:pyhsical-model}
Figure~\ref{fig:DCEmodel} depicts the conventional and forward model approaches relating the PK parameter estimation to undersampled or fully-sampled k-space data, and vice versa.
For direct estimation of PK parameters from the measured k-space data, as proposed in \cite{Fang2016,Guo2017}, a forward model can be formulated by inverting the steps in the conventional model as follows:
\begin{enumerate}
\item Given the sets of PK parameter pairs ($K^{\text{trans}} (\mathbf{r}), v_{\text{p}} (\mathbf{r})$) and arterial input function $C_p(t)$, CA concentration curves over time $C(\mathbf{r},t)$ are estimated using the Patlak model equation in (\ref{Patlak-model-eqn}).
\item CA concentration curves $C(\mathbf{r},t)$ are converted to dynamic DCE image series $S(\mathbf{r},t)$ through the steady-state spoiled gradient echo (SPGR) signal equation \cite{Guo2017}, given by
\begin{equation}
S(\mathbf{r},t) = \frac{M_{\text{0}}(\mathbf{r})\text{sin}\alpha(1-e^{-(K+L)})}{1-\text{cos}\alpha e^{-(K+L)}} +\left(S(\mathbf{r},0) - \frac{M_{\text{0}}(\mathbf{r})\text{sin}\alpha(1-e^{-K})}{1-\text{cos}\alpha e^{-K}}\right)
\label{signal-eqn}
\end{equation}
where $K = T_{\text{R}} / T_{\text{10}}(\rcoord)$, $L= r_1 C(\rcoord,t) T_{\text{R}}$, $T_{\text{R}}$ is the repetition time, $\alpha$ is the flip angle, $r_1$ is the contrast agent relaxivity taken as 4.2 $\text{s}^{-1}\text{mM}^{-1}$, $S(\mathbf{r},0)$ is the baseline (pre-contrast) image intensity, and $T_{\text{10}}(\mathbf{r})$ and $M_{\text{0}}(\mathbf{r})$ are respectively the $T_{\text{1}}$ relaxation and equilibrium longitudinal magnetization that are calculated from a pre-contrast $T_{\text{1}}$ mapping acquisition.
\item The undersampled raw (k,t)-space data $S(\mathbf{k},t)$ can be related to $S(\mathbf{r},t)$ for a single-coil data by an undersampling fast Fourier transform (FFT), $F_u$,
\begin{equation}
S(\mathbf{k},t) = F_uS(\mathbf{r},t),
\label{fu-eqn}
\end{equation}
where $\mathbf{k} \in (k_x,k_y,k_z)$ represents k-space coordinates.
\end{enumerate}
\vspace{-1mm}
By simply integrating the three computation steps in (\ref{Patlak-model-eqn}-\ref{fu-eqn}), we can form a single function $f_m$ modeling the signal evolution in (k-t) space
given the PK maps $\theta = \{K^{\text{trans}} (\mathbf{r}), v_{\text{p}} (\mathbf{r}) \}$, as $S(\mathbf{k},t) = f_m(\theta; \bm{\xi})$, where $\bm{\xi}$ denotes all the predetermined acquisition parameters as mentioned above.
Given the undersampled (k,t)-space data $S(\mathbf{k},t)$, the corrupted image series $S_u(\mathbf{r},t)$ can be obtained by applying IFFT to $S(\mathbf{k},t)$, i.e. $S_u(\mathbf{r},t) = F_u^{\mathsf{T}}S(\mathbf{k},t)$. We further define a new function $\boldsymbol{\tilde{f}_m}$ that integrates only the first two computation steps (\ref{Patlak-model-eqn}-\ref{signal-eqn}) to compute the dynamic DCE image series. We will incorporate $\boldsymbol{\tilde{f}_m}$ in our custom loss function that will be explained in the following section.
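A sketch of the remaining two steps of $f_m$, the SPGR conversion of (\ref{signal-eqn}) and the masked FFT undersampling of (\ref{fu-eqn}) for a single coil, could read as follows; all maps are NumPy arrays and broadcasting over voxels is assumed:
\begin{verbatim}
import numpy as np

def spgr_signal(C, S0, T10, M0, alpha, TR, r1=4.2):
    # SPGR signal equation: concentration C to signal S
    K = TR / T10
    L = r1 * C * TR
    sa, ca = np.sin(alpha), np.cos(alpha)
    post = M0 * sa * (1 - np.exp(-(K + L))) \
           / (1 - ca * np.exp(-(K + L)))
    pre = M0 * sa * (1 - np.exp(-K)) / (1 - ca * np.exp(-K))
    return post + (S0 - pre)

def undersample(S_rt, mask):
    # F_u: per-frame 2D FFT followed by the binary (k,t) mask
    return np.fft.fft2(S_rt, axes=(0, 1)) * mask
\end{verbatim}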
\begin{figure}[t!]
\centering
\includegraphics[width=0.85\columnwidth]{fig2_real.png}
\caption{(a) The relation between a corrupted ($\theta_u$), target ($\theta_t$) and residual ($\theta_r$) PK maps, (b) Exemplary golden-angle sampling scheme in the $k_x$-$k_y$ plane through time. \label{fig:ResidualAndMask}}
\end{figure}
\subsection{PK Parameter Inference via Forward Physical Model Loss}
\label{sec:deep-cnn}
\subsubsection{Formulation.}
We hypothesize that a direct inversion between corrupted PK parameter maps $\theta_u$ and $S_u(\mathbf{r},t)$ is available through the forward model, i.e., $S_u(\mathbf{r},t) = \boldsymbol{\tilde{f}_m}(\theta_u)$. However, this does not yet provide a sufficiently accurate estimate of the target parameter maps $\theta_t$ obtained from fully-sampled data $S(\mathbf{r},t)$. To this end, we estimate a correction or residual map $\theta_r$ from the available signal $S_u(\mathbf{r},t)$ satisfying $\theta_r= \theta_u - \theta_t$. As shown in Fig.~\ref{fig:ResidualAndMask}-(a), we observe that \textit{residual} PK maps involve sparser representations and exhibit spatially less varying structures inside the brain. The task of learning a residual mapping was shown to be much easier and more effective than learning the original mapping \cite{Zhang2017}. Following the same approach, we adopt the residual learning strategy using deep CNNs. Our CNN is trained to learn a mapping between $S_u(\mathbf{r},t)$ and $\theta_r$ to output an estimate of the residual maps $\tilde{\theta}_r$; $\tilde{\theta}_r = \mathcal{R}(S_u(\mathbf{r},t) |\mathbf{\Theta})$, where $\mathcal{R}$ represents the forward mapping of the CNN parameterised by $\mathbf{\Theta}$. The final parameter estimate is obtained via $\tilde{\theta}_t = \theta_u - \tilde{\theta}_r$.
\subsubsection{Loss Function.}
We simultaneously seek the signal belonging to the corrected model estimates to be sufficiently close to the true signal, i.e., $\boldsymbol{\tilde{f}_m}(\tilde{\theta}_t) \approx S(\mathbf{r},t)$. Therefore, we design a custom loss function which requires solving the forward model in every iteration of the network training. We refer to the resulting loss as the \textit{forward physical model loss}. Given a set of training samples $\mathcal{D}$ of input-output pairs ($S_u(\mathbf{r},t), \theta_r$), we train a CNN model that minimizes the following loss,
\begin{equation}
\mathcal{L}(\mathbf{\Theta}) = \sum_{(S_u(\mathbf{r},t), \theta_r) \in \mathcal{D}} \lambda\|\theta_r - \tilde{\theta}_r\|_2^2 \enskip + \enskip (1-\lambda)\|S(\mathbf{r},t)- \boldsymbol{\tilde{f}_m}(\theta_u - \tilde{\theta}_r;\bm{\xi}) \|_2^2,
\label{loss-eqn}
\end{equation}
where $\lambda$ is a regularization parameter balancing the trade-off between the fidelity of the parameter and signal reconstructions. We emphasize that the second term in (\ref{loss-eqn}) allows the network to intrinsically exploit the underlying contrast agent kinetics during the training phase.
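A sketch of this loss in PyTorch, assuming a differentiable implementation \texttt{f\_m\_tilde} of the forward operator (Patlak plus SPGR, written with torch operations) and using means in place of the sums of (\ref{loss-eqn}):
\begin{verbatim}
import torch

def physics_loss(theta_r_hat, theta_r, theta_u, S,
                 f_m_tilde, lam=0.5):
    # Parameter-fidelity term on the residual maps
    fidelity = torch.mean((theta_r - theta_r_hat) ** 2)
    # Signal-consistency term on the corrected estimates
    S_hat = f_m_tilde(theta_u - theta_r_hat)
    consistency = torch.mean((S - S_hat) ** 2)
    return lam * fidelity + (1.0 - lam) * consistency
\end{verbatim}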
\subsubsection{Network Architecture.} Figure~\ref{fig:NetworkArchitecture} illustrates our network architecture. The network takes a $4\text{D}$ image-time series as input, where time frames are stacked as input channels. The first convolutional layer applies $3\text{D}$ filters to each channel individually to extract low-level temporal features which are aggregated over frames via learned filter weights to produce a single output per voxel. Following the first layer, inspired by the work on brain segmentation \cite{Kamnitsas2017}, our network consists of parallel dual pathways to efficiently capture multi-scale information. The local pathway at the top focuses on extracting details from the local vicinity while the global pathway at the bottom is designed to incorporate more contextual global information. The global pathway consists of $4$ dilated convolutional layers with dilation factors of $2,4,8,16$, implying increased receptive field sizes. The filter size of each convolutional layer including dilated convolutions is $\mathrm{3\times3\times3}$, and the rectified linear units (ReLU) activation is applied after each convolution. Local and global pathways are then concatenated to form a multi-scale feature set. Following this, $2$ fully-connected layers are used to determine the best possible feature combination that can accurately map the input to output of the network. Finally, the last layer outputs the estimated residual maps.
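A compact sketch of this dual-pathway design follows; filter counts are illustrative rather than the exact ones used, the per-frame temporal filtering of the first layer is collapsed into a single 3D convolution, and the fully-connected layers are realized as $1\times1\times1$ convolutions, their spatially shared equivalent:
\begin{verbatim}
import torch
import torch.nn as nn

class DualPathNet(nn.Module):
    def __init__(self, n_frames, n_out=2):
        super().__init__()
        self.temporal = nn.Conv3d(n_frames, 32, 3, padding=1)
        self.local = nn.Sequential(      # local pathway
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU())
        self.glob = nn.Sequential(       # dilated global pathway
            *[m for d in (2, 4, 8, 16) for m in
              (nn.Conv3d(32, 32, 3, padding=d, dilation=d),
               nn.ReLU())])
        self.head = nn.Sequential(       # "fully-connected" head
            nn.Conv3d(64, 64, 1), nn.ReLU(),
            nn.Conv3d(64, n_out, 1))     # residual PK maps

    def forward(self, x):                # x: (B, frames, D, H, W)
        f = torch.relu(self.temporal(x))
        return self.head(
            torch.cat([self.local(f), self.glob(f)], 1))
\end{verbatim}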
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{fig3.png}
\caption{The network architecture used for the estimation of residual PK maps. The number of filters and output nodes are provided at the bottom of each layer. \label{fig:NetworkArchitecture}}
\end{figure}
\section{Experiments and Results}
\subsubsection{Datasets.}
We perform experiments on fully-sampled DCE-MRI datasets acquired from three mild ischaemic stroke patients. DCE image series were acquired using a 1.5T clinical scanner with a 3D T1W spoiled gradient echo sequence (TR/TE = $8.24/3.1$ ms, flip angle = $12^{\circ}$, FOV = $24 \times 24$ cm, matrix = $256 \times 192$, slice thickness = $4$ mm, $73$ sec temporal resolution, $21$ dynamics). An intravenous bolus injection of $0.1$ mmol/kg of gadoterate meglumine (Gd-DOTA) was administered simultaneously. The total acquisition time for DCE-MRI was approximately $24$ minutes. Two pre-contrast acquisitions were carried out at flip angles of $2^{\circ}$ and $12^{\circ}$ to calculate pre-contrast longitudinal relaxation times.
\subsubsection{Preprocessing.}
Undersampling was retrospectively applied to the fully-sampled data in the $k_x$-$k_y$ plane using a randomized golden-angle sampling pattern \cite{Zhu2016} over time (see Fig.~\ref{fig:ResidualAndMask}-(b)) with a 10-fold undersampling factor. The pre-contrast first frame was fully sampled. Due to the low temporal resolution of our data, we estimated subject-specific vascular input functions (VIFs) extracted by averaging a few voxels located on the superior sagittal sinus where the inflow artefact was reduced compared to a feeding artery \cite{Heye2016}. Data augmentation was employed by applying rigid transformations on image slices. We generated random 2D+t undersampling masks to be applied on the images of different orientations. This allows the network to learn diverse patterns of aliasing artifacts. All the subject's data required for network training/testing were divided into non-overlapping 3D blocks of size $52 \times 52 \times 33$, resulting in 64 blocks per subject.
\subsubsection{Experimental setup.}
All experiments were performed in a leave-one-subject-out fashion. The networks were trained using the Adam optimizer with a learning rate of $10^{-3}$ (using a decay rate of $10^{-4}$) for 300 epochs and mini-batch size of 4.
To demonstrate the advantage of the proposed method, we compare it with the state-of-the-art model-based iterative parameter reconstruction method, using the MATLAB implementation provided by the authors \cite{Guo2017}. We use the concordance correlation coefficient (CCC) and structural similarity (SSIM) metrics to quantitatively assess the PK parameter reconstruction, and the peak signal-to-noise ratio (PSNR) metric to assess the image reconstruction.
Experiments were run on a NVIDIA GeForce Titan Xp GPU with 12 GB RAM.
\subsubsection{Results.}
Figure~\ref{fig:ParamQualitative} shows the qualitative PK parameter reconstructions obtained from different methods using 10-fold undersampling. The results indicate that CNN-$\lambda=0.5$, incorporating the two loss terms simultaneously, produces better maps with a considerably higher SSIM score, calculated with respect to the fully-sampled PK maps. The model-based iterative reconstruction yields PK maps in which the artifacts caused by undersampling are still observable.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\columnwidth]{fig4_real.png}
\caption{Reconstructed PK parameter maps of two exemplary slices of a test subject with a 10-fold undersampling. Brain masks are applied to estimated maps. Our CNN model incorporating both loss terms ($\lambda=0.5$) achieves the best parameter estimates. The resulting SSIM values are provided at the bottom-left corner of each map. \label{fig:ParamQualitative}}
\end{figure}
In Fig.~\ref{fig:ImageQualitative} we present exemplary reconstructed images obtained by applying the operation $\boldsymbol{\tilde{f}_m}$ to the estimated PK maps. All the reconstruction approaches result in high quality images; however, the model-based reconstruction can better preserve the finer details. Unfortunately, our fully-sampled data suffer from Gibbs artifacts appearing as multiple parallel lines throughout the image. As marked by white arrows, our CNN method can significantly suppress these artifacts, whereas they still appear in the image obtained by model-based iterative reconstruction. Finally, Fig.~\ref{fig:QuantResults} shows the quantitative results of parameter estimation and image reconstruction. The highest CCC and SSIM values for parameter estimation are achieved by our CNN model when both loss terms are incorporated with $\lambda=0.3$ and $\lambda=0.5$, yielding average scores of 0.88 and 0.92, respectively. The difference is statistically significant for both CCC ($p=0.017$) and SSIM ($p=0.0086$) when compared against model-based reconstruction. The model-based reconstruction achieves the highest PSNR for image reconstruction, followed by the proposed CNN with $\lambda=0.3$; the difference between them is statistically significant with $p\ll0.05$. The PSNR also shows a decreasing trend with increasing $\lambda$, as expected.
We emphasize that the parameter inference of our method on a 3D test volume takes around 1.5 seconds, while the model-based method requires around 95 minutes to reconstruct the same volume, enabling $\approx 4\times 10^3$ times faster computation.
\begin{figure}[t!]
\centering
\includegraphics[width=0.85\columnwidth]{fig5_final.png}
\caption{Visual comparison of the image reconstruction results of an exemplary DCE slice. White arrows indicate a few regions where the Gibbs artifacts are observable. Our CNN model with both $\lambda=0.5$ and $1.0$ can significantly suppress the artifacts appearing in the fully-sampled image and in the model-based reconstruction as well. \label{fig:ImageQualitative}}
\end{figure}
\begin{figure}[t!]
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=4.5cm}}]{figure}[\FBwidth]
{\caption{Parameter estimation (SSIM \& CCC) and image reconstruction (PSNR) performances calculated on all test slices for model-based (MB) reconstruction method and our proposed CNN model with different $\lambda$ settings.
}\label{fig:QuantResults}}
{\includegraphics[width=7.5cm]{fig6_real.png}}
\end{figure}
\section{Conclusion}\label{sec:Conclusion}
\vspace{-2mm}
We present a novel deep learning based framework for direct estimation of PK parameter maps from undersampled DCE image-time series. Specifically, we design a \textit{forward physical model loss} function through which we exploit the physical model relating the contrast agent kinetics to the time-resolved DCE signals. Moreover, we utilize the residual learning strategy in our problem formulation. The experiments demonstrate that our proposed method can outperform the state-of-the-art model-based reconstruction method, and allow almost instantaneous inference of the PK parameters in the clinical workflow of DCE-MRI.
\subsubsection{Acknowledgements.}
The research leading to these results has received funding from the European Union's H2020 Framework Programme (H2020-MSCA-ITN-2014) under grant agreement no 642685 MacSeNet. We also acknowledge the Wellcome Trust (Grant 088134/Z/09/A) for recruitment and MRI scanning costs.
\bibliographystyle{splncs03}
|
1,477,468,750,953 | arxiv | \section{Introduction}
In continuum mechanics, the classical Cauchy description applies only when the inhomogeneities of a mechanical system have a characteristic length scale much smaller than the macro-scale at which the phenomena are observed. Usually, the mechanical description of other conservative systems needs a higher-order stress tensor, and there are many physical phenomena described by this generalized continuum theory. For instance, Piola's continua need an \textit{(n+1)--uple} of hyper stress-tensors whose order increases from second to \textit{n+1}; the contact interactions do not reduce to forces per unit area on boundaries, but include $k$-forces concentrated on areas, on lines or even in wedges \cite{Isola1,Isola2}.
The \textit{(n+1)--th} order models are suitable for describing non-local effects, as in bio-mechanical phenomena \cite{Madeo,GouinHR}, damage phenomena \cite{Yang}, and internal friction in solids \cite{Limam}.
These media lie outside the range of validity of Noll's theorem, but the second principle of thermodynamics is clearly proved \cite{Toupin,Mindlin}. Many efforts have been made to study these media, theoretically and numerically, where the search for symmetric forms of the equations of processes is a key step towards well-posed mathematical problems.
For fluids, across liquid-vapor interfaces, pressure $p$ and volumetric internal energy $\varepsilon_{0}$
are
non-convex functions of volumetric entropy $\eta$ and mass density.
Consequently, the simplest continuous model allowing to study non-homogeneous
fluids inside interface layers considers another volumetric internal energy
$\varepsilon$ as the sum of two terms: the first one
defined as $\varepsilon_{0}$ and the second one
associated with the non-uniformity of the fluid which is approximated by an
expansion limited to the first gradient of mass density.
This form of energy which describes interfaces
as diffuse layers was first introduced by van der Waals
\cite{Waals} and is now widely used in the literature \cite{Cahn}. The model has many applications for inhomogeneous fluids \cite{Gouin1, Gouin2}
and has been extended to different materials in continuum mechanics to model the behavior of strongly inhomogeneous media \cite{garajeu,GouinHR,Gavrilyuk,Gouin-Ruggeri,Eremeyev,HMR}.
The model
yields a constant temperature at equilibrium.
Consequently,
the volume entropy varies with the
mass density in the same way as in the bulks. This first assumption of van der
Waals using long-ranged but weak-attractive
forces is not exact for realistic intermolecular potentials
and the thermodynamics
is not completely considered \cite{Evans}.
For variational principles, it is not possible to directly take the temperature gradient into account: the volume internal energy must be a functional of the canonical variables, \textit{i.e.} mass density and volumetric entropy.
The simplest
model was called \emph{thermocapillary fluid model} when the internal energy depends on mass density, volumetric entropy and their first gradients \cite{casal4, Rowlinson}. Such a
behavior has also been considered in models when at
equilibrium the temperature is not constant in
inhomogeneous parts of complex media
\cite{Maitournam,Forest,Seppecher}.
\newline
To improve the model accuracy, the general case considers fluids when the volume
internal energy depends on mass density, volumetric entropy and their
gradients up to a convenient $n$-order ($n \in \mathbb{N}$) where
continuum models of
\emph{gradient theories} are useful in case of strongly inhomogeneous fluids \cite{Germain1,Seppecher2}. The models have a justification in the framework of mean-field molecular theories when the van der Waals forces
exert stresses on fluid molecules producing surface tension effects \cite{Evans,Rowlinson,Widom,Saccomandi}.
\\
In \cite{Gouin}, we obtained the equations of motion for perfect multi-gradient fluids.
For dissipative motions, the
conservation of mass and balance of entropy implied the
equation of energy. The Clausius-Duhem inequality
was deduced from viscous-stress dissipation and Fourier's
equation.\\
Moreover, the symmetrization of the equations of mechanical systems is a main subject of study for the structure of solutions of complex media, and is still debated. \\
First, we present some historical remarks which are detailed in Ref. \cite{RS} :\\
In 1961, Godunov wrote a paper on \textit{an interesting class of quasi-linear systems} which proves that, with a convenient change of variables, the system of Euler fluids becomes symmetric. He also proved that all systems coming from variational principles can be written in symmetric form \cite{Godunov}.
In 1971, Friedrichs and Lax proved that all systems compatible with the entropy principle are symmetrizable \cite{Friedrichs}: after a pre-multiplication with a convenient matrix, systems become symmetric.
In 1974, Boillat introduced a new field of variables for which the original system can be written in a symmetric form \cite{Boillat}. He was the first to symmetrize original hyperbolic systems that were compatible with the entropy principle. He called these systems \textit{Godunov's systems} \cite{LectureNotes}.
The technique of Lagrange multipliers to study the entropy principle was first given by I-Shi Liu \cite{Liu}, and was similar to the work of Ruggeri and Strumia, who were interested in extending the previous technique to the relativistic case by using a covariant formulation \cite{Strumia}.
In 1982, Boillat extended the symmetrization to the case with constraints \cite{Boillat2}; the problem was also considered by Dafermos \cite{Dafermos}.
In 1983, Ruggeri realized it was possible to construct a symmetrization for parabolic systems and he wrote down the expression of \textit{the main field of variables} for Navier-Stokes-Fourier fluids \cite{Acta_Ruggeri}; in 1989, he proved that the symmetrization is compatible with Galilean invariance \cite{Ruggeri3,Ruggeri4}.
Second, we consider the framework of models which are represented by quasi-linear first-order systems of $n$ balance laws (we adopt the summation convention on repeated indexes) :
\begin{equation}
\frac{\partial\boldsymbol{G}^{\it 0}(\boldsymbol{v})}{\partial t}+\frac{\partial \boldsymbol{G}^{j}({\boldsymbol{v}})}{\partial x^{j}}=\boldsymbol{g}(\boldsymbol{v}),
\label{sh}
\end{equation}
with an additional scalar balance equation corresponding to the energy equation in
pure mechanics or the entropy equation in thermodynamics :
\begin{equation*}
\frac{\partial{h}^{\it 0}(\boldsymbol{v})}{\partial t}+\frac{\partial h^{j}(\boldsymbol{v})}{\partial x^{j}}= {\it{\Sigma}} (\boldsymbol{v}), \label{shsu}
\end{equation*}
where $\boldsymbol{G}^{\it 0},\boldsymbol{G}^j, \ j \in \{ \textit{1}, \dots , n\} $, $\boldsymbol{g}$, $\boldsymbol{v}$ are column vectors of $\mathbb{R}^n$, and ${h}^{\it 0}, h^{j},\, j \in \{\textit{1}, \dots, n \},\, {\it\Sigma}$\, are scalar functions; scalar $t$ and $\boldsymbol{x} = ( x^{\it 1}, \cdots , x^{n})$ are time and $\mathbb{R}^n$-space coordinates, respectively.
Function $h^{\it 0}$ is assumed to be convex with respect to field $\boldsymbol{G}^{\it 0} (\boldsymbol{v})\equiv \boldsymbol{v}$ (see Refs. \cite{Godunov,Friedrichs,Ruggeri2}).
Dual-vector field $\boldsymbol{v}^\prime$, associated with Legendre
transform $h^{\prime {\it 0}}$ and potentials $h^{\prime j}$ is such that (see Ref. \cite{Boillat}) :
\begin{equation*}
\boldsymbol{v}^{\prime}= \left(\frac{\partial h^{\it 0}}{\partial \boldsymbol{v}}\right)^\star, \qquad
h^{\prime {\it 0}} = \boldsymbol{v}^{\prime\star} \, \boldsymbol{v}- h^{\it 0}, \qquad
h^{\prime j} = \boldsymbol{v}^{\prime\star}\, \boldsymbol{G}^j(\boldsymbol{v})- h^j, \label{MF}
\end{equation*}
where $^\star$ indicates the transposition. By a convexity argument, it is possible to take $\boldsymbol{v}^{\prime}$ as a vector field and we obtain :
\begin{equation} \label{change}
\boldsymbol{v} =\left( \frac{\partial h^{\prime {\it 0}}}{\partial \boldsymbol{v}^\prime}\right)^\star, \qquad \boldsymbol{G}^j(\boldsymbol{v})= \left(\frac{\partial h^{\prime j}}{\partial \boldsymbol{v}^\prime}\right)^\star.
\end{equation}
Inserting new variables given by Eqs. \eqref{change} into System \eqref{sh}, we get :
\begin{equation*} \label{symform}
\frac{\partial}{\partial t}\left(\frac{\partial h^{\prime {\it 0}}}{\partial \boldsymbol{v}^\prime}\right) + \frac{\partial}{\partial x^j}\left(\frac{\partial h^{\prime j}}{\partial \boldsymbol{v}^\prime}\right) = \boldsymbol{g}(\boldsymbol{v}^{\prime}),
\end{equation*}
which is symmetric and equivalent to
\begin{equation}
\boldsymbol{A}^{\it 0}\,\frac{\partial \boldsymbol{v}^\prime}{\partial t}+\boldsymbol{A}^{j}\frac{\partial \boldsymbol{v}^\prime}{\partial x^{j}}=\boldsymbol{g}(\boldsymbol{v}^\prime), \label{symm}
\end{equation}
where matrix $\boldsymbol{A}^{\it 0}\equiv \left(\boldsymbol{A}^{{\it 0}}\right)^\star $ is positive-definite symmetric and matrices $\boldsymbol{A}^{j}=\left(\boldsymbol{A}^{j}\right)^{\star }$ are symmetric,
\begin{equation} \label{matrici}
\boldsymbol{A}^{{\it 0}}\equiv \left(\boldsymbol{A}^{\it 0}\right)^\star= \frac{\partial^2 h^{\prime {\it 0}}}{\partial \boldsymbol{v}^{\prime 2}}, \qquad \boldsymbol{A}^j\equiv \left(\boldsymbol{A}^{j}\right)^{\star }= \frac{\partial^2 h^{\prime j}}{\partial \boldsymbol{v}^{\prime 2}}, \quad (j=\textit{1}, \dots,n).
\end{equation}
The symmetric form of governing equations implies hyperbolicity. For
conservation laws with vanishing production terms, the hyperbolicity is
equivalent to the stability of constant solutions with respect to
perturbations in form $\ e^{i(\boldsymbol{k}^{\star }\boldsymbol{x}-\omega
t)} $,\ where $i^{2}=-1,\ \boldsymbol{k}^{\star }=[k_{\it 1},\cdots ,k_{n}] \in \mathbb{R}^{n \star} $ and
$\omega$ is a real scalar. Indeed, the symmetric form of governing equations for an unknown vector $\boldsymbol{v}$ $(\boldsymbol{v}^\star = [v_1,\cdots ,v_{n}])$ implies the \emph{dispersion relation :}
\begin{equation*}
{\rm{det}}\,(\boldsymbol{A}_{(k)}-\omega \boldsymbol{A}^{\it 0})=0 \quad \mathrm{with} \quad \boldsymbol{A}_{(k)}=\boldsymbol{A}^{j}k_{j}\,, \label{eigenval}
\end{equation*}
which determines real values of $\omega $ for any \emph{real wave vector\,} $\boldsymbol{k}$, where the operator det denotes the determinant. In this case, phase velocities are real and coincide with the characteristic velocities of the hyperbolic system \cite{RMSeccia,BLR}. Moreover, right-eigenvectors of $\boldsymbol{A}_{(k)}$ with respect to $\boldsymbol{A}^{\it 0}$ are linearly independent and any symmetric system is also automatically hyperbolic.
Symmetric form given by Eq. \eqref{symm} with relations \eqref{matrici} are commonly called \emph{Godunov's systems} \cite{Godunov}.
\\
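As a purely numerical illustration of this statement (a sketch assuming Python with NumPy/SciPy; the matrices are random placeholders and not a physical model), the dispersion relation can be solved as a generalized symmetric eigenvalue problem, whose roots are automatically real when $\boldsymbol{A}^{\it 0}$ is positive-definite symmetric and $\boldsymbol{A}_{(k)}$ symmetric:
\begin{verbatim}
# Roots of det(A_k - omega*A0) = 0 as generalized eigenvalues (sketch).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)    # symmetric positive-definite
B = rng.standard_normal((n, n))
Ak = B + B.T                    # symmetric

omega = eigh(Ak, A0, eigvals_only=True)  # real characteristic velocities
print(omega)
\end{verbatim}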
In the case of systems with parabolic structure (\emph{hyperbolic-parabolic
systems}), a generalization of the symmetric system is written :
\begin{equation}
\boldsymbol{A}^{\it 0}\,\frac{\partial \boldsymbol{v}^\prime}{\partial t}+\boldsymbol{A}^{j}\frac{\partial \boldsymbol{v}^\prime}{\partial x^{j}} -\frac{\partial}{\partial x^j} \left(\boldsymbol{B}^{jl}\frac{\partial \boldsymbol{v}^\prime}{\partial x^l}\right)=0, \label{symmPara}
\end{equation}
where matrices\ $\boldsymbol{B}^{jl}= \left(\boldsymbol{B}^{jl}\right)^\star$\ are symmetric, and $
\boldsymbol{B}_{(k)}=\boldsymbol{B}^{jl} k_j k_l $ \ are
non-negative definite.
\newline
The compatibility of hyperbolic-parabolic systems given by Eq. \eqref{symmPara} with the entropy principle and the corresponding determination of the main field is given in \cite{Acta_Ruggeri} for Navier-Stokes-Fourier fluids and, in the general case, in \cite{Kawa}. The same authors considered the linearized version of System \eqref{symmPara}, proving that constant solutions are stable. \\
These reminders being given, the aim of the present paper is to extend the results of symmetrization to \textit{the most general case of multi-gradient fluids}. Using a convenient change of variables -- \textit{the main field} -- associated with a Legendre transformation of the total fluid energy, the equations of processes can be written in the special divergence form of Eq. \eqref{symmPara}. Near an equilibrium position, we obtain a new Hermitian-symmetric form of the system of perturbations. The obtained set belongs to the class of dispersive systems.\\
The paper is organized as follows : In Section 2, we recall the main results obtained in \cite{Gouin} (equations of conservative motions, balance of energy and compatibility with the two laws of thermodynamics). We additionally obtain the existence of a stress tensor allowing us to write the equation of motion in a form similar to that of classical continuous media.
In Section 3, \textit{the main field of variables} -- for which the conservative equations of motion are written in divergence form -- is obtained.
In Section 4, the Hermitian-symmetric form of the equations of perturbations near an equilibrium position is deduced. The perturbations are stable in domains where the total volume energy is a convex function of the main field of variables, which confirms that the mathematical problem is well posed. A conclusion ends the paper.
\section{Multi-gradient fluids and equation of motions}
In this section we recall, in a new presentation \textit{adapted to symmetric calculations}, the main results obtained in \cite{Gouin}; subsection {\it 2.3} introduces new calculations allowing us to obtain the stress tensor of conservative multi-gradient fluids. In this Section, for the sake of simplicity, we identify vectors and covectors and we always indicate indexes in subscript
position without taking account of the tensors' covariance or
contravariance.
\subsection{Definition of multi-gradient fluids}
We consider perfect fluids
with a volume internal energy $\varepsilon $ function
of volumetric entropy $\eta $, mass density $\rho $, and their
gradients up to order $ n \in \mathbb{N}$,
\begin{equation*}
\varepsilon =\varepsilon (\eta , \rho, {\Greekmath 0272} \eta , {\Greekmath 0272} \rho , {\ldots },{\Greekmath 0272} ^{n}\eta ,{\Greekmath 0272} ^{n}\rho ) ,
\end{equation*}
where operators ${\Greekmath 0272} ^{p}$,\ $p\in \{1,\ldots ,n\}$,\ denote the
successive gradients in the Euclidean space ${\mathcal D}_{t}$, of Euler variables $\boldsymbol{s} \equiv \left[x_1, x_2,x_3\right]^\star$, occupied by
the fluid at time $t$,
\begin{equation}
{\Greekmath 0272}^p\, \eta \equiv\left\{\eta\,,_{ x_{j_1}} \ldots
,_{x_{j_p}}\right\} \quad \mathrm{and}\quad {\Greekmath 0272} ^{p}\rho
\equiv \left\{\rho\,,_{ x_{j_1}} \ldots ,_{x_{j_p}}\right\}. \label{multigrad}
\end{equation}
The subscript comma indicates partial derivatives with respect to variables $
x_{j_1} \ldots x_{j_p} $ belonging to the set of Euler
variables $( x_{1}, x_{2} , x_{3} ) $.
We deduce,
\begin{equation*}
d\varepsilon =\frac{\partial \varepsilon }{\partial \eta }\,d\eta +\frac{\partial \varepsilon }{\partial \rho }\,d\rho +\left( \frac{\partial \varepsilon }{\partial {\Greekmath 0272} \eta }\,\vdots\, d{\Greekmath 0272} \eta\right) +\left(\frac{\partial \varepsilon }{\partial {\Greekmath 0272} \rho }\,\vdots\, d{\Greekmath 0272} \rho\right) + \ldots +\left( \frac{\partial \varepsilon }{\partial {\Greekmath 0272} ^{n}\eta }\,\vdots\, d{\Greekmath 0272} ^{n}\eta\right) +\left(\frac{\partial \varepsilon }{\partial {\Greekmath 0272} ^{n}\rho }\,\vdots\, d{\Greekmath 0272} ^{n}\rho\right) . \label{differentielenergy}
\end{equation*}
Notation \ $ \ \vdots\
$\ means the complete product of tensors (or scalar
product) and
\begin{equation*}
\tilde{T} =\frac{\partial \varepsilon (\rho,\eta)}{\partial \eta }\qquad \mathrm{and}\qquad \tilde{\mu} =\frac{\partial \varepsilon (\rho,\eta)}{\partial \rho },
\end{equation*}
are called the \emph{extended temperature} and \emph{extended chemical
potential}, respectively.
\subsection{Equation of conservative motions}
The volume mass satisfies the mass conservation law :
\begin{equation*}
\frac{\partial \rho }{\partial t}+{\rm{div}}\left( \rho\, \boldsymbol{u}\right) =0 ,
\label{density}
\end{equation*}
where
$\boldsymbol{u}$ is the fluid velocity and\ \ $\rm{div}$\ denotes the divergence operator.
The motion is
supposed to be conservative and consequently, the volumetric entropy verifies :
\begin{equation}
\frac{\partial \eta }{\partial t}+{\rm{div}}\left( \eta\, \boldsymbol{u}\right) =0 . \label{entropyconservation}
\end{equation}
The specific entropy $s=\eta /\rho$ is constant along each trajectory.
The \emph{extended divergence} \emph{at order }$p$ is defined as :
\begin{equation*}
{\rm{div}}_{p}(b_{{j_{1} \ldots j_{p}}}) =\left( b_{{j_{1}\ldots j_{p}}}\right)_{{,x_{j_{1}}, \ldots , x_{j_{p}}}},\ p\in \mathbb{N}\quad \mathrm{with}\quad {x_{j_{1}},\ldots , x_{j_{p}}}\in \left( x_{1}, x_{2} , x_{3}\right).
\end{equation*}
Classically, term $\left( b_{{j_{1}\ldots j_{p}}}\right)_{{,x_{j_{1}}, \ldots ,x_{j_{p}}}}$ corresponds to the summation on the repeated indexes $j_{1}\ldots j_{p}$ of the consecutive derivatives of $ b_{j_{1}\ldots j_{p}}$ with respect to $x_{j_{1}}, \ldots , x_{j_{p}}$.
Term ${\rm{div}}_{p}$ decreases the tensor order by $p$, while term ${\Greekmath 0272} ^{p}$ increases the tensor order by $p$.
We denote :
\begin{equation}
\left\{
\begin{array}{c}
\displaystyle\theta =\tilde {T}-{\rm{div}}\,{\boldsymbol{\Psi }}_{\it 1}+{\rm{div}}_{\it 2}{\boldsymbol{\Psi }}_{\it 2}+\ldots +(-1)^{n}{\rm{div}}_{n}{\boldsymbol{\Psi }}_{n}, \qquad {\rm with}\quad \boldsymbol{\Psi }_{p}=\frac{\partial \varepsilon }{\partial {\Greekmath 0272} ^{p}\eta }, \\
\displaystyle\Xi =\tilde {\mu} -{\rm{div}}\,{\boldsymbol{\Phi }}_{\it 1}+{\rm{div}}_{\it 2}{\boldsymbol{\Phi }}_{\it 2}+\ldots +(-1)^{n}{\rm{div}}_{n}{\boldsymbol{\Phi }}_{n} ,\qquad {\rm with}\quad \boldsymbol{\Phi }_{p}=\frac{\partial \varepsilon }{\partial {\Greekmath 0272} ^{p}\rho } ,
\end{array}
\right. \label{tempchem}
\end{equation}
where $\theta $ and $\Xi $\, are called the
\emph{generalized temperature} and \emph{generalized chemical
potential}.
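As a simple illustration, for $n=1$ and an energy of van der Waals type, $\varepsilon =\varepsilon _{0}(\eta ,\rho )+\frac{\lambda }{2}\,{\Greekmath 0272} \rho \,\vdots\, {\Greekmath 0272} \rho$ with $\lambda$ constant, we get $\boldsymbol{\Psi }_{\it 1}=0$ and $\boldsymbol{\Phi }_{\it 1}=\lambda \,{\Greekmath 0272} \rho$, so that $\theta =\tilde{T}$ and $\Xi =\tilde{\mu}-\lambda \,\Delta \rho$, where $\Delta$ denotes the Laplace operator; this is the classical chemical potential of capillarity \cite{Waals,Cahn}.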
We obtain
the equation of conservative motions in Ref. \cite{Gouin}, where we can find the proofs of these results :
\begin{equation}
\boldsymbol{a}+ {\rm grad}\left( \Xi +\Omega \right) +s\, {\rm grad}\,\theta = 0\qquad {\rm or}\qquad\boldsymbol{a}+ {\rm grad}\left( H+\Omega \right) -\theta \, {\rm grad}\, s=0 ,
\label{motion2}
\end{equation}
where $\boldsymbol{a}$ denotes the acceleration, grad the gradient operator, $\Omega$ the external force potential, and $H=\Xi -s\,\theta $ is called the \emph{generalized free enthalpy}.
Relations \eqref{motion2} are the generalization of relation (29.8) in Ref. \cite{Serrin} and constitute the \emph{thermodynamic form} of the equation of isentropic motions for perfect fluids.
\subsection{Complement: the stress tensor of conservative fluids}
The new results of this subsection are not needed in the other parts of the paper, but they completely extend the results obtained in \cite{casal4}.
We have the relation :
\begin{equation*}
d\varepsilon=\tilde T \ d\eta +\tilde\mu\ d\rho +\left({\boldsymbol{\Psi }}_{\it 1}\,\vdots\, d{\Greekmath 0272}\eta\right) +\left({
\boldsymbol{\Phi }}_{\it 1}\,\vdots\,d{\Greekmath 0272} \rho\right) + \ldots +\left(\boldsymbol{\Psi }
_{n}\,\vdots\, d{\Greekmath 0272} ^{n}\eta\right) + \left(\boldsymbol{\Phi }_{n}\,\vdots\,d{\Greekmath 0272}
^{n}\rho\right).
\end{equation*}
The Legendre
transformation of $\varepsilon$ with respect to $\ \eta,
\rho, {\Greekmath 0272} \eta, {\Greekmath 0272} \rho, \dots, {\Greekmath 0272} ^{n}\eta,
{\Greekmath 0272} ^{n}\rho$ is denoted by ${\it\Pi}$. Function ${\it\Pi}$
depends on
$\tilde T, \tilde\mu , {\boldsymbol{\Psi
}}_{\it 1}, {\boldsymbol{\Phi }}_{\it 1}, \dots,{\boldsymbol{\Psi }}_{n},
{\boldsymbol{\Phi }}_{n} $.
\begin{equation}
{\it\Pi} = \eta \ \tilde T + \rho \ \tilde\mu\ +
\left({\Greekmath 0272} \eta\,\vdots\, {\boldsymbol{\Psi }}_{\it 1}\right) +\left({\Greekmath 0272} \rho\,\vdots\,{
\boldsymbol{\Phi }}_{\it 1} \right) + \ldots +\left({\Greekmath 0272} ^{n}\eta\,\vdots\, \boldsymbol{\Psi }
_{n}\right) + \left({\Greekmath 0272}
^{n}\rho\,\vdots\, \boldsymbol{\Phi }_{n}\right)
-\varepsilon,\label{Pi}
\end{equation}
and
\begin{equation*}
d{\it\Pi} = \eta\ d\tilde T \, +\ \rho\ d\tilde\mu\ +\left(
{\Greekmath 0272} \eta\,\vdots\,d{\boldsymbol{\Psi }}_{\it 1}\right)+
\left({\Greekmath 0272} \rho \,\vdots\, d\boldsymbol{\Phi }_{\it 1}\right) + \ldots +\left( {\Greekmath 0272} ^{n}\eta \,\vdots\, d\boldsymbol{\Psi }
_{n}\right)+ \left( {\Greekmath 0272} ^{n}\rho\,\vdots\,d{\boldsymbol{\Phi }}_{n}\right),
\end{equation*}
where
\begin{equation}
\frac{\partial{\it\Pi}}{\partial \tilde T} =\eta,\quad\frac{\partial{\it\Pi}}{\partial \tilde\mu} =\rho,\quad {\rm and}\quad \frac{\partial {\it\Pi}}{\partial {\boldsymbol{\Psi }}_{\it k}}= {\Greekmath 0272} ^{k}\eta, \quad \frac{\partial {\it\Pi}}{\partial {\boldsymbol{\Phi }}_{\it k}}= {\Greekmath 0272} ^{k}\rho ,\quad k \in \{\textit{1}, \ldots , n \}. \label{pressurePi}
\end{equation}
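These duality relations can be checked on a toy one-variable example (a SymPy sketch with an illustrative convex energy, chosen only to exhibit the Legendre mechanism):
\begin{verbatim}
# Toy check of the duality dPi/dmu = rho for eps(rho) = rho*log(rho).
import sympy as sp

rho, mu = sp.symbols('rho mu', positive=True)
eps = rho * sp.log(rho)
mu_of_rho = sp.diff(eps, rho)                    # mu = d eps / d rho
rho_of_mu = sp.solve(sp.Eq(mu, mu_of_rho), rho)[0]
Pi = (rho * mu - eps).subs(rho, rho_of_mu)       # Legendre transform
print(sp.simplify(sp.diff(Pi, mu) - rho_of_mu))  # prints 0
\end{verbatim}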
Consequently,
\begin{equation*}
\frac{\partial {\it\Pi} }{\partial \boldsymbol{x}}=\ \eta \ \frac{\partial \tilde T}{\partial \boldsymbol{x}}+\rho \frac{\partial \tilde\mu}{\partial \boldsymbol{x}} +\left( {\Greekmath 0272} \eta \,\vdots\, \frac{\partial {\boldsymbol{\Psi }}_{\it 1}}{\partial \boldsymbol{x}}\right) + \left( {\Greekmath 0272} \rho\,\vdots\,\frac{\partial {\boldsymbol{\Phi }}_{\it 1}}{\partial \boldsymbol{x}}\right)+ \ldots + \left({\Greekmath 0272} ^{n}\eta \,\vdots\, \frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}}\right)+\left({\Greekmath 0272} ^{n}\rho \,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right) .
\end{equation*}
Because $\quad \displaystyle \frac{\partial \,{\rm{div}}_{n}{\,\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}} = {\rm{div}}_{n}\, \frac{\partial\,\boldsymbol{\Phi}_{n}}{\partial\boldsymbol{x}} $, \ by taking account of identities,
\begin{eqnarray}
\left({\Greekmath 0272} \rho \,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{\it 1}}{\partial \boldsymbol{x}}\right) &\equiv&{\rm{div}}\left(\, \rho \ \frac{\partial {\boldsymbol{\Phi }}_{\it 1}}{\partial \boldsymbol{x}}\right) -\rho \ \frac{\partial \,{\rm{div}}\,{\boldsymbol{\Phi }}_{\it 1}}{\partial \boldsymbol{x}} \notag \\
\left({\Greekmath 0272} ^{\it 2}\rho \,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{\it 2}}{\partial \boldsymbol{x}}\right) &\equiv&{\rm{div}}\left[\left( {\Greekmath 0272} \rho \,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{\it 2}}{\partial \boldsymbol{x}}\right)-\rho \ \frac{\partial \,{\rm{div}}\,{\boldsymbol{\Phi }}_{\it 2}}{\partial \boldsymbol{x}}\right] +\rho \ \frac{\partial \,{\rm{div}}_{\it 2}{\,\boldsymbol{\Phi }}_{\it 2}}{\partial \boldsymbol{x}} \notag \\
&\vdots & \label{lemme1} \\
\left({\Greekmath 0272} ^{n}\rho \,\vdots\,\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right) &\equiv&{\rm{div}}\left[\left( {\Greekmath 0272} ^{n-1}\rho \,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right)-\left({\Greekmath 0272} ^{n-2}\rho \,\vdots\, {\rm{div}}\frac{\partial {\boldsymbol{\Phi }}_{\it n}}{\partial \boldsymbol{x}}\right)+{\ldots }\right. \ \notag \\
&+& \left. (-1)^{p-1}\left({\Greekmath 0272} ^{n-p}\rho \,\vdots\,{\rm{div}}_{p-1}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right)+\ldots + (-1)^{n-1}\rho\,\,{\rm{div}}_{n-1}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right] +(-1)^{n}\rho \ \frac{\partial \,{\rm{div}}_{n}{\,\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}, \notag
\end{eqnarray}
and an analogous expression for $\eta $, where $\rho$ and $ {\boldsymbol{\Phi }}_{p}$ are replaced by $\eta$ and $ {\boldsymbol{\Psi }}_{p}$, $p\in \{1,\ldots ,n\}$. We deduce,
\begin{equation*}
\rho \frac{\partial \Xi }{\partial \boldsymbol{x}}+\eta \frac{\partial \theta }{\partial \boldsymbol{x}}={\rm{div}}\left( \,{\it\Pi} \, \boldsymbol{I}-\boldsymbol{\sigma }\,\right) .
\end{equation*}
The identity transformation is denoted by $\boldsymbol{I}$ and the stress tensor $\boldsymbol{\sigma}$ is :
\begin{eqnarray*}
\boldsymbol{\sigma } &=& {\rho \ }\frac{\partial {\boldsymbol{\Phi }}_{\it 1}}{\partial \boldsymbol{x}}+\left({\Greekmath 0272} \rho \,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{\it 2}}{\partial \boldsymbol{x}}\right)-\rho \ \frac{\partial \,{\rm{div}}_{\it 2}{\,\boldsymbol{\Phi }}_{\it 2}}{\partial \boldsymbol{x}}+ \ldots + \left( {\Greekmath 0272} ^{n-1}\rho \,\vdots\,\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right)-\left({\Greekmath 0272} ^{n-2}\rho \,\vdots\, {\rm{div}}\frac{\partial {\boldsymbol{\Phi }}_{\it n}}{\partial \boldsymbol{x}}\right) \\
&+& \ldots + (-1)^{p-1}\left({\Greekmath 0272} ^{n-p}\rho \,\vdots\,{\rm{div}}_{p-1}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}} \right)+ \ldots +(-1)^{n-1}\rho \,{\rm{div}}_{n-1}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}} \\
&+&{\eta\ }\frac{\partial {\boldsymbol{\Psi }}_{\it 1}}{\partial \boldsymbol{x}}+\left({\Greekmath 0272} \eta\,\vdots\, \frac{\partial {\boldsymbol{\Psi }}_{\it 2}}{\partial \boldsymbol{x}}\right)-\eta\ \frac{\partial \,{\rm{div}}_{\it 2}{\,\boldsymbol{\Psi }}_{\it 2}}{\partial \boldsymbol{x}}+ \ldots + \left( {\Greekmath 0272} ^{n-1}\eta \,\vdots\,\frac{\partial {\boldsymbol{\Psi }}_{\it n}}{\partial \boldsymbol{x}}\right)-\left({\Greekmath 0272} ^{n-2}\eta \,\vdots\, {\rm{div}}\frac{\partial {{\boldsymbol{\Psi }}}_{\it n}}{\partial \boldsymbol{x}}\right) \\
& + &\ldots + (-1)^{p-1}\left({\Greekmath 0272} ^{n-p}\eta\,\vdots\,{\rm{div}}_{p-1}\frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}} \right)+ \ldots +(-1)^{n-1}\eta\,{\rm{div}}_{n-1}\frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}} .
\end{eqnarray*}
Due to the mass conservation, we get
$\
\rho \,\boldsymbol{a} = {\partial (\rho\,\boldsymbol{u})}/{\partial
t}+\rm{div}\left( \rho
\boldsymbol{u}\otimes \boldsymbol{u}\right) \
$
and the
equation of motions \eqref{motion2} can be written in the other form :
\begin{equation}
\frac{\partial \rho \,\boldsymbol{u}}{\partial t}+{\rm{div}}\left(\, \rho\, \boldsymbol{u}\otimes \boldsymbol{u}+ \,{\it\Pi} \, \boldsymbol{I}-\boldsymbol{\sigma }\,\right) + \rho \, {\rm{grad}}\, \Omega=0. \label{stress tensor}
\end{equation}
The two previous equations are deduced from Hamilton's principle \cite{Gouin}, which can be used only for conservative media because
Eq. \eqref{entropyconservation} is verified. In this case,
Eq. \eqref{motion2} is strictly equivalent to Eq. \eqref{stress tensor}.\\
Let us note that, for classical fluids, the two equations are two forms of the equation of motions which are written in Eq. (29.8) of \cite{Serrin}:
\begin{equation*}
\rho \,\boldsymbol{a}+ {\rm grad}\, p +\rho\,{\rm grad} \, \Omega
=0 \qquad {\Longleftrightarrow
}\qquad \boldsymbol{a}+ {\rm grad}\left( \mu+\Omega \right)
+s\, {\rm grad}\,T = 0,
\label{motion}
\end{equation*}
where $p$ is here the thermodynamical pressure of simple fluids, $\mu$ the corresponding chemical potential and $T$ the Kelvin temperature.\\
The stress tensor $\boldsymbol{\sigma }$ is only an artifact, different from the Cauchy stress tensor, which may be interesting to compare with solid mechanics; the most important conservative equations are expressed by Eq. \eqref{motion2}. This is the reason why the entropy law is expressed without dissipative terms.
\subsection{Equation of energy for dissipative motions (see the detailed proofs in Ref. \cite{Gouin})}
For viscous fluids, the equation of motion can be written as :
\begin{equation*}
\frac{\partial \rho \,\boldsymbol{u}}{\partial t}+\rm{div}\left(
\rho\boldsymbol{u}\otimes \boldsymbol{u}\right)+\rho \,\rm{grad}\, \Xi +\eta
\, \rm{grad}\, \theta -\rm{div}\,{\boldsymbol{\sigma}}_{v} + \rho
\, \rm{grad}\, \Omega =0 ,
\end{equation*}
where $\boldsymbol{\sigma }_{v}$ denotes the viscous-stress tensor of the fluid. We denote
\begin{equation*}
\left\{
\begin{array}{l}
\displaystyle\boldsymbol{M} = \frac{\partial \rho\, \boldsymbol{u}}{\partial t}+{\rm{div}}\left( \rho\, \boldsymbol{u}\otimes \boldsymbol{u}\right) +\rho \,{\rm{grad}}\,{\it\Xi } +\eta \,{\rm{grad}} \, \theta -{\rm{div}}\,\boldsymbol{\sigma }_{\it v}+\rho \, {\rm{grad}}\, \Omega \\
\displaystyle B = \frac{\partial \rho }{\partial t}+{\rm{div}}\left( \rho \,\boldsymbol{u}\right) \\
\displaystyle N = \frac{\partial\rho s}{\partial t}+{\rm{div}}\left(\rho s\, \boldsymbol{u} \right) +\frac{1}{\theta}\,\Big({\rm{div}}\,{\boldsymbol q} -r-{\rm{Tr}}\,\big(\, \boldsymbol{\sigma }_{\it v}\, \boldsymbol{D }\, \big)\Big) \\
\displaystyle F = \frac{\partial }{\partial t}\left( \frac{1}{2}\,\rho\, \boldsymbol{u}\, \boldsymbol{.}\,\boldsymbol{u}+\rho \ \Xi +\eta \ \theta -{\it\Pi} +\rho \ \Omega \right)\\
\displaystyle\quad +{\rm{div}}\left\{ \left[ \left( \frac{1}{2}\,\rho\, \boldsymbol{u}\, \boldsymbol{.}\,\boldsymbol{u}+\rho \,\Xi +\eta \,\theta +\rho \,\Omega \right) \boldsymbol{I} - \boldsymbol{\sigma }_{\it v}\right] \boldsymbol{u}+\boldsymbol{\chi }\right\} +{\rm{div}}\,\boldsymbol{q}-r-\rho\,\frac{\partial \Omega }{\partial t},
\end{array}
\right.
\end{equation*}
where \ Tr\ \ denotes the trace operator and\ \ $ \boldsymbol{.} $ \ the scalar product ($\boldsymbol{u}\,\boldsymbol{.}\,\boldsymbol{u} =\boldsymbol{u}^\star \boldsymbol{u}$). Terms $\boldsymbol{q}$ and $r$ represent the heat-flux vector and the heat supply; $\boldsymbol{D}=\frac{1}{2}\left( \partial\boldsymbol{u}/\partial \boldsymbol{x}+(\partial\boldsymbol{u}/\partial \boldsymbol{x})^\star\right) $ is the velocity deformation tensor. Due to the relaxation time in the dissipative processes, we only consider the case when the dissipative viscous stress tensor $\boldsymbol{\sigma }_{\it v}$ takes account of the first derivative of the velocity field: the higher-order terms are assumed negligible and the viscosity does not involve any gradient terms.\\
In the dissipative case, the equation of motion is written $\boldsymbol{M}=0$, with the addition of the viscous stress tensor $\boldsymbol{\sigma }_{\it v}$; the conservative motions correspond to vanishing viscosity. Terms $\boldsymbol{q}$ and $r$, introduced together with $\boldsymbol{\sigma }_{\it v}$, appear accordingly in $N$ and $F$ for the dissipative case.\\
Due to subsection 2.3, the term $\boldsymbol{M} $ can be written in two equivalent expressions :
\begin{equation*}
\boldsymbol{M} = \frac{\partial \rho\, \boldsymbol{u}}{\partial t}+{\rm{div}}\left( \rho\, \boldsymbol{u}\otimes \boldsymbol{u}\right) +\rho \,{\rm{grad}}\,{\it\Xi } +\eta \,{\rm{grad}} \, \theta -{\rm{div}}\,\boldsymbol{\sigma }_{\it v}+\rho \, {\rm{grad}}\, \Omega
\end{equation*}
or equivalently,
\begin{equation*}
\boldsymbol{M}=\frac{\partial \rho \,\boldsymbol{u}}{\partial t}+{\rm{div}}\left(\, \rho\, \boldsymbol{u}\otimes \boldsymbol{u}+ \,{\it\Pi} \, \boldsymbol{I}-\boldsymbol{\sigma} - \boldsymbol{\sigma }_{\it v} \,\right) + \rho \, {\rm{grad}}\, \Omega .
\end{equation*}
The first expression is better adapted to what follows. The viscous stress tensor $\boldsymbol{\sigma }_{\it v}$ is classically introduced in the same part as the conservative stress tensor $\boldsymbol{\sigma }$. In this case,
\begin{equation*}
N = \frac{\partial\rho s}{\partial t}+{\rm{div}}\left(\rho s\, \boldsymbol{u} \right)
+\frac{1}{\theta}\,\Big(\rm{div}{\boldsymbol q}
-r-\rm{Tr}\,\big(\,
\boldsymbol{\sigma }_{\it v}\, \boldsymbol{D }\, \big)\Big)
\end{equation*}
and due to the relation,
\begin{equation*}
\left( \frac{\partial\rho s}{\partial t} +{\rm{div}}\left(\rho s\, \boldsymbol{u} \right)\right) {\theta}
+ \,\Big(\rm{div}{\boldsymbol q}
-r-\rm{Tr}\,\big(\,
\boldsymbol{\sigma }_{\it v}\, \boldsymbol{D }\, \big)\Big)=0,
\end{equation*}
term $ \displaystyle \frac{\partial\rho s}{\partial t}+{\rm{div}}\left(\rho s\, \boldsymbol{u} \right)$ corresponds to the variation of the entropy. Then, we only take account of $ \boldsymbol{D}$, the velocity deformation tensor, and we obtain (see Ref. \cite{Gouin} for the proof),
\begin{eqnarray*}
\boldsymbol{\chi} = \rho\, \frac{\partial \Phi _{\it 1}}{\partial t}+\ldots+\left(\frac{\partial \Phi _{n}}{\partial t} \,\vdots\, {\Greekmath 0272} ^{n-1}\rho\right)-\left( {\rm{div}}\frac{\partial \Phi _{ \it n}}{\partial t} \,\vdots\, {\Greekmath 0272} ^{{\it n}-2}\rho \right)+\ldots +(-1)^{p-1}\left( {\rm{div}}_{p-1}\frac{\partial \Phi _{n}}{\partial t} \,\vdots\, {\Greekmath 0272} ^{n-p}\rho \right) +\ldots +(-1)^{n-1}\,\rho \, {\rm{div}}_{n-1}\frac{\partial \Phi _{n}}{\partial t} \\
+\,\eta \,\frac{\partial \Psi _{\it 1}}{\partial t}+ \ldots + \left(\frac{\partial \Psi _{\it n}}{\partial t} \,\vdots\, {\Greekmath 0272} ^{n-1}\eta \right) - \left({\rm{div}}\frac{\partial \Psi _{\it n}}{\partial t}\,\vdots\, {\Greekmath 0272} ^{{\it n}-2}\eta\right) +{\ldots }+(-1)^{p-1} \left({\rm{div}}_{p-1}\frac{\partial \Psi _{\it n}}{\partial t} \,\vdots\, {\Greekmath 0272} ^{n-p}\eta \right) +\ldots +(-1)^{n-1}\,\eta \,{\rm{div}}_{n-1}\frac{\partial \Psi _{n}}{\partial t} .
\end{eqnarray*}
Term $\boldsymbol{\chi}$ is the general extension of the interstitial-working vector
obtained in \cite{casal5}.
We obtain the following results (see the proofs in Ref. \cite{Gouin}),
\begin{theorem}
Relation
\begin{equation*}
F-\boldsymbol{M}\, \boldsymbol{.}\,\boldsymbol{u}-\left( \frac{1}{2}\,\boldsymbol{u}\, \boldsymbol{.}\,\boldsymbol{u}+ \Xi + \Omega \right) \,B-\theta\,N\equiv 0
\end{equation*}
is an algebraic identity.
\end{theorem}
\noindent $\boldsymbol{M} = 0$\ is the equation of motion, $B=0$ is the mass conservation and $N=0$ the entropy relation; consequently, $F=0$ is the equation of energy for dissipative fluids.
\begin{corollary}
The equation of energy is
\begin{equation*}\label{eenrgy}
\displaystyle\frac{\partial }{\partial t}\left( \frac{1}{2}\,\rho\, \boldsymbol{u}\, \boldsymbol{.}\,\boldsymbol{u} +\rho \ \Xi +\eta \ \theta -{\it\Pi}+\rho \ \Omega \right) + \displaystyle{\rm{div}}\left\{ \left[ \left( {\frac{1}{2}}\,\rho\,\boldsymbol{u}\, \boldsymbol{.}\,\boldsymbol{u}+\rho \,\Xi +\eta \,\theta +\rho \,\Omega \right) \boldsymbol{I}-\boldsymbol{\sigma }_{\it v}\right] \boldsymbol{u}+\boldsymbol{\chi }\right\} +{\rm{div}}\, \boldsymbol{q}-r-\rho\,\frac{\partial \Omega }{\partial t}=0.
\end{equation*}
\end{corollary}
For dissipative fluid motions, ${\rm Tr}\left( \boldsymbol{\sigma }_{v}\,\boldsymbol{D}\right)\geq 0 $.
From $N = 0$ and $B=0$, we deduce the Planck inequality
\cite{Truesdel1} :
\begin{equation*}
\rho \,\theta \,\frac{ds}{dt}+\rm{div}\,\boldsymbol{q} - r \geq 0.
\end{equation*}
We consider the Fourier equation in the form of general inequality :
$$
\boldsymbol{q}\,\boldsymbol{.}\,\rm{grad}\, \theta \leq 0 ,
$$
and we obtain,
\begin{equation*}
\rho \,\frac{ds}{dt}+{\rm{div}}\,\frac{\boldsymbol{q}}{\theta }-\frac{r}{\theta } \geq 0,
\end{equation*}
which is the extended form of the Clausius-Duhem inequality. Then, multi-gradient fluids are compatible with the two laws of thermodynamics.
\section{Main field variables}
In this section, we use the properties of symmetry and consequently we can no longer identify covariant and contravariant vectors and tensors. Then, superscript\ $^{\star}$\ denotes the transposition in ${\mathcal D}_t$. When clarity is necessary, we use the notation $\boldsymbol{b}^{\star }\boldsymbol{c}$ for the scalar product of vectors $\boldsymbol{b}$ and $\boldsymbol{c}$; the tensor product $\boldsymbol{b}\, \boldsymbol{c}^{\star }$ corresponds to $\boldsymbol{b}\otimes \boldsymbol{c}$.
The divergence of a linear transformation $\boldsymbol{S}$ denotes the covector $\mathop{\rm div}(\boldsymbol{S})$ such that, for any constant vector $\boldsymbol{d}$, $\mathop{\rm div}(\boldsymbol{S})\, \boldsymbol{d}= \mathop{\rm div}(\boldsymbol{S}\,\boldsymbol{d})$.
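In index notation, this definition simply reads $\left( \mathop{\rm div}(\boldsymbol{S})\right)_{j}=S_{ij}\, ,_{x_{i}}$, a covector, in agreement with the comma notation for partial derivatives used above.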
Now, previous terms ${\Greekmath 0272} ^{p}\eta$ and ${\Greekmath 0272} ^{p}\rho$, defined in Eqs. (\ref{multigrad}), are covariant tensors of order $p$, while $\boldsymbol{\Psi
}_{p}$ and $\boldsymbol{\Phi
}_{p}$, defined in Eqs. \eqref{tempchem}, are contravariant tensors of order $p$.
\subsection{Study of conservative motion equation}
Without loss of generality, and for the sake of simplicity, we do not consider the external-force term. The total energy of the fluid is :
\begin{equation*}
E=\frac{\boldsymbol{j}^{\star}\boldsymbol{j}}{2\rho }+\varepsilon \qquad \mathrm{where}\quad \boldsymbol{j}=\rho \, \boldsymbol{u},
\end{equation*}
and
\begin{equation*}
dE=\tilde T \ d\eta +R\ d\rho +\boldsymbol{u}^\star d\boldsymbol{j}+\left( d{\Greekmath 0272} \eta\,\vdots\, {\boldsymbol{\Psi }}_{\it 1}\right) +\left(d{\Greekmath 0272} \rho\,\vdots\, {\boldsymbol{\Phi }}_{\it 1}\right) +\ldots +\left(d{\Greekmath 0272} ^{n}\eta \,\vdots\, {\boldsymbol{\Psi }}_{n}\right) +\left(d{\Greekmath 0272} ^{n}\rho \,\vdots\, {\boldsymbol{\Phi }}_{n}\right) ,
\end{equation*}
where
\begin{equation*}
R=\tilde\mu \,-\, \frac{\boldsymbol{u}^{\star }\boldsymbol{u}}{2}.
\end{equation*}
The Legendre
transformation of $E$ with respect to variables $\, \eta, \,
\rho, \boldsymbol{j}, {\Greekmath 0272} \eta, {\Greekmath 0272} \rho,\dots, {\Greekmath 0272}
^{n}\eta, {\Greekmath 0272} ^{n}\rho$ is denoted $ \mathcal{P} $; $ \mathcal{P} $ is a function
of
$\tilde T, R, \boldsymbol{u}, {\boldsymbol{\Psi
}}_{\it 1}, {\boldsymbol{\Phi }}_{\it 1}, \dots, {\boldsymbol{\Psi }}_{n},
{\boldsymbol{\Phi }}_{n} $.
\begin{equation*}
\mathcal{P}= \eta \ \tilde T+\ \rho \ R +\boldsymbol{j}^\star \boldsymbol{u}+\left({\Greekmath 0272} \eta\,\vdots\, {\boldsymbol{\Psi }}_{\it 1}\right) +\left({\Greekmath 0272} \rho
\, \vdots\, {\boldsymbol{\Phi }}_{\it 1}\right)+ \ldots +\left({\Greekmath 0272}^n \eta\,\vdots\, {\boldsymbol{\Psi }}_{n}\right)+\left({\Greekmath 0272}^n \rho
\, \vdots\, {\boldsymbol{\Phi }}_{n}\right) -E,\label{pressure0}
\end{equation*}
where
\begin{equation}
\frac{\partial \mathcal{P}}{\partial \tilde T}
=\eta,\quad\frac{\partial \mathcal{P}}{\partial R}
=\rho,\quad\frac{\partial\mathcal{P}}{\partial \boldsymbol{u}}
=\boldsymbol{j}\,^\star,\quad {\rm and}\ \quad \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Psi }}_{\it k}}= {\Greekmath 0272} ^{k}\eta, \quad \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Phi }}_{\it k}}= {\Greekmath 0272} ^{k}\rho ,\quad k \in \{\textit{1}, \ldots , n \}\label{pressure1}.
\end{equation}
We notice that
\begin{equation}
\mathcal{P}= \eta \ \tilde T+\ \rho \ \tilde\mu +\left({\Greekmath 0272} \eta\,\vdots\, {\boldsymbol{\Psi }}_{\it 1}\right) +\left({\Greekmath 0272} \rho
\, \vdots\, {\boldsymbol{\Phi }}_{\it 1}\right)+ \ldots +\left({\Greekmath 0272}^n \eta\,\vdots\, {\boldsymbol{\Psi }}_{n}\right)+\left({\Greekmath 0272}^n \rho
\, \vdots\, {\boldsymbol{\Phi }}_{n}\right) -\ \varepsilon.\label{pressure}
\end{equation}
Consequently, the value of $\mathcal{P}$ is the same as the value of $\it{\Pi} $ given in Eq. \eqref{Pi}, but function $\mathcal{P}$ is associated with a different field of variables.
Motion equation \eqref{motion2} can be written,
\begin{equation}
\frac{\partial \rho \,\boldsymbol{u}}{\partial t}+{\rm{div}}\left( \rho\, \boldsymbol{u}\otimes \boldsymbol{u} \right) +\rho \,{\rm{grad}}\,\Xi +\eta \,{\rm{grad}}\,\theta = 0 . \label{motion1}
\end{equation}
Due to \eqref{pressure1},
\begin{equation*}
d\left( \mathcal{P} \,\boldsymbol{u}\right) = d\mathcal{P} \ \,\boldsymbol{u}+\mathcal{P} \ d\boldsymbol{u}\quad \mathrm{\Longrightarrow }\quad \frac{\partial \left( \mathcal{P} \,\boldsymbol{u}\right) }{\partial \boldsymbol{u}}=\boldsymbol{u}\otimes \boldsymbol{j} + \mathcal{P} \ \boldsymbol{I} \quad \mathrm{\Longrightarrow } \quad {\rm{div}}\left[ \frac{\partial \left( \mathcal{P} \,\boldsymbol{u}\right) }{\partial \boldsymbol{u}}\right] = {\rm{div}}\left( \rho\, \boldsymbol{u\otimes u}\right) +\frac{\partial \mathcal{P} }{\partial \boldsymbol{x}}.
\end{equation*}
From \eqref{pressure}, we get
\begin{equation*}
\frac{\partial \mathcal{P} }{\partial \boldsymbol{x}}=\eta \ \frac{\partial \tilde T}{\partial \boldsymbol{x}}+\rho \ \frac{\partial \tilde\mu }{\partial \boldsymbol{x}} +\left({\Greekmath 0272} \eta\, \vdots\, \frac{\partial {\boldsymbol{\Psi }}_{\it 1}}{\partial \boldsymbol{x}}\right)+\left({\Greekmath 0272} \rho \, \vdots\, \frac{\partial \boldsymbol{\Phi }_{\it 1}}{\partial \boldsymbol{x}}\right)+ \ldots + \left({\Greekmath 0272} ^{n}\eta \, \vdots\, \frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}}\right)+\left({\Greekmath 0272} ^{n}\rho\, \vdots\, \frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right) .
\end{equation*}
Taking account of Eq. \eqref{lemme1}, we get :
\begin{eqnarray}
\frac{\partial \mathcal{P} }{\partial \boldsymbol{x}} &=&\eta \ \frac{\partial }{\partial \boldsymbol{x}}\Big(\, \tilde T-{\rm{div}}\,{\boldsymbol{\Psi }}_{\it 1}+{\rm{div}}_{\it 2}{\boldsymbol{\Psi }}_{\it 2}+ \ldots + (-1)^{n}{\rm{div}}_{n}{\boldsymbol{\Psi }}_{n}\,\Big) + \rho \ \frac{\partial }{\partial \boldsymbol{x}}\Big(\, \tilde\mu -{\rm{div}}\,{\boldsymbol{\Phi }}_{\it 1}+{\rm{div}}_{\it 2}{\boldsymbol{\Phi }}_{\it 2}+\ldots +(-1)^{n}{\rm{div}}_{n}{\boldsymbol{\Phi }}_{n}\,\Big)\notag \\
&+& {\rm{div}}\left( \eta \ \frac{\partial {\boldsymbol{\Psi }}_{\it 1}}{\partial \boldsymbol{x}}\right) +{\rm{div}}\left[ \left( {\Greekmath 0272} \eta \,\vdots\, \frac{\partial {\boldsymbol{\Psi }}_{\it 2}}{\partial \boldsymbol{x}}\right)-\eta \ \frac{\partial \,{\rm{div}}\,{\boldsymbol{\Psi }}_{\it 2}}{\partial \boldsymbol{x}}\right]\notag\\
& \vdots &\notag \\
&+&{\rm{div}}\left[ \left({\Greekmath 0272} ^{n-1}\eta \,\vdots\, \frac{\partial {\boldsymbol{\Psi }}_{\it n}}{\partial \boldsymbol{x}}\right)-\left({\Greekmath 0272} ^{n-2}\eta \,\vdots\, {\rm{div}}\frac{\partial {\boldsymbol{\Psi }}_{\it n}}{\partial \boldsymbol{x}}\right)+\ldots +(-1)^{p-1}\left({\Greekmath 0272} ^{n-p}\eta \,\vdots\, {\rm{div}}_{p-1}\frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}}\right)+ \ldots + (-1)^{n-1}\eta \,\,{\rm{div}}_{n-1}\frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}}\right]\notag\\
&+&{\rm{div}}\left( \rho \ \frac{\partial {\boldsymbol{\Phi }}_{\it 1}}{\partial \boldsymbol{x}}\right) +{\rm{div}}\left[ \left({\Greekmath 0272} \rho \,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{\it 2}}{\partial \boldsymbol{x}}\right)-\rho \ \frac{\partial \,{\rm{div}}\,{\boldsymbol{\Phi }}_{\it 2}}{\partial \boldsymbol{x}}\right] \label{Key1}\\
& \vdots& \notag\\
&+&{\rm{div}}\left[\left({\Greekmath 0272} ^{n-1}\rho\,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{\it n}}{\partial \boldsymbol{x}}\right)-\left({\Greekmath 0272} ^{n-2}\rho \,\vdots\, {\rm{div}}\frac{\partial {\boldsymbol{\Phi }}_{\it n}}{\partial \boldsymbol{x}}\right)+ \ldots + (-1)^{p-1}\left({\Greekmath 0272} ^{n-p}\rho \,\vdots\,{\rm{div}}_{p-1}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right)+\ldots + (-1)^{n-1}\rho \,\,{\rm{div}}_{n-1}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right]\notag
\end{eqnarray}
and consequently,
\begin{eqnarray*}
\frac{\partial \mathcal{P} }{\partial \boldsymbol{x}} &=&\eta \ \frac{\partial \theta }{\partial \boldsymbol{x}}+\rho \ \frac{\partial \Xi }{\partial \boldsymbol{x}} +{\rm{div}}\left(\, C_{\it n}+D_{\it n}\,\right) \\
&+& {\rm{div}}\left[\, \eta \, \frac{\partial }{\partial \boldsymbol{x}}\left( {\boldsymbol{\Psi }}_{\it 1}-{\rm{div}}\,{\boldsymbol{\Psi}}_{\it 2}+\ldots +(-1)^{{\it n}-1}{\rm{div}}_{{\it n}-1}{\boldsymbol{\Psi}}_{\it n}\right) \right] +{\rm{div}}\left[ \,\rho \, \frac{\partial }{\partial \boldsymbol{x}}\left( {\boldsymbol{\Phi }}_{\it 1}-{\rm{div}}\,{\boldsymbol{\Phi }}_{\it 2}+\ldots +(-1)^{{\it n}-1}{\rm{div}}_{{\it n}-1}{\boldsymbol{\Phi}}_{\it n}\right) \right] ,
\end{eqnarray*}
with
\begin{eqnarray}
C_{n} &=&\left({\Greekmath 0272} \eta\,\vdots\, \frac{\partial {\boldsymbol{\Psi }}_{\it 2}}{\partial \boldsymbol{x}}\right)+\left({\Greekmath 0272} ^{\it 2}\eta \,\vdots\, \frac{\partial {\boldsymbol{\Psi }}_{\it 3}}{\partial \boldsymbol{x}}\right)-\left({\Greekmath 0272} \eta \,\vdots\, {\rm{div}}\frac{\partial {\boldsymbol{\Psi }}_{\it 3}}{\partial \boldsymbol{x}}\right)+ \ldots +\left({\Greekmath 0272} ^{n-1}\eta\,\vdots\, \frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}}\right)-\left({\Greekmath 0272} ^{n-2}\eta \,\vdots\,{\rm{div}}\frac{\partial {\boldsymbol{\Psi }}_{\it n}}{\partial \boldsymbol{x}}\right)+ \ldots\label{Key2} \\
&+& (-1)^{p-1}\left({\Greekmath 0272} ^{n-p}\eta\,\vdots\,{\rm{div}}_{p-1}\frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}}\right)+\ldots +(-1)^{n-2}\left({\Greekmath 0272} \eta \,\vdots\,{\rm{div}}_{n-2}\frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}}\right),\notag \\
D_{n} &=&\left({\Greekmath 0272} \rho\,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{\it 2}}{\partial \boldsymbol{x}}\right)+\left({\Greekmath 0272} ^{\it 2}\rho \,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{\it 3}}{\partial \boldsymbol{x}}\right)-\left({\Greekmath 0272} \rho \,\vdots\, {\rm{div}}\frac{\partial {\boldsymbol{\Phi }}_{\it 3}}{\partial \boldsymbol{x}}\right)+ \ldots +\left({\Greekmath 0272} ^{n-1}\rho\,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right)-\left({\Greekmath 0272} ^{n-2}\rho \,\vdots\,{\rm{div}}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right)+ \ldots \label{Key3}\\
&+& (-1)^{p-1}\left({\Greekmath 0272} ^{n-p}\rho\,\vdots\,{\rm{div}}_{p-1}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right)+\ldots +(-1)^{n-2}\left({\Greekmath 0272} \rho \,\vdots\,{\rm{div}}_{n-2}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right) .\notag
\end{eqnarray}
From Eq. \eqref{motion1} and Eqs. \eqref{Key1}-\eqref{Key2}-\eqref{Key3}, we
finally obtain,
\begin{eqnarray*}
&&\frac{\partial \rho \,\boldsymbol{u}}{\partial t} +{\rm{div}}\left[ \frac{\partial \left( \mathcal{P} \,\boldsymbol{u}\right) }{\partial \boldsymbol{u}}\right] -{\rm{div}}\left( \, C_{\it n}+D_{\it n}\,\right) \\
&& -\,{\rm{div}}\left[ \eta \,\frac{\partial }{\partial \boldsymbol{x}}\left( \boldsymbol{\Psi}_{\it 1}-{\rm{div}}\, {\boldsymbol{\Psi}}_{\it 2}+\ldots + (-1)^{{\it n}-1}{\rm{div}}_{{\it n}-1}{\boldsymbol{\Psi }}_{\it n}\right) \right] - {\rm{div}}\left[ \rho \, \frac{\partial }{\partial \boldsymbol{x}}\left( \boldsymbol{\Phi}_{\it 1}-{\rm{div}}\,{\boldsymbol{\Phi}}_{\it 2}+\ldots +(-1)^{{\it n}-1}{\rm{div}}_{{\it n}-1}{\boldsymbol{\Phi}}_{\it n}\right) \right] =0.
\end{eqnarray*}
\subsection{Balances of mass and entropy}
For the mass density, we get by successive derivations
\begin{equation*}
\left\{
\begin{array}{l}
\displaystyle\frac{\partial \rho }{\partial t} + {\rm{div}}\left( \rho \,\boldsymbol{u}\right) =0, \\
\\
\displaystyle \frac{\partial ({\Greekmath 0272} \rho)}{\partial t} + {\rm{div}}\left[ {\Greekmath 0272} \left( \rho \,\boldsymbol{u}\right) \right] =0, \\
\displaystyle \qquad\qquad\qquad\vdots \\
\displaystyle \frac{\partial ({\Greekmath 0272} ^{n}\rho)}{\partial t} + {\rm{div}}\left[ {\Greekmath 0272} ^{\it n}\left( \rho \,\boldsymbol{u}\right) \right] =0,
\end{array}
\right.
\end{equation*}
where we recall that,
\begin{equation*}
{\Greekmath 0272} \rho =\frac{\partial\rho}{\partial\boldsymbol{x}},\ \
{\Greekmath 0272} (\rho\,\boldsymbol{u})
=\frac{\partial(\rho\,\boldsymbol{u})}{\partial\boldsymbol{x}},\ \ldots \ ,
{\Greekmath 0272}^n \rho =\frac{\partial^n\rho}{\partial\boldsymbol{x}^{\,n}},\ \
{\Greekmath 0272}^n (\rho\,\boldsymbol{u})
=\frac{\partial^n(\rho\,\boldsymbol{u})}{\partial\boldsymbol{x}^{\,n}}
.
\end{equation*}
If we assume $\left. \boldsymbol{\beta} _{1}\,\right\vert
_{t=0}=\left. {\Greekmath 0272} \rho\, \right\vert _{t=0}$, one can consider $\boldsymbol{\beta} _{1}={\Greekmath 0272} \rho $ as an independent
variable. That is the same
for $\boldsymbol{\beta} _{p}={\Greekmath 0272} ^{p}\rho $ with $\left.
\boldsymbol{\beta} _{p}\,\right\vert _{t=0}=\left. {\Greekmath 0272} ^{p}\rho\,
\right\vert _{t=0}$. Then, all the previous equations are
compatible with the mass
conservation. But
\begin{eqnarray*}
{\Greekmath 0272} \left( \rho\, \boldsymbol{u}\right) & =& {\Greekmath 0272}\rho\, \otimes\, \boldsymbol{u} +\rho \ {\Greekmath 0272} \,\boldsymbol{u} \\
& \vdots & \\
{\Greekmath 0272} ^{n}\left( \rho \,\boldsymbol{u}\right) &=& {\Greekmath 0272} ^{n}\rho \, \otimes\, \boldsymbol{u}+C_{n}^{1}\ {\Greekmath 0272} ^{n-1}\rho \, \otimes\, {\Greekmath 0272} \boldsymbol{u}+ \ldots + {C}_{n}^{p}\,{\Greekmath 0272} ^{n-p}\rho \, \otimes\, {\Greekmath 0272} ^{p}\boldsymbol{u}+ \ldots + \rho \, {\Greekmath 0272} ^{n}\boldsymbol{u}.
\end{eqnarray*}
Then,
\begin{equation*}
\left\{
\begin{array}{l}
\displaystyle\frac{\partial \rho }{\partial t}+{\rm{div}}\left( \rho\, \boldsymbol{u}\right) =0, \\
\\
\displaystyle \frac{\partial {\Greekmath 0272} \rho }{\partial t}+{\rm{div}}\left[ \, {\Greekmath 0272}\rho\, \otimes\, \boldsymbol{u} +\rho \ {\Greekmath 0272} \,\boldsymbol{u} \right] = 0, \\
\displaystyle \qquad\qquad\qquad\vdots \\
\displaystyle \frac{\partial {\Greekmath 0272} ^{n}\rho }{\partial t}+{\rm{div}}\left[{\Greekmath 0272} ^{n}\rho \, \otimes\, \boldsymbol{u}+C_{\it n}^{1}\ {\Greekmath 0272} ^{{\it n}-1}\rho \, \otimes\, {\Greekmath 0272} \boldsymbol{u}+ \ldots + {C}_{\it n}^{\it p}\,{\Greekmath 0272} ^{\it n-p}\rho \, \otimes\, {\Greekmath 0272} ^{\it p}\boldsymbol{u}+ \ldots + \rho \, {\Greekmath 0272} ^{\it n}\boldsymbol{u}\right] =0.
\end{array}
\right.
\end{equation*}
It is the same for the volumetric entropy if we consider $
\eta, {\Greekmath 0272} \eta, \dots, {\Greekmath 0272}^n \eta $ as independent
variables.
If we note that
\begin{equation*}
\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Phi
}}_{p}}={\Greekmath 0272} ^{p}\rho ,\quad \frac{\partial \mathcal{P} }{\partial
{\boldsymbol{\Psi }}_{p}}={\Greekmath 0272} ^{p}\eta ,
\end{equation*}
we get,
\begin{eqnarray*}
C_{n} &=&\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Psi }}_{\it 1}}\,\vdots\, \frac{\partial {\boldsymbol{\Psi }}_{\it 2}}{\partial \boldsymbol{x}}\right)+\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Psi }}_{\it 2}}\,\vdots\,\frac{\partial {\boldsymbol{\Psi }}_{\it 3}}{\partial \boldsymbol{x}}\right)-\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Psi }}_{\it 1}} \,\vdots\, {\rm{div}}\frac{\partial {\boldsymbol{\Psi }}_{\it 3}}{\partial \boldsymbol{x}}\right)+ \ldots +\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Psi }}_{n-1}}\,\vdots\,\frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}}\right)-\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Psi }}_{n-2}}\,\vdots\, {\rm{div}}\frac{\partial {\boldsymbol{\Psi }}_{\it n}}{\partial \boldsymbol{x}}\right)+ \ldots \\
&+& (-1)^{p-1}\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Psi }}_{n-p}}\,\vdots\, {\rm{div}}_{p-1}\frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}}\right)+{\ldots }+(-1)^{n-2}\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Psi }}_{\it 1}}\,\vdots\,{\rm{div}}_{n-2}\frac{\partial {\boldsymbol{\Psi }}_{n}}{\partial \boldsymbol{x}}\right), \\
D_{n} &=&\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Phi }}_{\it 1}}\,\vdots\, \frac{\partial {\boldsymbol{\Phi }}_{\it 2}}{\partial \boldsymbol{x}}\right)+\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Phi }}_{\it 2}}\,\vdots\,\frac{\partial {\boldsymbol{\Phi }}_{\it 3}}{\partial \boldsymbol{x}}\right)-\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Phi }}_{\it 1}} \,\vdots\, {\rm{div}}\frac{\partial {\boldsymbol{\Phi }}_{\it 3}}{\partial \boldsymbol{x}}\right)+ \ldots +\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Phi }}_{n-1}}\,\vdots\,\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right)-\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Phi }}_{n-2}}\,\vdots\, {\rm{div}}\frac{\partial {\boldsymbol{\Phi }}_{\it n}}{\partial \boldsymbol{x}}\right)+{\ldots } \\
&+& (-1)^{p-1}\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Phi }}_{n-p}}\,\vdots\, {\rm{div}}_{p-1}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right)+{\ldots }+(-1)^{n-2}\left(\frac{\partial \mathcal{P} }{\partial {\boldsymbol{\Phi }}_{\it 1}}\,\vdots\,{\rm{div}}_{n-2}\frac{\partial {\boldsymbol{\Phi }}_{n}}{\partial \boldsymbol{x}}\right)
\end{eqnarray*}
and we obtain,
\begin{theorem}
The system of equations of processes for multi-gradient fluids can be written in
divergence form:
\begin{equation}
\label{System 6}\left\{
\begin{array}{l}
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial R}\right) +\mathrm{div}\left[\frac{\partial (\mathcal{P}\,\boldsymbol{u})}{\partial R}\right]=0 \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Phi}}_{1}}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial {\boldsymbol{\Phi}}_{1}}+\frac{\partial \mathcal{P}}{\partial R}\,\frac{\partial \boldsymbol{u}}{\partial {\boldsymbol{x}}}\right]=0 \\
\displaystyle\qquad\qquad\qquad\vdots \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Phi}}_{n}}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial {\boldsymbol{\Phi}}_{n}}+C_{n}^{1}\left(\frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Phi}}_{n-1}}\,\otimes\,\frac{\partial \boldsymbol{u}}{\partial {\boldsymbol{x}}}\right)+\ldots+C_{n}^{p}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Phi}}_{n-p}}\,\otimes\,\frac{\partial^{p}\boldsymbol{u}}{\partial {\boldsymbol{x}}^{p}}\right)+\ldots+\frac{\partial \mathcal{P}}{\partial R}\,\frac{\partial^{n}\boldsymbol{u}}{\partial {\boldsymbol{x}}^{n}}\right]=0 \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial \tilde T}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial \tilde T}\right]=0 \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Psi}}_{1}}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial {\boldsymbol{\Psi}}_{1}}+\frac{\partial \mathcal{P}}{\partial \tilde T}\,\frac{\partial \boldsymbol{u}}{\partial {\boldsymbol{x}}}\right]=0 \\
\displaystyle\qquad\qquad\qquad\vdots \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Psi}}_{n}}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial {\boldsymbol{\Psi}}_{n}}+C_{n}^{1}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Psi}}_{n-1}}\,\otimes\,\frac{\partial \boldsymbol{u}}{\partial {\boldsymbol{x}}}\right)+\ldots+C_{n}^{p}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Psi}}_{n-p}}\,\otimes\,\frac{\partial^{p}\boldsymbol{u}}{\partial {\boldsymbol{x}}^{p}}\right)+\ldots+\frac{\partial \mathcal{P}}{\partial \tilde T}\,\frac{\partial^{n}\boldsymbol{u}}{\partial {\boldsymbol{x}}^{n}}\right]=0 \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial \boldsymbol{u}}\right) +\mathrm{div}\left[\frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial \boldsymbol{u}}-\frac{\partial \mathcal{P}}{\partial \tilde T}\,\frac{\partial }{\partial \boldsymbol{x}}\left( {\boldsymbol{\Psi}}_{1}-\mathrm{div}_{2}\,{\boldsymbol{\Psi}}_{2}+\ldots+(-1)^{n-1}\mathrm{div}_{n-1}\,{\boldsymbol{\Psi}}_{n-1}\right)\right.\\
\displaystyle\ \quad\qquad\qquad\quad\left. -\,\frac{\partial \mathcal{P}}{\partial \tilde\mu}\,\frac{\partial }{\partial \boldsymbol{x}}\left( {\boldsymbol{\Phi}}_{1}-\mathrm{div}_{2}\,{\boldsymbol{\Phi}}_{2}+\ldots+(-1)^{n-1}\mathrm{div}_{n-1}\,{\boldsymbol{\Phi}}_{n-1}\right)-C_{n}-D_{n}\right]=0,
\end{array}
\right.
\end{equation}
\end{theorem}
\subsection{Symmetric form and stability of constant states}
System (\ref{System 6}) admits constant solutions $(\rho_{e},\ \eta_{e},\
\boldsymbol{u}_{e}$, $\nabla\rho_{e}=0$, $\ldots$, $\nabla^{n}\rho_{e}=0$,
$\nabla\eta_{e}=0$, $\ldots$, $\nabla^{n}\eta_{e}=0)$. Since the governing
equations are invariant under Galilean transformation, we can assume that $
\boldsymbol{u}_{e}=0$.
Near equilibrium, we look for solutions of the linearized
system which are proportional, in the direction $\boldsymbol{k}$, to $\displaystyle e^{i\left(
x-\lambda t\right) }$, where $x$ is the scalar coordinate along
this direction of propagation, $\lambda$ is a constant and $i^{2}=-1$. We
denote by $u$ the scalar component of the velocity $\boldsymbol{u}$
in the direction $\boldsymbol{k}$
($\boldsymbol{u}=u\,\boldsymbol{k}$). We denote
\begin{equation*}
\boldsymbol{U}=\boldsymbol{U}_{0}\,e^{i\left( x-\lambda t\right)},
\end{equation*}
the general form of the perturbations with
\begin{equation*}
\boldsymbol{U}^{\star}=\left[\,R,\ {\boldsymbol{\Phi}}_{1},\ \ldots,\ {\boldsymbol{\Phi}}_{n},\ \tilde T,\ {\boldsymbol{\Psi}}_{1},\ \ldots,\ {\boldsymbol{\Psi}}_{n},\ \boldsymbol{u}\right]\quad
\mathrm{and}\quad
\boldsymbol{U}_{0}^{\star}=\left[\,R_{0},\ {\boldsymbol{\Phi}}_{10},\ \ldots,\ {\boldsymbol{\Phi}}_{n0},\ \tilde T_{0},\ {\boldsymbol{\Psi}}_{10},\ \ldots,\ {\boldsymbol{\Psi}}_{n0},\ \boldsymbol{u}_{0}\right].
\end{equation*}
We obtain
\begin{equation*}
\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial \boldsymbol{U}}\right)_{e}=\frac{\partial }{\partial \boldsymbol{U}}\left( \frac{\partial \mathcal{P}}{\partial \boldsymbol{U}}\right)_{e}\frac{\partial \boldsymbol{U}}{\partial t}=-i\lambda\,\frac{\partial }{\partial \boldsymbol{U}}\left( \frac{\partial \mathcal{P}}{\partial \boldsymbol{U}}\right)_{e}\,\boldsymbol{U}_{0}\,e^{i\left( x-\lambda t\right)},
\end{equation*}
where subscript $e$ means the values at equilibrium and we denote
\begin{equation*}
\boldsymbol{G}\equiv \frac{\partial }{\partial \boldsymbol{U}}\left( \frac{\partial\,\mathcal{P}\boldsymbol{u}}{\partial \boldsymbol{U}}\right)_{e}^{\star}.
\end{equation*}
From
\begin{equation*}
\mathrm{div}\left( \frac{\partial\,\mathcal{P}\boldsymbol{u}}{\partial \boldsymbol{U}}\right) =\frac{\partial }{\partial x}\left( \frac{\partial\,\mathcal{P}\boldsymbol{u}}{\partial \boldsymbol{U}}\right)^{\star}=\frac{\partial }{\partial \boldsymbol{U}}\left( \frac{\partial\,\mathcal{P}\boldsymbol{u}}{\partial \boldsymbol{U}}\right)^{\star}\frac{\partial\,\boldsymbol{U}}{\partial x},
\end{equation*}
we get
\begin{equation*}
\mathrm{div}\left( \frac{\partial\,\mathcal{P}\boldsymbol{u}}{\partial \boldsymbol{U}}\right)_{e}=i\,\boldsymbol{G}\,\boldsymbol{U}_{0}\,e^{i\left( x-\lambda t\right)}.
\end{equation*}
At equilibrium, $\nabla\rho_{e}=0,\ \ldots,\ \nabla^{n}\rho_{e}=0$,
$\nabla\eta_{e}=0,\ \ldots,\ \nabla^{n}\eta_{e}=0$, which
implies
\begin{equation*}
\left( \frac{\partial\,\mathcal{P}\boldsymbol{u}}{\partial {\boldsymbol{\Phi}}_{1}}\right)_{e}=0,\ \ldots,\ \left( \frac{\partial\,\mathcal{P}\boldsymbol{u}}{\partial {\boldsymbol{\Phi}}_{n}}\right)_{e}=0,\quad \left( \frac{\partial\,\mathcal{P}\boldsymbol{u}}{\partial {\boldsymbol{\Psi}}_{1}}\right)_{e}=0,\ \ldots,\ \left( \frac{\partial\,\mathcal{P}\boldsymbol{u}}{\partial {\boldsymbol{\Psi}}_{n}}\right)_{e}=0,
\end{equation*}
\begin{equation*}
\left( \frac{\partial\,\mathcal{P}}{\partial R}\right)_{e}\frac{\partial^{p}\boldsymbol{u}}{\partial x^{p}}=\rho_{e}\frac{\partial^{p}\boldsymbol{u}}{\partial x^{p}}=i^{p}\rho_{e}\,\boldsymbol{u}_{0}\,e^{i\left( x-\lambda t\right)}\quad \mathrm{and}\quad \left( \frac{\partial\,\mathcal{P}}{\partial \tilde T}\right)_{e}\frac{\partial^{p}\boldsymbol{u}}{\partial x^{p}}=\eta_{e}\frac{\partial^{p}\boldsymbol{u}}{\partial x^{p}}=i^{p}\eta_{e}\,\boldsymbol{u}_{0}\,e^{i\left( x-\lambda t\right)}.
\end{equation*}
Due to
\begin{eqnarray*}
\left( \frac{\partial \mathcal{P}}{\partial R}\right)_{e} &=&\rho_{e},\quad
\frac{\partial }{\partial x}\left[ (-1)^{p}\,\mathrm{div}_{p-1}\,{\boldsymbol{\Phi}}_{p}\right] =(-1)^{p}i^{p}\,{\boldsymbol{\Phi}}_{p0}\,e^{i\left( x-\lambda t\right)}, \\
\left( \frac{\partial \mathcal{P}}{\partial \tilde T}\right)_{e} &=&\eta_{e},\quad
\frac{\partial }{\partial x}\left[ (-1)^{p}\,\mathrm{div}_{p-1}\,{\boldsymbol{\Psi}}_{p}\right] =(-1)^{p}i^{p}\,{\boldsymbol{\Psi}}_{p0}\,e^{i\left( x-\lambda t\right)},
\end{eqnarray*}
the last equation in system (\ref{System 6}) becomes
\begin{eqnarray*}
&&\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial \boldsymbol{u}}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial \boldsymbol{u}}\right] -\rho_{e}\left( i^{2}{\boldsymbol{\Phi}}_{1}-i^{3}{\boldsymbol{\Phi}}_{2}+\ldots+(-1)^{n+1}i^{n+1}{\boldsymbol{\Phi}}_{n}\right) \\
&&\qquad-\,\eta_{e}\left( i^{2}{\boldsymbol{\Psi}}_{1}-i^{3}{\boldsymbol{\Psi}}_{2}+\ldots+(-1)^{n+1}i^{n+1}{\boldsymbol{\Psi}}_{n}\right) =0.
\end{eqnarray*}
We denote
\begin{equation*}
\boldsymbol{A}=\frac{\partial }{\partial \boldsymbol{U}}\left( \frac{\partial\,\mathcal{P}}{\partial \boldsymbol{U}}\right)_{e}^{\star},
\end{equation*}
which is a symmetric matrix. From the relations
\begin{equation*}
\mathrm{div}\left(i^{p}\rho_{e}\boldsymbol{u}\right) =i^{p+1}\rho_{e}\,\boldsymbol{u}_{0}\,e^{i\left( x-\lambda t\right)}=i^{p+1}\rho_{e}\,\boldsymbol{u}\quad \mathrm{and}\quad \mathrm{div}\left(i^{p}\eta_{e}\boldsymbol{u}\right) =i^{p+1}\eta_{e}\,\boldsymbol{u}_{0}\,e^{i\left( x-\lambda t\right)}=i^{p+1}\eta_{e}\,\boldsymbol{u},
\end{equation*}
System (\ref{System 6}) can be written as
\begin{equation*}
\left\{
\begin{array}{l}
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial \tilde\mu}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial \tilde\mu}\right]=0 \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Phi}}_{1}}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial {\boldsymbol{\Phi}}_{1}}\right] +i^{2}\rho_{e}\boldsymbol{u}=0 \\
\displaystyle\qquad\qquad\vdots \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Phi}}_{n}}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial {\boldsymbol{\Phi}}_{n}}\right] +i^{n+1}\rho_{e}\boldsymbol{u}=0 \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial \tilde T}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial \tilde T}\right]=0 \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Psi}}_{1}}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial {\boldsymbol{\Psi}}_{1}}\right] +i^{2}\eta_{e}\boldsymbol{u}=0 \\
\displaystyle\qquad\qquad\vdots \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial {\boldsymbol{\Psi}}_{n}}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial {\boldsymbol{\Psi}}_{n}}\right] +i^{n+1}\eta_{e}\boldsymbol{u}=0 \\
\displaystyle\frac{\partial }{\partial t}\left( \frac{\partial \mathcal{P}}{\partial \boldsymbol{u}}\right) +\mathrm{div}\left[ \frac{\partial\left( \mathcal{P}\,\boldsymbol{u}\right)}{\partial \boldsymbol{u}}\right] -\rho_{e}\left(i^{2}{\boldsymbol{\Phi}}_{1}-i^{3}{\boldsymbol{\Phi}}_{2}+\ldots+(-1)^{n+1}i^{n+1}{\boldsymbol{\Phi}}_{n}\right)-\eta_{e}\left( i^{2}{\boldsymbol{\Psi}}_{1}-i^{3}{\boldsymbol{\Psi}}_{2}+\ldots+(-1)^{n+1}i^{n+1}{\boldsymbol{\Psi}}_{n}\right)=0,
\end{array}
\right.
\end{equation*}
which can be written in the form
\begin{equation}
-i\lambda\,\boldsymbol{A}\,\boldsymbol{U}+i\,\boldsymbol{G}\,\boldsymbol{U}+i^{2}\,\boldsymbol{C}\,\boldsymbol{U}=0,\label{vp}
\end{equation}
where $\boldsymbol{C}$ is a matrix with $2(n+1)+1$ rows and
$2(n+1)+1$ columns, which can be written as
\begin{equation*}
\boldsymbol{C}=\left[
\begin{array}{ccccccccc}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & i^{2}\rho_{e} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & i^{n+1}\rho_{e} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & i^{2}\eta_{e} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & i^{n+1}\eta_{e} \\
0 & -(-i)^{2}\rho_{e} & \ldots & -(-i)^{n+1}\rho_{e} & 0 & -(-i)^{2}\eta_{e} & \ldots & -(-i)^{n+1}\eta_{e} & 0
\end{array}
\right],
\end{equation*}
and Eq.~\eqref{vp} becomes:
\begin{equation*}
i\, \left( \boldsymbol{G}+i\ \boldsymbol{C}-\lambda
~\boldsymbol{A}\right) \boldsymbol{U}_{\it 0}\, e^{i\left( x-\lambda
t\right)}=0.
\end{equation*}
Due to $\overline{i\,\boldsymbol{C}}^{\;\star }=i\,\boldsymbol{C}$, the matrix
$i\,\boldsymbol{C}$ is a Hermitian operator; consequently, $\boldsymbol{K}=\boldsymbol{G}+i\,\boldsymbol{C}$ is also a Hermitian operator, while $\boldsymbol{A}$ is symmetric. The $\lambda$-roots of
\begin{equation*}
\left( \boldsymbol{K}-\lambda ~\boldsymbol{A}\right) \boldsymbol{U}_{\it 0}=0,
\end{equation*}
are the solutions of the characteristic equation,
\begin{equation*}
\det \left( \boldsymbol{K}-\lambda ~\boldsymbol{A}\right) =0,
\end{equation*}
where $\boldsymbol{U}_{0}$ is the eigenvector associated with the eigenvalue $
\lambda$. Near an equilibrium state, when the Legendre transformation $\mathcal{P}$ of the energy $E$ is locally convex, $
\boldsymbol{A}$ is a positive definite matrix and the
eigenvalues $\lambda$ are real. Consequently,
\begin{corollary}
When $E$ is locally convex, the perturbations
$\boldsymbol{U}_{0}\,e^{i \left( x-\lambda t\right)}$ are stable
and the corresponding modes are dispersive.
\end{corollary}
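As a quick numerical illustration of this stability criterion (our own sketch,
not part of the derivation; the dimension and matrix entries below are
arbitrary assumptions), one can generate a symmetric positive-definite
$\boldsymbol{A}$ and a Hermitian $\boldsymbol{K}=\boldsymbol{G}+i\,\boldsymbol{C}$
and verify that the generalized eigenvalues $\lambda$ are real:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
m = 7                                # toy size, stands in for 2(n+1)+1

B = rng.standard_normal((m, m))
A = B @ B.T + m * np.eye(m)          # symmetric, positive definite

G = rng.standard_normal((m, m))
G = 0.5 * (G + G.T)                  # real symmetric
S = rng.standard_normal((m, m))
iC = 1j * (S - S.T)                  # (iC)^H = iC: Hermitian
K = G + iC

lam = eigh(K, A, eigvals_only=True)  # generalized Hermitian eigenproblem
print(lam)                           # all eigenvalues are real
\end{verbatim}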
\section{Conclusion}
We have extended the case of capillary fluids \cite{Gavrilyuk2,Gavage} to the most general case of
multi-gradient fluids in density and volumetric entropy. These fluids can be represented by a hyperbolic--parabolic system of equations.
The
divergence form of the governing equations yields a system of Hermitian-symmetric equations constituting the most general dispersive model of conservative fluids. The perturbations are stable in the domains where the total volumetric internal energy is a convex function of the main field of new variables. Multi-gradient fluids share common properties with simple systems of classical conservative fluids \cite{Gouin,Serrin}, and they correspond to fluid media typified by first integrals represented by Kelvin's theorems \cite{Gouin3}.
{\bigskip }
\parindent 0pt \textbf{Acknowledgments}: The author thanks the National Group of Mathematical Physics GNFM-INdAM for its support as visiting professor at the Department of Mathematics of the University of Bologna.
{\bigskip }
\section{Introduction}
The layered perovskite Mn oxide La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$
(LSMO327), in which MnO$_{2}$ double layers and (La,Sr)$_{2}$O$_{2}$ blocking
layers are stacked alternately, attracts much attention as
another class of colossal magnetoresistance (CMR) system. Possibly due to the
reduced dimensionality, this system exhibits an extremely large MR
at the hole concentration $x=0.4$\cite{Y.Moritomo_96} and a tunneling
MR phenomenon at $x=0.3$\cite{T.Kimura_96} around $T_{C}$.
Neutron-scattering study on LSMO327 single crystals ($x=0.40-0.48$) by Hirota
{\it et al.}\cite{K.Hirota_98} has revealed that the low-temperature magnetic
structure consists of planar ferromagnetic (FM) and A-type antiferromagnetic
(AFM) components, indicating a canted AF structure, where the
canting angle between neighboring planes changes from
6.3$^{\circ}$ at $x=0.40$ (nearly planar FM) to 180$^{\circ}$ at $x=0.48$
(A-type AF).\cite{K.Hirota_98} The existence of the canted AFM structure is
consistent with previous studies focusing upon the structural
properties.\cite{J.F.Mitchell_96,D.N.Argyriou_97} Kubota {\it et
al.}\cite{M.Kubota_98} carried out a comprehensive powder neutron-scattering work
and established the magnetic phase diagram for $x=0.30-0.50$; there is a planar
FM phase between $x=0.32$ and $0.38$, which is smoothly connected to the canted
AFM region. To understand the magnetic properties of LSMO327 in more detail, it
is necessary to study the excitation spectra, from which one can determine
magnetic interaction strengths and spin-spin correlation lengths.
\section{Theoretical Model}
Figure~\ref{Fig:Structure} shows the magnetic spin arrangement on Mn ions in the
tetragonal $I4/mmm$ cell of La$_{1.2}$Sr$_{1.8}$Mn$_{2}$O$_{7}$. Although there
is a small canting between neighboring layers within a double-layer at
$x=0.40$, we assume a simple planar ferromagnet, which is
sufficient for describing the magnetic interactions for $x \leq 0.40$.
We expect that the dominant spin-spin interactions should occur between
nearest-neighbor (NN) Mn atoms, though the in-plane interaction $J_{\parallel}$
and the {\em intra}-bilayer interaction $J_{\perp}$ might be different.
Although there is no super-exchange coupling between layers belonging to
different double-layers, there will be the {\em inter}-bilayer interaction
$J'$ through a direct exchange. However, it is supposed to be much weaker than
$J_{\parallel}$ and $J_{\perp}$, thus we neglect it in our simple model
calculation.
\begin{figure}
\begin{center}
\BoxedEPSF{figure1.eps scaled 600}
\vspace{0.5cm}
\end{center}
\caption{The magnetic spin arrangement on Mn ions in the $I4/mmm$ tetragonal
cell of La$_{1.2}$Sr$_{1.8}$Mn$_{2}$O$_{7}$. The lattice parameters are
$a=b=3.87$ and $c=20.1$~\AA~at 10~K.\protect\cite{K.Hirota_98}}
\label{Fig:Structure}
\end{figure}
Let $l=A,B,C,D$ label the four different layers as indicated in
Fig.~\ref{Fig:Structure}. The spin Hamiltonian can then be written in the
Heisenberg form as
\begin{eqnarray} {\cal H} & = & \frac{1}{2}\sum_{i}\sum_{l}\sum_{\delta}
J_{il\delta}{\bf S}_{i}^{l}\cdot {\bf S}_{\delta}^{l} \nonumber \\ & = &
\frac{1}{2}\sum_{i}\sum_{\delta}\left[ {\bf S}_{i}^{A}\left\{J_{\parallel}{\bf
S}_{\delta}^{A}+J_{\perp}{\bf S}_{\delta}^{B}\right\}+ {\bf
S}_{i}^{B}\left\{J_{\parallel}{\bf S}_{\delta}^{B}+J_{\perp}{\bf
S}_{\delta}^{A}\right\}\right.\nonumber \\ & &\left. +{\bf
S}_{i}^{C}\left\{J_{\parallel}{\bf S}_{\delta}^{C}+J_{\perp}{\bf
S}_{\delta}^{D}\right\} +{\bf S}_{i}^{D}\left\{J_{\parallel}{\bf
S}_{\delta}^{D}+J_{\perp}{\bf S}_{\delta}^{C}\right\}
\right],
\label{eq:Hamiltonian}
\end{eqnarray}
where $i$ denotes a unit cell and $\delta$ indicates NN sites corresponding
to a particular interaction, $J_{\parallel}$ or $J_{\perp}$. Following the
standard approach, we make the Holstein-Primakoff
transformation\cite{T.Holstein_40} to boson creation and annihilation operators
$a_{n}^{\dagger},a_{n}, b_{n}^{\dagger},b_{n}, c_{n}^{\dagger},c_{n},
d_{n}^{\dagger},d_{n}$, which correspond to $A, B, C, D$ layers. By
Fourier transforming to reciprocal space and performing diagonalization, we
obtain the following dispersion relations as the eigenvalues:
\begin{eqnarray}
\hbar\omega ({\bf q}) & = & -2J_{\parallel}S\left(2-\cos aq_{x}-\cos
aq_{y}\right) \nonumber \\
& & - J_{\perp}S\left\{1 \mp |\exp\left(-i2zcq_{z}\right)|\right\}
\nonumber \\
& = & -2J_{\parallel}S\left(2-\cos aq_{x}-\cos aq_{y}\right)
- J_{\perp}S\left(1 \mp 1\right),
\label{eq:Dispersion}
\end{eqnarray}
where $a$ and $c$ are the lattice constants and $2zc$ is the
distance between layers within a double-layer. Although there should be four
different modes, these reduce to two modes, i.e., acoustic (A) and
optical (O), when the inter-bilayer coupling $J'$ is neglected. Note that both
$J_{\parallel}$ and $J_{\perp}$ are negative because they are FM interactions.
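As an illustration (our own sketch, not part of the original analysis),
Eq.~\ref{eq:Dispersion} is easily evaluated numerically; the exchange values
below anticipate the fitted values reported later, and the $q$ grid is an
arbitrary assumption:
\begin{verbatim}
import numpy as np

a = 3.87          # in-plane lattice constant (Angstrom)
JparS = -10.1     # J_parallel * S (meV); negative because FM
JperpS = -3.1     # J_perp * S (meV)

def hbar_omega(qx, qy, branch='A'):
    """Eq. (2): acoustic ('A') or optical ('O') spin-wave energy in meV."""
    e = -2.0 * JparS * (2.0 - np.cos(a * qx) - np.cos(a * qy))
    if branch == 'O':
        e += -2.0 * JperpS       # optical gap: -J_perp*S*(1+1)
    return e

h = np.linspace(0.0, 0.25, 6)    # [h 0 0] in r.l.u.
qx = 2.0 * np.pi * h / a
print(hbar_omega(qx, 0.0, 'A'))
print(hbar_omega(qx, 0.0, 'O'))
\end{verbatim}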
By using a unitary matrix diagonalizing the Hamiltonian
Eq.~\ref{eq:Hamiltonian}, we obtain the differential scattering cross section
for spin waves in LSMO327 with $x=0.40$
\begin{eqnarray}
\frac{d^{2}\sigma}{d\Omega dE_{f}} & = &
\left(\frac{\gamma e^{2}}{mc^{2}}\right)^{2} \frac{k_{f}}{k_{i}}
\left\{\frac{1}{2} g f(Q)\right\}^{2} \left(1+\hat{Q}_{x}^{2}\right) e^{-2W(Q)}
\nonumber \\
& \times &
\frac{S}{2} \frac{(2\pi)^{3}}{v_{0}}\frac{4}{N} \sum_{m}\sum_{\bf qG}
\left(n_{q}^{(m)}+\frac{1}{2}\pm\frac{1}{2}\right)
\\
& \times &
\delta (\hbar\omega_{q}^{(m)}\mp\hbar\omega_{q})
\delta ({\bf Q}\mp{\bf q}-{\bf G}) \left\{1\pm\cos(2zc\cdot Q_{z})\right\}
\nonumber
\protect\label{eq:CrossSection}
\end{eqnarray}
where $\gamma$ is the gyromagnetic ratio of the neutron, $f(Q)$ is the
magnetic form factor for a Mn ion, $\exp[-2W(Q)]$ is a Debye-Waller factor,
$\hat{Q}_{x}=Q_{x}/Q$, $n_{q}^{(m)}$ is the Bose factor and $m$ denotes a
mode. Since $2zc$ is very close to $a \approx c/5$, the A-branch has
maximum intensity at $l=5n$ ($n$: integer), while the phase of the O-branch is shifted
by $\pi$.
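This $l$-modulation can be checked directly; a minimal sketch (ours), taking
$2zc \simeq c/5$:
\begin{verbatim}
import numpy as np

c = 20.1                   # c-axis lattice constant (Angstrom)
two_zc = c / 5.0           # intra-bilayer Mn-Mn distance (approximate)
for l in (0.0, 2.5, 5.0):
    Qz = 2.0 * np.pi * l / c
    w_ac = 1.0 + np.cos(two_zc * Qz)   # acoustic weight
    w_op = 1.0 - np.cos(two_zc * Qz)   # optical weight
    print(l, w_ac, w_op)   # A peaks at l = 0 and 5; O peaks at l = 2.5
\end{verbatim}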
\section{Experimental}
La$_{1.2}$Sr$_{1.8}$Mn$_{2}$O$_{7}$ powder was prepared
by solid-state reaction using prescribed amounts of pre-dried
La$_{2}$O$_{3}$ (99.9\%), Mn$_{3}$O$_{4}$ (99.9\%), and SrCO$_{3}$ (99.99\%).
The powder mixture was calcined in air for 3 days at 1250$^{\circ}$C
--1400$^{\circ}$C with frequent grindings. The calcined powder was then pressed
into a rod and heated at 1450$^{\circ}$C for 24~h. Single crystals were
melt-grown in flowing 100\% O$_{2}$ in a floating zone optical image furnace
with a travelling speed of 15 mm/h. To check the sample homogeneity, we
powderized a part of single crystal and performed x-ray diffraction, which
shows no indication of impurities.
Neutron-scattering measurements were carried out on the triple-axis spectrometer
TOPAN located in the JRR-3M reactor of the Japan Atomic Energy Research
Institute (JAERI). The $(0\ 0\ 2)$ reflection of pyrolytic graphite (PG) was used
to monochromate and analyze the neutron beam, together with a PG filter to
eliminate higher order contamination. The spectrometer
was set up in two conditions in the standard triple-axis mode, typically with
the fixed final energy at 13.5~meV and the horizontal collimation of
B-100$'$-S-100$'$-B. The sample was mounted in an Al can so as to give
the ($h\ 0\ l$) zone in the tetragonal
$I4/mmm$ notation. We studied the same crystal used in
Ref.~\onlinecite{K.Hirota_98} (F-40), which is a single grain with mosaic spread
of $\sim 0.3^{\circ}$ full width at half maximum (FWHM).
\begin{figure}
\begin{center}
\BoxedEPSF{figure2.eps scaled 500}
\vspace{0.5cm}
\end{center}
\caption{The dispersion relations of spin waves at 10~K. Error bars correspond
to the FWHM of peak profiles. Open circle and square indicate the
acoustic branch, and solid triangle indicates the optical branch. Solid and
dotted curves are obtained by fitting to Eq.~\protect\ref{eq:Dispersion} for $0 < q
\leq 0.25$~r.l.u.}
\label{Fig:Dispersion}
\end{figure}
\section{Results and Discussions}
The spin-wave dispersions along $[h\ 0\ 0]$ were measured at 10~K around $(1\ 0\
0)$ and $(1\ 0\ 5)$ for the A-branch, and $(1\ 0\ 2.5)$ for the O-branch, as
shown in Fig.~\ref{Fig:Dispersion}. Error bars correspond to the FWHM
of peak profiles. By fitting all the data points for $0 < q \leq 0.25$~r.l.u.\
simultaneously, we obtain $-J_{\parallel}S = 10.1$~meV and
$-J_{\perp}S=3.1$~meV.
To quantitatively examine the present model, we measured the $l$-dependence of
the spin-wave intensities of A and O branches at a fixed transfer energy $\Delta
E=E_{i}-E_{f}=5$~meV. As shown in Fig.~\ref{Fig:CrossSection}, the differential
scattering cross section, Eq.~3, is in excellent agreement
with both the A $(1.1\ 0\ l)$ and O $(1.0\ 0\ l)$ branches. Note that we do
not use any fitting parameters except for intensity scaling.
\begin{figure}
\begin{center}
\BoxedEPSF{figure3.eps scaled 500}
\vspace{0.5cm}
\end{center}
\caption{ The $l$-dependence of the constant-$E$ scan. The solid and open
circles indicate intensities of the acoustic and optical branches, respectively.
The solid curve corresponds to the fit with Eq.~3. The acoustic branch is
dominant at (1~0~0) and (1~0~5), while the optical branch is dominant at (1~0~2.5).
}
\label{Fig:CrossSection}
\end{figure}
The results show that the spin-spin correlations are significantly anisotropic. The inter-bilayer interaction is too small to be detected. The ratio of the {\em
intra}-bilayer to the in-plane interaction, $J_{\perp}
/J_{\parallel}$, is about 0.31. We speculate that the $x^{2}-y^{2}$ orbital is
dominant in the Mn $e_{g}$ band, which enhances the double-exchange, i.e.,
ferromagnetic, interactions within a plane. A close relation between the
magnetism and the Mn $e_{g}$ orbital degree of freedom has been also pointed out
by recent studies.\cite{K.Hirota_98,Y.Moritomo_97,Y.Moritomo_98} The in-plane spin wave
stiffness constant $D=-J_{\parallel}Sa^2$ is about 151~meV\AA$^2$, which is
comparable to that of the nearly cubic perovskite
La$_{1-x}$Sr$_{x}$MnO$_{3}$ ($x=0.2-0.3$), whose $D$ values are
188~meV\AA$^2$ ($x=0.3$, $T_{C}=370$~K) and 120~meV\AA$^2$ ($x=0.2$, $T_{C}=310$~K)
~\cite{Hirota_97,Martin_96}. $T_{C}$ (120~K in our material) is strongly reduced,
indicating a large renormalization due to low dimensionality.
We noticed that the energy-width of constant-$Q$ scan profile becomes broad at
large $q$, particularly $q>0.25$~r.l.u. In the same high $q$ range, the peak
position starts deviating from the dispersion curve obtained from small $q$
data using a conventional Heisenberg model Eq.~\ref{eq:Hamiltonian}. Similar
kind of broadening and deviation are seen in other CMR systems, such as
Nd$_{0.7}$Sr$_{0.3}$MnO$_{3}$,\cite{Baca_98} which has a narrower
electronic band-width than La$_{0.7}$Sr$_{0.3}$MnO$_{3}$. Although it is not
clear whether electron-phonon coupling plays a significant role in such anomalies
in LSMO327 as suggested in Nd$_{0.7}$Sr$_{0.3}$MnO$_{3}$, it would be
interesting to study the relation between structural and magnetic properties,
particularly in their dynamics.
\section{Acknowledgments}
The authors thank S. Ishihara and S. Okamoto for constructing the theoretical model.
This work was supported by a Grant-in-Aid for Scientific Research of MONBUSHO.
\section{Introduction}
The nucleon-nucleus interaction is known to be nonlocal in
nature \cite{bethe,frahn1,frahn2}. This nonlocal character arises
because of the many-body effects such as the virtual excitations in
the nucleus and the exchange of the nucleons within the interacting
system \cite{frahn1,frahn2,lemere,balan97}. The explicit use of the nonlocal
interaction framework, therefore, enriches the theoretical description
of the scattering process. Its incorporation, however, leads to the
integro-differential form of the Schr\"{o}dinger equation, which is
difficult to solve. It is written as:
\begin{equation}
\left[\frac{\hbar^2}{2\mu}\nabla^2 + E + V_{SO}{\bf L}\cdot{\boldsymbol \sigma}
\right]\Psi({\bf r}) = \int V({\bf r},{\bf r^\prime})\Psi({\bf r^\prime})
d{\bf r^\prime}
\label{eq1}
\end{equation}
where $\mu$ is the reduced mass of the nucleon-nucleus system, $E$
is the center of mass energy, ($V_{SO}\,{\bf L}\cdot{\boldsymbol \sigma}$) is
the local spin-orbit interaction, $V({\bf r},{\bf r^\prime})$ is the
nonlocal interaction kernel and $\Psi({\bf r})$ is the scattering
wave function.
Any effort to solve Eq.(\ref{eq1}) requires the explicit form of
$V({\bf r},{\bf r^\prime})$, which, unfortunately, is not known. However,
it is expected that this form should be such that in the limit of
vanishing nonlocality, the integro-differential equation reduces to
the conventional homogeneous equation. Guided by this idea, a factorized
form for the nonlocal kernel was proposed by Frahn and Lemmer
\cite{frahn1,frahn2}, which is written as
\begin{equation}
V({\bf r},{\bf r^\prime})= \frac{1}{\pi^{3/2}\beta^3}\,
{\rm exp}\left[-\,\frac{\left|{\bf r} - {\bf r^\prime}\right|^2}{\beta^2}
\right] U \left(\frac{\left|{\bf r} + {\bf r^\prime}\right|}{2}\right)
\label{eq2}
\end{equation}
where $\beta$ is the range parameter and $U$ is the nonlocal interaction.
The term with $\beta$ represents the behaviour of nonlocality. It reduces
to the Dirac $\delta$ function in the limit of vanishing nonlocality.
This prescription has been used by several groups, notable
amongst them being the work of Perey and Buck \cite{pb} and Tian {\it et al.}
\cite{tpm15}. The important aspect of the former study \cite{pb} is the
construction of the energy dependent local equivalent potential which
can be used in the homogeneous Schr\"{o}dinger equation. This result is
obtained by using the gradient approximation, which is found to be reliable
for tightly bound nuclei. However, for nuclei away from the stability line
its validity might be suspect. In the other study, Tian {\it et al.}
\cite{tpm15} have treated nonlocality by solving Eq.(\ref{eq1}) using
the Lanczos method \cite{kimUd1,kimUd2}.
Other approaches to obtain efficient solution to Eq.(\ref{eq1}) with a
general nonlocal kernel include writing the nonlocal kernel in separable
form \cite{ali,ahmad}, expanding nonlocal kernel in terms of Chebyshev
polynomials \cite{raw3}, to name a few. Recently a microscopic approach
to address nonlocality has been developed by Rotureau {\it et al.}
\cite{rot17} wherein the nonlocal optical potential for nucleon-nucleus
scattering is constructed from chiral interactions. These methods,
however, might have computational limitations in analyzing the
nucleon-nucleus scattering data routinely.
In this article we present a readily implementable technique to solve
the integro-differential Schr\"{o}dinger equation. With a very simple
approximation, this technique reduces the integro-differential equation
to a homogeneous differential equation. This is achieved by using the
mean value theorem (MVT) of integral calculus \cite{mvt}. Application
of the MVT converts the nonlocal interaction to a local form. This
local potential is energy independent, but depends upon relative
angular momentum ($l$). The important aspect of this method is that
it does not depend upon any particular choice of the nonlocal form
factor and computationally it is very tractable.
The paper is organized as follows: In Section 2 we present the
MVT technique used to reduce the integro-differential equation to a
homogeneous one and identify the requirements for its applicability.
The accuracy of the solution of the homogeneous equation is established
in Section 3. As our method is applicable to any choice of the
nonlocal form factor, we study its applicability for different choices
in Section 4. Finally, Section 5 is devoted to the comparison of the
predictions of our method with experimental observables like total
and differential cross sections. As illustrations, we have studied
neutrons scattering off $^{12}$C, $^{56}$Fe and $^{100}$Mo targets.
\section{Method to solve the integro-differential equation}
For the present, dropping the spin-orbit term in Eq.(\ref{eq1}) we write
partial wave expansion for the scattering wave function, $\Psi$, and
the nonlocal potential, $V({\bf r},{\bf r^\prime})$, as
\begin{eqnarray}
&&\Psi({\bf r})\,=\,\sum_{l,m_l} i^l \,\frac{u(l;r)}{r}\,Y_{lm_l}(\Omega_r)
\,\,\,\,\,{\rm and}
\label{eq3}
\\
&&V({\bf r},{\bf r^\prime})\,=\,\sum_{l^\prime,m^\prime_l}
\frac{g(l^\prime;r,r^\prime)}{rr^\prime} \,Y_{l^\prime m^\prime_l}(\Omega_r)\,
Y^*_{l^\prime m^\prime_l}(\Omega_{r^\prime})
\label{eq4}
\end{eqnarray}
respectively. The resulting Schr\"{o}dinger equation for the
$l^{\rm th}$ partial wave is
\begin{equation}
\left[\frac{d^2}{dr^2} - \frac{l(l+1)}{r^2} +
\frac{2\mu\,E}{\hbar^2} \right] u(l;r)\,=\,
\frac{2\mu}{\hbar^2}\int_0^{r_m} g(l;r,r^\prime)
u(l;r^\prime)\,dr^\prime\,.
\label{eq5}
\end{equation}
The upper limit of the integration over nonlocal kernel,
$\displaystyle{g(l;r,r^\prime)}$, is the matching radius ($r_m$)
at which its contribution to the integral becomes negligible.
The nonlocal kernel for $l^{\rm th}$ partial wave is written as
\begin{eqnarray}
&& g(l;r,r^\prime)=\left(\frac{2r r^\prime}{\sqrt{\pi}\,\beta^3}\right)
{\rm exp}\left(\frac{-r^2 - {r^\prime}^2}{\beta^2}\right)
\label{eq6}
\\
\nonumber
&&\hspace{0.5cm}\times
\int_{-1}^{1} U \left(\frac{|{\bf r}+{\bf r^\prime}|}{2}\right)
{\rm exp}\left(\frac{2rr^\prime {\rm cos}\,\theta}{\beta^2}\right)
P_l\left({\rm cos}\,\theta\right) d\left({\rm cos}\,\theta\right),
\end{eqnarray}
where $\theta$ is the angle between the vectors ${\bf r}$ and
${\bf r^\prime}$ \cite{pb}. In the literature \cite{pb}, $|{\bf r}
+{\bf r^\prime}|$ is approximated as $(r+r^\prime)$ leading to
\begin{eqnarray}
g(l;r,r^\prime)&=&\left(\frac{2r r^\prime}{\sqrt{\pi}\,\beta^3}\right)
{\rm exp}\left(\frac{-r^2 - {r^\prime}^2}{\beta^2}\right)
U \left(\frac{r+r^\prime}{2}\right)
\label{eq7}
\\
\nonumber
&&\hspace{1.5cm}\times
\int_{-1}^{1} {\rm exp}\left(\frac{2rr^\prime {\rm cos}\,\theta}
{\beta^2}\right) P_l\left({\rm cos}\,\theta\right) d\left({\rm cos}\,
\theta\right)\,.
\end{eqnarray}
However, in the present work we do not use the above approximation. The
integral appearing in Eq.(\ref{eq6}) is evaluated numerically. Since
Eq.(\ref{eq7}) is used very often in the literature, the effect of the
approximation leading to it is examined explicitly in Section 3.2.
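For concreteness, a minimal sketch (ours; the quadrature order is an arbitrary
assumption) of this numerical evaluation of Eq.(\ref{eq6}) by Gauss--Legendre
quadrature in $\cos\theta$ reads:
\begin{verbatim}
import numpy as np

beta = 0.90   # nonlocality range (fm)

def kernel(l, r, rp, U, npts=64):
    """g(l; r, r') of Eq. (6) by Gauss-Legendre quadrature in cos(theta)."""
    x, w = np.polynomial.legendre.leggauss(npts)       # nodes/weights on [-1, 1]
    Pl = np.polynomial.legendre.Legendre.basis(l)(x)   # P_l(cos(theta))
    s = np.sqrt(r**2 + rp**2 + 2.0 * r * rp * x)       # |r + r'|
    # Combine both exponentials to avoid overflow at large r, r'.
    expo = (2.0 * r * rp * x - r**2 - rp**2) / beta**2
    integ = np.sum(w * U(0.5 * s) * np.exp(expo) * Pl)
    return 2.0 * r * rp / (np.sqrt(np.pi) * beta**3) * integ
\end{verbatim}
Here \texttt{U} is any (complex) radial potential passed as a callable.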
\subsection{The nonlocal potential, $U(r)$}
The construction of the non-local kernel requires the nucleon-nucleus
potential, $U(r)$. Since in the present work our aim is to propose a new
treatment of nonlocality and establish its correctness, we prefer to have
a well-established prescription for $U(r)$ that is applicable to most of
the nuclei. The primary choice is the conventional Wood-Saxon form commonly
used in the local optical model calculations:
\begin{equation}
U(x) = -\,\left(V_r\,f_r(x)\,+\,i W_i\,f_i(x)\,+\,i W_d\,f_d(x)
\right)
\label{eq8}
\end{equation}
\begin{equation}
\eqalign{
{\rm with}\,\,\,f_y(x) = \left[1\,+\,{\rm exp}\left(\frac{x-R_y}{a_y}
\right)\right]^{-1}\,\,\,\,y\,=\,r\,\,{\rm and}\,\,i
\cr
~~~~~~~f_d(x) = 4\,{\rm exp}\left(\frac{x-R_d}{a_d}\right)\,
\left[1\,+\,{\rm exp}\left(\frac{x-R_d}{a_d}\right)\right]^{-2}
}
\label{eq9}
\end{equation}
Recently Tian, Pang and Ma \cite{tpm15} have obtained a new set of
parameters for this type of potential by fitting the nucleon scattering
data on nuclei ranging from $^{27}$Al to $^{208}$Pb with incident
energies around 10 MeV to 30 MeV. It provides an excellent agreement
with large amount of cross section data and is energy independent. The
numerical values of these parameters can be found in Table~2 of
Ref.\cite{tpm15}. Henceforth, the potential obtained by this
parameterization will be referred to as ``TPM15".
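Coding Eqs.(\ref{eq8})--(\ref{eq9}) is straightforward; in the sketch below
(ours) the parameter dictionary is a placeholder to be filled with the values
of Table~2 of Ref.\cite{tpm15}, which we do not reproduce here:
\begin{verbatim}
import numpy as np

def f_vol(x, R, a):
    """Volume Woods-Saxon form factor, Eq. (9)."""
    return 1.0 / (1.0 + np.exp((x - R) / a))

def f_surf(x, R, a):
    """Surface (derivative) form factor, Eq. (9)."""
    e = np.exp((x - R) / a)
    return 4.0 * e / (1.0 + e)**2

def U_TPM15(x, p):
    """Complex potential of Eq. (8); p maps 'Vr', 'Rr', 'ar', ... to floats."""
    return -(p['Vr'] * f_vol(x, p['Rr'], p['ar'])
             + 1j * p['Wi'] * f_vol(x, p['Ri'], p['ai'])
             + 1j * p['Wd'] * f_surf(x, p['Rd'], p['ad']))
\end{verbatim}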
\subsection{The nonlocal kernel}
We now examine the behaviour of the nonlocal kernel, $g(l;r,r^\prime)$,
appearing in the nonhomogeneous term in Eq.(\ref{eq5}). To illustrate,
we plot $g(l;r,r^\prime)$ as a function of $r$ and $r^\prime$ for different
partial waves in Fig.\ref{f1} for neutron-$^{56}$Fe scattering. As it can
be seen, $g(l;r,r^\prime)$ is a well-behaved function which is symmetric
around $r$=$r^\prime$. Its strength diminishes with increasing $l$ and so
does the importance of the nonlocality.
\begin{figure*}[htb!]
\centering
\subfigure[][Real part]{
\centering
\includegraphics[scale=0.43]{fig1a}}
\subfigure[][Imaginary Part]{
\centering
\includegraphics[scale=0.43]{fig1b}}
\caption{Behaviour of the nonlocal kernel as a function of relative
coordinates $r$ and $r^\prime$ for $l$ = 0, 1, 5 and 10. Calculations
are done with TPM15 parameterization for neutron scattering off $^{56}$Fe
nucleus \cite{tpm15}. The non-local range used in calculations is
$\beta$=0.90 fm.}
\label{f1}
\end{figure*}
This behaviour of $g(l;r,r^\prime)$ prompts us to use the mean
value theorem (MVT) of integral calculus \cite{mvt} to solve the
integro-differential equation (Eq.(\ref{eq5})).
\subsection{The MVT technique}
According to the mean value theorem of integral calculus \cite{mvt}, if
a function $q(x)$ is non-negative and integrable on $[a,b]$ and $p(x)$
is continuous on $[a,b]$, then there exists $c \in [a,b]$ such that
\begin{equation}
\int_a^b p(x) q(x) dx = p(c) \int_a^b q(x) dx\,.
\label{eq10}
\end{equation}
The theorem holds for non-positive $q(x)$ as well.
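For instance (our illustration), take $p(x)=e^{x}$ and $q(x)=x$ on $[0,1]$:
since $\int_{0}^{1}x\,e^{x}\,dx=1$ and $\int_{0}^{1}x\,dx=1/2$, the theorem
guarantees a $c$ with $e^{c}=2$, i.e., $c=\ln 2\approx 0.69\in[0,1]$.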
We examine the applicability of this theorem to the kernel in Eq.(\ref{eq5}).
The integrand in Eq.(\ref{eq5}) is a product of $g(l;r,r^\prime)$ and
the wave function, $u(l;r^\prime)$. The wave function is continuous in
the interval $[0, r_m]$. The analytic structure of $g(l;r,r^\prime)$
makes it evident that the kernel is integrable. Considering the behavior
of $g(l;r,r^\prime)$ from Fig.\ref{f1}, the integral in Eq.(\ref{eq5})
can be written as
\begin{equation}
\int_0^{r_m}g(l;r,r^\prime)u(l;r^\prime)\,dr^\prime =
u(l;\xi)\int_{0}^{r_m} g(l;r,r^\prime)\,dr^\prime
\label{eq11}
\end{equation}
with $\xi \in [0, r_m]$. Further, from Fig.\ref{f1} it can be seen that
$g(l;r,r^\prime)$ is strongly peaked at $r$=$r^\prime$ and is symmetric
around it. With this observation, we can expand $u(l;\xi)$ about
$r$=$r^\prime$. The leading term $u(l;r)$, evidently, is the most dominant
in the expansion. Therefore, we choose $u(l;\xi) = u(l;r)$, yielding
\begin{equation}
\int_0^{r_m} g(l;r,r^\prime) u(l;r^\prime) dr^\prime \approx u(l;r)
\int_0^{r_m} g(l;r,r^\prime) dr^\prime .
\label{eq12}
\end{equation}
This leads to the homogenized form of the Schr\"{o}dinger equation
\begin{equation}
\left[\frac{d^2}{dr^2}-\frac{l(l+1)}{r^2} +
\frac{2\mu E}{\hbar^2}\right]u(l;r)\,=\,\frac{2\mu U_{\rm eff}(l;r)}
{\hbar^2}u(l;r)\,,
\label{eq13}
\end{equation}
where the effective local potential, $U_{\rm eff}(l;r)$, is given by
\begin{equation}
U_{\rm eff}(l;r) = \int_0^{r_m}\,g(l;r,r^\prime)\,dr^\prime .
\label{eq14}
\end{equation}
This potential contains the most dominant effect of nonlocality. It is
independent of energy, but depends on $l$. In Fig.\ref{f2} we show
$U_{\rm eff}(l;r)$ in comparison with the TPM15 potential (Eq.(\ref{eq8}))
for the neutron-$^{56}$Fe system. It is observed that, relative to the
original local potential, $U_{\rm eff}(l;r)$ is reduced in strength as
well as modified in shape. This is unlike the local equivalent potential
in Perey and Buck's work \cite{pb}.
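To make the structure of this calculation easy to reproduce, a minimal
Python sketch evaluating Eq.(\ref{eq14}) on a radial grid is given below.
Since Eqs.(\ref{eq6})-(\ref{eq8}) are not repeated here, the sketch assumes
the standard Perey--Buck form of the partial-wave kernel (with the
$(r+r^\prime)/2$ prescription of Eq.(\ref{eq7})) and a single real
Woods--Saxon well with purely illustrative parameters; it is not the TPM15
potential and is meant only to convey the structure of the calculation.
\begin{verbatim}
import numpy as np
from scipy.special import ive   # exponentially scaled modified Bessel I_v

# Illustrative parameters (NOT the TPM15 values): real volume term only.
V0, R0, a0 = -45.0, 4.6, 0.65   # MeV, fm, fm
beta = 0.90                     # nonlocal range (fm)

def U(x):
    """Local Woods-Saxon form factor, cf. Eqs.(8)-(9)."""
    return V0 / (1.0 + np.exp((x - R0) / a0))

def g_l(l, r, rp):
    """Assumed Perey-Buck partial-wave kernel,
       g(l;r,r') = (4 r r'/(sqrt(pi) beta^3)) i_l(2 r r'/beta^2)
                   * exp(-(r^2+r'^2)/beta^2) * U((r+r')/2),
       rewritten with the scaled Bessel function ive to avoid overflow."""
    z = 2.0 * r * rp / beta**2
    il_scaled = np.sqrt(np.pi / (2.0 * z)) * ive(l + 0.5, z)  # i_l(z)e^{-z}
    return (4.0 * r * rp / (np.sqrt(np.pi) * beta**3) * il_scaled
            * np.exp(-((r - rp) / beta) ** 2) * U(0.5 * (r + rp)))

r = np.linspace(1e-3, 15.0, 600)         # radial grid up to r_m = 15 fm
dr = r[1] - r[0]
for l in (0, 1, 2, 3):
    G = g_l(l, r[:, None], r[None, :])   # kernel matrix g(l; r_i, r'_j)
    U_eff = G.sum(axis=1) * dr           # Eq.(14): integrate over r'
    print(l, U_eff.min())                # depth of the l-dependent well
\end{verbatim}
With this toy input one observes the qualitative behaviour of
Fig.\ref{f2}: the effective well is shallower than the input Woods--Saxon
well and its depth decreases with $l$.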
\begin{figure*}[htb!]
\centering
\includegraphics[scale=0.43]{fig2}
\caption{Behaviour of $U_{\rm eff}(l;r)$ as a function of distance
for $l$ = 0, 1, 2 and 3. Calculations are done with TPM15 parameterization
for n-$^{56}$Fe scattering \cite{tpm15}. The non-local range used in
calculations is $\beta$=0.90 fm.}
\label{f2}
\end{figure*}
\section{Accuracy of the method}
The homogenized Schr\"{o}dinger equation (Eq.(\ref{eq13})) obtained for
the description of Eq.(\ref{eq5}) is compact and convenient, but it rests
on the approximation of evaluating the wave function in the kernel
integral at $r$=$r^\prime$. This approximation needs to be tested
carefully. For this purpose we compare the solution of the homogeneous
equation with that of the original integro-differential equation
(Eq.(\ref{eq5})), which we solve using an iterative scheme. This scheme is
initiated by the solution of Eq.(\ref{eq13}) with a suitable boundary
condition. The subsequent higher order solutions are obtained with the
help of the following iterative scheme:
\begin{eqnarray}
\hspace{-2cm}&&
\left[\frac{d^2}{dr^2}-\frac{l(l+1)}{r^2} + \frac{2\mu E}{\hbar^2}
- \frac{2\mu\,U_{\rm eff}(l;r)}{\hbar^2}\right]u_{i+1}(l;r)
\label{eq15}
\\
\nonumber
&&\hspace{0.5cm}=
\frac{2\mu}{\hbar^2}\int_0^{r_m} g(l;r,r^\prime)
u_i(l;r^\prime)\,dr^\prime - \frac{2\mu U_{\rm eff}(l;r)}{\hbar^2}
u_i(l;r),\,\,\,\,\,({\rm for\,\,all}\,\,i \ge 0)
\end{eqnarray}
The iterations are continued until the absolute value of the difference
between the logarithmic derivatives of the wave function at the matching
radius in the $i^{\rm th}$ and the $(i+1)^{\rm th}$ steps is less
than or equal to 10$^{-6}$.
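The following sketch, which continues the previous one (reusing
\texttt{g\_l}, \texttt{U}, the grid \texttt{r} and the step \texttt{dr}),
illustrates the structure of this iterative scheme. It discretizes
Eq.(\ref{eq15}) by finite differences and solves one linear system per
iteration. For brevity it imposes $u(0)=0$ and an arbitrary normalization
$u(r_m)=1$ instead of the proper scattering boundary condition, and the
kinematic constants are illustrative, so it demonstrates only the
iteration and its convergence test, not a production cross section
calculation.
\begin{verbatim}
# Illustrative kinematics: E in MeV, h2_2mu ~ hbar^2/(2 mu) in MeV fm^2.
E, h2_2mu, l = 10.0, 20.7, 0
k2 = E / h2_2mu

G = g_l(l, r[:, None], r[None, :])       # kernel matrix (previous sketch)
U_eff = G.sum(axis=1) * dr               # Eq.(14)

n = r.size
D2 = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
      + np.diag(np.ones(n - 1), 1)) / dr**2
A = (D2 - np.diag(l * (l + 1) / r**2) + k2 * np.eye(n)
     - np.diag(U_eff / h2_2mu))          # operator on the LHS of Eq.(15)
A[0, :], A[-1, :] = 0.0, 0.0             # boundary rows enforcing
A[0, 0] = A[-1, -1] = 1.0                # u(0) = 0 and u(r_m) = 1

def solve(source):                       # one linear solve per iteration
    rhs = source / h2_2mu
    rhs[0], rhs[-1] = 0.0, 1.0
    return np.linalg.solve(A, rhs)

u = solve(np.zeros(n))                   # HOMO solution of Eq.(13)
logder_old = np.inf
for it in range(50):                     # iterations of Eq.(15)
    u = solve(G @ u * dr - U_eff * u)    # u_i on the RHS yields u_{i+1}
    logder = (u[-1] - u[-2]) / (dr * u[-1])
    if abs(logder - logder_old) <= 1e-6:
        break
    logder_old = logder
print("converged after", it + 1, "iterations")
\end{verbatim}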
\begin{figure*}[htb!]
\centering
\subfigure[][n-$^{12}$C scattering]{
\centering
\includegraphics[scale=0.43]{fig3a}}
\hspace{1.5cm}
\subfigure[][n-$^{56}$Fe scattering]{
\centering
\includegraphics[scale=0.43]{fig3b}}
\caption{Total cross sections for neutron scattering off $^{12}$C
and $^{56}$Fe nuclei. Calculations are done with TPM15 parameterization
\cite{tpm15}.}
\label{f3}
\end{figure*}
\begin{figure*}[htb!]
\centering
\subfigure[][n-$^{12}$C scattering]{
\centering
\includegraphics[scale=0.4]{fig4a}}
\hspace{1.5cm}
\subfigure[][n-$^{56}$Fe scattering]{
\centering
\includegraphics[scale=0.4]{fig4b}}
\caption{Angular distributions for neutron scattering off $^{12}$C
and $^{56}$Fe nuclei. Calculations are done with TPM15 parameterization
\cite{tpm15}.}
\label{f4}
\end{figure*}
In Figs.\ref{f3}-\ref{f4} we show the calculated total and differential
cross sections for n-$^{12}$C and n-$^{56}$Fe scattering in the low energy
domain for various approximations to Eq.(\ref{eq5}). Results obtained by
solving Eq.(\ref{eq13}) and Eq.(\ref{eq15}) are labeled as HOMO and FULL,
respectively. To understand the impact of the iterative scheme, in
Figs.\ref{f3}-\ref{f4} we also show the results obtained after one iteration
(labeled as 1-ITER). We notice that although the homogeneous results differ
noticeably from the FULL results (especially for $^{56}$Fe),
they are, overall, not far from them. The 1-ITER results, however,
are very close to the full results of Eq.(\ref{eq5}). This is
particularly convenient, because in actual calculations one can then use
just one iteration and obtain results very close to the FULL results.
Computationally, once the homogeneous results are available, it is
straightforward to obtain the 1-ITER results.
\subsection{Dependence on the choice of $U(r)$}
To examine further the dependence of the accuracy of our method on the
choice of $U(r)$, we repeat the above calculations for another potential.
We construct $\displaystyle{U(r)=V(r)+i W(r)}$ such that
$V(r)$ is obtained microscopically through the folding model \cite{satlov}:
\begin{equation}
V(r)\,=\,\int d{\bf r}_2\,\rho({\bf r}_2)\,v(r_{12})\,,
\label{eq16}
\end{equation}
where $v(r_{12})$ is the effective nucleon-nucleon interaction,
$\rho({\bf r}_2)$ is the total nucleon density of the target and $r_{12}$
is the distance between the projectile (neutron) and a nucleon in the
target nucleus. To make $U(r)$ a reasonable representation for the
present study, the TPM15 parameterization (see Eqs.(\ref{eq8})-(\ref{eq9}))
is used for $W(r)$. This prescription of $U(r)$ will be referred to as
``FoldTPM15" in subsequent discussions.
For the effective nucleon-nucleon interaction we have
used the well-known M3Y prescription \cite{m3y}:
\begin{equation}
v(r)\,=\,\left[7999\,\frac{e^{-4r}}{4r}\,-\,2134\,\frac{e^{-2.5r}}{2.5r}
\right]\,\,{\rm MeV},
\label{eq17}
\end{equation}
where the two Yukawa terms represent the direct contribution of the
interaction. In principle, one also needs to take into account the knock-on
exchange contribution \cite{satlov}. Since the knock-on contribution in
nucleon-nucleus scattering is expected to be insignificant in the
low energy range \cite{lemere}, we have not included it in Eq.(\ref{eq17}).
The density distribution of the target nucleus has been calculated using
the well established relativistic mean field model \cite{YKG.90},
which is known to reproduce the ground state properties of nuclei spanning
the entire periodic table.
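A minimal sketch of this folding calculation is given below. For a
spherically symmetric density, Eq.(\ref{eq16}) reduces to
\[
V(r)\,=\,\frac{2\pi}{r}\int_0^\infty dr_2\,r_2\,\rho(r_2)
\int_{|r-r_2|}^{r+r_2} s\,v(s)\,ds\,,
\]
and for the Yukawa terms of Eq.(\ref{eq17}) the inner integral is
elementary. In place of the RMF density the sketch uses, purely for
illustration, a two-parameter Fermi distribution for $^{56}$Fe with
assumed half-density radius and diffuseness.
\begin{verbatim}
import numpy as np

# M3Y direct term, Eq.(17): v(s) = sum_i c_i exp(-a_i s)/(a_i s)  [MeV].
C  = np.array([7999.0, -2134.0])
AR = np.array([4.0, 2.5])                # ranges a_i in fm^-1

def inner(s1, s2):
    """int_{s1}^{s2} s v(s) ds = sum_i (c_i/a_i^2)(e^{-a_i s1}-e^{-a_i s2})."""
    return sum(c / a**2 * (np.exp(-a * s1) - np.exp(-a * s2))
               for c, a in zip(C, AR))

# Illustrative two-parameter Fermi density for 56Fe (assumed, not the RMF one).
ch, zd, A_mass = 4.1, 0.55, 56           # half-density radius (fm), diffuseness (fm)
r2 = np.linspace(1e-3, 12.0, 1200)
dr2 = r2[1] - r2[0]
rho = 1.0 / (1.0 + np.exp((r2 - ch) / zd))
rho *= A_mass / (4.0 * np.pi * np.sum(rho * r2**2) * dr2)   # normalize to A

def V_fold(x):
    """Folded real potential V(r) of Eq.(16) for a spherical density."""
    return (2.0 * np.pi / x) * np.sum(
        r2 * rho * inner(np.abs(x - r2), x + r2)) * dr2

for x in (0.5, 2.0, 4.0, 6.0, 8.0):
    print(x, V_fold(x))                  # attractive well of a few tens of MeV
\end{verbatim}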
\begin{figure*}[th!]
\centering
\subfigure[][The nonlocal kernel]{
\centering
\includegraphics[scale=0.43]{fig5a}
\label{f5a}}
\hspace{1.0cm}
\subfigure[][$U_{\rm eff}(l;r)$]{
\centering
\includegraphics[scale=0.41]{fig5b}
\label{f5b}}
\caption{Behaviour of the real part of the nonlocal kernel and
$U_{\rm eff}(l;r)$ for neutron scattering off $^{56}$Fe nucleus.
Calculations are done with FoldTPM15 prescription. The non-local
range used in calculations is $\beta$=0.90 fm.}
\label{f5}
\end{figure*}
In Fig.\ref{f5a}, we show the behaviour of the real part of the nonlocal
kernel calculated using the FoldTPM15 prescription for neutron
scattering off $^{56}$Fe. It has the structure required for the
applicability of the MVT technique: it is well behaved and symmetric
about $r$=$r^\prime$. We also show the corresponding real part of
$U_{\rm eff}(l;r)$ in Fig.\ref{f5b}.
Comparing the contour plot for $g(l;r,r^\prime)$ (see Fig.\ref{f5a})
with the earlier plot in Fig.\ref{f1} for TPM15, we also notice that the
effect of nonlocality appears to be much weaker here.
\begin{figure*}[htb!]
\centering
\subfigure[][n-$^{12}$C scattering]{
\centering
\includegraphics[scale=0.43]{fig6a}}
\hspace{1.5cm}
\subfigure[][n-$^{56}$Fe scattering]{
\centering
\includegraphics[scale=0.43]{fig6b}}
\caption{Total cross sections for neutron scattering off $^{12}$C and
$^{56}$Fe nuclei. Calculations are done with FoldTPM15
prescription.}
\label{f6}
\end{figure*}
The calculated total and differential cross sections shown in
Figs.\ref{f6}-\ref{f7} are for (i) the solution of the homogeneous equation,
(ii) the solution obtained after one iteration and (iii) the converged
solution of Eq.(\ref{eq15}) for n-$^{12}$C and n-$^{56}$Fe scattering. The
accuracy of our technique seen in this case is, if anything, better than
that in Figs.\ref{f3}-\ref{f4}. Thus, we conclude that the
accuracy of our technique is good. This improvement in accuracy, as
mentioned above, may be due to the weaker nonlocality in FoldTPM15.
\begin{figure*}[htb!]
\centering
\subfigure[][n-$^{12}$C scattering]{
\centering
\includegraphics[scale=0.4]{fig7a}}
\hspace{1.5cm}
\subfigure[][n-$^{56}$Fe scattering]{
\centering
\includegraphics[scale=0.4]{fig7b}}
\caption{Angular distributions for neutron scattering off $^{12}$C and
$^{56}$Fe nuclei. Calculations are done with FoldTPM15
prescription.}
\label{f7}
\end{figure*}
\subsection{Impact of the approximation leading to Eq.(\ref{eq7})}
As mentioned in Section 2, the nonlocal kernel represented by Eq.(\ref{eq6})
contains the nonlocal potential, $\displaystyle{U\left(\frac{|\bf{r}+
\bf{r^\prime}|}{2}\right)}$ inside the integrand. Common practice is
to approximate $|{\bf r}+{\bf r^\prime}|$ by $(r+r^\prime)$ leading to
Eq.(\ref{eq7}). We study the impact of this approximation on the accuracy
of results obtained by solving Eq.(\ref{eq15}). In Fig.\ref{f8}, we
compare the total cross sections obtained by using Eq.(\ref{eq6}) in
Eq.(\ref{eq15}) with those obtained by using Eq.(\ref{eq7}). Results
are shown for neutron scattering off $^{12}$C and $^{56}$Fe targets
using the TPM15 parameterization. We find that the results obtained with
Eq.(\ref{eq7}) in Eq.(\ref{eq15}) agree with those obtained with
Eq.(\ref{eq6}) to within 1$\%$ over the considered energy range.
\begin{figure*}[htb!]
\centering
\subfigure[][n-$^{12}$C scattering]{
\centering
\includegraphics[scale=0.43]{fig8a}}
\hspace{1.5cm}
\subfigure[][n-$^{56}$Fe scattering]{
\centering
\includegraphics[scale=0.43]{fig8b}}
\caption{Total cross sections calculated using Eqs.(\ref{eq6})-(\ref{eq7})
for neutron scattering off $^{12}$C and $^{56}$Fe nuclei. Calculations
are done with TPM15 parameterization \cite{tpm15}.}
\label{f8}
\end{figure*}
\section{Impact of different choices of the nonlocal form factor}
\subsection{Impact of the form of nonlocality}
Recently, Rotureau {\it et al.} have developed a method to construct
nonlocal optical potentials from first principles \cite{rot17}. In such
a microscopic approach, since the nonlocality is inherent in the formalism,
its form could differ from a Gaussian. Therefore, to determine the
impact of different choices of nonlocal form factor, we take, in
addition to the Gaussian form used so far, an exponential form:
$\displaystyle{{\rm exp}\left(-|\vec{r}-\vec{r^\prime}|/\alpha\right)}$,
which has a normalization similar to that in Eq.(\ref{eq2}). Further, we
fix the value of $\alpha$ such that both form factors have the same rms
radius. Thus, we have two form factors with the same normalization and
rms radius but different shapes. To see the effect of such a pair of form
factors, we solve the full scattering equation (Eq.(\ref{eq15})) with
both of them for $l$=0 and 1. We find that (i) the two wave
functions are very close to each other in magnitude and shape inside the
nucleus, and (ii) on the surface both have very close logarithmic
derivatives, which determine the phase shifts. This allows
us to conclude that different form factors having the same normalization
and rms radius should give similar results for the scattering as well as
the reaction observables on a nucleus.
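For concreteness, the rms matching can be made explicit (this short
calculation is ours and assumes the standard normalized three-dimensional
forms of the two form factors, in the spirit of Eq.(\ref{eq2})):
\[
\langle s^2\rangle_{\rm Gauss}=\int s^2\,
\frac{{\rm exp}(-s^2/\beta^2)}{\pi^{3/2}\beta^3}\,d^3s=\frac{3}{2}\,\beta^2,
\qquad
\langle s^2\rangle_{\rm exp}=\int s^2\,
\frac{{\rm exp}(-s/\alpha)}{8\pi\alpha^3}\,d^3s=12\,\alpha^2\,,
\]
so that equal rms radii require $\alpha=\beta/(2\sqrt{2})\approx 0.32$~fm
for $\beta=0.90$~fm.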
\subsection{Impact of the range of nonlocality}
Next we explore the effect of different choices of rms radius for a
particular form factor. We take two values of $\beta$, {\it i.e.,} 0.9
and 0.5 fm, for the Gaussian form and plot in Fig.\ref{f9} the wave functions
for $l$=0 and 1 for neutron-$^{56}$Fe scattering at 10 MeV. It is evident
that the wave functions change significantly with $\beta$ in the nuclear
interior and beyond. Differences of this magnitude should show up in
nuclear reaction observables as well.
\begin{figure*}[htb!]
\centering
\includegraphics[scale=0.43]{fig9}
\caption{Wave functions corresponding to two values of $\beta$ as a
function of distance. Results are shown for $l$=0 and 1 for
n-$^{56}$Fe scattering at 10 MeV. Calculations are done with TPM15
parameterization \cite{tpm15}. The vertical dashed line at 4.6 fm
marks the radius of $^{56}$Fe nucleus.}
\label{f9}
\end{figure*}
\section{Comparison with experiments}
Having established the accuracy of our technique, we now present a comparison
of the calculated observables with the data. In the
Schr\"{o}dinger equation we now also include the spin-orbit term. The
resulting equation is:
\begin{equation}
\fl \left[\frac{d^2}{dr^2} - \frac{l(l+1)}{r^2} +
\frac{2\mu\,E}{\hbar^2} + \frac{2\mu V_{SO}(r)}{\hbar^2} f_{jl}
\right] u(j;l;r) =\frac{2\mu}{\hbar^2}\int_0^{r_m} g(l;r,r^\prime)
u(j;l;r^\prime) dr^\prime
\label{eq18}
\end{equation}
\begin{equation}
\hspace{-0.5cm}\eqalign{
{\rm where}\,\,\,\,f_{jl} = \frac{1}{2}\left(j(j+1)-l(l+1)-s(s+1)\right)
\,\, ({\rm with}\,\,s=1/2)
\cr
~~~~~~~~V_{SO}(r) = \left(U_{SO}\,+\,i W_{SO}\right) s(r)
\cr
~~~~~~~~s(r) = \left(\frac{\hbar^2}{{m_{\pi}}^2 c^2}\right)
\left(\frac{1}{a_{so} r}\right)
{\rm exp}\left(\frac{r-R_{so}}{a_{so}}\right)\left[1+
{\rm exp}\left(\frac{r-R_{so}}{a_{so}}\right)\right]^{-2}.
}
\label{eq19}
\end{equation}
For calculating $V_{SO}(r)$, the TPM15 parameterization \cite{tpm15} is used.
\begin{figure*}[htb!]
\centering
\includegraphics[scale=0.40]{fig10}
\caption{Calculated total cross sections for neutron scattering off
$^{12}$C, $^{56}$Fe and $^{100}$Mo nuclei along with the data
\cite{rapp,geel,diva,pase}. Calculations are done with TPM15
parameterization \cite{tpm15}.}
\label{f10}
\end{figure*}
As representatives of light, medium-mass and heavy nuclei, we take $^{12}$C,
$^{56}$Fe and $^{100}$Mo, and study neutron scattering in the low energy
domain up to 10 MeV.
sections for all the three systems along with the experimental data
\cite{rapp,geel,diva,pase}. These results are obtained using TPM15
parameterization. Each figure has three curves: HOMO (Eq.(\ref{eq13})),
FULL (Eq.(\ref{eq15})) and 1-ITER. The 1-ITER results are shown because,
as found earlier in Section 3, one iteration of Eq.(\ref{eq15}) gives
results very close to the full iteration results. Within the spread in
experimental numbers, all the calculated results are consistent with the
data. In Fig.\ref{f11} we show the various experimental and corresponding
calculated angular distributions. We observe that for $^{12}$C all
three curves are reasonably consistent with the data. For the heavier nuclei,
$^{56}$Fe and $^{100}$Mo, the 1-ITER and FULL results are in good accord
with the data, while the HOMO results fall short of them.
\begin{figure*}[htb!]
\centering
\subfigure[][n-$^{12}$C scattering]{
\centering
\includegraphics[scale=0.36]{fig11a}
\label{f11a}}
\subfigure[][n-$^{56}$Fe scattering]{
\centering
\includegraphics[scale=0.36]{fig11b}
\label{f11b}}
\subfigure[][n-$^{100}$Mo scattering]{
\centering
\includegraphics[scale=0.36]{fig11c}
\label{f11c}}
\caption{Calculated angular distributions for neutron scattering
off $^{12}$C, $^{56}$Fe and $^{100}$Mo nuclei along with the data
\cite{white,glas,kinney,kadi,smith,rapaport}. Calculations are done with
TPM15 parameterization \cite{tpm15}.}
\label{f11}
\end{figure*}
For $^{12}$C it may be mentioned that, since the TPM15 parameters were
obtained by fitting nucleon scattering data on $^{27}$Al to $^{208}$Pb
nuclei, a better agreement might be achieved with a more appropriate choice
of potential. To test this possibility, in Fig.\ref{f12} we show
the angular distribution calculated using the FoldTPM15 prescription for
n-$^{12}$C scattering. These results reproduce the data well in
magnitude and shape, though only in the forward hemisphere. This indicates
that there is room for improvement in the mean field sector for such
systems. It should, however, be kept in mind that, as stated earlier in
the paper, this improved agreement may be due to the reduced nonlocality
displayed by FoldTPM15.
\begin{figure*}[htb!]
\centering
\includegraphics[scale=0.36]{fig12}
\caption{Calculated angular distributions for neutron scattering off
$^{12}$C nucleus along with the data \cite{white,glas}. Calculations
are done using FoldTPM15 prescription.}
\label{f12}
\end{figure*}
\section{Summary and conclusion}
A novel technique to solve the integro-differential equation in scattering
studies has been developed. It is achieved by applying the mean
value theorem of integral calculus and using the symmetry of the
kernel in the nonhomogeneous term about $r$=$r^\prime$. The extent of
accuracy of the method is established by comparing the solution of the
homogeneous equation (Eq.(\ref{eq13})) thus obtained, with that of the full
nonhomogeneous equation (Eq.(\ref{eq15})). The latter is obtained using
the iterative scheme initiated by the solution of the homogeneous equation.
It is found that, though the solution of the homogeneous equation shows some
variance with the full results, in the first iteration itself this difference
practically disappears. The method is independent of the choice of the
form of the nonlocality. Hence, it can be used to study the sensitivity to
the form of the nonlocality in scattering problems. The effective local
potential appearing in the resultant homogeneous equation differs in
shape and magnitude in the nuclear interior from the local part of
the original nonlocal potential. Further, this effective potential is
found to be $l$-dependent, but energy independent. The total and
differential cross sections calculated in the low beam energy range (up
to around 10 MeV) for neutron scattering off $^{12}$C, $^{56}$Fe and
$^{100}$Mo nuclei compare reasonably well with the corresponding
measured values; in particular, the 1-ITER results are found to be in good
agreement with the data.
\ack
We thank Swagata Sarkar and R. C. Cowsik for a number of illuminating
discussions. We are thankful to the referee for constructive criticism
and several pertinent observations. NJU acknowledges financial support
from Science and Engineering Research Board (SERB), Govt. of India (grant
number YSS/2015/000900). AB acknowledges financial support from Dept.
of Science and Technology, Govt. of India (grant number
DST/INT/SWD/VR/P-04/2014).
\section*{References}
\section{Introduction}
Given a closed surface $M$, a natural question
is to determine the maximum integer $n$ such that the complete graph
$K_n$ can be embedded (drawn without crossings) into $M$ (e.g., $n=4$
if $M=S^2$ is the $2$-sphere, and $n=7$ if $M$ is a torus). This
classical problem
was raised in the late $19$th century by Heawood \cite{Heawood:MapColourTheorem-1890} and Heffter \cite{Heffter:Nachbargebiete-1891} and completely settled in the 1950--60's through a sequence of works by Gustin, Guy, Mayer, Ringel, Terry, Welch, and Youngs (see \cite[Ch.~1]{Ringel:MapColorTheorem-1974} for a discussion of the history of the problem and detailed references). Heawood already observed that if $K_n$ embeds into $M$ then
\begin{equation}
\label{eq:thread-problem}
\hfill
(n-3)(n-4)\leq 6b_1(M)=12-6\chi(M),
\hfill
\end{equation}
where $\chi(M)$ is the Euler characteristic of $M$ and $b_1(M)=2-\chi(M)$ is
the first $\Z_2$-\emph{Betti number} of $M$, i.e., the dimension of the first
homology group $H_1(M;\Z_2)$ (here and throughout the paper, we work with
homology with $\Z_2$-coefficients).
Conversely, for surfaces $M$ other than
the Klein bottle, the inequality is tight, i.e., $K_n$ embeds into $M$ if and
only if \eqref{eq:thread-problem} holds; this is a hard result, the bulk of the
monograph \cite{Ringel:MapColorTheorem-1974} is devoted to its proof. (The
exceptional case, the Klein bottle, has $b_1=2$, but does not admit an
embedding of $K_7$, only of $K_6$.)\footnote{The inequality
\eqref{eq:thread-problem}, which by a direct calculation is equivalent to
$n\leq c(M):=\lfloor (7+\sqrt{1+24b_1(M)})/2\rfloor $, is closely related
to the \emph{Map Coloring Problem} for surfaces (which is the context in which
Heawood originally considered the question). Indeed, it turns out that for
surfaces $M$ \emph{other than the Klein bottle}, $c(M)$ is the maximum
chromatic number of any graph embeddable into $M$. For $M=S^2$ the $2$-sphere
(i.e., $b_1(M)=0$), this is the \emph{Four-Color Theorem}
\cite{Appel:Every-planar-map-is-four-colorable.-I.-Discharging-1977,Appel:Every-planar-map-is-four-colorable.-II.-Reducibility-1977};
for other surfaces (i.e., $b_1(M)>0$) this was originally stated (with an
incomplete proof) by Heawood and is now known as the \emph{Map Color Theorem}
or \emph{Ringel--Youngs Theorem} \cite{Ringel:MapColorTheorem-1974}.
Interestingly, for surfaces $M\neq S^2$, there is a fairly short proof, based
on edge counting and Euler characteristic, that the chromatic number of any
graph embeddable into $M$ is at most $c(M)$ (see \cite[Thms.~4.2 and
4.8]{Ringel:MapColorTheorem-1974}) whereas the hard part is the tightness
of~\eqref{eq:thread-problem}.
}
\smallskip
The question naturally generalizes to higher dimension: Let
$\skelsim{k}{n}$ denote the $k$-skeleton of the $n$-simplex, the
natural higher-dimensional generalization of $K_{n+1}=\skelsim{1}{n}$
(by definition $\skelsim{k}{n}$ has $n+1$ vertices and every subset of
at most $k+1$ vertices forms a face). Given a $2k$-dimensional manifold
$M$, what is the largest $n$ such that $\skelsim{k}{n}$ embeds
(topologically) into $M$? This line of enquiry started in the $1930$'s
when van Kampen~\cite{vanKampen:KomplexeInEuklidischenRaeumen-1932}
and Flores~\cite{Flores:NichtEinbettbar-1933} showed that
$\skelsim{k}{2k+2}$ does not embed into $\R^{2k}$ (the case $k=1$
corresponding to the non-planarity of $K_5$). Somewhat surprisingly,
little else seems to be known, and the following conjecture of
K{\"u}hnel~\cite[Conjecture~B]{Kuhnel:Manifolds-in-the-skeletons-of-convex-polytopes-tightness-and-generalized-Heawood-inequalities-1994}
regarding a \emph{generalized Heawood inequality} remains
unresolved:
\begin{conjecture}[K\"uhnel]
\label{c:kuhnel}
Let $n,k\geq 1$ be integers. If $\skelsim{k}{n}$ embeds in a
compact, $(k-1)$-connected $2k$-manifold $M$ with $k$th $\Z_2$-Betti
number $b_k(M)$ then
\begin{equation} \label{eq:generalized-heawood} \hfill
\binom{n-k-1}{k+1} \le \binom{2k+1}{k+1}b_k(M). \hfill
\end{equation}
\end{conjecture}
\noindent
The classical Heawood inequality \eqref{eq:thread-problem} and the van
Kampen--Flores Theorem correspond the special cases $k=1$ and $b_k=0$,
respectively. K\"uhnel states Conjecture~\ref{c:kuhnel} in slightly
different form, in terms of Euler characteristic of $M$ rather than
$b_k(M)$; our reformulation is equivalent. The
$\Z_2$-coefficients are not important in the statement of the
conjecture but they are convenient for our new developments.
\subsection{New results toward K\"{u}hnel's conjecture}
In this paper, our main result is an estimate, in the spirit of the
generalized Heawood inequality \eqref{eq:generalized-heawood}, on the
largest $n$ such that $|\skelsim k{n}|$ almost embeds into a given
$2k$-dimensional manifold. An almost embedding is a relaxation of the
notion of embedding that is useful in setting up our proof method.
Let $K$ be a finite simplicial complex and let $|K|$ be its underlying
space (geometric realization). We define an \emph{almost-embedding} of
$K$ into a (Hausdorff) topological space $X$ to be a continuous map
$f:|K| \to X$ such that any two disjoint simplices $\sigma, \tau \in
K$ have disjoint images, $f(|\sigma|)\cap f(|\tau|)=\emptyset$. In
particular, every embedding is an almost-embedding as well. Let us
stress, however, that the condition for being an almost-embedding
depends on the actual simplicial complex (the triangulation), not just
the underlying space. That is, if $K$ and $L$ are two different
complexes with $|K| = |L|$ then a map $f:|K|=|L| \to X$ may be an
almost-embedding of $K$ into $X$ but not an almost-embedding of $L$
into~$X$. Our main result is the following.
\begin{theorem}\label{t:kuhnel-2}
If $\skelsim{k}{n}$ almost embeds into a $2k$-manifold $M$ then
\[ n \le 2 \binom{2k+2}{k} b_k(M)+ 2k+4,\]
where $b_k(M)$ is the $k$th $\Z_2$-Betti number of $M$.
\end{theorem}
\noindent
This bound is weaker than the conjectured generalized Heawood
inequality \eqref{eq:generalized-heawood} and is clearly not optimal
(as we already see in the special cases $k=1$ or $b_k(M)=0$).
Apart from applying more generally to almost embeddings, the
hypotheses of Theorem~\ref{t:kuhnel-2} are weaker than those of
Conjecture~\ref{c:kuhnel} in that we do not assume the manifold $M$ to
be $(k-1)$-connected. We conjecture that this connectedness assumption
is not necessary for Conjecture~\ref{c:kuhnel}, i.e., that
\eqref{eq:generalized-heawood} holds whenever $\skelsim{k}{n}$ almost
embeds into a $2k$-manifold $M$. The intuition is that
$\skelsim{k}{n}$ is $(k-1)$-connected and therefore the image of an
almost-embedding cannot ``use'' any parts of $M$ on which nontrivial
homotopy classes of dimension less than $k$ are supported.
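To make the gap concrete, consider the surface case $k=1$:
Theorem~\ref{t:kuhnel-2} gives the linear bound
\[
n \le 2\binom{4}{1}b_1(M)+6 = 8\,b_1(M)+6,
\]
whereas \eqref{eq:generalized-heawood} reads $\binom{n-2}{2}\le 3\,b_1(M)$
and would thus give $n=O\bigl(\sqrt{b_1(M)}\bigr)$, in line with the
classical Heawood inequality.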
\paragraph{Previous work.}
The following special case of Conjecture~\ref{c:kuhnel} was proved by
K\"uhnel~\cite[Thm.~2]{Kuhnel:Manifolds-in-the-skeletons-of-convex-polytopes-tightness-and-generalized-Heawood-inequalities-1994}
(and served as a motivation for the general conjecture): Suppose that
$P$ is an $n$-dimensional simplicial convex polytope, and that there
is a subcomplex of the boundary $\partial P$ of $P$ that is
\emph{$k$-Hamiltonian} (i.e., that contains the $k$-skeleton of $P$)
and that is a triangulation of $M$, a $2k$-dimensional manifold. Then
inequality \eqref{eq:generalized-heawood} holds. To see that this is
indeed a special case of Conjecture~\ref{c:kuhnel}, note that
$\partial P$ is a \emph{piecewise linear} (\emph{PL}) sphere of
dimension $n-1$, i.e., $\partial P$ is combinatorially isomorphic to
some subdivision of $\partial \simplex{n}$ (and, in particular,
$(n-2)$-connected). Therefore, the $k$-skeleton of $P$, and hence
$M$, contains a subdivision of $\skelsim{k}{n}$ and is
$(k-1)$-connected.
In this special case and for $n\geq 2k+2$, equality in
\eqref{eq:generalized-heawood} is attained if and only if $P$ is a
simplex. More generally, equality is attained whenever $M$ is a
triangulated $2k$-manifold on $n+1$ vertices that is
\emph{$(k+1)$-neighborly} (i.e., any subset of at most $k+1$ vertices
forms a face, in which case $\skelsim{k}{n}$ is a subcomplex of $M$).
Some examples of $(k+1)$-neighborly $2k$-manifolds are known, e.g.,
for $k=1$ (the so-called \emph{regular cases} of equality for the
Heawood inequality \cite{Ringel:MapColorTheorem-1974}), for $k=2$
\cite{Kuhnel:The-unique-3-neighborly-4-manifold-with-1983,Kuhnel:The-9-vertex-complex-projective-plane-1983}
(e.g., a $3$-neighborly triangulation of the complex projective plane)
and for $k=4$
\cite{Brehm:15-vertex-triangulations-of-an-8-manifold-1992}, but in
general, a characterization of the higher-dimensional cases of
equality for \eqref{eq:generalized-heawood} (or even of those values
of the parameters for which equality is attained) seems rather hard
(which is maybe not surprising, given how difficult the construction
of examples of equality already is for $k=1$).
\subsection{Generalization to points covered $q$ times}
K\"uhnel's conjecture can be recast in a broader setting suggested by
a generalization by Sarkaria~\cite[Thm 1.5]{Sarkaria:vanKampen} of the
van Kampen--Flores Theorem. Sarkaria's theorem states that if $q$ is
a prime, and $d$ and $k$ integers such that $d \leq \frac{q}{q-1}k$,
then for every continuous map $f\colon |\skelsim k{qk+2q-2}| \to \R^d$
there are $q$ pairwise disjoint simplices $\sigma_1, \dots, \sigma_{q}
\in K$ with intersecting images $f(|\sigma_1|) \cap \dots \cap
f(|\sigma_q|) \neq \emptyset$. Sarkaria's result was generalized by
Volovikov \cite{Volovikov:On-the-van-Kampen-Flores-theorem-1996} for
$q$ being a prime power.
Define an \emph{almost $q$-embedding} of $K$ into a (Hausdorff)
topological space $X$ as a continuous map $f:|K| \to X$ such that any
$q$ pairwise disjoint faces $\sigma_1, \dots, \sigma_{q} \in K$ have
disjoint images $f(|\sigma_1|) \cap \dots \cap f(|\sigma_q|) =
\emptyset$. (So almost $2$-embeddings are almost embeddings.) Again,
being an almost $q$-embedding depends on the actual simplicial complex
(the triangulation), not just the underlying space. Our proof
technique also gives an estimate for almost $q$-embeddings when $q$ is
a prime power.
\begin{theorem}\label{t:kuhnel-general}
Let $q=p^m$ be a prime power. If $\skelsim{k}{n}$ almost $q$-embeds
into a $d$-manifold $M$ with $d \le \frac{q}{q-1}k$ then
\[ n \le \pth{(q-2)k+2q-2} \binom{qk+2q-2}{k} b_k(M)+(2q-2)k+4q-4,\]
where $b_k(M)$ is the $k$th $\Z_p$-Betti number of $M$.
\end{theorem}
\noindent
Theorem~\ref{t:kuhnel-general} specializes for $q=2$ to Theorem~\ref{t:kuhnel-2}.
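Indeed, substituting $q=2$ into the bound of Theorem~\ref{t:kuhnel-general}
gives
\[
\pth{(2-2)k+2\cdot 2-2} \binom{2k+2}{k} b_k(M)+(2\cdot 2-2)k+4\cdot 2-4
\;=\; 2 \binom{2k+2}{k} b_k(M)+ 2k+4 .
\]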
\subsection{Proof technique}
Our proof of Theorem~\ref{t:kuhnel-general} strongly relies on a
different generalization of the van Kampen--Flores Theorem, due to
Volovikov~\cite{Volovikov:On-the-van-Kampen-Flores-theorem-1996}
regarding maps into general manifolds but under an additional
homological triviality condition.
\begin{theorem}[Volovikov]
\label{t:volovikov_q}
Let $q = p^m$ be a prime power. Let $f\colon |\skelsim k{qk + 2q-2}|
\to M$ be a continuous map, where $M$ is a compact $d$-manifold
with $d \le \frac{q}{q-1}k$. If the induced homomorphism
\[ f_*\colon H_{k}\pth{\skelsim{k}{qk+2q-2};\Z_p} \to
H_{k}(M;\Z_p)\]
is trivial then $f$ is not an almost $q$-embedding.
\end{theorem}
\noindent
Theorem~\ref{t:volovikov_q} is obtained by specializing the corollary
in Volovikov's
article~\cite{Volovikov:On-the-van-Kampen-Flores-theorem-1996} to $m =
d$ and $s = k+1$.
Note that
Volovikov~\cite{Volovikov:On-the-van-Kampen-Flores-theorem-1996}
formulates the triviality condition in terms of cohomology, i.e., he
requires that $f^*: H^{k}(M;\Z_p) \to H^{k}(\skelsim{k}{qk+2q-2};\Z_p)$
is trivial. However, since we are working with field coefficients and
the (co)homology groups in question are finitely generated, the
homological triviality condition (which is more convenient for us to
work with) and the cohomological one are equivalent.\footnote{More
specifically, by the Universal Coefficient Theorem
\cite[53.5]{Munkres:AlgebraicTopology-1984}, $H_k({\,\cdot\,}
;\Z_p)$ and $H^k({\,\cdot\,} ;\Z_p)$ are dual vector spaces, and
$f^\ast$ is the adjoint of $f_\ast$, hence triviality of $f_\ast$
implies that of $f^\ast$. Moreover, if the homology group $H_k(X
;\Z_p)$ of a space $X$ is finitely generated (as is the case for
both $\skelsim{k}{n}$ and $M$, by assumption) then it is
(non-canonically) isomorphic to its dual vector space $H^k(X
;\Z_p)$. Therefore, $f_\ast$ is trivial if and only if $f^\ast$ is.}
Note that the homological triviality condition is automatically
satisfied if $H_k(M;\Z_p)=0$, e.g., if $M=\R^{2k}$ or $M=S^{2k}$. On
the other hand, without the homological triviality condition, the
assertion is in general not true for other manifolds (e.g., $K_5$
embeds into every closed surface different from the sphere, or
$\skelsim{2}{8}$ embeds into the complex projective plane).
\bigskip
The key idea of our approach is to show that if $n$ is large enough and $f$ is
a mapping from $|\skelsim{k}{n}|$ to $M$, then there is an almost-embedding $g$
of $\skelsim ks$ into $|\skelsim kn|$, for a prescribed value of $s$, such that
the composed map $f \circ g\colon |\skelsim ks| \to M$ satisfies Volovikov's condition.
More specifically, the following is our main technical lemma:
\begin{lemma}\label{l:chain_p}
Let $k,s\geq 1$ and $b\geq 0$ be integers. Let $p$ be a prime
number. There exists a value $n_0 := n_0(k,b,s,p)$ with the
following property. Let $n \geq n_0$ and let $f$ be a mapping of
$|\skelsim{k}{n}|$ into a manifold $M$ with $k$th $\Z_p$-Betti
number at most $b$. Then there exists a subdivision $D$ of
$\skelsim{k}{s}$ and a simplicial map $\ensuremath{g_{\simp}}\colon D \to
\skelsim{k}{n}$ with the following properties.
\begin{enumerate}
\item The induced map on the geometric realizations $g\colon |D|
= |\skelsim{k}{s}| \to |\skelsim{k}{n}|$ is an
almost-embedding from $\skelsim{k}{s}$ to $|\skelsim{k}{n}|$.
\item The homomorphism $(f\circ g)_\ast:H_{k}(\skelsim{k}{s}; \Z_p)
\to H_k(M; \Z_p)$ is trivial (see Section~\ref{s:prelim} below for
the precise interpretation of $(f\circ g)_\ast$).
\end{enumerate}
The value $n_0$ can be taken as $\binom{s}{k}b(s-2k) + 2s-2k+1$.
\end{lemma}
\noindent
Therefore, if $s \geq qk + 2q - 2$, then $f \circ g$ cannot be an
almost $q$-embedding by Volovikov's theorem. We deduce that $f$
is not an almost $q$-embedding either, and
Theorem~\ref{t:kuhnel-general} immediately follows. This deduction
requires the following lemma (proven in Section~\ref{s:prelim}), since
in general the composition of an almost $q$-embedding with an
almost-embedding need not be an almost $q$-embedding.
\begin{lemma}
\label{l:compose_ae_general}
Let $K$ and $L$ be simplicial complexes and $X$ a topological space.
Suppose $g$ is an almost embedding of $K$ into $|L|$ and $f$ is an
almost $q$-embedding of $L$ into $X$ for some integer $q \geq 2$.
Then $f \circ g$ is an almost $q$-embedding of $K$ into $X$, provided
that $g$ is the realization of a simplicial map $\ensuremath{g_{\simp}}$ from some
subdivision $K'$ of $K$ to $L$.
\end{lemma}
\begin{remark}
The third author proved in his thesis~\cite{patak:thesis-2015} a slightly
better bound on $n_0$ in Lemma~\ref{l:chain_p}, namely
$n_0=\binom{s}{k}b(s-2k) + s+1$.
The proof, however, uses a colorful version of Lemma~\ref{l:odd}. Since the proof of the colorful version
is long and technical, and in the end it only improves the bound in Theorem~\ref{t:kuhnel-2} by $2$,
we have decided to present the more accessible version of the argument.
\end{remark}
\paragraph{Paper organization.}
Before we establish Lemma~\ref{l:chain_p} (in Section~\ref{s:strong}),
thus completing the proof of Theorem~\ref{t:kuhnel-general}, we first prove a
weaker version that introduces the main ideas in a simpler setting,
and yields a weaker bound for $n_0$, stated in
Equation~\eqref{eq:weak-bound}. The reader interested only in
the case $q=2$ may want to consult a preliminary version of this
paper~\cite{Kuhnel_conference_version} tailored to that case (where
homology computations are without signs and the construction of the
subdivision $D$ is simpler).
\section{Preliminaries}
\label{s:prelim}
We begin by fixing some terminology and notation. We will use
$\card(U)$ to denote the cardinality of a set $U$.
We recall that the \emph{stellar subdivision} of a maximal face
$\vartheta$ in a simplicial complex $K$ is obtained by removing
$\vartheta$ from $K$ and adding a cone $a_\vartheta \ast (\partial
\vartheta)$, where $a_\vartheta$ is a newly added vertex, the apex of
the cone (see Figure~\ref{f:stellar}).
\begin{figure}
\begin{center}
\includegraphics{stellar_sigma_i}
\caption{A stellar subdivision of a simplex.}
\label{f:stellar}
\end{center}
\end{figure}
Throughout this paper we only work with homology groups and Betti numbers over
$\Z_p$, and for simplicity, we mostly drop the coefficient
group $\Z_p$ from the notation. Moreover, we will need to switch back and forth
between singular and simplicial homology. More precisely, if $K$ is a
simplicial complex then $H_*(K)$ will mean the simplicial homology of $K$,
whereas $H_*(X)$ will mean the singular homology of a topological space $X$. In
particular, $H_*(|K|)$ denotes the singular homology of the underlying space
$|K|$ of a complex $K$. We use analogous conventions for $C_*(K), C_*(X)$ and
$C_*(|K|)$ on the level of chains, and likewise for the subgroups of cycles and
boundaries, respectively.\footnote{We remark that throughout this paper, we
will only work with spaces that are either (underlying spaces of) simplicial
complexes or topological manifolds. Such spaces are homotopy equivalent to CW
complexes \cite[Corollary~1]{Milnor:On-spaces-having-the-homotopy-type-1959},
and so on the matter of homology, it does not really matter which (ordinary,
i.e., satisfying the dimension axiom) homology theory we use as they are all
naturally equivalent for CW complexes
\cite[Thm.~4.59]{Hatcher:AlgebraicTopology-2002}. However the distinction
between the simplicial and the singular setting will be relevant on the level
of chains.} Given a cycle $c$, we denote by $[c]$ the homology class it
represents.
A mapping $h \colon |K| \to X$ induces a chain map $h_\sharp^{\sing}
\colon C_*(|K|) \to C_*(X)$ on the level of singular chains;
see~\cite[Chapter 2.1]{Hatcher:AlgebraicTopology-2002}. There is also
a canonical chain map $\iota_K \colon C_*(K) \to C_*(|K|)$ inducing
the isomorphism of $H_*(K)$ and $H_*(|K|)$, see again~\cite[Chapter
2.1]{Hatcher:AlgebraicTopology-2002}. We define $h_\sharp \colon
C_*(K) \to C_*(X)$ as $h_\sharp := h_\sharp^{\sing} \circ \iota_K$.
The three chain maps mentioned above also induce maps $h_*^{\sing}$,
$(\iota_K)_*$, and $h_*$ on the level of homology satisfying $h_* =
h_*^{\sing} \circ (\iota_K)_*$.
We need a technical lemma saying that our maps compose, in a right
way, on the level of homology.
\begin{lemma}
\label{l:commutative_diagram}
Let $K$ and $L$ be simplicial complexes and $X$ a topological space.
Let $j_{\simp}$ be a simplicial map from $K$ to $L$, $j\colon |K|
\to |L|$ the continuous map induced by $j_{\simp}$ and $h\colon |L|
\to X$ be another continuous map. Then $h_* \circ (j_{\simp})_* = (h
\circ j)_*$ where $(j_{\simp})_*\colon H_*(K) \to H_*(L)$ is the map
induced by $j_{\simp}$ on the level of simplicial homology and the maps $h_*$ and $(h \circ j)_*$ are as defined above.
\end{lemma}
\begin{proof}
The proof follows from the commutativity of the diagram below.
\includegraphics{diagram}
\medskip
The commutativity of the lower right triangle follows from the definition of $h_*$.
Similarly $(h \circ j)_* = (h \circ j)_*^{\sing} \circ (\iota_K)_*$. The fact that
$(h\circ j)_*^{\sing} = h_*^{\sing} \circ j_*^{\sing}$
follows from the functoriality of singular homology. The commutativity of the
square follows from the naturality of the equivalence of the singular
and simplicial homology; see~\cite[Thm
34.4]{Munkres:AlgebraicTopology-1984}.
\end{proof}
We now prove the final technical step of our approach, stated in the introduction.
\begin{proof}[Proof of Lemma~\ref{l:compose_ae_general}]
Let $\sigma_1, \dots, \sigma_q$ be $q$ pairwise disjoint faces of
$K$. Our task is to show $f\circ g(|\sigma_1|)\cap \cdots \cap f
\circ g(|\sigma_q|) = \emptyset$. Let $\vartheta_i$ be a face of
$K'$ that subdivides $\sigma_i$ for $i \in [q]$. We are done if we
prove
\begin{equation}
\label{e:thetas}
f\circ g(|\vartheta_1|)\cap \cdots \cap f
\circ g(|\vartheta_q|) = \emptyset
\end{equation}
for every such possible choice of $\vartheta_1, \dots, \vartheta_q$.
The faces $\vartheta_1, \dots, \vartheta_q$ are pairwise disjoint
since $\sigma_1, \dots, \sigma_q$ are pairwise disjoint. Since
$\ensuremath{g_{\simp}}$ is a simplicial map inducing an almost embedding, the faces
$\ensuremath{g_{\simp}}(\vartheta_1), \dots, \ensuremath{g_{\simp}}(\vartheta_q)$ are pairwise
disjoint faces of $L$. Consequently, \eqref{e:thetas} follows from
the fact that $f$ is a $q$-almost embedding.
\end{proof}
\section{Proof of Lemma~\ref{l:chain_p} with a weaker bound on $n_0$}\label{s:weak}
Let $k,b,s$ be fixed integers. We consider a $2k$-manifold $M$ with
$k$th Betti number $b$ and a map $f:|\skelsim{k}{n}| \to M$.
Recall that although we want to build an almost-embedding, homology is computed over~$\Z_p$. The strategy of
our proof of Lemma~\ref{l:chain_p} is to start by designing an
auxiliary chain map
$$
\varphi\colon C_*\left(\skelsim{k}{s}\right)\to C_*\left(\skelsim{k}{n}\right)
$$
that behaves as an almost-embedding, in the sense that whenever
$\sigma$ and $\sigma'$ are disjoint $k$-faces of $\simplex s$,
$\varphi(\sigma)$ and $\varphi(\sigma')$ have disjoint supports, and such
that for every $(k+1)$-face $\tau$ of $\simplex{s}$ the homology
class $[(f_\sharp \circ \varphi)(\partial \tau)]$ is trivial. We then
use $\varphi$ to design a subdivision $D$ of $\skelsim ks$ and a
simplicial map $\ensuremath{g_{\simp}}: D \to\skelsim kn$ that induces a map $g: |D|
\to |\skelsim kn|$ with the desired properties: $g$ is an
almost-embedding and $(f \circ g)_*([\partial \tau])$ is trivial for
all $(k+1)$-faces $\tau$ of $\simplex{s}$. Since the cycles $\partial
\tau$, for $(k+1)$-faces $\tau$ of $\simplex s$, generate all
$k$-cycles of $\skelsim ks$, this implies that $(f \circ g)_*$ is
trivial.
The purpose of this section is to give a first implementation of the
above strategy that proves Lemma~\ref{l:chain_p} with a bound of
\begin{equation}
\label{eq:weak-bound}
\hfill
n_0 = \pth{\binom{s+1}{k+1}-1}p^{b\binom{s+1}{k+1}} + s + 1.
\hfill
\end{equation}
In Section~\ref{s:strong} we then improve this bound to
$\binom{s}{k}b(s-2k) + 2s-2k+1$ at the cost of some technical
complications (note that the improved bound is independent of $p$).
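For a quick numerical comparison of the two bounds, take the smallest
interesting case $q=p=2$ and $k=1$, so that $s=qk+2q-2=4$ and
$\binom{s+1}{k+1}=10$: the bound \eqref{eq:weak-bound} gives
$n_0 = 9\cdot 2^{10b}+5$, exponential in $b$, whereas the improved bound
gives $n_0 = \binom{4}{1}\,b\,(4-2)+2\cdot 4-2+1 = 8b+7$.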
\bigskip
Throughout the rest of this paper we use the following notations. We
let $\{v_1, v_2, \ldots, v_{n+1}\}$ denote the set of vertices of
$\simplex{n}$ and we assume that $\simplex{s}$ is the induced
subcomplex of $\simplex{n}$ on $\{v_1, v_2, \ldots, v_{s+1}\}$. We let
$U = \{v_{s+2}, v_{s+3}, \ldots, v_{n+1}\}$ denote the set of vertices
of $\simplex{n}$ \emph{unused} by $\simplex{s}$. We let
$m=\binom{s+1}{k+1}$ and denote by $\sigma_1, \sigma_2,\ldots,
\sigma_m$ the $k$-faces of $\simplex s$, ordered lexicographically.
Later on, when working with homology, we compute the simplicial
homology with respect to this fixed order on the vertices of
$\simplex{n}$. In particular, the boundary of a $j$-simplex $\vartheta =
\{v_{i_1}, v_{i_2},\dots, v_{i_{j+1}}\}$, where $i_1 < i_2 < \dots < i_{j+1}$, is
$$
\partial \vartheta = \sum\limits_{\ell=1}^{j+1} (-1)^{\ell+1} \vartheta \setminus
\{v_{i_{\ell}}\}.
$$
\subsection{Construction of $\varphi$}
For every face $\vartheta$ of $\simplex s$ of dimension at most $k-1$
we set $\varphi(\vartheta) = \vartheta$. We then ``route''
each~$\sigma_i$ by mapping it to its stellar subdivision with an apex $u
\in U$, \emph{i.e.} by setting $\varphi(\sigma_i)$ to $\sigma_i +
(-1)^k z(\sigma_i,u)$ where $z(\sigma_i,u)$ denotes the cycle $\partial(\sigma_i \cup
\{u\})$; see Figure~\ref{f:phi_edge} for the case $k = 1$.
\begin{figure}
\begin{center}
\includegraphics{phi_edge}
\caption{Rerouting $\sigma_i$ for $k=1$. The support
of $z(\sigma_i,u)$ is dashed on the left, and the support of the
resulting $\varphi(\sigma_i)$ is shown on the right.}
\label{f:phi_edge}
\end{center}
\end{figure}
We ensure that $\varphi$ behaves as an almost-embedding by using a
different apex $u\in U$ for each $\sigma_i$. The difficulty is to
choose these $m$ apices in such a way that $[f_\sharp(\varphi(\partial
\tau))]$ is trivial for every $(k+1)$-face $\tau$ of $\simplex{s}$.
To that end we associate to each $u\in U$ the sequence
$$
\vv(u) := ([f_\sharp(z(\sigma_1,u))],
[f_\sharp(z(\sigma_2,u))],\dots,
[f_\sharp(z(\sigma_m,u))]) \in H_k(M)^m,
$$
and we denote by $\vv_i(u)$ the $i$th element of $\vv(u)$. We work
with $\Z_p$-homology, so $H_k(M)^m$ is finite; more precisely, its
cardinality equals $p^{bm}$. From $n \ge n_0 = (m-1)p^{bm} + s + 1$ we
get that $\card(U) \ge (m-1)\card(H_k(M)^m)+1$.
The pigeonhole
principle then guarantees that there exist $m$ distinct vertices
$u_1, u_2, \ldots, u_m$ of $U$ such that $\vv(u_1) = \vv(u_2) = \cdots
= \vv(u_m)$. We use $u_i$ to ``route'' $\sigma_i$ and put
\begin{equation}\label{eq:defphi}
\hfill
\varphi(\sigma_i):= \sigma_i + (-1)^kz(\sigma_i,u_i).
\hfill
\end{equation}
We finally extend $\varphi$ linearly to $C_*\left(\skelsim{k}{s}\right)$.
\begin{lemma}\label{l:zero_homology_points}
The map $\varphi$ is a chain map and $\bigl[f_\sharp\bigl(\varphi(\partial
\tau)\bigr)\bigr]= 0$ for every $(k+1)$-face $\tau\in\simplex{s}$.
\end{lemma}
Before proving the lemma, we establish a simple claim that will also
be useful later.
\begin{claim}
\label{c:boundary_tau_points}
Let $\tau$ be a $(k+1)$-face of $\simplex s$ and let $u \in U$. Let
$\sigma_{i_1}, \dots, \sigma_{i_{k+2}}$ be all the $k$-faces of $\tau$
sorted lexicographically, that is, $i_1 \leq \cdots \leq i_{k+2}$. Then
\begin{equation}
\label{e:boundary_tau_points}
\partial \tau =
z(\sigma_{i_1}, u) - z(\sigma_{i_2}, u) + \cdots +
(-1)^{k+1}z(\sigma_{i_{k+2}},u).
\end{equation}
\end{claim}
\begin{proof}
This follows from expanding the equation $0 = \partial^2(\tau\cup\{u\})$.
Indeed, \[ \begin{split} 0 = \partial^2(\tau\cup\{u\}) &= \partial\bigl(
\sigma_{i_{k+2}} \cup \{u\} - \sigma_{i_{k+1}} \cup \{u\} + \cdots +
(-1)^{k+1}\sigma_{i_1} \cup \{u\} + (-1)^{k+2}\tau\bigr) \\ &= (-1)^{k+1}\bigl(
- \partial \tau + z(\sigma_{i_1},u) - z(\sigma_{i_2},u) + \cdots +(-1)^{k+1}
z(\sigma_{i_{k+2}},u) \bigr).\\ \end{split} \]
\end{proof}
\begin{proof}[Proof of Lemma~\ref{l:zero_homology_points}]
The map $\varphi$ is the identity on $\ell$-chains with $\ell \leq
k-1$ and Equation~\eqref{eq:defphi} immediately implies that
$\partial \varphi(\sigma) = \partial \sigma$ for every $k$-simplex
$\sigma$. It follows that $\varphi$ is a chain map.
Now let $\tau$ be a $(k+1)$-simplex of $\simplex{s}$ and let
$\sigma_{i_1}, \dots, \sigma_{i_{k+2}}$ be its $k$-faces. We have
\[\begin{aligned}
f_\sharp \circ \varphi (\partial\tau) = f_\sharp \circ \varphi \left(
\sum_{j=1}^{k+2} (-1)^{k+j} \sigma_{i_j} \right) & = f_\sharp \left( \sum_{j=1}^{k+2} (-1)^{j+k}\left(\sigma_{i_j} +
(-1)^kz(\sigma_{i_j},u_{i_j})\right) \right)\\
& = f_\sharp(\partial\tau) +
\sum_{j=1}^{k+2} (-1)^jf_\sharp\bigl(z(\sigma_{i_j}, u_{i_j})\bigr).
\end{aligned}\]
Since $\vv(u_1) = \cdots = \vv(u_m)$, the class $\bigl[f_\sharp\bigl(z(\sigma_{i_j},u_{\ell})
\bigr)\bigr] = \vv_{i_j}(u_\ell)$ is independent of the value
$\ell$. When passing to the homology classes in the above identity,
we can therefore replace each $u_{i_j}$ with $u_1$, and obtain
$$
\left[f_\sharp\circ \varphi (\partial\tau)\right] =
[f_\sharp(\partial\tau)] + \sum_{j=1}^{k+2} (-1)^j
\Bigl[f_\sharp\bigl(z(\sigma_{i_j}, u_1)
\bigr)\Bigr]
=
\Bigl[f_\sharp\Bigl(
\partial \tau + \sum_{j=1}^{k+2} (-1)^j z(\sigma_{i_j}, u_1)
\Bigr)\Bigr].
$$
This class is trivial by Claim~\ref{c:boundary_tau_points}.
Figure~\ref{f:homology_trivial} illustrates the geometric
intuition behind this proof.
\end{proof}
\begin{figure}[t]
\begin{center}
\includegraphics{homology_trivial}
\caption{The geometric intuition behind the proof of
Lemma~\ref{l:zero_homology_points}, for $k = 1$ and $u_{i_1}=
u_1$ (cycles of the same color are in the same homology class; the
class on the right is trivial, because the edges cancel out in
pairs).\label{f:homology_trivial}}
\end{center}
\end{figure}
\subsection{Subdivisions and orientations}
\label{ss:subdivisions}
Our next task is the construction of $D$ and $g$; however, we first mention
a few properties of subdivisions.
Let us consider a simplicial complex $K$ and a subdivision $S$ of
$K$. (So $K$ and $S$ are regarded as geometric simplicial
complexes, and for every simplex $\eta$ of $S$ there is a simplex
$\vartheta$ of $K$ such that $\eta \subseteq \vartheta$. In this case, we say
that $\eta$ \emph{subdivides} $\vartheta$.) There is a
canonical chain map $\rho\colon C_*(K) \to C_*(S)$ that induces an
isomorphism in homology. Intuitively, $\rho$ maps a simplex
$\vartheta$ of $K$ to a sum of simplices of $S$ of the same dimension
that subdivide $\vartheta$. However, we have to be careful about the
$\pm 1$ coefficients in the sum.
We work with the ordered simplicial homology, that is, we order
the vertices of $K$ as well as the vertices of $S$.
We want to define the mutual orientation $\Or(\eta, \vartheta)
\in \{-1,1\}$ of a $j$-simplex $\vartheta$ of $K$ and a
$j$-simplex~$\eta$ of $S$ that subdivides $\vartheta$. We set up
$\Or(\eta, \vartheta)$ to be $1$ if the orientations of
$\vartheta$ and $\eta$ agree, and $-1$ if they disagree; the
orientation of each geometric simplex is computed relative to the
order of its vertices in $K$ or $S$ (with respect to a fixed base
of their common affine hull, say).
Then we set
\begin{equation}
\label{e:rho}
\rho(\vartheta) = \sum\limits_{\eta} \Or(\eta,\vartheta) \eta
\end{equation}
where the sum is over all simplices $\eta$ in $S$ of the same
dimension as $\vartheta$ which subdivide $\vartheta$. Finally, we
extend $\rho$ to a chain map. It is routine to check that $\rho$
commutes with the boundary operator and that it induces an isomorphism
on homology.
It is also useful to describe $\rho$ in the specific case where $S$ is
a stellar subdivision of a complex $K$ consisting of a single
$k$-simplex. Here, we assume that $w_1, \dots, w_{k+1}$ are the
vertices of $K$ in this order (in $K$ as well as in $S$) and $a$ is
the apex of $S$, which comes last in the order on $S$. We also
consider $S$ as a subcomplex of the $(k+1)$-simplex on $w_1, \dots,
w_{k+1}, a$. And we use the notation $z(\vartheta, a) =
\partial(\vartheta \cup \{a\})$, analogously as previously in the case
of $k$-faces of $\Delta_s$.
\begin{lemma}
\label{l:stellar_rho}
In the setting above, let $\vartheta$ be the $k$-face of $K$. Then
$\rho(\vartheta) = \vartheta + (-1)^k z(\vartheta, a)$.
\end{lemma}
\begin{proof}
Let $\eta_i := \vartheta \cup \{a\} \setminus \{w_i\}$ for $i \in [k+1]$.
Then the $\eta_i$ are precisely the $k$-faces of $S$ subdividing $\vartheta$. We have
$\Or(\eta_i,\vartheta) = (-1)^{i+k +1}$ as $\eta_i$ has the same orientation
as $\vartheta$ with respect to a modified order of vertices of $\vartheta$
obtained by replacing $w_i$ with $a$. Therefore $\rho(\vartheta) =
\sum\limits_{i=1}^{k+1} (-1)^{i+k+1} \eta_i$.
On the other hand,
$$
z(\vartheta, a) = \partial(\vartheta \cup \{a\}) = \left(\sum\limits_{i=1}^{k+1}(-1)^{i+1}
\eta_i \right) +
(-1)^{k+3} \vartheta =(-1)^k(\rho(\vartheta)-\vartheta).
$$
\end{proof}
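As a quick sanity check of the signs (this example is ours), take $k=1$ and
the edge $\vartheta=\{w_1,w_2\}$: then
$z(\vartheta,a)=\partial\{w_1,w_2,a\}=\{w_2,a\}-\{w_1,a\}+\{w_1,w_2\}$,
and Lemma~\ref{l:stellar_rho} gives
\[
\rho(\vartheta)=\vartheta-z(\vartheta,a)=\{w_1,a\}-\{w_2,a\},
\]
that is, the subdivided edge traversed from $w_1$ to $w_2$, as expected.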
\subsection{Construction of $D$ and $g$}
The definition of $\varphi$, and in particular
Equation~\eqref{eq:defphi}, suggests constructing our subdivision $D$
of $\skelsim ks$ by simply replacing every $k$-face of $\skelsim ks$
by its stellar subdivision. Let $a_i$ denote the new vertex
introduced when subdividing $\sigma_i$. We fix a linear order on the vertices
of $D$ in such a way that the vertices that also belong to
$\skelsim ks$ keep their original order, and the vertices $a_i$ follow in arbitrary order.
We define a simplicial map $\ensuremath{g_{\simp}} \colon D \to \skelsim kn$ by
putting $\ensuremath{g_{\simp}}(v) = v$ for every original vertex $v$ of $\skelsim
ks$, and $\ensuremath{g_{\simp}}(a_i) = u_i$ for $i \in [m]$. This $\ensuremath{g_{\simp}}$ induces a
map $g \colon |\skelsim ks| \to |\skelsim kn|$ on the geometric
realizations. Since the $u_i$'s are pairwise distinct, $g$ is an
embedding\footnote{We use the full strength of almost-embeddings when
proving Lemma~\ref{l:chain_p} with the better bound on $n_0$.}, so
Condition~1 of Lemma~\ref{l:chain_p} holds.
In principle, we would like to derive Condition~2 of
Lemma~\ref{l:chain_p} by observing that $g$ `induces' a
chain map from $C_*(\skelsim ks)$ to $C_*( \skelsim kn)$ that
coincides with $\varphi$. Making this a formal statement is thorny
because $g$, as a continuous map, naturally induces a chain map
$g_\sharp$ on singular rather than simplicial chains. We cannot directly
use $\ensuremath{g_{\simp}}$ either, since we are interested in a map from
$C_*(\skelsim ks)$ and not from~$C_*(D)$.
We handle this technicality as follows. We consider the chain map $\rho \colon
C_*(\skelsim ks) \to C_*(D)$ from \eqref{e:rho}.
This map induces an isomorphism~$\rho_*$ in
homology. In addition $\varphi = (\ensuremath{g_{\simp}})_\sharp \circ \rho$ where
$(\ensuremath{g_{\simp}})_\sharp \colon C_*(D) \to C_*(\skelsim kn)$ denotes the
(simplicial) chain map induced by $\ensuremath{g_{\simp}}$. Indeed, all three maps are
the identity on simplices of dimension at most $k - 1$. For a $k$-simplex
$\sigma$, the map $\ensuremath{g_{\simp}}$ is an order-preserving isomorphism when restricted
to the subdivision of $\sigma$ (in $D$). Therefore, the required equality
$\varphi(\sigma) = (\ensuremath{g_{\simp}})_\sharp \circ \rho(\sigma)$ follows
from~\eqref{eq:defphi} and Lemma~\ref{l:stellar_rho}.
We thus have in homology
$$ f_* \circ \varphi_* = f_* \circ (\ensuremath{g_{\simp}})_* \circ \rho_* $$
and since $\rho_*$ is an isomorphism and $f_* \circ \varphi_*$ is
trivial by Lemma~\ref{l:zero_homology_points}, it follows that $f_* \circ
(\ensuremath{g_{\simp}})_*$ is also trivial. Since $f_* \circ (\ensuremath{g_{\simp}})_* = (f \circ
g)_*$ by Lemma~\ref{l:commutative_diagram}, $(f \circ g)_*$ is trivial
as well. This concludes the proof of Lemma~\ref{l:chain_p} with the
weaker bound.
\section{Proof of Lemma~\ref{l:chain_p}}\label{s:strong}
We now prove Lemma~\ref{l:chain_p} with the bound claimed in the
statement, namely
$$ n_0 = \binom{s}{k}b(s-2k) + 2s - 2k + 1.$$
Let $k,b,s$ be fixed integers. We consider a $2k$-manifold $M$ with
$k$th $\Z_p$-Betti number $b$, a map $f:|\skelsim{k}{n}| \to M$, and we
assume that $n \ge n_0$.
The proof follows the same strategy as in Section~\ref{s:weak}: we
construct a chain map $\varphi\colon C_*(\skelsim ks) \to C_*(\skelsim
kn)$ such that the homology class $[(f_\sharp \circ \varphi)(\partial
\tau)]$ is trivial for all $(k+1)$-faces $\tau$ of $\Delta_s$, then
upgrade $\varphi$ to a continuous map $g\colon |\skelsim ks| \to
|\skelsim kn|$ with the desired properties.
When constructing $\varphi$, we refine the arguments of
Section~\ref{s:weak} to ``route'' each $k$-face using not only one,
but several vertices from $U$; this makes finding ``collisions''
easier, as we can use linear algebra arguments instead of the
pigeonhole principle. This comes at the cost that when upgrading $g$,
we must content ourselves with proving that it is an almost-embedding.
This is sufficient for our purpose and has an additional benefit: the
same group of vertices from $U$ may serve to route several $k$-faces
provided they pairwise intersect in $\skelsim ks$.
\subsection{Construction of $\varphi$}
We use the same notation regarding $v_1, \ldots, v_{n+1}$, $\simplex{n}$,
$\simplex{s}$, $U$, $m=\binom{s+1}{k+1}$ and $\sigma_1, \sigma_2,\ldots,
\sigma_m$ as in Section~\ref{s:weak}.
\paragraph{Multipoints and the map $\vv$.}
As we said, we plan to route the $k$-faces of $\simplex{s}$ through certain
weighted collections of vertices from $U$; we will call these collections multipoints.
It is more convenient to work with them on the level of formal linear
combinations.
Let $C_0(U)$ denote the $\Z_p$-vector space of formal linear combinations of
vertices from $U$. A
\emph{multipoint} is an element of $C_0(U)$ whose
coefficients sum
to $1$ (in $\Z_p$, of course).
The multipoints form an affine subspace of $C_0(U)$ which we
denote by $\M$. The \emph{support}, $\sup(\mu)$, of a multipoint $\mu \in \M$ is the
set of vertices $v \in U$ with non-zero coefficient in $\mu$. We say
that two multipoints are \emph{disjoint} if their supports are
disjoint.
For any $k$-face $\sigma_i$ and any multipoint $\mu = \sum_{u \in U}\lambda_uu$
we define:
$$ z(\sigma_i, \mu):=\sum_{u \in \sup(\mu)} \lambda_u z(\sigma_i,
u) := \sum_{u \in \sup(\mu)} \lambda_u \partial(\sigma_i \cup \{u\}).$$
Now, we proceed as in Section~\ref{s:weak} but replace unused
points by multipoints of $\M$ and the cycles $z(\sigma_i, u)$ with
the cycles $z(\sigma_i, \mu)$. Since $\Z_p$ is a field, $H_k(M)^m$ is
a vector space and we can replace the sequences $\vv(u)$ of
Section~\ref{s:weak} by the linear map
$$\vv: \left\{\begin{array}{rcl} \M & \to & H_k(M)^m\\
\mu & \mapsto & ([f_\sharp(z(\sigma_1,\mu))], [f_\sharp(z(\sigma_2,\mu))],\ldots,
[f_\sharp(z(\sigma_m,\mu))])\end{array}\right.$$
\paragraph{Finding collisions.}
The following lemma takes advantage of the vector space structure of
$H_k(M)^m$ and the affine structure of $\M$ to find disjoint
multipoints $\mu_1, \mu_2, \ldots$ to route the $\sigma_i$'s more
effectively than by simple pigeonhole.
\begin{lemma}\label{l:odd}
For any $r \ge 1$, any $\Z_p$-vector space $V$, and any affine map
$\psi\colon \M \to V$, if $\card(U) \ge
(\dim(\psi(\M))+1)(r-1)+1$ then $\M$ contains $r$ disjoint
multipoints $\mu_1, \mu_2, \ldots, \mu_r$ such that $\psi(\mu_1) =
\psi(\mu_2) = \cdots = \psi(\mu_r)$.
\end{lemma}
\begin{proof}
Let us write $U = \{v_{s+2}, v_{s+3}, \ldots, v_{n+1}\}$ and
$d=\dim(\psi(\M))$. We first prove by induction on $r$ the
following statement:
\begin{quote}
If $\card(U) \ge (d+1)(r-1)+1$ there exist $r$ pairwise disjoint
subsets $I_1, I_2, \ldots,I_r \subseteq U$ whose images under
$\psi$ have affine hulls with non-empty intersection.
\end{quote}
(This is, in a sense, a simple affine version of Tverberg's
theorem.) The statement is obvious for $r = 1$, so assume that $r
\geq 2$ and that the statement holds for $r-1$. Let $A$ denote the
affine hull of $\{\psi(v_{s+2}), \psi(v_{s+3}),\dots, \psi(v_{n+1})\}$
and let $I_r$ denote a minimal cardinality subset of $U$ such that
the affine hull of $\{\psi(v): v \in I_r\}$ equals $A$. Since $\dim
A \le d$ the set $I_r$ has cardinality at most $d+1$. The cardinality
of $U \setminus I_r$ is at least $(d+1)(r-2)+1$ so we can apply the
induction hypothesis for $r-1$ to $U \setminus I_r$. We thus obtain
$r-1$ disjoint subsets $I_1, I_2, \ldots, I_{r-1}$ whose images
under $\psi$ have affine hulls with non-empty intersection. Since
the affine hull of $\psi(U \setminus I_r)$ is contained in the
affine hull of $\psi(I_r)$, the claim follows.
Now, let $a \in V$ be a point common to the affine hulls of
$\psi(I_1), \psi(I_2), \ldots,\psi(I_r)$. Writing $a$ as an affine
combination in each of these spaces, we get
$$ a = \sum_{u \in J_1}\lambda^{(1)}_u\psi(u) = \sum_{u \in
J_2}\lambda^{(2)}_u\psi(u) = \cdots = \sum_{u \in
J_r}\lambda^{(r)}_u\psi(u)$$
where $J_j\subseteq I_j$ and $\sum_{u \in J_j}\lambda^{(j)}_u =1$ for any $j \in [r]$.
Setting $\mu_j = \sum_{u
\in J_j} \lambda_u^{(j)} u$ finishes the proof.
\end{proof}
\paragraph{Computing the dimension of $\vv(\M)$.}
Having in mind to apply Lemma~\ref{l:odd} with $V = H_k(M)^m$ and
$\psi = \vv$, we now need to bound from above the dimension of
$\vv(\M)$. An obvious upper bound is $\dim H_k(M)^m$, which equals $bm
= b\binom{s+1}{k+1}$. A better bound can be obtained by an argument
analogous to the proof of Lemma~\ref{l:zero_homology_points}. We first
extend Claim~\ref{c:boundary_tau_points} to multipoints.
\begin{claim}
\label{c:boundary_tau_multipoints}
Let $\tau$ be a $(k+1)$-face of $\simplex s$ and let $\mu \in \M$. Let
$\sigma_{i_1}, \dots, \sigma_{i_{k+2}}$ be all the $k$-faces of $\tau$
sorted lexicographically. Then
\begin{equation}
\label{e:boundary_tau_multipoints}
\partial \tau =
z(\sigma_{i_1}, \mu) - z(\sigma_{i_2}, \mu) + \cdots +
(-1)^{k+1}z(\sigma_{i_{k+2}},\mu).
\end{equation}
\end{claim}
\begin{proof}
By Claim~\ref{c:boundary_tau_points} we know that
\eqref{e:boundary_tau_multipoints} is true for points. For a multipoint
$\mu = \sum_{u \in U}\lambda_u u$, we get \eqref{e:boundary_tau_multipoints} as a linear combination of
equations~\eqref{e:boundary_tau_points} for the points $u$ with the `weight' $\lambda_u$ (note that
$\sum_{u \in U} \lambda_u = 1$; therefore the corresponding combination of
the left-hand sides of~\eqref{e:boundary_tau_points} equals $\partial
\tau$).
\end{proof}
\begin{lemma}
\label{l:im_vk}
$\dim (\vv(\M)) \le b\binom sk$.
\end{lemma}
\begin{proof}
Let $\tau$ be a $(k+1)$-face of $\Delta_s$ and let $\sigma_{i_1},
\dots, \sigma_{i_{k+2}}$ denote its $k$-faces.
For any multipoint $\mu$, Claim~\ref{c:boundary_tau_multipoints} implies
$$ [f_\sharp(\partial\tau)] = \sum_{j=1}^{k+2} (-1)^{j+1} [f_\sharp(z(\sigma_{i_j},
\mu))] = \sum_{j=1}^{k+2} (-1)^{j+1}\vv_{i_j}(\mu);$$
therefore
$$\qquad \vv_{i_{k+2}}(\mu) = (-1)^{k+1}[f_\sharp(\partial\tau)] +
\sum_{j=1}^{k+1} (-1)^{j+k+1}\vv_{i_j}(\mu).$$
Each vector
$\vv(\mu)$ is thus determined by the values of the $\vv_{j}(\mu)$'s
where $\sigma_j$ contains the vertex $v_1$. Indeed, the vectors
$[f_\sharp(\partial\tau)]$ are independent of $\mu$, and for any
$\sigma_i$ not containing $v_1$ we can eliminate $\vv_i(\mu)$ by
considering $\tau := \sigma_i \cup \{v_1\}$ (and setting $\sigma_{i_{k+2}} =
\sigma_i$). For each of the
$\binom{s}{k}$ faces $\sigma_j$ that contain $v_1$, the vector
$\vv_{j}(\mu)$ takes values in $H_k(M)$ which has dimension at most
$b$. It follows that $\dim \vv(\M) \leq b\binom{s}{k}$.
\end{proof}
\paragraph{Coloring graphs to reduce the number of multipoints used.}
We could now apply Lemma~\ref{l:odd} with $r=m$ to obtain one
multipoint per $k$-face, all pairwise disjoint, to proceed with our
``routing''. As mentioned above, however, we only need that $\varphi$
is an almost-embedding, so we can use the same multipoint for several
$k$-faces provided they pairwise intersect. Optimizing the number of
multipoints used reformulates as the following graph coloring
problem:
\begin{quote}
Assign to each $k$-face $\sigma_i$ of $\Delta_s$ some color $c(i)
\in \N$ such that $\card\{c(i): 1 \le i \le m\}$ is minimal and
disjoint faces use distinct colors.
\end{quote}
\noindent
This question is classically known as Kneser's graph coloring
problem and an optimal solution uses $s-2k+1$ colors~\cite{lovasz78,
Matousek:BorsukUlam-2003}. Let us spell out one such coloring
(proving its optimality is considerably more difficult, but we do not
need to know that it is optimal). For every $k$-face $\sigma_i$ we let
$\min \sigma_i$ denote the smallest index of a vertex in $\sigma_i$.
When $\min \sigma_i \leq s - 2k$ we set $c(i) = \min \sigma_i$,
otherwise we set $c(i) = s - 2k + 1$. Observe that any $k$-face with
color $c \leq s-2k$ contains vertex $v_c$. Moreover, the $k$-faces
with color $s - 2k +1$ consist of $k+1$ vertices each, all from a set
of $2k+1$ vertices. It follows that any two $k$-faces with the same
color have some vertex in common.
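This explicit coloring is easy to experiment with. The following Python
sketch (our own illustration, not part of the construction) computes $c$ for
small parameters and verifies that faces sharing a color pairwise intersect:
\begin{verbatim}
from itertools import combinations

def kneser_coloring(s, k):
    # k-faces of the s-simplex on vertices 1..s+1, colored as in the text:
    # c = min vertex if min <= s-2k, else the single extra color s-2k+1.
    color = {}
    for f in combinations(range(1, s + 2), k + 1):
        m = min(f)
        color[f] = m if m <= s - 2 * k else s - 2 * k + 1
    return color

def check(s, k):
    color = kneser_coloring(s, k)
    for f, g in combinations(color, 2):
        if color[f] == color[g]:            # same color ...
            assert set(f) & set(g), (f, g)  # ... must intersect
    return len(set(color.values()))

print(check(7, 1))  # prints 6 = s - 2k + 1
\end{verbatim}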
\paragraph{Defining $\varphi$.}
We are finally ready to define the chain map $\varphi\colon
C_*(\skelsim{k}{s}) \to C_*(\skelsim{k}{n})$. Recall that we assume
that $n \ge n_0 = (\binom{s}{k}b+1)(r-1) + s +1$. Using the bound of
Lemma~\ref{l:im_vk} we can apply Lemma~\ref{l:odd} with $r = s - 2k +
1$, obtaining $s-2k+1$ multipoints $\mu_1, \mu_2, \ldots, \mu_{s-2k+1}
\in \M$. We set $\varphi(\vartheta) = \vartheta$ for any face
$\vartheta$ of $\simplex{s}$ of dimension less than $k$. We then
``route'' each $k$-face $\sigma_i$ through the multipoint $\mu_{c(i)}$
by putting
\begin{equation}
\label{e:varphi}
\hfill
\varphi(\sigma_i):= \sigma_i + (-1)^k z(\sigma_i,\mu_{c(i)}),
\hfill
\end{equation}
where $c(i)$ is the color of $\sigma_i$ in the coloring of the Kneser
graph proposed above. We finally extend $\varphi$ linearly to
$C_*(\Delta_s)$.
We need the following analogue of Lemma~\ref{l:zero_homology_points}.
\begin{lemma}\label{l:zero_homology_multipoints}
The map $\varphi$ is a chain map and $\bigl[f_\sharp\bigl(\varphi(\partial
\tau)\bigr)\bigr]= 0$ for every $(k+1)$-face
$\tau\in\simplex{s}$.
\end{lemma}
The proof of Lemma~\ref{l:zero_homology_multipoints} is very similar to the
proof of Lemma~\ref{l:zero_homology_points}; it just replaces points with
multipoints and Claim~\ref{c:boundary_tau_points} with
Claim~\ref{c:boundary_tau_multipoints}. We therefore omit the proof.
We next argue that $\varphi$ behaves like an almost embedding.
\begin{lemma}
\label{l:chain_ae}
For any two disjoint faces $\vartheta, \eta$ of $\skelsim{k}{s}$,
the supports of $\varphi(\vartheta)$ and $\varphi(\eta)$ use disjoint
sets of vertices.
\end{lemma}
\begin{proof}
Since $\varphi$ is the identity on chains of dimension at most
$(k-1)$, the statement follows if neither face has dimension $k$.
For any $k$-chain $\sigma_i$, the support of $\varphi(\sigma_i)$
uses only vertices from $\sigma_i$ and from the support of
$\mu_{c(i)}$. Since each $\mu_{c(i)}$ has support in $U$, which
contains no vertex of $\simplex s$, the statement also holds when
exactly one of $\vartheta$ or $\eta$ has dimension $k$. When both
$\vartheta$ and $\eta$ are $k$-faces, their disjointness implies
that they use distinct $\mu_j$'s, and the statement follows from the
fact that distinct $\mu_j$'s have disjoint supports.
\end{proof}
\subsection{Construction of $D$ and $g$}
We define $D$ and $g$ similarly to Section~\ref{s:weak}, but the
switch from points to multipoints requires replacing stellar
subdivisions with a slightly more complicated decomposition.
\begin{figure}
\begin{center}
\includegraphics{subdivision_S}
\end{center}
\caption{Examples of subdivisions for $k=1$ and $\ell=3$ (left) and
for $k=2$ and $\ell=5$ (right). The bottom pictures show the orientations
of $|X_i|$ in the given ordering.
\label{f:subdivide}}
\end{figure}
\paragraph{The subdivision $D$.}
We define $D$ so that it coincides with $\simplex s$ on the faces of
dimension at most $(k-1)$ and decomposes each face of dimension $k$
independently. The precise subdivision of a $k$-face $\sigma_i$
depends on the cardinality of the support of the multipoint $\mu_{c(i)}$
used to ``route'' $\sigma_i$ under $\varphi$, but the method is
generic and spelled out in the next lemma; refer to
Figure~\ref{f:subdivide}.
\begin{lemma}
\label{l:subdivide}
Let $k \geq 1$ and $\sigma = \{w_1, w_2, \ldots, w_{k+1}\}$ be a
$k$-simplex. For any odd integer $\ell \ge 1$ there exists a
subdivision $S$ of $\sigma$ in which no face of dimension $k-1$ or
less is subdivided, and a labelling of the vertices of $S$ by
$\{w_1, w_2, \dots, w_{k+1}, x_1, x_2, \dots, x_\ell\}$ (some labels may
appear several times) satisfying the following properties.
\begin{enumerate}
\item Every vertex in $S$ corresponding to an original vertex $w_i$
of $\sigma$ is labelled by $w_i$.
\item No $k$-face of $S$ has its vertices labelled by $w_1, w_2,
\ldots, w_{k+1}$.
\item For every $j \in [\ell]$, the subdivision $S$ contains exactly one vertex
labelled by $x_j$; this vertex appears in a copy $X_j$ of a
stellar subdivision of a simplex labelled by $w_1, \ldots, w_{k+1}$ with the apex
labelled $x_j$.
\item Let us equip vertices of $S$ with a linear order which respects the
order $w_1 \leq w_2 \leq \cdots \leq w_{k+1} \leq x_1 \leq \cdots \leq
x_\ell$ of the labels. For each $j \in [\ell]$ considering $|X_j|$ as a simplex
in $|S| = |\sigma|$, such $|X_j|$ is oriented coherently with $|\sigma|$
(in the given ordering) if and only if $j$ is odd.
\end{enumerate}
\end{lemma}
\begin{proof}
This proof is done in the language of geometric simplicial complexes (rather
than abstract ones).
The case $\ell=1$ can be done by a stellar subdivision and labelling
the added apex $x_1$. The case $k=1$ is easy, as illustrated in
Figure~\ref{f:subdivide} (left). We therefore assume that $k \ge 2$
and build our subdivision and labelling in four steps:
\begin{itemize}
\item We start with the boundary of our simplex $\sigma$ where each
vertex $w_i$ is labelled by itself. Let $\vartheta$ be the $(k-1)$-face
of $\partial \sigma$ opposite the vertex $w_2$, \emph{i.e.,} labelled by
$w_1,w_3,w_4,\dots, w_{k+1}$. We create a vertex in the interior of $\sigma$, label it $w_2$,
and construct a new simplex $\sigma'$ as the join of $\vartheta$
and this new vertex; this is the dark simplex in
Figure~\ref{f:subdivide} (right).
\item We then subdivide $\sigma'$ by considering $\ell - 1$ distinct
hyperplanes passing through the vertices of $\sigma'$ labelled $w_3,
w_4, \ldots, w_{k+1}$ and through distinct interior points of the edge of
$\sigma'$ labelled $w_1, w_2$. These hyperplanes subdivide $\sigma'$
into $\ell$ smaller simplices.
We label the new interior vertices on the edge of
$\sigma'$ labelled $w_1, w_2$ alternately by $w_1$ and $w_2$;
since $\ell$ is odd, we can do so in such a way that every sub-edge is
bounded by one vertex labelled $w_1$ and one labelled $w_2$.
\item We operate a stellar subdivision of each of the $\ell$ smaller
simplices subdividing $\sigma'$, and label the added
apices $x_1, x_2, \ldots, x_\ell$. This way we obtain a subdivision $S'$ of
$\sigma'$.
\item We finally consider each face $\eta$ of $S'$ subdividing $\partial
\sigma'$ and other than $\vartheta$
and add the simplex formed by $\eta$ and
the (original) vertex $w_2$ of $\sigma$. These simplices, together with $S'$,
form the desired subdivision $S$ of $\sigma$.
\end{itemize}
\noindent
It follows from the construction that no face of $\partial
\sigma$ was subdivided.
Property~1 is enforced in the first step and preserved
throughout. We can verify that Property~2 holds as follows.
First, we have that any $k$-simplex of $S'$ contains a vertex $x_j$ for some $j
\in [\ell]$. Next, if we consider a $k$-simplex of $S$ which is not in $S'$ it
is a join of a certain $(k-1)$-simplex $\eta$ of $S'$, with $\eta \subset \partial \sigma'$, and the vertex $w_2$ of $\sigma$.
However, the only such $(k-1)$-simplex labelled by $w_1, w_3, w_4, \dots,
w_{k+1}$ is $\vartheta$, but the join of $\vartheta$ and $w_2$ does not belong
to $S$.
Properties~3 and~4 are enforced by the stellar
subdivisions of the third step and by alternating the labels $w_1$ and $w_2$ in
the second step. No other step creates, destroys or
modifies any simplex involving a vertex labelled $x_j$.
\end{proof}
Let $S$ be the subdivision of a simplex $\sigma$ from Lemma~\ref{l:subdivide}. As in the case of Lemma~\ref{l:stellar_rho}, we need to describe the
chain map $\rho\colon C_*(\sigma) \to C_*(S)$ defined by formula~\eqref{e:rho}.
Actually, only partial information will be sufficient for us; we focus on
the $k$-simplices of the $X_j$.
Since for every $j \in [\ell]$, the apex of $X_j$ is the only vertex labelled by
$x_j$, we can use $x_j$ as the name for the apex. Let $\vartheta_j$ be the
$k$-simplex on the vertices of $X_j$ other than $x_j$. Note that this simplex
does not belong to $S$. Following the usual pattern, we also denote
$z(\vartheta_j, x_j) := \partial(\vartheta_j \cup \{x_j\})$.
\begin{lemma}
\label{l:rho_S}
In the setting above,
\begin{equation}
\label{e:rho_S}
\rho(\sigma) = \sum\limits_{j=1}^{\ell} (-1)^{j+1}\left(\vartheta_j + (-1)^k z(\vartheta_j, x_j)
\right) + \sum\limits_{\eta} \Or(\eta,\sigma) \eta
\end{equation}
where the second sum is over all $k$-simplices of $S$ which do not belong to
any $X_j$.
\end{lemma}
\begin{proof}
We expand $\rho(\sigma)$ via~\eqref{e:rho}; however, we further shift the
$k$-simplices in some of the $X_j$ to the first sum in~\eqref{e:rho_S}. This
is done via Lemma~\ref{l:stellar_rho} on each of the $X_j$; the correction
term $(-1)^{j+1}$ comes from Property~4 of Lemma~\ref{l:subdivide}.
\end{proof}
The subdivision $D$ of $\skelsim ks$ is now defined as follows. First,
we leave the $(k-1)$-skeleton untouched. Next, for each $k$-simplex $\sigma_i$
we consider the multipoint $\mu = \mu_{c(i)} = \sum_{u \in U} \lambda_u
u$ (leaving
the dependence on the index $i$ implicit in the affine combination). We
recall that $\lambda_u$ are elements of $\Z_p$; however, we temporarily
consider them as elements of $\Z$, in the interval $\{0, 1, \dots, p-1\}$. We
consider some $u' \in U$, which belongs to the support of $\mu$, and we set $\kappa_u := \lambda_u$ for any $u \in U
\setminus \{u'\}$ (as elements of $\Z$) whereas we set $\kappa_{u'} := 1 -
\sum_{u \in U \setminus \{u'\}} \lambda_u$. It follows that $\kappa_u \equiv
\lambda_u \pmod p$ for any $u \in U$ as $\sum_{u \in U} \lambda_u \equiv 1
\pmod p$ (they sum to $1$ as elements of $\Z_p$). Next, we set $\ell_i :=
\sum_{u \in U} |\kappa_u|$. It follows that $\ell_i$ is odd, and we set $S(i)$
to be the subdivision of $\sigma_i$ obtained from Lemma~\ref{l:subdivide} with
$\ell := \ell_i$. The final subdivision $D$ is obtained by subdividing each
$\sigma_i$ this way. For working with the chains, we need to specify a global
linear order on the vertices of $D$. We pick an arbitrary such order that respects
the prescribed order on each $S(i)$.
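The integer lift $\kappa$ of a multipoint is straightforward to compute. The
following Python sketch (our own illustration) produces the $\kappa_u$ from
coefficients $\lambda_u \in \{0,\dots,p-1\}$ and checks that
$\sum_u \kappa_u = 1$ and that $\ell = \sum_u |\kappa_u|$ is odd:
\begin{verbatim}
def lift_multipoint(lams, p):
    # lams: coefficients of mu as integers in {0,...,p-1}, sum = 1 (mod p).
    # Returns integers kappa with kappa_u = lams_u (mod p), sum(kappa) = 1,
    # and ell = sum |kappa_u|; ell is odd since
    # sum |kappa_u| = sum kappa_u = 1 (mod 2).
    assert sum(lams) % p == 1
    up = next(i for i, l in enumerate(lams) if l != 0)  # u' in supp(mu)
    kappa = list(lams)
    kappa[up] = 1 - (sum(lams) - lams[up])
    ell = sum(abs(x) for x in kappa)
    assert sum(kappa) == 1 and ell % 2 == 1
    return kappa, ell

print(lift_multipoint([2, 2, 0, 0], 3))  # ([-1, 2, 0, 0], 3)
\end{verbatim}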
According to this subdivision, we have a chain map $\rho \colon C_*(\skelsim
ks) \to C_*(D)$ defined in Subsection~\ref{ss:subdivisions}. On faces of
dimension at most $(k-1)$ it is an identity; on $k$-faces, it is determined by
the formula from Lemma~\ref{l:rho_S}.
\paragraph{The simplicial map $\ensuremath{g_{\simp}}$.}
We now define a simplicial map $\ensuremath{g_{\simp}}\colon D \to \skelsim kn$. We
first set $\ensuremath{g_{\simp}}(v) = v$ for every vertex $v$ of $\simplex s$.
Next, we consider some $k$-face $\sigma_i = \{w_1, \dots, w_{k+1}\}$. We
denote by $v_1, v_2, \ldots, v_{k+1}$ the
vertices on the boundary of $S(i)$, it being understood that each
$v_j$ is labelled by $w_j$. We map each interior vertex of $S(i)$ labelled
with $w_j$ to $w_j$. It remains to map interior vertices of $S(i)$ labelled
$x_j$ for $j \in [\ell]$. Using the notation from the definition of $D$, we
consider the integers $\kappa_u$ for $u \in U$ (with respect to our
$\sigma_i$). If $\kappa_u > 0$, then we pick $\kappa_u$ vertices $x_j$ with $j$
odd and we map them to $u$. If $\kappa_u < 0$, which may happen only for $u =
u'$ (coming again from the definition of $D$), then we pick $-\kappa_u$
vertices $x_j$ with $j$ even and we map them to $u$. Of course, for two
distinct elements $u_1$ and $u_2$ from $U$ we pick distinct points $x_j$. The
parameter $\ell = \ell_i$ is set up exactly in such a way that we cover all
$x_j$. Now we need to know that $\rho$ and $\ensuremath{g_{\simp}}$ compose to $\varphi$ on the
level of chains.
\begin{lemma}
$(\ensuremath{g_{\simp}})_{\sharp} \circ \rho = \varphi$.
\end{lemma}
\begin{proof}
All three maps are the identity on $\skelsim{k-1}s$ so let us focus
on the $k$-faces. Consider a $k$-face $\sigma_i$, the value $\rho(\sigma_i)$ is
given by the formula in Lemma~\ref{l:rho_S} with $S = S(i)$. However, for expressing
$(\ensuremath{g_{\simp}})_{\sharp} \circ \rho(\sigma_i)$ we may ignore the second sum in
formula~\eqref{e:rho_S} since a $k$-simplex $\eta$ of $S$ that does not belong to
any $X_j$ contains two vertices with the same label by Lemma~\ref{l:subdivide},
which implies that $(\ensuremath{g_{\simp}})_{\sharp}(\eta) = 0$.
Therefore
\begin{equation}
\label{e:g_rho}
(\ensuremath{g_{\simp}})_{\sharp} \circ \rho(\sigma_i) =
(\ensuremath{g_{\simp}})_{\sharp} \Bigl(
\sum\limits_{j=1}^{\ell} (-1)^{j+1}\left(\vartheta_j + (-1)^k z(\vartheta_j, x_j)
\right) \Bigr) = \sum\limits_{u \in U} \kappa_u(\sigma_i + (-1)^k z (\sigma_i,
u)).
\end{equation}
The last equality follows from the definition of $\ensuremath{g_{\simp}}$ considering that
$\ensuremath{g_{\simp}}$ preserves the prescribed linear orders on $D$ and $\skelsim kn$. The
sign $(-1)^{j+1}$ disappears as the vertices $x_j$ with $j$ even contribute to
$\kappa_u$ with the opposite sign. We know that $\kappa_u \equiv \lambda_u \pmod p$
and that $\sum_{u \in U} \kappa_u = 1$. Therefore the expression on the
right-hand side of~\eqref{e:g_rho} equals $\sigma_i + (-1)^k z(\sigma_i, \mu)$,
that is, $\varphi(\sigma_i)$ as required.
\end{proof}
\paragraph{The continuous map $g$.}
Since $D$ is a subdivision of $\skelsim ks$, we have $|\skelsim ks| =
|D|$ and the simplicial map $\ensuremath{g_{\simp}}\colon D \to \skelsim kn$ induces a
continuous map $g \colon |\skelsim ks| \to |\skelsim kn|$. All that
remains to do is check that $g$ satisfies the two conditions of
Lemma~\ref{l:chain_p}. Condition~1 follows from a direct translation of
Lemma~\ref{l:chain_ae}; note that in the definition of $\ensuremath{g_{\simp}}$ we map $x_j$ to
$u \in U$ only if $\kappa_u \neq 0$. Condition~2 can be verified by a computation
in the same way as in Section~\ref{s:weak}. Specifically, in homology we have
$$ f_* \circ \varphi_* = f_* \circ (\ensuremath{g_{\simp}})_* \circ \rho_*$$
and we know that $f_* \circ \varphi_*$ is trivial on $\skelsim ks$ by
Lemma~\ref{l:zero_homology_multipoints}. As $\rho_*$ is an
isomorphism, this implies that $f_* \circ (\ensuremath{g_{\simp}})_*$ is trivial.
Lemma~\ref{l:commutative_diagram} then implies that $(f \circ g)_*$ is
trivial. This concludes the proof of Lemma~\ref{l:chain_p}.
\bigskip
\begin{acknowledgement} U.W.\ learned about Conjecture~\ref{c:kuhnel}
from Wolfgang K\"uhnel when attending the \emph{Mini Symposia on
Discrete Geometry and Discrete Topology} at the \emph{Jahrestagung
der Deutschen Mathematiker-Vereinigung} in M\"unchen in 2010. He
would like to thank the organizers Frank Lutz and Achill Sch\"urmann
for the invitation, and Prof.~K\"uhnel for stimulating discussions.
\end{acknowledgement}
\bibliographystyle{alpha}
The pioneering work of K. C. Kulander in the late 1980s \cite{kulander,kulander1} has paved the way for the numerical solution of the time-dependent
Schr\"{o}dinger equation (TDSE) to become a very important and powerful tool for studying the laser-atom interaction and related strong-field phenomena. The constant increase in computer power and processor speed of personal computers in the last thirty years has led to the development of numerous numerical methods for solving the TDSE (see, for example, \cite{tong_gs, muller, nurhuda, bengtsson, peng, gordon, telnov}).
Nowadays, many software codes are available for studying processes such as multiphoton ionization, above-threshold ionization,
high-order above-threshold ionization, and high-order harmonic generation \cite{qprop, altdse, cltdse, scid-tdse}.
All these methods have one thing in common, namely the TDSE is solved within the single-active-electron (SAE) approximation for a model
atom, while the laser-atom interaction is treated in dipole approximation, either using the length or the velocity gauge form
of the interaction operator.
Propagation of an initial bound state under the influence of a strong laser field is only one part of
the problem. Extraction of the physical observables at the end of the laser pulse poses another challenging task. Modern-day
photoionization experiments designed for recording photoelectron spectra (PES) can be used to simultaneously measure the photoelectron
kinetic energy and its angular distribution (see, for example, \cite{holo, pad_exp1, pad_exp2}). As the resolution of these experimental
techniques increased, the theoretical calculation of highly accurate PES from \emph{ab initio} methods such as numerical solution of the
TDSE became essential in order to distinguish different mechanisms that play a role in a photoionization process.
Formally exact PES for a one-electron photoionization process can be calculated by projecting the time-dependent wave function at the end of the laser pulse onto the continuum states of the field-free Hamiltonian. We call this method the PCS (Projection onto Continuum States) method.
For long laser pulses at near-infrared wavelengths and moderate intensities the photoelectron can travel very far away from the origin.
In order to include the fastest photoelectrons the volume within which the wave function is simulated has to be very large.
Another deficiency of the PCS method is that the continuum states, onto which we project the solutions
of the TDSE at the end of the laser pulse, are analytically known only for the pure Coulomb potential, while for non-Coulomb potentials
they have to be obtained numerically. That is why many approximative methods for extracting PES with no need to calculate the continuum
states have emerged in the last three decades. One of the earliest methods used for extracting the PES from the time-dependent wave
function is the so-called window-operator (WO) method \cite{wop}. It has been successfully used in the past for PES calculations for
atomic targets exposed to a strong laser field \cite{wop_app}. Recently, the WO method has also been used for studying high-order
above-threshold ionization of the H$_2^+$ molecular ion \cite{fetic_mhati}.
There is also the so-called tSURFF method \cite{tsurff}, which is designed to replace the projection onto continuum states
with a time integral of the outer-surface flux, allowing one to use a much smaller simulation volume. An extension of the tSURFF method called
the iSURF method \cite{morales} has also been used for calculating PES. Another way of calculating PES without explicit calculation of the
continuum states is to propagate the wave function under the influence of the field-free Hamiltonian for some period of time after the
laser pulse has been turned off, so that even the slowest parts of the wave function have reached the asymptotic zone \cite{madsen}. However, for neutral atomic targets
this method requires a large spatial grid to include the part of the wave function associated with
the fastest photoelectrons.
From a numerical point of view, the above-mentioned approximative methods may be appealing since they are less time consuming than the
exact PCS method. However, they can mask some fine details in the PES due to neglecting the nature of the continuum
state associated with a photoelectron. Therefore, approximative methods used for extracting PES from the wave function have to be checked
for consistency by comparing with the exact method. In this paper we compare the results obtained using the exact PCS method with those
obtained with the WO method.
This paper is organized as follows. In Sec.~\ref{sec:num} we first describe our numerical method for solving the Schr\"{o}dinger equation.
Next, we introduce two methods for extracting the PES from the time-dependent wave function: projection onto continuum states
and the window-operator method. In Sec.~\ref{sec:results} we present our results for PES obtained by these two methods. We compare
results for three different targets, the fluorine negative ion and the hydrogen and argon atoms, modeled by different types of binding
potential. Finally, we summarize our results and give conclusions in Sec.~\ref{sec:sum}.
Atomic units (a.u.; $\hbar=1$, $4\pi\varepsilon_0=1$, $e=1$, and $m_e=1$) are used throughout the paper, unless otherwise stated.
\section{Numerical methods}\label{sec:num}
\subsection{Method of solving the Schr\"{o}dinger equation}\label{subsec:tdse}
We start by solving the stationary Schr\"{o}dinger equation for an arbitrary spherically symmetric binding potential $V(\mathbf{r})=V(r)$
in spherical coordinates:
\begin{equation}
H_0\psi(\mathbf{r})=E\psi(\mathbf{r}),\quad H_0=-\frac{1}{2}\nabla^{2}+V(r).
\end{equation}
We are looking for solutions in the form
\begin{equation}
\psi_{n\ell m}(\mathbf{r})=\frac{u_{n\ell}(r)}{r}Y_\ell^m(\Omega),\quad \Omega\equiv (\theta,\varphi),
\end{equation}
where the $Y_{\ell}^{m}(\Omega)$ are spherical harmonics. The radial function $u_{n\ell}(r)$ is a solution of the radial Schr\"{o}dinger
equation:
\begin{equation}
H_\ell(r)u_{n\ell}(r) = E_{n\ell}u_{n\ell}(r), \label{tdse:rad}
\end{equation}
\begin{equation}
H_\ell(r)=-\frac{1}{2}\frac{d^{2}}{dr^{2}}+V(r)+\frac{\ell(\ell+1)}{2r^{2}},\label{tdse:rad1}
\end{equation}
where $n$ is the principal quantum number and $\ell$ is the orbital quantum number. For bound states with the energy $E_{n\ell}<0$ the
corresponding radial wave function $u_{n\ell}(r)$ has to obey the boundary conditions $u_{n\ell}(0)=0$ and $u_{n\ell}(r) \to 0$ for
$r\to\infty$. The radial equation (\ref{tdse:rad}) is solved numerically in the interval $[0,r_{\max}]$ by expanding the radial
function into the B-spline basis set as
\begin{equation}
u_{n\ell}(r) = \sum_{j=2}^{N-1}c_{j}^{n\ell}B_{j}^{(k_{s})}(r),\label{tdse:bsp}
\end{equation}
where $N$ represents the number of B-spline functions in the domain $[0,r_{\max}]$ and $k_s$ is the order of the B-spline function.
All results presented in this paper have been obtained using the order $k_s=10$ and for simplicity we omit it in further expressions.
Since we require that the radial function vanishes at the boundary, we exclude the first and the last B-spline function
in the expansion (\ref{tdse:bsp}). For more details on the properties of the B-spline basis, see \cite{Bachau}.
Inserting (\ref{tdse:bsp}) into (\ref{tdse:rad}), multiplying the obtained equation with $B_{i}(r)$, and integrating over the radial
coordinate for fixed orbital quantum number $\ell$, we obtain a generalized eigenvalue problem in the form of a matrix equation:
\begin{equation}
\mathbf{H}_{0}^{\ell}\mathbf{c}^{n\ell}=E\mathbf{S}\mathbf{c}^{n\ell},\label{tdse:eigen}
\end{equation}
where
\begin{equation}
\left(\mathbf{H}_0^\ell\right)_{ij}=\int_0^{r_{\max}}B_i(r)H_\ell(r)B_j(r)dr,
\end{equation}
\begin{equation}
(\mathbf{S})_{ij}=\int_{0}^{r_{\max}}B_{i}(r)B_{j}(r)dr.
\end{equation}
The overlap matrix $\mathbf{S}$ originates from the fact that the B-spline functions do not form an orthogonal basis set. All integrals
involving B-spline functions are calculated with the Gauss-Legendre quadrature rule. Using standard diagonalization procedure for
solving (\ref{tdse:eigen}) we obtain the ground-state energy and the corresponding eigenvector, which is used as an initial state in the TDSE.
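In an implementation, this step amounts to a single call to a generalized
symmetric eigensolver. A minimal Python sketch (ours; it assumes the
matrices $\mathbf{H}_0^\ell$ and $\mathbf{S}$ have already been assembled
from the B-spline integrals):
\begin{verbatim}
from scipy.linalg import eigh

def bound_states(H0, S, n_states=5):
    # Solve the generalized eigenproblem H0 c = E S c for one ell-block;
    # H0 and S are dense symmetric arrays built from B-spline integrals.
    E, C = eigh(H0, S)
    return E[:n_states], C[:, :n_states]  # lowest states; E < 0 are bound
\end{verbatim}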
In order to describe the laser-atom interaction we numerically solve the time-dependent Schr\"{o}dinger equation:
\begin{equation}
i\frac{\partial\Psi(\mathbf{r},t)}{\partial t}=\left[H_0+V_I(t)\right]\Psi(\mathbf{r},t),\label{tdse}
\end{equation}
where $V_I(t)$ is the interaction operator in the dipole approximation and velocity gauge. We assume that the laser field is linearly
polarized along the $z$ axis, so that the interaction operator can be written as
\begin{eqnarray}
V_I(t)=-i\mathbf{A}(t)\cdot\mathbf{\nabla}=-iA(t)\left(\cos\theta\frac{\partial}{\partial r}-\frac{\sin\theta}{r}\frac{\partial}{\partial\theta}\right),
\end{eqnarray}
where $A(t)=-\int^{t}E(t')dt'$ and $E(t)$ is the electric field given by
\begin{eqnarray}
E(t)=E_0\sin^2\left(\frac{\omega t}{2N_c}\right)\cos(\omega t),\quad t\in[0,T_p],
\end{eqnarray}
where $\omega=2\pi/T$ is the laser-field frequency and $T_p=N_cT$ is the pulse duration, with $N_c$ the number of
optical cycles. The amplitude $E_0$ is related to the intensity $I$ of the laser field by the relation $E_0=\sqrt{I/I_A}$ where
$I_A=3.509\times 10^{16}~\text{W}/\text{cm}^{2}$ is the atomic unit of intensity.
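For reference, a minimal Python sketch of this pulse (ours; the
wavelength-to-frequency conversion constant is an assumption of the sketch)
evaluates $E(t)$ on a grid and obtains $A(t)$ by cumulative trapezoidal
quadrature:
\begin{verbatim}
import numpy as np

I_AU = 3.509e16                        # atomic unit of intensity (W/cm^2)

def pulse(I, lam_nm, Nc, n_steps=20000):
    # sin^2 pulse of the text; A(t) = -int_0^t E dt' by trapezoidal sums.
    E0 = np.sqrt(I / I_AU)             # field amplitude in a.u.
    omega = 45.5633 / lam_nm           # photon energy in a.u. (assumed
                                       # conversion constant)
    t = np.linspace(0.0, Nc * 2 * np.pi / omega, n_steps)
    E = E0 * np.sin(omega * t / (2 * Nc))**2 * np.cos(omega * t)
    dA = (E[1:] + E[:-1]) * np.diff(t) / 2
    A = -np.concatenate(([0.0], np.cumsum(dA)))
    return t, E, A
\end{verbatim}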
The TDSE is solved by expanding the time-dependent wave function in the basis of B-spline functions and spherical harmonics:
\begin{equation}
\Psi(r,\Omega, t) = \sum_{j=2}^{N-1}\sum_{\ell=0}^{L-1} c_{j\ell}(t)\frac{B_{j}(r)}{r}Y_{\ell}^{m_0}(\Omega),
\label{tdse:expan}
\end{equation}
where the expansion coefficients $c_{j\ell}(t)$ are time-dependent. For a linearly polarized laser field, the magnetic quantum number is
constant and we set it equal to $m_0=0$.
Inserting the expansion (\ref{tdse:expan}) into (\ref{tdse}), multiplying the obtained result by $B_{i}(r)Y_{\ell'}^{m_{0}*}(\Omega)/r$, and integrating
over the spherical coordinates, we obtain the TDSE in the form of the following matrix equation:
\begin{eqnarray}
i(\mathbf{S}\otimes\mathbb{1}_{\ell})\frac{d\mathbf{c}(t)}{dt}=\left[\mathbf{H}_{0}^{\ell}\otimes\mathbb{1}_{\ell}
-iA(t)\mathbf{W}_I\right]\mathbf{c}(t),\label{tdse:matrix}
\end{eqnarray}
where $\mathbb{1}_{\ell}$ is the identity matrix in $\ell$-space and
\begin{eqnarray}
\mathbf{c}(t) &=&
\big[(c_{2,0},\dots, c_{N-1,0}),(c_{2,1},\dots, c_{N-1,1}),\nonumber\\ &~&\dots,(c_{2,L-1},\dots, c_{N-1,L-1})\big]^{T},
\end{eqnarray}
is a time-dependent vector. The matrices $\mathbf{S}$ and $\mathbf{H}_{0}^{\ell}$ are diagonal in $\ell$-space while the matrix
$\mathbf{W}_{I}$ couples the $\ell$-block to the $(\ell-1)$- and $(\ell+1)$-blocks:
\begin{eqnarray}
( \mathbf{W}_{I})_{ij}^{\ell'\ell} &=& (\mathbf{Q})_{ij}\left[\ell c_{\ell-1}^{m_0}
\delta_{\ell',\ell-1} -
(\ell+1)c_{\ell}^{m_0}\delta_{\ell',\ell+1}\right]
\nonumber\\&& +(\mathbf{P})_{ij} \left[c_{\ell-1}^{m_0}\delta_{\ell',\ell-1} + c_{\ell}^{m_0}\delta_{\ell',\ell+1}\right],
\end{eqnarray}
where
\begin{eqnarray}
c_\ell^{m_0}&=&\sqrt{\frac{(\ell+1)^{2}-m_0^2}{(2\ell+1)(2\ell+3)}},\\
(\mathbf{Q})_{ij}&=&\int_{0}^{r_{\max}}\frac{B_{i}(r)B_{j}(r)}{r}dr,\\
(\mathbf{P})_{ij}&=&\int_{0}^{r_{\max}}B_{i}(r) \frac{dB_{j}(r)}{dr}dr.
\end{eqnarray}
Since the matrix $\mathbf{W}_I$ couples only neighboring $\ell$-blocks, it can be decomposed into a sum of mutually commuting matrices
\begin{eqnarray}
\mathbf{W}_I = \sum_{\ell=0}^{L-2}\left(\mathbf{P}\otimes\mathbf{L}_{\ell m_0} + \mathbf{Q}\otimes\mathbf{T}_{\ell m_0}\right),
\end{eqnarray}
where
\begin{eqnarray}
\mathbf{L}_{\ell m_0}&=&c_{\ell}^{m_0}\left(\begin{array}{cc}
0 & 1\\
1 & 0
\end{array}\right),\\
\mathbf{T}_{\ell m_0} &=&
(\ell+1)c_{\ell}^{m_0}
\left(\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array}\right),
\end{eqnarray}
are effectively $2\times2$ matrices acting upon the vector $[\mathbf{c}_{\ell}, \mathbf{c}_{\ell+1}]^{T}=[(c_{2,\ell},\dots, c_{N-1,\ell}),(c_{2,\ell+1},\dots, c_{N-1,\ell+1})]^{T}$.
The formal solution of the matrix equation (\ref{tdse:matrix}) can be written as
\begin{eqnarray}
\mathbf{c}(t+\Delta t) &=& \exp\bigg\{-i(\mathbf{S}^{-1} \otimes\mathbb{1}_{\ell})\nonumber\\ &~&\times \int_{t}^{t+\Delta t}
\left[\mathbf{H}_{0}\otimes\mathbb{1}_{\ell} - iA(t')\mathbf{W}_{I}\right]dt'\bigg\} \mathbf{c}(t).\nonumber\\
\end{eqnarray}
The evolution of the initial wave function is described by the same numerical recipe as in \cite{qprop}, but without using finite-difference
expressions. Our final expression for this time evolution is
\begin{eqnarray}
\displaystyle\mathbf{c}(t+\Delta t) &=& \prod_{\ell=L-2}^{0}\Bigg[ \frac{\mathbf{S}\otimes\mathbb{1}_{\ell}-
\frac{\Delta t}{4}A(t+\Delta t)\mathbf{P}\otimes \mathbf{L}_{\ell m_0}}
{\mathbf{S}\otimes\mathbb{1}_{\ell}+\frac{\Delta t}{4}A(t+\Delta t)\mathbf{P}\otimes \mathbf{L}_{\ell m_0}}\nonumber\\
&~&\times \frac{\mathbf{S}\otimes\mathbb{1}_{\ell}- \frac{\Delta t}{4}A(t+\Delta t)\mathbf{Q}\otimes\mathbf{T}_{\ell m_0}}
{\mathbf{S}\otimes\mathbb{1}_{\ell}+\frac{\Delta t}{4}A(t+\Delta t) \mathbf{Q}\otimes \mathbf{T}_{\ell m_0}} \Bigg]
\nonumber\\ &~&\times \prod_{\ell=0}^{L-1} \frac{(\mathbf{S}-i\frac{\Delta t}{2}\mathbf{H}_{0}^{\ell})\otimes\mathbb{1}_{\ell}}
{(\mathbf{S}+i\frac{\Delta t}{2}\mathbf{H}_{0}^{\ell})\otimes\mathbb{1}_{\ell}} \nonumber \\
&~& \times \prod_{\ell=0}^{L-2} \Bigg[ \frac{\mathbf{S}\otimes\mathbb{1}_{\ell}-\frac{\Delta t}{4}A(t)\mathbf{Q}\otimes \mathbf{T}_{\ell m_0}}{\mathbf{S}\otimes\mathbb{1}_{\ell}
+ \frac{\Delta t}{4}A(t)\mathbf{Q}\otimes \mathbf{T}_{\ell m_0}}\nonumber\\&~&\times
\frac{\mathbf{S}\otimes\mathbb{1}_{\ell}-\frac{\Delta t}{4}A(t)\mathbf{P}\otimes \mathbf{L}_{\ell m_0}}
{\mathbf{S}\otimes\mathbb{1}_{\ell}+\frac{\Delta t}{4}A(t)\mathbf{P}\otimes \mathbf{L}_{\ell m_0}}
\Bigg]\mathbf{c}(t).
\end{eqnarray}
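Each factor above has the Cayley (Crank--Nicolson) form
$(\mathbf{M}+z\mathbf{X})^{-1}(\mathbf{M}-z\mathbf{X})$, so a propagation
step reduces to a sequence of linear solves. A minimal Python sketch of one
such factor (ours; it assumes SciPy sparse matrices):
\begin{verbatim}
from scipy.sparse.linalg import spsolve

def cayley_factor(M, X, z, c):
    # Apply (M + z X)^{-1} (M - z X) to the coefficient vector c.
    # Field-free blocks: M = S,       X = H0^ell,           z = 1j*dt/2.
    # Coupling blocks:   M = S (x) 1, X = P (x) L_{l,m0} or
    #                                 X = Q (x) T_{l,m0},   z = dt/4 * A(t).
    return spsolve((M + z * X).tocsc(), (M - z * X) @ c)
\end{verbatim}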
\subsection{Extracting the photoelectron spectra from the time-dependent wave function}
The photoelectron spectra can be extracted from the time-dependent wave function $\Psi(\mathbf{r}, t)$ at the end of the laser pulse by projecting it
onto the continuum states having the momentum $\mathbf{k}=(k,\Omega_\mathbf{k})$, $\Omega_\mathbf{k}\equiv(\theta_\mathbf{k},\varphi_\mathbf{k})$. These continuum states are
solutions of the stationary Schr\"{o}dinger equation for an electron moving in a spherically symmetric potential $V(r)$. There are two
linearly independent continuum states labeled $\Phi_{\mathbf{k}}^{(+)}(\mathbf{r})$ and $ \Phi_{\mathbf{k}}^{(-)}(\mathbf{r})$, which satisfy
different boundary conditions at large distance from the atomic target:
\begin{equation}
\Phi_{\mathbf{k}}^{(\pm)}(\mathbf{r})\xrightarrow{r\to\infty} (2\pi)^{-3/2}\left(e^{i\mathbf{k} \cdot \mathbf{r}}
+f^{(\pm)}(\theta_\mathbf{k})\frac{e^{\pm ikr}}{r}\right),
\end{equation}
where $f^{(\pm)}(\theta_\mathbf{k})$ is the usual scattering amplitude. The solutions $\Phi_{\mathbf{k}}^{(+)}(\mathbf{r})$ represent continuum states that
obey the so-called outgoing boundary condition whereas the solutions
$\Phi_{\mathbf{k}}^{(-)}(\mathbf{r})$ represent continuum states that obey the so-called incoming
boundary condition. The difference between these two continuum states becomes manifest in the time dependence of their corresponding wave
packets as shown in \cite{roman}. Here we only give the main result. Namely, a long time after the interaction with the target, the
continuum states $\Phi_{\mathbf{k}}^{(+)}(\mathbf{r})$ and $\Phi_{\mathbf{k}}^{(-)}(\mathbf{r})$ behave as follows:
\begin{eqnarray}
\Phi_{\mathbf{k}}^{(+)}(\mathbf{r},t)&\xrightarrow{t\to\infty}& (2\pi)^{-3/2} e^{i(\mathbf{k} \cdot \mathbf{r}-E_\mathbf{k} t)}
+ \text{a scattering wave},\nonumber \\ \Phi_{\mathbf{k}}^{(-)}(\mathbf{r},t)&\xrightarrow{t\to\infty}& (2\pi)^{-3/2} e^{i(\mathbf{k} \cdot \mathbf{r}-E_\mathbf{k} t)}.
\end{eqnarray}
In an ionization experiment, the electron liberated by ionization winds up in a quantum state having linear momentum
$\mathbf{k}$. Therefore, the continuum state $\Phi_{\mathbf{k}}^{(-)}(\mathbf{r})$ is suitable for describing an ionization experiment while the continuum
state $\Phi_{\mathbf{k}}^{(+)}(\mathbf{r})$ is employed for a collision experiment. For more detailed analysis and discussion, see \cite{starace}.
Both continuum states can be written as partial wave expansions:
\begin{equation}
\Phi_{\mathbf{k}}^{(\pm)}(\mathbf{r}) = \sqrt{\frac{2}{\pi}}\frac{1}{k}\sum_{\ell,m} i^{\ell}e^{\pm i\Delta_{\ell}}\frac{u_{\ell}(k,r)}{r}
Y_{\ell}^{m}(\Omega)Y_{\ell}^{m*}(\Omega_\mathbf{k}),\label{cont_st}
\end{equation}
where $\Delta_{\ell}$ is the scattering phase shift of the $\ell$th partial wave. The radial function $u_{\ell}(k,r)$ is a solution of the radial
Schr\"{o}dinger equation (\ref{tdse:rad}) for fixed orbital quantum number and kinetic energy $E_\mathbf{k}=k^{2}/2$. The continuum states
(\ref{cont_st}) are normalized on the momentum scale, i.e., $\langle \Phi_{\mathbf{k}'}^{(\pm)}| \Phi_{\mathbf{k}}^{(\pm)}\rangle = \delta(\mathbf{k}'-\mathbf{k})$.
For the pure Coulomb potential $V(r) = -Z/r$, the scattering phase shift $\Delta_\ell$ is equal to the Coulomb phase shift
$\sigma_\ell=\arg\Gamma(\ell+1 + i \eta)$, with $\eta = -Z/k$ the Sommerfeld parameter. The radial function $u_\ell(k,r)$ is given by
the regular Coulomb function $u_\ell(k,r)=F_\ell(\eta,kr)$, which is known in analytical form.
Coulomb functions $F_{\ell}(\eta,kr)$ and corresponding
phase shifts $\sigma_{\ell}$ are calculated using a subroutine from \cite{peng1}.
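Alternatively, $\sigma_\ell$ can be evaluated directly from the log-gamma
function, as in the following one-line Python sketch (ours):
\begin{verbatim}
from scipy.special import loggamma

def coulomb_phase(ell, Z, k):
    # sigma_ell = arg Gamma(ell + 1 + i eta) with eta = -Z/k,
    # taken as the imaginary part of log Gamma.
    return loggamma(ell + 1 - 1j * Z / k).imag
\end{verbatim}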
For the modified Coulomb potential
\begin{equation}
V(r) = -\frac{Z}{r} + V_{s}(r),
\end{equation}
the scattering phase shift $\Delta_\ell$ is the sum of the Coulomb phase shift $\sigma_\ell$ and the phase shift $\hat{\delta}_\ell$ due to the
presence of the short-range potential $V_s(r)$. In this case, the radial equation is solved numerically by the Numerov method in the interval
$r\in [0, r_0]$, where $r_0$ is the chosen size of the spherical box, and the phase shift $\hat{\delta}_\ell$ is obtained by matching the
numerical solution $u_\ell(k,r)$ to the known asymptotic solution \cite{joachain}:
\begin{equation}
\mathcal{N}u_{\ell}(k, r) = \cos\hat{\delta}_{\ell}F_{\ell}(\eta,kr) + \sin\hat{\delta}_{\ell}G_{\ell}(\eta,kr), \label{matching}
\end{equation}
where $G_{\ell}(\eta,kr)$ is the irregular Coulomb function and $\mathcal{N}$ is a normalization constant. To avoid having to calculate
derivatives, the phase shift $\hat{\delta}_\ell$ is obtained by matching at two different points $r_1$ and $r_2$ close to the boundary $r_0$:
\begin{equation}
\tan\hat{\delta}_{\ell} = \frac{\kappa F_{\ell}(\eta,kr_{2}) - F_{\ell}(\eta, kr_{1})} {G_{\ell}(\eta,kr_{1}) -\kappa G_{\ell}(\eta, kr_{2})},\quad
\kappa = \frac{u_{\ell}(k,r_{1})}{u_{\ell}(k, r_{2})}.
\end{equation}
For a pure short-range potential $V(r) = V_{s}(r)$~($\eta=0$), the Coulomb functions $F_{\ell}(\eta,kr)$ and
$G_{\ell}(\eta,kr)$ must be replaced by the spherical Bessel function $j_{\ell}(kr)$ and the spherical Neumann function $n_{\ell}(kr)$:
\begin{equation}
F_{\ell}(0,kr) = krj_{\ell}(kr), \quad G_{\ell}(0,kr) = -krn_{\ell}(kr).
\end{equation}
The spherical Bessel and Neumann functions and the Coulomb functions are calculated using a subroutine from \cite{coul90}.
After obtaining the phase shift $\hat{\delta}_{\ell}$, the numerical solution $u_{\ell}(k,r)$ is normalized according to (\ref{matching}).
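As an illustration of the two-point matching, the following Python sketch
(ours; written for the pure short-range case $\eta=0$, where the Coulomb
functions reduce to spherical Bessel functions) computes $\hat{\delta}_\ell$
from the values of the numerical solution at $r_1$ and $r_2$:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def short_range_phase_shift(u1, u2, r1, r2, k, ell):
    # Two-point matching for eta = 0: F_l(0,kr) = kr j_l(kr) and
    # G_l(0,kr) = -kr y_l(kr); u1, u2 are the values of the numerical
    # radial solution at r1 and r2.
    F = lambda r: k * r * spherical_jn(ell, k * r)
    G = lambda r: -k * r * spherical_yn(ell, k * r)
    kappa = u1 / u2
    return np.arctan((kappa * F(r2) - F(r1)) / (G(r1) - kappa * G(r2)))
\end{verbatim}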
The probability of finding the electron at the end of the laser pulse in a continuum state with the
momentum $\mathbf{k} = (k,\Omega_{\mathbf{k}})$ is given by
\begin{equation}
P(k, \Omega_{\mathbf{k}}) = \frac{d^{3}P}{k^{2} dk d\Omega_{\mathbf{k}}} = \left|\langle \Phi_{\mathbf{k}}^{(-)} | \Psi(T_{p})\rangle\right|^{2}.\label{pad_1}
\end{equation}
Inserting (\ref{cont_st}) and (\ref{tdse:expan}) into (\ref{pad_1}) we obtain the expression
\begin{equation}
P(k, \Omega_{\mathbf{k}})= \frac{2}{\pi}\frac{1}{k^{2}}\Big|\sum_{i,\ell}c_{i\ell}(T_{p})(-i)^{\ell}e^{i\Delta_{\ell}}
Y_{\ell}^{m_0}(\Omega_{\mathbf{k}})I_{i\ell}(k)\Big|^{2}, \label{prob}
\end{equation}
where we have introduced the integral
\begin{eqnarray}
I_{i\ell}(k)&=& \int_{0}^{r_{0}}u_{\ell}(k,r)B_{i}(r)dr
+\int_{r_{0}}^{r_{\max}}\Big[\cos\hat{\delta}_{\ell}F_{\ell}(\eta,kr)
\nonumber\\ &~& + \sin\hat{\delta}_{\ell}G_{\ell}(\eta,kr)\Big] B_{i}(r)dr.
\end{eqnarray}
The photoelectron angular distribution (PAD), i.e., the probability $P(E_\mathbf{k},\theta_\mathbf{k})$ of detecting the electron with kinetic
energy $E_\mathbf{k}$ emitted in the direction $\theta_\mathbf{k}$, is given by replacing $k = \sqrt{2E_\mathbf{k}}$ in (\ref{pad_1}) and integrating over $\varphi_\mathbf{k}$:
\begin{eqnarray}
P(E_\mathbf{k},\theta_\mathbf{k})&=&\frac{d^2P}{\sin\theta_\mathbf{k} dE_\mathbf{k} d\theta_\mathbf{k}}\nonumber\\ &=&\frac{1}{\pi\sqrt{2E_\mathbf{k}}}\Big|\sum_{i,\ell}c_{i\ell}(T_p)(-i)^{\ell}
e^{i\Delta_{\ell}}\nonumber\\&~&\times\sqrt{2\ell+1}P_{\ell}^{m_0}(\cos\theta_\mathbf{k})I_{i\ell}(k)\Big|^{2},\label{pad_2}
\end{eqnarray}
where $P_{\ell}^{m_0}(\cos\theta_\mathbf{k})$ are associated Legendre polynomials.
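Once the amplitudes $a_\ell(k)=\sum_i c_{i\ell}(T_p)I_{i\ell}(k)$ and the
phase shifts $\Delta_\ell$ are available, evaluating the PAD is a short
partial-wave sum, as in the following Python sketch (ours; $m_0=0$ is
assumed, so that $P_\ell^{m_0}=P_\ell$, and $a_\ell$ is precomputed):
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def pad(a_l, Delta_l, k, theta):
    # a_l[l] = sum_i c_{il}(T_p) I_{il}(k); Delta_l[l] = phase shifts.
    amp = np.zeros_like(theta, dtype=complex)
    for ell in range(len(a_l)):
        amp += (a_l[ell] * (-1j)**ell * np.exp(1j * Delta_l[ell])
                * np.sqrt(2 * ell + 1) * eval_legendre(ell, np.cos(theta)))
    return np.abs(amp)**2 / (np.pi * k)   # 1/(pi sqrt(2 E_k)) = 1/(pi k)
\end{verbatim}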
\subsection{Window-operator method}
Obtaining the photoelectron angular distribution by projecting onto continuum states can be a challenging task since the continuum states are
highly oscillatory functions. Therefore, the numerical integration has to be done with high precision and stability in order to obtain
photoelectron spectra that are accurate over several orders of magnitude. This is especially true for non-Coulomb potentials since in this case the continuum states
must be obtained numerically. In this section we present the implementation of the WO method, which can be used for the extraction of the PES
without the need to calculate the continuum states.
The WO method is based on the projection operator $W_{\gamma}(E_\mathbf{k})$ defined by
\begin{equation}
W_{\gamma}(E_\mathbf{k}) = \frac{\gamma^{2^{n}}}{(H_{0}-E_\mathbf{k})^{2^{n}} + \gamma^{2^{n}}},\label{wo}
\end{equation}
which extracts the component $|\chi_{\gamma}(E_\mathbf{k})\rangle$ of the final wave vector $|\Psi(T_{p})\rangle$ that contributes to energies
within the bin of the width $2\gamma$, centered at $E_\mathbf{k}$:
\begin{equation}
|\chi_{\gamma}(E_\mathbf{k})\rangle = W_{\gamma}(E_\mathbf{k})|\Psi(T_{p})\rangle.\label{wo_eq}
\end{equation}
We set $n=3$ and expand the wave vector into the basis (\ref{tdse:expan}):
\begin{equation}
\chi_{\gamma}(E_\mathbf{k}, r,\Omega) = \sum_{i=2}^{N-1}\sum_ {\ell=0}^{L-1}b_{i\ell}^{(\gamma)}(E_\mathbf{k})\frac{B_{i}(r)}{r}Y_{\ell}^{m_0}(\Omega).
\end{equation}
To obtain the coefficients $b_{i\ell}^{(\gamma)}(E_\mathbf{k})$ we solve Eq.~(\ref{wo_eq}) by factorizing (\ref{wo}) \cite{qprop} and transforming it into a series of matrix equations:
\begin{eqnarray}
&~& \mathbb{1}_{\ell}\otimes\left[\mathbf{H}_{0}^{\ell}-\mathbf{S}(E_\mathbf{k}-\gamma e^{i\nu_{34}})\right]
\left[\mathbf{H}_0^\ell-\mathbf{S}(E_\mathbf{k}+\gamma e^{i\nu_{34}})\right]\mathbf{b}_{1}^{(\gamma)} \nonumber\\
&~& = \gamma^{2^{3}}\mathbb{1}_{\ell}\otimes\mathbf{S}\mathbf{c}(T_{p}),\nonumber\\
&~& \mathbb{1}_{\ell}\otimes\left[\mathbf{H}_{0}^{\ell}-\mathbf{S}(E_\mathbf{k}-\gamma e^{i\nu_{33}})\right]
\left[\mathbf{H}_{0}^{\ell}-\mathbf{S}(E_\mathbf{k}+\gamma e^{i\nu_{33}})\right]\mathbf{b}_{2}^{(\gamma)} \nonumber\\
&~& = \mathbb{1}_{\ell}\otimes\mathbf{S}\mathbf{b}_{1}^{(\gamma)},\nonumber\\
&~& \mathbb{1}_{\ell}\otimes\left[\mathbf{H}_{0}^{\ell}-\mathbf{S}(E_\mathbf{k}-\gamma e^{i\nu_{32}})\right]
\left[\mathbf{H}_{0}^{\ell}-\mathbf{S}(E_\mathbf{k}+\gamma e^{i\nu_{32}})\right]\mathbf{b}_{3}^{(\gamma)} \nonumber\\
&~& = \mathbb{1}_{\ell}\otimes\mathbf{S}\mathbf{b}_{2}^{(\gamma)},\nonumber\\
&~& \mathbb{1}_{\ell}\otimes\left[\mathbf{H}_{0}^{\ell}-\mathbf{S}(E_\mathbf{k}-\gamma e^{i\nu_{31}})\right]
\left[\mathbf{H}_{0}^{\ell}-\mathbf{S}(E_\mathbf{k}+\gamma e^{i\nu_{31}})\right]\mathbf{b}^{(\gamma)} \nonumber\\
&~& = \mathbb{1}_{\ell}\otimes\mathbf{S}\mathbf{b}_{3}^{(\gamma)},
\end{eqnarray}
where $\nu_{3j}= (2j-1)\pi/2^{3}$. After obtaining $\mathbf{b}^{(\gamma)}$, the probability of finding the electron with the energy $E_\mathbf{k}$ is calculated as
\begin{eqnarray}
P_{\gamma}(E_\mathbf{k}) &=& \int dV \chi_{\gamma}^{*}(E_\mathbf{k},r, \Omega)\chi_{\gamma}(E_\mathbf{k},r, \Omega)\nonumber\\
&=&\int d\Omega dr P_{\gamma}(E_\mathbf{k}, r,\Omega),
\end{eqnarray}
where
\begin{eqnarray}
P_{\gamma}(E_\mathbf{k}, r,\Omega) =
\Bigg| \sum_{i=2}^{N-1}\sum_{\ell=0}^{L-1}
b_{i\ell}^{(\gamma)}(E_\mathbf{k})B_{i}(r)Y_{\ell}^{m_0}(\Omega) \Bigg|^{2}.
\end{eqnarray}
Now we make the assumption that the solid-angle element $d\Omega$ in position space is approximately
equal to the solid-angle element $d\Omega_\mathbf{k}$ in momentum space (for details, see \cite{deGruyter}). This means that information about the probability distribution in energy and in angle
is obtained by integrating $P_{\gamma}(E_\mathbf{k}, r,\Omega_{\mathbf{k}})\approx P_{\gamma}(E_\mathbf{k}, r,\Omega)$ over the radial coordinate.
In
this case we define the probability $P_{\gamma}(E_\mathbf{k},
\Omega_{\mathbf{k}}) = P_{\gamma}(E_\mathbf{k}, \theta_\mathbf{k} )/(2\pi)$ which is equal, up
to a constant factor, to the PAD, Eq.~(\ref{pad_2}).
\section{Results and Discussion}
\label{sec:results}
\begin{figure}[b!]
\vspace{1.3cm}
\centering
\includegraphics[scale=0.45]{wop_pm_f-.eps}
\caption{The differential detachment probabilities of F$^-$ ions for emission of electrons
in the directions $\theta_\mathbf{k}=0^{\circ}$, $90^{\circ}$, and $180^{\circ}$, as functions
of the photoelectron energy in units of the ponderomotive energy $U_p$, for the following
laser-field parameters: $I=1.3\times 10^{13}~\text{W}/\text{cm}^{2}$,
$\lambda =1800~\text{nm}$, and $N_c=6$. The results are obtained
by projecting the time-dependent wave function $\Psi(T_{p})$ onto the
$\Phi_{\mathbf{k}}^{(-)}$ states (black solid line) and $\Phi_{\mathbf{k}}^{(+)}$ states
(green dot-dashed line) and using the WO method with $\gamma = 2\times 10^{-3}$
(red dashed line).} \label{results:f-_pad_vs_wop}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.45]{pad_f-.ps}
\caption{Full PADs for the same parameters as in Fig.~\ref{results:f-_pad_vs_wop}. The upper panel shows the PAD obtained
by projecting onto the continuum states $\Phi_\mathbf{k}^{(-)}$ while the lower panel shows the PAD obtained by the WO method. The WO method gives
additional structure for angles $\theta_\mathbf{k} \in (30^{\circ}, 150^{\circ})$ and energies $E_\mathbf{k}>3U_p$. }\label{results:f-_pad_full}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.45]{h_pad_vs_wop.eps}
\caption{The differential ionization probabilities of H atoms for emission of
electrons in the directions $\theta_\mathbf{k}=0^{\circ}$, $90^{\circ}$, and $180^{\circ}$, as
functions of the photoelectron energy in units of the ponderomotive energy $U_p$, for the
following laser-field parameters: $I=10^{14}~\text{W}/\text{cm}^{2}$,
$\lambda =800~\text{nm}$, and $N_c=6$. The results are obtained by projecting
the time-dependent wave function $\Psi(T_p)$ onto the $\Phi_{\mathbf{k}}^{(-)}$ states (black solid line)
and the $\Phi_{\mathbf{k}}^{(+)}$ states (green dot-dashed line) and by
using the WO method with $\gamma=6\times 10^{-3}$ (red dashed line).}\label{results:h_pad_vs_wop}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.45]{pad_h.ps}
\caption{Full PADs for the H atom and laser-field parameters as in
Fig.~\ref{results:h_pad_vs_wop}. The upper panel shows the PAD obtained by projecting
onto the Coulomb wave for the free particle and the lower panel shows the PAD obtained by the WO method. The WO method gives additional
interference structures for angles $\theta_\mathbf{k} \in (30^{\circ}, 150^{\circ})$ and $E_\mathbf{k}>4U_p$.}\label{results:h_pad_full}
\end{figure}
In this section we present the results for the PES obtained by the methods discussed in the previous section. We begin by comparing the spectra obtained
using the PCS and WO methods for a short-range potential. As the target we use the fluorine
negative ion $\mathrm{F}^{-}$. Within the SAE approximation we model the corresponding potential by the Green-Sellin-Zachor potential
with a polarization correction included \cite{GSZpot}:
\begin{equation}
V(r) = -\frac{Z}{r\left[1+H\left(e^{r/D}-1\right)\right]}-\frac{\alpha}{2\left(r^2 + r_p^2\right)^{3/2}},
\end{equation}
with $Z=9$, $D=0.6708$, $H=1.6011$, $\alpha=2.002$, and $r_{p}=1.5906$. The $2p$ ground state of F$^-$ has an electron affinity
of $I_p=3.404~\text{eV}$. In Fig.~\ref{results:f-_pad_vs_wop} we present the results for PAD in the directions $\theta_\mathbf{k}=0^{\circ}$,
$90^{\circ}$, and $180^{\circ}$, obtained by projecting the time-dependent wave function $\Psi(T_p)$ onto continuum states
satisfying incoming boundary condition (black solid line), outgoing boundary condition (green dot-dashed line),
and using the WO method with $\gamma = 2\times 10^{-3}$ (red dashed line) for
the laser-field parameters $I=1.3\times10^{13}~\text{W}/\text{cm}^{2}$, $\lambda =1800~\text{nm}$, and $N_c=6$.
The photoelectron energy is given in units of the ponderomotive energy $U_p=E_0^2/(4\omega^2)$. The TDSE is solved within a spherical box of
the size $r_{\max}=2200~\text{a.u.}$ with the time step $\Delta t = 0.1~\text{a.u.}$ To achieve convergence we used
$L=40$ partial waves with $N=5000$ B-spline functions. The convergence was checked with respect to the variation of all these
parameters. The continuum states were obtained numerically in a spherical box of the size $r_0=30~\text{a.u.}$ To allow for the best
visual comparison, the WO spectra were multiplied by a constant factor so that optimal overlap is achieved with the
PAD given by Eq.~(\ref{pad_2}). We notice that for $\theta_\mathbf{k}=0^{\circ}$ and $\theta_\mathbf{k}=180^{\circ}$ these two methods produce
almost identical photoelectron spectra, in contrast to the spectrum in the perpendicular direction with respect to the polarization axis,
i.e., for $\theta_\mathbf{k}=90^{\circ}$, where we notice a significant difference.
The WO method gives a large plateau-like annex, which extends
approximately up to $9U_p$, whereas the PAD obtained by projection onto the $\Phi_\mathbf{k}^{(-)}$ states drops very quickly beyond $2U_p$.
The results obtained projecting onto the states $\Phi_\mathbf{k}^{(+)}$ exhibit almost the same plateau-like annex. We will discuss this later.
We notice here (and will again in the subsequent figures) that the calculated spectra do not exhibit backward-forward symmetry. This is due
to the rather short pulse duration (recall $N_c = 6$); it can nicely be
explained in terms of quantum orbits \cite{fewcyclerapid, rescTR}.
In Fig.~\ref{results:f-_pad_full} we present logarithmically scaled full PADs obtained either by projecting
on the states $\Phi_\mathbf{k}^{(-)}$ (upper panel) or by the WO method (lower panel). Both spectra have been
normalized to unity and the color map covers seven orders of magnitude.
As we can see, for small and very large angles,
these two methods produce almost identical interference structures in the PADs. However, there is a substantial difference between
the two PADs in the angular range $\theta_\mathbf{k} \in (25^{\circ}, 150^{\circ})$ for $E_\mathbf{k}>3U_p$.
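As a quick consistency check of this energy scale, a few lines of Python
(our own arithmetic) reproduce $U_p\approx 0.145$~a.u.\ $\approx 3.9$~eV for
these parameters:
\begin{verbatim}
I, lam = 1.3e13, 1800.0            # W/cm^2, nm
E0 = (I / 3.509e16)**0.5           # ~0.0192 a.u.
omega = 45.5633 / lam              # ~0.0253 a.u.
Up = E0**2 / (4 * omega**2)        # ~0.145 a.u. ~ 3.9 eV
\end{verbatim}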
\begin{figure}[t!]
\centering
\includegraphics[scale=0.45]{wop_pm_ar.eps}
\caption{The differential ionization probabilities of the Ar atom for emission of
electrons in the directions $\theta_\mathbf{k}=0^{\circ}$, $90^{\circ}$, and $180^{\circ}$, as
functions of the photoelectron energy in units of the ponderomotive energy $U_p$, for the
following laser-field parameters: $I=8\times 10^{13}~\text{W}/\text{cm}^{2}$,
$\lambda =800~\text{nm}$, and $N_c=6$. The results are obtained
by projecting the time-dependent wave function $\Psi(T_{p})$ onto the
$\Phi_{\mathbf{k}}^{(-)}$ states (black solid line) and $\Phi_{\mathbf{k}}^{(+)}$ states
(green dot-dashed line) and by using the WO method with $\gamma = 6\times 10^{-3}$ (red dashed line).}\label{results:ar_pad_vs_wop}
\end{figure}
Next we investigate the PAD for the hydrogen atom with its pure Coulomb potential. In Fig.~\ref{results:h_pad_vs_wop} we show the PES
for $I=10^{14}~\text{W}/\text{cm}^{2}$, $\lambda =800~\text{nm}$, and $N_c=6$. The initial state is $1s$ ($I_{p}=13.605~\text{eV}$).
The TDSE is solved in a spherical box of the size $r_{\max}=2200~\text{a.u.}$ using $L=40$ partial
waves and $N=5000$ B-spline functions. The time step is set to $\Delta t = 0.1~\text{a.u.}$ The spectra obtained using the WO method
are calculated with $\gamma = 6\times 10^{-3}$. Again, we see that the WO method as well as PCS on outgoing-boundary-condition states give a plateau-like annex in the perpendicular
direction, which is absent from the PAD obtained by projecting onto the Coulomb wave (the state $\Phi_\mathbf{k}^{(-)}$). The same conclusion can be
obtained by comparing the full PADs, normalized to unity and presented in Fig.~\ref{results:h_pad_full}. In the lower panel the
PAD obtained using the WO method clearly shows additional interference structures just as in the case of $\mathrm{F}^{-}$ ions.
As the last example we use a modified Coulomb potential to model the $3p$ state of the argon atom in the SAE approximation.
This potential is given by \cite{tong}
\begin{equation}
V(r) = -\frac{1+a_{1}e^{-a_{2}r}+a_{3}re^{-a_{4}r}+ a_{5}e^{-a_{6}r}}{r},\label{TongPot}
\end{equation}
with $a_{1}=16.039$, $a_{2}=2.007$, $a_{3}=-25.543$, $a_{4}=4.525$, $a_{5}=0.961$, and $a_{6}=0.443$. Using the potential (\ref{TongPot}) we calculated
the ionization potential of the $3p$ state and obtained $I_{p}=15.774~\text{eV}$. The TDSE is solved within a spherical box of the size
$r_{\max}=1800~\text{a.u.}$ with the time step $\Delta t = 0.05~\text{a.u.}$ Convergence is achieved with $L=40$ partial waves
and $N=6000$ B-spline functions. The continuum states are calculated within a spherical box of the size $r_{0}=30~\text{a.u.}$ We used the
laser-field parameters $I=8\times10^{13}~\text{W}/\text{cm}^{2}$, $\lambda =800~\text{nm}$, and $N_c=6$.
The results for $\theta_\mathbf{k}=0^{\circ}$, $90^{\circ}$, and $180^{\circ}$ are presented in Fig.~\ref{results:ar_pad_vs_wop}.
For $\theta_\mathbf{k}=90^{\circ}$ we again notice a plateau-like structure in the spectrum obtained by the WO method and by projecting on the states $\Phi_{\mathbf{k}}^{(+)}$. This is also visible
from the full PADs presented in Fig.~\ref{results:ar_pad_full}.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.45]{pad_ar.ps}
\caption{Full PADs for the Ar atom and the same laser-field parameters as in Fig.~\ref{results:ar_pad_vs_wop}. The upper panel shows the PAD
obtained by projecting onto the continuum states $\Phi_\mathbf{k}^{(-)}$ and the lower panel shows the PAD obtained by the WO method. The WO method gives
additional structure for angles $\theta_\mathbf{k} \in (30^{\circ}, 150^{\circ})$ and $E_\mathbf{k}>4U_p$.} \label{results:ar_pad_full}
\end{figure}
From all these examples we can conclude that this plateau-like structure observed
at large angles is not caused by the nature of the spherical potential $V(r)$
but has a different origin. Let us now explain the discrepancy between the
spectra obtained by projection on the states $\Phi_{\mathbf{k}}^{(-)}$ on the one hand
and by projection on $\Phi_{\mathbf{k}}^{(+)}$ or by the WO method on the other, which we
noticed in all examples presented above.
As we have already discussed, the continuum states have to satisfy the incoming
boundary condition in order to properly describe the PES.
This boundary condition is automatically included in the continuum state (\ref{cont_st}) by the
phase factor $i^{\ell}e^{-i\Delta_{\ell}}$ for each partial wave. For a better understanding of the origin of the artificial
plateau-like annex that we see in the spectra obtained using the WO
method, in Figs.~\ref{results:f-_pad_vs_wop}, \ref{results:h_pad_vs_wop}, and
\ref{results:ar_pad_vs_wop} we have also presented the PADs
obtained projecting onto the continuum states $\Phi_{\mathbf{k}}^{(+)}(\mathbf{r})$.
As we can see, the PAD in the direction $\theta_\mathbf{k}=90^{\circ}$, calculated
using the wrong continuum states $\Phi_{\mathbf{k}}^{(+)}(\mathbf{r})$, gives the
same artificial plateau-like structures as the WO method.
Therefore, we conclude that the effect that we see in the PADs obtained by
the WO method is caused by the
boundary condition satisfied by the continuum states. Since this boundary
condition is not included or defined anywhere in the
WO method, the energy component $\chi_{\gamma}(E_\mathbf{k},r, \Omega)$ extracted
from the time-dependent wave function $\Psi(\mathbf{r},T_{p})$
is a mixture of the contributions from
the $\Phi_{\mathbf{k}}^{(-)}(\mathbf{r})$ and $\Phi_{\mathbf{k}}^{(+)}(\mathbf{r})$ continuum states. That is why we
see in the spectrum obtained by the WO method a plateau-like structure in the
perpendicular direction. Only the continuum states
$\Phi_{\mathbf{k}}^{(+)}(\mathbf{r})$ contribute to this spurious plateau.
It is worth noting that another consequence of taking the wrong boundary
condition is also visible in the spectrum in the direction
$\theta_\mathbf{k}=0^{\circ}$ for $\mathrm{Ar}$ (Fig.~\ref{results:ar_pad_vs_wop}). Namely, the
destructive interference at approximately $8.8U_p$
is far less pronounced in the spectrum obtained by the WO method than in the spectrum
obtained by projecting onto the states
$\Phi_{\mathbf{k}}^{(-)}(\mathbf{r})$. The reason is the interplay between the two
different contributions, one that comes from the continuum
state $\Phi_{\mathbf{k}}^{(+)}(\mathbf{r})$ and the other that comes from the $\Phi_{\mathbf{k}}^{(-)}(\mathbf{r})$
continuum state, which is smaller by a few orders of magnitude. We see the same feature
in the spectrum for $\mathrm{F}^{-}$ for $\theta_\mathbf{k}=0^{\circ}$ (Fig.~\ref{results:f-_pad_vs_wop})
at the kinetic energy just above $8U_p$, although it is less pronounced than in the Ar case.
Rescattering plateaus at angles substantially off the polarization direction of the laser field, like those calculated for the outgoing boundary conditions or by the WO method and exhibited in Figs.~\ref{results:f-_pad_vs_wop}--\ref{results:ar_pad_full}, are difficult to understand on physical grounds. All gross features observed so far in angle-dependent above-threshold-ionization spectra have been amenable to explanation in terms of the classical three-step scenario. However, this scenario does not allow for electron
energies perpendicular to the field direction in excess of about $2U_p$ \cite{rescTR,moeller14}. The reason is that within the three-step model the laser field exerts no force on the electron in the perpendicular direction. Hence, the perpendicular momentum has to come either from direct ionization or from rescattering. Direct ionization has a cutoff of about $2U_p$. High-energy rescattering requires that the electron return to its parent atom with high energy, and such an electron will invariably undergo additional longitudinal acceleration after the rescattering, so that its final momentum will not be at right angles to the field.
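For a rough quantitative check of the $2U_p$ limit we recall the standard simple-man estimate (a textbook result, quoted here for convenience): a directly ionized electron born at rest at time $t_0$ acquires the drift momentum $-\mathbf{A}(t_0)$, whose magnitude is bounded by $A_0=E_0/\omega$, so that in atomic units
\begin{equation}
E_{\max}=\frac{A_0^{2}}{2}=\frac{E_0^{2}}{2\omega^{2}}=2U_p,
\qquad U_p=\frac{E_0^{2}}{4\omega^{2}},
\end{equation}
which is precisely the direct-ionization cutoff quoted above.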
\section{Summary and conclusions}
\label{sec:sum}
We presented a method of solving the time-dependent Schr\"{o}dinger equation (within the SAE and dipole approximations) for an atom (or
a negative ion) bound by a spherically symmetric potential and exposed to a strong laser field, by expanding the time-dependent wave function
in a basis of B-spline functions and spherical harmonics and propagating it with an appropriate algorithm. The emphasis is on the
method of extracting the angle-resolved photoelectron spectra from the time-dependent wave function. This is done by projecting the
time-dependent wave function at the end of the laser pulse onto the continuum states $\Phi_\mathbf{k}$, which are solutions of the
Schr\"{o}dinger equation in the absence of the laser field (the PCS method).
In the context of strong-laser-field ionization, the photoelectrons having
the momentum $\mathbf{k}$ are observed at large distances ($r\rightarrow\infty$) in the positive time limit ($t\rightarrow +\infty$). Therefore,
it is the \textit{incoming (ingoing-wave)} solutions $\Phi_\mathbf{k}^{(-)}$ that are relevant. These solutions merge with the plane-wave solutions in the limit
$t\rightarrow +\infty$: $\Phi_\mathbf{k}^{(-)}(\mathbf{r},t)\rightarrow (2\pi)^{-3/2}e^{i(\mathbf{k}\cdot\mathbf{r} - E_\mathbf{k} t)}$.
We have also presented another method of extracting the photoelectron spectra from the TDSE solutions: the window-operator method.
The WO method extracts the part of the exact solution of the TDSE at the end of the laser pulse that lies within a small interval of energies near a fixed energy $E_\mathbf{k}$. The problem with this method is that it does not single out the
contribution of the solution $\Phi_\mathbf{k}^{(-)}$, but it includes an unknown linear superposition of the states $\Phi_\mathbf{k}^{(-)}$ and
$\Phi_\mathbf{k}^{(+)}$. Therefore, it can and does lead to unphysical results, depending on the considered region of the spectrum.
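As a toy illustration of this point (a minimal sketch of our own, not the implementation used in this work), consider a wave packet expanded in field-free eigenstates. An energy window of the Schafer--Kulander type weights each eigenstate only by its energy, so all states near $E_\mathbf{k}$ pass through it irrespective of their asymptotic character:
\begin{verbatim}
import numpy as np

def window_weights(energies, E_k, gamma, n=2):
    # Lorentzian-like window of order n:
    # gamma^(2^n) / ((E_i - E_k)^(2^n) + gamma^(2^n))
    p = 2**n
    return gamma**p / ((energies - E_k)**p + gamma**p)

rng = np.random.default_rng(0)
energies = np.sort(rng.uniform(0.0, 2.0, 500))      # mock continuum energies
c = rng.normal(size=500) + 1j*rng.normal(size=500)  # mock coefficients

w = window_weights(energies, E_k=1.0, gamma=0.005)
# States with |E_i - E_k| of order gamma pass the filter regardless of
# their boundary condition: the window selects energy only.
print("windowed probability:", np.sum(w * np.abs(c)**2))
\end{verbatim}
The sketch makes explicit that an energy filter alone cannot disentangle the degenerate $\Phi_\mathbf{k}^{(-)}$ and $\Phi_\mathbf{k}^{(+)}$ solutions.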
By comparing the results obtained using the exact PCS method with those obtained
using the WO method for various potentials $V(r)$ we concluded that the WO method fails for an interval of the electron
emission angles around the perpendicular direction (the angle $\theta=90^\circ$ with respect to the polarization axis of the linearly
polarized laser field). For $\theta=90^\circ$, the WO method gives a plateau-like structure, which extends up to energies
$E_\mathbf{k}\sim 9U_p$, while the spectra obtained using the exact PCS method drop very fast beyond $E_\mathbf{k}\sim 2-3U_p$. The full PADs show that
this unphysical structure in the spectra obtained using the WO method appears for angles $\theta_\mathbf{k} \in (30^{\circ}, 150^{\circ})$ and
energies $E_\mathbf{k}>4U_p$. Furthermore, for values of the angle $\theta_\mathbf{k}$ for which the results obtained using the PCS method
exhibit interference minima, the WO method smoothes out these minima, due to the spurious contribution of the states $\Phi_\mathbf{k}^{(+)}$.
We have checked our results using three different types of potentials $V(r)$: a short-range potential (F$^-$ ion), the pure Coulomb
potential (H atom), and a modified Coulomb potential (Ar atom).
Our conclusion is that the WO method is an
approximate method that can be used to extract the photoelectron spectrum. It should be used with care since it may
produce additional interference structures in the spectrum that have no physical significance. These additional
structures are a consequence of the wrong boundary conditions tacitly imposed onto the continuum states by the WO method. That is why every approximate method used
for calculating the photoelectron spectra should be tested against the exact method of projecting the time-dependent wave function onto
continuum states satisfying the incoming boundary condition.
\begin{acknowledgments}
We acknowledge support by the Alexander von Humboldt Foundation and by the Ministry for Education, Science and Youth
Canton Sarajevo, Bosnia and Herzegovina.
\end{acknowledgments}
\par
Motivated by the theoretical problems in the standard model (SM),
many extended models beyond the SM have been established. Among them the
large extra dimensions (LED) model proposed by Arkani-Hamed,
Dimopoulos, and Dvali in Ref.\cite{1} may be one of the promising
models which can solve the long-standing mass hierarchy problem.
This model used the idea of extra dimensions to bring gravity
effects from the Plank scale down to the electroweak scale. In the
LED model, the spacetime dimension is $D=4 + \delta$ with $\delta$
being the dimension of extra space, where the gravity and gauge
interactions are unified at one fundamental scale $M_S \sim TeV$
(the order of the electroweak scale). The graviton propagates in the
$D$-dimensional spacetime, while the SM particles exist only in the
usual ($3+1$)-dimensions.
\par
Taking into account the bad behavior of quantum gravity in the
ultraviolet (UV) region, it is expedient to construct a low-energy
effective theory to describe the gravity-gauge-matter system in the
current (3+1)-dimensional spacetime. In the phenomenological sense,
this can be achieved through the Kaluza-Klein (KK) reduction in the
brane-world scenario \cite{2}. After applying this treatment to the
LED model, a $D$-dimensional massless graviton can be perceived as a
tower of massive KK modes propagating in the (3+1)-dimensional
spacetime. It turns out that the weakness of gravitational coupling
to the SM particles, suppressed by $\overline{M}_P$ (the reduced
Planck scale $\overline{M}_P=\frac{M_P}{\sqrt{8\pi}}$), can be
compensated by summing over numerous KK states. This scenario can
result in distinct effects at the high-energy colliders \cite{3}. Up
to now, many studies of virtual KK graviton effects at
QCD next-to-leading order (NLO) in the LED model have
appeared. These include fermion-pair, multijet, and
vector-boson-pair production \cite{4,5,6}. In Ref.\cite{CMS-1} the CMS
Collaboration has performed a search for LED in the diphoton final
state events at the $\sqrt{s}= 7~TeV$ LHC with an integrated
luminosity of $36~pb^{-1}$. They set lower limits on the cutoff
scale $M_S$ in the range $1.6-2.3~TeV$ at the $95\%$ confidence
level. The dijet angular distribution results from the CMS and ATLAS
experiments have appeared in Ref.\cite{CMS-2} and provide even stronger
limits on $M_S$, i.e., $M_S > 3.4~TeV$ (CMS) and $M_S > 3.2~TeV$
(ATLAS). Recently, the production of a $W$-pair at hadronic colliders
in the LED model has been studied up to the QCD NLO by Neelima
Agarwal, {\it et al} \cite{7}.
\par
In this paper, we revisit the NLO QCD corrections to the $W$-pair
production process at the LHC in the framework of the LED model, and
improve upon the results of Ref.\cite{7} by including the effects of
top-quark mass and the contribution from the $b\bar{b}$-fusion channel.
We provide the LED effect discovery and exclusion regions, the
kinematical distributions up to NLO in QCD by taking into account
the subsequent $W$-boson leptonic decay. The rest of the paper is
organized as follows. In Sec. II, we briefly review the related
Feynman rules in the LED model. In Sec. III, the leading-order (LO) cross section for the \ppww process is described. In Sec. IV, we calculate the
NLO QCD corrections. In Sec. V, we present the numerical results
for the LO and NLO QCD corrected integrated cross section for the
$W$-pair production process and the distributions of final $W$-boson
decay products. Finally, a short summary is given.
\vskip 5mm
\section{ Related theories}
\label{related theories}
\par
The LED model consists of the pure gravity sector and the SM sector.
In this model the manifold, in which gravity propagates, is not the
ordinary four-dimensional spacetime manifold $\mathbb{R}^4$, but
$\mathbb{R}^4 \times {\cal M}$, where ${\cal M}$ is a compact
manifold of dimension $\delta$. For simplicity, one can tentatively
assume that ${\cal M}$ is a $\delta$-torus with radius $R$ and
volume $V_{\delta} = (2 \pi R)^{\delta}$ without loss of physical
significance.
\par
In our work we use the de Donder gauge. The Feynman rules for the
propagator of the spin-2 KK graviton and the relevant vertices which we
use are listed below. Here $G_{\rm KK}^{\mu \nu}$, $\psi$, $W^{\pm
\mu}$, $A^{a \mu}$, and $\eta^a$ represent the fields of the
graviton, quark, $W$-boson, gluon, and $SU(3)$ ghost, respectively.
\begin{itemize}
\item
$\textrm{spin-2 KK graviton propagator after summation over KK states}:$
\begin{eqnarray}
\tilde{G}_{\rm KK}^{\mu \nu \alpha \beta}=\frac{1}{2} D(s)
\left[\eta^{\mu \alpha} \eta^{\nu \beta} +
\eta^{\mu \beta} \eta^{\nu \alpha} - \frac{2}{D-2}\eta^{\mu \nu} \eta^{\alpha \beta} \right]
\end{eqnarray}
\item
$G_{\rm KK}^{\mu
\nu}(k_3)-\bar{\psi}(k_1)-\psi(k_2)~\textrm{vertex}: $
\begin{eqnarray}
-i \frac{1}{4\overline{M}_P} \left[\gamma^{\mu} (k_1 + k_2)^{\nu} +
\gamma^{\nu} (k_1 + k_2)^{\mu} - 2 \eta^{\mu \nu} (\rlap/{k}_1 +
\rlap/{k}_2 - 2 m_{\psi}) \right]
\end{eqnarray}
\item
$G_{\rm KK}^{\mu
\nu}(k_4)-\bar{\psi}(k_1)-\psi(k_2)-A^{a\rho}(k_3)~\textrm{vertex}:
$
\begin{eqnarray}
i g_{s} \frac{1}{2\overline{M}_P} \left( \gamma^{\mu} \eta^{\nu
\rho} + \gamma^{\nu} \eta^{\mu \rho} - 2 \gamma^{\rho}\eta^{\mu \nu}
\right)T^{a}
\end{eqnarray}
\item
$G_{\rm KK}^{\mu
\nu}(k_3)-A^{a\rho}(k_1)-A^{b\sigma}(k_2)~\textrm{vertex}: $
\begin{eqnarray}
i \frac{2}{\overline{M}_P} \delta^{a b} \left[(C^{\mu \nu \rho
\sigma \tau \beta} - C^{\mu \nu \rho \beta \sigma \tau}) k_{1\tau}
k_{2\beta} + \frac{1}{\alpha_3}E^{\mu \nu \rho
\sigma}(k_1,k_2)\right]
\end{eqnarray}
\item
$G_{\rm KK}^{\mu \nu}(k_3)-W^{+ \rho}(k_1)-W^{-
\sigma}(k_2)~\textrm{vertex}: $
\begin{eqnarray}
i \frac{2}{\overline{M}_P} \left[B^{\mu \nu \rho \sigma} m_W^2 +
(C^{\mu \nu \rho \sigma \tau \beta} - C^{\mu \nu \rho \beta \sigma
\tau}) k_{1\tau} k_{2\beta} + \frac{1}{\xi}E^{\mu \nu \rho
\sigma}(k_1,k_2)\right]
\end{eqnarray}
\item
$G_{\rm KK}^{\mu
\nu}(k_4)-A^{a\rho}(k_1)-A^{b\sigma}(k_2)-A^{c\lambda}(k_3)~\textrm{vertex}:
$
\begin{eqnarray}
\frac{2}{\overline{M}_P}g_{s}f^{a b c} \left[(k_1-k_3)_{\tau}C^{\mu
\nu \tau \sigma \rho \lambda}+ (k_2-k_1)_{\tau}C^{\mu \nu \sigma
\rho \tau \lambda} + (k_3-k_2)_{\tau}C^{\mu \nu \lambda \sigma \tau
\rho}\right]
\end{eqnarray}
\item
$G_{\rm KK}^{\mu
\nu}(k_5)-A^{a\rho}(k_1)-A^{b\sigma}(k_2)-A^{c\lambda}(k_3)-A^{d\delta}(k_4)~\textrm{vertex}:
$
\begin{eqnarray}
-i \frac{1}{\overline{M}_P}g_{s}^2 [f^{e a c}f^{e b d}D^{\mu \nu
\rho \sigma \lambda \delta}+ f^{e a b}f^{e c d}D^{\mu \nu \rho
\lambda \sigma \delta}+ f^{e a d}f^{e b c}D^{\mu \nu \rho \sigma
\delta \lambda} ]
\end{eqnarray}
\item
$G_{\rm KK}^{\mu
\nu}(k_3)-\bar{\eta}^a(k_1)-{\eta}^b(k_2)~\textrm{vertex}: $
\begin{eqnarray}
- i \frac{2}{\overline{M}_P}\delta^{a b}B^{\alpha \beta \mu \nu}
k_{1\alpha} k_{2\beta}
\end{eqnarray}
\item
$G_{\rm KK}^{\mu
\nu}(k_3)-\bar{\eta}^a(k_1)-{\eta}^b(k_2)-A^{c\rho}(k_3)~\textrm{vertex}:
$
\begin{eqnarray}
\frac{2}{\overline{M}_P}g_{s}f^{a b c} B^{\alpha \rho \mu \nu}
k_{1\alpha}
\end{eqnarray}
\end{itemize}
where $g_s$ is the strong coupling constant, $T^a$ and $f^{abc}$ are
SU(3) generators and structure constants, $D = n + \delta$, $n = 4 -
2 \epsilon$, $\overline{M}_P$ is the reduced Planck mass, $\alpha_3$ and
$\xi$ are SU(3) and charged SU(2) gauge fixing parameters, and
$D(s)$ can be expressed as \cite{2}
\begin{equation}
D(s)\ =\ {s^{\delta/2-1}\over\Gamma(\delta/2)}
{R^{\delta}\over(4\pi)^{\delta/2}} \biggl[\pi + 2i
I(\Lambda/\sqrt{s})\biggr] \label{DS}
\end{equation}
and
\begin{equation}
I(\Lambda/\sqrt{s})\ =\ P \int_0^{\Lambda/\sqrt{s}}dy\
{y^{\delta-1}\over 1-y^2}\ . \label{B6}
\end{equation}
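As a quick numerical cross-check (our own illustrative sketch, not part of the original calculation), this principal-value integral can be evaluated with SciPy's Cauchy-weight quadrature and compared with the closed form $I(x)=-x+\frac{1}{2}\ln\frac{x+1}{x-1}$, valid for $\delta=3$ and $x=\Lambda/\sqrt{s}>1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def I_pv(x, delta):
    # P int_0^x y^(delta-1)/(1-y^2) dy
    #   = P int_0^x [-y^(delta-1)/(y+1)] / (y-1) dy,
    # matching scipy's 'cauchy' convention P int f(y)/(y-wvar) dy
    f = lambda y: -y**(delta - 1) / (y + 1.0)
    val, _ = quad(f, 0.0, x, weight='cauchy', wvar=1.0)
    return val

x = 2.5  # hypothetical value of Lambda/sqrt(s)
print(I_pv(x, 3), -x + 0.5*np.log((x + 1.0)/(x - 1.0)))  # should agree
\end{verbatim}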
The integral $I(\Lambda/\sqrt{s})$ contains an ultraviolet cutoff
$\Lambda$ on the KK modes \cite{2,3}. In this work we set it to
be the fundamental scale $M_S$. It should be understood that a
point $y=1$ has been removed from the integration path. Besides,
all the momenta are assumed to be incoming to the vertices,
except that the fermionic momenta are set to be along the fermion
flow directions. The coefficients $A^{\mu \nu}$, $B^{\mu \nu
\alpha \beta}$, $C^{\rho \sigma \mu \mu \alpha \beta}$,
$D^{\mu \nu \rho \sigma \lambda \delta}$, and
$E^{\mu \nu \rho \sigma}(k_{1},k_{2})$ are expressed as
\begin{eqnarray}
A^{\mu \nu} & = & \frac{1}{2}\eta^{\mu \nu},~~~~~~~ B^{\mu \nu
\alpha \beta} = \frac{1}{2}
(\eta^{\mu \nu}\eta^{\alpha \beta}
-\eta^{\mu \alpha}\eta^{\nu \beta}
-\eta^{\mu \beta}\eta^{\nu \alpha}),
\nb \\
C^{\rho \sigma \mu \nu \alpha \beta} & = & \frac{1}{2}
[\eta^{\rho \sigma}\eta^{\mu \nu}\eta^{\alpha \beta}
-(\eta^{\rho \mu}\eta^{\sigma \nu}\eta^{\alpha \beta}
+\eta^{\rho \nu}\eta^{\sigma \mu}\eta^{\alpha \beta}
+\eta^{\rho \alpha}\eta^{\sigma \beta}\eta^{\mu \nu}
+\eta^{\rho \beta}\eta^{\sigma \alpha}\eta^{\mu \nu})],
\nb \\
D^{\mu \nu \rho \sigma \lambda \delta} & = &
\eta^{\mu \nu}(\eta^{\rho \sigma}\eta^{ \lambda \delta}
-\eta^{\rho \delta}\eta^{ \sigma \lambda})
+(\eta^{\mu \rho}\eta^{\nu \delta}\eta^{\lambda \sigma}
+\eta^{\mu \lambda}\eta^{\nu \sigma}\eta^{\rho \delta}
-\eta^{\mu \rho}\eta^{\nu \sigma}\eta^{\lambda \delta}
-\eta^{\mu \lambda}\eta^{\nu \delta}\eta^{\rho \sigma}
+ (\mu \leftrightarrow \nu)), \nb \\
E^{\mu \nu \rho \sigma}(k_{1},k_{2}) & = &
\eta^{\mu \nu}(k_1^{\rho} k_1^{\sigma} + k_2^{\rho} k_2^{\sigma}
+ k_1^{\rho} k_2^{\sigma}) - \left [\eta^{\nu \sigma} k_1^{\mu} k_1^{\rho}
+ \eta^{\nu \rho} k_2^{\mu} k_2^{\sigma} + (\mu \leftrightarrow \nu)\right ]. \nb
\end{eqnarray}
\par
We implement the related Feynman rules in the
FeynArts 3.5 package \cite{10} to generate the Feynman diagrams and
the relevant amplitudes. The FormCalc 5.4 \cite{11} package is
subsequently used to simplify the amplitudes.
\vskip 5mm
\section{ LO cross section for \ppww }
\par
We treat the up-, down-, charm-, strange-, and bottom-quarks as massless
particles, and adopt the five-flavor scheme in the leading
order and QCD next-to-leading order calculations. The LO
contribution to the parent process \ppww includes the
quark-antiquark $(q=u,d,s,c,b)$ annihilations and the gluon-gluon
fusion partonic processes: $q(p_1)+\bar q(p_2) \to
W^+(p_3)+W^-(p_4)$ and $g(p_1)+g(p_2) \to W^+(p_3)+W^-(p_4)$. There
$p_{i}$ $(i=1,2,3,4)$ represent the four-momenta of the incoming and
outgoing particles, respectively. The corresponding Feynman diagrams
are shown in Figs.\ref{fig1} and \ref{fig2}. Figures \ref{fig1}(1)
and \ref{fig1}(2) are the LO SM-like diagrams for the partonic process $q\bar q
\to W^+W^-$. In Fig.\ref{fig1}(1) the internal wavy line denotes
the exchange of a $\gamma$ or a $Z^0$-boson. We ignore the diagrams
with Higgs-boson exchange, since the initial quarks are all
massless.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.8]{fig1.eps}
\caption{\label{fig1} The tree-level Feynman diagrams for the
partonic processes \qqww in the LED model. (1) and (2) are the
SM-like diagrams, where $q$ represents the $u$-, $d$-, $c$-, $s$- and
$b$-quark. (3) is the extra diagram with KK graviton exchange. }
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.8]{fig2.eps}
\caption{\label{fig2} The tree-level Feynman diagram for the
partonic process \ggww in the LED model. }
\end{center}
\end{figure*}
\par
We express the tree-level amplitudes for the partonic processes
$q\bar{q} \to W^+W^-$ and $gg \to W^+W^-$ as
\begin{equation}
\label{TreeAmplitude1} {\cal M}_{q\bar{q}}^{0} = {\cal
M}_{q\bar{q}}^{0,SM} + {\cal M}_{q\bar{q}}^{0,LED},~~~~{\cal
M}_{gg}^{0} = {\cal M}_{gg}^{0,LED},
\end{equation}
where ${\cal M}_{q\bar{q}}^{0,SM}$ $(q=u,d,c,s,b)$ is the amplitude
contributed by the tree-level SM-like diagrams, while ${\cal
M}_{q\bar{q}}^{0,LED}$ and ${\cal M}_{gg}^{0,LED}$ are the
tree-level amplitudes with KK graviton exchange. Our calculations
show that the analytical expression of the SM matrix element squares
summed (averaged) over the final (initial) state spins and colors at
the LO, $\overline{|{\cal M}_{q\bar{q}}^{0,SM}|^2}$, for the
partonic process without massive internal or external quark (i.e., $q
= u, d, c, s$), is the same as that presented in Ref.\cite{7}. But
for the partonic process $b \bar{b} \to W^+ W^-$, there is a
t-channel diagram with a massive top-quark exchange. Its explicit
expression of $\overline{|{\cal M}_{b\bar{b}}^{0,SM}|^2}$ is
presented below by adopting the notations in Ref.\cite{7}.
\begin{equation}
\label{Amplitude-t} \overline{|{\cal M}_{b\bar{b}}^{0,SM}|^2}=
\frac{e^4}{N} \left(A_1^b B_{1m}^b +A_2^b B_{2m}^b +A_3^b
B_{3m}^b\right),
\end{equation}
where $N$ is the number of colors. The explicit expressions for
$A_1^b$, $A_2^b$, and $A_3^b$ can be obtained from Eqs.(8) in
Ref.\cite{7} by the replacement $u \to b$. The kinematic
invariants $B_{1m}^b$, $B_{2m}^b$, and $B_{3m}^b$ can be expressed as
\begin{equation}
B_{1m}^b = \frac{u^2}{(u-m_t^2)^2}B_1^d(t,u,s),~~~ B_{2m}^b =
B_2^d(t,u,s),~~~ B_{3m}^b = \frac{u}{(u-m_t^2)}B_3^d(t,u,s),
\end{equation}
where $B_1^d(t,u,s)$, $B_2^d(t,u,s)$ and $B_3^d(t,u,s)$ are
presented in Eqs.(9)-(12) of Ref.\cite{7}. Then the LO cross
sections for the unpolarized $W$-pair production processes at the
partonic level can be expressed as
\begin{eqnarray}\label{int}
\hat{\sigma}_{ij}^{LO} &=& \frac{1}{4|\vec{p}|\sqrt{\hat{s}}}\int
{\rm d}\Gamma_2 \overline{|{\cal M}_{ij}^{0}|^2},
~~(ij=u\bar{u},d\bar{d},c\bar{c},s\bar{s},b\bar{b},gg),
\end{eqnarray}
where $\vec{p}$ is the momentum of one initial parton in the
center-of-mass system (c.m.s.) and ${\rm d} \Gamma_2$ is the two-body
phase space element expressed as
\begin{eqnarray}
{\rm d} \Gamma_2 = (2\pi)^4 \delta^{(4)} (p_1+p_2-p_3 - p_4)
\prod_{i=3,4} \frac{d^3 \vec{p}_i}{(2\pi)^3 2E_i}.
\end{eqnarray}
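For reference, in the c.m.s. this two-body phase space integral reduces to the standard textbook form
\begin{equation}
\int {\rm d} \Gamma_2 = \int \frac{|\vec{p}_3|}{16\pi^2 \sqrt{\hat{s}}}\, {\rm d}\Omega,
\end{equation}
with ${\rm d}\Omega$ the solid angle of the outgoing $W^+$ boson.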
\par
By convoluting $\hat{\sigma}^{LO}_{i j}$ with the parton
distribution functions (PDFs) of the colliding protons, the LO cross
section for the parent process, \ppww, can be written as
\begin{eqnarray}
\sigma_{LO} &=& \sum^{c\bar c,b\bar b,gg}_{ij=u\bar u,d\bar d,s\bar
s}\frac{1}{1+\delta_{ij}} \int dx_A dx_B \left[ G_{i/A}(x_A,\mu_f)
G_{j/B}(x_B,\mu_f)\hat{\sigma}^{LO}_{ij}(\sqrt{\hat{s}})
+(i \leftrightarrow j) \right], \nb \\
\end{eqnarray}
where $G_{i/P}$ $(i = g, q, \bar{q})$ represent the PDFs of parton
$i$ in proton $P$, $\mu_{f}$ is the factorization scale,
$\hat{s}=x_A x_B s$, $x_A$ and $x_B$ describe the
momentum fractions of parton (gluon or quark) in protons $A$ and
$B$, respectively.
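Schematically, this double convolution can be estimated by Monte Carlo sampling. The following sketch is purely illustrative (it is not the code used in this work); \texttt{pdf(flav, x, mu\_f)} and \texttt{sigma\_hat\_lo(channel, shat)} are hypothetical stand-ins for a PDF interface such as LHAPDF and for the partonic cross sections of Eq.(\ref{int}):
\begin{verbatim}
import numpy as np

def sigma_lo(pdf, sigma_hat_lo, channels, s, mu_f, n=200000, seed=1):
    rng = np.random.default_rng(seed)
    xa, xb = rng.uniform(size=(2, n))  # flat sampling of momentum fractions
    shat = xa * xb * s                 # partonic c.m.s. energy squared
    total = 0.0
    for (i, j) in channels:            # e.g. ('u','ubar'), ..., ('g','g')
        lum = pdf(i, xa, mu_f) * pdf(j, xb, mu_f)
        if i != j:
            # add the (i <-> j) assignment; together with the factor
            # 1/(1+delta_ij) this counts each channel exactly once
            lum = lum + pdf(j, xa, mu_f) * pdf(i, xb, mu_f)
        total += np.mean(lum * sigma_hat_lo((i, j), shat))
    return total
\end{verbatim}
A realistic implementation would, of course, use importance sampling and impose the phase-space cuts discussed below.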
\vskip 5mm
\section{NLO QCD corrections }
\par
The complete NLO QCD correction to the parent process \ppww consists
of the following components. (1) The virtual contribution from the QCD
one-loop and the corresponding counterterm diagrams to the partonic
channels $q\bar{q} \to W^+W^-$ and $gg \to W^+W^-$. (2) The
contribution of the real gluon emission partonic processes. (3) The
contribution of the real light-(anti)quark emission partonic
processes. And (4) the corresponding contribution of the PDF
counterterms. Ultraviolet (UV) and
infrared (IR) divergences inevitably appear in the NLO calculations, and we adopt the
dimensional regularization scheme in $n=4-2 \epsilon$
dimensions to isolate and manipulate these divergences.
\par
{A. Virtual corrections }
\par
The Feynman diagrams for the virtual corrections to the \qqww and
\ggww partonic processes are shown in Fig.\ref{fig3} and
Fig.\ref{fig4}, respectively. In Figs.\ref{fig4}(3) and \ref{fig4}(4) the
diagrams involving Yukawa coupling between Higgs boson and top
quarks are included, but the diagrams involving Yukawa coupling
between Higgs boson and massless quarks are excluded due to their
vanishing contribution. There exist UV and soft/collinear IR
singularities in the calculations of these one-loop diagrams. To
remove the UV divergences, we need only the wave function
renormalization constants for the quark and gluon fields. We
introduce the renormalization constants $\delta Z_{\psi_{q,L,R}}$
for massless quark (q=u,d,c,s,b) fields and $\delta Z_{A}$ for the gluon
field defined as
\begin{eqnarray}
\psi^{0}_{q,L,R} = (1+\delta Z_{\psi_{q,L,R}})^{1/2}\psi_{q,L,R},~~~
A^{a0}_{\mu} = (1+\delta Z_{A})^{1/2}A^{a}_{\mu}.
\end{eqnarray}
In the modified minimal subtraction ($\overline{MS}$) renormalization scheme the renormalization
constants for the massless quarks are expressed as
\begin{eqnarray}
\delta Z_{\psi_{q,L}} &=&
-\frac{\alpha_{s}}{4\pi}C_{F}(\Delta_{UV}-\Delta_{IR}),
~~~\delta Z_{\psi_{q,R}} = -\frac{\alpha_{s}}{4\pi}C_{F}\left(\Delta_{UV}-\Delta_{IR}\right), \\
\delta Z_{A} &=&
\frac{\alpha_{s}}{4\pi}\left(\frac{5}{3}C_{A}-\frac{4}{3}n^{UV}_{f}T_{F}\right)\Delta_{UV}
+\frac{\alpha_{s}}{4\pi}\left(\frac{5}{3}C_{A}-\frac{4}{3}n^{IR}_{f}T_{F}\right)\Delta_{IR}.
\end{eqnarray}
To remove the UV and IR divergences in the $b\bar b$-fusion subprocess, we need to
introduce the counterterms for the top-quark field and its mass, i.e.,
\begin{eqnarray}
\psi^{0}_{t,L,R} = (1+\delta Z_{\psi_{t,L,R}})^{1/2}\psi_{t,L,R},~~~
m^{0}_{t} = m_{t} + \delta m_{t}.
\end{eqnarray}
We use the on-mass-shell scheme to renormalize the top-quark field and
mass. They are expressed as
\begin{eqnarray}
\delta Z_{\psi_{t,L}} &=&
-\frac{\alpha_{s}}{4\pi}C_{F}(\Delta_{UV}+2\Delta_{IR}+3\ln{\frac{\mu_r^{2}}{m^{2}_{t}}}+4), \\
\delta Z_{\psi_{t,R}} &=&
-\frac{\alpha_{s}}{4\pi}C_{F}(\Delta_{UV}+2\Delta_{IR}+3\ln{\frac{\mu_r^{2}}{m^{2}_{t}}}+4), \\
\frac{\delta m_{t}}{m_t} &=&
-\frac{3\alpha_{s}}{4\pi}C_{F}(\Delta_{UV}+\ln{\frac{\mu_r^{2}}{m^{2}_{t}}}+\frac{4}{3}).
\end{eqnarray}
In the above equations $\mu_r$ is the renormalization scale,
$C_{F}=\frac{4}{3}$, $C_{A}=3$, $T_{F}=\frac{1}{2}$, $n^{UV}_{f}=6$
corresponds to the six flavor quarks ($u$, $d$, $c$, $s$, $t$, $b$),
whereas $n^{IR}_{f}=5$ is the number of massless quarks ($u$,
$d$, $s$, $c$, $b$). Moreover, $\Delta_{UV}=
\frac{1}{\epsilon_{UV}}\Gamma
(1+\epsilon_{UV})(4\pi)^{\epsilon_{UV}}$ and
$\Delta_{IR}=\frac{1}{\epsilon_{IR}}\Gamma
(1+\epsilon_{IR})(4\pi)^{\epsilon_{IR}}$ refer to the UV and IR
divergences, respectively.
\begin{figure*}
\includegraphics[scale=0.8]{fig3.eps}
\vspace*{-0.3cm} \centering \caption{\label{fig3} The QCD one-loop
Feynman diagrams for the partonic process \qqww. (1)-(6) are the
SM-like diagrams. (7)-(10) are the diagrams with KK graviton
exchange.}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.8]{fig4.eps}
\caption{\label{fig4} The QCD one-loop Feynman diagrams for
partonic process \ggww. (1)-(7) are the SM-like diagrams. (8)-(20) are
the diagrams with KK graviton exchange. In all diagrams $q_m$
represents $u$-, $d$-, $c$-, $s$-, $b$- and $t$-quark except the
diagrams in Figs.\ref{fig4}(3) and Fig.\ref{fig4}(4) involving the
coupling between Higgs boson and top quarks, where $q_m$ denotes
only top quark. }
\end{center}
\end{figure*}
\par
Then the results for the differential cross sections for the $q\bar
q$ annihilation and $gg$ fusion partonic channels are UV finite but
soft/collinear IR divergent. The soft/collinear IR singularities can
be canceled by adding the contributions of the real emission
partonic processes and the corresponding PDF counterterms.
\par
{B. Real gluon emission}
\par
The real gluon emission contributions are from $g(p_1)+g(p_2) \to
W^+(p_3) + W^-(p_4) + g(p_5)$ and $q(p_1)+\bar q (p_2) \to W^+(p_3)
+ W^-(p_4) + g(p_5)$ partonic processes. The corresponding Feynman
diagrams are shown in Fig.\ref{fig5} and Fig.\ref{fig6},
respectively. We employ the two cutoff phase space slicing (TCPSS)
method \cite{13} to calculate the contributions from the real gluon
emission partonic processes. An arbitrary soft cutoff $\delta_{s}$
is introduced to separate the gluon emission subprocess phase space
into two regions, soft gluon and hard gluon regions. Furthermore,
another cutoff $\delta_{c}$ is introduced to decompose the real hard
gluon emission phase space region into hard collinear ($HC$)
and hard noncollinear ($\overline{HC}$) regions. The partonic
differential cross section for the real gluon emission subprocess
can be expressed as
\begin{eqnarray}
d\hat{\sigma}_g ~=~ d\hat{\sigma}^S_g+d\hat{\sigma}^H_g &=&
d\hat{\sigma}^S_g+d\hat{\sigma}^{HC}_g+d\hat{\sigma}^{\overline{HC}}_g.
\end{eqnarray}
\begin{figure*}
\begin{center}
\includegraphics [scale=0.8]{fig5.eps}
\caption{\label{fig5} The tree-level Feynman diagrams for the real
gluon emission subprocess \qqwwg. (1)-(5) are the SM-like diagrams.
(6)-(9) are the extra diagrams with KK graviton exchange.}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics [scale=0.8]{fig6.eps}
\caption{\label{fig6} The tree-level Feynman diagrams for the real
gluon emission subprocess \ggwwg. There is no SM-like diagram.}
\end{center}
\end{figure*}
\par
{C. Real light-(anti)quark emission}
\par
In addition to the real gluon emission discussed above, there are
contributions from the massless light-(anti)quark
($u,d,c,s,b$,$\bar{u},\bar{d},\bar{c},\bar{s},\bar{b}$) emission
partonic processes. In the five-flavor scheme the massless
light-quark $q$ involves $u$-, $d$-, $c$-, $s$-, $b$-quarks.
Considering the fact that the final (anti)bottom-quark can be tagged
in experiments and the collinear IR singularities of the real
(anti)bottom-quark emission subprocesses are completely canceled by
those of the corresponding PDF counterterms, we do not include the
contributions of the bottom and antibottom emissions, and adopt the
five-flavor PDFs \cite{14}. We depict the Feynman diagrams for the
partonic processes $qg \to W^+ W^- + q$ and $\bar qg \to W^+ W^-
+ \bar q$ in Fig.\ref{fig7}. These partonic processes contain only
the initial state collinear singularities. By using the TCPSS method
described above, we can split the phase space into collinear and
noncollinear regions. The differential cross sections for the
partonic processes $qg \to W^+W^-q$ and $\bar{q}g \to W^+W^-\bar{q}$
can then be expressed as
\begin{eqnarray}
d\hat{\sigma}_{q(\bar{q})}(q(\bar{q}) g \to W^+W^-q(\bar{q})) ~~=~~
d\hat{\sigma}^{C}_{q(\bar{q})} +
d\hat{\sigma}^{\overline{C}}_{q(\bar{q})},
\end{eqnarray}
where $d\hat{\sigma}^{\overline{C}}_{q}$ and
$d\hat{\sigma}^{\overline{C}}_{\bar{q}}$ are finite and can be
evaluated by using the general Monte Carlo method.
\begin{figure*}
\begin{center}
\includegraphics [scale=0.8]{fig7.eps}
\caption{\label{fig7} The tree-level Feynman diagrams for the real
light-(anti)quark emission subprocesses \qgwwq. (1)-(5) are the
SM-like diagrams. (6)-(9) are the diagrams with KK graviton exchange.}
\end{center}
\end{figure*}
\par
{D. NLO QCD corrections to the \ppww process }
\par
Combining the renormalized virtual corrections and the real
gluon/light-(anti)quark emission contributions, the partonic cross
sections still contain the collinear divergence, which can be
absorbed into the redefinition of the PDFs at the NLO according to
the mass factorization \cite{15}. We find that after the summation
of all the NLO QCD corrections, the soft and collinear IR
divergences vanish. We can see from above discussion that the
final total ${\cal O}(\alpha_{s})$ corrections consist of the
two-body term $\sigma^{(2)}$ and the three-body term $\sigma^{(3)}$.
The total cross section up to the QCD NLO is expressed as
\begin{eqnarray}
\sigma_{NLO} = \sigma_{LO} + \Delta \sigma_{NLO}= \sigma_{LO} + \sigma^{(2)} + \sigma^{(3)}.
\end{eqnarray}
It is UV finite, IR safe, and cutoff $\delta_{c}/\delta_{s}$
independent \cite{13,16}, which will further be checked in the
numerical evaluations.
\vskip 10mm
\section{Numerical results and discussions }
\par
In this section, we present the numerical results of the integrated
cross sections and the kinematic distributions of the final
particles for the \ppww process in both the SM and LED model up to
the QCD NLO. In order to verify the correctness of our numerical
calculations, we made the following checks:
\begin{itemize}
\item[(i)] In Table \ref{tab1}, we list our numerical results of the
LO and NLO QCD corrected integrated cross sections in the SM for the
\ppww process by taking the input parameters, PDFs, and event
selection criterion from Table 4 of Ref.\cite{17}. It
shows that our LO and NLO QCD corrected cross sections in the SM are
in good agreement with those in Ref.\cite{17} within the integration
errors.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
{} LHC &Ref.\cite{17} &FeynArts &CompHEP \\
\hline Born[pb] &86.7 &86.711(6) &86.7(1) \\
\hline NLO QCD[pb] &127.8 &127.7(1) &----- \\
\hline
\end{tabular}
\caption{ \label{tab1} The LO and NLO QCD corrected cross sections
for the $pp \to W^{+}W^{-}+X$ process in the SM by taking the same
input parameters and event selection criterion as those in
Ref.\cite{17}. }
\end{center}
\end{table}
\item[(ii)] The UV and IR safety of the result is verified numerically after combining
all the contributions at the QCD NLO.
\item[(iii)] We calculate the NLO QCD corrections to the integrated cross section for
the $pp \to u\bar{u} \to W^+W^- + X$ process as functions of the cutoff
$\delta_{s}$ at the $\sqrt{s} = 14~TeV$ LHC in the LED model, where we take
$\mu_f = \mu_r = \mu_0 = m_W$, $M_S = 3.5~TeV$, $\delta=3$ and
$\delta_{c} = \delta_s/50$. Some of the results are listed in Table \ref{tab1-1}.
It is shown clearly that the NLO QCD correction ($\Delta\sigma_{NLO}^{LED}$)
does not depend on the arbitrarily chosen values of $\delta_{s}$ and $\delta_{c}$
within the calculation errors. In the further numerical calculations,
we fix $\delta_s=10^{-3}$ and $\delta_c=\delta_s/50$.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|}
\hline
{} $\delta_{s}$ & $\Delta\sigma_{NLO}^{LED}$[pb] \\
\hline $2\times10^{-3}$ & 13.19(3) \\
\hline $1\times10^{-3}$ & 13.20(3) \\
\hline $7\times10^{-4}$ & 13.21(3) \\
\hline $4\times10^{-4}$ & 13.22(5) \\
\hline $2\times10^{-4}$ & 13.24(5) \\
\hline $1\times10^{-4}$ & 13.26(6) \\
\hline $7\times10^{-5}$ & 13.25(6) \\
\hline $4\times10^{-5}$ & 13.27(6) \\
\hline $2\times10^{-5}$ & 13.26(7) \\
\hline
\end{tabular}
\caption{ \label{tab1-1} The dependence on the cutoff $\delta_{s}$ of the NLO QCD correction to
the integrated cross section for the process $pp \to u\bar{u} \to W^+W^- + X$
at the $\sqrt{s} = 14~TeV$ LHC in the LED model, where we set $\mu_f =
\mu_r = \mu_0 = m_W$, $M_S = 3.5~TeV$, $\delta=3$ and $\delta_{c} =
\delta_s/50$. }
\end{center}
\end{table}
\item[(iv)] We calculate the SM LO $W$-pair invariant mass
distribution ($d\sigma^{SM}_{LO}/dM_{WW}$) for the \ppww process
with the same input parameters, PDFs and event selection criterion
as those used in Ref.\cite{7}. The numerical results, which are
obtained by using FeynArts and CompHEP packages separately, are
coincident with each other.
\item[(v)] For further comparison with the previous work of N. Agarwal,
{\it et al}, we recalculate the LO and NLO QCD corrected
distributions of $W$-pair invariant mass in both the SM and LED
model ($d\sigma^{SM,LED}_{LO}/dM_{WW}$,
$d\sigma^{SM,LED}_{NLO}/dM_{WW}$) for the \ppww process at the
$\sqrt{s}=14~TeV$ LHC, where we take all the quarks to be massless
except $m_t=172.0~GeV$, and take the PDFs and event selection
criterion from Ref.\cite{7}. We plot our LO and
NLO QCD corrected results in the SM in Fig.\ref{fig7-1}(a), and the
results in the LED model in Fig.\ref{fig7-1}(b). In these two
figures we depict also the corresponding curves from N. Agarwal's
paper \cite{7} for comparison. We can see that there exist obvious
discrepancies between ours and the corresponding N. Agarwal
results, especially in the large $M_{WW}$ region. One of the reasons
for the disagreement is because we have included the effects of
top-quark mass in our calculations. From Figs.\ref{fig7-1}(a,b) we
can see the K-factors of the QCD corrections increase to large
numbers of $K=5.29$ and $K=2.33$ separately, when $W$-pair invariant
mass approaches $M_{WW} = 1300~GeV$. This occurs because the
K-factors in Figs.\ref{fig7-1}(a,b) are the results with only
constraint on W-bosons ($|y_W| < 2.5$ ) \cite{7}. In this case the
differential cross section, $d\sigma^{SM}_{NLO}/dM_{WW}$, receives
a large contribution from the hard jet emission corrections
($W^+W^-+jet$). For example, in Fig.\ref{fig7-1}(a) at $M_{WW} =
1300~GeV$ the K-factor contributed by hard jet emission processes
can reach $3.59$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.7]{fig7-1a.EPS}
\hspace{0in}
\includegraphics[scale=0.7]{fig7-1b.EPS}
\hspace{0in}
\caption{ \label{fig7-1} Invariant mass ($M_{WW}$) distributions at
the LO and NLO for the $pp \to W^{+}W^{-}+X$ process at the
$\sqrt{s}=14~TeV$ LHC. There the input parameters, PDFs, and event
selection criterion are taken from Ref.\cite{7} (where $M_S=2~TeV$,
$\delta=3$, and $\mu_r=\mu_f=M_{WW}$). For comparison with previous
work, we depict also the corresponding curves from N. Agarwal's
paper \cite{7} in both panels. (a) The distributions in the SM. (b)
The distributions in the LED model. }
\end{center}
\end{figure}
\end{itemize}
\par
In the following calculations we take one-loop and two-loop running
$\alpha_{s}$ in the LO and QCD NLO calculations, respectively
\cite{18}. The QCD parameters are taken as $\Lambda_5^{LO}=165~MeV$,
$\Lambda_5^{\overline{MS}}=226~MeV$, the number of the active
flavors is $n_{f}=5$, the Cabibbo-Kobayashi-Maskawa matrix is set to the unit matrix, and
the colliding c.m.s. energy is taken as $\sqrt{s}=7~TeV$ and
$\sqrt{s}=14~TeV$ for the early and future LHC. To satisfy the
unitary constraint, we adopt the cut $\sqrt{\hat{s}}<M_{S}$ for the
whole phase space. We assume $m_H=120~GeV$ and the renormalization
and factorization scales to be equal ($\mu_r=\mu_f \equiv \mu$), and
we define $\mu_0 \equiv m_W$. We use the CTEQ6L1 and CTEQ6M PDFs
\cite{19,20} in the LO and QCD NLO calculations, respectively. The
other related input parameters are taken from \cite{18}:
$\alpha^{-1}(m_{Z}) = 127.916$, $m_W = 80.399~GeV$,
$m_Z=91.1876~GeV$ and $m_t=172.0~GeV$. Since constraints on the
final particles are necessary in realistic experimental
event collection, and the theoretical calculation should maintain perturbative
convergence, we additionally adopt the following event selection constraints.
(1) For the real emission contributions, we accept the events which satisfy
the condition that the jet pseudorapidity $|y_{jet}|>2.5$ or the jet transverse
momentum $p_{T}^{jet}<50~GeV$. (2) The $W$-pair invariant mass is restricted
in the range of $M_{WW}>400~GeV$.
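Concretely, the one-loop running used at LO has the familiar form $\alpha_s(\mu)=4\pi/[\beta_0\ln(\mu^2/\Lambda_5^2)]$ with $\beta_0=11-2n_f/3$; a minimal sketch (our own, for illustration only) with the parameters quoted above reads
\begin{verbatim}
import math

def alpha_s_lo(q, lam=0.165, nf=5):
    # one-loop running: alpha_s(q) = 4*pi / (b0 * ln(q^2 / Lambda^2))
    b0 = 11.0 - 2.0*nf/3.0
    return 4.0*math.pi / (b0 * math.log(q*q/(lam*lam)))

print(alpha_s_lo(80.399))  # evaluated at mu = m_W, scales in GeV
\end{verbatim}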
\par
In Figs.\ref{fig8}(a,b), the upper panels show the scale ($\mu$)
dependence of the LO and the NLO QCD corrected cross sections in the
SM and LED model at the $\sqrt{s}=7~TeV$ and $\sqrt{s}=14~TeV$ LHC,
respectively, and the corresponding $K$-factors $(K(\mu)\equiv
\frac{\sigma_{NLO}(\mu)}{\sigma_{LO}(\mu)})$ are illustrated in the
lower panels. There we take $M_{S}=3.5~TeV$ and $\delta =3$. The
scale-dependent $K$-factor in the LED model varies from $1.18$
($1.53$) to $1.19$ ($1.11$) when $\mu$ goes from $0.5\mu_0$ to
$2\mu_0$ at the $\sqrt{s}=7~TeV$ LHC (the $\sqrt{s}=14~TeV$ LHC). We
see from these upper panels that the NLO QCD corrections in the SM
and LED model do not reduce the factorization/renormalization scale
uncertainty, especially at the $\sqrt{s}=14~TeV$ LHC. That is
because the LO result underestimates the scale dependence, the
LO contribution coming from purely electroweak partonic processes. We also
find that the $K$-factors plotted in the lower panels preserve the
convergence of the perturbative series in the plotted $\mu$ range at
both the $\sqrt{s}=7~TeV$ and $\sqrt{s}=14~TeV$ LHC. In further
calculations we fix $\mu = m_{W}$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.8]{fig8a.EPS}
\includegraphics[scale=0.8]{fig8b.EPS}
\hspace{0in}
\caption{\label{fig8} The scale dependence of the LO and NLO QCD corrected
cross sections for the process \ppww in the SM and LED model, and the
corresponding $K$-factor $[K(\mu)\equiv \frac{\sigma_{NLO}(\mu)}
{\sigma_{LO}(\mu)}]$ with $M_{S}=3.5~TeV$ and $\delta =3$.
(a) At the $\sqrt{s}=7~TeV$ LHC. (b) At the $\sqrt{s}=14~TeV$ LHC. }
\end{center}
\end{figure}
\par
In Figs.\ref{fig9}(a,b), we depict the LO and NLO QCD corrected
cross sections and the corresponding $K$-factors for the process
\ppww in the LED model as functions of the fundamental scale
$M_{S}$, with $\mu=m_W$ and the number of extra space dimensions
$\delta$ being $3$, $4$, and $5$, respectively. From the figures one
can see that the LED effect becomes more distinct for smaller values of $M_{S}$ and $\delta$.
\begin{figure}[htbp]
\includegraphics[scale=0.8]{fig9a.EPS}
\hspace{0in}
\includegraphics[scale=0.8]{fig9b.EPS}
\hspace{0in}
\caption{\label{fig9} The LO and NLO QCD corrected cross sections
and the corresponding $K$-factors for the process \ppww in the LED
model as functions of $M_{S}$ with $\mu=m_W$ and $\delta =3,4,5$.
(a) At the $\sqrt{s}=7~TeV$ LHC. (b) At the $\sqrt{s}=14~TeV$ LHC. }
\end{figure}
\par
The LO and NLO QCD corrected distributions of the $W$-pair invariant
mass and the corresponding $K$-factors ($K(M_{WW})\equiv
\frac{d\sigma_{NLO}}{dM_{WW}}/\frac{d\sigma_{LO}}{dM_{WW}}$) for the
process $pp \to W^{+}W^{-}+X$ in the SM and LED model at the
early and future LHC are shown in Figs.\ref{fig10-1}(a) and (b),
respectively. There the results are for $M_{S}=3.5~TeV$, $\mu=m_W$, and
a fixed number of extra dimensions $\delta = 3$, obtained by
taking the input parameters and the event selection constraints mentioned
above. As expected, the LO and NLO QCD corrected differential
cross sections decrease with
increasing $M_{WW}$.
\begin{figure}[htbp]
\includegraphics[width=3.2in,height=3in]{fig10-1a.EPS}
\hspace{0in}
\includegraphics[width=3.2in,height=3in]{fig10-1b.EPS}
\hspace{0in}
\caption{\label{fig10-1} The LO and NLO QCD corrected distributions
of the $W$-pair invariant mass $(d\sigma_{LO}/dM_{WW}$,
$d\sigma_{NLO}/dM_{WW})$ and the corresponding $K$-factors for $pp
\to W^{+}W^{-}+X$ with $M_{S}=3.5~TeV$, $\mu=m_W$ and $\delta =3$ in
the SM and LED model. (a) At the $\sqrt{s}=7~TeV$ LHC. (b) At the
$\sqrt{s}=14~TeV$ LHC. }
\end{figure}
\par
It is clear that if the deviation of the cross section from the SM
prediction is large enough, the LED effect, including the NLO QCD
corrections, can be detected. We assume that the LED effect can be
observed, or cannot be observed, only if
\begin{eqnarray}
\label{upper} \Delta\sigma_{NLO}
=|\sigma_{NLO}^{LED}-\sigma_{NLO}^{SM}| \geq
\frac{5\sqrt{\mathcal{L}\sigma_{NLO}^{LED}}}{\mathcal{L}}\equiv
5\sigma
\end{eqnarray}
and
\begin{eqnarray}
\label{lower} \Delta\sigma_{NLO}
=|\sigma_{NLO}^{LED}-\sigma_{NLO}^{SM}| \leq
\frac{3\sqrt{\mathcal{L}\sigma_{NLO}^{LED}}}{\mathcal{L}}\equiv
3\sigma,
\end{eqnarray}
respectively. In Figs.\ref{fig10}(a,b), we present the discovery and
exclusion regions in the luminosity-fundamental-scale space
($\mathcal{L}-M_{S}$) for the \ppww process with $\delta = 3$.
Figure~\ref{fig10}(a) is for the $\sqrt{s}=7~TeV$ LHC and
Fig.\ref{fig10}(b) for the $\sqrt{s}=14~TeV$ LHC, where the LED
effect can and cannot be observed in the dark and gray regions,
respectively. We list some typical data read out from
Figs.\ref{fig10}(a,b) in Table \ref{tab3}. There the discovery and
exclusion fundamental scale $M_S$ values at the early and future LHC
are presented. It shows that by using the $W$-boson pair production
events we could set an exclusion limit on the cutoff scale $M_S$ of
$1.80~TeV$ at the $95\%$ confidence level at the $\sqrt{s} = 7~TeV$
LHC with an integrated luminosity of $36~pb^{-1}$. This lies within the
range of $M_S$ lower limits, $1.6\sim 2.3~TeV$, obtained
experimentally by CMS using the diphoton final-state data
samples \cite{CMS-1}.
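The criteria of Eqs.(\ref{upper}) and (\ref{lower}) are equivalent to a simple event-count significance $|N_{LED}-N_{SM}|/\sqrt{N_{LED}}$ with $N=\mathcal{L}\sigma$. A minimal helper implementing this (our own sketch, with purely hypothetical example numbers) is
\begin{verbatim}
import math

def significance(sigma_led, sigma_sm, lumi):
    # sigma_* and lumi in consistent units, e.g. pb and pb^-1
    n_led, n_sm = lumi * sigma_led, lumi * sigma_sm
    return abs(n_led - n_sm) / math.sqrt(n_led)

# discovery requires significance >= 5; exclusion requires <= 3
print(significance(sigma_led=0.14, sigma_sm=0.10, lumi=36.0))
\end{verbatim}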
\begin{figure}[htbp]
\includegraphics[width=3.2in,height=3in]{fig10a.EPS}
\hspace{0in}
\includegraphics[width=3.2in,height=3in]{fig10b.EPS}
\hspace{0in}
\caption{\label{fig10} The LED effect discovery area (dark) and the
exclusion area (gray) in the $\mathcal{L}-M_{S}$ space for the \ppww
process. (a) At the $\sqrt{s}=7~TeV$ LHC. (b) At the
$\sqrt{s}=14~TeV$ LHC. }
\end{figure}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
{}Luminosity$(\mathcal{L})$ &\multicolumn{2}{c|}{$\sqrt{s}=7~TeV$} &\multicolumn{2}{c|}{$\sqrt{s}=14~TeV$} \\ \cline{2-5}
{}($\mathcal{L}$) &$M_{S}[TeV]$($3\sigma$)&$M_{S}[TeV]$($5\sigma$) &$M_{S}[TeV]$($3\sigma$)&$M_{S}[TeV]$($5\sigma$) \\
\hline 100 ${fb}^{-1}$ &3.83 &3.50 &5.69 &5.46 \\
\hline 200 ${fb}^{-1}$ &3.98 &3.74 &5.79 &5.62 \\
\hline 300 ${fb}^{-1}$ &4.17 &3.85 &5.83 &5.70 \\
\hline 36 ${pb}^{-1}$ &1.80 &1.68 &2.03 &1.89 \\
\hline
\end{tabular}
\caption{ \label{tab3} The discovery ($\Delta\sigma_{tot} \geq
5\sigma$) and exclusion ($\Delta\sigma_{tot} \leq 3\sigma$) LED
model fundamental scale ($M_S$) values for the \ppww process at the
early ($\sqrt{s}=7~TeV$) and future ($\sqrt{s}=14~TeV$) LHC. }
\end{center}
\end{table}
\par
Now we consider the subsequent leptonic (electron, muon) decay of
one of the two $W$-bosons. In collecting the \ppwlv ($\ell = e,
\mu$) events we do not distinguish the leptonic charge. We fix the
branching fraction for $W$-boson decay ($W^{\mp} \to \ell^{\mp}
\stackrel{(-)}{\nu},~\ell = e, \mu$) as $21.32\%$ \cite{18},
$\mathcal{L}=300~fb^{-1}$, and take the number of the extra
dimensions $\delta = 3$, the constraints of $M_{WW}>400~GeV$,
$p_{T}^{l} > p_{T,l}^{cut}=100~GeV$, and the jet event selection
criterion as declared above. We show the discovery and exclusion
regions in the $\mathcal{L}-M_{S}$ space for the processes \ppwlv
($\ell = e, \mu$) in Figs.\ref{fig11}(a) and (b) for the
$\sqrt{s}=7~TeV$ and $\sqrt{s}=14~TeV$ LHC, respectively. The dark
and gray regions represent the parameter space where the LED effect
can and cannot be observed, respectively. Some representative data for
the discovery and exclusion fundamental scale $M_S$ values at the
early ($\sqrt{s}=7~TeV$) and future ($\sqrt{s}=14~TeV$) LHC read out
from Figs.\ref{fig11}(a,b) are presented in Table \ref{tab4}.
We can see from the table that by using \ppwlv ($\ell = e, \mu$)
processes with the constraints of $M_{WW}>400~GeV$, $p_{T}^{l} >
p_{T,l}^{cut}=100~GeV$, and our chosen jet event selection
criterion, we could set a lower exclusion limit on $M_S$ of $2.19~TeV$
at the $95\%$ confidence level at the $\sqrt{s} = 7~TeV$ LHC with an
integrated luminosity of $36~pb^{-1}$, which is larger than that
obtained by analyzing the $W$-pair production events as described above.
\begin{figure}[htbp]
\includegraphics[width=3.2in,height=3in]{fig11a.EPS}
\hspace{0in}
\includegraphics[width=3.2in,height=3in]{fig11b.EPS}
\hspace{0in}
\caption{\label{fig11} The LED effect discovery area (dark) and
exclusion area (gray) in the $\mathcal{L}-M_{S}$ space for the
\ppwlv ($\ell = e, \mu$) processes with the constraints of
$M_{WW}>400~GeV$, $p_{T}^{l} > p_{T,l}^{cut}=100~GeV$, and the jet
event selection criterion declared above. (a) At the
$\sqrt{s}=7~TeV$ LHC. (b) At the $\sqrt{s}=14~TeV$ LHC. }
\end{figure}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline {}Luminosity$(\mathcal{L})$ &\multicolumn{2}{c|}{$\sqrt{s}=7TeV$}
&\multicolumn{2}{c|}{$\sqrt{s}=14TeV$} \\ \cline{2-5}
{}($\mathcal{L}$) &$M_{S}[TeV]$($3\sigma$)&$M_{S}[TeV]$($5\sigma$) &$M_{S}[TeV]$($3\sigma$)&$M_{S}[TeV]$($5\sigma$) \\
\hline 100 ${fb}^{-1}$ &4.42 &3.95 &6.69 &6.41 \\
\hline 200 ${fb}^{-1}$ &4.65 &4.29 &6.82 &6.62 \\
\hline 300 ${fb}^{-1}$ &4.74 &4.45 &6.87 &6.71 \\
\hline 36 ${pb}^{-1}$ &2.19 &1.96 &2.98 &2.80 \\
\hline
\end{tabular}
\caption{ \label{tab4} The discovery ($\Delta\sigma_{tot} \geq
5\sigma$) and exclusion ($\Delta\sigma_{tot} \leq 3\sigma$)
fundamental scale ($M_{S}$) values in the $\mathcal{L}-M_{S}$ space
for the \ppwlv ($\ell = e, \mu$) processes with the constraints of
$M_{WW}>400~GeV$ and $p_{T}^{l} > p_{T,l}^{cut}=100~GeV$ at the $\sqrt{s}=7~TeV$
LHC and at the $\sqrt{s}=14~TeV$ LHC. }
\end{center}
\end{table}
\par
We depict the LED discovery and exclusion regions in the
$M_{S}-p_{T,l}^{cut}$ space for the processes \ppwlv ($\ell = e,
\mu$) in Figs.\ref{fig12}(a) and (b) with $\delta = 3$,
$\mathcal{L}=300~fb^{-1}$, $M_{WW}>400~GeV$, and the branching
fraction for $W$-boson decays ($W^{\mp} \to \ell^{\mp}
\stackrel{(-)}{\nu},~\ell = e, \mu$) as $21.32\%$, where
Fig.\ref{fig12}(a) and Fig.\ref{fig12}(b) are for the
$\sqrt{s}=7~TeV$ and $\sqrt{s}=14~TeV$ LHC, respectively. The dark
and gray regions represent the parameter regions where the LED effect
can and cannot be observed, respectively, with the constraints of
$p_{T,l} > p_{T,l}^{cut}$ and the $W$-pair invariant mass $M_{WW}>
400~GeV$. Some representative data are listed in Table \ref{tab5} for the discovery and exclusion
fundamental scale $M_S$ values with different $p_{T,l}^{cut}$ values
at the $\sqrt{s}=7~TeV$ and $\sqrt{s}=14~TeV$ LHC as shown in
Figs.\ref{fig12}(a,b). We can see
that if we fix the integrated luminosity (e.g.
$\mathcal{L}=300~fb^{-1}$), we could slightly improve the lower limit
on $M_S$ by adopting a larger cut on the lepton transverse momentum
($p_{T,l}^{cut}$).
\begin{figure}[htbp]
\includegraphics[width=3.2in,height=3in]{fig12a.EPS}
\hspace{0in}
\includegraphics[width=3.2in,height=3in]{fig12b.EPS}
\hspace{0in}
\caption{\label{fig12} The LED effect discovery area (dark) and
exclusion area (gray) in the $M_{S}-p_{T,l}^{cut}$ space for the
\ppwlv ($\ell = e, \mu$) processes with $\delta = 3$ and
$\mathcal{L}=300~fb^{-1}$. (a) At the $\sqrt{s}=7~TeV$ LHC. (b) At
the $\sqrt{s}=14~TeV$ LHC. }
\end{figure}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
{}$p_T^l$ cut value &\multicolumn{2}{c|}{$7~TeV$} &\multicolumn{2}{c|}{$14~TeV$} \\ \cline{2-5}
{}($p_{T,l}^{cut}$) &$M_{S}[TeV]$($3\sigma$)&$M_{S}[TeV]$($5\sigma$) &$M_{S}[TeV]$($3\sigma$)&$M_{S}[TeV]$($5\sigma$) \\
\hline 50 GeV &4.61 &4.24 &6.67 &6.60 \\
\hline 100 GeV &4.74 &4.45 &6.87 &6.71 \\
\hline 150 GeV &4.92 &4.61 &6.92 &6.80 \\
\hline
\end{tabular}
\caption{ \label{tab5} The discovery ($\Delta\sigma_{tot} \geq
5\sigma$) and exclusion ($\Delta\sigma_{tot} \leq 3\sigma$) LED
model fundamental scale ($M_{S}$) values in the
$M_{S}-p_{T,l}^{cut}$ space for the \ppwlv ($\ell = e, \mu$)
processes with the constraints of $M_{WW}>400~GeV$ and $p_{T}^{l} >
p_{T,l}^{cut}$. (a) At the $\sqrt{s}=7~TeV$ LHC. (b) At the
$\sqrt{s}=14~TeV$ LHC. }
\end{center}
\end{table}
\vskip 5mm
\section{Summary}
\par
We calculate the NLO QCD corrections to the $pp \to W^+W^- \to
W^{\pm}l^{\mp}\stackrel{(-)}{\nu} + X$ process in the SM and LED
model at the LHC. We investigate the integrated cross sections, the
distributions of some kinematic variables and how they are affected
by radiative corrections. The calculations are compared with
previous works, and reliable numerical results are
obtained. We find that the NLO QCD corrected results do not show
remarkable reduction of the scale uncertainties of the LO cross
sections in both the SM and LED model, because the uncertainty of
the LO cross section is underestimated. The scale-dependent
$K$-factor is found to vary from $1.18$ ($1.53$) to $1.19$
($1.11$) when $\mu$ goes from $0.5\mu_0$ to $2\mu_0$ at the
$\sqrt{s}=7~TeV$ ($\sqrt{s}=14~TeV$) LHC, with the constraints of
$M_{WW}>400~GeV$ and our jet event selection criterion. The
$5\sigma$ discovery and $3\sigma$ exclusion ranges for the LED
parameters $M_{S}$ are also obtained in the NLO QCD. The inclusion
of the effects of the virtual KK graviton turns out to enhance the
differential distributions of kinematical observables generally. We
conclude that the NLO QCD correction to $W$-pair production
makes it possible to precisely test $TeV$-scale quantum gravity in
the LED scenario at the LHC.
\vskip 5mm
\par
\noindent{\large\bf Acknowledgments:} This work was supported in
part by the National Natural Science Foundation of China
(No.10875112, No.11075150, No.11005101), and the Specialized
Research Fund for the Doctoral Program of Higher
Education (No.20093402110030).
\vskip 10mm
H.
Weyl, by the use of scale transformations,
attempted to unify gravitation and electro-magnetism
within the framework of general relativity
early this century
\cite{Weyl} and in so doing initiated
the `gauge revolution' in
physics.
Like Einstein and many others
who followed, Weyl recognised that the
two forces which propagate at the speed of
light must be intimately connected.
Today we understand that
there are profound similarities between
electromagnetism and gravitation in spite of
the obvious differences; both are gauge
field theories, both are mediated by
massless bosons (if we accept the
reality of gravitons)
and both manifest as
waves in the vacuum in classical
(non-quantum) theory. However at present
these two
forces are described by very different
physical principles; in the case of gravitation
by a metric theory of general relativity
which relates gravity to space-time structure
whilst in the classical (and quantum) field
theory of
electro-magnetism space-time geometry is only in
the background. Thus whilst the standard descriptions
are not contradictory they are also not
cohesive; intuitively we feel that two such
similar forces should be based on similar
physical principles. Indeed, the general idea
that nature, at its most fundamental level
of structure, should be simple would seem
to require a single set of physical principles
to underpin these two forces lest nature be
required to `reinvent' itself to create two
forces based on two quite different foundations.
Thus few would dispute the need for a unified
theory but the dichotomy persists despite
nearly a century of work. The discovery of
the weak and strong interactions has
further complicated the picture; the
successful $SU3_{(C)}{\times}SU2_{(L)}{\times}U_1$ description
resists the incorporation of the
(nonrenormalisable) Einstein Lagrangian.
Recently deAndrade and Pereira \cite{A&P}
have pointed out that, in addition to the
known result that General Relativity
can be recast into an equivalent gauge
theory of the translation group due to teleparallel
geometry \cite{CM}\cite{YM} (leading to `dual' descriptions of
gravitation - one describing gravitation as
propagating space-time curvature and the
other `teleparallel' description describing
gravitation as propagating torsion), electromagnetism
additionally can have such a dual description
and that the gauge invariance of the theory
is in fact NOT violated by the coupling to
torsion. This is in contradistinction to the
usual wisdom which precludes torsion coupling to
Proca's equation for $m=0$ \cite{four} so
that theories of torsion in electro-magnetism
usually imply photon mass. More will be said
about this apparent conflict later - and solutions
proposed - but
consider the following. If it is possible to
have `dual' descriptions of gravitation might
it be possible to have `dual' descriptions of
electro-magnetism which in some way `complement'
gravity theory?
Consider the motivation for this proposition
in a different way. The Coleman-Mandula (C-M or no-go
theorem) is the rock which bars the way for
unification. This theorem forbids the (non-trivial)
union of compact groups (such as U1) and non-compact
groups (such as the Poincar\'{e} group or the
Lorentz group). However, operators
which interconvert bosons and fermions bypass the
theorem; this is the underlying motivation for
supersymmetry. This theory however requires
a whole menagerie of superpartner particles
for which there is currently no empirical evidence. Whilst
it is thus assumed that the superpartners are
more massive than currently accessible energies,
the situation is somewhat unsatisfactory. An alternative
is highly desirable.
Part of
the motivation
for this paper is an attempt to avoid the
C-M theorem by creating `dual' and
complementary descriptions of
electromagnetism in a single metric theory.
Roughly speaking what is formed is
a teleparallel version of electromagnetism
with zero non-metricity
(curvature=0, torsion$\neq0$ for the free-field)
but with substantial differences from
previous attempts. Chief among these is the
attempt to mirror supersymmetry by
the creation of a spinorial representation
for bosonic torsion. Normally we interpret a
metric as defining distances in space-time for
an observer. Any component of a metric
which yields a $ds^2=0$ contribution does not add
to such a length; i.e. it does not define a measurement
in an observer frame as such.
Similarly, given the equation of a geodesic
or autoparallel line,
\begin{equation}
{{d^2x^{\alpha}}\over{ds^2}}
+
\Gamma^{\alpha}_{\beta\gamma}
{{dx^{\beta}}\over{ds}}
{{dx^{\gamma}}\over{ds}}
=0
\end{equation}
it is clear that
the torsion tensor
$\Gamma^{\alpha}_{[\beta\gamma]}
={1\over2}
(\Gamma^{\alpha}_{\beta\gamma}
-\Gamma^{\alpha}_{\gamma\beta})
$
does not contribute since the differentials
commute. This leads to the interpretation
of torsion as a non-propagating spin-contact
interaction (see for example \cite{FP}).
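Explicitly, splitting the connection into its symmetric and antisymmetric parts,
\begin{equation}
\Gamma^{\alpha}_{\beta\gamma}
{{dx^{\beta}}\over{ds}}
{{dx^{\gamma}}\over{ds}}
=\left(\Gamma^{\alpha}_{(\beta\gamma)}
+\Gamma^{\alpha}_{[\beta\gamma]}\right)
{{dx^{\beta}}\over{ds}}
{{dx^{\gamma}}\over{ds}}
=\Gamma^{\alpha}_{(\beta\gamma)}
{{dx^{\beta}}\over{ds}}
{{dx^{\gamma}}\over{ds}},
\end{equation}
since the antisymmetric part vanishes when contracted with the symmetric product of the velocities.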
It will be shown below however that a different
interpretation is possible consistent with the
findings of deAndrade and Pereira.
A $ds^2=0$ component
is `on the light cone';
this will be used as a vacant mathematical slot into
which is plugged a spinorial
description of electro-magnetism.
This exploits the epicentre of the C-M
theorem; the Lorentz group (and effectively
also the Poincar\'{e} since all distances
are zero for an `observer' on a wave of light) is non-compact
precisely because the speed of light
is not in any observer's frame. The analogue of
a supersymmetry transformation then becomes the
interconversion of two mathematical
descriptions of one force; one spinorial and
one bosonic which is called in the text a
`translational' procedure. This `translation'
is the weakest link in the theoretical construction
but some concrete mathematical support
for its consistency is supplied by studying the construction
of stress-energy tensors from Lagrangians
with anti-symmetric metrics
in section VII. With this translation procedure
the superpartner of the photon becomes - the
photon!; but this superpartner is only detectable
to an observer who is travelling at the speed of
light so it never appears in experiments!
What are the problems? Different people
might give different answers to this
question but the following are a selection of
the main obstacles to four-dimensional
geometric unification of electro-magnetism
and gravitation in the framework of
general relativity;
1. For a dimensionless metric the
scalar curvature R has dimension $l^{-2}$
so that in four dimensions ${\cal{L}}=kR$
requires constant k to have dimension
$l^{-2}$ and so the theory is non-renormalisable.
2. How can the electro-magnetic vector
potential $A_{\mu}$ be placed in the
tensor $g_{\mu\nu}$ without spoiling
its tensor character or destroying
General Relativity; i.e. if we require the
strong equivalence principle
$g_{\mu\nu\;;\;\phi}=0$ to hold. Note
that this is related to the first problem
because $A_{\mu}$ is dimensional with
dimension $l^{-1}$.
3. How can the free-field stress-energy tensor
be extracted from the metric? From the
Lagrangian?
4. How can source terms be included in
the metric? Again these must not
spoil the qualities of the metric
or destroy gravitation theory.
5. How can we couple the stress-energy
tensors for gravitation and electro-magnetism
into one equation relating to curvature?
6. Does the theory have scope for
generalisation to the electro-weak
interaction?
7. And what about quantisation?
8. The Coleman-Mandula theorem.
The paper is organised as follows;
firstly there is a brief review of
the history - particularly
regarding homothetic
curvature (\`{a} la Eddington)
with which many readers may not be
familiar. A metric is then defined and its
consistency proven. It contains
both a symmetric and an anti-symmetric
part. A connection is then
defined on the basis of the vanishing of the
covariant derivative of the metric
(vanishing non-metricity).
The stress-energy tensor is then extracted
by expanding the (homothetic) curvature tensor in the
form of an Einstein equation.
The metric is then redefined to accommodate
source terms and the source-stress-energy
tensor formed. Lastly the metric
is applied to the Lagrangian formalism.
Due to constraints of space
the prospects for electro-weak-gravitation
unification and explicit interaction terms
are not
discussed. The notation used throughout
is perhaps somewhat traditional as
the particular mathematical structure explored
does not lend itself ideally to the
notation of differential forms (e.g.
the use of notation with $\omega$ connection one-forms,
exterior derivative, exterior product etc; see
for example Trautman \cite{T}) because every
index must be carefully tracked for anti-commutativity.
The notation used is consistently applied
and certainly familiar to anyone accustomed
to the standard texts on General Relativity.
Part of the work is an extension of ideas
presented previously
\cite{GF}.
There is an extensive literature in this
field but the ideas proposed in this paper
are quite different from any previously published
work to the best of my knowledge.
\section{Historical background}
It is instructive to
consider the original efforts since
the principles uncovered by the pioneers
in the field underpin all efforts that
followed.
Theoretical efforts to form a unified
description of gravity and electro-magnetism
in the classical framework date from
early this century beginning particularly
with the work of N\"{o}rdstrom, Weyl, Eddington,
Einstein and
Cartan. An excellent review is found
in reference \cite{russian}.
Weyl's scheme \cite{Weyl} revolved around
scale-transformations (gauge transformations).
This work failed to provide a viable
unified theory of gravitation and
electro-magnetism, which was Weyl's original intent,
but subsequently proved very fruitful in other
areas; Weyl is truly the father of the
modern approach of gauge field theory.
In essence Weyl's idea was to extend the
geometric foundations of Riemannian
geometry by allowing for scale transformations
to vectors with parallel transport. This
approach was criticised, particularly by
Einstein, as being incompatible with
observation; in particular it means
that the physical properties of
measuring rods and clocks
depend upon their history. For example,
two identical clocks, initially synchronised
to run at the same rate in the same inertial
frame, will no longer do so if they are brought
together at a later time into the same
inertial frame having travelled through
different paths in space-time according to
Weyl's scheme.
Subsequently
Eddington \cite{Eddington} attempted to extend
Weyl's theory. In Eddington's theory,
as in Weyl's,
the connection forms the basic geometric
object and is related to the electromagnetic
potential $A_\beta$;
\begin{equation}
\Gamma^{\alpha}_{\alpha\beta}
=
\Gamma^{\alpha}_{\beta\alpha}
=\lambda.A_{\beta}
\end{equation}
for a dimensionless constant $\lambda$.
Eddington then proceeds to form the anti-symmetric
electro-magnetic tensor $F_{\beta\gamma}$
via the homothetic curvature;
\begin{eqnarray}
R^\alpha_{\alpha\beta\gamma}
&=&
\partial
_{\beta}\,\Gamma^{\alpha}_{\alpha\gamma}
-\partial_{\gamma}\,\Gamma^\alpha_{
\alpha\beta}
-\Gamma^{\alpha}_{\phi\beta}\,\Gamma^
{\phi}_{\alpha\gamma}+
\Gamma^{\alpha}_{\phi\gamma}\,
\Gamma^{\phi}_{\alpha\beta}
\nonumber\\
&=&
F_{\beta\gamma}
\label{dotty}
\end{eqnarray}
where the two product terms in (\ref{dotty})
have been equated to zero, as is usually done in general
relativity. In order to overcome the
criticism of Weyl's theory with regard to
measuring rods and clocks Eddington imposes
an assumed metric condition on the curvature tensor
(Eddington's `natural gauge');
\[
{\phi}g^{\alpha\beta}
=
R^{\alpha\beta}
\]
where $\phi$ is a constant
of dimension $l^{-2}$. This constraint
effectively `fine-tunes' the metric to the
space-time curvature in an attempt to
avoid the problem of measurement
associated with the Weyl
non-metric geometry.
A number of problems emerged
with the Eddington approach.
In particular these involve the number of
unknowns in the differential equations
resulting from the use of
the connection as the main geometric
element (about 40), and the higher-order
derivative terms which arise in the
theory.
More generally, we can see an inconsistency with
general relativity because the curvature
tensor is equated via Einstein's
equation to the stress-energy tensor in G.R.;
the corresponding object in electro-magnetism
is quadratic in $F_{\mu\nu}$, not first-order
in it. In fact it is
the E-M stress-energy tensor which
should appear on the R.H.S. of
Einstein's equation contributing, at the
very least, to the gravitational
potential as it is a source of
mass-energy equivalence. Also in the Weyl/Eddington theory
the potential is identified with the connection;
in G.R. it is identified with the metric.
More recent studies using the Weyl/Eddington
approach are found in refs \cite{WE}.
Cartan appears to have been the first
person to explore the possibility of theories
involving torsion in the context of general
covariance and classified possible theories
on the basis of
affine vs metric,
(affine theories, such as Weyl's, are
`non-metric'), the presence or
absence of rotation curvature
(defined as present if
$R^{\alpha}_{\gamma\beta\alpha}
=R^{\alpha}_{\beta\gamma\alpha}\;
{\neq}\;0$), the presence or
absence of homothetic curvature
(
present if $R^{\alpha}_{\alpha\beta\gamma}
=-R^{\alpha}_{\alpha\gamma\beta}\;
{\neq}\;0$) and the presence or
absence of torsion
(present if $\Gamma^{\alpha}_{\beta\gamma}$
=-\Gamma^{\alpha}_{\gamma\beta}\;
{\neq}\;0$).
Riemann-Cartan geometries
have non-vanishing torsion.
Many R-C geometries involve adding an
anti-symmetric piece to the
metric which is in some way related
to the Maxwell tensor $F_{\alpha\beta}$;
\[
g_{\alpha\beta}= \eta_{\alpha\beta}
+h_{\alpha\beta}+{\lambda}F_{\alpha\beta}
\]
where $h_{\alpha\beta}$ is the gravitational
potential in the weak field
approximation. It is now understood
that torsion is related to translations
\cite{Hehl}
(torsion `breaks' parallelograms -
it is also related to the theory of
crystal dislocations \cite{Bilby})
whilst rotation curvature is related to
rotations. This situation is somewhat
paradoxical (as has been noted) since
the (non-compact) Poincar\'{e} group,
the `gauge' group for gravitation,
is the group of translations.
In the treatment given here we will see
the reverse by embedding
electro-magnetism (a U1 or compact
rotation group symmetry) in torsion;
which has the geometry
of translations not rotations!
An attempt to give a geometric interpretation
to these apparent contradictions will be given
in the discussion section when all the
geometric machinery needed has been developed.
At about the same time as Eddington
published his theory Kaluza published his
five-dimensional version of
gravitational-electromagnetic unification
\cite{KK}.
The fifth dimension in the Kaluza-Klein theory
is a periodic space which spontaneously
compactifies. (More contemporary versions of
the Kaluza-Klein geometry attempt to extend
the compactified space to higher dimensions
to accommodate the $SU(3){\times}SU(2){\times}U(1)$
standard model; see ref. \cite{KK} for
examples. See also \cite{F.W.}). The Kaluza-Klein theory
is appealing for a number of basic reasons.
Often overlooked but of basic importance is the fact
that the metric in the theory parallels general
relativity by containing the potential of the
theory; most alternative attempts at unification
have attempted to site the vector potential $A_\mu$
in the connection.
However, the Kaluza-Klein theory remains a five-dimensional
theory and ideally we would like a four-dimensional
theory;
the universe
is not observed to be anything other than
four dimensional so we have no empirical
evidence for the extra dimensions.
In addition to the Kaluza-Klein theory there are
numerous theories
identifying an anti-symmetric component of the
metric with the Maxwell tensor $F^{\mu\nu}$
such as Einstein's unified field theory
\cite{UFT} and later contributions to $U_4$
theory (with torsion) development from
Sciama \cite{3} and Kibble \cite{4}
and others. More
contemporary approaches
include 3D Riemann-Cartan geometry
with Yang-Mills fields (\cite{Mielke}
\cite{Rad} and contained references).
Lunev \cite{Lunev} develops an
Einstein equation for Yang-Mills
fields but the approach used differs
from the one employed in this paper
by placing the potential in the
connection.
Garcia deAndrade and Hammond \cite{Ham}
employ the Eddington approach of equating
homothetic curvature and the Maxwell
tensor and interpret massive torsion
quanta as massive photons.
More recently Unzicker\cite{U} has
explored teleparallel geometry and
electromagnetism although the latter
approach is very different from the one presented
here.
Attempts to embed a description of
electro-magnetism in general covariant
theory have difficulty because, unlike gravity
which can be `transformed away' {\it locally}
in a free-fall frame, the electro-magnetic
field cannot be `transformed away' by a Lorentz
transformation. There appears however to be
a loophole that can be exploited here without
explicitly breaking Lorentz invariance; that of
working on the light-cone itself - in a non-observer
frame. What this means will be discussed below.
It has been pointed out
\cite{four} that
gauge freedom in Proca's equation for $m=0$
precludes torsion in electro-magnetism (but not
in the case $m\neq0$ for a spin-1 field). Thus
theories with torsion in electro-magnetism
frequently imply photon mass \cite{FP}. However,
in the derivation presented it will be shown that
a constraint emerges from the connection which
permits the description of torsion in
electro-magnetism with massless photons;
possibly providing an explanation for the
apparent conflict between the results of Hehl et al.
and deAndrade et al. alluded to above.
\section{Metric}
Consider the following metric;
\begin{equation}
\left({\begin{array}{cccc}
+I_4&{i\over2}\sigma_{01}&{i\over2}\sigma_{02}&
{i\over2}\sigma_{03}\\{i\over2}\sigma_{10}&-I_4&
{i\over2}\sigma_{12}&{i\over2}\sigma_{13}\\
{i\over2}\sigma_{20}&{i\over2}\sigma_{21}&-I_4&
{i\over2}\sigma_{23}\\{i\over2}\sigma_{30}&
{i\over2}\sigma_{31}&{i\over2}\sigma_{32}&-I_4
\end{array}}\right)
=
I_4.\eta_{\alpha\beta}+
{i\over2}\sigma_{\alpha\beta}
\label{sigmametric}
\end{equation}
\\
where $
\sigma^{\alpha\beta}=
{i\over2}\left[\gamma^{\alpha},\gamma^{\beta}
\right]$
and introducing the notation $
\sigma^{'\alpha\beta}
={1\over2}\sigma^{\alpha\beta}
$ we have (noting that in the sum
$\sigma_{\alpha\phi}^{'}\sigma^{'\phi}
_{\;\;\;\beta}$ the index $\phi$ can take only two
values, since $\alpha$ and $\beta$ take different values);
\begin{eqnarray}
g_{\alpha\phi}\bar{g}^\phi_{\,\,\,\beta}
&=&\left(I_4.\eta_{\alpha\phi}
+i\sigma^{'}_{\alpha\phi}\right)\left(
I_4.\eta^\phi_{\,\,\,\beta}-i\sigma^{'\phi}
_{\,\,\,\,\beta}\right)\nonumber\\&=&
\left(I_4.\eta_{\alpha\beta}
+i\sigma^{'}_{\alpha\beta}\right)
=g_{\alpha\beta}
\label{one}
\end{eqnarray}
where $\bar{g}$ is the complex-conjugate
transpose ($\dagger$)
or `dual' metric
viz;
\begin{equation}
\left(\sigma^{'}_{\alpha\beta}
\right)^\dagger=
{-i\over4}\left[\gamma_\alpha,
\gamma_\beta\right]^\dagger=
\sigma^{'\alpha\beta}
\label{phase}
\end{equation}
so that
\begin{equation}
\left
(g_{\alpha\beta}\right)^\dagger
=\bar{g}^{\alpha\beta}=\left(I_4.\eta^
{\alpha\beta}-i\sigma^{'\alpha
\beta}\right)
=g^{\beta\alpha}
\label{inverse}
\end{equation}
Note that;
\begin{equation}
g_{\alpha\phi}\;g_{\beta}^{\;\;\;\phi}
=g_{\alpha\beta}
\label{consistencyconstraint}
\end{equation}
is exceedingly constraining. The general solution
for the Lorentz tangent space metric (+,-,-,-)
includes an anti-symmetric component.
Now the Dirac gamma matrices do not transform as
four-vectors. However, the sigma matrices formed
from them transform as tensors;
\begin{equation}
{i\over2}\sigma_{\alpha\beta}
=
{-1\over4}[\gamma_{\alpha},\gamma_{\beta}]_{-}
\label{commutator}
\end{equation}
This is the antisymmetric
version of the fundamental tensor, whose symmetric
version is defined as;
\begin{equation}
\eta_{\alpha\beta}=
{1\over2}\{\gamma_{\alpha},\gamma_{\beta}\}_{+}
\label{anticommuator}
\end{equation}
This dictates the solution space for
(\ref{consistencyconstraint}),
for which metric (\ref{sigmametric}) and its
dual are the general
solution.
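As a check of (\ref{one}) (a worked step which is my addition,
using only the Clifford algebra
$\{\gamma_{\alpha},\gamma_{\beta}\}_{+}=2\eta_{\alpha\beta}$):
for $\alpha\neq\beta$ one has
$\sigma^{'}_{\alpha\beta}={i\over2}\gamma_{\alpha}\gamma_{\beta}$,
and restricting $\phi$ to the two values distinct
from $\alpha$ and $\beta$;
\begin{eqnarray}
\sigma^{'}_{\alpha\phi}\,\sigma^{'\phi}_{\;\;\;\beta}
&=&\sum_{\phi\neq\alpha,\beta}
\left({i\over2}\gamma_{\alpha}\gamma_{\phi}\right)
\left({i\over2}\gamma^{\phi}\gamma_{\beta}\right)
\nonumber\\
&=&-{1\over4}\gamma_{\alpha}\left(2I_4\right)\gamma_{\beta}
=i\,\sigma^{'}_{\alpha\beta}
\nonumber
\end{eqnarray}
since $\gamma_{\phi}\gamma^{\phi}=I_4$ for each fixed $\phi$.
The two cross terms ${\pm}i\sigma^{'}_{\alpha\beta}$ in the
expansion of (\ref{one}) then cancel, leaving exactly
$g_{\alpha\beta}$.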
Of course the anti-symmetric piece does not span
the space S.O.(3,1) at all but is confined to
the end point of boosts - i.e. $ds^2=0$ - {\it{which
is not in the group}}. The S.O.(3,1) group is
non-compact precisely because the velocity of light,
the end-point of boosts, is not in the group. What metric
(\ref{sigmametric}) does is insert a new part of the metric
onto the light cone. The transformations associated
with using the sigma matrix as the fundamental
tensor constitute a new group confined to be
on the light cone itself. Let us call this group C.O.(3,1).
We will study its properties later but we note that;
\[
\mbox{S.O.}(3,1)\;\cup\;{\mbox{C.O.}}(3,1)
\]
will be (trivially) topologically compact.
Of all possible anti-symmetric pieces which may be
added to the metric $\eta_{\alpha\beta}$
only one possible term satisfies both
the consistency equation (eq.(\ref{consistencyconstraint}))
and Lorentz covariance
and it is, not surprisingly, the fundamental tensor formed
by substituting the commutator for the anticommutator
of the gamma matrices. There is a profound underlying
symmetry in this which I do not fully understand. It
should be apparent, however, that the metric
(\ref{sigmametric}) is
highly non-trivial and unique.
{\bf From now on I will
drop the prime on the sigma matrices
it being understood that {\it all}
subsequent expressions
containing sigma matrices are of
the primed form (i.e. the
factor of $1\over2$ is absorbed
into the definition).} The off-diagonal elements
of this metric are 4x4 matrices, so
the full metric is 16x16. The diagonal
entries are 4x4 identities, and each space-time
index can be multiplied into
a 4x4 identity to couple to the
metric. Thus although the
metric is now a 16x16 matrix
we still only need four
parameters to describe the space
which is, physically and mathematically,
still
therefore four dimensional; as
implied by the R.H.S. of (\ref{sigmametric})
where the Greek indices range over 4 values.
Normally we expect the antisymmetric
$\sigma$ matrices to contract
against spinors - but here they
will be contracted against
commuting co-ordinates.
Apart from a brief comment
at the end of the paper I will not deal
further with the issue of co-ordinates.
The theory is constructed in a co-ordinate
independent fashion.
Contraction with the dual metric
produces the scalar identity (omitting
the implicit matrix multiplier of $I_{4}$);
\[
g_{\alpha\beta}\,\bar{g}^{\beta\alpha}
=+4\,-3=+1
\]
although the metric is not invertible as
a Kronecker delta.
Consequently care is required
in raising and lowering indices (see below).
The apparent lack of invertibility of the
metric causes no problem; the
different parts of the metric label different
fields, and indices for each field are
appropriately raised and lowered
with each field's respective metric, with
cross terms generating interactions. Thus when
the physical content of the theory is inserted
the metric is well behaved.
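These algebraic properties can also be checked numerically;
the following minimal sketch (my addition; it assumes the
standard Dirac representation of the gamma matrices, and the
variable names are illustrative only) verifies both
(\ref{one}) for $\alpha\neq\beta$ and the scalar contraction
above;
\begin{verbatim}
import numpy as np

I2, I4 = np.eye(2), np.eye(4)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2))
# gamma^mu in the standard Dirac representation
g_up = [np.block([[I2, Z], [Z, -I2]])] \
     + [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]
eta = np.diag([1., -1., -1., -1.])
g_dn = [eta[m, m] * g_up[m] for m in range(4)]   # gamma_mu
# sigma_{ab} = (i/4)[gamma_a, gamma_b] (factor 1/2 absorbed)
sig = [[0.25j * (g_dn[a] @ g_dn[b] - g_dn[b] @ g_dn[a])
        for b in range(4)] for a in range(4)]
g = [[eta[a, b] * I4 + 1j * sig[a][b]
      for b in range(4)] for a in range(4)]
# dual with one index raised: delta^p_b I4 - i eta^{pp} sigma_{pb}
gb = [[(p == b) * I4 - 1j * eta[p, p] * sig[p][b]
       for b in range(4)] for p in range(4)]
for a in range(4):
    for b in range(4):
        if a != b:   # consistency holds off the diagonal
            prod = sum(g[a][p] @ gb[p][b] for p in range(4))
            assert np.allclose(prod, g[a][b])
# scalar contraction g_{ab} gbar^{ba} = (+4 - 3) I4
gbar = [[eta[a, b] * I4
         - 1j * eta[a, a] * eta[b, b] * sig[a][b]
         for b in range(4)] for a in range(4)]
tot = sum(g[a][b] @ gbar[b][a]
          for a in range(4) for b in range(4))
assert np.allclose(tot, I4)
\end{verbatim}
Both assertions pass; the explicit $\alpha\neq\beta$
restriction matches the remark following
(\ref{sigmametric}) that $\alpha$ and $\beta$ take
different values.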
Note that Lorentz scalars
such as $P^2=m^2$ and $ds^2$
are still invariant under this metric. This is
important because we wish to construct a theory
which preserves physical measurements (one of the
main criticisms of the Weyl approach was that
it did not preserve physical measurement
invariants in different regions of space
\cite{russian}).
To obtain a
dynamical theory we will require the derivatives
of the off-diagonal anti-symmetric
part of metric (\ref{sigmametric})
to be non-vanishing. To facilitate this
we introduce a parameter $|P|$,
with non-vanishing space-time derivatives
and modulus unity,
and incorporate $|P|$ into the
sigma matrices;
\begin{equation}
\sigma_{\alpha\beta}\equiv|P|
\sigma_{\alpha\beta}
\label{|P|}
\end{equation}
We will take $|P|$ as a one-parameter group
$|P|=e^{{\pm}ik{\cdot}x}$
where $k^{\mu}$ is the photon four-momentum and
$x_{\mu}$ the
space-time four-vector in units $\hbar=c=1$.
We will see below that consistency of the
metric can be maintained with this
added phase-factor.
\section{Connection coefficients}
We will require the vanishing of
the covariant
derivative of the metric;
\begin{equation}
g_{\alpha\beta;\gamma}=\left
(I_4.\eta_{\alpha\beta}+
i|P|\,\sigma_{\alpha\beta}\right)_
{;\gamma}=0.
\label{metric}
\end{equation}
We consider only a free-fall frame
in which the
derivatives of the diagonal elements
of the metric
vanish; the derivatives of off-diagonal
elements
however will
be non-vanishing in this
frame (as we shall see this applies
when there is an
electro-magnetic field present). Now
\begin{equation}
\left(
|P|\sigma_{\alpha\phi}
\right)_{,\gamma}\,|P|\,\sigma^
{\phi}_{\;\beta}
\approx
i{|P|}_{,\gamma}
\sigma_{\alpha\beta}
=i\left({|P|}
\sigma_{\alpha\beta}
\right)_{,\,\gamma}
\label{P}
\end{equation}
($\approx$ here
means equal
up to a (local) phase factor).
From now on the parameter
$|P|$ will be absorbed
into the definition of the
sigma matrices
($|P|\sigma_{\alpha\beta}\equiv
\sigma_{\alpha\beta}$) in all expressions.
For both indices downstairs I will
use $e^{+ik{\cdot}x}$ and for the dual
with both indices upstairs the
$e^{-ik{\cdot}x}$ so that
\[
(g_{\alpha\beta})^{\dagger}=\bar{g}^{\alpha\beta}
\]
Raising or lowering a single index thus eliminates the
phase factor as a consequence.
Writing and defining the covariant
derivative of the asymmetric
part of the metric with
three different labellings;
\begin{equation}
g^A_{\alpha\beta;\gamma}=g^
A_{\alpha\beta,\gamma}-
\Gamma_{\gamma\alpha}^\phi
\,i\sigma_{\phi\beta}-
\Gamma_{\gamma\beta}^\phi\,i
\sigma_{\alpha\phi}
\label{alpha}
\end{equation}
\begin{equation}
g^A_{\gamma\beta;\alpha}=g^
A_{\gamma\beta,\alpha}-
\Gamma_{\alpha\gamma}^\phi
\,i\sigma_{\phi\beta}-
\Gamma_{\alpha\beta}^\phi
\,i\sigma_{\gamma\phi}
\label{beta}
\end{equation}
\begin{equation}
g^A_{\alpha\gamma;\beta}=g^
A_{\alpha\gamma,\beta}-
\Gamma_{\beta\alpha}^\phi\,i
\sigma_{\phi\gamma}-
\Gamma_{\beta\gamma}^\phi\,i
\sigma_{\alpha\phi}
\label{gamma}
\end{equation}
\begin{equation}
\Gamma_{\alpha\beta}^\phi={1\over2}
\left(g_{\alpha\epsilon,\beta}+g_
{\epsilon\beta,\alpha}
-g_{\alpha\beta,\epsilon}\right)
\bar{g}^{\epsilon\phi}
\label{zot}
\end{equation}
using (\ref{zot}) and raising indices
with the
anti-symmetric part
of (\ref{inverse}) (we have a choice in
this situation of raising indices either
with the symmetric part of the metric
{\it or}, the antisymmetric part or both. For the
free-field (no interactions) we require
only the anti-symmetric part of the
metric which means, because the derivatives of the
diagonal part vanish, we are effectively
working with a purely anti-symmetric metric in the
derivation), the combination
(\ref{gamma}) + (\ref{beta})
- (\ref{alpha}) gives;
\begin{eqnarray}
g_{\alpha\gamma\,,\,\beta}
&+&g_{\gamma\beta\,,\,\alpha}
-g_{\alpha\beta\,,\,\gamma}
\nonumber\\
&=&i\sigma_{\alpha\gamma\,,\,\beta}
+i\sigma_{\gamma\beta\,,\,\alpha}
-i\sigma_{\alpha\beta\,,\,\gamma}
\label{proof}
\end{eqnarray}
provided we define
contractions on the derivative index
{\it {from its right}} as;
\begin{equation}
\sigma_{\alpha\beta\,,}{}^{\phi}
\sigma_{\phi\gamma}=
+i\sigma_{\alpha\beta\,,\,\gamma}
\label{deriv}
\end{equation}
showing (\ref{zot}) is consistent.
Notice that the connection so defined
is antisymmetric in its lower
two indices; this
is a torsion connection. Also note that
although $\Gamma^{\phi}_{\alpha\,\beta}
=-\Gamma^{\phi}_{\beta\,\alpha}$
we cannot use this to interchange indices
and sum connection components; $\sigma_{\alpha
\,\beta\,,\,\epsilon}\neq-\sigma_{\epsilon\,\beta
\,,\,\alpha}$ for individual components.
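That antisymmetry can be seen directly (a short check,
my addition): for the purely anti-symmetric part of the
metric, $g^{A}_{\alpha\beta}=-g^{A}_{\beta\alpha}$, so
interchanging the lower indices in (\ref{zot}) gives;
\begin{eqnarray}
\Gamma^{\phi}_{\beta\alpha}
&=&{1\over2}\left(g^{A}_{\beta\epsilon,\alpha}
+g^{A}_{\epsilon\alpha,\beta}
-g^{A}_{\beta\alpha,\epsilon}\right)\bar{g}^{\epsilon\phi}
\nonumber\\
&=&-{1\over2}\left(g^{A}_{\epsilon\beta,\alpha}
+g^{A}_{\alpha\epsilon,\beta}
-g^{A}_{\alpha\beta,\epsilon}\right)\bar{g}^{\epsilon\phi}
=-\Gamma^{\phi}_{\alpha\beta}
\nonumber
\end{eqnarray}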
\section{Homothetic curvature;
$R^{\alpha}_{\alpha\beta\gamma}$}
In contrast to gravitational theory
the homothetic curvature is non-zero. We
contract
over the first upper and first lower
index of the curvature
tensor \cite{two}\cite{Eddington}
\begin{equation}
R^\alpha_{\alpha\beta\gamma}=\partial
_{\beta}\,\Gamma^{\alpha}_{\alpha\gamma}
-\partial_{\gamma}\,\Gamma^\alpha_{
\alpha\beta}
-\Gamma^{\alpha}_{\phi\beta}\,\Gamma^
{\phi}_{\alpha\gamma}+
\Gamma^{\alpha}_{\phi\gamma}\,
\Gamma^{\phi}_{\alpha\beta}
\label{dot}
\end{equation}
which is anti-symmetric in its two
uncontracted indices. For a
theory of electro-magnetism we require
first derivatives of the
potential terms. Thus we are interested
in the product of connection
coefficients in (\ref{dot}) which, for
the case at hand,
are non-vanishing in the
presence of metric (\ref{metric}).
We will later see that
the other two terms with second derivatives
of the metric cancel in (\ref{dot}).
Now consider the Bianchi identity;
\begin{equation}
R^\alpha_{\alpha\beta\gamma\,;\,\delta}
+R^\alpha_{\alpha\delta\beta\,;\,\gamma}+
R^\alpha_{\alpha\gamma\delta\,;\,\beta}=0
\label{bianci}
\end{equation}
and contract with the full metric;
\begin{equation}
\left(R^\alpha_{
\alpha\beta\gamma\,;\,\delta}
+R^\alpha_{\alpha\delta\beta\,;\,\gamma}+
R^\alpha_{\alpha\gamma\delta\,;\,\beta}\right)
\bar{g}^{\beta\gamma}
=0
\end{equation}
relabelling and using the fact that
a product of symmetric and
anti-symmetric parts with the same
indices is zero we obtain;
\begin{eqnarray}
-i
\left(
R^\alpha_{\alpha\beta\gamma\,;\,\delta}
-2R^\alpha_{\alpha\beta\delta\,;\,\gamma}
\right)
\sigma^{\beta\gamma}&=&0\nonumber\\
{1\over2}R^{A}_{\,\,\,;\,\delta}-R^\gamma_{\,\,\,
\delta\,;\,\gamma}
&=&0
\label{einstein's}
\end{eqnarray}
where the scalar $-i
\,R^\alpha_
{\alpha\beta\gamma;\,\delta}
\sigma^{\beta\gamma}
\equiv R^{A}_{\,\,\,\,;\,\delta}$
and indices are contracted in the tensor part.
Finally relabelling and raising indices with
the {\it symmetric} part of the metric
we obtain;
\begin{equation}
\left({1\over2}\eta^{\phi\delta}R_A-R^{\delta\phi}
\right)_{;\,\delta}=0
\label{delta}
\end{equation}
Although this equation appears identical to Einstein's
equation it contains very different information.
Note also that
in (\ref{delta}) I have employed the opposite sign
convention than is usual in Einstein's equation. This
is a reasonable assertion since
the gravitational potential is unbounded
from below whilst the electro-magnetic potential
for a charged object is unbounded from above
as $r\rightarrow0$, so we expect curvatures which enter
with opposite sign.
\section{Calculation of electro-magnetic torsion}
Using (\ref{inverse}), (\ref{zot})
and (\ref{deriv}) we have;
\begin{equation}
\Gamma^{\alpha}_{\alpha\beta}
={1\over2}g_{\alpha\epsilon,\beta}\,\bar{g}^
{\epsilon\alpha}
=-\Gamma^\alpha_{\beta\alpha}
\end{equation}
and thus;
\begin{eqnarray}
&\partial&
_{\beta}\,\Gamma^{\alpha}_{\alpha\gamma}
-\partial_{\gamma}\,\Gamma^\alpha_{
\alpha\beta}
\nonumber\\
&=&
{1\over2}g_{\alpha\epsilon,\gamma,\beta}
\bar{g}^{\epsilon\alpha}
+{1\over2}g_{\alpha\epsilon,\gamma}
\bar{g}^{\epsilon\alpha}_{\,\,\,,\beta}
-{1\over2}g_{\alpha\epsilon,\beta,\gamma}
\bar{g}^{\epsilon\alpha}
-{1\over2}g_{\alpha\epsilon,\beta}
\bar{g}^{\epsilon\alpha}_{\,\,\,,\gamma}
\nonumber\\
&=&
{1\over2}g_{\alpha\epsilon,\gamma}
\bar{g}^{\epsilon\alpha}_{\,\,\,,\beta}
-{1\over2}\bar{g}_{\epsilon\alpha,\beta}
g^{\alpha\epsilon}_{\,\,\,,\gamma}
\nonumber\\
&=&0
\end{eqnarray}
where the last line follows because the
$g_{\epsilon\alpha}$'s
commute as do the derivative indices.
Hence the components containing derivatives of
the connection
in (\ref{dot}) vanish and we have;
\begin{eqnarray}
&-&i
R^\alpha_{\alpha\gamma\beta}
\sigma^{\gamma\beta}=
-i\left(
\Gamma^\alpha_{\phi\beta}\,\Gamma^\phi_{\alpha\gamma}-
\Gamma^\alpha_{\phi\gamma}\,\Gamma^\phi_{\alpha\beta}
\right)\sigma^{\gamma\beta}
\nonumber\\
&=&
-2i
\Gamma^\alpha_{\phi\beta}\,\Gamma^\phi_{\alpha\gamma}
\sigma^{\gamma\beta}
\nonumber\\
&=&
-{i\over2}
\left(
\begin{array}{c}
i\sigma_{\phi}{}^{\alpha}{}_{,\,\beta}\\
\mbox{\tiny(A)}
\end{array}
\begin{array}{c}
+i\sigma^{\,\alpha}_{\;\;\;\beta\,,\,\phi}\\
\mbox{\tiny(B)}
\end{array}
\begin{array}{c}
-i\sigma_{\phi\,\beta\,,}^{\;\;\;\;\;\;\alpha}\\
\mbox{\tiny(C)}
\end{array}
\right).
\nonumber\\
&\;&\;
\left(
\begin{array}{c}
+i\sigma_{\alpha}{}^{\phi}{}_{,\,\gamma}\\
\mbox{\tiny(D)}
\end{array}
\begin{array}{c}
+i\sigma^{\phi}{}_{\gamma\,,\,\alpha}\\
\mbox{\tiny(E)}
\end{array}
\begin{array}{c}
-i\sigma_{\alpha\,\gamma\,,}^{\;\;\;\;\;\;\phi}\\
\mbox{\tiny(F)}
\end{array}
\right)\sigma^{\gamma\beta}
\label{array}
\end{eqnarray}
Now consider the product involving terms (B) and (E);
\begin{eqnarray}
-{i\over2}
i\sigma^{\,\alpha}_{\;\;\;\beta\,,\,\phi}\:
i\sigma^{\,\phi}_{\;\;\;\gamma\,,\,\alpha}
\sigma^{\gamma\beta}
&=&
-{1\over2}\sigma^{\,\alpha\beta\,,\,\phi}\:
\sigma_{\phi\beta\,,\,\alpha}
\nonumber\\
&
\stackrel
{\mbox{(def.)}}
{\equiv}&
-{1\over2}\partial^{\phi}A^{\alpha}\partial_{\alpha}
A_{\phi}
\label{Adefinition}
\end{eqnarray}
The last line involves a contraction over
$\beta$ and a
dimensional transmutation
to define the A field. This
definition is the
`translation' alluded to earlier in the
paper and is discussed extensively
later in the paper.
Similarly
the product of terms (C) and (F)
of eq (\ref{array}) gives an identical
$-{1\over2}\partial^{\phi}A^{\alpha}\partial_{\alpha}
A_{\phi}$. For (B).(F) and (C).(E)
of eq. (\ref{array}) we
obtain;
\[
-{i\over2}
\sigma^{\,\alpha}_{\;\;\;\beta\,,\,\phi}
\sigma_{\alpha\,\gamma\,,}^{\;\;\;\;\;\;\phi}
\sigma^{\gamma\beta}
\equiv
+{1\over2}\partial^{\phi}A^{\alpha}\partial_{\phi}
A_{\alpha}
\]
The products (B).(D), (C).(D)
are zero because the
$\sigma^{\gamma\beta}$
commutes past the derivative index of
$\sigma^{\phi}{}_{\alpha\,,\,\gamma}$
and hence contracts with opposite
sign on the $\gamma$ and
$\beta$. The products (A)(E) and
(A)(F) are also zero for the same
reason (to see this first anti-commute
the two matrices;
$
\sigma_{\phi\,,\,\beta}^{\;\;\alpha}\,
\sigma^{\,\phi}_{\;\;\;\gamma\,,\,\alpha}
$
- note also that two sigma's
with dummy contracted indices anti-commute
if, with relabelling, there is only one
index interchange on the sigma's - otherwise
they commute).
The last product term ((A).(D) in eqn. (\ref{array}))
is zero because the derivative indices commute.
Hence, summing the contributions, we have
for the scalar
part of (\ref{delta}) that R equals;
\begin{equation}
-\partial^{\phi}A^{\alpha}\partial_{\alpha}
A_{\phi}
+\partial^{\phi}A^{\alpha}\partial_{\phi}
A_{\alpha}
=
{1\over2}F^{\phi\alpha}F_{\phi\alpha}
\end{equation}
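As a quick check (my addition), expanding the field
strength
$F^{\phi\alpha}=\partial^{\phi}A^{\alpha}
-\partial^{\alpha}A^{\phi}$
and relabelling the dummy indices in the cross terms;
\[
{1\over2}F^{\phi\alpha}F_{\phi\alpha}
={1\over2}\left(\partial^{\phi}A^{\alpha}
-\partial^{\alpha}A^{\phi}\right)
\left(\partial_{\phi}A_{\alpha}
-\partial_{\alpha}A_{\phi}\right)
=\partial^{\phi}A^{\alpha}\partial_{\phi}A_{\alpha}
-\partial^{\phi}A^{\alpha}\partial_{\alpha}A_{\phi}
\]
which reproduces the two summed contributions term by
term.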
The tensor part of (\ref{delta}) is similarly
calculated. A subtlety however arises with regard to
translations into forms like (\ref{Adefinition}) because
of the anti-symmetry of the tensor piece $R_{\gamma}^{\;\;\delta}$.
I will calculate the terms first, impose anti-symmetry on the
translation into the A-field terms `by hand' and then explain the
meaning of the translation later in the text;
\begin{eqnarray}
&+&2i
R^\alpha_{\alpha\delta\beta}\,
\sigma^{\gamma\beta}
=
+2i\left(
\Gamma^{\alpha}_{\phi\beta}\,
\Gamma^{\phi}_{\alpha\delta}-
\Gamma^{\alpha}_{\phi\delta}\,
\Gamma^{\phi}_{\alpha\beta}
\right)\sigma^{\gamma\beta}
\nonumber\\
&=&+
{i\over2}
\left(
\begin{array}{c}
\mbox{\tiny(A)}\\
i\sigma_{\phi}{}^{\alpha}{}_{,\,\beta}
\end{array}
\begin{array}{c}
\mbox{\tiny(B)}\\
+i\sigma^{\,\alpha}_{\;\;\;\beta\,,\,\phi}
\end{array}
\begin{array}{c}
\mbox{\tiny(C)}\\
-i\sigma_{\phi\,\beta\,,}^{\;\;\;\;\;\;\alpha}
\end{array}
\right).
\nonumber\\
&\;&\;
\left(
\begin{array}{c}
\mbox{\tiny(D')}\\
i\sigma_{\alpha\;\;\,,\,\delta}^{\;\;\phi}
\end{array}
\begin{array}{c}
\mbox{\tiny(E')}\\
+i\sigma^{\,\phi}_{\;\;\;\delta\,,\,\alpha}
\end{array}
\begin{array}{c}
\mbox{\tiny(F')}\\
-i\sigma_{\alpha\,\delta\,,}^{\;\;\;\;\;\;\phi}
\end{array}
\right)\sigma^{\gamma\beta}
\nonumber\\
&\;&\;-
{i\over2}
\left(
\begin{array}{c}
i\sigma_{\phi\;\;\,,\,\delta}^{\;\;\alpha}\\
\mbox{\tiny(A')}
\end{array}
\begin{array}{c}
+i\sigma^{\,\alpha}_{\;\;\;\delta\,,\,\phi}\\
\mbox{\tiny(B')}
\end{array}
\begin{array}{c}
-i\sigma_{\phi\,\delta\,,}^{\;\;\;\;\;\;\alpha}\\
\mbox{\tiny(C')}
\end{array}
\right).
\nonumber\\
&\;&\;
\left(
\begin{array}{c}
i\sigma_{\alpha}{}^{\phi}{}_{,\,\beta}
\\
\mbox{\tiny(G)}
\end{array}
\begin{array}{c}
+i\sigma^{\,\phi}_{\;\;\;\beta\,,\,\alpha}\\
\mbox{\tiny(H)}
\end{array}
\begin{array}{c}
-i\sigma_{\alpha\,\beta\,,}^{\;\;\;\;\;\;\phi}\\
\mbox{\tiny(I)}
\end{array}
\right)\sigma^{\gamma\beta}
\label{array2}
\end{eqnarray}
The only products which are zero in
(\ref{array2}) are (A).(D') and (A').(G).
Relabelling dummies shows that the
remaining products in
(\ref{array2}) anti-commute.
For (B').(H) we have;
\begin{equation}
+{i\over2}
\sigma^{\,\alpha}{}_{\delta\,,\,\phi}\,
\sigma^{\,\phi}_{\;\;\;\beta\,,\,\alpha}
\sigma^{\gamma\beta}
=-{1\over2}
\sigma^{\,\phi\gamma}_{\;\;\;\;\,,\,\alpha}\,
\sigma^{\,\alpha}_{\;\;\;\delta\,,\,\phi}
\label{odd}
\end{equation}
which sums with (B).(E'). Analogous
contributions arise from (C).(F') and
(C').(I).
The crossed terms
(B).(F'), (B').(I),
(C).(E') and (C').(H)
each give;
\begin{equation}
{i\over2}
\sigma^{\,\alpha}_{\;\;\;\beta\,,\,\phi}\,
\sigma_{\alpha\delta\,,}{}^{\phi}
\sigma^{\gamma\beta}
\equiv
+{1\over2}\partial^{\phi}A^{\gamma}
\partial_{\phi}A_{\delta}
\label{Aterm2}
\end{equation}
For similar reasons the product
(A).(E') gives
\[-{1\over2}\,\partial^{\gamma}A^{\alpha}
\partial_{\alpha}A_{\delta}\]
and similarly for (A).(F'),
(B').(G) and (C').(G).
Products (B).(D'), (C).(D'),
(A').(H) and (A').(I) are easily
evaluated and each gives
$-{1\over2}\partial^{\phi}A^{\gamma}
\partial_{\delta}A_{\phi}$.
To translate
(\ref{odd})
the $\alpha$ contraction on the indices
delivers a $+i\partial_{\,\delta}$ and the
$\phi$ contraction a $-i\partial^{\,\gamma}$
; the -i sign because with relabelling
it can be seen that the two matrices
\[
\sigma^{\phi\gamma}_{\;\;\;\;,\alpha}\;
\sigma^{\alpha}_{\;\;\;\delta\,,\,\phi}
\]
anti-commute. Hence we obtain;
\begin{equation}
-{1\over2}
\sigma^{\,\phi\gamma}_{\;\;\;\;\,,\,\alpha}\,
\sigma^{\,\alpha}_{\;\;\;\delta\,,\,\phi}
\equiv
+{1\over2}\partial^{\gamma}A^{\phi}
\partial_{\delta}A_{\phi}
\label{acontraction}
\end{equation}
Summing the non-zero components of the
tensor part we have;
\begin{eqnarray}
+2i
R^{\alpha}_{\alpha\delta\beta}\,
\sigma^{\gamma\beta}
=\,&\,&
-2R^{\gamma}{}_{\delta}=
+2R_{\delta}{}^{\gamma}
\nonumber\\
{\equiv}\,
&-&2\partial^{\gamma}A^{\alpha}
\partial_{\alpha}A_{\delta}
+2\partial^{\alpha}A^{\gamma}
\partial_{\alpha}A_{\delta}
\nonumber\\
&+&2\partial^{\gamma}A^{\alpha}
\partial_{\delta}A_{\alpha}
-2\partial^{\alpha}A^{\gamma}
\partial_{\delta}A_{\alpha}
\nonumber\\
{\equiv}\,&\,&2F^{\gamma\,\alpha}
F_{\delta\,\alpha}
\label{F}
\end{eqnarray}
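The factorisation can be confirmed directly (my addition);
the four terms above are exactly the expansion of
\[
2\left(\partial^{\alpha}A^{\gamma}
-\partial^{\gamma}A^{\alpha}\right)
\left(\partial_{\alpha}A_{\delta}
-\partial_{\delta}A_{\alpha}\right)
=2F^{\alpha\gamma}F_{\alpha\delta}
=2F^{\gamma\alpha}F_{\delta\alpha}
\]
the last equality following from flipping the sign of
each $F$ once.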
Raising indices with the symmetric
part of the metric we finally
obtain the traceless electro-magnetic
stress-energy tensor;
\begin{equation}
-{1\over{\kappa^2}}R^{\delta\,\gamma}
+{1\over2{\kappa^2}}\eta^{\gamma\delta}
R=F^{\gamma\,\alpha}
F_{\alpha}^{\;\;\delta}+{1\over4}
\eta^{\gamma\,\delta}
F^{\mu\,\nu}
F_{\mu\,\nu}
\label{set}
\end{equation}
In forming equation (\ref{set}) I have replaced
the equivalence relation ($\equiv$) by an = sign
and a dimensional constant $\kappa^{-2}$
(the gravitational coupling constant).
This is discussed
in section IX.
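Tracelessness of the R.H.S. of (\ref{set}) can be
verified in one line (my addition); contracting with
$\eta_{\gamma\delta}$ and using
$\eta_{\gamma\delta}\eta^{\gamma\delta}=4$;
\[
\eta_{\gamma\delta}\left(F^{\gamma\alpha}
F_{\alpha}^{\;\;\delta}
+{1\over4}\eta^{\gamma\delta}
F^{\mu\nu}F_{\mu\nu}\right)
=F^{\gamma\alpha}F_{\alpha\gamma}
+F^{\mu\nu}F_{\mu\nu}=0
\]
since
$F^{\gamma\alpha}F_{\alpha\gamma}
=-F^{\gamma\alpha}F_{\gamma\alpha}$.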
The derivation of the traceless gauge-invariant
free-field stress-energy tensor equated to the
Einstein-like equation is something of a
mathematical miracle. There must be exactly the
right number and type of non-zero pieces to construct the
tensor and the factor of 2 difference between the
scalar R and the tensor $R^{\delta\gamma}$ on the
L.H.S. of eq.(\ref{set}) gets translated into an
effective factor of 4 difference on the R.H.S.
only because of the spinorial representation
used and the translation procedure. This is
actually a non-trivial result. I suspect it is
the only way a traceless gauge-invariant free-field
tensor can be extracted from a standard
Lagrangian.
The issue of the anti-symmetry in
$\delta$ and $\gamma$
of (\ref{F}) is discussed below.
\section{Discussion of homothetic curvature}
The above derivation effectively eliminates the
symmetric part of the metric.
Applying the anti-symmetry constraint $\mu\neq\nu$
for a purely anti-symmetric metric;
\begin{equation}
\bar{g}_{A}^{\mu\nu}=
-i\sigma^{\mu\nu}={1\over4}
[\gamma^{\mu},\gamma^{\nu}]_{-}
={1\over2}\gamma^{\mu}\gamma^{\nu}
_{\;(\mu\neq\nu)}
\label{vector}
\end{equation}
so, using (\ref{P});
\begin{eqnarray}
&\sigma&^{\nu\mu\,,\,\delta}
\sigma_{\nu\delta\,,\,\mu}
={1\over4}\partial^{\delta}\left(
\gamma^{\mu}\gamma^{\nu}\right)
\partial_{\mu}\left(
\gamma_{\nu}\gamma_{\delta}\right)
\nonumber\\
&\approx&
{1\over4}\left(\partial^{\delta}
\gamma^{\mu}\right)\gamma^{\nu}\gamma_{\nu}
\left(\partial_{\mu}
\gamma_{\delta}\right)
=
\left(
\partial^{\delta}
\gamma^{\mu}
\right)
\left(
\partial_{\mu}
\gamma_{\delta}
\right)
\label{cont}
\end{eqnarray}
which identifies the $A^{\mu}$ field as a
$\gamma^{\mu}$ 4x16 matrix
$
|P|\gamma^{\mu}
\equiv
A^{\mu}
$
transforming as a
{\it vector} under the 16x16 anti-symmetric
metric (\ref{vector});
{\it with respect to commuting
co-ordinates} (note: the
use of commuting co-ordinates is
implicit
in the derivation of these results
- note also that (\ref{P}) implies that
the A field is only defined up to a local phase).
Normally an infinitesimal rotation is given
by
\[ \delta x^{i}=\epsilon^{ij}\eta_{jk}x^{k}
=\epsilon^{i}_{\,\,\,k}x^{k}\]
where $\epsilon^{ij}$ is antisymmetric.
Now $R^{\gamma}_{\;\;\delta}$ in
(\ref{F}) is anti-symmetric
but under $g^{A}$ an infinitesimal rotation
is given by; $\delta x^{i}=s^{ik}g^{A}_{kj}x^{j}$
where $s^i_{\,j}$ is symmetric; thus variation of the
Lagrangian \cite{three} (for generic field $\phi$)
will give;
\begin{eqnarray}
0&=&
s_{\mu\nu}\,\partial_{\rho}
\left[
\frac{\delta{\cal L}}{\delta\left(\partial_{\rho}\phi\right)}
\left(\partial^{\mu}\phi\,x^{\nu}
+\partial^{\nu}\phi\,x^{\mu}\right)
-g^{\rho\nu}x^{\mu}{\cal L}
-g^{\rho\mu}x^{\nu}{\cal L}
\right]
\nonumber
\end{eqnarray}
with the divergence of the conserved current;
\[
\partial_{\rho}
{\cal M}^{\rho\,,\,\mu\nu}=T^{\mu\nu}
+T^{\nu\mu}
\]
which is zero if the stress-energy tensor
$T^{\mu\nu}$ is {\it anti-symmetric}
under $g^{A}$ (in other words, in the framework
of an anti-symmetric metric the stress-energy
tensor must be {\it anti-symmetric} to obtain
conservation of angular momentum - this is the
opposite to the situation with a purely symmetric
metric where the stress-energy tensor must be
{\it symmetric} to conserve angular momentum).
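(For contrast, the familiar textbook computation with a
{\it symmetric} metric and anti-symmetric rotation
parameter $\epsilon_{\mu\nu}$ yields
\[
\partial_{\rho}
{\cal M}^{\rho\,,\,\mu\nu}=T^{\mu\nu}
-T^{\nu\mu}
\]
which vanishes only for {\it symmetric} $T^{\mu\nu}$; the
sign flip relative to the expression above is the essence
of the anti-symmetric-metric construction.)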
Effectively
we have a choice of description;
(1) we can describe the A field as a
conventional vector
with symmetric metric in commuting co-ordinates,
or (2) as a `$\gamma$' vector with anti-symmetric metric
in commuting co-ordinates.
We know from (\ref{vector})
that $A^{\mu}$ must transform as a
4 vector under the
space-time metric.
Raising indices in
(\ref{set}) with the
diagonal part of (\ref{metric}) implies we
revert to description (1) instead of
(2), where $A^{\mu}$ is no longer
a $\gamma^{\mu}$ vector but a simple 4-vector
transforming under the symmetric metric,
and $T^{\gamma\delta}$
is instead symmetric because angular momentum
conservation must be present regardless
of the choice of description.
However, this of course
means that we must equivalently substitute
a {\it symmetric}
$R^{\gamma\delta}$ in eq (\ref{set}) for
the anti-symmetric
value that arises in eq (\ref{F}). This
relates back to the
A-field definitions like eq (\ref{acontraction}),
the notation
of which is appropriate for a symmetric term.
It is the L.H.S.
of eq (\ref{acontraction}), and analogous
contributions,
which should properly be summed to form the
antisymmetric object
$R^{\gamma}_{\;\;\delta}$
in eq (\ref{F})
; the conventional A-field definition (in commuting
co-ordinates)
is only
appropriate when we translate to the symmetric objects
(i.e. eq(\ref{set})). I have introduced the A-field
notation (eq (\ref{Adefinition}), eq
(\ref{acontraction}), etc.)
early as this facilitates comprehension and also
demonstrates that
there is a consistent mathematical method for
performing the
translation. It must be noted however that
there is always
an inherent choice of sign on the tensor part when we
perform a translation between an anti-symmetric and a
symmetric object; this is the price we pay
for working in a spinorial representation
against an anti-symmetric metric which becomes
a representation {\it{up to a sign}}.
For example eq (\ref{delta}) can
be rewritten as;$\left({1\over2}
\eta^{\phi\delta}R_A+R^{\phi\delta}
\right)_{;\,\delta}=0$ where the
tensor part now has opposite
sign. It can however be argued that
the same traceless stress-energy
tensor will result since we can choose
a sign from the residual phase factor
from the index raising operation in eq(\ref{array2})
(the phase can be made to vanish for the scalar $R_A$).
Lastly in this section note that the
Lagrangian for the free-field is now
given by the scalar curvature;
\begin{equation}
{\cal{L}}= {1\over\kappa^2}
{\sqrt{|g|}}{\lambda}R=-{1\over4}
F_{\alpha\beta}
F^{\alpha\beta}
\label{lagrangian}
\end{equation}
where $\lambda$ is a normalisation
constant. Because
the representation effectively normalises
the A field the norm of g
is a constant and can be absorbed into
the $\kappa$.
Just how the $\kappa^2$ gravitational
constant is absorbed into the free-field
is discussed later in the text.
\section{Rotation curvature and source
terms}
Rewriting eq.(\ref{dot}) for conventional
curvature we have;
\begin{equation}
R^\alpha_{\gamma\beta\alpha}=\partial
_{\beta}\,\Gamma^{\alpha}_{\gamma\alpha}
-\partial_{\alpha}\,\Gamma^\alpha_{
\gamma\beta}
-\Gamma^{\alpha}_{\phi\beta}\,\Gamma^
{\phi}_{\gamma\alpha}+
\Gamma^{\alpha}_{\phi\alpha}\,
\Gamma^{\phi}_{\gamma\beta}
\label{dot2}
\end{equation}
In General Relativity the symmetric metric
feeds into the rotation curvature
via the symmetric connection and
defines the stress-energy tensor via
Einstein's equation. It is relatively easy
to prove that the rotation curvature is
zero for the component of metric (\ref{metric})
that is on the light cone (i.e. the antisymmetric
part of the metric). We identify the rotation
curvature with material sources; i.e. particles
with mass and these sources should be identified
with the symmetric part of the metric. The phase
factor identified with the $ds^2=0$ part of the
metric (the $A_\mu$ field)
contains an implicit factor of $\hbar=1$ and thus
implies a `waviness' in the space-time structure
at the quantum scale. It is this wave-like
structure of small-scale space-time that has
replaced the fifth dimension of Kaluza-Klein
theory; the space-time structure itself has
been given the properties of the harmonic
oscillator.
Thus in order to obtain particle sources for
the theory we must now modify the small-scale
structure of space-time for the symmetric part
of the metric. The appropriate phase factor
will now be based on particle momentum
and the metric takes the form ($\hbar=c=1$);
\begin{equation}
g_{\mu\nu}=
e^{ip{\cdot}x}I_4.\eta_{\mu\nu}
+ie^{ik{\cdot}x}\sigma_{\mu\nu}
\label{sourcemetric}
\end{equation}
where $p^{\alpha}$ is the source four-momentum
and $k^{\alpha}$ is the photon four-momentum.
The first two components of the expansion of
the rotation curvature (R.H.S. of (\ref{dot2})) are zero
since;
\[
\partial_{\gamma}
\left(
\partial_{\beta}e^{ip{\cdot}x}
\right)
e^{-ip{\cdot}x}
=ip_{\beta}\partial_{\gamma}
\left(e^{ip{\cdot}x}e^{-ip{\cdot}x}
\right)
=0
\]
so there is no interference
with gravitation at the level of
the E.M. sources and we have;
\begin{equation}
R^\alpha_{\gamma\beta\alpha}=
-\Gamma^{\alpha}_{\phi\beta}\,\Gamma^
{\phi}_{\gamma\alpha}+
\Gamma^{\alpha}_{\phi\alpha}\,
\Gamma^{\phi}_{\gamma\beta}
\end{equation}
For the source the metric is symmetric and
the connection takes the usual symmetric
form. We may
take it as identical to (\ref{zot})
with the anti-symmetric part omitted
and thus we obtain (for notational
convenience dropping the $I_4$
and absorbing the phase-factor
into the definition of $\eta$ in
an analogous manner as was done with the
$\sigma$ matrices);
\begin{eqnarray}
4R^\alpha_{\gamma\beta\alpha}&=&
-
\left(\eta_{\phi\;\;,\beta}^{\,\,\,\alpha}+
\eta^{\alpha}_{\;\;\beta\,,\,\phi}
-\eta_{\phi\beta\,,}^{\;\;\;\;\;\alpha}
\right)e^{-ip{\cdot}x}.
\nonumber\\
&\,&\;\;\;\;\;\;\;
\left(\eta_{\gamma\;\;,\alpha}^{\,\,\,\phi}+
\eta^{\phi}_{\;\;\alpha\,,\,\gamma}
-\eta_{\gamma\alpha\,,}^{\;\;\;\;\;\phi}
\right)e^{-ip{\cdot}x}
\nonumber\\
&+&
\left(\eta_{\phi\;\;,\alpha}^{\,\,\,\alpha}+
\eta^{\alpha}_{\;\;\alpha\,,\,\phi}
-\eta_{\phi\alpha\,,}^{\;\;\;\;\;\alpha}
\right)e^{-ip{\cdot}x}.
\nonumber\\
&\,&\;\;\;\;\;\;\;\;
\left(\eta_{\gamma\;\;,\beta}^{\,\,\,\phi}+
\eta^{\phi}_{\;\;\beta\,,\,\gamma}
-\eta_{\gamma\beta\,,}^{\;\;\;\;\;\phi}
\right)e^{-ip{\cdot}x}
\nonumber\\
&=&
+2\left(\partial_{\gamma}e^{ip{\cdot}x}\right)
e^{-ip{\cdot}x}
\left(\partial_{\beta}e^{ip{\cdot}x}\right)
e^{-ip{\cdot}x}
\nonumber\\
&-&2\eta_{\gamma\beta}
\left(\partial_{\phi}e^{ip{\cdot}x}
\right)e^{-ip{\cdot}x}
\left(
\partial^{\phi}e^{ip{\cdot}x}\right)
e^{-ip{\cdot}x}
\end{eqnarray}
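Here each phase derivative simply contributes a momentum
factor (an explicit intermediate step, my addition);
\[
\left(\partial_{\phi}e^{ip{\cdot}x}\right)
e^{-ip{\cdot}x}=ip_{\phi},
\;\;\;\;\;
p_{\phi}p^{\phi}=m_{o}^{2}
\]
so that the right-hand side evaluates to
$-2p_{\gamma}p_{\beta}
+2m_{o}^{2}\eta_{\gamma\beta}$,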
from which we obtain;
\begin{equation}
R^{\alpha}_{\gamma\beta\alpha}
=
-{1\over2}p_{\gamma}p_{\beta}
+{{m_{o}^2}\over2}\eta_{\gamma\beta}
\end{equation}
where $m_{o}^2$ is the
square of the rest mass and
\begin{equation}
R=R^{\alpha}_{\gamma\beta\alpha}
\eta^{\beta\gamma}
=
+{3\over2}m^2_{o}
\label{nophase}
\end{equation}
so
\begin{equation}
-R^{\alpha}_{\gamma\beta\alpha}
+{1\over2}
\eta_{\gamma\beta}R
=
{m^2_{o}\over2}\left(
U_{\gamma}U_{\beta}+{1\over2}\eta_{\gamma\beta}
\right)
\label{particlestressenergytensor}
\end{equation}
where $U_{\gamma}={{dx_{\gamma}^p}
\over{d\tau}}$, $\tau$ is the
proper time, and p denotes the particle
position. I have suppressed the
phase factors associated
with the symmetric
metric contraction in
eq(\ref{nophase}) because what is being
performed here is a translation to a
classical description of a point
particle. We will eventually add a
factor of dimension $l^3$
to the symmetric part of metric
(\ref{sourcemetric}) in order
to create a Lagrangian density of the
appropriate dimension. With translation a
factor of $l^{-3}$ will appear in the
fields. In anticipation
of this we add a factor of dimension $l^{-3}$ to
obtain a translation to a classical
particle description with position $x(\tau)$
and use;
\begin{eqnarray}
d^3(x)&=&{\int}d\tau\;\delta\left(
x^o-x^o_p\tiny{(\tau)}\right)
\delta^3\left(x^i-x^i_p\tiny{(\tau)}\right)
\nonumber\\
&=&
{d\tau\over{dx^o}}\delta^3(x^i-x^i_p
\tiny{(\tau)})
\nonumber\\
\label{61}
\end{eqnarray}
so that eq(\ref{particlestressenergytensor}) finally
gives (dropping the factor of 1/2 which is analogous
to the zero-point energy of an harmonic oscillator);
\begin{equation}
{m_o\over2}{\lambda}T_{\gamma\beta}
=
{m^2_{o}\over2}
U_{\gamma}U_{\beta}
{d\tau\over{dx^o}}\delta^3(x^i-x^i_p
\tiny{(\tau)})
\end{equation}
which is the correct form for the classical
stress-energy tensor for a point-particle with
unit charge \cite{CFT}. The three-dimensional
delta function has been substituted for the
fields in the classical description (cf. eq(\ref{st})).
(The delta function, in the classical limit that the
space-time spread for the particle approaches
a point, behaves as the inverse of the irreducible
metric - see eqs (\ref{I.I.}),(\ref{I.A.})
and (\ref{I.V.}) - thus the use of the delta function is
only valid in the `classical limit' and not
in a quantum description in which case
the irreducible metric can not be treated as
the inverse of a delta function).
$\lambda$ is a constant to be
determined.
Note that I have used the same sign
convention for the particle stress-energy
tensor that I employed for the free-field
stress-energy tensor for consistency
(see the section titled Homothetic Curvature).
The origin of the
additional zero-point energy is analogous to
the non-vanishing of the zero-point energy of
a simple harmonic oscillator that is
seen in quantum physics. It
is an indication that the transition to
the point-particle description is not
entirely appropriate. Note also
that, due to Einstein's equation, the
covariant derivative of the
particle stress-energy tensor
vanishes in the absence of the free-field.
\section{The concept of irreducibility}
In four dimensions the Lagrangian density
must have dimension $l^{-4}$. Formally the
metric must be dimensionless. This immediately
leads to a problem with the theory
presented above as follows.
The Lagrangian density ${\cal{L}}=-{1\over4}F
_{\alpha\beta}F^{\alpha\beta}$ has dimension
$l^{-4}$ because the $A^\alpha$ field is given
dimension $l^{-1}$ and each derivative contributes
an $l^{-1}$.
The contracted curvature tensor (whether homothetic
or rotation), when derived from a dimensionless
metric, thus has dimension $l^{-2}$. It is this
fact that makes the coupling constant of the
gravitational field $l^{-2}$ and renders quantum
gravity non-renormalisable.
Thus it appears that in performing the translation
between symmetric and anti-symmetric representations
of the electro-magnetic field we must also
introduce a dimensional transmutation in
order to give the free-field Lagrangian the
appropriate dimension.
Ultimately this is the crux of the
problem of unifying electro-magnetism and
gravitation, and also the central issue
behind the difficulty of quantising gravitation.
Very much in the spirit of H Weyl's ideas, I
want now to explore a possible solution to
this problem that centres about the issue of
scale-transformations. The following
is a sketch, not entirely rigorous, of
the central ideas involved.
Einstein hints at the problem in his last
published paper
\cite{UFT} when he discusses the
obvious difference between the inherent
discontinuity of quantum objects and
the continuum of space-time; an apparent
schizophrenia that has no deeper physical
explanation in current theory.
Let us firstly assume that discontinuity
is the fundamental element of physical
structure and that the continuum is built
up from a more fundamental element of
structure that is ultimately completely
discontinuous. This would imply that both
matter and space-time are built from the
same basic `stuff'. (A strong empirical
hint that this must be the case is
seen with phenomena such as
creation of particle pairs
from the vacuum in HEP). The most basic
element of structure that could be
postulated seems to me to be something
like an `on-off' or (0,1) duality. A
plausible associated metric would be;
\begin{equation}
d(a,b)=|{\epsilon}(a,b)|
\label{I.I.}
\end{equation}
The meaning of eq.(\ref{I.I.})
is that the distance between points labelled
a and b is the absolute value unity (i.e. 1)
if $a{\neq}b$ and zero if $a=b$. This is, of
course, regardless of where the points a and
b happen to be located. Indeed, according to this
metric it makes no sense to talk about where the
points are; only that they are separate or
distinguishable. No `background' space-time
as such exists according to this metric; we want
to {\it{build}} a four-dimensional space-time
out of this metric. We postulate the following
algebra for the metric;
\begin{equation}
|\epsilon(a,b)|.|\epsilon(b,c)| = |\epsilon(a,c)|
\end{equation}
so that the product of two objects of
dimension $l^1$ is not $l^2$ but $l^1$. I call such an
object an {\it{irreducible interval}} and its dimensionality
is also set irreducibly at unity;
dimensionality is thus in some
sense quantised in this scheme. The {\it{number}} one is
defined as a {\it{counting}} of the existence of the
interval from one end to the other. Iterated countings
still only define the number one. The number zero may
be thought of as the non-existence of the interval or the
point upon which counting is initiated.
Iterated counting may be symbolised as;
\begin{equation}
|\epsilon|^n=1\;\;\;({\forall}n\neq\aleph_0)
\end{equation}
The object
$|\epsilon|^{\aleph_0}$
with transfinite (completed infinite) index
is not definable in a singular irreducible
dimension. We assume it defines a two dimensional
space bounded by irreducible intervals.
Such a space must contain at least three points
on its boundary.
Its associated metric is written as;
\begin{eqnarray}
d^2(a,b,c)=|\epsilon^2(a,b,c)|=|\epsilon^2|
\nonumber\\
|\epsilon|^{\aleph_0}=|\epsilon^2|
\label{I.A.}
\end{eqnarray}
The `area' bounded by the irreducible intervals
and defined by metric (\ref{I.A.}) I will
call an `irreducible area' or I.A. Its
cardinality is that of the counting numbers
$\aleph_0$
(i.e. the field of rational numbers)
{\it{not}} that of the continuum.
(By contrast the cardinality of the irreducible
interval is strictly finite).
It is this kind of object that I want to assume
forms the superstructure of the photon. On the light
cone we assume it doesn't define a space with the
property of the real continuum. To get a continuum
we must assume the continuum hypothesis (i.e.
that the next highest transfinite cardinal above
$\aleph_0$ is c the cardinality of the continuum),
and that propagation of the photon with respect to
all and any observers generates such an equivalent space.
The metric may be written;
\begin{eqnarray}
d^3(a,b,c,d)=|\epsilon^3(a,b,c,d)|=|\epsilon^3|
\nonumber\\
|\epsilon^2|^{\aleph_0}=|\epsilon^3|
\label{I.V.}
\end{eqnarray}
(The last equation
is analogous to the equation $2^{\aleph_0}=c$).
Metric (\ref{I.V.}) describes {\it{irreducible
volumes}} (I.V.'s) the contained space of which is assumed
to have the mathematical property of the continuum
but no contained fourth dimension (no time). Such a
space provides a candidate for both quantum objects
with mass and propagating photons; of course for the
latter we must add time as the dynamical factor
generating the volume if we assume that photons
are propagating I.A.'s. with respect to objects with
mass. On the boundary of massive objects we
will expect to find I.A.'s and thus an associated
massless field.
Of course with this kind of scenario the time
dimension itself is not really geometrically
defined; it is an assumed added parameter. It is
possible to extend the geometric/mathematical
analogy to postulate a more geometric origin
for time but here we will assume that the
addition of time does not alter the
cardinality; space-time has the same cardinality
as 3-space which is that of the continuum.
Note that, even though a timeless 3-space
defined by metric (\ref{I.V.}) is a continuum
it is {\it{irreducible}} in the sense that
it cannot be subdivided because to do so would violate
the irreducibility of the bounding intervals (
or equivalently the bounding areas) upon which the
hierarchy of structure is built; it is in this
sense quantised
irreducibly and immortal. Dynamics can only
occur on the boundary of the object; never in its
interior.
It is possible to postulate that the
physical manifestation of metrics (\ref{I.I.}),
(\ref{I.A.}) and (\ref{I.V.}) is
local gauge invariance. To see how this idea
works geometrically consider three
points selected at random
on a circle consisting of
a real continuum of points;
\setlength{\unitlength}{1mm}
\begin{picture}
(30,30)(-25,0)
\put(15,15){\circle{20}}
\put(10,10){\line(6,5){11}}
\put(10,10){\line(0,0){9.5}}
\put(10,19.5){\line(4,0){11}}
\end{picture}
Now, we know from the work of G. Cantor
that for a continuum of points a 1:1
mapping can be defined from the points on
any finite length line segment onto any other
line segment of arbitrary length.
Thus for the continuum of
points on the circle we can define a
1:1 and onto mapping of the circle onto
itself which does not leave the triangle
invariant; the three points defining
the triangle can be shifted around the
circle by such a transformation. This is
another way of saying that the continuum
can be compressed or stretched to an
arbitrary degree without the structure of
the continuum itself varying. Such a mapping is an exact
analogue of a local U1 gauge transformation
on the circle. However, under metric (\ref{I.I.}),
and indeed only under metric (\ref{I.I.}),
the triangle itself may be regarded as invariant,
since the angles subtended by the sides of the triangle
are not defined under such a metric and each edge
of the triangle always has unit length. Such a
triangle is `irreducible' under a local gauge
transformation. Alternatively we may view
the concept of the combination of an
irreducible geometry embedded in
the structure of the continuum as a
deeper explanation of the origin of
local gauge invariance itself; i.e. that the
embedding of absolute discontinuity into
the continuum gives rise to local gauge
invariance. I have in mind here the basic
foundation of quantum objects embedded in
the continuum of space-time; or,
alternatively in the language of the
geometry presented above, of quantum
objects actually {\it{generating}} the
space-time continuum. Such an embedding
is a fundamental union of the discrete and the
continuous.
We now postulate that a photon literally
has intrinsic geometric structure built up
from irreducible intervals which have geometric
and physical definition only on the light cone
itself (a triangular geometry, for example,
might be candidate) or
more particularly as some
form of irreducible geometry
defined in a two-dimensional plane
orthogonal to the direction of motion
of the photon and propagating
at the speed of light. The geometric irreducibility,
which is inherently non-local,
itself is unobservable; we see its physical
manifestation {\it{indirectly}} through the
unobservability of local gauge; i.e. local gauge
invariance of electro-magnetism. (Of course the same
must apply to the {\it{boundary}} of a three-dimensional
object defined by metric (\ref{I.V.}); such
a geometry is assumed to be a massive fermion
quantum object and the boundary its associated
electro-magnetic and gravitational fields; there
must be
implications here for the theory of neutrinos
but I will not discuss this issue in this paper).
We can now reinterpret the translation process
for the free-field electro-magnetic stress-energy
tensor as follows. The anti-symmetric part of
metric (\ref{metric}) is an irreducible
metric on the light cone; this means that it
does not hold in any observer's frame.
Each $\sigma_{\alpha\beta}$ term,
which ultimately will contribute one
$A_\alpha$ or $A_\beta$
term,
is assumed to have dimension $l^{-2}$ and,
in addition to a dimensionless
phase factor $e^{{\pm}ik{\cdot}x}$,
contains an intrinsic product of an
I.A. to make the whole object (c.f. eq(\ref{|P|}))
\begin{equation}
g^A_{\alpha\beta}=
|\epsilon^2{\tiny{(\alpha,\beta)}}|.
e^{{\pm}ik{\cdot}x}.\sigma_{\alpha\beta}
=
|P|\sigma_{\alpha\beta}\equiv
\sigma_{\alpha\beta}
\label{irredmetric}
\end{equation}
dimensionless. (The I.A. here is rather
like a dimensional polarisation tensor; because of the
peculiar algebra of these metrics we can
still use the $g_A$ to raise and lower
indices).
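The dimensional bookkeeping, made explicit here as my
addition, is then;
\[
\left[g^{A}_{\alpha\beta}\right]
=l^{2}\,.\,l^{0}\,.\,l^{-2}=l^{0}
\]
the $l^{2}$ coming from $|\epsilon^{2}|$, the $l^{0}$
from the phase factor and the $l^{-2}$ from
$\sigma_{\alpha\beta}$, as required for a formally
dimensionless metric.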
Now the dimension $|\epsilon|$ is `infinitely smaller'
than the dimension $|\epsilon^2|$. In anticipation
of imposing a scale on the irreducible metric as
a part of the translation procedure let us
define the irreducible area via a term $d|\epsilon^2|$;
\[
|\epsilon^2|=
\int_{-\infty}^{+\infty}|\epsilon|.\;d|\epsilon^2|
\]
where $d|\epsilon^2|$ is the (infinitesimal but
denumerable) increment
in area in a direction orthogonal to $|\epsilon|$.
The integration is carried over all space.
Also we have;
\[
|\epsilon^3|=
\int_{-\infty}^{+\infty}|\epsilon^2|.\;d|\epsilon^3|
\]
With translation to an observer frame the I.A.
ceases to exist (we
must assume that it becomes absorbed into the
structure of the continuum
\cite{GF}) and the continuum
has its dimension boosted from two to $3+1$
dimensions. The idea is to regard the metric
$|\epsilon|$ as related to the graviton
(this metric must be spin-2 since the
generator of the discrete
associated group $S_2$, the permutation group
of two objects, will not change sign with a
rotation by $\pi$), the metric $|\epsilon^2|$
as related to the photon (the three-point
discrete group $C_{3v}$ of the triangle changes
sign with rotation by $\pi$ of one of
its generator axes) and the metric
$|\epsilon^3|$ as the metric of a
massive spinor (vis-\`a-vis the 4-point
discrete spinor group $T_d$). The `smallness'
of $|\epsilon|$ may then be expressed in
translation to the observer frame via a
Taylor expansion;
\begin{equation}
|\epsilon^2|\approx
(\partial_{\mu}|\epsilon^2|).\kappa
\approx\kappa^2
\label{translation}
\end{equation}
This implies that in translation to the
observer frame the metric $|\epsilon|$ and
the $d|\epsilon^2|$ are
`small' in relation to the electro-magnetic
irreducible metric to the
order of the gravitational constant. Notice that with
translation (\ref{translation})
the metric equation $|\epsilon|.|\epsilon|
=|\epsilon|$
no longer holds because a scale has been set.
Similarly, we assume that the volume increment
$d|\epsilon^3|$
is `small' with respect
to the volume metric $|\epsilon^3|$ to the
order of the fine structure constant in relation
to the mass-scale $m_o$ of the object defined by the
three-dimensional metric. Thus with translation we set;
\begin{equation}
|\epsilon^3|\approx
|\epsilon^2|.{\alpha\over{m_o}}
\approx{\kappa^2\alpha\over{m_o}}
\label{3trans}
\end{equation}
Lastly we must impose an analogous set of conditions
on the symmetric part of the metric;
\begin{equation}
g^S_{\alpha\beta}=
|\epsilon_i^3|e^{ip_i{\cdot}x}\eta_{\alpha\beta}
\end{equation}
where $\eta_{\alpha\beta}$ now contains a
product of two fields each
of dimension $l^{-{3\over2}}$ i.e. spinor fields,
and the irreducible volume metric of
dimension $l^3$ appears.
The index $i$ here labels particle types.
We must assume that with a translation procedure
from a symmetric to an anti-symmetric representation
(in some sense counter-balancing the translation
of the boson field $A_\mu$ from an anti-symmetric to a
symmetric representation)
the $\eta$ field translates
into a spinorial representation
(using (\ref{3trans})) via the ansatz;
\begin{equation}
|\epsilon_i^3|e^{ip_i{\cdot}x}
\eta_{\alpha\beta}
\equiv
{\kappa^2\alpha\over{m_o}}\;
\overline{\Psi}_i{\Psi_i}\gamma_{\alpha}\gamma_{\beta}
\label{st}
\end{equation}
of the spinor $\Psi_i$.
Note that the R.H.S. of eq.(\ref{st}) is
antisymmetric in $\alpha$ and $\beta$ so this
involves a translation between symmetric and
anti-symmetric representations (i.e. we don't
equate both sides to zero!).
Whether or not the Dirac Lagrangian can
be extracted from this form of
translation remains to be
seen.
The special algebra of these irreducible
metrics, as
before, allows the use of the total metric
to raise and lower indices. The
irreducible metric $|\epsilon^3_i|$ behaves
algebraically like a dimensionless quantity
prior to translation.
Our Lagrangian reads;
\begin{equation}
{\cal{L}}
=
{\kappa^{-2}}{\sqrt{|g|}}R
\label{Lag}
\end{equation}
\section{discussion; supersymmetry or
superslimmetry?}
In this paper only the photon has
been given `dual' representations both
as spinor and vector. (Equation (\ref{st})
is just a speculation for further work).
The `operator'
which interconverts the photon representations is
the procedure that converts
homothetic curvature due to
torsion into rotation curvature. What does this
mean geometrically?
Consider again the triangle embedded in
the circle pictured previously. Torsion breaks
parallelograms (or equivalently triangles)
but under the irreducible metric
the triangle does not break when subjected to
torsion. In fact its `unbreakability' under
the torsion-induced homothetic curvature
is nothing other than an expression of
$U(1)$ local-gauge invariance
as was previously demonstrated. But this is
a compact rotation symmetry! Thus we see that
the interconversion of bosonic and fermionic
representations is bridging compact and non-compact
groups because the torsion is the generator of
translations. This enables us to have
a more fundamental physical reason for the
occurrence of local gauge invariance in
nature; invariance of the structure of the
continuum to arbitrary deformations.
It is the
structure of the continuum which is truly
fundamental; the matter fields and the
forces between them appear as the superstructure
keeping the continuum continuous.
The invariance of the irreducible geometry
used to describe the photon
under torsion-induced translation is equivalent,
at least from the geometric point of view,
to `dressing' the photon with its own
gravitational self-interaction. To see
this note that, since torsion breaks parallelograms,
a triangle defined by
ordinary geometry would break
if the intervals defining it were not irreducible.
Consider the simplest break; a rupture at one
of the vertices of the triangle. The result
will be an object with four vertices. The
extra interval, under the assumptions presented,
is the geometry of a graviton. The restoration
of the geometry of the triangle would then correspond
to the resorption of the emitted graviton. In
this manner irreducibility of the
geometry of the triangle - that is,
its invariance under torsion induced
translations - is equivalent to `dressing'
the photon with the gauge field of translations;
its own gravitational self-interaction.
Eq.(\ref{translation}) gives us
a scale for the interaction; each photon is
dressed by gravitons of the order of the
Planck length. Thus the `breaks' induced by
torsion are extremely small scale.
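For orientation, the scale in question can be quoted
numerically (a standard identification, stated here for
concreteness rather than derived from the present
formalism);
\[
\ell_P=\sqrt{\hbar G/c^3}\approx1.6\times10^{-35}\,{\rm m},
\]
roughly twenty orders of magnitude below even the
classical electron radius, so the torsion-induced
`breaks' lie far beyond direct observation.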
A similar interpretation can be given to
eq.(\ref{3trans}); irreducibility equals
local gauge invariance and here the global
$U(1)$ phase of the photon is the gauge group
restoring invariance of the phase of the
source of the electro-magnetic field. The
irreducible metric is implicitly including
the gravitational and electromagnetic
self-interactions of the source; the
source is `dressed' by its own fields.
It seems to me that we have the following
scenario. We have dual descriptions of
electromagnetism. Interpreted in an observer
frame, torsion and its induced homothetic
curvature are unobservable; they could only
ever appear as a non-propagating contact
interaction. However, to
an observer `travelling on a ray of light'
(which, for the sake of explanatory convenience,
we shall admit)
the torsion and homothetic curvature
would appear real and the observer's
world would be a strange place where photons
behave as spinors and only the anti-symmetric
part of the photon's stress-energy tensor is a
conserved quantity. To our observer riding
on a photon the photon obeys a Pauli exclusion
principle; which is to say in a space where
$ds^2=0$ our observer is in no position to
`see' any photon other than the one he
or she is unfortunate enough to be ensconced
with.
Then there is the other description of
electromagnetism with which we are more
familiar. In it the photons are bosons
and the conserved stress-energy tensor is
of course symmetric. In this frame the
homothetic curvature seen by our observer
perched on a photon appears to
the more familiar observer on Earth
as a rotation
of space-time in the local vicinity of
each individual photon;
we call it
the wave nature of light.
It is in this manner that
the Coleman--Mandula (C-M) theorem may be overcome; not
by supersymmetry but with a slimmer
menagerie of fundamental objects -
which is of course desirable -
with each particle providing its own
`superpartner' of which the photon,
in this case, may provide the
prototype example. This seems to me
more natural than conventional
supersymmetry given that it generates
local gauge invariance rather than
assuming it and does not generate
unobserved objects.
Thus
unification
in four dimensions is not yet a
closed subject and hopefully this
paper has stimulated some interest
in it. In particular
obtaining a gauge-invariant traceless
stress-energy tensor for the free-field
is quite a non-trivial result peculiar
to the mathematical structure I
have presented. Note that
exactly the right components must
be present in the expansion of the
curvature tensor for the mathematics to work.
Variation of the Lagrangian (\ref{Lag})
now leads to the Einstein equation on the L.H.S.
and the sum of the free-field and particle
stress-energy tensors on the R.H.S. and both
gravitation and electro-magnetism are
accommodated in the single equation. The
unification of the gauge couplings is
speculative but is assumed to be linked
to the structure of the continuum beyond
the Planck scale.
Using irreducible metrics means that we really
must go beyond the conventional conception of
space-time. Instead of a fifth dimension to
define electric charge, particles now appear
rather like non-local bubbles in the vacuum inside which time
is absent. The closer we look
at the bubble the smaller it gets
(I have in mind here electrons and quarks).
The quantisation of charge
must now be related to the
topology of the boundary of this space.
The decomposition of the vacuum is more
severe on the light cone where the structure of the
continuum itself is actually altered.
It would be appropriate to summarise what
has been done in order to get the
mathematical content in perspective.
Firstly the metric structure of space-time
has been generalised;
1. to include a $U(1)$
phase factor `on the light-cone' the generator
of which is the photon momentum. The sigma
matrices in some sense `carry' the representation
$e^{{\pm}ik{\cdot}x}$
on the light cone. The phase factor itself means
that, in the presence of the photon, space
on the light cone has an intrinsic wave-structure.
The associated torsion is generating, via the
homothetic curvature, the corresponding free-field
stress-energy tensor.
This, however, is not really
a true Riemann-Cartan geometry because
the torsion on the light
cone translates to non-torsional
rotation curvature in the
`observer frame'. The `frame' in which this
torsion is defined is `on the light cone' i.e.;
it is not an observer frame. To reposition the
representation in an observer frame it must be
translated from an anti-symmetric representation
into a symmetric representation. This is analogous
to transforming homothetic curvature due to
torsion into
rotation curvature. Once translated the
stress-energy tensor for the free-field
must be added to the rotation curvature
coming from the source as both must now be regarded
as contributing to the rotation curvature.
2. The second generalisation involves adding
a $U(1)$ phase factor related to the charged
source momentum with non-zero rest mass
to the symmetric part of the
metric. This is `on the observer frame'
(i.e.$ds^2\neq0$) and
generates rotation curvature in a manner
analogous to the rotation curvature generated
in Gravitation theory. The potential
generating a current in this
case is then the 4-momentum of the charged
electron or proton. This current is a
conserved quantity;
\begin{equation}
-{i\over4}\Gamma^\alpha_{\gamma\alpha}
=\left(
-i\partial_{\gamma}e^{ip{\cdot}x}\right)
e^{-ip{\cdot}x}=p_{\gamma}
=j_{\gamma}
\end{equation}
so that clearly $\partial^{\gamma}j_\gamma=0$
when we treat momentum and position as independent
variables in the quantum representation.
The presence of a phase factor in the
metric for an electron means that
\[
p^2=m_o^2e^{ip{\cdot}x}
\]
but if we reinterpret this in terms
of operators and wave-functions we have
instead;
\[
\hat{p}^\mu\hat{p}^\nu
g_{\mu\nu}
=
\hat{p}^\mu\hat{p}^\nu
\eta_{\mu\nu}
<x\,|\,p>
=m_o^2<x\,|\,p>
\]
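To make the operator step explicit, here is a minimal
worked line (assuming only the standard representation
$\hat{p}_\gamma=-i\partial_\gamma$ and plane waves
$<x\,|\,p>=e^{ip{\cdot}x}$, with $\hbar=1$);
\[
\hat{p}^\mu\hat{p}^\nu\eta_{\mu\nu}\,e^{ip{\cdot}x}
=(-i\partial^\mu)(-i\partial^\nu)\eta_{\mu\nu}\,e^{ip{\cdot}x}
=p^\mu p^\nu\eta_{\mu\nu}\,e^{ip{\cdot}x}
=m_o^2\,e^{ip{\cdot}x}
\]
i.e. the usual mass-shell condition acting on the
wave-function; the phase factor carried by the metric
is absorbed into the state rather than the dispersion
relation.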
Thus at short distance scales (there is an
intrinsic factor of $\hbar$ in the phase
factor) space-time
takes on a wave-nature which means that a
classical particle description is no
longer appropriate.
In particular the non-zero zero point
energy present in the stress-energy tensor
means that we are dealing with a
harmonic oscillator with non-zero
minimum energy. In the framework of
general relativity the wave part of
the wave-particle duality
is thus due to space-time curvature
at short scales; it's another way of
looking at the world. Unfortunately the
precise quantitative difference
in the
scales of the coupling parameters for electro-magnetism
and gravitation has not been given an explanation but
it has been noted that, prior to translation, the
difference is infinite! It is thus perhaps no surprise
that there results with translation a large difference
(at least at low energies).
After translating the free-field stress-energy
tensor to a symmetric form we may consider
it a component part of the rotation curvature
generated by metric (\ref{sourcemetric}) instead of
homothetic curvature.
With this proviso summing the contributions to rotation
curvature arising from the metric
(\ref{sourcemetric}) we may write;
\begin{eqnarray}
&-&
{1\over{\kappa^2}}
(R^{\gamma\delta}
-{1\over2}\eta^{\delta\gamma}R)
=
{\alpha\over2}.
m_o
U^{\gamma}U^{\delta}
{d\tau\over{dx^o}}\delta^3(x^i-x^i_p
\tiny{(\tau)})
\nonumber\\
&+&F^{\gamma\,\alpha}
F_{\alpha}^{\;\;\delta}+{1\over4}
\eta^{\gamma\,\delta}
F^{\mu\,\nu}
F_{\mu\,\nu}
=T^{\gamma\delta}_P+T^{\gamma\delta}_F
\label{final}
\end{eqnarray}
where the
zero-point energy has been discarded
as is usually done in quantum theory.
The covariant derivative of both sides of
eq.(\ref{final}) must vanish which gives
us the inhomogeneous Maxwell equations.
The homogeneous Maxwell equations result
from considering the free-field alone
(i.e. equation (\ref{set})) by setting
source 4-momentum to zero. The classical
equations of motion for a point particle
in an electro-magnetic field result
if we substitute the ordinary derivative for
the covariant derivative. Thus we can recover
classical electro-dynamics.
Note that
we can also add to the right side of
eq.(\ref{final})
the contributions from gravitation by letting the
symmetric part of the metric vary with the
gravitational potentials. It will of course
enter with the opposite sign to the contributions
from the electromagnetic field. For macroscopic
charged matter we would of course have to integrate
up the source term in eq.(\ref{final}). An object
with opposite charge will, however, enter with
opposite sign if we take
$\eta_{\alpha\beta}e^{ip{\cdot}x}$
as unit negative charge and
$\eta_{\alpha\beta}e^{-ip{\cdot}x}$
as unit positive charge, say.
In forming
a combined gravitational and electromagnetic
curvature equation it must however be remembered
that the curvature is no longer purely
gravitational in nature. At the micro-scale
of space-time there is severe curvature
due to mass carrying electric charge which is not
gravitational in nature.
Does Einstein's metric theory of Gravity remain
intact under the above derivation of the
electro-magnetic stress-energy tensor? Does the
equivalence principle still hold?
To answer these questions requires some
interpretation of the above derivation and
metric (\ref{metric}). Noting that the
derivation of the free-field electro-magnetic
stress-energy tensor was carried out with
a completely antisymmetric metric and that;
\[
g^A_{\alpha\beta}\;dx^{\alpha}dx^{\beta}=ds^2=0
\]
we can interpret the antisymmetric part of
metric (\ref{metric}) as a `co-moving' metric
in the light-cone frame of the photon; the null geodesic.
As noted above, one consequence of this is that
Lorentz scalars for macroscopic matter
remain invariant under the total
metric (i.e. the combination of
symmetric and anti-symmetric parts)
and we avoid the sort of problems
related to measuring rods and clocks that
led to so much criticism of the Weyl/Eddington
theory. (I believe that at one stage Einstein
himself attempted to construct a version of
the Weyl/Eddington theory `on the light
cone' to avoid the associated measurement
problems \cite{russian}).
The torsion I interpret as an essentially `local'
phenomenon in the vicinity of each individual photon.
(See \cite{four}\cite{Hehl} for similar ideas).
For $g=g_a + g_s$ the vanishing of the covariant
derivative of the metric employed in the derivation
of the connection means that the (strong)
equivalence principle applies to the
symmetric part of the metric which leads to
gravitation theory. The additional presence of
a phase factor at the quantum scale will be
expected to vanish for bulk matter as might
be expected in the classical limit. Thus
the classical theory of general relativity
remains intact.
Notice that (using eq (\ref{deriv}) and the definition of
$A^{\mu}$);
\[
\sigma^{\alpha\beta}_{\;\;\;\;,\alpha}
=-i\sigma^{\alpha\beta}_{\;\;\;\;,\gamma}
\,\sigma^{\gamma}_{\;\;\alpha}
=+i\sigma^{\gamma}_{\;\;\alpha}
\sigma^{\alpha\beta}_{\;\;\;,\gamma}
=-\sigma^{\alpha\beta}_{\;\;\;\;,\alpha}
\]
so that $\partial_{\mu}A^{\mu}=0$ and the
connection imposes the Lorentz condition.
This is the constraint in Proca's equation
which allows torsion for $m\neq0$ \cite{four}, and which the
co-moving metric makes implicit at $m=0$.
\section{Introduction}
Online market places, where people sell goods to other participants, are one of the most popular applications on the internet. eBay had a revenue of 2.6 billion US dollars in the first quarter of 2019 alone~\cite{ebayq1} and is one of the top 10 internet companies in terms of revenue in the world~\cite{largestinternet}.
However, existing online market places are exclusively centralized, i.e., run by one provider that has access to the complete data of all users. This universal knowledge of user behaviour in the system constitutes a serious privacy issue. The provider might abuse its knowledge about users' preferences to manipulate them. Even if the provider does not purposefully mislead users, accidental publication of user data~\cite{ebayprivacy} can enable criminals to do so.
Furthermore, online market places are known for fraud~\cite{ebayaccused}. While reputation systems (e.g., \cite{resnick2002trust,josang2002beta,aggarwal2016recommender}) can help to counteract fraud, they generally meet their match in a Sybil attack~\cite{douceur2002sybil}: One user inserts multiple seemingly distinct identities in the system who then boost each others' reputation.
Previous work on mitigating Sybil attacks relies on puzzles~\cite{borisov2006computational}, detection of malicious communities~\cite{danezis2009sybilinfer}, or the existence of real-world trust relationships~\cite{mittal2012x}. Puzzles require the questionable assumption that the malicious party has resources similar to those of a normal user. The latter two approaches assume that communities of Sybils exhibit certain structures, such as few connections to non-Sybil nodes. It is unclear how valid these assumptions are, and hence the guarantees provided by the proposed approaches are limited.
In this work, we both address the privacy issue caused by centralization and offer protection against Sybil attacks.
Our system, \emph{MarketPalace}, relies on a centralized authority only for registration. During registration, the server checks whether a user is already in the system leveraging self-sovereign identity (SSI)~\cite{MUHLE201880}. More precisely, when joining the system, a user has to provide a hash of a uniquely identifying attribute, verified by \emph{I Reveal
My Attributes (IRMA)~\cite{irmadocs}}, a self-sovereign identity solution.
If someone previously used the same hash, the server denies the user access.
Otherwise, they can then join the system and trade using a peer-to-peer network. We leverage \emph{libp2p} and the \emph{InterPlanetary FileSystem (IPFS)}~\cite{dias2016distributed} to allow users to post and remove listings, bid, chat, and negotiate the price in a decentralized manner.
We implemented an initial prototype of the system. Our evaluation indicates that the prototype propagates listings at acceptable speeds for small user groups. Hence, it is particularly useful for trading goods locally.
Our system is the first to leverage SSI solutions to achieve Sybil resistance in a decentralized system.
\section{Background}
In this section, we first give an introduction to Self-Sovereign Identity. This is followed by a brief explanation of the InterPlanetary FileSystem, which we use to build a decentralized marketplace.
\subsection{Self-Sovereign Identity}
\label{SSI}
In this age of digital information, online identification has become increasingly important. Various data breaches (e.g. the Cambridge Analytica scandal \cite{8436400} and the Sony PlayStation Network breach \cite{10.1007/978-3-319-18621-4_3}) show that having service providers act as central authorities raises privacy concerns.
Self-Sovereign Identity (SSI) is an identity management system, which allows individuals to own and control their digital identity \cite{MUHLE201880}. C. Allen proposed 10 requirements that a system has to fulfill to be considered an SSI system~\cite{criteria-10}.
These requirements can be further grouped into three categories: \textit{security, controllability} and \textit{portability} as presented in \autoref{tab:sovrin} \cite{tobin2016inevitable}.
\begin{table}[]
\centering
\begin{tabular}{ | c | c | c | }
\hline
\textbf{Security} & \textbf{Controllability} & \textbf{Portability} \\
\hline
Protection & Existence & Interoperability\\
\hline
Persistence & Persistence & Transparency\\
\hline
Minimisation & Control & Access \\
\hline
& Consent & \\
\hline
\end{tabular}
\caption{C. Allen's ten principles grouped in three categories.}
\label{tab:sovrin}
\end{table}
In essence, the \textit{security} requirement is to protect user data from unauthorized access and to minimize the data exposed even to authorized parties. For example, when a user wants to buy alcohol, proving that he or she is over 18\footnote{In the Netherlands, the minimum age to buy alcohol is 18} suffices. There is no need to expose the complete identity or even the exact age. \textit{Controllability} means that users must be in control of who can see and access their data. No data should be accessed without the user's consent. Finally, \textit{portability} here means that users must be able to use their digital identity wherever they want, independently of other services.
Q. Stokkink and J. Pouwelse argued that claims are worth nothing if they cannot be shown to hold true \cite{criteria-11}. This introduces the eleventh property, \textit{provability}. In conclusion, the ten properties described by C. Allen, accompanied with the property of claims being provable, form the list of features that a profound SSI solution is ought to have.
\subsection{InterPlanetary FileSystem (IPFS)}
IPFS is a distributed file storage system. Anyone can upload files to IPFS; after uploading, the files are initially stored solely on the uploader's own node. The nodes in the network are connected so that it is possible to search for content across the complete network.
IPFS identifies content based on its hash, called the Content ID.
If a user wants to access content, they have to search for it using the respective Content ID. Based on the Content ID, IPFS locates a user that stores the requested content if at least one such user exists and is currently online. By default, content is replicated on requesting nodes but there is no automatic replication for content that has not been requested.
In order to realize communication between nodes, IPFS has an internal networking library called libp2p. The library provides
two functionalities that are useful for establishing a market place:
\begin{itemize}
\item Establishing and managing connections
\item Peer discovery using Kademlia~\cite{maymounkov2002kademlia}
\end{itemize}
Libp2p enables us to connect to other users in the network and to search for specific users. IPFS allows us to store information such as listings and bids.
\section{Design and implementation}
\label{sec:design}
In this section, three main aspects of the MarketPalace design and their implementation are laid out: registration, keys, and the market. For registration, we explain the one-time-entrance process leveraging self-sovereign identity. Afterwards, we elaborate on the different usages of keys in the system. Lastly, the design of the P2P market system is explained.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.26]{images/overviewComponents.png}
\caption{Overview of the different components of MarketPalace.}
\label{fig:sysoverview}
\end{figure}
In a nutshell, a user participates in MarketPalace as follows: Unregistered users first need to complete the registration process. Here, they have to disclose the hash of uniquely identifying and verified attributes to a centralized authority.
These attributes are used to determine the uniqueness of the user. Once this uniqueness is established, the user generates a key-pair that is then used in the decentralized market (see \autoref{fig:sysoverview}). The centralized server signs the public key of the user to indicate that the user completed the registration process.
Afterwards, users can join the market place. They can now post and remove listings, bid, chat and negotiate prices directly with the other participants.
\subsection{Registration}
\label{sec:design-registration}
The most straightforward and likely most effective approach at eliminating or at least mitigating Sybil attacks is trusted certification~\cite{sybilattacksurvey}.
Trusted certification requires a centralized authority that ensures that each user is assigned exactly one identity. Preferably, we want to avoid manual or in-person identification processes because they hinder scalability. Furthermore, we want to have a third-party identity platform that can provide us with immutable uniquely identifiable information.
When identifiable information is retrieved from a third-party identity platform, its uniqueness in the context of our market has to be verified. This requirement introduces the need for a database to verify that the information has not been previously used. In this section, both the identity platform and the database are discussed in more detail.
\subsubsection{IRMA}
\label{sec:irma}
We decided to use IRMA as our identity platform for two reasons.
Our first reason for choosing IRMA is that it is an attribute-based identity platform. It relies on Idemix technology and uses personal smart cards as carriers of credentials and attributes \cite{alpar2013credential}. Thus, IRMA allows users to disclose potentially unique attributes.
Second, IRMA meets ten out of eleven requirements (see \autoref{SSI}) to be a profound SSI solution. It lacks the \textit{persistence} property, as all attributes are stored solely on a user's device, meaning attributes need to be acquired again after getting a new device. We nevertheless found that IRMA is the best fit for our system as its developers are currently working on integrating a back-up system to provide persistence.
After users have downloaded the mobile IRMA application, they can collect attributes such as date of birth and social security number (SSN)\footnote{Since IRMA operates in the Netherlands, this attribute is called BSN}. Users can import their SSN into the IRMA app with their DigiD (Dutch Digital Identity service). When a fraudster has DigiD credentials of someone else, he or she can still impersonate this person and bypass our authentication mechanism. However, this type of fraud can also happen with a physical passport and is hence out-of-scope for this project.
In order to use IRMA, we set up a centralized server, the door server. Users can then use IRMA to reveal attributes to the server. The server verifies that these attributes are correct, i.e., authenticated by IRMA, and unique.
Our registration process is implemented in multiple components. First, we configured the IRMA server. The IRMA server makes it possible to perform IRMA sessions, such as disclosing attributes. The \textit{irmago}\footnote{\url{https://github.com/privacybydesign/irmago}} implementation by Privacy by Design holds the \textit{irmaserver} library from which you can start the server. Next to the \textit{irmaserver}, \textit{irmago} also holds a client library \textit{irmaclient}. Since we decided to implement a web interface, we decided to not use the Go implementation of the \textit{irmaclient}, but rather use the \textit{irmajs}\footnote{\url{https://github.com/privacybydesign/irmajs}} client of Privacy by Design.
The functionality of the registration process is then implemented by \textit{doorserver.js}, to which the client-side web pages are connected. The \textit{doorserver} is an HTTPS server implemented in NodeJS. Its only functionality is the communication between the sign-up webpage and the NodeJS components that handle the registration process. The sign-up webpage opens a socket connection with the \textit{doorserver} and sends a message that initiates the disclosure process when the user clicks the \textit{Disclose attributes} button on the web interface. The \textit{doorserver} in turn sends multiple messages to the client-side during the registration process. For example, the creation of the Quick Response (QR) code is initiated by \textit{doorserver}, but also the generated keys are sent from the \textit{doorserver} to the sign-up webpage.
\subsubsection{Hashed attributes database}
After a new user discloses their SSN, it is stored in a remote Amazon Relational Database using an SQL client. Since the purpose of this database is solely uniqueness verification, we use a deterministic one-way hash function (SHA-256) and store this hash instead of the plaintext. Before actually storing the hash, the server performs a check whether it already exists in the database. If so, the user is likely trying to register twice. As we want to prevent users from registering multiple times, the server denies the user access to the market place.
The database for storing hashes has one table \textit{hash} with one column \textit{idhash}. Communication between our application and the database is handled by a script \textit{database\_script.js}. The functionality of the script is to make a connection with the database, look up the hash and insert the hash if it has not been inserted previously.
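The core of the script fits in a few lines. Here is a minimal Python sketch of the same logic (the actual implementation is NodeJS against Amazon RDS; the \texttt{sqlite3} backend and function name are illustrative stand-ins, while the table and column names follow the text above):
\begin{verbatim}
import hashlib
import sqlite3

def register_attribute(conn: sqlite3.Connection, ssn: str) -> bool:
    """Return True iff the hashed attribute was unused and is now stored."""
    # Deterministic one-way hash; only the digest ever reaches the database.
    digest = hashlib.sha256(ssn.encode("utf-8")).hexdigest()
    if conn.execute("SELECT 1 FROM hash WHERE idhash = ?",
                    (digest,)).fetchone():
        return False                      # hash seen before: deny registration
    conn.execute("INSERT INTO hash (idhash) VALUES (?)", (digest,))
    conn.commit()
    return True
\end{verbatim}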
Having a database containing all the hashed attributes introduces a single point of failure. However, such a failure only implies that new users cannot sign up. Users that are already registered can continue using the market place even if the registration server fails.
\subsection{Keys}
In a market place, sellers and buyers have to communicate. In the interest of the user, this communication should be confidential. In order to make use of the market place, it is furthermore essential that each participant holds ownership over an object that can be used to prove the authenticity of the sender and receiver. Thus, we make use of RSA keys to provide both confidentiality and authentication.
In the following, we first explain how users import keys they previously generated on their own devices. Then we briefly explain how the door server signs the user's public key with MarketPalace's private key in order to verify that the user has properly registered. Finally, we discuss how users should authenticate after finishing registration.
\subsubsection{Key import}
\label{sec:keyimport}
After we ensured that a user's hashed attributes do not exist in our database yet, the user can import a key-pair. The user creates or uses a pre-existing RSA key-pair and imports the public key $PK_{user}$ during the IRMA session.
During this session, the public key is signed with the private key of the central MarketPalace server if the user submits a unique attribute. After the creation of the signature, the signed key will be returned to the user.
The session is completed and with the private key $SK_{user}$, the public key $PK_{user}$, and the signature $Sig$ on the public key, a user can now make use of the market.
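The server-side signing step is conceptually simple. A hedged sketch using the Python \texttt{cryptography} package follows; RSA-PSS with SHA-256 is our assumption, as the paper does not fix padding or hash choices:
\begin{verbatim}
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def sign_user_key(server_sk, user_pk_pem: bytes) -> bytes:
    # The door server vouches for a registered user by signing PK_user;
    # market peers later verify this with the server's well-known public key.
    return server_sk.sign(
        user_pk_pem,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
\end{verbatim}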
\subsubsection{Password}
User keys need to be retrieved from the device's storage every time the user makes use of the market place. To decrease the chance of keys leaking, the user provides a password for encrypting the private key.
However, for our evaluation, we decided to simplify this mechanism to make development more feasible. In our current system, instead of using a password, we simply ask the user to enter the first 5 characters of the private key; if they match the stored private key, we allow the user to enter the market and make use of that key.
\subsection{Market}
We leveraged libp2p and IPFS to implement our marketplace.
The marketplace uses a P2P network architecture. A P2P network architecture is a good fit for a customer-to-customer market place since the communication can go directly between both parties without interference from a third party. Users are in full control of what they share with others. Another reason why we chose the P2P architecture is because it is cost-efficient. In a traditional centralized market place, all of the data is handled on the server. This brings along costs for the network infrastructure.
As enabled by IPFS, we only run bootstrap servers. These bootstrap servers are needed in order for peers to initially find other peers. Once they have established a connection with other peers, this server is no longer needed. This means that the bootstrap servers do not store much data and are not very demanding of resources, which makes them cheap to run.
Since this is a P2P network, all communication is between peers; apart from bootstrapping, no servers are involved. Listings are pushed periodically to nodes close to the current peer based on their public key. Listings of other users are also sent along with this update. Since listings are replicated and stored on multiple nodes,
they remain available even if a node is not online.
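The push mechanism can be summarized in a few lines. A minimal Python sketch follows; \texttt{known\_listings}, \texttt{closest\_peers} and \texttt{send} are hypothetical stand-ins for the corresponding libp2p/Kademlia functionality, and the period and fan-out values are the ones used in our evaluation below:
\begin{verbatim}
import threading

PUSH_PERIOD = 90.0   # seconds between pushes (see evaluation section)
FANOUT = 20          # push to the 20 closest peers

def start_push_loop(node):
    def push():
        listings = node.known_listings()         # own + replicated listings
        for peer in node.closest_peers(FANOUT):  # XOR distance on node IDs
            node.send(peer, listings)
        threading.Timer(PUSH_PERIOD, push).start()
    threading.Timer(PUSH_PERIOD, push).start()
\end{verbatim}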
\begin{figure}[h!]
\centering
\includegraphics[ scale=0.35]{images/overview.JPG}
\caption{Overview of the three layers of the market component with their corresponding classes.}
\label{fig:systemoverview}
\end{figure}
In general, we have three levels of packages (see \autoref{fig:systemoverview}). Level 3 is the highest level; it contains API implementations so that it can be called by the user interface. Level 2 contains the \textit{managers} package. This package contains object-oriented classes, each with its own responsibility. For example, \textit{listingManager} handles the listings, \textit{ipfsManager} manages P2P storage and \textit{chatManager} manages chat channels. These classes are accessible and provide service to the level above. Level 1 consists of lower-level classes such as \textit{connection}, which takes care of connection establishment and maintenance.
\section{Security}
\label{sec:impl-security}
Since we are building a fraud-resistant market place, we need to address the security of our system. In various parts of the system, different security decisions were made, which will be laid out in this section.
\subsection{IRMA server options}
The security of the IRMA server depends on the configuration and location of the server. When the IRMA server sends attributes over the internet, we need to provide confidentiality and integrity to protect the privacy of the user.
For this reason, we decided to handle the disclosure session results on a server that is hosted on the same machine as the IRMA server. This functionality is implemented with the door server, as explained in \autoref{sec:irma}. IRMA supports API tokens to restrict sessions to authorized requesters, and signing session requests with a key is also possible \cite{irmadocs}. However, because the session requester is a trusted entity, we chose \textit{none} as \textit{authmethod}. The session results can be handled securely on the backend.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/marketpalacestructureNEW.png}
\caption{Schematic representation of the registration process of MarketPalace and the protected connections in the IRMA disclosure process (blue).}
\label{fig:systemtls}
\end{figure}
We now specify how we protect communication that is not on the same machine, primarily the communication between the user and the door server.
As can be seen in \autoref{fig:systemtls}, the session request between the user and the door server is initiated in step 2 after the user has clicked a button in step 1. The door server creates a QR code and sends it to the front-end where the user can scan it (step 3). In step 4, the personal attributes are sent from the user's IRMA application to our IRMA server. In order to provide confidentiality and integrity, IRMA has implemented a TLS-connection between the server and mobile application, which can be enabled by adding a key and certificate to the configuration of the IRMA server. We hence achieve privacy-preserving disclosure of personal attributes through the use of TLS and only internal communication between front-end and door server.
The API calls made in the decentralized market place and the corresponding replies, which are typically listings, are not encrypted. That is because we consider listing data public and hence not privacy-sensitive. In the future, however, listings could be encrypted as well with the public keys of users.
\subsection{Signed listings}
We use RSA to sign listings in order to authenticate the owner. Listings are signed by users before they are sent over the network. The listings are first hashed to speed up the signing process. Afterwards, they are signed by the private key of the publishing user. The receiver can verify that the user that claims to be the owner of the listing is actually the one that sent the packet~\cite{milanov}.
\subsection{Market security}
We require all communication, including e.g., chats, to be encrypted and authenticated.
To achieve these two security goals, we use the signed RSA keys generated during registration.
A sender encrypts all messages with the public key of the receiver. The sender furthermore includes its public key with the signature received during registration in the message. The complete message is signed by the sender.
Upon arrival of a message, the receiver decrypts the message. They then check if the signature on the included public key is valid. If that is the case, they accept that the sender is a valid user. If not, they drop the message.
After verifying the validity of the included public key, the receiver verifies that the signature on the message indeed is valid using the included public key. If the signatures are valid, they accept the message. Otherwise, the message is discarded.
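A condensed sketch of this envelope logic follows (Python with the \texttt{cryptography} package; the OAEP/PSS padding choices are our assumptions, direct RSA encryption presumes short messages — a production system would wrap a symmetric key instead — and message parsing is omitted, so \texttt{accept} receives pre-parsed fields):
\begin{verbatim}
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

def send(sender_sk, sender_pk_pem, reg_sig, receiver_pk, plaintext):
    # Encrypt for the receiver, attach our registered public key and the
    # door server's signature on it, then sign the whole body.
    body = receiver_pk.encrypt(plaintext, OAEP) + sender_pk_pem + reg_sig
    return body, sender_sk.sign(body, PSS, hashes.SHA256())

def accept(server_pk, receiver_sk, body, sig, sender_pk, sender_pk_pem,
           reg_sig, ciphertext):
    # Both verify() calls raise InvalidSignature on failure, so an
    # invalid message is dropped before decryption.
    server_pk.verify(reg_sig, sender_pk_pem, PSS, hashes.SHA256())
    sender_pk.verify(sig, body, PSS, hashes.SHA256())
    return receiver_sk.decrypt(ciphertext, OAEP)
\end{verbatim}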
\section{Performance evaluation}
In this section, we take a look at various actions in the network and their performance. Note that the experiments are too small to model network latency in detail; latency modeling is not the focus of this paper. Conclusions are drawn from the results of the experiments on these actions.
In order to have the most recent overview of all the available listings in the market place, it is important that new information is received by all users as soon as possible. Therefore, we considered answering the following question: how long does it take to add and receive a listing?
The time required for these operations depends on the system; it is affected by distributed object location and routing as well as content caching, replication and migration~\cite{Androutsellis-Theotokis:2004:SPC:1041680.1041681}.
The time to add a listing is the time between hitting the \textit{Add listing} button and the moment the listing is received by a randomly chosen peer in the network. That is, assuming the page refreshes every second, it is the time in seconds until the listing shows up in the receiver's overview of all listings. The time to retrieve a listing is the time between the moment a peer sends a request for listings to another peer and the moment of actually receiving them.
In order to answer this question, we set up an experiment where we measured the time between two moments: the moment node \textit{u} initiates adding a listing into the network and the moment a randomly chosen node \textit{v} receives the same listing. To do so, we set up four nodes, manually carried out the steps, and measured the difference between the sending and receiving times of two nodes. We ran this test 100 times.
Before we discuss the results, we first need to understand the factors that can potentially influence the measured time. When a new listing is created, it is not necessarily pushed into the network right away. A timer within every node regulates the frequency of pushing listings (including newly created ones) into the network. The purpose of this design decision is to balance the load in a larger network. We call the sender node \textit{u} and the receiver node \textit{v}. Although the timer would only be relevant in large networks, we decided to keep it in this experiment; it heavily influences our results, and removing this parameter would improve the measured performance. We conclude that there are theoretically four parameters governing adding and receiving a listing on the P2P network:
\begin{enumerate}
\item The time remaining on the timer of \textit{u} until the listings are pushed into the network.
\item The cumulative time remaining on the timers of other nodes on the route from \textit{u} to \textit{v} before they push the listings to the next node excluding the time traveling through the physical network medium.
\item The total amount of nodes in the network.
\item The total time it takes for the network packets containing the listing to travel through the physical network medium.
\end{enumerate}
The first factor depends on the time remaining on the timer within \textit{u}. After the timer expires, the new listing is pushed to the network. In our evaluation, we set the timer to 90 seconds.
The second factor depends on the cumulative time remaining in the timers of the nodes that lay on the propagation route from \textit{u} to \textit{v}. This time excludes the time it takes for the signal to travel over the physical medium. For example, the second factor for a network with a route through two nodes before it reaches \textit{v}, with the remaining timer of the first node being 20 seconds and the remaining timer of the second node being 15 seconds, would be 35 seconds. However, this second factor is excluded from our experiment since we only send directly from \textit{u} to \textit{v}. In other words, \textit{v} is in the list of the 20 closest peers of \textit{u}.
The third factor also plays a role since every node pushes the listings to the 20 closest neighbors in the network. If the network is large, more hops are required in order to reach the furthest node in the network.
The fourth factor depends on the cumulative time it takes for a signal to travel over the physical medium on the propagation route from \textit{u} to \textit{v}. Signals traverse the physical medium extremely quickly, so this factor can be considered negligible in our experiments.
\begin{table}[]
\centering
\begin{tabular}{ | c | c | }
\hline
\textbf{Mean} & 36.7 \\
\hline
\textbf{Median} & 32.5 \\
\hline
\textbf{Standard deviation} & 26.6 \\
\hline
\textbf{Mode} & 1 \\
\hline
\end{tabular}
\caption{Results of 100 measurements (times in seconds) with four nodes in the network.}
\label{tab:my_result_label}
\end{table}
The mean at 36.7 (see \autoref{tab:my_result_label}) shows that it takes 36.7 seconds on average until another peer can see a new listing. Given that listings usually remain in the systems for days, such an initial delay seems acceptable.
Assuming an approximately normal distribution, the standard deviation of 26.6 indicates that the chance that adding and receiving a listing takes more than 89.9 seconds (the mean plus two standard deviations) is less than 5\%; given the skew visible in the data (mode 1, median 32.5), this is only a rough estimate. This number could be useful in determining time-outs: if there is no acknowledgement within 89.9 seconds, we may consider the operation failed and re-send the listing.
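The time-out rule amounts to a two-line computation (a sketch; \texttt{load\_measurements} is a hypothetical helper returning the 100 measured times):
\begin{verbatim}
import statistics

times = load_measurements()      # 100 add-to-receive times, in seconds
mean = statistics.mean(times)    # 36.7 in our run
std = statistics.stdev(times)    # 26.6 in our run
TIMEOUT = mean + 2 * std         # ~89.9 s; re-send if no ack by then
\end{verbatim}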
These results give insight into the case where the two chosen nodes are neighbors. In the future, we will run the experiment with more than 20 nodes in the network, which decreases the chance that two arbitrarily chosen nodes \textit{u} and \textit{v} are neighbors. Running the experiment in a larger network can help us tweak parameters such as the duration of the timer and the number of neighbors to push to.
\section{Conclusion}
This paper presented MarketPalace, a novel online market place. MarketPalace replaces the central provider with a P2P Network, thus avoiding a global observer of all user actions. As decentralized systems are highly vulnerable to Sybil attacks, we leverage self-sovereign identity to ensure that each user can join with at most one node. Our evaluation only considered networks of a few nodes, indicating that our solution is feasible for trading in a small geographical region.
In the future, we aim to build a reputation system on top of MarketPalace.
Furthermore, we want to extend the scale of the network. Here, we consider two aspects: First, we aim to optimize delays by reconsidering the use of timers in the propagation. Second, we want to explore how to group a large network into smaller parts based on locality, to account for the fact that users tend to trade with people in their vicinity.
\bibliographystyle{IEEEtran}
\section{Introduction}
In recent years, considerable effort has been directed towards identifying and classifying
phase transitions far from thermal equilibrium. Such nonequilibrium transitions can
be found in a wide variety of problems in biology, chemistry, and physics. Examples
include population dynamics, the spreading of epidemics, surface chemical reactions, catalysis,
granular flow, traffic jams as well as growing surfaces and interfaces (see, e.g.,
\cite{Liggett85,ZhdanovKasemo94,SchmittmannZia95,MarroDickman99,Hinrichsen00,Odor04,Luebeck04,TauberHowardVollmayrLee05}).
Nonequilibrium phase transitions are characterized by large scale fluctuations
and collective behavior in space and time very similar to the behavior at equilibrium
critical points.
A particularly interesting situation arises when an equilibrium or nonequilibrium
many-particle system is defined on a randomly diluted lattice. Then, two distinct types of
fluctuations are combined, \emph{viz.} the dynamical fluctuations of the
many-particle system and the static geometric fluctuations due to lattice percolation
\cite{StaufferAharony_book91}. In
equilibrium systems, their interplay gives rise to novel universality classes for
the thermal \cite{Bergstresser77,StephenGrest77,GefenMandelbrotAharony80}
and quantum \cite{SenthilSachdev96,Sandvik02,VojtaSchmalian05b,WangSandvik06} phase transitions
across the lattice percolation threshold.
In this paper, we investigate the interplay between dynamical fluctuations and geometric
criticality
in nonequilibrium many-particle systems. We focus on a particularly well-studied class of
transitions, the so-called absorbing state transitions, which separate active, fluctuating steady
states from inactive (absorbing) states in which fluctuations cease completely.
The generic universality class for absorbing state transitions is directed percolation
(DP)
\cite{GrassbergerdelaTorre79}. It is conjectured \cite{Janssen81,Grassberger82} to be valid
for all absorbing state transitions with scalar order parameter and no extra symmetries
or conservation laws. In the presence of symmetries and/or conservation laws, other universality
classes can be realized, such as the DP$n$ class in systems with $n$ symmetric absorbing states
\cite{Hinrichsen97}.
For definiteness, we consider the contact process \cite{HarrisTE74}, a prototypical system
in the DP universality class. We show that the contact process on a randomly site or bond
diluted lattice has two different nonequilibrium phase transitions: (i) a generic disordered
DP transition at weak dilutions (below the lattice percolation threshold) driven by the dynamic
fluctuations of the contact process and (ii) the
transition across the lattice percolation threshold driven by the geometric criticality of
the lattice.
The former transition has been investigated for a number of years
\cite{Kinzel85,Noest86,MoreiraDickman96,Janssen97}; it has recently reattracted considerable attention because
it is governed by an exotic infinite-randomness fixed point
\cite{HooyberghsIgloiVanderzande03,HooyberghsIgloiVanderzande04,VojtaDickison05,VojtaFarquharMast09}.
In contrast, the latter transition has received much less attention.
Here, we develop a theory for the nonequilibrium transition across the lattice
percolation threshold by combining classical percolation theory with the properties of
the supercritical contact process on a finite-size cluster. We show that the critical
point is characterized by ultraslow activated (exponential) dynamical scaling and
accompanied by strong Griffiths singularities. The scaling scenario is qualitatively
similar to the generic disordered DP transition, but with different critical exponent
values. To confirm the universality of this exotic scenario, we also investigate the
generalized contact process with $n$ (symmetric) absorbing states \cite{Hinrichsen97}.
This is a particularly interesting problem because the generic transition of the
disordered generalized contact process does \emph{not} appear to be of
infinite-randomness type
\cite{HooyberghsIgloiVanderzande03,HooyberghsIgloiVanderzande04}.
The paper is organized as follows. In Sec.\ \ref{sec:processes}, we introduce our
models, the simple and generalized contact processes on a randomly diluted lattice.
We also discuss the phase diagrams. In Sec.\ \ref{sec:percolation} we briefly summarize
the results of classical percolation theory to the extent necessary for our purposes.
Section \ref{sec:theory} contains the main part of the paper, the theory of the
nonequilibrium transition across the lattice percolation threshold. Section
\ref{sec:generality} is devoted to the question of the generality of the arising
scaling scenario. We conclude in Sec.\ \ref{sec:conclusions}. A short account of part of
this work has already been published in Ref. \cite{VojtaLee06}.
\section{Simple and generalized contact processes on diluted lattices}
\label{sec:processes}
\subsection{Contact process}
The clean contact process \cite{HarrisTE74} is a prototypical system in the DP
universality class. It is defined on a $d$-dimensional hypercubic lattice. (We consider
$d\ge2$ since we will be interested in diluting the lattice.) Each lattice site
$\mathbf{r}$ can be active (infected, state A) or inactive (healthy, state I).
During the time evolution of the contact process which is a continuous-time Markov
process, each active site becomes inactive at a rate $\mu$ (``healing'') while each inactive
site becomes active at a rate $\lambda m / (2d)$ where $m$ is the number of active nearest
neighbor sites (``infection''). The infection rate
$\lambda$ and the healing rate $\mu$
are external parameters. Their ratio controls the behavior of the contact process.
For $\lambda \ll \mu$, healing dominates over infection, and the absorbing state without any active
sites is the only steady state of the system (inactive phase). For sufficiently large infection rate
$\lambda$, there is a steady state with a nonzero density of active sites (active phase).
These two phases are separated by a nonequilibrium phase transition in the DP universality
class at a critical value $(\lambda/\mu)_c^0$ of the ratio of the infection and healing
rates.
The basic observable in the contact process is the average density of active sites
at time $t$,
\begin{equation}
\rho(t) = \frac 1 {L^d} \sum_{\mathbf{r}} \langle n_\mathbf{r}(t) \rangle
\label{eq:rho_definition}
\end{equation}
where $n_\mathbf{r}(t)=1$ if the site $\mathbf{r}$ is active at time $t$ and $n_\mathbf{r}(t)=0$
if it is inactive. $L$ is the linear system size, and $\langle \ldots \rangle$ denotes the
average over all
realizations of the Markov process. The longtime limit of this density (i.e., the steady
state density)
\begin{equation}
\rho_{\rm stat} = \lim_{t\to\infty} \rho(t)
\label{eq:OP_definition}
\end{equation}
is the order parameter of the nonequilibrium phase transition.
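For illustration, here is a deliberately naive continuous-time Monte Carlo sketch of the clean contact process in $d=2$ (the rates follow the definition above, while the lattice size, the fully active initial condition, and the sequential update scheme with a deterministic mean waiting time are our choices):
\begin{verbatim}
import random

def contact_process_density(L=64, lam=2.0, mu=1.0, t_max=200.0):
    """Estimate rho(t_max) for the clean contact process in d=2."""
    # Start from the fully active lattice; periodic boundary conditions.
    active = {(x, y) for x in range(L) for y in range(L)}
    t = 0.0
    while active and t < t_max:
        t += 1.0 / (len(active) * (lam + mu))  # mean waiting time per event
        x, y = random.choice(tuple(active))
        if random.random() < mu / (lam + mu):  # healing, rate mu per site
            active.discard((x, y))
        else:                                  # infection attempt, rate lam,
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            active.add(((x + dx) % L, (y + dy) % L))  # split over 2d neighbors
    return len(active) / L**2                  # density of active sites
\end{verbatim}
Sweeping $\lambda/\mu$ and averaging over many runs locates the clean critical point; for the square lattice the numerically established value is $(\lambda/\mu)_c^0\approx1.6488$.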
\subsection{Generalized contact process}
Following Hinrichsen \cite{Hinrichsen97}, we now generalize the contact process
by introducing $n$ different inactive states I$_k$ with $k=1 \ldots n$ ($n=1$
corresponds to the simple contact process). Here,
$k$ is sometimes called the ``color'' label. The time evolution is
again a continuous-time Markov process. The first two rates are equivalent to those of
the simple contact process: An active site can decay into each of the inactive states
I$_k$ with rate $\mu/n$, and a site in any of the inactive states becomes active at
a rate $\lambda m / (2d)$ with $m$ the number of active nearest-neighbor sites.
To introduce competition between the different inactive states, we define a third rate:
If two neighboring sites are in \emph{different} inactive states, each can become
active with a rate $\sigma$. This last rule prevents the boundaries between domains
of different inactive states from sticking together permanently. Instead they can
separate, leaving active sites behind.
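The single-site rates can be collected in one schematic function (a sketch; state $0$ denotes active, $k=1\ldots n$ the inactive colors, and whether the boundary rate $\sigma$ is counted once or once per differently-colored neighbor is a normalization choice — here once):
\begin{verbatim}
def transition_rates(state, neighbors, lam, mu, sigma, n, d):
    """Rates for all transitions of one site, generalized contact process."""
    rates = {}
    if state == 0:                               # active site A
        for k in range(1, n + 1):
            rates[k] = mu / n                    # decay into inactive state I_k
    else:                                        # inactive site I_state
        m = sum(1 for s in neighbors if s == 0)  # number of active neighbors
        r = lam * m / (2 * d)                    # ordinary infection
        if any(s not in (0, state) for s in neighbors):
            r += sigma                           # boundary activation
        if r > 0:
            rates[0] = r                         # total rate to become active
    return rates
\end{verbatim}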
The properties of the clean generalized contact process have been studied in some detail
in the literature \cite{Hinrichsen97,HooyberghsCarlonVanderzande01}. If the boundary
activation rate $\sigma$ vanishes, the behavior becomes identical to the simple contact
process for all $n$. (This becomes obvious by simply dropping the color label and treating all
inactive sites as identical.) For $\sigma>0$, the system becomes ``more active'' than the
simple contact process, and the universality class changes. In one space dimension, a
phase transition exists for $n=1$ (in the DP universality class) and for $n=2$
(in the $Z_2$-symmetric directed percolation (DP2) class which coincides with
the parity-conserving (PC) class in one dimension \cite{Hinrichsen00}). For $n\ge 3$ the
system is always in the active phase, and no phase transition exists at finite values of
$\lambda,\mu$ and $\sigma$.
The generalized contact process in higher space dimensions presumably behaves in an
analogous fashion:
There is a DP transition for $n=1$ while the properties for $n>1$ are different.
For sufficiently large $n$, the system is always active
\footnote{For $d=2, n=2$, Hinrichsen \cite{Hinrichsen97} finds a
mean-field transition while our own simulations suggest that the system is
always active. Since this difference is of no importance for the present paper,
it will be addressed elsewhere.}.
\subsection{Lattice dilution}
We now introduce quenched site dilution by randomly removing each lattice site with
probability $p$. (Bond dilution could be introduced analogously.)
As long as the vacancy concentration $p$ remains below the lattice percolation
threshold $p_c$, the lattice consists of an infinite connected cluster of sites accompanied
by a spectrum of finite-size clusters. In contrast, at dilutions above $p_c$, the lattice
consists of disconnected finite-size clusters only.
Figure \ref{fig:pds} schematically shows the resulting phase diagrams of the
nonequilibrium process as a function of the infection rate $\lambda$ and dilution $p$,
keeping the healing rate $\mu$ and the boundary activation rate $\sigma$, if any,
constant.
\begin{figure}
\includegraphics[width=7.cm,clip]{pd2.eps}\\[2ex]
\includegraphics[width=7.cm,clip]{pd3.eps}
\caption{(Color online:) Schematic phase diagrams for the simple and generalized contact
processes on a diluted lattice in dimensions $d\ge 2$ as a function of dilution $p$ and
inverse infection rate $\lambda^{-1}$ (healing and boundary activation rates $\mu$ and
$\sigma$ are fixed). Case (a) applies to systems that display a phase transition at
$\lambda_c^0$ in the absence of dilution. There is a multicritical point (MCP) at
$(p_c,\lambda_*)$ separating the generic transition from the lattice percolation
transition. Case (b) is for systems that are always active in the absence of dilution.}
\label{fig:pds}
\end{figure}
Depending on the properties of the clean undiluted system, there are two qualitatively
different cases.
(a) If the undiluted system has a phase transition at a nonzero critical infection rate
$\lambda_c^0$, the active phase survives for all vacancy concentrations below the
percolation threshold, $p<p_c$. It even survives at the percolation threshold $p_c$ on
the critical percolation cluster because it is connected, infinitely extended, and its
fractal dimension $D_f$ is larger than unity. The critical infection rate $\lambda_c$
increases with increasing dilution $p$ to compensate for the missing neighbors, reaching
$\lambda_*$ at $p_c$. The active phase cannot exist for $p>p_c$ because the lattice
consists of finite-size clusters only, and the nonequilibrium process will eventually end
up in one of the absorbing states on any finite-size cluster. Thus, in case (a), our
system features two nonequilibrium phase transitions, (i) a generic (disordered)
transition for dilutions $p<p_c$, driven by the dynamic fluctuations of the
nonequilibrium process and (ii) the transition across the lattice percolation threshold
driven by the geometric criticality of the lattice. They are separated by a multicritical
point at $(p_c,\lambda_*)$ which was studied numerically in Ref.\ \cite{DahmenSittlerHinrichsen07}.
(b) If the undiluted system is always active (as for the generalized contact
process with a sufficiently high number of inactive states), the phase diagram is
simpler. The active phase covers the entire region $p\le p_c$ for all $\lambda>0$
($\lambda_*$ is formally zero)
while the inactive phase exists in the region $p>p_c$. There is no generic (disordered)
nonequilibrium phase transition, only the transition across the lattice percolation
threshold.
The focus of the present paper is the nonequilibrium phase transition across the
lattice percolation threshold that exists in both cases. In order to develop a theory
for this transition, we combine classical percolation theory with the properties
of the nonequilibrium process on a finite-size cluster. In the next section we therefore
briefly summarize key results of percolation theory.
\section{Classical percolation theory}
\label{sec:percolation}
Consider a regular lattice in $d$ dimensions. If each lattice site is removed with
probability $p$ \footnote{We define $p$ as the fraction of sites removed rather than
the fraction of sites present.}, an obvious question is whether or not the lattice is
still connected in the sense that there is a cluster of connected
(nearest neighbor) sites that spans the entire system. This question defines the
percolation problem (see Ref.\ \cite{StaufferAharony_book91} for an introduction).
In the thermodynamic limit of infinite system volume, there is a sharp boundary
between the cases of a connected or disconnected lattice. If the vacancy concentration
$p$ stays below the percolation threshold $p_c$, an infinite cluster of connected sites
exists (with a probability of unity). For $p>p_c$, an infinite cluster does not exist,
instead, the lattice consists of many disconnected finite-size clusters.
The behavior of the lattice for vacancy concentrations close to the percolation threshold
can be understood as a (geometric) continuous phase transition or critical phenomenon.
The order parameter is the probability $P_\infty$ of a site to belong to the infinite
connected percolation cluster. It is obviously zero in the disconnected phase ($p>p_c$)
and nonzero in the percolating phase ($p<p_c$). Close to $p_c$ it varies as
\begin{equation}
P_\infty \sim |p-p_c|^{\beta_c} \qquad (p<p_c)
\label{eq:percolation-beta}
\end{equation}
where $\beta_c$ is the order parameter critical exponent of classical percolation. Note
that we use a subscript $c$ to distinguish quantities associated with the classical
lattice percolation problem from those of the nonequilibrium phase transitions discussed
later. In addition to the infinite cluster, we also need to characterize the finite
clusters on both sides of the transition. Their typical size, the correlation or
connectedness length $\xi_c$ diverges as
\begin{equation}
\xi_c \sim |p-p_c|^{-\nu_c}
\label{eq:percolation-nu}
\end{equation}
with $\nu_c$ the correlation length exponent. The average mass $S_c$ (number of sites) of a
finite cluster diverges with the susceptibility exponent $\gamma_c$ according to
\begin{equation}
S_c \sim |p-p_c|^{-\gamma_c}~.
\label{eq:percolation-gamma}
\end{equation}
The complete information about the percolation critical behavior is contained in the cluster
size distribution $n_s$, i.e., the number of clusters with $s$ sites excluding the infinite
cluster (normalized by the total number of lattice sites). Close to the percolation
threshold, it obeys the scaling form
\begin{equation}
n_{s} ( \Delta) =s^{-\tau_c }f\left( \Delta s^{\sigma_c }\right) .
\label{eq:percscaling}
\end{equation}
Here $\Delta=p-p_c$, and $\tau_c $ and $\sigma_c$ are critical exponents.
The scaling function $f(x)$ is analytic for small $x$ and has a single maximum at
some $x_{\rm max}>0$. For large $|x|$, it drops off rapidly
\begin{eqnarray}
f(x) &\sim& \exp\left[- B_1 x^{1/\sigma_c}\right] \quad ~(x>0),
\label{eq:scaling-function-disconnected}\\
f(x) &\sim& \exp\left[- \left(B_2 x^{1/\sigma_c}\right)^{1-1/d}\right] \quad ~(x<0),
\label{eq:scaling-function-connected}
\end{eqnarray}
where $B_1$ and $B_2$ are constants of order unity. All classical percolation exponents are
determined by $\tau_c$ and $\sigma_c$ including the correlation lengths exponent
$\nu_c =({\tau_c -1})/{(d\sigma_c )}$, the order parameter exponent
$\beta_c=(\tau_c-2)/\sigma_c$, and the susceptibility exponent
$\gamma_c=(3-\tau_c)/\sigma_c$.
Right at the percolation threshold, the cluster size distribution does not contain a
characteristic scale. The structure of the critical percolation cluster is thus fractal
with the fractal dimension being given by $D_f=d/(\tau_c-1)$.
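As a simple illustration of these relations (our own addition for the reader's
convenience; the two-dimensional values $\tau_c=187/91$ and $\sigma_c=36/91$ are the
exactly known ones), the derived exponents follow from a few lines of Python:
\begin{verbatim}
# Derived percolation exponents from (tau_c, sigma_c); in d = 2 the
# values tau_c = 187/91 and sigma_c = 36/91 are known exactly.
d = 2
tau_c, sigma_c = 187 / 91, 36 / 91
nu_c    = (tau_c - 1) / (d * sigma_c)   # correlation length:  4/3
beta_c  = (tau_c - 2) / sigma_c         # order parameter:     5/36
gamma_c = (3 - tau_c) / sigma_c         # susceptibility:      43/18
D_f     = d / (tau_c - 1)               # fractal dimension:   91/48
\end{verbatim}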
\section{Nonequilibrium transition across the lattice percolation threshold}
\label{sec:theory}
\subsection{Single-cluster dynamics}
\label{subsec:single-cluster}
To develop a theory of the nonequilibrium phase transition across the lattice percolation
threshold, we first study the nonequilibrium process on a single connected finite-size cluster of
$s$ sites. For definiteness, this section focuses on the simple contact process. The generalized
contact process will be considered in Sec.\ \ref{sec:generality}.
The crucial observation is that on the percolation transition line (for
$\lambda>\lambda_*$), the contact process is supercritical, i.e., the cluster is locally in the active phase.
The time evolution of
such a cluster, starting from a fully active lattice, therefore proceeds in two stages:
Initially, the density $\rho_s$ of active sites decays rapidly towards a metastable state
(which corresponds to the steady state of the equivalent \emph{infinite} system) with a
nonzero density of active sites and islands of the inactive phase of linear size
$\xi_s^{c}$ (see Fig.\ \ref{fig:cp_cluster}).
\begin{figure}
\includegraphics[width=6.5cm,clip]{cp_cluster.eps}
\caption{(Color online:) Schematic of the metastable state of the supercritical
contact process on a single percolation cluster.
A and I denote active and inactive sites, and $\xi_s^c$ is the connected
correlation length of the density fluctuations \emph{on} the cluster.}
\label{fig:cp_cluster}
\end{figure}
This
metastable state can then decay into the inactive (absorbing) state only via a rare
collective fluctuation involving \emph{all} sites of the cluster. We thus expect
the long-time decay of the density to be of exponential form (suppressing subleading
pre-exponential factors),
\begin{equation}
\rho_s(t) \sim \exp[{-t/t_s(s)}]~,
\label{eq:cluster-decay}
\end{equation}
with a long lifetime $t_s$ that increases exponentially with the cluster size $s$
\begin{equation}
t_s(s) = t_0 \exp[{A(\lambda)s}]
\label{eq:cluster-lifetime}
\end{equation}
for sufficiently large $s$. Here, $t_0$ is some microscopic time scale.
The lifetime increases faster with $s$
the deeper the cluster is in the active phase. This means that the prefactor $A(\lambda)$, which
plays the role of an inverse correlation volume, vanishes
at the multicritical value $\lambda_*$ and increases monotonically with increasing
$\lambda$. Close to the multicritical point, the behavior of $A(\lambda)$ can be inferred
from scaling. Since $A(\lambda)$ has the dimension of an inverse volume, it varies as
\begin{equation}
A(\lambda) \sim (\lambda-\lambda_*)^{\nu_* D_f}
\label{eq:Alambda}
\end{equation}
where $\nu_*$ is the correlation length exponent of the multicritical point and $D_f$ is
the (fractal) space dimensionality of the underlying cluster.
Note that (\ref{eq:cluster-lifetime}) establishes an exponential relation between
length and time scales at the transition. Because the number of sites $s$ of a
percolation cluster is related to its linear size $R_s$ via $s \sim R_s^{D_f}$,
eq.\ (\ref{eq:cluster-lifetime}) implies
\begin{equation}
\ln t_s \sim R_s^{D_f}~.
\label{eq:activated-scaling}
\end{equation}
Thus, the dynamical scaling is activated rather than power-law with the tunneling
exponent being identical to the fractal dimension of the critical percolation cluster,
$\psi=D_f$.
To confirm the above phenomenological arguments, we have performed extensive Monte-Carlo
simulations of the contact process on finite-size clusters using clean one-dimensional
and two-dimensional systems as well as diluted lattices. Our simulation method is based on the
algorithm by Dickman \cite{Dickman99} and described in detail in Refs.\
\cite{VojtaDickison05,VojtaFarquharMast09}.
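For orientation, the following Python fragment sketches a bare-bones continuous-time
version of such a single-cluster simulation (a deliberately simplified stand-in for,
not a reproduction of, the algorithm of Ref.\ \cite{Dickman99}; the treatment of
boundary sites, which infect a randomly chosen existing neighbor, is a minor variant
chosen for brevity):
\begin{verbatim}
import numpy as np

def cp_density(neighbors, lam, mu=1.0, t_max=200.0, runs=1000, seed=1):
    # Contact process on a finite cluster, given as a list of neighbor
    # lists. Returns the run-averaged density rho_s(t) on a time grid,
    # starting from the fully active state.
    rng = np.random.default_rng(seed)
    s = len(neighbors)
    t_grid = np.linspace(0.0, t_max, 201)
    rho = np.zeros_like(t_grid)
    for _ in range(runs):
        active = np.ones(s, dtype=bool)
        n, t, k = s, 0.0, 0
        while n > 0 and t < t_max:
            # each active site heals at rate mu and infects at rate lam
            t += rng.exponential(1.0 / (n * (mu + lam)))
            while k < t_grid.size and t_grid[k] < t:
                rho[k] += n / s          # record pre-event density
                k += 1
            i = rng.choice(np.flatnonzero(active))
            if rng.random() < mu / (mu + lam):
                active[i] = False        # healing
                n -= 1
            else:                        # infection attempt
                j = rng.choice(neighbors[i])
                if not active[j]:
                    active[j] = True
                    n += 1
        rho[k:] += n / s                 # absorbed runs contribute zero
    return t_grid, rho / runs

# simplest example: a one-dimensional cluster (open chain) of 20 sites
nbrs = [[j for j in (i - 1, i + 1) if 0 <= j < 20] for i in range(20)]
t, r = cp_density(nbrs, lam=3.8)
\end{verbatim}
Fitting the long-time tail of the returned $\rho_s(t)$ to (\ref{eq:cluster-decay})
then yields the lifetime $t_s$.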
A characteristic set of results is shown in
Fig.\ \ref{fig:decay_cp_1d}.
\begin{figure}
\includegraphics[width=8cm,clip]{decay_log_1d.eps}\\[2ex]
\includegraphics[width=8cm,clip]{decay_lin_1d.eps}
\caption{(Color online:) Contact process on one-dimensional clusters of size $s$, starting from
a fully active lattice at $\lambda=3.8, \mu=1$ which is in the active phase. (a) Double-logarithmic plot
of density vs. time showing the two-stage time-evolution via a metastable state.
(b) Log-linear plot demonstrating that the
long-time decay is exponential. All data are averages over $10^5$ independent runs.}
\label{fig:decay_cp_1d}
\end{figure}
It shows the time evolution of the contact process on several one-dimensional clusters of
different size $s$, starting from a fully active lattice. The infection rate
$\lambda=3.8$ (we set $\mu=1$) puts the clusters (locally) in the ordered phase, i.e., it
is supercritical,
since the critical value in one dimension is $\lambda_c=3.298$. All data
are averages over $10^5$ independent trials. The double-logarithmic plot of density $\rho_s$
vs. time $t$ in Fig.\ \ref{fig:decay_cp_1d}a clearly shows the two-stage time evolution
consisting of a rapid initial decay (independent of cluster size) towards a metastable
state followed by a long-time decay towards the absorbing state which becomes slower with
increasing cluster size. Replotting the data in log-linear form in Fig.\
\ref{fig:decay_cp_1d}b confirms that the long-time decay is exponential, as predicted
in (\ref{eq:cluster-decay}).
The lifetime $t_s$ of the contact process on the cluster can be determined by fitting
the asymptotic part of the $\rho_s(t)$ curve to (\ref{eq:cluster-decay}). Figure
\ref{fig:lifetime-CP-1d} shows the lifetime as a function of cluster size $s$ for four
different values of the infection rate $\lambda$.
\begin{figure}
\includegraphics[width=8cm,clip]{lifetime_cp1d.eps}
\caption{(Color online:) Lifetime $t_s$ as a function of cluster size $s$ for different
values of the infection rate $\lambda$. The other parameters are as in Fig.\
\ref{fig:decay_cp_1d}. The dashed lines are fits of the large-$s$ behavior to
the exponential dependence (\ref{eq:cluster-lifetime}). Inset: Correlation volume
$A^{-1}$ as a function of the distance from bulk criticality. The dashed line is a
power-law fit.}
\label{fig:lifetime-CP-1d}
\end{figure}
Clearly, for sufficiently large clusters, the lifetime depends exponentially on the
cluster size, as predicted in (\ref{eq:cluster-lifetime}). (The data for $\lambda=3.4$
which is very close to the bulk critical point of $\lambda_c=3.298$ have not fully
reached the asymptotic regime as can be seen from the remaining slight curvature of the
plot.) By fitting the large-$s$ behavior of the lifetime curves to the exponential law
(\ref{eq:cluster-lifetime}), we obtain an estimate of the inverse correlation volume
$A$. The inset of Fig.\ \ref{fig:lifetime-CP-1d} shows this correlation volume as a
function of the distance from the bulk critical point. In accordance with (\ref{eq:Alambda})
it behaves as a power law. The exponent value of approximately 0.95 is in reasonable agreement
with the prediction $\nu = 1.097$ for our one-dimensional clusters.
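The corresponding fit is elementary; schematically, in Python (the lifetime values
below are placeholders for illustration, not our simulation data):
\begin{verbatim}
import numpy as np

# Placeholder lifetimes t_s(s) for a few cluster sizes s, standing in
# for values extracted from fits of rho_s(t) to exp(-t/t_s).
s_vals = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
t_vals = np.array([1.1e2, 9.5e2, 8.1e3, 7.0e4, 6.1e5])

# linear fit of ln t_s = ln t_0 + A*s to the large-s data
A, ln_t0 = np.polyfit(s_vals, np.log(t_vals), 1)
print(f"A = {A:.3f}, t_0 = {np.exp(ln_t0):.2f}")
\end{verbatim}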
We have performed analogous simulations for various sets of two-dimensional clusters as
well as finite-size diluted lattices. In all cases, the Monte-Carlo results confirm the
phenomenological theory summarized in eqs.\ (\ref{eq:cluster-decay}),
(\ref{eq:cluster-lifetime}), and (\ref{eq:Alambda}).
\subsection{Steady-state density and density decay }
\label{subsec:density}
We now consider the full problem, the contact process on a diluted lattice close to the
percolation threshold. To obtain observables of the entire system, we must sum over all
percolation clusters.
Let us start by analyzing static quantities such as the steady state density $\rho_{\rm st}$
of active sites (the order parameter of the nonequilibrium transition) and the spatial correlation length
$\xi_\perp$. Finite-size percolation clusters do not contribute to the steady-state density
because the contact process eventually decays into the absorbing inactive state on any
finite-size cluster. A steady-state density can thus exist only on the infinite
percolation cluster for $p<p_c$. For $\lambda>\lambda_*$, the infinite cluster is supercritical, i.e.,
a finite fraction of its sites is active.
Thus, the total steady-state density is proportional to the number of sites in the
infinite cluster,
\begin{equation}
\rho_{\rm st} \sim P_\infty(p) \sim \left\{
\begin{array}{cc} |p-p_c|^{\beta_c} & ~~ (p<p_c) \\
0 & ~~ (p>p_c)
\end{array} \right.~.
\label{eq:steady-state-density}
\end{equation}
Consequently, the order parameter exponent $\beta$ of the nonequilibrium transition is
identical to the corresponding exponent $\beta_c$ of the lattice percolation problem.
The (average) spatial correlation length $\xi_\perp$ of the nonequilibrium process can be
found using a similar argument. On the one hand, the spatial correlations of the contact process cannot
extend beyond the connectedness length $\xi_c$ of the underlying diluted lattice because
different percolation clusters are completely decoupled. This implies $\xi_\perp \lesssim
\xi_c$. On the other hand, for $\lambda>\lambda_*$, all sites on the same percolation
cluster are strongly correlated in space, implying $\xi_\perp \gtrsim \xi_c$. We
therefore conclude
\begin{equation}
\xi_\perp \approx \xi_c~,
\label{eq:spatial-correlation-length}
\end{equation}
and the correlation length exponent $\nu_\perp$ is also identical to its lattice percolation
counterpart $\nu_c$.
We now turn to the dynamics of the nonequilibrium transition across the percolation
threshold. In order to find the time evolution of the total density of active sites
(starting from a completely active lattice), we sum over all percolation clusters
by combining the cluster size distribution (\ref{eq:percscaling}) with the
single-cluster time evolution (\ref{eq:cluster-decay}). The total density is thus given
by
\begin{eqnarray}
\rho(t,\Delta) &=& \int ds \, s \, n_s(\Delta)\, \rho_s(t) \nonumber \\
&\sim& \int ds \, s \, n_s(\Delta)\, \exp[-t/t_s(s)]
\label{eq:total-density}
\end{eqnarray}
In the following, we evaluate this integral at the transition as well as in the
active and inactive phases.
Right at the percolation threshold, the scaling function in the cluster size distribution
(\ref{eq:percscaling}) is a constant, $f(0)$, and (\ref{eq:total-density}) simplifies to
\begin{equation}
\rho(t,0) \sim \int ds ~s^{1-\tau_c} \exp[-t/(t_0 e^{As})]~.
\label{eq:total-density-critical-integral}
\end{equation}
To estimate this integral, we note that only sufficiently large clusters, with a minimum
size of $s_{\rm min}(t)= A^{-1} \ln(t/t_0)$, contribute to the total density at time $t$,
\begin{equation}
\rho(t,0) \sim \int_{s_{\rm min}}^\infty ds \, s^{1-\tau_c} \sim s_{\rm min}^{2-\tau_c}~.
\label{eq:total-density-critical-estimate}
\end{equation}
The leading long-time dependence of the total density right at the percolation threshold
thus takes the unusual logarithmic form
\begin{equation}
\rho(t,0) \sim [\ln(t/t_0)]^{-\bar\delta} ~,
\label{eq:total-density-critical-result}
\end{equation}
again reflecting the activated dynamical scaling,
with the critical exponent given by $\bar\delta=\tau_c-2=\beta_c/(\nu_c D_f)$.
In the disconnected, inactive phase ($p>p_c$) we need to use expression
(\ref{eq:scaling-function-disconnected}) for the scaling function of the cluster size
distribution. The resulting integral for the time evolution of the density reads
\begin{equation}
\rho(t,\Delta) \sim \int ds\, s^{1-\tau_c} \exp [-B_1 s \Delta^{1/\sigma_c} - t/(t_0 e^{A s})].
\label{eq:total-density-inactive-integral}
\end{equation}
For long times, the leading behavior of the integral can be calculated using the
saddle-point method. Minimizing the exponent of the integrand
shows that the main contribution at time $t$
to the integral (\ref{eq:total-density-inactive-integral}) comes from clusters of size
$s_0 = -A^{-1} \ln[B_1\Delta^{1/\sigma_c}t_0/(At)]$. Inserting this into the integrand
results in a power-law density decay
\begin{equation}
\rho(t,\Delta) \sim (t/t_0)^{-d/z'} \qquad (p>p_c)~.
\label{eq:total-density-inactive-result}
\end{equation}
The nonuniversal exponent $z'$ is given by
$z' = (Ad/B_1)\,\Delta^{-1/\sigma_c} \sim \xi_\perp^{D_f}$, i.e.,
it diverges at the critical point $p=p_c$.
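To make the saddle-point step explicit, write
$\Phi(s)=B_1\Delta^{1/\sigma_c}s+(t/t_0)e^{-As}$ for the exponent of the integrand.
The condition $\Phi'(s_0)=0$ gives $e^{-As_0}=B_1\Delta^{1/\sigma_c}t_0/(At)$, and hence
\begin{displaymath}
\Phi(s_0)=\frac{B_1\Delta^{1/\sigma_c}}{A}\left[\ln\frac{At}{B_1\Delta^{1/\sigma_c}t_0}+1\right]~,
\end{displaymath}
so that $\rho\sim e^{-\Phi(s_0)}\sim(t/t_0)^{-B_1\Delta^{1/\sigma_c}/A}$, identifying
$d/z'=B_1\Delta^{1/\sigma_c}/A$.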
In the percolating, active phase ($p<p_c$), the infinite percolation cluster contributes
a nonzero steady state density $\rho_{\rm st}(\Delta)$ given by
(\ref{eq:steady-state-density}). However, the long-time approach of the
density towards this value is determined by the slow decay of the metastable states of
large finite-size percolation clusters. To estimate their contribution, we must
use the expression (\ref{eq:scaling-function-connected}) for the scaling function of the cluster size
distribution. The resulting integral now reads
\begin{eqnarray}
\rho(t,\Delta) -\rho_{\rm st}(\Delta) &\sim& \int ds \,s^{1-\tau_c}
\exp\left[-(B_2 s |\Delta|^{1/\sigma_c})^{1-1/d} \right. \nonumber \\&&-
\left. t/(t_0 e^{As})\right]~.
\label{eq:total-density-active-integral}
\end{eqnarray}
We again apply the saddle-point method to find the leading long-time behavior of this
integral. Minimizing the exponent shows the main contribution coming from clusters of
size $s_0 = -A^{-1} \ln[B_2 |\Delta|^{1/\sigma_c}(d-1)/(Atd)]$. By inserting this into
the integrand, we find a nonexponential density decay of the form
\begin{equation}
\rho(t,\Delta)-\rho_{\rm st}(\Delta) \sim e^{ -\left[(d/z'') \ln(t/t_0) \right]^{1-1/d}} \qquad
(p<p_c)~.
\label{eq:total-density-active-result}
\end{equation}
Here, $z''= (A d/B_2) |\Delta|^{-1/\sigma_c} \sim \xi_\perp^{D_f}$ is another
nonuniversal exponent which diverges at the critical point.
The slow nonexponential relaxation of the total density on both sides of the actual
transition as given in (\ref{eq:total-density-inactive-result}) and (\ref{eq:total-density-active-result})
is characteristic of a Griffiths phase \cite{Griffiths69} in the contact process
\cite{Noest88}. It is brought about by the competition between the exponentially
decreasing probability for finding a large percolation cluster off criticality and the
exponentially increasing lifetime of such a cluster. Note that time $t$ and spatial
correlation length $\xi_\perp$ enter the off-critical decay laws
(\ref{eq:total-density-inactive-result}) and (\ref{eq:total-density-active-result}) in
terms of the combination $\ln(t/t_0)/ \xi_\perp^{D_f}$ again reflecting the activated
character of the dynamical scaling.
\subsection{Spreading from a single seed}
After having discussed the time evolution of the density starting from a completely
infected lattice, we now consider the survival probability $P_s(t)$ for runs starting
from a single random seed site. To estimate $P_s(t)$, we note that the probability of a
random seed site to belong to a cluster of size $s$ is given by $s\,n_s(\Delta)$. The
activity of the contact process is confined to this seed cluster. Following the arguments
leading to (\ref{eq:cluster-decay}), the probability that this cluster survives is
proportional to $\exp(-t/t_s)$. The average survival probability at time $t$ can thus be
written as a sum over all possible seed clusters,
\begin{equation}
P_s(t,\Delta) \sim \int ds \, s \, n_s(\Delta)\, \exp[-t/t_s(s)]~.
\label{eq:survival-probability-integral}
\end{equation}
This is exactly the same integral as the one governing the density decay
(\ref{eq:total-density}). We conclude that the time dependence of the survival
probability for runs starting from a single seed is identical to the time evolution of
the density when starting from a fully infected lattice, as is expected for the contact
process under very general conditions (see, e.g., Ref.\ \cite{Hinrichsen00}).
To determine the (average) total number $N(t)$ of active sites in a cloud spreading from a single
seed, we observe that a supercritical cloud initially grows ballistically.
This means its radius grows linearly with time, and the number of
active sites follows a power law. This ballistic growth stops when the number of active sites
is of the order of the
cluster size $s$. After that, the number of active sites stays approximately
constant. The number $N_s(t)$ of active sites on a percolation cluster of size $s$ is
thus given by
\begin{equation}
N_s(t) \sim \left \{
\begin{array}{cc}
(t/t_0)^{D_f} & \quad (t<t_i(s))\\
s & \quad (t>t_i(s))
\end{array}
\right.
\label{eq:Ns}
\end{equation}
where $t_i(s) \sim R_s \sim t_0 s^{1/D_f}$ is the saturation time of this cluster.
Note that $N_s$ decays to zero only after the much longer cluster lifetime $t_s(s)=t_0
\exp[{A(\lambda)s}]$ given in (\ref{eq:cluster-lifetime}).
We now average over all possible positions of the seed site as in
(\ref{eq:survival-probability-integral}). This yields
\begin{equation}
N(t,\Delta) \sim \int_{s_{\rm min}}^\infty ds \, s \, n_s(\Delta) \, N_s(t)
\label{eq:total-N}
\end{equation}
with $s_{\rm min}\sim A^{-1} \ln(t/t_0)$. At criticality, this integral is easily
evaluated, giving
\begin{equation}
N(t,0) \sim t^{D_f(3-\tau_c)}= t^{\gamma_c/\nu_c}~.
\label{eq:N-result}
\end{equation}
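In more detail, clusters with $s<s_t=(t/t_0)^{D_f}$ have already saturated at
$N_s=s$ while larger clusters still contribute $N_s=(t/t_0)^{D_f}$, so that
\begin{displaymath}
N(t,0)\sim\int_{s_{\rm min}}^{s_t}ds\,s^{2-\tau_c}+(t/t_0)^{D_f}\int_{s_t}^{\infty}ds\,s^{1-\tau_c}\sim s_t^{3-\tau_c}~,
\end{displaymath}
with both terms scaling identically as $t^{D_f(3-\tau_c)}$.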
The lower bound of the integral (i.e., the logarithmically slow long-time decay of the
clusters) produces a subleading correction only. Consequently, we arrive at the somewhat
surprising conclusion that the initial spreading follows a power-law and is thus much
faster than the long-time density decay. In contrast, at the infinite-randomness critical
point governing the generic ($p<p_c$) transition, both the initial spreading and the
long-time decay follow logarithmic laws
\cite{HooyberghsIgloiVanderzande03,HooyberghsIgloiVanderzande04,VojtaDickison05,VojtaFarquharMast09}.
Note that a similar situation occurs at the percolation quantum phase transition in the
diluted transverse-field Ising model \cite{SenthilSachdev96} where the
temperature-dependence of the correlation length does not follow the naively expected
logarithmic law.
\subsection{External source field}
\label{subsec:source}
In this subsection we discuss the effects of spontaneous activity creation on
our nonequilibrium phase transition. Specifically, in addition to healing and infection,
we now consider a third process by which an inactive site can spontaneously turn into
an active site at rate $h$. This rate plays the role of an external ``source field'' conjugate
to the order parameter.
To find the steady state density in the presence of such a source field, we first
consider a single percolation cluster. As before, we are interested in the supercritical regime
$\lambda>\lambda_*$. At any given time $t$, a cluster of size $s$ will be active (on
average), if at least one of the $s$ sites has spontaneously become active within
one lifetime $t_s(s)=t_0 e^{As}$ before $t$, i.e., in the interval $[t-t_s(s),t]$. For a
small external field $h$, the average number of active sites created on a cluster of size
$s$ is $M_s(h) = h\,s\,t_s(s) = h\,s\,t_0 e^{As}$. This linear-response expression is valid as
long as $M_s \ll s$. The probability $w_s(h)$ for a cluster of size $s$ to be active
in the steady state is thus given by
\begin{equation}
w_s(h) \approx \left \{
\begin{array}{cc}
M_s(h) & \quad (M_s(h)<1)\\
1 & \quad (M_s(h)>1)
\end{array}
\right. ~.
\label{eq:field-probability}
\end{equation}
Turning to the full lattice, the total steady state density is obtained by summing over
all clusters
\begin{equation}
\rho_{\rm st}(h,\Delta) \sim \int ds \, s \, n_s(\Delta) \min[1,M_s(h)]~.
\label{eq:density-field-integral}
\end{equation}
This integral can be evaluated along the same lines as the corresponding integral
(\ref{eq:total-density}) for the time-evolution of the zero-field density.
For small fields $h$, we obtain
\begin{eqnarray}
\rho_{\rm st} (h,0) &\sim& [\ln(h_0/h)]^{-\bar\delta}~ \qquad\qquad (p=p_c), \label{eq:rho-h-critical}\\
\rho_{\rm st} (h,\Delta) &\sim& (h/h_0)^{d/z'} ~\qquad\qquad (p>p_c), \\
\delta\rho_{\rm st} (h,\Delta) &\sim& e^{-\left[ (d/z'') \ln(h_0/h)
\right]^{1-1/d}} \quad (p<p_c)~,
\end{eqnarray}
where $\delta\rho_{\rm st} (h,\Delta)=\rho_{\rm st} (h,\Delta)-\rho_{\rm st} (0,\Delta)$ is the excess
density due to the field in the active phase and $h_0 = 1/t_0$. At criticality, $p=p_c$, the relation between density $\rho_{\rm st}$ and field $h$ is
logarithmic because the field represents a rate (inverse time) and the dynamical scaling is activated.
Off criticality, we find strong Griffiths singularities analogous to those in the time-dependence of the density.
The exponents $z'$ and $z''$ take the same values as calculated after eqs.\ (\ref{eq:total-density-inactive-result})
and (\ref{eq:total-density-active-result}), respectively.
\subsection{Scaling theory}
In Sections \ref{subsec:density} and \ref{subsec:source}, we have determined the critical
behavior of the density of active sites by explicitly averaging the single cluster
dynamics over all percolation clusters. The same results can also be obtained from
writing down a general scaling theory of the density for the case of activated dynamical
scaling \cite{VojtaDickison05,VojtaFarquharMast09}.
According to (\ref{eq:steady-state-density}), in the active phase, the density is
proportional to the number of sites in the infinite percolation cluster. Its scale
dimension must therefore be identical to the scale dimension of $P_\infty$ which is
$\beta_c/\nu_c$. Time must enter the theory via the scaling combination
$\ln(t/t_0)b^\psi$ with the tunneling exponent given by $\psi=D_f$ and $b$ an arbitrary
length scale factor. This scaling combination reflects the activated dynamical scaling,
i.e., the exponential relation (\ref{eq:activated-scaling}) between length and time
scales. Finally, the source field $h$, being a rate, scales like inverse time. This leads
to the following scaling theory of the density,
\begin{eqnarray}
\rho[\Delta,\ln(t/t_0),\ln(h_0/h)] = \hspace*{4cm} \nonumber \\
=b^{\beta_c/\nu_c}\rho[\Delta b^{-1/\nu_c}, \ln(t/t_0) b^\psi,
\ln(h_0/h) b^\psi]~
\label{eq:density-scaling}
\end{eqnarray}
This scaling theory is compatible with all our explicit results which can be rederived by
setting the arbitrary scale factor $b$ to the appropriate values.
\section{Generality of the activated scaling scenario}
\label{sec:generality}
In Section \ref{sec:theory}, we have developed a theory for the nonequilibrium phase
transition of the simple contact process across the lattice percolation threshold and
found it to be characterized by unconventional activated dynamical scaling.
In the present section, we investigate how general this
exotic behavior is for absorbing state transitions by considering the generalized contact
process with several absorbing states.
This is a particularly interesting question because the generic transitions ($p<p_c$)
of the diluted simple and generalized contact processes appear to behave differently.
The generic transition in the simple contact process has been shown to be of
infinite-randomness type with activated dynamical scaling using both a strong-disorder
renormalization group \cite{HooyberghsIgloiVanderzande03,HooyberghsIgloiVanderzande04}
and Monte-Carlo simulations \cite{VojtaDickison05,VojtaFarquharMast09}. In contrast,
the strong-disorder renormalization group treatment of the disordered generalized contact
process \cite{HooyberghsIgloiVanderzande04} suggests more conventional behavior, even
though the ultimate fate of the transition could not be determined.
To address the same question for our transition across the lattice percolation threshold,
we note that any difference between the simple and the generalized contact processes must
stem from the single-cluster dynamics because the underlying lattice is identical. In the
following we therefore first give heuristic arguments for the single-cluster dynamics of the
supercritical generalized contact process and then verify them by Monte-Carlo simulations.
If the percolation cluster is locally in the active phase ($\lambda>\lambda_*$), the
density time evolution, starting from a fully active lattice, proceeds in two stages,
analogously to the simple contact process. There is a rapid initial decay to a metastable
state with a nonzero
density of active sites and finite-size islands of each of the inactive phases
(see Fig.\ \ref{fig:gcp_cluster}).
\begin{figure}
\includegraphics[width=6.5cm,clip]{gcp_cluster.eps}
\caption{(Color online:) Schematic of the metastable state of the supercritical
generalized contact process
with two inactive states on a single percolation cluster.
A denotes the active state, and I$_1$ and I$_2$ are the inactive states.
$\xi_s^c$ is the connected
correlation length of the density fluctuations \emph{on} the cluster.}
\label{fig:gcp_cluster}
\end{figure}
For this metastable state to decay into one of the $n$ absorbing configurations, all
sites must go into \emph{the same} inactive state which requires a rare large density
fluctuation. Let us assume for definiteness that the decay is into the I$_1$ state. The
main difference to the simple contact process considered in Sec.\
\ref{subsec:single-cluster} is that sites that are in inactive states I$_2 \ldots$ I$_n$
cannot directly decay into I$_1$. This means, each of the inactive islands in states I$_2
\ldots$ I$_n$ first needs to be ``eaten'' by the active regions before the entire cluster
can decay into the I$_1$ state. This can only happen via infection from the boundary of
the inactive island and is thus a slow process. However, since the characteristic size of
the inactive islands in the metastable state is finite (it is given by the connected
density correlation length $\xi_s^c$ on the cluster), this process happens with a nonzero
rate that is independent of the size $s$ of the underlying percolation cluster (for
sufficiently large $s$).
The decay of the metastable state into one of the absorbing states is therefore brought
about by the rare collective decay of a large number of \emph{independent} correlation
volumes just as in the simple contact process. As a result, the lifetime $t_s(s)$ depends
exponentially on the number of involved correlation volumes, i.e., it depends
exponentially on the cluster size $s$. We thus find that the long-time density decay of
the generalized contact process on a single large percolation cluster is governed by
the same equations (\ref{eq:cluster-decay}) and (\ref{eq:cluster-lifetime}) as the decay
of the simple contact process.
To verify these phenomenological arguments, we have performed large-scale Monte-Carlo
simulations of the generalized contact process with two and three absorbing states
on clean and disordered one-dimensional and two-dimensional lattices. In all cases, we have
first performed bulk simulations (spreading from a single seed) to find the bulk critical
point. An example is shown in Fig.\ \ref{fig:spreading-gcp}; details of the bulk critical
behavior will be reported elsewhere.
\begin{figure}
\includegraphics[width=8cm]{gcp_1d_2n_v3.eps}
\caption{(Color online:) Bulk phase transition of the generalized contact process with two
absorbing states in $d=1$ measured via spreading from a single seed: Number $N$ of active
sites vs. time $t$ for different healing rates $\mu$. The infection and boundary activation
rates are fixed, $\lambda=\sigma=1$, and the data are averages over $10^6$ runs. The critical
point appears to be close to $\mu=0.628$ in agreement with \cite{Hinrichsen97}.}
\label{fig:spreading-gcp}
\end{figure}
After having determined the critical point, if any, we have selected several parameter
sets in the bulk active phase and studied the long-time density decay of the generalized
contact process on finite size clusters. As expected, the decay proceeds via the two
stages discussed above. As in Sec.\ \ref{subsec:single-cluster}, we extract the lifetime
$t_s$ from the slow exponential long-time part of the decay. Two characteristic sets of
results are shown in Fig.\ \ref{fig:decay-gcp}.
\begin{figure}
\includegraphics[width=8cm]{decay_gcp1D2n_v4.eps}
\includegraphics[width=8cm]{decay_gcp2D2n_v6.eps}
\caption{(Color online:) Lifetime $t_s$ as a function of cluster size $s$ for the generalized
contact process with two inactive states at different values of the healing rate $\mu$.
The infection and boundary activation rates are fixed, $\lambda=\sigma=1$, and the data are
averages over $10^6$ runs. (a) $d=1$ where the bulk system has a transition, see Fig.\
\ref{fig:spreading-gcp}. (b) $d=2$, where we do not find a bulk transition because the system
is always active \cite{Bibnote1}. The dashed lines are fits of the large-$s$ behaviors to
the exponential law (\ref{eq:cluster-lifetime}). }
\label{fig:decay-gcp}
\end{figure}
The figure confirms that the lifetime of the generalized contact process on a finite-size
cluster depends exponentially on the number of sites in the cluster, as given in
(\ref{eq:cluster-lifetime}). We have obtained analogous results for all cases investigated,
verifying the phenomenological theory given above.
Because the long-time dynamics of the generalized contact process on a single supercritical
cluster follows the same behavior (\ref{eq:cluster-decay}) and
(\ref{eq:cluster-lifetime}) as that of the simple contact process,
we conclude that its nonequilibrium transition across the percolation threshold will also be
governed by the theory developed in Sec.\ \ref{sec:theory}. In other words, the lattice
percolation transitions of the simple and generalized contact processes belong to the same
universality class, irrespective of the number $n$ of absorbing states.
\section{Conclusions}
\label{sec:conclusions}
In this final section of the paper, we first summarize our results, discuss their
generality,
and relate them to the behavior of certain quantum phase transitions on diluted lattices. We then
compare the recently found infinite-randomness critical point at the generic transition
($p<p_c$) to the behavior at our lattice percolation transition. Finally, we relate our
findings to a general classification of phase transitions with quenched spatial disorder
\cite{Vojta06}.
To summarize, we have investigated absorbing state phase transitions on randomly diluted
lattices, taking the simple and generalized contact processes as examples. We have
focused on the nonequilibrium phase transition across the lattice percolation threshold
and shown that it can be understood by combining the time evolution of the supercritical
nonequilibrium process on a finite-size cluster with results from classical lattice
percolation theory. The interplay between geometric criticality and dynamic fluctuations
at this transition leads to a novel universality class. It is characterized by ultraslow
activated (i.e., exponential) rather than power-law dynamical scaling and accompanied by
a nonexponential decay in the Griffiths regions.
All critical exponents of the nonequilibrium phase transition can be expressed in terms
of the classical lattice percolation exponents. Their values are known exactly in two
space dimensions and with good numerical accuracy in three space dimensions; they are
summarized in Table \ref{table:exponents}.
\begin{table}[tbp]
\caption{Critical exponents of the nonequilibrium phase transition across the percolation
threshold in two and three space dimensions. } \label{table:exponents}
\begin{ruledtabular}
\begin{tabular}{l c c}
Exponent & $~d=2~$ & $~d=3~$ \\
\hline
$\beta =\beta_c$ & 5/36 & 0.417 \\
$\nu =\nu_c$ & 4/3 & 0.875 \\
$\psi =D_f=d-\beta_c/\nu_c$ & 91/48& 2.523 \\
$\bar\delta=\beta_c/(\nu_c D_f) $ & 5/91 & 0.188 \\
\end{tabular}%
\end{ruledtabular}
\end{table}
Thus, our transition in $d=2$ provides one of the few examples of a nonequilibrium phase
transition with exactly known critical exponents.
The logarithmically slow dynamics (\ref{eq:total-density-critical-result}), (\ref{eq:rho-h-critical})
at criticality together with the small value of the exponent $\bar\delta$ make a
numerical verification of our theory by simulations of the full diluted lattice a very
costly proposition. The results of recent Monte-Carlo simulations in two dimensions
\cite{VojtaFarquharMast09} at $p=p_c$ are compatible with our theory but not yet sufficient
to be considered a quantitative verification. This remains a task for the future.
The unconventional critical behavior of our nonequilibrium phase transition at $p=p_c$ is
the direct result of combining the power-law spectrum (\ref{eq:percscaling}) of cluster
sizes with the exponential relation (\ref{eq:activated-scaling}) between length and time
scales. We therefore expect other equilibrium or nonequilibrium systems that share these
two characteristics to display similar critical behavior at the lattice percolation
transition. One prototypical example is the transverse-field Ising model on a diluted
lattice. In this system, the quantum-mechanical energy gap (which represents an inverse
time) of a cluster decreases exponentially with the cluster size. Consequently, the
critical behavior of the diluted transverse-field Ising model across the lattice
percolation threshold is very similar to the one found in this paper
\cite{SenthilSachdev96}. Other candidates are magnetic quantum phase transitions in
metallic systems or certain superconductor-metal quantum phase transitions
\cite{VojtaSchmalian05,HoyosKotabageVojta07,DRMS08,VojtaKotabageHoyos09}, even though a
pure percolation scenario may be hard to realize in metallic systems.
Our work has focused on the nonequilibrium phase transition across the lattice
percolation threshold. It is instructive to compare its critical behavior to that of the
generic transition occurring for $p<p_c$ (see Fig.\ \ref{fig:pds}). Hooyberghs et al.
\cite{HooyberghsIgloiVanderzande03,HooyberghsIgloiVanderzande04} applied a strong
disorder renormalization group to the one-dimensional disordered contact process. They
found an exotic infinite-randomness critical point in the universality class of the
random-transverse field Ising model (which likely governs the transition for any disorder
strength \cite{Hoyos08}). The same analogy is expected to hold in two space
dimensions. Recently, these predictions were confirmed by large scale Monte-Carlo
simulations \cite{VojtaDickison05,VojtaFarquharMast09}. Our nonequilibrium transition
across the lattice percolation threshold shares some characteristics with these
infinite-randomness critical points, in particular, the activated dynamical scaling
which leads to a logarithmically slow density decay at criticality.
However, the generic and percolation transitions are in different universality classes
with different critical exponent values. Moreover, the initial spreading from a single
seed is qualitatively different (logarithmically slow at the generic infinite-randomness
critical point but of power-law type at our percolation transition). Finally, at the
percolation transition the simple and generalized contact processes are in the same
universality class while this does not seem to be the case for the generic transition
\cite{HooyberghsIgloiVanderzande04}.
The results of this paper are in agreement with a recent general classification of phase
transitions with quenched spatial disorder and short-range interactions
\cite{VojtaSchmalian05,Vojta06}. It is based on the effective dimensionality $d_{\rm
eff}$ of the droplets or clusters. Three classes need to be distinguished: (a) If the
clusters are below the lower critical dimension of the problem, $d_{\rm eff} < d_c^-$,
the critical behavior is conventional (power-law scaling and exponentially weak Griffiths
effects). This is the case for most classical equilibrium transitions. (b) If $d_{\rm
eff} = d_c^-$, the dynamical scaling is activated and accompanied by strong Griffiths
effects. This case is realized at the nonequilibrium transition considered here as well
as the generic transition of the disordered contact process. It also applies to various
quantum phase transitions \cite{Fisher92,SenthilSachdev96,HoyosKotabageVojta07}. (c) If
$d_{\rm eff} > d_c^-$, a single supercritical cluster can undergo the phase transition
independently of the bulk system. This leads to the smearing of the global phase
transition; it occurs, e.g., in dissipative quantum magnets \cite{Vojta03a,HoyosVojta08}
or in the contact process with extended defects \cite{Vojta04}.
In conclusion, our work demonstrates that absorbing state transitions on percolating
lattices display unusual behavior. Interestingly, experimental verifications of the
theoretically predicted critical behavior at (clean) absorbing state transitions are
extremely rare \cite{Hinrichsen00b}. For instance, to the best of our knowledge, the only
complete verification of directed percolation scaling was found very recently in the
transition between two turbulent states in a liquid crystal \cite{TKCS07}. Our theory
suggests that unconventional disorder effects may be responsible for the surprising
absence of directed percolation scaling in at least some of the experiments.
\section*{Acknowledgements}
This work has been supported in part by the NSF under grant no. DMR-0339147, by Research
Corporation, and by the University of Missouri Research Board. We gratefully acknowledge
discussions with J. Hoyos as well as the hospitality of the
Max-Planck-Institute for Physics of Complex Systems during part of this research.
\bibliographystyle{apsrev}
\section{Introduction}
The study of quantum phase transitions has greatly benefited from
developments in quantum information theory
\cite{REVIEWSOFMODERNPHYSICS,numericalmethods}. We know, for
example, that the extremum points of entanglement and other related
correlations coincide with phase transition points
\cite{nielsenIsing,Osterlohnature,VidalKitaevprl,jiancui,Kit}, and
that different phases may feature differing fidelity between
neighboring states
\cite{fidelity1,fidelity2,fidelity3,fidelity4,fidelity5,fidelity6}.
These observations have helped pioneer many alternative indicators
of phase transitions, allowing the tools of quantum information
science to be harnessed in the analysis of quantum many body
systems\cite{REVIEWSOFMODERNPHYSICS,numericalmethods}. The reverse,
however, remains understudied. If the concepts of quantum
information processing have such relevance to the study of quantum
phase transitions, one would expect that systems undergoing quantum
phase transition would also exhibit different operational properties
from the perspective of information processing. Yet, there remains
little insight into how such relations apply to quantum
information and computation.
In this paper, we demonstrate using the XY model that different
quantum phases have distinct operational significance with respect
to quantum information processing. We reveal that the differential
local convertibility of ground states undergoes distinct qualitative
change at points of phase transition. By differential local
convertibility of ground states, we refer to the following (Fig. 1):
A given physical system with an adjustable external parameter $g$ is
partitioned into two parties, Alice and Bob. Each party is limited
to local operations on their subsystems (which we call $A$ and $B$)
and classical inter-party communication, i.e., local operations and
classical communication (LOCC). The question is: \textit{can the
effect on the ground state caused by adiabatic perturbation of $g$
be achieved through LOCC by Alice and Bob?} Differential local
convertibility of ground states is significant. Should LOCC
operations between Alice and Bob be capable of simulating a
particular physical process, then such a process is of limited
computational power, i.e., it is incapable of generating any quantum
coherence between $A$ and $B$.
\begin{figure}
\epsfig{file=Figure-1.eps,width=85mm}
\caption{Differential local
convertibility illustration. Alice and Bob control the two
bi-partitions of a physical system whose ground state depends on a
Hamiltonian that can be varied via some external parameter $g$. In
one phase (panel a), the conversion from one ground state
$|G(g)\rangle$ to another $|G(g +\Delta)\rangle$ requires a quantum
channel between Alice and Bob, i.e, coherent interactions between
the two partitions are required. We say that this phase has no local
convertibility. After phase transition (panel b), Alice and Bob are
able to convert $|G(g)\rangle$ to $|G(g +\Delta)\rangle$ via only
local operations and classical communications, and local
convertibility becomes possible. If $\Delta\rightarrow0$, local
convertibility becomes differential local convertibility. This
implies that it is impossible to completely simulate the adiabatic
evolution of the ground state with respect to $g$ by LOCC in one
quantum phase but possible in the other.}
\label{fig1-sketch}
\end{figure}
We make use of the most powerful notion of differential local
convertibility, that of LOCC operations together with assisted
entanglement \cite{Nielsen, jonathanplenio}. Given some
infinitesimal $\Delta$, let $|G(g)\rangle_{AB}$ and
$|G(g+\Delta)\rangle_{AB}$ be the ground states of the given system
when the external parameter is set to be $g$ and $g + \Delta$
respectively. The necessary and sufficient condition for local
conversion between $|G(g)\rangle_{AB}$ and
$|G(g+\Delta)\rangle_{AB}$ is $S_{\alpha}(g)\geq
S_{\alpha}(g+\Delta)$ for all $\alpha$, where
\begin{eqnarray}
S_{\alpha}(g)=\frac{1}{1-\alpha}\log_2[Tr\rho_A^{\alpha}(g)]=\frac{1}{1-\alpha}\log_2\big[\sum_{i=1}^d\lambda_i^{\alpha}\big]
\end{eqnarray}
is the R\'{e}nyi entropy with parameter $\alpha$, $\rho_A(g)$ is the
reduced density matrix of $|G(g)\rangle_{AB}$ with respect to
Alice's subsystem, and $\{\lambda_i\}$ are the eigenvalues of
$\rho_A(g)$ in decreasing
order\cite{necessarysufficient1,necessarysufficient2,necessarysufficient3}.
Thus, if the R\'{e}nyi entropies of two states intersect for some
$\alpha$, they cannot be converted into each other by LOCC even in the
presence of ancillary entanglement\cite{ccf}. In the
$\Delta\rightarrow0^+$ limit, we may instead examine the sign of
$\partial_g S_{\alpha}(g)$ for all $\alpha$. If $\partial_g
S_{\alpha}(g)$ does not change sign with $\alpha$, an infinitesimal
increase of $g$ results in a global shift of $S_{\alpha}(g)$, with no
intersection between $S_{\alpha}(g+\Delta)$ and $S_{\alpha}(g)$.
Otherwise, an intersection must exist.
\section{Results}
\subsection{Transverse field Ising model.}
Before we consider the general $XY$ model, we highlight key ideas on
the transverse Ising model, which has the Hamiltonian
\begin{eqnarray}
H_I(g)=-\sum_{i=1}^N(\sigma_i^x\sigma_{i+1}^x+g\sigma_i^z),
\end{eqnarray}
where $\sigma^k$, for $k = x,y, z$ are the usual Pauli matrices and
periodic boundary conditions are assumed. The transverse Ising model
is one of the simplest models that exhibits a phase transition; it
therefore often serves as a test bed for applying new ideas and methods to
quantum phase transitions. Osterloh {\it et al.} have previously
shown that the derivative of the concurrence is an indicator of the
phase transition\cite{Osterlohnature}. Nielsen {\it et al.} have
also studied concurrence between two spins at zero or finite
temperature\cite{nielsenIsing}. Recently, the Ising chain with
frustration has been realized in experiment\cite{L.M.Duan}.
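For chains of modest length, the ground state and the eigenvalues $\{\lambda_i\}$
of the reduced density matrix can be obtained by brute-force exact diagonalization.
A minimal Python sketch (our own illustration; feasible only for small $N$):
\begin{verbatim}
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def kron_chain(ops):
    m = ops[0]
    for o in ops[1:]:
        m = np.kron(m, o)
    return m

def ising_ground_state(N, g):
    # H = -sum_i (sx_i sx_{i+1} + g sz_i), periodic boundaries, Eq. (2)
    H = np.zeros((2 ** N, 2 ** N))
    for i in range(N):
        xx = [I2] * N; xx[i] = sx; xx[(i + 1) % N] = sx
        z = [I2] * N; z[i] = sz
        H -= kron_chain(xx) + g * kron_chain(z)
    return np.linalg.eigh(H)[1][:, 0]

def schmidt_spectrum(psi, N, L):
    # eigenvalues of the reduced density matrix of the first L spins
    M = psi.reshape(2 ** L, 2 ** (N - L))
    return np.sort(np.linalg.svd(M, compute_uv=False) ** 2)[::-1]

lam = schmidt_spectrum(ising_ground_state(10, 0.8), 10, 5)
\end{verbatim}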
The transverse Ising model features two different quantum phases,
separated by a critical point at $g=1$. When $g<1$, the system
resides in the ferromagnetic (symmetry-broken) phase: it is ordered,
with a nonzero order parameter $\langle\sigma^x\rangle$ that breaks
the phase-flip symmetry $\Pi_i\sigma_i^z$. When $g>1$, the system
resides in the symmetric paramagnetic phase, in which
$\langle\sigma^x\rangle=0$.
There is a systematic qualitative difference in the
computational power afforded by perturbation of $g$ within these two
phases. In the paramagnetic phase,
$\partial_g S_{\alpha}(g)$ is negative for all $\alpha$, hence
increasing the external magnetic field can be simulated by LOCC. In the
ferromagnetic phase, $\partial_g S_{\alpha}(g)$ changes sign for certain
$\alpha$. Thus the ground states are not locally convertible, and perturbing
the magnetic field in either direction results in fundamentally non-local
quantum effects.
This result follows from studying how the R\'{e}nyi
entropy of the system behaves. From Eq.~(1), we see that the R\'{e}nyi entropy
contains all knowledge of $\{\lambda_i\}$\cite{Renyientropy}. For
large $\alpha$, $\lambda_i^\alpha$ vanishes when $\lambda_i$ is
small, and larger eigenvalues dominate. In the limit where
$\alpha\rightarrow\infty$, all but the largest eigenvalue
$\lambda_1$ may be neglected, such that
$S_{\infty}=-\log_2\lambda_1$. In contrast, for small values of
$\alpha$, smaller eigenvalues become as important as their larger
counterparts. In the $\alpha\rightarrow0^+$ limit, the R\'{e}nyi
entropy approaches the logarithm of the rank of the reduced
density matrix, i.e., the number of non-zero eigenvalues.
This observation motivates study of the eigenvalue spectrum. In
systems of finite size (Fig. 2 ($a$)), the largest eigenvalue
monotonically increases while the second monotonically decreases for
all $g$. All the other eigenvalues $\lambda_k$ exhibit a maximum at
some point $g_k$. From the scaling analysis (Fig. 2 ($b$) and
($c$)), we see that as we increase the size of the system, $g_k
\rightarrow 1$ for all $k$. Thus, in the thermodynamic limit,
$\lambda_k$ exhibits a maximum at the critical point of $g = 1$ for
all $k \geq 3$. Knowledge of this behavior provides the intuition behind our
claim.
\begin{figure}
\begin{center}
\epsfig{file=Figure-2.eps,width=90mm}
\end{center}
\caption{The four largest eigenvalues and their scaling analysis. (a) The
four largest eigenvalues of the transverse Ising ground state for the
$N=10$ case. The red line represents the largest eigenvalue $\lambda_1$,
and the black line is the second one $\lambda_2$. Note that we have
artificially magnified $\lambda_3$ (green) and $\lambda_4$ (blue) by
$10$ times for the sake of clarity since each subsequent eigenvalue
is approximately one order smaller than its predecessor. (b) The
scaling behaviors of the maximal points of the third eigenvalue.
When $N\rightarrow\infty$, the maximum points approach to the
critical point with certain acceptable error. The black dots are the
data and the red curves are the exponentially fit of these data. The
maximum point for the third eigenvalue
$g_3=0.495\times\exp(-\frac{N}{10.044})+1.09177$. (c) The maximum
point for the fourth eigenvalue
$g_4=-0.082\times\exp(-\frac{N}{5.527})+0.9996$. The maximums points
of smaller eigenvalues have similar behavior.}
\label{fig2-eigenvalues}
\end{figure}
In the ferromagnetic phase, i.e., $0<g<1$, $\partial_g
S_{\alpha}(g)$ takes on different signs for different $\alpha$. When
$\alpha \rightarrow0^+$, $S_{\alpha}$ tends to the logarithm of the
effective rank. From Fig. 2 we see that all but the two largest
eigenvalues increase with $g$, resulting in an increase of effective
rank. Thus $\partial_g S_{\alpha}(g) > 0$ for small $\alpha$. In
contrast, when $\alpha\rightarrow\infty$, $S_{\alpha} \rightarrow
-\log_2\lambda_1$. Since $\lambda_1$ increases with $g$ (Fig. 2),
$\partial_g S_{\alpha}(g) < 0$ for large $\alpha$. Therefore, there
is no differential local convertibility in the ordered phase.
In the paramagnetic phase, i.e., $g>1$, calculation yields that
$\partial_g S_{\alpha}(g)$ is negative for both limiting cases
considered above by similar reasoning. However, the intermediate
$\alpha$ between these two limits cannot be analyzed in a simple
way. The details and a formal proof of this result can be found in the
'Method' section, where it is shown that $\partial_g S_{\alpha}(g)$
still remains negative for all $\alpha>0$. Thus, differential local
convertibility exists in this phase.
These results indicate that at the critical point, there is
a distinct change in the nature of the ground state. Prior to the
critical point, a small perturbation of the external magnetic field
results in a change of the ground state that cannot be implemented
without two body quantum gates. In contrast, after phase transition,
any such perturbation may be simulated completely by LOCC.
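The sign structure described above can be scanned directly with the
exact-diagonalization helpers sketched earlier; the following Python fragment
(again our own illustration, with arbitrarily chosen grids and finite-difference
step $dg$) maps the sign of $\partial_g S_\alpha$ over the $(g,\alpha)$ plane:
\begin{verbatim}
import numpy as np

def sign_map(N=10, L=5, dg=1e-3,
             gs=np.linspace(0.2, 2.0, 46),
             alphas=np.logspace(-1.5, 1.5, 61)):
    # Sign of the finite-difference derivative dS_alpha/dg; reuses
    # ising_ground_state, schmidt_spectrum and renyi defined above.
    out = np.zeros((alphas.size, gs.size))
    for j, g in enumerate(gs):
        lo = schmidt_spectrum(ising_ground_state(N, g - dg), N, L)
        hi = schmidt_spectrum(ising_ground_state(N, g + dg), N, L)
        for i, a in enumerate(alphas):
            out[i, j] = np.sign(renyi(hi, a) - renyi(lo, a))
    return out
\end{verbatim}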
\subsection{XY model.}
We generalize our analysis to the $XY$ model, with Hamiltonian
\begin{eqnarray}
H=-\sum_i[\frac{1}{2}(1+\gamma)\sigma_i^x\sigma_{i+1}^x+\frac{1}{2}(1-\gamma)\sigma_i^y\sigma_{i+1}^y+ g\sigma_i^z],
\end{eqnarray}
for different fixed values of $\gamma > 0$. The transverse Ising
model thus corresponds to the special case $\gamma = 1$ of this general
class of models. For $\gamma \neq 1$, there
exists additional structure of interest in phase space beyond the
breaking of phase flip symmetry at $g = 1$. In particular, there
exists a circle, $g^2+\gamma^2 = 1$, on which the ground state is
fully separable. The functional form of ground state correlations
and entanglement are known to differ substantially on either side of
the circle\cite{McCoy,Korepin,TZWei}, which motivates the
perspective that the circle is a boundary between two differing
phases. Indeed, such a division already exists from the perspective
of an entanglement phase diagram, where different `phases' are
characterized by the presence and absence of parallel entanglement
\cite{luigi}.
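The same exact-diagonalization approach extends directly to the XY chain.
A short Python sketch (our own illustration, reusing the operators
sx, sz, I2 and the helper kron_chain defined above):
\begin{verbatim}
import numpy as np

sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])

def xy_ground_state(N, g, gamma):
    # H of Eq. (3) with periodic boundaries; complex because of sy
    H = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for i in range(N):
        for op, w in ((sx, 0.5 * (1 + gamma)),
                      (sy, 0.5 * (1 - gamma))):
            o = [I2] * N; o[i] = op; o[(i + 1) % N] = op
            H -= w * kron_chain(o)
        z = [I2] * N; z[i] = sz
        H -= g * kron_chain(z)
    return np.linalg.eigh(H)[1][:, 0]

# On the circle g**2 + gamma**2 = 1 the model has product ground
# states; they come in a degenerate pair, so eigh may return an
# (entangled) superposition of the two.
psi = xy_ground_state(8, 0.5, np.sqrt(3) / 2)
\end{verbatim}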
Analysis of local convertibility reveals that from the perspective
of computational power under adiabatic evolution, we may indeed
divide the system into three separate phases (Fig. 3). While the
disordered paramagnetic phase remains locally convertible, the local
convertibility of the ferromagnetic phase now depends on whether
$g^2+\gamma^2 > 1$. In particular, for each fixed $\gamma > 0$, the
system is locally non-convertible only when $g > \sqrt{1 -
\gamma^2}$. We summarize these results in a `local-convertibility
phase-diagram', where the ferromagnetic region is now divided into
components defined by their differential local convertibility.
\begin{figure}
\begin{center}
\epsfig{file=Figure-3.eps,width=85mm}
\end{center}
\caption{ XY model local convertibility phase diagram. Consideration
of differential local convertibility separates the XY model into three
phases, which we label phase 1A, phase 1B and phase 2. We consider
differential local convertibility for fixed values of $\gamma$ while
$g$ is perturbed. Differential local convertibility is featured
within both phase 1B and phase 2, but not phase 1A.}
\label{fig3-phasediagram}
\end{figure}
\subsection{Different partitions.}
Numerical evidence strongly suggests that our results are not
limited to a particular choice of bipartition. We examine the
differential local convertibility when both systems of interest are
partitioned in numerous other ways, where the two parties may share
an unequal distribution of spins (Fig. 4). The qualitative
properties of $\partial_g S_{\alpha}$ remain unchanged. While it is
impractical to analyze all $2^N$ possible choices of bipartition,
these results motivate the conjecture that differential local
convertibility is independent of our choice of partitions. Should
this be true, it has strong implications: the computational power of
adiabatic evolution differs drastically between phases. In one phase,
perturbation of the external field can be completely simulated by
LOCC operations on individual spins, with no coherent two-spin
interactions, while in the other phases, any perturbation of the
external field creates coherent interactions between any chosen
bipartition of the system.
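As an illustration, the following minimal Python sketch (not the code
used in this work) computes the R\'{e}nyi entropy across a bipartition
of a small transverse-field Ising chain by exact diagonalization, and
estimates $\partial_g S_{\alpha}$ by finite differences; the system
size is illustrative, and finite-size effects can shift the sign
boundary, as discussed in the Methods section.
\begin{verbatim}
# Minimal exact-diagonalization sketch (illustrative only).
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op_at(op, site, n):
    # Embed a single-site operator at `site` in an n-spin chain.
    out = np.eye(1)
    for k in range(n):
        out = np.kron(out, op if k == site else np.eye(2))
    return out

def ising(n, g):
    # H = -sum_i (sx_i sx_{i+1} + g sz_i), periodic boundary.
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        H -= op_at(sx, i, n) @ op_at(sx, (i + 1) % n, n)
        H -= g * op_at(sz, i, n)
    return H

def renyi(n, n_a, g, alpha):
    # Renyi entropy of the first n_a spins of the ground state.
    _, vecs = np.linalg.eigh(ising(n, g))
    psi = vecs[:, 0].reshape(2**n_a, 2**(n - n_a))
    lam = np.linalg.svd(psi, compute_uv=False)**2
    lam = lam[lam > 1e-14]
    return np.log(np.sum(lam**alpha)) / (1.0 - alpha)

# Finite-difference dS_alpha/dg on both sides of g = 1.
for g in (0.6, 1.4):
    for alpha in (0.5, 5.0):
        dS = (renyi(8, 4, g + 1e-4, alpha)
              - renyi(8, 4, g - 1e-4, alpha)) / 2e-4
        print(g, alpha, dS)
\end{verbatim}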
\begin{figure}
\begin{center}
\epsfig{file=Figure-4.eps,width=85mm}
\end{center}
\caption{The sign distribution of $\partial_g S_{\alpha}$ in Ising
model and XY model for different bi-partitions. The sign
distribution of $\partial_g S_{\alpha}$ on the $\alpha-g$ plane for
different bi-partitions on the systems. Panel $(a)$, $(b)$, $(c)$
correspond to Ising model of size $N=12$, where Alice possesses
$6,5,4$ of the spins respectively. $\partial_g S_{\alpha}$ is
negative in lighter regions and positive in the red regions.
Clearly, regardless of choice of bipartition, $\partial_g
S_{\alpha}$ is always negative for $g>1$ and takes on both negative
and positive values otherwise. Note that for very small $g$,
$\partial_g S_{\alpha}$ only becomes negative for very large
$\alpha$ and thus appears completely positive in the graph above.
The existence of negative $\partial_g S_{\alpha}$ can be verified by
analysis of $\partial_g S_{\alpha}$ in the $\alpha \rightarrow
\infty$ limit. The choice of bipartition affects only the shape of
the $\partial_g S_{\alpha}=0$ boundary, which is physically
unimportant. Panel $(d)$, $(e)$, $(f)$ correspond to XY model with
fixed $\gamma = \sqrt{3}/2$, $N=14$ and $L=7, 6, 5$ respectively.
Panel $(h)$, $(i)$, $(j)$ correspond to XY model with fixed $\gamma
= \sqrt{7}/4$, $N=18$ and $L=9,8,7$, respectively. Here the value
of $L$ represents a bipartition in which $L$ qubits are placed in
one bipartition and $N-L$ qubits in the other. The only region in
which $\partial S_\alpha/\partial g$ takes on both negative and
positive values is in phase 1A of Figure 4. Note that the transition
between phase 1A and 1B occurs at $g=0.5$ for $\gamma = \sqrt{3}/2$
(Point E in Fig. 3) and $g=0.75$ for $\gamma = \sqrt{7}/4$ (Point D
in Fig. 3).} \label{fig4-result}
\end{figure}
\section{Discussion}
The study of differential local convertibility of the ground state
gives direct operational significance to phase transitions in the
context of quantum information processing. For example, adiabatic
quantum computation (AQC) involves the adiabatic evolution of the
ground state of some Hamiltonian which features a parameter that
varies with time
\cite{Adiabaticquantumcomputation1,Adiabaticquantumcomputation2}.
This is instantly reminiscent of our study, which observes what
computational processes are required to simulate the adiabatic
evolution of the ground state under variance of an external
parameter in different quantum phases.
Specifically, AQC involves a system with Hamiltonian $(1-s) H_0 + s
H_p$, where the ground state of $H_0$ is simple to prepare, and the
ground state of $H_p$ solves a desired computational problem.
Computing the solution then involves a gradual increment of the
parameter $s$. By the adiabatic theorem, we arrive at our desired
solution provided the $s$ is varied slowly enough such that the
system remains in its ground state
\cite{Adiabaticquantumcomputation1,Adiabaticquantumcomputation2}. We can regard this process of computation from the perspective of
local convertibility and phase transitions. Should the system lie in
a phase where local convertibility exists, the increment of $s$ may
be simulated by LOCC. Thus AQC cannot have any computational
advantages over classical computation. Only in phases where no local
convertibility exists, can AQC have the potential to surpass
classical computation. Thus, a quantum phase transition could be
regarded as an indicator of the point beyond which AQC becomes useful.
In fact, the spin system studied in this paper is directly relevant
to a specific AQC algorithm. The problem of ``$2-SAT$ on a Ring:
Agree and Disagree'' features an adiabatic evolution involving the
Hamiltonian
\begin{eqnarray}
\widetilde{H}(s)=(1-s)\sum_{j=1}^N(1-\sigma_j^x)+s\sum_{j=1}^N\frac{1}{2}(1-\sigma_j^z\sigma_{j+1}^z),
\end{eqnarray}
where $s$ is slowly varied from $0$ to $1$
\cite{Adiabaticquantumcomputation1,Adiabaticquantumcomputation2}.
This is merely a rescaled version of the Ising chain studied here,
where the phase transition occurs at $s=\frac{2}{3}$. According to
the analysis above, the adiabatic evolution during the paramagnetic
phase can be simulated by local manipulations or classical
computation, whereas during the ferromagnetic phase no such reduction
of the adiabatic procedure is possible.
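For completeness, the quoted critical point follows from a simple
rearrangement (ours; constant terms dropped):
\begin{eqnarray*}
\widetilde{H}(s)=\mathrm{const}-\frac{s}{2}\sum_{j=1}^N\Big[\sigma_j^z\sigma_{j+1}^z+\frac{2(1-s)}{s}\,\sigma_j^x\Big],
\end{eqnarray*}
so that, up to a global basis rotation exchanging $\sigma^x$ and
$\sigma^z$, the effective transverse field is $g(s)=2(1-s)/s$, which
equals the critical value $g=1$ precisely at $s=2/3$.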
In this paper, we have demonstrated that the computational power of adiabatic evolution in the $XY$ model depends on which quantum phase it resides in. This surprising relation suggests that different quantum phases may not only have different physical properties, but may also display different computational properties. This hints that not only are the tools of quantum information useful as alternative signatures of quantum phase transitions, but that the study of quantum phase transitions may also offer additional insight into quantum information processing. This motivates the study of the quantum phases within artificial systems that correspond directly to well known adiabatic quantum algorithms, which may grant additional insight on how adiabatic computation relates to the physical properties of the system that implements it. There is much potential insight to be gained in applying the methods of analysis presented here to more complex physical systems featuring more complex quantum phase transitions.
In addition, differential local convertibility may also possess
significance beyond information processing. One of the proposed
indicators of a topological order involves coherent interaction
between subsystems that scales with the size of the
system\cite{wenbook,Wen}. In our picture, such an indicator could
translate to the requirement for non-LOCC operations within
appropriately chosen bipartitions. Thus, differential local
convertibility may serve as an additional tool for the analysis of
such order.
\section{Methods}
\subsection{Eigenvalue properties.}
For the transverse field Ising model, the largest eigenvalue
$\lambda_1$ monotonically increases while the second $\lambda_2$
monotonically decreases for all $g$ (Fig. 2). In the thermodynamic
limit all the other eigenvalues increase in the $g<1$ region and
decrease in the $g>1$ region. Moreover, the eigenvalues other than
the largest two are much smaller than $\lambda_1$ and $\lambda_2$.
Therefore we can average them when considering their contribution to
the R\'{e}nyi entropy. Thus the eigenvalues are assumed to be
$0.5+\delta, 0.5-\epsilon, (\epsilon-\delta)/(2^n-2),
(\epsilon-\delta)/(2^n-2), \dots$ when $g<1$, and
$1-\delta^{\prime}-\epsilon^{\prime}, \epsilon^{\prime},
\delta^{\prime}/(2^n-2), \dots$, when $g>1$, where $n$ is the
particle number belonging to Alice and certainly Bob has the other
$N-n$ particles. From the ordering
$\lambda_1>\lambda_2>\lambda_3 \cdots$, the positivity of these
eigenvalues, and their monotonic behavior in $g$, we can derive the
following relations:
$0<\delta<\epsilon<0.5, 0<\partial\delta/\partial
g<\partial\epsilon/\partial g,
0<\delta^{\prime}<\epsilon^{\prime}<0.5,
\partial\delta^{\prime}/\partial g<0$, and
$\partial\epsilon^{\prime}/\partial g<0$. Then we can prove the main
result for each phase region. Namely, in the $g<1$ phase,
$\partial_g S_{\alpha}$ is positive for small $\alpha$ but negative
for large $\alpha$; and for the $g>1$ phase, $\partial_g
S_{\alpha}<0$ for all $\alpha$.
\subsection{Ferromagnetic phase.}
In the ferromagnetic phase $g<1$, the eigenvalues are $0.5+\delta,
0.5-\epsilon, (\epsilon-\delta)/(2^n-2), (\epsilon-\delta)/(2^n-2),
\dots$. The R\'{e}nyi entropy is then
\begin{small}
\begin{equation}
S_{\alpha}=\frac{1}{1-\alpha}\log\Big[(0.5+\delta)^{\alpha}+(0.5-\epsilon)^{\alpha}
+(2^n-2)\big(\frac{\epsilon-\delta}{2^n-2}\big)^{\alpha}\Big],
\end{equation}
and
\begin{flalign}
\begin{split}
\frac{\partial S_{\alpha}}{\partial
g}=\frac{1}{1-\alpha}\frac{1}{(0.5+\delta)^{\alpha}+(0.5-\epsilon)^{\alpha}+(2^n-2)\big(\frac{\epsilon-\delta}{2^n-2}\big)^{\alpha}}\nonumber\\
\end{split}&
\end{flalign}
\begin{flalign}
\begin{split}
\times\alpha\Big[(0.5+\delta)^{\alpha-1}\frac{\partial\delta}{\partial
g}-(0.5-\epsilon)^{\alpha-1}\frac{\partial\epsilon}{\partial g}
+\big(\frac{\epsilon-\delta}{2^n-2}\big)^{\alpha-1}\big(\frac{\partial\epsilon}{\partial
g}-\frac{\partial\delta}{\partial g}\big)\Big]\nonumber\\
\end{split}&
\end{flalign}
\begin{flalign}
\begin{split}
\qquad
=\frac{\alpha}{1-\alpha}\frac{1}{(0.5+\delta)^{\alpha}+(0.5-\epsilon)^{\alpha}+(2^n-2)\big(\frac{\epsilon-\delta}{2^n-2}\big)^{\alpha}}\nonumber\\
\end{split}&
\end{flalign}
\begin{flalign}
\times\Big\{\frac{\partial\delta}{\partial
g}[(0.5+\delta)^{\alpha-1}-\big(\frac{\epsilon-\delta}{2^n-2}\big)^{\alpha-1}]-\nonumber\\
\frac{\partial\epsilon}{\partial
g}[(0.5-\epsilon)^{\alpha-1}-\big(\frac{\epsilon-\delta}{2^n-2}\big)^{\alpha-1}]\Big\}.
\end{flalign}
\end{small}
In the thermodynamic limit $N\rightarrow \infty$,
$(\epsilon-\delta)/(2^n-2)\rightarrow$ $0$.
When $0<\alpha<1$,
$\big(\frac{\epsilon-\delta}{2^n-2}\big)^{\alpha-1}\rightarrow
\infty$,
\begin{small}
\begin{eqnarray}
\frac{\partial S_{\alpha}}{\partial
g}=\frac{1}{1-\alpha}\frac{\alpha}{\epsilon-\delta}\big(\frac{\partial\epsilon}{\partial
g}-\frac{\partial\delta}{\partial g}\big)>0.
\end{eqnarray}
\end{small}
When $\alpha>1$,
$\big(\frac{\epsilon-\delta}{2^n-2}\big)^{\alpha-1}\rightarrow 0$,
\begin{small}
\begin{eqnarray}
\frac{\partial S_{\alpha}}{\partial
g}\sim\frac{1}{1-\alpha}[\frac{\partial\delta}{\partial
g}\big(1+\frac{\epsilon+\delta}{0.5-\epsilon}\big)^{\alpha-1}-\frac{\partial\epsilon}{\partial
g}].
\end{eqnarray}
\end{small}
As $\big(1+\frac{\epsilon+\delta}{0.5-\epsilon}\big)^{\alpha-1}>1$,
and $0<\partial\delta/\partial g< \partial\epsilon/ \partial g $, we
can see that the solution of $\partial S_{\alpha} / \partial g=0$
(labeled as $\alpha_0$) always exists in the region
$g<1\bigcap\alpha>1$, and $\partial S_{\alpha}/\partial g$ will be
negative as long as $\alpha>\alpha_0$. Moreover we can also see that
the smaller $g$ is, the smaller $\delta$ and $\epsilon$ are, and the
smaller $\big(1+(\epsilon+\delta)/(0.5-\epsilon)\big)$ is, and
therefore the larger $\alpha_0$ should be. This explains why we need
to examine larger values of $\alpha$ to find the crossing when $g$ is
very small.
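Explicitly, setting the large-$\alpha$ expression above to zero gives
(our rearrangement)
\begin{eqnarray*}
\alpha_0=1+\ln\Big(\frac{\partial\epsilon/\partial g}{\partial\delta/\partial g}\Big)\Big/\ln\Big(1+\frac{\epsilon+\delta}{0.5-\epsilon}\Big),
\end{eqnarray*}
which is finite and larger than $1$, since $\partial\epsilon/\partial
g>\partial\delta/\partial g>0$ and the base of the second logarithm
exceeds $1$; as $g$ decreases, $\epsilon$ and $\delta$ shrink, the
denominator tends to zero, and $\alpha_0$ diverges, consistent with
the behavior just described.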
Notice that in the above analysis we used the $N\rightarrow\infty$
condition in the $g<1$ region. We can also see in Fig. 4 that for
finite $N=12$ there is a small green area in the region
$\alpha<1\bigcap g<1$. However, in the above analysis of infinite
$N$, this area should be entirely red. This difference is due to
finite-size effects.
\subsection{Paramagnetic phase.}
In the paramagnetic phase, $g>1$. The eigenvalues are
$1-\delta^{\prime}-\epsilon^{\prime}, \epsilon^{\prime},
\delta^{\prime}/(2^n-2), \dots$. The R\'{e}nyi entropy
\begin{small}
\begin{eqnarray}
S_{\alpha}=\frac{1}{1-\alpha}\log[(1-\delta^{\prime}-\epsilon^{\prime})^{\alpha}+(\epsilon^{\prime})^{\alpha}+(2^n-2)\big(\frac{\delta^{\prime}}{2^n-2}\big)^{\alpha}].
\end{eqnarray}
\end{small}
And
\begin{small}
\begin{flalign}
\begin{split}
\frac{\partial S_{\alpha}}{\partial
g} = \frac{1}{1-\alpha}\frac{1}{(1-\delta^{\prime}-\epsilon^{\prime})^{\alpha}+(\epsilon^{\prime})^{\alpha}+(2^n-2)\big(\frac{\delta^{\prime}}{2^n-2}\big)^{\alpha}}\nonumber\\
\end{split}&
\end{flalign}
\begin{align}
\begin{split}
\times\alpha\Big[-(1-\epsilon^{\prime}-\delta^{\prime})^{\alpha-1}\big(\frac{\partial\delta^{\prime}}{\partial
g}+\frac{\partial\epsilon^{\prime}}{\partial g}\big)\nonumber\\
\end{split}&
\end{align}
\begin{align}
\begin{split}+(\epsilon^{\prime})^{\alpha-1}\frac{\partial\epsilon^{\prime}}{\partial
g}+\big(\frac{\delta^{\prime}}{2^n-2}\big)^{\alpha-1}\frac{\partial\delta^{\prime}}{\partial
g}\Big]\nonumber\\
\end{split}&
\end{align}
\begin{flalign}
\begin{split}
\qquad
=\frac{\alpha}{1-\alpha}\frac{(1-\delta^{\prime}-\epsilon^{\prime})^{\alpha-1}}{(1-\delta^{\prime}-\epsilon^{\prime})^{\alpha}+(\epsilon^{\prime})^{\alpha}+(2^n-2)\big(\frac{\delta^{\prime}}{2^n-2}\big)^{\alpha}}\nonumber\\
\end{split}&
\end{flalign}
\begin{flalign}
\begin{split}
\qquad \quad \times\Big\{\frac{\partial\delta^{\prime}}{\partial
g}\Big[\big(\frac{1}{2^n-2}\frac{\delta^{\prime}}{1-\delta^{\prime}-\epsilon^{\prime}}\big)^{\alpha-1}-1\Big]\nonumber\\
\end{split}&
\end{flalign}
\begin{align}
\begin{split}
+\frac{\partial\epsilon^{\prime}}{\partial
g}\Big[\big(\frac{\epsilon^{\prime}}{1-\epsilon^{\prime}-\delta^{\prime}}\big)^{\alpha-1}-1\Big]\Big\},
\end{split}&
\end{align}
\end{small}
where $\partial\delta^{\prime}/\partial
g,\partial\epsilon^{\prime}/\partial g<0$; and
$\epsilon^{\prime}/(1-\epsilon^{\prime}-\delta^{\prime}),\delta^{\prime}/(1-\epsilon^{\prime}-\delta^{\prime})(2^n-2)\in(0,1)$,
since $\lambda_1>\lambda_2>\lambda_3$.
So when $\alpha>1$, $[(\dots)^{\alpha-1}-1]<0$, $\{\dots\}>0$,
$\alpha/(1-\alpha)<0$. We have $\partial S_{\alpha}/\partial g<0$;
and when $0<\alpha<1$, $[(\dots)^{\alpha-1}-1]>0$, $\{\dots\}<0$,
$\alpha/(1-\alpha)>0$. We also have $\partial S_{\alpha}/\partial
g<0$. Hence, when $g>1$, $\partial S_{\alpha}/\partial g<0$ for all
$\alpha>0$. If we consider the full condition for local
convertibility, which includes the generalization of $\alpha$ to
negative values, it can also be proved easily that the
local-convertibility condition $\partial_g S_{\alpha}/\alpha>0$ for
all $\alpha$ is satisfied in the $g>1$ phase. As for the $g<1$
phase, the sign change at positive $\alpha$ already violates the
local-convertibility condition, so we do not need to consider the
negative $\alpha$ part. In fact,
generally speaking, for the study of \textit{differential local
convertibility}, we can only focus on the positive $\alpha$ part,
because the derivative of R\'{e}nyi entropy over the phase
transition parameter will necessarily generate a common factor
$\alpha$, which will cancel the same $\alpha$ in the denominator.
To conclude, in the $g>1$ phase $\partial S_{\alpha}/\partial g$ is
negative for all $\alpha$, and in the $g<1\bigcap\alpha<1$ region it
is positive, while in the $g<1\bigcap\alpha>1$ region it can be
either negative or positive with the boundary depending on the
solution of Eq.(4).
Acknowledgement--- The authors thank Z. Xu, W. Son and L. Amico for
helpful discussions. J. Cui and H. Fan thank NSFC grant (11175248)
and ``973'' program (2010CB922904) for financial support. J. Cui, M.
Gu, L. C. Kwek, M.F. Santos and V. Vedral thank the National
Research Foundation \& Ministry of Education, Singapore for
financial support.
Author Contributions--- J.C. carried out the numerical work and
calculation. H.F., J.C., L.C.K. and M.G. contributed to the
development of all the pictures. J.C., M.G. and L.C.K. drafted the
paper. All the authors conceived the research, discussed the
results, commented on and wrote up the manuscript.
Competing Interests--- The authors declare that they have no
competing financial interests.
Correspondence--- Correspondence and requests for materials should
be addressed to J.C.
|
1,477,468,750,963 | arxiv | \section{Introduction}
\label{sec:intro}
Recent studies indicate that,
by 2030,
the number of connected devices is expected to increase to 100 billion,
and that \ac{5G} mobile networks may support up to 1,000 times more data traffic than the \ac{4G} ones did in 2018.
However, the energy consumption of future networks is concerning.
Deployed \ac{5G} networks have been estimated to be approximately four times more energy efficient than 4G ones.
Nevertheless, their energy consumption is around three times higher,
due to the larger number of cells required to provide the same coverage at higher frequencies,
and the increased processing required by their larger bandwidths and more antennas~\cite{Huawei2020}.
It should be noted that,
on average,
the network \ac{OPEX} accounts for approximately 25\% of the total costs incurred by a \ac{MNO},
and that 90\% of it is spent on large energy bills~\cite{GSMA20205Genergy}.
Importantly,
more than 70\% of this energy has been estimated to be consumed by the \ac{RAN},
and in more details,
by the \acp{BS}~\cite{lopezperez2022survey}.
The energy challenge of \acp{MNO} is thus to meet the upcoming more challenging traffic demands and requirements with significantly less energy consumption and \ac{GHG} emissions than today to reduce the environmental impact of mobile networks,
and in turn, costs.
Third generation partnership project (3GPP) new radio (NR) Release 15 specified intra-NR network energy saving solutions
--similar to those developed for 3GPP long-term evolution (LTE)--
to decrease RAN energy consumption.
Moreover, 3GPP NR Release 17 has recently specified inter-system network energy saving solutions,
and is currently taking network energy saving as an artificial intelligence use case.
However, data collected from 3GPP LTE and NR networks have shown that these solutions are still not sufficient to fundamentally address the challenge of reducing energy consumption~\cite{CT2021report}.
For this reason,
3GPP NR Release 18 has recently approved a study item,
which attempts to develop a set of flexible and dynamic network energy saving solutions~\cite{CT2021report}.
Importantly,
this study item indicates that new 5G power consumption models are needed to accurately develop and optimize new energy saving solutions,
while also considering the complexity emerging from the implementation of state-of-the-art base station architectures.
In recent years,
many models for base station power consumption have been proposed in the literature.
The work in~\cite{Auer2011} proposed a widely used power consumption model,
which explicitly shows the linear relationship between the power transmitted by the BS and its consumed power.
This model was extended in \cite{Debaillie2015},
taking into account the \ac{mMIMO} architecture and energy saving methods.
However, the power consumption estimate discussed in that paper seems to be inaccurate~\cite{han2020energy} with an optimistic 40.5~W per \ac{mMIMO} BS.
The work in~\cite{Tombaz2015} further extended the model in~\cite{Auer2011} by considering a linear increase of the power consumption with the number of mMIMO transceivers.
A more complete and detailed description of the power consumption components was introduced in~\cite{bjornson2015optimal},
where the authors provided a model that considers the mMIMO architecture, downlink and uplink communication phases, as well as the number of multiplexed users per \ac{PRB},
and a large number of mMIMO components.
The power consumption of a system that uses multiple carriers was modeled in~\cite{Yu2015} using a linear model.
Finally, the work in~\cite{lopez2021energy} jointly considered mMIMO and multi-carrier capabilities,
such as carrier aggregation and its different aggregation capabilities.
Aiming at providing more accurate estimations,
validated in the field,
in our most recent work~\cite{piovesan2022machine},
we introduced a new analytical power consumption model for 5G \acp{AAU}
-- the highest power-consuming component of today's mobile networks --
based on a \ac{ML} framework, which builds on a large data collection campaign.
In this paper,
we present our \ac{ML} framework in detail,
providing a technical analysis of its accuracy, scalability, and generalization capabilities.
\section{5G AAU architecture}
\label{sec:AAUmodel}
The hardware architecture of a 5G \ac{AAU} is shown in Fig.~\ref{fig:AAUarchitecture}.
In particular,
in our \ac{AAU} architecture,
we assume that:
\begin{itemize}
\item
The AAU has a multi-carrier structure,
and uses \ac{MCPA} technology;
\item
The \ac{AAU} manages $C$ carriers deployed in $T$ different frequency bands;
\item
The \ac{AAU} comprises $T$ transceivers,
each operating a different frequency band,
and $M$ MCPAs,
one for each antenna port;
\item
A transceiver includes $M$ \ac{RF} chains,
one per antenna port,
which comprise a cascade of hardware components for analog signal processing,
such as filters and digital-to-analog converters;
\item
Antenna elements are passive.
For example, one wideband panel or $T$ antenna panels may be used per \ac{AAU};
\item
Deep dormancy, carrier shutdown, channel shutdown, and symbol shutdown are implemented,
each switching off distinct components of the \ac{AAU} (as shown in Fig.~\ref{fig:AAUarchitecture}).
\end{itemize}
Importantly,
it should be noted that the implementation of \acp{MCPA} leads to increased energy efficiency compared to single-carrier \acp{PA},
as handling multiple carriers through one `wider' \ac{PA} involves a larger amount of transmit power,
in turn permitting the \acp{MCPA} to operate in higher energy-efficiency regions.
Moreover, the static power consumption of the \acp{MCPA} increases sub-linearly with the number of carriers,
since part of the hardware components can be shared among them.
However, it should be noted that the implementation of \acp{MCPA} involves increased complexity in the management of the network energy saving methods and in the estimation of the power consumption when such methods are activated.
In fact, the deactivation of one carrier may not bring the expected energy savings,
if the \acp{MCPA} need to remain active to operate the co-deployed carriers.
\begin{figure}
\centering
\includegraphics[scale=0.8]{Figures/AAUarchitecture.png}
\caption{Architecture of an AAU with MCPAs handling 2 carriers in 2 different bands, which transmit over the same wideband antenna panel.}
\label{fig:AAUarchitecture}
\end{figure}
\section{ANN Model Architecture}
\label{sec:model}
In this section,
we describe the data collected during our measurement campaign,
and we provide a description of the ANN architecture designed for modeling and estimating the power consumption.
Moreover, we describe the identified loss function and the training of the ANN model parameters.
\subsection{Dataset}
The dataset used for training and testing the \ac{ANN} model is composed of hourly measurements collected during 12 days from a deployment of 7500 \ac{4G}/\ac{5G} \acp{AAU}.
Overall, 24 different types of \acp{AAU} are included.
The collected measurements contain 150 different features,
which can be classified into four main categories:
\begin{itemize}
\item \textit{Engineering parameters}:
Information related to the configuration of each \ac{AAU}
(e.g., \ac{AAU} type, number of \acp{TRX}, numbers of supported and configured carriers);
\item \textit{Traffic statistics}:
Information on the serviced traffic
(e.g., average number of active \acp{UE} per \ac{TTI}, number of used \acp{PRB}, traffic volume serviced);
\item \textit{Energy saving statistics}:
Information on activated energy saving modes \cite{lopezperez2022survey}
(e.g., duration of the carrier-shutdown, channel-shutdown, symbol shutdown and dormancy activation);
\item \textit{Power consumption statistics}:
Information on the power consumed by the \ac{AAU}.
\end{itemize}
\begin{figure}
\centering
\includegraphics[scale=0.65]{Figures/shap2.png}
\caption{Example of SHAP analysis performed on 4 of the available features in the collected measurement data.}
\label{fig:shap}
\end{figure}
Feature importance analysis has been extensively performed on the collected features to identify the most relevant ones for estimating power consumption.
The features that do not affect power consumption, as well as those highly correlated with the selected ones (and thus providing limited additional information), were discarded.
The analysis of the feature importance consisted of two phases:
i) a first phase,
in which gradient boosting models including different input features were trained, and
ii) a second phase,
in which the analysis of \ac{SHAP} values~\cite{NIPS2017_7062} was performed on such models.
The \ac{SHAP} value of each feature represents the change in the expected model prediction when conditioning on that feature.
As an example,
Fig.~\ref{fig:shap} shows the \ac{SHAP} values of four features,
namely the \ac{DL} \ac{PRB} load, the maximum transmit power, the number of \ac{MIMO} layers per \ac{PRB}, and the \ac{MCS}.
In more detail,
the figure indicates in which direction and how much each feature contributes to the model output as compared to the average model prediction.
The y-axis on the right side indicates the respective feature value (low values in blue color and high values in red).
Each scatter dot represents one instance in the data.
The analysis highlights that the \ac{DL} PRB load is the most important feature,
whereas the maximum transmit power is the second most important.
In fact, the knowledge of these two features allows the model to capture the amount of power transmitted by the AAU at different \ac{DL} \ac{PRB} load levels.
Specifically,
the model output is shown to increase when the DL PRB load and/or the maximum transmit power are increased.
Importantly,
the \ac{MCS} and the number of \ac{MIMO} layers per \ac{DL} \ac{PRB} show a large correlation with the \ac{DL} \ac{PRB} load,
meaning that the latter feature is sufficient to capture the energy consumption behavior.
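As a minimal sketch of this two-phase analysis (not the pipeline used
in this work; the feature names and the synthetic data below are
hypothetical placeholders), a gradient-boosting model can be fitted
and inspected with the SHAP library as follows:
\begin{verbatim}
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for the collected measurements.
df = pd.DataFrame({
    "dl_prb_load": rng.uniform(0, 1, 5000),
    "max_tx_power": rng.choice([40., 60., 80.], 5000),
    "mimo_layers_per_prb": rng.uniform(1, 4, 5000),
    "mcs": rng.integers(0, 28, 5000).astype(float),
})
df["aau_power_w"] = (150 + 3.0 * df.max_tx_power * df.dl_prb_load
                     + rng.normal(0, 5, 5000))

X, y = df.drop(columns="aau_power_w"), df["aau_power_w"]
model = GradientBoostingRegressor(n_estimators=200,
                                  max_depth=4).fit(X, y)

# SHAP value: change in the expected model prediction when
# conditioning on a feature; the beeswarm summary plot
# mirrors the analysis shown in the figure above.
explainer = shap.TreeExplainer(model)
shap.summary_plot(explainer.shap_values(X), X)
\end{verbatim}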
The extended analysis of the importance of the available features allowed us to identify the inputs needed for our \ac{ANN} model, which correspond to the type of \ac{AAU} and a set of characteristics for each of the carriers.
The complete list of selected features is shown in Table~\ref{tab:ML_inputs}.
\begin{table}[]
\begin{tabular}{@{}lll@{}}
\toprule
Class & Parameter & Type \\ \midrule
Engineering parameter & AAU type & Categorical \\
Engineering parameter & Number of TRXs & Numerical \\
Engineering parameter & Carrier transmission mode & Categorical \\
Engineering parameter & Carrier frequency & Numerical \\
Engineering parameter & Carrier bandwidth & Numerical \\
Engineering parameter & Carrier maximum transmit power & Numerical \\
Traffic statistics & Carrier DL PRB load & Numerical \\
Energy saving statistics & Duration of carrier shutdown & Numerical \\
Energy saving statistics & Duration of channel shutdown & Numerical \\
Energy saving statistics & Duration of symbol shutdown & Numerical \\
Energy saving statistics & Duration of deep dormancy & Numerical \\ \bottomrule
\end{tabular}
\caption{ANN model input parameters.}
\label{tab:ML_inputs}
\end{table}
\subsection{Inputs of the model}
\label{sec:inputlayer}
Each of the input features listed in Table~\ref{tab:ML_inputs} was pre-processed according to its type before being fed to the neural network.
The numerical features were normalized,
whereas the categorical ones were encoded using one-hot encoding.
Since an \ac{AAU} can operate multiple carriers through the same \acp{MCPA},
to make our \ac{ANN} model as general and flexible as possible,
it takes input data from $C^{\mathrm{MAX}}$ carriers,
which corresponds to the maximum number of carriers that can be managed by the most capable \ac{AAU} in the dataset.
When $C<C^{\mathrm{MAX}}$ carriers are deployed in an \ac{AAU},
all the input neurons related to the remaining $C^{\mathrm{MAX}}-C$ carriers are set to zero.
It is worth noting that this approach makes it possible to implement an \ac{ANN} model with a fixed number of input neurons,
which can be trained with data from all the AAUs in the dataset,
regardless of their number of configured carriers,
with a minimal loss in terms of accuracy,
as will be discussed in Section~\ref{sec:zeros}.
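A sketch of this fixed-length encoding is given below (the internal
layout of the per-carrier block is our assumption for illustration;
the constants match the dimensions quoted in the next subsection):
\begin{verbatim}
import numpy as np

N_AAU, C_MAX, F_PER_CARRIER = 24, 6, 10

def encode(aau_type_idx, carriers):
    # carriers: list of up to C_MAX arrays holding the
    # F_PER_CARRIER normalized per-carrier features;
    # absent carriers contribute all-zero blocks.
    x = np.zeros(N_AAU + C_MAX * F_PER_CARRIER)
    x[aau_type_idx] = 1.0      # one-hot AAU type
    for c, feats in enumerate(carriers):
        start = N_AAU + c * F_PER_CARRIER
        x[start:start + F_PER_CARRIER] = feats
    return x                   # length 84
\end{verbatim}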
\subsection{Outputs of the model}
The analysis of the collected data has highlighted that different power consumption values may be reported for the same input feature values.
This effect has multiple origins:
i) the presence of features slightly impacting the power consumption but currently not captured as inputs of the model,
ii) errors in the measurements or in the collection of the data, and
iii) tolerances of the hardware components, which affect their power consumption behavior.
To account for such noise,
we define the measured power consumption, $\bar{y}$, as $\bar{y} = y + n$,
where
$y$ is the power consumption for a given input configuration,
and $n$ is the noise due to the mentioned errors.
Based on the analysis of the available data,
the noise, $n$, can be assumed to be normally distributed with mean 0 and standard deviation $\sigma$.
It thus follows that the measured power consumption, $\bar{y}$, is normally distributed with mean $\mu=\mathbb{E}[y]$ and standard deviation $\sigma$.
The designed \ac{ANN} model estimates and outputs these two parameters, $\mu$ and $\sigma$.
Furthermore, it is worth highlighting that the output of these two parameters also allows computing a confidence interval for each power consumption estimate,
thus increasing the reliability of the whole process.
\subsection{Architecture of the model}
\begin{figure}
\centering
\includegraphics[scale=0.6]{Figures/PCmodel_NN_ICC.eps}
\caption{Architecture of the ANN.}
\label{fig:MLmodel}
\end{figure}
A multilayer perceptron is adopted as the basic architecture for the proposed \ac{ANN} model,
consisting of multiple fully connected layers of neurons~\cite{Goodfellow-et-al-2016}.
The overall structure of the proposed \ac{ANN} model is depicted in Fig.~\ref{fig:MLmodel}.
In general,
the input layer comprises $n_i=N_{\mathrm{AAU}}+10\cdot C^{\mathrm{MAX}}$ neurons,
where $N_{\mathrm{AAU}}$ is the number of \ac{AAU} types available in the dataset,
and thus modeled by the \ac{ANN},
and $C^{\mathrm{MAX}}$ is the maximum number of carriers of the most capable \ac{AAU},
as discussed earlier.
In our specific scenario,
we collected data for $N_{\mathrm{AAU}}=24$ different \ac{AAU} types,
and the maximum number of carriers of the most capable AAU, $C^{\mathrm{MAX}}$, is equal to 6.
Therefore, the input layer consists of $n_i=84$ neurons.
The input layer is followed by two hidden layers,
which are composed of 40 and 15 neurons, respectively.
These dimensions were chosen after an optimization process aimed at maximizing the accuracy of the model.
Finally, the output layer is composed of two neurons,
which capture the mean and standard deviation of the power consumption,
as explained earlier.
As both metrics must be positive,
the sigmoid activation function is adopted at the output layer.
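For concreteness, a PyTorch sketch of the described architecture is
given below (the hidden-layer activation function is our assumption,
as it is not specified above):
\begin{verbatim}
import torch
import torch.nn as nn

class AAUPowerModel(nn.Module):
    # 84 inputs -> 40 -> 15 -> 2 outputs (mu, sigma).
    def __init__(self, n_in=84):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 40), nn.ReLU(),
            nn.Linear(40, 15), nn.ReLU(),
            nn.Linear(15, 2), nn.Sigmoid())

    def forward(self, x):
        mu, sigma = self.net(x).unbind(dim=-1)
        return mu, sigma
\end{verbatim}
Note that the sigmoid constrains both outputs to $(0,1)$, which
presumes that the power measurements are normalized accordingly.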
\subsection{Training of the model}
The goal of the model optimization process is to minimize both the prediction error and the uncertainty.
In more detail,
the \ac{ANN} training process is considered successful if the statistical distribution of the power measurements predicted by the model for a given input, $x$, matches the distribution of the power measurements in the data.
Therefore, during the training phase,
the aim is to maximize the probability that the power consumption measurements, $\bar{y}$, belong to the distribution $\mathcal{N}(\mu,\sigma)$.
Since the power consumption, $\bar{y}$, follows a normal distribution,
this probability is computed as
\begin{equation}
P\left(\bar{y}|\mu,\sigma\right) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(\bar{y}-\mu)^2}{2\sigma^2}}.
\end{equation}
As most of the optimizers used to train \acp{ANN} are designed to solve minimization problems,
we consider the following loss function to train the ANN model:
\begin{equation}
l(\bar{y},\mu,\sigma) = - \log \left( P(\bar{y}|\mu,\sigma)\right) = \log(\sigma) + \frac{(\bar{y}-\mu)^2}{2\sigma^2}.
\label{eq:loss}
\end{equation}
It should be noted that this function reflects the goal of reducing both the prediction error and the related uncertainty.
In fact,
the first term is minimized when the standard deviation, $\sigma$, is low,
which means that the confidence in the estimation is high,
whereas the second term is minimized when the prediction error, $\bar{y}-\mu$, is reduced.
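Continuing the PyTorch sketch above, the loss function and one
training step read as follows (the small floor on $\sigma$ is our
addition to avoid $\log 0$; the batch below is a random placeholder):
\begin{verbatim}
def gaussian_nll(y, mu, sigma, eps=1e-6):
    sigma = sigma.clamp(min=eps)
    return (torch.log(sigma)
            + (y - mu) ** 2 / (2.0 * sigma ** 2)).mean()

model = AAUPowerModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 84)   # placeholder encoded inputs
y = torch.rand(32)       # placeholder normalized power values
mu, sigma = model(x)
loss = gaussian_nll(y, mu, sigma)
opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}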
Before the model training was performed,
the available data set was split into two parts:
a training set and a testing set.
The training set contains data collected for 10 days from our 7500 \acp{AAU},
whereas the testing set contains data collected for 2 days from the same \acp{AAU}.
In addition,
80\,\% of the training samples are randomly selected to train the \ac{ANN} model,
whereas the remaining 20\,\% are used for validating the model during the training phase.
Model training was carried out using the Adam version of the gradient descent algorithm~\cite{Goodfellow-et-al-2016},
and required 75 minutes to perform 1086 iterations with a learning rate $\alpha=0.001$.
Note that an early stopping method was implemented to stop the training after 200 epochs with no improvements in terms of validation loss.
\section{Experiments and Analysis}
In this section,
we provide an analysis of the error performance achieved by the ANN model.
Moreover, we present a set of experiments carried out to evaluate the generalization capabilities of the framework and its scalability related to multi-carrier architectures and AAU types.
Finally, we investigate the impact of data availability on the estimation performance.
\subsection{Overall model performance}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[xlabel={DL PRB load}, ylabel={Normalized power consumption},width=8.5cm,height=7cm, legend style={at={(0.05,0.95)},anchor=north west},legend cell align={left}]
\addplot[only marks, mark size=1pt, mark options={fill=gray,draw opacity=0}] table [x=x, y=r, col sep=comma] {Figures/plot.csv};
\addlegendentry{Ground truth}
\addplot[only marks, mark size=1pt,mark options={fill=red,draw opacity=0}] table [x=x, y=e, col sep=comma] {Figures/plot.csv};
\addlegendentry{Estimated}
\end{axis}
\end{tikzpicture}
\caption{True and estimated normalized power consumption vs DL PRB load for multiple BSs of a given type.}
\label{fig:truevsest}
\end{figure}
To assess the performance of the developed framework,
we used the ANN model to estimate the power consumption of all 7500 AAUs over the two testing days.
Then, we compared the estimated power consumption with the real measurements available in the data.
In this paper,
we adopt the \ac{MAE} as a metric to measure the absolute error,
and the \ac{MAPE} as a metric to evaluate the relative error.
Overall, the model achieved a \ac{MAE} of 10.94~W and a remarkably low \ac{MAPE} of 5.87~\% when estimating the power
consumed by each AAU across all hours of the test period.
As an example, Fig.~\ref{fig:truevsest} shows the real and estimated normalized power consumption for multiple AAUs of the same type. Note that the power consumption increases linearly with the DL PRB load and that three different slopes are observed, due to the presence of three different configurations of the maximum transmit power. The proposed ANN model accurately fits the power consumption for each of the three configurations.
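For reference, the two error metrics are the standard definitions:
\begin{verbatim}
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def mape(y_true, y_pred):
    return 100.0 * np.mean(np.abs(y_true - y_pred)
                           / np.abs(y_true))
\end{verbatim}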
\subsection{Multi-carrier generalization capabilities}
\label{sec:zeros}
As mentioned in Section~\ref{sec:inputlayer},
to make the ANN model general and work with any type of AAU,
the input layer is designed to take input data from $C^{\mathrm{MAX}}$ carriers.
When $C<C^{\mathrm{MAX}}$ carriers are deployed in the AAU,
all input neurons related to the remaining $C^{\mathrm{MAX}}-C$ carriers are set to zero.
It is worth noting that an alternative modeling approach consists of training multiple ANN models,
each of them supporting AAUs with a given number of carriers.
In this section,
we evaluate the performance loss due to such a general implementation of the \ac{ANN} model.
The performance analysis is performed by considering the following two models:
\begin{itemize}
\item \textit{Single-carrier ANN model}:
The model is tailored to \acp{AAU} in which a single-carrier is deployed
(i.e., the input layer is composed of 34 neurons),
and is thus exclusively trained with data collected from such \acp{AAU};
\item \textit{General ANN model}:
The model provides power consumption estimation for \acp{AAU} with up to $C^{\mathrm{MAX}}=6$ configured carriers
(i.e., the input layer is composed of 84 neurons),
and is trained with all available AAUs.
\end{itemize}
These two models have been tested by estimating the power consumption of all single-carrier \acp{AAU} available in the collected data.
In this single-carrier test,
the single-carrier ANN model achieves a \ac{MAE} of 10.11\,W and a \ac{MAPE} of 6.42\,\%,
whereas the general ANN model achieves a \ac{MAE} of 10.25\,W and a \ac{MAPE} of 6.54\,\%.
It should be noted that this performance differs from that presented in the previous section, as here we estimate the power consumption only of the single-carrier AAUs in our dataset. The general ANN model achieves slightly worse performance
(1.38\% loss in terms of MAE and 1.87\% loss in terms of MAPE),
as it is trained over a more heterogeneous set of data,
while also needing to capture the complex power consumption behaviors that emerge when considering multi-carrier architectures.
However, these errors are minimal,
and show that the devised general model can cope with a wider set of \acp{AAU} at the cost of a small performance loss.
Importantly,
it is worth stressing that the general \ac{ANN} model has the advantage of observing how power consumption depends on multiple input features in a wide variety of \ac{AAU} types,
and thus,
as we will see in the next section,
it can generalize among them.
\subsection{AAU type generalization capabilities}
In this section,
we analyze the capability of the ANN power consumption model to generalize over multiple types of AAU.
In this way,
we want to highlight the advantage of our modeling approach,
in which a single and general model is used to capture the power consumption of a large variety of AAU types and configurations.
To assess such capability,
we select the most popular AAU type in our data,
and we evaluate the generalization capabilities of the designed framework by analyzing the following models:
\begin{itemize}
\item \textit{Single-AAU ANN model}:
The model is trained --and can provide power estimations-- exclusively for the selected AAU type.
Moreover, the training data does not include any sample in which carrier shutdown is activated.
\item \textit{General ANN model}:
The model is trained with data collected by all the AAUs.
As in the previous case,
the training data related to the selected AAU does not include any sample in which carrier shutdown is activated.
However, training data related to other types of AAUs includes samples in which the carrier shutdown feature is activated.
\end{itemize}
The two models are tested to estimate the power consumption of the selected AAU over the testing set,
in which carrier shutdown is activated for some periods.
The single-AAU ANN model leads to poor accuracy
(i.e., MAE 57.82\,W, MAPE 10.04\,\%),
as it is not able to learn how to characterize the carrier shutdown feature due to the lack of relevant training data.
In contrast, the general ANN model provides improved performance
(i.e., \ac{MAE} 19.32\,W, \ac{MAPE} 3.59\,\%),
even though there is no training data covering the scenario in which carrier shutdown is activated for the selected \ac{AAU}.
We highlight that such improved performance is achieved thanks to the generalization capability of our \ac{ANN} model,
which allows capturing knowledge from many different types of \ac{AAU}.
\subsection{ANN scalability}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[xlabel={$N$}, ylabel={MAPE [\%]},width=8.5cm,height=5cm,grid=both, legend style={at={(0.05,0.95)},anchor=north west},legend cell align={left}]
\addplot[mark=*] coordinates {
(5, 5.914)
(7, 6.020)
(10, 6.1306)
(12, 6.2924)
(15,6.4545)
(17, 6.538)
(20,6.655)
};
\addlegendentry{fixed $c=1$}
\addplot[mark=none,color=red,style=dashed,mark=square*, mark options={solid}] coordinates {
(5, 5.914)
(7, 5.850)
(10,5.875)
(12,5.971)
(15,5.928)
(17,5.924)
(20,5.870)
};
\addlegendentry{scalable $c$}
\end{axis}
\end{tikzpicture}
\caption{MAPE achieved by the ANN model when trained/tested over $N$ AAUs types.}
\label{fig:performancePerProduct}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[xlabel={$N$}, ylabel={$c$},width=8.5cm,height=5cm,grid=both]
\addplot[mark=*] coordinates {
(5, 1)
(7,1.085)
(10,1.424)
(12,1.75)
(15,2.25)
(17,2.5)
(20,2.839)
};
\end{axis}
\end{tikzpicture}
\caption{ANN scaling factor, $c$, adopted for different number of AAU types included in the data, $N$.}
\label{fig:performanceScaling}
\end{figure}
The results discussed in the previous sections highlight that the proposed framework is capable of providing accurate estimations of power consumption when dealing with the complexity of multi-carrier \ac{AAU} architectures.
Importantly, the model is capable of capturing the power consumption behaviors of each AAU type considering 5G energy saving features.
In this section,
we analyze how the dimension of the ANN architecture should be scaled according to the number of \ac{AAU} types included in the data.
As a starting scenario,
we consider a dataset that includes 5 different types of \ac{AAU}.
Multiple \ac{ANN} shapes/sizes were trained and tested to identify the smallest \ac{ANN} providing a good estimation error.
The identified \ac{ANN} is composed of two hidden layers with, respectively, $l_1=12$ and $l_2=4$ neurons,
and it reaches \ac{MAE} 11.02 W and \ac{MAPE} 5.91\%.
The same \ac{ANN} architecture was trained and tested on datasets including a larger number of \ac{AAU} types.
Fig.~\ref{fig:performancePerProduct} shows in black the performance achieved when increasing the number of \ac{AAU} types in the dataset.
It can be seen that the estimation error deteriorates when increasing the number of \ac{AAU} types in the data.
In particular,
the \ac{MAPE} increases by 3.6\%, 9.1\% and 12.4\% when increasing the number of \ac{AAU} types to 10, 15 and 20, respectively.
This error performance degradation is explained by the fact that,
when increasing the number of \ac{AAU} types in the data,
the dimension of the \ac{ANN} architecture (i.e., the number of model parameters) is no longer sufficient to capture the power consumption behavior of the different AAU types and, thus, to successfully estimate their power consumption.
Therefore, the dimension of the \ac{ANN} must be properly scaled when increasing the number of \ac{AAU} types in the data.
To visualize this,
we consider a scaling factor for the \ac{ANN} architecture, $c$.
In more detail,
the first hidden layer has dimension $l_1=12\cdot c$,
while the second has dimension $l_2=4\cdot c$.
Different values of the scaling factor $c$ were tested when considering different numbers of \ac{AAU} types in the data,
to assess how the \ac{ANN} should be scaled to guarantee the \ac{MAPE} of the estimation to be within 1\% of the initial error of 5.91\%.
Fig.~\ref{fig:performanceScaling} shows the lowest value of the scaling factor $c$ that allows to meet the requirement for each number of \ac{AAU} types in the data.
The results indicate that linearly scaling the dimension of the \ac{ANN} allows us to preserve the accuracy of the estimation while increasing the number of \ac{AAU} types that need to be modeled.
\subsection{Training data availability}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[xlabel={AAU/AAU type}, ylabel={MAPE [\%]},width=8.5cm,height=5cm,grid=both]
\addplot[mark=none] coordinates {
(5.178, 15.0)
(10.594, 12.6)
(15.724, 9.9)
(23.420, 9.2)
(33.040, 8.6)
(43.943, 8.0)
(57.767, 7.6)
(70.523, 7.5)
(87.340, 7.4)
};
\end{axis}
\end{tikzpicture}
\caption{MAPE achieved by the ANN model when considering different number of AAUs per AAU type in the training dataset.}
\label{fig:AAUproduct}
\end{figure}
Collecting measurements from large network deployments can be challenging and time-consuming.
In this section,
we analyze how the amount of available training data affects the performance achieved by the \ac{ANN} model.
To perform the analysis,
we focus on 11~\ac{AAU} types for which more than 90~\acp{AAU} per type are available in the collected data.
During the training of the ANN model,
a varying number of AAUs was included for each AAU type,
while during the testing phase,
all the available AAUs were considered.
Fig.~\ref{fig:AAUproduct} shows the MAPE achieved by the ANN model when considering a different number of AAUs per AAU type in the training dataset.
The achieved error has a clear decreasing trend,
suggesting that including more \acp{AAU} in the training set is beneficial to improve the accuracy of the estimation.
However, after reaching 70~\acp{AAU} per AAU type,
adding more \acp{AAU} to the training data provides negligible gains (i.e., lower than 1\%).
\balance
\section{Conclusions}
\label{sec:conclusions}
In this paper,
we presented a power consumption model for 5G AAUs based on an ANN architecture.
The ANN model was trained with data collected from a large deployment,
which includes multiple types of AAU with different configurations.
Feature analysis allowed us to identify a set of input features for the model.
The analysis of the results highlighted that the model can achieve high accuracy,
with a MAPE less than 6\% when tested on all available AAUs in our data.
Moreover, the experiments highlighted the advantage of training a single general model over all the AAUs in the data,
which is able to capture and generalize the impact of multiple parameters on the power consumption and the benefit of energy saving schemes in complex multi-carrier architectures.
Importantly, the results provided good insights into how the ANN architecture should be scaled when needed to model more AAU types. Moreover, experiments showed that at least 70 AAUs per type should be included in the training to guarantee the achievement of good error performance.
\bibliographystyle{IEEEtran}
|
1,477,468,750,964 | arxiv |
\section{Introduction}
\label{sec:intro}
Following the discovery of the Higgs boson (\PH) by the ATLAS and CMS Collaborations~\cite{paper:Aad:2012,paper:Chatrchyan:2012,Chatrchyan:2013lba} at the CERN LHC, a thorough program of precise measurements~\cite{CMS:2021ugl,Sirunyan:2021fpv,CMS:2021kom} has been carried out to uncover possible deviations from the standard model (SM) or to decipher the nature of the Higgs sector. In particular, various exotic decays of the Higgs boson have been considered, in which small deviations in the Higgs boson decay
width or discovery of exotic decay modes could constitute evidence of beyond the SM (BSM) physics.
This paper describes a search for exotic decays of the Higgs boson ${\PH \to \PZ \PX}$ or ${\PH \to \PX \PX}$ in the four-lepton (electrons or muons) final state, using a sample of proton-proton collision data at a center-of-mass energy of 13\TeV recorded by the CMS experiment in 2016--2018. The analyzed data sample corresponds to an integrated luminosity of \ensuremath{137\fbinv}\xspace.
Here \PX represents a possible BSM particle that could decay into a pair of opposite-sign, same-flavor (OSSF) leptons. In this paper, we consider two specific BSM models.
In both models, leptonic decays of \PX and \PZ to either two muons or electrons give rise to the 4$\ell$ (where $4\ell$ may denote $4\PGm$, $2\Pe2\PGm$, or $4\Pe$) final states.
Assuming the narrow-width approximation for the decays of \PX, only the mass range $m_\PX < m_\PH - m_\PZ \approx 35\GeV$ ($m_\PX < m_\PH/2 \approx 62.5\GeV$)
is kinematically possible for $\PH \to \PZ \PX$ ($\PH \to \PX \PX$), where $\mass{\PH}$ and $\mass{\PZ}$ are the Higgs boson mass and Z boson mass, respectively. The decay channel $\Pp\Pp \to \PH \to 4\ell$ has a large signal-to-background ratio. This channel allows a complete reconstruction of the kinematics of the Higgs boson based on final-state decay particles.
In this analysis, a mass range of $4.0 < m_{\PX} < 35.0\GeV$ (62.5\GeV) is considered.
The first model considered, hereby referred to as the ``hidden Abelian Higgs model'' (HAHM), concerns theories with a hidden ``dark'' sector~\cite{Curtin:2014cca,Curtin:2013fra,Davoudiasl:2013aya,Davoudiasl:2012ag,Gopalakrishna:2008dv}, with the X particle identified as the dark photon (\ensuremath{\PZ_{\mathrm{D}}}\xspace), which mediates a dark $U(1)_{D}$ gauge symmetry that is spontaneously broken by
a dark Higgs mechanism. Interactions of the dark sector with SM particles can occur through a hypercharge portal via the kinetic-mixing parameter \ensuremath{\varepsilon}\xspace, or through a Higgs portal via the Higgs-mixing parameter $\kappa$, as shown in Fig.~\ref{fig:zdfeyn}. Details of this theory and subsequent phenomenological implications can be found in Ref.~\cite{Curtin:2014cca}. Several searches for \ensuremath{\PZ_{\mathrm{D}}}\xspace were previously performed by collider experiments, for example ATLAS~\cite{Aad:2015sva,Aaboud:2018fvk} and LHCb~\cite{Aaij:2017rft}. Other experiments, such as beam dump experiments, fixed target experiments, helioscopes, and cold dark matter direct detection experiments, provide complementary sensitivities to \ensuremath{\PZ_{\mathrm{D}}}\xspace. A summary of the experimental coverage of the HAHM model can be found in Refs.~\cite{Essig:2013lka,Beacham:2019nyx}.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.33\textwidth]{Figure_001-a.pdf}
\includegraphics[width=0.33\textwidth]{Figure_001-b.pdf}
\caption{Feynman diagrams for Higgs boson decay via the kinetic-mixing (left) or Higgs-mixing mechanism (right)~\cite{Curtin:2014cca}.
The symbol $\Ph$ represents the Higgs boson, and $s$ represents the dark Higgs boson. The symbol \ensuremath{\varepsilon}\xspace represents the kinetic-mixing
parameter while $\kappa$ represents the Higgs-mixing parameter.
\label{fig:zdfeyn}}
\end{figure*}
{\tolerance=800
The second model involves axion-like particles (ALPs), with \PX being a pseudoscalar gauge singlet \Pa.
Axions were originally proposed to address the strong CP problem~\cite{PhysRevLett.38.1440}.
Recently, ALPs were proposed to explain the observed anomaly in the magnetic moment of the muon~\cite{PhysRevLett.119.031802}. Theoretical
overviews of the ALP models can be found in Refs.~\cite{Georgi:1986df,Bauer:2017ris}. The models are formulated as an effective field theory
of ALPs coupled to various SM particles. In particular, the theory allows the coupling between the Higgs boson, Z boson, and the ALP field, or the Higgs
boson and the ALP field. These couplings are represented by the Wilson coefficients \ensuremath{C_{\PZ\PH}/\Lambda}\xspace and \ensuremath{C_{\Pa\PH}/\Lambda^2}\xspace, respectively, where $\Lambda$ is the decoupling energy scale in the
effective field theory, or the mass scale of new physics. The former (latter) coefficient gives rise to the exotic decay of $\PH \to \PZ \Pa$ ($\Pa \Pa$). Various experimental searches for $\PH \to \Pa \Pa$ have been performed~\cite{Chatrchyan:2012cg, Aad:2015bua, Khachatryan:2015nba, Khachatryan:2017mnf,ATLAS:2021hbr,ATLAS:2021ldb}.
Recently a direct search for $\PH \to \PZ \Pa$ has been performed targeting a signature with a light and hadronically decaying resonance $\Pa$ with ${\mass{\Pa} < 4\GeV}$~\cite{PhysRevLett.125.221802}.
The present search provides complementary coverage of the phase space of the ALP model with mass greater than 4\GeV.
\par}
This paper is organized as follows. Section~\ref{sec:detector} describes the CMS detector and event reconstruction algorithms.
Section~\ref{sec:dataset} outlines the collision data used and various software packages used to generate the samples of simulated
events. Section~\ref{sec:evtselection} summarizes the selection criteria and the categorization of signal events, and Section~\ref{sec:bkg} describes
the reducible background estimation method. Section~\ref{sec:sys} describes the various sources of systematic uncertainties in the
search. Finally, results and interpretations are detailed in Section~\ref{sec:result}, and a summary is given in Section~\ref{sec:summary}.
Tabulated results are provided in HEPDATA~\cite{hepdata}.
\section{The CMS detector and event reconstruction}
\label{sec:detector}
The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter,
providing a magnetic field of 3.8\unit{T}. Within the solenoid volume are a silicon pixel and strip
tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator
hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters
extend the pseudorapidity ($\eta$) coverage provided by the barrel and endcap detectors. Muons are detected
in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid. A more
detailed description of the CMS detector, together with a definition of the coordinate system used
and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}.
Events of interest are selected using a two-tiered trigger system~\cite{Khachatryan:2016bia}.
The first level, composed of custom hardware processors, uses information from the calorimeters
and muon detectors to select events at a rate of around 100\unit{kHz} within a fixed time interval of about 4\mus.
The second level, known as the high-level trigger, consists of a farm of processors running a version of
the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1\unit{kHz}
before data storage.
{\tolerance=800
The candidate vertex with the largest value of summed physics-object $\pt^2$ (where \pt is the transverse momentum) is taken to be the primary $\Pp\Pp$
interaction vertex. The physics objects are the jets, clustered using the jet finding algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma}
with the tracks assigned to candidate vertices as inputs, and the associated missing transverse momentum, taken as the
negative vector sum of the \pt of those jets.
\par}
The particle-flow (PF) algorithm~\cite{CMS-PRF-14-001} aims to reconstruct and identify each individual particle in an event (PF candidate), with an
optimized combination of information from the various elements of the CMS detector. The energy of photons is obtained from the ECAL measurement.
The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker,
the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the
electron track. The energy of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from
a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for the response function of
the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.
The missing transverse momentum vector \ptvecmiss is computed as the negative vector sum of the transverse momenta of all the PF
candidates in an event, and its magnitude is denoted as \ptmiss~\cite{Sirunyan:2019kia}. The \ptvecmiss is modified to account for corrections
to the energy scale of the reconstructed jets in the event.
Muons in the four lepton final state are measured in the range ${\abs{\eta} < 2.4}$, with detection planes made using three technologies: drift tubes,
cathode strip chambers, and resistive plate chambers. The single-muon trigger efficiency exceeds 90\% over the full $\eta$ range, and the
efficiency to reconstruct and identify muons is greater than 96\%. Matching muons to tracks measured in the silicon tracker results in a
relative \pt resolution, for muons with \pt up to 100\GeV, of 1\% in the barrel and 3\% in the endcaps~\cite{Sirunyan:2018fpa}.
{\tolerance=800
Electrons in the four lepton final state with ${\pt > 7\GeV}$ and ${\abs{\eta} < 2.5}$ are identified by a multivariate discriminant, which
is constructed by observables related to the bremsstrahlung along the electron trajectory, ECAL energy measurements, electromagnetic showers, missing pixel detector hits,
and the photon conversion vertex fit probability~\cite{CMS:2020uim}.
The electron momentum is estimated by combining the energy measurement in the ECAL with the momentum measurement in the tracker. The momentum resolution for
electrons with ${\pt \approx 45\GeV}$ from ${\PZ \to \Pe \Pe}$ decays ranges from 1.7 to 4.5\%. It is generally better in the barrel region than in the
endcaps, and also depends on the bremsstrahlung energy emitted by the electron as it traverses the material in front of the ECAL.
The dielectron mass resolution for ${\PZ \to \Pe \Pe}$ decays when both electrons are in the ECAL barrel (endcap) is 1.9\% (2.9\%).
\par}
This analysis focuses on promptly produced signal processes. To reduce the contributions from leptons arising from hadron decays within jets, a requirement is imposed on each lepton candidate using a variable defined as:
\begin{linenomath}
\begin{equation}
I^{\ell} = \frac{\sum \pt^{\text{charged}}+\max\Bigl[0,\sum \pt^{\text{neutral}}+\sum \pt^{\PGg}-\pt^{\text{PU}}\Bigr]}{\pt^{\ell}}
\end{equation}
\end{linenomath}
where the sums run over the PF candidates within a cone of radius ${R = \sqrt{\smash[b]{\Delta\eta^2+\Delta\phi^2}} < 0.3}$ around the lepton ($\phi$ being the azimuthal angle in radians), and $\pt^{i}$ denotes the transverse momentum of each particle of type $i$: charged hadrons, neutral hadrons, photons, or an estimate of the contribution from particles originating from overlapping proton-proton interactions (pileup)~\cite{Sirunyan:2017exp}. For muons, the isolation is required to satisfy ${I^{\PGm} < 0.35}$.
For electrons, this variable is included in the multivariate discriminant for datasets in 2017 and 2018, while for the dataset in 2016, an isolation requirement ${I^{\Pe} < 0.35}$
is imposed on each electron candidate. In addition, the three-dimensional impact parameter of electrons and muons is required to be consistent with the primary collision vertex.
The requirement implies negligible acceptance for signal models with long-lived \PX.
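As a purely illustrative sketch (written in Python; the PF-candidate container and attribute names such as \texttt{kind} and \texttt{pt\_pu} are hypothetical and not part of the CMS software), the isolation sum can be expressed as:
\begin{verbatim}
import math

def lepton_isolation(lep, pf_candidates, r_cone=0.3):
    """Sketch of the PF isolation variable for one lepton candidate."""
    sum_ch = sum_nh = sum_ph = 0.0
    for cand in pf_candidates:
        deta = cand.eta - lep.eta
        dphi = math.remainder(cand.phi - lep.phi, 2.0 * math.pi)
        if math.hypot(deta, dphi) >= r_cone:
            continue  # keep only candidates inside the cone
        if cand.kind == "charged_hadron":
            sum_ch += cand.pt
        elif cand.kind == "neutral_hadron":
            sum_nh += cand.pt
        elif cand.kind == "photon":
            sum_ph += cand.pt
    # pileup estimate subtracted from the neutral component only
    neutral = max(0.0, sum_nh + sum_ph - lep.pt_pu)
    return (sum_ch + neutral) / lep.pt
\end{verbatim}
A muon candidate would then be retained if \texttt{lepton\_isolation(mu, cands) < 0.35}.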
An algorithm is utilized to correct for effects arising from final-state radiation (FSR) from leptons. PF-reconstructed photons are considered as FSR candidates if they satisfy the requirement ${\pt^{\PGg} > 2\GeV}$ and ${I^{\PGg} < 1.8}$, where $I^{\PGg}$ is calculated similarly to the lepton isolation variable. Then each FSR candidate is assigned to the closest lepton in the event. The candidates are further required to have ${\Delta R(\PGg,\ell)/(\pt^{\PGg})^2 < 0.012\GeV^{-2}}$ and ${\Delta R(\PGg,\ell) < 0.5}$. These candidates are excluded from the calculation of the lepton isolation variables.
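A minimal sketch of this FSR-recovery logic (again with hypothetical attribute names, and with the photon isolation value assumed to be precomputed) could read:
\begin{verbatim}
import math

def delta_r(a, b):
    """Angular separation of two objects with eta and phi attributes."""
    dphi = math.remainder(a.phi - b.phi, 2.0 * math.pi)
    return math.hypot(a.eta - b.eta, dphi)

def recover_fsr(photons, leptons):
    """Attach each selected FSR photon candidate to its nearest lepton."""
    pairs = []
    for ph in photons:
        if ph.pt <= 2.0 or ph.iso >= 1.8:         # pt and isolation cuts
            continue
        lep = min(leptons, key=lambda l: delta_r(ph, l))
        dr = delta_r(ph, lep)
        if dr < 0.5 and dr / ph.pt ** 2 < 0.012:  # supplemental criteria
            pairs.append((ph, lep))
    return pairs
\end{verbatim}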
Lepton reconstruction and selection efficiencies are measured in data by a ``tag-and-probe'' technique with an inclusive sample of \PZ boson events~\cite{Khachatryan:2010xn}. The differences between the efficiencies in data and simulation are observed to be around 1--4\%, depending on \pt and $\eta$ of the lepton considered. The differences are used to correct lepton efficiencies in simulation.
\section{Data and simulated samples}
\label{sec:dataset}
Leading order (LO) signal samples for the physics processes $\Pp\Pp \to \PH \to \PZ \ensuremath{\PZ_{\mathrm{D}}}\xspace (\ensuremath{\PZ_{\mathrm{D}}}\xspace \ensuremath{\PZ_{\mathrm{D}}}\xspace) \to 4\ell$, where $\ell = (\Pe,\PGm)$, are generated using the \MGvATNLO 2.2.2 (2.4.2)~\cite{MADGRAPH5,Alwall:2007fs,Frederix:2012ps}
generator for 2016 (2017 and 2018), with the HAHM~\cite{Curtin:2014cca} implemented at LO.
Cross sections for each \ensuremath{\PZ_{\mathrm{D}}}\xspace signal are calculated by multiplying the next-to-next-to-next-to-leading order (NNNLO) Higgs production
cross section~\cite{Anastasiou:2016cez} by the branching fraction of $\PH \to \PZ \ensuremath{\PZ_{\mathrm{D}}}\xspace$ and $\PH \to \ensuremath{\PZ_{\mathrm{D}}}\xspace \ensuremath{\PZ_{\mathrm{D}}}\xspace$,
respectively~\cite{Curtin:2014cca}. Final states with \PGt leptons are neglected as their contribution to the signal region yield
is below $1\%$. Signal contributions from vector-boson fusion and associated production with a top quark pair or a vector boson are also
omitted.
{\tolerance=800
The SM Higgs boson simulation samples, which include gluon fusion, vector boson fusion, and associated production with a top quark pair or a vector boson, and the simulated
$\PZ\PZ$ background from quark-antiquark annihilation are generated at next-to-leading order (NLO) in
perturbative quantum chromodynamics with \POWHEG~v2~\cite{Bagnaschi:2011tu,Nason:2004rx,Frixione:2007vw,Alioli:2010xd}.
The cross section for the dominant production mode, gluon fusion, is taken at NNNLO~\cite{Anastasiou:2016cez}.
\par}
Decays of the Higgs boson to four leptons are simulated with \textsc{JHUGen}\xspace 7.0.2~\cite{PhysRevD.81.075022,PhysRevD.86.095031}.
The nonresonant $\Pg\Pg\to \PZ\PZ$ process is simulated at LO with \MCFM 7.0.1~\cite{Campbell:2019dru}.
NLO correction factors~\cite{Grazzini:2017mhc} are applied to the $\Pg\Pg\to \PZ\PZ$ process.
{\tolerance=800
Minor backgrounds from \ensuremath{\PQt\PAQt\PZ}\xspace and triboson production processes are also simulated at LO and NLO, respectively, with the \MGvATNLO 2.2.2 (2.4.2)~\cite{MADGRAPH5,Alwall:2007fs,Frederix:2012ps}
generator for 2016 (2017 and 2018).
\par}
{\tolerance=1200
The set of parton distribution functions (PDFs) used was NNPDF3.0~\cite{Ball:2014uwa} (NNPDF3.1~\cite{Ball:2017nwa}) for the 2016 (2017 and 2018) simulation.
Parton showering and hadronization are simulated using the \PYTHIA~8.230 generator~\cite{Sjostrand:2014zea}
with the CUETP8M1 (CP5) underlying event tune for the 2016 (2017 and 2018) simulation~\cite{Khachatryan:2015pea,Sirunyan:2019dfx}.
The response of the CMS detector is modeled using the \GEANTfour program~\cite{AGOSTINELLI2003250,1610988}.
Simulated events are reweighted according to a specified instantaneous luminosity and an average number of pileup events.
\par}
\section{Event selection}
\label{sec:evtselection}
In the trigger system, events are required to have more than two leptons.
The overall trigger efficiency is measured in data using a sample of $4\ell$ events selected by single-lepton triggers;
it is found to be larger than 99\% and agrees with the simulation within 5\%.
A set of requirements is applied to maximize the sensitivity of the search for a potential signal in the
\ensuremath{\PZ \PX}\xspace and \ensuremath{\PX \PX}\xspace event topologies. In both cases, at least four well-identified and isolated leptons from the primary
vertex are required, possibly accompanied by an FSR photon. Each muon (electron) is required to have $\pt > 5\GeV$ (7\GeV). All four leptons must be separated from each
other by $\Delta R(\ell_i,\ell_j) > 0.02$. The leading (subleading) lepton \pt is required to satisfy $\pt > 20\GeV$ (10\GeV).
The four-lepton invariant mass $m_{4\ell}$ is required to be within $118 < m_{4\ell} < 130\GeV$.
To further suppress background contributions from hadron decays in jet fragmentation or from the decay of low-mass
resonances, all opposite-charge lepton pairs, regardless of lepton flavor, are required to satisfy
$m_{\ell^+ \ell^-} > 4\GeV$.
For each event in the \ensuremath{\PZ \PX}\xspace and \ensuremath{\PX \PX}\xspace searches, dilepton pair candidates are formed by considering all pairs of opposite-sign, same-flavor (OSSF) leptons.
The dilepton invariant mass $m_{\ell^{+} \ell^{-}}$ for each candidate is required to be within
$4 < m_{\ell^{+}\ell^{-}} < 120\GeV$, excluding the mass window around the $\PQb\PAQb$ bound states of the $\Upsilon$ ($8.0 < m_{\ell^{+}\ell^{-}} < 11.5\GeV$).
{\tolerance=800
Two dilepton candidates are then paired to form a \ensuremath{\PZ \PX}\xspace or \ensuremath{\PX \PX}\xspace event candidate.
For the \ensuremath{\PZ \PX}\xspace search, \ensuremath{\PZ_1}\xspace is the OSSF dilepton pair with an
invariant mass closest to the \PZ boson mass~\cite{10.1093/ptep/ptaa104} (representing \PZ in \ensuremath{\PZ \PX}\xspace), and
\ensuremath{\PZ_2}\xspace is the other pair (\PX). For the \ensuremath{\PX \PX}\xspace search, \ensuremath{\PZ_1}\xspace is the OSSF dilepton pair with
the larger invariant mass, and \ensuremath{\PZ_2}\xspace is the lower-mass pair. For the \ensuremath{\PZ \PX}\xspace search, $\mass{\ensuremath{\PZ_1}\xspace}$ is required
to be larger than 40\GeV. For the \ensuremath{\PX \PX}\xspace search, $\mass{\ensuremath{\PZ_1}\xspace}$ and $\mass{\ensuremath{\PZ_2}\xspace}$ must lie between 4 and 62.5\GeV.
For events with more than four selected leptons, the combination of four leptons with $\mass{\ensuremath{\PZ_1}\xspace}$ closest to the \PZ boson mass
is used for the \ensuremath{\PZ \PX}\xspace candidate, while the combination with the smallest value of $(\mass{\ensuremath{\PZ_1}\xspace}-\mass{\ensuremath{\PZ_2}\xspace})/(\mass{\ensuremath{\PZ_1}\xspace}+\mass{\ensuremath{\PZ_2}\xspace})$ is used to select \ensuremath{\PX \PX}\xspace candidates with similar invariant masses.
\par}
Four final-state lepton categories can be defined as \ensuremath{4\PGm}\xspace, \ensuremath{2\PGm2\Pe}\xspace, \ensuremath{4\Pe}\xspace, \ensuremath{2\Pe2\PGm}\xspace,
where the order of lepton flavors corresponds to the \ensuremath{\PZ_1}\xspace and \ensuremath{\PZ_2}\xspace flavors.
For the \ensuremath{4\PGm}\xspace and \ensuremath{4\Pe}\xspace final states, one alternative pairing of the
four leptons is possible, labelled by $\PZ_\text{a}$ and
$\PZ_\text{b}$.
For the \ensuremath{\PZ \PX}\xspace search, events with $m_{\PZ_\text{b}} < 12\GeV$ and $m_{\PZ_\text{a}}$
closer to the \PZ boson mass than $\mass{\ensuremath{\PZ_1}\xspace}$ are discarded to suppress background
contributions from on-shell \PZ bosons and low-mass dilepton resonances.
For the \ensuremath{\PX \PX}\xspace search, the \ensuremath{\PX \PX}\xspace candidate with the smallest value of
$(\mass{\ensuremath{\PZ_1}\xspace}-\mass{\ensuremath{\PZ_2}\xspace})/(\mass{\ensuremath{\PZ_1}\xspace}+\mass{\ensuremath{\PZ_2}\xspace})$ is chosen.
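For illustration only, the pairing logic of this section can be sketched as follows (Python; the lepton objects and the invariant-mass helper \texttt{mass} are hypothetical, and only the \ensuremath{\PZ \PX}\xspace assignment is shown):
\begin{verbatim}
from itertools import combinations

M_Z = 91.19  # Z boson mass in GeV

def ossf_pairs(leptons, mass):
    """All OSSF dilepton candidates passing the mass requirements."""
    return [(a, b) for a, b in combinations(leptons, 2)
            if a.charge == -b.charge and a.flavor == b.flavor
            and 4.0 < mass(a, b) < 120.0
            and not (8.0 < mass(a, b) < 11.5)]   # Upsilon veto

def pair_zx(leptons, mass):
    """Z1 = pair closest to the Z mass; for more than four leptons,
    keep the combination whose Z1 is closest to the Z mass."""
    best = None
    for p1, p2 in combinations(ossf_pairs(leptons, mass), 2):
        if set(p1) & set(p2):
            continue  # the two pairs must not share a lepton
        if abs(mass(*p1) - M_Z) <= abs(mass(*p2) - M_Z):
            z1, z2 = p1, p2
        else:
            z1, z2 = p2, p1
        if best is None or abs(mass(*z1) - M_Z) < abs(mass(*best[0]) - M_Z):
            best = (z1, z2)
    return best  # None if no valid candidate exists
\end{verbatim}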
\section{Background estimation}
\label{sec:bkg}
\subsection{Irreducible background estimation}
Irreducible backgrounds for this search come from processes including an SM Higgs boson, as well as nonresonant production of $\PZ\PZ$ via quark-antiquark annihilation or gluon fusion,
and rare backgrounds such as $\PQt\PAQt+\PZ$ and triboson production. These backgrounds are estimated from simulation. Details of the simulation used for each of the
backgrounds are described in Section~\ref{sec:dataset}.
\subsection{Reducible background estimation}
\label{sec:zx}
The reducible backgrounds in the $4\ell$ final state can arise from the leptonic decays of heavy-flavor hadrons,
in-flight decays of light mesons within jets, charged hadrons misidentified as electrons when in proximity
to a $\PGpz$, and photon conversions. These backgrounds primarily arise from the \ensuremath{\PZ+\text{jets}}\xspace process. Additional physics processes
with kinematics similar to the signal include \ensuremath{\PQt\PAQt}\xspace, \ensuremath{\PZ\PGg}\xspace, and \ensuremath{\PW\PZ}\xspace.
Two dedicated control regions are used to estimate the contribution from these backgrounds.
The first (second) control region consists of events with two (three) leptons passing the lepton identification and isolation requirements
and two (one) leptons failing the requirements, and is denoted as the 2P2F (3P1F) region.
Backgrounds with only two prompt leptons, such as \ensuremath{\PZ+\text{jets}}\xspace and \ensuremath{\PQt\PAQt}\xspace, are estimated by the 2P2F region,
while backgrounds with three prompt leptons, such as \ensuremath{\PW\PZ}\xspace and \ensuremath{\PZ\PGg}\xspace with the photon converting to an electron pair,
are estimated by the 3P1F region. Other than the lepton requirements, the 3P1F and 2P2F regions follow the same event selection and alternative pairing
algorithms as in the signal region to closely mimic its kinematics.
The lepton misidentification rates $f_{\PGm}$ and $f_{\Pe}$ are measured as a function of lepton \pt and $\eta$ with a sample which
includes a \PZ candidate, formed by a pair of leptons passing the selection requirement of the analysis, and an additional lepton passing
a relaxed requirement. These rates are measured separately in the data samples from 2016, 2017, and 2018. In addition, the mass of the \PZ candidate is required to satisfy the condition $\abs{\mass{\ensuremath{\PZ_1}\xspace}-\mass{\PZ}} < 7\GeV$ to reduce contributions
from \ensuremath{\PW\PZ}\xspace and \ensuremath{\PQt\PAQt}\xspace processes, and \ptmiss is required to be less than 25\GeV.
To estimate the background contribution in the signal region, events in the 3P1F and
2P2F control regions are reweighted by lepton misidentification probabilities.
Each event $i$ in the 3P1F region is weighted by a factor $f^{i}_{4}/(1-f^{i}_{4})$,
where $f^{i}_{4}$ corresponds to the lepton misidentification rate of the failed lepton in the event.
Physics processes in the 2P2F control region can contribute to the 3P1F region
and are estimated by reweighting 2P2F events with $f^{i}_{3}/(1-f^{i}_{3})+f^{i}_{4}/(1-f^{i}_{4})$,
where $f^{i}_{3}$ and $f^{i}_{4}$ correspond to the lepton misidentification rates of
the two failed leptons in the event. A minor contribution from $\PZ\PZ$ events to the 3P1F control region is estimated from simulation and subtracted.
The expected yield for the signal region can then be estimated as:
\begin{linenomath}
\ifthenelse{\boolean{cms@external}}
{
\begin{multline}
N^{\text{reducible}}_{\mathrm{SR}} = \left( 1-\frac{N^{\PZ\PZ}_{3P1F}}{N_{3P1F}} \right) \\
\times \sum_{i}^{N_{3P1F}} \frac{f^{i}_{4}}{1-f^{i}_{4}} -
\sum_{i}^{N_{2P2F}} \frac{f^{i}_{3}}{1-f^{i}_{3}} \frac{f^{i}_{4}}{1-f^{i}_{4}}
\end{multline}
}
{
\begin{equation}
N^{\text{reducible}}_{\mathrm{SR}} = \left( 1-\frac{N^{\PZ\PZ}_{3P1F}}{N_{3P1F}} \right) \sum_{i}^{N_{3P1F}} \frac{f^{i}_{4}}{1-f^{i}_{4}} - \sum_{i}^{N_{2P2F}} \frac{f^{i}_{3}}{1-f^{i}_{3}} \frac{f^{i}_{4}}{1-f^{i}_{4}}
\end{equation}
}
\end{linenomath}
where the sums run over all events in the 3P1F and 2P2F regions, respectively.
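A compact sketch of this extrapolation (assuming each control-region event carries the misidentification rates of its failing leptons as hypothetical attributes \texttt{f3} and \texttt{f4}) is:
\begin{verbatim}
def reducible_yield(events_3p1f, events_2p2f, zz_fraction_3p1f):
    """Fake-rate extrapolation of the reducible background; the last
    argument is the simulated ZZ fraction of the 3P1F region."""
    sum_3p1f = sum(e.f4 / (1.0 - e.f4) for e in events_3p1f)
    sum_2p2f = sum((e.f3 / (1.0 - e.f3)) * (e.f4 / (1.0 - e.f4))
                   for e in events_2p2f)
    return (1.0 - zz_fraction_3p1f) * sum_3p1f - sum_2p2f
\end{verbatim}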
Furthermore, dedicated validation regions, which include the $m_{4\ell}$ regions adjacent to the signal region ($70 < m_{4\ell} < 118\GeV$, $130 < m_{4\ell} < 200\GeV$), are defined to inspect the level of agreement between data and predictions.
\section{Systematic uncertainties}
\label{sec:sys}
{\tolerance=800
Experimental sources of the systematic uncertainties applicable to all final states include the integrated luminosity uncertainty and the lepton identification and reconstruction efficiency
uncertainty. The integrated luminosities of the 2016, 2017, and 2018 data-taking periods are individually known with uncertainties in the 1.2--2.5\%
range~\cite{CMS-PAS-LUM-17-001,CMS-PAS-LUM-17-004,CMS-PAS-LUM-18-002}, while the total Run~2 (2016--2018) integrated luminosity has an uncertainty of 1.6\%~\cite{CMS:2021xjt}, the improvement in precision
reflecting the (uncorrelated) time evolution of some systematic effects.
Lepton efficiency uncertainties are estimated in bins of lepton \pt and $\eta$ using the tag-and-probe
method, as described in Section~\ref{sec:detector}. These per-lepton uncertainties lead to variations of 2.5--16.1\% in the event yields, depending on the final-state lepton category.
In addition, the systematic uncertainties in the lepton energy scale are determined by fitting the $\PZ\to\ell\ell$ mass distribution in bins of
lepton \pt and $\eta$ with a Breit--Wigner parameterization convolved with a double-sided Crystal Ball function~\cite{Oreglia:1980cs}.
Systematic uncertainties in the estimation of the reducible background are derived from the level of agreement between data and predictions in the validation regions in each lepton
category (23--48\% depending on data taking period), arising from different background compositions between signal and control regions (30--38\% depending on lepton category),
and from misidentification rate uncertainties (35--100\% depending on lepton category).
\par}
Theoretical uncertainties that affect both the signal and background estimation include uncertainties in the renormalization and factorization scales and the choice of the PDF set.
The uncertainty from the renormalization and factorization scales is determined by varying these scales between 0.5 and 2 times their nominal value while keeping their ratio between 0.5 and 2.
The uncertainty from the PDF set is determined by taking the root-mean-square of the variation when using different replicas
of the default NNPDF set~\cite{Butterworth:2015oua}. An additional uncertainty of 10\% in the K factor used for the $\Pg\Pg\to4\ell$ prediction is included~\cite{Sirunyan:2017exp}.
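As an illustration, a common implementation of such scale variations is the seven-point envelope; the sketch below assumes a hypothetical function \texttt{prediction(kr, kf)} returning the yield for given renormalization and factorization scale factors, and that the quoted uncertainty is taken as the envelope of the varied predictions:
\begin{verbatim}
def scale_uncertainty(prediction):
    """Seven-point variation: each scale takes factors 0.5, 1, 2,
    dropping the two combinations whose ratio leaves [0.5, 2]."""
    factors = (0.5, 1.0, 2.0)
    nominal = prediction(1.0, 1.0)
    varied = [prediction(kr, kf)
              for kr in factors for kf in factors
              if 0.5 <= kr / kf <= 2.0 and (kr, kf) != (1.0, 1.0)]
    return min(varied) - nominal, max(varied) - nominal
\end{verbatim}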
To estimate the effect of the interference between the signal and background processes, three types of samples are generated using the \MGvATNLO 2.4.2~\cite{MADGRAPH5,Alwall:2007fs,Frederix:2012ps}
generator: an inclusive sample (${\PH \to \PZ \PZ^{*} \to 4\ell}$, ${\PH \to \PZ \PX / \PX \PX \to 4\ell}$), a signal-only sample (${\PH \to \PZ \PX / \PX \PX \to 4\ell}$), and a background-only sample (${\PH \to \PZ \PZ^{*} \to 4\ell}$). The inclusive sample contains background, signal, and interference contributions. The effect of the interference on the normalization of the signal is estimated by taking the difference between the inclusive sample cross section and the sum of the cross sections of the signal and background samples. This difference amounts to 1--2\% after the final event selection.
Theoretical values of branching fractions $\mathcal{B}(\ensuremath{\PZ_{\mathrm{D}}}\xspace \to \Pe\Pe\ \text{or}\ \PGm\PGm)$ are calculated in Ref.~\cite{Curtin:2014cca}.
The calculations are based on experimental measurements of the ratio of the hadronic to the muon cross section in electron-positron collisions, $R = \sigma_{\text{had}}/\sigma_{\PGm\PGm}$, up to $\mass{\ensuremath{\PZ_{\mathrm{D}}}\xspace} = 12\GeV$ and a next-to-leading
order theoretical calculation for $\mass{\ensuremath{\PZ_{\mathrm{D}}}\xspace} > 12\GeV$. To account for uncertainties in these theoretical estimates, a conservative
20\%\,(10\%) uncertainty is assigned to them for $\mass{\ensuremath{\PZ_{\mathrm{D}}}\xspace} < 12\GeV$ ($\mass{\ensuremath{\PZ_{\mathrm{D}}}\xspace} > 12\GeV$)~\cite{Curtin:2014cca}.
Differences in the kinematic properties between the HAHM and ALP models have been inspected. For the determination of model-independent exclusion limits, differences in acceptances are included as systematic uncertainties, ranging from 10\% ($\mass{\PX} \sim 4\GeV$) to 30\% ($\mass{\PX} \sim 35\GeV$ for \ensuremath{\PZ \PX}\xspace, $\mass{\PX} \sim 60\GeV$ for \ensuremath{\PX \PX}\xspace), while for the determination of the ALP exclusion limits they are used to correct the signal yields.
In the combination of the three data taking periods, the theoretical uncertainties and experimental ones related to
leptons are correlated across all data taking periods, while all others from experimental sources are taken as uncorrelated. The sensitivity of this analysis is dominated by data statistical
uncertainty rather than systematic uncertainties.
\section{Results and interpretation}
\label{sec:result}
Dilepton mass distributions for the \ensuremath{\PZ \PX}\xspace and \ensuremath{\PX \PX}\xspace selections are shown in
Figs.~\ref{fig:ZZd_mZ2} and~\ref{fig:ZdZd_mZ12}, respectively.
The dilepton mass variable for the \ensuremath{\PX \PX}\xspace selection shown in Fig.~\ref{fig:ZdZd_mZ12}
is $\mass{Z12} = (\mass{\ensuremath{\PZ_1}\xspace}+\mass{\ensuremath{\PZ_2}\xspace})/2$, which should peak at $\mass{\PX}$ in the case of a signal $\PH \to \PX \PX$.
In all cases, the observed distributions agree well with standard model expectations within the assigned uncertainties.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.49\textwidth]{Figure_002-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_002-b.pdf}
\caption{Event yields against $m_{\PZ_2}$ with the \ensuremath{\PZ \PX}\xspace selection for the muon and electron channels.
Numbers in the legend show the total event yields with the \ensuremath{\PZ \PX}\xspace selection corresponding to
data, and the expected yields for each background and signal process, along with the corresponding
statistical uncertainty coming from the amount of simulated data. \label{fig:ZZd_mZ2}}
\end{figure}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\cmsFigWidth]{Figure_003-a.pdf}
\includegraphics[width=\cmsFigWidth]{Figure_003-b.pdf}
\includegraphics[width=\cmsFigWidth]{Figure_003-c.pdf}
\caption{Event yields against $\mass{Z12} = (\mass{\ensuremath{\PZ_1}\xspace}+\mass{\ensuremath{\PZ_2}\xspace})/2$ with the \ensuremath{\PX \PX}\xspace selection for the
\ensuremath{4\PGm}\xspace, \ensuremath{2\Pe2\PGm}\xspace and \ensuremath{4\Pe}\xspace final states. Numbers in the legend show the total event yields
with the \ensuremath{\PX \PX}\xspace selection corresponding to data, and the expected yields for each background and signal process,
along with the corresponding statistical uncertainty coming from the amount of simulated data.\label{fig:ZdZd_mZ12}}
\end{figure*}
These results are further interpreted as upper limits on model-independent branching fractions and model parameters for
the dark photon and ALP models. For interpretations of the results of the \ensuremath{\PZ \PX}\xspace selection, 351 mass hypotheses are considered. Each mass
hypothesis $m_{i}$ is defined with an incremental step of 0.5\%, as $m_{i} = 4.20 \times 1.005^{i}$,
where $i = 0,1,2,\ldots,424$, excluding the mass points around the $\PQb\PAQb$ bound states of the $\Upsilon$ in the range $8.0 < m_{i} < 11.5\GeV$ ($i = 130,131,\ldots,201$).
The incremental step is chosen so as not to miss any potential signal contribution due to detector resolution in \mass{\ensuremath{\PZ_2}\xspace}. For each
mass hypothesis, the counting experiments are performed on the \mass{\ensuremath{\PZ_2}\xspace} distribution, with the bin centered at each mass hypothesis.
Because of the finite mass resolution of \mass{\ensuremath{\PZ_2}\xspace}, the bin width is chosen such that most of the signal contribution is included
in the bin. The bin width is defined as $0.04\,(0.10) \times m_{i}$ for the \ensuremath{4\PGm}\xspace and \ensuremath{2\Pe2\PGm}\xspace (\ensuremath{4\Pe}\xspace and \ensuremath{2\PGm2\Pe}\xspace) categories.
This width is chosen as two times the \mass{\ensuremath{\PZ_2}\xspace} resolution and includes $\approx$95\% of signal events.
The normalization of the Higgs background is allowed to float freely in the likelihood fit. For each mass hypothesis,
events outside the mass window are included as a sideband to constrain the normalization parameter.
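For illustration, the mass-hypothesis grid and the mass-window widths described above can be generated as follows (a sketch; the category labels are hypothetical shorthand):
\begin{verbatim}
def zx_mass_hypotheses(i_max=424):
    """Grid m_i = 4.20 * 1.005**i, skipping the Upsilon window."""
    return [4.20 * 1.005 ** i for i in range(i_max + 1)
            if not (8.0 < 4.20 * 1.005 ** i < 11.5)]

def mass_window_width(m, category):
    """4% of m for the 4mu and 2mu2e categories, 10% otherwise."""
    return 0.04 * m if category in ("4mu", "2mu2e") else 0.10 * m
\end{verbatim}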
No significant deviation with respect to the SM prediction is observed.
For interpretations of the results of the \ensuremath{\PX \PX}\xspace selection,
462 mass hypotheses are considered instead. In contrast to the \ensuremath{\PZ \PX}\xspace interpretations, the counting experiments are performed by constructing
a rectangular region, centered at each mass hypothesis, in the (\mass{\ensuremath{\PZ_1}\xspace},\mass{\ensuremath{\PZ_2}\xspace}) plane. The rectangular regions are effectively
triangular as \mass{\ensuremath{\PZ_1}\xspace} is defined as the larger invariant mass. The bin widths are defined in a similar manner as
$0.04 m_{i}$ ($0.10 m_{i}$) for \mass{\ensuremath{\PZ_1}\xspace} or \mass{\ensuremath{\PZ_2}\xspace} formed by muon (electron) pairs.
The likelihood model for each mass hypothesis is formulated as
\begin{equation}
\mathcal{L}_{m}
= \mathcal{L}_{m,\mathrm{SR}} \mathcal{L}_{m,\text{SB}}
\end{equation}
\begin{linenomath}
\ifthenelse{\boolean{cms@external}}
{
\begin{multline}
\mathcal{L}_{m,\mathrm{SR}} = \prod_{\ell} \Pois ( n_{m,\ell} | \mu_{\text{Higgs}} n_{\text{Higgs},m,\ell} \\
+\sum_{b} n_{b,m,\ell} \rho_{b,m,\ell} + \mu n_{s,m,\ell} \rho_{s,m,\ell})
\end{multline}
}
{
\begin{equation}
\mathcal{L}_{m,\mathrm{SR}} = \prod_{\ell} \Pois ( n_{m,\ell} | \mu_{\text{Higgs}} n_{\text{Higgs},m,\ell} + \sum_{b} n_{b,m,\ell} \rho_{b,m,\ell} + \mu n_{s,m,\ell} \rho_{s,m,\ell}),
\end{equation}
}
\end{linenomath}
\begin{equation}
\label{eq:sb}
\mathcal{L}_{m,\text{SB}}
= \prod_{\ell} \Pois ( n_{\ell} | \mu_{\text{Higgs}} n_{\text{Higgs},\ell} + \sum_{b} n_{b,\ell} \rho_{b,\ell} )
\end{equation}
where the function $\Pois(n|x)$ is the Poisson probability to observe $n$ events, when the expectation is $x$.
The symbol $m$ represents a particular mass hypothesis. The likelihood term $\mathcal{L}_{m,\mathrm{SR}}$ ($\mathcal{L}_{m,\text{SB}}$)
corresponds to the event yields within (outside) the mass window.
The symbol $\mu$ is the signal strength parameter, $\mu_{\text{Higgs}}$ represents the free floating normalizing parameter on the SM Higgs boson process,
$\ell$ represents each lepton category,
$b$ represents each background process, $s$ represents a particular signal process and $n_{i,m,\ell}$ represents the yield in a mass window associated with the mass hypothesis $m$, from a source $i$ and the lepton category $\ell$.
In Equation~\ref{eq:sb}, the symbols $n_{\text{Higgs},\ell}$ and $n_{b,\ell}$ represent the yields of the SM Higgs boson and other backgrounds $b$ outside the mass window for the lepton category $\ell$.
Systematic uncertainties are included and profiled as nuisance parameters $\rho$~\cite{ATL-PHYS-PUB-2011-011}.
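A minimal sketch of the signal-region likelihood term (Python; dictionaries keyed by lepton category, with the nuisance-parameter factors $\rho$ and the sideband term omitted for brevity) might read:
\begin{verbatim}
import math

def log_poisson(n, x):
    """ln Pois(n | x)."""
    return n * math.log(x) - x - math.lgamma(n + 1.0)

def log_likelihood_sr(mu, mu_higgs, obs, higgs, bkg, sig):
    """obs[cat]: observed counts; higgs/bkg/sig[cat]: expected
    in-window yields per lepton category (hypothetical containers)."""
    total = 0.0
    for cat, n in obs.items():
        expected = mu_higgs * higgs[cat] + bkg[cat] + mu * sig[cat]
        total += log_poisson(n, expected)
    return total
\end{verbatim}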
For each interpretation, $95\%$ exclusion limits are obtained with an asymptotic formulation of the modified frequentist \CLs
criterion as described in Refs.~\cite{Junk:1999kv,Read:2002hq,ATL-PHYS-PUB-2011-011,Cowan:2010js} for the \ensuremath{\PZ \PX}\xspace selection, and with the
full \CLs approach for the \ensuremath{\PX \PX}\xspace selection.
\subsection{Model-independent limits}
Upper limits at 95\% confidence level (CL) are derived on model-independent
branching fractions with the \ensuremath{\PZ \PX}\xspace and \ensuremath{\PX \PX}\xspace selections assuming three
decay channels: a flavor symmetric decay of \PX to a muon or
an electron pair, exclusive \PX decays to a muon pair, and exclusive
\PX decays to an electron
pair. Acceptance effects arising from different signal models are
included as systematic uncertainties in the signal yields
after event selection. Little model dependence is expected as the
event selection is defined without using angular information between the leptons.
Figures~\ref{fig:limit_Br_ZX} and \ref{fig:limit_Br_XX} show the
exclusion limits on the model-independent branching fractions with the \ensuremath{\PZ \PX}\xspace
and \ensuremath{\PX \PX}\xspace selections, respectively. The weaker observed limit in the \ensuremath{\PX \PX}\xspace selection at ${\mass{\PX} \approx 18\GeV}$ is due to
one observed data event and does not represent a significant statistical
deviation from the background hypothesis. Kinematic differences between the dark photon and ALP models are included as systematic uncertainties, as detailed in Section~\ref{sec:sys}.
\begin{figure}[htb!p]
\centering
\includegraphics[width=\cmsFigWidth]{Figure_004-a.pdf}
\includegraphics[width=\cmsFigWidth]{Figure_004-b.pdf}
\includegraphics[width=\cmsFigWidth]{Figure_004-c.pdf}
\caption{
Expected and observed 95\% CL limits on
$\mathcal{B}(\PH \to \PZ \PX) \mathcal{B}(\PX \to \PGm\PGm)$ assuming \PX decays to dimuons only,
$\mathcal{B}(\PH \to \PZ \PX) \mathcal{B}(\PX \to \Pe\Pe)$ assuming \PX decays to dielectrons only,
and $\mathcal{B}(\PH \to \PZ \PX) \mathcal{B}(\PX \to \Pe\Pe\ \text{or}\ \PGm\PGm)$ assuming a flavor symmetric decay of \PX to dimuons and dielectrons.
The dashed black curve is the expected upper limit, with one and two standard-deviation bands shown in green and yellow, respectively.
The solid black curve is the observed upper limit.
The red curve represents the theoretical cross section for the signal process $\PH \to \PZ \PX \to 4\ell$.
The discontinuity at 12\GeV in the uncertainty is due to
the switch from experimental to theoretical uncertainty estimates of $\mathcal{B}(\ensuremath{\PZ_{\mathrm{D}}}\xspace \to \Pe\Pe\ \text{or}\ \PGm\PGm)$, as described in Ref.~\cite{Curtin:2014cca}.
The symbol $\ensuremath{\varepsilon}\xspace$ is the kinetic-mixing parameter.
The grey band corresponds to the excluded region around the $\PQb\PAQb$ bound states of $\Upsilon$.
\label{fig:limit_Br_ZX}
}
\end{figure}
\begin{figure}[htb!p]
\centering
\includegraphics[width=\cmsFigWidth]{Figure_005-a.pdf}
\includegraphics[width=\cmsFigWidth]{Figure_005-b.pdf}
\includegraphics[width=\cmsFigWidth]{Figure_005-c.pdf}
\caption{
Expected and observed 95\% CL limits on
$\mathcal{B}(\PH \to \PX \PX) \mathcal{B}(\PX \to \PGm\PGm)^2$ assuming \PX decays to dimuons only,
$\mathcal{B}(\PH \to \PX \PX) \mathcal{B}(\PX \to \Pe\Pe)^2$ assuming \PX decays to dielectrons only,
and $\mathcal{B}(\PH \to \PX \PX) \mathcal{B}(\PX \to \Pe\Pe\ \text{or}\ \PGm\PGm)^2$ assuming a flavor symmetric decay of \PX to dimuons and dielectrons.
The dashed black curve is the expected upper limit, with one and two standard-deviation bands shown in green and yellow, respectively.
The solid black curve is the observed upper limit.
The red curve represents the theoretical cross section for the signal process $\PH \to \PX \PX \to 4\ell$.
The discontinuity at 12\GeV in the uncertainty is due to
the switch from experimental to theoretical uncertainty estimates of $\mathcal{B}(\ensuremath{\PZ_{\mathrm{D}}}\xspace \to \Pe\Pe\ \text{or}\ \PGm\PGm)$, as described in Ref.~\cite{Curtin:2014cca}.
The symbol $\kappa$ is the Higgs-mixing parameter.
The grey band corresponds to the excluded region around the $\PQb\PAQb$ bound states of $\Upsilon$.
\label{fig:limit_Br_XX}
}
\end{figure}
\subsection{Limits on dark photon model parameters}
Upper limits at 95\% CL are obtained on the Higgs-mixing parameter $\kappa$ and ${\mathcal{B}(\PH \to \ensuremath{\PZ_{\mathrm{D}}}\xspace \ensuremath{\PZ_{\mathrm{D}}}\xspace)}$ with the \ensuremath{\PX \PX}\xspace selection,
as shown in Fig.~\ref{fig:limit_kappa}, assuming $\kappa \gg \ensuremath{\varepsilon}\xspace$.
The LHC provides unique sensitivity to the parameter $\kappa$ due to the presence
of the Higgs boson. In addition, this analysis provides some sensitivity to $\varepsilon$, but the upper limits are almost an
order of magnitude weaker than those from the search in Drell--Yan events~\cite{Sirunyan:2019wqq} and from the LHCb Collaboration~\cite{Aaij:2017rft}, and hence are not reported in this paper.
\begin{figure}[htb!p]
\centering
\includegraphics[width=0.48\textwidth]{Figure_006.pdf}
\caption{
95\% CL limits on the Higgs-mixing parameter $\kappa$, based on the \ensuremath{\PX \PX}\xspace selection, as a function of \mass{\ensuremath{\PZ_{\mathrm{D}}}\xspace}. The dashed black curve is the
expected upper limit, with one and two standard-deviation bands shown in green and yellow, respectively.
The solid black curve is the observed upper limit.
The grey band corresponds to the excluded region around the $\PQb\PAQb$ bound states of $\Upsilon$.
\label{fig:limit_kappa}
}
\end{figure}
\subsection{Limits on the ALP model}
Upper limits at 95\% CL are calculated on the Wilson coefficients \ensuremath{C_{\PZ\PH}/\Lambda}\xspace and \ensuremath{C_{\Pa\PH}/\Lambda^2}\xspace,
as shown in Fig.~\ref{fig:limit_ALP}, where \ensuremath{C_{\PZ\PH}}\xspace is the effective coupling parameter of the Higgs boson, \PZ boson,
and the ALP, \ensuremath{C_{\Pa\PH}}\xspace is the effective coupling parameter of the Higgs boson
and the ALP, and $\Lambda$ is the new physics scale. In both interpretations, the ALP is assumed to decay promptly
with $\mathcal{B}(\Pa\to \Pe\Pe\ \text{or}\ \PGm\PGm) = 1$, in equal fractions to muons and electrons. The last six
mass hypotheses are omitted in the calculation of upper limits on \ensuremath{C_{\PZ\PH}/\Lambda}\xspace to match the \mass{\Pa} range adopted in Ref.~\cite{Bauer:2017ris}.
Kinematic differences between the dark photon and ALP models are included as corrections on signal region yields, as detailed in Section~\ref{sec:sys}.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.48\textwidth]{Figure_007-a.pdf}
\includegraphics[width=0.48\textwidth]{Figure_007-b.pdf}
\caption{
95\% CL limits on \ensuremath{C_{\PZ\PH}/\Lambda}\xspace and \ensuremath{C_{\Pa\PH}/\Lambda^2}\xspace as a function of $m_{\Pa}$. The dashed black curves are
the expected upper limits, with one and two standard-deviation bands shown in green and yellow, respectively.
The solid black curves represent the observed upper limits.
The grey band corresponds to the excluded region around the $\PQb\PAQb$ bound states of $\Upsilon$.
\label{fig:limit_ALP}
}
\end{figure}
\section{Summary}
\label{sec:summary}
A search for dilepton resonances in Higgs boson decays to four-lepton final states
has been presented. The search considers the two intermediate decay topologies $\PH \to \PZ \PX$ and $\PH \to \PX \PX$.
No significant deviations from the standard model expectations are observed.
The search imposes experimental constraints on products of model-independent branching fractions of
$\mathcal{B}(\PH \to \PZ \PX)$, $\mathcal{B}(\PH \to \PX \PX)$ and $\mathcal{B}(\PX \to \Pe\Pe\ \text{or}\ \PGm\PGm)$, assuming flavor-symmetric decays
of \PX to dimuons and dielectrons, exclusive decays of \PX to dimuons, and exclusive decays of \PX to dielectrons, for $\mass{\PX} > 4\GeV$. In addition, two well-motivated theoretical frameworks beyond the standard model are considered.
Due to the presence of Higgs boson production in LHC proton-proton collisions,
the search provides unique constraints on the Higgs-mixing parameter, $\kappa < 4 \times 10^{-4}$ at 95\% CL, in a dark photon model with the \ensuremath{\PX \PX}\xspace
selection, in Higgs-mixing-dominated scenarios, while searches for \ensuremath{\PZ_{\mathrm{D}}}\xspace in Drell--Yan processes~\cite{Sirunyan:2019wqq,Aaij:2017rft} provide better exclusion limits
on $\varepsilon$ in kinetic-mixing-dominated scenarios. For the axion-like particle model, upper limits at 95\% CL are placed on two relevant
Wilson coefficients \ensuremath{C_{\PZ\PH}/\Lambda}\xspace and \ensuremath{C_{\Pa\PH}/\Lambda^2}\xspace. This is the first direct limit on decays of the observed Higgs boson to axion-like particles decaying to leptons.
\begin{acknowledgments}
\label{ack}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid and other centres for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC, the CMS detector, and the supporting computing infrastructure provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES and BNSF (Bulgaria); CERN; CAS, MoST, and NSFC (China); MINCIENCIAS (Colombia); MSES and CSF (Croatia); RIF (Cyprus); SENESCYT (Ecuador); MoER, ERC PUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRI (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
\hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie programme and the European Research Council and Horizon 2020 Grant, contract Nos.\ 675440, 724704, 752730, 758316, 765710, 824093, 884104, and COST Action CA16108 (European Union); the Leventis Foundation; the Alfred P.\ Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science -- EOS" -- be.h project n.\ 30820817; the Beijing Municipal Science \& Technology Commission, No. Z191100007219010; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Deutsche Forschungsgemeinschaft (DFG), under Germany's Excellence Strategy -- EXC 2121 ``Quantum Universe" -- 390833306, and under project number 400140256 - GRK2497; the Lend\"ulet (``Momentum") Programme and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the Council of Science and Industrial Research, India; the Latvian Council of Science; the Ministry of Science and Higher Education and the National Science Center, contracts Opus 2014/15/B/ST2/03998 and 2015/19/B/ST2/02861 (Poland); the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, grant CEECIND/01334/2018 (Portugal); the National Priorities Research Program by Qatar National Research Fund; the Ministry of Science and Higher Education, projects no. 14.W03.31.0026 and no. FSWW-2020-0008, and the Russian Foundation for Basic Research, project No.19-42-703014 (Russia); the Programa Estatal de Fomento de la Investigaci{\'o}n Cient{\'i}fica y T{\'e}cnica de Excelencia Mar\'{\i}a de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Stavros Niarchos Foundation (Greece); the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Kavli Foundation; the Nvidia Corporation; the SuperMicro Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA).
\end{acknowledgments}
\label{Introduction}
One of the main goals of the study of galaxy formation and evolution is to understand the star formation history of the Universe. A key advance in this area was the discovery of the cosmic far-infrared extragalactic background light (EBL) by the \emph{COBE} satellite \citep{Puget96,Fixsen98} with an energy density similar to that of the UV/optical EBL, implying that a significant amount of star formation over the history of the Universe has been obscured and its light reprocessed by dust. Following this, the population of galaxies now generally referred to as sub-millimetre galaxies (SMGs) was first revealed using the Sub-millimetre Common User Bolometer Array (SCUBA) on the James Clerk Maxwell Telescope \citep[JCMT, e.g.][]{Smail97,Hughes98}. SMGs are relatively bright in sub-millimetre bands (the first surveys focussed on galaxies with $S_{850\mu\rm m}>5$ mJy) and some studies have now shown that the bulk of the EBL at 850$\mu \rm m$ can be resolved by the $S_{850\mu\rm m}>0.1$ mJy galaxy population \citep[e.g.][]{Chen13}. SMGs are generally believed to be massive, dust enshrouded galaxies with extreme infrared luminosities ($L_{\rm IR}\gtrsim 10^{12} \rm{L}_{\odot}$) implying prodigious star formation rates (SFRs, $10^{2}$-$10^{3}$ $\rm{M}_{\odot}$yr$^{-1}$), though this is heavily dependent on the assumed stellar initial mass function \citep[IMF, e.g.][]{Blain02,Casey14}.
One difficulty for sub-millimetre observations is the coarse angular resolution ($\sim 20''$ FWHM) of ground-based single-dish telescopes used for many blank-field surveys. Recently, follow-up surveys performed with greater angular resolution ($\sim 1.5''$ FWHM) interferometers (e.g. Atacama Large Millimetre Array - ALMA, Plateau de Bure Interferometer - PdBI, Sub-Millimetre Array - SMA) targeted at single-dish detected sources have indicated that the resolution of single-dish telescopes had in some cases blended the sub-mm emission of multiple galaxies into one single-dish source \citep[e.g.][]{Wang11,Smolcic12,Hodge13}. \cite{Karim13} showed the effect this blending has on the observed sub-mm number counts, with the single-dish counts derived from the Large APEX (Atacama Pathfinder EXperiment) BOlometer CAmera (LABOCA) Extended \emph{Chandra} Deep Field-South (ECDFS) Sub-millimetre Survey \citep[LESS,][]{Weiss09} exhibiting a significant enhancement at the bright end relative to counts derived from the ALMA follow-up (ALESS).
A related observational difficulty concerning SMGs is determining robust multi-wavelength counterparts for single-dish sources. This is in part due to the single-dish resolution spreading the sub-mm emission over a large solid angle, making it difficult to pinpoint the precise origin to better than $\pm 2''$. This difficulty is compounded by the faintness of SMGs at other wavelengths. Sub-mm bands are subject to a negative $K$-correction, which results in the sub-mm flux of an SMG being roughly constant over a large range of redshifts $z\sim 1{-}10$ \citep[e.g.][]{Blain02}. This negative $K$-correction is caused by the spectral energy distribution (SED) of a galaxy being a decreasing power law with wavelength where it is sampled by observer-frame sub-mm bands. As the SED is shifted to higher redshifts it is sampled at a shorter rest-frame wavelength, where it is intrinsically brighter. This largely cancels out the effect of dimming due to the increasing luminosity distance. When observed at other wavelengths, e.g. radio, galaxies are subject to a positive $K$-correction and so become fainter with increasing redshift. This is problematic as radio emission has often been used to aid in measuring the position of the sub-mm source, as the star formation that powers the dust emission in the sub-mm also produces radio emission from synchrotron electrons produced by the associated supernova explosions. This radio selection technique thus biases the counterpart identification towards lower redshift \citep[e.g.][]{Chapman05}. Typically, radio identification yields robust counterparts for $\sim 60$\% of an SMG sample \citep[e.g.][]{Biggs11}. Sub-mm interferometers have greatly improved the situation, providing positional accuracies of up to $\sim 0.2''$, free from any biases introduced by selection criteria at wavelengths other than the sub-mm. Once multi-wavelength counterparts have been identified, photometric redshifts are derived through fitting an SED to the available photometry, allowing redshift to vary as a free parameter \citep[e.g.][]{Smolcic12}. Whilst observationally inexpensive and thus desirable for large SMG surveys, the errors from photometric redshifts are often significant, and samples are again biased by requiring detection in photometric bands.
Compounding these difficulties is the fact that, with the exception of the South Pole Telescope (SPT) survey presented in \cite{Vieira10}\footnote{These authors surveyed 87~deg$^2$ at 1.4 (2) mm to a depth of 11~(4.4)~mJy with a 63$''$~(69$''$)~FWHM beam. Due to the flux limits and wavelength of this survey, the millimetre detections are mostly gravitationally lensed sources \citep{Vieira13}.}, ground-based sub-mm surveys have to date been pencil beams ($<0.7$~deg$^2$), leaving the interpretation of the observed results subject to field-to-field variations. In particular, \cite{Michalowksi12} found evidence that the photometric redshift distributions of radio-identified counterparts of $1100$ and $850~\mu$m selected SMGs in the two non-contiguous SCUBA Half-Degree Extragalactic Survey (SHADES) fields are inconsistent with being drawn from the same parent distribution. This suggests that the SMGs are tracing different large-scale structures in the two fields. Larger surveys have been undertaken at $250$, $350$ and $500$ $\mu$m from space using the Spectral and Photometric Imaging REceiver \citep[SPIRE,][]{Griffin10} instrument on board the \emph{Herschel} Space Observatory \citep{Pilbratt10}. These are also affected by coarse angular resolution; the SPIRE beam has a FWHM of $\sim18''$, $25''$ and $37''$ at 250, 350 and 500 $\mu$m, respectively.
Historically, hierarchical galaxy formation models have struggled to reproduce the high number density of the SMG population at high redshifts \citep[e.g.][]{Blain99,DevriendtGuiderdoni00,Granato00}. However \cite{Baugh05} presented a version of the Durham semi-analytic model (SAM), {\sc{galform}}\xspace, which could successfully reproduce the observed number counts and redshift distribution of SMGs, along with the present day luminosity function. In order to do so, it was found necessary to significantly increase the importance of high-redshift starbursts in the model relative to previous versions of {\sc{galform}}\xspace; this was primarily achieved through introducing a top-heavy stellar initial mass function (IMF) for galaxies undergoing a (merger induced) starburst. Recently, \cite{Hayward13a} introduced a hybrid model which combined the results from idealized hydrodynamical simulations of isolated discs/mergers with various empirical cosmological relations and showed reasonable agreement with the $850$ $\mu\mathrm{m}$ number counts and redshift distribution utilising a solar neighbourhood IMF. However, this model is limited in terms of the range of predictions it can make due to its semi-empirical nature. A similar model was presented in \cite{Hayward13b} which included a treatment of blending by single dish telescopes, showing that the sub-mm emission from both physically associated and unassociated SMGs contribute significantly to the single-dish number counts. This model underpredicts the observed single-dish number counts at $S_{850\mu\mathrm{m}}>5$ mJy, possibly due to the exclusion of starburst galaxies. The Hayward et al. models build on earlier work presented in \cite{Hayward11} and \cite{Hayward12} which were novel in discussing theoretically the effects of the single-dish beam on the observed SMG population.
Here we investigate the effect of both the angular resolution of single-dish telescopes and field-to-field variations on observations of the SMG population. We utilise $50$ randomly orientated lightcones calculated from an updated version of {\sc{galform}}\xspace (Lacey et al. 2014, in preparation, hereafter L14) to create mock sub-mm surveys taking into account the effects of the single-dish beam. This paper is structured as follows: in Section \ref{sec:Model} we introduce the theoretical model we use for this analysis and our method for creating our $850$ $\mu$m mock sub-mm surveys. In Section \ref{sec:results} we present our main results concerning the effects of the single-dish beam and field-to-field variance. In Section \ref{sec:ALESS} we make a detailed comparison of the predictions of our model with the ALESS survey and in Section \ref{sec:multi_lam} we present our predicted single-dish number counts at $450$ and $1100$ $\mu$m. We summarise our findings and conclude in Section \ref{sec:Summary}.
\section{The Theoretical Model}
\label{sec:Model}
In this section we present the model used in this work. We couple a state-of-the-art semi-analytic galaxy formation model run in a Millennium-class \citep{Springel05} $N$-body simulation using the WMAP7 cosmology \citep{Komatsu11}\footnote{$\Omega_{0}=0.272$, $\Lambda_{0}=0.728$, $h=0.704$, $\Omega_{\rm b}=0.0455$, $\sigma_{8}=0.81$, $n_{s}=0.967$. This is the simulation referred to as MS-W7 in \cite{Guo13} and \cite{vgp14}; and as MW7 in \cite{Jenkins13}. It is available on the Millennium database at: \url{http://www.mpa-garching.mpg.de/millennium}.}, with a simple model for the re-processing of stellar radiation by dust (in which the dust temperature is calculated self-consistently). A sophisticated lightcone treatment is implemented for creating mock catalogues of the simulated galaxies \citep{Merson13}. We also describe our method for creating sub-mm maps from these mock catalogues, which include the effects of the single-dish beam size and instrumental noise, from which we extract sub-mm sources in a way that is consistent with what is done in observational studies.
\subsection{GALFORM}
The Durham SAM, {\sc{galform}}\xspace, was first introduced in \cite{Cole00}. Galaxy formation is modelled \emph{ab initio}, beginning with a specified cosmology and a linear power spectrum of density fluctuations and ending with predicted galaxy properties at a range of redshifts. Galaxies are assumed to form within dark matter halos, with their subsequent evolution controlled by the merging history of the halo. These halo merger histories can be calculated using a Monte Carlo technique following extended Press-Schechter formalism \citep{PCH08}, or (as is the case in this work) extracted directly from $N$-body dark matter only simulations \citep[e.g.][]{Helly03,Jiang14}. Baryonic physics is modelled using a set of continuity equations that track the exchange of baryons between stellar, cold disc gas and hot halo gas components. The main physical processes that are modelled include: (i) hierarchical assembly of dark matter halos; (ii) shock heating and virialization of gas in halo potential wells; (iii) radiative cooling and collapse of gas onto galactic discs; (iv) star formation from cold gas; (v) heating and expulsion of gas through feedback processes; (vi) chemical evolution of gas and stars; (vii) mergers of galaxies within halos due to dynamical friction; (viii) evolution of stellar populations using stellar population synthesis (SPS) models; and (ix) the extinction and reprocessing of stellar radiation due to dust. As with other SAMs, the simplified nature of the equations that are used to characterise these complex and in some cases poorly understood physical processes introduce a number of parameters into the model. These parameters are constrained using a combination of simulation results and observational data, reducing enormously the available parameter space. In particular, the strategy of \cite{Cole00} is that for a galaxy formation model to be deemed successful it must reproduce the present day ($z=0$) luminosity function in optical and near infra-red bands. For a more detailed overview of SAMs see the reviews by \cite{Baugh06} and \cite{Benson10}.
Several {\sc{galform}}\xspace models have appeared in the literature that adopt different values for the model parameters and in some cases include different physical processes. For this work we adopt the model presented in L14 as it can reproduce a range of observational data (including $z=0$ luminosity functions in $b_{\rm J}$ and $K$-bands, see L14 for more details) and because it combines a number of important physical processes from previous {\sc{galform}}\xspace models. These include the effects of AGN feedback inhibiting gas cooling in massive haloes \citep{Bower06}, and a star formation law for galaxy discs \citep{Lagos11} based on an empirical relationship between the star formation rate and molecular-phase gas density \citep{BlitzRosolowsky06}. For the purposes of reproducing a number density of sub-mm galaxies appropriate for this study, a top-heavy IMF is implemented for starbursts, as in \cite{Baugh05}. However, in L14 a much less extreme slope is used compared to that invoked by Baugh et al\footnote{The slope of the IMF, $x$, in $dN(m)/d\ln m = m^{-x}$, has a value of $x=1$ in L14 whereas a value of $x=0$ was used in \cite{Baugh05}.}. The top-heavy IMF enhances the sub-mm luminosity of a starburst galaxy through a combination of an enhanced number of massive stars which increases the unattenuated UV luminosity of the galaxy, and a greater number of supernovae events which increases the metal content and hence dust mass available to absorb and re-emit the stellar radiation at sub-mm wavelengths. A significant difference between \cite{Baugh05} and L14 is that in Baugh et al. the starburst population was induced by galaxy mergers, whilst in L14 starbursts are primarily caused by disc instabilities. These instabilities use the same stability criterion for self-gravitating discs presented in \cite{Mo98} and \cite{Cole00}. They were included in \cite{Bower06}, but were not considered in \cite{Baugh05}. As with other {\sc{galform}}\xspace models, a standard \cite{Kennicutt83} IMF is adopted in L14 for quiescently star forming discs.
The model presented in L14 is designed to populate a Millennium-class dark matter only $N$-body simulation using a WMAP7 cosmology with a minimum halo mass of $1.9\times10^{10}$~$h^{-1}$~M$_{\odot}$. This work uses $50$ output snapshots from the model in the redshift range $z=0{-}8.5$; this large redshift range ensures that our simulated SMG population is complete.
\subsection{The Dust Model}
\label{sec:dust_model}
In order for the sub-mm flux of galaxies to be predicted, a model is required to calculate the amount of stellar radiation absorbed by dust and the resulting SED of the dust emission. Here we use a model motivated by the radiative transfer code {\sc{grasil}}\xspace \citep{Silva98}. {\sc{grasil}}\xspace calculates the heating and cooling of dust grains of varying sizes and compositions at different locations within each galaxy, effectively obtaining the dust temperature $T_{\rm d}$ at each position. {\sc{grasil}}\xspace has been coupled with {\sc{galform}}\xspace in previous works \citep[e.g.][]{Granato00,Baugh05,Swinbank08}. However, due to the computational expense of running {\sc{grasil}}\xspace for the number of {\sc{galform}}\xspace galaxies generated in the simulation volume used in this work, we instead use a model which retains some of the key assumptions of {\sc{grasil}}\xspace but with a significantly simplified calculation. Despite the simplifications made, this model can accurately reproduce the predictions of {\sc{grasil}}\xspace for rest-frame wavelengths $\lambda_{\rm rest}>70$ $\mu\mathrm{m}$. We are therefore confident in its application to the wavelengths under investigation here. We briefly describe our dust model in the following section. However, for a more detailed explanation we refer the reader to the appendix of L14.
We adopt the {\sc{grasil}}\xspace assumptions regarding the geometry of the stars and dust. Stars are distributed throughout two components (i) a spherical bulge with an $r^{1/4}$ density profile, and (ii) a flattened component which is either a quiescent disc or a starburst component, with exponential radial and vertical density profiles. Young stars and dust are assumed to be in the flattened component only. A two phase dust medium is also adopted, as in {\sc{grasil}}\xspace. Dust and gas exist in either dense molecular clouds, modelled as uniform density spheres of fixed mass ($10^6$ $\rm{M}_{\odot}$) and radius (16 pc), or a diffuse inter-cloud medium. Stars are assumed to form inside the molecular clouds and gradually escape into the diffuse dust on a timescale of a few Myrs, parametrised as $t_{\rm esc}$ in the model. The dust emission is first obtained by calculating the energy from stellar radiation absorbed in each dust component. Assuming thermal equilibrium, this is then equated to the energy emitted by the respective dust component, such that the luminosity per unit wavelength emitted by a mass $M_{\rm d}$ of dust is given by
\begin{equation}
L_{\lambda}^{\rm{dust}}=4\pi\kappa_{\rm d}(\lambda)B_{\lambda}(T_{\rm d})M_{\rm d},
\end{equation}
where $\kappa_{\rm d}(\lambda)$ is the absorption cross-section per unit mass and $B_{\lambda}(T_{\rm d})$ is the Planck blackbody function. Crucially this means that the dust temperature of each component is not a free parameter but is calculated self-consistently, based on global energy balance arguments. An important simplifying assumption here is that we assume only two dust temperatures, one for the molecular clouds and one for the diffuse medium. The dust mass, $M_{\rm d}$, is proportional to the metallicity times the cold gas mass, normalised to give the local inter-stellar medium dust-to-gas ratio for solar metallicity. For calculating dust emission, the dust absorption cross-section per unit mass of metals in the gas phase is approximated as follows:
\begin{equation}
\label{eq:kappa_d}
\kappa_{\rm d}(\lambda)=
\begin{cases}
\kappa_{1}\left(\frac{\lambda}{\lambda_{1}}\right)^{-2}&\lambda<\lambda_{\rm b}\\
\kappa_{1}\left(\frac{\lambda_{\rm b}}{\lambda_{1}}\right)^{-2}\left(\frac{\lambda}{\lambda_{\rm b}}\right)^{-\beta_{\rm b}}&\lambda>\lambda_{\rm b}\mathrm{.}\\
\end{cases}
\end{equation}
Here $\kappa_{1}=140$ cm$^2$\,g$^{-1}$ at the reference wavelength $\lambda_{1}=30$ $\mu$m \citep[e.g.][]{DraineLee84}. The power-law break is introduced at $\lambda_{\rm b}=100$ $\mu$m for starburst galaxies \emph{only}, with $\beta_{\rm b}=1.5$. For quiescently star forming galaxies we assume an unbroken power law, equivalent to $\lambda_{\rm b}\rightarrow\infty$.
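As a purely illustrative sketch (Python, cgs units; this is not the {\sc{galform}}\xspace implementation), the dust opacity and the resulting dust emission can be evaluated as:
\begin{verbatim}
import math

H_P, C_L, K_B = 6.626e-27, 2.998e10, 1.381e-16  # Planck, c, Boltzmann (cgs)

def kappa_d(lam_um, starburst, kappa1=140.0, lam1=30.0,
            lam_b=100.0, beta_b=1.5):
    """Dust absorption cross-section per unit mass (cm^2 g^-1);
    the power-law break applies to starburst galaxies only."""
    if (not starburst) or lam_um < lam_b:
        return kappa1 * (lam_um / lam1) ** -2.0
    return kappa1 * (lam_b / lam1) ** -2.0 * (lam_um / lam_b) ** -beta_b

def l_dust(lam_um, t_d, m_dust_g, starburst):
    """L_lambda = 4 pi kappa_d(lambda) B_lambda(T_d) M_d (erg/s/cm)."""
    lam_cm = lam_um * 1.0e-4
    x = H_P * C_L / (lam_cm * K_B * t_d)
    b_lam = 2.0 * H_P * C_L ** 2 / lam_cm ** 5 / math.expm1(x)
    return 4.0 * math.pi * kappa_d(lam_um, starburst) * b_lam * m_dust_g
\end{verbatim}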
The sub-mm number counts can be calculated by first constructing luminosity functions $dn/d\ln L_{\nu}$ at a given output redshift using $L_{\nu}$ calculated by the dust model. The binning in luminosity is chosen so that we have fully resolved the bright end, to which the derived number counts are sensitive. The number counts and redshift distribution can then be calculated using
\begin{equation}
\frac{d^2 N}{d\ln S_{\nu}dzd\Omega} = \left\langle\frac{dn}{d \ln L_{\nu}}\right\rangle\frac{dV}{dzd\Omega},
\label{eq:ncts}
\end{equation}
where the comoving volume element per unit solid angle is $dV/(dz\,d\Omega)=(c/H(z))\,r^2(z)$, $r(z)$ is the comoving radial distance to redshift $z$, and the brackets $\langle...\rangle$ represent a volume-averaging utilising the whole $N$-body simulation volume (500 $h^{-1}$Mpc)$^3$.
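The integration can be sketched as follows, assuming a flat cosmology and leaving the luminosity function and the flux-to-luminosity mapping as model-dependent placeholder inputs; this is illustrative only, and all names are ours.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458

def E(z, Om=0.3):
    # dimensionless Hubble rate, flat LCDM (illustrative)
    return np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def dV_dz_dOmega(z, H0=70.0, Om=0.3):
    # comoving volume element (c/H(z)) r(z)^2 [Mpc^3/sr]
    r, _ = quad(lambda zz: C_KM_S / (H0 * E(zz, Om)), 0.0, z)
    return C_KM_S / (H0 * E(z, Om)) * r ** 2

def d2N_dlnS_dz_dOmega(lnS, z, lf, lnL_of):
    # lf(lnL, z): volume-averaged luminosity function;
    # lnL_of(lnS, z): luminosity matching flux S at redshift z
    # (both hypothetical placeholders for the model inputs)
    return lf(lnL_of(lnS, z), z) * dV_dz_dOmega(z)
\end{verbatim}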
\subsection{Creating mock surveys}
\label{sec:create_mock_surveys}
In order to create mock catalogues of our sub-mm galaxies we utilise the lightcone treatment described in \cite{Merson13}. Briefly, as the initial simulation volume side-length ($L_{\rm{box}}=500$ $h^{-1}$Mpc) corresponds to the co-moving distance out to $z\sim 0.17$, the simulation is periodically replicated in order to fully cover the volume of a typical SMG survey, which extends to much higher redshift. This replication could result in structures appearing to be repeated within the final lightcone, which could produce unwanted projection-effect artefacts if their angular separation on the `mock sky' is small \citep{Blaizot05}. As our fields are small in solid angle (0.5 deg$^2$) and our box size is large, we expect this effect to be of negligible consequence and note that we have seen no evidence of projection-effect artefacts in our mock sub-mm maps. Once the simulation volume has been replicated, a geometry is determined by specifying an observer location and lightcone orientation. An angular cut defined by the desired solid angle of our survey is then applied, such that the mock survey area resembles a sector of a sphere. The redshift of a galaxy in the lightcone is calculated by first determining the redshift ($z$) at which its host dark matter halo enters the observer's past lightcone. The positions of galaxies are then interpolated from the simulation output snapshots ($z_{i},z_{i+1}$, where $z_{i+1}<z<z_{i}$) such that the real-space correlation function of galaxies is preserved. A linear $K$-correction interpolation is applied to the luminosity of the galaxy to account for the shift in $\lambda_{\rm rest}=\lambda_{\rm obs}/(1+z)$ for a given $\lambda_{\rm obs}$, based on its interpolated redshift.
To create the 850 $\mu$m mock catalogues we apply a further selection criterion so that our galaxies have $S_{850\mu\rm m}>0.035$ mJy. This is the flux limit above which we recover $\sim 90\%$ of the 850 $\mu\rm m$ EBL, as predicted by our model (Fig. \ref{fig:EBL}).
\begin{figure}
\includegraphics[width=\columnwidth]{EBL}
\caption{Predicted cumulative extragalactic background light as a function of flux at 850 $\mu$m (blue line). The horizontal dashed line \citep{Fixsen98} and dash-dotted line \citep{Puget96} show the background light as measured by the \emph{COBE} satellite. The shaded \citep{Puget96} and hatched \citep{Fixsen98} regions indicate the respective errors on the two measurements. The vertical dotted line indicates the flux limit above which $90\%$ of the total predicted EBL is resolved.}
\label{fig:EBL}
\end{figure}
We have checked that our simulated SMG population is not affected by incompleteness at this low flux limit, due to the finite halo mass resolution of the $N$-body simulation. To allow us to test field-to-field variance we generate $50~\times~0.5$ deg$^2$ lightcone surveys\footnote{In practice our surveys are $0.55$ deg$^2$. This allows for galaxies outside the $0.5$ deg$^2$ area to contribute to sources detected inside this area after convolution with the single-dish beam.} with random observer positions and lines of sight. In Fig. \ref{fig:ncts_int_lc} we show that the lightcone accurately reproduces the SMG number counts of our model. We also show in Fig. \ref{fig:ncts_int_lc} the predicted $850~\mu$m number counts from starburst (dotted line) and quiescent (dash-dotted line) galaxies in the model. Starburst galaxies dominate the number counts in the range $\sim0.2{-}20$ mJy. Turning off merger-triggered starbursts in this model has a negligible effect on the predicted number counts (L14); from this we infer that these bursts are predominantly triggered by disc instabilities.
\begin{figure}
\includegraphics[width=\columnwidth]{ncts_integral_lc}
\caption{Predicted cumulative number counts at 850 $\mu$m. Predictions from the lightcone catalogues (red line) and from integrating the luminosity function of the model (dashed blue line) are in excellent agreement. The dotted and dash-dotted blue lines show the contribution to the number counts from starburst and quiescent galaxies respectively. We compare the model predictions to single-dish observational data from Coppin et al. (\citeyear{Coppin06}; orange squares), Knudsen et al. (\citeyear{Knudsen08}; green triangles), Wei{\ss} et al. (\citeyear{Weiss09}; magenta diamonds) and Chen et al. (\citeyear{Chen13}; cyan circles). The vertical dotted line shows the approximate confusion limit ($\sim2$ mJy) of single-dish blank field surveys. Observational data fainter than this limit are derived from cluster-lensed surveys (see Section \ref{sec:nctsbeam} for further discussion).}
\label{fig:ncts_int_lc}
\end{figure}
\subsection{Creating sub-mm maps}
\label{sec:create_submm_maps}
Here we describe the creation of mock sub-mm maps from our lightcone catalogues. First, we create an image by assigning the $850$ $\mu$m flux of a galaxy to the pixel in which it is located, using a pixel size much smaller than the single-dish beam. This image is then convolved with a point spread function (PSF), modelled as a 2D Gaussian with a $15''$ FWHM ($\sim$SCUBA2/JCMT), and then re-binned into a coarser image with $2''\times2''$ pixels, to match observational pixel sizes. The resulting image is then scaled so that it is in units of mJy/beam. We refer to the output of this process as the astrophysical map (see Fig. \ref{fig:thumbs}a).
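A sketch of this map-making step is given below, assuming a Gaussian beam convolution via \texttt{scipy}; the pixel sizes and beam FWHM follow the values quoted above, while the function name and interface are our own.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def make_astro_map(x_as, y_as, flux_mjy,
                   fov_as=0.55 * 3600.0, pix_fine=0.5,
                   pix_out=2.0, fwhm_as=15.0):
    # grid the galaxy fluxes on a fine pixel grid (arcsec)
    n = int(fov_as / pix_fine)
    img, _, _ = np.histogram2d(
        x_as, y_as, bins=n,
        range=[[0.0, fov_as], [0.0, fov_as]],
        weights=flux_mjy)
    # convolve with the beam; rescale so a point source
    # peaks at its flux, i.e. units of mJy/beam
    sig = fwhm_as / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pix_fine
    img = gaussian_filter(img, sig) * 2.0 * np.pi * sig ** 2
    # rebin to coarser output pixels (mean preserves mJy/beam)
    f = int(pix_out / pix_fine)
    m = (n // f) * f
    return img[:m, :m].reshape(m // f, f, m // f, f).mean(axis=(1, 3))
\end{verbatim}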
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{map_thumbs}
\caption{Panels illustrating the mock map creation process at 850 $\mu$m. Panels (a)-(d) are $0.2\times 0.2$ deg$^2$ and are centred on a $13.1$ mJy source. (a) Astrophysical map including the effect of the telescope beam. (b) Astrophysical plus Gaussian white noise map, constrained to have zero mean. (c) Matched-filtered map. (d) Matched-filtered map with $S_{850\mu\rm m}>4$ mJy single-dish sources (blue circles centred on the source position) and $S_{850\mu \rm m}>1$ mJy galaxies (green dots) overlaid. (e) As for (d) but for a $0.5'\times0.5'$ area, centred on the same 13.1~mJy source. The 2 galaxies within the $9''$ radius (blue dotted circle, $\sim$ ALMA primary beam) of the source have fluxes of $1.2$ and $11.2$ mJy and redshifts of $1.0$ and $2.0$ respectively. (f) As for (e) but centred on a $12.2$ mJy source. In this case the 2 galaxies within the central $9''$ radius have fluxes of $6.1$ and $6.4$ mJy and redshifts of $2.0$ and $3.2$ respectively.}
\label{fig:thumbs}
\end{figure*}
In order to model the noise properties of observational maps we add `instrumental' Gaussian white noise to the astrophysical map. We tune the standard deviation of this noise such that after it has been matched-filtered (described below) the output is a noise map with $\sigma_{\rm rms} \sim 1$ mJy/beam, comparable to jackknifed noise maps in $850$ $\mu$m blank-field observational surveys \citep[e.g.][]{Coppin06,Weiss09,Chen13}.
It is a well-known result in astronomy that the best way to find point-sources in the presence of noise is to convolve with the PSF \citep{Stetson87}. However, this is only optimal if the noise is Gaussian, and does not take into account `confusion noise' from other point-sources. \cite{Chapin11} show how one can optimise filtering for maps with significant confusion, through modelling this as a random (and thus unclustered) superposition of point sources convolved with the PSF, normalised to the number counts inferred from $P(D)$ analysis of the maps. The PSF is then divided by the power spectrum of this confusion noise realisation. This results in a matched-filter with properties similar to a `Mexican-hat' kernel. An equivalent method is implemented in \cite{Laurent05}. Although our simulated maps contain a significant confusion background, for simplicity we do not implement such a method here, and have checked that the precise method of filtering does not significantly affect our source-extracted number counts.
Prior to source extraction, we constrain our astrophysical plus Gaussian noise map to have a mean of zero (Fig. \ref{fig:thumbs}b) and convolve with a matched-filter $g(x)$, given by
\begin{equation}
g(x)=\mathcal{F}^{-1} \left\{ \frac{s^{*}(q)}{\int |s(q)|^2 d^{2}q} \right\},
\label{eq:filter}
\end{equation}
where $\mathcal{F}^{-1}$ denotes an inverse Fourier transform, $s(q)$ is the Fourier transform of our PSF and the asterisk indicates complex conjugation. The denominator is the appropriate normalisation such that peak heights of PSF-shaped sources are preserved after filtering. Up to this normalisation factor, the matched-filtering is equivalent to convolving with the PSF. Point sources are therefore effectively convolved with the PSF twice, once by the telescope and once by the matched-filter. This gives our final matched-filtered map (Fig. \ref{fig:thumbs}c) a spatial resolution of $\sim 21.2''$ FWHM i.e. $\sqrt{2}\times 15''$.
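In discrete form the filtering amounts to two FFTs; a minimal sketch, assuming the PSF is peak-normalised and sampled on the same grid as the map, is given below. The function name is ours.
\begin{verbatim}
import numpy as np

def matched_filter(img, psf):
    # psf: peak-normalised beam, centred in an array of the
    # same shape as img (shifted to the origin before the FFT)
    s = np.fft.fft2(np.fft.ifftshift(psf))
    # discrete analogue of the normalising integral; chosen so
    # that PSF-shaped sources keep their peak height
    norm = np.sum(np.abs(s) ** 2) / s.size
    return np.fft.ifft2(np.fft.fft2(img) * np.conj(s) / norm).real
\end{verbatim}
Applying this to a pure noise map allows the input white-noise level to be tuned until the filtered map has the desired $\sigma_{\rm rms}$, as described above.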
For real surveys, observational maps often have large scale filtering applied prior to the matched-filtering described above. This is to remove large scale structure from the map, often an artefact of correlated noise of non-astrophysical origin. This is implemented by convolving the map with a Gaussian broader than the PSF and then subtracting this off the original, rescaling such that the flux of point sources is conserved \citep[e.g.][]{Weiss09,Chen13}. As our noise is Gaussian, any excess in the power spectrum of the map on large scales can only be attributed to our astrophysical clustering signal, so we choose not to implement any such high-pass filtering prior to our matched-filtering.
An example of one of our matched-filtered maps is shown in Fig. \ref{fig:Map} and the associated pixel histogram in Fig. \ref{fig:pixelplot}. The position of the peak of the pixel histogram is determined by the constraint that our maps have a zero mean after subtracting a uniform background. The Gaussian fit broadens from $\sigma=1$ mJy/beam in our matched-filtered noise-only map to $\sigma=1.2$ mJy/beam in our final matched-filtered map; we attribute this broadening to the realistic confusion background from unresolved sources in our maps.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{map}
\caption{A matched-filtered map. Sources detected with $S_{850\mu\rm m}>4.5$ mJy by our source extraction algorithm are indicated by blue circles. The central $0.5$ deg$^2$ region, from which we extract our sources, is indicated by the black circle.}
\label{fig:Map}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pixplot}
\caption{Pixel flux histogram of the map shown in Fig. \ref{fig:Map}. The grey and black lines are the map before and after convolution with the single-dish beam respectively, with the same zero point subtraction applied as to our final matched-filtered map (blue line). The map is rescaled after convolution with the single-dish beam to convert to units of mJy/beam (grey to black), and during the matched-filtering due to the normalisation of the filter which conserves point source peaks (black to blue). Dotted lines show Gaussian fits to the matched-filtered noise-only (red solid line) and the negative tail of the final matched-filtered (blue solid line) map histograms respectively.}
\label{fig:pixelplot}
\end{figure}
For the source extraction we first identify the peak (i.e. brightest) pixel in the map. For simplicity we record the source position and flux to be the centre and value of this peak pixel. We then subtract the matched-filtered PSF, scaled and centred on the value and position of the peak pixel, from our map. This process is iterated down to an arbitrary threshold value of $S_{850\mu\rm m}=1$ mJy, resulting in our source-extracted catalogue.
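This peak-finding loop can be sketched as follows, assuming a square, odd-sized, peak-normalised matched-filtered PSF; the zero-padding is a convenience of our sketch rather than part of the method.
\begin{verbatim}
import numpy as np

def extract_sources(fmap, mf_psf, s_thresh=1.0):
    # mf_psf: matched-filtered PSF, peak-normalised, odd-sized,
    # with its peak at the array centre
    h = mf_psf.shape[0] // 2
    work = np.pad(fmap, h)  # zero-pad so subtraction never clips
    sources = []
    while True:
        iy, ix = np.unravel_index(np.argmax(work), work.shape)
        peak = work[iy, ix]
        if peak < s_thresh:
            break
        sources.append((ix - h, iy - h, peak))
        # subtract the scaled PSF centred on the peak pixel
        work[iy - h:iy + h + 1, ix - h:ix + h + 1] -= peak * mf_psf
    return sources
\end{verbatim}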
\section{Results}
\label{sec:results}
In this section we present our main results: in Section \ref{sec:nctsbeam} we show the effect the single-dish beam has on the predicted number counts through blending the sub-mm emission of galaxies into a single source. In Section \ref{sec:multiplicity} we quantify the multiplicity of blended sub-mm sources, in Section \ref{sec:unassociated} we show that these blended galaxies are typically physically unassociated and in Section \ref{sec:dndz} we present the redshift distribution of SMGs in our model.
\subsection{Number counts}
\label{sec:nctsbeam}
The cumulative number counts derived from our lightcone and source-extracted catalogues are presented in Fig. \ref{fig:nctsbeam}. The shaded regions, which show the 10-90 percentiles of the distribution of number counts from the individual fields, give an indication of the field-to-field variation we predict for fields of $0.5$ deg$^2$ area. This variation is comparable to or less than the quoted observational errors. Quantitatively, we find a field-to-field variation in the source-extracted number counts of $0.07$ dex at 5 mJy and $0.34$ dex at 10 mJy. A clear enhancement in the source-extracted number counts relative to those derived from our lightcone catalogues is evident at $S_{850\mu\rm m}\gtrsim 1$ mJy. We attribute this to the finite angular resolution of the beam blending together the flux from multiple galaxies with projected on-sky separations comparable to or less than the size of the beam. Our source-extracted number counts show better agreement than our intrinsic lightcone counts with blank-field single-dish observational data above the confusion limit ($S_{\rm lim}\approx2$ mJy) of such surveys, which is indicated by the vertical dotted line in Fig. \ref{fig:nctsbeam}.
Observational data fainter than this limit have been measured from gravitationally lensed cluster fields, where gravitational lensing due to a foreground galaxy cluster magnifies the survey area, typically by a factor of a few, but up to $\sim 20$. The magnification increases the effective angular resolution of the beam, thus reducing the confusion limit of the survey and the instances of blended galaxies. The lensing also boosts the flux of the SMGs. These effects allow cluster-lensed surveys to probe much fainter fluxes than blank-field surveys performed with the same telescope. We show observational data in Fig. \ref{fig:ncts_int_lc} at $S_{850\mu\rm m}<2$ mJy for comparison with our lightcone catalogue number counts, with which they agree well.
Fig. \ref{fig:nctsbeam} shows that at $S_{850\mu\rm m}\gtrsim 5$ mJy our source-extracted counts agree best with the \cite{Weiss09} data, taken from the ECDFS. There is some discussion in the literature over whether this field is underdense by a factor of $\sim 2$ (see Section 4.1 of \cite{Chen13} and references therein). Whilst the field-to-field variation in our model can account for a factor of $\sim 2$ (at 10 mJy), it may be that our combined field source-extracted counts (and also those of Wei{\ss} et al.) are indeed underdense compared to number counts representative of the whole Universe.
At $2\lesssim S_{850\mu \rm{m}}\lesssim5$ mJy our source-extracted number counts appear to follow a slightly steeper trend compared to the observed counts; this may be due to the underlying shape of our lightcone catalogue counts and the effect this has on our source-extracted counts. We stress here that the L14 model was developed without regard to the precise effect the single-dish beam would have on the number counts. An extensive parameter search which shows the effect of varying certain parameters on the intrinsic number counts (and other predictions) of the model is presented in L14. We do not consider any variants on the model here, but it is possible that once the effects of the single dish beam have been taken into account some variant models will match other observational data better, and show different trends over the flux range of interest.
The observed number counts at faint fluxes, above the confusion limit, may also be affected by completeness issues. Whilst efforts are made to account for these in observational studies, they often rely on making assumptions about the number density and clustering of SMGs, so it is not clear that they are fully understood.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ncts}
\caption{The effect of single-dish beam size on cumulative 850 $\mu$m number counts. The shaded regions show 10-90 percentiles of the distribution of the number counts from the $50$ individual fields, solid lines show counts from the combined 25 deg$^2$ field for the lightcone (red) and the $15''$ FWHM beam source extracted (green) catalogues. The vertical dotted line at $S_{850 \mu \rm m}=2$ mJy indicates the approximate confusion limit of single-dish surveys. The $15''$ beam prediction is only to be compared at fluxes above this limit. Single-dish blank field observational data is taken from Coppin et al. (\citeyear{Coppin06}; orange squares), Wei{\ss} et al. (\citeyear{Weiss09}; magenta diamonds) and Chen et al. (\citeyear{Chen13}; cyan circles).}
\label{fig:nctsbeam}
\end{figure}
\subsection{Multiplicity of single-dish sources}
\label{sec:multiplicity}
Given that multiple SMGs can be blended into a single source, in this section we quantify this multiplicity. For each galaxy within a $4\sigma$ radius\footnote{We use the $\sigma$ of our matched-filtered PSF, i.e. $\sqrt{2}\,{\rm FWHM}/(2\sqrt{2\ln{2}}) \approx 9''$, and choose $4\sigma$ so that the search radius is large enough for our results in this section to have converged after our flux weighting scheme has been applied.} of a given $S_{850\mu\rm{m}} > 2$ mJy source, we determine a flux contribution for that galaxy at the source position by modelling its flux distribution as the matched-filtered PSF with a peak value equal to that galaxy's flux. For example, a 5 mJy galaxy at a $\sim10.6''$ ($\sigma\times\sqrt{2\ln{2}}$) radial distance from a given source will contribute 2.5 mJy at the source position. We do this for all galaxies within the $4\sigma$ search radius and label the sum of these contributions as the total galaxy flux of the source, $S_{\rm gal\_tot}$. The fraction each galaxy contributes towards this total is the galaxy's flux weight. For each source we then interpolate the cumulative distribution of flux weights after sorting in order of decreasing flux weight, to determine how many galaxies are required to contribute a given percentage of the total.
We plot this as a function of source-extracted flux, which includes the effect of instrumental noise and the subtraction of a uniform background, in the top $4$ panels of Fig. \ref{fig:multiplicity}. Typically, $90\%$ of the total galaxy flux of a $5$ mJy source is contributed by $\sim3{-}6$ galaxies and this multiplicity decreases slowly as source flux increases. This decrease follows intuitively from the steep decrease in number density with increasing flux in the number counts.
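A sketch of this weighting scheme, with positions in arcsec and galaxy coordinates as an $(N,2)$ array, is given below; the names are ours.
\begin{verbatim}
import numpy as np

SIG_MF = np.sqrt(2.0) * 15.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def flux_weights(src_xy, gal_xy, gal_flux, n_sig=4.0):
    # contribution of each nearby galaxy at the source position,
    # modelled as the matched-filtered (Gaussian) profile
    d = np.hypot(gal_xy[:, 0] - src_xy[0],
                 gal_xy[:, 1] - src_xy[1])
    sel = d < n_sig * SIG_MF
    contrib = gal_flux[sel] * np.exp(-0.5 * (d[sel] / SIG_MF) ** 2)
    s_gal_tot = contrib.sum()
    return contrib / s_gal_tot, s_gal_tot

def n_galaxies_for(weights, frac=0.9):
    # number of galaxies contributing a fraction frac of S_gal_tot
    w = np.sort(weights)[::-1]
    return int(np.searchsorted(np.cumsum(w), frac) + 1)
\end{verbatim}
With these definitions a 5 mJy galaxy at a $10.6''$ offset contributes 2.5 mJy, reproducing the example above.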
We note that this is not how source multiplicity is typically measured in observations. In Section \ref{sec:ALESS_ncts} we discuss the multiplicity of ALESS sources in a way more comparable to observations, where we have considered the flux limit and primary beam profile of ALMA, see also Table \ref{tab:ALMAtable}. Observational interferometric studies which suggest that the multiplicity of single-dish sources may increase with increasing source flux \citep[e.g.][]{Hodge13} are likely to be affected by a combination of the flux limit of the interferometer, meaning high multiplicity faint sources are undetected, and small number statistics of bright sources.
We also show, in the bottom panel of Fig. \ref{fig:multiplicity}, the ratio of the total galaxy flux to source flux. The consistency with zero indicates that our source-extracted number counts at 850 $\mu$m are not systematically biased. This is due to the competing effects of subtracting a mean background in the map creation (which biases $S_{\rm source}$ low) and the introduction of Gaussian noise (which biases $S_{\rm source}$ high due to Eddington bias caused by the steeply declining nature of the number counts) effectively cancelling each other out in this case. In Section \ref{sec:multi_lam} we find that our number counts at $450~\mu$m are strongly affected by Eddington bias, which we correct for in that case.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{multiplicity}
\caption{\emph{Top $4$ panels}: Number of component galaxies contributing the percentage indicated in the panel of the total galaxy flux (see text) of a $S_{850 \mu m}>2$ mJy source. \emph{Bottom panel}: Ratio of total galaxy flux to source flux. Black dashed line is a reference line drawn at zero. Solid red line shows median and errorbars indicate inter-quartile range for a 2mJy flux bin in all panels. Grey dots show individual sources, for clarity only 10\% of the sources have been plotted.}
\label{fig:multiplicity}
\end{figure}
\subsection{Physically unassociated galaxies}
\label{sec:unassociated}
Given the multiplicity of our sources, we can further determine whether the blended galaxies contributing to a source are physically associated, or whether their blending has occurred due to a chance line of sight projection. For each source we define a redshift separation, $\Delta z$, as the inter-quartile range of the cumulative distribution of the flux weights (calculated as described above), where the galaxies have in this case first been sorted by ascending redshift. The distribution of $\Delta z$ across our entire $S_{850\mu\mathrm{m}}>4$ mJy source population is shown in Fig. \ref{fig:associated}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{associated}
\caption{Distribution of the logarithm of redshift separation (see text) of $S_{850\mu\mathrm{m}}>4$ mJy single-dish sources. The dominant peak at $\Delta z\approx1$ implies that the majority of the blended galaxies are physically unassociated. The hatched region indicates the percentage ($\sim36\%$) of sources for which $\Delta z=0$ (see text in Section \ref{sec:unassociated}).}
\label{fig:associated}
\end{figure}
The dominant peak at $\Delta z\approx 1$ is similar to the distribution derived from a set of maps which had galaxy positions randomised prior to convolution with the single-dish beam. This suggests that this peak results solely from random sampling of the redshift distribution of our SMGs, and thus that the majority of our sources are composed of physically unassociated galaxies with a small on-sky separation due to chance line of sight projection. This is unsurprising considering the large effective redshift range of sub-millimetre surveys, resulting from the negative $K$-corrections of SMGs. We attribute the secondary peak at $\Delta z\sim 5\times 10^{-4}$ to clustering in our model, and defer a more thorough analysis of this to a future work. We also show as the hatched region the fraction ($\sim36\%$) of sources for which $\Delta z=0$. These are sources for which a single galaxy spans the inter-quartile range of the cumulative distribution described above; this can occur when the flux weight of that galaxy is $>0.5$ and must occur when the flux weight of that galaxy is $>0.75$. We understand that this is not how redshift separation would be defined observationally, and refer the reader to Section \ref{sec:ALESS} and Fig. \ref{fig:ALESS_delta_z} for another definition of $\Delta z$. We note, however, that our conclusions in this section are not sensitive to the precise definition of $\Delta z$.
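For a single source this definition reduces to a few lines, assuming the flux weights sum to unity; the function name is ours.
\begin{verbatim}
import numpy as np

def redshift_separation(weights, redshifts):
    # inter-quartile range of the cumulative flux-weight
    # distribution, galaxies sorted by ascending redshift;
    # returns 0 when one galaxy spans the inter-quartile range
    order = np.argsort(redshifts)
    z = np.asarray(redshifts)[order]
    cum = np.cumsum(np.asarray(weights)[order])
    return (z[np.searchsorted(cum, 0.75)]
            - z[np.searchsorted(cum, 0.25)])
\end{verbatim}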
It is a feature of most current SAMs that any star formation enhancement caused by gravitational interactions of physically associated galaxies prior to a merger event is not included. In principle this may affect our physically unassociated prediction, as in our model merging galaxies would only become sub-mm bright post-merger, at which point they would be classified as a single galaxy. However, as merger-induced starbursts have a negligible effect on our sub-mm number counts, which are composed of starbursts triggered by disc instabilities (L14), we are confident our physically unassociated conclusion is not affected by this feature.
We note that this conclusion is in contrast to predictions made by \cite{Hayward13b} who, in addition to physically unassociated blends, predict a more significant physically associated population than is presented here. However, we believe our work has a number of significant advantages over that of \cite{Hayward13b} in that: (i) galaxy formation is modelled here \emph{ab initio} with a model that can also successfully reproduce galaxy luminosity functions at $z=0$; (ii) the treatment of blending presented here is more accurate through convolution with a beam, the inclusion of instrumental noise and matched-filtering prior to source-extraction, rather than a summation of sub-mm flux within some radius around a given SMG; and (iii) our $15''$ source-extracted number counts show better agreement with single-dish data for $S_{850\mu\rm m}\gtrsim 5$ mJy; this is probably in part due to the exclusion of starbursts from the Hayward et al. (2013b) model, though the effect that including starbursts would have on the number counts in that model is not immediately clear.
\subsection{Redshift distribution}
\label{sec:dndz}
As we have shown that sub-mm sources are composed of multiple galaxies at different redshifts, for this section we consider our lightcone catalogues \emph{only}.
The redshift distributions for the `bright' $S_{850\mu\mathrm{m}}>5$ mJy and `faint' $S_{850\mu\mathrm{m}}>1$ mJy galaxy populations are shown in Fig. \ref{fig:dndz_poisson}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{dndz}
\caption{The predicted redshift distribution for our $50\times0.5$ deg$^2$ fields for the flux limit indicated on each panel. The shaded red region shows the 16-84 ($1\sigma$) percentile of the distributions from the 50 individual fields. The solid red line is the distribution for the combined $25$ deg$^2$ field. The boxplots represent the distribution of the median redshifts of the $50$ fields, the whiskers show the full range, with the box and central line indicating the inter-quartile range and median. The errorbars show the expected $1\sigma$ variance due to Poisson errors.}
\label{fig:dndz_poisson}
\end{figure}
The shaded region shows the 16-84 (1$\sigma$) percentiles of the distributions from the 50 individual fields, arising from field-to-field variations. The errorbars indicate the $1\sigma$ Poisson errors. The bright SMG population has a lower median redshift ($z_{50}=2.05$) than the faint one ($z_{50}=2.77$). We note that the median redshift appears to be a robust statistic with an inter-quartile range of 0.17 (0.11) for the bright (faint) population for the 0.5 deg$^2$ field size assumed. The field-to-field variation seen in the bright population is comparable to the Poisson errors and thus random variations, whereas for the faint population the field-to-field variation is greater than the Poisson errors. In order to further quantify this field-to-field variance, we have performed the Kolmogorov-Smirnov (K-S)
test between the 1225 pairs of our 50 fields, for the bright and faint populations. We find that for the bright population the distribution of $p$-values is similar to that obtained if we perform the same operation with 50 random samplings of the parent field, though with a slightly more significant low $p$-value tail. Approximately $10\%$ of the field pairs exhibit $p<0.05$, suggesting that it is more common than one would expect by chance for redshift distributions derived from non-contiguous pencil beams of sky to fail the K-S test, as in \cite{Michalowksi12}. For the faint population, $92\%$ of the field pairs have $p<0.05$.
Thus, it appears that the bright population in the individual fields is more consistent with being a random sampling of the parent $25$ deg$^2$ distribution. This is due to: (i) the number density of the faint population being $\sim 30$ times greater than the bright population, which significantly reduces the Poisson errors; and (ii) the median halo mass of the two populations remaining similar, $7.6$ $(5.5)$ $\times 10^{11}$ $h^{-1}$M$_{\odot}$ for our bright (faint) population implying that the two populations trace the underlying matter density with a similar bias. We consequently predict that as surveys probe the SMG population down to fainter fluxes, they will become more sensitive to field-to-field variations induced by large scale structure.
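The pairwise comparison described above is straightforward to reproduce; a sketch using \texttt{scipy}, with the per-field redshift arrays as input, is given below. The function name is ours.
\begin{verbatim}
from itertools import combinations
from scipy.stats import ks_2samp

def pairwise_ks_fraction(field_z, alpha=0.05):
    # field_z: list of per-field redshift arrays; 50 fields
    # give the 1225 pairs quoted in the text
    pvals = [ks_2samp(za, zb)[1]
             for za, zb in combinations(field_z, 2)]
    return sum(p < alpha for p in pvals) / float(len(pvals))
\end{verbatim}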
\section{Comparison to ALESS}
\label{sec:ALESS}
In this section we make a detailed comparison of our model with observational data from the recent ALMA follow-up survey \citep{Hodge13} of LESS \citep{Weiss09}, referred to as ALESS. LESS is an 870$\mu\mathrm{m}$ LABOCA (19.2$''$ FWHM) survey of 0.35 deg$^2$ (covering the full area of the ECDFS) with a typical noise level of $\sigma\sim1.2$ mJy/beam. \cite{Weiss09} extracted 126 sources based on an S/N $>3.7\sigma$ ($\simeq S_{870\mu\mathrm{m}}>4.5$ mJy) at which they were $\sim70\%$ complete. Of these 126 sources, 122 were targeted for cycle 0 observations with ALMA. From these 122 maps, 88 were selected as `good' based on their rms noise and axial beam ratio, from which 99 sources were extracted down to $\sim1.5$ mJy. The catalogue containing these 99 sources is presented in \cite{Hodge13}, with the resulting number counts and photometric redshift distribution being presented in \cite{Karim13} and \cite{Simpson13} respectively. For the purposes of our comparison we randomly sample (without replacement) $70\%$ ($\sim 88/126$) of our $S_{850\mu\mathrm{m}}>4.5$ mJy sources from the central 0.35 deg$^2$ of our 50 mock maps\footnote{We re-calculate the `effective' area of our follow-up surveys as $0.35$ deg$^2$ $\times N_{\rm{Good\,ALMA\,Maps}}/N_{\rm LESS\,Sources}\approx0.25$ deg$^2$ as in \cite{Karim13}.}. Around all of these sources we place $18''$ diameter masks ($\sim$ ALMA primary beam). From these we extract `follow-up' galaxies down to a minimum flux of $S_{850\mu\rm m}=1.5$ mJy from the relevant lightcone catalogue. We take into account the profile of the ALMA primary beam for this, modelling it as a Gaussian with an $18''$ FWHM, such that lightcone galaxies at a radius of 9$''$ from a source are required to be $>3$~mJy for them to be `detected'. The result of this procedure is our `follow-up' catalogue. We note that we have not attempted to simulate and extract sources from ALMA maps.
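The primary-beam selection can be sketched as follows, with positions in arcsec; the interface is our own, and we stress again that no interferometric maps are simulated.
\begin{verbatim}
import numpy as np

def followup_galaxies(src_xy, gal_xy, gal_flux,
                      s_lim=1.5, fwhm_pb=18.0):
    # Gaussian primary beam centred on the single-dish source;
    # a galaxy is kept if its attenuated flux exceeds s_lim mJy
    # within the mask, so the limit at a 9'' offset is 3 mJy
    sig = fwhm_pb / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    d = np.hypot(gal_xy[:, 0] - src_xy[0],
                 gal_xy[:, 1] - src_xy[1])
    attenuated = gal_flux * np.exp(-0.5 * (d / sig) ** 2)
    return (d <= fwhm_pb / 2.0) & (attenuated >= s_lim)
\end{verbatim}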
\subsection{Number counts and source multiplicity}
\label{sec:ALESS_ncts}
We present the number counts from our simulated follow-up catalogues in Fig. \ref{fig:ALESS_ncts} and observe a difference between our simulated single-dish and follow-up number counts similar to that found between their observed analogues in the (A)LESS survey (Wei{\ss} et al. 2009 and Karim et al. 2013 respectively).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ALESS_ncts}
\caption{Comparison with (A)LESS number counts. The blue line is our prediction for our combined ($17.5$ deg$^2$) follow-up catalogues (described in text) and is to be compared to the ALESS number counts presented in Karim et al. (\citeyear{Karim13}; green triangles). The green line is our $19''$ source-extracted number counts for the combined (17.5 deg$^2$) field and is to be compared to the number counts presented in Wei{\ss} et al. (\citeyear{Weiss09}; cyan circles). The shaded regions indicate the 10-90 percentiles of the distribution of the individual (0.35 deg$^2$) field number counts. The red line is the number counts for the combined field from our lightcone catalogues. The vertical dotted and dash-dotted lines indicate the 4.5 mJy single-dish source-extraction limit of LESS and the 1.5 mJy maximum sensitivity of ALMA respectively.}
\label{fig:ALESS_ncts}
\end{figure}
Also evident is the bias inherent in our simulated follow-up compared to our lightcone catalogues at fluxes fainter than the source extraction limit of the single-dish survey. This arises because follow-up galaxies are only selected due to their on-sky proximity to a single-dish source, so they are not representative of a blank-field population. For this reason \cite{Karim13} do not present number counts fainter than the source extraction limit of LESS, despite the ability of ALMA to probe fainter fluxes. Whilst our model agrees well with both interferometric and single-dish data at bright fluxes, as discussed in Section \ref{sec:nctsbeam}, our single-dish predictions are in excess of the \cite{Weiss09} data at fainter fluxes ($S_{850\mu\rm m}\lesssim 7$~mJy). We also observe a minor excess in our `follow-up' number counts when compared with the \cite{Karim13} data for $S_{850\mu\rm m}\lesssim5$~mJy.
We show the ratio of the brightest follow-up galaxy flux for each source to the source flux in Fig. \ref{fig:component_flux}; our prediction is in excellent agreement with the observed sample, with the brightest of our follow-up galaxies contributing roughly $70\%$ of the source flux on average.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ALESS_brightest}
\caption{Ratio of brightest galaxy component flux to single-dish source flux. Grey scatter points show the brightest galaxies from our targeted sources over the combined 17.5 deg$^2$ simulated field. The magenta line shows the median in a given flux bin. Observational data is taken from the Hodge et al. (\citeyear{Hodge13}) ALESS catalogue. The white squares indicate the median observational flux ratio and source flux in a given bin, with the binning chosen such that there are roughly equal numbers of sources in each bin. Error bars indicate the $1\sigma$ percentiles of the ratio distribution in a given flux bin for both simulated and observed data. The black dashed line is a reference line drawn at $70\%$.}
\label{fig:component_flux}
\end{figure}
This fraction is approximately constant over the range of source fluxes probed by LESS. The scatter of our simulated data is also comparable to that seen observationally. Not plotted in Fig. \ref{fig:component_flux} are sources for which the brightest galaxy is below the flux limit of ALMA. These account for $\sim10\%$ of our sources. \cite{Hodge13} found that $\sim21\pm5\%$ of the $88$ ALMA `Good Maps' yielded no ALMA counterpart. The greater fraction of blank maps in the observational study could be caused by extended/diffuse SMGs falling below the detection threshold of ALMA and/or a greater source multiplicity in the observed sample. We present a breakdown of the predicted ALMA multiplicity of our simulated LESS sources compared to the observed \cite{Hodge13} sample in Table \ref{tab:ALMAtable}. Our simulated follow-up catalogue is consistent with the observed sample at $\sim2\sigma$. However, we caution that it is difficult to draw strong conclusions from this comparison due to the relatively small number of observed sources. We also note that we observe a trend of increasing source multiplicity with flux similar to that suggested in \cite{Hodge13}. For example, at $S_{850\mu\rm m}=5$~mJy the fraction of simulated sources with 2 ALMA components is $\sim10\%$ increasing to $\sim40\%$ at $S_{850\mu\rm m}=10$~mJy, with the fraction of simulated sources with 1 ALMA component decreasing from $\sim70\%$ to $\sim60\%$ over the same flux range. This is in contrast to conclusions drawn from Fig. \ref{fig:multiplicity} and shows that this observed trend is probably caused by the flux limit of the interferometer, meaning that faint components are undetected.
\begin{table}
\centering
\caption{A breakdown of the number of ALMA components from our simulated sample for comparison with the observed sample of Hodge et al. (\citeyear{Hodge13}). The columns are: (1) the number of ALMA components; (2) the percentage of our simulated sources with that number of ALMA components; (3) the percentage of observed LESS sources with `good' ALMA maps that contain that number of ALESS components, errors are Poisson; and (4) the number of observed LESS sources with `good' ALMA maps that contain that number of ALMA components.}
\label{tab:ALMAtable}
\begin{tabular}{cccc}\hline
$N$ & Sim. (\%)& Obs. (\%)& Obs. (/88) \\ \hline
0 & 10.6 & $22\pm5$ & 19 \\
1 & 72.2 & $51\pm8$ & 45 \\
2 & 16.5 & $22\pm5$ & 19 \\
3 & 0.70 & $5\pm3$ & 4 \\
4 & 0.01 & $1\pm1$ & 1 \\ \hline
\end{tabular}
\end{table}
For comparison with future observations we calculate $\Delta z$ for all of our sources with $\geq2$ ALMA components as the redshift separation of the brightest two. We show the resulting distribution in Fig. \ref{fig:ALESS_delta_z}. It is of a similar bimodal shape to the distribution presented in Fig. \ref{fig:associated} and supports the idea that, in our model, blended galaxies are predominantly chance line of sight projections with a minor peak at small $\Delta z$ due to clustering. We leave this as a prediction for future spectroscopic redshift surveys of interferometer identified SMGs (e.g. Danielson et al. in prep).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ALESS_dz}
\caption{Distribution of the logarithm of redshift separation of the brightest two ALMA components of a $S_{850 \mu \rm m}>4.5$ mJy single-dish source for our combined (17.5 deg$^2$) field.}
\label{fig:ALESS_delta_z}
\end{figure}
\subsection{Redshift distribution}
One of the main advantages of the 99 ALMA sources identified in \cite{Hodge13} is that the greater positional accuracy ($\sim0.2''$) provided by ALMA allows counterparts to be identified without introducing biases associated with selection at wavelengths other than sub-mm (e.g. radio). \cite{Simpson13} derived photometric redshifts for 77 of 96 ALMA SMGs\footnote{Three of the 99 SMGs presented in \cite{Hodge13} lay on the edge of the ECDFS with coverage in only two IRAC bands, and so were not considered further in \cite{Simpson13}.}. The remaining 19 were only detected in $\leq3$ bands and so reliable photometric redshifts could not be determined. Redshifts for these `non-detections' were modelled in a statistical way based on assumptions regarding the $H$-band absolute magnitude ($M_{H}$) distribution of the 77 `detections' \citep[see][for more details]{Simpson13}. We compare the redshift distribution presented in \cite{Simpson13} to that of our simulated follow-up survey in Fig. \ref{fig:ALESS_dpdz}. For the purposes of this comparison we have included the $P(z)$, the sum of the photometric redshift probability distributions for each galaxy, with (solid green line) and without (dotted green line) the $H$-band modelled redshifts.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ALESS_dndz}
\caption{Comparison of normalised redshift distributions for the simulated and observed ALESS surveys. We show the Simpson et al. (\citeyear{Simpson13}) $P(z)$, the sum of the photometric redshift probability distributions of each galaxy, both including redshifts derived from $H$-band absolute magnitude modelling for `non-detections' (see Simpson et al. for details, solid green line) and for photometric detections only (dotted green line). The square marker indicates the observed median redshift (including $H$-band modelled redshifts), with associated errors. The magenta solid line is the distribution for the simulated, combined 17.5 deg$^2$ field with the shaded region showing the 10-90 percentiles of the distributions from the 50 individual fields. The boxplot shows the distribution of median redshifts for each of the 50 individual fields, the whiskers indicate the full range, with the box and line indicating the inter-quartile range and median respectively.}
\label{fig:ALESS_dpdz}
\end{figure}
Our model exhibits a high-redshift ($z>4$) tail when compared to the top panel of Fig. \ref{fig:dndz_poisson}, due to the inclusion of fainter galaxies in this sample, and is in excellent agreement with the median redshift of the observed distribution. We performed the K-S test between each of our 50 follow-up redshift distributions and the ALESS distribution and found a low median $p$-value of $0.16$, with $18\%$ of the K-S tests exhibiting $p<0.05$. We do note, however, that the $M_{H}$-band modelling of the 19 `non-detections' ($\sim 20\%$ of the sample) and the sometimes significant photometric errors may affect the observed distribution.
We also investigate whether or not our model reproduces the same behaviour as seen in ALESS between redshift and $S_{850\mu\mathrm{m}}$ in Fig. \ref{fig:z_vs_S}. Our model predicts that at lower redshift our simulated SMG population is generally brighter whilst in the observational data the opposite appears to be the case. However, \cite{Simpson13} argue that this trend in their data is not significant and that their non-detections, 14/19 of which are at $S_{870\mu\mathrm{m}}<2$ mJy, would most likely render it flat if redshifts could be determined for these galaxies.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ALESS_S_v_z}
\caption{Relation between $S_{850\mu\mathrm{m}}$ and redshift for our simulated follow-up galaxies over our combined 17.5 deg$^2$ field. Solid line shows the median redshift in a given $1$ mJy $S_{850\mu\mathrm{m}}$ bin with errorbars indicating the inter-quartile range. Observational data from Simpson et al. (\citeyear{Simpson13}) has been binned in $2$ mJy bins, with the median redshift plotted as the white squares with errorbars indicating $1\sigma$ bootstrap errors.}
\label{fig:z_vs_S}
\end{figure}
\section{Multi-wavelength surveys}
\label{sec:multi_lam}
Until now we have focussed on surveys at 850 $\mu$m, traditionally the wavelength at which most sub-mm surveys have been performed. However, there are now a number of observational blank-field surveys performed at other sub-mm wavelengths \citep[e.g.][]{Scott12,Chen13,Geach13}. In this section we briefly investigate the effects of the finite single-dish beam-size at 450 $\mu$m ($\sim 8''$ FWHM e.g. SCUBA2/JCMT) and 1100 $\mu$m ($\sim 28''$ FWHM e.g. AzTEC/ASTE\footnote{A\emph{z}tronomical Thermal Emission Camera/Atacama Sub-millimetre Telescope Experiment}). We add that due to our self-consistent dust model the results presented in this section are genuine multi-wavelength predictions and do not rely on applying an assumed fixed flux ratio\footnote{At 450 $\mu$m galaxies at high redshift ($z\gtrsim5.5$) have $\lambda_{\rm rest}<70$ $\mu$m and therefore the sub-mm flux calculated by our dust model may be systematically incorrect when compared to {\sc{grasil}}\xspace predictions (see Section \ref{sec:dust_model}). We expect the contribution of such galaxies to our $450$ $\mu$m population to be small.}.
We create lightcones as described in Section \ref{sec:create_mock_surveys}, taking the lower flux limit at which we include galaxies in our lightcone catalogue as the limit above which $90\%$ of the EBL is resolved at that wavelength, as predicted by our model. This is $0.125$ ($0.02$) mJy at $450$ ($1100$) $\mu$m. As at 850 $\mu$m, our EBL predictions are in excellent agreement with observational data from the \emph{COBE} satellite. At $450$ ($1100$) $\mu$m we predict a background of $140.1$ ($23.9$) Jy deg$^{-2}$ compared to $142.6^{+177.1}_{-102.4}$ ($24.8^{+26.5}_{-20.8}$) Jy deg$^{-2}$ found observationally by Fixsen et al. (\citeyear{Fixsen98}). We follow the same procedure as described in Section \ref{sec:create_submm_maps} for creating our mock maps. However, we change the standard deviation of our Gaussian white noise such that the matched-filtered noise-only maps have a $\sigma$ of $\sim 4$ $(1)$ mJy/beam at 450 (1100) $\mu$m to be comparable to published blank-field surveys at that wavelength \citep[e.g.][]{Aretxaga11,Casey13}.
Thumbnails of the same area, but for different wavelength maps, are shown for comparison in Fig. \ref{fig:multi_lam_thumb}.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{multi_lam_thumb}
\caption{Thumbnails of the same $0.2\times0.2$ deg$^2$ area as depicted in panels (a)-(d) of Fig. \ref{fig:thumbs} but at (a) $450~\mu$m, (b) $850~\mu$m and (c) $1100~\mu$m. Overlaid are the $>3.5\sigma$ sources, as circles centred on the source position with a radius of $\sqrt{2}\times$FWHM of the telescope beam at that wavelength. In (d) the $>3.5\sigma$ sources at each wavelength are overlaid, without background for clarity.}
\label{fig:multi_lam_thumb}
\end{figure*}
The effect of the beam size increasing with wavelength is clearly evident, as is the resulting multiplicity of some of the sources. Drawing physical conclusions from this source multiplicity is not trivial. Selection at shorter wavelengths tends to select lower redshift and/or hotter dust temperature galaxies. For example, for an arbitrary flux limit of 1 mJy the median redshifts of the $450$, $850$ and $1100~\mu$m populations in our model are $2.31$, $2.77$ and $2.93$ respectively. This is complicated further by the fact that, as we have shown in this paper, at sub-mm wavelengths single-dish detected sources are likely to be composed of multiple individual galaxies, which may (or may not) also be bright at other wavelengths depending on the SED of the object, and that these galaxies are generally physically unassociated. If we restrict our analysis to galaxies only, thus avoiding complications caused by the single-dish beam, and consider flux limits of 12, 4 and 2 mJy at 450, 850 and 1100 $\mu$m respectively\footnote{These flux limits were motivated by the median flux ratios of our lightcone galaxies of $S_{1100\mu \rm m}/S_{850\mu \rm m}\approx 0.5$ and $S_{850\mu \rm m}/S_{450\mu \rm m}\approx 0.3$.} we find median redshifts of $1.71$, $2.26$ and $2.55$ for selection at each wavelength respectively. If we now consider a sample that satisfies these selection criteria at all wavelengths we find a median redshift of $z=2.09$, and that this sample comprises $52$, $80$ and $66\%$ of the single band selected samples at $450$, $850$ and $1100~\mu$m respectively. It is unsurprising that the multi-wavelength selected sample overlaps most with the intermediate $850~\mu$m band.
In Fig. \ref{fig:AzTEC_ncts} we present the $1100~\mu$m number counts from our source-extracted and lightcone catalogues.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{AzTEC_ncts}
\caption{Predictions for cumulative blank-field single-dish number counts at 1100 $\mu$m. Number counts from our lightcone (red line) and $28''$ FWHM beam ($\sigma = 1$~mJy/beam) source-extracted (green solid line) catalogues are shown. The shaded regions are the 10-90 percentiles of our individual field number counts. We also show number counts derived from a smaller field with Gaussian white noise of $\sigma = 0.5$~mJy/beam (green dotted line). Blank field single-dish observational data is taken from Scott et al. (\citeyear{Scott12}; magenta circles) and serendipitous ALMA 1300~$\mu$m number counts from Hatsukade et al. (\citeyear{Hatsukade13}; cyan squares) assuming $S_{1300\mu\rm m}/S_{1100\mu\rm m} = 0.71$.}
\label{fig:AzTEC_ncts}
\end{figure}
The observational data from \cite{Scott12} is a combined sample of previously published blank field single-dish number counts from surveys of varying area and sensitivity with a total area of $1.6$ deg$^2$, $1.22$ deg$^2$ of which were taken using the AzTEC/ASTE configuration. As at $850$ $\mu$m, considering the effects of the finite beam-size brings the model into better agreement with the single-dish observational data. We also plot, from \cite{Hatsukade13}, $1300~\mu$m number counts derived from serendipitous detections found in targeted ALMA observations of star-forming galaxies at $z\sim1.4$ (converted to $1100~\mu$m counts assuming $S_{1300\mu\rm m}/S_{1100\mu\rm m} = 0.71$ as is done in Hatsukade et al.). These benefit from the improved angular resolution of the ALMA instrument ($\sim 0.6-1.3''$ FWHM) and can thus probe to fainter fluxes than the single-dish data. Due to the higher angular resolution of these observations they are to be compared to the lightcone catalogue number counts (red line) and show good agreement with our model. However, we caution that due to the targeted nature of the Hatsukade et al. observations they may not be an unbiased measure of a blank field population. As the Scott et al. (2012) counts are derived from multiple fields of varying area and sensitivity, we also show in Fig. \ref{fig:AzTEC_ncts} number counts derived from a single 0.2 deg$^2$ field which has matched-filtered noise of $0.5$~mJy/beam (green dotted line), similar to the 1100~$\mu$m counts from the SHADES fields \citep{Hatsukade11} used in the \cite{Scott12} sample. This shows better agreement with the Scott et al. data in the range $1\lesssim S_{1100\mu\rm m}\lesssim5$ mJy (at brighter fluxes the smaller field will suffer from a lack of bright objects), which leads us to the conclusion that the discrepancy between our $\sigma = 1$~mJy/beam number counts (green solid line) and the \cite{Scott12} data is due more to our assumed noise than to a physical origin. As instrumental/atmospheric noise is unlikely to be Gaussian white noise in real observations, and various methods are used in filtering the observed maps to account for this, which we do not model here, we consider further investigation of the effect of such noise on observations beyond the scope of this work. At $\gtrsim5$~mJy our $\sigma = 1$~mJy/beam, 0.5~deg$^2$ number counts (solid green line) agree well with the \cite{Scott12} data, as the field size is more comparable to the largest field used in Scott et al. (0.7~deg$^2$), and instrumental noise will have less of an effect on both the simulated and observational data.
The number counts at $450~\mu$m are presented in Fig. \ref{fig:450ncts}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{450_ncts}
\caption{Predictions for cumulative blank-field single-dish number counts at 450~$\mu$m. Number counts from our lightcone (red) and $8''$ FWHM beam ($\sigma = 4$~mJy/beam) source-extracted (green) catalogues are shown for our combined $25$~deg$^2$ field. The dotted green line shows the de-boosted source-extracted counts for the combined field (see text). The shaded regions show the 10-90 percentiles of our individual field number counts. Observational data is taken from Casey et al. (\citeyear{Casey13}; magenta squares), Geach et al. (\citeyear{Geach13}; green triangles) and Chen et al. (\citeyear{Chen13}; cyan circles).}
\label{fig:450ncts}
\end{figure}
We attribute the enhancement in our simulated source-extracted counts at $S_{450\mu\rm m}\sim8$~mJy to Eddington bias caused by the instrumental noise rather than an effect of the $8''$ beam. In order to account for this we `deboost' our $S_{450\mu\rm m}>5$ mJy sources following a method similar to one outlined in \cite{Casey13}. The total galaxy flux of each of our $S_{450\mu \rm m}>5$ mJy sources is calculated as described in Section \ref{sec:multiplicity} and we plot this as a ratio of source flux in Fig. \ref{fig:450db}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{450_flux_db}
\caption{Ratio of total galaxy flux (see Section \ref{sec:multiplicity}) to source flux at $450$~$\mu$m. Red line and errorbars shows median and inter-quartile range in a given logarithmic flux bin respectively. For clarity, only 5\% of sources have been plotted as grey dots.}
\label{fig:450db}
\end{figure}
We multiply the flux of our 450 $\mu$m sources by the median of this ratio (red line) before re-calculating the number counts (green dotted line in Fig. \ref{fig:450ncts}). These corrected number counts show good agreement with observational data in the flux range $5\lesssim S_{450\mu\rm m}\lesssim20$ mJy but may slightly overestimate the counts for $S_{450\mu\rm m}\gtrsim20$ mJy.
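The deboosting step can be sketched as below; the binning scheme and function name are illustrative choices of ours.
\begin{verbatim}
import numpy as np

def deboost(s_src, flux_ratio, n_bins=12):
    # flux_ratio: S_gal_tot / S_source per source; sources are
    # rescaled by the median ratio in logarithmic flux bins
    edges = np.logspace(np.log10(s_src.min()),
                        np.log10(s_src.max()), n_bins + 1)
    idx = np.clip(np.digitize(s_src, edges) - 1, 0, n_bins - 1)
    med = np.array([np.median(flux_ratio[idx == i])
                    if np.any(idx == i) else 1.0
                    for i in range(n_bins)])
    return s_src * med[idx]
\end{verbatim}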
\section{Summary}
\label{sec:Summary}
We present predictions for the effect of the coarse angular-resolution of single-dish telescopes, and field-to-field variations, on observational surveys of SMGs. An updated version of the {\sc{galform}}\xspace semi-analytic galaxy formation model is coupled with a self-consistent calculation for the reprocessing of stellar radiation by dust in order to predict the sub-mm emission from the simulated galaxies. We use a sophisticated lightcone method to generate mock catalogues of SMGs out to $z=8.5$, from which we create mock sub-mm maps replicating observational techniques. Sources are extracted from these mock maps to generate our source-extracted catalogue and show the effects of the single-dish beam on the predicted number counts. To ensure a realistic background in our maps, we include model SMGs down to the limit above which $90\%$ of our total predicted EBL is resolved. Our model shows excellent agreement with EBL observations from the \emph{COBE} satellite at 450, 850 and 1100~$\mu$m. We generate $50\times 0.5$ deg$^2$ randomly orientated surveys to investigate the effects of field-to-field variations.
The number counts from our $850$ $\mu$m source-extracted catalogues display a significant enhancement over those from our lightcone catalogues at brighter fluxes ($S_{850\mu\rm m}>1$ mJy) due to the sub-mm emission from multiple SMGs being blended by the finite single-dish beam into a single source. The field-to-field variations predicted from both lightcone and source-extracted catalogues for the $850$ $\mu$m number counts are comparable to or less than quoted observational errors, for simulated surveys of $0.5$ deg$^2$ area with a $15''$ FWHM beam ($\sim$ SCUBA2/JCMT). Quantitatively we predict a field-to-field variation of 0.34 dex at $S_{850 \mu\rm m}=10$ mJy in our source-catalogue number counts. Typically $\sim3{-}6$ galaxies contribute $90\%$ of the galaxy flux of an $S_{850 \mu \rm m}=5$ mJy source, and this multiplicity slowly decreases with increasing flux over the range of fluxes investigated by blank-field single-dish surveys at $850$ $\mu\mathrm{m}$. We further find that these blended galaxies are mostly physically unassociated, i.e. their redshift separation implies that they are chance projections along the line of sight of the survey.
Our redshift distributions predict a median redshift of $z_{50}=2.0$ for our `bright' ($S_{850\mu\mathrm{m}}>5$ mJy) galaxy population and $z_{50}=2.8$ for our `faint' ($S_{850\mu\mathrm{m}}>1$ mJy) galaxy population. We leave these as predictions for blank field interferometric surveys of comparable area. We also observe that the field-to-field variations we predict for our bright population are comparable to those expected for Poisson errors, whereas for our faint population the field-to-field variations are greater than Poisson.
A comparison between the ALESS survey and our model reveals that the model can reproduce the observed difference between observed single-dish and interferometer number counts, as well as estimates for the multiplicity of single-dish sources consistent (at $\sim2\sigma$) with those derived observationally. It is in excellent agreement with observed relations between the flux of the brightest interferometric counterpart of a source and the source flux. The model also reproduces the median redshift of the observed photometric redshift distribution. In addition, we predict that the majority of the interferometric counterparts are physically unassociated, and leave this as a prediction for future spectroscopic redshift surveys of such objects.
We also present predictions for our lightcone and source-extracted catalogue number counts at $450$ and $1100~\mu$m, which show good agreement with the observational data. It is evident that the finite beam-size does not lead to a significant enhancement of the number counts at $450~\mu$m, as opposed to $850$ and $1100~\mu$m, as the beam-size at $450~\mu$m is significantly smaller. At $1100~\mu$m we show that the model agrees well with both interferometric and single-dish observational number counts. Due to our dust model these are genuine multi-wavelength predictions and do not rely on applying an assumed fixed flux ratio.
Our results highlight the importance of considering effects such as the finite beam-size of single-dish telescopes and field-to-field variance when comparing sub-mm observations with theoretical models. In our model SMGs are predominantly a disc instability triggered starburst population, the sub-mm emission of which is often blended along the line of sight of observational single-dish surveys.
In future work we will conduct a more thorough investigation of the properties and evolution of SMGs within the model presented in L14, including an analysis of their clustering with and without the effects of the single-dish beam. We hope that this, when compared with future observations aided by sub-mm interferometry of increasing sample sizes, will lead to a greater understanding of this extreme and important galaxy population.
\section*{Acknowledgments}
The authors would like to thank James Simpson and Chian-Chou Chen for helpful discussions. We also thank the anonymous referee for a detailed and constructive report which allowed us to improve the quality of the manuscript. This work was supported by the Science and Technology Facilities Council [ST/K501979/1, ST/L00075X/1]. This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grant ST/H008519/1, and STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure.
\bibliographystyle{mn2e}
Let $S$ be the oriented Seifert manifold with unnormalized invariant $(g; (a,b))$, where $g$ is an integer $\geqslant 2$ and $a$, $b$ are two coprime integers such that $a\neq0 $ and $b \geqslant 1$. Recall that $S$ is obtained from a genus $g$ oriented surface $\Si$ with boundary a circle $C$, by gluing a solid torus $T$ to $\Si \times S^1$ in such a way that the homology class of a meridian of the boundary of $T$ is sent to $a [C ] + b [S^1]$.
Let $\mathcal{M} (S)$ be the space of conjugacy classes of group morphisms from $\pi_1 (S)$ to $\operatorname{SU}(2)$. Let $X = \pi_0 (\mathcal{M} (S))$ be the set of connected components. Introduce the functions $\al , \be : \mathcal{M} (S) \rightarrow [0, \pi ]$ given by
$$ \al ( [\rho] ) = \arccos \bigl( \tfrac{1}{2} \operatorname{tr} ( \rho (C)) \bigr), \quad \be ([ \rho] ) = \arccos \bigl( \tfrac{1}{2} \operatorname{tr} ( \rho (S^1)) \bigr) $$
These functions are actually constant on each component of $\mathcal{M} (S)$. We denote again by $\al$ and $\be$ the induced maps from $X$ to $[0, \pi ]$. They allow us to distinguish between the components; that is, the joint map $(\al, \be) : X \rightarrow [0, \pi ]^2$ is one-to-one.
We say that a component of $\mathcal{M} (S)$ is abelian if it contains only abelian representations, and irreducible otherwise.
We will divide $X$ into 4 disjoint subsets: $X_1 $ is the set of abelian components, whereas $X_2$, $X_3$, $X_4$ are the sets of irreducible components $x$ such that $\al (x) \neq 0, \pi$, $\al (x) = 0$ and $\al ( x) = \pi$ respectively. All the components in the same $X_i$ have the same dimension. In Table \ref{tab:xi}, we indicate for each $i$ the possible values of $\al$ and $\be$ on $X_i$, the cardinality of $X_i$ and the dimension of its elements.
\begin{table}
\renewcommand\arraystretch{1.1}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
& $\al$ & $\be$ & type & $\#$ & $\frac{1}{2}$dimension \\
\hline $X_1$ & 0 & $\neq 0,\pi$ & abelian & $\operatorname{E} ( \frac{b-1}{2} )$ & $g$ \\
\hline $X_2$ & $\neq 0, \pi$ & 0 or $\pi$ & irreducible & $|a|-1$ & $3g-2$ \\
\hline $X_3$ & 0 & 0 or $\pi$ & irred/abelian & 2 if $b$ is even & $3g-3$ \\
& & & & 1 otherwise & \\
\hline $X_4$ & $\pi$ & 0 or $\pi$ & irreducible & 1 if $b$ is odd & $3g-3$ \\
& & & & 0 otherwise &
\\ \hline
\end{tabular}
\caption{Characteristics of the components of $\mathcal{M} (S)$} \label{tab:xi}
\end{table}
For any $\rho \in \mathcal{M} (S)$, we denote by $\operatorname{CS} ( \rho ) \in {\mathbb{R}} / 2 \pi {\mathbb{Z}}$ the Chern-Simons invariant of $\rho$, cf. Equation (\ref{eq:defrhox}) for a precise definition. The Chern-Simons function $\operatorname{CS}$ is constant on each component of $\mathcal{M} (S)$. We denote again by $\operatorname{CS}$ the induced map from $X$ to ${\mathbb{R}} / 2 \pi {\mathbb{Z}}$.
For any $t \in [0,1]$, let $\mathcal{M} ( \Si, t)$ denote the space of conjugacy classes of group morphisms $\rho$ from $\pi_1 (\Si)$ to $\operatorname{SU}(2)$ such that $\operatorname{tr} ( \rho (C) ) = \cos ( 2 \pi t)$. This space is a symplectic manifold. We denote by $v_g(t)$ its symplectic volume.
\begin{theo} \label{theo:main-result}
We have the full asymptotic expansion
\begin{xalignat*}{2}
Z_k ( S) = & \sum_{x \in X} e^{ik \operatorname{CS} (x) } k^{n(x)} \sum_{\ell \in {\mathbb{N}}} k^{-\ell} a_{\ell} (x) + \sum_{x \in X_3} k^{m(x)} \sum_{\ell \in {\mathbb{N}}} k^{-\ell} b_{\ell} (x) +
\mathcal{O} ( k^{-\infty})
\end{xalignat*}
where the coefficients $a_{\ell} (x)$, $b_{\ell} (x)$ are complex numbers. The exponents $n(x)$ and the leading coefficients $a_0 (x)$ are given in Table \ref{tab:coef} according to whether $x$ belongs to $X_1$, $X_2$, $X_3$ or $X_4$. Furthermore, if $x \in X_3$,
$$ m(x) = 2g - \tfrac{3}{2}, \qquad b_0 (x) = e^{i \pi/4} i^g b^{g - 3/2} a ^{-g} \pi^{-g +1} \frac{\sqrt 2 ( g-1)!}{ ( 2 ( g-1))!} .$$
\end{theo}
\begin{table}
\[
\renewcommand\arraystretch{1.7}
\begin{array}{|l|c|c|}
\hline
& n(x) & a_0 (x) \\
\hline
X_1 & g -1/2 & \frac{2 \pi^{g - 1/2} }{ \sqrt{b}} \bigl[ \sin ( \be (x) ) \bigr] ^{-2g + 1} \sin \bigl( c \be (x) \bigr) \\
\hline
X_2 & 3g -2 & \frac{1 }{ \sqrt{a}} v_{g} \bigl( \al (x) /\pi \bigr) \sin \bigl( d \al(x) \bigr) \\
\hline
X_3 & 3g -3 & i(4 \pi)^{-1} a^{-3/2} v_g'(0) \\
\hline
X_4 & 3g -3 & (4 \pi)^{-1} a^{-3/2} v'_{g} (1) \\
\hline
\end{array}
\]
\caption{The leading exponents and coefficients} \label{tab:coef}
\end{table}
The novelty in this result is the expression of the leading coefficients $a_{0} (x)$ and $b_0 (x)$. In the paper \cite{LL}, it is proved that $a_0 (x)$ is actually equal to the integral of a Reidemeister torsion, in agreement with the Witten asymptotic conjecture.
Comparing Tables \ref{tab:xi} and \ref{tab:coef}, we also see that the exponent $n(x)$ is equal to half the dimension for an irreducible component, and half the dimension minus $1/2$ for the abelian components, in agreement with the prediction in \cite{FrGo}.
All the components of $\mathcal{M} (S)$ are smooth manifolds except the ones in $X_3$, which have a singular stratum of abelian representations. For these singular components, we have an additional term in the asymptotic expansion, the series $\sum k^{-\ell} b_{\ell} (x)$. We don't know any geometric expression for $b_0 (x)$ or $m(x)$. For instance, since the singular strata have the same dimension as the abelian components in $X_1$, we could expect that $m(x)$ is equal to $ n(y)$, $y \in X_1$; but this is not the case.
As a last remark, we haven't tried to determine the signs of the coefficients and to understand the spectral flow contribution.
\subsubsection*{Comparison with earlier results}
The main reference on the subject is certainly the article \cite{Ro2} by Rozansky, where the author first shows that $Z_k (S)$ has an asymptotic expansion, the contribution of irreducible representations being presented in a residue form. In a second part, by comparing various expansions of path integrals, these residues are formally identified with Riemann-Roch numbers of moduli spaces, cf. Conjecture 5.1 and Proposition 5.3 of \cite{Ro2}. The connection with Theorem \ref{theo:main-result} is that these Riemann-Roch numbers are approximated at first order by the symplectic volumes $v_g(t)$.
To compare with the previous articles on the Witten asymptotic conjecture for Seifert manifolds, our proof has the advantage that the set $X = \pi_0 ( \mathcal{M} (S))$ and the various invariants appear naturally. By contrast, in most papers on the subject, it is first proved that $Z_k (S)$ is a sum of oscillatory terms with an explicit computation of the phases. Independently, one determines the set $X$ and the corresponding Chern-Simons invariants. After that, a one-to-one correspondence between the components of $\mathcal{M} (S)$ and the various terms of the asymptotic expansion is established, such that the Chern-Simons invariant of a component is equal to the corresponding phase.
The identification between the amplitudes and the Reidemeister torsion has been done similarly in a few cases, by comparing the results of two independent computations. One exception is the paper \cite{AnHi}, where everything is computed intrinsically; however, the Seifert manifolds covered in \cite{AnHi} all have vanishing Euler number, so they form a family disjoint from the Seifert manifolds we consider.
The strategy we follow is inspired by our previous works \cite{oim_MCG} and \cite{LJ1}, \cite{LJ2} in collaboration with J. March\'e, where we proved a generalized Witten conjecture for some manifolds with non-empty boundary. In \cite{LJ1}, \cite{LJ2}, we considered the complement of the figure eight knot and our main tool was some q-difference relations. The paper \cite{oim_MCG} was devoted to some mapping cylinders of pseudo-Anosov diffeomorphisms and our main tool was the Hitchin connection. In the present paper, we prove a generalized Witten conjecture for the manifold $\Si \times S^1$. The main ingredients we use are the Verlinde formula and the Riemann-Roch theorem.
\subsubsection*{Sketch of the proof}
The Seifert manifold $S$ being obtained by gluing a solid torus ${T}$ to $\Si \times S^1$, $Z_k (S)$ is the scalar product of two vectors $Z_k ({T})$ and $Z_{k} (\Si \times S^1)$ of the Hermitian vector space $V_k (\partial \Si \times S^1)$. This vector space has a canonical orthonormal basis $(e_\ell, \; \ell = 1, \ldots, k-1)$. By \cite{FrGo}, the coefficients of $Z_{k} ( \Si \times S^1)$ in this basis are the Verlinde numbers $N^{g,k}_\ell$.
These numbers can be computed in several ways. First, $N^{g,k}_\ell$ is the number of admissible colorings of a pants decomposition of $\Si$. However, this elementary way is not very useful for studying the large $k$ limit. Still, we learn that $N^{g,k}_{\ell}$ vanishes when $\ell$ is even. Second, the $N^{g,k}_{\ell}$ are Riemann-Roch numbers associated to the symplectic manifolds $\mathcal{M} ( \Si, t)$ introduced above. This implies that for any odd $\ell$ satisfying $ 1<\ell <k-1$, we have
\begin{gather} \label{eq:N_RR}
N^{g,k}_{\ell} = \Bigl( \frac{k}{2 \pi } \Bigr)^{3g -2} \sum_{n=0}^{3g-2} k^{-n} Q_{g,n} \Bigl( \frac { \ell}{k} \Bigr)
\end{gather}
where the $Q_{g,n}$ are smooth functions on $]0,1[$, $Q_{g,0} (t)$ being the symplectic volume $v_g (t)$.
Third, by Verlinde formula, we have
\begin{gather} \label{eq:N_Verlinde}
N_{\ell}^{g,k} = \sum _{ m =1}^{k-1} S_{m,1} ^{1 - 2g} S_{m, \ell} \quad \text{ where } \quad S_{m, p}= \Bigl( \frac{2}{k}\Bigr)^{1/2} \sin \Bigl( \frac{\pi mp}{k} \Bigr).
\end{gather}
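Although no numerics are needed in the sequel, formula (\ref{eq:N_Verlinde}) is easy to test on a computer. The following minimal Python sketch (assuming \texttt{numpy}; the name \texttt{verlinde} is ours and purely illustrative) evaluates it directly; one observes for instance that the result is an integer up to rounding error, and that it vanishes for even $\ell$.
\begin{verbatim}
import numpy as np

def verlinde(g, k, ell):
    # N_ell^{g,k} = sum_{m=1}^{k-1} S_{m,1}^{1-2g} S_{m,ell}
    m = np.arange(1, k)
    S = lambda p: np.sqrt(2.0 / k) * np.sin(np.pi * m * p / k)
    return np.sum(S(1) ** (1 - 2 * g) * S(ell))

# verlinde(2, 10, 3) is an integer up to rounding error,
# while verlinde(2, 10, 4) vanishes (even color).
\end{verbatim}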
Introduce the Hermitian space $\mathcal{H}_k := {\mathbb{C}}^{{\mathbb{Z}} / 2k {\mathbb{Z}}}$ and denote by $(\Psi_{\ell}, \ell \in {\mathbb{Z}} / 2k {\mathbb{Z}})$ its canonical basis. The family $(\mathcal{H}_k, \; k \in {\mathbb{N}} )$ may be viewed as the quantization of a two-dimensional torus $M = {\mathbb{R}}^2 / {\mathbb{Z}}^2 \ni (x,y)$. This means that $(\mathcal{H}_k)$ may be identified with a space of holomorphic sections of the $k$-th power of a prequantum bundle over $M$. In this context, some families $(\xi_k \in \mathcal{H}_k, \; k \in {\mathbb{N}})$ concentrating in a precise way on a 1-dimensional submanifold of $M$ are called Lagrangian states.
We will consider $V_k (\partial \Si \times S^1)$ as a subspace of $\mathcal{H}_k := {\mathbb{C}}^{{\mathbb{Z}} / 2k {\mathbb{Z}}}$ by setting $e_{\ell} = ( \Psi_{\ell} - \Psi_{-\ell}) /\sqrt 2$. We will prove that $( Z_{k} (\Si \times S^1), k \in {\mathbb{N}})$ is a Lagrangian state supported by a Lagrangian submanifold of $M$.
To do this, we will establish and use the following characterization of Lagrangian states.
Let $x_0, x_1 \in {\mathbb{R}}$ be such that $ x_0< x_1< x_0+1$ and let $\varphi$ be a function in $ \mathcal{C}^{\infty} (]x_0 , x_1 [, {\mathbb{R}})$. Let $U$ be the open set $]x_0,x_1 [ \times {\mathbb{R}}/ {\mathbb{Z}}$ of $M$ and $\Gamma$ be the Lagrangian submanifold $\{ (x, \varphi'(x)); \; x \in ]x_0, x_1[ \}$ of $U$. Then a family $( \xi_k = \sum \xi_k ( \ell) \Psi_\ell, k \in {\mathbb{N}})$ is a Lagrangian state over $U$ supported by $\Gamma$ if and only if
$$ \xi_k ( 2 k x ) = k^{-1/2 + N } e^{ 4 i \pi k \varphi (x) } \sum_{n = 0 } ^{ \infty} k^{-n } f_n ( x) + \mathcal{O} ( k^{-\infty}) $$
where the $f_n$ are smooth functions on $]x_0, x_1[$. This formula may be viewed as a discrete analogue of the WKB expression: the function $\varphi$ is a generating function of $\Gamma$, and the leading order term $f_0$ of the amplitude determines the symbol of the Lagrangian state.
By this characterization, we deduce from (\ref{eq:N_RR}) that $Z_{k} (\Si \times S^1)$ is the sum of two Lagrangian states over $M \setminus \{ x = 0 \text{ or } 1/2 \}$, supported respectively by $\{ y = 0 \}$ and $\{ y = 1/2 \}$. Indeed, we insert a factor $( 1 - (-1)^{\ell})/2$ in the right hand side of (\ref{eq:N_RR}) so that the equation is valid for even and odd $\ell$, and we use that for $\ell = 2kx$, $(-1)^{\ell} = e^{ 4 i \pi k x/2}$.
To complete this description on a neighborhood of $\{ x = 0 \text{ or } 1/2 \}$, we will perform a discrete Fourier transform. By Verlinde formula (\ref{eq:N_Verlinde}), we easily get
\begin{gather} \label{eq:3}
Z_{k} (\Si \times S^1) = \frac{\sqrt k }{2i } \sum_{m \in ({\mathbb{Z}}/2k{\mathbb{Z}})\setminus \{ 0,k \}} S_{m,1}^{1-2g} \Phi_m
\end{gather}
where $(\Phi_m)$ is the basis of $\mathcal{H}_k$ given by $\Phi_m = (2k)^{-1/2} \sum_\ell e^{i \pi \ell m /k} \Psi_\ell$ for $m \in {\mathbb{Z}} /2k {\mathbb{Z}}$. A characterization of Lagrangian states similar to the one above holds, where we exchange $x$ and $y$ and replace the coefficients in the basis $( \Psi_{\ell})$ by those in $( \Phi_m)$. We deduce from (\ref{eq:3}) that $Z_k ( \Si \times S^1)$ is Lagrangian on $M \setminus \{ y=0 \text{ or } 1/2 \}$ supported by $\{ x =0 \}$.
Gathering these results, we conclude that $Z_k ( \Si \times S^1)$ is Lagrangian on the open set $ M \setminus \{ (0,0), ( 0,1/2) \}$ and supported by $\{ x= 0 \} \cup \{ y= 0\} \cup \{ y = 1/2 \}$. There is no similar description on a neighborhood of $(0,0)$ and $(0,1/2)$ because the circle $\{x =0 \}$ intersects $\{ y= 0 \}$ and $\{ y =1/2 \}$ at these points.
By \cite{LJ1}, the state $Z_k ( {T})$ is Lagrangian supported by the circle $\{ y = ax/b \}$. The scalar product of two Lagrangian states supported on Lagrangian manifolds $\Gamma$, $\Gamma'$ which intersect transversally has an asymptotic expansion, and we can compute the leading order terms geometrically, each intersection point of $\Gamma \cap \Gamma'$ having a contribution \cite{oim_pol}. From this, we obtain Theorem \ref{theo:main-result}, except for the contribution of $X_3$ which corresponds to the singular points $(0,0)$, $(0,1/2)$.
Actually, a large portion of this paper will be devoted to the contribution of $X_3$ in Theorem \ref{theo:main-result}. On the one hand, we will establish a singular stationary phase lemma for discrete oscillatory sums. On the other hand, we will prove several properties of the functions $Q_{g,n}$ in (\ref{eq:N_RR}): first, $Q_{g,n}$ vanishes for odd $n$; second, $Q_{g, 2m}$ is a polynomial of degree $2(g -m) -1$; third, the even part of $Q_{g, 2m}$ is a multiple of the monomial $ x^{2(g-m-1)}$, and the coefficient of this multiple will be explicitly computed for $m=0$.
To prove these facts, we will study the discrete Fourier transform of the family $( \sin ^{-m}( \pi \ell / k ), \; \ell \in {\mathbb{Z}}/ 2k {\mathbb{Z}})$ in the semiclassical limit $k \rightarrow \infty$.
These families may be viewed as discrete analogues of the homogeneous distributions and are interesting in themselves. Then, using Verlinde formula (\ref{eq:N_Verlinde}), we will recover the expression (\ref{eq:N_RR}) and obtain the above properties of the $Q_{g,n}$. Some of these properties also have symplectic proofs. For instance, the fact that the $Q_{g,n}$ are polynomials is a consequence of the Duistermaat-Heckman theorem, by introducing some extended moduli space as in \cite{Je2}. The fact that the $Q_{g,n}$ vanish for odd $n$ may be deduced from the Riemann-Roch theorem by computing the characteristic classes of the $\mathcal{M} ( \Si, t)$ and expressing the Riemann-Roch number in terms of the $\hat{A}$-genus instead of the Todd class.
To finish this overview of the proof of Theorem \ref{theo:main-result}, let us briefly explain the topological interpretation of the previous computation. For any topological space $T$, let $\mathcal{M} ( T)$ be the space of conjugacy classes of group morphisms $\pi_1 (T) \rightarrow \operatorname{SU}(2)$. We have a natural identification between $\mathcal{M} ( \partial \Si \times S^1)$ and the quotient $N$ of $M = {\mathbb{R}}^2 /{\mathbb{Z}}^2$ by the involution $(x,y) \rightarrow (-x, -y)$. This identification restricts to a bijection between:
\begin{enumerate}
\item the projection of $\{x = 0\} \cup \{y =0\} \cup \{ y = 1/2 \}$ and the image of the restriction map $r: \mathcal{M} ( \Si \times S^1 ) \rightarrow \mathcal{M} ( \partial \Si \times S^1)$
\item the projection of $\{ y = ax/b \}$ and the image of the restriction map $r':\mathcal{M} ( {T}) \rightarrow \mathcal{M} ( \partial \Si \times S^1)$.
\end{enumerate}
Finally $\mathcal{M} (S)$ may be viewed as the fiber product of $r$ and $r'$, its connected components being the fibers of the projection $\mathcal{M} (S) \rightarrow \mathcal{M} ( \partial \Si \times S^1)$.
\begin{figure}[!ht]
\centering
\def\svgwidth{9cm}
\input{dessin.pdf_tex}
\caption{Character variety intersections} \label{fig:dessin}
\end{figure}
So the different parts of the asymptotic expansion of $\langle Z_k ( {T}), Z_k (\Si \times S^1) \rangle$ are naturally indexed by $X= \pi_0 ( \mathcal{M} (S))$. In the decomposition $X = X_1 \cup X_2 \cup X_3 \cup X_4$ used in Theorem \ref{theo:main-result}, $X_1$ corresponds to the points of $M$ such that $x=0$ and $y \neq 0,1/2$, $X_2$ to $ y =0$ or $1/2$ and $x \neq 0,1/2$, $X_3$ to $(0,0)$, $(0,1/2)$ and $X_4$ to $(1/2,0)$, $(1/2,1/2)$. As we will see, the Chern-Simons invariants appear naturally by interpreting the prequantum bundle of $M$ as the Chern-Simons bundle. Furthermore, the expressions for the coefficients $a_0$ come from the leading order term $Q_{g,0} (t) = v_g (t)$ in (\ref{eq:N_RR}), the $(1-2g)$-th power of the sine in (\ref{eq:N_Verlinde}) and the coefficients $a,b$ of the surgery.
\subsubsection*{Outline of the paper}
In Section \ref{sec:witt-resh-tura}, we recall how to compute the WRT invariants of a Seifert manifold in terms of the Verlinde numbers. In Section \ref{sec:discr-four-transf}, we study the discrete Fourier transform of the negative powers of the sine and we apply this to the Verlinde numbers. In Section \ref{sec:geom-quant-tori}, we recall some basic facts on the quantization of tori and their Lagrangian states. Furthermore, we establish a criterion characterizing the Lagrangian states in terms of a generating function of the associated Lagrangian manifold. In Section \ref{sec:asympt-behav-z_k}, we prove that $Z_k (\Si \times S^1) $ is a Lagrangian state and deduce the asymptotic behavior of $Z_k (S)$ by using a singular stationary phase lemma proved in Section \ref{sec:sing-discr-stat}. Finally, Section \ref{sec:geom-interpr-lead} is devoted to the geometric interpretation of the results.
\section{The Witten-Reshetikhin-Turaev invariant of a Seifert manifold} \label{sec:witt-resh-tura}
For any integer $k \geqslant 2$ and any closed oriented 3-manifold $M$, we denote by $Z_k (M)$ the Witten-Reshetikhin-Turaev (WRT) invariant of $M$ for the group $\operatorname{SU}(2)$ at level $k-2$. We are interested in the large level limit, $k \rightarrow \infty$. Since we haven't chosen a spin structure on $M$, the sequence $Z_k (M)$, $k \geqslant 2$ is only defined up to multiplication by $\tau_k^n$ where $\tau_k = e^{ \frac{3i \pi}{4} - \frac{3i \pi } { 2k}}$.
When $M$ is a Seifert manifold, it is easy to compute $Z_k(M)$ by using a surgery presentation of $M$, cf. Section 1 of \cite{FrGo}. Let us explain this.
Let $\Si$ be a compact oriented surface with boundary a circle $C$. Let $D$ be a disc and $S^1$ be the standard circle. Consider the Seifert manifold $S$ obtained by gluing the solid torus $ D \times S^{1}$ to $\Si \times S^1$ along an orientation-preserving diffeomorphism $\varphi : \partial D \times S^1 \rightarrow C \times S^1$
$$ S = ( \Si \times S^1 ) \cup_{\varphi} ( D \times S^1)^{-} .$$
The WRT invariant of a manifold obtained by gluing two manifolds along their boundary may be computed as a scalar product in the setting of topological quantum field theory, \cite{Wi} \cite{ReTu}. In our particular case, we have
\begin{gather} \label{eq:scalar_product}
Z_k (S) = \bigl\langle Z_k ( \Si \times S^1) , \rho_k ( \varphi) ( Z_k ( D \times S^1)) \bigr\rangle_{ V_k ( C \times S^1)}
\end{gather}
Here $V_{k} ( C \times S^1)$ is the Hermitian vector space associated to the torus $C \times S^1$. It has dimension $k-1$. To any oriented basis $( \mu, \la)$ of $H_1 ( C \times S^1)$ is associated an orthonormal basis of $V_k ( C \times S^1)$, called the Verlinde basis. Let us choose $\mu = [ C]$, $\la = [S^1]$ and denote by $e_ \ell$, $\ell = 1, \ldots , k-1$ the corresponding basis.
The bracket in Equation (\ref{eq:scalar_product}) denotes the scalar product of $V_k ( C \times S^1)$. Furthermore, for any compact oriented 3-manifold $M$ with boundary $C \times S^1$, we denote by $Z_k (M) \in V_k ( C \times S^1)$ the corresponding vector defined in the Chern-Simons topological quantum field theory.
\begin{lem} \label{lem:SiS}
One has $Z_k ( D \times S^1) = e_1$ and $$Z_k ( \Si \times S^1) = \sum_{\ell =1 }^{ k-1} N _\ell ^{g,k} e_{\ell}, $$ where $g$ is the genus of $\Si$ and $N_{\ell} ^{g,k}$ is the dimension of the vector space associated in Chern-Simons quantum field theory to any genus $g$ surface equipped with one marked point colored by $\ell$.
\end{lem}
In our convention, the set of colors is $\mathcal{C}_k = \{ 1, 2, \ldots, k-1 \}$, the color $\ell$ corresponding to the $\ell$-dimensional irreducible representation of $\operatorname{SU}(2)$.
\begin{proof} Recall first that the Verlinde basis consists of the vectors given by
$$ e_\ell = Z_k ( D \times S^1, x, \ell) $$
where $x$ is the banded link $[0,1/2] \times S^1$ of $D \times S^1$. Since $\ell =1$ is the trivial color in our convention, $Z_k ( D \times S^1) = e_1$. Let us compute the coefficients of $Z_k ( \Si \times S^1)$. We have
$$ \langle Z_{k} ( \Si \times S^1) , e_{\ell} \rangle = Z_k ( \overline{\Si} \times S^1, x, \ell) $$
where $\overline \Si$ is the closed surface $ \Si \cup_{C} D^-$. Viewing $\overline \Si \times S^1$ as the gluing of $ \overline{\Si} \times [0,1]$ with itself, we obtain that $Z_k ( \overline{\Si} \times S^1, x, \ell) $ is the trace of the identity of $V_k ( \overline{ \Si}, \ell)$, that is, the dimension $N^{g,k}_{\ell}$ of this space.
\end{proof}
There are several ways to compute the numbers $N _\ell ^{g,k}$. First, $N_{\ell}^{g,k}$ is the number of admissible colorings of any pants decomposition of $\Si$. But this elementary way is not very useful for studying the large $k$ limit. Alternatively, we can use the Verlinde formula.
\begin{theo} \label{theo:Verlinde}
For any $k \in {\mathbb{N}}^*$, $\ell = 1 , \ldots , k-1$ and $g \in {\mathbb{N}}^*$, we have
$$ N_{\ell}^{g,k} = \sum _{ m =1}^{k-1} S_{m,1} ^{1 - 2g} S_{m, \ell}
$$
where $S_{m, p}= \bigl( \frac{2}{k}\bigr)^{1/2} \sin ( \pi \frac{mp}{k} )$.
\end{theo}
Later we will also use the fact that $N_{\ell}^{g,k}$ can be computed with the Riemann-Roch theorem, cf. Theorem \ref{theo:RR}.
It remains to explain the meaning of the $\rho_k$ appearing in Equation (\ref{eq:scalar_product}). Here $\rho_k $ is the representation of the mapping class group of $C \times S^1$ in $V_k ( C \times S^1)$ provided by the topological quantum field theory. It is actually a projective representation. More precisely, $\rho_k ( \varphi)$ is well-defined up to a power of the constant $\tau_k$ defined above. Using the basis $( \mu,\la)$, the mapping class group of $C \times S^1$ is identified with $\operatorname{SL} ( 2, {\mathbb{Z}})$. Then we have
\begin{xalignat*}{3}
\rho_k(T)e_\ell= & e^{\frac{i\pi(\ell^2-1)}{2k}}e_\ell, & & \text{ if } \quad T=\begin{pmatrix} 1&1\\ 0&1\end{pmatrix}
\intertext{ and }
\rho_k(S)e_\ell= & \sqrt{\frac{2}{k}}\sum_{\ell'=1}^{k-1}\sin \Bigl( \frac{\pi \ell\ell'}{k} \Bigr) e_{\ell'}, & & \text{ if } S=\begin{pmatrix} 0&-1\\ 1&0\end{pmatrix}.
\end{xalignat*}
Since $\operatorname{SL} ( 2, {\mathbb{Z}})$ is generated by the matrices $S$ and $T$, the representation is completely determined by these formulas.
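As a quick illustration (not needed in the sequel), these matrices are easy to realize numerically. The following Python sketch, with \texttt{numpy} assumed, checks that the $S$-matrix is a symmetric involution, hence unitary, and that $\rho_k(T)$ is diagonal unitary.
\begin{verbatim}
import numpy as np

k = 7
ell = np.arange(1, k)
S = np.sqrt(2.0 / k) * np.sin(np.pi * np.outer(ell, ell) / k)
T = np.diag(np.exp(1j * np.pi * (ell ** 2 - 1) / (2 * k)))
assert np.allclose(S, S.T)                         # S is symmetric
assert np.allclose(S @ S, np.eye(k - 1))           # and an involution
assert np.allclose(T @ T.conj().T, np.eye(k - 1))  # T is unitary
\end{verbatim}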
\section{The discrete Fourier transform of a negative power of the sine} \label{sec:discr-four-transf}
Let $\mathbb{T} = {\mathbb{R}} / 2 {\mathbb{Z}}$ and $R_k = (\frac{1}{k} {\mathbb{Z}}) / 2 {\mathbb{Z}} \subset \mathbb{T}$.
Introduce, for any $m \in {\mathbb{N}}$, the function $\Xi_{m,k}$ from $R_k$ to ${\mathbb{C}}$ given by
\begin{gather} \label{eq:defXi}
\Xi_{m,k} (x) = ( i )^{-m} \sum_{ y \in R_k \setminus \{ 0, 1\} } \bigl[ \sin ( \pi y ) \bigr]^{-m} e^{ik \pi y x }, \qquad \forall x \in R_k .
\end{gather}
This function may be viewed as the discrete Fourier transform of $y \rightarrow \bigl[ \sin ( \pi y ) \bigr]^{-m}$. We are interested in its behavior as $k$ tends to infinity.
\begin{theo} \label{theo:dev_part}
For any $m\in {\mathbb{N}}^*$, there exists a polynomial function $P_{m} \in {\mathbb{Q}} [k,x]$ such that
for any $k \in {\mathbb{N}}^*$ and $x \in [0,2 ] \cap \frac{1}{k} {\mathbb{Z}} $, we have
$$ \Xi_{m,k} (x) = \bigl( 1 + ( -1 ) ^{kx + m }\bigr) P_{m} ( k,x) .$$
Furthermore, $P_{m}$ is a linear combination of the monomials $ k^q x^p$ where $p,q$ run over the integers satisfying $0 \leqslant p \leqslant q \leqslant m$.
\end{theo}
The proof, given in Section \ref{sec:proof-theor-refth}, allows one to compute the polynomials $P_m$ inductively. In particular, we have
\begin{xalignat*}{1} & P_1 (k,x ) = k ( - x +1 ), \\ & P_2 (k,x) = k^2 \bigl( -\tfrac{1}{2} x^2 +x - \tfrac{1}{3} \bigr) + \tfrac{1}{3}, \\
& P_3 (k,x ) = k^3 \bigl( - \tfrac{1}{6} x^3 + \tfrac{1}{2} x^2 -\tfrac{1}{3} x \bigr) + k \bigl( \tfrac{1}{2} x -\tfrac{1}{2} \bigr).
\end{xalignat*}
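These formulas, and the statement of Theorem \ref{theo:dev_part} itself, can be checked numerically. Here is a minimal Python sketch (our own illustrative code, with \texttt{numpy} assumed; the function \texttt{Xi} implements the sum (\ref{eq:defXi})).
\begin{verbatim}
import numpy as np

def Xi(m, k, x):
    # direct evaluation of (eq:defXi), summing over R_k \ {0, 1}
    y = np.arange(1, 2 * k) / k            # excludes y = 0
    y = y[np.abs(y - 1.0) > 1e-12]         # excludes y = 1
    return (1j) ** (-m) * np.sum(np.exp(1j * np.pi * k * y * x)
                                 / np.sin(np.pi * y) ** m)

P = {1: lambda k, x: k * (1 - x),
     2: lambda k, x: k ** 2 * (-x ** 2 / 2 + x - 1 / 3) + 1 / 3,
     3: lambda k, x: k ** 3 * (-x ** 3 / 6 + x ** 2 / 2 - x / 3)
        + k * (x / 2 - 1 / 2)}

k = 9
for m in (1, 2, 3):
    for l in range(2 * k):                 # x = l/k runs over R_k
        lhs = Xi(m, k, l / k)
        rhs = (1 + (-1) ** (l + m)) * P[m](k, l / k)
        assert abs(lhs - rhs) < 1e-6 * k ** m
\end{verbatim}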
In Section \ref{sec:furth-prop-polyn}, we will establish further properties of the polynomials $P_m$. First, each $P_m$ is a linear combination of monomials $k^q x^p$ where $0 \leqslant p \leqslant q \leqslant m$ and $q \equiv m$ modulo 2. Second, we will describe the singularity of $\Xi_{m,k}$ at $x =0$ as follows. Write $P_m = P_m^{+} + P_m^{-}$ where $P_m^+$ (resp. $P_m ^{-}$) is a linear combination of the $k^q x^{2 \ell}$ (resp. $k^q x^{2 \ell+1}$). Then
$$ \frac{1}{2} \bigl( P_m ( k ,x ) - P_m ( k, 2 + x ) \bigr) = \begin{cases} P^-_m ( k, x) \quad \text{ if $m$ is even,} \\ P^+_m ( k, x) \quad \text{ if $m$ is odd.}
\end{cases} $$
Furthermore, if $m \geqslant 1$ is even (resp. odd), $P_m^-$ (resp. $P^+_m$) is a linear combination of the monomials $k^{m-2 \ell} x^{m- 2\ell -1}$, with $\ell = 0, 1, \ldots$. The coefficient of $k^{m} x^{m-1}$ is $ 1 / (m-1)!$. In Section \ref{sec:appl-count-funct}, we will apply this to the counting function $N^{g,k}_{\ell}$.
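For instance, the explicit formulas above give
$$ P_2^- (k,x) = k^2 x , \qquad P_3^+ (k,x) = \tfrac{1}{2} k^3 x^2 - \tfrac{1}{2} k , $$
in accordance with these statements, the coefficients of the leading monomials $k^2 x$ and $k^3 x^2$ being $1/(2-1)!$ and $1/(3-1)!$ respectively.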
\subsection{Proof of Theorem \ref{theo:dev_part}} \label{sec:proof-theor-refth}
In the sequel, everything depends on $k$. Nevertheless, to reduce the amount of notation, the subscript $k$ will often be omitted. Introduce the space $\mathcal{H} := {\mathbb{C}}^{R_k}$ and its scalar product
$$ \langle f, g \rangle = \frac{1}{2k} \sum_{x \in R_k } f(x) \overline{g(x)} , \qquad f, g \in \mathcal{H} .$$
\subsubsection*{The endomorphisms $T$, $L$ and $\Delta$}
Introduce the endomorphisms $T$, $L$ and $\Delta$ of $ \mathcal{H}$ given by
$$ (Tf ) (x) = ( -1)^{kx} f(x), \qquad ( L f) ( x) := f \Bigl( x + \frac{1}{k} \Bigr), \qquad \De = \tfrac{1}{2} \bigl( L - L^{-1} \bigr)$$
Observe that $T$ is unitary, $T^2 = \operatorname{id}$ and $\mathcal{H} = \mathcal{H}^+ \oplus^{\perp} \mathcal{H}^-$ where $\mathcal{H}^{\pm} = \ker ( T \mp \operatorname{id}).$ Furthermore, $\mathcal{H}^+$ (resp. $\mathcal{H}^-$) consists of the functions vanishing on the $x \in R_k$ such that $kx$ is odd (resp. even). The orthogonal projector of $\mathcal{H}$ onto $\mathcal{H}^{\pm}$ is $\frac{1}{2} ( \operatorname{id} \pm T)$.
Let $u_0 \in \mathcal{H} $ be the constant function equal to $1$, let $u_1 = T u_0$ and $u_0 ^\pm = \frac{1}{2} ( u_0 \pm u_1 ) $. Denote by $\mathcal{H}_0^{\pm}$ the subspace of $\mathcal{H}^\pm$ orthogonal to $u_0 ^\pm$.
\begin{lem} The kernel of $\Delta$ is spanned by $u_0$ and $u_1$. Furthermore $\Delta$ restricts to a bijection from $\mathcal{H}_0^{\pm} $ to $ \mathcal{H}_0^{\mp}$.
\end{lem}
\begin{proof} We easily check that $u_0$ and $u_1$ belong to the kernel of $\Delta$ and that this kernel is 2-dimensional.
Furthermore we have that $LT + T L = 0$, so that $\Delta T + T \Delta = 0$ and consequently $ \Delta ( \mathcal{H}^{\pm} ) \subset \mathcal{ H}^{\mp}$. Since $L$ is unitary, $\Delta $ is skew-Hermitian. So the range of $\Delta$ is the subspace of $\mathcal{H}$ orthogonal to $u_0$ and $u_1$.
\end{proof}
\subsubsection*{Discrete Fourier transform}
Let $(u_y, \; y \in R_k )$ be the orthonormal basis of $\mathcal{H}$
$$ u_{y} (x) = e^{ik\pi y x }, \quad \forall x \in R_k $$
When $y=0$ or $1$, we recover the functions $u_0$ and $u_1$ introduced previously.
Denote by $\mathcal{F} : \mathcal{H} \rightarrow \mathcal{H}$ the discrete Fourier transform
$$ \mathcal{F} ( f) ( y) = \langle f, u_y \rangle , \qquad \forall y \in R_k . $$
Using the relations $ T u _y = u_{y+1}$ and $ L u _y = e^{i \pi y} u_y$, we deduce that for any $f \in \mathcal{H}$ and $y \in R_k$
\begin{gather} \label{eq:fourierTDelta}
\mathcal{F} (T f ) ( y) = \mathcal{F}(f) ( y- 1), \qquad \mathcal{F} ( \Delta f) ( y) = i \sin ( \pi y ) \mathcal{F} ( f) ( y) .
\end{gather}
By definition of $\Xi_m$, Equation (\ref{eq:defXi}), its discrete Fourier transform is given by
$$ \mathcal{F} ( \Xi_m ) ( y) = \begin{cases} \bigl[ i\; \sin ( \pi y) \bigr]^{-m} \text{ if } y \neq 0, 1 \\ 0 \qquad \text{ otherwise.} \end{cases}
$$
Observe that $\mathcal{F} ( \Xi_m ) ( y -1 ) = (-1)^m \mathcal{F} ( \Xi_m ) ( y)$, so that $T \Xi_m = ( -1)^m \Xi_m$. Furthermore $\mathcal{F} ( \Xi_m ) ( 0) = \mathcal{F} ( \Xi_m ) ( 1) = 0$ implies that $\Xi_m$ is orthogonal to both $u_0$ and $u_1$. Hence, $\Xi_m$ belongs to $\mathcal{H}^{+}_0$ or $\mathcal{H}^{-}_0$ according to whether $m$ is even or odd. Furthermore we have that $\Delta \Xi_{m+1} = \Xi_m$. So we can inductively compute $\Xi_m$ by inverting the operators $\Delta: \mathcal{H}^{\pm}_0 \rightarrow \mathcal{H}^{\mp}_0$.
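As a side check, the relations $T \Xi_m = (-1)^m \Xi_m$ and $\Delta \Xi_{m+1} = \Xi_{m}$ can be tested with the sketch above (the finite-difference operator \texttt{D} below is our illustrative implementation of $\Delta$ on the grid $R_k$).
\begin{verbatim}
import numpy as np
# Xi() is the illustrative function from the previous sketch

k, m = 6, 2
ls = np.arange(2 * k)
xs = ls / k                                 # the grid R_k
D = lambda f: 0.5 * (np.roll(f, -1) - np.roll(f, 1))   # (L - L^{-1})/2
Xi_m  = np.array([Xi(m, k, x) for x in xs])
Xi_m1 = np.array([Xi(m + 1, k, x) for x in xs])
assert np.allclose(D(Xi_m1), Xi_m)          # Delta Xi_{m+1} = Xi_m
assert np.allclose((-1.0) ** ls * Xi_m, (-1) ** m * Xi_m)   # T Xi_m
\end{verbatim}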
Let us first compute $\Xi_0$. We have
$$ \Xi_0 = \Bigl( \textstyle{\sum}_{y \in R_k} u_y \Bigr) - ( u_0 + u_1 ) .$$
Furthermore $\sum_{y \in R_k} u_y = 2k \delta$ where $\delta ( x) = 1$ if $x = 0$ and $\delta ( x) = 0$ otherwise. So we obtain
\begin{gather} \label{eq:xi0}
\Xi_0 (x) = 2 k \delta (x) - 1 - ( -1)^{kx} .
\end{gather}
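Continuing the numerical sketch above, one can confirm this closed formula:
\begin{verbatim}
# checking (eq:xi0), with Xi() as in the earlier sketch
k = 7
for l in range(2 * k):
    expected = 2 * k * (1 if l == 0 else 0) - 1 - (-1) ** l
    assert abs(Xi(0, k, l / k) - expected) < 1e-8
\end{verbatim}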
Let us compute the inverse of $\Delta :\mathcal{H}^-_0 \rightarrow \mathcal{H}^+_0 $. Denote by $S_k $ the set of integers $\{1, \ldots, k \}$. Let $\tilde{L} $ and $L$ be the endomorphisms of ${\mathbb{C}}^{S_k}$ defined by
$$ \quad \forall \ell \in S_k, \quad \tilde{L} (f) ( \ell ) = \sum _{m = 1}^{ \ell} f (m), \qquad L ( f) ( \ell ) = 2 \tilde{L} (f) ( \ell ) - \frac{2}{k} \sum_{\ell' = 1}^{k} \tilde{L} (f)(\ell') $$
Let us identify $\mathcal{H}^+$ and $\mathcal{H}^-$ with ${\mathbb{C}}^{S_k}$ by sending $g^{\pm} \in \mathcal{H}^\pm$ into $f^{\pm} \in {\mathbb{C}}^{S_k}$ given by
\begin{gather}\label{eq:rel1}
f^{+} ( \ell ) = g^+\Bigl( \frac{2(\ell-1)}{k } \Bigr) , \qquad f^{-} ( \ell ) = g^{-} \Bigl( \frac{2 \ell -1 } { k} \Bigr) .
\end{gather}
Observe that the subspaces $ \mathcal{H}^{+}_0$ and $\mathcal{H}^{-}_0$ get identified with the subspace of ${\mathbb{C}}^{S_k}$ consisting of function with vanishing average.
\begin{lem}
The inverse of $\Delta : \mathcal{H}^-_0 \rightarrow \mathcal{H}^+_0$ is $L$.
\end{lem}
\begin{proof}
Let $\tilde{\Delta}$ be the endomorphism of ${\mathbb{C}}^{S_k}$ corresponding to $\Delta : \mathcal{H} ^- \rightarrow \mathcal{H} ^+$ through the identifications $\mathcal{H}^- \simeq {\mathbb{C}}^{S_k}$ and $\mathcal{H}^+ \simeq {\mathbb{C}}^{S_k}$. A straightforward computation shows that for any $f \in {\mathbb{C}}^{S_k}$, we have $$\tilde{\Delta} f (\ell) = \frac{1}{2} ( f ( \ell) - f ( \ell -1 )) , \qquad \ell \in S_k$$ with the convention that $f (0) = f(k)$. Assume that the average of $f$ vanishes, so that $\tilde {L} f ( k) = \tilde{L} f ( 0)$. So we have that $2 \tilde{\Delta} \tilde{L} f = f$. Since $\tilde{\Delta} 1 = 0 $, it follows that $ \tilde{\Delta} L f = f$. Furthermore $Lf$ has a vanishing average.
\end{proof}
Let us apply this to compute $\Xi_1$. By (\ref{eq:xi0}) and (\ref{eq:rel1}), the function $f^{+}_0 \in {\mathbb{C}}^{S_k}$ corresponding to $\Xi_0$ is given by $f^{+}_0 ( 1) = 2k -2 $ and $f^{+}_0 ( \ell ) = -2$ for $\ell = 2, \ldots, k$. Thus $\tilde{L} ( f^{+}_0)(\ell) = 2k - 2\ell$ and $L( f^{+}_0)( \ell) = -4 \ell + 2k + 2 $. Inverting the second relation of (\ref{eq:rel1}), we obtain for any $x$ of the form $(2 \ell + 1)/k$,
\begin{gather} \label{eq:rel2}
g^{-} ( x) = f^{-} \Bigl( \frac{kx + 1 }{2} \Bigr)
\end{gather}
which leads to $ \Xi_1 ( x ) = 2 k ( - x +1 )$. This proves Theorem \ref{theo:dev_part} for $m=1$.
Assume now that Theorem \ref{theo:dev_part} has been proved for some even $m \geqslant 2$. So the restriction of $\Xi_m $ to $[0,2] \cap \frac{2}{k} {\mathbb{Z}} $ is a linear combination of the monomials $ k^ q x^p$ where $ p \leqslant q \leqslant m$. Then, the function $f_m^+ \in {\mathbb{C}}^{S_k}$ corresponding to $\Xi_m $ is a linear combination of the monomials $ k^ {q-p} \ell^p$ where $ p \leqslant q \leqslant m$. Now, it is a well-known consequence of Euler-Maclaurin formula that for any $p \in {\mathbb{N}}$, there exists a polynomial $Q_p$ with degree $p+1$ and vanishing at $0$ such that
\begin{gather} \label{eq:sompol}
\sum_{m = 1 } ^{\ell} m ^p = Q_p ( \ell ) , \qquad \forall \ell \in {\mathbb{N}}
\end{gather}
Let $f \in {\mathbb{C}}^{S_k}$ be given by $f( \ell) = \ell^p$. Then $\tilde{L}( f) ( \ell) = Q_p ( \ell)$.
Applying (\ref{eq:sompol}) to the monomials of $Q_p$, we obtain a polynomial $R_p$ of degree $p+ 2$, vanishing at $0$ and such that
$$ \sum_{m = 1 } ^{\ell} Q_p ( m) = R_{p } ( \ell) , \qquad \forall \ell \in {\mathbb{N}}
$$
Consequently $ L ( f) ( \ell ) = 2Q_p ( \ell) - 2k^{-1}R_{p } (k) $, so that
$$ L( k^{q-p} f ) ( \ell ) = 2k^{q-p} Q_p ( \ell ) - 2k^{q-p-1} R_p ( k) . $$
Since $R_p(0) = 0$, $k^{-1} R_p ( k)$ is polynomial in $k$ with degree $p+1$.
This proves that $L ( f^+_m)$ is a linear combination of the monomials $ k^ {q-p} \ell^p$ where $ p \leqslant q \leqslant m +1 $. Applying the relation (\ref{eq:rel2}), we obtain that the restriction of $\Xi_{m+1} $ to $[0,2] \cap \frac{1}{k} ( 1+ 2{\mathbb{Z}})$ is a linear combination of the monomials $ k^ {q-p + p'} x^{p'}$ where $ p' \leqslant p \leqslant q \leqslant m +1 $. Equivalently, it is a linear combination of the $ k^ {q} x^{p} $ where $ p \leqslant q \leqslant m +1 $, which proves Theorem \ref{theo:dev_part} for $m+1$.
Similarly we can show that the result for $m$ odd implies the result for $m+1$. To do this, we identify $\mathcal {H}_0^+$ and $\mathcal{H}_0^-$ with ${\mathbb{C}} ^{S_k}$ by using the relations
\begin{gather*}
f^{+} ( \ell ) = g^+\Bigl( \frac{2\ell}{k } \Bigr) , \qquad f^{-} ( \ell ) = g^{-} \Bigl( \frac{2 \ell -1 } { k} \Bigr) .
\end{gather*}
instead of (\ref{eq:rel1}). Then the inverse of $\Delta : \mathcal{H}^+_0 \rightarrow \mathcal{H}^-_0$ is still given by $L$. The remainder of the proof is unchanged.
\subsection{Further properties of the polynomials $P_{m}$} \label{sec:furth-prop-polyn}
For any $m$, let us write $P_m = P_m^{+} + P_m ^{-}$, where $P_m^{+}$ (resp. $P_m ^{-}$) is a linear combination of the monomials $ k^q x^{2 \ell}$ (resp. $ k^q x^{2 \ell + 1}$).
\begin{prop} \label{eq:sing_part}
For any $m \in {\mathbb{N}}^*$, we have
$$ \frac{1}{2} \bigl( P_m ( k ,x ) - P_m ( k, 2 + x ) \bigr) = \begin{cases} P^-_m ( k, x) \quad \text{ if $m$ is even,} \\ P^+_m ( k, x) \quad \text{ if $m$ is odd.}
\end{cases}$$
\end{prop}
\begin{proof}
The discrete Fourier transform $\mathcal{F}( \Xi_m)$ has the same parity as $m$, so the same holds for $\Xi_m$. This implies that for any $x \in [0,2] \cap \frac{1}{k} {\mathbb{Z}}$ such that $kx $ has the same parity as $m$, we have
$$ P_m( k, 2- x ) = ( -1)^m P_m( k, x)$$
Since $P_m$ is polynomial in $k$ and $x$, this equality actually holds for any $k$ and $x$. So we have
$$ P_m ( k, x ) - P_m ( k, 2 + x ) = P_m ( k,x ) - ( -1)^m P_m (k, -x) ,
$$
which concludes the proof.
\end{proof}
\begin{prop} \label{prop:calcul_pm}
For any $m \geqslant 2$, we have
\begin{gather} \label{eq:rec_Pm}
\sum_{\ell =1 }^{\infty} \frac{k^{-2 \ell +1 }}{(2 \ell -1)!} \Bigl( \frac{d}{dx} \Bigr)^{2 \ell - 1 } P_{m} ( k, x) = P_{m-1} ( k, x).
\end{gather}
Furthermore, if $m$ is even,
\begin{gather} \label{eq:pair_init}
k \int_0^2 P_m ( k, x ) \; dx = 4 \sum_{\ell=1}^{\infty} \frac{B_{2 \ell} }{ ( 2 \ell ) !} \Bigl( \frac{2}{k} \Bigr)^{ 2 \ell -1 } \Bigl( \frac{d}{dx} \Bigr)^{2 \ell - 1 } P_m^- ( k,0)
\end{gather}
where the $B_{\ell}$ are the Bernoulli numbers. If $m$ is odd,
\begin{gather} \label{eq:impair_init} P_{m} (k, \tfrac{1}{k} ) = P_{m-1} ( k,0) .
\end{gather}
\end{prop}
\begin{proof}
Since $\Delta \Xi_{m} = \Xi_{m-1}$, we have for any $x \in [0,2] \cap \frac{1}{k} {\mathbb{Z}}$ such that $kx$ has the same parity as $m-1$,
$$\tfrac{1}{2} \bigl( P_{m} (k, x + \tfrac{1}{k} ) - P_{m} (k, x - \tfrac{1}{k} ) \bigr) = P_{m-1}(k, x ) .$$
$P_{m}$ and $P_{m-1}$ being polynomials the same equality holds for any $x$ and $k$. We obtain Equation (\ref{eq:rec_Pm}) by applying Taylor formula.
Recall that the average of $\Xi_m$ vanishes. For even $m$ this implies that
$\sum_{\ell = 1}^{k} P_{m} ( k, \tfrac{2\ell}{k} ) = 0 .$ By Euler-Maclaurin formula, we have
\begin{xalignat*}{2}
\sum_{\ell = 1}^{k} P_{m} ( k, \tfrac{2\ell}{k} ) = & \int_0 ^k P_m (k, \tfrac{2x}{k} ) dx + \frac{1}{2} \bigl( P_m ( k, 2 ) - P_m ( k,0) \bigr) \\ + & \sum_{ \ell \geqslant 1} \frac{B_{2\ell}}{ ( 2 \ell)!} \Bigl( \frac{2 }{k} \Bigr)^{ 2 \ell -1 } \Bigl[ \Bigl( \frac{d}{dx} \Bigr)^{2 \ell - 1 } P_m ( k, 2 ) - \Bigl( \frac{d}{dx} \Bigr)^{2 \ell - 1 } P_m ( k,0 ) \Bigr]
\end{xalignat*}
Using that $P_m ( k,2) = P_m ( k,0)$ (both being equal to $\tfrac{1}{2} \Xi_{m, k } ( 0 )$, by the periodicity of $\Xi_{m,k}$) and Proposition \ref{eq:sing_part}, we obtain Equation (\ref{eq:pair_init}).
If $m$ is odd, $\Delta \Xi_{m} = \Xi _{m-1}$ implies that
$$ \tfrac{1}{2} \bigl( P_m ( k, \tfrac{1}{k} ) - P_m ( k, - \tfrac{1}{k} ) \bigr) = P_{m-1} (k,0).$$
Since $\Xi_m $ is also odd, we have $P_m (k , - \tfrac{1}{k} ) = - P_m ( k, \tfrac{1}{k})$ which proves Equation (\ref{eq:impair_init}).
\end{proof}
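The reader may verify these relations symbolically on the polynomials $P_1$, $P_2$, $P_3$ given after Theorem \ref{theo:dev_part}; here is a short sketch in Python, assuming \texttt{sympy} (the helper \texttt{lhs} is our name for the left-hand side of (\ref{eq:rec_Pm})).
\begin{verbatim}
import sympy as sp

k, x = sp.symbols('k x')
P1 = k * (1 - x)
P2 = k ** 2 * (-x ** 2 / 2 + x - sp.Rational(1, 3)) + sp.Rational(1, 3)
P3 = k ** 3 * (-x ** 3 / 6 + x ** 2 / 2 - x / 3) + k * (x / 2 - sp.Rational(1, 2))

def lhs(P, terms=5):
    # sum_{l >= 1} k^{-2l+1}/(2l-1)! (d/dx)^{2l-1} P
    return sum(k ** (-2 * l + 1) / sp.factorial(2 * l - 1)
               * sp.diff(P, x, 2 * l - 1) for l in range(1, terms))

assert sp.simplify(lhs(P2) - P1) == 0   # (eq:rec_Pm) for m = 2
assert sp.simplify(lhs(P3) - P2) == 0   # (eq:rec_Pm) for m = 3
\end{verbatim}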
Proposition \ref{prop:calcul_pm} allows one to compute the $P_m$'s inductively. Indeed, we have the following
\begin{prop} \label{prop:recurrence}
For any $m \geqslant 2$, $P_{m}$ is the unique solution in ${\mathbb{C}} [ k,x]$ of Equations (\ref{eq:rec_Pm}) and (\ref{eq:pair_init}) (resp. (\ref{eq:impair_init})) if $m$ is even (resp. odd).
\end{prop}
\begin{proof}
Write
$ P_m ( k,x) = k^{r} Q_r ( x) + k^{r-1} Q_{r-1} ( x) + \ldots + Q_0 (x)$. Denote by $D$ the derivation $\frac{d}{dx}$. Then
\begin{xalignat*}{2}
\sum_{\ell =1 }^{\infty} \frac{k^{-2 \ell +1 }}{(2 \ell -1)!} D^{2 \ell - 1 } P_{m} ( k, \cdot ) = & k^{r-1} D Q_r + k^{r-2} DQ_{r-1} + k^{r-3 }( DQ_{r-2} + \\ & \tfrac{1}{3!} D^3 Q_{r} ) + k^{r-4} ( DQ_{r-3} + \tfrac{1}{3!} D^3 Q_{r-1}) + \ldots \\ = & \sum_{\ell =0} ^{r} k^{r -\ell-1 }( D Q_{r -\ell} + R_{\ell}) + \mathcal{O}(k^{-2} )
\end{xalignat*}
where for any $ 0 \leqslant \ell \leqslant r$, $R_{\ell} \in {\mathbb{C}} [x]$ only depends on $Q_{r}, Q_{r-1}, \ldots , Q_{r-\ell +1}$.
So Equation (\ref{eq:rec_Pm}) leads to a triangular system of equations for the $DQ_{r-\ell}$'s. We conclude that $P_m$ is the unique solution of Equation (\ref{eq:rec_Pm}) up to some polynomial in $k$. Arguing similarly, we prove that this latter polynomial is uniquely determined by Equation (\ref{eq:pair_init}) or (\ref{eq:impair_init}) according to the parity of $m$.
\end{proof}
\begin{prop} \label{prop:par_k}
For any $m \in {\mathbb{N}}^*$, $P_m $ is a linear combination of the monomials $ k^q x^p$, where $p,q$ run over the integers such that $p \leqslant q \leqslant m$ and $q$ has the same parity as $m$.
\end{prop}
\begin{proof}
We have to prove that $P_m ( -k, x) = ( -1)^m P_m ( k, x)$. Assume the result holds for $m-1$. Then we easily see that $( -1)^m P_m ( -k , x)$ satisfies the equations of Proposition \ref{prop:calcul_pm}. To check Equation (\ref{eq:impair_init}) in the case where $m$ is odd, we use that $P_m ( k, -\frac{1}{k} ) = -P_m ( k, \frac{1}{k} )$. We conclude with Proposition \ref{prop:recurrence}.
\end{proof}
\begin{prop} \label{prop:sing_part}
For any even $m \geqslant 2$ (resp. odd $m \geqslant 1$), $P_m^-$ (resp. $P^+_m$) is a linear combination of the monomials $k^{m-2 \ell} x^{m- 2\ell -1}$, with $\ell = 0 ,1 \ldots$. The coefficient of $k^{m} x^{m-1}$ is $ 1 / (m-1)!$.
\end{prop}
\begin{proof} Again the proof is by induction on $m$. Assume that $m$ is even and write $ A = P_m ^{-}$, $B = P_{m-1} ^{+}$. By Proposition \ref{prop:par_k}, we have
\begin{gather*}
A ( k, x ) = k^{m} A_m ( x) + k^{m-2} A_{m-2} ( x) + \ldots + A_0 ( x) , \\ \qquad B(k, x) = k^{m-1} B_{m-1} (x) + k^{m-3} B_{m-3} (x) + \ldots + k B_1 (x) .
\end{gather*}
We deduce from Equation (\ref{eq:rec_Pm}) that
\begin{gather*}
D A_m = B_{m-1} , \qquad DA_{m-2} + \tfrac{1}{3!} D^3 A_m = B_{m-3} , \qquad \ldots \\
D A_2 + \tfrac{1}{3!} D^3 A_4 + \ldots + \tfrac{1}{ (m-1)!} D^{m-1} A_m = B_1 ,\\ DA_0+ \tfrac{1}{3!} D^3 A_2 + \ldots + \tfrac{1}{ ( m +1 ) !} D^{m+1} A_m = 0
\end{gather*}
By the induction assumption, $ B_{\ell} (x) = b_{\ell} x^{\ell-1}$. Then these equations imply that $A_\ell ( x) = a_{\ell} x^{\ell-1} + a_{\ell}^0$. Since the $A_\ell$'s are odd, the constants $a_{\ell}^{0}$ vanish. In particular $a_m = b_{m-1}/(m-1)$, so that the coefficient of $k^m x^{m-1}$ is $1/(m-1)!$ by induction, the case $m = 1$ following from $P_1 (k,x) = k(1-x)$.
Assume now that $m$ is odd and that the results holds for $m-1$. Arguing as above with Proposition \ref{prop:par_k} and Equation (\ref{eq:rec_Pm}), we prove that
\begin{xalignat*}{2}
P_m ^+ ( k, x) = & k^{m } ( c_m x^{m-1} + d_m ) + k^{m-2} ( c_{m-2} x^{m-3} + d_{m-2} ) + \ldots \\ + & k^{3} ( c_3 x^2 + d_3) + k c_1
\end{xalignat*}
By Proposition \ref{prop:par_k}, $P_{m-1} ( k,0)$ is even. So Equation (\ref{eq:impair_init}) implies that $$P_m ^+ (k, \tfrac{1}{k} ) = 0.$$
We deduce that the $d_{\ell}$'s vanish and $c_m + c_{m-2} + \ldots + c_1 = 0$.
\end{proof}
\subsection{Application to the counting function} \label{sec:appl-count-funct}
As a consequence of Verlinde formula, we have the following
\begin{lem} \label{lem:relat-with-count}
For any $k \in {\mathbb{N}}^*$, $\ell = 1 , \ldots , k-1$ and $g \in {\mathbb{N}}^*$, we have
$$N^{g,k}_{\ell} = C_g k ^{ g -1 } \Xi _{2g -1 } ( \ell / k ) $$
with $C_g = (-1)^{g-1} 2 ^{-g}$.
\end{lem}
\begin{proof}
We compute from Theorem \ref{theo:Verlinde}
\begin{xalignat*}{2}
N_{\ell }^{g,k} = & \sum_{m= 1 } ^{k-1} S_{m,1} ^{1 - 2g} S_{m, \ell} =\frac{1}{2i} \Bigl( \frac{2}{k} \Bigr)^{1/2} \sum_{m=1}^{k-1} S_{m,1} ^{1 - 2g} ( e^{ i \pi \frac{m\ell}{k} } - e^{ -i \pi \frac{m\ell}{k} } ) \\
\intertext{ setting $ m = k y $ so that $S_{m,1}= \bigl( \frac{2}{k}\bigr)^{1/2} \sin ( \pi y )$, we get}
= & \frac{1}{2i} \Bigl( \frac{2}{k} \Bigr)^{1-g} \sum_{y \in R_k \setminus \{ 0 , 1\}} \bigl[ \sin ( \pi y ) \bigr]^{1-2g} e^{i \pi y \ell } \\
= & 2^{-g}(-k ) ^{g-1} i^{1-2g} \sum_{y \in R_k \setminus \{ 0 , 1\}} \bigl[ \sin ( \pi y ) \bigr]^{1-2g} e^{i \pi y \ell }
\end{xalignat*}
which ends the proof.
\end{proof}
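Numerically, this identity can be confirmed by combining the two sketches given earlier (\texttt{verlinde} and \texttt{Xi} are our illustrative functions):
\begin{verbatim}
# combining verlinde() and Xi() from the earlier sketches
g, k = 2, 12
Cg = (-1) ** (g - 1) * 2.0 ** (-g)
for ell in range(1, k):
    assert abs(verlinde(g, k, ell)
               - Cg * k ** (g - 1) * Xi(2 * g - 1, k, ell / k)) < 1e-6 * k ** 4
\end{verbatim}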
So we deduce from Theorem \ref{theo:dev_part} and the result of Section \ref{sec:furth-prop-polyn} the following
\begin{theo} \label{theo:counting_smoot}
Let $g\in {\mathbb{N}}^*$. Then there exists a family of polynomial functions
$P_{g, m} : [0,1] \rightarrow {\mathbb{R}}$, $m =0,1,\ldots, g-1$ such that for any $k \in {\mathbb{N}}^*$ and for any odd integer $\ell$ satisfying $1 \leqslant \ell \leqslant k-1$, we have
$$ N^{g, k }_{\ell} = \Bigl( \frac{k}{2\pi} \Bigr)^{ 3 g -2 } \sum _{m = 0 }^{g-1 } k^{-2m} P_{g, m} \Bigl( \frac{\ell}{k} \Bigr) . $$
Furthermore $P_{g, m}$ has degree $2(g -m) -1 $. The even part of $P_{g, m }$ is of the form $\la_{g,m} x^{ 2( g -m - 1)}$. For $m=0$, we have
$$ \la_{g,0} = \frac{2 C_g ( 2 \pi)^{ 3g -2}}{ ( 2 ( g-1))!} .$$
\end{theo}
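For instance, for $g = 1$, Lemma \ref{lem:relat-with-count} together with the value $P_1 (k,x) = k(1-x)$ gives, for odd $\ell$,
$$ N^{1, k}_{\ell} = \tfrac{1}{2} \Xi_{1} ( \ell / k) = k - \ell = \frac{k}{2\pi} P_{1, 0} \Bigl( \frac{\ell}{k} \Bigr) \quad \text{ with } \quad P_{1,0} (s) = 2 \pi ( 1 - s) , $$
in accordance with the theorem: $P_{1,0}$ has degree $1 = 2(1-0) -1$ and its even part is $\la_{1,0} = 2\pi = 2 C_1 (2\pi)^{3 \cdot 1 - 2} / (2 \cdot 0)!$.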
The polynomials $P_{g,m}$ will be expressed in Section \ref{sec:symplectic-volumes} as integrals of characteristic classes on some moduli spaces. In particular $P_{g,0} (s)$ is a symplectic volume.
\section{Geometric quantization of tori and semiclassical limit}
\label{sec:geom-quant-tori}
\subsection{The quantum spaces} \label{sec:quantum-spaces}
Let $(E,\om)$ be a real two-dimensional symplectic vector space. Let $R$ be a lattice of $E$ with volume $4\pi$. Let $L_E = E \times {\mathbb{C}}$ be the trivial Hermitian line bundle over $E$. Endow $L_E$ with the connection $d + \frac{1}{i} \al$ where $\al \in \Om^1 ( E, {\mathbb{C}})$ is given by
$$ \al_x ( y) = \frac{1}{2} \om ( x, y). $$
Consider the Heisenberg group $E \times U(1)$ with the product
\begin{gather} \label{eq:action_prod}
(x, u) . ( y,v) = ( x+ y, uv e^{i \om ( x, y) /2})
\end{gather}
This group acts on $L_E$ by preserving the metric and the connection, the action of $(x,u)$ being given by formula (\ref{eq:action_prod}) with $(y,v ) \in L_E$. The group $E \times U(1)$ is actually the group of automorphisms of $(L_E , d + \frac{1}{i} \al)$ lifting the translations of $E$.
Since $\om ( R, R) \subset 4 \pi {\mathbb{Z}}$, $R \times \{1 \}$ is a subgroup of $E \times U(1)$. Let $M$ be the torus $E /R$ and $L_M $ be the bundle $L_E / R \times \{1\}$. The symplectic form $\om$ and the connection $d + \frac{1}{i} \al$ descend to $M$ and $L_M$ respectively. Let $k$ be a positive integer. For any $ x \in \frac{1}{2k} R$, the action of $(x,1)$ on $L_E^k$ commutes with the action of $R \times \{1 \}$. This defines an action of $(x, 1)$ on $L^k_M$. Denote by $T^*_x$ the pull-back of the sections of $L^k_M$ by the action of $(x,1)$. Observe that for any $ x, y \in R$, we have
$$ T_{x/2k}^* T_{y/2k} ^* = e^{i\om ( x, y)/4k} T_{y/2k}^* T_{x/2k}^* .$$
Choose a linear complex structure $j$ of $E$ compatible with $\om$.
This complex structure descends to $M$. Furthermore, $L_M$ inherits a holomorphic structure compatible with the connection. The space $H^0 ( M , L^k_M)$ of holomorphic sections of $L^k_M$ has dimension $2k$. The operators $T_{x/2k}^*$, $x \in R$ introduced above, preserve $H^0 ( M , L^k_M)$.
Let $( \delta, \varphi)$ be a half-form line, that is $\delta$ is a complex line and $\varphi$ is an isomorphism from $\delta^{\otimes 2} $ to the canonical line $K_j$,
$$ K_j = \{ \al \in E^* \otimes {\mathbb{C}}/ \al ( j \cdot ) = i \al \} .$$
Let $\mathcal{H}_k = H^0 ( M , L^k_M) \otimes \delta = H^0 ( M , L^k_M \otimes \delta_M )$ where $\delta_M$ is the trivial line bundle $M \times \delta$. $K_j$ has a natural scalar product given by $( \al , \be) = i \al \wedge \con{\be} / \om$. We endow $\delta$ with the scalar product making $\varphi$ a unitary map. $\mathcal{H}_k$ has a scalar product defined by integrating the pointwise scalar product against $\om$.
Let $( \mu , \la)$ be a positive basis of $R$, so that $\om ( \mu , \la ) = 4 \pi$. It is a known result that $\mathcal{H}_k$ has an orthonormal basis $(\Psi_{\ell} , \; \ell \in {\mathbb{Z}} / 2 k {\mathbb{Z}})$ such that
$$ T^*_{\mu/2k } \Psi_\ell = e ^{ i \pi \ell / k} \Psi_\ell, \qquad T^* _{\la/ 2k } \Psi_{\ell} = \Psi_{\ell + 1} . $$
The only indeterminacy in the choice of this basis is the phase of $\Psi_0$. We will often use the following normalization
$$ \Psi_0 ( [0]) = \la \si^k \otimes \Om_{\mu} \qquad \text{ with } \la >0 $$
Here $\si \in L_{M, [0]}$ is the vector sent to $1$ by the identification $L_{M, [0]} \simeq L_{E, 0} = \{0 \} \times {\mathbb{C}}$, and $\Om_{\mu}$ is one of the two vectors in $\delta$ satisfying $ \varphi ( \Om_{\mu}^2) ( \mu ) = 1$. We can explicitly compute the coefficient $\la$ as an evaluation of the Riemann Theta function. As $k$ tends to infinity, it satisfies $\la = 1 + \mathcal{O} ( e^{-k/C})$ for some positive $C$.
Let $S$ be the automorphism of $L_E^k$ sending $(x,u)$ into $( -x,u)$. This automorphism descends to an automorphism of $L_M$ that we still denote by $S$. The subspace of alternating sections of $\mathcal{H}_k$ is by definition the eigenspace $\ker ( \operatorname{Id}_{\mathcal{H}_k} + S^k \otimes \operatorname{id}_{\delta}) $. It has dimension $k-1$ and admits as a basis the family $(\Psi_{\ell} - \Psi_{-\ell}, \; \ell =1, \ldots, k-1)$.
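These operators are conveniently pictured as clock and shift matrices. The following Python sketch (with \texttt{numpy} assumed; \texttt{U} and \texttt{V} are our names for $T^*_{\mu/2k}$ and $T^*_{\la/2k}$ in the basis $(\Psi_\ell)$) checks the commutation relation stated above, which here reads $UV = e^{i\pi/k} VU$ since $\om(\mu,\la) = 4\pi$.
\begin{verbatim}
import numpy as np

k = 5
ell = np.arange(2 * k)
U = np.diag(np.exp(1j * np.pi * ell / k))  # T*_{mu/2k}: Psi_l -> e^{i pi l/k} Psi_l
V = np.roll(np.eye(2 * k), 1, axis=0)      # T*_{la/2k}: Psi_l -> Psi_{l+1}
assert np.allclose(U @ V, np.exp(1j * np.pi / k) * V @ U)
\end{verbatim}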
\subsection{Semi-classical notions}
Consider the same data as above. For any $k\in {\mathbb{N}}^*$ and $\xi \in \mathcal{H}_k$, we denote by $|\xi | \in \mathcal{C}^{\infty} ( M , {\mathbb{R}})$ the pointwise norm of $\xi$ and by $\| \xi \| \in {\mathbb{R}}$ the norm defined previously, so $\| \xi \|^2 = \int_M | \xi |^2 \om$.
We say that a family $ \xi = ( \xi_k \in \mathcal{H}_k, k \in {\mathbb{N}}^*)$ is {\em admissible} if there exists $N$ and $C>0$ such that for any $k$, $\| \xi_k \| \leqslant C k^{N}$. Equivalently, $\xi$ is admissible if there exists $N$ and $C>0$ such that for any $k$, $| \xi_k | \leqslant C k^{N}$ on $M$.
The {\em microsupport} of an admissible family $( \xi_k)$ is the subset $\operatorname{MS} ( \xi )$ of $M$ defined as follows: $x \notin \operatorname{MS} ( \xi)$ if and only if there exists a neighborhood $U$ of $x$ and a sequence $(C_N)$ such that for any integer $N $ and any $y \in U$, $|\xi_k ( y) | \leqslant C_N k^{-N}$.
Let $U$ be an open set of $M$. Let $\Gamma$ be a one dimensional submanifold of $U$. Let $\Theta$ and $\si$ be sections of $L_M \rightarrow \Gamma$ and $\delta_M \rightarrow \Gamma$ respectively. Assume that $\Theta$ is flat and that its pointwise norm is constant equal to 1. Let $\xi$ be an admissible family. We say that the restriction of $\xi$ to $U$ is a {\em Lagrangian state} supported by $\Gamma$ with associated sections $( \Theta, \si)$ if $\operatorname{MS} ( \xi ) \cap U \subset \Gamma $ and for any $ x_0 \in \Gamma$, there exists an open neighborhood $V$ of $x_0$ such that
\begin{gather} \label{eq:lag_state}
\xi_k( x) = \Bigl( \frac{k}{2 \pi} \Bigr)^{1/4+N} E^k(x) f( x , k) + \mathcal{O} ( k^{-\infty}) , \qquad x \in V
\end{gather}
where the $\mathcal{O}$ is uniform on $V$ and
\begin{itemize}
\item $E$ is a section of $L_M \rightarrow V$ such that $ E = \Theta$ on $\Gamma \cap V$, $|E| < 1$ outside of $\Gamma$, $\con{\partial} E \equiv 0$ modulo a section vanishing to infinite order along $\Gamma$,
\item $(f( . , k))$ is a sequence of $\mathcal{C}^{\infty} ( V, \delta_M)$ which admits an asymptotic expansion of the form $f_0 + k^{-1} f_1 + \ldots$ with coefficients $f_0, f_1, \ldots $ in $\mathcal{C}^{\infty} (V, \delta_M)$. Furthermore $f_0 = \si$ on $\Gamma \cap V$,
\item $N$ is a real number which does not depend on $x_0$.
\end{itemize}
Let us recall how we can estimate the norm and the scalar product of Lagrangian states in terms of the corresponding sections $\Theta$ and $\si$.
In these statements, we identify $ \si^2 \in \mathcal{C}^{\infty} ( \Gamma, \delta_M^2)$ with the one-form of $\Gamma$ given by
$$ \si^2 (p)( X) := \varphi ( \si^2(p) ) ( X), \qquad \forall p \in \Gamma \text{ and } X \in T_p \Gamma $$
where we see $T_p \Gamma$ as a subspace of $T_p M = E$.
The normalization for $N$ has been chosen so that for any $\rho \in \mathcal{C}^{\infty} ( M)$ supported in $U$
$$ \int_U | \xi _k | ^2 \rho \; \om = \Bigl( \frac{k}{2\pi} \Bigr)^N \int_{\Gamma} \rho (x) |\si|^2 (x) + \mathcal{O} ( k^{N-1}) $$
Here $| \si |^2 $ is the density $|\si^2|$ of $\Gamma$, so that it makes sense to integrate it on $\Gamma$. For a proof of this formula, cf. Theorem 3.2 in \cite{oim_demi}.
Consider now two Lagrangian states $\xi$ and $\xi'$ over $U$ with associated data $(\Gamma, \Theta,\si ,N)$ and $( \Gamma', \Theta', \si', N')$. Assume that $\Gamma \cap \Gamma ' = \{ y \}$ and this intersection is transverse. Introduce a function $\rho \in \mathcal{C}^{\infty} (M)$ such that $\operatorname{supp} \rho \subset U$ and $\rho = 1 $ on a neighborhood of $y$. By Theorem 6.1 in \cite{oim_pol}, we have the following asymptotic expansion
\begin{gather} \label{eq:scalprodlag}
\int_U \bigl( \xi_k , \xi'_k \bigr) \rho \om = \Bigl( \frac{k}{2\pi} \Bigr)^{-1/2 +N +N'} \bigl( \Theta (y) , \Theta' ( y) \bigr)^k_{L_{y}} \sum_{\ell = 0 }^{\infty} k ^{-\ell} a_\ell + \mathcal{O}( k^{-\infty})
\end{gather}
where the $a_{\ell}$'s are complex numbers and $ a_0 = \langle \si (y), \si '(y) \rangle_{T_y \Gamma, T_y \Gamma'} .$
Here the bracket is defined as follows. For any two distinct lines $\nu$, $\nu'$ of $E$, there exists a unique sesquilinear map
$ \langle \cdot, \cdot \rangle_{\nu, \nu'} : \delta \times \delta \rightarrow {\mathbb{C}}$
such that for any $u$, $v \in \delta$,
\begin{gather} \label{eq:pairing}
\bigl( \langle u , v \rangle_{\nu, \nu'}\bigr)^2 = i \frac{ u^2(X) \overline{v^2(Y)}}{ \om (X, Y)}, \qquad \forall X \in \nu, Y \in \nu'
\end{gather}
where $X$ and $Y$ are any non vanishing vectors in $\nu$ and $\nu'$ respectively. The sign of $ \langle u , v \rangle_{\nu, \nu'}$ is determined by the following condition: the bracket depends continuously on $\nu$, $\nu'$ and $\langle u, u \rangle _{\nu, j \nu } \geqslant 0$.
Assume that $ \si$ and $\si'$ vanish at $y$. Then $a_0 =0$ and $a_1$ is computed as follows. Write $\si = f \tau$ and $\si '= f' \tau'$ with $f$ and $f'$ smooth functions vanishing at $y$. Then
\begin{gather} \label{eq:subpairing}
a_1 = i\frac{d_y f ( X) \overline{ d_y f'(Y)}}{\om ( X, Y)} \langle \tau (y) , \tau' (y) \rangle_{T_y \Gamma, T_y \Gamma'} .
\end{gather}
for any nonvanishing vectors $X \in T_y \Gamma$ and $Y \in T_y \Gamma'$.
\subsection{Alternative description of Lagrangian states}
Choose a basis $(\mu, \la)$ of $R$ such that $\om ( \mu, \la ) = 4 \pi$. Denote by $p$ and $q$ the linear coordinates of $E$ dual to this basis. Let $s$ be the section of $L_E$ given by $s = e^{-2i \pi pq}$. We easily compute that
\begin{gather} \label{eq:ders}
\nabla s = \frac{4 \pi}{ i} p dq \otimes s .
\end{gather}
Observe that $s( 0) =1$ and that $s$ is flat along the line ${\mathbb{R}} \la$ and along the lines $x + {\mathbb{R}} \mu$, $x \in {\mathbb{R}} \la$. These conditions completely determine $s$.
Let $q_0$ and $q_1$ in ${\mathbb{R}}$ be such that $q_0 < q_1 < q_0 + 1$. Let $U$ be the open set
$ U = \{ [p \mu + q \la ]/ \; q\in ]q_0 , q_1[, \; p \in {\mathbb{R}} \} $ of $M$. Let $\phi$ be a smooth real valued function defined on the interval $]q_0, q_1[$. Introduce the submanifold $\Gamma$ of $U$
$$ \Gamma = \{ [ \phi'(q) \mu + q \la ] ;\quad q \in ]q_0, q_1[ \} $$
and the section $\Theta$ of $L_M \rightarrow \Gamma$ such that for any $q \in ]q_0 , q_1[$,
\begin{gather} \label{eq:theta}
\Theta ( [ p \mu + q \la ] ) = e^{4i\pi \phi ( q) } s ( p \mu + q\la) \quad \text{ with } p = \phi' ( q)
\end{gather}
It follows from (\ref{eq:ders}) that $\Theta$ is flat: along $\Gamma$, $\nabla \Theta = 4 i \pi \bigl( \phi'(q) - p \bigr) \, dq \otimes \Theta = 0$ since $p = \phi' ( q)$. Let $\si$ be a section of $\delta_M \rightarrow \Gamma$.
Let $( \Psi_\ell , \ell \in {\mathbb{Z}}/2k {\mathbb{Z}})$ be the basis of $\mathcal{H}_k$ corresponding to $(\mu, \la)$. Let $\xi$ be an admissible family. Denote by $\xi_k( \ell)$, $\ell \in {\mathbb{Z}} / 2k {\mathbb{Z}}$ the coefficients of $\xi_k$ in $( \Psi_{-\ell})$.
\begin{theo} \label{theo:laginbasis}
The restriction of $\xi$ to $U$ is a Lagrangian state with associated data $(\Gamma, \Theta, \si , N)$ if and only if for any $q \in ]q_0 , q_1 [ \cap \frac{1}{2k} {\mathbb{Z}}$,
\begin{gather} \label{eq:coeff_laginbasis}
\xi_ k ( 2 k q ) = \Bigl( \frac{k}{2\pi} \Bigr)^{-1/2 +N } e^{ 4 i \pi k \phi (q) }\sum_{\ell =0 }^{\infty} k^{-\ell} f_{\ell} (q) + \mathcal{O} ( k^{-\infty})
\end{gather}
where the $\mathcal{O}$ is uniform on any compact set of $]q_0, q_1[$, the $f_{\ell}$'s are smooth functions on $]q_0, q_1[$ and the square of $f_0$ satisfies for any $q$,
\begin{gather} \label{eq:symb_laginbasis}
\si^2 ( \phi ' (q) \mu + q \la ) = f_0 ^2 ( q) \frac{ \om ( \cdot, \mu)}{i} .
\end{gather}
\end{theo}
\begin{proof}
By Proposition 3.2 \cite{LJ1}, the family $(\Psi_0 \in \mathcal{H}_k, k \in {\mathbb{N}}^*)$ is a Lagrangian state supported by the circle $C= \{ [p \mu] ; \; p \in {\mathbb{R}} \} \subset M$ and
\begin{gather} \label{eq:psi0}
\Psi_0 ([ p \mu] ) = \Bigl( \frac{k}{2\pi} \Bigr)^{1/4} s^k ( p \mu) \otimes \Om_\mu+ \mathcal{O}( k^{-\infty}).
\end{gather}
Assume that $\xi$ is a Lagrangian state. Then we can estimate the scalar product
$$ \xi_k ( 2kq ) = \langle \xi_k , \Psi_{ -2k q } \rangle = \langle \xi_k , T^*_{-q\la } \Psi_0 \rangle$$
with formula (\ref{eq:scalprodlag}). Actually we need a version with parameter of formula (\ref{eq:scalprodlag}) to get a uniform control with respect to $q$. Such a version holds and its proof is not more difficult. Let us explain how we obtain the factor $\exp( 4 i \pi k \phi (q))$ in (\ref{eq:coeff_laginbasis}) and Formula (\ref{eq:symb_laginbasis}) for the leading coefficient. First observe that $\Gamma$ intersects $q\la + C$ transversally at the point $y_q = [\phi ' (q) \mu + q \la ]$. Translating (\ref{eq:psi0}) by $q \la$, we obtain
$$ \Psi_{ -2kq} ( [q \lambda + p \mu ]) = \Bigl( \frac{k}{2\pi} \Bigr)^{1/4} s^k ( q \la + p \mu) \otimes \Om_\mu+ \mathcal{O}( k^{-\infty}). $$
By the definition of $\Theta$, cf. Equation (\ref{eq:theta}), we have
$$ ( \Theta( y_q)^k , s^k ( y_q) )_{L^k_{M,q}} = e^{4i\pi k \phi ( q)} . $$
Equation (\ref{eq:symb_laginbasis}) follows from equation (\ref{eq:pairing}) and the fact that $\Om_{\mu}^2 ( \mu ) =1$.
Conversely, assume that the asymptotic expansion (\ref{eq:coeff_laginbasis}) holds. By the first part of the proof, there exists a Lagrangian state $\xi '$ such that the coefficients $\langle \xi ' _k , \Psi _{ - 2kq} \rangle$ satisfy the same asymptotic expansion. The coefficients of the sequence $f( \cdot, k)$ in (\ref{eq:lag_state}) have to be defined by successive approximations so that we recover the same coefficients in (\ref{eq:coeff_laginbasis}). Then we have
$$ \langle \xi_k - \xi ' _k , \Psi _{ - 2kq} \rangle = \mathcal{O}( k^{-\infty})$$
uniformly on any compact set of $]q_0, q_1 [$. This has the consequence that the microsupport of $\xi- \xi'$ does not meet $U$. For more details on this last step, see Proposition 2.2 in \cite{oim_torus}.
\end{proof}
\subsection{Application to the functions $\Xi_{m,k}$} \label{sec:appl-funct-xi_m}
Choose a positive basis $( \mu, \la)$ of $R$ and denote by $( \Psi_{\ell}$, $\ell \in {\mathbb{Z}} / 2k {\mathbb{Z}})$ the corresponding basis of $\mathcal{H}_k$. Recall the function $\Xi_{m,k}$ of Section \ref{sec:discr-four-transf}. Define
$$\xi_{m,k} = \sum_{\ell \in {\mathbb{Z}} / 2k {\mathbb{Z}}} \Xi _{m,k} \Bigl( \frac{\ell}{k} \Bigr) \Psi_{\ell} . $$
Introduce the subsets of $M$
\begin{gather} \label{eq:defA12}
A_1 := \{ [ p \mu ]; \; p \in {\mathbb{R}} \} , \quad A_2 := \{ [q \la ] , [ \mu /2 + q \la ] ; \; q \in {\mathbb{R}} \}.
\end{gather}
Introduce the neighborhoods of $A_1 \setminus A_2$ and $A_2 \setminus A_1$ respectively given by $U_1 := M \setminus A_2$ and $U_2 := M \setminus A_1$.
Let $A := A_1 \cup A_2$ and $ \Theta_A$ be the section of $L_M \rightarrow A$, which is flat and equal to $1$ at the origin.
\begin{theo} \label{theo:asympt-behav-xi}
The restriction of $( \xi_{m,k}, \; k \in {\mathbb{N}}^*) $ to $U_1$ (resp. $U_2$) is a Lagrangian state supported by $A_1 \setminus A_2$ (resp. $A_2 \setminus A_1$) with order $-1$ (resp. $1/2+m$) and corresponding section $\Theta_A$.
\end{theo}
The symbol can also be computed in terms of the polynomials $P_m $ of Theorem \ref{theo:dev_part}.
\begin{proof} It is a consequence of Theorem \ref{theo:laginbasis} and Theorem \ref{theo:dev_part}.
Indeed, denoting by $\xi_{m,k} (\ell)$ the coefficient of $\Psi_{-\ell}$ in $\xi_{m,k}$, we have for any $q \in ]0,1[ \cap \frac{1}{2k} {\mathbb{Z}}$,
\begin{xalignat}{2} \notag
\xi_{m,k} ( 2k q ) = & \Xi _{m} (-2 q) = (-1)^m \Xi _{m} (2 q) \\
\label{eq:singla1} = & \bigl( (-1)^m + e^{2i k \pi q} \bigr) P_{m} ( k , 2q)
\end{xalignat}
by Theorem \ref{theo:dev_part}. Recall that $P_{m} (k, q)$ depends smoothly on $q$ (even polynomially) and is polynomial in $k$ with degree $m$. So by Theorem \ref{theo:laginbasis}, the restriction of $(\xi_{m,k})$ to $U_2$ is a Lagrangian state supported by $A_2 \setminus A_1 $.
To prove the result on $U_1$, introduce the basis of $\mathcal{H}_k$
$$ \Phi_{\ell} = \frac{e^{i\pi/4}}{\sqrt{2k}} \sum_{n \in {\mathbb{Z}}/ 2k {\mathbb{Z}}} e^{i \pi \ell n/k} \Psi_n , \qquad \ell \in {\mathbb{Z}}/2k {\mathbb{Z}}$$
We check without difficulty that
$$T^*_{\mu/2k} \Phi_{\ell} = \Phi_{\ell+1}, \qquad T^*_{\la /2k} \Phi_{\ell} = e^{-i \pi \ell/k} \Phi_{\ell+1}.$$
Furthermore, the normalization with the factor $e^{i \pi/4}$ has been chosen so that $\Phi_0 ( 0 ) = \Om_{-\la}$, where $\Om_{-\la} \in \delta $ is such that $\Om _{-\la}^2 ( -\la) = 1$, cf. Theorem 2.3 of \cite{LJ1} for a proof of this formula. So $( \Phi_{\ell})$ is the basis associated to $( -\la , \mu)$. Furthermore, it follows from the definition of $\Xi_{m,k} $ that
\begin{xalignat}{2} \notag
\xi_{m,k} = & ( i)^{-m} \sum_{ n , \ell \in {\mathbb{Z}} / 2k {\mathbb{Z}} } \bigl[ \sin ( \pi \tfrac{\ell}{k} ) \bigr]^{-m} e^{i \pi \ell n /k } \Psi_{n}\\ \label{eq:xi_phi}
= & e^{-i \pi/4} ( i )^{-m} \sqrt{2k} \sum_{ \ell \in {\mathbb{Z}} / 2k {\mathbb{Z}} } \bigl[ \sin ( \pi \tfrac{\ell}{k} ) \bigr]^{-m} \Phi_{\ell}
\end{xalignat}
where by convention $(0)^{-m} = 0$. So by Theorem \ref{theo:laginbasis}, the restriction of $(\xi_{m,k})$ to $U_1$ is a Lagrangian state supported by $A_1 \setminus A_2 $.
\end{proof}
\section{Asymptotic behavior of $Z_k ( \Si \times S^1)$ and $Z_k (S)$}
\label{sec:asympt-behav-z_k}
Recall that we introduced in Section \ref{sec:witt-resh-tura} a compact oriented surface $\Si$ with a connected boundary $C$. Let
\begin{gather} \label{eq:def_E}
E = H_1 ( C \times S^1 , {\mathbb{R}}), \quad R = H_1 ( C \times S^1) , \qquad M = E/R.
\end{gather}
Let $\om $ be the symplectic form of $E$ defined as $4\pi$ times the intersection product. Consider the quantum space $\mathcal{H}_k = H^0 (M, L_M^k ) \otimes \delta$ defined in Section \ref{sec:quantum-spaces}. We denote by $\mathcal{H}_k^{\operatorname{alt}}$ the subspace of alternating sections.
Let $(\mu, \la)$ be the basis of $R$ given by $\mu =[C]$, $\la = [S^1]$. Denote by $(e_\ell)$ and $(\Psi_\ell)$ the corresponding basis of $V_k ( C \times S^1)$ and $\mathcal{H}_k$ respectively introduced in Section \ref{sec:witt-resh-tura} and Section \ref{sec:quantum-spaces}. We identify $V_k ( \Si \times S^1)$ with $\mathcal{H}_k^{\operatorname{alt}}$ by sending $e_{\ell}$ into $2^{-1/2} ( \Psi_{\ell} - \Psi_{-\ell})$.
As it was proved in Theorem 2.4 of \cite{LJ1}, this identification depends on the choice of the basis $( \mu, \la)$ only up to a multiplicative factor $\exp (i \pi ( \frac{n}{4} + \frac{n'}{2k}))$, $n $ and $n'$ being two integers independent of $k$.
\subsection{The state $Z_k ( \Si \times S^1)$}
The vector $Z_k ( \Si \times S^1)$ of $V_k ( C \times S^1)$ is given in the basis $( \Psi_\ell)$ by
\begin{gather} \label{eq:Z_k1}
Z_k ( \Si \times S^1) = \frac{1}{\sqrt 2} \sum_{\ell =1 }^{ k-1} N^{g,k }_{\ell} \bigl( \Psi_{\ell } - \Psi_{-\ell} \bigr)
\end{gather}
Using Lemma \ref{lem:relat-with-count} and the fact that $\Xi_{2g -1}$ is odd, we get
\begin{gather} \label{eq:Z_k2}
Z_k ( \Si \times S^1) = \frac{C_g }{ \sqrt 2 } k^{g-1} \sum _{ \ell \in {\mathbb{Z}} / 2k {\mathbb{Z}}} \Xi _{ 2g -1} \Bigl( \frac{\ell}{k} \Bigr) \Psi_{\ell} = \frac{C_g }{ \sqrt 2 } k^{g-1} \xi_{2g-1,k}
\end{gather}
where $\xi_{2g-1,k}$ is the vector introduced in Section \ref{sec:appl-funct-xi_m}. By Theorem \ref{theo:asympt-behav-xi}, $(\xi_{2g-1,k})$ is a Lagrangian state, so the same holds for $\bigl( Z_k ( \Si \times S^1) \bigr)$. Let us complete this result by computing the symbol. In the following statement, we use the sets $A_1$, $A_2$ introduced in~(\ref{eq:defA12}), their neighborhoods $U_1$, $U_2$ and the corresponding section $\Theta_A$.
\begin{theo} \label{theo:asympt-behav-z_k}
The restriction of $( Z_k ( \Si \times S^1) , \; k \in {\mathbb{N}}^*) $ to $U_1$ is a Lagrangian state with associated data $(A_1 \setminus A_2, \Theta_A, \si _1, g ) $ where
$$ \si_1 (p \mu ) \equiv i \sqrt{2} \pi ^g \bigl[ \sin ( 2 \pi p ) \bigr]^{-2g+1} (dp)^{1/2}, \qquad \forall p \in {\mathbb{R}}. $$
The restriction of $( Z_k ( \Si \times S^1) , \; k \in {\mathbb{N}}^*) $ to $U_2$ is a Lagrangian state with associated data $(A_2 \setminus A_1, \Theta_A, \si _2, 3g - 3/2) $ where $\si_2$ is given in terms of the function $P_{g,0}$ introduced in Theorem \ref{theo:counting_smoot} by
$$ \si_2 (q \la ) \equiv e^{i \pi/4} (\tfrac{\pi}{2})^{1/2} \; P_{g,0} ( 2q) \; ( dq )^{1/2} \ , \qquad \forall q \in (0,1)$$
and $\si_2( q \la + \tfrac{1}{2} \mu ) = - \si_2 ( q \la)$.
\end{theo}
\begin{proof}
Introduce the same basis $( \Phi_\ell)$ as in the proof of Theorem \ref{theo:asympt-behav-xi}.
Denoting by $\eta_k ( \ell)$ the coefficient of $\Phi_{-\ell}$ in $Z_k ( \Si \times S^1)$, we have by Equations (\ref{eq:Z_k2}) and (\ref{eq:xi_phi}) that
\begin{xalignat*}{2}
\eta_k (2 k p ) = & \frac{C_g}{\sqrt 2} k^{g-1} e^{i \pi /4} (-1)^{g} \sqrt { 2k} \bigl[ \sin (- 2 \pi p ) \bigr]^{-2g+1}\\
= & \frac{e^{i \pi /4}}{2^g} k^{g-1/2} \bigl[ \sin ( 2 \pi p ) \bigr]^{-2g+1}
\end{xalignat*}
because $C_g = (-1)^{g+1} 2^{-g}$.
To conclude the computation of $\si_1$, we use that $\om ( \cdot, - \la) /i = 4 i \pi dp $ and equation (\ref{eq:symb_laginbasis}).
Let us compute the symbol $\si_2$. By Theorem \ref{theo:asympt-behav-xi} and Equation (\ref{eq:xi_phi}), the coefficients $\zeta_{k} ( \ell)$ of $\Psi_{-\ell}$ in $Z_{k} ( \Si \times S^1)$ satisfy
$$ \zeta_k ( 2k q ) = \Bigl( \frac{k}{2 \pi} \Bigr)^{ 3g -2 } \bigl( 1 - e^{2i k \pi q} \bigr) f (q) + \mathcal{O}( k^{3g -3}) $$
with $f$ a smooth function on $]0,1[$. By Equation (\ref{eq:symb_laginbasis}), we have for any $q \in (0,1)$,
$$\si_2( q \la ) = f ( q) \sqrt{ 4 i \pi dq }, \qquad \si_2( q \la + \tfrac{1}{2} \mu ) = - \si_2( q \la).$$
On the one hand, by Equation (\ref{eq:Z_k1}), $$\zeta_k ( 2k q) = - 2^{-1/2} N_{2kq}^{g,k}.$$ On the other hand, $1- e^{2i k \pi q} =2$ for odd $2k q$. So we conclude from Theorem \ref{theo:counting_smoot} that $2 f( q) = - \frac{1}{\sqrt 2} P_{g,0} ( 2q) $.
\end{proof}
\subsection{The state $\rho_k ( \varphi ) (Z_k (D \times S^1)) $}\label{sec:remplissage}
Recall that $\varphi $ is a diffeomorphism from $\partial D \times S^1$ to $C \times S^1$. So the homology class $\nu $ of $\varphi ( \partial D )$ is a primitive vector of $R$, that is $\nu = a \mu + b \la$ where $a,b$ are coprime integers. There is no loss of generality in assuming that $b$ is nonnegative. Introduce the subset of $M$
\begin{gather} \label{eq:def_B}
B:= \{ [r \nu] \in M ; \; r \in {\mathbb{R}} \} .
\end{gather}
Observe that $B$ is a circle and there is a unique flat section $\Theta_B$ of $L \rightarrow B$ such that $\Theta_B ( 0) = 1$.
The following result is Theorem 3.3 of \cite{LJ1}.
\begin{theo} \label{theo:state-tore_solide}
The family $\bigl( \rho_k ( \varphi ) (Z_k (D \times S^1)) $, $k \in {\mathbb{N}}^*$) is a Lagrangian state with associated data $(B, \Theta_B, \si_B, 0)$ where
$$ \si_B ( r \nu ) = \sqrt 2 \sin ( 2 \pi r) \Om_\nu $$
with $\Om_\nu \in \delta$ such that $\Om_\nu^2 ( \nu ) = 1$.
\end{theo}
For any nonvanishing $c$, denote by $I_c$ the interval $$I_c = ]-\tfrac{1}{2|c|}, \tfrac{1}{2|c|} [.$$ Assume that $a$ and $b$ do not vanish. Consider the open set
$U = \bigl\{ [ p \mu + q \la ] ; \; p \in I_b, \; q \in I_a \bigr\}$ of $M$. Observe that
$$ B \cap U = \bigl\{ \bigl[ \tfrac{a}{b} q \mu + q \la \bigr]; \; q \in I_a\}.$$
Introduce a function $f_U \in \mathcal{C}^{\infty} (M )$ with support contained in $U$, which is identically equal to $1$ on a neighborhood of the origin and such that $f_U(-x) = f_U(x)$. Let $Z( \ell)$ be the coefficients
\begin{gather} \label{eq:4}
Z( \ell ) = \bigl\langle f_U \rho_k ( \varphi ) (Z_k (D \times S^1)), e_{\ell} \bigr\rangle, \qquad \ell =1 , \ldots, k-1
\end{gather}
We deduce from Theorem \ref{theo:laginbasis} and Theorem \ref{theo:state-tore_solide} the following
\begin{prop} \label{lem:state-remplissage-e}
We have for any $q \in (0, \tfrac{1}{2} ) \cap \frac{1}{2k } {\mathbb{Z}}$,
$$ Z( 2kq) = \Bigl( \frac{2 \pi}{k}\Bigr)^{1/2} e^{2i \pi k \frac{a}{b} q^2} \sum_{\ell = 0 }^{\infty} k^{-\ell } f_\ell (q)+ \mathcal{O} ( k^{-\infty})$$
where the $f_\ell$ are smooth odd functions on ${\mathbb{R}} $ with support contained in $I_a$.
Furthermore $f_0 ( q) = e^{-i \pi/4} \sin ( 2 \pi q /b) / \sqrt{ \pi b}$ on a neighborhood of $0$.
\end{prop}
\subsection{Asymptotics of $Z_k ( S)$}
Let us assume that $a \neq 0$ and $b \neq 0$. Under this assumption the intersection of $B$ with $A= A_1 \cup A_2 $ is finite. As we will see, each point of $A \cap B$ contributes to the asymptotic expansion of $Z_k (S)$. Actually, since we work with alternating sections, the relevant set is the quotient $X$ of $A \cap B $ by the involution $-\operatorname{id}_M$
\begin{gather} \label{eq:def_N}
-\operatorname{id}_M : M \rightarrow M, \qquad [p \mu + q \la] \rightarrow [- p \mu - q \la].
\end{gather}
Let us denote by $N$ the quotient of $M$ by $-\operatorname{id}_M$ so that $X$ is a subset of $N$.
Introduce the functions $\al$, $\be$ from $N$ to $[0, \pi]$ satisfying
\begin{gather} \label{eq:def_be}
\begin{split}
\al ( [ p \mu + q \la ] ) = \arccos ( \cos ( 2 \pi p ) ), \\
\be ( [ p \mu + q \la ] ) = \arccos ( \cos ( 2 \pi q ) ) .
\end{split}
\end{gather}
Here it may be worth observing that $[0,1/2]$ is a fundamental domain for the action of ${\mathbb{Z}} \rtimes {\mathbb{Z}}_2$ on ${\mathbb{R}}$, where ${\mathbb{Z}}$ acts by translation and $-1 \in {\mathbb{Z}}_2$ by $- \operatorname{id}_{{\mathbb{R}}}$.
The function $\frac{1}{2\pi} \arccos ( \cos ( 2\pi x))$ induces a section from ${\mathbb{R}}/ ({\mathbb{Z}} \rtimes {\mathbb{Z}}_2)$ to $[0,1/2]$. So if $x = [p \mu + q \la]$, then $\al(x) /2\pi \equiv p$ and $\be(x) /2 \pi \equiv q$ modulo ${\mathbb{Z}} \rtimes {\mathbb{Z}}_2$.
The quotient $N$ is an orbifold with four singular points
$$ p_1 = [ 0], \quad p_2 = [\mu /2], \quad p_3 = [ \la/2], \quad p_4 = [ \mu/2 + \la /2 ] $$
corresponding to the fixed points of $-\operatorname{id}_M$. All these points belong to $A_2$ and the first two belong to $A_1$ too; actually $A_1 \cap A_2 = \{ p_1 , p_2 \}$. Since these points play a particular role in the asymptotic expansion of $Z_k (S)$, we divide $X$ into four sets $X_1 = ( A_1 \setminus \{ p_1, p_2 \} ) \cap B$, $X_2 = (A_2 \setminus \{ p_1, p_2, p_3 , p_4 \} ) \cap B$, $ X_3 = \{ p_1, p_2 \} \cap B$ and $ X_4 = \{ p_3, p_4 \} \cap B$.
\begin{lem} The sets $X_1$ and $X_2$ consist respectively of $\operatorname{E} \bigl( \frac{b-1}{2} \bigr)$ and $|a|-1$ points, where $\operatorname{E}$ denotes the integer part. $X_3 = \{ p_1 , p_2 \}$ if $b$ is even and $\{ p_1 \}$ otherwise. $X_4 = \{ p_3 \}$ if $a$ is even, $\{ p_4 \}$ if $a$ and $b$ are odd, empty if $b$ is even.
\end{lem}
\begin{proof}
Observe that for any $x \in X$, there exists a unique $r \in [0,1/2]$ such that $x = [r ( a \mu + b \la)]$. Furthermore $x \in \{ p_1, p_2, p_3, p_4 \}$ if and only if $r =0$ or $1/2$, $x \in A_1$ if and only if $ r \in \frac{1}{b} \{ 0, 1, \ldots, \operatorname{E} ( b/2) \}$, $x \in A_2 $ if and only if $ r \in \frac{1}{2|a|} \{ 0,1, \ldots , |a|\}$. We conclude easily.
\end{proof}
For any $x$ in $X$, introduce a function $f_x \in \mathcal{C}^{\infty} (N)$ which is identically equal to $1$ on a neighborhood of $x$. Assume furthermore that these functions have disjoint supports. Consider for any $x \in X$ the quantity
$$I_x (k) = \bigl\langle f_x Z_k ( \Si \times S^1) , \rho_k ( \varphi) (Z_k (D \times S^1)) \bigr\rangle $$
where the bracket is the scalar product of $\mathcal{C}^{\infty} ( M , L^k \otimes\delta_M )$.
Introduce two integers $c$ and $d$ such that $ ac + bd = 1$.
\begin{theo}
We have for any $x \in X_1 \cup X_2 \cup X_4$ that
\begin{xalignat*}{2}
I_x (k) = & \Bigl( \frac{ k } { 2\pi} \Bigr)^{n(x)} \langle \Theta_A( x) , \Theta_B (x) \rangle ^k \sum_{\ell =0}^{\infty} k^{-\ell} a_{\ell}(x) + \mathcal{O}( k^{-\infty})
\end{xalignat*}
where the $a_\ell (x)$ are complex coefficients and the exponent $n(x)$ is given case by case below. If $x \in X_1$,
$$ n(x) = g - \tfrac{1}{2}, \qquad a_0 (x) \equiv \frac{2 \pi^{g - 1/2} }{ \sqrt{b}} \bigl[ \sin ( \al (x) ) \bigr] ^{-2g + 1} \sin \bigl( c \al (x) \bigr) $$
If $x \in X_2$,
$$ n(x) = 3g -2 , \qquad a_0 (x) \equiv \frac{1 }{ \sqrt{a}} P_{g,0} \Bigl( \frac{\be (x) }{\pi} \Bigr) \sin \bigl( d \be(x) \bigr) $$
where $P_{g, 0}$ is the function introduced in Theorem \ref{theo:counting_smoot}. If $x \in X_4$,
$$ n(x) = 3g -3 , \qquad a_0 (x) \equiv \frac{i P_{g,0}'(1) }{ 4 \pi a^{3/2}} .$$
\end{theo}
\begin{proof}
This follows from Theorem \ref{theo:asympt-behav-z_k}, Theorem \ref{theo:state-tore_solide} and Equation (\ref{eq:scalprodlag}). To compute the leading coefficient with equation (\ref{eq:pairing}), we write
$$\frac{ dp ( \mu) \overline{ \Om_{\nu} ^2 ( \nu) }} { \om ( \mu, \nu)} = \frac{1}{ 4 \pi b}, \qquad \frac{ dq ( \lambda ) \overline{ \Om_{\nu} ^2 ( \nu) }} { \om ( \la, \nu)} = \frac{-1}{ 4 \pi a} .$$
Furthermore, to compute $\sin ( 2 \pi r)$, we use that for $x = r \nu$,
$$ r = rac + rbd \equiv \frac{\al(x)}{2\pi} c \pm \frac{\be(x)}{2 \pi} d \mod {\mathbb{Z}} \rtimes {\mathbb{Z}}_2 $$
If $x$ belongs to $X_1$, then $\be ( x) = 0$ which implies that $\sin ( 2 \pi r) \equiv \sin ( c \al (x))$ up to sign. If $x $ belongs to $X_2$, then $\al ( x) = 0$ or $\pi$ so that $\sin ( 2 \pi r ) \equiv \sin ( \be (x) d)$ up to sign.
To compute $I_x (k)$ with $x \in X_4$, we use formula (\ref{eq:subpairing}).
\end{proof}
To estimate $I_x (k)$ with $x \in X_3$, we need the following results.
Let $\al \in {\mathbb{R}}$ and $f \in \mathcal{C}^{\infty}_0 ( {\mathbb{R}}_{+}, {\mathbb{C}})$ with ${\mathbb{R}}_+ = [0, \infty)$. Introduce the sum
$$ S_k^+(f) = \tfrac{1}{2} f( 0 ) + \sum_{\ell =1 }^{\infty} e^{i \frac{\al}{2} \ell^2/k} f \Bigl( \frac{\ell}{k} \Bigr) .$$
Since $f$ has compact support, the sum is finite.
\begin{theo} \label{theo:pd1}
Let $\al \in {\mathbb{R}}^*$ and $f \in \mathcal{C}^{\infty}_0 ({\mathbb{R}}_+, {\mathbb{C}})$ be such that its support is contained in $[0, \frac{2\pi}{|\al|} )$. If $f$ is even and $f(x) = \la x^{2n} + \mathcal{O}( x^{2n+1})$, then
$$ S_k ^+(f) = k^{\frac{1}{2} - n } \Bigl( \frac{\pi}{2|\al|} \Bigr)^{\frac{1}{2}} e^{ i \frac{\pi}{4} \operatorname{sgn} \al} \sum_{\ell= 0 } ^{\infty} k^{-\ell} c_\ell + \mathcal{O} ( k^{-\infty}) \quad \text{ with } \quad c_0 = \Bigl( \frac{i}{2 \al} \Bigr)^n \frac{ ( 2n ) !}{ n!} \la $$
and $c_\ell \in {\mathbb{C}}$ for any positive integer $\ell$.
If $f$ is odd and $f(x) = \la x^{2n+1} + \mathcal{O}( x^{2n+2})$, then
$$ S_k ^+(f) = k^{-n} \sum_{\ell= 0 } ^{\infty} k^{-\ell} c_\ell + \mathcal{O} ( k^{-\infty}) \quad \text{ with } \quad c_0 = \Bigl( \frac{2i} {\al} \Bigr)^{n+1} \frac{ n !}{ 2} \la $$
and $c_\ell \in {\mathbb{C}}$ for any positive integer $\ell$.
\end{theo}
Here we say that a function of $\mathcal{C}^{\infty} ( {\mathbb{R}}_+)$ is even (resp. odd) if its Taylor expansion at the origin contains only even monomials (resp. odd monomials).
Similarly, the sum
$$ S_k^{-}(f) = \tfrac{1}{2} f( 0 ) + \sum_{\ell =1 }^{\infty} (-1)^\ell e^{i \frac{\al}{2} \ell^2/k} f \Bigl( \frac{\ell}{k} \Bigr) $$
has the following asymptotic behavior.
\begin{theo} \label{theo:pd2}
Let $\al \in {\mathbb{R}}^*$ and $f \in \mathcal{C}^{\infty}_0 ({\mathbb{R}}_+, {\mathbb{C}})$ be such that its support is contained in $[0, \frac{\pi}{|\al|} )$. If $f$ is even, $S^{-} _k (f) = \mathcal{O}( k^{-\infty})$. If $f$ is odd and $f( x) = \mathcal{O}( x^{n})$, then
$$ S_k^{-}(f) = k^{-n} \sum_{\ell= 0 } ^{\infty} k^{-\ell} c_\ell + \mathcal{O} ( k^{-\infty}) $$
for some complex coefficients $c_\ell$.
\end{theo}
These two theorems are proved in Section \ref{sec:sing-discr-stat}.
We deduce the following
\begin{theo}
For any $x \in X_3$, we have the asymptotic expansion
\begin{xalignat*}{2}
I_x (k) = & \Bigl( \frac{ k } { 2 \pi} \Bigr)^{3g - 3} \sum_{\ell =0 } ^{\infty} a_\ell(x) k^{-\ell} + k^{ 2g -3/2} \sum_{\ell = 0 }^{\infty} b_\ell(x) k^{-\ell} + \mathcal{O} ( k^{-\infty})
\end{xalignat*}
with $a_\ell(x)$ and $b_\ell(x)$ complex coefficients, the leading ones being given by
$$ a_0 (x) = \frac{ P_{g,0}'(0) }{ 4 \pi a^{3/2}}, \qquad b_0 (x) = e^{i \pi/4} i^g b^{g - 3/2} a ^{-g} \pi^{-g +1} \frac{\sqrt 2 ( g-1)!}{ ( 2 ( g-1))!}$$
\end{theo}
\begin{proof}
$I_{p_1} (k)$ is equal to the scalar product of $Z_k ( \Si \times S^1)$ with the vector $Z_k$ introduced in (\ref{eq:4})
$$ Z_k = \sum_{\ell =1 }^{k-1} \bigl\langle f_U \rho_k ( \varphi ) (Z_k (D \times S^1)), e_{\ell} \bigr\rangle e_\ell . $$
The asymptotic behavior of the coefficients of $Z_k$ is given in Proposition \ref{lem:state-remplissage-e}.
By Lemma \ref{lem:SiS} and Theorem \ref{theo:counting_smoot}, $Z_k ( \Si \times S^1)$ is the sum of four terms $Z_k^{+,+}$, $Z_k^{+,-}$, $Z_k^{-,+}$, $Z_k^{-,-}$ whose coefficient in the basis $( e_{\ell})$ are
$$Z_k^{+, \pm} ( \ell ) = \frac{1}{2} \Bigl( \frac{k}{2\pi} \Bigr)^{ 3 g -2 } \sum _{m = 0 }^{g-1 } k^{-2m} P^{\pm}_{g, m} \Bigl( \frac{\ell}{k} \Bigr) , \quad Z_k^{-, \pm} ( \ell ) = (-1)^{\ell+1} Z_k^{+, \pm} ( \ell ) .$$
As a consequence of Theorem \ref{theo:pd2}, we have
\begin{gather} \label{eq:5}
\bigl\langle Z_k , Z_k ^{-, -} \bigr\rangle = \mathcal{O}( k^{- \infty}), \qquad \bigl\langle Z_k , Z_k ^{-, +} \bigr\rangle = k^{g - 3/2} \sum_{\ell = 0 }^{\infty} k^{-\ell} c_\ell
\end{gather}
for some coefficients $c_{\ell}$. To prove the second formula of Equation (\ref{eq:5}), we have to take into account that
\begin{gather} \label{eq:2}
P^{+}_{ g,m} (x) = \mathcal{O}( x^{ 2 ( g-m -1)} ).
\end{gather}
By Theorem \ref{theo:pd1}, we have
$$ \bigl\langle Z_k , Z_k ^{+, -} \bigr\rangle = \Bigl( \frac{ k } { 2 \pi} \Bigr)^{3g - 3} \sum_{\ell =0 } ^{\infty} a_\ell(x) k^{-\ell} , \qquad \bigl\langle Z_k , Z_k ^{+, +} \bigr\rangle = k^{ 2g -3/2} \sum_{\ell = 0 }^{\infty} b_\ell(x) k^{-\ell}
$$
where $a_0$ and $b_0$ are given by the formula in the statement. To compute $b_0$, we use the expression for $P_{g,0}^{+}$ given in Theorem \ref{theo:counting_smoot}. Furthermore, Equation (\ref{eq:2}) implies that the polynomials $P_{g,m}$ with $m \geqslant 1$ do not enter in the computation. Since $ 2g - 3/2 > g - 3/2$, $\bigl\langle Z_k , Z_k ^{-, +} \bigr\rangle$ does not contribute to the leading order terms. This concludes the proof for $x= p_1$.
Assume that $b$ is even. Then $X_3$ consists of $p_1$ and $p_2$ and by a symmetry argument, we see that the computation of $I_{p_2} (k)$ is the same as the one of $I_{p_1} (k)$. Indeed, we have that $(T_{\mu/2}^{*} + \operatorname{id} ) Z_k (\Si \times S^1)=0$. Furthermore, by Theorem \ref{theo:state-tore_solide}, $T^*_{\nu/2} \rho_k( \varphi )Z_k ( D \times S^1) $ is a Lagrangian state with associated data $(B, \Theta_B, -\si_B, 0)$. Since $b$ is even, $\mu/2 = \nu/2$. Clearly, $ p_1 + \mu/2 = p_2$, which concludes the proof.
\end{proof}
\section{Singular discrete stationary phase} \label{sec:sing-discr-stat}
In this section, we prove Theorem \ref{theo:pd1} and Theorem \ref{theo:pd2}.
Let $\al, \be$ be two real numbers. Assume that $\al \neq 0$. Denote by ${\mathbb{R}}_{+}$ the set of nonnegative real numbers. For any function $\si \in \mathcal{C}^{\infty}_0 ( {\mathbb{R}}_{+})$ and positive $\tau$, introduce the sum
$$ S _{\tau}( \si) = \frac{\si (0)}{2} + \sum_{\ell =1 }^{\infty} e^{i ( \frac{\al}{2} \frac{ \ell^2}{ \tau} - \be \ell)} \si \Bigl( \frac{\ell}{ \tau} \Bigr) $$
In this section we study the asymptotics of $S_{\tau} ( \si)$ as $\tau$ tends to infinity. Our treatment is partly inspired by the paper \cite{KeKn}. We will adapt the stationary phase method. The relevant variable is $x = \ell / \tau$. As we will see, the set of stationary points is $\frac{\be}{\al} + \frac{2 \pi}{\al} {\mathbb{Z}}$. The origin also contributes nontrivially to the asymptotics because the sum starts at $\ell =0$. In Theorem \ref{theo:pd1}, we are in the most delicate situation because $\be =0$, and the origin is both a stationary point and an endpoint of the summation interval. Let us start with the easiest case, where the support of $\si$ does not contain any stationary point.
\begin{theo} \label{theo:app1}
For any $\si \in \mathcal{C}^{\infty} _0( {\mathbb{R}}_+)$ such that $\operatorname{Supp} \si \cap \bigl( \frac{\be}{\al} + \frac{2 \pi}{\al} {\mathbb{Z}} \bigr) = \emptyset$ and $\si (x) = \mathcal{O}( x^n) $ at the origin, we have the following asymptotic expansion
$$ S_{\tau}( \si) = \tau^{-n} \sum_{\ell =0 }^{ \infty} \tau^{-\ell} c_{\ell} $$
for some complex numbers $c_{\ell}$.
\end{theo}
For the proof, we will have to consider more general sums of the form $S_\tau ( \rho ( \cdot, \tau))$ where $\rho ( \cdot, \tau)$, is a family of functions in $\mathcal{C}^{\infty}_0 ( {\mathbb{R}}_{+})$ whose supports are contained in a fixed compact subset of ${\mathbb{R}}_{+}$ and which admits a complete asymptotic expansion in inverse power of $\tau$,
$\rho ( \cdot, \tau) = \rho_0 + \tau ^{-1} \rho_1 + \tau^{-2} \rho_2 + \ldots $,
for the $\mathcal{C}^{\infty}$ topology. We call such a family $(\rho ( \cdot, \tau))$ a {\em symbol}. In particular, for any function $f \in \mathcal{C}^{\infty} ({\mathbb{R}})$, we will denote by $f ( \frac{1}{\tau} \frac{\partial}{\partial x}) \si$ any symbol with the expansion
$$ f ( 0 ) \si + \tau^{-1} f'(0) \si' + \tfrac{1}{2} \tau^{-2} f^{(2)} (0 ) \si^{(2)} + \ldots + \tfrac{1}{\ell !} \tau^{-\ell} f^{(\ell)} (0) \si^{(\ell)} + \ldots $$
We will also use the notation $D = \frac{1}{\tau} \frac{\partial}{\partial x}$.
\begin{proof} The sum
$ \tilde{S}_\tau ( \si) = \sum_{\ell =0 }^{\infty} e^{i ( \frac{\al}{2} \frac{ \ell^2}{ \tau} - \be \ell)} \si \bigl( \frac{\ell}{ \tau} \bigr) $ satisfies the relation
\begin{gather} \label{eq:1}
\tilde{S}_{\tau} ( \sigma ( \delta -1 ) ) + \tilde{ S}_{\tau} \bigl( \delta ( e ^D - 1 ) \si \bigr) + \si ( 0) = 0
\end{gather}
where $\delta$ is the symbol $ \delta ( x, \tau) = e^{ i( \al x - \be ) + i \al /(2\tau) }$. To prove this, we
apply the summation by parts formula
$$ \sum_{\ell = 0 } ^{n} f_{\ell} ( g_{\ell+1} - g_{\ell} ) + \sum_{\ell=0 }^{n} g_{\ell+1} ( f_{\ell + 1} - f_{\ell} ) = f_{n+1} g_{n+1} - f_0 g_0 $$
to the sequences $f_{\ell} = \si \bigl( \frac{\ell}{\tau} \bigr)$ and $g_{\ell} = \exp \bigl( i \frac{\al}{2} \frac{\ell^2}{\tau} - i \be \ell \bigr).$
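This formula is simply the telescoping of the identity
$$ f_{\ell} ( g_{\ell +1} - g_{\ell} ) + g_{\ell +1} ( f_{\ell +1} - f_{\ell} ) = f_{\ell +1} g_{\ell+1} - f_{\ell} g_{\ell} . $$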
Observe that
$$ f_{\ell +1} = \si \Bigl( \frac{\ell}{\tau} \Bigr) + \tau^{-1} \si' \Bigl( \frac{\ell}{\tau} \Bigr) + \tfrac{1}{2} \tau^{-2} \si'' \Bigl( \frac{\ell}{\tau} \Bigr) + \ldots = \bigl( e^{D} \si \bigr) \Bigl( \frac{\ell}{\tau} \Bigr) + \mathcal{O}( \tau^{-\infty})$$
so that $f_{\ell+1} - f_{\ell} = \bigl( ( e ^D - 1 ) \si \bigr) ( \ell / \tau)$. Furthermore $g_{\ell +1} = g_{\ell} \delta( \ell/ \tau , \tau) $ and Equation (\ref{eq:1}) follows.
We have $$\delta (x,\tau) - 1= 2 i \, e^{ \frac{i}{2} ( \al x - \be ) } \sin \bigl( \tfrac{1}{2} ( \al x - \be ) \bigr) + \mathcal{O} ( \tau^{-1}).$$ Observe that the zero set of $ \sin \bigl( \frac{1}{2} ( \al x - \be ) \bigr)$ is $ \frac{\be}{\al} + \frac{2 \pi}{\al} {\mathbb{Z}}$. So if the support of $\si$ does not intersect this set, we can write $\si = \gamma (\delta -1) $ for some symbol $\gamma$. Applying (\ref{eq:1}) to $\gamma$, we obtain
$$ \tilde{S}_{\tau} ( \si) = - \gamma (0) - \tau^{-1} \tilde{S}_{\tau} ( \si_1), \qquad \text{ with } \quad \gamma (0) = \frac{ \si (0)}{ \delta (0, \tau) -1}, $$
where $\si_1$ is the symbol $\tau \delta ( e^{D} -1) \gamma$. Since the support of $\si_1$ is contained in the support of $\si$, we can do the same computation with $\tilde{S}_{\tau} ( \si_1)$. In this way, we prove that $S_{\tau} ( \si)$ has a complete asymptotic expansion in powers of $\tau^{-1}$. With a careful inspection of this computation, we also get that $S_{\tau} ( \si) = \mathcal{O}( \tau^{-n})$ if $\si$ vanishes to order $n$ at the origin.
\end{proof}
Choosing $\be = \pi$ in the last result, we obtain Theorem \ref{theo:pd2}. For the proof of Theorem \ref{theo:pd1}, we will use the following relation, which has the advantage of being more symmetric than Equation (\ref{eq:1}). In the remainder of this section, we assume that $\be =0$.
\begin{lem} \label{lem:sum_part}
For any $\si \in \mathcal{C}^{\infty} _0( {\mathbb{R}}_+)$, we have
$$ S_{\tau} \bigl( \sin ( \al \cdot ) \si \bigr) = \tfrac{i}{2} \si (0)+ i e^{- i \al / (2 \tau)} \Bigl( S_{\tau} \bigl( \sinh (D) \si \bigr) + \tfrac{1}{2} ( \cosh (D) \si ) (0) \Bigr) $$
up to a $\mathcal{O} ( \tau^{-\infty})$.
\end{lem}
\begin{proof}
We will use the following summation by parts formula
\begin{gather*}
\tfrac{1}{2} f_0 \delta_0 (g) + \sum_{\ell =1 }^{n-1}f_\ell \delta_{\ell} (g) + \tfrac{1}{2} f_n \delta_n (g) + \tfrac{1}{2} g_0 \delta_0 (f) + \sum_{\ell =1 }^{n-1}g_\ell \delta_{\ell} (f) + \tfrac{1}{2} g_n \delta_n (f) + \\
\tfrac{1}{2} g_0 ( f_1 + f_{-1}) + \tfrac{1}{2} f_0 ( g_1 + g_{-1}) - \tfrac{1}{2} f_{n} ( g_{n-1} + g_{n+1} ) - \tfrac{1}{2} g_n ( f_{n-1} + f_{n+1}) = 0
\end{gather*}
to the same sequences $f_{\ell}$ and $g_{\ell}$ that we used in the proof of Theorem \ref{theo:app1}, where for any sequence $(h_{\ell})$ we set $\delta_{\ell} (h) = h_{\ell +1} - h_{\ell -1}$. We have that
$$ \delta_{\ell} (g) = 2i g_{\ell} \sin \bigl( \al \ell / \tau \bigr) \exp ( i \al / 2 \tau ) , \qquad \tfrac{1}{2} ( g_1 + g_{-1} ) = \exp ( i \al / 2 \tau ). $$
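Both formulas follow from the relation $g_{\ell \pm 1} = g_{\ell} \, e^{ \pm i \al \ell / \tau} e^{ i \al / (2 \tau)}$, which gives
$$ g_{\ell +1} - g_{\ell -1} = 2 i g_{\ell} \sin ( \al \ell / \tau ) e^{ i \al / (2 \tau)} , \qquad g_1 = g_{-1} = e^{ i \al / (2\tau)} . $$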
Furthermore
$$ \tfrac{1}{2} \delta_{\ell} (f) \equiv \bigl( \sinh (D) \si \bigr) \Bigl( \frac{\ell}{\tau} \Bigr), \qquad \tfrac{1}{2} ( f_{1} + f_{-1} ) \equiv \bigl( \cosh (D) \si \bigr) ( 0 ) $$
up to a $\mathcal{O} ( \tau^{-\infty})$.
Applying these expressions in the summation by parts formula with $n$ sufficiently large, we get
$$
2 i e^{i \frac{\al}{2\tau}} S_{\tau} ( \sin ( \al \cdot ) \si ) + 2 S_{\tau} ( \sinh (D) \si ) + \bigl( \cosh (D) \si \bigr) ( 0) +
\si (0) e^{i \frac{\al}{2 \tau}} \equiv 0 $$
up to a $\mathcal{O}( \tau^{-\infty})$,
which is the result to be proved.
\end{proof}
\begin{lem} \label{lem:leading_term}
Let $\rho \in \mathcal{C}^{\infty}( {\mathbb{R}}_+)$ with support contained in $[ 0, \tfrac{2\pi}{ |\al|} )$ and such that $\rho \equiv 1$ on a neighborhood of $0$. Then
$$ \frac{2}{\tau} S_{\tau} ( \rho ) = \Bigl( \frac{2 \pi } { \tau} \Bigr) ^{ 1/2} \frac{ e^{i \frac{\pi}{4} \operatorname{sgn} \al }}{| \al |^{1/2}} + \mathcal{O} ( \tau ^{-\infty}).$$
\end{lem}
\begin{proof}
We extend $\rho$ to a smooth even function on ${\mathbb{R}}$. Then
$$2 S_{\tau} ( \rho) = \sum_{\ell= - \infty}^{ + \infty} e^{i \frac{\al}{2} \frac{ \ell^2}{ \tau}} \rho \Bigl( \frac{\ell}{ \tau} \Bigr) $$
By the Poisson summation formula,
$$ 2 S_{\tau} ( \rho ) = \tau \sum_{\ell = - \infty}^{\infty} I_{\ell}, \qquad \text{ with } \quad I_{\ell} = \int_{{\mathbb{R}}} e^{ i \tau ( \frac{\al}{2} x^2 - 2 \pi x \ell ) } \rho (x) dx .$$
We can estimate each $I_\ell$ by the stationary phase method. For $\ell \neq 0$, the phase $\frac{\al}{2} x^2 - 2 \pi x \ell$ has a unique critical point $2 \pi \ell / \al$. Since this point does not belong to the support of $\rho$, we have $I_\ell =\mathcal{O} ( \tau^{-\infty})$. We can actually prove the stronger result that
$$ \sum _{\ell \neq 0} I_{\ell} = \mathcal{O} ( \tau^{-\infty} \bigr).$$
Estimating $I_0$ by the stationary phase method, we get the final result.
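Explicitly, since $\rho$ is identically equal to $1$ near the unique critical point $x = 0$ of the phase, the stationary phase approximation gives
$$ I_0 = \int_{{\mathbb{R}}} e^{ i \tau \frac{\al}{2} x^2 } \rho (x) \, dx = \Bigl( \frac{2 \pi}{\tau | \al |} \Bigr)^{1/2} e^{ i \frac{\pi}{4} \operatorname{sgn} \al } + \mathcal{O} ( \tau^{-\infty}) , $$
the corrections being $\mathcal{O}( \tau^{-\infty})$ because they only involve the derivatives of $\rho$ at the origin, which all vanish. So $\frac{2}{\tau} S_{\tau} ( \rho) = I_0 + \mathcal{O}( \tau^{-\infty})$, which is the claimed formula.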
\end{proof}
\begin{theo} \label{theo:sing-discr-stat}
Let $\si \in \mathcal{C}^{\infty}_0 ( {\mathbb{R}}_+)$ with support contained in $[0, \tfrac{2 \pi}{ |\al|})$. Then
$$ S_{\tau} (\si ) = \tau^{1/2} \sum_{\ell = 0 } ^{\infty} a_{\ell} \tau^{-\ell} + \sum_{\ell =0 }^{\infty} b_{\ell} \tau^{-\ell} $$
where the leading coefficients are
$$ a_0 = \Bigl( \frac{\pi}{2} \Bigr)^{1/2} \frac{ e^{i \frac{\pi}{4} \operatorname{sgn} \al }}{| \al |^{1/2}} \si(0) , \qquad b_0 = i \frac{\si' (0)}{\al} .$$
\end{theo}
\begin{proof} By Theorem \ref{theo:app1}, we can assume that the support of $\si$ is contained in $[ 0, \frac{\pi}{|\al|} )$. Let $\rho$ be a function satisfying the assumption of Lemma \ref{lem:leading_term}. Write
$$ \si (x) = \si(0)\rho(x) - i \sin (\al x) \si_1 ( x ) $$
where $\si_1 $ is in $\mathcal{C}^{\infty}_0 ( {\mathbb{R}}_+)$ with support in $[0, \tfrac{2\pi}{|\al|})$.
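Observe that expanding the relation $\si (x) = \si(0)\rho(x) - i \sin (\al x) \si_1 ( x )$ at the origin and using that $\rho \equiv 1$ near $0$, we get $\si'(0) = - i \al \si_1 (0)$, that is $\si_1 ( 0) = i \si'(0)/ \al$, which is the coefficient $b_0$ of the statement.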
We have by Lemma \ref{lem:sum_part}
$$ S_{\tau} ( \si ) = \si(0) S_{\tau} ( \rho) + \tfrac{1}{2} \si_1 (0) + e^{- i \frac{\al}{ 2 \tau}} \Bigl( S_{\tau} \bigl( \sinh (D) \si_1 \bigr) + \tfrac{1}{2} ( \cosh (D) \si_1 ) (0) \Bigr)$$
By Lemma \ref{lem:leading_term}, $\si (0) S_{\tau} ( \rho) = \tau^{1/2} a_0 + \mathcal{O}( \tau^{-\infty})$ where $a_0$ is defined as in the statement. Furthermore $\tfrac{1}{2} \si_1 (0) + \tfrac{1}{2} ( \cosh (D) \si_1 ) (0) = \si_1 (0) + \mathcal{O} ( \tau^{-1})$. We also have $\si_1 (0) = b_0$. So we obtain
$$ S_{\tau} ( \si ) =\tau^{1/2} a_0 + b_0 + \tau^{-1} R_\tau$$
where $R_{\tau}$ is given by
$$ R_\tau= e^{- i \al / (2 \tau)} \Bigl( S_{\tau} \bigl( \tau \sinh (D) \si_1 \bigr) + \tfrac{\tau}{2} \bigl( (\cosh (D) \si_1 ) (0) - \si_1 (0) \bigr) \Bigr)
$$
Observe that $\tau \sinh (D) \si_1$ is a symbol, so we can apply the same argument to $S_\tau ( \tau \sinh (D) \si_1 )$.
We prove in this way the result by successive approximations.
\end{proof}
\begin{theo} \label{theo:sing-discr-stat++}
Let $\si \in \mathcal{C}^{\infty}_0 ({\mathbb{R}}_+, {\mathbb{C}})$ with support contained in $[0, \frac{2\pi}{|\al|} )$. If $\si$ is even and $\si(x) = \la x^{2n} + \mathcal{O}( x^{2n+1})$ at the origin, then
$$ S_\tau (\si) = \tau^{\frac{1}{2} - n } \Bigl( \frac{i \pi}{2\al} \Bigr)^{\frac{1}{2}} \sum_{\ell= 0 } ^{\infty} \tau^{-\ell} c_\ell + \mathcal{O} ( \tau^{-\infty}) \quad \text{ with } \quad c_0 = \Bigl( \frac{i}{2 \al} \Bigr)^n \frac{ ( 2n ) !}{ n!} \la .$$
If $\si$ is odd and $\si(x) = \la x^{2n+1} + \mathcal{O}( x^{2n+2})$, then
$$ S_\tau (\si) = \tau^{-n} \sum_{\ell= 0 } ^{\infty} \tau^{-\ell} c_\ell + \mathcal{O} ( \tau^{-\infty}) \quad \text{ with } \quad c_0 = \Bigl( \frac{2i} {\al} \Bigr)^{n+1} \frac{ n !}{ 2} \la. $$
\end{theo}
\begin{proof}
First, by adapting the proof of Theorem \ref{theo:sing-discr-stat}, we show that if $\si$ is even, the coefficients $b_\ell$ vanish, whereas if $\si$ is odd, the coefficients $a_\ell$ vanish. For instance, if $\si$ is even, $\si_1$ is odd, so that $\si_1 ( 0) = 0$ and $\sinh (D) \si_1$ is even. We conclude by iterating.
To compute the leading coefficients, we use the filtration $\mathcal{O} ( m)$, $m\in {\mathbb{N}}$ of the space of symbols defined as follows:
$$f \in \mathcal{O} (m) \Leftrightarrow f = \sum_{0 \leqslant \ell \leqslant m/2} \tau^{-\ell} g_{\ell} + \mathcal{O} ( \tau^{-m/2} )$$ where for any $\ell$, the coefficient $g_{\ell} \in \mathcal{C}^{\infty} ( {\mathbb{R}}_+)$ vanishes to order $m -2\ell$ at the origin. Observe that if $f \in \mathcal{O} ( m+1)$ then $f ( 0 ) = \mathcal{O} ( \tau ^{- ( m+1) / 2} )$ and $Df \in \mathcal{O} ( m+1)$.
Assume that $\si \in \mathcal{O} (m)$ and that we want to compute $S_{\tau} (\si)$ up to a $\mathcal{O}(\tau ^{- m/2})$.
We consider again the proof of Theorem \ref{theo:sing-discr-stat}. Introduce the function
$$ \gamma (x) = (\si (x) - \si (0) \rho (x) )/x .$$
Then $ \si_1 = \frac{i}{\al} \gamma + \mathcal{O}( m+1)$ and $\sinh (D) \si_1 = \frac{i }{\al} D \gamma + \mathcal{O}( m+1)$. From this, we deduce that
$$ S_{\tau} ( \si ) = S_{\tau} ( \rho ) \si (0) + \tfrac{i}{\al} \gamma (0) + \tfrac{i }{\al} S_{\tau} \bigl( D \gamma + \mathcal{O} ( m+1) \bigr) + \mathcal{O}(\tau ^{- m/2})$$
To conclude the proof, we choose $\si = \la x^m \rho$ and apply this formula as many times as necessary.
\end{proof}
This completes the proof of Theorem \ref{theo:pd1}.
\section{Geometric interpretation of the leading coefficients}
\label{sec:geom-interpr-lead}
\subsection{Symplectic volumes} \label{sec:symplectic-volumes}
For any $t \in [0,1]$, denote by $\mathcal{M} (\Si, t)$ the moduli space of flat $\operatorname{SU}(2)$-principal bundles whose holonomy $g$ along the boundary $C = \partial \Si$ satisfies $\frac{1}{2}\operatorname{tr} (g) = \cos ( \pi t)$. Equivalently, $\mathcal{M} ( \Si , t)$ is the space of conjugacy classes of group morphisms $\rho$ from $\pi_1 ( \Si)$ to $\operatorname{SU}(2)$ such that for any loop $\gamma \in \pi_1(\Si)$ isotopic to $C$, $\frac{1}{2}\operatorname{tr} ( \rho ( \gamma)) = \cos ( \pi t)$.
We say that a morphism $\rho$ from $\pi_1 ( \Si)$ to $\operatorname{SU}(2)$ is irreducible if the corresponding representation of $\pi_1 ( \Si)$ in ${\mathbb{C}}^2$ is irreducible.
The subset $\mathcal{M} ^{\operatorname{irr}} ( \Si, t)$ of $\mathcal{M} ( \Si, t)$ consisting of conjugacy classes of irreducible morphisms is a smooth symplectic manifold. Using the usual presentation of $\pi_1 ( \Si)$, one easily sees that any morphism $\rho : \pi_1 ( \Si ) \rightarrow \operatorname{SU}(2)$ such that $\rho ( \gamma) \neq \operatorname{id} $ for $\gamma$ isotopic to $C$, is irreducible. Consequently $\mathcal{M} ^{\operatorname{irr} } ( \Si , t ) = \mathcal{M} ( \Si , t)$ for $t \in ( 0,1]$. The subset of $\mathcal{M} ( \Si , 0)$ consisting of non-irreducible representations is in bijection with $\operatorname{Mor} ( \pi_1 ( \Si ), {\mathbb{R}} / {\mathbb{Z}})$, the set of group morphisms from $\pi_1 ( \Si)$ to ${\mathbb{R}}/{\mathbb{Z}}$.
Furthermore, for $t \in (0,1)$, $\mathcal{M} ( \Si , t)$ is $2 ( 3g -2 )$-dimensional, whereas $\mathcal{M} ( \Si , 1) $ and $\mathcal{M} ^{\operatorname { irr}} ( \Si , 0)$ have dimension $2 ( 3g -3)$.
\begin{theo} \label{theo:RR}
For any $ k, \ell \in {\mathbb{N}}$ such that $ 0< \ell \leqslant k$ and $\ell$ is even, we have
$$ N^{g, k + 2 }_{\ell +1} = \int_{ \mathcal{M} ( \Si , s)} e^{ \frac{k}{2 \pi} \om_s } \operatorname{Todd}_s $$
where $s = \ell / k$, $\om_s$ is the symplectic form of $\mathcal{M} ( \Si, s)$ and $\operatorname{Todd}_s$ is any representative of its Todd class.
\end{theo}
As a corollary, we can compute the polynomial function $P_{g,0}$ as a symplectic volume.
\begin{cor} For any $s \in ( 0,1)$, we have
$$ P_{g,0} ( s) = \int_{\mathcal{M} ( \Si, s)} \frac{ \om_s ^{3g-2}}{ ( 3g -2)!} .$$
\end{cor}
We can actually recover partially Theorem \ref{theo:counting_smoot} in this way. Introduce the space $\mathcal{M} ( \Si)$ of conjugacy classes of morphisms from $\pi_1( \Si)$ to $\operatorname{SU}(2)$. Let $f : \mathcal{M} ( \Si) \rightarrow {\mathbb{R}}$ be the function sending $\rho$ into $\frac{1}{\pi} \arccos ( \frac{1}{2} \operatorname{tr }( \rho (C)))$. Then for each $s \in [0,1]$, $\mathcal{M} ( \Si , s)$ is the fiber at $s$ of $f$. Furthermore $(0,1)$ is the set of regular values of $f$. So we can identify the $\mathcal{M} ( \Si , s)$, $ s\in (0,1)$, with a fixed manifold $F$, by a diffeomorphism uniquely defined up to isotopy. In particular, the homology groups of $\mathcal{M} ( \Si , s)$ are naturally identified with the ones of $F$.
In \cite{Je2}, Jeffrey introduced an extended moduli space $\mathcal{M} ^{ \mathfrak{t}} ( \Si)$. This space is a $2(3g-1)$-dimensional $( {\mathbb{R}} / {\mathbb{Z}})$-Hamiltonian space, such that for any $s \in ( 0,1)$, $\mathcal{M} ( \Si ,s )$ is the symplectic reduction of $\mathcal{M} ^{\mathfrak{t}} ( \Si)$ at level $s$. We recover that the various $\mathcal{M} ( \Si, s)$, $s \in (0,1)$ can be naturally identified with a fixed manifold $F$ up to isotopy.
Furthermore, by Duistermaat-Heckman Theorem \cite{DuHe}, the cohomology class of $\om_s$ is an affine function of $s$ with value in $H^2(F)$, that is $[\om_s] = \Om + s c$, where $\Om$ and $c$ are constant cohomology classes in $H^2(F)$.
This implies in particular that $P_{g,0}$ is polynomial with degree $(3g -2)$.
We can explain in this way why the shifts of $\ell$ and $k$ we introduced are natural. Indeed it has been proved by Meinrenken-Woodward \cite{MeWo2} that the canonical class $c_1$ of $\mathcal{M} ( \Si , s)$ is $-4 \Om -2 c$. Using that $\operatorname{Todd} = \hat{A} e^{-\frac{1}{2} c_1} $ where $\hat{A}$ is the $\hat{A}$-genus, we obtain that for any $k, \ell \in {\mathbb{N}}$ such that $ 0< \ell < k$ and $\ell$ is odd,
$$ N^{g,k}_{\ell} = \int_{\mathcal{M} ( \Si , s)} e^{ \frac{k}{2 \pi} \om_s } \hat{A}, \qquad \text{ with } s = \ell / k . $$
Since the $\hat{A}$-genus belongs to $\bigoplus_{\ell} H^{4\ell}(F) $, it follows that
$$ Q( k ,s) = \int_{\mathcal{M} ( \Si , s)} e^{ \frac{k}{2 \pi} \om_s } \hat{A} $$
is a linear combination of the monomials $k^{2m} s ^p$ with $0 \leqslant p \leqslant 2m$ and $0 \leqslant m \leqslant g-1$, which was already proved in Theorem \ref{theo:counting_smoot}.
\subsection{Character varieties}
For any topological space $V$, introduce the character variety $\mathcal{M} ( V) $ defined as the space of group morphisms from $\pi_1 (V)$ to $\operatorname{SU}(2)$ up to conjugation.
If $W$ is a subspace of $V$, we have a natural map from $\mathcal{M} (V)$ to $\mathcal{M} (W)$, that we call the restriction map.
For the circle $C$, since $\pi_1 (C)$ is cyclic, $\mathcal{M} (C)$ identifies with the set of conjugacy classes of $\operatorname{SU} (2)$. So $\mathcal{M} (C) \simeq [0,\pi]$ by the map sending the morphism $\rho$ to the number $\arccos ( \frac{1}{2} \operatorname{tr} \rho (C) )$. Similarly, $\mathcal{M} (S^1) \simeq [0,\pi]$.
For the two-dimensional torus $C \times S^1$, there is a natural bijection between $\mathcal{M}(C \times S^1)$ and the quotient of $H_1 (C \times S^1, {\mathbb{R}})$ by $H_1(C \times S^1) \rtimes {\mathbb{Z}}_2$ defined as follows. Identify $\pi_1 ( C \times S^1)$ with $H_1 ( C \times S^1)$ and denote by $\cdot$ the intersection product of $H_1( C \times S^1)$. Then to any $x \in H_1 ( C \times S^1, {\mathbb{R}})$ we associate the representation $\rho_x$ given by
\begin{gather} \label{eq:defrhox}
\rho _x ( \gamma) = \exp ( (x \cdot \gamma) D), \qquad \forall \gamma \in H_1( C \times S^1)\end{gather}
where $D \in \operatorname{SU}(2)$ is the diagonal matrix with entries $2i \pi$, $-2 i\pi$.
Recall that we denote by $M$ the quotient of $H_1 ( C \times S^1 , {\mathbb{R}})$ by $H_1 ( C \times S^1)$ and by $N$ the quotient of $M$ by $- \operatorname{id}_M$, cf. (\ref{eq:def_E}) and (\ref{eq:def_N}). So the map sending $x$ to $\rho_x$ induces a bijection between $N$ and $\mathcal{M} ( C \times S^1)$. Furthermore the restriction maps from $\mathcal{M} ( C \times S^1)$ to $\mathcal{M} ( C)$ and $\mathcal{M} ( S^1)$ identify respectively with the maps $\be$ and $\al$ introduced in (\ref{eq:def_be}).
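For instance, with the sign convention $\mu \cdot \la = 1$ (the opposite convention exchanges some signs), for $x = p \mu + q \la$ we get $\rho_x ( \mu) = \exp ( - q D)$ and $\rho_x ( \la ) = \exp ( p D )$, so that $\tfrac{1}{2} \operatorname{tr} \rho_x ( \mu ) = \cos ( 2 \pi q)$ and $\tfrac{1}{2} \operatorname{tr} \rho_x ( \la) = \cos ( 2 \pi p )$, in agreement with the fact that the restriction maps are given by $\be$ and $\al$.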
Recall that we introduced subsets $A_1$, $A_2$, $A= A_1 \cup A_2$ and $B$ of $M$. We denote by $\tilde{A}_1$, $\tilde{A}_2$, $\tilde{A}$ and $\tilde B$ their projections in $N$. So $\tilde{A}_1$ and $\tilde{A}_2$ consist respectively of the classes $[\rho] \in \mathcal{M} ( C\times S^1)$ such that $\rho (C) = \operatorname{id}$ or $\rho ( S^1) = \pm \operatorname{id}$. In other words $\tilde{A}_1 = \beta^{-1} (0)$ and $\tilde{A}_2 = \alpha^{-1} ( \{ 0,\pi \} )$.
\begin{lem} \label{lem:mapf}
The image of the restriction map $f$ from $\mathcal{M} ( \Sigma \times S^1)$ to $\mathcal{M} ( C \times S^1)$ is $\tilde{A}$. For any $x \in \tilde{A}_1 \setminus \tilde{A}_2$, $f^{-1}(x)$ identifies with $\operatorname{Mor} ( \pi_1 ( \Si), {\mathbb{R}} /{\mathbb{Z}})$. For any $x \in \tilde{A}_2$, $f^{-1} (x)$ identifies with $\mathcal{M} ( \Si, \be (x) /\pi)$.
\end{lem}
\begin{proof} Let $\mathbb{T}$ be the subgroup of $\operatorname{SU}(2)$ consisting of diagonal matrices. So $\mathbb{T} \simeq {\mathbb{R}} / {\mathbb{Z}}$. We will use that for any $g \in \mathbb{T} \setminus \{ \pm \operatorname{id} \}$, the centralizer of $g$ in $\operatorname{SU}(2)$ is $\mathbb{T}$.
Let $\rho $ be a morphism from $\pi_1 ( \Si \times S^1) =\pi_1 ( \Si ) \times \pi_1 ( S^1)$ to $\operatorname{SU}(2)$. The restriction $\rho'$ of $\rho $ to $\Si$ commutes with $\rho (S^1)$. After conjugating $\rho$ if necessary, we may assume that $ g = \rho ( S^1)$ belongs to $\mathbb {T}$. Consider the following two cases:
\begin{itemize}
\item If $g$ is not central, the image of $\rho'$ is contained in $\mathbb{T}$. This implies that $\rho ( C) = \operatorname{id}$ so that $f ( \rho ) \in \tilde{A}_1$. Conversely, any $g \in \mathbb{T} \setminus \{ \pm \operatorname{id} \}$ and $\rho ' \in \operatorname{Mor} ( \pi_1 ( \Si), \mathbb{T} )$ determine a unique $\rho \in \mathcal{M} ( \Si \times S^1)$.
\item If $g$ is central, then $f( \rho) \in \tilde{A}_2$.
Conversely, any $g = \pm \operatorname{id}$ and $\rho' \in \mathcal{M} ( \Si )$ determine a unique $\rho \in \mathcal{M} ( \Si \times S^1)$.
\end{itemize}
To end the proof in the second case, we view $\mathcal{M} ( \Si )$ as the union of the $\mathcal {M} ( \Si , t) $ where $t$ runs over $[0,1]$.
\end{proof}
Recall that $S$ is the Seifert manifold obtained by gluing the solid torus $D \times S^1$ to $\Sigma \times S^1$ along the diffeomorphism $\varphi$ of $C \times S^1$. Furthermore, $X = \tilde{A} \cap \tilde{B}$.
\begin{theo}
The components of $\mathcal{M} (S)$ are in bijection with $X$. For any $x \in X$ the corresponding component is homeomorphic to $f^{-1} (x)$.
\end{theo}
\begin{proof} It follows from the Van Kampen theorem that $\pi_1 (S)$ is the quotient of $\pi_1 ( \Si \times S^1)$ by the normal subgroup generated by $\varphi (C) $. So the group morphisms from $\pi_1 (S) $ to $\operatorname{SU}(2)$ identify with the group morphisms from $\pi_1 ( \Si \times S^1)$ sending $\varphi ( C)$ to the identity.
On the other hand, for any $x \in H_1 ( C \times S^1, {\mathbb{R}})$ the corresponding representation $\rho_x$ defined in (\ref{eq:defrhox}) is trivial on $\varphi(C)$ if and only if $x$ belongs to the line generated by $ \nu = a \mu + b \la$. So $\tilde{B}$ consists of the conjugacy classes of representations which are trivial on $\varphi(C)$.
This implies that the restriction map from $\mathcal{M} (S)$ to $\mathcal{M} ( \Si \times S^1)$ is injective, and its image is $f^{-1} (\tilde{B})$. The conclusion follows from Lemma \ref{lem:mapf}, taking into account that the fibers of $f$ are connected.
\end{proof}
\subsection{Chern-Simons invariant}
For any three-dimensional closed oriented manifold $V$ and $\rho \in \mathcal{M} (V)$ the Chern-Simons invariant of $\rho$ is defined by
\begin{gather} \label{eq:defCS}
\operatorname{CS}( \rho) = \int _V \tfrac{2}{3} \al^3 + \al \wedge d\al \in {\mathbb{R}} / 2 \pi {\mathbb{Z}}
\end{gather}
where $\al \in \Om ^1(V, \mathfrak{su} (2))$ is any connection form whose holonomy representation is $\rho$.
\begin{theo}
For any $ \rho \in \mathcal{M} ( S)$, the Chern-Simons invariant of $\rho$ is given by
$$ e^{ i \operatorname{CS} ( \rho ) } = \bigl\langle \Theta_A (x) , \Theta_B (x) \bigr\rangle $$
where $x \in \mathcal{M} ( C \times S^1)$ is the restriction of $\rho $ to $C \times S^1$.
\end{theo}
The proof is based on the relative Chern-Simons invariants introduced in \cite{RaSiWe}, cf. also \cite{Fr}.
\begin{proof}[proof (sketch)] We can define a relative Chern-Simons invariant for a compact oriented 3-manifold $V$ with boundary. To do this, we first define a complex line bundle $L \rightarrow \mathcal{M} ( \partial V)$, called the Chern-Simons bundle. Then for any $\rho \in \mathcal{M} ( V)$, $e^{i \operatorname{CS} (\rho)}$ is by definition a vector in $L_{r(\rho)}$ where $r$ is the restriction map from $\mathcal{M} ( V)$ to $\mathcal{M} ( \partial V)$. This invariant has the following three properties:
\begin{itemize}
\item The fiber of $L$ at the trivial representation has a natural trivialization. If $\rho \in \mathcal{M} (V)$ is the trivial representation, then $e^{ i \operatorname{CS} ( \rho)} =1$ in this trivialization.
\item $L$ has a natural connection, and the section of $r^* L$ sending $\rho$ into $e^{i \operatorname{CS} ( \rho)}$ is flat.
\item If $V$ is closed and obtained by gluing two manifolds $V_1$ and $V_2$ along the common boundary, then for any $\rho \in \mathcal{M} (V)$,
$$e^{i \operatorname{CS} ( \rho) } = \bigl\langle e^{i\operatorname{CS} ( \rho_1)}, e^{i \operatorname{CS} ( \rho_2)} \bigr\rangle$$ where $\rho_1$ and $\rho_2$ are the restrictions of $\rho$ to $V_1$ and $V_2$ respectively.
\end{itemize}
In our case, the pull-back of the Chern-Simons bundle of $\mathcal{M} ( C \times S^1)$ by the projection $ M \rightarrow \mathcal{M} ( C \times S^1)$ is the prequantum bundle $L_M$, cf. \cite{LJ2}. Furthermore the images of the restriction maps from $\mathcal{M} ( D \times S^1)$ and $\mathcal{M} (\Si \times S^1)$ to $\mathcal{M} ( C \times S^1)$ are respectively $\tilde{B}$ and $\tilde{A}$. We conclude by lifting everything to $M$ and by using that $\Theta_A$ and $\Theta_B$ are flat and satisfy $\Theta_A ( 0) = \Theta_B (0)$.
\end{proof}
\bibliographystyle{alpha}
\subsection{} Towards the beginning of the 20th century, Schur defined a
functor from polynomial representations of $GL_n(\CM)$ of degree $d$ to
representations of the symmetric group ${\mathfrak S}_d$, which is an equivalence
of categories for $n\geq d$. Later it was observed by Green~\cite{Green} that Schur's functor is well-defined over any field $k$, but is not an equivalence if the characteristic of $k$ is less than or equal to $d$. Green defines the Schur functor as follows. Consider the group $GL_n$ and its polynomial representations. The $n$-dimensional standard representation $E$ of $GL_n$ is polynomial and homogeneous of degree one. Letting the symmetric group ${\mathfrak S}_d$ act on $E^{\otimes d}$ by permutations defines a functor:
\[ \Hom(E^{\otimes d}, -): C_k(n,d) \rightarrow \mathrm{Mod}-k[{\mathfrak S}_d],\]
where $C_k(n,d)$ denotes the category of polynomial representations of $GL_n$ of degree $d$.
The categories $C_k(n,d)$ and $\mathrm{Mod}-k[{\mathfrak S}_d]$ are much more complicated when $\mathrm{char}~k \leq d$. The resulting categories are not semisimple and even basic facts, for example, the dimensions of
the irreducible representations of the symmetric group, are not
known in general. The Schur functor has been used as an effective tool to relate structure
in the representation theory of the general linear groups with that of
the symmetric groups and vice versa.
\subsection{} In the first part of this paper, we relate the existence of the Schur functor to the geometry of certain singular spaces associated to $GL_n$. For $n\geq d$, we give a geometric interpretation of the category $C_k(n,d)$ and the Schur functor in terms of Springer theory for the nilpotent cone $\NC_d \subset \mathfrak{gl}_d$. More precisely, we show:
\begin{thm}
For any $n \geq d$, there is an equivalence of categories:
\[ \phi^\bullet: C_k(n,d) \stackrel{\sim}{\rightarrow} P_{GL_d}(\NC_d;k),\]
which takes the representation $E^{\otimes d}$ to the Springer sheaf $\SC$.
\end{thm}
Using this we provide a geometric proof of Carter-Lusztig's generalization of Schur-Weyl duality.
\begin{thm}\cite[Thm. 3.1]{CarterLusztig}
For any $d\leq n$, the morphism \[ k[{\mathfrak S}_d] \rightarrow \End_{G_n^k}(E^{\otimes d}) \] defined by
permuting the tensor factors is an isomorphism.
\end{thm}
\begin{cor}
There is a commutative diagram of functors:
\begin{equation}
\xymatrix{
P_{G_d}(\NC) \ar[rr]^{\Hom(\SC,-)} \ar[d]^\simeq_{\phi^\bullet} && \mathrm{Mod}-\End(\SC) \\
C_k(n,d) \ar[rr]^{\Hom(E^{\otimes d},-)} && \mathrm{Mod}-k[{\mathfrak S}_d] \ar[u]_{\simeq} ,
}
\end{equation}
\end{cor}
\subsection{} The proofs of these statements are obtained by crossing two bridges.
The first bridge is the geometric Satake equivalence, which relates the representation theory of a split reductive group $G$ over $k$ to a category of equivariant perverse sheaves on the affine Grassmannian for the complex reductive group $G^\vee_\CM$ with dual root datum. In particular, it allows us to identify the category of polynomial representations $C_k(n,d)$ with a category of equivariant perverse sheaves on a closed subvariety $\overline{\Gr^\d} \subset {\EuScript Gr}_{GL_n}$ of the affine Grassmannian.
The second bridge is a relationship, which exists for the group $GL_n$, between the nilpotent cone and affine Grassmannian. In the paper \cite{lu}, Lusztig introduced a map $\phi: \NC_d \rightarrow \overline{\Gr^\d}$. Using a map in the other direction defined on quotient stacks, we prove an equivalence of categories of equivariant perverse sheaves.
\subsection{} In the second part of the paper, we shift our focus to an arbitrary connected complex reductive group $G$.
We observe that our reinterpretation of the Schur functor as a functor from the category of adjoint-equivariant perverse sheaves on the nilpotent cone to modules over endomorphisms of the Springer sheaf is well-defined for any reductive group.
Moreover, we give a reformulation in terms of the Fourier-Sato transform $\TM$ on the Lie algebra $\gg$. Let $j_{\mathrm{rs}} : \gg_{\mathrm{rs}} \hookrightarrow \gg$ denote the open embedding of the regular semi-simple locus and $i: \NC \hookrightarrow \gg$ the closed embedding of the nilpotent cone. Consider the functor
\[\FC := j_{\mathrm{rs}}^* \circ \TM \circ i_* : P_G(\NC;k) \rightarrow P_G(\gg_{\mathrm{rs}};k),\]
where $P_G(X;k)$ denotes the category of $G$-equivariant perverse sheaves. We show that $\FC$ factors through $\Loc_W(\gg_{\mathrm{rs}})$, the category of local systems on $\gg_{\mathrm{rs}}$ with monodromy that factors through the Weyl group, and identify $\FC$ with the functor $\Hom(\SC,-)$.
We propose that $P_G(\NC;k)$ should be thought of as a generalization of the category $C_k(n,d)$ and $\FC$ as the \emph{geometric Schur functor} for the group $G$.
\subsection{Related work}
This paper is a revised version of part of the author's Ph.D. thesis~\cite{Mautnerthesis}.
A generalization of some of the results in this paper has since been explored by Achar, Henderson and Riche. In~\cite{AchHen} and then in~\cite{AchHenRic}, they consider a split reductive group $G$ over $k$ and the category of ``small" representations of $G$ (analogous to a variant of $C_k(n,d)$). They show that the corresponding part of the affine Grassmannian for the dual group $G^\vee_\CM$ contains an open locus that maps $G^\vee$-equivariantly to the nilpotent cone for $G^\vee$. They use this map to construct a functor $\Psi_G$ between categories of perverse sheaves on the affine Grassmannian and nilpotent cone.
The main result of these papers is an equivalence between the composition of $\Psi_G$ with the geometric Schur functor and the composition of the Satake equivalence with taking the zero weight space together with its $W$-action.
Another closely-related picture arises in joint work with Achar. Motivated by the equivalence between $C_k(n,d)$ and $P_{GL_n}(\NC;k)$, we propose~\cite{AchMau} a \emph{geometric Ringel functor} for the equivariant derived category $P_G(\NC;k)$. The definition is very similar to that of the geometric Schur functor in this paper. In particular, we define a functor
\[ \RC := i^* \TM i_*[\dim \NC - \dim \gg]: D_G(\NC;k) \rightarrow D_G(\NC;k) \]
and show that it is an autoequivalence.
We expect that functors $\FC$ and $\RC$ will be very useful tools in understanding the categories $P_G(\NC;k)$ and $D_G(\NC;k)$.
\subsection{} Here is an outline of the paper. Section \ref{sec-DP} contains a summary of the various ingredients that will be used in the paper. In Section \ref{sec-proj}, we study various maps between the nilpotent cone and affine Grassmannian and the relations they satisfy. We then use these relations in Section \ref{sec-equiv} to prove an equivalence of categories of perverse sheaves, with which, in Section \ref{sec-SWthm}, we deduce Carter-Lusztig's Schur-Weyl duality as a corollary. Sections \ref{sec-GSF} and \ref{sec-GAF} contain a proposal for a ``geometric Schur functor."
\subsection{Acknowledgements}
This paper has been a long time coming and so the author had a number of years to benefit from useful conversations and deep insights from many people. He would like to thank in particular: David Ben-Zvi for continued support and advice and Daniel Juteau whose thesis was a source of inspiration for much of this paper. Thanks as well to Pramod Achar, Dennis Gaitsgory, Joel Kamnitzer, David Helm, David Nadler, Catharina Stroppel, Geordie Williamson, Zhiwei Yun, and Xinwen Zhu.
\section{Dramatis Personae}
\label{sec-DP}
\subsection{Notation}
Let $G$ be a connected reductive algebraic group over the complex numbers. Let $\gg$ be its Lie algebra, $\NC \subset \gg$ the nilpotent cone, $\gg_{\mathrm{rs}}$ the regular semi-simple locus and $W$ the Weyl group.
Let $i:\NC \hookrightarrow \gg$ denote the closed inclusion of the
nilpotent cone in the Lie algebra and $j_{{\mathrm{rs}}}:\gg_{{\mathrm{rs}}}
\hookrightarrow \gg$ the open inclusion of the regular semi-simple locus.
Let $k$ be a commutative ring. For a scheme $X$ defined over the complex numbers with an
action of a reductive group $G$, we denote by $P_G(X;k)$ the category
of $G$-equivariant perverse sheaves with coefficients in $k$ on $X$.
This is equivalent to the category of perverse sheaves on the quotient
stack $[X/G]$ (cf.~\cite[Rmk. 5.5]{LO}).
\subsubsection{} We denote the general linear group $GL_r$ by $G_r$ and its Lie algebra
by $\gg_r$. We consider the group $G_r$ over various rings and when
we need to specify the ring over which we are working, we write
$G^k_r$. We fix the standard upper triangular Borel
subgroup $B_r$ of $G_r$, with its unipotent radical $U_r$. Let $T_r
\cong \GM_m^r$ be the Cartan subgroup of diagonal matrices and
$\hg_r \cong \AM^r$ its Lie algebra, the standard Cartan
subalgebra. The Weyl group $W_r={\mathfrak S}_r$ acts on $\hg_r=\AM^r$ by permuting its
coordinates. We denote the weight lattice of $G_r$ by $\Lambda_r$ and
identify it with $\ZM^r$ using our identification of the Cartan
subgroup. The set of highest weights $\Lambda_r^+$ consists of those
$\lambda=(\lambda_1,\ldots, \lambda_r)\in\Lambda_r$ such that $\lambda_i \ge \lambda_j$ for all $i<j$.
We denote the category of homogeneous polynomial representations of
$G_n$ of degree $d$ by $C_k(n,d)$.
Let $\Lambda(n,d)\subset \Lambda_n$ be the set of weights
$(\lambda_1,\lambda_2, \ldots, \lambda_n)$ such that $\sum_i
\lambda_i=d$ and $\lambda_i \ge 0$ for all $1\le i \le n$. As shown
in~\cite[Prop. A.3]{Jan}, the category of polynomial representations
of degree $d$, $C_k(n,d)$, is precisely the full subcategory of those
representations all of whose weights lie in the subset $\Lambda(n,d)$.
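For example, when $n=d=2$ the set $\Lambda(2,2)$ consists of the three weights $(2,0)$, $(1,1)$ and $(0,2)$; the representations $\mathrm{Sym}^2 E$ and $\wedge^2 E = \det$ (with $E$ the standard representation) have weights $\{(2,0),(1,1),(0,2)\}$ and $\{(1,1)\}$ respectively, so both lie in $C_k(2,2)$.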
\subsection{Schur-Weyl Duality and the Schur Functor}
The following classical result is known as Schur-Weyl duality.
\begin{thm}
Let $E$ denote the standard $n$-dimensional representation of
$G_n^\CM$. For any $d\leq n$, the morphism $\CM[{\mathfrak S}_d]
\rightarrow \End_{G_n^\CM}(E^{\otimes d})$ given by permuting the tensor
factors is an isomorphism.
\end{thm}
A generalization of this theorem appears in the work of
Carter and Lusztig. They show that this result
is true over any commutative ring. Namely,
\begin{thm}\cite[Thm. 3.1]{CarterLusztig}
Consider $G_n$ over an arbitrary commutative ring $k$.
Let $E$ denote the standard $n$-dimensional representation of $G_n^k$.
For any $d\leq n$, the morphism $k[{\mathfrak S}_d] \rightarrow \End_{G_n^k}(E^{\otimes d})$ given by
permuting the tensor factors is an isomorphism.
\end{thm}
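As a simple illustration, take $n=d=2$ over $\CM$: then $E^{\otimes 2} \cong \mathrm{Sym}^2 E \oplus \wedge^2 E$ with non-isomorphic irreducible summands, so $\End_{G_2^\CM}(E^{\otimes 2})$ is two-dimensional, matching $\dim \CM[{\mathfrak S}_2] = 2$; the transposition acts by $+1$ on $\mathrm{Sym}^2 E$ and by $-1$ on $\wedge^2 E$.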
We give a new geometric proof of this result in section~\ref{sec-SWthm}.
Using the symmetric group action on $E^{\otimes d}$, one
can define a functor from $C_k(n,d)$ to the category of
representations of the symmetric group ${\mathfrak S}_d$.
\begin{defi}
The Schur functor $\SC:C_k(n,d) \rightarrow \Rep_k {\mathfrak S}_d$ is defined as the functor
$\Hom(E^{\otimes d},-)$, on which $k[{\mathfrak S}_d]$ acts on the right.
\end{defi}
\begin{remark}
This functor was defined by Schur, who showed that it is an equivalence of categories when $k$ is a field of
characteristic greater than $d$. It is not an equivalence if the characteristic is less than or equal to $d$.
\end{remark}
From this definition, it is clear that $\SC$ admits a left adjoint. A
slightly different description of the Schur functor also yields a
right adjoint (cf. \cite{DEK}). We interpret the Schur functor
geometrically and obtain similar descriptions for its adjoints as well.
\begin{remark}
In what follows, we will give a geometric description of the category
$C_k(n,d)$ that does not depend on $n$. We should point out that it
is well-known that for any $n>d$, there is an equivalence $C_k(n,d)
\cong C_k(d,d)$ (see \cite[Thm 4.3.6]{Martin}).
\end{remark}
\subsection{The nilpotent cone for $GL_d$}
For any commutative $\CM$-algebra $R$, the $R$-points of $\gg_d$ form the set of endomorphisms of the free $R$-module $R^d$. The $R$-points of the quotient stack $[\gg_d/G_d]$
form the groupoid whose objects are pairs consisting of a locally free $R$-module of rank $d$ and an endomorphism of it,
and whose morphisms are the isomorphisms between such pairs.
Let $\NC_d \subset \gg_d$ denote
the nilpotent cone, the variety parameterizing nilpotent endomorphisms
of $\CM^d$. For $R$ a commutative $\CM$-algebra, the set of
$R$-points of $\NC_d$ is the set of nilpotent endomorphisms of $R^d$, and the groupoid of $R$-points of $[\NC_d/G_d]$ consists of nilpotent endomorphisms of locally free $R$-modules of rank $d$ and isomorphisms between them.
We will be interested in the category $P_{G_d}(\NC_d;k)$ of equivariant perverse
sheaves on the nilpotent cone. The $G_d$-orbits stratify $\NC$ and the orbits are
labelled by partitions of $d$ according to the Jordan decomposition.
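For instance, when $d=2$ the nilpotent cone $\NC_2$ consists of exactly two orbits: the origin, labelled by the partition $(1,1)$, and the open dense orbit of non-zero (regular) nilpotent matrices, labelled by $(2)$.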
\subsubsection{} We now list some basic facts about the
Springer and Grothendieck resolutions of $\NC$ and $\gg$. For a more
detailed account, see \cite[Chapter 3]{CG}.
Recall that if $\hg \subset \gg$ is a Cartan subalgebra, then Chevalley's restriction theorem tells us that $\gg//G \cong \hg/W$. Let $\hg_{\mathrm{reg}} \subset \hg$ be the complement of the root hyperplanes.
When $G=G_d$, $\hg/W = \AM^d/{\mathfrak S}_d =: \AM^{(d)}$ is naturally isomorphic to
the affine space of monic polynomials of degree $d$. In this case, $\hg_{{\mathrm{reg}}}/W = (\AM^{(d)})_{{\mathrm{rs}}}$ is the open subvariety of monic polynomials with $d$ distinct roots.
Let $\chi:\gg \rightarrow \gg//G$ be the quotient map. In the case of $G_d$, $\chi$ is the map sending an
endomorphism to its characteristic polynomial. Note that
$\chi^{-1}(0)=\NC$.
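Concretely, for $\gg_2$ the map $\chi$ sends a matrix $a$ to
\[ \chi(a) = t^2 - \mathrm{tr}(a)\, t + \det(a) \in \AM^{(2)}, \]
and $a$ is nilpotent precisely when $\mathrm{tr}(a) = \det(a) = 0$, i.e., when its characteristic polynomial is $t^2$.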
Recall the
existence of Grothendieck's simultaneous resolution $\pi: \tilde\gg
\rightarrow \gg$ of the fibers of the map $\chi$, which fits into the
following diagram:
\[
\begin{array}{c}
\label{diag-gg}
\xymatrix{
& \tilde\NC \ar@{^{(}->}[r]\ar[ld]_{\pi_\NC} & \tilde\gg
\ar[ld]^{\pi}\ar[rd]_{\tilde\chi} & \\
\NC \ar@{^{(}->}[r]\ar[rd] & \gg\ar[rd]^{\chi} & & \AM^d \ar[ld] \\
& \{0\} \ar@{^{(}->}[r] & \AM^{(d)} &
}
\end{array}
\]
The map $\pi_\NC$ is semi-small and $\pi$ is small. The restriction
of the map $\chi$ to $\gg_{{\mathrm{rs}}}=\chi^{-1}(\hg_{{\mathrm{reg}}}/W)$ is smooth with fibers isomorphic to
$G/T$. The quotient $\hg_{{\mathrm{reg}}}/W$ has fundamental group the braid group $\tilde W$. As
the fibers of $\chi$ over $\hg_{{\mathrm{reg}}}/W$ are simply connected, the fundamental group of
$\gg_{{\mathrm{rs}}}$ is also the braid group. The restriction of $\pi$ to
$\tilde\gg_{{\mathrm{rs}}}$ is the $W$-cover corresponding to the pure braid subgroup.
\subsection{Springer theory}
Here we review Juteau's study of modular Springer theory~\cite{Ju-thesis}. Before doing so, we remark that while Juteau works with varieties over finite fields, we consider the analogous situation over the complex numbers.
Let $\SC=\pi_{\NC*}\mathbf{IC}_{\tilde\NC}$ denote the Springer sheaf and
$\GC=\pi_* \mathbf{IC}_{\tilde\gg}$ the Groth\-en\-dieck sheaf.
There exists a \emph{Fourier-Sato transform} functor, denoted
\[\TM_\gg: D_G(\gg;k) \rightarrow D_G(\gg^*;k).\]
It is defined by composing the functor defined in~\cite[\S 3.7]{KS1} with the shift $[n^2]$. With this shift, $\TM$ is $t$-exact for the perverse $t$-structure~\cite[Proposition~10.3.18]{KS1}. This functor is an equivalence of categories with inverse
\[
\:{}^\backprime\mathbb{T}_\gg: D_G(\gg^*;k) \rightarrow D_G(\gg;k).
\]
We fix an isomorphism $\gg^* \cong \gg$ and will identify them from now on. The following is due to
Juteau in the modular case:
\begin{prop} There is a natural isomorphism $\TM_\gg(\SC)\cong \GC$.
\end{prop}
Using this, Juteau defines a modular Springer correspondence from irreducible representations of $W$ to $\mathbf{IC}$-sheaves on $\NC$. The correspondence associates to an irreducible $W$-representation the image under $\TM j_{{\mathrm{rs}}!*}$ of the corresponding local system on $\gg_{\mathrm{rs}}$. The $\mathbf{IC}$-sheaves that occur in the correspondence are precisely those contained in the top (or equivalently in the socle) of the Springer sheaf.
\medskip
The Grothendieck sheaf $\GC$ carries a natural $W$-action because it is a Goresky-MacPherson extension of a pushforward
along a $W$-cover. Using this action, we can equip the Springer sheaf
$\SC$ with a $W$-action in two ways: by the Fourier transform or by
restriction to the nilpotent cone.
\begin{prop}
\label{prop-signs}
The two $W$-actions on $\SC$, one defined by the Fourier transform
and the other by restriction to the nilpotent cone, differ by the
sign character. In other words, there is an isomorphism of $W$-sheaves:
\[\TM_\gg(\GC) \cong i^* \GC \otimes \mathrm{sgn}.\]
\end{prop}
A version of this proposition for $\bar\QM_\ell$-sheaves appeared
in~\cite{HOT} and was proven for $\DC$-modules in~\cite{GiSpr}
and~\cite{HK}. A proof in the modular case is to appear in work of Achar, Henderson, and Riche.
\subsection{The affine Grassmannian}
We first recall the affine Grassmannian for $GL_n$, which
will play a role analogous to $\NC_d$, and then the Beilinson-Drinfeld
Grassmannian, which corresponds under the same analogy to $\gg_d$.
\subsubsection{Local version} Let ${\EuScript Gr}$ denote the affine
Grassmannian over $\CM$ for the group $G_n = GL_n$. In other
words, ${\EuScript Gr}$ is the ind-scheme over $\CM$ whose $R$-points are the set
$G_n(\KC)/G_n(\OC)$ where $\KC$ is the ring of Laurent series
$R((t))$ and $\OC$ the ring of power series $R[[t]]$, for $R$ a
commutative $\CM$-algebra. The group $G_n(\KC)$ acts transitively on
the set of $\OC$-lattices in $\KC^{\oplus n}$, and the stabilizer of
the standard lattice is $G_n(\OC)$. It follows that one can view the
$R$-points of ${\EuScript Gr}$ as lattices.
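For example, when $n=1$ and $R=\CM$, the $\OC$-lattices in $\KC = \CM((t))$ are exactly the submodules $t^m \CM[[t]]$ for $m \in \ZM$, so the affine Grassmannian of $GL_1$ has $\CM$-points indexed by $\ZM$.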
For each $\lambda \in \Lambda^+$, we let ${\EuScript Gr}^\lambda$ denote the
corresponding $G_n(\OC)$-orbit in ${\EuScript Gr}$.
Let $\mathbf d=(d,0,\ldots,0)\in \ZM^n$. We will be interested in $\overline{\Gr^\d}$
and $G_n(\OC)$-perverse sheaves supported on it. Let $\OC_d$ denote
the quotient $\OC/t^d\OC$. As the congruence subgroup
$\Ker(G_n(\OC)\rightarrow G_n(\OC_d))=1+t^d\gg_n(\OC)$ acts trivially on
$\overline{\Gr^\d}$, it is equivalent to study $P_{G_n(\OC_d)}(\overline{\Gr^\d};k)$.
From the definitions above, one finds that $\overline{\Gr^\d}$ is a projective
variety parameterizing lattices $L$ contained in the standard lattice
$L_0=\OC^{\oplus n}$ such that the quotient $L_0/L$ is locally free of rank $d$.
Considering the $G_n(\OC)$-orbits in $\overline{\Gr^\d}$, we find that, as $n \ge
d$, they are labelled by partitions of $d$.
Let $\varpi_1$ be the fundamental weight $(1,0,\ldots,0)$. There is a $G_n(\OC)$-equivariant semi-small resolution $(\Gr^{\varpi_1})^{*d} \rightarrow
\overline{\Gr^\d}$. The $\CM$-points of $(\Gr^{\varpi_1})^{*d}$ are given by all flags
$0\subset V^1 \subset V^2 \subset \ldots \subset V^{d-1} \subset
L_0 /L$ preserved by the action of $\OC$.
\subsubsection{Global version}
The $d$-th Beilinson-Drinfeld Grassmannian~\cite{BD,MV} of $GL_n$, ${\mathfrak G}(n,d)$, is an ind-scheme defined over $\AM^{(d)}=\AM^d/{\mathfrak S}_d$ whose
points are isomorphism classes of triples $(x,\FC,\beta)$ where
$x\in \AM^{(d)}$, $\FC$ is a rank $n$ vector bundle on $\AM^1$,
and $\beta$ is a trivialization of $\FC$ away from $\cup x \subset
\AM^1$.
Let ${\mathfrak K}$ be the ring of rational functions $R(t)$ and ${\mathfrak O}$ the
ring of polynomials $R[t]$. Let $\LC_0$ denote the standard
${\mathfrak O}$-lattice in ${\mathfrak K}^{\oplus n}$.
Following \cite{Ngo,MVy}, we will be interested in a particular closed
subscheme of ${\mathfrak G}(n,d)$, which we denote by ${\mathfrak G}_d$. The $R$-points
of ${\mathfrak G}_d$ are the ${\mathfrak O}$-lattices $\LC \subset \LC_0$ such that
$\LC_0/\LC$ is a locally free $R$-module of rank $d$. To see that this
is a subfunctor of ${\mathfrak G}(n,d)$, note that any such lattice is a locally
free ${\mathfrak O}$-module of rank $n$, i.e., a vector bundle of rank $n$ on $\AM^1$.
Equivalently, the points of ${\mathfrak G}_d$ can be expressed as
\[ {\mathfrak G}_d(R) = \{\, g \LC_0 \subset \LC_0 \mid g \in G_n({\mathfrak K}) \cap \gg_n({\mathfrak O}),\ \deg (\det(g)) = d \,\}. \]
From this point of view, the natural map to $\AM^{(d)}$ is defined by sending a lattice $\LC = g \LC_0$
to the determinant of $g$.
Let $G_{n,d}$ be the group scheme over $\AM^{(d)}$ whose fiber at a
point $P\in \AM^{(d)}(R)$ (thought of as a monic polynomial)
has $R$-points $G_{n,d}(P)(R)=GL_n({\mathfrak O}/(P))$.
Ng\^{o} checks in~\cite[2.1.1]{Ngo} that $G_{n,d} \rightarrow \AM^{(d)}$ is smooth
with geometrically connected fibers of dimension $n^2d$. It comes
with a natural action $G_{n,d}\times_{\AM^{(d)}}{\mathfrak G}_d \rightarrow {\mathfrak G}_d$.
We will consider the stack $[{\mathfrak G}_d/G_{n,d}]$. Its $R$-points form the
groupoid whose objects are pairs $(L \subset F)$, where $F$ is a locally free ${\mathfrak O}$-module of rank $n$
with $L$ a submodule such that $F/L$ is locally free over $R$ of rank $d$. Again,
$[\overline{\Gr^\d}/G_n(\OC_d)]$ is then simply the subfunctor of such pairs
where $P=t^d$.
Analogous to the Grothendieck resolution, there is a global
resolution, which we denote by $\tilde{\mathfrak G}_d$. It is the scheme whose
$R$-points are full flags of ${\mathfrak O}$-lattices.
Observe that the fibers of ${\mathfrak G}_d$, $G_{n,d}$, and $\tilde{\mathfrak G}_d$ over
$0\in\AM^{(d)}$ are respectively $\overline{\Gr^\d}$, $G_n(\OC_d)$, and $(\Gr^{\varpi_1})^{*d}$.
We thus see that the spaces described fit into a diagram analogous to
that of Springer theory.
\[
\begin{array}{c}
\label{diag-GG}
\xymatrix{
& (\Gr^{\varpi_1})^{*d} \ar@{^{(}->}[r]\ar[ld]_{\pi_{\EuScript Gr}} & \tilde{\mathfrak G}_d
\ar[ld]^{\pi_{\mathfrak G}}\ar[rd]_{\tilde f} & \\
\overline{\Gr^\d} \ar@{^{(}->}[r]\ar[rd] & {\mathfrak G}_d \ar[rd]^{f} & & \AM^d \ar[ld] \\
& \{0\} \ar@{^{(}->}[r] & \AM^{(d)} &
}
\end{array}
\]
Similarly to above, $\pi_{\EuScript Gr}$ is semi-small and $\pi_{\mathfrak G}$ is small. Let ${\mathfrak G}_{{\mathrm{rs}}} = f^{-1}((\AM^{(d)})_{{\mathrm{rs}}})$, $\tilde{\mathfrak G}_{{\mathrm{rs}}} = \pi_{\mathfrak G}^{-1}({\mathfrak G}_{{\mathrm{rs}}})$, and let $j_{{\mathfrak G}{\mathrm{rs}}}: {\mathfrak G}_{{\mathrm{rs}}} \rightarrow {\mathfrak G}_d$ be the inclusion. The restriction of $\pi_{\mathfrak G}$, $\pi_{{\mathfrak G}{\mathrm{rs}}}: \tilde{\mathfrak G}_{{\mathrm{rs}}} \rightarrow {\mathfrak G}_{{\mathrm{rs}}}$, is a Galois ${\mathfrak S}_d$-cover. It follows that $\pi_{{\mathfrak G}*} \mathbf{IC}_{\tilde{\mathfrak G}_d} \cong (j_{{\mathfrak G}{\mathrm{rs}}})_{!*} (\pi_{{\mathfrak G}{\mathrm{rs}}})_! \mathbf{IC}_{\tilde{\mathfrak G}_{{\mathrm{rs}}}}$.
\subsection{Geometric Satake}
Mirkovi\'c and Vilonen~\cite{MV} prove, following work in characteristic zero
of Lusztig~\cite{Lu2} and then Ginzburg~\cite{Gi}, that for a split
reductive group $G$ defined
over a commutative ring $k$, the category $P_{G^\vee(\OC)}({\EuScript Gr}; k)$ of
equivariant perverse sheaves with coefficients in $k$ on the affine
Grassmannian for the Langlands dual group $G^\vee/\CM$, together
with its convolution structure, is tensor equivalent to the category
of representations of $G$ over $k$.
In this paper, we only consider the case $G=G^\vee=GL_n$. We use the
following immediate corollaries of the results of~\cite{MV}.
\begin{cor}
\label{cor-geomsat}
There is an equivalence of categories
\[P_{G_n(\OC)}(\overline{\Gr^\d};k) \cong C_k(n,d).\]
Under this equivalence, the $GL_n$-representation
$E^{\otimes d}$ (here $E$ is the standard $n$-dimensional
representation) corresponds to the restriction of $\pi_{{\mathfrak G}} \def\gg{{\mathfrak g}} \def\GM{{\mathbb{G}}*}
\mathbf{IC}_{\tilde{{\mathfrak G}} \def\gg{{\mathfrak g}} \def\GM{{\mathbb{G}}}}$ to ${\EuScript Gr}$, or equivalently to the
pushforward $\pi_{{\EuScript Gr}*} \mathbf{IC}_{(\Gr^{\varpi_1})^{*d}}$.
The symmetric group ${\mathfrak S}} \def\sg{{\mathfrak s}} \def\SM{{\mathbb{S}}_d$ acts on
\[\pi_{{\EuScript Gr}*} \mathbf{IC}_{(\Gr^{\varpi_1})^{*d}} \cong [(j_{{\mathfrak G}} \def\gg{{\mathfrak g}} \def\GM{{\mathbb{G}},rs})_{!*} (\pi_{{\mathfrak G}} \def\gg{{\mathfrak g}} \def\GM{{\mathbb{G}},rs})_! \mathbf{IC}_{\tilde{\mathfrak G}} \def\gg{{\mathfrak g}} \def\GM{{\mathbb{G}}_{rs}}]|_{{\EuScript Gr}}\]
by the deck transformations of $\pi_{{\mathfrak G}} \def\gg{{\mathfrak g}} \def\GM{{\mathbb{G}},rs}$ and the funtoriality of $!*$-extensions. This action corresponds under the geometric Satake equivalence to the permutation action of ${\mathfrak S}} \def\sg{{\mathfrak s}} \def\SM{{\mathbb{S}}_d$ on $E^{\otimes d}$.
\end{cor}
\section{A projection map and Lusztig's section}
\label{sec-proj}
In this section, we relate the spaces $\NC_d$ and
$\overline{\Gr^\d}$ and their quotient stacks. We do so using the functor of
points descriptions given in the previous sections.
\begin{lemma}
\label{lem-maps}
There exist natural morphisms:
\begin{equation}
\xymatrix{
\overline{\Gr^\d} \ar[rd]^{\tilde\psi}\ar[d] & \NC_d
\ar[d]\ar@{_{(}->}_{\overline\phi}[l] & {\mathfrak G}_d
\ar[rd]^{\tilde\Psi}\ar[d] & \gg_d \ar[d]\ar@{_{(}->}_{\overline\Phi}[l]
\\
[\overline{\Gr^\d}/G_n(\OC_d)] \ar@<1ex>[r]^{\psi} & [\NC_d/G_d] \ar@{.>}[l]^{\phi}
& [{\mathfrak G}_d/G_{n,d}] \ar@<1ex>[r]^{\Psi} & [\gg_d/G_d]
\ar@{.>}[l]^{\Phi}
}
\end{equation}
such that all of the solid morphisms form a commutative diagram, while the
dotted morphisms satisfy $\psi \circ \phi= Id_{[\NC_d/G_d]}$ and $\Psi \circ \Phi= Id_{[\gg_d/G_d]}$.
\end{lemma}
\begin{proof}
Recall diagrams \ref{diag-gg} and \ref{diag-GG}. In the lemma above,
the morphisms denoted by lower-case letters are defined as the
restrictions of the upper-case morphisms along the closed embeddings $\NC_d
\rightarrow \gg_d$ and $\overline{\Gr^\d} \rightarrow {\mathfrak G}_d$. We thus need only describe the
global (or upper-case) morphisms and check that they satisfy the
relations described.
\subsection*{The maps $\tilde\Psi$ and $\Psi$}
The map $\tilde\Psi$ is the forgetful map which associates to an
$R$-point $\LC\in {\mathfrak G}_d(R)$ the rank $d$ locally free
$R$-module $\LC_0/\LC$ together with the endomorphism given by the action of
$t\in {\mathfrak O}$. Similarly, $\Psi$ associates to a point $(P,L \subset F)$ of $[{\mathfrak G}_d/G_{n,d}]$
the locally free $R$-module $F/L$ together with the
endomorphism defined by multiplication by $t$.
\subsection*{The maps $\overline\Phi$ and $\Phi$}
To any $R$-point of $[\gg_d/G_d]$, i.e., an endomorphism $a$ of a locally free rank $d$ $R$-module $E$, we associate the pair $\Phi(a) = (L \subset F)$, where $F = E[t]$ and $L = (a-t)E[t]$. To see that this is indeed a point of ${\mathfrak G}_d$, note that $a-t \in GL(E(t)) \cap \mathfrak{gl}(E[t])$ and that its determinant is the characteristic polynomial of $a$, and is therefore monic of degree $d$, as desired.
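As a sanity check, take $d=1$ and let $a = \lambda \in R = \End_R(R)$: then $\Phi(\lambda)$ is the lattice $(\lambda - t)R[t] \subset R[t]$, whose quotient $R[t]/(\lambda - t) \cong R$ is free of rank one and on which $t$ acts as multiplication by $\lambda$, recovering the endomorphism we started with.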
\begin{remark}
The local version $\overline\phi$ was first observed by
Lusztig~\cite{lu}. As far as I am aware, a description of the global map $\overline\Phi$ first appeared in~\cite{MVy}.
\end{remark}
\begin{remark}
\label{rmk-embed}
Lusztig~\cite{lu} observes that the map $\overline\phi$ is an open
embedding when $n=d$, and the same is true of $\overline\Phi$. Moreover,
the image of the embedding intersects every orbit and provides a
natural bijection between the orbits of $\overline{\Gr^\d}$ and those of $\NC_d$.
\end{remark}
It remains to exhibit a natural equivalence $id \stackrel{\sim}{\rightarrow} \Psi \circ \Phi$.
We claim that such an equivalence can be defined as follows. For any $(a,E)$, consider the map of $R$-modules given by
\[E \hookrightarrow E[t] \rightarrow E[t]/(a-t)E[t].\]
Note that the map is injective as $E \cap (a-t)E[t] =0$. To prove surjectivity, one can use induction on the degree to show that any polynomial $p(t) \in E[t]$ lies in $E + (a-t)E[t]$. To finish the proof, we observe that this isomorphism intertwines the action of $a$ on $E$ with the action of $t$ on $E[t]/(a-t)E[t]$.
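Concretely, for the surjectivity step: if $p(t) = \sum_{i=0}^m e_i t^i$ with $m \ge 1$, then
\[ p(t) + (a-t)\, e_m t^{m-1} = \sum_{i=0}^{m-2} e_i t^i + (e_{m-1} + a e_m)\, t^{m-1} \]
has degree at most $m-1$, so induction reduces the claim to constant polynomials, which already lie in $E$.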
\end{proof}
\section{Equivalence of categories}
\label{sec-equiv}
Fix $n \ge d$.
\subsection{} In this section we use the Lemma \ref{lem-maps} from the
previous section to prove certain equivalences of categories.
\begin{thm}
\label{thm-equiv}
The maps $\phi:[\NC_d/G_d] \rightarrow [\overline{\Gr^\d}/G_n(\OC_d)]$ and $\Phi:[\gg_d/G_d] \rightarrow [{\mathfrak G}_d/G_{n,d}]$ of Lemma \ref{lem-maps} induce equivalences of
categories \[\phi^\bullet:=\phi^*[d^2-nd]:
P_{G_n(\OC)}(\overline{\Gr^\d}) \stackrel{\sim}{\rightarrow} P_{G_d}(\NC_d),\] \[
\Phi^\bullet:=\Phi^*[d^2-nd] :
P_{G_{n,d}}({\mathfrak G}} \def\gg{{\mathfrak g}} \def\GM{{\mathbb{G}}_d) \stackrel{\sim}{\rightarrow} P_{G_d}(\gg_d).\]
\end{thm}
\begin{proof}
By~\cite[4.2.5]{BBD}, the (shifted) pull-back along a smooth morphism $f:X \rightarrow Y$
with geometrically connected fibers is a fully faithful embedding of the
category of perverse sheaves on $Y$ to that on $X$.
We now check that $\Phi$ enjoys these properties (and therefore
$\phi$ does as well by base change). From this it will follow that
$\Phi^\bullet$ and $\phi^\bullet$ are fully faithful.
First consider the case $n=d$. Recall that $\overline\phi$ and $\overline\Phi$ are open
embeddings (Remark \ref{rmk-embed}). From this, we deduce that $\phi$ and $\Phi$ are smooth. As the image of $\gg_d$ meets every orbit of ${\mathfrak G}_d$ in an open and connected set, we conclude that the fibers are
geometrically connected.
In the general case $n>d$, we can factor $\tilde\Phi_d^n$ as the composition of
$\tilde\Phi_d^d$ with the map $a_{d,n}:[{\mathfrak G}_d/G_{d,d}] \rightarrow
[{\mathfrak G}_n/G_{n,d}]$. In \cite[Lemme 2.2.1]{Ngo}, it is shown that
$a_{d,n}$ is smooth with connected fibers.
By Lemma~\ref{lem-maps}, the compositions $\phi^* \circ \psi^*$ and
$\Phi^* \circ \Psi^*$ are the identity functors on $P_{G_d}(\NC_d)$ and
$P_{G_{n,d}}({\mathfrak G}_d)$ respectively. We conclude that
$\phi^\bullet$ and $\Phi^\bullet$ are also essentially surjective,
which completes the proof.
\end{proof}
\subsection{}
We conclude this section with an observation that we will not use in what
follows, but which puts in perspective the relationship between the various
equivariant derived categories.
\begin{cor}
The pullback functor on equivariant derived categories
\[\psi^*: D_{G_d}(\NC_d) \rightarrow D_{G_n(\OC)}(\overline{\Gr^\d})\]
(resp. $\Psi^*: D_{G_d}(\gg_d) \rightarrow D_{G_{n,d}}({\mathfrak G}_d)$)
splits the functor
\[\phi^*:D_{G_n(\OC)}(\overline{\Gr^\d}) \rightarrow D_{G_d}(\NC_d)\]
(resp. $\Phi^*:
D_{G_{n,d}}({\mathfrak G}_d) \rightarrow D_{G_d}(\gg_d)$), in the following
sense:
For any two objects $A,B \in D_{G_d}(\NC_d)$, $\psi^*$ induces an injection
of graded vector spaces
\[\Ext^*_{D_{G_d}(\NC_d)}(A,B) \hookrightarrow \Ext^*_{D_{G_n(\OC)}(\overline{\Gr^\d})}(\psi^*A,\psi^*B),\]
which naturally splits the projection map given by $\phi^*$.
\end{cor}
\begin{remark}
It is worth noting that as functors between equivariant derived
categories, the pullback functors do not induce equivalences, and in
fact the various categories are not equivalent. To see this, consider
the $\Ext$-groups between the standard and costandard sheaf on the
stratum associated to a partition $\lambda$. The $\Ext$ groups in the
equivariant derived categories will agree with the equivariant
cohomology of the stratum, which in turn agrees with the group
cohomology (of the reductive part) of the stabilizer of a point.
Unless $n=d$ and $\lambda$ is the trivial partition, these will not
agree.
\end{remark}
\section{Generalized Schur-Weyl Theorem}
\label{sec-SWthm}
In this section we give a new, geometric proof of Carter and Lusztig's Schur-Weyl duality with general coefficients \cite[Thm. 3.1]{CarterLusztig}.
\begin{thm} Let $GL_n^k$ denote $GL_n$ over a ring
$k$ and $E$ be the standard $n$-dimensional representation. For any $d\leq n$, the
action of ${\mathfrak S}_d$ on $E^{\otimes d}$ induces an isomorphism
\[ k[{\mathfrak S}_d] \rightarrow \End_{GL_n^k}(E^{\otimes d}). \]
\end{thm}
\begin{proof}
Recall the following consequences of the geometric Satake equivalence summarized in Corollary \ref{cor-geomsat}.
There is a natural isomorphism
\[ \End_{GL_n^k}(E^{\otimes d}) \cong \End_{P_{G_n(\OC)}({\EuScript Gr})}(\pi_{{\EuScript Gr} *}\mathbf{IC}_{(\Gr^{\varpi_1})^{*d}}).\]
The perverse sheaf $\pi_{{\EuScript Gr} *}\mathbf{IC}_{(\Gr^{\varpi_1})^{*d}} \cong [(j_{{\mathfrak G}{\mathrm{rs}}})_{!*} (\pi_{{\mathfrak G}{\mathrm{rs}}})_! \mathbf{IC}_{\tilde{\mathfrak G}_{{\mathrm{rs}}}}]|_{\EuScript Gr}$ carries an action of ${\mathfrak S}_d$ induced by the action of the deck transformations of $\pi_{{\mathfrak G}{\mathrm{rs}}}$, and this action agrees under the isomorphism above with the permutation action of ${\mathfrak S}_d$ on $E^{\otimes d}$.
We wish to translate this action from the affine Grassmannian to the nilpotent cone. A simple
generalization of the maps $\overline\Phi$ and $\overline\phi$ completes the following commutative cube:
\begin{equation}
\xymatrix{
& \tilde\NC_d \ar[r]\ar[dl]\ar[dr] & \tilde\gg_d \ar[dl]\ar[dr] & \\
\NC_d \ar[dr]\ar[r] & \gg_d \ar[dr] & (\Gr^{\varpi_1})^{*d} \ar[dl]\ar[r] & \tilde{\mathfrak G}_d \ar[dl] \\
& \overline{\Gr^\d} \ar[r] & {\mathfrak G}_d & \\
}
\end{equation}
Here the left and right faces are both pull-back squares, all the
maps from upper-right to lower-left are proper, and the maps from upper-left to lower-right are inclusions.
By Theorem \ref{thm-equiv}, the functor $\phi^\bullet$ induces an isomorphism
\[ \End_{P_{G_n(\OC)}({\EuScript Gr})}(\pi_{{\EuScript Gr} *}\mathbf{IC}_{(\Gr^{\varpi_1})^{*d}}) \cong \End_{P_{G_d}(\NC_d)}(\SC).\]
The ${\mathfrak S}_d$-action on $\pi_{{\EuScript Gr} *}\mathbf{IC}_{(\Gr^{\varpi_1})^{*d}}$ corresponds to the analogously-defined action on $\SC \cong i^*(j_{\mathrm{rs}})_{!*} \pi_{{\mathrm{rs}}*} \mathbf{IC}_{\tilde\gg_{\mathrm{rs}}}$.
On the other hand, Juteau \cite{Ju-thesis} observes that, as in characteristic 0, the
Fourier transform exchanges the Springer and Grothendieck sheaves and
so
\[ \End(\SC) \stackrel{\TM}{\cong} \End(\GC) \cong \End(\pi_{{\mathrm{rs}}!} \mathbf{IC}_{\tilde\gg_{\mathrm{rs}}}) \cong k[{\mathfrak S}_d].\]
Proposition \ref{prop-signs} says that the resulting action of ${\mathfrak S}_d$ by
Fourier transform differs from the one arising by restriction (as in
the previous paragraph) by a sign character. But as the first induces
an isomorphism, the second does as well.
\end{proof}
\section{Geometric Schur Functor}
\label{sec-GSF}
The previous section allows us to identify the functor $\Hom(\SC,-)$ with the Schur functor. In other words, we have a
commutative diagram of functors:
\begin{equation}
\xymatrix{
P_{G_d}(\NC_d) \ar[rr]^{\Hom(\SC,-)} \ar[d]^\cong && \mathrm{Mod}-\End(\SC) \\
C_k(n,d) \ar[rr]^{\Hom(E^{\otimes d},-)} && \mathrm{Mod}-k[{\mathfrak S}_d] \ar[u]_\cong ,
}
\end{equation}
where the action of ${\mathfrak S}_d$ on $\SC$ is through the action by deck transformations on $\GC$ and the identification of $\SC$ with $i^* \GC$.
Motivated by this connection, we will shift our focus to a general connected complex reductive group.
For a general complex reductive group $G$, by analogy, we can define a \emph{Schur functor} from the category of $G$-equivariant perverse sheaves on the nilpotent cone in $\gg$ to representations of the Weyl group,
\[ \Hom(\SC,-): P_G(\NC_G;k) \rightarrow \mathrm{Mod}-\End(\SC).\]
In this section, we will introduce another functor that is closely related to $\Hom(\SC,-)$. We propose that this new functor, which also exists for any complex reductive group, should be viewed as a geometric Schur functor.
For any connected reductive group $G$, consider the functor:
\[\FC := j^*_{{\mathrm{rs}}} \circ \TM_\gg \circ (i_\NC)_* : P_{G}(\NC) \rightarrow
P_{G}(\gg_{{\mathrm{rs}}}).\]
Note that it is exact because each component is exact. To ease notation, we will let $\TM=\TM_\gg$,
$j=j_{{\mathrm{rs}}}$ and $i=i_\NC$. Let $\Loc_W(\gg_{\mathrm{rs}}) \subset P_G(\gg_{\mathrm{rs}})$ denote the full subcategory of local systems with monodromy factoring through the Weyl group.
In the proof of the following lemma we will use parabolic restriction and induction functors. For their definition and some basic properties in this context, see for example \cite{AchMau}.
\begin{lemma}
\label{lem:locW}
The functor $\FC$ factors through the inclusion $\Loc_W(\gg_{\mathrm{rs}}) \subset P_G(\gg_{\mathrm{rs}})$.
\end{lemma}
\begin{proof}
We first show that the statement is true for all simple objects in $P_G(\NC;k)$. For each nilpotent orbit $\OC$ and irreducible local system $\LC \in P_G(\OC;k)$, consider the $\mathbf{IC}$-sheaf $A= \mathbf{IC}(\OC,\LC)$. Let $T$ be a maximal torus for $G$. Note that ${\mathrm{res}}_T^G A \neq 0$ if and only if $A$ is in Juteau's modular Springer correspondence and hence $\TM A \cong \mathbf{IC}(\gg_{\mathrm{rs}}, \LC_V)$, where $\LC_V \in \Loc_W(\gg_{\mathrm{rs}})$ and corresponds to some irreducible $k[W]$-representation $V$.
Suppose instead that ${\mathrm{res}}_T^G A = 0$. Then there exists some Levi subgroup $L$ such that $A_0 = {\mathrm{res}}_L^G A$ is cuspidal. It follows that $\TM A_0$ is also cuspidal and by~\cite[Lemma~4.4]{mirkovic}\footnote{While Mirkovi\'{c} works with $\DC$-modules, the same argument works in the constructible context with arbitrary coefficients.}, every cuspidal sheaf on $\lg$ is supported on $\NC_L \times Z(\lg)$. By adjunction, there exists a non-zero map $A \rightarrow \ind_L^G A_0$. As $A$ is simple, \[\overline{\supp \TM A} \subset \supp \bigl( \ind_L^G \TM A_0 \bigr) \subset {}^G(\NC_L \times Z(\lg)).\]
Finally, ${}^G(\NC_L \times Z(\lg)) \cap \gg_{{\mathrm{rs}}} = \emptyset$ as $L$ is not a maximal torus. Thus $j^* \TM A =0$.
From this we conclude that the functor $\FC$ at least factors through the category of $G$-equivariant local systems on $\gg_{\mathrm{rs}}$. It remains to check that for a general object $X \in P_G(\NC)$, $\FC X$ has monodromy that factors through $W$.
Let $Q \subset X$ be minimal such that $\FC(X/Q)=0$ (Such an object exists because the category of perverse sheaves is Artinian). By the exactness of $\FC$, $\FC(Q) \cong \FC(X)$. By the minimality of $Q$, the top of $Q$ is a direct sum of $\mathbf{IC}$-sheaves in Juteau's modular Springer correspondence. Recall that the Springer sheaf is a direct sum of the projective covers of such $\mathbf{IC}$-sheaves. Thus for some $m \geq 0$, there exists a surjection $\SC^{\oplus m} \rightarrow Q$. Using again the exactness of $\FC$, we conclude that $\FC Q$ is a quotient of $m$ copies of $j^*\GC$ and thus has monodromy that factors through the Weyl group.
\end{proof}
In order to compare $\FC$ with $\Hom(\SC,-)$, we consider the functor
\[ \rho: \mathrm{Mod}-\End(j^*\GC) \rightarrow \Loc_{W}(\gg_{{\mathrm{rs}}}) \]
defined by the tensor product $(-) \otimes_{\End(j^* \GC)} j^*\GC$ and its inverse
\[\Hom(j^*\GC, -): \Loc_W (\gg_{\mathrm{rs}}) \rightarrow \mathrm{Mod}-\End(j^*\GC).\]
\begin{thm}
\label{thm-schurgeom}
There is a natural equivalence of functors:
\begin{equation}
\xymatrix{
P_{G}(\NC) \ar[rr]^{\FC} \ar[rd]_{\Hom(\SC,-)} &&
\Loc_{W}(\gg_{rs})\\
& \mathrm{Mod}-\End(\SC)\cong \mathrm{Mod}-\End(j^*\GC) \ar[ru]_{\rho}^\sim &
}
\end{equation}
\end{thm}
\begin{proof}
We proceed by constructing a sequence of equivalences. The Fourier transform induces an equivalence of functors:
\[\rho(\Hom(\SC,-)) \cong \rho(\Hom(\GC,\TM i_*(-))).\]
For any $X \in P_G(\NC)$, the Fourier transform $\TM X$ is perverse and so we obtain an exact sequence:
\[0\rightarrow K \rightarrow \TM X \rightarrow {}^p j_*j^* \TM X \rightarrow C \rightarrow 0\]
where $K$ and $C$ have support in $\gg - \gg_{\mathrm{rs}}$. Applying the exact functor $\Hom(\GC,-)$ gives an isomorphism $\Hom(\GC,\TM X) \cong \Hom(\GC, {}^p j_*j^* \TM X)$. Using the adjunction between $j^*$ and ${}^p j_*$ we obtain an equivalence:
\[ \rho(\Hom(\GC,\TM i_*(-))) \cong \rho(\Hom(j^* \GC, j^* \TM i_*(-))). \]
The composition $\rho \circ \Hom(j^*\GC,-)$ restricted to $\Loc_W(\gg_{\mathrm{rs}})$ is isomorphic to the identity functor. From Lemma \ref{lem:locW}, the functor $j^* \TM i_*$ takes values in $\Loc_W(\gg_{\mathrm{rs}})$ and so we may conclude:
\[\rho(\Hom(j^* \GC, j^* \TM i_*(-))) \cong j^* \TM i_*(-) = \FC(-).\]
\end{proof}
\section{Geometric Adjoint Functors}
\label{sec-GAF}
In the previous section we introduced the notion of a geometric Schur functor for an arbitrary connected complex reductive group. We close with a simple observation in this general context.
By the usual adjointness for closed and open embeddings, the geometric Schur functor has the following left and right adjoints which we consider restricted to $\Loc_W(\gg_{\mathrm{rs}})$:
\[\GC_R = {}^p i^! \circ \:{}^\backprime\mathbb{T}_\gg \circ {}^p j_* :\Loc_W(\gg_{\mathrm{rs}}) \rightarrow P_G(\NC),\]
\[\GC_L = {}^p i^* \circ \:{}^\backprime\mathbb{T}_\gg \circ {}^p j_! : \Loc_W(\gg_{\mathrm{rs}}) \rightarrow P_G(\NC).\]
\begin{lemma}
\label{lem-antiorb}
The functor $\:{}^\backprime\mathbb{T} j_{!*}: \Loc_W(\gg_{{\mathrm{rs}}}) \rightarrow P_{G}(\gg)$ factors through the subcategory $P_{G}(\NC)
\subset P_{G}(\gg)$.
\end{lemma}
In other words, the perverse extension of a representation of the
Weyl group is anti-orbital.
\begin{proof}
It suffices to check on a projective generator of $\Loc_W(\gg_{\mathrm{rs}})$. The local system $j^*\GC$ is such a projective generator and $j_{!*}j^*\GC \cong
\GC$. Thus $\:{}^\backprime\mathbb{T} j_{!*} j^*\GC \cong i_* \SC$, which has support on $\NC$.
\end{proof}
\begin{prop}
The adjoint functors $\GC_L,\GC_R$ are both right inverses to $\FC$.
\end{prop}
\begin{proof}
We provide the proof for $\GC_R$, the proof for $\GC_L$ is
completely analogous.
We seek a natural equivalence $j^* \TM i_* {}^p i^! \:{}^\backprime\mathbb{T} {}^p j_*
\cong Id$. Consider an element $\LC \in \Loc_W(\gg_{\mathrm{rs}})$. One has the canonical inclusion $j_{!*}\LC \hookrightarrow {}^p j_* \LC$.
The Fourier transform $\:{}^\backprime\mathbb{T}$ is an equivalence of categories, so
$\:{}^\backprime\mathbb{T} j_{!*}\LC \hookrightarrow \:{}^\backprime\mathbb{T} {}^p j_* \LC$. By Lemma
\ref{lem-antiorb}, the sheaf $\:{}^\backprime\mathbb{T} j_{!*}\LC$ is supported on the nilpotent
cone $\NC$.
On the other hand, for any $M\in P_G(\gg)$, the perverse sheaf $i_* {}^p
i^! M$ injects into $M$ in such a way that any subobject $\AC$ of $M$
supported on $\NC$ factors through it.
In particular, letting $M = \:{}^\backprime\mathbb{T} ({}^p j_* \LC)$ and $\AC= \:{}^\backprime\mathbb{T}
(j_{!*} \LC)$ we obtain a commutative triangle:
\begin{equation}
\xymatrix{
\:{}^\backprime\mathbb{T} j_{!*} \LC \ar@{^{(}->}[rr]\ar@{_{(}->}[rd] && \:{}^\backprime\mathbb{T} {}^p j_* \LC \\
& i_* {}^p i^! \:{}^\backprime\mathbb{T} {}^p j_* \LC \ar@{^{(}->}[ur]&
}
\end{equation}
We now apply the exact functor $j^* \circ \TM$ to obtain:
\begin{equation}
\xymatrix{
\LC \ar[rr]^\sim \ar@{_{(}->}[rd] && \LC \\
& j^* \TM i_* {}^p i^! \:{}^\backprime\mathbb{T} {}^p j_* \LC \ar@{^{(}->}[ur]&
}
\end{equation}
We conclude that the two lower maps are also isomorphisms, and thus
$j^* \circ \TM$ applied to the adjunction morphism gives the natural
equivalence we sought.
\end{proof}
\bibliographystyle{myalpha}
\section{Preliminaries}
\setcounter{equation}{0}
\vskip .1in \noindent A problem of great interest in the classical Complex Function Theory is the
following:
\vskip .1in \noindent Given a function $f(z) = \sum_{k=0}^\infty a_k z^k$, analytic at $z=0$,
determine the asymptotic distribution of the zeros of the {\em partial sums}
$s_n(z) = \sum_{k=0}^n a_k z^k$.
\vskip .1in \noindent Some contributors to this area include Jentzsch \cite{J}, who explored the
problem for a finite radius of convergence; Szeg\H o \cite{Sz}, who explored
the exponential function $e^z$; Rosenbloom \cite{R}, who discussed the
angular distribution of zeros using potential theory, and applied his work to
a sub-class of the confluent hypergeometric functions; Erd\H os and Tur\'an
\cite{ET}, who used minimization techniques to discuss angular distributions
of zeros; Newman and Rivlin \cite{NR1, NR2}, who related the work of Szeg\H o
to the Central Limit Theorem;
Edrei, Saff and Varga \cite{ESV}, who gave a thorough analysis
for the family of Mittag-Leffler functions; Carpenter, Varga and Waldvogel
\cite{CVW}, who refined the work of Szeg\H o; and Norfolk \cite{No1, No2},
who refined the
work of Rosenbloom on the confluent hypergeometric functions and a related
set of integral transforms.
\vskip .1in \noindent In this paper, we will analyze the behavior of the zeros of sections
of the binomial expansion, that is
\begin{equation}
B_{r,n}(z) = \sum_{k=0}^r {n \choose k} z^k~, \qquad 1 \le r \le n~.
\end{equation}
\vskip .1in \noindent This investigation not only fits into the general theme of the works cited,
but also arises from matroid theory. Specifically (cf. \cite{W}), the
{\em univariate reliability polynomial} for the uniform matroid $U_{r,n}$
is given by
\begin{equation}
{\rm Rel}_{r,n} (q) = (1-q)^n B_{r,n} \left( \frac{q}{1-q} \right)
= \sum_{k=0}^r {n \choose k} q^k (1-q)^{n-k}~,
\end{equation}
which can be written as ${\rm Rel}_{r,n}(q) = (1-q)^{n-r} H_{r,n} (q)$,
where
\begin{equation}\label{rel}
H_{r,n}(q) = \sum_{k=0}^r {n \choose k} q^k (1-q)^{r-k}
= (1-q)^r B_{r,n} \left( \frac{q}{1-q} \right)~.
\end{equation}
\vskip .1in \noindent Some special cases are easy to analyze, and may thus be dispensed with. In
particular,
\begin{enumerate}
\item $B_{1,n} (z) = 1 + nz$, which has its only zero at $z =
-\frac{1}{n}$.
\item $B_{n,n} (z) = (1+z)^n$, which clearly has a zero of multiplicity $n$
at $z= -1$.
\item $B_{n-1,n} (z) = (1+z)^n - z^n$. Noting that this polynomial cannot have
positive zeros, we obtain the zeros $z = \frac{\omega^k}{1-\omega^k}$,
for $1 \le k \le n-1$, where $\omega = \exp \left( \frac{2\pi i}{n}
\right)$ is the principal $n$-th root of unity,
all of which lie on the vertical line ${\rm Re}~z
= - \frac{1}{2}$, as verified below.
\end{enumerate}
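\vskip .1in \noindent To verify the last assertion, write $z = \frac{\omega^k}{1-\omega^k}$, so that
\begin{displaymath}
{\rm Re}~z = \frac{{\rm Re} \left[ \omega^k \left( 1-\overline{\omega^k} \right) \right]}{|1-\omega^k|^2}
= \frac{{\rm Re}~\omega^k - 1}{2 \left( 1-{\rm Re}~\omega^k \right)} = -\frac{1}{2}~,
\end{displaymath}
since $|\omega^k|=1$ gives $|1-\omega^k|^2 = 2(1-{\rm Re}~\omega^k)$.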
\vskip .1in \noindent In what follows, we will therefore focus on the cases
$1 \le r < n-1$, and give two collections of results. The first concerns
bounding regions for the zeros of $B_{r,n}(z)$, the rest
convergence results.
\vskip .1in \noindent We note that this problem was investigated independently by Ostrovskii
\cite{Os}, who obtained many of the results that we present here.
The methods used there involved using a bilinear transformation to convert
the problem to an integral formulation. This choice of formulation makes the
proofs more involved and requires some additional constraints. By contrast,
we claim that our methods given here flow directly from the structure of the
problem, and yield additional results, in terms of additional bounds on the
zeros, and limiting cases. The paper \cite{Os} also gives a result on the
spacing of the zeros on the limit curve, using classical potential-theoretic
methods. We do not duplicate that result here, but give formulations in terms
of specific points on the curve.
\vskip .1in \noindent The methods used generate a set of constants and related
limit curves for $ 0 < \alpha < 1$,
defined by
\begin{equation}\label{cur1}
\frac{1}{2} \le K_\alpha = \alpha^\alpha (1-\alpha)^{1-\alpha} < 1~,
\end{equation}
\begin{equation}\label{cur2}
C_\alpha = \left\{ z~:~\frac{|z|^\alpha}{|1+z|} = K_\alpha
,~|z| \le \frac{\alpha}{1-\alpha} \right\} ~,
\end{equation}
and
\begin{equation}\label{cur3}
C_\alpha^\prime = \left\{ z~:~\frac{|z|^\alpha}{|1+z|} = K_\alpha
,~\frac{\alpha}{1-\alpha} \le |z| \right\}~.
\end{equation}
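\vskip .1in \noindent For example, when $\alpha = \frac{1}{2}$ we have $K_{1/2} = \frac{1}{2}$, and the defining relation of $C_{1/2}$ becomes $4|z| = |1+z|^2$ with $|z| \le 1$; in particular, $C_{1/2}$ meets the positive real axis at $z=1$, since $4 \cdot 1 = |1+1|^2$.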
The properties of these curves are outlined in Lemma \ref{lem1}. Section 3
also presents bounds which are used to simplify the proofs of some of the
results presented here.
\section{Main Results}
\setcounter{equation}{0}
\vskip .1in \noindent As discussed above, we begin with a theorem on bounds of the zeros
of $B_{r,n}(z)$, and follow with results on convergence of those zeros.
\begin{thm}\label{thm1}
Let $r,n$ be positive integers, with $1 \le r < n-1$, and let
$z^*$ be any zero of $B_{r,n}(z) = \sum_{k=0}^r
{n \choose k} z^k$.
\vskip .1in \noindent Then, $z^*$ lies in the intersection of two closed discs
and a half-plane, and exterior to a plane closed curve. Specifically,
\begin{equation}
|z^*| \le \frac{r}{n+1-r}~,
\end{equation}
\begin{equation}
\left| z^* - \frac{\gamma^2}{1-\gamma^2} \right|
\le \frac{\gamma}{(1-\gamma^2 )}~,
{\rm ~where~} \gamma = \frac{r}{n-1}~,
\end{equation}
\begin{equation}
{\rm Re~} z^* > -\frac{1}{2}~,
\end{equation}
and $z^*$ lies exterior to the curve $C_{r/n}$, as defined in (\ref{cur1},
\ref{cur2}).
\end{thm}
\noindent {\em Proof. } We begin by considering the ratio of coefficients
\begin{equation}
\frac{{n \choose k}}{{n \choose k-1}} = \frac{n-k+1}{k}~,
\end{equation}
which is decreasing in $k$.
\vskip .1in \noindent Hence, writing $B_{r,n}\left( \frac{r}{n-r+1} z \right)
=\sum_{k=0}^r a_k z^k$, we have that
\begin{displaymath}
\frac{a_k}{a_{k-1}} = \frac{n-k+1}{k} \cdot \frac{r}{n-r+1} \ge 1~.
\end{displaymath}
That is, $\{ a_k\}_{k=0}^r$ is non-decreasing, so by the Enestr\"om-Kakeya
Theorem (\cite{He}, p. 462), the zeros of this polynomial satisfy $|z| \le 1$.
Hence, the zeros of $B_{r,n}(z)$ satisfy
$|z| \le \frac{r}{n-r+1}$.
\vskip .1in \noindent For the second bounding circle, we refer to Wagner \cite{W}, where it is
shown, again using the Enestr\"om-Kakeya Theorem, that the zeros of
$H_{r,n}(q)$ as given in (\ref{rel}), lie in the annulus
\begin{displaymath}
\frac{1}{n-r} \le |q| \le \frac{r}{n-1}~.
\end{displaymath}
\vskip .1in \noindent Since $z = -1$ is clearly not a zero of $B_{r,n} (z)$ for $r < n$,
we may make
the substitution $z = \frac{q}{1-q}$ (or equivalently
$q = \frac{z}{1+z}$) in (\ref{rel}), which shows
immediately that $H_{r,n}(q) = (1+z)^{-r} B_{r,n}(z)$, from which
one obtains
\begin{equation}\label{wag}
\left| \frac{z}{1+z} \right| \le \frac{r}{n-1} =: \gamma~.
\end{equation}
\vskip .1in \noindent Writing this last inequality in terms of the real and imaginary parts
of $z$ yields the claimed result.
\vskip .1in \noindent Noting that (\ref{wag}) implies that
$\left| \frac{z}{1+z} \right| < 1$,
yields the half-plane ${\rm Re~} z > -\frac{1}{2}$, as claimed.
\vskip .1in \noindent For the final bound, we mimic the analysis of Buckholtz \cite{Bu} on
the partial sums of $e^z$, and write
\begin{equation}\label{eq1.1}
(1+z)^{-n} B_{r,n}(z) = 1 - \frac{z^r}{(1+z)^n} \cdot
R_{r,n} (z)~,
\end{equation}
where
\begin{equation}\label{rem}
R_{r,n}(z) = \sum_{k=r+1}^n {n \choose k} z^{k-r}~
= z^{n-r} B_{n-r-1,n} \left( \frac{1}{z} \right).
\end{equation}
\vskip .1in \noindent For clarity, we set $\beta = r/n$. Inside and on the curve
$C_\beta$ (\ref{cur1},\ref{cur2}), we have
$|z| < \frac{\beta}{1-\beta }$
and $\left| \frac{z^r}{(1+z)^n} \right| \le K_\beta^n$,
where $K_\beta$ is defined in (\ref{cur1}). This, with the upper bound
of Lemma \ref{lem3} yields
\begin{equation}\label{eq1.2}
\left| (1+z)^{-n} B_{r,n}(z) \right| \ge 1-\left| \frac{z^r}{(1+z)^n} \right| \cdot
\left| R_{r,n}(z) \right|
> 1 - K_\beta^n \cdot K_\beta^{-n} = 0 ~,
\end{equation}
which is the desired result. \hfill {\em Q.E.D.} \vskip .3in
\vskip .1in \noindent Note that, writing $\alpha$ for the limiting ratio $r/n$,
the limiting form of the second bounding circle occurring in this result,
namely
\begin{displaymath}
\left| z - \frac{\alpha^2}{1-\alpha^2} \right| = \frac{\alpha}{1-\alpha^2}~,
\end{displaymath}
intersects the negative real axis at
$z = -\frac{\alpha}{1+\alpha }$.
This circle is contained in the first, namely $|z| = \frac{\alpha}{1-\alpha}$, and
both meet at the common point $z = \frac{\alpha}{1-\alpha}$, as the following computation shows.
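\vskip .1in \noindent Indeed, at $z = \frac{\alpha}{1-\alpha}$ we have
\begin{displaymath}
\left| \frac{\alpha}{1-\alpha} - \frac{\alpha^2}{1-\alpha^2} \right|
= \frac{\alpha (1+\alpha ) - \alpha^2}{1-\alpha^2} = \frac{\alpha}{1-\alpha^2}~,
\end{displaymath}
so this point lies on the second circle, and it clearly lies on the first.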
\vskip .1in \noindent The limiting case $|z| = \frac{\alpha}{1-\alpha}$
corresponding to the first
bounding circle, and the bounding half-plane ${\rm ~Re~} z > -\frac{1}{2}$ both
appear in \cite{Os}, with proofs that require significantly more detailed
derivation. The bounding curves and associated zeros for the case $r=10$
and $n = 30$ are illustrated in figure \ref{fig1}.
\vskip .1in \noindent We now use these results, and the bounds from the proof, to discuss
some convergence results.
\begin{thm}\label{thm2}
Suppose that $1 \le r_j < n_j-1$ for all $j$, that
$\lim_{j \to \infty} n_j = \infty$, and that
\begin{displaymath}
\lim_{j \to \infty} \frac{r_j}{n_j} = \alpha,~0 < \alpha < 1~.
\end{displaymath}
Then
\begin{enumerate}
\item The zeros of $\{B_{r_j,n_j}(z)\}$ converge uniformly to points
of the curve $C_\alpha$, i.e.
\begin{displaymath}
\sup_{z:B_{r_j,n_j}(z)=0} d(z,C_\alpha ) \to 0~,
\end{displaymath}
where $d(z,C_\alpha ) = \inf_{\zeta \in C_\alpha} |z-\zeta |$ is the
distance from $z$ to $C_\alpha$,
and
\item Each point of $C_\alpha$ is a limit point of zeros of
$\left\{ B_{r_j,n_j} (z) \right\}_{j=1}^\infty$.
\end{enumerate}
\end{thm}
\noindent {\em Proof. } Set $\beta_j = {r_j}/{n_j}$, so that
$\lim_{j \to \infty} \beta_j = \alpha$. Using (\ref{eq1.1}), the zeros
of $B_{r_j,n_j}(z)$
then satisfy
\begin{equation}\label{poly1}
\frac{z^{r_j}}{(1+z)^{n_j}} \cdot R_{r_j,n_j}(z) = 1~.
\end{equation}
\vskip .1in \noindent Using Theorem \ref{thm1}, Lemma \ref{lem1}
and Lemma \ref{lem3}, these zeros lie outside
the curve $C_{\beta_j}$, and thus satisfy
$\nu \beta_j < X_{\beta_j} \le |z| \le \frac{\beta_j}{1-\beta_j}$,
where $-X_{\beta_j}$ is the intersection of the curve $C_{\beta_j}$ with the
negative real axis, and $\nu$ is the unique positive solution to
$xe^{1+x} = 1$.
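\vskip .1in \noindent (Rewriting $xe^{1+x}=1$ as $xe^x = e^{-1}$ shows that $\nu = W(e^{-1}) \approx 0.2785$, where $W$ denotes the Lambert $W$-function.)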
\vskip .1in \noindent Hence,
\begin{equation}\label{ineq}
\frac{\nu r_j}{n_j(r_j+1)} \le \frac{K_{\beta_j}^{n_j}
\left| R_{r_j,n_j}(z) \right|}
{\sum_{k=r_j+1}^{n_j} {n_j \choose k} {\beta_j}^k (1-\beta_j)^{n_j-k}} \le 1~,
\end{equation}
for this region. Note that the sum in the denominator above converges to $1/2$
by the Central Limit Theorem.
\vskip .1in \noindent Consequently,
$\lim_{j \to \infty} \left| R_{r_j,n_j}^{1/n_j}(z) \right| = K_\alpha^{-1}$
uniformly on the
set in question. Taking
moduli and $n_j$-th roots in (\ref{poly1}), we observe that
the zeros of $B_{r_j,n_j}(z)$ must satisfy
\begin{equation}\label{asym}
\frac{|z|^{\beta_j}}{|1+z|} |R_{r_j,n_j}(z)|^{1/n_j} = 1~.
\end{equation}
\vskip .1in \noindent Since $\beta_j \to \alpha$, this establishes that every limit point of
a sequence of zeros of $B_{r_j,n_j}(z)$ lies on $C_\alpha$. Since, by Theorem
\ref{thm1}, the zeros lie in a compact set, it follows that the zeros converge
uniformly to points of $C_\alpha$.
\vskip .1in \noindent For the second claim, fix any $\zeta \in C_\alpha$ with
$\zeta \ne z_\alpha = \alpha/(1-\alpha )$. Then $|\zeta | < z_\alpha$, so we
may take a small neighborhood $D$ of $\zeta$ such that $0 < |z| < z_\alpha$
for $z \in {\overline D}$. Consequently, for $j$ sufficiently large,
$|z| < z_{\beta_j}$ for all $z \in {\overline D}$, and it follows from Lemma
\ref{lem3} and the Central Limit Theorem, that
\begin{displaymath}
\left| R_{r_j, n_j}(z) \right|^{1/n_j} \to K_\alpha^{-1}~,
\end{displaymath}
uniformly on ${\overline D}$.
\vskip .1in \noindent In particular, for large $j$, $R_{r_j,n_j}(z) \ne 0$ on ${\overline D}$,
so we may fix an analytic branch of $R_{r_j,n_j}^{1/n_j}(z)$ in $D$. Letting
$\theta_j = {\rm arg~} (R_{r_j,n_j}^{1/n_j}(z))$ (with arguments in the
range $(0,2\pi )$), we then have
\begin{displaymath}
e^{-i \theta_j} R_{r_j,n_j}^{1/n_j}(z) \to K_\alpha^{-1}~,
\end{displaymath}
uniformly on compact subsets of $D$.
\vskip .1in \noindent By shrinking $D$, we may assume that the latter limit holds uniformly on
$D$. Furthermore, we may assume that $0 < {\rm arg~} (z) < 2 \pi$ for $z \in D$,
and thus the powers $z^{\beta_j}$ and $z^\alpha$ are well-defined in $D$. Hence,
\begin{equation}\label{svan}
\frac{z^{\beta_j}}{1+z} R_{r_j,n_j}^{1/n_j}(z) - \frac{z^\alpha}{1+z}
K_\alpha^{-1} e^{i \theta_j} \to 0~,
\end{equation}
uniformly on $D$.
\vskip .1in \noindent Since the mapping $w = \frac{z^\alpha}{1+z} K_\alpha^{-1}$ maps $C_\alpha$
onto an arc of the unit circle, it maps $D \cap C_\alpha$ onto a subarc. Thus,
for $j$ sufficiently large, there exists an integer $p_j$ such that
$\frac{z^\alpha}{1+z} K_\alpha^{-1} e^{i \theta_j} = e^{2\pi i p_j /n_j}$
for some $z = \zeta_j \in D \cap C_\alpha$. We may further assume that
$\zeta_j \to \zeta$. It now follows from Hurwitz' theorem and (\ref{svan}) that,
for $j$ sufficiently large,
\begin{displaymath}
\frac{z^{\beta_j}}{1+z} R_{r_j,n_j}^{1/n_j}(z) - e^{2 \pi i p_j / n_j}
\end{displaymath}
has a zero $z_j \in D$. Each such zero $z_j$ satisfies (\ref{poly1}), and so
by (\ref{eq1.1}), is a zero of $B_{r_j,n_j}(z)$. This proves that every point on
$C_\alpha$ is a limit point of zeros of $\left\{ B_{r_j,n_j}(z) \right\}$.
\hfill {\em Q.E.D.} \vskip .3in
\vskip .1in \noindent We note that, thanks to (\ref{rem}), the non-trivial zeros of
$R_{r_j,n_j}(z)$
converge uniformly to all points which lie on the curve $C_\alpha^\prime$,
as defined in (\ref{cur3}).
\vskip .1in \noindent This result also appears in \cite{Os}, using more elaborate asymptotics.
The analysis presented requires a deletion of a neighborhood of the
singular point
$z_\alpha = \frac{\alpha}{1-\alpha }$. Consideration of the results of
Lemma \ref{lem3} shows that this is not necessary with our methods.
\vskip .1in \noindent The remaining results presented here do not appear in the literature.
\vskip .1in \noindent The asymptotic expansions in the proof of Theorem \ref{thm2}
immediately give
the following result on the rate of convergence. We note that, as shown
in \cite{CVW} in the case of the exponential function, this rate is
best possible.
\begin{thm}\label{thm3}
Fix $0 < \delta < 1$. Then, there exists a constant $c$, depending only on $\delta$,
such that, if
$r,n$ are large, and $0 < \delta < \frac{r}{n} < 1-\delta$,
for any zero $z^*$ of $B_{r,n}(z)$
\begin{displaymath}
\min_{\zeta \in C_{r/n}} |z^*-\zeta| \le \frac{c}{|z^*-\frac{r}{n-r}|}\cdot \frac{\ln n}{n}~.
\end{displaymath}
\vskip .1in \noindent Additionally, proximity to the singular point
$z_{r/n} = \frac{r}{n-r}$ is of order $O \left( \frac{1}{\sqrt{n}}\right)$.
\end{thm}
\noindent {\em Proof. } Set $\beta = r/n$. From (\ref{ineq}), we obtain the approximation
\begin{equation}\label{svan2}
\left| R_{r,n}^{1/n} (z) \right| \cdot K_\beta = 1+G_{r,n}(z)
\cdot \frac{\ln n}{n}~,
\end{equation}
where $G_{r,n}(z)$ is uniformly bounded in a region containing the zeros.
\vskip .1in \noindent Let $z^*$ be a zero of $B_{r,n}(z)$, and let $\zeta$ be the point on
$C_\beta$ closest to $z^*$. Note that $|\zeta-z^*| = o(1)$ as a consequence of
Theorem \ref{thm2}, as applied to sequences for which $\beta$ converges.
Note that the curve $C_\beta$ is asymptotically a pair of straight lines
at angle $\pi/4$ to the real axis close to the point
$z_\beta = \beta /(1-\beta )$.
Hence, if $z^*$ is close to $z_\beta$, by Theorem \ref{thm1}, it must lie
in the wedges between these lines and the vertical line ${\rm Re~} z = z_\beta$,
from which $|z^*-z_\beta | = O(|\zeta - z_\beta |)$.
\vskip .1in \noindent Note that $z^*$ satisfies (\ref{asym}), without the subscript $j$, and thus,
by (\ref{svan2}), we have
\begin{displaymath}
\frac{|z^*|^\beta}{|1+z^*|}\cdot K_\beta^{-1} = \left( 1 + G_{r,n}(z^*)
\cdot \frac{\ln n}{n} \right)^{-1}~.
\end{displaymath}
Expanding $F(z) = \ln (K_\beta^{-1} |z|^\beta/|1+z|)
= {\rm Re~} \ln(K_\beta^{-1} z^\beta / (1+z))$ as a Taylor series centred at $\zeta$
(noting that $F(\zeta) = 0$), we find that
\begin{displaymath}
|z^*-\zeta | = O \left( \left| \frac{\zeta (1+\zeta)}{\beta - (1-\beta ) \zeta}
\cdot G_{r,n}(z^*)\cdot \frac{\ln n}{n} \right| \right)
= O \left( \frac{1}{|z_\beta - \zeta|}\cdot \frac{\ln n}{n} \right)~.
\end{displaymath}
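\vskip .1in \noindent Here, writing $F = {\rm Re}~h$ with $h(z) = \ln ( K_\beta^{-1} z^\beta /(1+z))$, we have used that
\begin{displaymath}
h^\prime (z) = \frac{\beta}{z} - \frac{1}{1+z} = \frac{\beta - (1-\beta )z}{z(1+z)}~,
\end{displaymath}
so that $|z^* - \zeta|$ is of order $|F(z^*)|/|h^\prime (\zeta )|$, which produces the factor above.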
This not only gives the desired result, but shows that,
as expected, the rate of convergence is worst for those points closest to
the singular point
$z_\beta = \frac{\beta}{1-\beta }$.
\vskip .1in \noindent To discuss the convergence at the singular point, we take an approach
similar to that used
for the exponential function in \cite{NR1,NR2} and for the Mittag-Leffler
functions in \cite{ESV}.
For convenience, we set
$\mu = n \beta = r$, and $\sigma^2 = n \beta (1-\beta)$. Then,
\begin{displaymath}
f_{r,n}(w) =
(1-\beta)^n B_{r,n} \left( \frac{\beta e^{w/\sigma}}{1-\beta} \right)
= \sum_{k=0}^r {n \choose k} \beta^k (1-\beta )^{n-k} e^{kw/\sigma}~,
\end{displaymath}
which is a truncated moment generating function for a binomial distribution
with mean $\mu$ and variance $\sigma^2$. Using the Central Limit Theorem,
\begin{displaymath}
f_{r,n}(w) \approx \frac{1}{\sqrt{2\pi}\sigma} \int_{-\infty}^\mu
e^{-\frac{1}{2} \left( \frac{t-\mu}{\sigma} \right)^2 + \frac{tw}{\sigma}} dt~.
\end{displaymath}
Making the substitution $s = \frac{t-\mu-\sigma w}{\sqrt{2}\sigma}$ yields
\begin{displaymath}
e^{-\mu w/\sigma - w^2/2} f_{r,n}(w) \approx
\frac{1}{\sqrt{\pi}} \int_{-\infty}^{-w/\sqrt{2}} e^{-s^2} ds
= \frac{1}{2} {\rm erfc} \left( \frac{w}{\sqrt{2}} \right)~,
\end{displaymath}
the complementary error function. Thus, given the zero $\chi$ of ${\rm erfc}(z)$
which is closest to the origin, there must exist a zero $z^*$ of $B_{r,n}(z)$
for which
\begin{displaymath}
z^* \approx \frac{\beta e^{\sqrt{2}\chi / \sigma}}{1-\beta}
\approx \frac{\beta}{1-\beta} + \sqrt{\frac{2\beta}{(1-\beta)^3}}\cdot
\frac{\chi}{\sqrt{n}}~,
\end{displaymath}
the desired result.
\hfill {\em Q.E.D.} \vskip .3in
\vskip .1in \noindent The figures \ref{fig1} and \ref{fig2} show the zeros, bounding curve
and bounding circles for the cases $r=10, n=30$ and $r=30, n=90$
respectively. Since the ratio $r/n$ is the same in both cases, they serve
to illustrate both the rate of convergence of the zeros to the limit curve,
and the rate of convergence of the bounding circles.
\vskip .1in \noindent Figure \ref{fig3} shows the zeros for the case $r=40, n=80$, as well as the
curve $C_{1/2}$ and the approximating points on the curve.
\vskip .1in \noindent It should be noted at this point that, due to the structure of the
coefficients of these polynomials, direct computation of the zeros
for significantly higher degrees suffers from numerical instability.
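\vskip .1in \noindent For reference, the zeros plotted here can be reproduced by a short
arbitrary-precision computation. The following sketch (which assumes the
third-party Python library {\tt mpmath} is available) mitigates the
instability by working at elevated precision:
\begin{verbatim}
# Sketch: zeros of B_{r,n}(z) = sum_{k=0}^r C(n,k) z^k.
from math import comb
import mpmath

def binomial_section_zeros(r, n, dps=60):
    """Return the r zeros of B_{r,n}, computed with dps decimal digits."""
    mpmath.mp.dps = dps
    # polyroots expects coefficients in descending order of degree.
    coeffs = [mpmath.mpf(comb(n, k)) for k in range(r, -1, -1)]
    return mpmath.polyroots(coeffs, maxsteps=200, extraprec=200)

# Example: the case r = 10, n = 30 of figure 1.
for zero in binomial_section_zeros(10, 30):
    print(zero)
\end{verbatim}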
\begin{figure}[htp]
\includegraphics[angle=-90,width=4.3in]{binfig1.ps}
\caption{The bounding curves and zeros for $r=10$, $n=30$}
\label{fig1}
\end{figure}
\begin{figure}[hbp]
\includegraphics[angle=-90,width=4.3in]{binfig2.ps}
\caption{The bounding curves and zeros for $r=30$, $n=90$}
\label{fig2}
\end{figure}
\begin{figure}[htp]
\includegraphics[angle=-90,width=4.3in]{binfig3.ps}
\caption{The curve $C_{1/2}$, points $\{\zeta_{p,80}\}$
and zeros for $r=40$, $n=80$}
\label{fig3}
\end{figure}
\vskip .1in \noindent We conclude by considering the limiting cases $\alpha = 0$ and
$\alpha = 1$. The trivial
result for $\alpha = 0$, given the radius $\frac{r}{n+1-r}$ of the bounding
circle, is that
all zeros converge uniformly to $0$ in this case. However, a slight modification
gives a much more interesting result.
\begin{thm}\label{thm4}
Suppose that $\lim_{j \to \infty} r_j = \infty$ and that
$\lim_{j \to \infty} \frac{r_j}{n_j} = 0$.
\noindent Then, the limit points of the zeros of $\{ B_{r_j,n_j}
( \frac{r_j z}{n_j-r_j} ) \}_{j=1}^\infty$
are precisely the points of the
Szeg\H o curve $|ze^{1-z}| = 1$, $|z| \le 1$.
\end{thm}
\noindent {\em Proof. }
With the given normalization, the results of Theorem \ref{thm1}
yield that the zeros of the normalized polynomial above satisfy
\begin{equation}\label{eq4.1}
1 = \left( \frac{r_j}{n_j-r_j} \right)^{r_j} K_{r_j/n_j}^{-n_j}
\frac{z^{r_j}}{\left( 1+\frac{r_jz}{n_j-r_j} \right)^{n_j}} h(z)
{\rm ~and~} |z| \le 1~,
\end{equation}
where
\begin{equation}\label{eq4.2}
h(z) =
\sum_{k=r_j+1}^{n_j} {n_j \choose k} \left( \frac{r_j}{n_j} \right)^k \left(
1- \frac{r_j}{n_j} \right)^{n_j-k} z^{k-r_j}.
\end{equation}
\vskip .1in \noindent Noting that
\begin{displaymath}
\left( \frac{r_j}{n_j-r_j} \right)^{r_j} K_{r_j/n_j}^{-n_j}
= \left( 1 - \frac{r_j}{n_j} \right)^{-n_j}~,
\end{displaymath}
we may use standard expansions to convert (\ref{eq4.1}) to the form
\begin{equation}\label{eq4.3}
1 = (ze^{1-z+g(z)})^{r_j} h(z)~,
\end{equation}
where $|g(z)| \le \frac{3r_j}{n_j}$ uniformly in the unit disk.
\vskip .1in \noindent Considering points inside and on the curve $|ze^{1-z}| = e^{-3r_j/n_j}$,
and noting that $|h(z)| \le h(1) < 1$ on the unit disk,
we may repeat the analysis of (\ref{eq1.2}) to deduce that the zeros are
uniformly bounded away from zero by $|z| \ge \eta > 0$.
This implies that we may repeat the bounding process of
Lemma \ref{lem3} to deduce that $h^{1/r_j}(z) \to 1$ uniformly
in $\eta \le |z| \le 1$, defining the roots by a cut along the positive
real axis.
This establishes the desired result.
\hfill {\em Q.E.D.} \vskip .3in
\vskip .1in \noindent Finally, we consider the other limiting case.
\begin{thm}\label{thm5}
Suppose that $\lim_{j \to \infty} r_j = \infty$ and
$\lim_{j \to \infty} \frac{r_j}{n_j} = 1$.
\noindent Then, the limit points of the zeros of the
polynomials $\left\{ B_{r_j,n_j}(z) \right\}_{j=1}^\infty$
are precisely the points of the line ${\rm Re~}z = -\frac{1}{2}$.
\end{thm}
\noindent {\em Proof. }
As in the previous proofs, we write the
equation for the zeros as
\begin{displaymath}
1 = \frac{z^{r_j}}{(1+z)^{n_j}} R_{r_j,n_j}(z)~.
\end{displaymath}
\vskip .1in \noindent We again use the bounds of Lemma \ref{lem3} and obtain the desired
result, using the fact that $\lim_{\alpha \to 1^-} K_\alpha = 1$.
\hfill {\em Q.E.D.} \vskip .3in
\section{Technical Results}
\setcounter{equation}{0}
\vskip .1in \noindent Here we give the properties and inequalities necessary for the main results,
beginning with the properties of the bounding curves.
\begin{lem}\label{lem1}
Fix $0 < \alpha < 1$, and let
\begin{equation}
K_\alpha = \alpha^\alpha (1-\alpha )^{1-\alpha}
\end{equation}
and
\begin{equation}
C_\alpha = \left\{ z~:~\frac{|z|^\alpha}{|1+z|} = K_\alpha ,~
|z| \le \frac{\alpha}{1-\alpha} \right\}~.
\end{equation}
Then,
\begin{enumerate}
\item $\frac{1}{2} \le K_\alpha < 1$, $\lim_{\alpha \to 0^+}
K_\alpha = 1$, $\lim_{\alpha \to 1^-} K_\alpha = 1$.
\item $C_\alpha$ is a simple, smooth closed curve, symmetric with respect
to the real axis, starlike with respect to $z=0$, which passes through
$z_\alpha = \frac{\alpha}{1-\alpha }$.
\item The intersection of $C_\alpha$ with the negative real axis
occurs at $z = -X_\alpha$, where
$\nu \alpha < X_\alpha < \frac{1}{2}$ and $\nu = 0.278 \cdots$
is the unique positive root of
$xe^{1+x} = 1$.
\item $X_\alpha \le |z|$ and $|z| \le z_\alpha$ for any $z \in C_\alpha$,
with the latter equality holding only at $z = z_\alpha$.
\end{enumerate}
\end{lem}
\noindent {\em Proof. } \begin{enumerate} \item A simple calculation gives the limits. Taking
derivatives yields
\begin{displaymath}
\frac{dK_\alpha}{d\alpha} = K_\alpha \ln \left( \frac{\alpha}{1-\alpha}
\right)~,
\end{displaymath}
which shows that $K_\alpha$ is decreasing on $\left( 0, \frac{1}{2}
\right)$ and increasing on $\left( \frac{1}{2}, 1 \right)$. Calculating
$K_{1/2}$ directly gives the equality.
\item Clearly, the definition shows that $C_\alpha$ is closed and symmetric,
and direct calculation shows that it passes through the point
$z_\alpha = \alpha/(1-\alpha )$.
\vskip .1in \noindent We write $z = re^{i\theta}$, and set
\begin{equation}
c_\theta (r) = \frac{|z|^\alpha}{|1+z|} =
\frac{r^\alpha}{\sqrt{1+2r\cos \theta + r^2}}~.
\end{equation}
Clearly, $c_\theta (0) = 0$ and $\lim_{r \to \infty} c_\theta (r) = 0$.
\vskip .1in \noindent For $\theta = 0$, we have
\begin{displaymath}
c_0^\prime (r) = \frac{r^{\alpha-1}}{(1+r)^2} [ \alpha - (1-\alpha )r ]~,
\end{displaymath}
which shows that the given point is the only positive real
value satisfying the
equation.
\vskip .1in \noindent For $0 < \theta < \pi$, we have
\begin{displaymath}
c_\theta^\prime (r) = r^{\alpha-1}{(1+2r \cos \theta +r^2)^{-3/2}}
[(\alpha-1) r^2 + (2\alpha -1) r \cos \theta + \alpha]~.
\end{displaymath}
\vskip .1in \noindent Since $\alpha - 1 < 0$, this derivative has exactly one positive root,
which is a maximum of the function. Further, a simple calculation shows that
\begin{displaymath}
c_\theta \left( \frac{\alpha}{1-\alpha} \right) > K_\alpha~,
\end{displaymath}
from which it follows that each such ray yields exactly one point of the curve inside the
bounding circle $|z| = \frac{\alpha}{1-\alpha}$.
Considering the defining function, this value of $r$
is clearly decreasing in $0 \le \theta < \pi$.
Hence, the curve is simple and starlike with respect to 0.
\vskip .1in \noindent Finally, for $\theta = \pi$, we have that
\begin{displaymath}
c_\pi^\prime (r) = \frac{r^{\alpha-1}}{(1-r)^2} [\alpha + (1-\alpha )r] > 0
\end{displaymath}
for $0 < r < 1$, and $\lim_{r \to 1^-} c_\pi (r) = \infty$, which gives
exactly one solution in this range.
\vskip .1in \noindent That these points are the only solutions within the bounding circle
can be deduced from the fact that $z \in C_\alpha$
if and only if
$\frac{1}{z} \in C_{1-\alpha}^\prime$.
\vskip .1in \noindent Examining the function $w = K_\alpha^{-1}\frac{z^\alpha}{1+z}$ using arguments
in the range $(0, 2\pi )$ shows that $C_\alpha$ maps onto the appropriate
arc of the unit circle in the $w$-plane. This mapping is also one-to-one
along the arc $0 < {\rm arg~} w < 2 \pi \alpha$, since $w^\prime \ne 0$ on the cut plane.
This fact is implicitly
used in the calculation of the rate of convergence.
\item The solution on the negative real axis is $-t = -X_\alpha$, and satisfies
\begin{displaymath}
\frac{t^\alpha}{1-t} = K_\alpha ~,
\end{displaymath}
which we write as
\begin{equation}
f(t) = t^\alpha + \alpha^\alpha (1-\alpha )^{1-\alpha} (t-1) = 0~.
\end{equation}
\vskip .1in \noindent Now, $f(t)$ is increasing, with $f(0) < 0$,
$f (X_\alpha ) = 0$, and
\begin{displaymath}
f \left( \frac{1}{2} \right) = \left( \frac{1}{2} \right)^\alpha - \frac{1}{2} K_\alpha
> \frac{1}{2} (1-K_\alpha ) > 0~,
\end{displaymath}
from which $X_\alpha < \frac{1}{2}$ follows immediately.
\vskip .1in \noindent To show that $\nu \alpha < X_\alpha$, we consider
\begin{equation}\label{eq3.1}
f(\nu \alpha ) = \alpha^\alpha ( \nu^\alpha - (1- \nu \alpha )
(1-\alpha )^{1-\alpha} )~,
\end{equation}
and set
\begin{equation}
g(\alpha ) = \ln ((1- \nu \alpha )(1-\alpha )^{1-\alpha })~,
\end{equation}
which satisfies $g (0) = 0$, $g^\prime (0) = -\nu - 1$ and
\begin{equation}
g^{\prime\prime} (\alpha ) = \frac{ (\nu \alpha )^2 +
(\nu-2)(\nu \alpha ) + 1 - \nu^2 }
{(1-\alpha ) (1-\nu \alpha )^2} > 0~.
\end{equation}
The last inequality follows since the numerator, viewed as a quadratic in
$\alpha$, has discriminant $\nu^3(5\nu-4) < 0$ (as $\nu = 0.278\cdots < \frac{4}{5}$),
and so has no real zeros.
\vskip .1in \noindent Since $g$ is thus convex with $g(0) = 0$ and $g^\prime (0) = -(\nu + 1)$, we have $g(\alpha ) > -(\nu + 1)\alpha$ for $0 < \alpha < 1$. Recalling that $\ln \nu = -(1+\nu )$ from the defining equation of $\nu$, it follows that
\begin{displaymath}
e^{g(\alpha)} > e^{-(\nu + 1) \alpha} = e^{\alpha \ln \nu } = \nu^\alpha~,
\end{displaymath}
and thus, by (\ref{eq3.1}), $f(\nu \alpha ) < 0$ for $0 < \alpha < 1$,
as desired. \hfill {\em Q.E.D.} \vskip .3in
\end{enumerate}
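\vskip .1in \noindent For readers who wish to plot $C_\alpha$, the monotonicity
established in item 2 justifies a simple bisection on each ray; the following
sketch (our illustration; the bracketing endpoints follow from the proof above)
traces the curve and checks the bounds of item 3 numerically:
\begin{verbatim}
import numpy as np

def radius(alpha, theta, iters=60):
    """First crossing of c_theta(r) = K_alpha along the ray arg z = theta."""
    K = alpha ** alpha * (1 - alpha) ** (1 - alpha)
    c = lambda r: r ** alpha / np.sqrt(1 + 2 * r * np.cos(theta) + r ** 2) - K
    zb = alpha / (1 - alpha)
    # c < 0 at r = 0 and c >= 0 at the right endpoint, by Lemma 1
    lo, hi = 0.0, (min(zb, 1 - 1e-9) if theta == np.pi else zb)
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if c(mid) < 0 else (lo, mid)
    return hi

alpha, nu = 0.3, 0.278
X = radius(alpha, np.pi)        # intersection with the negative real axis
assert nu * alpha < X < 0.5     # item 3 of Lemma 1
\end{verbatim}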
\vskip .1in \noindent We continue with a lemma required for one of the bounds.
\begin{lem}\label{lem2}
Let $f(z) = \sum_{k=0}^\infty b_k z^k$ satisfy
\begin{equation}
b_0 > b_1 \ge 0,~b_k \ge 0,~b_1b_{k-1}-b_0b_k \ge 0 {\rm ~for~} k \ge 1~.
\end{equation}
Then, $|f(z)| \ge \frac{b_0-b_1}{b_0+b_1} f(1)$ for $|z| \le 1$.
\end{lem}
\noindent {\em Proof. } The conditions given imply that $\{ b_k \}$ is strictly decreasing, unless
$b_k = 0$ for $k \ge K$.
Let $r = \frac{b_1}{b_0} < 1$. Then, the conditions given show that
$b_k \le r b_{k-1}$ for $k \ge 1$. Hence, $f(z)$ is analytic for $|z| < \frac{1}{r}$,
and in particular in the closed unit disk.
Applying the Enestr\"om-Kakaya Theorem to the partial sums
$p_n(z) = \sum_{k=0}^n b_k z^k$ shows that all have their zeros
in the region $|z| > 1$, hence, by Hurwitz' Theorem, $f(z)$ cannot have any zeros
inside the unit disk. Thus, applying the Minimum Modulus Theorem, the minimum value
of $|f(z)|$ for $|z| \le 1$ must occur on the boundary.
\vskip .1in \noindent For $|z| = 1$, we have
\begin{equation}
\begin{array}{lcl}
|(b_0-b_1z) f(z)| & = & \left| b_0^2 + \sum_{k=1}^\infty (b_0b_k-b_1b_{k-1}) z^k \right| \\
& \ge & b_0^2 - \sum_{k=1}^\infty \left| (b_1b_{k-1}-b_0b_k) \right| \\
& = & b_0^2 - \sum_{k=1}^\infty b_1b_{k-1} + \sum_{k=1}^\infty b_0b_k \\
& = & b_0^2-b_1f(1)+b_0 (f(1)-b_0) \\
& = & (b_0-b_1) f(1) ~. \\
\end{array}
\end{equation}
Hence, we have
\begin{equation}
|f(z)| \ge \frac{(b_0-b_1)f(1)}{|b_0-b_1z|} \ge \frac{(b_0-b_1)f(1)}{b_0+b_1}~,
\end{equation}
the desired result. \hfill {\em Q.E.D.} \vskip .3in
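\vskip .1in \noindent The bound is sharp for geometric coefficients: with
$b_k = \rho^k$ one has $b_1b_{k-1} - b_0b_k = 0$, $f(z) \to 1/(1-\rho z)$, and
both sides approach $1/(1+\rho )$. A quick numerical confirmation (our
illustration, truncating the series):
\begin{verbatim}
import numpy as np

rho, N = 0.6, 400                 # b_k = rho^k, truncated after N terms
b = rho ** np.arange(N)
z = np.exp(1j * np.linspace(0, 2 * np.pi, 4001))
f = np.polyval(b[::-1], z)        # f(z) = sum_k b_k z^k on |z| = 1
lhs = np.min(np.abs(f))
rhs = (b[0] - b[1]) / (b[0] + b[1]) * b.sum()
print(lhs, rhs)                   # both are close to 1/(1+rho) = 0.625
\end{verbatim}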
\vskip .1in \noindent Finally, we have the estimates of the remainder term.
\begin{lem}\label{lem3} Given integers $1 \le r < n$, we set
$\beta = \frac{r}{n}$,
and consider the remainder term
\begin{equation}
R_{r,n}(z) = \sum_{k=r+1}^n {n \choose k} z^{k-r}~.
\end{equation}
Then, for $|z| \le \frac{\beta}{1-\beta}$, we have
\begin{equation}
\left| R_{r,n}(z) \right| \le K_\beta^{-n} \sum_{k=r+1}^n {n \choose k}
\beta^k (1-\beta )^{n-k} \le K_\beta^{-n}
\end{equation}
and
\begin{equation}
\left| R_{r,n}(z) \right| \ge
\frac{|z|}{r+1} \cdot \frac{1-\beta}{\beta} \cdot K_\beta^{-n}
\sum_{k=r+1}^n {n \choose k} \beta^k (1-\beta )^{n-k}~.
\end{equation}
\end{lem}
\noindent {\em Proof. } Given that all coefficients are positive, we use the value of $K_\beta$
from (\ref{cur1}) and the bound on $|z|$ to deduce that
\begin{displaymath}
\left| R_{r,n}(z) \right| \le R_{r,n} \left( \frac{\beta}{1-\beta} \right)
= K_\beta^{-n} \sum_{k=r+1}^n {n \choose k} \beta^k (1-\beta )^{n-k}~.
\end{displaymath}
\vskip .1in \noindent The latter sum is clearly bounded by 1, using the binomial expansion. In fact,
using the Central Limit Theorem, it is asymptotically $1/2$ for $r$ and $n-r$
both large.
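\vskip .1in \noindent The latter sum is simply the upper tail ${\rm Pr}[X > r]$ of
$X \sim {\rm Bin}(n,\beta )$; as a quick numerical confirmation (our
illustration):
\begin{verbatim}
from scipy.stats import binom
for n in (30, 300, 3000):
    print(n, binom.sf(n // 3, n, 1 / 3))   # Pr[X > r], r = n/3, tends to 1/2
\end{verbatim}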
\vskip .1in \noindent For the lower bound, we consider
\begin{displaymath}
g(z) = \left( \frac{1-\beta }{\beta z} \right) R_{r,n} \left( \frac{\beta z}{1-\beta }
\right) = \sum_{k=0}^{n-r-1} b_k z^k~,
\end{displaymath}
where
\begin{displaymath}
b_k = {n \choose k+r+1} \left( \frac{\beta}{1-\beta } \right)^k~.
\end{displaymath}
\vskip .1in \noindent It is simple to show that $g(z)$ satisfies the conditions of Lemma \ref{lem2},
that
\begin{displaymath}
\frac{b_0-b_1}{b_0+b_1} = \frac{2n-r}{2r(n-r)+(2n-3r)} \ge \frac{1}{r+1}~,
\end{displaymath}
and finally that
\begin{displaymath}
g(1) = \left( \frac{1-\beta}{\beta} \right) K_\beta^{-n} \sum_{k=r+1}^n
{n \choose k} \beta^k (1-\beta )^{n-k}~.
\end{displaymath}
\vskip .1in \noindent Rewriting $R_{r,n}(z)$ in terms of $g(z)$ yields the result.
\hfill {\em Q.E.D.} \vskip .3in
\vskip .1in \noindent We would like to acknowledge Professor Alan Sokal, of New York University,
who suggested this problem in 2001, and independently deduced the form
of the limit curves.
\nocite{*}
\bibliographystyle{plain}
\section{Introduction}
Quantum circuit synthesis is a process to construct a quantum circuit that implements a desired unitary operator and optimizes its size/depth in terms of a given gate set, which is an important task in the field of quantum computation and quantum information~\cite{ShendeBM06,vartiainen2004efficient,nielsen2002quantum}.
There are two key limitations of current intermediate-scale quantum devices. First, the performance and reliability of quantum devices heavily depend on the length of time that the underlying quantum states can remain coherent. Hence it is natural to design algorithms that require less coherence time and accumulate less environmental noise~\cite{boixo2018characterizing, Arute2019}, in other words, algorithms with smaller size and/or lower depth.
The second limitation is that state-of-the-art quantum devices do not support placing 2-qubit gates (usually CNOT gates) on arbitrary pairs of qubits; 2-qubit gates are only allowed between adjacent qubits~\cite{ibmqexp2017,ye2019propagation, Arute2019}.
We represent each qubit as a vertex, and use
an edge between two vertices to indicate that a 2-qubit gate can be performed on these two qubits. The limitations on qubit connectivity can then be represented as a \emph{topological constraint graph}.
Near-term devices usually take grid-style graphs as their constrained graphs~\cite{ibmqexp2017,ye2019propagation,boixo2018characterizing}. Meanwhile, the $d$-dimensional grid is also a good candidate constrained graph for quantum supremacy, as argued by Harrow \emph{et al.}~\cite{harrow2018approximate}. A large number of works aim to map a quantum circuit to real superconducting quantum devices restricted to a constrained graph~\cite{Tannu19Association, sadlier2019near, zhang2020depth,deng2020codar,shi2020compilation}.
CNOT circuits, which contain only CNOT gates, are indispensable for quantum circuit synthesis when constructing general circuits~\cite{aaronson2004improved,patel2008optimal,barends2014superconducting,vartiainen2004efficient}, since CNOT gates together with single-qubit gates form a universal gate set~\cite{shi2002both,barenco1995elementary,boykin2000new}. The optimization of the size/depth of CNOT circuits also sheds light on the more general problem of arbitrary circuit mapping.
CNOT circuits also dominate stabilizer circuits, which play an important role in quantum error correction~\cite{nielsen2002quantum} and quantum fault-tolerant computation~\cite{bravyi2005universal}. Aaronson \textit{et al.} \cite{aaronson2004improved} proved that any stabilizer circuit has a canonical form, \textit{i.e.}, H-C-P-C-P-C-H-P-C-P-C, where H and P are one layer of Hadamard gates and Phase gates respectively, and C is a block of CNOT gates only.
Hence, optimization of CNOT circuits can be directly applied to the optimization of stabilizer circuits.
Many researchers have aimed at reducing the size/depth of CNOT circuits without topological constraints~\cite{moore2001parallel,jiang2019optimal,patel2008optimal}. For instance, Moore and Nilsson \cite{moore2001parallel} proposed an algorithm to parallelize any CNOT circuit to $O(\log n)$ depth with $O(n^2)$ ancillas, where the depth matches the lower bound $\Omega(\log n)$.
However, these works cannot be directly applied to near-term quantum devices with topological constraints.
There are several size optimization algorithms for CNOT circuits under a topological constraint graph.
Kissinger \emph{et al.}~\cite{kissinger2019cnot} proposed an algorithm that gives a $2n^2$-size equivalent circuit for any CNOT circuit if the constrained graph contains a Hamiltonian path. Unfortunately, their optimized size is $O(n^{3})$ for constrained graphs without a Hamiltonian path. Nash \emph{et al.}~\cite{nash2019quantum} independently proposed a similar algorithm which gives a $4n^2$-size equivalent CNOT circuit for any CNOT circuit on any connected graph.
This raises the following question:
given any CNOT circuit, how can we implement it on state-of-the-art quantum devices with the smallest number of gates and/or the lowest circuit depth?
In this paper, we consider how to optimize the size/depth of CNOT circuits under topological structure constraints, without or with limited ancillas. We first propose an algorithm that can optimize the size of any given CNOT circuit to $2n^2$ on any connected graph. This bound is tight for constrained graphs with maximum degree $O(1)$. We further give an algorithm that can optimize the size of any given CNOT circuit to $O\pbra{\frac{n^2}{\log \delta}}$ on a constrained graph with minimum degree $\delta$. We also prove this bound is asymptotically tight for regular graphs. We simulate our size optimization algorithm on some particular graphs of near-term quantum devices, and the experimental results show that our optimized size is smaller than the existing results~\cite{kissinger2019cnot,nash2019quantum}.
Secondly, motivated by the rapid decoherence of quantum systems and the grid constraints of recent quantum devices~\cite{Arute2019,boixo2018characterizing}, we propose an algorithm which can optimize the depth of any given CNOT circuit to $O\pbra{\frac{n^2}{\min\cbra{m_1, m_2}}}$ with $3n \leq m_1m_2\leq n^2$ qubits on an $m_1 \times m_2$ grid. The optimized depth is asymptotically tight when $m_1m_2=n^2$.
We generalize the result to any constant $d$-dimensional grid. We also give the experimental results of the depth optimization algorithm on an $n\times n$ grid. As the number of qubits grows, the optimized depth has significant advantages over the existing size optimization algorithms.
In the rest of the manuscript, we cover the basic notation and the preliminaries of CNOT circuits and their properties in Sec. \ref{sec:pre}. In Sec. \ref{sec:sizeOpt} we introduce our size optimization algorithms and the lower bound, and give an experimental comparison of our algorithms with existing algorithms. Next, we introduce our depth optimization and the corresponding experimental results in Sec. \ref{sec:DepthOpt}. In Sec. \ref{sec:experimentIBMQ}, we implement the optimized CNOT circuit on an IBMQ device; the experimental results show fewer measurement errors compared to the original circuit on the noisy IBMQ device.
Finally, we give a discussion in Sec. \ref{sec:discuss}.
\section{Preliminary}\label{sec:pre}
\textbf{Notations.}
We use $[n]$ to denote $\{1,2,\dots,n\}$, $\mathbb{F}_q$ to denote the field with $q$ elements, $\oplus$ to denote addition over $\mathbb{F}_2$, $\mathrm{GL}(n,2)$ to denote the set of all $n\times n$ invertible matrices over $\mathbb{F}_2$,
and $\bm{\mathrm I}$ to denote the identity matrix. Let $e_j$ be the $j$-th coordinate basis vector with all zeros but a $1$ in the $j$-th position. In the later sections, with a slight abuse of notation, we use a decimal number to represent the ceiling of that number when it needs to be an integer.
\begin{figure}
\centering
\begin{tabular}[b]{c}
\xymatrix @*=<0em> @C=1em @R=1em {
\lstick{1}&\qw &\qw\\
\lstick{2}&\ctrl{1} &\qw\\
\lstick{3}&*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &\qw
}
\\
(a)
\end{tabular}
\qquad\quad
\begin{tabular}[b]{c}
\xymatrix @*=<0em> @C=1em @R=.7em {
\lstick{1}&\ctrl{1} &*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &\qw\\
\lstick{2}&*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &\qw &\qw\\
\lstick{3}&\qw &\ctrl{-2} &\qw
}
\\
(b)
\end{tabular} \quad
\begin{tabular}[b]{c}
\begin{tikzpicture}[scale = 0.5]
\node[draw, thick, circle] at (0,0) (c){1};
\node[draw, thick, circle] at (-2cm,0) (l){2};
\draw (l)--(c);
\node[draw, thick, circle] at (2cm,0) (r){3};
\draw (c)--(r);
\end{tikzpicture}
\\
(c)
\end{tabular}
\caption{$3$-qubit CNOT circuit with topological constraint graph. (a) A circuit which cannot be performed in topological constraint (c); (b) A circuit which can be performed in topological constraint (c); (c) topological constraint of a $3$-qubit quantum devices.}
\label{fig:CNOTMRep}
\end{figure}
\textbf{CNOT/SWAP Gate and Circuit.}
A CNOT gate, with control qubit $q_{i}$ and target qubit $q_{j}$, maps $\pbra{q_{i},q_{j}}$ to $\pbra{q_{i},q_{j}\oplus q_{i}}$. A SWAP gate, operating on two qubits $q_i$ and $q_j$, swaps $\pbra{q_i, q_j}$ to $\pbra{q_j, q_i}$. A CNOT/SWAP circuit is a circuit that contains only CNOT/SWAP gates. We say a CNOT circuit has size $s$ if the number of CNOT gates in the circuit equals $s$.
\textbf{Circuit with topological constraint.}
We use graph $G(V,E)$ to represent the topological constraint of two-qubit gates (CNOT gate) in the circuit. A vertex in $G$ represents a qubit, and the two-qubit gate (CNOT gate) can only operate on two qubits that are connected in $G$.
We say a circuit $\mathcal{C}$ is under constrained graph $G$ if all two-qubit gates in $\mathcal{C}$ satisfy the constraints. Fig. \ref{fig:CNOTMRep} gives an example for the constrained circuit. CNOT circuit in Fig. \ref{fig:CNOTMRep} (a) cannot be performed directly on a $3$-qubit quantum device with topological constraint in \ref{fig:CNOTMRep} (c). In contrast, CNOT circuit in Fig. \ref{fig:CNOTMRep} (b) can be performed directly on it.
\textbf{Circuit with ancillas.} We say a CNOT circuit $\mathcal{C}_{n,m-n}$ with $n$-qubit inputs and $(m-n)$-qubit ancillas implements an $n$-qubit circuit $\mathcal{C}$ if, for any input $\ket{x}$ with ancillas $\ket{0}^{\otimes (m-n)}$, the output of $\mathcal{C}_{n,m-n}$ is $\mathcal{C}\ket{x}\otimes\ket{0}^{\otimes (m-n)}$. We say two circuits (with or without ancillas) are equivalent if they implement the same circuit $\mathcal{C}$.
\textbf{Matrix representation of CNOT circuit~\cite{moore2001parallel}.}
We use $\mathtt{CNOT}_{i,j}$ to denote CNOT gate with control qubit $q_{i}$ and target qubit $q_{j}$. The CNOT gate is an invertible linear map $\begin{bmatrix}
1&0\\
1&1\\
\end{bmatrix}$ over $\mathbb{F}_2$.
It is easy to check, a CNOT gate $\mathtt{CNOT}_{i,j}$ is equivalent to the row operation which adds the $i$-th row to $j$-th row over $\mathbb{F}_2$ in the corresponding invertible matrix. By the linearity property of CNOT circuits, we can represent an $n$-qubit CNOT circuit $\mathcal{C}$ as an invertible matrix $\bm{\mathrm M}\in\mathrm{GL}(n,2)$~\cite{moore2001parallel,patel2008optimal}, and the $j$-th column of $\bm{\mathrm M}$ equals $\mathcal{C} e_j$.
We use $R(i,j)$ to denote such a row-addition operation on the matrix, and call $\{i,j\}$ its index set. We take a 3-qubit CNOT circuit as an example of the matrix representation, as shown in Fig. \ref{fig:CNOTRep}.
A sequence of row-addition operations that transforms $\bm{\mathrm M}$ to $\bm{\mathrm I}$ corresponds to a CNOT circuit represented by $\bm{\mathrm M}^{-1}$. The size optimization of a CNOT circuit $\mathcal{C}$ is equivalent to minimizing the parameter $t$ such that there exist index pairs $(i_1,j_1),\dots,(i_t,j_t)$ satisfying $\bm{\mathrm M}=\prod_{k=1}^tR(i_k,j_k)$.
The summation for the input qubits is modulo 2 in Sec. \ref{sec:sizeOpt} and \ref{sec:DepthOpt}.
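This correspondence is easy to simulate classically; the following Python sketch (our illustration, with our own helper name) builds $\bm{\mathrm M}$ by applying one row addition per gate, and reproduces the matrix of Fig. \ref{fig:CNOTRep}:
\begin{verbatim}
import numpy as np

def cnot_matrix(n, gates):
    """Matrix representation of a CNOT circuit over F_2.
    gates: (control, target) pairs, 0-indexed, in circuit order."""
    M = np.eye(n, dtype=np.uint8)
    for c, t in gates:
        M[t, :] ^= M[c, :]          # CNOT_{c,t} adds row c to row t
    return M

# the 3-qubit example: CNOT_{1,2} followed by CNOT_{3,1} (1-indexed)
print(cnot_matrix(3, [(0, 1), (2, 0)]))
# -> [[1 0 1], [1 1 0], [0 0 1]]
\end{verbatim}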
\begin{figure}
\centering
\begin{tabular}[b]{c}
\xymatrix @*=<0em> @C=1em @R=1.4em {
\lstick{1}&\ctrl{1} &*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &\qw\\
\lstick{2}&*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &\qw &\qw\\
\lstick{3}&\qw &\ctrl{-2} &\qw
}
\\
(a)
\end{tabular}\qquad \quad
\begin{tabular}[b]{c}
$\begin{bmatrix}
1 & 0 & 1\\
1& 1 & 0\\
0 & 0 & 1
\end{bmatrix}$
\\
(b)
\end{tabular}
\caption{The matrix representation (b) of CNOT circuit in (a).}
\label{fig:CNOTRep}
\end{figure}
\textbf{Grid Graph.}
Consider a $d$-dimensional grid graph $G(V,E)$ in which
the size of the $i$-th dimension is $m_i$.
A vertex in this $m_1\times \cdots \times m_d$ grid can be represented as a $d$-dimensional integer vector $(p_1,p_2,\dots,p_d)$, where $p_i\in[m_i]$.
An $n$-qubit CNOT circuit is under the $m_1\times \cdots \times m_d$ grid, which means the number of ancillas is $m_1\cdots m_d-n$, where $m_1\cdots m_d\geq n$, and the initial input $x\in\cbra{0,1}^n$ is located on any $n$ vertices of the grid.
\textbf{Parallelized row-addition Matrices~\cite{jiang2019optimal}.}
We say a matrix $\bm{\mathrm R}$ is a parallelized row-addition matrix if there exist $t\in[n], \bm i\in[n]^t, \bm j\in[n]^t$ such that the $i_k,j_k$'s are $2t$ different indices and $\bm{\mathrm R}=\prod_{k=1}^tR(i_k,j_k)$. A parallelized row-addition matrix corresponds to a CNOT circuit with depth $1$. Hence optimizing the depth of a CNOT circuit $\mathcal{C}$ is equivalent to minimizing the parameter $t$ such that there exist $t$ parallelized row-addition matrices $\bm{\mathrm R}_1,\dots,\bm{\mathrm R}_t$ with $\bm{\mathrm M}=\prod_{k=1}^t\bm{\mathrm R}_k$.
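The depth of a gate list under this convention can be computed greedily; the following sketch (our illustration) assigns each gate to the earliest layer in which both of its qubits are free, which is optimal for the given gate order:
\begin{verbatim}
def cnot_depth(gates):
    """Greedy layering: a gate starts right after the last layer
    that touched either of its two qubits."""
    busy_until = {}
    depth = 0
    for i, j in gates:
        layer = max(busy_until.get(i, 0), busy_until.get(j, 0)) + 1
        busy_until[i] = busy_until[j] = layer
        depth = max(depth, layer)
    return depth

assert cnot_depth([(0, 1), (2, 3), (1, 2)]) == 2
\end{verbatim}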
\textbf{Several concepts in graph theory.}
The degree of a vertex is the number of edges that are incident to the vertex. In graph $G$, $\Delta$ and $\delta$ denote the maximum and minimum degree of its vertices respectively. A graph is regular if $\Delta=\delta$. The Steiner tree with terminal set $S\subseteq V$ is a connected subgraph of $G$ with a minimum number of edges that contains all vertices in $S$. A cut vertex is a vertex whose removal makes the connected graph disconnected.
\section{Size optimization of a CNOT circuit} \label{sec:sizeOpt}
Since quantum devices are sensitive to errors and easily disturbed by the environment, we consider the size optimization of CNOT circuits on any connected constrained graph.
We present two results for the size optimization process. The first result is aimed at near-term quantum devices with a sparse constrained graph, while the second result has a clear advantage for quantum devices with a larger number of input qubits and a denser topological structure. We then give the lower bound for the size of CNOT circuits under topological constraints, and give an experimental comparison of our size optimization algorithms with other existing algorithms at the end of this section.
\subsection{Size optimization algorithm for the near-term quantum devices}
\label{subsec:sizeNearTerm}
{For the ``elimination process" that transforms a matrix to identity by row operations under a constrained graph}, we cannot add a row to another if their corresponding vertices are not adjacent.
Given the constrained graph $G$ and the matrix $\bm{\mathrm M}\in \mathrm{GL}(n,2)$ corresponding to a CNOT circuit.
The algorithms of Kissinger \emph{et al.}~\cite{kissinger2019cnot} and Nash \emph{et al.}~\cite{nash2019quantum} both firstly transform a given invertible matrix into an upper triangular matrix, and then transform the triangular matrix into identity. In contrast, we propose an algorithm that eliminates the $i$-th row and $i$-th column simultaneously for vertex $i\in V$ which is not a cut vertex.
The optimized size of the algorithm achieves $2n^2$ in the worst case on any connected topological structure, as stated in the following theorem.
\begin{theorem}\label{thm:TopoSize2n}
The size of any $n$-qubit CNOT circuit can be optimized to $2n^{2}$ under a connected graph $G(V,E)$ as the topological constraint.
\end{theorem}
We give a size optimization algorithm, Algorithm \textbf{ROWCOL} below, to ensure the correctness of Theorem \ref{thm:TopoSize2n}.
Theorem \ref{thm:TopoSize2n} is more applicable to near-term quantum devices since it has a smaller constant factor than the existing algorithms~\cite{kissinger2019cnot,nash2019quantum}.
In Algorithm \textbf{ROWCOL}, Steps (3-6) aim to transform the $i$-th column into $e_i$ and Steps (7-10) aim to transform the $i$-th row into $e_i^T$. All arithmetic operations are over the binary field $\mathbb{F}_2$. An illustrative example is shown in Example 1; the topological constraint graph for the CNOT circuit in Example 1 is depicted in Fig. \ref{fig:stairCNOT}. The optimized CNOT circuit for this invertible matrix is the inverse of the joint circuit generated from steps (1) to (8).
\begin{algorithm}[htbp]
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Integer $n$, invertible matrix $\bm{\mathrm M} \in \mathrm{GL}(n,2)$, graph $G(V,E)$ where $|V|=n$.}
\Output{Row additions to transform $\bm{\mathrm M}$ into $\bm{\mathrm I}$.}
\For{ $i\in V$ which is not a cut vertex}{
\emph{$S=\{j \mid \bm{\mathrm M}_{ji}\neq 0\}\cup\{i\}$\;}
\emph{Find a tree $T$ containing $S\subseteq V$ in graph $G$\;}
\emph{Postorder traverse $T$ from $i$. When reaching $j$ with parent $k$, add row $j$ to row $k$ if $\bm{\mathrm M}_{ji}=1$ and $\bm{\mathrm M}_{ki}=0$\;}
\emph{Postorder traverse $T$ from $i$, add every row to its children when reached\;}
\emph{Let $S'\subseteq V$ be such that $\sum_{j\in S'}\bm{\mathrm M}_j=\bm{\mathrm M}_i+e_i$\;}
\emph{Find a tree $T'$ containing $S'\cup\{i\}$\;}
\emph{Preorder traverse $T'$ from $i$. When reaching $j\notin S'$, add the $j$-th row to its parent\;}
\emph{Postorder traverse $T'$ from $i$, add every row to its parent when reached\;}
\emph{Delete $i$ from graph $G$\;}
}
\caption{\textbf{(ROWCOL)} Near-term size optimization algorithm}
\end{algorithm}
\begin{figure}[htbp]
\caption*{{Example 1: An illustration of Algorithm \textbf{ROWCOL}. In each block, we eliminate the red row or column of the matrix on the left by means of the CNOT circuit on the right.}}
\begin{tabularx}{\textwidth}{|Xm{4cm}|Xm{4cm}|}
\hline
\small\textbf{1)} &\small\hfill &\small\textbf{2)} & \small\hfill\\
$ \begin{pmatrix}
\textcolor{red}{1} & 1 & 0 & 1 & 1 \\
\textcolor{red}{0} & 0 & 1 & 1 & 0 \\
\textcolor{red}{1} & 0 & 1 & 0 & 1 \\
\textcolor{red}{1} & 1 & 0 & 1 & 0 \\
\textcolor{red}{1} & 1 & 1 & 1 & 0 \\
\end{pmatrix}$
&\xymatrix @*=<0em> @C=1em @R=0.7em {
\lstick{ 1} & \qw & \qw & \ctrl{4} & \qw\\
\\
\lstick{ 2} & \qw & \qw & \qw & \qw\\
\lstick{ 3} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw & \qw\\
\lstick{ 4} & \ctrl{-1} & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw\\
\lstick{ 5} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw\\
}&
$\left( \begin{array} {cccccc}
\textcolor{red}{1} & \textcolor{red}{1} & \textcolor{red}{0} & \textcolor{red}{1} & \textcolor{red}{1} \\
0 & 0 & 1 & 1 & 0 \\
0 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0
\end{array} \right)$
&\xymatrix @*=<0em> @C=1em @R=0.7em {
\lstick{ 1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &\qw\\
\lstick{ 2} & \qw & \qw & \qw & \qw &\qw\\
\lstick{ 3} & \qw & \ctrl{1} & \qw & \qw &\qw\\
\lstick{ 4} & \ctrl{-3} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{-3} &\qw\\
\lstick{ 5} & \qw & \qw & \ctrl{-1} & \qw &\qw\\
\\
\\
}\\
\hline
\small\textbf{3)} &\hfill &\small\textbf{4)} & \small\hfill \\
$\left( \begin{array} {ccccc}
1 & 0 & 0 & 0 & 0 \\
0 & \textcolor{red}{0} & 1 & 1 & 0 \\
0 & \textcolor{red}{1} & 1 & 1 & 1 \\
0 & \textcolor{red}{1} & 0 & 1 & 0 \\
0 & \textcolor{red}{0} & 1 & 0 & 0 \\
\end{array} \right)$
&\xymatrix @*=<0em> @C=1em @R=0.7em {
\lstick{ 1} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 2} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \ctrl{1} & \qw\\
\lstick{ 3} & \ctrl{-1} & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw\\
\lstick{ 4} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw\\
\\
\lstick{ 5} & \qw & \qw & \qw & \qw\\
}&
$\left( \begin{array} {cccccc}
1 & 0 & 0 & 0 & 0 \\
0 & \textcolor{red}{1} & \textcolor{red}{0} & \textcolor{red}{0} & \textcolor{red}{1} \\
0 & 0 & 1 & 1 & 0 \\
0 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 \\
\end{array} \right)$
&\xymatrix @*=<0em> @C=1em @R=0.7em {
\lstick{ 1} & \qw & \qw & \qw & \qw\\
\lstick{ 2} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw\\
\lstick{ 3} & \ctrl{-1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{-1} & \qw\\
\lstick{ 4} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{-1} & \qw & \qw\\
\lstick{ 5} & \ctrl{-1} & \qw & \qw & \qw\\
\\
\\
}\\
\hline
\small\textbf{5)} &\hfill &\small\textbf{6)} & \small\hfill \\
$\left( \begin{array} {ccccc}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & \textcolor{red}{1} & 1 & 1 \\
0 & 0 & \textcolor{red}{0} & 0 & 1 \\
0 & 0 & \textcolor{red}{1} & 0 & 0 \\
\end{array} \right)$
&\xymatrix @*=<0em> @C=1em @R=0.7em {
\lstick{ 1} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 2} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 3} & \qw & \qw & \ctrl{1} & \qw\\
\lstick{ 4} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw\\
\lstick{ 5} & \ctrl{-1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw\\
}&
$\left( \begin{array} {cccccc}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & \textcolor{red}{1} & \textcolor{red}{1} & \textcolor{red}{1} \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\end{array} \right)$
&\xymatrix @*=<0em> @C=1em @R=0.7em {
\lstick{ 1} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 2} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 3} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw\\
\lstick{ 4} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{-1} & \qw & \qw\\
\lstick{ 5} & \ctrl{-1} & \qw & \qw & \qw\\
\\
\\
}\\
\hline
\small\textbf{7)} &\hfill &\small\textbf{8)} & \small\hfill \\
$\left( \begin{array} {ccccc}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & \textcolor{red}{1} & 1 \\
0 & 0 & 0 & \textcolor{red}{0} & 1 \\
\end{array} \right)$
&\xymatrix @*=<0em> @C=1em @R=0.7em {
\lstick{ 1} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 2} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 3} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 4} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 5} & \qw & \qw & \qw & \qw\\
}&
$\left( \begin{array} {cccccc}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & \textcolor{red}{1} & \textcolor{red}{1} \\
0 & 0 & 0 & 0 & 1 \\
\end{array} \right)$
&\xymatrix @*=<0em> @C=1em @R=0.7em {
\lstick{ 1} & \qw & \qw & \qw & \qw\\\\
\lstick{ 2} & \qw & \qw & \qw & \qw\\\\
\lstick{ 3} & \qw & \qw & \qw & \qw\\\\
\lstick{ 4} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw & \qw\\
\lstick{ 5} & \ctrl{-1} & \qw & \qw & \qw\\
\\
\\
}\\
\hline
\small\textbf{9)} &\hfill & & \small\hfill \\
$\left( \begin{array} {ccccc}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\end{array} \right)$
&
&
&
\\
\hline
\end{tabularx}
\label{tab:RCExample}
\end{figure}
When we eliminate a vertex, each of the four tree traversals performs fewer CNOT gates than the number of remaining nodes, hence the total size is at most $4(n-1)+4(n-2)+\dots+4\times 1\le 2n^2$.
Since a connected graph must have a vertex which is not a cut vertex, and the graph remains connected after that vertex is deleted,
this algorithm can be applied to any connected graph. Alternatively, we can run a breadth first search (BFS) from any vertex in the graph, apply the above algorithm on the BFS tree, and delete a leaf node each time.
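For concreteness, the following Python sketch implements Algorithm \textbf{ROWCOL} end to end (our illustration; the helper names are ours, the Steiner trees are built with networkx's 2-approximate \texttt{steiner\_tree}, and $S'$ is found by solving $\bm{\mathrm M}^T y = \bm{\mathrm M}_i + e_i$ over $\mathbb{F}_2$):
\begin{verbatim}
import numpy as np
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

def gf2_solve(A, b):
    """Solve A x = b over F_2 for an invertible 0/1 matrix A."""
    A, b, n = A.copy(), b.copy(), len(b)
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r, col])
        A[[col, piv]], b[[col, piv]] = A[[piv, col]], b[[piv, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r, :] ^= A[col, :]; b[r] ^= b[col]
    return b

def rowcol(M, G):
    """M: invertible n x n 0/1 numpy array; G: connected graph on 0..n-1.
    Returns the row additions (control, target) that transform M into I."""
    M, G, ops = M.astype(np.uint8).copy(), G.copy(), []
    def add(src, dst):                    # row_dst += row_src over F_2
        M[dst, :] ^= M[src, :]; ops.append((src, dst))
    while G.number_of_nodes() > 1:
        i = next(v for v in G.nodes
                 if v not in set(nx.articulation_points(G)))
        S = {j for j in G.nodes if M[j, i]} | {i}   # column-i support
        if len(S) > 1:
            T = steiner_tree(G, list(S))
            par = dict(nx.bfs_predecessors(T, i))
            for j in nx.dfs_postorder_nodes(T, i):  # push 1s to the root
                if j != i and M[j, i] and not M[par[j], i]:
                    add(j, par[j])
            for v in nx.dfs_postorder_nodes(T, i):  # then clear children
                for c in T[v]:
                    if par.get(c) == v:
                        add(v, c)
        b = M[i, :].copy(); b[i] ^= 1               # target: M_i + e_i
        Sp = {j for j, y in enumerate(gf2_solve(M.T.copy(), b)) if y}
        if Sp:                                      # rows summing to it
            T = steiner_tree(G, list(Sp | {i}))
            par = dict(nx.bfs_predecessors(T, i))
            for j in nx.dfs_preorder_nodes(T, i):   # pre-cancel non-S' nodes
                if j != i and j not in Sp:
                    add(j, par[j])
            for j in nx.dfs_postorder_nodes(T, i):  # accumulate into row i
                if j != i:
                    add(j, par[j])
        G.remove_node(i)
    return ops
\end{verbatim}
Reversing the returned list gives the gate sequence for $\bm{\mathrm M}$ itself, since each CNOT gate is its own inverse.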
\subsection{Size optimization algorithm for general topological graph}
\label{subsec:sizeOptFuture}
In this subsection, we propose an algorithm aimed at optimizing the size of CNOT circuits for quantum systems with denser constrained graphs, as shown in Theorem \ref{thm:TopoAveDeg}.
\begin{theorem}\label{thm:TopoAveDeg}
The size of any $n$-qubit CNOT circuit can be optimized to $O\pbra{\frac{n^{2}}{\log \delta}}$ under a connected graph with minimum degree $\delta$ as the topological constraint.
\end{theorem}
\begin{proof}
Let $k = n/\delta$ in Theorem \ref{thm:TopoAveDegNewVersion}, where $\delta$ is the minimum degree of the graph $G(V,E)$. It is easy to check that the summation of the degrees of any $k$ vertices is greater than $n$, and thus we have an $O\pbra{\frac{n^2}{\log \delta}}$-size CNOT circuit.
\end{proof}
This theorem is the generalization of Patel \emph{et al.}~\cite{patel2008optimal}, which optimizes the size of CNOT circuits on the complete graph and gives the optimized size of $O\pbra{\frac{n^2}{\log n}}$. The most significant difference between the techniques of Theorem \ref{thm:TopoAveDeg} and the algorithms in Refs.~\cite{kissinger2019cnot,nash2019quantum} is that here we eliminate several columns simultaneously instead of eliminating a single column each iteration.
By Lemma \ref{lem:LowerBoundSizeCnot}, it follows that the optimized size in Theorem \ref{thm:TopoAveDeg} is asymptotically tight for a nearly regular graph, in which all vertices have degrees of the same order.
\begin{algorithm}[!ht]
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Integers $n, s$, where $s\leq \log n/2$, matrix $\bm{\mathrm M} \in \mathbb{F}_2^{n \times s}$, graph $G(V,E)$ where $|V|=n$.}
\Output{Row additions to transform $\bm{\mathrm M}$ into $\sbra{e_1,\ldots, e_s}$.}
\emph{Let $\bm{\mathrm M}^{(1)}$ be the first $s$ rows of $\bm{\mathrm M}$ and $\bm{\mathrm M}^{(2)}$ be the rest rows, and $T$ be the 2-approximate minimum Steiner tree for vertices $\cbra{V_1,... , V_s}$ in $G$}\;
\For{$j\leftarrow 1$ \KwTo $s$}{
\emph{Eliminate the $j$-th column of $\bm{\mathrm M}^{(1)}$ to $e_j$ under graph $T$ with Lemma \ref{lem:SizeSubColSequence}}\;
}
\emph{$l := \argmax_{s<j\leq n} \deg(V_j)$}\;
\For{$j\leftarrow 1$ \KwTo $s$}{
\emph{Eliminate $(l,j)$-th element $\bm{\mathrm M}_{l,j}$ to 0 with the $j $-th row of $\bm{\mathrm M}^{(1)}$ under the minimum path of the vertices $V_{j}$ and $V_{l}$ with Lemma \ref{lem:SizeSubColSequence}}\;
}
\emph{$k:= n/2^{2s}$}\;
\For{$i \leftarrow 1$ \KwTo $2^{2s}$}{
\emph{Let $\text{Gray}(i) := i\oplus \floor{i/2}$ be the Gray code of $i$}\;
\emph{Let $S$ be the set containing all the rows in $\bm{\mathrm M}^{(2)}$ whose Gray code equals $\text{Gray}(i)$}\footnotemark[1]{}\;
\emph{Transform the binary string of row $l$ to $\text{Gray}(i)$ by adding one of the rows in $\bm{\mathrm M}^{(1)}$ to row $l$ along the minimum path between the vertices of these two rows
with Lemma \ref{lem:SizeSubColSequence}}\;
\While{$|S|\ne \emptyset$}{
\emph{Let set $S'$ be random $k$ rows of $S$, and let set $R$ be the vertices of these rows in $S'$}\;
\emph{Eliminate rows in $S'$ to 0 by row $l$ under the 2-approximate minimum Steiner tree of set $R$ in $G$ with Lemma \ref{lem:SizeSubColSequence}}\;
}
}
\caption{\textbf{(SBE)} Eliminate the first $s$ columns of the given invertible matrix.}
\label{alg:nearfutureAlg}
\end{algorithm}
\footnotetext[1]{The Gray code for row $i$ equals Gray$(i)$.}
We give an explicit algorithm to show the upper bound in Theorem \ref{thm:TopoAveDeg}. Recall that the construction of CNOT circuit is equivalent to construct an invertable matrix with row addition operations.
Algorithm \textbf{SBE} (size block elimination) gives the row additions for the first $s$ columns of the invertible matrix under graph $G(V,E)$. In the algorithm, we use vertex $V_i$ to represent row $i$, i.e., qubit $q_i$. The $i$-th row can be added to the $j$-th row only if vertex $V_i$ is adjacent to $V_j$. We also depict the process in Fig. \ref{fig:algscols}.
Let $k$ be a number such that the summation of the degrees of any $k$ vertices in $G(V,E)$ is greater than the total number of qubits $n$.
Algorithm \textbf{SBE} gives a better optimized size
\[O\pbra{\frac{n^2}{\log (n/k)}}\leq O\pbra{\frac{n^2}{\log \delta}},\]
as stated in Theorem \ref{thm:TopoAveDegNewVersion}. Theorem \ref{thm:TopoAveDegNewVersion} generalizes Theorem \ref{thm:TopoAveDeg}: here we consider the $k$ smallest degrees of the graph instead of only the minimum degree.
\begin{theorem}\label{thm:TopoAveDegNewVersion}
For any $n$-qubit CNOT circuit and any connected graph $G(V,E)$ as the topological structure of the quantum system, and for any integer $k$ such that the summation of the degrees of any $k$ vertices is greater than $n$, there exists an algorithm that outputs an equivalent $O\left(\frac{n^{2}}{\log (n/k)}\right)$-size CNOT circuit on the constrained graph $G$.
\end{theorem}
\begin{proof}
Theorem \ref{thm:TopoAveDegNewVersion} holds by Lemma \ref{lem:MinSteinerSize} and Lemma \ref{lem:topologicalSteinerK}.
\end{proof}
Lemma \ref{lem:MinSteinerSize} ensures the efficiency of our optimization algorithm, by which we can eliminate one row in one step on average.
\begin{lemma}\label{lem:MinSteinerSize}
Given a connected graph $G(V,E)$ and any integer $k$ such that the summation of the degrees of any $k$ vertices is greater than $n$, the minimum Steiner tree for any $k$ vertices in $G$ has size less than $5k$.
\end{lemma}
This lemma can be obtained directly by applying the technique in Theorem 2.4 of Ali \emph{et al.}~\cite{ali2012upper}. The detailed proof of this lemma is in Appendix \ref{app:ProfMSS}. The core idea of the proof is that two vertices share no common neighbors if the distance between them is at least three. Computing a minimum Steiner tree needs exponential cost, hence we replace it by the 2-approximate minimum Steiner tree in Algorithm \textbf{SBE}, as stated in the following corollary.
\begin{corollary}
Given a connected graph $G(V,E)$ and any integer $k$ such that the summation of the degrees of any $k$ vertices is greater than $n$, the 2-approximate minimum Steiner tree for any $k$ vertices in $G$ has size less than $10k$.
\label{cor:AppSteinerSize}
\end{corollary}
The following lemma serves for Lemma \ref{lem:topologicalSteinerK}. It can be obtained directly from the optimization process of Nash \emph{et al.}~\cite{nash2019quantum}, by which we can add the value of a qubit to the target qubits while keeping the values of the other qubits the same.
\begin{lemma}\cite{nash2019quantum}
For $S\subseteq[n-1]$ and $x\in\{0,1\}^n$, let $y\in \{0,1\}^n$ be defined by $y_j = x_j + x_n$ if $j\in S$, and $y_j = x_j$ otherwise. The transformation $\ket{x_1}\cdots \ket{x_n}\rightarrow \ket{y_1}\cdots \ket{y_n}$ on any connected graph can be implemented by CNOT gates with size $O\pbra{n}$.
\label{lem:SizeSubColSequence}
\end{lemma}
By this lemma, we can eliminate one column of the matrix with $O(n)$ row additions.
\begin{lemma}\label{lem:topologicalSteinerK}
There is a polynomial time algorithm such that, for any CNOT circuit, any integer $k\leq n$, and any connected graph $G(V,E)$ in which every $k$-vertex set has an $O(k)$-size 2-approximate Steiner tree, the algorithm outputs an equivalent CNOT circuit with size $O\left(\frac{n^{2}}{\log (n/k)}\right)$ on the constrained graph $G$.
\end{lemma}
\begin{proof}
Let $s = \log (n/k)/2$.
Given $n$-qubit CNOT circuit $\bm{\mathrm M}\in\mathrm{GL}(n,2)$, the following algorithm uses $O\left({n^2}/{s}\right)$ row additions to transform $\bm{\mathrm M}$ to $\bm{\mathrm I}$ and thus gives an equivalent $O\left({n^{2}}/{s}\right)$ size CNOT circuit.
The algorithm starts with dividing $\bm{\mathrm M}$ into $n/s$ blocks $\left[\bm{\mathrm M}_{1}\cdots \bm{\mathrm M}_{n/s}\right]$, where $\bm{\mathrm M}_i \in \mathbb{F}_{2}^{n\times s}$. Similarly let $\bm{\mathrm I} = \left[\bm{\mathrm I}_{1}\cdots \bm{\mathrm I}_{n/s}\right]$.
Next transform $\bm{\mathrm M}_j$ to $\bm{\mathrm I}_{j}$ for all of $j\in \sbra{n/s}$.
Algorithm \textbf{SBE} states how to transform $\bm{\mathrm M}_1$ to $\bm{\mathrm I}_1$ with row additions.
The process of transforming $\bm{\mathrm M}_j$ to $\bm{\mathrm I}_j$ for $j> 1$ is almost the same as the process of transforming $\bm{\mathrm M}_{1}$ to $\bm{\mathrm I}_{1}$, except that in Step (1) we let
$\bm{\mathrm M}^{(1)}$ be the $((j-1)s + 1)$-th to $js$-th rows of the input matrix $\bm{\mathrm M}_j$, and $\bm{\mathrm M}^{(2)}$ be the rest of the rows.
Now we show that the number of row additions is indeed $O\left(n^2/s\right)$.
Since any $k$ vertices have an $O(k)$-size 2-approximate minimum Steiner tree, the numbers of vertices in the 2-approximate minimum Steiner trees in Steps (2) and (17) of Algorithm \textbf{SBE} are both $O(k)$. Hence the number of row additions is less than
\begin{align*}
\underbrace{O(s \cdot k)}_{\text{Steps (3-5)}} +
\underbrace{O(s\cdot k)}_{\text{Steps (7-9)}} +
\underbrace{O(2^{2s} \cdot k)}_{\text{Step (14)} }+
\underbrace{O(n)}_{\text{Step (17)} } = O(n)
\end{align*}
for $k\leq n$. Thus we need in total
$n/s\times O(n) = O\left({n^2}/s\right)$ row additions.
\end{proof}
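The Gray-code bookkeeping in Algorithm \textbf{SBE} relies only on the standard property that consecutive codes differ in exactly one bit, which is why a single row addition from $\bm{\mathrm M}^{(1)}$ suffices to update the pattern of row $l$ in each iteration. A minimal sketch (our illustration):
\begin{verbatim}
def gray(i):
    """Binary-reflected Gray code, Gray(i) = i xor floor(i/2)."""
    return i ^ (i >> 1)

for i in range(1, 16):
    d = gray(i) ^ gray(i - 1)
    assert d & (d - 1) == 0   # consecutive codes differ in one bit
\end{verbatim}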
The optimized size in Theorem \ref{thm:TopoAveDegNewVersion} is asymptotically tight when $k = n^\varepsilon$ for $0\leq\varepsilon<1$ since $\Omega\pbra{n^2/\log n}$ is the lower bound for any constrained graph by Theorem \ref{thm:LowerboundSize}.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.25]
\pgfmathsetmacro{\S}{0.3/0.5}
\pgfmathsetmacro{\X}{2.2}
\pgfmathsetmacro{\T}{1.5/4*\X}
\pgfmathsetmacro{\Z}{\X-\T}
\fill[gray!80] (0,\Z) rectangle +(2.5*\X+\T,2.5*\X+\T);
\draw[black,dashed] (0,\Z) rectangle +(2.5*\X+\T,2.5*\X+\T);
\draw[black,dashed] (0,2.5*\X) -- (3.5*\X-\Z,2.5*\X);
\draw[black,dashed] (\X,3.5*\X) -- (\X,\Z);
\draw[decorate,decoration={brace,amplitude=6*\S pt}] (0,3.5*\X) -- (\X,3.5*\X) node[midway,yshift=0.6*\S cm,scale=1.5*\S] {$\frac{1}{2}\!\log\frac{n}{k}$};
\draw[decorate,decoration={brace,amplitude=6*\S pt}] (0,2.5*\X) -- (0,3.5*\X) node[midway,xshift=-1*\S cm,scale=1.5*\S] {$\frac{1}{2}\!\log\frac{n}{k}$};
\draw[decorate,decoration={brace,amplitude=8*\S pt,mirror}] (0,\Z) -- (2.5*\X+\T,\Z) node[midway,yshift=-0.6*\S cm,scale=1.5*\S] {$n$};
\node[scale=2*\S] at (3.3*\X,1.25*\X+\T/2+\Z) {$\Rightarrow$};
\begin{scope}[xshift=3.7*\X cm]
\fill[gray!80] (0,\Z) rectangle +(\X,1.5*\X+\T);
\fill[gray!80] (\X,\Z) rectangle +(1.5*\X+\T,1.5*\X+\T);
\fill[gray!80] (\X,\Z+1.5*\X+\T) rectangle +(1.5*\X+\T,\X);
\draw[black,dashed] (0,\Z) rectangle +(2.5*\X+\T,2.5*\X+\T);
\draw[black,very thick] (0,3.5*\X) -- (\X,2.5*\X);
\draw[black,dashed] (0,2.5*\X) -- (3.5*\X-\Z,2.5*\X);
\draw[black,dashed] (\X,3.5*\X) -- (\X,\Z);
\node[scale=2*\S] at (3.3*\X,1.25*\X+\T/2+\Z) {$\Rightarrow$};
\end{scope}
\begin{scope}[xshift=2*3.7*\X cm]
\fill[gray!80] (0,\Z) rectangle +(\X,1.5*\X+\T);
\fill[gray!80] (\X,\Z) rectangle +(1.5*\X+\T,1.5*\X+\T);
\fill[gray!80] (\X,\Z+1.5*\X+\T) rectangle +(1.5*\X+\T,\X);
\fill[gray!30] (0,\Z) rectangle +(\X,0.4*\X);
\draw[black,dashed] (0,\Z) rectangle +(2.5*\X+\T,2.5*\X+\T);
\draw[black,very thick] (0,3.5*\X) -- (\X,2.5*\X);
\draw[black,dashed] (0,2.5*\X) -- (3.5*\X-\Z,2.5*\X);
\draw[black,dashed] (\X,3.5*\X) -- (\X,\Z);
\draw[-{Straight Barb[length=1*\S mm]}] (0.5*\X,\Z+2.2*\X) to (0.5*\X,\Z+0.2*\X) ;
\draw[black,dashed] (0,0.4*\X+\Z) -- (\X,0.4*\X+\Z);
\node[scale=2*\S] at (3.3*\X,1.25*\X+\T/2+\Z) {$\Rightarrow$};
\end{scope}
\begin{scope}[xshift=3*3.7*\X cm]
\fill[gray!80] (0,\Z) rectangle +(\X,1.5*\X+\T);
\fill[gray!80] (\X,\Z) rectangle +(1.5*\X+\T,1.5*\X+\T);
\fill[gray!80] (\X,\Z+1.5*\X+\T) rectangle +(1.5*\X+\T,\X);
\fill[gray!30] (0,\Z) rectangle +(\X,0.4*\X);
\draw[black,dashed] (0,\Z) rectangle +(2.5*\X+\T,2.5*\X+\T);
\draw[black,very thick] (0,3.5*\X) -- (\X,2.5*\X);
\draw[black,dashed] (0,2.5*\X) -- (3.5*\X-\Z,2.5*\X);
\draw[black,dashed] (\X,3.5*\X) -- (\X,\Z);
\draw[-{Straight Barb[length=1*\S mm]}] (0.5*\X,\Z+0.2*\X) to (0.5*\X,\Z+1.2*\X) ;
\draw[-{Straight Barb[length=1*\S mm]}] (0.25*\X,\Z+0.2*\X) to (0.25*\X,\Z+0.7*\X) ;
\draw[-{Straight Barb[length=1*\S mm]}] (0.75*\X,\Z+0.2*\X) to (0.75*\X,\Z+1.7*\X) ;
\draw[black,dashed] (0,0.4*\X+\Z) -- (\X,0.4*\X+\Z);
\node[scale=2*\S] at (3.3*\X,1.25*\X+\T/2+\Z) {$\Rightarrow$};
\end{scope}
\begin{scope}[xshift=4*3.7*\X cm]
\fill[gray!80] (\X,\Z) rectangle +(1.5*\X+\T,1.5*\X+\T);
\fill[gray!80] (\X,\Z+1.5*\X+\T) rectangle +(1.5*\X+\T,\X);
\draw[black,dashed] (0,\Z) rectangle +(2.5*\X+\T,2.5*\X+\T);
\draw[black,very thick] (0,3.5*\X) -- (\X,2.5*\X);
\draw[black,dashed] (0,2.5*\X) -- (3.5*\X-\Z,2.5*\X);
\draw[black,dashed] (\X,3.5*\X) -- (\X,\Z);
\end{scope}
\end{tikzpicture}
\caption{The elimination process of Algorithm \ref{alg:nearfutureAlg}.}
\label{fig:algscols}
\end{figure}
\subsection{Lower bound for size optimization}\label{sec:LowerBound}
The best lower bound of unrestricted CNOT circuit synthesis is $\Omega(n^2 / \log n)$ size by Patel \emph{et al.}~\cite{patel2008optimal}. This lower bound is obtained by counting the number of distinct CNOT circuit with the given number of CNOT gates, which also implies the same lower bound $\Omega(n^2 / \log n)$ for CNOT circuit synthesis on a constrained graph.
We say the quantum circuit $\hat{\bm{\mathrm U}}\in \mathbb{C}^{2^n\times 2^n}$ is an $\varepsilon$-approximation for the quantum circuit $\bm{\mathrm U}\in \mathbb{C}^{2^n\times 2^n}$ if $\vabs{\hat{\bm{\mathrm U}} - \bm{\mathrm U}}_2\leq \varepsilon$.
Here we prove a tighter lower bound as stated in Theorem~\ref{thm:LowerboundSize}. Note that what we are discussing here is a lower bound for general circuits. The technique of the proof is inspired by the counting method from Christofides~\cite{christofides2014asymptotic} and Jiang \emph{et al.}~\cite{jiang2019optimal}.
Before giving a lower bound for general 2-qubit circuits, we first give the same lower bound for CNOT circuits in Lemma \ref{lem:LowerBoundSizeCnot}. Next we generalize from equivalent CNOT circuits to arbitrary 2-qubit circuits with $\varepsilon<\frac{1}{\sqrt{2}}$ approximation in Theorem \ref{thm:LowerboundSize}.
\begin{lemma}
For any connected graph $G(V,E)$ as the topological structure of the quantum system, there exist $n$-qubit CNOT circuits for which any equivalent CNOT circuit needs $\Omega\pbra{n^2/ \log \Delta}$ CNOT gates on graph $G$, where $\Delta$ is the maximum degree of the constrained graph.
\label{lem:LowerBoundSizeCnot}
\end{lemma}
Before giving the proof of Lemma \ref{lem:LowerBoundSizeCnot}, we first define the canonical CNOT circuit.
\begin{definition}[Canonical CNOT circuit]
Consider an $n$-qubit CNOT circuit with $k$ CNOT gates, represented as a sequence of elementary row operations $R_1,R_2, \dots, R_k$. We say it is canonical if and only if the sequence can be partitioned into several non-empty blocks $G_1, G_2,\dots, G_s$ and these blocks satisfy the following properties,
\begin{enumerate}
\item For $1\leq i\leq s$, the index set of matrix in block $G_i$ is disjoint with each other;
\item For $2\leq i\leq s$, for every matrix in the block $G_i$, there exists at least one element of its index set belonging to the index set of some matrix in the previous block $G_{i-1}$.
\end{enumerate}
\label{def:CANONICAL}
\end{definition}
Intuitively, canonicity means that CNOT gates in the same block can be executed simultaneously and no CNOT gate can be moved into the previous block. It is not hard to prove that any CNOT circuit can be transformed into an equivalent canonical CNOT circuit within finitely many steps; the reader can refer to \cite{christofides2014asymptotic} for a detailed proof. Next, we show an upper bound on the number of distinct canonical CNOT circuits, as stated in Lemma \ref{lem:Upper}.
\begin{lemma}
Given the constrained graph $G(V,E)$, there are at most $ 8^k e^{k} \Delta^{k} 2^{n\log n}$ different canonical $n$-qubit CNOT circuits with $k$ CNOT gates, where $\Delta$ is the maximum degree of the graph.
\label{lem:Upper}
\end{lemma}
\begin{proof}
For any $n$-qubit CNOT circuit with $k$ gates $R_1,R_2, \dots, R_k$, we denote its canonical form by $\{G_1, G_2,\dots, G_s\}$, in which the lengths of the blocks are respectively
$ r_1,r_2,\dots,r_s$. We first consider the number of different ways of partitioning, i.e., different choices of $r_1,r_2,\dots, r_s$. It is not hard to see that this number is $2^{k-1}$, since every subset of $[k-1]$ determines a partition of $[k]$ into consecutive blocks.
Next, we derive the upper bound of the number of distinct canonical forms, given the specific partitioning $r_1,r_2,\dots, r_s$.
For block 1, the index sets of the matrices are required to be disjoint. Thus, there are at most $\dbinom{n}{r_1}$ different combinations of $r_1$ indices, and there are at most $\Delta$ choices for the other index of each matrix, since the CNOT gates can only act on nearest-neighbour qubits and the maximum degree of the graph is $\Delta$. All this leaves for block 1 at most $(2\Delta)^{r_1} \dbinom{n}{r_1}$ possible combinations, where the factor 2 is due to the different orders of the indices.
Subsequently, for block 2, each matrix's index set has at least one element intersecting the index sets of block 1, so we need to choose $r_2$ indices from the index sets of block 1. The number of possible combinations is at most $\dbinom{2r_1}{r_2}$. Similarly to block 1, there are at most $(2\Delta)^{r_2} \dbinom{2r_1}{r_2}$ possible combinations.
For the same reason, block $i$ has at most $(2\Delta)^{r_i}\dbinom{2r_{i-1}}{r_{i}}$ possible combinations. In all, the number of distinct canonical forms is at most
\begin{equation}
2^k \Delta^{k} \dbinom{n}{r_1}\dbinom{2r_1}{r_2}\dots \dbinom{2r_{s-1}}{r_s}.
\end{equation}
Since $\dbinom{n}{r_1} < n^{r_1}/r_1! < 2^{n\log n} / r_1! $, and $\dbinom{2r_i}{r_{i+1}} < 2^{r_{i+1}} r_i^{r_{i+1}} / r_{i+1}! $, we can relax the upper bound to
\begin{equation}
\frac{ 4^k \Delta^{k} 2^{n\log n} r_1^{r_2} r_2^{r_3} \cdots r_{s-1}^{r_{s}}}{r_1! r_2! \cdots r_s!}.
\end{equation}
From the Stirling formula, the following inequality holds
\begin{equation}
r_1!r_2!\cdots r_s! \ge \left(\frac{r_1}{e} \right)^{r_1}\left(\frac{r_2}{e} \right)^{r_2}\cdots \left(\frac{r_s}{e} \right)^{r_s}.
\label{eq1}
\end{equation}
Then, applying the rearrangement inequality, we obtain
\begin{equation}
r_1^{r_1} r_2^{r_2} \cdots r_{s}^{r_{s}} \ge r_1^{r_2} r_2^{r_3} \cdots r_{s-1}^{r_{s}} r_{s}^{r_{1}} \ge r_1^{r_2} r_2^{r_3} \cdots r_{s-1}^{r_{s}}~,
\label{eq2}
\end{equation}
where the last inequality holds since $r_{s}\ge 1 $ and $r_{1} \ge 1$. Combining Eqs. (\ref{eq1}) and (\ref{eq2}), we have the following inequality,
\begin{equation}
r_1!r_2!\cdots r_s! \ge e^{-k} r_1^{r_2} r_2^{r_3} \cdots r_{s-1}^{r_{s}}.
\end{equation}
Therefore, we can obtain the desired upper bound by multiplying the number of different partitioning ways.
\begin{equation}
8^k e^{k} \Delta^{k} 2^{n\log n}.
\end{equation}
\end{proof}
In the following, we give the proof of Lemma \ref{lem:LowerBoundSizeCnot}.
\begin{proof}[Proof of Lemma \ref{lem:LowerBoundSizeCnot}]
Since the number of $n$-qubit CNOT circuits equals the number of invertible matrices $\bm{\mathrm M}\in\mathrm{GL}(n,2)$, there are at least $2^{n(n-1)/2}$ distinct $n$-qubit CNOT circuits. By Lemma \ref{lem:Upper}, the number of distinct $n$-qubit CNOT circuits with $k$ CNOT gates is at most $ 8^k e^{k} \Delta^{k} 2^{n\log n}$. Thus if we want to construct all of the $n$-qubit CNOT circuits, $k$ must satisfy
\begin{equation}
k \ge \frac{n(n-1)/2-n\log n}{ \log \Delta +3+ \log e}.
\end{equation}
In other words, there exists some $n$-qubit CNOT circuit for which any equivalent CNOT circuit needs $\Omega\pbra{n^2/ \log \Delta}$ CNOT gates.
\end{proof}
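As an illustrative sanity check of the preceding counting argument (not part of the proof), the following Python snippet compares $|\mathrm{GL}(n,2)|=\prod_{i=0}^{n-1}(2^n-2^i)$ with the lower bound $2^{n(n-1)/2}$ and evaluates the resulting size bound; we assume all logarithms are base 2, and the choice $\Delta=4$ is merely an example value.
\begin{verbatim}
import math

def gl2_order(n):
    # |GL(n,2)| = prod_{i=0}^{n-1} (2^n - 2^i)
    p = 1
    for i in range(n):
        p *= 2**n - 2**i
    return p

for n in [3, 5, 10, 20]:
    lower = 2**(n * (n - 1) // 2)
    assert gl2_order(n) >= lower          # at least 2^{n(n-1)/2} circuits
    delta = 4                             # example maximum degree
    k = (n*(n-1)/2 - n*math.log2(n)) / (math.log2(delta) + 3 + math.log2(math.e))
    print(n, max(0.0, k))                 # bound is only meaningful once positive
\end{verbatim}
For small $n$ the bound is vacuous (negative); it grows as the dominant $\Omega(n^2/\log\Delta)$ term as $n$ increases.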
The lower bound of Lemma \ref{lem:LowerBoundSizeCnot} generalizes to circuits over arbitrary two-qubit gates, as the following theorem shows.
\begin{theorem}
For any connected graph $G(V,E)$ as the topological structure of the quantum system, there exists some $n$-qubit CNOT circuit such that any 2-qubit circuit approximating it to accuracy $\varepsilon< 1/\sqrt{2}$ needs $\Omega\pbra{n^2/ \log \Delta}$ CNOT gates on graph $G$, where $\Delta$ is the maximum degree of the constrained graph.
\label{thm:LowerboundSize}
\end{theorem}
\begin{proof}
For a real number $a\in [-1,1]$, let the $\eta$-discretization of $a$ be $a^{\eta}=\eta \floor{\frac{a}{\eta}}$. Then there are in total $\frac{2}{\eta}$ different $\eta$-discretizations for all continuous numbers $a$ in the interval $[-1,1]$, with $\abs{a^\eta - a}\leq \eta$. Hence there are in total $\pbra{\frac{2}{\eta}}^{32}$ different
$\eta$-discretizations for all 2-qubit unitaries $\bm{\mathrm U}$ in $\mathbb{C}^{4\times 4}$ (which have $32$ real parameters), and the error satisfies $\vabs{\bm{\mathrm U}^\eta - \bm{\mathrm U}}\leq 2\eta$.
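A minimal numerical sketch of this discretization (the helper name below is illustrative):
\begin{verbatim}
import numpy as np

def discretize(a, eta):
    # a^eta = eta * floor(a / eta); by construction 0 <= a - a^eta < eta
    return eta * np.floor(a / eta)

eta = 1e-3
a = np.random.default_rng(1).uniform(-1, 1, 10000)
assert np.all(np.abs(discretize(a, eta) - a) <= eta)
# Each parameter takes one of about 2/eta values in [-1, 1]; a 2-qubit
# unitary has 32 real parameters, giving at most (2/eta)^32 discretizations.
\end{verbatim}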
In the following, we prove that when $\eta = \frac{\varepsilon}{2k}$ and $\varepsilon<\frac{1}{\sqrt{2}}$, the $\eta$-discretizations of any two different CNOT circuits of size $k$ are different.
For any two different CNOT circuits $\bm{\mathrm U}_f, \bm{\mathrm U}_g$ of size $k$, there exists a computational basis state $\ket{x}$ such that
\[
\bm{\mathrm U}_f \ket{x} \ne \bm{\mathrm U}_g\ket{x},
\]
and, since both $\bm{\mathrm U}_f\ket{x}$ and $\bm{\mathrm U}_g\ket{x}$ are computational basis states, we have
\[
\vabs{\bm{\mathrm U}_f \ket{x} - \bm{\mathrm U}_g \ket{x}}_2= \sqrt{2}.
\]
By the definition of $\bm{\mathrm U}_f^\eta$, we have
\[\vabs{\pbra{\bm{\mathrm U}_f^\eta - \bm{\mathrm U}_f}\ket{x}}_2\leq 2k\eta \leq \varepsilon.
\]
Similarly, we have $\vabs{\pbra{\bm{\mathrm U}_g^\eta-\bm{\mathrm U}_g}\ket{x}}_2\leq \varepsilon$. Since $2\varepsilon < \sqrt{2}$ for $\varepsilon<\frac{1}{\sqrt{2}}$, the triangle inequality implies
\begin{equation}
\bm{\mathrm U}_f^\eta \ket{x} \ne \bm{\mathrm U}_g^\eta \ket{x}.
\label{eq:countingError}
\end{equation}
Hence, counting with error $2k\eta$, the number of 2-qubit circuits that provide an $\varepsilon<\frac{1}{\sqrt{2}}$ approximation to all possible circuits of $k$ CNOT gates is at most
\[8^k e^{k} \Delta^{k} 2^{n\log n}\pbra{\frac{2}{\eta}}^{32k}.\]
By Eq. \eqref{eq:countingError}, this count must be at least the number of all possible CNOT circuits, which is greater than $2^{n(n-1)/2}$; hence
$k=\Omega\pbra{n^2/\log \Delta}$.
\end{proof}
\subsection{Experimental results of Algorithm \textbf{ROWCOL}} \label{subsec:SizeExperiment}
\begin{figure}[htbp]
\centering
\begin{minipage}{0.3\textwidth}
\begin{adjustbox}{width=3.5cm, height=3.5cm, keepaspectratio}
\begin{tikzpicture}[font=\small]
\foreach \x in {0,1,2,3}
\foreach \y in {0,1,2}{
\draw (\x,\y)node[circle,fill=white,draw]{}--(\x,\y+1)node[circle,fill=white,draw]{};
\draw (\x,\y)node[circle,fill=white,draw]{}--(\x+1,\y)node[circle,fill=white,draw]{};
}
\foreach \x in {0,1,2,3}
\draw (\x,3)node[circle,fill=white,draw]{}--(\x+1,3)node[circle,fill=white,draw]{};
\foreach \y in {0,1,2}
\draw (4,\y)node[circle,fill=white,draw]{}--(4,\y+1)node[circle,fill=white,draw]{};
\draw (1,3)node[circle,fill=white,draw]{}--(2,2)node[circle,fill=white,draw]{};
\draw (2,3)node[circle,fill=white,draw]{}--(1,2)node[circle,fill=white,draw]{};
\draw (3,3)node[circle,fill=white,draw]{}--(4,2)node[circle,fill=white,draw]{};
\draw (4,3)node[circle,fill=white,draw]{}--(3,2)node[circle,fill=white,draw]{};
\draw (0,2)node[circle,fill=white,draw]{}--(1,1)node[circle,fill=white,draw]{};
\draw (1,2)node[circle,fill=white,draw]{}--(0,1)node[circle,fill=white,draw]{};
\draw (2,2)node[circle,fill=white,draw]{}--(3,1)node[circle,fill=white,draw]{};
\draw (3,2)node[circle,fill=white,draw]{}--(2,1)node[circle,fill=white,draw]{};
\draw (1,1)node[circle,fill=white,draw]{}--(2,0)node[circle,fill=white,draw]{};
\draw (2,1)node[circle,fill=white,draw]{}--(1,0)node[circle,fill=white,draw]{};
\draw (3,1)node[circle,fill=white,draw]{}--(4,0)node[circle,fill=white,draw]{};
\draw (4,1)node[circle,fill=white,draw]{}--(3,0)node[circle,fill=white,draw]{};
\draw (2,-1)node[]{(a)};
\end{tikzpicture}
\end{adjustbox}
\end{minipage}%
\begin{minipage}{0.3\textwidth}
\begin{adjustbox}{width=3.5cm, height=3.5cm, keepaspectratio}
\begin{tikzpicture}[font=\small]
\foreach \x in {0,1,2,...,3}
\foreach \y in {2,3}{
\draw (\x,\y)node[circle,fill=white,draw]{}--(\x+1,\y)node[circle,fill=white,draw]{};
}
\draw (0,0)node[circle,fill=white,draw]{}--(0,1)node[circle,fill=white,draw]{};
\draw (0,2)node[circle,fill=white,draw]{}--(0,3)node[circle,fill=white,draw]{};
\draw (0,0)node[circle,fill=white,draw]{}--(1,0)node[circle,fill=white,draw]{};
\draw (1,0)node[circle,fill=white,draw]{}--(2,0)node[circle,fill=white,draw]{};
\draw (0,1)node[circle,fill=white,draw]{}--(1,1)node[circle,fill=white,draw]{};
\draw (1,1)node[circle,fill=white,draw]{}--(2,1)node[circle,fill=white,draw]{};
\draw (2,2)node[circle,fill=white,draw]{}--(2,1)node[circle,fill=white,draw]{};
\draw (2,2)node[circle,fill=white,draw]{}--(3,2)node[circle,fill=white,draw]{};
\draw (3,2)node[circle,fill=white,draw]{}--(4,2)node[circle,fill=white,draw]{};
\draw (4,2)node[circle,fill=white,draw]{}--(4,1)node[circle,fill=white,draw]{};
\draw (4,1)node[circle,fill=white,draw]{}--(4,0)node[circle,fill=white,draw]{};
\draw (3,0)node[circle,fill=white,draw]{}--(4,0)node[circle,fill=white,draw]{};
\draw (3,0)node[circle,fill=white,draw]{}--(3,1)node[circle,fill=white,draw]{};
\draw (2,-1)node[]{(b)};
\end{tikzpicture}
\end{adjustbox}
\end{minipage}
\caption{The topological structures (a) IBMQ20 and (b) T20.}
\label{fig:dif_graph}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{small}
\begin{tabular}{c}
\xymatrix @*=<0em> @C=1em @R=0.7em {
\lstick{ 1} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 2} & \ctrl{1} & \qw & \qw & \qw\\
\lstick{ 3} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw & \qw\\
\lstick{ 4} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw\\
\lstick{ 5} & \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw
} \\
(a)
\end{tabular}\qquad
\begin{tabular}{c}
\xymatrix @*=<0em> @C=1em @R=1.4em {
\lstick{ 1} & \qw & \qw & \qw\\
\lstick{ 2} & *=<0em>{\times} \qw & \qw & \qw\\
\lstick{ 3} & \qw \qwx& *=<0em>{\times} \qw & \qw\\
\lstick{ 4} & *=<0em>{\times} \qw \qwx& *=<0em>{\times} \qw \qwx& \qw\\
\lstick{ 5} & \qw & \qw & \qw
&
}\\
(b)
\end{tabular}
\begin{tabular}{c}
\begin{tikzpicture}[scale = 0.8]
\node[draw, thick,circle] at (0,0) (l) {1};
\node[draw, thick, circle]
at (1.3, 0)(c){4};
\node[draw, thick, circle]
at (2.6,0) (r) {5};
\draw (l)--(c);
\draw (c)--(r);
\node[draw, thick, circle]
at (1.3,-1.3) (b1) {3};
\node[draw, thick, circle]
at (1.3,-2.6) (b2) {2};
\draw (c)--(b1);
\draw (b1)--(b2);
\end{tikzpicture}
\\
(c)
\end{tabular}
\end{small}
\caption{(a) A staircase of CNOT circuit;
(b) A SWAP circuit;
(c) The topological constraint of the quantum circuit.}
\label{fig:stairCNOT}
\end{figure}
In this subsection, we compare experimental simulations of Algorithm \textbf{ROWCOL} with the algorithms of Refs. \cite{kissinger2019cnot,nash2019quantum}.
The simulations are run on the IBMQ20 graph and on a T-like graph (T20); the topological structures of IBMQ20 and T20 are depicted in Fig. \ref{fig:dif_graph}.
We choose original circuits of different sizes, from 20 to 800, where the number of qubits is 20.
For each input size, 200 CNOT circuits are randomly sampled. All three algorithms, \textbf{ROWCOL} and the algorithms in \cite{kissinger2019cnot,nash2019quantum}, are used to optimize these CNOT circuits, and the average optimized size over the 200 random circuits is used to assess the quality of each algorithm.
Since we want to show that our algorithm performs well on most kinds of CNOT circuits, not only on some specific kinds, we randomly sample CNOT circuits: a random CNOT circuit is sampled by repeatedly picking two adjacent qubits at random and adding a CNOT gate on them until the size meets the requirement (see the sketch below). To give a more explicit comparison, we also simulate these algorithms on two specific CNOT circuits; in both cases, the simulation results of our algorithm are superior to those of the other algorithms.
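The following Python sketch illustrates this sampling procedure; the representation of the constraint graph as an edge list and the encoding of a gate as a (control, target) pair are our own illustrative choices.
\begin{verbatim}
import random

def sample_cnot_circuit(edges, size, seed=0):
    rng = random.Random(seed)
    circuit = []
    while len(circuit) < size:
        a, b = rng.choice(edges)      # two adjacent qubits
        if rng.random() < 0.5:        # random control/target direction
            a, b = b, a
        circuit.append((a, b))        # CNOT with control a, target b
    return circuit

# Example: a 5-qubit path graph.
print(sample_cnot_circuit([(0, 1), (1, 2), (2, 3), (3, 4)], size=8))
\end{verbatim}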
The results show that our algorithm generates the smallest CNOT circuit for 82.3\% of the input circuits.
In Fig.~\ref{fig:exp_size}, we compare the optimized size of Algorithm \textbf{ROWCOL} with algorithms in Refs. \cite{kissinger2019cnot,nash2019quantum}.
The $y$ axis shows the average optimized size of the 200 random circuits with the same input size after performing these algorithms.
The experimental results of Algorithm \textbf{ROWCOL} are significantly superior to those of the algorithm of Nash \emph{et al.} for all physical devices, which fits the analysis, since their algorithm has a larger constant factor.
Our algorithm also performs better than the algorithm of Kissinger~\emph{et al.} on most generated random circuits for constrained graphs with a Hamiltonian path, as shown in Fig.~\ref{fig:exp_size} (a).
In particular, we obtain a better optimized size for $82.3\%$ of the random circuits on IBMQ20 and for $99.9\%$ on T20.
When the graph does not have a Hamiltonian path, our algorithm has obvious advantages over that of Kissinger \emph{et al.}~\cite{kissinger2019cnot} (as in Fig.~\ref{fig:exp_size} (b)).
\begin{figure}[htbp]
\begin{center}
\begin{minipage}{0.4\textwidth}
\begin{adjustbox}{width=8cm, height=5.5cm, keepaspectratio}
\begin{tikzpicture}[font=\small]
\begin{axis}[legend pos = north west,
title style={at={(0.5,-0.25)},anchor=north,yshift=-0.1},
title={(a) IBMQ20},
xlabel={The size of the original circuit},
ylabel={Optimized size}]
\addplot
table
{
X Y
20 20
50 50
100 100
200 200
300 300
400 400
500 500
600 600
700 700
800 800
};
\addplot
table
{
X Y
20 22.62
50 70.56
100 156.61
200 261.38
300 306.01
400 324.49
500 329.36
600 333.43
700 333.6
800 334.4
};
\addplot
table
{
X Y
20 26.88
50 92.55
100 226.94
200 405.78
300 478.32
400 515.13
500 535.52
600 532.11
700 536.12
800 531.34
};
\addplot
table
{
X Y
20 26.71
50 79.47
100 170.27
200 276.02
300 321.36
400 340.23
500 349.39
600 351.5
700 351.86
800 353.79
};
\addlegendentry{$y=x$}
\addlegendentry{Alg \textbf{ROWCOL}}
\addlegendentry{Nash \emph{et al.}}
\addlegendentry{Kissinger \emph{et al.}}
\end{axis}
\end{tikzpicture}
\end{adjustbox}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\begin{adjustbox}{width=8cm, height=5.5cm, keepaspectratio}
\begin{tikzpicture}[font=\small]
\begin{axis}[legend pos = north west,
title style={at={(0.5,-0.25)},anchor=north,yshift=-0.1},
title={(b) T20},
xlabel={The size of the original circuit},
ylabel={Optimized size}]
\addplot
table
{
X Y
20 20
50 50
100 100
200 200
300 300
400 400
500 500
600 600
700 700
800 800
};
\addplot
table
{
X Y
20 15.92
50 35.05
100 64.31
200 113.08
300 161.42
400 204.01
500 243.52
600 281.29
700 317.54
800 348.5
};
\addplot
table
{
X Y
20 19.79
50 52.27
100 121.22
200 259.92
300 411.65
400 534.03
500 649.22
600 730.86
700 813.29
800 865.76
};
\addplot
table{
X Y
20 42.03
50 68.17
100 110.34
200 175.74
300 242.1
400 299.26
500 348.44
600 391.04
700 429.69
800 455.07
};
\addlegendentry{$y=x$}
\addlegendentry{Alg \textbf{ROWCOL}}
\addlegendentry{Nash \emph{et al.}}
\addlegendentry{Kissinger \emph{et al.}}
\end{axis}
\end{tikzpicture}
\end{adjustbox}
\end{minipage}
\caption{The experimental results of Algorithm \textbf{ROWCOL}, algorithms in~\cite{nash2019quantum} and ~\cite{kissinger2019cnot} under topological constraint graphs: (a) IBM Q20 and (b) T20.
As a contrast, we also draw the curve $y=x$ in the figure.}
\label{fig:exp_size}
\end{center}
\end{figure}
The above optimization results for randomly generated CNOT circuits show the great advantage of Algorithm \textbf{ROWCOL} for general CNOT circuits.
In the following, we run Algorithm \textbf{ROWCOL} and the algorithms of Refs. \cite{kissinger2019cnot,nash2019quantum} on two specific CNOT circuits to show the applicability of our algorithm; the comparison results are consistent with the average case.
The first example is a staircase CNOT circuit (as shown in Fig.~\ref{fig:stairCNOT} (a)). The staircase CNOT circuit is a crucial part in the quantum circuits of
error detection and correction~\cite{linke2017fault,lu2008experimental,salas2004effect}, simulation of quantum chemistry~\cite{tranter2018comparison,mccaskey2019quantum}, Hamiltonian simulation~\cite{gui2020term} and near-term variational algorithms~\cite{gokhale2019partial}. Here we simulate Algorithm \textbf{ROWCOL} for the particular staircase CNOT circuit in Fig. \ref{fig:stairCNOT} (a) under the topological constraint in Fig. \ref{fig:stairCNOT} (c).
Fig. \ref{fig:OptimizedStair} compares the optimized circuits of Algorithm \textbf{ROWCOL} and of the algorithms in Refs. \cite{kissinger2019cnot,nash2019quantum}. The optimized size of Algorithm \textbf{ROWCOL} is 3, whereas the algorithms of Refs. \cite{kissinger2019cnot} and \cite{nash2019quantum} need $7$ and $15$ CNOT gates, respectively.
The second example is a SWAP circuit (as shown in Fig.~\ref{fig:stairCNOT} (b)), which is widely used in quantum circuit implementations of general unitaries. Fig. \ref{fig:OptimizedSWAP} gives the optimized circuits of Algorithm \textbf{ROWCOL} and of the algorithms in Refs. \cite{kissinger2019cnot,nash2019quantum} for the SWAP circuit in Fig.~\ref{fig:stairCNOT} (b) under the constrained graph in Fig.~\ref{fig:stairCNOT} (c).
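As background for this benchmark, recall that over GF(2) a single SWAP decomposes into three CNOT gates; the small check below (illustrative only) verifies this identity on all two-bit inputs.
\begin{verbatim}
def cnot(x, c, t):
    x = list(x)
    x[t] ^= x[c]     # target flips iff control is 1
    return x

for xa in (0, 1):
    for xb in (0, 1):
        x = [xa, xb]
        for c, t in [(0, 1), (1, 0), (0, 1)]:
            x = cnot(x, c, t)
        assert x == [xb, xa]
print("SWAP = CNOT(0,1) CNOT(1,0) CNOT(0,1) on all basis states")
\end{verbatim}
This is why a SWAP circuit is a natural stress test: each SWAP already costs three CNOT gates before any routing overhead.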
\begin{figure}[htbp]
\centering
\begin{small}
\begin{tabular}{c}
\xymatrix @*=<0em> @C=0.6em @R=0.6em {
\lstick{ 1} & \qw & \qw & \qw & \qw\\
\\
\lstick{ 2} & \ctrl{1} & \qw & \qw & \qw\\
\lstick{ 3} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw & \qw\\
\lstick{ 4} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw\\
\lstick{ 5} & \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw
}
\\
(a)
\end{tabular}
\quad
\begin{tabular}{c}
\xymatrix @*=<0em> @C=0.6em @R=0.6em {
\lstick{ 1} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw\\
\\
\lstick{ 2} & \ctrl{1} & \qw & \qw & \qw & \ctrl{1} & \ctrl{1} & \qw & \qw\\
\lstick{ 3} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw\\
\lstick{ 4} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw\\
\lstick{ 5} & \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw & \qw & \qw & \qw
}
\\
(b)
\end{tabular}\quad
\begin{tabular}{c}
\xymatrix @*=<0em> @C=0.6em @R = 0.6em{
\lstick{ 1} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw\\
\\
\lstick{ 2} & \qw & \ctrl{1} & \qw & \qw & \qw & \ctrl{1} & \qw & \qw & \qw & \qw & \qw & \qw & \ctrl{1} & \qw & \qw\\
\lstick{ 3} & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw & \ctrl{1} & \qw & \ctrl{1} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw\\
\lstick{ 4} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw\\
\lstick{5} & \qw & \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw\\
}
\\
(c)
\end{tabular}
\end{small}
\caption{Optimized circuits for the staircase CNOT circuit in Fig. \ref{fig:stairCNOT} (a) under the constrained graph in Fig. \ref{fig:stairCNOT} (c).
The optimized size of Algorithm \textbf{ROWCOL}, Refs. \cite{kissinger2019cnot} and \cite{nash2019quantum} are 3, 7 and 15 respectively, as shown in (a), (b) and (c).
}
\label{fig:OptimizedStair}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{small}
\begin{tabular}{c}
\xymatrix @*=<0em> @C=0.6em @R = 0.6em{
\lstick{ 1} & \qw & \qw & \qw & \qw & \qw & \qw & \qw \\
\lstick{ 2} & \qw & \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw\\
\lstick{ 3} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{-1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{-1} & \qw \\
\lstick{ 4} & \ctrl{-1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{-1} & \qw & \qw & \qw & \qw \\
\lstick{ 5} & \qw & \qw & \qw & \qw & \qw & \qw & \qw
}
\\
(a)
\end{tabular}\qquad
\begin{tabular}{c}
\xymatrix @*=<0em> @C=0.6em @R = 0.6em{
\lstick{ 1} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw\\
\lstick{ 2} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw\\
\lstick{ 3} & \ctrl{-1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{-1} & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{-1} & \qw \\
\lstick{ 4} & \qw & \ctrl{-1} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{-1} & \qw & \qw & \qw\\
\lstick{ 5} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw
}
\\
(b)
\end{tabular}
\end{small}
\caption{Optimized circuits for the SWAP circuit in Fig. \ref{fig:stairCNOT} (b) under the constrained graph in Fig. \ref{fig:stairCNOT} (c).
The optimized sizes of Algorithm \textbf{ROWCOL} and of the algorithms in Refs. \cite{kissinger2019cnot} and \cite{nash2019quantum} are 6 (as in (a)) and 7 (both algorithms yield the circuit in (b)), respectively.}
\label{fig:OptimizedSWAP}
\end{figure}
\section{Depth optimization of a CNOT circuit}\label{sec:DepthOpt}
Due to decoherence in near-term quantum devices, a quantum computing task should be finished in a short time.
Although Algorithm \textbf{ROWCOL} can also be used to optimize the depth of any given CNOT circuit, the depth of the optimized circuit is almost the same as its size.
It is known that ancillas can reduce the depth of a circuit to a great extent, and a series of works aims to reduce the depth of CNOT circuits by designing optimized circuits with ancillas~\cite{moore2001parallel,jiang2019optimal,brown2011ancilla}. In this section, we first propose a depth optimization algorithm on grids with a limited number of ancillas, and then show by numerical experiments the great improvement of the optimized depth compared to other existing algorithms.
\subsection{Depth optimization algorithm for the near-term quantum devices}\label{subsec:DepthOptNear}
Here we optimize the depth of CNOT circuits by introducing a limited number of ancillas on a $2$-dimensional grid. We then generalize the result to any constant $d$-dimensional grid.
\begin{theorem}\label{thm:Depth2Grid}
Given $m_1m_2-n$ ancillas, the depth of any $n$-qubit CNOT circuit can be optimized to $O\pbra{\frac{n^2}{\min \cbra{m_1,m_2}}}$ under the $m_1\times m_2$ grid where $3n \leq m_1m_2\leq n^2$.
\end{theorem}
This theorem gives a trade-off between depth and ancillas for CNOT circuits under the grid topological structure. The depth can be optimized to $O(n)$ when $m_1=m_2=n$. It is easy to check that there exists a CNOT circuit whose optimized depth is $\Omega\pbra{n}$ on the $n\times n$ grid; hence our optimized depth is asymptotically tight in this case.
The main idea of Theorem \ref{thm:Depth2Grid} is to divide the output of the CNOT circuit into several intermediate results stored in the ancillas, and then to combine the intermediate results to generate the output and restore the ancillas.
Before giving the algorithm that establishes Theorem \ref{thm:Depth2Grid}, we state two lemmas which show how to copy and add inputs on the $d$-dimensional grid; one can easily check that the lower bound for these operations on the $(m_1\times m_2\times \cdots \times m_d)$ grid is also $\Omega\pbra{\sum_{j=1}^d m_j}$.
\begin{lemma}
Let $i\in[m]$, integer $m>1$ and $\prod_{i = 1}^d m_i = m$.
The copy operation $|0\rangle^{\otimes (i-1)}|x\rangle|0\rangle^{\otimes(m-i)} \rightarrow|x\rangle^{\otimes m}$ on the $(m_1\times m_2 \times \cdots \times m_d)$ grid with $m$ vertices can be implemented by CNOT gates with depth at most $O\pbra{\sum_{j=1}^d m_j}$, where $x\in\{0,1\}$.
\label{lem:CopyGrid}
\end{lemma}
The following lemma gives a tight $O\pbra{dm^{1/d}}$-depth construction for the addition operation on the $d$-dimensional grid.
\begin{lemma}
Let $S\subseteq [m-1]$, integer $m>1, y = \sum_{i\in S} x_i \mod2$, and $\prod_{i=1}^{d} m_{i} = m$.
The addition operation $|x_1\rangle\cdots|x_{m-1}\rangle |0\rangle\rightarrow |x_1\rangle\cdots|x_{m - 1}\rangle|y\rangle$ on the above grid can be implemented by CNOT gates with depth at most $O\pbra{\sum_{i = 1}^{d}m_{i}}$, where $x_i\in\{0,1\}$ and $|y\rangle$ can be arbitrary vertex in the grid.
\label{lem:AddGrid}
\end{lemma}
We prove this lemma by constructing a tree rooted at vertex $y$ with depth $O(\sum_{i=1}^dm_i)$ in the $d$-dimensional grid, and then dividing the tree into disjoint paths to parallelize the CNOT gates.
See Appendix \ref{app:CopyAdd} for the proofs of Lemmas \ref{lem:CopyGrid} and \ref{lem:AddGrid}.
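For the special case of a path, the copy operation of Lemma \ref{lem:CopyGrid} reduces to a chain of nearest-neighbour CNOT gates; the sketch below (path graph only, not the full grid construction of the appendix) fans a bit out to all $m$ vertices in depth $m-1$.
\begin{verbatim}
def copy_on_path(x, m):
    cells = [x] + [0] * (m - 1)
    depth = 0
    for i in range(m - 1):
        cells[i + 1] ^= cells[i]   # CNOT from vertex i to vertex i+1
        depth += 1
    return cells, depth

cells, depth = copy_on_path(1, 6)
assert cells == [1] * 6 and depth == 5
\end{verbatim}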
\begin{figure}[htbp]
\begin{tikzpicture}[scale=0.3]
\centering
\pgfmathsetmacro{\S}{0.35/0.5}
\pgfmathsetmacro{\X}{5}
\pgfmathsetmacro{\T}{3/8*\X}
\pgfmathsetmacro{\Z}{\X-\T}
\begin{scope}[xshift=0]
\foreach \x in {0,...,8}{
\foreach \y in {0,...,8}{
\fill [fill=gray!50] (\x,\y) circle[radius=8*\S pt];
}
}
\foreach \x in {0,...,1}{
\foreach \y in {0,...,8}{
\fill [fill=black] (\x,\y) circle[radius=8*\S pt];
}
}
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,8){$x_1$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,6.75){$x_2$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,3.5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,1.5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,0){$x_{m_1}$};
\draw[decorate,decoration={brace,amplitude=8*\S pt},xshift=-2 cm] (0,0) -- (0,8) node[midway,xshift=-1*\S cm,scale=1.5*\S] {$m_1$};
\draw[decorate,decoration={brace,amplitude=8*\S pt},yshift=-0.2 cm] (1,0)--(0,0) node[midway,yshift=-0.6*\S cm,scale=1.5*\S] {$\frac{n}{m_1}$};
\draw[decorate,decoration={brace,amplitude=8*\S pt},yshift=0.2cm] (2,8)--(6,8) node[midway,yshift=0.7*\S cm,xshift=0.3*\S cm,scale=1.5*\S] {$s$};
\draw[decorate,decoration={brace,amplitude=8*\S pt},yshift=-0.2 cm] (8,0)--(7,0) node[midway,yshift=-0.6*\S cm,scale=1.5*\S] {$\frac{n}{m_1}$};
\draw[black,very thin](1.5, -1)[dashed] -- (1.5 ,9);
\draw[black,very thin](6.5, -1)[dashed] -- (6.5 ,9);
\node[scale=1.2*\S,yshift=-0.6 cm] at(4,-2){$(a)$};
\node[scale=2*\S] at (2*\X,\X){$\xLongrightarrow{\small\textcircled{\tiny 1}}$};
\end{scope}
\begin{scope}[xshift=2.5*\X cm]
\foreach \x in {0,...,8}{
\foreach \y in {0,...,8}{
\fill [fill=gray!50] (\x,\y) circle[radius=8*\S pt];
}
}
\foreach \y in {0,...,8}{
\fill[fill = black] (0,\y) circle[radius = 8 * \S pt];
}
\foreach \y in {0,...,8}{
\foreach \x in {2,...,6}{
\fill [fill=black] (\x,\y) circle[radius=8*\S pt];
\draw[black,very thin](0,\y)--(6,\y);
}
}
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,8){$x_1$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,6.75){$x_2$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,3.5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,1.5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,0){$x_{m_1}$};
\node[scale=1.2*\S,yshift=0.5 cm] at(2.5,8){$x_1$};
\node[scale=1.2*\S,yshift=0.5 cm] at(4.25,8){$\cdots$};
\node[scale=1.2*\S,yshift=0.5 cm] at(6,8){$x_1$};
\node[scale=1.2*\S,yshift=-0.6 cm] at(3,0){$x_{m_1}$};
\node[scale=1.2*\S,yshift=-0.6 cm] at(4.25,0){$\cdots$};
\node[scale=1.2*\S,yshift=-0.6 cm] at(6,0){$x_{m_1}$};
\draw[decorate,decoration={brace,amplitude=8*\S pt},yshift=2cm] (3,8)--(6,8) node[midway,yshift=0.7*\S cm,xshift=0.3*\S cm,scale=1.5*\S] {work space};
\draw[black,very thin](1.5, -1)[dashed] -- (1.5 ,9);
\draw[black,very thin](6.5, -1)[dashed] -- (6.5 ,9);
\node[scale=2*\S] at (2*\X,\X){$\xLongrightarrow{\small\textcircled{\tiny 2}}$};
\node[scale=1.2*\S,yshift=-0.6 cm] at(4,-2){$(b)$};
\end{scope}
\begin{scope}[xshift=5.0*\X cm]
\foreach \x in {0,...,8}{
\foreach \y in {0,...,8}{
\fill [fill=gray!50] (\x,\y) circle[radius=8*\S pt];
}
}
\foreach \y in {0,...,8}{
\foreach \x in {2,...,6}{
\fill [fill=black] (\x,\y) circle[radius=8*\S pt];
\draw[black,very thin](\x,0)--(\x,8);
}
}
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,8){$x_1$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,6.75){$x_2$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,3.5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,1.5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,0){$x_{m_1}$};
\node[scale=1.2*\S,yshift=0.5 cm] at(2,8){$\bm{\mathrm M}_{1,1}x_1$};
\node[scale=1.2*\S,yshift=0.5 cm] at(4.25,8){$\cdots$};
\node[scale=1.2*\S,yshift=0.5 cm] at(6,8){$\bm{\mathrm M}_{s1}x_1$};
\node[scale=1.2*\S,yshift=-0.6 cm] at(2,0){$z_1$};
\node[scale=1.2*\S,yshift=-0.6 cm] at(3.5,0){$z_2$};
\node[scale=1.2*\S,yshift=-0.6 cm] at(6,0){$\cdots$};
\draw[decorate,decoration={brace,amplitude=8*\S pt},yshift=2cm] (3,8)--(6,8) node[midway,yshift=0.7*\S cm,xshift=0.3*\S cm,scale=1.5*\S] {work space};
\draw[black,very thin](1.5, -1)[dashed] -- (1.5 ,9);
\draw[black,very thin](6.5, -1)[dashed] -- (6.5 ,9);
\node[scale=2*\S] at (2*\X,\X){$\xLongrightarrow{\cdots}$};
\node[scale=1.2*\S,yshift=-0.6 cm] at(4,-2){$(c)$};
\end{scope}
\begin{scope}[xshift=7.7*\X cm]
\foreach \x in {0,...,8}{
\foreach \y in {0,...,8}{
\fill [fill=gray!50] (\x,\y) circle[radius=8*\S pt];
}
}
\foreach \y in {0,...,8}{
\foreach \x in {7,...,8}{
\fill[fill = black] (\x, \y) circle[radius = 8 * \S pt];
}
}
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,8){$x_1$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,6.75){$x_2$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,3.5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,1.5){$\vdots$};
\node[scale=1.2*\S,xshift=-0.4 cm] at(0,0){$x_{m_1}$};
\node[scale=1.2*\S,yshift=0.4 cm] at(2,8){$0$};
\node[scale=1.2*\S,yshift=0.4 cm] at(4.25,8){$\cdots$};
\node[scale=1.2*\S,yshift=0.4 cm] at(6,8){$0$};
\node[scale=1.2*\S,yshift=0.4 cm] at(9,7){$y_1$};
\node[scale=1.2*\S,yshift=0.4 cm] at(9,3){$\vdots$};
\node[scale=1.2*\S,yshift=0.4 cm] at(9,5){$\vdots$};
\node[scale=1.2*\S,yshift=0.4 cm] at(9,1){$\vdots$};
\node[scale=1.2*\S,yshift=0.4 cm] at(9,-1){$y_{m_1}$};
\draw[black,very thin](1.5, -1)[dashed] -- (1.5 ,9);
\draw[black,very thin](6.5, -1)[dashed] -- (6.5 ,9);
\node[scale=1.2*\S,yshift=-0.6 cm] at(4,-2){$(d)$};
\end{scope}
\end{tikzpicture}
\caption{CNOT circuit construction process of the algorithm with $O(\frac{n^2}{\min(m_1, m_2)})$ depth on the $m_1\times m_2$ grid with $3n\leq m_1m_2\leq n^2$ total vertices, where $s:=m_2 - \frac{2n}{m_1}$.}
\label{fig:2dgrid}
\end{figure}
In the following, we give the algorithm for Theorem \ref{thm:Depth2Grid}.
Let $y:= y_1\cdots y_n\in\cbra{0,1}^n$ be the output; then $y_i =\sum_{j}\bm{\mathrm M}_{ij}x_j$. We first divide the summation into several parts. Let $z_{ij}:= \sum_{k=(j-1)m_1 + 1}^{j m_1} \bm{\mathrm M}_{ik}x_k$, where $i\in[n]$ and $j \in [n/m_1]$ (here we suppose $n/m_1$ is an integer; the general case follows easily). It is easy to check that the $i$-th output qubit satisfies $y_i = \sum_{j}z_{ij}$. Let $s := m_2 - 2n/m_1$, and let the coordinate $(i,j)$ denote the $i$-th row and $j$-th column of the $m_1\times m_2$ grid.
Algorithm \textbf{DepAncGrid} implements the transformation \[\pbra{x,0^{\otimes(m-2n)},0^{\otimes n}}\xrightarrow{U_{\bm{\mathrm M}}} \pbra{x, 0^{\otimes (m-2n)}, \bm{\mathrm M} x}.\]
Hence, the transformation $\pbra{x, 0^{\otimes (m-n)}}\rightarrow \pbra{\bm{\mathrm M} x, 0^{\otimes (m-n)}}$ can be implemented by first performing
\begin{equation}
\pbra{x,0^{m-2n},0^{ n}}\xrightarrow{U_{\bm{\mathrm M}}} \pbra{x, 0^{m-2n}, \bm{\mathrm M} x},
\end{equation}
and then performing
\begin{equation}
\pbra{x,0^{m-2n},\bm{\mathrm M} x}\xrightarrow{U_{\bm{\mathrm M}^{-1}}} \pbra{x\oplus\bm{\mathrm M}^{-1}\bm{\mathrm M} x, 0^{m-2n}, \bm{\mathrm M} x} = \pbra{0^{m-n}, y}.
\end{equation}
Finally, move $y_j$ to the first $n/m_1$ columns for all $j$ by SWAP gates.
Hence we obtain an equivalent parallelized CNOT circuit for any given CNOT circuit; we depict this process in Fig. \ref{fig:2dgrid}.
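The following GF(2) sketch (with an illustrative Gauss-Jordan inverse; not the circuit-level implementation) verifies the compute-uncompute identity: applying $y \leftarrow y \oplus \bm{\mathrm M}x$ and then $x \leftarrow x \oplus \bm{\mathrm M}^{-1}y$ maps $(x,0)$ to $(0,\bm{\mathrm M}x)$.
\begin{verbatim}
import numpy as np

def gf2_inv(M):
    # Gauss-Jordan elimination over GF(2); raises StopIteration if singular.
    n = M.shape[0]
    A = np.concatenate([M % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
    return A[:, n:]

rng = np.random.default_rng(0)
n = 6
while True:                            # rejection-sample an invertible M
    M = rng.integers(0, 2, (n, n))
    try:
        Minv = gf2_inv(M)
        break
    except StopIteration:
        pass

x0 = rng.integers(0, 2, n)
x, y = x0.copy(), np.zeros(n, dtype=int)
y = (y + M @ x) % 2                    # stage 1: (x, 0) -> (x, M x)
x = (x + Minv @ y) % 2                 # stage 2: x ^= M^{-1} M x, so x -> 0
assert np.all(x == 0) and np.all(y == (M @ x0) % 2)
\end{verbatim}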
\begin{algorithm}[h]
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Matrix $\bm{\mathrm M}\in \cbra{0,1}^{n\times n}$, $m_1\times m_2$ grid, $s:=m_2 - 2n/m_1$.}
\Output{Optimized CNOT circuit.}
\emph{Place input $x_1,\dots, x_n$ in the first $n/m_1$ columns sequentially of the grid \tcp*{$x_1,\ldots, x_{m_1}$ in the first column, and so on, as in Fig. \ref{fig:2dgrid}}}
\For{$l\leftarrow 1$\KwTo $n/m_1$}{
\emph{Copy all of $x_i$ in the $l$-th column to the columns $j$ for $n/m_1+1\leq j \leq n/m_1 + s$ in the same row as $x_i$}\;
\For{$j\leftarrow s$ \textbf{down} \KwTo $2$}{
\If{ $\bm{\mathrm M}_{j,i}$ equals $0$ }{
\emph{
Perform CNOT$_{a,b}$, where qubit $a$ is in coordinate $(i, n/m_1 + j-1)$ and $b$ is in coordinate $(i,n/m_1 + j)$ for $i\in[m_1]$ in parallel}\;
}
}
\For{$1\leq j \leq n/s$}{
\For{$(j-1)s+1\leq k\leq js$ in parallel}{
\emph{Add all values of each column $k$ to the last row to generate $z_{k,1}$ in the coordinate $(m_1, n/m_1 + k)$}\;
}
\emph{Add $z_{(j-1)s+1,1},\dots, z_{js,1}$ to the mirror symmetric coordinate}\;
}
\emph{Restore the ancillas in columns $j$ for $n/m_1+1\leq j \leq n/m_1 + s$}\;
}
\caption*{\textbf{DepAncGrid}: Depth optimization algorithm under $m_1\times m_2$ grid}
\end{algorithm}
The total depth, i.e., the number of layers of parallelized operations in Algorithm \textbf{DepAncGrid}, equals
\[
c\frac{n}{m_1}\pbra{\frac{n}{s}\pbra{m_1 + m_2}}=O\pbra{\frac{n^2}{\min(m_1,m_2)}},
\]
for a suitable constant $c$.
Theorem \ref{thm:Depth2Grid} can be generalized to the constant $d$-dimensional grid. Specifically, the depth of any CNOT circuit can be optimized to $O\pbra{\frac{n^2\pbra{m_1 + \cdots + m_d}}{m_1\cdots m_d}}$ on the $m_1\times \cdots \times m_d$ grid, where $d$ is a constant and $3n \leq m_1\cdots m_d\leq n^2$.
Let $m$ be the third largest value of $\cbra{m_1,\cdots, m_d}$.
\begin{figure}[htbp]
\begin{center}
\begin{minipage}{0.4\textwidth}
\begin{adjustbox}{width=8cm, height=5.5cm, keepaspectratio}
\begin{tikzpicture}[font=\small]
\begin{axis}[legend pos = north west,
title style={at={(0.5,-0.25)},anchor=north,yshift=-0.1},
title={(a)},
xlabel={The number of input qubits},
ylabel={Optimized depth}]
\addplot
table
{
X Y
16 165.82
25 391.79
36 783.68
49 1419.22
64 2367.57
81 3727.99
100 5616.05
121 8125.76
};
\addplot
table
{
X Y
16 141.96
25 216.15
36 306.41
49 412.15
64 533.76
81 670.87
100 824.2
121 993.39
};
\addplot
table
{
X Y
16 165.48
25 384.29
36 762.09
49 1352.04
64 2228.78
81 3470.22
100 5161.82
121 7378.8
};
\addplot
table
{
X Y
16 435.02
25 935.61
36 1698.63
49 2804.88
64 4294.65
81 6252.48
100 8704.51
121 11748.62
};
\addlegendentry{Alg \textbf{ROWCOL}}
\addlegendentry{Alg \textbf{DepAncGrid}}
\addlegendentry{Nash \emph{et al.}}
\addlegendentry{Kissinger \emph{et al.}}
\end{axis}
\end{tikzpicture}
\end{adjustbox}
\end{minipage}%
\begin{minipage}{0.4\textwidth}
\begin{adjustbox}{width=8cm, height=5.5cm, keepaspectratio}
\begin{tikzpicture}[font=\small]
\begin{axis}[legend pos = north west,
title style={at={(0.5,-0.25)},anchor=north,yshift=-0.1},
title={(b)},
xlabel={The number of input qubits},
ylabel={Optimized depth}]
\addplot
table
{
X Y
16 168.57
25 395.64
36 799.39
49 1448.65
64 2433.08
81 3835.47
100 5772.96
121 8267.27
};
\addplot
table
{
X Y
16 141.58
25 216.41
36 306.4
49 412.18
64 533.35
81 671.03
100 824.28
121 993.21
};
\addplot
table
{
X Y
16 205.29
25 498.99
36 1009.84
49 1823.02
64 3032.43
81 4731.06
100 7042.92
121 10102.16
};
\addplot
table
{
X Y
16 451.95
25 966.83
36 1760.96
49 2911.93
64 4469.38
81 6511.31
100 9094.34
121 12285.08
};
\addlegendentry{Alg \textbf{ROWCOL}}
\addlegendentry{Alg \textbf{DepAncGrid}}
\addlegendentry{Nash \emph{et al.}}
\addlegendentry{Kissinger \emph{et al.}}
\end{axis}
\end{tikzpicture}
\end{adjustbox}
\end{minipage}
\caption{The comparison of optimized depth of these algorithms on the grid graph. We compare the performance of different synthesis algorithms on the $n\times n$ grid, using two different numberings of the grid: (a) numbering row by row from the first row to the last row, (b) numbering from the inside to the outside in the form of a grid.}
\label{fig:exp_depth}
\end{center}
\end{figure}
\subsection{Experimental results of Algorithm \textbf{DepAncGrid}}\label{subsec:DepthExperiment}
In this subsection, we give the experimental simulation of Algorithm \textbf{DepAncGrid}. We compare the optimized depth of Algorithm \textbf{DepAncGrid} with that of all the previously mentioned size-optimization algorithms on the grid graph. To show the performance of these algorithms, $n\times n$ grids with $n$ ranging from $4$ to $11$ are selected in this experiment. For each grid, we first randomly sample 200 different CNOT circuits and then run all these algorithms under this condition. The method for sampling a random CNOT circuit here is: (1) randomly sample an $n\times n$ 0-1 matrix by selecting ``0'' or ``1'' uniformly at random in each position; (2) determine whether the matrix sampled in (1) is invertible over GF(2); if not, return to (1), otherwise accept the matrix as a random CNOT circuit. A sketch of this procedure is given below.
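A minimal sketch of steps (1)-(2), with the invertibility check done by Gaussian elimination over GF(2) (the helper names are ours):
\begin{verbatim}
import numpy as np

def gf2_rank(M):
    A = M.copy() % 2
    rank = 0
    for col in range(A.shape[1]):
        rows = [r for r in range(rank, A.shape[0]) if A[r, col]]
        if not rows:
            continue
        A[[rank, rows[0]]] = A[[rows[0], rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def sample_invertible(n, rng):
    while True:                           # step (2): rejection sampling
        M = rng.integers(0, 2, (n, n))    # step (1): uniform 0-1 matrix
        if gf2_rank(M) == n:
            return M

M = sample_invertible(5, np.random.default_rng(42))
\end{verbatim}
Since a uniform 0-1 matrix is invertible over GF(2) with probability $\prod_{i=1}^{n}(1-2^{-i})>0.28$, the rejection loop terminates after a few iterations on average.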
The comparison results are depicted in Fig. \ref{fig:exp_depth}, including the optimized depth of Algorithms \textbf{ROWCOL} and \textbf{DepAncGrid} and of the algorithms in Refs. \cite{kissinger2019cnot,nash2019quantum} on the grid graph. In particular, Algorithm \textbf{DepAncGrid} needs $n^2$ qubits here, while the other algorithms only need $n$ qubits.
The $y$ axis shows the average depth of the optimized circuits. To reduce the impact of the particular Hamiltonian path chosen in Ref. \cite{kissinger2019cnot}, we choose two different Hamiltonian paths to synthesize the same CNOT circuit.
The comparison results show that Algorithm \textbf{DepAncGrid} yields a significant improvement in the optimized depth as the number of qubits increases.
Theoretically, the depth of the CNOT circuit generated by Algorithm \textbf{DepAncGrid} is $O(n)$, compared with $O(n^2)$ for the other algorithms.
This experimental result shows that the depth of a CNOT circuit can be greatly reduced when ancilla qubits are available in the quantum system.
\section{Experimental result on IBMQ\label{sec:experimentIBMQ}}
In this section, we test the performance of our optimized CNOT circuits on IBM devices.
We implement a staircase CNOT circuit and an Add CNOT circuit, which have wide applications in error correction~\cite{nielsen2002quantum}, variational algorithms~\cite{nielsen2002quantum,cerezo2020variational} and quantum chemistry~\cite{hastings2014improving,sugisaki2019open}.
We leverage the IBMQ devices (ibmq\_athens and ibmq\_5\_yorktown) as the topologically constrained graphs~\cite{IBMQ2021}, as shown in Fig. \ref{fig:ibm_device} of the Appendix. In Fig. \ref{fig:example} (a), we give the staircase circuit without considering the topological constraint graph, with input $\ket{\phi} = \frac{\ket{0} + \ket{1}}{\sqrt{2}} \ket{0} \frac{\ket{0} + \ket{1}}{\sqrt{2}} \ket{0} \ket{0}$. We run the circuit on IBMQ with the mapped CNOT circuit produced by the ROWCOL algorithm, shown in Fig. \ref{fig:example} (b). A layer of $H$ gates precedes the CNOT circuit in Fig. \ref{fig:example} (b) to generate the input state $\ket{\phi}$ from the initial state $\ket{0}^{\otimes 5}$ of the IBMQ device.
We compare the measurement results of the ROWCOL algorithm and of the IBM optimization algorithm on the ibmq\_athens quantum device, and plot the classical simulation result (the ideal quantum circuit without any error) as a reference, as shown in Fig. \ref{fig:CNOTIBM}. We performed 8,000 measurements on each circuit independently. The horizontal axis shows the outcomes measured in the computational basis, where $j$ represents the computational basis state $j_0j_1\ldots j_4$. The vertical axis shows the frequency of each outcome after 8,000 measurements. The ideal output state after performing the CNOT circuit $\mathcal{C}$ in Fig. \ref{fig:example} (a) is $\ket{\psi} = \mathcal{C} \ket{\phi} = \frac{\ket{0} + \ket{4} + \ket{16} + \ket{20}}{2}$, so each of the four outcomes occurs with probability $1/4$, and the expected frequencies after 8,000 repeated measurements are $\cbra{\ket{0}:2000;\ket{4}:2000;\ket{16}:2000;\ket{20}:2000}$.
Fig. \ref{fig:CNOTIBM} shows that the simulation results are consistent with the expected frequencies. It can also be seen from Fig. \ref{fig:CNOTIBM} that the circuit optimized by the ROWCOL algorithm is strongly robust to errors: we can extract the correct measurement outcomes by setting a small threshold $y = 1000$ and selecting the outcomes whose frequency exceeds that threshold. As a comparison, there are substantial errors in the measurement results of the circuit obtained directly by IBM's mapping method.
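A minimal sketch of this thresholding step; the counts dictionary below is hypothetical and only shaped like the measured histograms.
\begin{verbatim}
def extract_support(counts, threshold=1000):
    # counts: dict from measured bitstrings to observed frequencies
    return sorted(b for b, c in counts.items() if c > threshold)

counts = {"00000": 1930, "00100": 2050, "10000": 1980, "10100": 1870,
          "00001": 60, "01100": 110}          # hypothetical frequencies
print(extract_support(counts))  # ['00000', '00100', '10000', '10100']
\end{verbatim}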
The Add circuit, shown in Fig.~\ref{fig:example_Add} (a) and performed on the ibmq\_5\_yorktown device, exhibits similar performance, as shown in Fig. \ref{fig:CNOTIBM} (b).
\begin{figure}[htbp]
\centering
\includegraphics[width = 1.0\textwidth]{comp_ibm.pdf}
\caption{The comparison of the ROWCOL algorithm, IBM optimization algorithm on IBM devices, and the classical simulation result.
(a) For the CNOT circuit in Fig. \ref{fig:example} (a) under ibmq\_athens device. (b) For the CNOT circuit in Fig. \ref{fig:example_Add} (a) under ibmq\_5\_yorktown device.}
\label{fig:CNOTIBM}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{small}
\begin{tabular}[b]{c}
\xymatrix @*=<0em> @C=0.6em @R=0.6em {
\lstick{ q_0} &\ctrl{1} & \qw & \qw & \qw &
\qw & \qw & \ctrl{1} &
\qw\\
\lstick{ q_4} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw & \qw &
\qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &
\qw\\
\lstick{ q_1} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw &
\ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &\qw &
\qw\\
\lstick{ q_3} & \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} &
*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \qw &
\qw\\
\lstick{ q_2} & \qw& \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &
\qw & \qw & \qw&
\qw
}
\\
\small (a)
\end{tabular}
\end{small}
\qquad
\begin{small}
\begin{tabular}[b]{c}
\xymatrix @*=<0em> @C=0.6em @R=0.6em {
\lstick{ q_0} & \gate{H} \barrier{4}&\ctrl{1} & \qw & \qw & \qw & \ctrl{1} &
\qw\\
\lstick{ q_1} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \ctrl{1} & \qw & \ctrl{1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &
\qw\\
\lstick{ q_2} & \gate{H} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &\qw &
\qw\\
\lstick{ q_3} & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & \ctrl{-1} & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &
\qw & \qw \\
\lstick{ q_4} & \qw & \ctrl{-1}& \qw & \qw & \ctrl{-1} &
\qw & \qw
}
\\
\small (b)
\end{tabular}
\end{small}
\caption{Mapping the staircase CNOT circuit to the topological 1D grid graph. (a) Staircase CNOT circuit with input state $\frac{\ket{0} + \ket{1}}{\sqrt{2}}\ket{0} \frac{\ket{0} + \ket{1}}{\sqrt{2}}\ket{0}\ket 0$. (b) A layer of $H$ gates, followed by a block of CNOT circuit, which is equivalent to the CNOT circuit of (a), and can be performed on the ibmq\_athens device (a 1D grid). The input state in (b) equals $\ket{0}^{\otimes 5}$.
}
\label{fig:example}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{small}
\begin{tabular}[b]{c}
\xymatrix @*=<0em> @C=0.6em @R=0.6em {
\lstick{ q_0}
&\ctrl{4} & \qw & \qw & \qw &
\ctrl{3} & \qw & \qw &
\ctrl{2} & \qw &
\ctrl{1} &\qw\\
\lstick{ q_1}
& \qw & \ctrl{3} & \qw & \qw &
\qw & \ctrl{2} & \qw &
\qw & \ctrl{1} &
*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw \\
\lstick{ q_2}
& \qw & \qw & \ctrl{2} & \qw &
\qw & \qw &\ctrl{1} &
*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &
\qw & \qw\\
\lstick{ q_3}
& \qw & \qw & \qw & \ctrl{1} &
*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &
\qw & \qw &
\qw & \qw\\
\lstick{ q_4} &
*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw& *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw& *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &
\qw & \qw & \qw&
\qw & \qw &
\qw & \qw
}
\\
\small (a)
\end{tabular}
\end{small}
\qquad
\begin{small}
\begin{tabular}[b]{c}
\xymatrix @*=<0em> @C=0.6em @R=0.6em {
\lstick{ q_0} & \gate{H} \barrier{4}&\qw & \ctrl{2} & \qw & \ctrl{1} & \qw\\
\lstick{ q_1} & \qw & \ctrl{1}& \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw\\
\lstick{ q_2} & \gate{H}
& *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &\ctrl{2} & \ctrl{1} &\qw\\
\lstick{ q_3} & \qw &
\ctrl{1} & \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw &
\qw \\
\lstick{ q_4} & \qw &
*+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw & *+<.02em,.02em>{\xy ="i","i"-<.39em,0em>;"i"+<.39em,0em> **\dir{-}, "i"-<0em,.39em>;"i"+<0em,.39em> **\dir{-},"i"*\xycircle<.4em>{} \endxy} \qw & \qw &
\qw
}
\\
\small (b)
\end{tabular}
\end{small}
\caption{Mapping the CNOT Add circuit to ibmq\_5\_yorktown device. (a) CNOT Add circuit with input state $\frac{\ket{0} + \ket{1}}{\sqrt{2}}\ket{0} \frac{\ket{0} + \ket{1}}{\sqrt{2}}\ket{0}\ket 0$. (b) A layer of $H$ gates, followed by a block of CNOT circuit, which is equivalent to the CNOT circuit of (a), and can be performed on the ibmq\_5\_yorktown device. The input state in (b) equals $\ket{0}^{\otimes 5}$.
}
\label{fig:example_Add}
\end{figure}
\section{Discussion and outlook}\label{sec:discuss}
Optimization of size/depth of the quantum circuit with topological constraints is one of the main challenges in near-term quantum computing~\cite{preskill2018quantum,paler2014mapping,paler2016synthesis}. In this paper, we propose two main size/depth optimization algorithms on the topological constrained graph.
The experimental results show our algorithms have better performances compared to the existing optimization algorithms.
Specifically, we can optimize any CNOT circuit to size $2n^2$ on any connected graph, and this order is tight when the constrained graph is a simple path.
We also give an algorithm that takes more features of the graph into account and yields a better upper bound for specific classes of graphs.
Specifically, for a connected graph with minimum degree $\delta$, any $n$-qubit CNOT circuit can be optimized to $O\pbra{\frac{n^2}{\log \delta}}$ size on such a graph; we also prove that this order is tight for regular graphs.
Algorithm \textbf{SBE} further shows that the size of any $n$-qubit CNOT circuit can be optimized to $O\pbra{\frac{n^2}{\log (n/k)}}=O\pbra{\frac{n^2}{\log \delta}}$
on constrained graphs in which the average degree of any $k$-vertex set is greater than $n/k$.
{The maximum vertex degree equals $4$ for current quantum superconducting devices~\cite{IBMQ2021, Arute2019,boixo2018characterizing}; hence the optimized size equals $O(n^2)$, and the lower bound on the size is of the same order for these grid-type devices.}
In the end, we consider a special constrained graph, the $m_1 \times m_2$ grid. We show that any $n$-qubit CNOT circuit can be optimized to $O\pbra{\frac{n^2\pbra{m_1 + m_2}}{m_1m_2}}$ depth on this grid, where $3n \leq m_1m_2\leq n^2$. This result generalizes easily to any constant $d$-dimensional grid.
{The dimensions of the corresponding grids of current quantum superconducting devices~\cite{IBMQ2021,Arute2019,boixo2018characterizing} are $d\in \{1,2\}$.
Note that the 65-qubit quantum superconducting device proposed by IBMQ~\cite{IBMQ2021} is not exactly a grid; nevertheless, it is a sub-graph of a grid. We can still perform Algorithm \textbf{DepAncGrid} on the expanded grid by leveraging SWAP gates to implement CNOT gates between vertices which are not connected in the sub-graph. Another avenue is to construct a new virtual 2D grid and convert each single CNOT gate in the virtual grid into a series of CNOT gates on the real device. The additional cost in CNOT gates is bounded by a constant factor of the original one due to the sub-grid structure.}
We list two open problems for the optimization of CNOT circuits on the constrained graph.
(1) Is there any improved size optimization algorithm for some more specific structure under the constrained graph?
(2) If there are no ancillas, can we give some better results for the depth optimization of CNOT circuits on $m_1 \times \cdots \times m_d$ grid for constant $d$?
\section*{ACKNOWLEDGMENTS}
We would like to thank Ziheng Chen for pointing out some typos in the depth optimization algorithms, and Jiaqing Jiang for discussions of the size optimization algorithm.
\bibliographystyle{unsrt}
\section{Introduction}
Light-harvesting complexes (LHC) are pigment-protein complexes that act as the functional units of photosynthetic systems, capable of absorbing the energy of a photon and transferring it towards the reaction center, where it is converted into chemical energy usable by the cell. The transfer of energy in such systems is described by electronic exciton dynamics
coupled to the vibrations and other mechanical modes of the complex \cite{May2004a}. Laser spectroscopy shows quantum coherent effects in the energy transfer in LHC at temperatures up to $300$~K \cite{Engel2007a,Collini2010a,Panitchayangkoon2010a}.
Theoretical studies of model Hamiltonians at different levels of approximation \cite{Gaab2004a,Plenio2008a,Rebentrost2009a,Fassioli2010a,Wu2010a,Hsin2010,Hoyer2010a} show that the interplay between coherent transport and dissipation leads to high efficiencies in the energy transport in these systems. LHC provide a remarkable example of systems where noise or dissipation aids the transport. Understanding these systems is relevant as it gives insight into the optimal design of artificial systems such as novel nanofabricated structures for quantum transport or optimized solar cells.
The modelling of LHC is challenging due to the lack of atomistic ab-initio methods and requires resorting to effective descriptions. This is most apparent in the treatment of the vibrational excitations, which are commonly described by a structureless mode distribution. The energy transfer is then calculated by the time propagation of a density matrix, which couples the electronic exciton dynamics to the vibrational environment. For LHC, the rearrangement of the molecular states after the absorption of the photon has to be taken into account and is described by the reorganization energy. The hierarchical equations of motion (HEOM) \cite{Yan2004a,Xu2005a,Ishizaki2005a} for the time evolution of the density matrix were adapted by Ishizaki and Fleming \cite{Ishizaki2009c} to include the reorganization process in the transport equations; this approach is exact within the model of exciton dynamics coupled to a bath with a Drude-Lorentz spectral density.
In principle the HEOM can be extended to other spectral densities by using a superposition of Drude-Lorentz peaks \cite{Meier1999a,Kleinekathofer2004}. Previous calculations of the energy-transfer efficiency of the FMO complex did not consider memory effects and used a weak-coupling perturbation theory \cite{Rebentrost2009a,Fassioli2010a}. Other models try to get around these limitations by using the generalized Bloch-Redfield equations \cite{Wu2010a}, but yield different results compared to the HEOM solution of the same model system. Prolonged coherent dynamics is predicted due to the slow dissipation of reorganization energy to the vibrational environment \cite{Ishizaki2009a}. Theoretical descriptions must go beyond the rotating-wave approximation and perturbation theory, and require a full incorporation of time-non-local effects and physiological temperature. The HEOM fulfill all these requirements.
To date, only the exciton population dynamics of the FMO model has been studied within the full hierarchical approach \cite{Ishizaki2009a,Zhu2011a}, whereas the calculation of efficiencies or 2D absorption spectra has been considered out of range for present computational power, since it requires stable algorithms to propagate enlarged system matrices over many more time steps.
The adverse computational scaling of the HEOM stems from the need to propagate a complete hierarchy of coupled auxiliary equations, which need to be simultaneously accessed in memory and propagated in time.
The insufficient computational power and memory-transfer bandwidth of conventional CPU clusters \cite{Struempfer2009a} has limited the application of the HEOM to study energy-transfer efficiency in small dimer systems, where other methods are available for comparison around $T=0$~K \cite{Anders2007a,Thorwart2009a,Roden2009,Prior2010a}.
The advent of high-performance graphics processing units (GPU) with several hundred stream processors working in parallel and with high-bandwidth memory has led us to perform the full HEOM approach for the exciton model of LHC. The efficiency calculations for the FMO system in the strong-coupling regime require propagating 240000 auxiliary matrices up to 50 ps (corresponding to 20000 time steps). The full HEOM approach takes only hours of computational time on a single GPU, whereas the corresponding CPU calculation would run for several weeks and becomes completely unfeasible for bigger LHC due to the large communication overhead.
We use the GPU algorithmic advance to characterize the exciton energy-transfer efficiency in LHC for a wide range of reorganization energies under full consideration of the memory-effects and at $T=300$~K. Our calculations reveal several important mechanisms which are not contained within the approximative methods. The GPU-HEOM method opens the window to a widespread utilization of the HEOM, including the calculation of two-dimensional non-linear spectra of LHC, as we will discuss elsewhere. Also the implementation of a scaled version of the HEOM \cite{Zhu2011a}, which reduces the number of auxiliary matrices, could be achieved on a GPU and would reduce the computational effort of hierarchical methods further.
For the development of new theoretical chemistry and physics algorithms, GPUs are important devices that considerably enlarge the class of solvable problems, provided one manages to devise a program code which takes full advantage of the GPU stream-processing architecture. For interacting many-body systems, this cannot generally be achieved by porting an existing program to the GPU, but requires following the vector-programming paradigm from the onset \cite{Olivares-Amaya2010a,Kramer2009c}.
The manuscript is organized as follows: in Sect.~\ref{sec:model} we set up the model for energy transfer to the reaction center in the FMO complex. In Sect.~\ref{sec:EffReorg} we calculate the key-quantities used to characterize the
energy flow, namely the efficiency and the transfer time to the reaction center. We compute them for a wide range of reorganization energies and bath correlation-times within the hierarchical approach. This section contains a detailed discussion of the differences of the HEOM results compared to calculations based on approximative methods. We highlight the main mechanism behind the high efficiency, the delicate balance between the requirements of an energy gradient towards the reaction center and the detuning of the energies, as shown in Sect.~\ref{sec:EffLevels}. In Sect.~\ref{sec:EffTemp} we discuss how the transport efficiency is optimized with respect to physiological temperature and comment on the thermalization properties of the HEOM. Finally we summarize our findings in Sect.~\ref{sec:conclusions}. Throughout the article, we provide detailed information about the computational times and requirements and collect in the appendices additional detailed information about the algorithms used and our GPU implementation.
\section{Model}\label{sec:model}
The FMO protein is part of the light harvesting complex that appears in green sulfur bacteria. Its structure has been widely studied both with X-ray and optical spectroscopic techniques \cite{Olson2004,Brixner2005,Milder2010a}. It has a trimer structure, with each of the monomers consisting of seven bacteriochlorophyll (BChl) pigment molecules, which are electronically excited when the energy flows from the antenna to the reaction center. An ab-initio calculation of the energy-transfer process within an atomistic model is far beyond present computational capabilities. Instead one has to develop effective model Hamiltonians such as the widely used excitonic Frenkel-Hamiltonian \cite{Leegwater1996a,Ritz2001a,May2004a}. Within the Frenkel model, which assumes that excitations enter the system one at a time, the seven BChl pigments of the FMO complex are treated as individual sites which are coupled to each other and also to the protein environment. The excitonic Hamiltonian is given by
\begin{eqnarray}
\mathcal{H}_{\rm ex}&=& E_0 |0\rangle\langle0|
+\sum_{m=1}^N(\varepsilon_m^0+\lambda_m) |m\rangle\langle m|\nonumber \\
&&+\sum_{m>n}J_{mn} \left( |m\rangle\langle n|+|n\rangle\langle m| \right)
,\end{eqnarray}
where $N=7$, $|m\rangle$ corresponds to an electronic excitation of the chromophore BChl$_m$ and $|0\rangle$ denotes the electronic ground state of the pigment protein complex where we fix the zero of energy $E_0=0$. The site energies $\varepsilon_m=\varepsilon_m^0+\lambda_m$ of the chromophores consist of the ``zero-phonon energies'' $\varepsilon_m^0$ and a reorganization energy $\lambda_m$, which takes into account the rearrangement of the complex during excitation due to the phonon bath\cite{May2004a}
\begin{equation}
\mathcal{H}_{\rm reorg}=\sum_{m=1}^N \lambda_m |m\rangle \langle m|.
\end{equation}
In the following we will consider identical couplings for all sites, $\lambda_m=\lambda$.
The inter-site couplings $J_{mn}$ are obtained by fits to experimentally measured absorption spectra~\cite{Milder2010a}. In this contribution we use the designations and parameters of Ref.~\cite{Renger2006a}, table~4 (trimer column) and table~1 (column 4), summarized in \ref{tab:tab1}.
A sketch of the dominant couplings is shown in \ref{fig:sites}.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{fig_1}
\caption{\label{fig:sites} Sketch of the exciton energies of the FMO complex (\ref{tab:tab1}), the reaction center, and the ground state. Each site, designated with a number, represents a BChl pigment of the FMO complex. The arrows indicate the dominant inter-site couplings. The excitation enters the FMO complex through the chlorosome antenna located close to sites~1 and 6. The incoming excitation, depicted with wavy arrows pointing upwards, follows two energy pathways to the reaction center. Wavy arrows pointing downwards indicate radiative loss-channels leading to the electronic ground state. In addition, each site is coupled to a phonon bath which accounts for the protein environment surrounding the pigments.}
\end{center}
\end{figure}
\begin{table}[b]
\begin{center}
\begin{tabular}{c c c c c c c c}
\hline
& BChl$_1$ & BChl$_2$ & BChl$_3$ & BChl$_4$ & BChl$_5$ & BChl$_6$ & BChl$_7$ \\
\hline
BChl$_1$ & \textbf{12410} & \textbf{-87.7} & 5.5 & -5.9 & 6.7 & -13.7 & -9.9 \\
BChl$_2$ & \textbf{-87.7} & \textbf{12530} & \textbf{30.8} & 8.2 & 0.7 & 11.8 & 4.3 \\
BChl$_3$ & 5.5 & \textbf{30.8} & \textbf{12210} & \textbf{-53.5} & -2.2 & -9.6 & 6.0 \\
BChl$_4$ & -5.9 & 8.2 & \textbf{-53.5} & \textbf{12320} & \textbf{-70.7} & -17.0 & \textbf{-63.3} \\
BChl$_5$ & 6.7 & 0.7 & -2.2 & \textbf{-70.7} & \textbf{12480} & \textbf{81.1} & -1.3 \\
BChl$_6$ & -13.7 & 11.8 & -9.6 & -17.0 & \textbf{81.1} & \textbf{12630} & \textbf{39.7} \\
BChl$_7$ & -9.9 & 4.3 & 6.0 & \textbf{-63.3} & -1.3 & \textbf{39.7} & \textbf{12440} \\
\end{tabular}
\caption{\label{tab:tab1} Exciton Hamiltonian in the site basis in (cm$^{-1}$). Bold font shows the dominant couplings and site energies. Values taken from Ref.~\cite{Renger2006a}.}
\end{center}
\end{table}
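As an illustration, the site-basis Hamiltonian of \ref{tab:tab1} can be assembled numerically. The following Python sketch is a minimal illustration assuming NumPy and is not part of the GPU code described here; for brevity only the dominant (bold) couplings are filled in, the remaining entries follow analogously, and all values are in cm$^{-1}$:
\begin{verbatim}
import numpy as np

# Site energies (diagonal) and dominant couplings J_mn in cm^-1 (Table 1).
E = [12410, 12530, 12210, 12320, 12480, 12630, 12440]
J = [(0, 1, -87.7), (1, 2, 30.8), (2, 3, -53.5), (3, 4, -70.7),
     (3, 6, -63.3), (4, 5, 81.1), (5, 6, 39.7)]

H = np.diag(np.asarray(E, dtype=float))
for m, n, j in J:
    H[m, n] = H[n, m] = j   # the Hamiltonian is symmetric
\end{verbatim}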
The protein environment surrounding the pigments is modeled as identical featureless spectral bath densities coupled to each BChl. For simplicity, we neglect correlations between the baths. The electronic excitations at each site couple linearly with strength $d_i$ to the vibrational phonon modes
$b_i^\dag$ of frequency $\omega_i$. The coupling Hamiltonian is given by
\begin{eqnarray}
\mathcal{H}_{\rm ex-phon}&=&\sum_{m=1}^N \left( \sum_i \hbar \omega_i d_i(b_i+b_i^\dag)\right)_m |m\rangle\langle m|,
\end{eqnarray}
where we assume identical baths at every site. Note that the reorganization energy is related to the coupling by
$\lambda=\sum_i\hbar\,\omega_i\,d_i^2/2$.
We model the losses due to radiative decay from the exciton to the electronic ground state $|0\rangle$ by introducing a dipole coupling to an effective radiation photon field $a_{\nu}^\dag$
\begin{equation}
\mathcal{H}_{\rm ex-phot}=
\sum_{m=1}^N\sum_{\rm \nu} (a_{\nu}+a_{\nu}^\dag)\mu^\nu_m
\left( |0\rangle\langle m| +|m\rangle\langle 0|\right), \label{eq:exphot}
\end{equation}
which results in a finite life-time for the exciton. The reaction center (RC) is treated as a population-trapping state
\begin{equation}
\mathcal{H}_{\rm trap}=E_{RC}|RC\rangle\langle RC|
\end{equation}
and enlarges the system Hamiltonian to a $9\times 9$ matrix.
Adolphs and Renger \cite{Renger2006a} suggest that pigments~$3$ and $4$, which have
the largest overlap with the energetically lowest exciton-state, couple to the reaction
center. Recent experimental evidence shows that pigment~$3$ is orientated towards the reaction center \cite{Wen2009a}. In addition it has been proposed that
an 8th pigment may play a role in the initial stages of the energy transfer \cite{Schmidt2011a}.
Here, we include the reaction center by introducing leakage rates from pigments~$3$ and $4$ to the reaction center, which acts as a population trapping state.
Thus the coupling term to the reaction center reads
\begin{equation}
\mathcal{H}_{\rm ex-RC}=\sum_{m=3}^4\sum_{\rm \nu'} (a_{\rm \nu'}+a_{\rm \nu'}^\dag)
\mu_{RC}^{\nu'} \left( |RC\rangle\langle m|+|m\rangle\langle RC| \right) \label{eq:exrc}
\end{equation}
where the sum runs over the photon modes at the reaction center. As shown in Sect.~\ref{sec:markovif}, Eqs.~(\ref{eq:lab2},\ref{eq:lab3}), the coupling can be expressed in terms of a trapping rate $\Gamma_{\rm RC}$, and similarly for the radiative decay in \ref{eq:exphot} with the rate $\Gamma_{\rm phot}$.
The total Hamiltonian of the system is thus given by
\begin{eqnarray}
\mathcal{H}&=& \mathcal{H}_{\rm ex}+ \mathcal{H}_{\rm trap} + \mathcal{H}_{\rm ex-phon}
+\mathcal{H}_{\rm ex-phot}\nonumber\\
&&+\mathcal{H}_{\rm ex-RC}
+ \mathcal{H}_{\rm phon}+ \mathcal{H}_{\rm phot}^0+ \mathcal{H}_{\rm phot}^{\rm RC},
\end{eqnarray}
where $\mathcal{H}_{\rm phon}= \sum_{i,m} (\hbar \omega_i b^\dag_i b_i)_m $,
$\mathcal{H}_{\rm phot}^0=\sum_{\nu,m} ( h\nu a^\dag_{\nu} a_{\nu})_m$, and
$\mathcal{H}_{\rm phot}^{\rm RC}= \sum_{\nu',m=3,4} ( h\nu' a^\dag_{\nu'} a_{\nu'})_m$.
The time evolution of the total density operator $R(t)$ is described by the Liouville equation
\begin{equation}\label{eq:Liouvaa}
\dt R(t)=-\frac{{\rm i}}{\hbar}[\mathcal{H},\ R(t)]. \label{eq:L}
\end{equation}
We assume that at initial time $t=0$ the total density operator factorizes in system and bath components
\begin{equation}
R(t=0)=\rho(t=0)\otimes\rho_{\rm phon}\otimes\rho_{\rm phot}^0\otimes\rho_{\rm phot}^{\rm RC} \label{eq:R},
\end{equation}
while at later times the system and the bath get entangled.
Since we are only interested in the exciton dynamics, we trace out the degrees of freedom of the phonon and photon environments $\alpha=\{\rm phon, phot^0, phot^{RC}\}$
and propagate the reduced $9 \times 9$ density matrix in the Schr\"odinger picture
\begin{eqnarray}\label{eq:10aa}
\rho(t)&=&
\mbox{Tr}_\alpha\big(
{\rm e}^{-\frac{{\rm i} t}{\hbar}(
\mathcal{L}_0
+{\mathcal{L}}_{\rm ex-phon}
+{\mathcal{L}}_{\rm ex-phot}
+{\mathcal{L}}_{\rm ex-RC}
+{\mathcal{L}}_{\rm bath}
)
} R(0)\big)
\end{eqnarray}
for the exciton system $\{|m\rangle\}_{m=1,\ldots,7}$, the ground electronic state $|0\rangle$, and the reaction center $|{\rm RC}\rangle$.
Eq.~(\ref{eq:10aa}) is obtained by formal integration of the Liouville equation~(\ref{eq:Liouvaa}).
The operator $\mathcal{L}_{0}=[\mathcal{H}_{\rm ex}+ \mathcal{H}_{\rm trap} ,\bullet]$ represents the coherent dynamics and $\mathcal{L}_{\rm ex-phon}$ accounts for dephasing and energy relaxation due to vibrations induced by the interaction with the protein environment, while the recombination and energy trapping are expressed by $\mathcal{L}_{\rm ex-phot}$ and $\mathcal{L}_{\rm ex-RC}$, respectively. The parts describing the different baths are summarized in $\mathcal{L}_{\rm bath}=[ \mathcal{H}_{\rm phon}+ \mathcal{H}_{\rm phot}^0+ \mathcal{H}_{\rm phot}^{\rm RC},\bullet]$.
The coupling to the phonon and photon baths can be studied with different degrees of approximation.
We calculate the energy flow within a hybrid formulation which treats the exciton dynamics and the vibrational environment within the HEOM and the trapping to the reaction center and the radiative decay within a Markov model. The Markovian treatment of the photon modes is justified as it occurs on a very different time scale and no backward energy flow to the system is allowed. We abbreviate our model by ME-HEOM, see Sect.~\ref{sec:markovif}. We solve the hierarchical equations using GPUs, which are ideally suited for this task and lead to huge speed-ups of the algorithm. Details of the computational implementation are collected in Sect.~\ref{sec:gpu}.
\section{Trapping time for different reorganization energies}\label{sec:EffReorg}
The strong coupling of the excitonic system to the vibrational environment, which is of the same order as the excitonic energy differences (100~cm$^{-1}$), requires a detailed treatment of the phonon bath over the time-scale of the correlations present in the system. The bath correlation time is quantified by the parameter $\gamma$, Eq.~(\ref{eq:gamma}), with $\gamma^{-1}$ ranging from 35 to 166~fs for models of light-harvesting complexes \cite{Ishizaki2009a}.
We calculate the efficiency of the energy transfer from an initially excited site to the reaction center using the hierarchical equations (\ref{eq:labsevena},\ref{eq:lab8}). The efficiency $\eta$ is defined as the population of the reaction center at long times
\begin{equation}\label{eq:eta}
\eta=\langle RC | \rho (t\rightarrow\infty) | RC \rangle.
\end{equation}
For the FMO complex, two sites are located near the light-absorbing antenna \cite{Renger2006a}. We consider initial excitations at either site~1 or 6, which give rise to two energy pathways to the reaction center. One pathway starts from site~1 and transfers energy via site~2 to site~3, and the second pathway starts from site~6 and the energy flows via site~7 or 5 to site~4, see \ref{fig:sites}.
We fix the upper limit of time propagation at $t_{\rm max}$, defined such that the remaining population in the system, excluding the ground-state and reaction center, has dropped from initially $1$ to $10^{-5}$.
To our knowledge, no solid experimental data exists for the coupling strength in eq.~(\ref{eq:exrc}), given
in terms of the trapping rate $\Gamma_{\rm RC}$ of sites~3 or 4 to the reaction center. In
the following we assume values of $\Gamma_{\rm RC}^{-1}=2.5$~ps and $\Gamma_{\rm phot}^{-1}=250$~ps, which are of the same order of magnitude as in other theoretical studies \cite{Rebentrost2009a,Hoyer2010a,Wu2010a}.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{fig_2a}
\includegraphics[width=0.5\columnwidth]{fig_2b}
\caption{\label{fig:reorg}
Trapping time from \ref{eq:trappingtime} as function of reorganization energy $\lambda$ at temperature $T=300$~K. Trapping rate to BChl $3$ and $4$ $\Gamma_{\rm RC}^{-1}=2.5$~ps and $\Gamma_{\rm phot}^{-1}=250$~ps.
Upper panel: secular Redfield result with $\gamma^{-1}=166$~fs and the ME-HEOM results for three different bath correlation times $\gamma^{-1}=166$~fs, $\gamma^{-1}=50$~fs, $\gamma^{-1}=5$~fs.
The excitation enters at site 1.
Lower panel: Comparison of the trapping times for the two possible pathways in the FMO when the energy is entering the complex starting from site~1, or at site 6 for a bath correlation time of $\gamma^{-1}=166$~fs.
}
\end{center}
\end{figure}
The actual time scale of the energy trapping is quantified by the trapping time
\begin{equation}\label{eq:trappingtime}
\langle t \rangle=\int_0^{t_{\rm max}} {\rm d} t'\ t'\, \big(\frac{{\rm d}}{{\rm d} t} \langle RC | \rho(t) | RC \rangle \big)_{t=t'},
\end{equation}
where we replace the upper limit of the integral by $t_{\rm max}$.
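Given a sampled trace of the reaction-center population, both key quantities are easy to evaluate numerically. The following Python sketch (a minimal illustration assuming NumPy; function and variable names are ours and not part of our production code) computes the efficiency of \ref{eq:eta} and the trapping time of \ref{eq:trappingtime}:
\begin{verbatim}
import numpy as np

def efficiency_and_trapping_time(t, p_rc):
    # t: time grid up to t_max; p_rc: <RC|rho(t)|RC> sampled on t
    eta = p_rc[-1]                   # long-time RC population (efficiency)
    rate = np.gradient(p_rc, t)      # d/dt <RC|rho(t)|RC>
    t_mean = np.trapz(t * rate, t)   # first moment of the trapping rate
    return eta, t_mean
\end{verbatim}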
The trapping time depends strongly on the reorganization energy as shown in \ref{fig:reorg}. For reorganization energies $\lambda<50$~cm$^{-1}$ the coupling to the environment assists the transport and the trapping time decreases when $\lambda$ increases.
Evaluating the equations of motion (\ref{eq:labsevena},\ref{eq:lab8}) in the ME-HEOM approach requires to truncate the hierarchy at $N_{\rm max}$, which has to be large enough to reach convergence.
In \ref{fig:reorg} we adjust the truncation such that the trapping times for $N_{\rm max}=N$ and $N_{\rm max}=N+1$ differ at most by 0.02~ps. The required truncation increases with reorganization energy and for $\lambda=300$~cm$^{-1}$ we need $N_{\rm max}=16$ where we have to propagate 245157 auxiliary matrices over 22000 time steps ($\Delta t$=2.5~fs) leading to a GPU computation time of 3.7 hours. On a standard CPU the same calculation takes more than one month and a systematic study of parameters is not feasible.
In the upper panel of \ref{fig:reorg} we compare the ME-HEOM result with the secular Redfield theory, which employs the time-local Born-Markov approximation in combination with the rotating-wave approximation. For stronger values of the coupling, the hierarchical approach strongly deviates from the plateau obtained within the secular Redfield theory, which assumes a fast decay of the phonon bath. The secular Redfield limit (see Sect.~\ref{sec:markovif}) reflects, as expected, the qualitative behavior only for small reorganization energies and overestimates the energy transfer to the reaction center
for $\lambda>10$~cm$^{-1}$.
An interesting question is the existence of an optimal value for the coupling $\lambda$ and the bath correlation-rate $\gamma$, for which the trapping time is minimized (and the efficiency maximized). Secular and full Redfield do not yield a local minimum of the trapping time, and thus no corresponding optimal $\lambda$. Introducing the bath-correlations and memory effects by the parameter $\gamma$ in the ME-HEOM gives rise to a local minimum and an optimal value of $\lambda$, as shown in \ref{fig:reorg}. In addition an optimal value of $\gamma$ emerges around $\gamma^{-1}=25-35$~fs. For a small value $\gamma^{-1}=5$~fs, the theory predicts a rapid loss of efficiency.
The lower panel of \ref{fig:reorg} details the changes of the trapping time for the two different pathways of the energy flow in the FMO complex as function of the reorganization energy. The optimal reorganization energy for an initial excitation of site~1 is given by $\lambda_{\rm opt}^{1}=55$~cm$^{-1}$ ($\langle t\rangle_{\rm opt}^{1}=6.0$~ps), while for an initial excitation of site~6 we obtain $\lambda_{\rm opt}^{6}=85$~cm$^{-1}$ ($\langle t\rangle_{\rm opt}^{6}=5.4$~ps).
Optimal values of trapping times have been calculated within the generalized Bloch-Redfield (GBR) approximation \cite{Wu2010a}. Using the same parameters, couplings, and Hamiltonian as in Ref.~\cite{Wu2010a}, the ME-HEOM yield qualitative and quantitative differences with a $0.9$~ps longer trapping time for an initial excitation of site $1$. For an initial excitation located at site $6$ the ME-HEOM and GBR results for the trapping time differ by $0.2$~ps.
\section{Efficiency for rearranged energy levels}\label{sec:EffLevels}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{fig_3a}
\includegraphics[width=0.5\columnwidth]{fig_3b}
\caption{\label{fig:efflevelshift}Upper panel: Energy transfer efficiency $\eta$ in \ref{eq:eta} as function of temperature and site-energy shifts $\varepsilon_{3/4}\rightarrow\varepsilon_{3/4}+\Delta E$. ME-HEOM parameters: $\lambda=35$~cm$^{-1}$, $\gamma^{-1}=166$~fs, $\Gamma_{\rm phot}^{-1}=250$~ps and $\Gamma_{\rm RC}^{-1}=2.5$~ps. The hierarchy is truncated at $N_{\rm max}=8$. Lower panel: energy-level shifts considered in the parameter range of the left panel.
}
\end{center}
\end{figure}
In this section we study the relevance of the spacings of the energy levels in the FMO complex to see if the experimentally obtained energy levels (\ref{tab:tab1}) are close to an optimal value with respect to transport efficiency at physiological temperature.
The isolated excitonic system shows coherent oscillations of energy between the initially populated site and the delocalized excitonic states. Coupling to the environment gives rise to several mechanisms leading to a non-reversible energy transfer.
In the simplest Haken-Strobl model, only dephasing is incorporated \cite{Rebentrost2009a,Chin2010a}, but the temperature is fixed at $T=\infty$. Only by adjusting the dephasing rate can temperature effects be included on a rudimentary level. The ME-HEOM approach enables us to calculate the transport at physiological temperature ($T=300$~K) and brings into the picture another crucial mechanism to achieve highly efficient energy transfer, namely the temperature-dependent stationary site populations. Since the system is in contact with a thermal environment at finite temperature, there is energy dissipation and the system relaxes to thermal equilibrium. This process guides the excitons to the lowest energy states (for the FMO complex within a few picoseconds) and is not contained in pure dephasing models.
For a small coupling $\lambda$ and under the assumption that the system and bath degrees of freedom factorize, the thermal state of the system is given by the Gibbs measure
\begin{equation}
\rho_{\rm thermal}=e^{-\beta \mathcal{H}_{\rm ex}}/\mbox{Tr}\,e^{-\beta \mathcal{H}_{\rm ex}},\ \beta=1/(k_B T),
\end{equation}
which populates the eigenstates of $\mathcal{H}_{\rm ex}$ according to the Boltzmann statistics. Stronger couplings lead to deviations from the Boltzmann statistics \cite{Zuercher1990a}.
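For reference, the Gibbs state is obtained numerically by a matrix exponential of the exciton Hamiltonian. A minimal Python sketch (assuming NumPy/SciPy, site energies in cm$^{-1}$, and the approximate conversion $k_B \approx 0.695$~cm$^{-1}$/K) reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

KB_CM = 0.695  # Boltzmann constant in cm^-1 per Kelvin (approximate)

def thermal_state(H, T):
    # Gibbs state exp(-H/kT)/Tr exp(-H/kT), H in cm^-1 and T in Kelvin
    rho = expm(-H / (KB_CM * T))
    return rho / np.trace(rho)
\end{verbatim}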
Since the coupling to the reaction center, where the system deposits its excitation, is linked to sites $3$ and $4$, the efficiency depends strongly on the populations and the actual site energies of sites $3$ and $4$.
To study this relation, we shift levels $\varepsilon_{3/4}\rightarrow\varepsilon_{3/4}+\Delta E$ and compute the efficiency of the energy transfer. \ref{fig:efflevelshift} shows the efficiency evaluated with the ME-HEOM. We observe an almost symmetric behavior of the efficiency for positive and negative energy shifts, with slightly higher efficiencies towards negative energy shifts.
A shift to lower energies increases the energy gradient in the FMO as the thermal state prefers to populate the low-lying sites. This mechanism improves the transfer efficiency but shifts the two sites out of resonance and they get decoupled from the other levels of the FMO. Thus coherent transport becomes more difficult and the energy transfer to the reaction center is expected to slow down.
Similar arguments hold when the energies $\varepsilon_{3}$ and $\varepsilon_{4}$ are shifted to higher energies.
On the one hand $\Delta E>0$ brings the sites~3 and 4 closer to resonance and increases the coupling to the remaining sites, thus enhancing coherent transport. On the other hand the thermal state gets delocalized over all sites of the FMO complex and there is no special preference to
populate site~$3$ and site~$4$. In such case the FMO loses its property to act as an energy funnel and environment assisted transport
to the reaction center is hindered.
\ref{fig:efflevelshift} shows how the delicate interplay between coherent delocalization and energy dissipation towards the reaction center gives rise to an optimal arrangement of site energies. We obtain maximal efficiency around $\Delta E=0$ corresponding to the original parameters in \ref{tab:tab1} and the optimum value is robust against small variations in the site energies.
\section{Trapping time for different temperatures}\label{sec:EffTemp}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\columnwidth]{fig_4}
\caption{\label{fig:efftemp}
Energy transfer as a function of temperature for the secular Redfield approximation and the exact ME-HEOM calculation with $\gamma^{-1}=166$~fs, $\Gamma_{\rm phot}^{-1}=250$~ps, $\Gamma_{\rm RC}^{-1}=2.5$~ps and truncation $N_{\rm max}=8$. Both approaches use a reorganization energy of $\lambda=$35~cm$^{-1}$ and start with initial population at site~$1$.
(a) Trapping time as a function of temperature.
(b) Efficiency as a function of temperature.
(c) Population of site $3$ for different temperatures in the Boltzmann thermal equilibrium state $\rho_{\rm thermal}$ and for the isolated FMO (decoupled from the reaction center and the radiative decay) using the ME-HEOM, $\rho(t\rightarrow\infty)$. Note that the ME-HEOM needs further corrections at temperatures below $100$~K in order to reach the thermal state.
}
\end{center}
\end{figure}
As discussed in the previous section, the environment assists the transport towards the thermal equilibrium state. In the FMO complex, the sites $3$ and $4$ are coupled to the reaction center and present the lowest exciton energies in the system (see
\ref{fig:sites}), thus the energy dissipation in the phonon environment enhances the population of those sites and hence the efficiency. With increasing temperature one might expect high transfer efficiencies because thermalisation occurs on a faster time scale. However, with increasing temperature higher energy states have a higher thermal-equilibrium population and thus the transport efficiency towards the reaction center decreases.
These two competing mechanisms result in an optimal temperature with maximal efficiency.
Both mechanisms are already present in the secular Redfield limit, and the optimal energy transfer is obtained around $75$~K, see \ref{fig:efftemp}(a).
Our ME-HEOM calculations predict optimal efficiency at slightly lower temperature $70$~K, but this value is outside the range where our high-temperature implementation is supposed to work (see Sect.~\ref{sec:markovif}). We obtain a steep increase of the trapping time for low temperatures shown in \ref{fig:efftemp}(a), which is also reflected in the efficiencies \ref{fig:efftemp}(b).
This increase in trapping time and decrease in efficiency is not present in the secular Redfield approach, which saturates for $T\rightarrow 0$. Although we take into account the lowest-order quantum correction to the Boltzmann statistics \cite{Ishizaki2009a}, at low temperatures more correction terms are required. One criterion to validate the HEOM is to check if the stationary state $\rho(t\rightarrow\infty)$ of the population dynamics of the isolated FMO, which is decoupled from the reaction center and radiative decay, approaches the thermal state.
As shown in \ref{fig:efftemp}(c), the HEOM high-temperature implementation fails to approach the thermal state for temperatures below $100$~K, where the HEOM predict an unphysical steep descent of the population at the low-energy site~3 and hence the transfer efficiency is underestimated.
For temperatures above $100$~K the high temperature limit agrees very well with the thermal state and the ME-HEOM results are reliable.
Comparing our ME-HEOM results above $100$~K to the secular Redfield ones shown in \ref{fig:efftemp}(a) and (b) we conclude that the Redfield approach, which is known to be valid in the weak coupling limit only, overestimates the efficiency and underestimates the trapping time.
\section{Conclusions}\label{sec:conclusions}
We have shown that the HEOM are computationally feasible for calculating the energy transfer for large systems following our GPU implementation. This algorithmic advance allowed us to calculate the efficiency and trapping time of the energy transfer in the FMO complex for a wide range of parameters.
The results point to qualitative and quantitative deficiencies of approximative methods and show that an accurate treatment of memory effects and reorganization processes in the system-bath coupling of LHC is needed to evaluate the precise role of temperature, exciton energy-differences, the coupling strength, and the time correlations in the bath.
The ME-HEOM yield longer trapping times and indicate the importance of memory effects and correlations in order to maximize the efficiency in the FMO complex at physiological temperature.
Interestingly, the zero-shift energies of the FMO complex provide an almost optimal arrangement for funneling the energy flow to the reaction center at $T=300$~K.
Beyond the results for the FMO complex, our fast computational GPU-algorithm for the HEOM provides a robust and scalable way to treat bigger systems and allows us to calculate two-dimensional spectra of LHC, which requires to enlarge the dimension of the density matrix by taking into account double-excitonic states.
\section*{Acknowledgement}
This work has been supported by the DAAD project 50240755 and the Spanish MINCINN AI DE2009-0088 (Acciones Integradas Hispano-Alemanas), the Emmy-Noether program of the DFG, KR~2889/2, the Spanish MICINN project FIS2010-18799 and the Ram{\'o}n y Cajal program.
\section{Introduction}
Security techniques \cite{saeed2018examine, kumar2018data} have been acknowledged to be an integral part of many fields, e.g., business, national defense, and the military. One special branch of cryptographic algorithms is the hashing family, which is important especially in the field of modern information security, where it has a wide range of applications, e.g., digital signatures, message authentication codes, and password authentication. Unfortunately, an increasing number of existing hash algorithms, such as MD5 and SHA-1, are today at high risk of being cracked. To improve the security of hash algorithms, a new SHA-3 \cite{dworkin2015sha} algorithm derived from Keccak has been proposed to replace the older hash functions. Hash functions are unique in the way that an output is generated. A message is broken down into a number of blocks and the hash function consumes each block of the message into some type of internal state, with a final output produced after the last block is consumed. This structure is difficult to parallelize. In this case, the inputs could range from a few bytes to a few terabytes, and using a sequential hash function is not the best choice. A function with a tree hashing mode could be used to significantly reduce the amount of time that is required to compute the hash.
There are classes of problems that may be expressed as data-parallel computations with high arithmetic intensity where a CPU is not particularly efficient. Multi-core CPUs excel at managing multiple discrete tasks and processing data sequentially, by using loops to handle each element. Instead, the architecture of GPU maps the data to thousands of parallel threads, each handling one element. This architecture looks ideal for our fast algorithm implementation.
\section{Related Work}
Lowden \cite{lowden2015design} focuses on the exploration and analysis of the Keccak tree hashing mode on a GPU platform. Based on the implementation, there are core features of the GPU that could be used to accelerate the time it takes to complete a hash due to the massively parallel architecture of the device. In addition to analyzing the speed of the algorithm, the underlying hardware is profiled to identify the bottlenecks that limit the hash speed. The results of their work show that tree hashing can hash data at rates of up to 3 GB/s for the fixed size tree mode.
Qinjian et al., \cite{li2012implementation} propose a GPU based AES implementation. In their implementation, the frequently accessed T-boxes were allocated on on-chip shared memory and the granularity that one thread handles a 16 Bytes AES block was adopted. Finally, they achieve a performance of around 60 Gbps throughput on NVIDIA Tesla C2050 GPU, which runs up to 50 times faster than a sequential implementation based on Intel Core i7-920 2.66GHz CPU.
Kaiyong et al., \cite{zhao2014g} develop G-BLASTN, a GPU-accelerated nucleotide alignment tool based on the widely used NCBI-BLAST. G-BLASTN can produce exactly the same results as NCBI-BLAST, and it has very similar user commands. Compared with the sequential NCBI-BLAST, G-BLASTN can achieve an overall speedup of 14.80X under 'megablast' mode. They \cite{zhao2010gpump} also propose to exploit the computing power of Graphic Processing Units (GPUs) for homomorphic hashing. Specifically, they demonstrate how to use NVIDIA GPUs and the Compute Unified Device Architecture (CUDA) programming model to achieve a 38-fold speedup over the CPU counterpart. They also develop a multi-precision modular arithmetic library on the CUDA platform, which is not only key to their specific application, but also very useful for a large number of cryptographic applications.
Xinxin et al., \cite{mei2014benchmarking, mei2017dissecting} propose a novel fine-grained benchmarking approach and apply it on two popular GPUs, namely Fermi and Kepler, to expose the previously unknown characteristics of their memory hierarchies. They also investigate the impact of bank conflicts on shared memory access latency.
Thuong et al., \cite{yang2017parallel} implement a high-speed hash function, Keccak (SHA3-512), using the CUDA integrated development environment for GPU. In addition, the safety level of Keccak is also discussed, especially from the point of view of pre-image resistance. In order to implement a high-speed hash function for password cracking, a special program is also developed for passwords of up to 71 characters. Moreover, the throughput of 2-time hashing is also evaluated in their work.
Chengjian et al., \cite{liu2018g} propose a graphics processing unit (GPU)-based implementation of erasure coding named G-CRS, which employs the Cauchy Reed-Solomon (CRS) code, to overcome the aforementioned bottleneck. To maximize the coding performance of G-CRS, they designed and implemented a set of optimization strategies, such as a compact structure to store the bitmatrix in GPU constant memory, efficient data access through shared memory, and decoding parallelism, to fully utilize the GPU resources.
Xiaowen et al., \cite{chu2009practical, chu2008practical} exploit the potential of the huge computing power of Graphic Processing Units (GPUs) to reduce the computational cost of network coding and homomorphic hashing. With their network coding and HHF implementation on GPU, they observed significant computational speedup in comparison with the best CPU implementation. This implementation can lead to a practical solution for defending against the pollution attacks in distributed systems.
Cheong et al., \cite{cheong2015fast} contribute to the cryptography research community by presenting techniques to accelerate symmetric block ciphers (IDEA, Blowfish and Threefish) on the NVIDIA GTX 690 with Kepler architecture. The results are benchmarked against an implementation in OpenMP and existing GPU implementations in the literature. They are able to achieve encryption throughputs of 90.3 Gbps, 50.82 Gbps and 83.71 Gbps for IDEA, Blowfish and Threefish respectively. Block ciphers can be used as pseudorandom number generators (PRNG) when operating under counter mode (CTR), but the speed is usually slower compared to other PRNGs using lighter operations. Hence, they attempt to modify IDEA and Blowfish in order to achieve faster PRNG generation. The modified IDEA and Blowfish manage to pass all NIST statistical tests and TestU01 SmallCrush except the more stringent tests in TestU01 (Crush and BigCrush).
\section{Preliminary}
The secure hash algorithm-3 (SHA-3) family is based on an instance of the \textit{Keccak} algorithm that was selected as the winner of the SHA-3 cryptographic hash algorithm competition by NIST in 2012. SHA-3 consists of four cryptographic hash functions, namely SHA-3-224, SHA-3-256, SHA-3-384 and SHA-3-512, as well as two extendable-output functions, SHAKE-128 and SHAKE-256. The extendable-output functions differ from the hash functions in that they provide a flexible output length, which can easily be adapted to the requirements of individual applications. In general, hash functions play an important role in many fields, including digital signatures, pseudorandom bit generation, etc.
\subsection{\textit{Keccak-p}}
The SHA-3 functions can be viewed as modes of $Keccak-p$ permutations, which are designed as the main components of various cryptographic functions. Two core parameters of $Keccak-p$ permutations are specified as $width$ and $round$. In this case, $width$ is denoted by $b$, meaning the fixed length of the permuted strings and $round$ is denoted by $n_r$, meaning that the number of iterations of an internal transformation.
The state of $Keccak-p[b,n_r]$ consists of $b$ bits. In addition, specifications in standard contain two other quantities related to $b$: $b/25$ and $log_2(b/25)$, denoted by $w$ and $l$, respectively. Seven possible cases for these variables that are defined for $Keccak-p[b,n_r]$ are given in Table 1.
\begin{table}[htbp]
\centering
\caption{The widths and other quantities of $Keccak-p[b,n_r]$}
\label{my-label}
\begin{tabular}{l|l|l|l|l|l|l|l}
\hline
b & 25 & 50 & 100 & 200 & 400 & 800 & 1600 \\ \hline
w & 1 & 2 & 4 & 8 & 16 & 32 & 64 \\ \hline
l & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
\end{tabular}
\end{table}
It is convenient to represent the input and output states of the step mappings as a five-by-five-by-$w$ array denoted by $A[x,y,z]$, indexed by an integer triple $(x,y,z)$ where $0 \leq x < 5$, $0 \leq y < 5$ and $0 \leq z < w$. A string can be denoted as $S$. An array is a representation of the string by a three-dimensional array and their relationship can be expressed in equation (1).
\begin{equation}
A[x,y,z] = S[w(5y+x)+z]
\end{equation}
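Equation (1) translates directly into code. The following Python sketch is an illustrative reference implementation (not our GPU kernel); $S$ is assumed to be a list of bits of length $b=25w$:
\begin{verbatim}
def string_to_state(S, w):
    # A[x][y][z] = S[w*(5*y + x) + z], Eq. (1)
    return [[[S[w * (5 * y + x) + z] for z in range(w)]
             for y in range(5)] for x in range(5)]
\end{verbatim}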
After representing a string as a state, next we need to operate on the state. The specifications of the operations, including $\vartheta$, $\rho $, $\pi $, $\chi $ and $\zeta $, are discussed in the following section. Note that the algorithm for each step mapping takes a state array denoted by $A$. The return or output state array is denoted by $A'$. The size of the state is a parameter that is omitted from the notation because $b$ is always specified when the step mappings are invoked.
\noindent \textbf{Definition 1. $\vartheta $:} the input state array is denoted by $A$ and the output state array is denoted by $A'$. Then,
\begin{equation}
C[x,z]=A[x,0,z] \oplus A[x,1,z] \oplus A[x,2,z] \oplus A[x,3,z] \oplus A[x,4,z]
\end{equation}
\begin{equation}
D[x,z]=C[(x-1)mod(5), z] \oplus C[(x+1)mod(5), (z-1)mod(w)]
\end{equation}
\begin{equation}
A'[x,y,z]=A[x,y,z] \oplus D[x,z]
\end{equation}
\noindent where $0 \leq x < 5$, $0 \leq y < 5$ and $0 \leq z < w$. The effect of the specification $\vartheta $ is to $XOR$ each bit in the state with the parities of two columns in the array. In particular, for $A[x_0,y_0,z_0]$, the $x$-coordinate of one of the columns is $(x_0-1)mod(5)$ with the same $z$-coordinate $z_0$; while the $x$-coordinate of the other column is $(x_0+1)mod(5)$ with coordinate $(z_0-1)mod(w)$.
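A direct Python transcription of equations (2)--(4) may clarify the data flow of $\vartheta$; this is an unoptimized reference sketch operating on bit arrays, not our GPU code:
\begin{verbatim}
def theta(A, w):
    # Column parities, Eq. (2)
    C = [[A[x][0][z] ^ A[x][1][z] ^ A[x][2][z]
          ^ A[x][3][z] ^ A[x][4][z] for z in range(w)] for x in range(5)]
    # Combination of two neighboring column parities, Eq. (3)
    D = [[C[(x - 1) % 5][z] ^ C[(x + 1) % 5][(z - 1) % w]
          for z in range(w)] for x in range(5)]
    # XOR each state bit with D, Eq. (4)
    return [[[A[x][y][z] ^ D[x][z] for z in range(w)]
             for y in range(5)] for x in range(5)]
\end{verbatim}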
\noindent \textbf{Definition 2. $\rho $:} the input state array is denoted by $A$ and the output state array is denoted by $A'$. First, $A'[0,0,z]=A[0,0,z]$ for all $0 \leq z < w$. Then, starting from $(x,y)=(1,0)$, for each $t = 0, 1, \dots, 23$ the specification of $\rho $ sets
\begin{equation}
A'[x,y,z]=A[x,y,(z-(t+1)(t+2)/2)mod(w)]
\end{equation}
\noindent for $0 \leq z < w$, followed by the coordinate update $(x,y) \leftarrow (y,(2x+3y)mod(5))$. The effect of the specification of $\rho $ is to rotate the bits of each lane by a length named $offset$, which depends on the fixed $x$ and $y$ coordinates of the lane. Equivalently, for each bit in the lane, the $z$ coordinate is modified by adding the $offset$ modulo the lane size.
\noindent \textbf{Definition 3. $\pi $:} the input state array is denoted by $A$ and the output state array is denoted by $A'$. The specification of $\pi$ is expressed as follows.
\begin{equation}
A'[x,y,z]=A[(x+3y)mod(5),x,z]
\end{equation}
\noindent where $0 \leq x < 5$, $0 \leq y < 5$ and $0 \leq z < w$. The effect of the specification $\pi$ is to rearrange the positions of the lanes.
\noindent \textbf{Definition 4. $\chi $:} the input state array is denoted by $A$, the output state array is denoted by $A'$. The specification of $\chi$ is expressed as follows.
\begin{equation}
A'[x,y,z]=A[x,y,z] \oplus (A[(x+2)mod(5), y, z] \cdot (1 \oplus A[(x+1)mod(5), y, z]))
\end{equation}
\noindent where $0 \leq x < 5$, $0 \leq y < 5$ and $0 \leq z < w$. Note that the dot in equation (7) indicates integer multiplication, which in this case is equivalent to the intended Boolean $AND$ operation. The effect of $\chi$ is to $XOR$ each bit with a non-linear function of two other bits in its row.
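As a reference sketch, the non-linear step $\chi$ of equation (7) can be written in Python as follows, with the bitwise $AND$ standing in for the integer multiplication of bits (an illustration, not our GPU code):
\begin{verbatim}
def chi(A, w):
    # A'[x][y][z] = A[x][y][z] XOR ((1 XOR A[(x+1)%5][y][z]) AND
    #                                A[(x+2)%5][y][z]), Eq. (7)
    return [[[A[x][y][z]
              ^ ((1 ^ A[(x + 1) % 5][y][z]) & A[(x + 2) % 5][y][z])
              for z in range(w)] for y in range(5)] for x in range(5)]
\end{verbatim}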
\noindent \textbf{Definition 5. $\zeta $:} the input state array is denoted by $A$ and the output state array is denoted by $A'$. The specification of $\zeta$ is expressed as follows.
\begin{equation}
RC[2^{j}-1] = rc(j+7i_r)
\end{equation}
\begin{equation}
A'(0,0,z) = A(0,0,z) \oplus RC[z]
\end{equation}
\noindent where $0 \leq z < w$ and $0 \leq j \leq l$. Note that within the specification of $\zeta$, a parameter determines $l+1$ bits of a lane value called the round constant and denoted by $RC$. Each of these $l+1$ bits is generated by a function that is based on a linear feedback shift register. The function is denoted by $rc$. The effect of $\zeta$ is to modify some of the bits of $A(0,0,z)$ where $0 \leq z < w$ in a manner that depends on the round index $i_r$.
Thus, given a state array $A$ and a round index $i_r$, the round function $Rnd$ is the transformation that results from applying the steps as follows.
\begin{equation}
Rnd(A,i_r)=\zeta( \chi( \pi( \rho ( \vartheta (A) ) ) ), i_r )
\end{equation}
Note that the $Keccak-p[b, n_r]$ permutation consists of $n_r$ iterations of $Rnd$.
\subsection{Sponge}
The sponge construction is a framework for specifying functions on binary data with arbitrary output length. The construction employs three components: an underlying function on fixed-length strings denoted by $f$, a parameter named the rate denoted by $r$, and a padding function denoted by $pad$. These components form a sponge function denoted by $Sponge[f, pad, r]$. The sponge function takes two inputs: a bit string denoted by $N$ and the bit length denoted by $d$ of the output string denoted by $Z$. Note that the input $d$ determines the number of bits that the sponge algorithm returns, but it does not affect their actual values. In principle, the output can be regarded as an infinite string whose computation is halted after the desired number of output bits is produced.
The padding rule for the $Keccak$ family is named multi-rate padding. Given a positive integer $x$ and a non-negative integer $m$, the padding rule for $Keccak$, denoted by $pad10 \cdot 1 (x,m)$, is specified as given in equation (11):
\begin{equation}
pad10 \cdot 1 (x,m) = 1||0^{(-m-2)mod(x)}||1
\end{equation}
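As a sanity check, the rule can be expressed in a few lines of Python (an illustrative sketch; the returned list contains the bits appended to an $m$-bit message so that the padded length becomes a positive multiple of $x$):
\begin{verbatim}
def pad10star1(x, m):
    j = (-m - 2) % x           # number of zero bits between the two ones
    return [1] + [0] * j + [1]
\end{verbatim}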
\subsection{SHA-3}
Keccak is the family of sponge functions with the $Keccak-p[b,12+2l]$ permutation. The family is parameterized by any choice of the rate $r$ and the capacity $c$ such that $r+c$ is in $\{25,50,100,200,400,800,1600\}$. When restricted to the case $b=1600$, the $Keccak$ family is denoted by $Keccak[c]$. In this case, $r$ is determined by the choice of $c$. The algorithm $Keccak[c]$ is specified as follows.
\begin{equation}
Keccak[c](N,d)=Sponge[Keccak-p[1600, 24], pad10 \cdot 1, 1600 -c](N,d)
\end{equation}
Four SHA-3 hash functions and two SHA-3 $XOF$s are now defined. Given a message $M$, the four SHA-3 hash functions are defined from the $Keccak[c](N,d)$ function by appending a two-bit suffix to $M$ and by specifying the length of the output as follows.
\begin{equation}
SHA-3-224(M)=Keccak[448](M||01,224)
\end{equation}
\begin{equation}
SHA-3-256(M)=Keccak[512](M||01,256)
\end{equation}
\begin{equation}
SHA-3-384(M)=Keccak[768](M||01,384)
\end{equation}
\begin{equation}
SHA-3-512(M)=Keccak[1024](M||01,512)
\end{equation}
\begin{equation}
SHAKE-128(M,d)=Keccak[256](M||1111,d)
\end{equation}
\begin{equation}
SHAKE-256(M,d)=Keccak[512](M||1111,d)
\end{equation}
In this case, the capacity is double the digest length, in other words, $c=2d$ and the resulting input $N$ to $Keccak[c]$ is the message with the suffix appended $N=M||01$. The suffix supports domain separation, which distinguishes the inputs to $Keccak[c](N,d)$ arising from the SHA-3 hash functions from the inputs arising from the SHA-3 $XOF$s.
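As a convenient CPU reference against which a GPU implementation can be validated, the four fixed-length SHA-3 functions and both XOFs are available in Python's standard hashlib module (Python 3.6 and later), for example:
\begin{verbatim}
import hashlib

msg = b"abc"
print(hashlib.sha3_256(msg).hexdigest())     # SHA-3-256(M)
print(hashlib.shake_128(msg).hexdigest(32))  # SHAKE-128(M, d = 256 bits)
\end{verbatim}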
\section{GPU Accelerated SHA-3}
The SHA-3 parallel hash modes can be divided into two types: Batch mode and Tree mode. In Batch mode, a message is divided into multiple slices of identical size that are hashed in parallel, or, alternatively, multiple independent messages are processed in parallel at one time. In Tree mode, multiple hashes are combined pairwise until a single hash root is obtained, which can be understood as a Merkle tree root. In this article, we adopt Batch mode to implement the parallel computation of the hash algorithm.
\subsection{Parallel Granularity}
SHA-3 parallel modes can be divided into Batch mode and Tree mode, and the GPU parallelism used by the two modes is different. Batch mode normally uses 'one thread one message' parallel granularity, which means that multiple GPU threads process multiple messages at the same time. Tree mode normally uses 'one thread one tree' parallel granularity, which means that multiple GPU threads process multiple hash trees at the same time. In this paper, we mainly focus on Batch mode and our parallel granularity is 'one thread per message'.
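The following Python sketch illustrates the Batch-mode mapping; on the GPU, each loop iteration corresponds to the work of one CUDA thread, with the loop index playing the role of the global thread index. This is a simplified CPU analogue for illustration only, not our actual kernel code:
\begin{verbatim}
import hashlib

def batch_hash(messages):
    # On the GPU: tid = blockIdx.x * blockDim.x + threadIdx.x,
    # and each thread hashes messages[tid] independently.
    return [hashlib.sha3_256(m).digest() for m in messages]
\end{verbatim}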
\subsection{RC Tables Allocation}
SHA-3 RC tables are essentially look-up tables through which part of the cryptographic operations can be implemented quickly. Every thread needs to access the RC tables in each round of the cryptographic operations. Hence we need to load the RC tables into GPU memory in advance. In our benchmarking approach, we load the RC tables into CUDA constant memory. Another possible solution is to load the RC tables into CUDA shared memory.
\subsection{Plaintext Allocation}
In our implementation of the SHA-3 algorithm, we mainly use Batch mode: a large number of messages are hashed at the same time. In our experiment, we hash a large number of equal-length messages (10 bytes each) via a large number of synchronized GPU threads. For example, if we have 100 messages and the length of each message is 10 bytes, then we need at least 100 CUDA threads for hashing.
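For example, a batch like the one above can be generated and hashed on the CPU for validation as follows (using the batch_hash sketch introduced earlier; os.urandom stands in for the actual test data):
\begin{verbatim}
import os

msgs = [os.urandom(10) for _ in range(100)]  # 100 messages, 10 bytes each
digests = batch_hash(msgs)  # the GPU version needs at least 100 threads
\end{verbatim}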
\section{Experimental Results}
\begin{table}[]
\centering
\caption{Environment specification}
{\footnotesize \label{my-label}
\begin{tabular}{l|c}
\hline
CPU & Intel Core i5-7200U @ 2.5GHz*4 \\ \hline
GPU & GeForce 940MX/PCIe/SSE2 \\ \hline
Memory & 12 GiB \\ \hline
OS & Ubuntu 16.04 LTS, 64 bits \\ \hline
CUDA compilation & V7.5.17 \\ \hline
GCC & V5.4.0 \\ \hline
\end{tabular}}
\end{table}
Table 2 shows the configuration of our experiment platform. We conduct our experiments on a Ubuntu 16.04 LTS 64-bit operating system with 12 GiB of memory, an Intel Core i5-7200U @ 2.5GHz*4, and a GeForce 940MX/PCIe/SSE2 GPU. The version of the nvcc compiler is V7.5.17 and the version of GCC is V5.4.0. Our CPU code is written in standard C language and the GPU code is written in CUDA.
\begin{table}[]
\centering
\caption{SHA-3 CPU V.S. GPU}
{\footnotesize \label{my-label}
\begin{tabular}{l|l|l|l|l}
\hline
\multirow{3}{*}{File Size (bytes)} & \multicolumn{4}{c}{Hash} \\ \cline{2-5}
& \multicolumn{2}{c|}{Time (seconds)} & \multicolumn{2}{c}{Throughput (bytes per second)} \\ \cline{2-5}
& \multicolumn{1}{c|}{CPU} & \multicolumn{1}{c|}{GPU} & \multicolumn{1}{c|}{CPU} & \multicolumn{1}{c}{GPU} \\ \hline
1202 & 0.002656 & 0.000431 & 452560.24 & 2788863.11 \\ \hline
4652 & 0.008600 & 0.000330 & 540930.23 & 14096969.69 \\ \hline
9302 & 0.016400 & 0.000330 & 567195.12 & 28187878.78 \\ \hline
18602 & 0.032475 & 0.000346 & 572809.85 & 53763005.78 \\ \hline
37202 & 0.065230 & 0.000373 & 570320.40 & 99737265.42 \\ \hline
74402 & 0.129495 & 0.000382 & 574555.00 & 194769633.41 \\ \hline
148802 & 0.258204 & 0.000437 & 576296.26 & 340508009.15 \\ \hline
297602 & 0.516079 & 0.000557 & 576659.77 & 534294434.47 \\ \hline
595202 & 1.036339 & 0.000755 & 574331.37 & 788347019.87 \\ \hline
1190402 & 2.060367 & 0.001204 & 577762.12 & 988705980.06 \\ \hline
\end{tabular}}
\end{table}
Table 3 shows the SHA-3 hashing performance comparison between CPU and GPU. One thing to note is that the GPU's parallel computing performance is affected by several factors: when the number of concurrent tasks is small, e.g., when the file size is less than 1K bytes, the parallel computing capability of the GPU cannot be fully exploited. Once the number of parallelizable tasks is large, the GPU's parallel computing capability is greatly utilized. This is very obvious in our experiments: once the file size exceeds 1K bytes, the GPU outperforms the CPU by more than 4 times, and, as the number of parallelizable files increases, the GPU's powerful parallelism pays off even more.
\section{Conclusion}
This paper implements and optimizes Batch-mode-based Keccak algorithms on the NVIDIA GPU platform. Our work considers the case of processing multiple hash tasks at once and implements this case on CPU and GPU respectively. Our experimental results show that GPU performance is significantly higher than CPU performance in the case of processing large batches of small hash tasks. In future work, we aim to implement and analyze Hash Tree mode based Keccak algorithms, where many CUDA reduce operations are involved. Our project is now available on GitHub: https://github.com/Canhui/SHA3-ON-GPU.
\section*{Acknowledgements}
This work is supported by Shenzhen Basic Research Grant SCI-2015-SZTIC-002.
\section{Introduction}
Confidence sets, regions, and intervals are fundamental tools in data science, statistical inference, and machine learning, capturing a range of plausible beliefs of the parameters of a model. For simplicity of computation and analysis, most approaches to construct confidence sets rely on approximation or bounds that are loose in the small sample regime \cite{casella2021statistical, chafai2009confidence, malloy2021ISIT}. While these approaches are often optimal asymptotically, tighter confidence sets in the small sample regime can reduce sample complexity in A/B testing, reinforcement learning algorithms, and other problems in applied data science \cite{malloy2020optimal, jamieson2013finding, malloy2015contamination, malloy2012quickest, malloy2013sample}.
Finding tight confidence sets for categorical distributions is a long studied problem. The goal is to construct sets of minimal volume (i.e, as small as possible) that contain the true parameter with high confidence.
Recent work \cite{malloy2021ISIT} studied a confidence set construction based on level-sets of the exact $p$-value function that satisfies a minimum volume property. Averaged over the possible empirical outcomes, \cite{malloy2021ISIT} showed that the confidence sets proposed in \cite{chafai2009confidence} have minimum volume among any confidence set construction. The result is based on a duality between hypothesis testing and confidence sets, and is a specific instance of a general theory of optimal confidence sets \cite{brown1995optimal} first observed in restricted settings in the traditional work of Sterne \cite{sterne1954some} and Crow \cite{crow1956confidence}.
The minimum volume confidence sets (MVCs) for the multinomial parameter are defined by the level-sets of the exact $p$-value. To compute membership of a single parameter value in the set, one must compute the exact $p$-value; naively, this involves enumerating and computing partial sums over all the empirical outcomes of $n$ i.i.d.~observations that may take one of $k$ possible values. While this direct approach to computing the $p$-value scales as $n^k$, recent work \cite{resin2020simple} has reduced this computation to $(\sqrt{n})^k$. Nonetheless, checking membership of a single parameter value in the confidence set becomes prohibitive for modest values of $k$ and $n$.
This computational limitation becomes even more challenging for basic applications. A common task is to observe two empirical outcomes and determine if the corresponding confidence sets are disjoint. This arises in A/B testing; if two confidence sets are disjoint, then the underlying multinomial parameters associated with the outcomes are different (to within a significance specified by the confidence-level). A naive approach is to grid the multinomial parameter values and check for a value that lies in the confidence sets of both outcomes. Unfortunately, this fails to guarantee an empty intersection, as the MVCs can have irregular geometry, including arbitrarily small disconnected regions. As they are constructed from the level-sets of a discontinuous function, the MVCs do not satisfy properties such as convexity, radial-convexity, or connected-ness (see Fig. \ref{fig:confidence_set}). This contrasts with traditional confidence sets where an empty intersection can be determined by exploiting geometric properties such as convexity.
This paper studies the geometry of the {\em minimum volume confidence sets for the multinomial parameter}. We describe an algorithm that enumerates and covers the regions of the simplex over which the exact $p$-value function is continuous, enabling numerical characterization of the MVCs. As a consequence, we answer a basic question in A/B testing in the restricted setting of three categories. Given two multinomial outcomes, how can one determine if their corresponding confidence sets are disjoint? The numerical characterization of the sets, facilitated by the enumeration and covering of the continuous regions of the exact $p$-value function provides a definitive answer to this question. The approach sheds light on the answer for more than three categories, but this remains an open question.
\section{Notation and Basic Definitions}
Let $X_1, \dots, X_n$ be i.i.d.\ samples of a categorical random variable that takes one of $k$ possible values from a finite number of categories $\mathcal{X} = \{x_1, \dots, x_k\}$. The empirical distribution $\widehat{\boldsymbol{p}}$ is the relative proportion of occurrences of each element of $\mathcal{X}$ in $X_1, \dots, X_n$, i.e., $\widehat{\boldsymbol{p}} = [\nicefrac{n_1}{n}, \dots, \nicefrac{n_k}{n}]$, where ${n}_i = \sum_{j=1}^n {{\mathds{1}}_{\{ X_j = x_i \} } }$. Let $\Delta_{k,n}$ denote the discrete simplex from $n$ samples over $k$ categories:
\begin{align*}
\Delta_{k,n} \ := \ \left\{ \widehat{\boldsymbol{p}} \in \{0, \ \nicefrac{1}{n},\ \nicefrac{2}{n}, \ \dots, \ 1\}^k \ : \ \sum_{i=1}^k \widehat{p}_i =1 \right\},
\end{align*}
and define $m = |\Delta_{k,n}|= { n+k-1 \choose k-1}$.
Denote the continuous simplex as $\Delta_k = \left\{\boldsymbol{p} \in [0,1]^k : \sum_i p_i =1 \right\}$. We use $\mathcal{P}(\Delta_{k,n})$ to denote the power set of $\Delta_{k,n}$, and $\mathcal{P}(\Delta_k)$ to denote the set of Lebesgue measurable subsets of $\Delta_k$. For any $\mathcal{S} \subset \Delta_{k,n}$ we write $\mathbb{P}_{\boldsymbol{p}}(\mathcal{S})$ as shorthand for $\mathbb{P}_{\boldsymbol{p}} \left(\left\{ X \in \mathcal{X}^n : \widehat{\boldsymbol{p}}(X) \in \mathcal{S}\right\} \right)$, where $\mathbb{P}_{\boldsymbol{p}}( \cdot)$ denotes the probability measure under the multinomial parameter $\boldsymbol{p} \in\Delta_{k}$.
\begin{defi}(Confidence set) Let $\mathcal{C}_{\alpha}( \widehat{\boldsymbol{p}}): \Delta_{k,n} \rightarrow \mathcal{P}(\Delta_{k})$ be a set valued function that maps an observed empirical distribution $\widehat{\boldsymbol{p}}$ to a subset of the $k$-simplex. $\mathcal{C}_{\alpha}( \widehat{\boldsymbol{p}})$ is a \emph{confidence set} at confidence level $1-\alpha$ if the following holds:
\begin{eqnarray} \label{eqn:cr}
\sup_{\boldsymbol{p} \in \Delta_{k}} \mathbb{P}_{\boldsymbol{p}} \left( \boldsymbol{p} \not \in \mathcal{C}_{\alpha}(\widehat{\boldsymbol{p}}) \right) \ \leq \ \alpha.
\end{eqnarray}
\label{def:cr}
\end{defi}
\begin{defi} ($p$-value) Fix an outcome $\widehat{\boldsymbol{p}}$. The
$p$-value as a function of the null hypothesis $\boldsymbol{p}$ is given by:
\begin{align} \label{eqn:partial}
\rho_{\widehat{\boldsymbol{p}}}(\boldsymbol{p}) \ = \ \sum_{\widehat{\boldsymbol{q}} \in \Delta_{k,n} : \mathbb{P}_{\boldsymbol{p}}(\widehat{\boldsymbol{q}}) \leq \mathbb{P}_{\boldsymbol{p}}(\widehat{\boldsymbol{p}}) } \mathbb{P}_{\boldsymbol{p}} \left( \widehat{\boldsymbol{q}} \right).
\end{align}
\end{defi}
\noindent
For a fixed outcome $\widehat{\boldsymbol{p}}$, we write $\rho(\boldsymbol{p})$ for simplicity.
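For small $k$ and $n$, the exact $p$-value can be computed by direct enumeration of $\Delta_{k,n}$. The following Python sketch (an illustrative brute-force implementation assuming SciPy; as noted in the introduction it scales as $n^k$, and a small tolerance guards floating-point ties) makes the definition concrete:
\begin{verbatim}
from itertools import product
from scipy.stats import multinomial

def types(k, n):
    # Enumerate Delta_{k,n} as integer count vectors summing to n
    for c in product(range(n + 1), repeat=k - 1):
        if sum(c) <= n:
            yield c + (n - sum(c),)

def p_value(phat_counts, p, n):
    # Sum P_p(q) over outcomes no more likely than the observed one
    ref = multinomial.pmf(phat_counts, n, p)
    return sum(multinomial.pmf(q, n, p) for q in types(len(p), n)
               if multinomial.pmf(q, n, p) <= ref * (1 + 1e-12))
\end{verbatim}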
\begin{prop}
\label{def:minvol}
(Minimum volume confidence set (MVCs) \cite{malloy2021ISIT}).
The MVCs are defined as
\begin{eqnarray*}
\mathcal{C}_{\alpha}^\star(\widehat{\boldsymbol{p}}) \ := \ \big\{\boldsymbol{p} \in\Delta_{k} \ : \ \rho_{\widehat{\boldsymbol{p}}}(\boldsymbol{p}) \geq \alpha \big\},
\end{eqnarray*}
and satisfy
\begin{eqnarray*}
\sum_{\widehat{\boldsymbol{p}} \in \Delta_{k,n} } \mathrm{vol} \left( \mathcal{C}_{\alpha}^\star(\widehat{\boldsymbol{p}}) \right) \leq
\sum_{\widehat{\boldsymbol{p}} \in \Delta_{k,n} } \mathrm{vol} \left( \mathcal{C}_{\alpha}(\widehat{\boldsymbol{p}}) \right)
\end{eqnarray*}
for any confidence set $\mathcal{C}_{\alpha}( \cdot )$; here $\mathrm{vol}( \cdot ) $ denotes the Lebesgue measure. A proof can be found in \cite{malloy2021ISIT}.
\end{prop}
\section{Geometry of the Minimum Volume \\ Confidence Sets} \label{sec:Geometry}
By definition, the MVCs are the level-sets of the $p$-value function, which is a discontinuous function of $\boldsymbol{p}$, because it is a partial sum of the multinomial outcomes. In particular, an arbitrarily small change in $\boldsymbol{p}$ can include or exclude new terms of the sum in \eqref{eqn:partial}. For a region of the simplex over which the terms included in the partial sum do not change, the $p$-value \emph{is} continuous, as it is a sum of continuous functions. If terms included in the partial sum change, a discontinuity may occur.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{pictures/Discon_CS.png}
\caption{An example of disconnected MVCs (blue) with observation $\widehat{\boldsymbol{p}} = [0,1,0]$ and confidence level 0.5 for $k=3,n=4$. Note the figure represents a corner of the simplex, as specified by the range on the axis.
\label{fig:confidence_set}}
\end{figure}
The terms included in the partial sum in (\ref{eqn:partial}) correspond to $\{\widehat{\boldsymbol{q}} \in \Delta_{k,n} : \mathbb{P}_{\boldsymbol{p}}(\widehat{\boldsymbol{q}}) \leq \mathbb{P}_{\boldsymbol{p}}(\widehat{\boldsymbol{p}})\}$. Consequently, the discontinuities of $\rho_{\widehat{\boldsymbol{p}}}(\boldsymbol{p})$ occur whenever $\mathbb{P}_{\boldsymbol{p}}(\widehat{\boldsymbol{p}}) = \mathbb{P}_{\boldsymbol{p}}(\widehat{\boldsymbol{q}})$ for some $\widehat{\boldsymbol{q}} \in \Delta_{k,n}\backslash \widehat{\boldsymbol{p}}$. Observe that $\mathbb{P}_{\boldsymbol{p}}(\widehat{\boldsymbol{p}})$ is fully characterized by the multinomial distribution with parameter $\boldsymbol{p}$ as:
\begin{align*}
\mathbb{P}_{\boldsymbol{p}}(\widehat{\boldsymbol{p}})
\ = \ \frac{n!}{(n\widehat{p}_1)! \ldots (n\widehat{p}_k)!} p_1^{n\widehat{p}_1} \cdots p_k^{n\widehat{p}_k}.
\end{align*}
It follows that the condition $\mathbb{P}_{\boldsymbol{p}}(\widehat{\boldsymbol{p}}) = \mathbb{P}_{\boldsymbol{p}}(\widehat{\boldsymbol{q}})$ can be rewritten as the following equation:
\begin{align*}
c_0 p_1^{c_1} p_2^{c_2} \cdots p_k^{c_k} = 1,
\end{align*}
where $c_0:=\frac{(n\widehat{q}_1)! \cdots (n\widehat{q}_k)!}
{(n\widehat{p}_1)! \cdots (n\widehat{p}_k)!}$, and $c_i = n(\widehat{p}_i - \widehat{q}_i)$.
This implies that the discontinuities of $\rho(\boldsymbol{p})$ are characterized by a union of algebraic varieties in the simplex $\Delta_k$.
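As a concrete instance (our computation, using the parameters of Fig. \ref{fig:continuity_regions}): let $k=3$, $n=4$, $\widehat{\boldsymbol{p}} = [\nicefrac{1}{4},\nicefrac{1}{2},\nicefrac{1}{4}]$ and $\widehat{\boldsymbol{q}} = [0,1,0]$. Then
\begin{align*}
c_0 = \frac{0!\,4!\,0!}{1!\,2!\,1!} = 12,
\qquad
(c_1,c_2,c_3) = (1,-2,1),
\end{align*}
and the discontinuity condition $12\,p_1 p_2^{-2} p_3 = 1$ traces the curve $p_2^2 = 12\,p_1 p_3$ inside the simplex.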
\begin{defi}(Discontinuity variety) \label{def:dis_var}
Consider a fixed observation $\widehat{\boldsymbol{p}} \in \Delta_{k,n}$, and let $\ell=1,\dots,m-1$ enumerate the types $\widehat{\boldsymbol{q}} \in \Delta_{k,n} \backslash \widehat{\boldsymbol{p}}$. We define the {\bf discontinuity variety} $\mathcal{V}_{\ell}$ as the intersection of the simplex $\Delta_k$ with the $(k-1)$-dimensional algebraic variety characterized by
\begin{align}
\label{polyDef}
f_\ell(\boldsymbol{p}) &= 1 - c_0 p_1^{c_1} p_2^{c_2} \cdots p_k^{c_k}.
\end{align}
Notice that $f_{\ell}(\boldsymbol{p})$ is defined over ${\mathbb R}^k$, whereas $\mathcal{V}_{\ell}$ is a $(k-2)$-dimensional subset of $\Delta_k$, and both have implicit dependence on $\widehat{\boldsymbol{p}}$ and $\widehat{\boldsymbol{q}}$ through $c_0, \dots, c_{k}$.
\end{defi}
It follows that the union of the discontinuity varieties
\begin{align*}
\bigcup_{\ell =1}^{m-1} \mathcal{V}_{\ell}
\end{align*}
characterizes all the discontinuities in the $p$-value function, which in turn partitions the simplex $\Delta_k$ into at most $2^{m-1}$ (possibly disconnected) sets. To see this observe that each discontinuity variety $\mathcal{V}_{\ell}$ splits $\Delta_k$ in two open sets:
\begin{align*}
\{\boldsymbol{p} \in \Delta_k : f_\ell(\boldsymbol{p}) < 0\},
\hspace{.25cm} \text{and} \hspace{.25cm}
\{\boldsymbol{p} \in \Delta_k : f_\ell(\boldsymbol{p}) > 0\}.
\end{align*}
Since $\ell \in \{1, \dots, m-1\}$ (there are $m-1$ elements $\widehat{\boldsymbol{q}}$ in $\Delta_{k,n} \backslash \widehat{\boldsymbol{p}}$), the discontinuity varieties will split $\Delta_k$ in at most $2^{m-1}$ {\em candidate sets}, each defined by a combination of directions in the {\em splitting inequalities}
\begin{align}
\label{eqn:cont_region}
\big\{ f_\ell(\boldsymbol{p}) \ \lessgtr \ 0 \big\}_{\ell=1}^{m-1}.
\end{align}
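For a sense of scale (our arithmetic): with $k=3$ and $n=4$ as in Fig. \ref{fig:continuity_regions}, $m = \binom{6}{2} = 15$, so there are at most $2^{14} = 16384$ candidate sets, most of which turn out to be empty because they arise from inconsistent splitting inequalities.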
By construction, no point in these candidate sets satisfies a discontinuity condition $f_{\ell}(\boldsymbol{p})=0$, which implies that $\rho_{\widehat{\boldsymbol{p}}}(\boldsymbol{p})$ is continuous in these regions. However, many of these candidate sets may be empty (if they result from inconsistent splitting inequalities). Moreover, each non-empty candidate set consists of a finite number of connected regions (a notion we make precise in the following section). We refer to each as a {\em continuity region} and formalize these ideas in the following.
\begin{defi}
(Candidate set; continuity set; continuity region) Given ${\boldsymbol{\omega}} \in \{-1,1\}^{m-1}$, let
\begin{align*}
\mathcal{R}_{\boldsymbol{\omega}} \ = \ \left\{ \boldsymbol{p} \in \Delta_k : \bigwedge_{\ell=1}^{m-1} f_{\ell}(\boldsymbol{p}) \mathop{\Scale[1.1]{\lessgtr}}_{\Scale[0.65]{\omega_{\ell}=1}}^{\Scale[0.65]{\omega_{\ell}=-1}} 0 \right\}
\end{align*}
be the {\bf candidate set} associated with the combination of splitting inequalities indexed by ${\boldsymbol{\omega}}$ (here $\omega_{\ell}$ denotes the $\ell^{\rm th}$ entry of ${\boldsymbol{\omega}}$).
We say $\mathcal{R}_{\boldsymbol{\omega}}$ is a {\bf continuity set} if $\mathcal{R}_{\boldsymbol{\omega}} \neq \emptyset$. Furthermore, each continuity set is the union of a finite number of connected subsets, termed {\bf continuity regions}.
We say $f_\ell(\boldsymbol{p})$ {\bf touches} $\mathcal{R}_{{\boldsymbol{\omega}}}$ if the closure of $\mathcal{R}_{{\boldsymbol{\omega}}}$ includes a $\boldsymbol{p}$ such that $f_\ell(\boldsymbol{p})=0$.
\end{defi}
An example of the continuity regions associated with an observation $\widehat{\boldsymbol{p}}$ is shown in Fig. \ref{fig:continuity_regions}. The $p$-value function $\rho(\boldsymbol{p})$ is continuous over each region.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{pictures/Discon_CS.PNG}
\caption{Partitioning of the simplex $\Delta_k$ into continuity regions for $n = 4$, $k = 3$, and $\widehat{\boldsymbol{p}} = [\nicefrac{1}{4},\nicefrac{1}{2},\nicefrac{1}{4}]$. The $p$-value function is continuous over each continuity set (indicated by the colors). Figure generated by Plotly \cite{plotly}.
\label{fig:continuity_regions}}
\end{figure}
\subsection{Identifying Continuity Sets} \label{sec:finding}
The key to identifying \emph{continuity} sets in the simplex lies in determining whether each of the $2^{m-1}$ \emph{candidate} sets is empty. To determine if a candidate set $\mathcal{R}_{\boldsymbol{\omega}}$ is empty, we first check if its splitting inequalities are feasible.
If they are infeasible, the candidate set is empty.
On the other hand, if the splitting inequalities result in a non-empty set, we can further check if this set intersects the simplex by finding the minimum and maximum of $p_1+\dots+p_k$, constrained by the splitting inequalities. As we show, if this minimum value is less than one, and the maximum is greater than one, we can conclude that $\mathcal{R}_{\boldsymbol{\omega}}$ is a (non-empty) \emph{continuity set}. We make this approach precise in the following discussion and Alg. \ref{alg:fcr}.
\begin{algorithm}
\caption{Find \emph{continuity sets}}\label{alg:fcr}
\begin{algorithmic}[1]
\State{\textbf{Input}}: observation $\widehat{\boldsymbol{p}} \in \Delta_{k, n}$
\State {initialize: set ${\boldsymbol{\Omega}} = \{ \}$ }
\For {${\boldsymbol{\omega}} \in \{-1, 1\}^{m-1}$}
\State {$\mathcal{S} = \left\{ \boldsymbol{p} \in \mathbb{R}^k : \bigwedge_{\ell=1}^{m-1} f_{\ell}(\boldsymbol{p}) \mathop{\Scale[1.1]{\lessgtr}}_{\Scale[0.65]{\omega_{\ell}=1}}^{\Scale[0.65]{\omega_{\ell}=-1}} 0 \right\}$ }
\State{Solve GP: $t_{\mathrm{min}} = \min_{\boldsymbol{p} \in \mathcal{S}} \sum_{i=1}^{k}p_i$}
\If {GP feasible and $ t_{\mathrm{min}} \leq 1$}
\State {$\mathcal{P} =\left\{ \boldsymbol{z} \in \mathbb{R}^{k+} : \bigwedge_{\ell=1}^{m-1} \boldsymbol{z}^T c_{\ell} \mathop{\Scale[1.1]{\lessgtr}}_{\Scale[0.65]{\omega_{\ell}=1}}^{\Scale[0.65]{\omega_{\ell}=-1}} c_\ell \right\}$}
\State {$\{\boldsymbol{v}_1, \dots, \boldsymbol{v}_j\} \leftarrow$ vertices of $\mathcal{P}$}
\State { $t_{\mathrm{max}} = \max_{\boldsymbol{z} \in \{\boldsymbol{v}_1, \dots, \boldsymbol{v}_j\} } \sum_{i=1}^{k}e^{-z_i}$}
\If {$t_{\mathrm{max}} \geq 1$}
\State {Append ${\boldsymbol{\omega}}$ to ${\boldsymbol{\Omega}}$}
\EndIf
\EndIf
\EndFor
\State {{\bf Return}: continuity sets ${\boldsymbol{\Omega}}$}
\end{algorithmic}
\end{algorithm}
\label{subsec:GP}
To check feasibility and find the minimum of $p_1+\cdots+p_k$ we employ a Geometric Program (GP). Denote by $t_{\mathrm{min}}$ the solution to the following optimization:
\begin{align*}
t_{\mathrm{min}} =
\underset{\boldsymbol{p} \in \mathbb{R}^{k+}}{\text{min }} & p_1+\cdots+p_k \\
\qquad \text{subject to } &
\bigwedge_{\ell=1}^{m-1} f_{\ell}(\boldsymbol{p}) \mathop{\Scale[1.1]{\lessgtr}}_{\Scale[0.65]{\omega_{\ell}=1}}^{\Scale[0.65]{\omega_{\ell}=-1}} 0.
\end{align*}
\noindent Note the objective is a posynomial function and the term $c_0 p_1^{c_1} p_2^{c_2} \cdots p_k^{c_k}$ in the discontinuity variety is a monomial function (which is also a posynomial function). This makes the optimization problem a standard GP \cite{boyd2007tutorial} and we can easily determine $t_{\mathrm{min}}$ using off-the-shelf solvers.
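As an illustration, the feasibility and minimization step can be handed to a DGP-capable solver. The sketch below (ours) assumes the \texttt{cvxpy} package; the strict splitting inequalities are relaxed to non-strict ones, which only adds the measure-zero variety boundaries.
\begin{verbatim}
import cvxpy as cp

def gp_t_min(c0s, C, omega):
    """Minimize p_1 + ... + p_k over the splitting inequalities.
    c0s[l] and C[l] hold the constants of f_l; omega[l] = -1
    selects f_l < 0 (monomial > 1), omega[l] = +1 selects f_l > 0."""
    k = len(C[0])
    p = cp.Variable(k, pos=True)
    constraints = []
    for c0, c, w in zip(c0s, C, omega):
        mono = c0
        for i in range(k):
            mono = mono * p[i] ** c[i]  # c0 * prod_i p_i^{c_i}
        constraints.append(mono >= 1 if w == -1 else mono <= 1)
    prob = cp.Problem(cp.Minimize(cp.sum(p)), constraints)
    prob.solve(gp=True)
    return prob.value  # numpy.inf when infeasible
\end{verbatim}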
Next, we aim to find the maximum of $p_1+\dots+p_k$, which is most easily accomplished after a logarithmic transformation into what we refer to as \emph{$z$-space}. While the continuity sets are defined as the intersection of the simplex with regions cut out by non-linear splitting inequalities, after the logarithmic transformation the boundaries of these regions become hyperplanes. More specifically, the constraint $f_{\ell}(\boldsymbol{p}) = 0$ for $\boldsymbol{p} \in \mathbb{R}^{k+}$ is equivalent to
$ \log(c_{0} p_1^{c_{1}} \cdots p_k^{c_{k}}) = 0, $
which can be represented as an inner product $\boldsymbol{z}^T\boldsymbol{c} = c$, a notion we make precise in the following observation.
\begin{obs} \label{obs:zspace} $\boldsymbol{z}$-space.
Let $z_i = -\log(p_i)$, $c = \log(c_0)$ and $\boldsymbol{c} = [c_1, \dots, c_k]^T$. Then,
\begin{eqnarray*}
\left\{ \boldsymbol{p} \in \mathbb{R}^{k+}: f_{\ell}(\boldsymbol{p}) = 0 \right\}
= \left\{ e^{-\boldsymbol{z}} : \boldsymbol{z}^T \boldsymbol{c} = c \right\}
\end{eqnarray*}
for $\boldsymbol{z} \in \mathbb{R}^{k+}$.
\end{obs}
\noindent Obs. \ref{obs:zspace} implies that the set of splitting inequalities in
(\ref{eqn:cont_region}) is equivalent to the linear halfspace inequalities:
\begin{eqnarray}
\{ \boldsymbol{z}^T \boldsymbol{c}_\ell \lessgtr c_\ell \}_{\ell=1}^{m-1}
\end{eqnarray}
where $\ell=1,\dots,m-1$ indexes $\widehat{\boldsymbol{q}} \in \Delta_{k,n} \setminus \widehat{\boldsymbol{p}}$. The set defined by these halfspace inequalities is an intersection of halfspaces, which forms a polyhedron $\mathcal{P}$. To compute the maximum of $p_1+\cdots+p_k$ in the original space, we can equivalently maximize $e^{-z_1}+\cdots+e^{-z_k}$, which is convex in $\boldsymbol{z}$.
\begin{obs} \label{obs:conv}
The maximum (if it exists) of a convex function $g(\boldsymbol{z})$ over the polyhedron $\mathcal{P}$ is achieved at one of the vertices $\boldsymbol{v}_1,\dots,\boldsymbol{v}_j$ of $\mathcal{P}$, since for $\sum_i \lambda_i =1$, $\lambda_i \geq 0$,
\begin{eqnarray*}
g\left(\sum_{i} \lambda_i \boldsymbol{v}_i\right) \leq \sum_{i} \lambda_i \, g\left( \boldsymbol{v}_i\right) \leq \max_i g(\boldsymbol{v}_i).
\end{eqnarray*}
\end{obs}
\noindent Obs. \ref{obs:conv} allows us to directly compute the maximum by enumerating the vertices of $\mathcal{P}$. In particular, define
\begin{eqnarray}
t_{\max} = \max_{\boldsymbol{z} \in \{\boldsymbol{v}_1, \dots, \boldsymbol{v}_j \} } \sum_{i=1}^{k} e^{-z_i}
\end{eqnarray}
where $\{\boldsymbol{v}_1, \dots, \boldsymbol{v}_j \}$ are the vertices of $\mathcal{P}$, and set $t_{\max} = \infty$ if the maximum does not exist. This gives rise to the following corollary, which provides conditions under which a candidate set is non-empty.
\begin{cor}
If $t_\mathrm{max} \geq 1$ and $t_{\min} \leq 1$, then $\mathcal{R}_{{\boldsymbol{\omega}}} \neq \emptyset$.
\begin{proof}
If the splitting inequalities are feasible, we are guaranteed to find the minimum (using the GP) and the maximum (by checking the vertices in $\boldsymbol{z}$-space) of $p_1+\dots+p_k$. Since $t_\mathrm{max} \geq 1$ and $t_{\min} \leq 1$, the intermediate value theorem implies $p_1 + \dots + p_k = 1$ for some $\boldsymbol{p}$ satisfying the splitting inequalities; this applies because the splitting inequalities define a connected subset of $\mathbb{R}^{k+}$ (its image in $\boldsymbol{z}$-space is the convex polyhedron $\mathcal{P}$). Together, this implies $\mathcal{R}_{{\boldsymbol{\omega}}} \neq \emptyset$.
\end{proof}
\end{cor}
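For illustration, when $\mathcal{P}$ is bounded and a strictly interior point is known, the vertex enumeration can be delegated to SciPy. In the sketch below (ours), the rows of $A\boldsymbol{z} \leq \boldsymbol{b}$ encode both directions of the halfspace inequalities (a $\geq$ constraint is negated) together with $\boldsymbol{z} \geq 0$; in the unbounded case one falls back to $t_{\max} = \infty$ as above.
\begin{verbatim}
import numpy as np
from scipy.spatial import HalfspaceIntersection

def t_max_bounded(A, b, z_interior):
    """Max of sum_i exp(-z_i) over P = {z : A z <= b}; by convexity
    the maximum is attained at a vertex of P. SciPy encodes the
    halfspace a.z - b <= 0 as the stacked row [a, -b]."""
    halfspaces = np.hstack([A, -b.reshape(-1, 1)])
    verts = HalfspaceIntersection(halfspaces, z_interior).intersections
    return float(np.exp(-verts).sum(axis=1).max())
\end{verbatim}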
\subsection{Identifying Continuity Set Vertices}
Understanding and enumerating the vertices of the continuity sets plays an important role in understanding the geometry of the MVCs.
\begin{defi} \label{def:vertex}
A \emph{vertex} is a point $\boldsymbol{v} \in \Delta_{k}$ where $k-1$ splitting equalities intersect:
\begin{eqnarray*}
\boldsymbol{v} \in \Delta_k : f_{\ell_i}(\boldsymbol{v}) = 0,\ \ \ i=1,\dots,k-1.
\end{eqnarray*}
\end{defi}
\noindent We highlight that the vertices defined here are different from the vertices \emph{of a polytope} $\mathcal{P}$ discussed in the previous section. In particular, Def. \ref{def:vertex} refers to vertices of the continuity sets (which must lie in the simplex) as opposed to the vertices of a polytope $\mathcal{P}$ in $\boldsymbol{z}$-space.
\begin{cor} \label{cor:vertex1}
The number of vertices is at most $2 {m-1 \choose k-1}$.
\begin{proof}
There are $m-1$ splitting equalities; each choice of $k-1$ of them results in at most 2 vertices. To see this, note the following two observations. First, in $\boldsymbol{z}$-space, the intersection of $k-1$ splitting equalities is a 1-d affine space in $\mathbb{R}^k$:
it is the solution set of a system of linear equations $A \boldsymbol{z} = \boldsymbol{c}$, where $A \in \mathbb{R}^{(k-1)\times k}$ has rows $\boldsymbol{c}_{\ell_1}^T,\dots, \boldsymbol{c}_{\ell_{k-1}}^T$, assuming the $\boldsymbol{c}_{\ell_1},\dots, \boldsymbol{c}_{\ell_{k-1}}$ are in general position and form a linearly independent set.
Next note that the simplex in $\boldsymbol{z}$-space is
\begin{eqnarray} \label{eqn:z_simplex}
\tilde \Delta_k = \left\{\boldsymbol{z} \in \mathbb{R}^{k+} : \sum_{i=1}^k e^{-z_i} =1 \right\}
\end{eqnarray}
and is the boundary of a strictly convex set: the sublevel set
$\{ \boldsymbol{z} \in \mathbb{R}^{k+}: \sum_{i=1}^k e^{-z_i} \leq 1 \}$
is strictly convex, since $\sum_{i=1}^k e^{-z_i}$ is a strictly convex function for $z_i > 0$ and the level-sets of a strictly convex function are strictly convex sets \cite{boyd2004convex}.
Lastly, a 1-d affine space can intersect the boundary of a strictly convex set at most twice. \end{proof}
\end{cor}
The vertices can be identified in $\boldsymbol{z}$-space by directly enumerating all subsets of $k-1$ splitting equalities. For each subset, the (at most two) vertices can be computed numerically using a line search to determine where the $1$-$d$ affine space intersects $\tilde{\Delta}_k$ defined in (\ref{eqn:z_simplex}). Further observations relating continuity regions to vertices can be found in Appendix \ref{app:verts}, Cor. \ref{cor:vertex2} and Cor. \ref{cor:vertex3}.
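A sketch of this line search (ours, assuming SciPy): parameterize the affine space as $\boldsymbol{z}(t) = \boldsymbol{z}_0 + t\boldsymbol{d}$, where $A\boldsymbol{z}_0 = \boldsymbol{c}$ and $\boldsymbol{d}$ spans the null space of $A$. Since $g(t) = \sum_i e^{-z_i(t)} - 1$ is strictly convex, it has at most two roots, which bracketing on a grid will locate; filtering the resulting points to the positive orthant is elided.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def vertices_on_line(z0, d, t_lo=-50.0, t_hi=50.0, num=10001):
    """Intersections of z(t) = z0 + t*d with sum_i exp(-z_i) = 1."""
    g = lambda t: np.exp(-(z0 + t * d)).sum() - 1.0
    ts = np.linspace(t_lo, t_hi, num)
    vals = np.array([g(t) for t in ts])
    idx = np.nonzero(vals[:-1] * vals[1:] < 0)[0]
    roots = [brentq(g, ts[i], ts[i + 1]) for i in idx]
    return [np.exp(-(z0 + t * d)) for t in roots]  # back to p-space
\end{verbatim}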
\subsection{Covering of Continuity Sets}
Fully specifying the MVCs requires specifying the level-sets of the $p$-value function over $\Delta_{k}$. Numerically, this requires specifying a \emph{discrete covering set} or \emph{cover} for each continuity set. Note that a naive approach to covering a continuity set -- covering the simplex and assigning each point to the continuity set to which it belongs -- fails, as it can miss arbitrarily small continuity sets or those with an irregular, `pointy' geometry.
Consider a single continuity set $\mathcal{R}_{{\boldsymbol{\omega}}}$. One approach to covering $\mathcal{R}_{{\boldsymbol{\omega}}}$ is to consider subsets of the discrete simplex comprised of all the points inside $\mathcal{R}_{{\boldsymbol{\omega}}}$ \emph{and} all points outside $\mathcal{R}_{{\boldsymbol{\omega}}}$ but within a specified distance.
\begin{defi}
$(\epsilon, \delta)$-cover. We say $\mathcal{G}_{{\boldsymbol{\omega}}} \subset \Delta_{k,\eta}$ is an $(\epsilon, \delta)$-cover of $\mathcal{R}_{{\boldsymbol{\omega}}}$ if for every $\boldsymbol{p} \in \mathcal{R}_{{\boldsymbol{\omega}}}$ there is a $\boldsymbol{q} \in \mathcal{G}_{{\boldsymbol{\omega}}}$ such that $||\boldsymbol{p} - \boldsymbol{q}||_2 \leq \epsilon$ and for every $\boldsymbol{q} \in \mathcal{G}_{{\boldsymbol{\omega}}}$ there is a $\boldsymbol{p} \in \mathcal{R}_{{\boldsymbol{\omega}}}$ such that $||\boldsymbol{q} - \boldsymbol{p}||_2 \leq \delta$.
\end{defi}
The cover $\mathcal{G}_{{\boldsymbol{\omega}}}$ requires computing the minimum distance between a point $\boldsymbol{q} \in \Delta_{k, \eta}$ and $\mathcal{R}_{{\boldsymbol{\omega}}}$. To facilitate this computation, we proceed in two steps: (1) deriving a necessary condition for a point to minimize the distance to a single discontinuity variety, and enumerating all points that satisfy it; and (2) showing that this can be used to find the minimum distance to a continuity set $\mathcal{R}_{\omega}$ in the restricted setting of $k=3$.
\begin{cor} \label{cor:orth}
Fix $\boldsymbol{q} \in \Delta_k$ and consider a discontinuity variety $f_\ell(\boldsymbol{p})$. Define $\boldsymbol{p}^{-1} = [p_1^{-1} \ \dots \ p_k^{-1}]^T$ and let $\boldsymbol{B} \in \mathbb{R}^{k\times k }$ be given as $\boldsymbol{B} = \boldsymbol{U}\bU^T \mathrm{diag}(c_1, \dots, c_k)$ where the columns of $\boldsymbol{U} \in \mathbb{R}^{k \times (k-1)}$ are an orthonormal basis for $\Delta_k$.
Then,
\begin{align} \label{eqn:quad}
\lambda \boldsymbol{B} \boldsymbol{p}^{-1} = \boldsymbol{q} - \boldsymbol{p}
\end{align}
for some $\lambda \in \mathbb{R}$
is a necessary condition for any $\boldsymbol{p}$ that satisfies
$$\argmin_{\boldsymbol{p} \in \Delta_k: f_{\ell}(\boldsymbol{p}) = 0} ||\boldsymbol{p}-\boldsymbol{q}||_2.$$
\begin{proof}
Recall $f_\ell(\boldsymbol{p}) = 1 - c_0 p_1^{c_1} p_2^{c_2} \cdots p_k^{c_k}$. The closest point in $\{\boldsymbol{p} \in \Delta_k: f_\ell(\boldsymbol{p})= 0 \}$ to a fixed point $\boldsymbol{q} \in \Delta_k$
must satisfy the \emph{orthogonality condition} (\cite{gubner2010ece}, Thm. 3.11), which requires that the vector realizing the shortest distance between a point and a surface be normal to the surface. Note that the normal vector of the discontinuity variety $\{\boldsymbol{p} \in \Delta_k: f_\ell(\boldsymbol{p})=\gamma \}$ is the gradient of its level set function $f_\ell(\boldsymbol{p})$ projected onto the simplex. By the orthogonality condition, if the columns of $\boldsymbol{U}$ are a basis for $\{\boldsymbol{x} \in \mathbb{R}^k : \boldsymbol{x}^T \boldsymbol{1} =0\}$, we require
\begin{align} \label{eqn:orth1}
\lambda \boldsymbol{U} \boldsymbol{U}^T \nabla f_\ell(\boldsymbol{p})= \boldsymbol{q}-\boldsymbol{p}
\end{align}
for any point that minimizes the distance to $\boldsymbol{q}$, where the $i$th element of $\nabla f_\ell(\boldsymbol{p})$ is
\begin{align}
\left[\nabla f_\ell(\boldsymbol{p})\right]_i = c_i p_i^{-1} (f_\ell(\boldsymbol{p})-1).
\end{align}
Together with (\ref{eqn:orth1}), this implies
\begin{align}
\lambda (f_{\ell}(\boldsymbol{p}) -1) \boldsymbol{U} \boldsymbol{U}^T \boldsymbol{C} \boldsymbol{p}^{-1} = \boldsymbol{q} - \boldsymbol{p},
\end{align}
where $\boldsymbol{p}^{-1} = [p_1^{-1} \ \dots \ p_k^{-1}]^T$ and $\boldsymbol{C} = \mathrm{diag}(c_1, \dots, c_k)$. Since $(f_\ell(\boldsymbol{p})-1) \in \mathbb{R}$, the scalar can be absorbed into $\lambda$, which gives the result. We provide further details in Appendix \ref{app:orth_cond}.
\end{proof}
\end{cor}
Our goal is to find \emph{all} points that satisfy the necessary condition in Cor. \ref{cor:orth} for $f_\ell(\boldsymbol{p}) = 0$. Note that (\ref{eqn:quad}) is a system of polynomial equations, and each equation involves a quadratic term of a single variable. In some settings, this can be solved explicitly using Gr\"obner bases (see \cite{cox2013ideals}, Ch.~3).
\begin{obs}
Fix $\lambda$. For $k=3$ the solutions to (\ref{eqn:quad}) can be found by a numerical procedure equivalent to finding the roots of a single variable differentiable function over an interval. See App. \ref{sec:quad_roots}.
\end{obs}
\begin{obs}
Let $\boldsymbol{p}_{j}(\lambda)$ enumerate the solutions to (\ref{eqn:quad}). Then, $f_{\ell}(\boldsymbol{p}_{j}(\lambda)) : \mathbb{R} \mapsto \mathbb{R}$ is a differentiable function. Finding solutions to $f_{\ell}(\boldsymbol{p}_{j}(\lambda)) = 0$ is equivalent to finding the roots of a single variable differentiable function. See App. \ref{sec:grad}.
\end{obs}
Equipped with the ability to compute the distance to an individual variety, we next compute the minimum distance from an arbitrary point to a \emph{continuity set}. We consider only the case $k=3$. Suppose a continuity set is touched by the discontinuity varieties $f_{i_1}, \dots, f_{i_m}$ and has vertices $\boldsymbol{v}_1, \dots, \boldsymbol{v}_l$. We can compute the distance from a point $\boldsymbol{q}$ to each of $f_{i_1}, \dots, f_{i_m},\boldsymbol{v}_1,\dots,\boldsymbol{v}_l$. Notice that the minimum-distance point on a discontinuity variety may not be feasible for that continuity set (which is readily verified); such points are excluded when computing the minimum.
To complete the process, we grid the simplex evenly so that every point of the simplex is within $\epsilon$ of a grid point. We then keep only the grid points inside the continuity set, or whose distance to the continuity set is at most $\delta$. These points form an $(\epsilon, \delta)$-cover for the continuity set. The approach is detailed in Alg. \ref{alg:findd}.
\begin{cor}
$\mathcal{G}_{{\boldsymbol{\omega}}}$ is an $(\epsilon, \delta)$-cover of $\mathcal{R}_{{\boldsymbol{\omega}}}$.
\end{cor}
\begin{proof}
Note that $\Delta_{k, \eta}$ is an $(\epsilon, 0)$-cover of $\Delta_k$ when $\eta = \left \lceil \frac{\sqrt{k}}{\epsilon} \right \rceil$ (see
Appendix \ref{app:eps_eta}).
Since $\delta \geq \epsilon$, none of the points that cover any portions of the continuity set are discarded. Every point that remains in $\mathcal{G}_{{\boldsymbol{\omega}}}$ is closer than $\delta$ to at least one point in $\mathcal{R}_{{\boldsymbol{\omega}}}$, and we conclude the result.
\end{proof}
\begin{figure}
\centering
\includegraphics[width=0.44\textwidth]{pictures/Delta-Epsilon-Points.PNG}
\caption{An example of an $(\epsilon, \delta)$-cover of $\mathcal{R}_{{\boldsymbol{\omega}}}$. Grid points are at most $\epsilon$ away from each other. Blue dots are outer points within distance $\delta$ of $\mathcal{R}_{{\boldsymbol{\omega}}}$. Pink dots are inside $\mathcal{R}_{{\boldsymbol{\omega}}}$. Figure generated by Plotly \cite{plotly}.
\label{fig:delta_epsilon_cover}}
\end{figure}
\begin{algorithm}
\caption{ \label{alg:findd} Finding an $(\epsilon, \delta)$-cover for $\mathcal{R}_{{\boldsymbol{\omega}}}$ }
\begin{algorithmic}[1]
\State{\textbf{Input}}: $\delta \geq \epsilon \geq 0$, $\widehat{\boldsymbol{p}} \in \Delta_{k, n}$, continuity set $\mathcal{R}_{{\boldsymbol{\omega}}}$ with vertices $\mathcal{V} = \{\boldsymbol{v}_1,\dots,\boldsymbol{v}_l\}$ and discontinuity varieties $\mathcal{F} = \{f_{i_1},\dots, f_{i_m}\}$ that touch $\mathcal{R}_{{\boldsymbol{\omega}}}$.
\State {initialize: set $\mathcal{G}_{{\boldsymbol{\omega}}}=\{ \}$ and
$\eta = \left \lceil \frac{\sqrt{k}}{\epsilon} \right \rceil$}
\For {$\boldsymbol{p} \in \Delta_{k,\eta}$}
\State $\mathcal{Q} = \mathcal{V} \cup \left\{ \boldsymbol{q}_j \ : \ \boldsymbol{q}_j = \argmin_{\boldsymbol{q} \in f_{i_j} \cap \mathcal{R}_{{\boldsymbol{\omega}}}} d(\boldsymbol{p},\boldsymbol{q}), \ f_{i_j} \in \mathcal{F} \right\}$
\If {$\boldsymbol{p} \in \mathcal{R}_{{\boldsymbol{\omega}}}$ or $\min_{\boldsymbol{q} \in \mathcal{Q}} d(\boldsymbol{p}, \boldsymbol{q})\leq \delta$}
\State {Append $\boldsymbol{p}$ to $\mathcal{G}_{{\boldsymbol{\omega}}}$}
\EndIf
\EndFor
\State {\textbf{Return}: $(\epsilon, \delta)$-cover $\mathcal{G}_{{\boldsymbol{\omega}}}$}
\end{algorithmic}
\end{algorithm}
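To make the gridding step of Alg. \ref{alg:findd} concrete, the sketch below (ours) enumerates $\Delta_{k,\eta}$ via the stars-and-bars bijection and applies the $(\epsilon,\delta)$ filter; the membership and distance oracles for $\mathcal{R}_{{\boldsymbol{\omega}}}$ are assumed to be built from Cor. \ref{cor:orth} and the vertex list as described above.
\begin{verbatim}
import math
from itertools import combinations

def simplex_grid(k, eta):
    """Delta_{k,eta}: all p in {0, 1/eta, ..., 1}^k summing to 1."""
    for cut in combinations(range(eta + k - 1), k - 1):
        bounds = (-1,) + cut + (eta + k - 1,)
        parts = [b - a - 1 for a, b in zip(bounds, bounds[1:])]
        yield tuple(c / eta for c in parts)

def eps_delta_cover(k, eps, delta, in_region, dist_to_region):
    """Keep grid points inside R_omega or within delta of it."""
    eta = math.ceil(math.sqrt(k) / eps)
    return [p for p in simplex_grid(k, eta)
            if in_region(p) or dist_to_region(p) <= delta]
\end{verbatim}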
\section{Applications: A/B Testing}
Suppose we have two empirical observations, $\widehat{\boldsymbol{p}}_1$ and $\widehat{\boldsymbol{p}}_2$, each with its MVCs, $\mathcal{C}^{\star}_{\alpha}(\widehat{\boldsymbol{p}}_1)$ and $\mathcal{C}^{\star}_{\alpha}(\widehat{\boldsymbol{p}}_2)$. Our aim is to determine whether the two MVCs intersect.
Given the full enumeration of the continuity sets and their corresponding $(\epsilon, \delta)$-covers, the $p$-value function can be fully specified numerically, and the corresponding level sets can be computed to the required precision. More precisely, to determine whether the intersection is empty, we consider the following approach. Recall that the splitting inequalities partition the simplex into continuity sets, over each of which the $p$-value is continuous. Because we now consider two confidence sets, we must ensure that in each continuity set the $p$-value functions of \emph{both} confidence sets are continuous. We achieve this by including the splitting inequalities for both observations and computing the corresponding continuity sets, which doubles the number of splitting inequalities.
To continue, we cover the new continuity sets, and measure the $p$-value function under $\widehat{\boldsymbol{p}}_1$ and $\widehat{\boldsymbol{p}}_2$ for each point in $\mathcal{G}_{\boldsymbol{\omega}}$. We can bound the $p$-value over the ball of radius $\epsilon$, denoted $R_\epsilon(\boldsymbol{p})$, using the Lipschitz constant $L$:
\begin{align} \label{eqn:lip1}
\left \vert \rho_{\widehat{\boldsymbol{p}}_i}(\boldsymbol{q}) - \rho_{\widehat{\boldsymbol{p}}_i}(\boldsymbol{p}) \right \vert\leq L \left \Vert\boldsymbol{q} - \boldsymbol{p}\right \Vert_2 \leq L\epsilon, \qquad \boldsymbol{q} \in R_\epsilon(\boldsymbol{p}),
\end{align}
which implies $
\rho_{\widehat{\boldsymbol{p}}_i}(\boldsymbol{p})- L\epsilon \leq \rho_{\widehat{\boldsymbol{p}}_i}(\boldsymbol{q}) \leq \rho_{\widehat{\boldsymbol{p}}_i}(\boldsymbol{p})+ L\epsilon$.
See App. \ref{sec:grad} for a bound on the Lipschitz constant $L$.
However, it is possible that $\rho_{\widehat{\boldsymbol{p}}_1}(\boldsymbol{p})+ L\epsilon > \alpha$ and $\rho_{\widehat{\boldsymbol{p}}_2}(\boldsymbol{p})+ L\epsilon > \alpha$ at a point $\boldsymbol{p}$ that lies in neither set; a neighboring point may then lie in both sets, and it remains unclear whether the intersection is empty. In this case, we decrease the value of $\epsilon$ and repeat the approach until this situation no longer occurs.
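In outline, the resulting decision procedure looks as follows (a sketch, ours; \texttt{rho1} and \texttt{rho2} evaluate the two $p$-value functions, and \texttt{cover\_pts} is the joint cover described above):
\begin{verbatim}
def intersection_status(cover_pts, rho1, rho2, alpha, L, eps):
    """Returns 'intersect', 'disjoint', or 'refine' (shrink eps)."""
    unresolved = False
    for p in cover_pts:
        r1, r2 = rho1(p), rho2(p)
        if r1 >= alpha and r2 >= alpha:
            return 'intersect'  # p itself lies in both MVCs
        if r1 + L * eps > alpha and r2 + L * eps > alpha:
            unresolved = True   # a neighbor of p may lie in both
    return 'refine' if unresolved else 'disjoint'
\end{verbatim}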
\section{Summary}
In this paper we studied the {\em minimum volume confidence sets} for the multinomial parameter. We showed that the MVCs, which are prescribed as the level sets of a discontinuous function (the exact $p$-value), can be computed to arbitrary precision by enumerating and covering the regions over which they are continuous. This allowed us to answer a basic question in A/B testing in the restricted setting of three categories. While the approach sheds light on the general setting of more than three categories, defining a covering for the continuity sets in general remains an open problem. The primary challenge lies in covering the lower dimensional faces and edges of the continuity sets.
\section{Introduction}\label{sec:intro}
This paper is mainly concerned with \emph{channel connectivity}, by which we mean the relationship
that describes which input channels are connected to which output channels in a setting with message-passing concurrency.
In the {\pic}~\cite{DBLP:journals/iandc/MilnerPW92a},
channel connectivity is syntactic identity: in the process
\[\underline{a}(x).P \parop \overline{b}\,y.Q\]
\noindent where one parallel component is waiting to receive on channel $a$ and the other is waiting to send on channel $b$, interaction is possible only if $a=b$.
Variants of the {\pic} may have more interesting channel connectivity.
The explicit fusion calculus pi-F~\cite{gardner.wischik:explicit-fusions-mfcs} extends
the {\pic} with a primitive for \emph{fusing} names;
once fused, they are treated as being for all purposes one and the same.
Channel connectivity is then given by the equivalence closure of the name fusions.
For example, if we extend the above example with the fusion $(a=b)$
\[\underline{a}(x).P \parop \overline{b}\,y.Q \parop (a=b)\]
\noindent then communication is possible.
Other examples may be found in e.g.~calculi for wireless communication~\cite{DBLP:journals/tcs/NanzH06}, where channel connectivity can be used to directly model the network's topology.
Psi-calculi~\cite{bengtson.johansson.ea:psi-calculi-long} is a family of applied process calculi,
where standard meta-theoretical results,
such as the algebraic laws and congruence properties of bisimulation,
have been established once and for all through mechanised proofs~\cite{DBLP:journals/jar/BengtsonPW16} for all members of the family.
Psi-calculi generalises e.g.~the {\pic} and the explicit fusion calculus in several ways.
In place of atomic names it allows
channels and messages to be taken from an (almost) freely chosen term language.
In place of fusions, it admits the formulas of an (almost) freely chosen logic as first-class
processes. Channel connectivity is determined by judgements of said logic, with one restriction:
the connectivity thus induced must be symmetric and transitive.
The main contribution of the present paper is a new way to define the semantics of psi-calculi that
lets us lift this restriction, without sacrificing any of the algebraic laws
and compositionality properties.
It is worth noting that this was previously believed to be impossible:
Bengtson et al.~\cite[p.~14]{bengtson.johansson.ea:psi-calculi-long} even offer counterexamples
to the effect that without symmetry and transitivity, scope extension is unsound.
However, a close reading reveals that these counterexamples apply only to their particular choice of
labelled semantics, and do not rule out the possibility that the counterexamples
could be invalidated by a rephrasing of the labelled semantics such as ours.
The price we pay for this increased generality is more complicated transition labels:
we decorate input and output labels with a \emph{provenance} that keeps track of which prefix a
transition originates from. The idea is that if I am an input label and you are an output label,
we can communicate if my subject is your provenance, and vice versa.
This is offset by other simplifications of the semantics and associated
proofs that provenances enable.
\paragraph{Contributions} This paper makes the following specific technical contributions:
\begin{itemize}
\item We define a new semantics of psi-calculi that lifts the requirement that
channel connectivity must be symmetric and transitive,
using the novel technical device of provenances.
(Section~\ref{sec:definitions})
\item We prove that strong and weak bisimulation is a congruence and satisfies the usual
algebraic laws such as scope extension.
Interestingly, provenances can be ignored for the purpose of bisimulation.
These proofs are machine-checked in Nominal Isabelle~\cite{U07:NominalTechniquesInIsabelleHOL}.
(Section~\ref{sec:bisimulation})
\item We prove, again using Nominal Isabelle, that this paper's developments
constitute a conservative extension of the original psi-calculi.
(Section~\ref{sec:validation})
\item To further validate our semantics, we define
a reduction semantics and strong barbed congruence,
and show that they agree with their labelled counterparts.
(Section~\ref{sec:validation})
\item We capture a pi-calculus with preorders
by Hirschkoff et al.~\cite{DBLP:conf/lics/HirschkoffMS13},
that was previously beyond the scope of psi-calculi because of its
non-transitive channel connectivity.
The bisimilarity we obtain turns out to coincide with that of Hirschkoff et al.
(Section~\ref{sec:prepi})
\item
We exploit non-transitive connectivity to show that
mixed choice is a derived operator of psi-calculi in a very strong sense:
its encoding is fully abstract and satisfies strong operational correspondence.
(Section~\ref{sec:choice})
\end{itemize}
\noindent
This paper is an extended version of \cite{DBLP:conf/forte/Pohjola19}.
In this version, we have extended many of the meta-theoretical results
and the associated Isabelle formalisation from strong to weak bisimulation
(Section~\ref{sec:weakbisimulation}).
We have also added a discussion of the aforementioned
counterexamples by Bengtson et al.~\cite{bengtson.johansson.ea:psi-calculi-long}
(Section~\ref{sec:counterexamples}), and a more thorough motivation of
the design decisions we have made when introducing provenances (Section~\ref{sec:design}).
Moreover, the other sections have been edited for detail and clarity.
We have opted against including the full proofs;
the interested reader is referred to the associated technical report~\cite{pohjola:newpsireport}
and Isabelle formalisation.
Isabelle proofs are available online.%
\footnote{\url{https://github.com/IlmariReissumies/newpsi}}
\section{Definitions}\label{sec:definitions}
This section introduces core definitions such as syntax and semantics.
Many definitions are shared with the original presentation of psi-calculi, so this section
also functions as a recapitulation of \cite{bengtson.johansson.ea:psi-calculi-long}. We will highlight
the places where the two differ.
For readers who desire a gentler introduction to psi-calculi than the present
paper offers, we recommend \cite{Johansson10}.
Psi-calculi is built on the theory of nominal sets~\cite{Gabbay01anew}, which allows us to reason
formally up to alpha-equivalence without committing to any particular syntax of the term
language.
We assume a countable set of \emph{names} $\nameset$ ranged over by $a,b,c,\dots,x,y,z$.
A \emph{nominal set} is a set equipped with a permutation action $\cdot$;
intuitively, if $X \in \mathbf{X}$ and $\mathbf{X}$ is a nominal set, then $(x\;y)\cdot X$, which denotes
$X$ with all occurrences of the name $x$ swapped for $y$ and vice versa, is also an element of
$\mathbf{X}$.
$\supp{X}$ (the \emph{support} of $X$) is, intuitively,
the set of names such that swapping them changes $X$.
We write $a \freshin X$ (``$a$ is fresh in $X$'') for $a \notin \supp{X}$.
We overload $\freshin$ to sequences of names: $\vec{a} \freshin X$ means that for
each $a_i$ in $\vec{a}$, $a_i \freshin X$.
Similarly, $a \freshin X,Y$ abbreviates $a \freshin X \wedge a \freshin Y$.
A nominal set $\mathbf{X}$ has \emph{finite support} if for every $X\in\mathbf{X}$,
$\supp{X}$ is finite.
A function symbol $f$ is \emph{equivariant} if $p\cdot f(x) = f(p\cdot x)$,
where $p$ is a finite sequence of name swappings $(x\;y)$;
this generalises to
$n$-ary function symbols in the obvious way.
Whenever we define inductive syntax with names, it is implicitly quotiented by permutation of
bound names, so e.g.~$(\nu x)\opa{a}{x} = (\nu y)\opa{a}{y}$ if $x,y \freshin a$.
Psi-calculi is parameterised on an arbitrary term language and a logic of environmental assertions:
\begin{defi}[Parameters]
A \emph{psi-calculus} is a 7-tuple $(\terms,\assertions,\conditions,\vdash,\otimes,\unit,\chcon)$ with
three finitely supported nominal sets:
\begin{enumerate}
\item $\terms$, the \emph{terms}, ranged over by $M,N,K,L,T$;
\item $\assertions$, the \emph{assertions}, ranged over by $\Psi$; and
\item $\conditions$, the \emph{conditions}, ranged over by $\varphi$.
\end{enumerate}
\noindent We assume each of the above is equipped with a substitution function
\subst{\_}{\_} that substitutes (sequences of) terms for names.
The remaining three parameters are equivariant function symbols written in infix:
\begin{center}
\begin{tabular}{ll}
$\vdash\; : \assertions \times \conditions \Rightarrow \kwd{bool}$ & (entailment) \\
$\otimes : \assertions \times \assertions \Rightarrow \assertions$ & (composition) \\
$\unit : \assertions$ & (unit) \\
$\chcon\;: \terms \times \terms \Rightarrow \conditions$ & (channel connectivity)
\end{tabular}
\end{center}
\end{defi}
\noindent Intuitively, $M \chcon K$ means the prefix $M$ can send a message to the prefix $K$.
The substitution functions must satisfy certain natural criteria wrt.~their treatment of names;
see \cite{bengtson.johansson.ea:psi-calculi-long} for the details.
We use $\sigma$ to range over substitutions $\subst{\ve{x}}{\ve{T}}$. A substitution
$\subst{\ve{x}}{\ve{T}}$ is \emph{well-formed} if $|\ve{x}| = |\ve{T}|$ and $\ve{x}$ are
pairwise distinct. Unless otherwise specified, we only consider well-formed substitutions.
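For orientation, a standard instantiation (essentially that of \cite{bengtson.johansson.ea:psi-calculi-long}, rephrased with $\chcon$ in place of $\sch$) recovers the {\pic}:
\[
\terms = \nameset, \qquad
\assertions = \{\unit\}, \qquad
\conditions = \{a \chcon b \,:\, a,b \in \nameset\}, \qquad
\unit \vdash a \chcon b \text{ iff } a = b.
\]
Here connectivity is trivially symmetric and transitive; the instances of interest in this paper are precisely those where it is not.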
\begin{defi}[Static equivalence]
Two assertions $\Psi,\Psi'$ are \emph{statically equivalent}, written
$\Psi \simeq \Psi'$, if
$\forall \varphi.\; \Psi \vdash \varphi \; \Leftrightarrow \; \Psi' \vdash \varphi$.
\end{defi}
\begin{defi}[Valid parameters]
A psi-calculus is \emph{valid} if $(\assertions/{\simeq},\otimes,\unit)$ form an abelian monoid.
\end{defi}
Note that since $\otimes$ is well-defined on the $\simeq$-equivalence classes, static equivalence is preserved by composition.
Henceforth we will only consider valid psi-calculi.
The original presentation of psi-calculi had $\sch$ for channel equivalence in place of
our $\chcon$, and required that channel equivalence be symmetric
(formally, $\Psi \vdash M \sch K$ iff $\Psi \vdash K \sch M$)
and transitive.
\begin{defi}[Process syntax]
The \emph{processes} (or \emph{agents}) $\processes$, ranged over by $P,Q,R$, are inductively defined by the grammar
\begin{center}
\begin{tabular}{rcll}
$P$ & := & $\nil$ & (nil) \\
& & $\pass{\Psi}$ & (assertion) \\
& & $\outprefix{M}{N}.P$ & (output) \\
& & $\inprefix{M}{\vec{x}}{N}.P$ & (input) \\
& & ${\caseonly\;{\ci{\ve{\varphi}}{\ve{P}}}}$ & (case) \\
& & $P \parop Q$ & (parallel composition) \\
& & $(\nu x)P$ & (restriction) \\
& & $!P$ & (replication)
\end{tabular}
\end{center}
\noindent A process is \emph{assertion-guarded}
if all assertions occur underneath an input or output prefix.
We require that in $!P$, $P$ is guarded;
that in ${\caseonly\;{\ci{\ve{\varphi}}{\ve{P}}}}$, all $\ve{P}$ are guarded;
and that in $\inprefix{M}{\vec{x}}{N}\sdot P$ it holds that $\vec{x} \subseteq \supp{N}$.
We will use $P_G,Q_G$ to range over assertion-guarded processes.
A process $P$ is \emph{prefix-guarded} if its outermost operator is an input
or output prefix.
\end{defi}
Restriction, replication and parallel composition are standard.
We lift restriction to sequences of names by letting $(\nu\ve{a})P$
abbreviate $(\nu a_0)(\nu a_1)\dots(\nu a_i)P$; in
particular, $(\nu \epsilon)P = P$.
$\outprefix{M}{N}.P$ is a process ready to send
the message $N$ on channel $M$, and then continue as $P$.
Similarly, $\inprefix{M}{\vec{x}}{N}.P$ is a process ready to receive
a message on channel $M$ that matches the pattern $(\lambda\vec{x})N$.
We sometimes write $\outprefix{M}{N}$ to stand for $\outprefix{M}{N}.0$,
and similarly for input. We elide the object $N$ when it is unimportant.
The process $\pass{\Psi}$ asserts a fact $\Psi$ about the environment.
Intuitively, $\pass{\Psi} \parop P$ means that $P$ executes in an environment
where all conditions entailed by $\Psi$ hold. $P$ may itself
contain assertions that add or retract conditions.
Environments can evolve dynamically: as a process reduces,
assertions may become unguarded and thus added to the
environment.
${\caseonly\;{\ci{\ve{\varphi}}{\ve{P}}}}$ is a process that may act as
any $P_i$ whose guard $\varphi_i$ is entailed by the environment.
For a discussion of why replication and case must be assertion-guarded we refer to
\cite{bengtson.johansson.ea:psi-calculi-long,DBLP:conf/lics/JohanssonBPV10}.
We use $\casesep$ to denote composition of $\caseonly$ statements,
so e.g.~%
$\caseonly\;{\ci{\varphi}{P}}\casesep{\ci{\varphi'}{Q}}$
desugars to ${\caseonly\;\ci{\varphi,\varphi'}{P,Q}}$.
The assertion environment of a process is described by its \emph{frame}:
\begin{defi}[Frames]\label{def:frame}
The \emph{frame} of $P$, written $\frameof{P} = \framepair{\frnames{P}}{\frass{P}}$
where $\frnames{P}$ bind into $\frass{P}$, is defined as
\[
\begin{array}{rlll}
\frameof{\pass{\Psi}} & = & \framepair{\epsilon}{\Psi} &
\\
\frameof{P \parop Q} & = & \frameof{P} \otimes \frameof{Q} &
\\
\frameof{(\nu x)P} & = & (\nu x)\frameof{P} &
\\
\frameof{P} & = & \unit & \mbox{otherwise}
\end{array}
\]
\noindent where name-binding and composition of frames is defined as
$(\nu x)\framepair{\frnames{P}}{\frass{P}} = \framepair{x,\frnames{P}}{\frass{P}}$,
and, if $\frnames{P} \freshin \frnames{Q},\frass{Q}$ and $\frnames{Q} \freshin \frass{P}$,
\[\framepair{\frnames{P}}{\frass{P}} \otimes \framepair{\frnames{Q}}{\frass{Q}} = \framepair{\frnames{P},\frnames{Q}}{(\frass{P} \otimes \frass{Q})}\]
\noindent where $\nu$ binds stronger than $\otimes$.
We overload $\Psi$ to denote the frame $(\nu \epsilon)\Psi$.
\end{defi}
We extend entailment to frames as follows: $\frameof{P} \vdash \varphi$ holds if,
for some $\frnames{P},\frass{P}$ such that $\frameof{P} = \framepair{\frnames{P}}{\frass{P}}$
and $\frnames{P} \freshin \varphi$,
$\frass{P} \vdash \varphi$.
The freshness side-condition $\frnames{P} \freshin \varphi$ is important because it allows
assertions to be used for representing local state. By default, the assertion environment is
effectively a form of global non-monotonic state,
which is not always appropriate for modelling distributed processes.
With $\nu$-binding we can recover locality by writing e.g.~$(\nu x)(\pass{x=M} \parop P)$
for a process $P$ with a local variable $x$.
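For instance, unfolding Definition~\ref{def:frame} on a small process (our example) gives
\[
\frameof{(\nu x)(\pass{\Psi_1} \parop \outprefix{M}{N}.\pass{\Psi_2})}
\;=\; (\nu x)(\framepair{\epsilon}{\Psi_1} \otimes \unit)
\;=\; \framepair{x}{\Psi_1 \otimes \unit}
\]
where $\Psi_1 \otimes \unit \simeq \Psi_1$ by the monoid laws. The prefix-guarded assertion $\Psi_2$ contributes nothing to the frame: it enters the environment only after the output fires.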
The notion of \emph{provenance} is the main novelty of our semantics.
It is the key technical device used to make our semantics compositional:
\begin{defi}[Provenances]
The \emph{provenances} $\provenances$, ranged over by $\pi$, are either $\bot$ or of form $(\nu\ve{x};\ve{y})M$, where $M$ is a term, and $\ve{x},\ve{y}$ bind into $M$.
\end{defi}
We write $M$ for $(\nu\epsilon;\epsilon)M$. When $\ve{x},\ve{y} \freshin \ve{x'},\ve{y'}$ and $\ve{x} \freshin \ve{y}$, we interpret the expression $(\nu\ve{x};\ve{y})(\nu\ve{x'};\ve{y'})M$ as $(\nu\ve{x}\,\ve{x'};\ve{y}\,\ve{y'})M$. Furthermore, we equate $(\nu\ve{x};\ve{y})\bot$ and $\bot$.
Let $\pi\downarrow$ denote the result of moving all binders from the outermost binding sequence to the innermost; that is,
$(\nu\ve{x};\ve{y})M{\downarrow} = (\nu\epsilon;\ve{x},\ve{y})M$.
Similarly, $\pi\downarrow\ve{z}$ denotes the result of inserting $\ve{z}$ at the end of the outermost binding sequence:
formally, $(\nu\ve{x};\ve{y})M\downarrow\ve{z} = (\nu\ve{x},\ve{z};\ve{y})M$.
Intuitively, a provenance describes the origin of an input or output transition.
For example, if an output transition is annotated with $(\nu\ve{x};\ve{y})M$, the sender
is an output prefix with subject $M$ that occurs underneath the $\nu$-binders $\ve{x},\ve{y}$.
For technical reasons, these binders are partitioned into two distinct sequences.
The intention is that $\ve{x}$ are the frame binders,
while $\ve{y}$ contains binders that occur underneath case and replication;
these are not part of the frame, but may nonetheless bind into $M$.
We prefer to keep them separate because the $\ve{x}$ binders
are used for deriving $\vdash$ judgements,
but $\ve{y}$ are not (cf.~Definition~\ref{def:frame}).
\begin{defi}[Labels]
The \emph{labels} $\labels$, ranged over by $\alpha,\beta$, are:
\begin{center}
\begin{tabular}{ll}
$\bout{\ve{x}}{M}{N}$ & (output) \\
$\inlabel{M}{N}$ & (input) \\
$\tau$ & (silent)
\end{tabular}
\end{center}
\noindent The bound names of $\alpha$, written $\bn{\alpha}$, is $\ve{x}$ if $\alpha = \bout{\ve{x}}{M}{N}$ and $\epsilon$ otherwise.
The subject of $\alpha$, written $\subj{\alpha}$, is $M$ if $\alpha = \bout{\ve{x}}{M}{N}$ or $\alpha = \inlabel{M}{N}$.
Analogously, the object of $\alpha$, written $\obj{\alpha}$,
is $N$ if $\alpha = \bout{\ve{x}}{M}{N}$ or $\alpha = \inlabel{M}{N}$.
\end{defi}
While the provenance describes the origin of a transition, a label describes how it can
interact.
For example, a transition labelled with $\inlabel{M}{N}$ indicates readiness to receive a
message $N$ from an output prefix with subject $M$.
\begin{defi}[Operational semantics]
The transition relation $\mathord{\apitransarrow{}}\subseteq \assertions \times \processes \times \labels \times \provenances \times \processes$ is inductively defined by the rules in Table~\ref{table:full-struct-free-labeled-operational-semantics}.
We write $\framedprovtrans{\Psi}{P}{\alpha}{\pi}{P'}$ for $(\Psi,P,\alpha,\pi,P')\in\mathord{\apitransarrow{}}$. In transitions, $\bn{\alpha}$ binds into $\obj{\alpha}$ and $P'$.
\end{defi}
\begin{table*}[tb]
\begin{mathpar}
\inferrule*[Left=\textsc{In}]
{\Psi \vdash K \chcon M}
{\framedprovtrans{\Psi}{\inprefix{M}{\ve{y}}{N} \sdot
P}{\inlabel{K}{N}\subst{\ve{y}}{\ve{L}}}{M}{P\subst{\ve{y}}{\ve{L}}}}
\inferrule*[left=\textsc{Out}]
{\Psi \vdash M \chcon K}
{\framedprovtrans{\Psi}{\outprefix{M}{N} \sdot P}{\outlabel{K}{N}}{M}{P}}
\inferrule*[left=\textsc{ParL}, right={$\inferrule{}{\bn{\alpha} \freshin Q}
$}]
{\framedprovtrans{\frass{Q} \ftimes \Psi}{P} {\alpha}{\pi}{P'}}
{\framedprovtrans{\Psi}{P \pll Q}{\alpha}{\pi\downarrow\frnames{Q}}{P' \pll Q}}
\inferrule*[left=\textsc{ParR}, right={$\inferrule{}{\bn{\alpha} \freshin P}
$}]
{\framedprovtrans{\frass{P} \ftimes \Psi}{Q} {\alpha}{\pi}{Q'}}
{\framedprovtrans{\Psi}{P \pll Q}{\alpha}{(\nu\frnames{P})\pi}{P \pll Q'}}
\inferrule*[left=\textsc{Com}, right={$\ve{a} \freshin Q$}]
{\framedprovtrans{\frass{Q} \ftimes \Psi}{P}{\outlabel{M}{(\nu \ve{a})N}}{(\nu\frnames{P};\ve{x})K}{P'} \\
\framedprovtrans{\frass{P} \ftimes \Psi}{Q}{\inlabel{K}{N}}{(\nu\frnames{Q};\ve{y})M}{Q'}
}
{\framedprovtrans{\Psi}{P \pll Q}{\tau}{\bot}{(\nu \ve{a})(P' \pll Q')}}
\inferrule*[left={\textsc{Case}}]
{\framedprovtrans{\Psi}{P_i}{\alpha}{\pi}{P'} \\ \Psi \vdash \varphi_i}
{\framedprovtrans{\Psi}{\case{\ci{\ve{\varphi}}{\ve{P}}}}{\alpha}{\pi\downarrow}{P'}}
\inferrule*[left=\textsc{Scope}, right={$b \freshin \alpha,\Psi$}]
{\framedprovtrans{\Psi}{P}{\alpha}{\pi}{P'}}
{\framedprovtrans{\Psi}{(\nu b)P}{\alpha}{(\nu b)\pi}{(\nu b)P'}}
\inferrule*[left=\textsc{Open}, right={$\inferrule{}{b \freshin \ve{a},\Psi,M\\\\b \in \names{N}}$}]
{\framedprovtrans{\Psi}{P}{\outlabel{M}{(\nu \ve{a})N}}{\pi}{P'}}
{\framedprovtrans{\Psi}{(\nu b)P}{\outlabel{M}{(\nu \ve{a} \cup \{b\})N}}{(\nu b)\pi}{P'}}
\inferrule*[left=\textsc{Rep}]
{\framedprovtrans{\Psi}{P \pll !P}{\alpha}{\pi}{P'}}
{\framedprovtrans{\Psi}{!P}{\alpha}{\pi\downarrow}{P'}}
\end{mathpar}
\caption{\rm Structured operational semantics. A symmetric version of \textsc{Com} is elided. In the rule $\textsc{Com}$ we assume that
$\frameof{P} = \framepair{\frnames{P}}{\Psi_P}$ and
$\frameof{Q} = \framepair{\frnames{Q}}{\Psi_Q}$ where
$\frnames{P}$ is fresh for $\Psi$ and $Q$,
$\ve{x}$ is fresh for $\Psi, \frass{Q}, P$, and
$\frnames{Q},\ve{y}$ are similarly fresh.
In rule
\textsc{ParL} we assume that $\frameof{Q} = \framepair{\frnames{Q}}{\Psi_Q}$
where $\frnames{Q}$ is fresh for
$\Psi, P, \pi$ and $\alpha$.
\textsc{ParR} has the same freshness conditions but with the roles of $P,Q$ swapped.
In $\textsc{Open}$ the expression $\tilde{a} \cup \{b\}$ means the sequence
$\tilde{a}$ with $b$ inserted anywhere.
}
\label{table:full-struct-free-labeled-operational-semantics}
\end{table*}
Note that to avoid clutter, the freshness conditions for some of the rules are stated in the caption of Table~\ref{table:full-struct-free-labeled-operational-semantics}.
The operational semantics differs from \cite{bengtson.johansson.ea:psi-calculi-long}
mainly by the inclusion of provenances: anything underneath the transition arrows is novel.
The \textsc{Out} rule states that in an environment where $M$ is connected to $K$,
the prefix $\outprefix{M}{N}$ may send a message $N$ from $M$ to $K$.
The \textsc{In} rule is dual to \textsc{Out}, but also features pattern-matching.
If the message is an instance of the pattern, as witnessed by a substitution,
that substitution is applied to the continuation $P$.
In the \textsc{Com} rule, we see how provenances are used to determine when two
processes can interact.
Specifically, a communication between $P$ and $Q$ can be derived if $P$ can send a message to $M$ using prefix $K$,
and if $Q$ can receive a message from $K$ using prefix $M$.
Because names occurring in $M$ and $K$ may be local to $P$ and $Q$ respectively,
we must be careful not to conflate the local names of one with the other;
this is why the provenance records all binding names that occur above $M,K$ in the process syntax.
Note that even though we identify frames and provenances up to alpha,
the \textsc{Com} rule uses syntactically identical binding sequences $\ve{b}_P,\ve{b}_Q$ in two roles:
as frame binders $\frameof{P} = \framepair{\frnames{P}}{\frass{P}}$,
and as the outermost provenance binders.
By thus insisting that these binding sequences are chosen to coincide,
we ensure that the $K$ on $Q$'s label
really is the same as the $K$ in $P$'s provenance.
It is instructive to compare our \textsc{Com} rule with the original:
\begin{mathpar}
\inferrule*[left=\textsc{Com-Old}, right={$\inferrule{}{\ve{a} \freshin Q }$}]
{\framedtrans{\frass{Q} \ftimes \Psi}{P}{\bout{\ve{a}}{M}{N}}{P'} \\
\framedtrans{\frass{P} \ftimes \Psi}{Q}{\inlabel{K}{N}}{Q'} \\
\Psi \ftimes \frass{P} \ftimes \frass{Q} \vdash M \sch K
}
{\framedtrans{\Psi}{P \pll Q}{\tau}{(\nu \ve{a})(P' \pll Q')}}
\end{mathpar}
\noindent where $\frameof{P} =\framepair{\frnames{P}}{\frass{P}}$ and
$\frameof{Q} = \framepair{\frnames{Q}}{\frass{Q}}$ and $\frnames{P} \freshin \Psi, \frnames{Q}, Q, M, P$ and $\frnames{Q} \freshin \Psi, \frnames{P}, P, K, Q$.
Here we have no way of knowing if $M$ and $K$ are able
to synchronise other than making a channel equivalence judgement.
Hence any derivation involving \textsc{Com-Old} makes three channel equivalence judgements:
once each in \textsc{In}, \textsc{Out} and \textsc{Com-Old}. With \textsc{Com} we only make one ---
or more accurately, we make the exact same judgement twice, in \textsc{In} resp.~\textsc{Out}.
Eliminating the redundant judgements is crucial: the reason \textsc{Com-Old} needs associativity
and commutativity is to stitch these three judgements together, particularly
when one end of a communication is swapped for a bisimilar process
that allows the same interaction via different prefixes.
Note also that \textsc{Com} has fewer freshness side-conditions.
A particularly unintuitive aspect of \textsc{Com-Old} is that it
requires $\frnames{P} \freshin M$ and $\frnames{Q} \freshin K$, but not
$\frnames{P} \freshin K$ and $\frnames{Q} \freshin M$: we would expect that all bound
names can be chosen to be distinct from all free names, but adding the missing
freshness conditions makes scope extension unsound~\cite[pp.~56-57]{Johansson10}.
With \textsc{Com}, it becomes clear why:
because $\frnames{Q}$ binds into $M$.
All the other rules can fire independently of what the provenance of the premise is.
They manipulate the provenance, but only for bookkeeping purposes:
in order for the \textsc{Com} rule to be sound,
we maintain the invariant that if $\framedprovtrans{\Psi}{P}{\alpha}{\pi}{P'}$,
the outer binders of $\pi$ are precisely the binders of $\frameof{P}$.
Otherwise, the rules are exactly the same as in the original psi-calculi.
The reader may notice a curious asymmetry between the treatment of provenance binders
in the \textsc{ParL} and \textsc{ParR} rules.
This is to ensure that the order of the provenance binders coincides with the order of the frame
binders, and in the frame $\frameof{P \parop Q}$, the binders of $P$ occur syntactically
outside the binders of $Q$ (cf.~Definition~\ref{def:frame}).
\begin{exa}\label{exa:ether}
To illustrate how subjects and provenances interact,
we consider a psi-calculus where terms are names, assertions are
(finite) sets of names, composition is union, and channel connectivity
is defined as follows:
\[
\Psi \vdash x \chcon y \quad \mbox{iff} \quad x,y \in \Psi
\]
The intuition here is that there exists a single, shared communication
medium through which all communication happens. Processes are
allowed to declare aliases for this shared medium by
adding them to the assertion environment.
Consider the following processes, where $\pass{x}$ abbreviates $\pass{\{x\}}$
and $x \neq y$:
\begin{mathpar}
P = (\nu x)(\overline{x} \parop \pass{x}) \and
Q = (\nu y)(\underline{y} \parop \pass{y})
\end{mathpar}
\noindent Here $P$ and $Q$ are sending and receiving, respectively, via locally scoped
aliases of the shared communication medium. This example has been used
previously~\cite{bengtson.johansson.ea:psi-calculi-long},
to illustrate why the original psi-calculi needs channel equivalence
in all three of the \textsc{In}, \textsc{Out} and \textsc{Com} rules.
Up to scope extension $P \parop Q$ is equivalent to
\[
(\nu x,y)(\overline{x} \parop \pass{x} \parop \underline{y} \parop \pass{y})
\]
in which a communication between $x$ and $y$ is clearly possible,
because $x$ and $y$ are connected in the environment $\{x,y\}$.
Hence a communication must also be possible in $P \parop Q$. But
the two processes share no free names that can be used as communication subjects;
$P$ cannot do an output action with subject $x$ because $x$ is bound, and
similarly, $Q$ cannot do an input with subject $y$.
The only available option is for each of $P$ and $Q$ to derive transitions
labelled with the other's prefix:
\begin{mathpar}
\and
\inferrule*[Left=\textsc{Scope}]
{\inferrule*[Left=\textsc{Par-L}]
{\inferrule*[Left=\textsc{Out}]
{\{x,y\} \vdash x \chcon y}
{\framedprovtrans{\{x,y\}}{\overline{x}}{\overline{y}}{x}{0}}
}
{\framedprovtrans{\{y\}}{\overline{x} \parop \pass{x}}{\overline{y}}{x}{0 \parop \pass{x}}}
}
{\framedprovtrans{\{y\}}{P}{\overline{y}}{(\nu x)x}{(\nu x)(0 \parop \pass{x})}}
\and
\inferrule*[Left=\textsc{Scope}]
{\inferrule*[Left=\textsc{Par-L}]
{\inferrule*[Left=\textsc{In}]
{\{x,y\} \vdash x \chcon y}
{\framedprovtrans{\{x,y\}}{\underline{y}}{\underline{x}}{y}{0}}
}
{\framedprovtrans{\{x\}}{\underline{y} \parop \pass{y}}{\underline{x}}{y}{0 \parop \pass{y}}}
}
{\framedprovtrans{\{x\}}{Q}{\underline{x}}{(\nu y)y}{(\nu y)(0 \parop \pass{y})}}
\end{mathpar}
In the original psi-calculi---where the exact same input and output transitions can be derived, but without the provenance annotations---it is clear that without the extra channel equivalence check in the \textsc{Com-Old} rule, we could not derive a communication between $P$ and $Q$.
With our provenance semantics the \textsc{Com} rule applies immediately.
Note that we have $\frameof{P} = (\nu x)\{x\}$ and $\frameof{Q} = (\nu y)\{y\}$,
and that $P$'s transition matches $Q$'s provenance and vice versa:
\begin{mathpar}
\inferrule*[Left=\textsc{Com}]
{\framedprovtrans{\{y\}}{P}{\overline{y}}{(\nu x)x}{(\nu x)(0 \parop \pass{x})} \\
\framedprovtrans{\{x\}}{Q}{\underline{x}}{(\nu y)y}{(\nu y)(0 \parop \pass{y})}
}
{\framedprovtrans{\{\}}{P \parop Q}{\tau}{\bot}{(\nu x)(0 \parop \pass{x}) \parop (\nu y)(0 \parop \pass{y})}}
\end{mathpar}
\end{exa}
\begin{exa}
This example is intended to illustrate how and why we maintain the invariant that frame
and provenance binders coincide, and why it matters that they coincide in the
\textsc{Com} rule.
To this end, we will consider the process $P \parop Q$,
where $P$ and $Q$ are defined as follows
\begin{mathpar}
P = (\nu x)((\nu y)\pass{\frass{P}} \parop !(\nu z)\outprefix{x}{z})
\and
Q = (\nu a b)(\inprefix{a}{c}{c}.R \parop \inprefix{b}{c}{c}.S \parop \pass{\frass{Q}})
\end{mathpar}
and where the composition $\frass{P} \otimes \frass{Q}$ entails the connectivity judgements
$x \chcon a$ and $y \chcon b$, but not $x \chcon b$ or $y \chcon a$.
We also assume $x,y \freshin \frass{Q}$ and $a,b \freshin \frass{P}$.
Concretely, this can be realised by e.g.~extending the setup from Example~\ref{exa:ether}
to use a pair of sets instead of a single set, and letting connectivity be membership
in the same set.
Let us focus on how we can derive a communication between the subjects $x$ and $a$.
We have that $\frameof{P} = (\nu xy)\frass{P}$ and $\frameof{Q} = (\nu ab)\frass{Q}$.
In the environment $\frass{P}$, we have the following derivation of an input transition from $Q$:
\begin{mathpar}
\inferrule*[Left=\textsc{Scope} $\times 2$]{
\inferrule*[Left=\textsc{Par-L}]
{\inferrule*[Left=\textsc{In}]
{\frass{P} \otimes \frass{Q} \vdash x \chcon a}
{
\framedprovtrans%
{\frass{P} \otimes \frass{Q}}
{\inprefix{a}{c}{c}.R}
{\inlabel{x}{z}}
{a}
{R\subst{c}{z}}
}
}
{\framedprovtrans%
{\frass{P}}%
{\inprefix{a}{c}{c}.R \parop \inprefix{b}{c}{c}.S \parop \pass{\frass{Q}}}%
{\inlabel{x}{z}}%
{a}%
{R\subst{c}{z} \parop \inprefix{b}{c}{c}.S \parop \pass{\frass{Q}}}%
}
}
{\framedprovtrans%
{\frass{P}}%
{Q}%
{\inlabel{x}{z}}%
{(\nu a,b;\epsilon)a}%
{(\nu ab)(R\subst{c}{z} \parop \inprefix{b}{c}{c}.S \parop \pass{\frass{Q}})}%
}
\end{mathpar}
The corresponding output transition from $P$ is derived as follows:
\begin{mathpar}
\inferrule*[Left={\textsc{Scope}}]
{\inferrule*[Left=\textsc{Par-R}]
{
\inferrule*[Left=\textsc{Rep}]
{
\inferrule*[Left=\textsc{Par-L}]
{
\inferrule*[Left=\textsc{Open}]
{
\inferrule*[Left=\textsc{Out}]
{
\frass{P} \otimes \frass{Q} \vdash x \chcon a
}
{\framedprovtrans
{\frass{P} \otimes \frass{Q}}
{\outprefix{x}{z}}
{\outlabel{a}{z}}
{x}
{0}
}
}
{\framedprovtrans
{\frass{P} \otimes \frass{Q}}
{(\nu z)\outprefix{x}{z}}
{\outlabel{a}{(\nu z)z}}
{(\nu z;\epsilon)x}
{0}
}
}
{\framedprovtrans
{\frass{P} \otimes \frass{Q}}
{(\nu z)\outprefix{x}{z} \parop !(\nu z)\outprefix{x}{z}}
{\outlabel{a}{(\nu z)z}}
{(\nu z;\epsilon)x}
{0 \parop !(\nu z)\outprefix{x}{z}}
}
}
{\framedprovtrans
{\frass{P} \otimes \frass{Q}}
{!(\nu z)\outprefix{x}{z}}
{\outlabel{a}{(\nu z)z}}
{(\nu \epsilon;z)x}
{0 \parop !(\nu z)\outprefix{x}{z}}
}
}
{\framedprovtrans
{\frass{Q}}
{(\nu y)\pass{\frass{P}} \parop !(\nu z)\outprefix{x}{z}}
{\outlabel{a}{(\nu z)z}}
{(\nu y;z)x}
{(\nu y)\pass{\frass{P}} \parop 0 \parop !(\nu z)\outprefix{x}{z}}
}
}
{\framedprovtrans
{\frass{Q}}
{P}
{\outlabel{a}{(\nu z)z}
}
{(\nu x,y;z)x}
{(\nu x)((\nu y)\pass{\frass{P}} \parop 0 \parop !(\nu z)\outprefix{x}{z})}
}
\end{mathpar}
In the derivation above, notice how the provenance evolves throughout the derivation to
maintain the correspondence between the outer provenance binders and the frame binders.
Two rule applications are particularly noteworthy.
First, the \textsc{Rep} rule pushes $z$ from the outer binders to the inner binders,
because binders underneath the replication operator are not considered part of the
frame (cf.~Definition~\ref{def:frame}).
Second, the \textsc{Par-R} rule adds $y$, the frame binder of the leftmost parallel
component, to the outer binders.
Because both derivations have matching provenances and subjects,
and the frame binders and provenance binders used are the same,
the \textsc{Com} rule allows a derivation as follows:
\begin{mathpar}
\inferrule*[left=\textsc{Com}]
{
\framedprovtrans
{\frass{Q}}
{P}
{\outlabel{a}{(\nu z)z}
}
{(\nu x,y;z)x}
{\dots}
\\
{\framedprovtrans%
{\frass{P}}%
{Q}%
{\inlabel{x}{z}}%
{(\nu a,b;\epsilon)a}%
{\dots}%
}
}
{\framedprovtrans
{\unit}
{P \pll Q}
{\tau}
{\bot}
{(\nu z)
(\dots
\parop
\dots
)}
}
\end{mathpar}
In order for this rule to be sound, it is important that the frame binders
of $\frameof{P}$, and the provenance binders of the transition from $P$,
have the same ordering.
To see this, suppose we have a version of the \textsc{Com} rule which
allows transitions to be derived when frame and provenance binders are
equal up to reordering.
Call this alternative rule \textsc{Com'}.
We will now argue that \textsc{Com'} is unsound, because we
lose the ability to distinguish synchronisations
between $x$ and $a$ from synchronisations between $x$ and $b$.
First, note that since we identify frames up to alpha, we have
$\frameof{P} = (\nu yx)((x\;y)\cdot \frass{P})$.
By equivariance of $\vdash$ and $\chcon$ we have
\begin{mathpar}
((x\;y)\cdot \frass{P}) \otimes \frass{Q} \vdash x \chcon b
\and
((x\;y)\cdot \frass{P}) \otimes \frass{Q} \vdash y \chcon a
\end{mathpar}
In this permuted frame, we can derive an input where $b$ receives from $x$:
\begin{mathpar}
\framedprovtrans%
{(x\;y)\cdot \frass{P}}%
{Q}%
{\inlabel{x}{z}}%
{(\nu a,b;\epsilon)b}%
{(\nu ab)(\inprefix{a}{c}{c}.R \parop S\subst{c}{z} \parop \pass{\frass{Q}})}
\end{mathpar}
Since we identify provenances up to alpha, we also have
\begin{mathpar}
\framedprovtrans%
{(x\;y)\cdot \frass{P}}%
{Q}%
{\inlabel{x}{z}}%
{(\nu b,a;\epsilon)a}%
{(\nu ab)(\inprefix{a}{c}{c}.R \parop S\subst{c}{z} \parop \pass{\frass{Q}})}
\end{mathpar}
Using this transition, and the same derivation of a transition from $P$ as above, we can now apply \textsc{Com'} to derive a synchronisation between $x$ and $b$, despite the fact that $x$ and $b$ are not connected:
\begin{mathpar}
\inferrule*[left=\textsc{Com'}]
{
\framedprovtrans
{\frass{Q}}
{P}
{\outlabel{a}{(\nu z)z}
}
{(\nu x,y;z)x}
{\dots}
\\
{\framedprovtrans%
{(x\;y)\cdot \frass{P}}%
{Q}%
{\inlabel{x}{z}}%
{(\nu b,a;\epsilon)a}%
{\dots}
}
}
{\framedprovtrans
{\unit}
{P \pll Q}
{\tau}
{\bot}
{(\nu z)
(\dots
\parop
\dots
)}
}
\end{mathpar}
If we push all binders in $P \parop Q$ to the top level,
no such derivation is possible. Thus scope extension fails to hold
with \textsc{Com'}.
With \textsc{Com}, the existence of this alternative transition from $Q$
is unproblematic: it cannot synchronise with any transition from $P$ unless
that transition too uses the permuted frame.
With the same counterexample,
we can also see why it is important that the provenance retains all of the
frame binders, even the vacuous ones: the provenances $(\nu x)x$ and $(\nu y)y$
are equal (being alpha-equivalent), so a provenance stripped of its vacuous binders
would carry no information about which prefix the transition originates from.
\end{exa}
\section{Meta-theory}\label{sec:metatheory}
In this section, we will derive the standard algebraic and congruence laws of strong
and weak bisimulation,
develop an alternative formulation of strong bisimulation in terms of a reduction relation and
barbed congruence, and show that our extension of psi-calculi is conservative.
\subsection{Strong bisimulation}\label{sec:bisimulation}
We write $\framedtrans{\Psi}{P}{\alpha}{P'}$ as shorthand for $\exists \pi.\;\framedprovtrans{\Psi}{P}{\alpha}{\pi}{P'}$. Bisimulation is then defined exactly as in the original psi-calculi:
\begin{defi}[Strong bisimulation]\label{def:strongbisim} A symmetric relation $\mathcal{R} \subseteq \assertions \times \processes \times \processes$ is a \emph{strong bisimulation} iff for every $(\Psi,P,Q) \in \mathcal{R}$
\begin{enumerate}
\item $\Psi \otimes \frameof{P} \;\simeq\; \Psi \otimes \frameof{Q}$ (static equivalence)
\item $\forall \Psi'. (\Psi\otimes\Psi',P,Q) \in \mathcal{R}$ (extension of arbitrary assertion)
\item If $\framedtrans{\Psi}{P}{\alpha}{P'}$ and $\bn{\alpha} \freshin \Psi, Q$, then there exists $Q'$ such that $\framedtrans{\Psi}{Q}{\alpha}{Q'}$ and $(\Psi,P',Q') \in \mathcal{R}$ (simulation)
\end{enumerate}
We let \emph{bisimilarity} $\bisim$ be the largest bisimulation. We write $\trisimsub{\Psi}{P}{Q}$ to mean $(\Psi,P,Q) \in\;\bisim$, and $P \bisim Q$ for $\trisimsub{\one}{P}{Q}$.
\end{defi}
Clause 3 is the same as for pi-calculus bisimulation.
Clause 1 requires that two bisimilar processes expose statically equivalent
assertion environments.
Clause 2 states that if two processes are bisimilar in an environment, they must be bisimilar
in every extension of that environment. Without this clause, bisimulation is not
preserved by parallel composition.
This definition might raise some red flags for the experienced concurrency theorist.
We allow the matching transition from $Q$ to have any provenance,
irrespective of what $P$'s provenance is.
Hence the \textsc{Com} rule uses information that is ignored for the purposes of bisimulation,
which in most cases would result in a bisimilarity that is not preserved by the parallel operator.
Before showing that bisimilarity is nonetheless compositional,
we will argue that bisimilarity would be too strong if Clause 3 required transitions with
matching provenances.
Consider two distinct terms $M,N$ that are connected to the same channels;
that is, for all $\Psi,K$ we have $\Psi \vdash M \chcon K$ iff $\Psi \vdash N \chcon K$.
We would expect $\overline{M}.0$ and $\overline{N}.0$ to be bisimilar because
they offer the same interaction possibilities.
With our definition, they are.
But if bisimulation cared about provenance they would be distinguished, because
transitions originating from $\overline{M}.0$ will have provenance $M$
while those from $\overline{N}.0$ will have $N$.
The key intuition is that what matters is not which provenance a transition has, but
which channels the provenance is connected to.
The latter is preserved by Clause 3,
as this key technical lemma hints at:
\begin{lem}[Find connected provenance]\label{lemma:provchaneq}
\begin{enumerate}
\item If $\framedprovtrans{\Psi}{P}{\inlabel{M}{N}}{\pi}{P'}$ and $C$ is a finitely supported nominal set, then there exist $\frnames{P},\frass{P},\ve{x},K$ such that $\frameof{P} = \framepair{\frnames{P}}{\frass{P}}$ and $\pi = (\nu\frnames{P};\ve{x})K$ and $\frnames{P} \freshin \Psi,P,M,N,P',C,\ve{x}$ and $\ve{x} \freshin \Psi,P,N,P',C$ and $\Psi \otimes \frass{P} \vdash M \chcon K$.
\item A similar property for output transitions (elided).
\end{enumerate}
\end{lem}
\begin{proof}
Formally proven in Isabelle, by a routine induction.
\end{proof}
\noindent In words,
the provenance of a transition is always connected to its subject,
and the frame binders can always be chosen sufficiently fresh for any context.
This simplifies the proof that bisimilarity is preserved by parallel:
in the original psi-calculi, one of the more challenging aspects of this proof is
finding sufficiently fresh subjects to use in the \textsc{Com-Old} rule,
and then using associativity and symmetry to connect them
(cf.~\cite[Lemma 5.11]{bengtson.johansson.ea:psi-calculi-long}).
By Lemma~\ref{lemma:provchaneq} we already have a sufficiently
fresh subject: our communication partner's provenance.
\begin{thm}[Congruence properties of strong bisimulation]\label{thm:bisimpres}\
\begin{enumerate}
\item $\trisimsub \Psi P Q \quad \Rightarrow \quad \trisimsub{\Psi}{P \parop R}{Q \parop R}$
\item $\trisimsub \Psi P Q \quad \Rightarrow \quad \trisimsub{\Psi}{(\nu x)P}{(\nu x)Q}$ if $x \freshin \Psi$
\item $\trisimsub \Psi {P_G}{Q_G} \quad \Rightarrow \quad \trisimsub{\Psi}{! P_G}{\;! Q_G}$
\item \label{case:case} $\forall i. \trisimsub {\Psi}{P_i}{Q_i} \quad \Rightarrow \quad \trisimsub{\Psi}{\caseonly\;{\ci{\vec{\varphi}}{\vec{P}}}}{\caseonly\;{\ci{\vec{\varphi}}{\vec{Q}}}}$ if $\vec{P}, \vec{Q}$ are assertion-guarded
\item $\trisimsub \Psi P Q \quad \Rightarrow \quad \trisimsub{\Psi}{\outprefix{M}{N}.P}{\outprefix{M}{N}.Q}$
\end{enumerate}
\end{thm}
\begin{proof}
Formally proven in Isabelle. All proofs are by coinduction.
The most interesting cases are parallel and replication,
where Lemma~\ref{lemma:provchaneq} features prominently.
We briefly outline a \textsc{Com} subcase of the replication case,
where the candidate relation is
\[\{(\Psi, R \parop !P, R \parop !Q). \trisimsub \Psi P Q \wedge \mbox{$P,Q$ are assertion-guarded}\}\]
Suppose $P \bisim Q$ and that $!P$ derives a $\tau$ transition from
communication between two unfolded copies of $P$, with
input subject $M$ and output subject $K$.
We need to mimic the same communication between two copies of $Q$, but after
using $\trisimsub \Psi P Q$ to obtain a matching input transition, the subject $M$ is not useful
to derive a communication since it is $P$'s provenance, not $Q$'s.
However, we can obtain eligible subjects $M',K'$ by repeatedly applying Lemma~\ref{lemma:provchaneq}.
\end{proof}
In Theorem~\ref{thm:bisimpres}.\ref{case:case}, $P_i$ is the $i$th element of
$\vec{P}$, and similarly for $Q_i$. The index variable $i$ ranges from $1$ to the
length of the sequences $\vec{\varphi},\vec{P},\vec{Q}$, which we assume to be
equal.
\begin{thm}[Algebraic laws of strong bisimulation]\label{thm:strong-struct}\
\begin{enumerate}
\item $\trisimsub{\Psi}{P}{P \parop \nil}$
\item $\trisimsub{\Psi}{P\parop ( Q \parop R)}{(P \parop Q) \parop R}$
\item $\trisimsub{\Psi}{P \parop Q}{Q \parop P}$
\item $\trisimsub{\Psi}{(\nu a)\nil}{\nil}$
\item $\trisimsub{\Psi}{P \parop (\nu a) Q}{(\nu a)(P \parop Q)}\mbox{ if $a \freshin P$}$
\item $\trisimsub{\Psi}{\outprefix{M}{N}.(\nu a)P}{(\nu a)\outprefix{M}{N}.P}\mbox{ if $a \freshin M, N$}$
\item $\trisimsub{\Psi}{\inprefix{M}{\vec{x}}{N}.(\nu a)P}{(\nu a)\inprefix{M}{\vec{x}}{N}.P}\mbox{ if $a \freshin \vec{x},M,N$}$
\item $\trisimsub{\Psi}{!P}{P \parop !P}$
\item $\trisimsub{\Psi}{\caseonly\;{\ci{\vec{\varphi}}{\vec{(\nu a)P}}}}{(\nu a)\caseonly\;{\ci{\vec{\varphi}}{\vec{P}}}}\mbox{ if $a \freshin \vec{\varphi}$}$
\item $\trisimsub{\Psi}{(\nu a)(\nu b)P}{(\nu b)(\nu a)P}$
\end{enumerate}
\end{thm}
\begin{proof}
Formally proven in Isabelle. All proofs are by coinduction.
\end{proof}
\noindent
Note that bisimilarity is not preserved by input, for the same reasons as in the \pic.
As in the \pic, we can define \emph{bisimulation congruence} as the substitution
closure of bisimilarity, and thus obtain a true congruence which satisfies all the
algebraic laws above.
We have verified this in Nominal Isabelle, following~\cite{bengtson.johansson.ea:psi-calculi-long}.
\subsection{Weak bisimulation}\label{sec:weakbisimulation}
We have also proved the standard algebraic and congruence properties of weak bisimulation.
The results in this section were established for the original psi-calculi by
Johansson et al.~\cite{DBLP:conf/lics/JohanssonBPV10}; our contribution is to lift them to
psi-calculi without channel symmetry and transitivity. As for strong bisimulation,
it turns out that we may disregard provenances for the purposes of weak bisimulation, so we
can reuse the original definitions verbatim.
The definition of weak bisimulation is technically complicated in psi-calculi because of the
delicate interplay between assertions and reductions. For example, in the
pi-calculus weak bisimulation equates $P$ and $\tau.P$, but in psi-calculi this equation
cannot be admitted: $P$ may contain top-level assertions that disable interaction possibilities in parallel processes. Hence there may be situations where an observable action originating
from $Q$ is available in $Q \parop \tau.P$ (where $P$ has not yet disabled it)
but unavailable in $Q \parop P$.
For a comprehensive motivation of the definitions, we refer to Johansson et al.~\cite{DBLP:conf/lics/JohanssonBPV10}; below we restate the pertinent definitions for completeness.
\begin{defi}[Weak transitions]
\label{def:weakTrans}
$\Psi \frames \wtrans{P}{}{P'}$ means that either $P = P'$ or there exists $P''$ such that $\Psi \frames \trans{P}{\tau}{P''}$ and $\Psi \frames \wtrans{P''}{}{P'}$.
We write $\Psi \frames \wtrans{P}{\alpha}{P'}$ to mean that there exists $P'',P'''$ such that
$\Psi \frames \wtrans{P}{}{P''}$ and $\framedtrans{\Psi}{P''}{\alpha}{P'''}$ and
$\Psi \frames \wtrans{P'''}{}{P'}$.
\end{defi}
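\noindent
Operationally, the weak transition relation is just the strong one wrapped in
$\tau$-closure on both sides. As an aid to intuition---not part of the Isabelle
formalisation---the following minimal Haskell sketch computes Definition~\ref{def:weakTrans}
over an abstract one-step transition function. All types and names are illustrative,
and the closure computation terminates only for finite-state systems.
\begin{verbatim}
import qualified Data.Set as Set

-- Labels: tau or a visible action drawn from some type a.
data Label a = Tau | Vis a deriving (Eq, Ord, Show)

-- One-step transitions of a process type p, in a fixed environment.
type Step p a = p -> [(Label a, p)]

-- P ==> P': zero or more tau steps (first clause of the definition).
tauClosure :: Ord p => Step p a -> p -> Set.Set p
tauClosure step p0 = go (Set.singleton p0) [p0]
  where
    go seen []     = seen
    go seen (p:ps) =
      let fresh = [ q | (Tau, q) <- step p
                      , not (q `Set.member` seen) ]
      in go (foldr Set.insert seen fresh) (fresh ++ ps)

-- P ==alpha==> P': taus, one alpha step, then taus (second clause).
weakStep :: (Ord p, Eq a) => Step p a -> p -> a -> Set.Set p
weakStep step p alpha =
  Set.unions
    [ tauClosure step p3
    | p2          <- Set.toList (tauClosure step p)
    , (Vis b, p3) <- step p2
    , b == alpha ]
\end{verbatim}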
\begin{defi}
$P$ statically implies $Q$ in the environment $\Psi$, written $P \simplies_\Psi Q$, if\[%
\forall \varphi. \; \Psi \ftimes \frameof{P} \vdash \varphi \;\Rightarrow \;
\Psi \ftimes \frameof{Q} \vdash \varphi\]
If $\Psi = \unit$ we may write $P\simplies Q$.
\end{defi}
\begin{defi}[Weak bisimulation]
A {\em weak bisimulation}
$\mathcal R$ is a ternary relation between assertions and pairs of agents such that
${\mathcal R}(\Psi,P,Q)$ implies all of
\begin{enumerate}
\item Weak static implication:
\[\begin{array}{l}
\forall \Psi' \exists Q'', Q'. \\
\quad \Psi \frames \wtrans{Q}{}{Q''}\quad
\wedge\quad P \simplies_\Psi Q'' \quad \wedge\\
\quad \Psi \ftimes \Psi' \frames \wtrans{Q''}{}{Q'} \quad \wedge \quad
{\mathcal R}(\Psi\ftimes\Psi',P,Q')
\end{array}
\]
\item
Symmetry: ${\mathcal R}(\Psi,Q,P)$
\item
Extension of arbitrary assertion:\\
$\forall \Psi'.\ {\mathcal R}(\Psi \ftimes \Psi',P,Q)$
\item Weak simulation:
for all $\alpha, P'$ such that $\bn{\alpha}\freshin \Psi,Q$ and $\framedtrans{\Psi}{P}{\alpha}{P'}$ it holds
\[\begin{array}{@{}lrl@{}}
\mbox{if}\;\alpha = \tau: \exists Q' .
\;\Psi \frames \wtrans{Q}{}{Q'} \quad \wedge \quad {\mathcal R}(\Psi,P',Q') \\
\mbox{if}\;\alpha \neq \tau: \forall \Psi' \exists Q'', Q'''. & &\\
\quad\Psi \frames \wtrans{Q}{}{Q'''}\quad \wedge \quad P \simplies_\Psi Q'''
\quad \wedge\\
\quad \Psi \frames \trans{Q'''}{\alpha}{Q''} \quad \wedge \\
\quad \exists Q'. \;\Psi \ftimes \Psi' \frames \wtrans{Q''}{}{Q'}
\;\; \wedge \;\; {\mathcal R}(\Psi\ftimes\Psi',P',Q')
\end{array}\]
\end{enumerate}
\label{def:noweakwbisim}
We define $P \wbisim_\Psi Q$ to mean that there exists a weak bisimulation ${\mathcal R}$
such that ${\mathcal R}(\Psi,P,Q)$ and write $P \wbisim Q$ for $P \wbisim_\unit Q$.
\end{defi}
Weak bisimulation thus defined includes strong bisimulation, and therefore
satisfies all the usual structural laws.
It is not preserved by $\caseonly$ and input, for the same reasons
that $+$ and input fail to preserve weak bisimulation in the pi-calculus.
We employ the standard solution to obtain a congruence:
all initial $\tau$ steps must be
simulated by at least one $\tau$ step, and we furthermore close the relation under all
substitutions.
\begin{defi}[Weak congruence]
$P$ and $Q$ are weakly $\tau$-bisimilar, written $\taubisim{\Psi}{P}{Q}$, if $P
\wbisim_\Psi Q$ and the following holds:
for all $P'$ such that
$\framedtrans{\Psi}{P}{\tau}{P'}$ there exists $Q'$ such that
$\Psi\frames \wtrans{Q}{\tau}{Q'} \; \wedge \; P'\wbisim_{\Psi} Q'$,
and similarly with the roles of $P$ and $Q$ exchanged. We define $P \wcong Q$ to
mean that for all $\Psi$, and for all well-formed substitution sequences $\ve{\sigma}$,
it holds that
$\taubisim{\Psi}{P\ve{\sigma}}{Q\ve{\sigma}}$.
\end{defi}
The following theorems have been formally proven in Nominal Isabelle:
\begin{thm}\label{thm:weak-struct}$\underset{{\rm tau}}{\wbisim}$ satisfies all the algebraic laws of
$\bisim$ established in Theorem~\ref{thm:strong-struct}.
\end{thm}
\begin{proof}
Formally proven in Isabelle. The proof relies on the fact that
${\bisim} \subseteq {\wbisim}$, which we show by coinduction,
using $\bisim$ as a candidate relation.
\end{proof}
\begin{thm}\label{thm:weak-bisim-cong}$\wbisim$ satisfies all the congruence properties of $\bisim$ established in Theorem~\ref{thm:bisimpres} except~\ref{thm:bisimpres}.4.
\end{thm}
\begin{proof}
Formally proven in Isabelle, by coinduction.
\end{proof}
\begin{thm}\label{thm:weak-cong-cong}Weak congruence $\wcong$ is a congruence wrt.~all operators of psi-calculi.
\end{thm}
\begin{proof}
Formally proven in Isabelle, using Theorem~\ref{thm:weak-bisim-cong} where applicable.
\end{proof}
\subsection{Motivating the design}\label{sec:design}
We have added provenance annotations to an operational semantics that had no shortage
of annotations and side-conditions to begin with. The end result may strike the reader
as somewhat unparsimonious. Previously, psi-calculi had one label component---the channel
subjects---for keeping track of connectivity. We now have two; do we really need both?
In this section, we will explore the consequences of removing either channel subjects or
provenances from the semantics.
The short answer is that while we \emph{can} do without either one, the end result is
not greater parsimony.
\subsubsection{Do we need provenances?}
The fact that bisimilarity is compositional yet ignores provenances
suggests that the semantics could be reformulated without provenance annotations on labels.
To achieve this, what is needed is a side-condition $S$ for the \textsc{Com} rule which,
given an output with subject $M$ and an input with subject $K$,
determines whether the output transition could have originated from the prefix $K$, and the input transition from the prefix $M$:
\begin{mathpar}
\inferrule*[]
{\framedtrans{\frass{Q} \ftimes \Psi}{P}{\bout{\ve{a}}{M}{N}}{P'} \\
\framedtrans{\frass{P} \ftimes \Psi}{Q}{\inlabel{K}{N}}{Q'} \\
S
}
{\framedtrans{\Psi}{P \pll Q}{\tau}{(\nu \ve{a})(P' \pll Q')}}
\end{mathpar}
\noindent But we already have such an $S$: the semantics \emph{with} provenances! So we can let
\[S = \framedprovtrans{\frass{Q} \ftimes \Psi}{P}{\outlabel{M}{(\nu \ve{a})N}}{(\nu\frnames{P};\ve{x})K}{P'} \wedge \framedprovtrans{\frass{P} \ftimes \Psi}{Q}{\inlabel{K}{N}}{(\nu\frnames{Q};\ve{y})M}{Q'}\]
\noindent Of course, this definition is not satisfactory: the provenances are still there, just swept under the
carpet. Worse, we significantly complicate the definitions by effectively introducing a stratified
semantics. Thus the interesting question is not whether such an $S$ exists (it does), but whether $S$
can be formulated in a way that is significantly simpler than the semantics with provenances.
The author believes the answer is negative: $S$ is a property about the roots of the
proof trees used to derive the transitions from $P$ and $Q$. The provenance records just enough
information about the proof trees to show that $M$ and $K$ are connected; with no provenances,
it is not clear how this information could be obtained without essentially reconstructing
the proof tree.
Another alternative is to use the proof tree itself as the transition label~\cite{boudol1988non,DBLP:conf/icalp/DeganoP92}. This makes the necessary information available, at the expense of making labels even more complicated.
\subsubsection{Do we need channel subjects?}
While we have chosen to test for channel connectivity in the \textsc{In} and
\textsc{Out} rules,
a semantics without channel subjects would defer the connectivity check until
the \textsc{Com} rule.%
\footnote{This is similar to a design first proposed by Magnus Johansson in an unpublished draft.
Johansson's design does not use provenances, but obtains a similar effect by including bound
subjects and bound assertions in labels. By partitioning provenance binders in two sequences,
we can recover frame binders from labels and thus found no need to include bound assertions.}
Let us call the former approach \emph{early connectivity}, and the latter \emph{late connectivity}.
The rules for late connectivity---eliding freshness conditions for readability, and
using $!$ and $?$ to distinguish outputs from inputs---would be:
\begin{mathpar}
\inferrule*[Left=\textsc{In-Late}]
{\,}
{\framedprovtrans{\Psi}{\inprefix{M}{\ve{y}}{N} \sdot
P}{?{N}\subst{\ve{y}}{\ve{L}}}{M}{P\subst{\ve{y}}{\ve{L}}}}
\inferrule*[left=\textsc{Out-Late}]
{\,}
{\framedprovtrans{\Psi}{\outprefix{M}{N} \sdot P}{!{N}}{M}{P}}
\inferrule*[left=\textsc{Com-Late}, right={$\ve{a} \freshin Q$}]
{\Psi \ftimes \frass{P} \ftimes \frass{Q} \vdash K \chcon M \\
\framedprovtrans{\frass{Q} \ftimes \Psi}{P}{!{(\nu \ve{a})N}}{(\nu\frnames{P};\ve{x})K}{P'} \\
\framedprovtrans{\frass{P} \ftimes \Psi}{Q}{?{N}}{(\nu\frnames{Q};\ve{y})M}{Q'}
}
{\framedprovtrans{\Psi}{P \pll Q}{\tau}{\bot}{(\nu \ve{a})(P' \pll Q')}}
\end{mathpar}
It is pleasant that this formulation allows the \textsc{In-Late} and \textsc{Out-Late} rules
to have no side-conditions. Save for the provenance bookkeeping, all questions
of connectivity are handled entirely in \textsc{Com-Late}, which results in a pleasing
separation of concerns. This may seem more parsimonious at first glance, but it introduces two issues
that makes the trade-off seem unfavourable: more complicated bisimulation and spurious transitions.
\begin{enumerate}
\item More complicated bisimulation. Consider a psi-calculus where channel connectivity is syntactic equality; that is, where
$\Psi \vdash M \chcon K$ holds iff $M=K$.
Fix $M,K$ such that $M\neq K$. Using bisimilarity as defined in Definition~\ref{def:strongbisim},
we can show that $\overline{M}.0 \bisim \overline{K}.0$: without subjects, these processes emit identical labels save for the provenance, which is ignored by bisimulation.
Hence bisimilarity fails to be preserved by the parallel operator: consider these processes in parallel with a process that can receive on $M$. Then $\overline{M}.0$ can communicate but $\overline{K}.0$ cannot.
The takeaway is that with late connectivity, a compositional notion of bisimulation needs to be
more careful with which provenance the mimicking transition may use. The intuition is that rather
than admitting any provenance, we admit provenances that are connected to the same channels.
We conjecture that the necessary adaptation is:
\begin{defi}[Late-connectivity bisimulation]\label{def:cfstrongbisim} A symmetric relation $\mathcal{R} \subseteq \assertions \times \processes \times \processes$ is a \emph{late-connectivity bisimulation} iff for every $(\Psi,P,Q) \in \mathcal{R}$
\begin{enumerate}
\item $\Psi \otimes \frameof{P} \;\simeq\; \Psi \otimes \frameof{Q}$ (static equivalence)
\item $\forall \Psi'. (\Psi\otimes\Psi',P,Q) \in \mathcal{R}$ (extension of arbitrary assertion)
\item If $\framedprovtrans{\Psi}{P}{\alpha}{\pi}{P'}$ and $\bn{\alpha} \freshin \Psi, Q$, then
\begin{enumerate}
\item If $\alpha=\tau$ then there exists $Q'$ such that $\framedprovtrans{\Psi}{Q}{\tau}{\bot}{Q'}$ and $(\Psi,P',Q') \in \mathcal{R}$
\item For all $M,K,N,\ve{x},\frnames{P},\frass{P},\frnames{Q},\frass{Q}$ such that $\alpha = ?N$ and $\pi = (\nu\frnames{P};\ve{x})K$ and $\frameof{P} = \framepair{\frnames{P}}{\frass{P}}$ and $\frameof{Q} = \framepair{\frnames{Q}}{\frass{Q}}$ and $\frnames{P},\frnames{Q} \freshin \Psi,P,Q,M,\ve{x}$ and $\ve{x}\freshin\Psi$ and $\Psi \otimes \frass{P} \vdash M \chcon K$, there exist $\ve{y},K',Q'$ such that $\ve{y} \freshin \Psi$ and $\Psi \otimes \frass{Q} \vdash M \chcon K'$ and $\framedprovtrans{\Psi}{Q}{?N}{(\nu\frnames{Q};\ve{y})K'}{Q'}$ and $(\Psi,P',Q') \in \mathcal{R}$
\item (A similar clause for output transitions)
\end{enumerate}
\end{enumerate}
\end{defi}
In words, for every channel $M$ that the provenance of $P$'s transition is connected to, there
must be a matching transition from $Q$ whose provenance is also connected to $M$. Note that static
equivalence is not sufficient to imply preservation of connectivity: the connectivity conditions
may be syntactically distinct, and even when equal they may be obscured by the frame binders.
We find this definition of bisimulation intolerably ad-hoc and complicated.
\item Spurious transitions. Consider the representation of the $\pi$-calculus in psi-calculi, where $\terms = \nameset$, and where channel connectivity is syntactic equality on names. This representation is in one-to-one transition correspondence with the standard presentation of the $\pi$-calculus~\cite{Johansson10}, but if we use late connectivity, one-to-one transition correspondence is lost. The pi-calculus process $(\nu x)\overline{x}y$ should not have any outgoing transitions, but late connectivity semantics allows the derivation of a transition as follows:
\begin{mathpar}
\inferrule*[left=\textsc{Scope-Late}]
{
\inferrule*[left=\textsc{Out-Late}]
{\,}
{\framedprovtrans{\Psi}{\overline{x}y}{!y}{x}{0}}
}
{\framedprovtrans{\Psi}{(\nu x)\overline{x}y}{!y}{(\nu x)x}{(\nu x)0}}
\end{mathpar}
\noindent The existence of this transition is more of a blemish than a real problem.
It cannot be used to derive a communication because there exists no $y$ such that
$x \neq y$ and $x \chcon y$. It will be ignored by late-connectivity bisimulation
(Definition~\ref{def:cfstrongbisim}) for the same reason, so it remains true
that $(\nu x)\overline{x}y \bisim 0$. Still, we maintain the view that a derivable
transition should signify the readiness of the process to engage in some behaviour.
This transition signifies nothing.
\end{enumerate}
\subsection{Revisiting the counterexamples}\label{sec:counterexamples}
In the introduction, we mentioned that Bengtson et al.~\cite{bengtson.johansson.ea:psi-calculi-long} have counterexamples to the effect that without symmetry and transitivity,
scope extension is unsound. In this section we will revisit these counterexamples, with the
aim of convincing the reader that they do not apply to our developments in the present
paper.
We begin by quoting the counterexample used by Bengtson et al. to argue that scope extension requires channel symmetry~\cite[p.~14]{bengtson.johansson.ea:psi-calculi-long}:
\begin{quote}
Consider any psi-calculus where $\Psi_1$ and $\Psi_2$ are such that $\Psi_1 \ftimes \Psi_2
\vdash a \sch b$ and $\Psi_1 \ftimes \Psi_2 \vdash b \sch b$. We shall argue that also $\Psi_1 \ftimes \Psi_2 \vdash b \sch a$ must hold, otherwise scope extension does not hold.
Consider the agent
\[(\nu a,b)(\pass{\Psi_1}\parop\pass{\Psi_2}\parop\overline{a}\sdot \nil\parop
b\sdot \nil)\]
which has an internal communication $\tau$ using $b$ as subjects in the premises of the {\sc Com} rule.
If $b \freshin \Psi_1$ and $a \freshin \Psi_2$, by scope extension the agent should behave as
\[(\nu a)(\pass{\Psi_1}\parop\overline{a}\sdot \nil) \;\parop\; (\nu b)(\pass{\Psi_2}\parop
b\sdot \nil)\]
and therefore this agent must also have a $\tau$ action. The left hand component cannot do an $\overline{a}$ action, but in the environment of $\Psi_2$ it can do a $\overline{b}$ action.
Similarly, the right hand component cannot do a $b$ action. The only possibility is for it to do an $a$ action, as in
\[\framedtrans{\Psi_1}{(\nu
b)(\pass{\Psi_2} \parop b \sdot \nil)}{a}{\cdots}\]
and this requires $\Psi_1 \ftimes \Psi_2 \vdash b \sch a$.
\end{quote}
\noindent This counterexample is only valid because the authors were not careful about the orientation of channel equivalence judgements in the operational rules---understandably so, because they were designing a calculus with symmetric connectivity. To see this clearly, consider the preconditions to the rules for input and output in the original psi-calculi:
\begin{mathpar}
\and
\inferrule*[Left=\textsc{In-Old}]
{\Psi \vdash M \sch K }
{\framedtrans{\Psi}{\inprefix{M}{\ve{y}}{N}.P}{\inlabel{K}{N}\lsubst{\ve{L}}{\ve{y}}}{P\lsubst{\ve{L}}{\ve{y}}}}
\inferrule*[left=\textsc{Out-Old}]
{\Psi \vdash M \sch K }
{\framedtrans{\Psi}{\outprefix{M}{N}.P}{\outlabel{K}{N}}{P}}
\end{mathpar}
\noindent Note the inconsistency that in \textsc{In-Old}, the input channel occurs on the LHS of $\sch$, whereas in \textsc{Out-Old} the output channel occurs on the LHS. This of course makes no difference when $\sch$ is symmetric, but for an asymmetric connectivity relation it is important to use it consistently, with the input channel always going on the same side of $\sch$. Simply reorienting the channel equivalence judgement in \textsc{In-Old} suffices to make this counterexample
inapplicable even to the original psi-calculi.
This should not be taken to mean that the original psi-calculi do not require symmetry: we only
mean to say that the reason for the symmetry requisite is not clear from this counterexample.
For transitivity, Bengtson et al. give the following counterexample~\cite[p.~14]{bengtson.johansson.ea:psi-calculi-long}:
\begin{quote}
Let $\one$ entail $a \sch a$ for all names $a$, and
let $\Psi$ be an assertion with support $\{a,b,c\}$ that additionally entails
the two conditions $a \sch b$ and $b \sch c$, but not $a \sch c$, and thus does
not satisfy transitivity of channel equivalence. If $\Psi$ entails no other
conditions then $(\nu b)\Psi \sequivalent \one$, and we expect $(\nu b)\pass\Psi$
to be interchangeable with $\pass\one$ in all contexts.
Consider the agent
\[\overline{a}\sdot \nil \parop c \sdot \nil \parop (\nu b)\pass{\Psi}\]
By scope extension it should behave precisely as
\[(\nu b)(\overline{a}\sdot \nil \parop c \sdot \nil \parop \pass\Psi)\]
This agent has a $\tau$-transition since $\Psi$ enables an interaction between the components $\overline{a}\sdot \nil$ and $c \sdot \nil$.
But the agent
\[\overline{a}\sdot \nil \parop c \sdot \nil \parop \pass{\one}\]
has no such transition. The conclusion is that $(\nu b) \Psi$ must entail that the components can communicate, i.e.~that $a \sch c$, in other words $\Psi \vdash a \sch c$.
\end{quote}
\noindent The present author agrees with this reasoning, but reaches the exact opposite conclusion about
which process is at fault: the anomaly here is not that $\overline{a}\sdot \nil \parop c \sdot \nil \parop \pass{\one}$ cannot reduce, but that $(\nu b)(\overline{a}\sdot \nil \parop c \sdot \nil \parop \pass\Psi)$ can. If $a$ and $c$ are not channel equivalent, there should not be a derivable communication between the channels $a$ and $c$. The original psi-calculi nonetheless admit the derivation
\begin{mathpar}
\inferrule*[left=\textsc{Scope}]
{\inferrule*[left=\textsc{Par}]
{
\inferrule*[left=\textsc{Com-Old}]
{{\inferrule*[Left=\textsc{Out}]
{\Psi \vdash a \sch b}
{\framedtrans{\Psi}{\overline{a}\sdot \nil}{\overline{b}}{\cdots}}} \\
{\inferrule*[Left=\textsc{In}]
{\Psi \vdash c \sch c}
{\framedtrans{\Psi}{c \sdot \nil}{\underline{c}}{\cdots}}} \\
\Psi \vdash b \sch c
}
{\framedtrans{\Psi}{\overline{a}\sdot \nil \parop c \sdot \nil}{\tau}{\cdots}
}
}
{\framedtrans{\one}{\overline{a}\sdot \nil \parop c \sdot \nil \parop \pass\Psi}{\tau}{\cdots}}
}
{\framedtrans{\one}{(\nu b)(\overline{a}\sdot \nil \parop c \sdot \nil \parop \pass\Psi)}{\tau}{\cdots}}
\end{mathpar}
Notice how we use three different channel equivalence judgements to string together
a derivation via $b$, which both $a$ and $c$ are connected to. This is not a problem if
transitivity is intended, but leads to absurd derivations if the intention is to allow
non-transitive connectivity relations.
With the provenance semantics, the counterexample does not apply since no communication between $a$ and $c$ is possible: the only possibility is to apply the \textsc{Com} rule with matching subjects and provenances. This requires $\overline{a}\sdot \nil$ to have an output transition with subject $c$ and $c\sdot \nil$ to have an input transition with subject $a$, but such transitions cannot be derived because $a \sch c$ (in our notation $a \chcon c$) does not hold.
Finally, we observe that a slight variant of the counterexample for transitivity illustrates the
need for symmetry.
This time, let $\Psi$ be an assertion with support $\{a,b,c,d\}$ that entails the three conditions $a \sch b$ and $d \sch b$ and $c \sch d$ and none other.
The agent
\[\overline{a}\sdot \nil \parop c \sdot \nil \parop \pass{\one}\]
will have no transitions, but this agent will have a $\tau$ transition:
\[(\nu b,d)(\overline{a}\sdot \nil \parop c \sdot \nil \parop \pass\Psi)\]
We conclude by the same reasoning as above that $\Psi \vdash a \sch c$ must hold,
or in other words, that channel equivalence must satisfy the law
\[a \sch b \wedge d \sch b \wedge c \sch d \Rightarrow a \sch c\]
\noindent This awkward-looking algebraic law is weaker than symmetry and transitivity,
and together with reflexivity it implies both.
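To spell out the latter claim, suppose $\sch$ is reflexive and satisfies the law above.
Instantiating $(a,b,d,c) := (y,y,x,x)$ derives symmetry from an assumption $x \sch y$:
\[
\underbrace{y \sch y}_{\text{refl.}} \;\wedge\; \underbrace{x \sch y}_{\text{assumption}} \;\wedge\; \underbrace{x \sch x}_{\text{refl.}} \;\Rightarrow\; y \sch x
\]
For transitivity, assume $x \sch y$ and $y \sch z$. Symmetry, as just derived, gives
$z \sch y$, and instantiating $(a,b,d,c) := (x,y,y,z)$ yields
\[
x \sch y \;\wedge\; \underbrace{y \sch y}_{\text{refl.}} \;\wedge\; z \sch y \;\Rightarrow\; x \sch z
\]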
It may well be that this weaker law is sufficient for the original psi-calculi to be
compositional (assuming a channel equivalence reorientation in the \textsc{In-Old} rule).
However, we find it difficult to imagine a useful connectivity relation that satisfies this law
but is neither reflexive, transitive nor symmetric.
\subsection{Validation}\label{sec:validation}
We have defined semantics and bisimulation, and showed that bisimilarity satisfies the expected
laws. But how do we know that they are the right
semantics, and the right bisimilarity?
This section provides two answers to this question. First, we show that our developments
constitute a conservative extension of the original psi-calculi.
Second, we define a reduction semantics and barbed bisimulation that are
in agreement with our (labelled) semantics and (labelled) bisimilarity.
Let $\apitransarrow{}_o$ and $\bisim_o$ denote semantics and bisimilarity as defined
by Bengtson et al.~\cite{bengtson.johansson.ea:psi-calculi-long},
i.e., without provenances
and with the \textsc{Com-Old} rule discussed in Section~\ref{sec:definitions}.
Along the same lines, let $\wbisim_o$ and ${\underset{{\rm tau}}{\wbisim}}_o$ and $\wcong_o$
denote weak bisimulation, weak $\tau$-bisimilarity and weak congruence as defined by
Johansson et al.~\cite{DBLP:conf/lics/JohanssonBPV10}.
Then conservativity can be stated as follows:
\begin{thm}[Conservativity]\label{thm:conservativity}When $\chcon$ is symmetric and transitive we have
\begin{mathpar}
\bisim_o\;=\;\bisim{}
\and \apitransarrow{}_o\;=\;\apitransarrow{}
\and \wbisim_o\;=\;\wbisim
\and \underset{{\rm tau}}{\wbisim}_o\;=\;\underset{{\rm tau}}{\wbisim}
\and \wcong_o\;=\;\wcong
\end{mathpar}
\end{thm}
\begin{proof}
Formally proven in Isabelle.
The bulk of the proof is to show that $\apitransarrow{}_o\;=\;\apitransarrow{}$.
The $\Leftarrow$ direction is by induction on the derivation of the
$\apitransarrow{}$ judgement, using symmetry to reorient the connectivity judgement
in the \textsc{In} case. In the \textsc{Com} case, associativity and
Lemma~\ref{lemma:provchaneq} are used to reconstruct the missing channel equivalence judgement.
The $\Rightarrow$ direction is by induction on the $\apitransarrow{}_o$ judgement, and is more
involved; in particular, the \textsc{Com-Old} case requires relabelling the transitions
obtained from the induction hypotheses with the provenance of the other.
\end{proof}
\noindent Our reduction semantics departs from standard designs~\cite{Berry:1989:CAM:96709.96717,Milner:1990:FP:90397.90426}
by relying on reduction contexts~\cite{DBLP:journals/tcs/FelleisenH92} instead of
structural rules, for two reasons. First, standard formulations tend to include rules like these:
\begin{mathpar}
\inferrule*[]
{\trans{P}{}{P'}}
{\trans{P \parop Q}{}{P' \parop Q}}
\and
\inferrule*[]
{\ }
{\trans{\alpha.P + Q \parop \overline{\alpha}.R + S}{}{P \parop R}}
\end{mathpar}
\noindent A parallel rule like the one above would be unsound because $Q$ might contain assertions that retract some conditions needed to derive $P$'s reduction.
The reduction axiom, in turn, assumes prefix-guarded choice, whereas we want our semantics to
apply to the full calculus, without limiting the syntax
to prefix-guarded $\caseonly$ statements.
But first, a few auxiliary definitions.
The \emph{reduction contexts} are the contexts in which communicating
processes may occur:
\begin{defi}[Reduction contexts]
The \emph{reduction contexts}, ranged over by $C$, are generated by the grammar
\[
\begin{array}{rrll}
C & := & P_G & \mbox{(process)} \\
& & \chole & \mbox{(hole)} \\
& & C \parop C & \mbox{(parallel)} \\
& & \caseonly\;{\ci{\ve{\varphi}}{\ve{P_G}}}\casesep{\ci{\varphi'}{C}}\casesep{\ci{\ve{\varphi''}}{\ve{Q_G}}} & \mbox{(case)} \\
\end{array}
\]
Let $\holes{C}$ denote the number of holes in $C$. $\cfill{C}{\ve{P_G}}$ denotes the process that results from filling each hole of $C$ with the corresponding element of $\ve{P_G}$, where holes are numbered from left to right; if $\holes{C} \neq |\ve{P_G}|$, $\cfill{C}{\ve{P_G}}$ is undefined.
\end{defi}
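\noindent
To make the arity bookkeeping concrete, here is a minimal Haskell sketch of reduction
contexts and hole filling over a toy process fragment. The datatype mirrors the grammar
above; all names are illustrative and not part of the formal development.
\begin{verbatim}
type Cond = String   -- placeholder for conditions

-- A toy process fragment, just enough to state hole filling.
data Proc = PG String | Nil | Par Proc Proc | Case [(Cond, Proc)]

-- Reduction contexts, one constructor per production of the grammar.
data Ctx
  = CProc Proc                                      -- (process)
  | CHole                                           -- (hole)
  | CPar Ctx Ctx                                    -- (parallel)
  | CCase [(Cond, Proc)] Cond Ctx [(Cond, Proc)]    -- (case)

holes :: Ctx -> Int
holes (CProc _)       = 0
holes CHole           = 1
holes (CPar c d)      = holes c + holes d
holes (CCase _ _ c _) = holes c

-- C[Ps]: fill holes left to right; Nothing if arities mismatch.
fill :: Ctx -> [Proc] -> Maybe Proc
fill c ps = case go c ps of
  Just (q, []) -> Just q
  _            -> Nothing
  where
    go (CProc p) rest = Just (p, rest)
    go CHole (p:rest) = Just (p, rest)
    go CHole []       = Nothing
    go (CPar c1 c2) rest = do
      (q1, r1) <- go c1 rest
      (q2, r2) <- go c2 r1
      Just (Par q1 q2, r2)
    go (CCase pre phi c' post) rest = do
      (q, r) <- go c' rest
      Just (Case (pre ++ [(phi, q)] ++ post), r)
\end{verbatim}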
We do not need restriction contexts---instead, we rely on structural rules to
pull all restrictions to the top level.
To this end, we let \emph{structural congruence} $\equiv$ be the smallest equivalence relation on processes derivable using Theorems~\ref{thm:bisimpres} and~\ref{thm:strong-struct}.
The \emph{conditions} $\conds{C}$ and \emph{parallel processes} $\ppr{C}$ of a context $C$
are, respectively, the conditions in $C$ that guard the holes, and the processes of $C$ that
are parallel to the holes:
\[
\begin{array}{rrl}
\ppr{P_G} & = & P_G \\
\ppr{\chole} & = & \nil \\
\ppr{C_1 \parop C_2} & = & \ppr{C_1} \parop \ppr{C_2} \\
\ppr{\caseonly\;{\ci{\ve{\varphi}}{\ve{P_G}}}\casesep{\ci{\varphi'}{C}}\casesep{\ci{\ve{\varphi''}}{\ve{Q_G}}}} & = & \ppr{C} \\
& & \\
\conds{P_G} & = & \emptyset \\
\conds{\chole} & = & \emptyset \\
\conds{C_1 \parop C_2} & = & \conds{C_1} \cup \conds{C_2} \\
\conds{\caseonly\;{\ci{\ve{\varphi}}{\ve{P_G}}}\casesep{\ci{\varphi'}{C}}\casesep{\ci{\ve{\varphi''}}{\ve{Q_G}}}} & = & \{\varphi'\} \cup \conds{C}
\end{array}
\]
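\noindent
Continuing the sketch above, the two functions are direct transcriptions of these
equations (using lists in place of sets for $\conds{C}$):
\begin{verbatim}
-- Conditions guarding the holes of a context.
conds :: Ctx -> [Cond]
conds (CProc _)         = []
conds CHole             = []
conds (CPar c d)        = conds c ++ conds d
conds (CCase _ phi c _) = phi : conds c

-- Processes parallel to the holes of a context.
ppr :: Ctx -> Proc
ppr (CProc p)       = p
ppr CHole           = Nil
ppr (CPar c d)      = Par (ppr c) (ppr d)
ppr (CCase _ _ c _) = ppr c
\end{verbatim}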
\begin{defi}[Reduction semantics]
The reduction relation $\mathord{\apitransarrow{}}\subseteq\processes \times \processes$ is
defined inductively by the rules of Table~\ref{table:reduction-semantics}.
\end{defi}
\begin{table}[tb]
\begin{mathpar}
\inferrule*[left=\textsc{Struct}]
{P \equiv Q \\ \trans{Q}{}{Q'} \\ Q' \equiv P'}
{\trans{P}{}{P'}}
\inferrule*[left=\textsc{Scope}]
{\trans{P}{}{Q}}
{\trans{(\nu a)P}{}{(\nu a)Q}}
\inferrule*[left=\textsc{Ctxt}]
{\ve{\Psi} \vdash M \chcon N \\
K = L\subst{\ve x}{\ve T} \\
\forall \varphi \in \conds{C}.\;\ve{\Psi} \vdash \varphi
}
{\trans{\ve{\pass{\Psi}} \parop \cfill{C}{\outprefix{M}{K}.P,\;\inprefix{N}{\ve x}{L}.Q}}{}{\ve{\pass{\Psi}} \parop P \parop Q\subst{\ve x}{\ve T} \parop \ppr{C}}}
\end{mathpar}
\caption{\rm Reduction semantics.
Here $\ve{\Psi}$ abbreviates the composition $\Psi_1 \otimes \Psi_2 \otimes \dots$, and
$\ve{\pass{\Psi}}$ abbreviates the parallel composition $\pass{\Psi_1} \parop \pass{\Psi_2} \parop \dots$---for empty sequences they are taken to be $\one$ and $\nil$ respectively.
}
\label{table:reduction-semantics}
\end{table}
In words, \textsc{Ctxt} states that if an input and output prefix occur
in a reduction context, we may derive a reduction if the following holds:
the prefixes are connected in the current assertion environment,
the message matches the input pattern, and all conditions guarding
the prefixes are entailed by the environment.
The $\ppr{C}$ in the reduct makes sure any processes in parallel to the holes
are preserved.
Note that even though an unrestricted parallel rule would be unsound in a psi-calculus with
non-monotonic composition, the following is valid as a derived rule:
\[
\inferrule*[]
{\trans{P}{}{P'}}
{\trans{P \parop Q_G}{}{P' \parop Q_G}}
\]
\begin{thm}\label{lemma:harmony}
$\trans{P}{}{P'}$ iff there is $P''$ such that $\framedtrans{\unit}{P}{\tau}{P''}$ and $P'' \equiv P'$.
\end{thm}
\begin{proof}
A full proof is available in the technical report~\cite{pohjola:newpsireport}.
The $\Leftarrow$ direction is by induction on the derivation of $\trans{P}{}{P'}$.
The $\Rightarrow$ direction is via reduction to normal form, showing that for every
process $P$ there are $\ve{x},\Psi,P_G$ such that
\[P \equiv (\nu \ve{x})(\ve{\pass{\Psi}} \parop P_G)\tag*{\qedhere}\]
\end{proof}
For barbed bisimulation, we need to define what the observables are, and what contexts
an observer may use.
We follow previous work by Johansson et al.~\cite{DBLP:conf/lics/JohanssonBPV10}
on weak barbed bisimilarity for the original psi-calculi on both counts.
First, we take the barbs to be the output labels a process can exhibit: we define
$P\exposes{\outlabel{M}{(\nu \ve{a})N}}$
($P$ exposes $\outlabel{M}{(\nu \ve{a})N}$)
to mean
$\exists P'.\;\framedtrans{\one}{P}{\outlabel{M}{(\nu \ve{a})N}}{P'}$.
We write $P \exposes{\overline{M}}$ for
$\exists \ve{a},N. P \exposes{\outlabel{M}{(\nu \ve{a})N}}$,
and $P\wexposes{\alpha}$ for $P \goesto{\tau}^\ast\exposes{\alpha}$.
Second, we let observers use \emph{static} contexts, i.e.~ones built from parallel
and restriction.
\begin{defi}[Barbed bisimilarity]\label{def:barbbisim} \emph{Barbed bisimilarity}, written $\barbbisim$, is the largest equivalence on processes such that $P \barbbisim Q$ implies
\begin{enumerate}
\item If $P\exposes{\outlabel{M}{(\nu \ve{a})N}}$
and $\ve{a} \freshin Q$ then $Q\exposes{\outlabel{M}{(\nu \ve{a})N}}$ (barb similarity)
\item If $\trans{P}{}{P'}$ then there exists $Q'$ such that $\trans{Q}{}{Q'}$ and $P' \barbbisim Q'$ (reduction simulation)
\item $(\nu\ve{a})(P \parop R) \barbbisim (\nu\ve{a})(Q \parop R)$ (closure under static contexts)
\end{enumerate}
\end{defi}
Our proof that barbed and labelled bisimilarity coincide only
considers psi-calculi with a certain minimum of sanity and expressiveness.
This rules out some degenerate cases:
psi-calculi where there are messages that can be sent but not received,%
\footnote{
More precisely, we refer to messages that can occur as objects of output transitions,
but not as the objects of input transitions.
This can happen in psi-calculi where the substitution function on terms
is not surjective, because
the rule \textsc{In} requires the message to be the result of a substitution.}
and psi-calculi where no transitions whatsoever are possible.
\begin{defi} A psi-calculus is \emph{observational} if:
\begin{enumerate}
\item For all $P$ there are $M_P,K_P$ such that
$\frameof{P} \vdash M_P \chcon K_P$ and
not $P\wexposes{\overline{K_P}}$.
\item If $N = (\ve{x}\;\ve{y})\cdot M$ and $\ve{y} \freshin M$ and
$\ve{x},\ve{y}$ are distinct
then $M\subst{\ve{x}}{\ve{y}} = N$.
\end{enumerate}
\end{defi}
\noindent The first clause means that no process can exhaust the set of barbs.
Hence observing contexts can signal success or failure without interference
from the process under observation.
For example, in the pi-calculus both $M_P$ and $K_P$ can be taken to be any name $x$ such that $x\freshin P$.
The second clause states that for swapping of distinct names,
substitution and permutation have the same behaviour.
Any standard definition of simultaneous substitution should satisfy this requirement.
These assumptions are present, explicitly or implicitly,
in the work of Johansson et al.~\cite{DBLP:conf/lics/JohanssonBPV10}.
Ours are given a slightly weaker formulation.
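\noindent
The second clause is easy to check mechanically for a concrete term language.
A minimal Haskell sketch for a toy term type, with name-for-name substitution;
everything here is illustrative:
\begin{verbatim}
type Name = String

data Tm = V Name | App Tm Tm deriving (Eq, Show)

-- Simultaneous name-for-name substitution M[xs := ys].
subst :: [(Name, Name)] -> Tm -> Tm
subst s (V z)     = V (maybe z id (lookup z s))
subst s (App a b) = App (subst s a) (subst s b)

-- The transposition (x y) . M.
swap :: (Name, Name) -> Tm -> Tm
swap (x, y) (V z)
  | z == x    = V y
  | z == y    = V x
  | otherwise = V z
swap p (App a b) = App (swap p a) (swap p b)

-- Clause 2 for a single pair: when y is fresh in m and x /= y,
-- substitution and swapping agree.
clause2 :: Name -> Name -> Tm -> Bool
clause2 x y m = subst [(x, y)] m == swap (x, y) m
\end{verbatim}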
We can now state the main result of this section:
\begin{thm} In all observational psi-calculi, $P \barbbisim Q$ iff $\trisimsub{\one}{P}{Q}$.
\end{thm}
\begin{proof}
A full proof is available in the technical report~\cite{pohjola:newpsireport}.
Soundness is by coinduction on the definition of barbed bisimilarity,
using $\bisim_{\one}$ as candidate relation,
and using Theorems~\ref{lemma:harmony} and \ref{thm:bisimpres}, and the
fact that $\equiv$ is a strong bisimulation.
Completeness is by showing that $\{(\Psi,P,Q) : P \parop \pass{\Psi} \barbbisim Q \parop \pass{\Psi}\}$ is a bisimulation relation.
\end{proof}
\section{Expressiveness}
In this section, we study two examples of
the expressiveness gained by dropping symmetry and transitivity.
\subsection{Pi-calculus with preorders}\label{sec:prepi}
Recall that pi-F~\cite{gardner.wischik:explicit-fusions}
extends the pi-calculus with name equalities $(x=y)$ as first-class processes.
Communication in pi-F gives rise to equalities rather than substitutions,
so e.g.~$xy.P \parop \overline{x}z.Q$ reduces to $y = z \parop P \parop Q$:
the input and output objects are fused.
Hirschkoff et al.~\cite{DBLP:conf/lics/HirschkoffMS13} observe that fusion and
subtyping are fundamentally incompatible, and
propose a generalisation of pi-F
called the \emph{pi-calculus with preorders} or ${\pi}\!P$
to resolve the issue.
We are interested in ${\pi}\!P$ because its channel connectivity
is not transitive.
The equalities of pi-F are replaced with \emph{arcs} $a/b$ (``$a$ is above $b$'')
which act as one-way fusions: anything that can be done with $b$ can be done with $a$,
but not the other way around.
The effect of a communication is to create an arc
with the output subject above the input subject,
so $x(y).P \parop \overline{x}(z).Q$ reduces to $(\nu yz)(z/y \parop P \parop Q)$.
We write $A \vdash x \prec y$ to mean that $x$ and $y$ are related by the reflexive and transitive closure of the set of arcs $A$. $A$ is usually the set
of top-level arcs in the process under consideration, and will often be left implicit.
Two names $x,y$ are considered \emph{joinable} for the purposes of synchronisation if some name
$z$ is above both of them: formally, we write $x \curlyvee y$ for
$\exists z. x \prec z \wedge y \prec z$.
Hirschkoff et al.~conclude by saying that ``[it] could also be interesting to study the
representation of $\pi\!P$ into Psi-calculi.
This may not be immediate because the latter make use of an equivalence relation on channels,
while the former uses a preorder''~\cite[p.~387]{DBLP:conf/lics/HirschkoffMS13}.
Having lifted the constraint that channels form an equivalence relation,
we happily accept the challenge.
We write ${\Psi}\!P$ for the psi-calculus we use to embed ${\pi}\!P$.
We follow the presentation of ${\pi}\!P$ from~\cite{DBLP:journals/jlp/HirschkoffMX15,DBLP:conf/fsen/HirschkoffMX15},
where the behavioural theory is most developed.
\begin{defi}
The psi-calculus ${\Psi}\!P$ is defined with the following parameters:
\[
\begin{array}{rrl}
\terms & \defn & \nameset
\\
\conditions & \defn &
\{x \prec y : x,y \in \nameset \}
\cup
\{x \curlyvee y : x,y \in \nameset \}
\\
\assertions & \defn & \powerfin{\{x \prec y : x,y \in \nameset \}}
\\
\unit & \defn & \{\}
\\
\otimes & \defn & \cup
\\
\chcon & \defn & \curlyvee
\\
\vdash & \defn & \mbox{the relation denoted $\vdash$ in~\cite{DBLP:conf/fsen/HirschkoffMX15}}.
\end{array}
\]
\end{defi}
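\noindent
For intuition, entailment in ${\Psi}\!P$ amounts to graph reachability over the arcs.
A minimal Haskell sketch, assuming names are strings and an assertion is a finite set
of arcs; this illustrates, but does not define, the entailment relation of Hirschkoff et al.:
\begin{verbatim}
import qualified Data.Set as Set

type Name = String
-- An assertion: a finite set of arcs; (x, y) records the
-- arc y/x, i.e. one step of x prec y.
type Arcs = Set.Set (Name, Name)

-- x prec y: reflexive-transitive closure of the arcs.
prec :: Arcs -> Name -> Name -> Bool
prec arcs x y = y `Set.member` go (Set.singleton x)
  where
    go seen =
      let next  = Set.fromList [ w | (v, w) <- Set.toList arcs
                                   , v `Set.member` seen ]
          seen' = seen `Set.union` next
      in if seen' == seen then seen else go seen'

-- x curlyvee y: some z lies above both. By reflexivity it suffices
-- to search among x, y and the names mentioned in the arcs.
joinable :: Arcs -> Name -> Name -> Bool
joinable arcs x y = any above candidates
  where
    above z    = prec arcs x z && prec arcs y z
    candidates = x : y : concat [ [v, w] | (v, w) <- Set.toList arcs ]
\end{verbatim}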
\noindent The prefix operators of ${\pi}\!P$ are different from those of psi-calculi:
objects are always bound, communication gives rise to an arc rather than a
substitution, and a conditional silent prefix $[\varphi]\tau.P$ is included.
The full syntax, ignoring protected prefixes, is as follows:%
\footnote{We ignore protected prefixes because they are redundant, cf.~Remark~1 of \cite{DBLP:journals/jlp/HirschkoffMX15}.}
\begin{defi}[Syntax of ${\pi}\!P$]$\,$
\begin{center}
\begin{tabular}{rcll}
$P$ & := & $a/b$ & (arc) \\
& & $\Sigma_{i\in I}\pi_i.P_i$ & (prefix-guarded choice) \\
& & $P \parop Q$ & (parallel) \\
& & $(\nu x)P$ & (restriction) \\
$\pi$ & := & $a(x)$ & (input) \\
& & $\overline{a}{(y)}$ & (output) \\
& & $[\varphi]\tau$ & (conditional silent prefix) \\
$\varphi$ & := & $x \prec y$ & \\
& & $x \curlyvee y$ &
\end{tabular}
\end{center}
Here $I$ is a finite index set.
\end{defi}
We can now define our encoding of ${\pi}\!P$ prefixes:
\begin{defi}[Encoding of prefixes]\label{def:prefixenc}
The encoding $\semb{\_}$ from ${\pi}\!P$ to ${\Psi}\!P$ is homomorphic on all
operators except prefixes and arcs, where it is defined by
\[
\begin{array}{rrll}
\semb{a/b} & = & \pass{b \prec a} &
\\
\semb{\overline{a}{(y)}.P} & = & (\nu xy)(\overline{a}{x}.(\pass{x \prec y} \parop \semb{P})) &\mbox{ where $x\freshin y,P$}
\\
\semb{{a}{(y)}.P} & = & (\nu y)(\inprefix{a}{x}{x}.(\pass{y \prec x} \parop \semb{P})) &\mbox{ where $x\freshin y,P$}
\\
\semb{[\varphi]\tau.P} & = & \caseonly\;{\ci{\varphi}{(\nu x)(\inprefix{x}{x}{x}.0 \parop \overline{x}{x}.\semb{P})}} & \mbox{ where $x\freshin P$}
\end{array}
\]
For choice, we let $\semb{\Sigma_{i\in I}\pi_i.P_i} = {\caseonly\;{\ci{\ve{\varphi}}{\ve{\semb{\pi.P}}}}}$, where each $\varphi_i$ is a condition that is always entailed.\footnote{
Such a condition can either be added to the target language, or we can use e.g.~%
$a \prec a$ at the cost of some technical inconvenience.
See the technical report for details.
}
\end{defi}
\noindent This embedding of ${\pi}\!P$ in psi-calculi comes with a notion of bisimilarity per
Definition~\ref{def:strongbisim}.
We show that it coincides with the labelled bisimilarity for ${\pi}\!P$ (written $\sim$)
introduced in~\cite{DBLP:journals/jlp/HirschkoffMX15,DBLP:conf/fsen/HirschkoffMX15}.
\begin{thm}\label{thm:madiotbisim}
$P \sim Q$ iff $\semb{P} \bisim \semb{Q}$
\end{thm}
\begin{proof}
A full proof is available in the technical report~\cite{pohjola:newpsireport}.
We prove strong operational correspondence by a tedious induction,
then prove (respectively) that
$\{(P,Q). \trisimsub{\one}{\semb{P}}{\semb{Q}}\}$ is a bisimulation,
and that
$\{(\Psi,\semb{P},\semb{Q}). P \parop \Psi \sim Q \parop \Psi \}$ is
a bisimulation up to $\bisim$.
\end{proof}
Thus our encoding validates the behavioural theory of ${\pi}\!P$ by connecting it to our fully mechanised proofs, while also showing that a substantially different design of the LTS
yields the same bisimilarity.
We will briefly compare these designs.
While we do rewriting of subjects in the prefix rules,
Hirschkoff et al. instead use relabelling rules like this one
(mildly edited to match our notation):
\begin{mathpar}
\inferrule*[]
{\trans{P}{a(x)}{P'} \\ \frameof{P} \vdash a \prec b}
{\trans{P}{b(x)}{P'}}
\end{mathpar}
\noindent An advantage of this rule is that it allows input and output labels
to be as simple as pi-calculus labels.
A comparative disadvantage is that it is not syntax-directed, and that the LTS has
more rules in total.
Note that this rule would not be a viable alternative to provenances in psi-calculi:
since it can be applied more than once in a derivation,
its inclusion assumes that the channels form a preorder wrt.~connectivity.
${\pi}\!P$ also has labels $[\varphi]\tau$, meaning that a silent
transition is allowed in environments where $\varphi$ is true.
A rule for rewriting $\varphi$ to a weaker condition, similar to the above rule for
subject rewriting, is included.
Psi-calculi do not need this because the \textsc{Par} rules take the
assertion environment into account.
${\pi}\!P$ transitions of kind $\trans{P}{[\varphi]\tau}{P'}$
correspond to $\Psi\!P$ transitions of kind $\framedtrans{\{\varphi\}}{P}{\tau}{P'}$.
Interestingly, the analogous full abstraction result fails to hold for the
embedding of pi-F in psi-calculi by Bengtson et al.~\cite{bengtson.johansson.ea:psi-calculi-long},
because outputs that emit distinct but fused names are distinguished by psi-calculus bisimilarity.
This issue does not arise here because $\pi\!P$ objects are always bound; however, we believe the
encoding of Bengtson et al.~can be made fully abstract by encoding free output with bound output,
exploiting the pi-F law $\overline{a}\,y.Q \sim \overline{a}(x)(Q \parop x=y)$.%
\subsection{Mixed choice}\label{sec:choice}
This section will argue that because we allow non-transitive channel connectivity,
the $\caseonly$ operator of psi-calculi becomes superfluous.
The formal results here will focus on encoding the special case of mixed choice.
We will then briefly discuss how to generalise these results
to the full $\caseonly$ operator.
Choice, written $P + Q$, is a process that behaves as either $P$ or $Q$.
In psi-calculi we consider $P + Q$ to abbreviate
${\caseonly\;{\ci{\top}{P}}\casesep{\ci{\top}{Q}}}$
for some condition $\top$ that is always entailed.
\emph{Mixed choice} means that in $P + Q$,
$P$ and $Q$ must be prefix-guarded.
In particular, mixed choice allows choice between an input and an output.
There is a straightforward generalisation to $n$-ary sums that, in order
to simplify the presentation, we will not consider here.
Fix a psi-calculus
$\mathcal{P}=(\terms,\assertions,\conditions,\vdash,\otimes,\unit,\chcon)$
with mixed choice. This will be our source language.
For technical convenience we assume that $\mathcal{P}$ satisfies
the equation $\one\sigma = \one$ for all substitutions $\sigma$;
see the associated technical report~\cite{pohjola:newpsireport}
for a discussion on how this assumption can be lifted.
We will construct a target psi-calculus and an encoding such that the
target terms make no use of the $\caseonly$ operator.
The target language $\mathcal{E}(\mathcal{P})$ adds to $\terms$ the
ability to tag a term $M$ with a name $x$; we write $M_x$ for the
tagged term.
We write $\alpha_x$ for tagging the subject of the prefix $\alpha$ with $x$.
Tags are used to uniquely identify which choice statement a prefix is a summand of.
As the assertions of $\mathcal{E}(\mathcal{P})$ we use $\assertions \times \powerfin{\nameset}$,
where $\powerfin{\nameset}$ are the \emph{disabled tags}.
\begin{defi}[Target language]\label{def:targetlang} Let
$\mathcal{P}=(\terms,\assertions,\conditions,\vdash,\otimes,\unit,\chcon)$
be a psi-calculus.
Then $\mathcal{E}(\mathcal{P}) = (\terms_\mathcal{E},\assertions_\mathcal{E},\conditions_\mathcal{E},\vdash_\mathcal{E},\otimes_\mathcal{E},\unit_\mathcal{E},\chcon_\mathcal{E})$
is a psi-calculus whose components are as follows:
\[
\begin{array}{rlll}
\terms_\mathcal{E} & = & \multicolumn{2}{l}{\terms \uplus \{M_x: x \in \nameset, M \in \terms_\mathcal{E}\}}
\\
\assertions_\mathcal{E} & = & \multicolumn{2}{l}{\assertions \times \powerfin{\nameset}
}\\
\conditions_\mathcal{E} & = & \multicolumn{2}{l}{\conditions \uplus \{M \chcon N: M,N \in \terms\} \uplus \nameset
}\\
(\Psi,\mathbf{N}) \otimes_\mathcal{E} (\Psi',\mathbf{N}') & = & \multicolumn{2}{l}{(\Psi \otimes \Psi',\mathbf{N} \cup \mathbf{N}')
}\\
\unit_\mathcal{E} & = & (\one,\emptyset)
&\\
& & & \\
(\Psi,\mathbf{N}) & \vdash_\mathcal{E} & \varphi & \mbox{if } \varphi \in \conditions \mbox{ and } \Psi \vdash \varphi
\\
(\Psi,\mathbf{N}) & \vdash_\mathcal{E} & x & \mbox{if } x \in \nameset \mbox{ and } x \in \mathbf{N}
\\
(\Psi,\mathbf{N}) & \vdash_\mathcal{E} & M_x \chcon N_y & \mbox{if } \Psi \vdash M \chcon N \mbox{ and } x \neq y \mbox{ and } x,y\notin\mathbf{N}
\\
(\Psi,\mathbf{N}) & \vdash_\mathcal{E} & M_x \chcon N & \mbox{if } \Psi \vdash M \chcon N \mbox{ and } x\notin\mathbf{N}
\\
(\Psi,\mathbf{N}) & \vdash_\mathcal{E} & M \chcon N_x & \mbox{if } \Psi \vdash M \chcon N \mbox{ and } x\notin\mathbf{N}
\end{array}
\]
\end{defi}
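\noindent
The lifted connectivity check on channels is mechanical. A minimal Haskell sketch for
single-level tags, parametric in the source connectivity check; the types are
illustrative, and the clause for two untagged channels merely records our assumption
that such conditions are deferred to the source entailment:
\begin{verbatim}
import qualified Data.Set as Set

type Name = String

-- A channel: a source term, or a source term tagged with a name.
data Chan t = Plain t | Tagged t Name

connE :: (t -> t -> Bool)   -- source-level check  Psi |- M <-> N
      -> Set.Set Name       -- the disabled tags
      -> Chan t -> Chan t -> Bool
connE conn disabled l r = case (l, r) of
  (Tagged m x, Tagged n y) -> conn m n && x /= y && ok x && ok y
  (Tagged m x, Plain n)    -> conn m n && ok x
  (Plain m,    Tagged n y) -> conn m n && ok y
  (Plain m,    Plain n)    -> conn m n  -- assumed: as in the source
  where ok x = not (x `Set.member` disabled)
\end{verbatim}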
We assume that the target language is constrained by a sorting system as in~\cite{DBLP:conf/tgc/BorgstromGPVP13}
that ensures only terms $M \in \terms$ can ever occur as objects of communication,
and in particular, that for all substitutions $\subst{\ve{x}}{\ve{T}}$,
$\ve{T} \subseteq \terms$.
The purpose of this simplification is to avoid having to consider input transitions such as
\[\framedtrans{\Psi}{\inprefix{M}{\ve{x}}{N}.P}{\inlabel{K}{K_x}}{P'}\]
\noindent that may result in substitutions where tagged terms must be substituted into
source-language terms or vice versa. It is possible to lift this assumption at the cost
of significant technical inconvenience~\cite{pohjola:newpsireport}.
The encoding $\semb{\_}$ from $\mathcal{P}$ to $\mathcal{E}(\mathcal{P})$
is homomorphic on all operators except assertion and choice, where it is defined as follows:
\begin{mathpar}
\semb{\pass{\Psi}} = \pass{(\Psi,\emptyset)}
\and
\semb{\alpha.P + \beta.Q} =
(\nu x)(\alpha_x.(\semb{P} \parop \pass{(\unit,\{x\})}) \parop \beta_x.(\semb{Q} \parop \pass{(\unit,\{x\})}))
\end{mathpar}
\noindent where $x \freshin \alpha,\beta,P,Q$.
\noindent If we disregard the tag $x$, we see that the encoding simply offers up both summands
in parallel. This clearly allows all behaviours of $\alpha.P + \beta.Q$, but there are two
additional behaviours we must prevent: (1) communication between the summands, and (2)
lingering summands firing after the other branch has already been taken.
The tagging mechanism prevents both,
as a consequence of how we define channel equivalence on tagged terms in $\mathcal{E}(\mathcal{P})$: tagged channels are connected if the underlying channel is connected.
To prevent (1), Definition~\ref{def:targetlang} requires the tags of connected channels
to be different,
and to prevent (2) the definition requires that the tags
are not disabled. Note that this channel connectivity is not transitive, not reflexive, and
not monotonic wrt.~assertion composition---not even if the source language connectivity is.
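Read operationally, the encoding is a small source-to-target transformation. The following Python sketch (ours, purely illustrative; the ad-hoc process representation, the tagging of whole prefixes rather than of their subjects, and all identifiers are our own assumptions) mirrors the two non-homomorphic clauses above.
\begin{verbatim}
# Illustrative sketch of the two non-homomorphic clauses of the
# encoding; processes are plain tuples, and for brevity we tag the
# whole prefix instead of only its subject.
import itertools

_fresh = (f'x{i}' for i in itertools.count())

def encode_assertion(psi):
    # |(Psi)|  becomes  |(Psi, {})|
    return ('assert', (psi, frozenset()))

def encode_choice(alpha, P, beta, Q, encode):
    # alpha.P + beta.Q  becomes a restriction over two tagged
    # prefixes, each of which disables the shared tag x on firing
    x = next(_fresh)
    disable = ('assert', ('unit', frozenset({x})))
    left = ('prefix', ('tag', alpha, x), ('par', encode(P), disable))
    right = ('prefix', ('tag', beta, x), ('par', encode(Q), disable))
    return ('res', x, ('par', left, right))
\end{verbatim}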
\begin{exa}
We illustrate the operational behaviour of the encoding for the process $R = \alpha.P + \beta.Q$.
Its encoding is
\[\semb{R} = (\nu x)(\alpha_x.(\semb{P} \parop \pass{(\one,\{x\})}) \parop \beta_x.(\semb{Q} \parop \pass{(\one,\{x\})}))\]
\noindent where $x$ is a fresh name. Suppose $\alpha$ is an output prefix with subject $M$,
and that channel connectivity is reflexive.
Then we can derive the transition $\trans{R}{\alpha}{P}$.
The corresponding derivation from $\semb{R}$ uses the connectivity judgement $M_x \chcon M$ in the \textsc{Out} rule to derive the following transition:
\[\trans{\semb{R}}{\alpha}{(\nu x)(\semb{P} \parop S)}\]
\noindent where
\[S = \pass{(\one,\{x\})} \parop \beta_x.(\semb{Q} \parop \pass{(\one,\{x\})})\]
\noindent Since $x$ is fresh in $\semb{P}$, by scope extension we have
$(\nu x)(\semb{P} \parop S) \bisim \semb{P} \parop (\nu x)S$.
Moreover, we have $(\nu x)S \bisim 0$. To see why, note first that $(\nu x)S$ has no
outgoing transitions: its only prefix has the tag $x$, which is disabled by its
top-level assertion $\pass{(\one,\{x\})}$. Second, note that since this disabled tag is a local
name, its disabling has no effect on the environment.
\end{exa}
\begin{thm}[Correctness of choice encoding]\label{thm:choicecorrect}\
\begin{enumerate}
\item If $\framedtrans{\Psi}{P}{\alpha}{P'}$
then there is $P''$ such that $\framedtrans{(\Psi,\emptyset)}{\semb{P}}{\alpha}{P''}$
and $\trisimsub{(\Psi,\emptyset)}{P''}{\semb{P'}}$.
\item If $\framedtrans{(\Psi,\emptyset)}{\semb{P}}{\alpha}{P'}$
then there is $P''$ such that $\framedtrans{\Psi}{P}{\alpha_\bot}{P''}$
and $\trisimsub{(\Psi,\emptyset)}{P'}{\semb{P''}}$.
\item $\trisimsub{\one}{P}{Q}$ iff $\trisimsub{(\one,\emptyset)}{\semb{P}}{\semb{Q}}$.
\end{enumerate}
\end{thm}
\begin{proof}
A full proof is available in the technical report~\cite{pohjola:newpsireport}.
Forward simulation is by induction on the derivation of $\framedtrans{\Psi}{P}{\alpha}{P'}$,
backward simulation by structural induction on $P$ followed by
inversion on the derivation of the transition from $\semb{P}$.
Full abstraction is by showing that
\[\{((\Psi,\mathbf{N}),\semb{P},\semb{Q}) : \trisimsub{\Psi}{P}{Q}\}\]
and
\[\{(\Psi,P,Q) : \trisimsub{(\Psi,\emptyset)}{\semb{P}}{\semb{Q}}\}\]
are bisimulation relations.
\end{proof}
\noindent Here $\alpha_\bot$ denotes the label $\alpha$ with all tags removed.
It is immediate from Theorem~\ref{thm:choicecorrect} and the definition of $\semb{\_}$
that our encoding also satisfies the other standard quality criteria~\cite{DBLP:journals/iandc/Gorla10}:
it is compositional, it is name invariant, and it preserves and reflects barbs and divergence.
In the original psi-calculi, our target language is invalid because of its non-transitive
connectivity.
However,
if we restrict attention to \emph{separate} choice (where either both summands are inputs
or both summands are outputs),
a slight modification of the scheme above yields a correct encoding
in the context of the original psi-calculi.
With separate choice we can drop Definition~\ref{def:targetlang}'s side-condition
that the tags of connected channels must be distinct---
the only use of this side-condition is to prevent communication between summands,
and separate choice already prevents this by construction.
With this modified definition of $\chcon_{\mathcal{E}}$,
we have that if $\chcon$ is symmetric and transitive, then
so is $\chcon_{\mathcal{E}}$. Or in other words, if the source language
is expressible in the original psi-calculi then so is the target language.
These results generalise in a straightforward way to mixed $\caseonly$
statements
\[{\caseonly\;{\ci{\varphi_1}{\alpha.P}}\casesep{\ci{\varphi_2}{\beta.Q}}}\]
by additionally tagging terms with a condition, i.e.~$M_{x,\varphi_1}$, that must be entailed
in order to derive connectivity judgements involving the term.
The generalisation to free choice, i.e.~$P+Q$ where $P,Q$ can be anything,
is more involved and sacrifices some compositionality.
The idea is to use sequences of tags, representing which branches
of which (possibly nested) case statements a prefix can be found in, and disallowing
communication between prefixes in distinct branches of the same $\caseonly$ operator.
\section{Conclusion and related work}
We have seen how psi-calculi can be conservatively extended to allow asymmetric and non-transitive
communication topologies, sacrificing none of the bisimulation meta-theory.
This confers enough expressiveness to capture a pi-calculus with preorders,
and makes mixed choice a derived operator.
The work of Hirschkoff et al.~\cite{DBLP:conf/lics/HirschkoffMS13}
is closely related in that it uses non-transitive connectivity;
see Section~\ref{sec:prepi} for an extensive discussion.
Broadcast psi-calculi~\cite{DBLP:journals/sosym/BorgstromHJRVPP15} extend
psi-calculi with broadcast communication in addition to point-to-point communication.
There, point-to-point channels must still be symmetric and transitive,
but for broadcast channels this condition is lifted, at the cost
of introducing other side-conditions on how names are used:
broadcast prefixes must be connected via intermediate \emph{broadcast channels}
which have no greater support than either of the prefixes they connect,
precluding language features such as name fusion.
We believe provenances could be used to define a version of broadcast psi-calculi
that does not need this side-condition.
Kouzapas et al.~\cite{DBLP:journals/corr/KouzapasGG14} define a similar
reduction context semantics for (broadcast) psi-calculi.
Their reduction contexts require three kinds of numbered holes with
complicated side-conditions on how the holes may be filled;
we have attempted to simplify the presentation by having
only one kind of hole.
While (weak) barbed congruence for psi-calculi has been studied before
\cite{DBLP:conf/lics/JohanssonBPV10} (see Section~\ref{sec:validation}),
barbed congruence was defined in terms of the labelled semantics rather
than a reduction semantics, thus weakening its claim to independent
confirmation slightly.
There is a rich literature on choice encodings for the pi-calculus%
~\cite{DBLP:journals/iandc/Gorla10,DBLP:journals/iandc/NestmannP00,DBLP:conf/popl/Palamidessi97,DBLP:conf/fossacs/PetersN12,DBLP:conf/esop/PetersNG13},
with many separation and encodability results
under different quality criteria for different flavours of choice.
Encodings typically require complicated protocols and tradeoffs between quality criteria.
Thanks to the greater expressive power of psi-calculi,
our encoding is simpler and satisfies stronger quality criteria
than any choice encoding for the pi-calculus.
Closest to ours is the choice encoding of CCS into
the DiX calculus by Busi and Gorrieri~\cite{DBLP:conf/ecoopw/BusiG94}.
DiX introduces a primitive for annotating processes with \emph{conflict sets},
that are intended as a generalisation of choice.
Processes with overlapping conflict sets cannot interact, and when a process acts, every
process with an overlapping conflict set is killed.
These conflict sets perform the same role in the encoding as our tags do.
We believe the tagging scheme used in our choice encoding also captures DiX-style
conflict sets.
\section*{Acknowledgements}
These ideas have benefited from discussions with many people at Uppsala University, ITU Copenhagen,
the University of Oslo and Data61/CSIRO, including
Jesper Bengtson, Christian Johansen, Magnus Johansson and Joachim Parrow.
I would also like to thank Jean-Marie Madiot and the anonymous reviewers
for valuable comments on earlier versions of the paper.
\FloatBarrier
\bibliographystyle{alpha}
\section{Introduction}
In real-world distributed systems, the distributed components are often failure-prone.
As it is often hard to show that the undesired failures of these unreliable components happen with sufficiently low probabilities, these components are often allowed to fail arbitrarily, i.e., to be Byzantine \citep{RN2119}, in designing highly reliable fault-tolerant systems.
Meanwhile, by assuming some unit reliability of the distributed components and the independence of component failures in distributed systems, the probabilities of more than some number of the distributed components being simultaneously faulty would be sufficiently low \citep{Powell1992assumption}.
In this background, various kinds of Byzantine-fault-tolerant protocols (\emph{Byzantine protocols} for short) have been proposed in building reliable services with interconnected unreliable components.
However, most of the Byzantine protocols are proposed with the assumption of fully connected networks.
As the numbers of independent communication channels of the distributed components (referred to as the \emph{nodes}) are often restricted in practice, these Byzantine protocols need to be extended to networks with low node degrees, especially in large-scale systems \citep{Leighton1992On}.
Unfortunately, as the network connectivity, message complexity, and communication rounds needed for reaching Byzantine fault-tolerance can hardly all be lowered to satisfy the requirements of real-world applications, Byzantine protocols are still not widely employed in large networks, even with randomization \citep{FM1997,King2011Breaking} and at the expense of a portion of \emph{given-up} nonfaulty nodes \citep{Dwork1986,BPG1989,Chandran2010}.
In this paper, to further break the limitations on node degrees, messages, and time in Byzantine protocols, we propose a new paradigm for designing efficient Byzantine protocols in large sparse networks while retaining high reliability.
Firstly, by identifying the main obstacle to further optimizing the state-of-the-art Byzantine protocols in sparse networks, we propose that the basic assumption of the traditional Byzantine adversary should be extended in some ways to better fit large-scale systems.
Concretely, we would refine the original Byzantine adversary without weakening it.
The adversary can still arbitrarily choose the Byzantine nodes from all the nodes which run the same protocol.
Meanwhile, for constructing multi-scale Byzantine protocols in which several sub-protocols can run at several subsystem scales, a finer \emph{multi-scale adversary} would be defined to better capture the multi-scale characteristics of large networks.
For this, we would first derive an approximate measurement of the system assumption coverage for constructing the multi-scale adversaries.
Then, by assuming some \emph{sufficiently strong} multi-scale adversaries, we would show that efficient Byzantine protocols can be designed in large sparse networks.
As the system assumption coverage is derived with only the general fault-independence assumption of distributed systems, high reliability can be reached if only this general assumption is not breached in real-world systems.
Compared with state-of-the-art Byzantine protocols proposed for sparse networks, by adopting some sufficiently strong adversaries, the node degrees, message complexity, and communication rounds of the multi-scale Byzantine agreement are all reduced to logarithmic, which breaks the former limitations on these parameters.
Meanwhile, by refining rather than weakening the adversaries, the results (including both the possibilities and the impossibilities) built upon the classical adversaries remain valid.
With this, classical solutions can be employed as building blocks in playing the game with the finer adversary without losing their tightness in coping with the traditional adversaries.
So, compared with the \emph{benign} adversary \citep{BIELY20115602}, the \emph{random} adversary \citep{BENOR1996329boundeddegree}, and other kinds of weak adversaries, the classical results can be better leveraged with the multi-scale adversaries.
Also, comparing with the randomized solutions \citep{FM1997,King2011Breaking}, the deterministic solutions developed with the multi-scale adversaries can provide a better trade-off between the system assumption coverage and the fault-tolerance efficiency.
The rest of this paper is organized as follows.
The related work and basic definitions are respectively given in Section~\ref{sec:related} and Section~\ref{sec:model}.
In Section~\ref{sec:obstacle}, the main obstacle of providing Byzantine-fault-tolerance in sparse networks is identified with concrete examples.
With this, an approximate measurement of the system assumption coverage is introduced, and the multi-scale adversary is proposed.
Then, efficient Byzantine protocols are developed with some multi-scale adversaries in Section~\ref{sec:algo}.
Lastly, we conclude the paper in Section~\ref{sec:con}.
\section{Related work}
\label{sec:related}
In the literature, \cite{PSL1980} provides the first Byzantine protocol for reaching deterministic agreement among the unreliable distributed components (i.e., the nodes).
From then on, the \emph{Byzantine Generals} problem \citep{RN2119} has been widely investigated in synchronous systems with the assumption of a malicious adversary who can arbitrarily choose a portion of the nodes in the system and arbitrarily control these nodes in preventing the other nodes from reaching an agreement.
Generally, it is shown that in tolerating $f$ Byzantine nodes, the number of nodes in the system cannot be less than $3f+1$, the network connectivity cannot be less than $2f+1$, and the deterministic execution time cannot be less than $f+1$ synchronous rounds \citep{Dolev1982StrikeAgain}.
In practice, although these lower-bounds might be acceptable in some small-scale systems, it is hard to apply the classical Byzantine protocols in large-scale systems.
In extending classical Byzantine protocols in large networks, several approaches have been proposed.
Firstly, by \emph{giving up} a small portion of nonfaulty nodes, \emph{incomplete} Byzantine protocols \citep{Dwork1986,BPG1989,Upfal1992,Chandran2010} can be built upon networks with small node degrees.
In this approach, \cite{Dwork1986} shows that deterministic \emph{almost everywhere} Byzantine agreement (BA) can be built upon \emph{bounded-degree} networks with constant node degrees.
\cite{Upfal1992} improves this result with constant Byzantine resilience at the expense of higher computational complexity.
Later in \cite{Chandran2010}, the computational complexity and the \emph{incompleteness} of the secure communication protocols are reduced at the expense of higher node degrees.
However, the scalability of the incomplete Byzantine protocols is still restricted by the overall message complexity, computational complexity, and basic communication rounds.
Secondly, by employing randomization, \emph{randomized} Byzantine protocols \citep{FM1989,FM1997,King2011Breaking} can achieve fast termination or lower message complexity.
In this approach, \cite{FM1989,FM1997} show that the secret-sharing-based \citep{ShareSecret1979} \emph{randomized} BA can terminate in expected constant rounds.
\cite{King2011Breaking} shows that the message complexity of \emph{randomized} BA can be lowered to $o(n^2)$.
However, the required communication rounds and message complexity can hardly both be reduced.
Meanwhile, these protocols are provided for fully connected networks.
Moreover, all randomized protocols are built upon an additional assumption of even distribution and independence of the generated random numbers.
This additional assumption makes the system reliability relying on the realization of the pseudo-random numbers.
On the whole, even with randomization, no Byzantine agreement can reach its goal with sublinear node degree, sublinear message complexity, and sublinear communication rounds at the same time.
These features gravely restrict the applications of higher-layer Byzantine protocols in distributed systems with large numbers of unreliable components.
To further reduce the overall complexity, network connectivity, and communication rounds of fault-tolerant protocols, another approach is to reinvestigate the basic fault assumption.
In \cite{Powell1992assumption}, by establishing a measurement of the \emph{component assumption coverage} for different failure modes, the author argues that protocols designed with an inappropriate Byzantine fault assumption might be outperformed by protocols designed with some benign fault assumptions.
Thus, instead of handling the overall fault-tolerance problem under the traditional Byzantine adversary, a practical way is to provide the solutions directly with some sufficiently high system reliability.
In this approach, \cite{Steiner2008StartupRecovery} shows that high-reliable hard-real-time systems can be built upon practical Byzantine protocols with restricted failure modes of some communication components.
\cite{Gradient2019} shows that efficient self-stabilizing Byzantine clock synchronization can be built with a restricted Byzantine adversary.
\cite{YuCOTS2021} even shows that an efficient self-stabilizing synchronization solution can be built with standard COTS Ethernet components.
However, all these protocols are built upon some \emph{weak} single-scale adversaries.
With this, the system reliability would depend not only on the algorithms and the unit reliability of the nodes but also on the \emph{component assumption coverage} and the restricted power of the single-scale adversary.
\section{Basic model and assumptions}
\label{sec:model}
\subsection{Basic system model}
Generally, the fault-tolerant system $\mathcal{S}$ consists of $n\in \mathbb N$ nodes (denoted as $V$) connected in an undirected network $G=(V,E)$.
In the words of fault-tolerance, each such node can be viewed as a fault-containment region (FCR) in considering the propagation of local faults.
Namely, with the definition of FCR \citep{RN896}, the faults occurring in an FCR cannot be directly propagated to another FCR in the system $\mathcal{S}$.
Nevertheless, the faulty nodes can manifest arbitrary run-time errors as the result of the occurrence of the Byzantine faults and may propagate these \emph{errors} to the nonfaulty nodes or even the whole system, if the protocols running in the system cannot well tolerate the faults occurring in a sufficient portion of the nodes in $\mathcal{S}$.
To design a Byzantine protocol $A$ running in $V$, we assume that the adversary can arbitrarily corrupt a subset $F\subset V$ and make all nodes in $F$ collude together in preventing the nonfaulty nodes $U=V\setminus F$ from reaching their desired goals in $\mathcal{S}$.
Such desired goals can be synchronous agreement, secure communication, reliable broadcast, etc.
Being compatible with \cite{Dwork1986,Upfal1992}, the nonfaulty nodes are also called the correct nodes in the synchronous systems.
By denoting the maximal allowed $|F|$ as $f$, the Byzantine resilience of the protocol $A$ is represented as $\alpha_{A,G}=f/n$.
With classical results \citep{RN2119}, we have $\alpha_{A,G}\in[0,1/3)$.
For simplicity, we assume that the adversary is static and $\mathcal{S}$ is synchronous.
Namely, $F$ is fixed during the execution of $\mathcal{S}$.
Besides, denoting $U=\{1,2,\dots,|U|\}$, the current round state of $\mathcal{S}$ can be represented as $x(k)=(x_1(k),x_2(k),\dots,x_{|U|}(k))$, where $x_i(k)$ is the state of node $i\in U$ in the $k$th round of the execution of $\mathcal{S}$.
Then, by collecting the $k$th round states of all neighbours of $i$ (including $i$), every node $i\in U$ would update its state as $x_i(k+1)$ during the $(k+1)$th round of the execution of $\mathcal{S}$.
In this paper, we only discuss fixed-round executions of $\mathcal{S}$.
With this, the states of $\mathcal{S}$ before the first and after the last rounds of an execution are respectively called the input and output of the execution.
\subsection{Large sparse networks}
Denoting the node degree of each node $i\in V$ in $G$ as $d_i$, $G$ is said to be sparse if $d=\max_{i\in V}\{d_i\}$ is sublinear to $f$.
In other words, we have $\forall{i\in V}:d_i=o(f)$ in sparse networks.
In such networks, as the adversary can corrupt all neighbors of some nonfaulty node $i\in U$ and thus separate $i$ from all other nonfaulty nodes $U\setminus\{i\}$ in the system $\mathcal{S}$, at most a portion of the nonfaulty nodes can reach their desired goal with a fixed Byzantine resilience.
In other words, there would be some nonfaulty nodes being given up in tolerating $f\geqslant d$ Byzantine nodes in the sparse network $G$.
Given the network $G$ and the faulty nodes $F$, the set of all given-up nonfaulty nodes in running the $A$ protocol is denoted as $X_{A}(F,G)$.
Following \cite{Upfal1992}, by denoting $Z_{A}(F,G)=F\cup X_{A}(F,G)$ and $P_{A}(F,G)=V\setminus Z_{A}(F,G)$, it is required that the nodes in $P_{A}(F,G)$ should reach their desired goal in $\mathcal{S}$.
Denoting $x_{A}=\max_{F\subset V}|X_{A}(F,G)|$ with $|F|\leqslant f$, $A$ is said to be an $x_{A}$-incomplete Byzantine protocol in tolerating $f$ Byzantine nodes in $G$ under the traditional adversary.
For a large-scale system $\mathcal{S}$ with hundreds or thousands of nodes, it is common that some protocols only run in a subset of $V$ in $\mathcal{S}$.
In this context, if a protocol $A_0$ runs only in $V_0\subset V$, we assume that no more than $\lfloor \alpha_{A_0,V_0}|V_0|\rfloor$ nodes in $V_0$ can be corrupted by the adversary.
Besides, following the assumption of \emph{independent failure of components} (which is a basic assumption for distributed systems), we assume that the faults occurring in different nodes of $\mathcal{S}$ are independent of each other.
Following \cite{Powell1992assumption}, by expressing the unit reliability of a node $i\in V$ in some desired duration $\tau>0$ as $r_{i,\tau}=e^{-\lambda_i \tau}$, the failure rate of $i$ during the same duration $\tau$ can be represented as $p_{i,\tau}=1-r_{i,\tau}$.
For simplicity, we assume that all nodes in $V$ share the same unit reliability $r$ during the specific duration $\tau$, and thus the failure rate of every node in $V$ is simplified as $p=1-r$.
In considering practical scenarios, we assume $p\leqslant 10^{-4}$ with $\tau$ being $1$ hour.
\section{The asymmetry and the multi-scale adversary}
\label{sec:obstacle}
In designing Byzantine protocols for large-scale systems, it is crucial to have low complexity, fast termination, affordable networking requirement, low incompleteness, and sufficiently high resilience.
However, these desired properties can hardly be provided simultaneously with the assumption of the traditional adversary.
To ascertain this, an observation of some asymmetry of the sparse networks might be heuristic.
\subsection{The undesired asymmetry}
\label{subsec:asymmetry}
For a concrete example, let us examine the secure communication protocols proposed in the bounded-degree networks.
An interesting observation given in \cite{Upfal1992} shows that $2t$ arbitrarily chosen faulty nodes cannot contaminate all transmission paths, while $t$ such nodes can already contaminate more than half of the transmission paths.
Intuitively, this means that the adversary can leverage some asymmetry of the transmission paths.
However, to prevent the adversary from leveraging such asymmetry, we cannot expect to derive some weighted transmission schemes with parallel transmission paths.
To get an intuitive understanding of this, recall that the very initial fault-tolerance problem encountered in bounded-degree networks is that the faulty ones can overwhelmingly surround some correct nodes.
And in the incomplete solutions upon such networks, some correct nodes are allowed to be \emph{poor} (being \emph{given up}) and the remained \emph{non-poor} correct (\emph{npc} for short, also referred to as the \emph{privileged} nodes in \cite{Chandran2010}) nodes are expected to reach their desired goals in the Byzantine protocols.
Are all such \emph{npc} nodes equally \emph{non-poor} in a bounded-degree network?
Obviously, the answer is \emph{no}, since the adversary can place more faulty nodes near some \emph{npc} nodes to make them \emph{poorer} than the other \emph{npc} nodes.
With this intuition, the so-called \emph{non-poor} property might better be extended to some multivalued \emph{luck} property, represented as $\omega(F,i)$ for every node $i\in V$ with the specific $F$.
For example, we can set $\omega(F,i)=0$ if $i\in F$ and define the \emph{npc} nodes as the ones whose \emph{lucks} are beyond some \emph{good-luck} threshold $\omega_0$.
However, as we do not know which nodes would be chosen in $F$ during any concrete execution, we do not know the lucks of the nodes before the execution.
So, for secure communication between two \emph{npc} nodes $i,j$ in playing the game with the traditional adversary, we can only assume that the lucks of $i$ and $j$ are just equal to the threshold $\omega_0$ in considering the \emph{worst cases}.
Thus, the fact that some pairs of the \emph{npc} nodes might have better \emph{lucks} than $\omega_0$ cannot be leveraged in designing secure communication protocols.
In this situation, on the one hand, for lower complexity, lower node degrees, and higher resilience, the good-luck threshold $\omega_0$ should be higher.
Nevertheless, on the other hand, for lower incompleteness, $\omega_0$ should be lower.
This dilemma gravely restricts the efficiency of secure communication protocols in large sparse networks.
\subsection{A finer assumption for multi-scale systems}
In offsetting the asymmetry, one possible way is to develop a better \emph{luck} property with a well-balanced \emph{good-luck} threshold in designing specific Byzantine protocols.
However, that would be coupled with the specific goals of the Byzantine protocols.
Alternatively, instead of taking the direction to construct Byzantine protocols only under the traditional adversary, it might make sense to reinvestigate some basic assumptions about the adversary.
Namely, the traditional assumption about the adversary is originally abstracted from fully connected small networks.
In large sparse networks (often with multiple scales in integrating the building blocks; for example, see \cite{Chandran2010,Jayanti2020}), such an assumption seems too coarse to capture the actual properties of real-world systems.
Concretely, in a large-scale system $\mathcal{S}$, we often want to first construct some small-scale system $\mathcal{S}_0$ with the nodes $V_0$ satisfying $|V_0|\ll n$.
In constructing $\mathcal{S}_0$, we assume only the nodes in $V_0$ being employed.
Now with the assumption of \emph{independent failure of components}, as the failure rate of every node $i\in V$ is no worse than $p$ for some desired working hours, it would suffice to assume that there are no more than $\lfloor \alpha_0 |V_0|\rfloor$ faulty nodes (still arbitrarily chosen by the adversary) with some constant $\alpha_0\in (0,1)$ for satisfying any desired system reliability \citep{Powell1992assumption,Kopetz2004Hypothesis}.
Even when we choose some nodes in $V_0\subset V$ to further construct some other larger-scale systems with $|V|\gg |V_0|$, the assumption of up to $\lfloor \alpha_0|V_0|\rfloor$ faulty nodes in $V_0$ can remain unchanged.
In considering that the added complexity in realizing the nodes in $V_0$ might incur some additional failure rate in each such node, we can simply account for the worst case in $p$.
This makes sense because all qualified real-world devices can provide some constant failure rate $p$ despite the various working loads.
So, the innocence of $V_0$ should be defended against the adversary: $V_0$ should pay no more than it deserves for running a protocol in just the $|V_0|$-scale system.
To be precise, when some protocol runs only with the nodes in $V_0$, there is no reason that the adversary can corrupt more than $\lfloor \alpha_0|V_0|\rfloor$ nodes for some constant $\alpha_0$.
Thus, it is better to consider the adversary in some multi-scale context when there are protocols running in more than one scale in the system.
Note that such a \emph{finer} assumption does not contradict the traditional one.
Namely, in the largest scale $n=|V|$, the adversary can still arbitrarily corrupt up to $\alpha n$ nodes in $V$ (the rounding operations are ignored for simplicity when $n$ is large).
Meanwhile, the \emph{multi-scale adversary} can arbitrarily corrupt up to $\alpha_l n_l$ nodes in the protocols running for the $n_l$ nodes.
Generally, the resilience constant can be extended to a resilience function $\alpha:\mathbb N\to [0,1)$ such that the adversary can arbitrarily corrupt up to $\lfloor\alpha(s)s\rfloor$ nodes in any given $s$ nodes.
\subsection{A measurement of the system assumption coverage}
So, given the failure-rate $p$ of the unreliable nodes, the critical problem is to provide the resilience function $\alpha$ for the multi-scale system $\mathcal{S}$ with a sufficiently high system assumption coverage.
Here the \emph{system assumption coverage} is extended from the \emph{component assumption coverage} \citep{Powell1992assumption} where the failure modes of the components are the main concern.
Denoting $\mathcal{A}$ as the set of all instances of the Byzantine protocols running in $\mathcal{S}$ and $V_A$ as the set of nodes who run the instance $A\in\mathcal{A}$ in $\mathcal{S}$, the system assumption coverage of $\mathcal{S}$ under $\alpha$ can be represented as
\begin{eqnarray}
\label{eq:assumption coverage} R= \prod_{A\in \mathcal{A}}{Q(\lfloor\alpha(|V_{A}|)|V_{A}|\rfloor,|V_{A}|)}
\end{eqnarray}
where $Q(t,s)$ is a lower-bound of the probability that there are no more than $t$ faulty nodes in the overall $s$ nodes in the distributed system.
With the assumption of \emph{independent failure of components}, $Q(t,s)$ can be generally represented as
\begin{eqnarray}
\label{eq:t_in_s} Q(t,s)= \sum_{i=0}^{t}{\tbinom{s}{i} p^{i}(1-p)^{s-i}}
\end{eqnarray}
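For concreteness, the binomial tail in (\ref{eq:t_in_s}) can also be evaluated exactly before resorting to the approximations below; the following Python sketch (ours, for illustration only) does so directly.
\begin{verbatim}
from math import comb

def Q(t, s, p):
    """Probability of at most t faulty nodes among s nodes, each
    failing independently with rate p (exact binomial sum)."""
    return sum(comb(s, i) * p**i * (1 - p)**(s - i)
               for i in range(t + 1))

# Example: a 7-node instance tolerating 2 faults at p = 1e-4 is
# breached with probability 1 - Q(2, 7, 1e-4), about 3.5e-11.
\end{verbatim}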
With Stirling's approximation $n!\approx \sqrt{2\pi n}(n/e)^n$ \citep{2010Art}, we approximately get
\begin{eqnarray}
\label{eq:approx} \binom{s}{i}\approx \sqrt{\frac{s}{2\pi (s-i)i}}(\frac{s}{s-i})^{s-i}(\frac{s}{i})^i
\end{eqnarray}
Thus, when $s$ is sufficiently large, with $\lim_{s\to \infty}(\frac{s}{s-i})=1$ and $\lim_{s\to \infty}(\frac{s}{s-i})^{s-i}=e^i$, we have
\begin{eqnarray}
\label{eq:approx2} \binom{s}{i}\approx \sqrt{\frac{1}{2\pi i}}(\frac{es}{i})^i
\end{eqnarray}
and thus
\begin{eqnarray}
\label{eq:approx3} Q(t,s)\approx \sum_{i=0}^{t}{\sqrt{\frac{1}{2\pi i}}(\frac{esp}{i})^i (1-p)^{s-i}}
\end{eqnarray}
For the convenience of calculation, as $R$ and $Q(t,s)$ are both very close to $1$, we denote $\nu=1-R$ and $P(t,s)=1-Q(t,s)$.
In our case, as the adversary can arbitrarily choose $F$ with $|F|\leqslant f$ and make all nodes in $F$ fail arbitrarily, $R$ and $\nu$ also respectively represent the system reliability and system failure-rate with respect to the specific working hours.
To calculate $Q(t,s)$, as the ratio of two adjacent items in the right side of (\ref{eq:t_in_s}) can be represented as
\begin{eqnarray}
\label{eq:ratio0} \frac{{\tbinom{s}{i+1} p^{i+1}(1-p)^{s-i-1}}}{{\tbinom{s}{i} p^{i}(1-p)^{s-i}}}= \frac{(s-i)p}{(i+1)(1-p)}
\end{eqnarray}
we have
\begin{eqnarray}
\label{eq:ratio} P(t,s)<\frac{\tbinom{s}{t} p^{t}(1-p)^{s-t}}{\beta-1}\approx {\sqrt{\frac{1}{2\pi t}}(\frac{esp}{t})^t \frac{(1-p)^{s-t}}{\beta-1}}
\end{eqnarray}
when $p\leqslant 1/(\beta s+1)$ holds.
By taking $\beta=2$ and $s<5000$, (\ref{eq:ratio}) would always hold with $p\leqslant 10^{-4}$.
Generally, any larger $s$ can also be handled by summing up the first $\beta ps$ items in calculating $Q(t,s)$.
From (\ref{eq:ratio}) we can see that, with the increase of $t$, $P(t,s)$ soon becomes negligible.
But when $t$ is small, $P(t,s)$ may have a significant effect on the overall system reliability.
So, to develop multi-scale systems, the main difficulty is to provide the small-scale protocols with high resilience.
Given such small-scale protocols, the larger-scale protocols can be built with a much-relaxed resilience function $\alpha$ for the larger $s$.
\section{Solutions and analysis}
\label{sec:algo}
In this section, we give some concrete examples of constructing multi-scale systems with multi-scale adversaries.
\subsection{Immediate Byzantine broadcast}
Firstly, as a simple and practical example, we show that with the assumption of a two-scale adversary, the logarithmic-round deterministic immediate Byzantine broadcast can be reached in logarithmic-degree networks with constant complexity.
Here, when the \emph{General} (correct or faulty) initiates the broadcast, the desired goal is reached iff
1) all correct nodes agree on the same value at the end of the same (finite) round and
2) all correct nodes agree on the value of the correct \emph{General}.
For this, the sparse network $G=(V,E)$ can be formed as an $s$-base hypercube $G_{\mathtt{H}s}$, as is shown in Fig.~\ref{fig:complementary2}.
\begin{figure}[htbp]
\centerline{\includegraphics[width=3.3in]{complementary2.png}}
\caption{The sparse network $G_\mathtt{H7}$ in a bird's-eye view.}
\label{fig:complementary2}
\end{figure}
In the $s$-base hypercube $G_{\mathtt{H}s}$ with $s=7$, each node is labeled with a $7$-base number and represented as a small circle in Fig.~\ref{fig:complementary2}.
Following the basic definition of a hypercube, for any two nodes $i,j\in V$, there is an edge $(i,j)$ on the undirected $G_\mathtt{H7}$ iff the labels of $i$ and $j$ are with one and only one different digit.
For example, in a $3$ dimensional $7$-base hypercube, the node $320$ is connected to the node $321$ and node $620$ but not connected to the node $230$ or node $231$.
As $G_\mathtt{H7}$ has at most $L=O(\log n)$ dimensions, $G_\mathtt{H7}$ is an $O(\log n)$-degree network.
By representing the $k$th dimension position of node $i$ in $G_\mathtt{H7}$ as the $k$th rightmost digit in the label of $i$, the nodes $a_L\cdots a_2 x$ with $x\in\{0,1,\dots,6\}$ form a $7$-node complete graph $K_7$ in the innermost (the first) dimension of $G_\mathtt{H7}$ shown in Fig.~\ref{fig:complementary2}.
As all $7$ nodes in an innermost $K_7$ are labeled with the same leftmost $L-1$ digits, these $L-1$ digits are used to label the innermost $K_7$.
With this, the node $a_L\cdots a_2 x$ is at the $x$ site in the $K_7$ labeled as $a_L\cdots a_2$.
For simplicity, the leftmost $0$ digits in a label can be omitted.
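The labeling convention can be checked mechanically; the following sketch (ours, with dimension $1$ taken as the rightmost digit, and all function names being our own) enumerates hypercube neighbors and reproduces the example above.
\begin{verbatim}
def neighbors(label, s, L):
    """Nodes of the s-base, L-dimensional hypercube whose labels
    differ from `label` in exactly one digit."""
    result = []
    for k in range(L):             # k = 0 is the innermost dimension
        d = (label // s**k) % s    # current digit in dimension k + 1
        result += [label + (v - d) * s**k for v in range(s) if v != d]
    return result

def from_digits(ds, s):
    """Label of a node written as usual, most significant digit first."""
    label = 0
    for d in ds:
        label = label * s + d
    return label

n320, n321, n620, n230 = (from_digits(d, 7) for d in
                          [(3,2,0), (3,2,1), (6,2,0), (2,3,0)])
assert n321 in neighbors(n320, 7, 3) and n620 in neighbors(n320, 7, 3)
assert n230 not in neighbors(n320, 7, 3)   # two digits differ
\end{verbatim}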
With $G_\mathtt{H7}$, the multi-scale Byzantine broadcast protocol can be constructed as follows.
Firstly, every node $i\in U$ would run one and only one $7$-node BA protocol $A_7$ in the innermost dimension of $G_\mathtt{H7}$ during the execution of $\mathcal{S}$.
By assigning one node $i_0\in V$ as the \emph{General}, the $7$ neighbors of $i_0$ (including $i_0$) in the innermost dimension of $G_\mathtt{H7}$ shown in Fig.~\ref{fig:complementary2} are said to be in the $0$ layer.
In the $0$ layer, the \emph{General} $i_0$ initiates its $7$ neighbors in the $0$ layer (denoted as $V_0$) with the current state of $i_0$ by running a very simple initiation protocol $I_7$.
Without loss of generality, let us assume the top-leftmost node $0$ in Fig.~\ref{fig:complementary2} being the \emph{General}.
Then, the $0$ layer BA protocol $A_7$ would be performed in $V_0$ (in the top-leftmost $K_7$ labeled with $0$) and would terminate in constant rounds.
At the termination of the $0$ layer BA, each node $j\in U\cap V_0$ would set its state with the agreed value and then initiate the $1$ layer $7$ neighbors of $j$ (in the vertical directions in Fig.~\ref{fig:complementary2}) with the current state of $j$ by running the same $I_7$ protocol.
With this, the nodes (denoted as $V_1$) in the other leftmost $6$ innermost $K_7$ (labeled from $1$ to $6$) would all be initialized.
Then, a differential BA protocol $B_7$ \citep{Garay2003} would be performed in parallel in each initialized innermost $K_7$ within constant rounds.
Similarly, at the termination of these differential BA instances, each node $j\in U\cap V_1$ would run the $I_7$ protocol to initiate the $2$ layer $7$ neighbors of $j$ (in the horizontal directions in Fig.~\ref{fig:complementary2}).
With this, the nodes (denoted as $V_2$) in the other $6$ columns (except the ones represented by the ellipsis) would all be initialized.
Then, the differential BA protocol $B_7$ would be performed in parallel in each initialized innermost $K_7$ (labeled from $10$ to $66$) within constant rounds.
Iteratively, this procedure would be performed until the $(L-1)$ layer differential BA terminates, at which point the value agreed in the BA instances run in every $j\in U$ is the final output of the overall protocol.
So, the overall protocol can terminate in $O(\log n)$ rounds with $O(1)$ complexity.
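Ignoring faults and the constant-round sub-protocols, the schedule of this layered broadcast is summarized by the layer in which each innermost $K_7$ is first initiated; the following sketch (ours, purely illustrative) computes it from the $K_7$ label.
\begin{verbatim}
def layer(k7_label):
    """Layer in which the innermost K_7 labeled (a_L, ..., a_2) is
    initiated: 0 for the General's K_7 (all digits zero), otherwise
    the dimension index of its most significant nonzero digit."""
    for pos, digit in enumerate(k7_label):   # (a_L, ..., a_2)
        if digit != 0:
            return len(k7_label) - pos
    return 0

# With L = 3 (two-digit K_7 labels): K_7 "0" is layer 0, "1".."6"
# are layer 1, "10".."66" are layer 2, as in Fig. 1; each layer
# costs one initiation round plus a constant-round BA instance, so
# O(L) = O(log n) rounds in total.
assert layer((0, 0)) == 0 and layer((0, 5)) == 1 and layer((4, 2)) == 2
\end{verbatim}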
Now we show how this protocol can reach Byzantine broadcast.
Firstly, in the $0$ layer, the adversary is allowed to arbitrarily corrupt up to $2$ nodes in the innermost $7$ nodes.
With this, the $0$ layer BA instance can run correctly and output the agreed value for the nodes in $U\cap V_0$.
Then, in running the $I_7$ protocol between every innermost $K_7$ in $V_1$ and the innermost $K_7$ in $V_0$, by denoting the sites in the $K_7$ labeled with $w$ as $S_w$, the adversary is allowed to arbitrarily corrupt up to $2$ sites in $S_{w_1}\cup S_{w_2}$ when $w_1$ and $w_2$ differ in one and only one digit (i.e., $w_1$ and $w_2$ are adjacent).
With this, at least $5$ correct nodes in every innermost $K_7$ can be initiated with the correct agreed value.
So, by performing the $7$-node differential BA \citep{Garay2003}, all correct nodes would have the correct agreed value in every initiated innermost $K_7$.
Thus, by iteratively applying this result, all correct nodes would have the correct agreed value at the end of the execution of the overall protocol.
Here, for reaching efficient deterministic Byzantine broadcast, a two-scale adversary is defined for the $7$-node BA protocols ($A_7$ and $B_7$) and the $14$-node initiation protocol $I_7$ (for two adjacent innermost $K_7$).
With this, it is easy to see that the overall protocol can be extended to the $s$-base hypercube $G_{\mathtt{H}s}$ under the same two-scale adversary defined for the $s$-node BA protocols and the $2s$-node initiation protocol.
Now we show how this adversary can be supported with practical system assumption coverage.
Firstly, to support the fault-assumption of the $s$-node BA instances, the probability is no less than $(1-P(\lfloor \alpha(s)s\rfloor,s))^{n/s}$ with $\alpha(s)=1/3$.
For the case $s=7$, we have $P(2,7)\approx {\sqrt{\frac{1}{4\pi}}(\frac{7ep}{2})^2 (1-p)^{5}}<40p^2$.
For the larger $s$, we generally have
\begin{eqnarray}
\label{eq:p_1}P(s/3,s)\approx {\sqrt{\frac{3}{2\pi s}}(3ep)^{s/3} (1-p)^{s-s/3}}<(3ep)^{s/3}
\end{eqnarray}
Secondly, to support the fault-assumption of the initiation instances, the probability is no less than $(1-P(\lfloor \alpha(s)s\rfloor,2s))^{n/s-1}$ with $\alpha(s)=1/3$.
For the case $s=7$, we have $P(2,14)\approx {\sqrt{\frac{1}{4\pi}}(7ep)^2 (1-p)^{12}}<160p^2$.
For the larger $s$, we generally have
\begin{eqnarray}
\label{eq:p_2}P(s/3,2s)\approx {\sqrt{\frac{3}{2\pi s}}(6ep)^{s/3} (1-p)^{2s-s/3}}<(6ep)^{s/3}
\end{eqnarray}
So, put it together, we get
\begin{eqnarray}
\label{eq:broadcast_r}R= (1-P(\lfloor s/3\rfloor,s))^{n/s}(1-P(\lfloor s/3\rfloor,2s))^{n/s-1}\nonumber\\
\approx (1-\sqrt{\frac{3}{2\pi s}}((3ep)^{s/3}+(6ep)^{s/3}))^{n/s}\nonumber\\
>(1-(6ep)^{s/3})^{n/s}
\end{eqnarray}
Now, to see how $R$ can be sufficiently high, let us take $s=16$ and $p=10^{-4}$.
In this case, we would have $R\geqslant 1-10^{-9}$ as long as $n\leqslant 10^6$.
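This estimate can be cross-checked against the exact binomial tails; the following sketch (ours) applies a union bound to (\ref{eq:broadcast_r}), which also avoids the floating-point loss of precision in evaluating $(1-P)^{n/s}$ directly.
\begin{verbatim}
from math import comb

def tail(t, s, p):
    """Exact probability of more than t faults among s nodes."""
    return sum(comb(s, i) * p**i * (1 - p)**(s - i)
               for i in range(t + 1, s + 1))

def nu_bound(n, s, p):
    """Union bound on 1 - R for the two-scale broadcast protocol:
    n/s BA instances on s nodes plus n/s - 1 initiations on 2s."""
    f = s // 3
    return (n // s) * tail(f, s, p) + (n // s - 1) * tail(f, 2 * s, p)

print(nu_bound(10**6, 16, 1e-4))  # about 6e-14, so R > 1 - 1e-9
\end{verbatim}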
So, efficient multi-scale Byzantine broadcast protocols can be practically built upon sparse networks with high reliability.
\subsection{Immediate Byzantine agreement}
Given a specific \emph{General}, the Byzantine broadcast protocol provided above performs the immediate reliable broadcast of the \emph{General} in sparse networks with the two-scale adversary.
With this, we show how to build efficient Byzantine agreement in sparse networks with the same adversary.
Here, with every correct node $i\in U$ being initiated with a value $v_i$, the desired goal is reached iff
1) all correct nodes agree on the same value at the end of the same (finite) round and
2) all correct nodes agree on the value $v$ if $\forall i\in U:v_i=v$.
To build Byzantine agreement in the same sparse network $G_{\mathtt{H}s}$, the multi-scale Byzantine broadcast protocol can parallel run for every node $i\in V$ being the \emph{General}.
For efficiency, instead of running $n$ parallel Byzantine broadcast instances for the $n$ nodes, these instances can be run for the $n/s$ innermost $K_s$.
Concretely, in the first round, only $n/s$ $s$-node BA instances would be executed in the $n/s$ innermost $K_s$.
At the end of the first round, by running the $2s$-node initiation protocol $I_s$ for every pair of adjacent innermost $K_s$ in the $1$ layer, each agreed innermost $K_s$ can be viewed as a locally agreed super-node.
Thus, there would be at most $n/s$ Byzantine broadcast instances being parallel run in every correct node of $\mathcal{S}$ during the execution of $\mathcal{S}$.
Then, at the end of the last round, every node $i\in U$ can finally agree on the median of the $n/s$ output values of the $n/s$ Byzantine broadcast instances.
It is easy to see that this protocol reaches the goal of the deterministic immediate Byzantine agreement.
For efficiency, as there are at most $n/s$ Byzantine broadcast instances being run in parallel, the overall complexity would at most be $O(n)$, where the message complexity would be $O(\log n)$, as the messages generated for the parallel instances during the same round in every $O(\log n)$-degree node can be merged into one round-message.
Meanwhile, the required rounds, node-degrees, and system assumption coverage (also system reliability) of the Byzantine agreement protocol are all the same as the provided multi-scale Byzantine broadcast protocol.
So, deterministic $O(\log n)$-round Byzantine agreement can be reached in $O(\log n)$-degree networks with $O(\log n)$ message complexity and high reliability.
\subsection{Incomplete Byzantine protocols}
One defect of the multi-scale Byzantine protocols presented above is that the system reliability is built upon the assumption coverage of all employed sub-protocols in all related scales.
In this situation, if the fault-assumption of any employed protocol is breached in any running instance, the overall system may fail.
To avoid this, we show how multi-scale Byzantine protocols can be built with tolerating the failure of some instances of the low-layer protocols.
As an intuitive example, here we investigate the classical secure communication in sparse networks.
For this, by denoting $i,j$ as two \emph{npc} nodes, the desired goal is reached iff
1) there are sufficient \emph{npc} nodes and
2) every message sent from $i$ can be correctly received by $j$ in some finite synchronous rounds and \emph{vice versa}.
To construct the overall protocol, we would extend the constant-resilience protocol proposed in \cite{Upfal1992} as the core building block.
Concretely, as the transmission scheme proposed in \cite{Upfal1992} incurs high computational complexity, here we focus on reducing the computational complexity of \cite{Upfal1992}.
For this, the sparse network $G=(V,E)$ can be formed as a multi-layer expander $G_{\mathtt{EX}s_0}$, as is shown in Fig.~\ref{fig:complementary1} with $s_0=4$.
\begin{figure}[htbp]
\centerline{\includegraphics[width=2.6in]{complementary1.png}}
\caption{The multi-layer expander $G_{\mathtt{EX}4}$.}
\label{fig:complementary1}
\end{figure}
In Fig.~\ref{fig:complementary1}, it should be noted that all the vertical $L$ layers of $G_{\mathtt{EX}4}$ are implemented in just one layer of communication nodes.
Namely, the $L$ small circles in every vertical line of Fig.~\ref{fig:complementary1} represent the same communication node.
In other words, the multi-layer expander $G_{\mathtt{EX}s_0}$ is actually a one-layer expander with $n=|V|$ communication nodes, each of which would act as $L$ different logical nodes in running the sub-protocols in the different layers.
At the $0$ layer, there are $r_0= n/s_0 $ independent subnetworks $G_{0,r}=(V_{0,r},E_{0,r})$ with disjoint node-sets $V_{0,r}$ for $1\leqslant r\leqslant r_0$, where $s_0$ is a pre-configured constant.
Each subnetwork $G_{0,r}$ is an $s_0$-node $d_0$-regular expander for running the $0$ layer Byzantine protocols.
For simplicity, we assume that $n$ is divisible by $s_0$ (otherwise, we can make up a slightly larger upper layer and only use the extra upper-layer nodes to run the high-layer protocols, the same below).
Then, at the $1$ layer, there are $r_1= n/s_1 $ independent subnetworks $G_{1,r}$ with $1\leqslant r\leqslant r_1$, where $s_1$ is also a pre-configured constant.
Each subnetwork $G_{1,r}$ is an $s_1$-node $d_1$-regular expander with $s_1=s_0/\theta_1$ ($\theta_1\in (0,1)$) and contains $ 1/\theta_1$ $0$ layer subnetworks.
Iteratively, by configuring the constant $s_{l}=s_{l-1}/\theta_{l}$ ($\theta_{l}\in (0,1)$), $r_{l}= n/s_{l} $ $l$ layer subnetworks $G_{l,r}$ with $1\leqslant r\leqslant r_{l}$ can be formed, each of which contains $ 1/\theta_{l}$ $(l-1)$ layer subnetworks, until the $(L-1)$ layer expander with $s_{L-1}=n$ nodes is reached.
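As simple bookkeeping, the following sketch (ours, with hypothetical parameters) computes the subnetwork size $s_l$ and the subnetwork count $r_l$ of every layer from the configured ratios $\theta_l$.
\begin{verbatim}
def layer_sizes(n, s0, thetas):
    """Subnetwork size s_l and count r_l = n/s_l per layer, given
    s_l = s_{l-1}/theta_l for the configured ratios theta_l."""
    sizes = [s0]
    for theta in thetas:
        sizes.append(int(sizes[-1] / theta))
    return [(s, n // s) for s in sizes]

# Hypothetical example: layer_sizes(1024, 4, [0.5] * 8) doubles s_l
# per layer, from s_0 = 4 up to s_{L-1} = n = 1024 with L = 9 layers.
\end{verbatim}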
Denoting the state of node $i\in U$ in the $l$ layer as $x_i^{(l)}$, to ensure that each low-layer protocol is not affected by the upper-layer protocols, the state $x_i^{(l)}$ can only propagate to the state $x_i^{(l+1)}$, while the state propagation from $x_i^{(l+1)}$ to $x_i^{(l)}$ is prohibited.
With this, to deliver a message from $i$ to $j$, an $s_0$-node Byzantine protocol $C_0$ (for example, some Byzantine broadcast protocol or some secure communication protocol) would run in the $0$ layer subnetwork $G_{0,r}$ (containing $i$, the same below) to transmit the message of $i$ to all other nodes in $G_{0,r}$.
As $s_0$ is a constant, these instances would terminate in constant rounds with constant complexity.
With this, if the fault-assumption of the $C_0$ protocol is not breached in $G_{0,r}$, the message of $i$ would be correctly received in all \emph{npc} nodes of $G_{0,r}$ (denoted as $P_{C_0}(F_0,G_{0,r})$ and shown as green in the bottom layer of Fig.~\ref{fig:complementary1}).
Then, the $0$ layer state of every node $i_0\in P_{C_0}(F_0,G_{0,r})$ would be propagated to $x_{i_0}^{(1)}$ in the $1$ layer.
From the $1$ layer on, to reduce the complexity, instead of employing the protocol $C_0$, the logical nodes in the $1$ layer would run a new protocol $C_1$ by replacing the high-complexity operation taken in the original protocol proposed in \cite{Upfal1992} as the majority function (i.e., taking the majority values received from all transmission paths).
For this, all the correct nodes in $V_{0,r}$ would transmit the received message of $i$ to all other nodes in $G_{1,r}$.
So, it needs to show that a sufficient number of nodes $i_1\in V_{1,r}$ would receive the correct message of $i$ in more than a half of all transmission paths from $P_{C_0}(F_0,G_{0,r})$ to $i_1$, as long as the fault-assumption of the $C_1$ protocol is not breached in $G_{1,r}$.
As $G_{1,r}$ is a strong expander, this can be supported with a sufficiently large $\theta_1$.
For a simple example, when $\theta_1\approx 1/2$, the transmission scheme can be simplified as directly sending the message of $i$ to the neighbors in $G_{1,r}$.
With this, assuming the fault-assumption of $C_1$ being not breached, as every \emph{npc} node can have more \emph{npc} neighbours than the faulty and \emph{poor} neighbours in $G_{1,r}$, every \emph{npc} node can receive the correct message of $i$ in one round with applying the majority function.
Iteratively, with $\theta_l\approx 1/2$ for $1\leqslant l\leqslant L-1$, the $l$ layer \emph{npc} nodes can receive the correct message of $i$ in $O(l)$ rounds.
As $L=O(\log n)$, all the $L-1$ layer \emph{npc} nodes can receive the correct message of $i$ in $O(\log n)$ rounds.
Furthermore, a finer investigation of the lower bound of $\theta_l$ is also within reach.
As space is limited here, we leave this to the interested readers.
To measure the system assumption coverage, if the fault-assumption of $C_0$ and $C_1$ is not allowed to be breached, we can calculate the system reliability as
\begin{eqnarray}
\label{eq:broadcast_r1}\prod_{l=0}^{L-1}(1-P(\lfloor \alpha(s_l)s_l\rfloor,s_l))^{\frac{n}{s_l}}
\end{eqnarray}
In this case, the message of $i$ can be correctly received by $j$ if $i$ is a $0$ layer \emph{npc} node and $j$ is an $L-1$ layer \emph{npc} node.
Thus, if $i$ and $j$ are all \emph{npc} nodes in the $0$ layer and the $L-1$ layer (referred to as the overall \emph{npc} nodes), the goal of secure communication can be reached between such $i$ and $j$.
From (\ref{eq:broadcast_r1}) we can see that, as $s_l$ would become larger with the increase of $l$, the items with larger $l$ would soon become negligible.
So, the system reliability mainly depends on the items with the small $l$.
Now, if no more than $t_l$ instances of the secure communication protocols in the $l$ layer are allowed to fail with sufficiently small $l$, only a small portion of the overall \emph{npc} nodes would be affected.
Meanwhile, the items in (\ref{eq:broadcast_r1}) with the small $l$ can be improved as
\begin{eqnarray}
\label{eq:improve_r}\sum_{t=0}^{t_l}\tbinom{r_l}{t}(1-P(\lfloor \alpha(s_l)s_l\rfloor,s_l))^{r_l-t}P(\lfloor \alpha(s_l)s_l\rfloor,s_l)^t
\end{eqnarray}
With this, we can derive the $l$th item of (\ref{eq:broadcast_r1}) as $1-\nu_l$ with
\begin{eqnarray}
\label{eq:ratio_r} \nu_l<\frac{\tbinom{r_l}{t_l} p_l^{t_l}(1-p_l)^{r_l-t_l}}{s_l-1}\approx \sqrt{\frac{1}{2\pi t_l}}(\frac{er_l p_l}{t_l})^{t_l} /(s_l-1)
\end{eqnarray}
when $p_l=P(\lfloor \alpha(s_l)s_l\rfloor,s_l)\leqslant 1/(n+1)$ holds.
Thus, $1-\nu_l$ would be improved significantly with $t_l\geqslant 2$.
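To see the size of this improvement, the following sketch (ours; the parameters are hypothetical) evaluates the layer failure probability corresponding to (\ref{eq:improve_r}) by summing the first items of the complementary tail, which decay geometrically.
\begin{verbatim}
from math import comb

def tail(t, s, p, terms=None):
    """Probability of more than t faults among s trials with rate p;
    if `terms` is given, only the first `terms` items are summed,
    which suffices here because the items decay geometrically."""
    top = s if terms is None else min(s, t + terms)
    return sum(comb(s, i) * p**i * (1 - p)**(s - i)
               for i in range(t + 1, top + 1))

def layer_failure(s_l, r_l, p, t_l):
    """Probability that more than t_l of the r_l layer-l instances
    breach their fault assumption (each tolerates s_l/3 faults)."""
    p_l = tail(s_l // 3, s_l, p)    # breach rate of one instance
    return tail(t_l, r_l, p_l, terms=20)

# Hypothetical numbers: s_l = 8, r_l = 125000, p = 1e-4. Tolerating
# no failed instance gives about 7e-6; t_l = 2 gives about 6e-17.
print(layer_failure(8, 125000, 1e-4, 0), layer_failure(8, 125000, 1e-4, 2))
\end{verbatim}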
\subsection{Discussion}
\label{subsec:gainloss}
As we have seen, on one side, the assumption of the multi-scale adversary can place the protocol designers at a much-desired position in deriving easier Byzantine solutions.
Without this multi-scale assumption, the efficiency of the Byzantine solutions would be gravely limited by the identified asymmetry property of the sparse networks.
For example, the computational complexity of the secure communication protocol provided in \cite{Upfal1992} is very high.
The overall complexity of the more efficient protocols provided in \cite{Chandran2010} (also investigated in \cite{Jayanti2020}) is at least polynomial.
The resilience provided in \cite{Dwork1986} is relatively low.
On the other side, the system assumption coverage should be carefully calculated in real-world systems.
Nevertheless, we argue that a practical multi-scale adversary is a good starting point in constructing efficient multi-scale Byzantine protocols.
Firstly, in comparing with probabilistic Byzantine protocols \citep{BENOR1996329boundeddegree}, the probabilistic aspects of the multi-scale systems can be well encapsulated in the multi-scale adversary, with which the deterministic solutions can be decoupled from the calculation of the system assumption coverage.
Secondly, the multi-scale adversary can also provide a finer abstraction for the probabilistic properties in multi-scale distributed systems.
With this, the disadvantage of the asymmetric property of the sparse networks can largely be overcome in multi-scale networks.
Thirdly, even when the \emph{weakest point} of the basic multi-scale assumption is violated, i.e., some lower layer networks are corrupted by more faulty nodes than can be tolerated, multi-scale protocols can be built to tolerate the failure of some lower layer protocols.
Meanwhile, the original assumption of the single-scale adversary can also be included in the multi-scale ones.
Generally, the finer the multi-scale adversary is given, the better balance between the system assumption coverage and the efficiency of the deterministic Byzantine solutions can be expected in large-scale systems.
\section{Conclusion}
\label{sec:con}
In this paper, we have proposed a new paradigm for developing efficient Byzantine protocols for large sparse networks with high reliability.
Firstly, the undesired asymmetry of sparse networks in building efficient Byzantine protocols with the traditional adversary is identified.
In overcoming this asymmetry, multi-scale Byzantine protocols are proposed with the assumption of the so-called \emph{multi-scale adversary}.
In investigating the reliability of the systems developed with such multi-scale adversaries, an approximate measurement of the system assumption coverage is developed.
Then, it is shown that logarithmic-round deterministic BA can be built upon logarithmic-degree networks with logarithmic message complexity and high system assumption coverage.
It is also shown that the system reliability can be further improved with multi-scale Byzantine protocols that can tolerate the failures of low-layer small-scale protocols.
Meanwhile, with the multi-scale adversaries, the measurement of system assumption coverage and the development of deterministic Byzantine protocols are also decoupled.
With this, finer Byzantine protocols can be further developed for various kinds of large sparse networks.
\bibliographystyle{IEEEtran}
\section{Introduction}
Auxetic materials are rather special and unusual elastic solids. Their Poisson's ratio $\nu$ is negative, which means that it is easy to change their volume while fixing their shape, but it is hard to change their shape while fixing their volume \cite{Feynman2011}. This behavior is just opposite to that of an ideal liquid \cite{Born1939} and to that of an ideal pentamode metamaterial \cite{Kadic2012}. In general, auxetic materials can be anisotropic, in which case the Poisson's ratio turns into a Poisson's matrix \cite{Li1976, Rand2004}. There are no fundamental bounds for the values of the elements of the general Poisson's matrix \cite{Ting2005}. In sharp contrast, there are established bounds for stable elastic isotropic media. Here, the Poisson's ratio is connected to the ratio of bulk modulus $B$ (the inverse of the compressibility) and shear modulus $G$ via \cite{Rand2004}
\begin{equation}
\frac{B}{G}=\frac{1}{3} \frac{\nu+1}{0.5-\nu}.
\label{B_G}
\end{equation}
For a stable elastic solid, neither bulk nor shear modulus can be negative. For example, exerting a hydrostatic pressure onto a material with $B<0$ would lead to an expansion, further increasing the pressure, further increasing the volume, etc. This non-negativity together with equation (\ref{B_G}) immediately translates into the well-known interval of possible Poisson's ratios of $\nu \in [-1,0.5]$.
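Equation (\ref{B_G}) can be inverted to express $\nu$ through the modulus ratio, which makes the two limiting cases explicit; the following short sketch (ours, for illustration only) does just that.
\begin{verbatim}
def poisson_ratio(bulk_over_shear):
    """Invert Eq. (1): nu = (1.5 B/G - 1) / (3 B/G + 1)."""
    r = bulk_over_shear
    return (1.5 * r - 1.0) / (3.0 * r + 1.0)

# B/G -> 0 gives the dilational limit nu = -1 (easy to compress,
# hard to shear); B/G -> infinity gives the incompressible nu = 0.5.
print(poisson_ratio(0.0), poisson_ratio(1e12))  # -1.0, ~0.5
\end{verbatim}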
Effectively isotropic auxetic materials with $\nu<0 $ composed of disordered polymer- or metal-based foams have extensively been studied in the literature, for a recent review see Ref.\,\cite{Greaves2011}. It is not clear though how one would systematically approach, along these lines, the ultimate limit of $\nu=-1 $. Such ultimate extreme auxetic materials are called ``dilational'' because they support strictly no other modes than dilations. Intuitively, for example, if one exerts a force onto a statue of liberty made of a dilational material at any point and along any direction, one can change its volume, but it will always maintain exactly the shape of the statue of liberty. Obviously, this behavior is very much different from that of a regular elastic solid. As an impact would be distributed throughout the entire elastic structure, dilational materials can, for example, be used as shock absorbers \cite{Alderson1999, Miller2009}.
Early three-dimensional auxetic metamaterials with anisotropic Poisson's ratio have recently been presented \cite{Bueckmann2012}. It is again not clear though how this approach \cite{Bueckmann2012} could be brought towards an isotropic behavior with $\nu=-1$. In the literature, several conceptual models for dilational metamaterials have been discussed \cite{Milton2013,Milton1992,Lakes2007,Prall1997,Mitschke2011,Grima2000}. These, however, contain elements like ``perfect joints'' and ``rigid rods'' that still need to be translated to a three-dimensional continuous microstructure composed of one constituent material (and vacuum in the voids) that can be fabricated with current technology.
In this paper, inspired by the two-dimensional conceptual model of Ref.\,\cite{Milton2013}, we introduce a three-dimensional blueprint for such a dilational material. Several questions arise. Does this microstructure support unwanted easy modes other than the wanted dilations? Can this microstructure at all be described by a simple elasticity tensor and a constant mass density? For so-called Cosserat materials or for materials with anisotropic mass-density tensors \cite{Cosserat1909, Torrent2008, Milton2006, Schoenberg1983}, the answer would be negative. Our blueprint contains small internal connections mimicking the mentioned ``ideal joints''. How small do these connections have to be? The blueprint uses a simple-cubic translational lattice. Do we really get an isotropic Poisson's ratio? In general cubic elastic solids, the answer would be negative. To address all of these questions, we start by presenting calculated phonon band structures for our blueprint. Next, we compare these with static continuum-mechanics calculations. These can then directly be compared with our static experiments on macroscopic polymer-based metamaterials made by three-dimensional printing. Finally, we show that also microscopic versions can be fabricated by recent advances in galvo-scanner dip-in direct-laser-writing optical lithography.
A related but different idea has recently been demonstrated by a rubbery chartreuse ball with 24 carefully spaced round dimples \cite{Krieger2012}. These buckliballs can also be arranged into bucklicrystals \cite{Babaee2013}.
\section{The blueprint}
Our three-dimensional blueprint depicted in Fig.\,1 is based on a recently published two-dimensional conceptual model \cite{Milton2013}. This model contains ``ideal joints'' and ``rigid bars''. In our blueprint, the ideal joints are implemented by small connections between the square and the triangular elements. Upon pushing from one side, for example from the top, the inner squares rotate and the triangular outer connection elements get pulled inwards. Thus, ideally, the structure contracts laterally by the same amount as it contracts vertically. The Poisson's ratio would thus be $\nu=-1$. We will have to investigate though to what extent we approach this ideal for a finite connection size $d$ compared to the cubic lattice constant $a$. Furthermore, the very thin rigid bars in \cite{Milton2013} have been eliminated in our blueprint because they cannot be implemented using a single constituent material. As a result, it is not clear whether unwanted easy modes of deformation might occur. Indeed, in preliminary simulations, we have found that using only one sense of rotation of the squares, the squares do not only rotate around their center, but rather also translate. To eliminate this unwanted easy mode, we use a three-dimensional checkerboard arrangement with the discussed motif alternating with its mirror image. The small cubes with a side length identical to the thickness of all squares and triangles are not necessary for the function of the metamaterial. They are, however, crucial as markers in our measurements of the Poisson's ratio (see below). They are hence considered in all our band structure and static calculations to allow for direct comparison.
\begin{figure}
\includegraphics[scale=1]{Fig1.jpg}
\caption{(a) Illustration of our blueprint for a three-dimensional dilational metamaterial. A single unit cell is shown. This cell is composed of a checkerboard arrangement of a cubic motif and its mirror image. (b) One face of the structure with geometrical parameters indicated. The cubic lattice constant is $a$. The other parameters are: block size $b/a=0.25$, width of the holding element $w/a=0.048$, layer thickness $t/a=0.05$, holder length $h/a=0.235$, and connection size $d/a=0.5 \%$.}
\label{Fig1}
\end{figure}
\section{Phonon band structures}
The phonon band structure reveals all modes of the elastic metamaterial, possibly including unwanted easy modes other than dilations (see above). The long-wavelength limit of the band structure can be the starting point for a description in terms of effective elastic metamaterial parameters (see next section).
In our numerical band-structure calculations for the dilational metamaterial structure in Fig.\,1, we solve the usual elastodynamic equations \cite{acouMetaPhoCry} for the displacement vector
$\vec{u}(\vec{r},t) $ containing the time-independent rank-4 elasticity tensor $\tens{C} \left( \vec{r}\right)$ and the scalar mass density $ \rho(\vec{r}) $, i.e.,
\begin{equation}
\vec{\nabla} \cdot(\tens{C} \vec{\nabla} \vec{u} )-\rho \frac{\partial^2 \vec{u}}{\partial t^2 }=0,
\label{eq2}
\end{equation}
by using a commercial software package (COMSOL Multiphysics, MUMPS solver). We impose Bloch-periodic boundary conditions onto the primitive cubic real-space cell shown in Fig.\,\ref{Fig1}. We have carefully checked that all results presented in this paper are converged. Typically, convergence is achieved by using several tens of thousands of tetrahedra in one primitive real-space cell. We choose an isotropic polymer as constituent material with Young's modulus $1\, \rm{GPa}$, Poisson's ratio $0.4$, and mass density $1200\,\rm{kg/m^3}$. These values are chosen according to the below experiments. Due to the scalability of the elastodynamic equations, our results can easily be scaled to isotropic constituent materials with any different Young's modulus and density. The Poisson's ratio of the constituent material influences the results only to a very minor degree. The voids in the polymer are assumed to be vacuum. The lattice constant is chosen as $a=4.8\,\rm{cm}$, according to our below experiments on macroscopic polymer structures. However, the results can again easily be scaled to any other value of $a$. We represent the results in a simple-cubic Brillouin zone corresponding to the underlying simple-cubic translational lattice.
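For example, shrinking the lattice constant from $a=4.8\,\rm cm$ to $a=4.8\,\rm mm$ while keeping the constituent material fixed simply multiplies all frequencies discussed below by a factor of ten, leaving the band structure otherwise unchanged.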
\begin{figure}
\begin{center}
\includegraphics[scale=1]{Fig2.jpg}
\caption{Calculated phonon band structures (blue dots) of dilational metamaterials, i.e., angular frequency $\omega$ versus wave vector for the usual tour through the simple-cubic Brillouin zone (see inset in (a)). The red straight lines are fits assuming a homogeneous cubic-symmetry effective medium. (a) Connection size $d/a=0.5\%$, (b) $d/a=5\%$; $a=4.8\,\rm cm$. The grey area in (a) highlights a complete three-dimensional elastic band gap.}
\label{Fig2}
\end{center}
\end{figure}
Examples of calculated band structures are depicted in Fig.\,\ref{Fig2} for two different values of the ratio $d/a$. Shown are the six lowest eigenmodes. It becomes immediately clear that the slope of the low-frequency or long-wavelength acoustic modes is not the same for all directions, it is anisotropic. Along the $\Gamma\rm X $-direction (i.e., along the principal cubic axes), the velocity of the longitudinally polarized mode is smaller than that of the transversely polarized modes (see Fig.\,\ref{Fig3}). In isotropic elastic media, the opposite holds true. As to be expected, the phase velocities are larger for larger $d/a$.
We note in passing that the band structure in Fig.\,\ref{Fig2} for $d/a=0.5 \% $ exhibits a complete three-dimensional elastic (i.e., not only acoustic) band gap between frequencies of $3.24\,\rm{kHz}$ and $3.55\,\rm{kHz}$. This region with zero bulk phonon density of states corresponds to a gap-to-midgap ratio of $9 \%$. This complements other possibilities reported in the literature \cite{acouMetaPhoCry, Wang2012}.
To further emphasize the anisotropy, we also plot the phase velocity in polar diagrams in Fig.\,\ref{Fig3}. Constant phase velocity would lead to circles in the two-dimensional cuts depicted. Clearly, the curves shown in Fig.\,\ref{Fig3} (a) and (b) are not circular at all, neither for the small nor for the large value of $d/a$. They do show four-fold symmetry though. This statement is not trivial, because the three orthogonal axes are not strictly equivalent in terms of the geometrical structure. This can be seen when comparing views of the unit cell shown in Fig.\,\ref{Fig1} projected onto the $xy$- and the $xz$-planes. Nevertheless, the band structures show that wave propagation is equivalent for the three cubic axes.
\begin{figure}
\includegraphics[scale=1]{Fig3.jpg}
\caption{(a) and (b) are polar representations of the phase velocity at a wave number of $k=0.01/a$, i.e., the phase velocity in a particular direction is given by the radial length. The cut on the left is for the $xy$ plane, the cut on the right for a plane spanned by the [111] and the [110] directions. The cut on the left shows the fourfold rotational symmetry expected for a cubic structure. The connection size is (a) $d/a=0.5\%$ and (b) $d/a=5\%$. All other geometrical parameters are as quoted in Fig.\,1. The blue dots are derived from the phonon band structure, the red curves are the result of an effective-parameter description of a cubic-symmetry medium. (c) Selected eigenmodes for a fixed wave number of $k=0.2/a$ and for $d/a=0.5\%$. The corresponding eigenfrequencies increase from the left to the right. The black arrows indicate the direction of the wave vector $\vec{k}$, the red arrows the directions of the displacement vector $\vec{u}$. Shown are the $\Gamma \rm X$ direction identical to the principal cubic axes, the $\Gamma \rm M$ direction parallel to the cubic face diagonals, as well as for an oblique direction. For the latter, the modes are no longer purely transversely or longitudinally polarized.}
\label{Fig3}
\end{figure}
The anisotropic behavior of the phase velocity is connected to rather complex underlying eigenmodes that are illustrated by the examples shown in Fig.\,\ref{Fig3} (c). For waves propagating along the principal cubic axes or along the face diagonals, we find pure longitudinal or pure transverse polarization. For arbitrary oblique propagation directions with respect to the principal cubic axes, the eigenmodes are complicated mixtures of transverse and longitudinal polarization. In contrast, in an ideal isotropic elastic medium, the polarizations would be purely transverse or longitudinal.
\section{Retrieval of the elasticity tensor}
The phonon band structures presented in the previous section have shown pronounced anisotropies resulting from the cubic symmetry of the underlying translational lattice. In this light, one might suspect that the Poisson's ratio is anisotropic as well, whereas we aim at an isotropic Poisson's ratio.
We thus derive a Poisson's ratio from the phonon band structure. To do so, we compare the band structures with the expectation from continuum mechanics of an effective cubic-symmetry medium. For crystals obeying simple-cubic symmetry \cite{Paszkiewicz2008}, the rank-4 elasticity tensor $\tens{C}$ has the three different non-zero elements (elements with only two indices refer to Voigt notation \cite{Voigt1910}; all other elements are zero)
\begin{eqnarray*}
C_{11}&=& C_{22}= C_{33}= C_{1111} = C_{2222} = C_{3333},\\
C_{12}&=& C_{13}=C_{23}= C_{1122}= C_{2211}= C_{1133}= C_{3311}= C_{2233}= C_{3322},\\
C_{44}&=& C_{55}= C_{66}= C_{2323}=C_{3232}=C_{2332}= C_{3223}= C_{1313}= C_{3131}\\
&=& C_{1331}= C_{3113}= C_{1212}= C_{2121}= C_{1221}= C_{2112}.
\end{eqnarray*}
Furthermore, we assume a constant scalar effective metamaterial mass density $\rho $, which is simply given by the volume filling fraction $f$ of the constituent material times its own bulk mass density (see Fig.\,1). For $d/a=0.5 \%$ ($d/a=5 \% $) we get $f=10.4\%$ ($f=11.2\% $). On this basis, we can now calculate the phonon band structure in the long-wavelength limit. To do so, one needs to connect the phase velocities with the elements of $\tens{C}$ and with $\rho$. Here, it is convenient to inspect the $\Gamma \rm M$ or [110] direction with three different phase velocities $v$ and three orthogonal eigenmodes that are either purely longitudinally (L) or purely transversely (T) polarized (see above). The latter either lie in the $xy$ plane or along the $z$-direction. Following \cite{Tsang1983}, the connections are given by
\begin{equation}
C_{44}= \rho \,(v_{110}^{{\rm T},z})^2,
\end{equation}
\begin{equation}
C_{12}= \rho\,(v_{110}^{\rm L})^2 - C_{44} - \rho \,(v_{110}^{{\rm T},xy})^2,
\end{equation}
\begin{equation}
C_{11}= 2\, \rho\,(v_{110}^{{\rm T},xy})^2 +C_{12}.
\end{equation}
The three different elements of $\tens{C}$ can immediately be computed from the three different phase velocities. One has to make sure though that the polarizations of the corresponding eigenmodes are the same as for the numerical band structure calculations. We have checked this aspect (not depicted). Furthermore, one needs to make sure that the elastic behavior is described for all other propagation directions as well. To this end, we compare in Figs.\,\ref{Fig2} and \ref{Fig3} the results from the phonon band structure (blue dots) with those of the effective-medium description (red lines). Obviously, we obtain excellent overall agreement for all conditions in the long-wavelength limit. This means that a description of the elastic metamaterial in terms of an elasticity tensor $\tens{C}$ for a cubic-symmetry effective medium is adequate. As discussed in our introduction (see Cosserat and anisotropic-mass-density metamaterials), this finding itself is not trivial.
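As an order-of-magnitude illustration (using numbers that will be derived below), for $d/a=5\%$ the effective mass density is $\rho=f\times 1200\,{\rm kg/m^3}\approx 134\,{\rm kg/m^3}$; together with $C_{44}=3.5\,\rm MPa$ from Table 1 this corresponds to a transverse phase velocity of $\sqrt{C_{44}/\rho}\approx 160\,\rm m/s$ along the principal cubic axes.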
Having derived all non-zero elements of the effective metamaterial elasticity tensor, we can now apply established analysis to extract the Young's modulus $E$, the shear modulus $G$, the bulk modulus $B$ \cite{Bowers2009, Hill1952}, and the Poisson's ratio (or Poisson's matrix) \cite{Wojciechowski2005}. We have
\begin{equation}
E=\frac{C_{11}^2+C_{12}C_{11}-2C_{12}^2}{C_{11}+C_{12}},
\end{equation}
\begin{equation}
G=C_{44},
\end{equation}
\begin{equation}
B= \frac{C_{11}+2C_{12}}{3}.
\end{equation}
Examples are given in Table 1.
\begin{table}[h]
\caption{Examples of retrieved effective parameters. The three non-equivalent non-zero elements of the elasticity tensor $C_{11}$, $C_{12}$, $C_{44}\,=\,G$, the Young's modulus $E$, and the bulk modulus $B$ are given for selected values of $d/a$.\\}
\begin{tabular}{c|c|c|c|c|c}
$d/a$ (\%) & $C_{11}$ (MPa) & $C_{12}$ (MPa)& $C_{44}\,=\,G$ (MPa) & $E$ (MPa)& $B$ (MPa) \\ \hline
$5$ &$2.33$ &$0.0051$ &$3.5 $& $2.33$ &$ 0.78$\\
$4 $& $2.33$&$-0.16 $& $3.78$& $2.31$& $0.67$\\
$3$ & $2$&$-0.35$ &$3.36$ & $1.85$& $0.43$\\
$2.5$ & $1.3$& $-0.52$&$2.1$ & $0.607$&$ 0.0867$\\
$0.75$ &$1.14$ &$-0.507$ &$1.68$ &$ 0.33$& $0.042$\\
$0.5 $& $0.97$& $-0.445$&$1.35$ & $0.21$& $0.026$\\
$0.25$ &$0.85$ & $-0.406$&$1.06$ & $0.12$&$0.014$ \\
\end{tabular}
\end{table}
The Poisson's ratio for pushing along the principal cubic axes is given by \cite{Bowers2009, Hill1952}
\begin{equation}
\nu =\frac{C_{12}}{C_{11}+C_{12}}.
\end{equation}
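As a consistency check, inserting the retrieved values for $d/a=0.75\%$ from Table 1 yields $\nu=-0.507/(1.14-0.507)\approx-0.80$, in line with the static value of $-0.79$ derived below.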
However, the Poisson's ratio might still be different for arbitrary oblique pushing directions. Here, we use the more general expressions as given in Ref. \cite{Wojciechowski2005}, which are based on averaging along the directions normal to the pushing direction:
\begin{equation}
\nu (\phi ,\theta )=-\frac{Ar_{12}+B(r_{44}-2)}{16[C+D(2r_{12}+r_{44} )] }
\label{eq:nuang}
\end{equation}
Introducing the compliance tensor $\tens{S}=\tens{C}^{-1}$ (two indices refer to Voigt notation), the abbreviations in (\ref{eq:nuang}) are given by
\begin{eqnarray*}
r_{12}&=&\frac{S_{12}}{S_{11}}, \\
r_{44}&=&\frac{S_{44}}{S_{11}}, \\
A&=&2\left[53+4 \cos{(2 \theta)}+7 \cos{(4 \theta)}+8 \cos{(4\phi)} \sin^4{(\theta)}\right],\\
B&=&-11+4 \cos{(2 \theta)}+7 \cos{(4 \theta)}+8 \cos{(4\phi)} \sin^4{(\theta)}, \\
C&=&8 \cos^4{(\theta)}+6 \sin^4{(\theta)}+2 \cos{(4\phi)} \sin^4{(\theta)}, \\
D&=&2\left[ \sin^2{(2 \theta)}+ \sin^4{(\theta)} \sin^2{(2 \phi)}\right].
\end{eqnarray*}
Here, $\phi$ and $\theta$ are the usual angles in spherical coordinates.
The resulting direction dependence of the Poisson's ratio is visualized in two different ways in Fig.\,\ref{Fig4} and Fig.\,\ref{Fig5}. Fig.\,\ref{Fig4} is the generalized polar representation, i.e., the Poisson's ratio is proportional to the length of the vector from the origin to the depicted surface. The Poisson's ratio is also visualized by the false-color scale. For large connections, e.g., for $d/a=5\%$, the behavior is obviously far from isotropic. Furthermore, the effective metamaterial Poisson's ratio is far from $-1$. For a decreasing connection size $d/a$, the effective metamaterial Poisson's ratio $\nu$ becomes more negative and reaches a nearly isotropic behavior at $d/a=0.25\%$. This means that experiments need to realize values of $d/a$ below $1\%$ or better. Fig.\,\ref{Fig5} depicts the derived minimum and maximum values of $\nu$ versus $d/a$. For the smallest numerically accessible values of $d/a$, the Poisson's ratio comes close to $-1$. The data do not appear to extrapolate to exactly $-1$ though. We suspect that this aspect is due to the small cubes in our blueprint that we have introduced for experimental reasons (see above). We expect that the effective metamaterial Poisson's ratio would converge to $-1$ in the hypothetical limit of no cubes and $d/a\rightarrow 0$ -- which cannot be realized experimentally though.
\begin{figure}
\begin{center}
\includegraphics{Fig4.jpg}
\caption{Three-dimensional polar diagram of the effective metamaterial Poisson's ratio $\nu$, i.e., the length of the vector from the origin to the surface is proportional to modulus of the Poisson's ratio. The Poisson's ratio including its sign is also indicated by the false-color scale. As the $d/a$ ratio decreases, $\nu$ becomes more negative and more isotropic, eventually approaching the ultimate limit of $-1$ for an isotropic elastic material. These results are derived from the band structures like exemplified in Fig.\,\ref{Fig2} and for the other geometrical parameters as in Fig.\,\ref{Fig1}.}
\label{Fig4}
\end{center}
\end{figure}
\begin{figure}
\includegraphics[scale=1]{Fig5.jpg}
\caption{Minimum and maximum effective metamaterial Poisson's ratio (see Fig.\,\ref{Fig4}) versus $d/a$ ratio. The red symbols are the minima and maxima derived from the phonon band structures, the blue dots are obtained from static continuum-mechanics calculations for pushing along one of the principal cubic axes.}
\label{Fig5}
\end{figure}
\section{Static continuum-mechanics calculations}
Often, the Poisson's ratio is measured in (quasi-)static experiments. To derive the Poisson's ratio, one pushes along one direction, e.g., the $z$-direction, leading to a certain strain (or relative displacement) $\epsilon_{zz}$, observes or calculates the displacement along the orthogonal $x$-direction (or the orthogonal $y$-direction), hence the element of the strain tensor $\epsilon_{xx}$, and computes the Poisson's ratio according to its definition
\begin{equation}
\nu = -\frac{\epsilon_{xx}}{\epsilon_{zz}}.
\end{equation}
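For example, for an auxetic metamaterial with $\nu=-0.79$ (the value obtained below for $d/a=0.75\%$), an axial compression of $\epsilon_{zz}=-1\%$ leads to $\epsilon_{xx}=-\nu\,\epsilon_{zz}\approx-0.79\%$, i.e., to a lateral contraction rather than the lateral expansion of an ordinary material.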
Furthermore, experiments are based on finite-size samples, also containing a finite number of unit cells only. For metamaterials, the number of unit cells may be rather small. We thus also investigate the question to what extent measurement artifacts are to be expected for accessible-size samples.
The numerical calculations to be presented in this section have been performed with COMSOL Multiphysics using the Structural Mechanics module. The structure's geometry is created using the CAD COMSOL Kernel. The mesh is created within COMSOL Multiphysics using the preset parameter values called ``normal'' meshing with settings (referring to a $1\times 1\times 1 \rm\, m^3$ geometry size): Maximum Element Size = 0.1, Minimum Element Size = 0.018, Maximum Element Growth Rate = 1.5, Resolution of Curvature = 0.6, Resolution of Narrow Regions = 0.5. For example, for a connection size of $ d/a=0.5 \% $, this leads to about 90000 tetrahedral elements. We use the MUMPS Solver. In convergence tests, we have verified that the derived effective metamaterial Poisson's ratios are accurate to within 0.01. All geometrical parameters and constituent material parameters are as given above.
To mimic a fictitious infinitely extended crystal, we assume that all unit cells behave the same way (analogous to zero wave vector in the previous section). For convenience, we choose our coordinate system such that the crystal center of mass is fixed. For pushing along one principal cubic axis these conditions can be implemented by imposing anti-symmetric boundary conditions onto a single cubic unit cell. This means that the normal component of the displacement vector on one surface of the unit cell is constant on this surface and equal to the negative of the normal component on the opposing surface. To investigate the linear regime, we choose strains along the pushing direction of $1\% $.
The resulting behavior is illustrated in Fig.\,\ref{Fig6}(a). The length of the (black) arrows is exaggerated and indicates the local displacement vectors. The false-color scale shows the modulus of the local displacement vector. Note that the corners of the unit cell move nearly diagonally towards the center. The Poisson's ratio can immediately be computed from the components of these displacement vectors. For example, for $d/a=0.75\%$ corresponding to our below experiments, we obtain $\nu=-0.79$. We note in passing that the other points within the unit cell generally move in different directions than the corners. This has important implications for our below experiments in that we must not evaluate the movement of all points within the unit cell but rather only of the corners. To have a finite region for imaging and tracking, we have introduced the small cubes in Fig.\,\ref{Fig1}. The results of $\nu$ versus $d/a$ are shown in blue in Fig.\,\ref{Fig5}. Obviously, the agreement of these (static) values with those derived from the (dynamic) phonon band structures is good, providing further confidence in the validity of our results. The most negative dynamic results tend to be more negative than the static ones. For example, for $d/a=0.75\%$, the minimum dynamic value is $\nu=-0.895$, the static one $\nu=-0.79$.
Static calculations have also been performed for a finite metamaterial sample containing $2\times2\times2$ unit cells (see Fig.\,\ref{Fig1}), as shown in Fig.\,\ref{Fig6}(b), to directly compare these with experiments. Here, we assume sliding boundary conditions parallel to the pushing stamp interfaces. Good agreement with the corresponding calculations for infinite crystals as above will allow us to extract Poisson's ratios under these conditions. To compare to the experiments to be discussed below, one can measure the lateral strain of the left and right outer corners in the middle of the horizontal direction (see small circles) and divide these by the relative axial shift of the stamps. We obtain a Poisson's ratio for the finite crystal of $-0.76$, which is not too far from the one for the fictitious infinitely extended crystal of $\nu=-0.79$ in panel (a) of the same figure. These parameters have been chosen to match those of the experiments to be discussed next.
\begin{figure}
\includegraphics[scale=1]{Fig6.jpg}
\caption{(a) $2\times2\times2$ unit cells (compare Fig.\,\ref{Fig1}) out of an infinite crystal pushed upon along the $z$-direction. The resulting in-plane strain for an axial strain of 0.01 is depicted by the false-color scale projected onto the front surface of the cube as well as by the black arrows. (b) Same, but for a finite crystal with $2\times2\times2$ unit cells.}
\label{Fig6}
\end{figure}
\section{Macroscopic dilational metamaterials}
We have fabricated macroscopic (this section) and microscopic (next section) versions of the blueprint shown in Fig.\,\ref{Fig1}. The macroscopic samples are fabricated with the printer ``Objet30'' sold by former Objet, now Stratasys, USA. For the metamaterial, we have used the basic polymer ink ``FullCure850 VeroGray''. During the fabrication, however, one also needs a support material. The default is a mixture of ``FullCure850 VeroGray'' and ``FullCure705 Support'' that we have not been able to remove from the composite. Thus, we have chosen exclusively ``FullCure705 Support'', which can be etched out in a bath of NaOH base after hand cleaning. The structure files are exported in the STL file format directly from the geometry used in the COMSOL Multiphysics calculations and are imported into the Objet printer. The printing is done automatically in a standard process. An example of a fabricated macroscopic metamaterial structure is shown in Fig.\,\ref{Fig7}(a). The geometrical parameters are as defined in Fig.\,\ref{Fig1} with $d/a=0.75\%$ and $a=4.8\,\rm cm$. The constituent polymer material has a measured Young's modulus of about $E\approx 1.5\,\rm GPa$.
\begin{figure}
\includegraphics[scale=1]{Fig7.jpg}
\caption{(a) Photograph of a macroscopic polymer-based finite crystal with $2\times2\times2$ unit cells fabricated by 3D printing, following the blueprint and the parameters given in Fig.\,\ref{Fig1}. (b) Measured lateral versus axial strain (solid red curve) as obtained from an image correlation approach upon pushing along the vertical $z$-direction. The green circles on straight lines correspond to a Poisson's ratio of $-0.76$ and $-0.77$, respectively, the black circles to numerical calculations for $d/a=0.75\%$. Extrapolation to an infinite three-dimensional crystal (see previous section) delivers a Poisson's ratio of $\nu=-0.79$.}
\label{Fig7}
\end{figure}
The measurement setup for the macroscopic samples consists of two metallic stamps and a linear stage containing a force cell. The sliding boundary conditions (see previous section) are achieved by placing the wetted sample between the stamps. We have alternatively attempted to implement fixed boundary conditions by gluing the sample to the stamp by double-sided tape. This has led to the same lateral displacements of the sample, indicating very strong forces parallel to the stamps. This suggests to us that assuming sliding boundary conditions is adequate. We gradually push onto one stamp by moving the linear stage while fixing the other stamp and recording the images of one of the sample surfaces. These images are taken with a Canon EOS 550D camera in Full HD ($1920 \times 1080$ pixels) resolution and 24 frames per second.
The objective lens (Tamron SP 70--300\,mm f/4--5.6 Di VC USD) is located at a distance of approximately $1.5\,\rm m$ from the sample. We have checked that image distortions (e.g., barrel-type aberrations) are sufficiently small to not influence our experiments. The displacements of the unit cells' corners are tracked using an autocorrelation approach used previously \cite{Bueckmann2012}. Multiple measurements with increasing maximum strain for a crystal composed of $2 \times 2 \times 2$ unit cells (see Fig.\,\ref{Fig7}(a)) are depicted in Fig.\,\ref{Fig7}(b). The graph shows the strain along the horizontal $x$-direction versus the strain along the axial pushing direction ($z$). The solid red curve corresponds to two measurement cycles, i.e., the sample is pushed, released, pushed, and released again. Clearly, the four parts are hardly distinguishable, indicating a nearly reversible elastic behavior. From fits with straight lines (see green dots) we deduce a Poisson's ratio of $-0.76\pm 0.02$. The corresponding numerically calculated strains for a finite sample with $2 \times 2 \times 2$ unit cells are shown as black dots. Using the identical structure parameters but assuming an infinite crystal (see previous section), we get a Poisson's ratio of $\nu=-0.79$. We note in passing that we have simultaneously measured the axial force from a load cell (not depicted). Comparing with theory, we obtain a metamaterial Young's modulus of $0.015\%\, E$ with the above bulk Young's modulus $E$.
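With the measured constituent Young's modulus of $E\approx 1.5\,\rm GPa$, this corresponds to an absolute effective metamaterial Young's modulus of roughly $0.2\,\rm MPa$.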
\section{Microscopic dilational metamaterials}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.8]{Fig8.jpg}
\caption{Gallery of polymer dilational metamaterial microstructures with different sizes and aspect ratios following the blueprint illustrated in Fig.\,1 (without the small cubic tracking markers), all fabricated by 3D dip-in direct laser writing. (a) Photograph of a structure with $3\times3\times9$ unit cells (and a smaller one on the right-hand side); $a=180\, \rm{\upmu m}$. (b) Electron micrograph of two microstructure samples with overall aspect ratios of 1:1 and 2:1, respectively; $a=35 \,\rm{\upmu m}$. (c) Magnified view of one unit cell of the structure, revealing details within the unit cell (compare Fig.\,\ref{Fig1}); $a=35\, \rm{\upmu m}$.}
\label{Fig8}
\end{center}
\end{figure}
The structures shown in the preceding section have validated our theoretical blueprint of a three-dimensional dilational metamaterial but they hardly qualify as a ``material'' in the normal sense. Thus, it is interesting to ask whether corresponding structures with lattice constants $a$ that are two to three orders of magnitude smaller are in reach. Also, it would be highly desirable to obtain structures containing a larger total number of unit cells. We have thus also fabricated microscopic structures based on the same blueprint (without the small blocks for tracking).
For fabricating such microscopic dilational metamaterial samples, photoresist samples are prepared by drop-casting the commercially available negative-tone photoresist ``IP-Dip'' (Nanoscribe GmbH, Germany) on diced silicon wafers ($22\, {\rm mm} \times 22\,\rm mm$). We use the commercial direct laser writing (DLW) system Photonic Professional GT (Nanoscribe GmbH, Germany). In this instrument, the liquid photoresist is polymerized via two-photon absorption using a $40\,\rm MHz$ frequency-doubled Erbium fiber laser with a pulse duration of $90\,\rm fs$. To avoid depth-dependent aberrations, the objective lens (with numerical aperture $\rm NA = 1.3$ or $\rm NA = 0.8$, Carl Zeiss) is directly dipped into the resist. The laser focus is scanned using a set of pivoted galvo mirrors. Structural data are again created in STL file format using the open-source software Blender and COMSOL Multiphysics. Due to the demanding critical distances of the mechanical metamaterials, the scan raster is set to $200\,\rm nm$ ($400\,\rm nm$) laterally and $300\,\rm nm$ ($800\,\rm nm$) axially for the $\rm NA = 1.3$ ($\rm NA = 0.8$) objective lens. Each individual layer is scanned in the so-called skywriting mode, i.e., while the laser focus is scanned continuously, the laser power is switched between $0\,\rm mW$ (no exposure) and about $13\,\rm mW$ or higher (exposure) to build up the fine features of the metamaterial. The writing speed is set to $20\,\rm mm/s$. After DLW of the preprogrammed pattern, the exposed sample is developed for 20 minutes in isopropanol and acetone. The process is finished in a supercritical point dryer to avoid capillary forces during drying.
Optical and electron micrographs of different samples are depicted in Fig.\,\ref{Fig8}. The sample in panel (a) has overall dimensions of $0.54\,{\rm mm}\times 0.54\,{\rm mm}\times 1.62\,{\rm mm}$ ($3\times3\times9$ unit cells) yet, at the same time, minimum feature sizes in the sub-micron range.
Due to the smaller lattice constants, it is not possible though to resolve the details within the unit cell to track the positions of the small marker cubes for measuring the Poisson's ratio as done for the macroscopic samples.
\section{Conclusion}
We have designed, fabricated, and characterized a three-dimensional microstructure based on a simple-cubic translational lattice that effectively acts as an auxetic material, converging for small internal connections $d/a\rightarrow 0$ to the ultimate limit of an isotropic three-dimensional dilational metamaterial with $\nu=-1$. Our experiments approach this limit. Interestingly, the Poisson's ratio becomes isotropic in the limit $d/a\rightarrow 0$, whereas the acoustic phase velocity and other elastic properties remain anisotropic.
If fabricated in larger volumes and composed of different constituent materials, such dilational metamaterials might find applications in terms of shock absorbers.
In our treatment, we have derived all elements of the effective metamaterial elasticity tensor and hence all elastic parameters by comparing the phonon band structure in the long-wavelength limit with continuum mechanics of homogeneous media. This parameter retrieval could be of interest for other cubic-symmetry elastic metamaterials beyond the specific example discussed here.
\section{Acknowledgements}
We thank the DFG-Center for Functional Nanostructures (CFN), the Karlsruhe School of Optics $\&$ Photonics (KSOP) and the National Science Foundation through grant DMS-1211359 for support.
\section*{References}
\newcommand{\mysection}[1]{\section{#1}\setcounter{equation}{0}}
\title{\bf On a new characterisation of Besov spaces with negative exponents
\author{
{\bf Moshe Marcus}\\
{\small Department of Mathematics,}\\
{\small Technion, Haifa}
\and
{\bf Laurent V\'eron}\\
{\small Department of Mathematics,}\\
{\small Universit\'e Fran\c{c}ois-Rabelais, Tours}
\date{}
\begin{document}%
\maketitle
\noindent{\it Dedicated to Vladimir Maz'ya with high esteem}
\newcommand{\txt}[1]{\;\text{ #1 }\;}
\newcommand{\tbf}{\textbf}  %% Bold face. Usage: \tbf{...}
\newcommand{\tit}{\textit}
\newcommand{\tsc}{\textsc}
\newcommand{\trm}{\textrm}
\newcommand{\mbf}{\mathbf}
\newcommand{\mrm}{\mathrm}
\newcommand{\bsym}{\boldsymbol}
\newcommand{\scs}{\scriptstyle}
\newcommand{\sss}{\scriptscriptstyle}
\newcommand{\tst}{\textstyle}
\newcommand{\dst}{\displaystyle}
\newcommand{\ftnt}{\footnotesize}
\newcommand{\scst}{\scriptsize}
\newcommand{\be}{\begin{equation}}
\newcommand{\bel}[1]{\begin{equation}\label{#1}}
\newcommand{\ee}{\end{equation}}
\newcommand{\eqnl}[2]{\begin{equation}\label{#1}{#2}\end{equation}}
\def\tname{Proposition}
\newtheorem{subn}{\tname}
\renewcommand{\thesubn}{}
\newcommand{\bsn}[1]{\def\tname{#1}\begin{subn}}
\newcommand{\esn}{\end{subn}}
\newtheorem{sub}{\tname}[section]
\newcommand{\dn}[1]{\def\tname{#1}}
\newcommand{\bs}{\begin{sub}}
\newcommand{\es}{\end{sub}}
\newcommand{\bsl}[1]{\begin{sub}\label{#1}}
\newcommand{\bth}[1]{\def\tname{Theorem}\begin{sub}\label{t:#1}}
\newcommand{\blemma}[1]{\def\tname{Lemma}\begin{sub}\label{l:#1}}
\newcommand{\bcor}[1]{\def\tname{Corollary}\begin{sub}\label{c:#1}}
\newcommand{\bdef}[1]{\def\tname{Definition}\begin{sub}\label{d:#1}}
\newcommand{\bprop}[1]{\def\tname{Proposition}\begin{sub}\label{p:#1}}
\newcommand{\eqref}{\eqref}
\newcommand{\rth}[1]{Theorem~\ref{t:#1}}
\newcommand{\rlemma}[1]{Lemma~\ref{l:#1}}
\newcommand{\rcor}[1]{Corollary~\ref{c:#1}}
\newcommand{\rdef}[1]{Definition~\ref{d:#1}}
\newcommand{\rprop}[1]{Proposition~\ref{p:#1}}
\newcommand{\begin{array}}{\begin{array}}
\newcommand{\end{array}}{\end{array}}
\newcommand{\BA}{\renewcommand{\arraystretch}{1.2}
\setlength{\arraycolsep}{2pt}\begin{array}}
\newcommand{\BAV}[2]{\renewcommand{\arraystretch}{#1}
\setlength{\arraycolsep}{#2}\begin{array}}
\newcommand{\begin{subarray}}{\begin{subarray}}
\newcommand{\end{subarray}}{\end{subarray}}
\newcommand{\begin{aligned}}{\begin{aligned}}
\newcommand{\end{aligned}}{\end{aligned}}
\newcommand{\BALG}{\begin{alignat}}
\newcommand{\EALG}{\end{alignat}}
\newcommand{\BALGN}{\begin{alignat*}}
\newcommand{\EALGN}{\end{alignat*}}
\newcommand{\note}[1]{\textit{#1.}\hspace{2mm}}
\newcommand{\note{Proof}}{\note{Proof}}
\newcommand{\hspace{10mm}\hfill $\square$}{\hspace{10mm}\hfill $\square$}
\newcommand{\\ ${}$ \hfill $\square$}{\\ ${}$ \hfill $\square$}
\newcommand{\note{Remark}}{\note{Remark}}
\newcommand{$\,$\\[-4mm] \indent}{$\,$\\[-4mm] \indent}
\newcommand{\quad \forall}{\quad \forall}
\newcommand{\set}[1]{\{#1\}}
\newcommand{\setdef}[2]{\{\,#1:\,#2\,\}}
\newcommand{\setm}[2]{\{\,#1\mid #2\,\}}
\newcommand{\longrightarrow}{\longrightarrow}
\newcommand{\longleftarrow}{\longleftarrow}
\newcommand{\longleftrightarrow}{\longleftrightarrow}
\newcommand{\Longrightarrow}{\Longrightarrow}
\newcommand{\Longleftarrow}{\Longleftarrow}
\newcommand{\Longleftrightarrow}{\Longleftrightarrow}
\newcommand{\rightharpoonup}{\rightharpoonup}
\newcommand{\paran}[1]{\left (#1 \right )}
\newcommand{\sqbr}[1]{\left [#1 \right ]}
\newcommand{\curlybr}[1]{\left \{#1 \right \}}
\newcommand{\abs}[1]{\left |#1\right |}
\newcommand{\norm}[1]{\left \|#1\right \|}
\newcommand{\paranb}[1]{\big (#1 \big )}
\newcommand{\lsqbrb}[1]{\big [#1 \big ]}
\newcommand{\lcurlybrb}[1]{\big \{#1 \big \}}
\newcommand{\absb}[1]{\big |#1\big |}
\newcommand{\normb}[1]{\big \|#1\big \|}
\newcommand{\paranB}[1]{\Big (#1 \Big )}
\newcommand{\absB}[1]{\Big |#1\Big |}
\newcommand{\normB}[1]{\Big \|#1\Big \|}
\newcommand{\rule[-.5mm]{.3mm}{3mm}}{\rule[-.5mm]{.3mm}{3mm}}
\newcommand{\thknorm}[1]{\rule[-.5mm]{.3mm}{3mm} #1 \rule[-.5mm]{.3mm}{3mm}\,}
\newcommand{\trinorm}[1]{|\!|\!| #1 |\!|\!|\,}
\newcommand{\bang}[1]{\langle #1 \rangle}
\def\angb<#1>{\langle #1 \rangle}
\newcommand{\vstrut}[1]{\rule{0mm}{#1}}
\newcommand{\rec}[1]{\frac{1}{#1}}
\newcommand{\opname}[1]{\mbox{\rm #1}\,}
\newcommand{\opname{supp}}{\opname{supp}}
\newcommand{\opname{dist}}{\opname{dist}}
\newcommand{\myfrac}[2]{{\displaystyle \frac{#1}{#2} }}
\newcommand{\myint}[2]{{\displaystyle \int_{#1}^{#2}}}
\newcommand{\quad}{\quad}
\newcommand{\qquad}{\qquad}
\newcommand{\hsp}[1]{\hspace{#1mm}}
\newcommand{\vsp}[1]{\vspace{#1mm}}
\newcommand{\infty}{\infty}
\newcommand{\partial}{\partial}
\newcommand{\setminus}{\setminus}
\newcommand{\emptyset}{\emptyset}
\newcommand{\times}{\times}
\newcommand{^\prime}{^\prime}
\newcommand{^{\prime\prime}}{^{\prime\prime}}
\newcommand{\tilde}{\tilde}
\newcommand{\subset}{\subset}
\newcommand{\subseteq}{\subseteq}
\newcommand{\noindent}{\noindent}
\newcommand{\indent}{\indent}
\newcommand{\overline}{\overline}
\newcommand{\underline}{\underline}
\newcommand{\not\in}{\not\in}
\newcommand{\pfrac}[2]{\genfrac{(}{)}{}{}{#1}{#2}}
\def\ga{\alpha} \def\gb{\beta} \def\gg{\gamma}
\def\gc{\chi} \def\gd{\delta} \def\ge{\epsilon}
\def\gth{\theta} \def\vge{\varepsilon}
\def\gf{\phi} \def\vgf{\varphi} \def\gh{\eta}
\def\gi{\iota} \def\gk{\kappa} \def\gl{\lambda}
\def\gm{\mu} \def\gn{\nu} \def\gp{\pi}
\def\vgp{\varpi} \def\gr{\rho} \def\vgr{\varrho}
\def\gs{\sigma} \def\vgs{\varsigma} \def\gt{\tau}
\def\gu{\upsilon} \def\gv{\vartheta} \def\gw{\omega}
\def\gx{\xi} \def\gy{\psi} \def\gz{\zeta}
\def\Gg{\Gamma} \def\Gd{\Delta} \def\Gf{\Phi}
\def\Gth{\Theta}
\def\Gl{\Lambda} \def\Gs{\Sigma} \def\Gp{\Pi}
\def\GO{\Omega} \def\Gx{\Xi} \def\Gy{\Psi}
\def\CS{{\mathcal S}} \def\CM{{\mathcal M}} \def\CN{{\mathcal N}}
\def\CR{{\mathcal R}} \def\CO{{\mathcal O}} \def\CP{{\mathcal P}}
\def\CA{{\mathcal A}} \def\CB{{\mathcal B}} \def\CC{{\mathcal C}}
\def\CD{{\mathcal D}} \def\CE{{\mathcal E}} \def\CF{{\mathcal F}}
\def\CG{{\mathcal G}} \def\CH{{\mathcal H}} \def\CI{{\mathcal I}}
\def\CJ{{\mathcal J}} \def\CK{{\mathcal K}} \def\CL{{\mathcal L}}
\def\CT{{\mathcal T}} \def\CU{{\mathcal U}} \def\CV{{\mathcal V}}
\def\CZ{{\mathcal Z}} \def\CX{{\mathcal X}} \def\CY{{\mathcal Y}}
\def\CW{{\mathcal W}}
\def\BBA {\mathbb A} \def\BBb {\mathbb B} \def\BBC {\mathbb C}
\def\BBD {\mathbb D} \def\BBE {\mathbb E} \def\BBF {\mathbb F}
\def\BBG {\mathbb G} \def\BBH {\mathbb H} \def\BBI {\mathbb I}
\def\BBJ {\mathbb J} \def\BBK {\mathbb K} \def\BBL {\mathbb L}
\def\BBM {\mathbb M} \def\BBN {\mathbb N} \def\BBO {\mathbb O}
\def\BBP {\mathbb P} \def\BBR {\mathbb R} \def\BBS {\mathbb S}
\def\BBT {\mathbb T} \def\BBU {\mathbb U} \def\BBV {\mathbb V}
\def\BBW {\mathbb W} \def\BBX {\mathbb X} \def\BBY {\mathbb Y}
\def\BBZ {\mathbb Z}
\def\GTA {\mathfrak A} \def\GTB {\mathfrak B} \def\GTC {\mathfrak C}
\def\GTD {\mathfrak D} \def\GTE {\mathfrak E} \def\GTF {\mathfrak F}
\def\GTG {\mathfrak G} \def\GTH {\mathfrak H} \def\GTI {\mathfrak I}
\def\GTJ {\mathfrak J} \def\GTK {\mathfrak K} \def\GTL {\mathfrak L}
\def\GTM {\mathfrak M} \def\GTN {\mathfrak N} \def\GTO {\mathfrak O}
\def\GTP {\mathfrak P} \def\GTR {\mathfrak R} \def\GTS {\mathfrak S}
\def\GTT {\mathfrak T} \def\GTU {\mathfrak U} \def\GTV {\mathfrak V}
\def\GTW {\mathfrak W} \def\GTX {\mathfrak X} \def\GTY {\mathfrak Y}
\def\GTZ {\mathfrak Z} \def\GTQ {\mathfrak Q}
\font\Sym= msam10
\def\SYM#1{\hbox{\Sym #1}}
\newcommand{\prtO}{\partial\GO\xspace}
\mysection {Introduction}
Let $B$ denote the unit $N$-ball and $\Sigma=\partial B$.
If $\mu$ is a distribution on $\Sigma$ we denote by $\mathbb P (\mu)$ its
Poisson potential in $B$, that is
\begin{equation}\label{distripot}\mathbb P (\mu) (x)=<\mu,P(x,.)>_{\Gs},\quad \forall x\in B,
\end {equation}
where $<\;,\;>_{\Gs}$ denotes the pairing between distributions on $\Gs$
and functions in $C^\infty (\Gs)$. In the particular case where $\mu$ is a measure, this can be written
as follows
\begin{equation}\label{measpot}\mathbb P (\mu) (x)=\int _{\Sigma}P(x,y)d\mu (y),\quad \forall x\in B.
\end {equation}
In \cite {MV} it is proved that for $q>1$ the Besov space
$W^{-2/q,q}(\Gs)$ is characterized by an integrability condition on $\mathbb P
(\mu)$ with respect to a weight function involving the distance to
the boundary; more precisely, there exists a positive constant
$C=C(N,q)$ such that for any distribution $\mu$ on $\Gs$ there holds
\begin {equation}\label {old}
C^{-1}{\norm \mu}_{W^{-2/q,q}(\Sigma)}
\leq \left(\int_{B}{\abs {\mathbb P (\mu)}}^q
(1-\abs x)dx\right)^{1/q}
\leq C{\norm \mu}_{W^{-2/q,q}(\Sigma)}.
\end {equation}
The aim of this article is to prove that for all $1<q<\infty$ and $s>0$ the
negative Besov space $B^{-s,q}(\Gs)$ can be described by an integrability condition on the
Poisson potentials of its elements. More precisely, we prove
\bth {main}Let $s>0$, $q>1$ and $\mu$ be a distribution on $\Gs$. Then
$$\mu\in B^{-s,q}(\Sigma)\Longleftrightarrow
\mathbb P (\mu)\in L^q(B;(1-\abs x)^{sq-1}dx).
$$
Moreover there exists a constant $C>0$ such that for any
$\mu\in B^{-s,q}(\Sigma)$,
\begin {equation}\label {equivnorm} C^{-1}{\norm \mu}_{B^{-s,q}(\Sigma)}
\leq \left(\int_{B}{\abs {\mathbb P (\mu)}}^q
(1-\abs x)^{sq-1}dx\right)^{1/q}
\leq C{\norm \mu}_{B^{-s,q}(\Sigma)}.
\end {equation}
\end{sub}
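\noindent Observe that for the particular choice $s=2/q$ one has $sq-1=1$, so that \rth{main} extends the characterization (\ref {old}), which corresponds to the weight $(1-\abs x)$.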
The key idea for proving such a result is to use a lifting operator which
reduces the estimate question to an estimate between Besov spaces with positive
exponents. In one direction the main technique
relies on interpolation theory between domain of powers of analytic
semigroups. In the other direction we use a new representation
formula for harmonic functions in a ball.
\vspace{5mm}
\noindent{\bf Acknowledgment.}\hspace{2mm}The research of MM was supported by The Israel Science
Foundation grant No. 174/97.
\mysection {The left-hand side inequality (\ref {equivnorm})}
We recall that for $1\leq p<\infty$, $r\notin \BBN$, $r=k+\eta$ with
$k\in\BBN$ and $0<\eta<1$,
$$B^{r,p}(\BBR^d)=\left\{\varphi\in W^{k,p}(\BBR^d):\,
\myint {\BBR^d}{}\myint {\BBR^d}{}\myfrac {{\abs
{D^\alpha\varphi(x)-D^\alpha\varphi(y)}}^p}{{\abs {x-y}}^{d+\eta
p}}dxdy<\infty,\ \forall \alpha\in\BBN^d,\ \abs \alpha=k\right\}
$$
with norm
$${\norm \varphi}^p_{B^{r,p}}={\norm \varphi}^p_{W^{k,p}}+
\sum_{\abs\alpha=k}\myint {\BBR^d}{}\myint {\BBR^d}{}\myfrac {{\abs
{D^\alpha\varphi(x+y)-D^\alpha\varphi(x)}}^p}{{\abs {y}}^{d+\eta
p}}dxdy.
$$
When $r\in \BBN$,
$$\begin {array}{l}B^{r,p}(\BBR^d)=\left\{\varphi\in
W^{r-1,p}(\BBR^d):\right.\\
\left.\myint {\BBR^d}{}\myint {\BBR^d}{}\myfrac {{\abs
{D^\alpha\varphi(x+2y)+D^\alpha\varphi(x)-2D^\alpha\varphi(x+y)}}^p}{{\abs
{y}}^{p+d}}dxdy
<\infty,\ \forall \alpha\in\BBN^d,\ \abs \alpha=r-1\right\},
\end {array}$$
with norm
$$\begin {array}{l}{\norm \varphi}^p_{B^{r,p}}={\norm \varphi}^p_{W^{r-1,p}}\\
\qquad\qquad\qquad\qquad +
\displaystyle{\sum_{\abs\alpha=r-1}\myint {\BBR^d}{}\myint {\BBR^d}{}\myfrac {{\abs
{D^\alpha\varphi(x+2y)+D^\alpha\varphi(x)-2D^\alpha\varphi(x+y)}}^p}{{\abs {y}}^{p+d}}dxdy}.
\end {array}$$
The relation of the Besov spaces with integer order of
differentiation and the classical Sobolev spaces is the following
\cite {LP}, \cite{Gr}
\begin {equation}\begin {array}{l}\label {inclusion}
B^{r,p}(\BBR^d)\subset W^{r,p}(\BBR^d)\quad \mbox {if } 1\leq p\leq
2,\\
W^{r,2}(\BBR^d)= B^{r,2}(\BBR^d),\\
W^{r,p}(\BBR^d)\subset B^{r,p}(\BBR^d)\quad \mbox {if } p\geq 2.
\end {array}\end {equation}
Since for $r\in\BBN_{*}$ and $1\leq p<\infty$, the space $B^{-r,p}(\BBR^d)$ is
the space of derivatives of $L^{p}(\BBR^d)$-functions, up to the total
order $r$, for noninteger $r$, $r=k+\eta$ with $k\in\BBN$ and $0<\eta<1$,
$B^{-r,p}(\BBR^d)$ can be defined by using the real interpolation
method \cite {LP} by
$$\left[W^{-k,p}(\BBR^d),W^{-k-1,p}(\BBR^d)\right]_{\eta,p}=B^{-r,p}(\BBR^d).
$$
The spaces $B^{-r,p}(\BBR^d)$, for $1<p<\infty$ and $r>0$, can also be defined by duality with
$B^{r,p'}(\BBR^d)$.
The Sobolev and Besov spaces $W^{k,p}(\Gs)$ and $B^{r,p}(\Gs)$ are
defined by using local charts from the same spaces in $\BBR^{N-1}$.
\medskip
Now we present the proof of the left-hand side inequality in the case $N\geq 3$.
However, with minor modifications, the proof
applies also to the case $N=2$. Let $(r,\sigma)\in [0,\infty)\times
S^{N-1}$ (with $S^{N-1}\approx\Gs$) be spherical coordinates in $B$ and put $ t=-\ln r$.
Suppose that $\mu\in B^{-s,q}(S^{N-1})$,
let $u=\mathbb P(\mu)$ and denote by $ \tilde u$ the function $u$ expressed in terms
of the coordinates $(t,\sigma)$. Then
\begin {equation}\label {equa/u1}
u_{rr}+\myfrac {N-1}{r}u_{r}+\myfrac {1}{r^{2}}\Gd_{\sigma}u=0,\quad\mbox
{in }(0,1)\times S^{N-1},
\end {equation}
and
\begin {equation}\label {equa/u2}
\tilde u_{tt}-(N-2)\tilde u_{t}+\Gd_{\sigma}\tilde u=0,\quad\mbox
{in }(0,\infty)\times S^{N-1}.
\end {equation}
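Indeed, since $r=e^{-t}$, one has $ru_{r}=-\tilde u_{t}$ and $r^{2}u_{rr}=\tilde u_{tt}+\tilde u_{t}$, so that multiplying (\ref {equa/u1}) by $r^{2}$ yields (\ref {equa/u2}).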
Then the upper estimate in (\ref {equivnorm}) takes the form
\begin{equation}\label{equivleft}
\int^\infty_0\int_{S^{N-1}}\abs{\tilde
u}^q(1-e^{-t})^{sq-1}e^{-Nt}d\sigma\,dt\leq
C\norm{\mu}^q_{B^{-s,q}(S^{N-1})}.
\end{equation}
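Here we have used that, in the coordinates $(t,\sigma)$, the volume element is $dx=r^{N-1}dr\,d\sigma=e^{-Nt}dt\,d\sigma$ and that $1-\abs x=1-e^{-t}$.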
Clearly it is sufficient to establish this inequality in the case that $\mu\in \mathfrak M(S^{N-1})$
(or even $\mu\in C^\infty(S^{N-1})$), which is assumed in the sequel.
We define $k\in\BBN^{*}$ by
\begin{equation}\label{k-def}
2(k-1)\leq s<2k,
\end{equation}
with the restriction $s>0$ if $k=1$. We denote by $\BBb$
the elliptic operator of order $2k$
$$\BBb=\left(\frac{(N-2)^{2}}{4}-\Gd_{\sigma}\right)^k
$$
and call $f$ the unique solution of
$$\mu=\BBb f\quad \mbox {in }S^{N-1}.
$$
Then $f\in B^{2k-s,q}(S^{N-1})$ since $\BBb$ is an
isomorphism between the spaces $B^{2k-s,q}(S^{N-1})$ and
$B^{-s,q}(S^{N-1})$.
Put $v=\mathbb P(f)$ in $B$, then $v$ satisfies the same
equation as $u$ in $(0,1)\times S^{N-1}$. Let $\tilde v$
denote this function in terms of the coordinates $(t,\sigma)$. Then
\begin{equation}\label{equa/v2}
\begin{cases}
\tilde L{\tilde v}:= {\tilde v}_{tt}-(N-2){\tilde v}_t+\Gd_\sigma{\tilde v}=0 &\text{in }\BBR_+\times S^{N-1},\\
{\tilde v}|_{t=0}=f, & \text{in }S^{N-1}.
\end{cases}
\end{equation}
Since the operator $\BBb$ commutes with $\Gd_{\sigma}$ and $\partial/\partial t$,
and this problem has a unique solution which is bounded near $t=\infty$,
it follows that
\begin{equation}\label{equa/v3}
\mathbb P(\BBb f)=\BBb \tilde v.
\end{equation}
Hence,
\begin{equation}\label{equa/v4}
\tilde u=\mathbb P(\mu)=\mathbb P(\BBb f)
=\BBb \tilde v.
\end{equation}
If $v^*:=e^{-t(N-2)/2}\tilde v$, then
\begin{equation}\label{equa/v*}\begin{cases}
v^*_{tt}-\frac{(N-2)^2}{4}v^* +\Gd_\sigma v^*=0, &\text{in }\BBR_+\times S^{N-1},\\
v^*(0,\cdot)=f, &\text{in } S^{N-1}.
\end{cases}
\end{equation}
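Indeed, $v^{*}_{tt}=e^{-t(N-2)/2}\left(\tilde v_{tt}-(N-2)\tilde v_{t}+\myfrac{(N-2)^{2}}{4}\tilde v\right)$, so that (\ref {equa/v2}) transforms into (\ref {equa/v*}).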
Note that
$$v^*=e^{tA}(f) \quad \text{where} \quad
A=-\paran{\frac{(N-2)^2}{4}I-\Gd_\sigma}^{1/2}\Longleftrightarrow A^{2k}=\BBb,$$
where $e^{tA}$ is the semigroup generated by $A$ in $L^q(S^{N-1})$.
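Indeed, $v^{*}$ solves the second-order equation $v^{*}_{tt}=A^{2}v^{*}$, and among its solutions $e^{\pm tA}f$ only $e^{tA}f$ remains bounded as $t\to\infty$, since $A$ is the negative square root.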
By the Lions-Peetre real interpolation method \cite {LP},
$$\left[W^{2k,q}(S^{N-1}),L^q(S^{N-1})\right]_{1-s/2k,q}=B^{2k-s,q}(S^{N-1}).
$$
Since $D(A^{2})=W^{2,q}(S^{N-1})$, $D(A^{2k})=W^{2k,q}(S^{N-1})$.
The semi-group generated by $A$ is analytic, as is any semi-group
generated by minus the square root of a nonnegative closed operator; therefore,
by \cite{Tr} p.\,96,
\begin{equation}\label{equivleft2}\begin{array}{rcl}
\norm{f}^q_{W^{2k-s,q}}&\sim& \norm{f}_{L^q(S^{N-1})}^q +
\myint{0}{\infty}\paran{t^{2k(s/2k)}\norm{A^{2k}v^*}_{L^q(S^{N-1})}}^q\dfrac{dt}{t}\\
&\sim& \norm{f}_{L^q(S^{N-1})}^q +
\myint{0}{1}\paran{t^{s}\norm{A^{2k}v^*}_{L^q(S^{N-1})}}^q\dfrac{dt}{t}\\
&=& \norm{f}_{L^q(S^{N-1})}^q +
\myint{0}{1}\paran{t^{s}e^{-t(N-2)/2}\norm{\BBb\tilde v}_{L^q(S^{N-1})}}^q\dfrac{dt}{t}
\end{array}\end{equation}
where the symbol $\sim$ denotes equivalence of norms. Therefore, by \eqref{equivleft2},
\begin{equation}\label{equivleft3}\begin{array}{rcl}
\norm{f}^q_{W^{2k-s,q}(S^{N-1})}&\geq& C \norm{f}_{L^q(S^{N-1})}^q +
C\myint{0}{1}\paran{t^{s}e^{-t(N-2)/2}\norm{\tilde u}_{L^q(S^{N-1})}}^q\dfrac{dt}{t}\\
&\geq& C \norm{f}_{L^q(S^{N-1})}^q +
C\myint{0}{1}\norm{\tilde u}_{L^q(S^{N-1})}^q e^{-Nt}t^{sq-1}dt.
\end{array}
\end{equation}
Furthermore,
\begin{equation}\label{equivleft4}\begin{array}{rcl}
\myint{0}{\infty}\norm{\tilde u}_{L^q(S^{N-1})}^q(1-e^{-t})^{sq-1} e^{-Nt}dt &\leq& C
\myint{0}{1}\norm{\tilde u}_{L^q(S^{N-1})}^q (1-e^{-t})^{sq-1} e^{-Nt}dt\\
&\leq& C\myint{0}{1}\norm{\tilde u}_{L^q(S^{N-1})}^q e^{-Nt}t^{sq-1}dt.
\end{array}
\end{equation}
This is a consequence of the inequality
$$\int_{\partial B_r}|u|^q dS\leq(r/\gr)^{N-1} \int_{\partial B_\gr}|u|^q dS,$$
which holds for $0<r<\gr$, for every harmonic function $u$ in $B$.
By a straightforward computation, this inequality
implies that
$$\int_{|x|<1}|u|^q (1-r) \,dx\leq c(\gg) \int_{\gg<|x|<1}|u|^q (1-r)\,dx,$$
for every $\gg\in(0,1)$.
\par In view of the definition of $f$,
\begin{equation}\label{equivleft5}
\norm{\mu}^q_{B^{-s,q}(S^{N-1})} \sim \norm{f}^q_{W^{2k-s,q}(S^{N-1})}.
\end{equation}
Therefore, inequality \eqref{equivleft} follows from
\eqref{equivleft3},
\eqref{equivleft4} and \eqref{equivleft5}.\\[1mm]
\vsp {1}
\mysection {The right-hand side inequality (\ref {equivnorm})}
Suppose that $\mu$ is a distribution on $S^{N-1}$
such that $\mathbb P(\mu)\in L^q(B;(1-\abs x)^{sq-1}dx)$. Then we claim that
$\mu\in B^{-s,q}(S^{N-1})$ and
\begin{equation}\label{equivright}C^{-1}{\norm \mu}_{B^{-s,q}(\Sigma)}
\leq \left(\int_{B}{\abs {\mathbb P (\mu)}}^q
(1-\abs x)^{sq-1}dx\right)^{1/q}.
\end {equation}
Because of estimate (\ref{equivleft2}) it is
sufficient to prove that
\begin{equation}\label{right1}
\norm{f}_{L^q(S^{N-1})}\leq C \norm{u}_{L^q(B, (1-r)^{sq-1}\,dx)}.
\end{equation}
With $u=\BBb v$ this relation becomes
\begin{equation}\begin{array}{l}\label{right2}
\norm{f}_{L^q(S^{N-1})}\leq C \norm{\BBb v}_{L^q(B;
(1-r)^{sq-1}\,dx)}\\
\qquad\;\;\quad\qquad\leq C \left(\myint{0}{1}{\norm
v}^q_{W^{2k,q}(S^{N-1})}(1-r)^{sq-1}\,r^{N-1}dr\right)^{1/q}.
\end{array} \end{equation}
In order to simplify the exposition, we shall first present the case
where $0<s<2$.
\vsp {1}
\subsection {\bf The case $0<s<2$}
We take $k=1$. Since the imbedding of
$B^{2-s,q}(S^{N-1})$ into $L^q(S^{N-1})$ is compact, for any $\vge>0$
there is $C_{\vge}>0$ such that
$${\norm \varphi}_{L^q(S^{N-1})}\leq \vge {\norm \varphi}_{B^{2-s,q}(S^{N-1})}+
C_{\vge}{\norm \varphi}_{L^1(S^{N-1})},\quad \forall\,\varphi\in B^{2-s,q}(S^{N-1}).
$$
Therefore the following norm for $B^{2-s,q}(S^{N-1})$ is equivalent to the
one given in (\ref{equivleft2})
\begin{equation}\label{right3}
\norm{f}^q_{B^{2-s,q}}=\norm{f}_{L^1(S^{N-1})}^q +
\myint{0}{1}\paran{t^{s}\norm{A^{2}
v^{*}}_{L^q(S^{N-1})}}^q\dfrac{dt}{t},
\end{equation}
and estimate (\ref{right2}) will be a consequence of
\begin{equation}\label{right4}
\norm{f}^q_{L^1(S^{N-1})}\leq
C\myint{0}{1}\paran{t^{s}\norm{A^{2}
v^{*}}_{L^q(S^{N-1})}}^q\frac{dt}{t}.
\end{equation}
Integrating (\ref {equa/v*}) and using the fact that
\begin{equation}\label{lim}\lim_{t\to\infty}{\norm{v^{*}}}_{L^\infty(S^{N-1})}=
\lim_{t\to\infty}{\norm{v_{t}^{*}}}_{L^\infty(S^{N-1})}=0,
\end{equation}
yields
$$v_{t}^{*}(t,\sigma)=-\myint{t}{\infty}A^{2}v^{*}(s,\sigma)ds,\quad
\forall (t,\sigma)\in (0,\infty)\times S^{N-1},
$$
and
\begin{equation}\label{right5}\begin{array}{l}
v^{*}(t,\sigma)=\myint{t}{\infty}\myint{s}{\infty}A^{2}v^{*}(\gt,\sigma)d\gt\, ds,\\
\qquad\,\quad\;=\myint{t}{\infty}A^{2}v^{*}(\gt,\sigma)(\gt-t)d\gt,
\quad \forall (t,\sigma)\in (0,\infty)\times S^{N-1}.
\end{array}\end{equation}
Letting $t\to 0$ and integrating over $S^{N-1}$, one obtains
\begin{equation}\label{right6}\begin{array}{l}
\myint{S^{N-1}}{}\abs f d\sigma\leq \myint{0}{\infty}\myint{S^{N-1}}{}\abs
{A^{2}v^{*}}\gt d\sigma d\gt \\
\qquad\qquad\;\;\quad\leq
C(N,s,q,\gd)\left(\myint{0}{\infty}\myint{S^{N-1}}{}{\abs
{A^{2}v^{*}}}^qe^{\gd \gt}\gt^{sq-1} d\sigma d\gt \right)^{1/q}
\end{array}\end{equation}
for any $\gd>0$ ($\gd$ will be taken smaller than $(N-2)q/2$ in the
sequel), where
$$C(N,s,q,\gd)=\left(\abs {S^{N-1}}\myint{0}{\infty}\gt^{(q+1-sq)/(q-1)}e^{-\gd
\gt/(q-1)}d\gt\right)^{1/q'}.
$$
Notice that the integral is convergent since
$(q+1-sq)/(q-1)>-1\Longleftrightarrow s<2$.
Going back to $\tilde v$
$$\myint{0}{\infty}\myint{S^{N-1}}{}{\abs
{A^{2}v^{*}}}^qe^{\gd \gt}\gt^{sq-1}d\sigma d\gt=
\myint{0}{\infty}\myint{S^{N-1}}{}{\abs {A^{2}\tilde v}}^qe^{(\gd-(N-2)q/2) \gt}\gt^{sq-1}d\sigma d\gt.$$
Since $u$ is harmonic
$$\myint{S^{N-1}}{}{\abs {\tilde u(\gt_{1},.)}}^qd\sigma \leq
\myint{S^{N-1}}{}{\abs {\tilde u(\gt_{2},.)}}^qd\sigma,\quad \forall
0<\gt_{2}\leq \gt_{1},
$$
or equivalently,
\begin{equation}\label{right7}\myint{S^{N-1}}{}{\abs {A^{2}\tilde v(\gt_{1},.)}}^qd\sigma \leq
\myint{S^{N-1}}{}{\abs {A^{2}\tilde v(\gt_{2},.)}}^qd\sigma,\quad \forall
0<\gt_{2}\leq \gt_{1}.
\end{equation}
Applying (\ref{right7}) between $\gt$ and $1/\gt$ for $\gt\geq1$ yields
\begin{equation}\label{right8}\begin{array}{l}\myint{1}{\infty}\myint{S^{N-1}}{}{\abs
{A^{2}\tilde v }}^qe^{(\gd-(N-2)q/2) \gt}\gt^{sq-1} d\sigma d\gt\leq
\myint{0}{1}\myint{S^{N-1}}{}{\abs
{A^{2}\tilde v }}^qe^{(\gd-(N-2)q/2) \gt^{-1}}\gt^{-sq-1} d\sigma d\gt.
\end{array}\end{equation}
Moreover there exists $C=C(N,q,\gd)>0$ such that
$$e^{(\gd-(N-2)q/2) t^{-1}}t^{-sq-1}\leq C
e^{(\gd-(N-2)q/2) t}t^{sq-1}, \quad \forall 0<t\leq 1.$$
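(This elementary inequality is verified by writing $\beta=(N-2)q/2-\gd>0$: the quotient of the left- and right-hand sides equals $C^{-1}e^{-\beta(t^{-1}-t)}t^{-2sq}$, and $e^{-\beta/t}t^{-2sq}$ remains bounded on $(0,1]$; the verification is ours.)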
Plugging this inequality into \eqref{right8} and using \eqref{right6}, one
derives
\begin{equation}\label{right9}
\myint{S^{N-1}}{}\abs f d\sigma\leq C\left(\myint{0}{1}\myint{S^{N-1}}{}{\abs
{A^{2}v^{*}}}^qe^{\gd \gt}\gt^{sq-1} d\sigma d\gt \right)^{1/q}
\end{equation}
for some positive constant $C$, from which \eqref{right4} follows.
\subsection {\bf The general case}
We assume that $k\geq 1$. Since the imbedding of $B^{2k-s,q}(S^{N-1})$ into
$L^q(S^{N-1})$ is compact, for any $\vge>0$
there is $C_{\vge}>0$ such that
$${\norm \varphi}_{L^q(S^{N-1})}\leq \vge {\norm \varphi}_{B^{2k-s,q}(S^{N-1})}+
C_{\vge}{\norm \varphi}_{L^1(S^{N-1})},\quad \forall\,\varphi\in B^{2k-s,q}(S^{N-1}).
$$
Thus the following norm for $B^{2k-s,q}(S^{N-1})$ is equivalent to the
one given in (\ref{equivleft})
\begin{equation}\label{right10}
\norm{f}^q_{B^{2k-s,q}}=\norm{f}_{L^1(S^{N-1})}^q +
\myint{0_{_{}}}{1}\paran{t^{s}\norm{A^{2k}
v^{*}}_{L^q(S^{N-1})}}^q\dfrac{dt}{t},
\end{equation}
and estimate (\ref{right2}) will follow from
\begin{equation}\label{right11}
\norm{f}^q_{L^1(S^{N-1})}\leq C
\myint{0_{_{}}}{1}\paran{t^{s}\norm{A^{2k}
v^{*}}_{L^q(S^{N-1})}}^q\frac{dt}{t}.
\end{equation}
From \eqref{right5},
\begin{equation}\label{right12}
v^{*}(t,\sigma)=\myint{t}{\infty}A^{2}v^{*}(\gt,\sigma)(\gt-t)d\gt,\quad
\forall (t,\sigma)\in (0,\infty)\times S^{N-1}.
\end{equation}
Since the operator $A^{2}$ is closed,
$$
A^{2}v^{*}(t,\sigma)=\myint{t}{\infty}A^{4}v^{*}(\gt,\sigma)(\gt-t)d\gt,
$$
and
\begin{equation}\begin{array}{l}\label{right13}
v^{*}(t,\sigma)=\myint{t}{\infty}(t_{1}-t)\myint{t_{1}}{\infty}A^{4}
v^{*}(t_{2},\sigma)(t_{2}-t_{1})dt_{2}dt_{1},\\
\qquad\quad\;\,=\myint{t}{\infty}\myint{t_{1}}{\infty}(t_{1}-t)(t_{2}-t_{1})A^{4}
v^{*}(t_{2},\sigma)dt_{2}dt_{1},\quad
\forall (t,\sigma)\in (0,\infty)\times S^{N-1}.
\end{array}\end{equation}
Iterating this process one gets, for every $ (t,\sigma)\in (0,\infty)\times
S^{N-1}$,
\begin{equation}\label{right14}
v^{*}(t,\sigma)=\myint{t}{\infty}\myint{t_{1}}{\infty}\ldots\myint{t_{k-1}}{\infty}
\prod_{j=1}^k(t_{j}-t_{j-1})A^{2k}v^{*}(t_{k},\sigma) dt_{k}dt_{k-1}\ldots dt_{1},
\end{equation}
where we have set $t=t_{0}$ in the product symbol. The following
representation formula is valid for any $k\in\BBN_{*}$.
\blemma {reduction} For any $(t,\sigma)\in (0,\infty)\times S^{N-1}$,
\begin{equation}\label{right15}
v^{*}(t,\sigma)=\myint{t}{\infty}\myfrac{(s-t)^{2k-1}}{(2k-1)!}A^{2k}v^{*}(s,\sigma) ds.
\end{equation}
\end{sub}
\note{Proof} We proceed by induction. By Fubini's theorem
$$\begin{array}{l}\myint{t}{\infty}\myint{t_{1}}{\infty}(t_{1}-t)(t_{2}-t_{1})A^{4}
v^{*}(t_{2},\sigma)dt_{2}dt_{1}=\myint{t}{\infty}A^{4}
v^{*}(t_{2},\sigma)\myint{t}{t_{2}}(t_{1}-t)(t_{2}-t_{1})dt_{1}dt_{2}\\
\qquad\quad \qquad\quad \qquad\quad \qquad\quad \qquad\quad \qquad\quad
=\myint{t}{\infty}\myfrac{(t_{2}-t)^{3}}{6}A^{4}v^{*}(t_{2},\sigma)dt_{2}.
\end{array}$$
Suppose now that for $t>0$, $\ell<k$ and any smooth function $\varphi$
defined on $(0,\infty)$,
\begin{equation}\label{right16}
\myint{t}{\infty}\myint{t_{1}}{\infty}\ldots\myint{t_{\ell-1}}{\infty}
\prod_{j=1}^\ell(t_{j}-t_{j-1})\varphi(t_{\ell}) dt_{\ell}dt_{\ell-1}\ldots dt_{1}
=\myint{t}{\infty}\myfrac{(t_{\ell}-t)^{2\ell-1}}{(2\ell-1)!}\varphi (t_{\ell}) dt_{\ell}.
\end{equation}
Then
$$\begin{array}{l}\displaystyle{\myint{t}{\infty}\myint{t_{1}}{\infty}\ldots\myint{t_{\ell}}{\infty}
\prod_{j=1}^{\ell+1}(t_{j}-t_{j-1})\varphi(t_{\ell+1})
dt_{\ell+1}dt_{\ell}\ldots dt_{1}}\\
\qquad\quad\qquad\qquad\qquad\qquad\quad=
\myint{t}{\infty}\myint{t_{1}}{\infty}\ldots\myint{t_{\ell-1}}{\infty}
\prod_{j=1}^\ell(t_{j}-t_{j-1})\Phi(t_{\ell}) dt_{\ell}dt_{\ell-1}\ldots
dt_{1},\\
\qquad\quad\qquad\qquad\qquad\qquad\quad=
\myint{t}{\infty}\myfrac{(t_{\ell}-t)^{2\ell-1}}{(2\ell-1)!}\Phi(t_{\ell})dt_{\ell},
\end{array}$$
with
$$\Phi(t_{\ell}) =\myint{t_{\ell}}{\infty}(t_{\ell+1}-t_{\ell})\varphi(t_{\ell+1}) dt_{\ell+1}.
$$
But
$$\begin{array}{l}\myint{t}{\infty}\myfrac{(t_{\ell}-t)^{2\ell-1}}{(2\ell-1)!}
\myint{t_{\ell}}{\infty}(t_{\ell+1}-t_{\ell})\varphi(t_{\ell+1})
dt_{\ell+1}dt_{\ell}\\
\qquad\qquad\qquad\qquad\qquad\qquad\quad=\myint{t}{\infty}\varphi(t_{\ell+1})
\myint{t}{t_{\ell+1}}\myfrac{(t_{\ell}-t)^{2\ell-1}}{(2\ell-1)!}(t_{\ell+1}-t_{\ell})dt_{\ell}dt_{\ell+1}\\
\qquad\qquad\qquad\qquad\qquad\qquad\quad
=\myint{t}{\infty}\varphi(t_{\ell+1})
\myint{0}{t_{\ell+1}-t}\myfrac{\gt^{2\ell-1}}{(2\ell-1)!}(t_{\ell+1}-t-\gt)d\gt dt_{\ell+1}\\
\qquad\qquad\qquad\qquad\qquad\qquad\quad
=\myint{t}{\infty}\varphi(t_{\ell+1})\myfrac {(t_{\ell+1}-t)^{2\ell+1}}{(2\ell+1)!}dt_{\ell+1}
\end{array}$$
as $\myfrac{1}{(2\ell-1)!}(\myfrac {1}{2\ell}-\myfrac
{1}{2\ell+1})=\myfrac{1}{(2\ell+1)!}$. Taking
$\varphi(t_{k})=A^{2k}v^{*}(t_{k},\sigma)$ implies \eqref{right15}.
\medskip
\medskip
\noindent {\it End of the proof}. From \eqref{right14} and Lemma \ref {l:reduction}
with $t=0$, we get
\begin{equation}\label{right17}\begin{array}{l}
\myint{S^{N-1}}{}\abs f d\sigma\leq \myint{0}{\infty}\myint{S^{N-1}}{}\abs
{A^{2k}v^{*}}\myfrac{\gt^{2k-1}}{(2k-1)!} d\sigma d\gt \\
\qquad\qquad\;\;\quad\leq
C(N,s,k,q,\gd)\left(\myint{0}{\infty}\myint{S^{N-1}}{}{\abs
{A^{2k}v^{*}}}^qe^{\gd \gt}\gt^{sq-1} d\sigma d\gt \right)^{1/q}
\end{array}\end{equation}
for any $\gd>0$ ($\gd$ will be taken smaller than $(N-2)q/2$ in the
sequel), where
$$C(N,s,k,q,\gd)=\left(\abs {S^{N-1}}\myint{0}{\infty}\gt^{(2k-s-1/q')q'}e^{-\gd
\gt/(q-1)}d\gt\right)^{1/q'}.
$$
Notice that the integral is convergent since
$(2k-s-1/q')q'>-1\Longleftrightarrow s<2k$. As in the case $s<2$ we return to $\tilde
v$ and $\tilde u=A^{2k}\tilde v $, use the harmonicity of $u$ in order to
derive
\begin{equation}\label{right18}\begin{array}{l}\myint{1}{\infty}\myint{S^{N-1}}{}{\abs
{A^{2k}\tilde v }}^qe^{(\gd-(N-2)q/2) \gt}\gt^{sq-1} d\sigma d\gt\\
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\leq
\myint{0}{1}\myint{S^{N-1}}{}{\abs
{A^{2k}\tilde v }}^qe^{(\gd-(N-2)q/2) \gt^{-1}}\gt^{-sq-1} d\sigma d\gt
\end{array}\end{equation}
as in \eqref{right8} and finally
\begin{equation}\label{right19}\begin{array}{l}
\myint{S^{N-1}}{}\abs f d\sigma\leq C\left(\myint {0}{1}\myint{S^{N-1}}{}
{\abs {A^{2k}v^{*}}}^q\gt^{sq-1}d\sigma d\gt \right)^{1/q},\\
\qquad\qquad\qquad \leq C'
\left(\myint {0}{1}\myint{S^{N-1}}{}
{\abs {\tilde u}}^q\gt^{sq-1}d\sigma d\gt \right)^{1/q},
\end{array}\end{equation}
which ends the proof of Theorem \ref {t:main}.
\medskip
\noindent {\it Remark}. If $N=2$ the lifting operator is
$$\BBb=\paran{1-\myfrac {d^{2}}{d\sigma^{2}}}^k,
$$
and the proof is similar. Moreover, since $\BBb$ is an isomorphism
between $B^{2k-s,1}(S^{1})$ and $B^{-s,1}(S^{1})$, the
result of Theorem \ref {t:main} holds also in the case $q=1$.
\mysection {A regularity result for the Green operator}
Put $(1-\abs x)=\gd (x)$. By duality between $L^q(B;\gd^{sq-1}dx)$ and
$L^{q'}(B;\gd^{sq-1}dx)$, we write
\begin{equation}\label{dual1}
\myint{B}{}\mathbb P(\mu)\psi \gd^{sq-1}dx=-\myint{B}{}\mathbb P(\mu)\Gd\zeta dx=-\myint{\Gs}{}\myfrac{\partial
\zeta}{\partial \gn}d\mu,
\end{equation}
where $\zeta$ is the solution of
\begin{equation}\label{dual2}\left\{\begin{array}{l}-\Gd\zeta=\gd^{sq-1}\psi\quad \mbox{in }B,\\
\qquad\zeta=0\qquad \mbox{on }\partial B.\end{array}\right.\end{equation}
In \eqref{dual1}, the boundary term should be written
$<\mu,\partial \zeta/\partial \gn>_{\Gs}$ if $\mu$ is a distribution
on $\Gs$. Then the adjoint operator $\mathbb P^{*}$ is defined by
\begin{equation}\label{dual3}
\mathbb P^{*}(\psi)=-\myfrac {\partial}{\partial\gn}\mathbb G(\gd^{sq-1}\psi),
\end{equation}
where $\mathbb G(\gd^{sq-1}\psi)$ is the Green potential of
$\gd^{sq-1}\psi$. Consequently, Theorem \ref {t:main} implies that
there exists a constant $C>0$ such that
\begin{equation}\label{dual4}
C^{-1}\norm {\psi}_{L^{q'}(B;\gd^{sq-1}dx)}
\leq \norm {\myfrac
{\partial}{\partial\gn}\mathbb G(\gd^{sq-1}\psi)}_{B^{s,q'}(\Gs)}
\leq C\norm {\psi}_{L^{q'}(B;\gd^{sq-1}dx)}.
\end{equation}
But
$$\psi\in L^{q'}(B;\gd^{sq-1}dx)\Longleftrightarrow \gd^{sq-1}\psi\in
L^{q'}(B;\gd^{(sq-1)(1-q')}dx).
$$
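(This follows from the pointwise identity $\abs{\gd^{sq-1}\psi}^{q'}\gd^{(sq-1)(1-q')}=\abs{\psi}^{q'}\gd^{sq-1}$; the one-line verification is ours.)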
Putting $\varphi=\gd^{sq-1}\psi$ and replacing $q'$ by $p$ implies
the following result.
\bth{dual} Let $s>0$ and $1<p<\infty$. Then
$$\varphi\in L^p(B;\gd^{p(1-s)-1}dx)\Longleftrightarrow
\myfrac {\partial}{\partial\gn}\mathbb G(\varphi)\in B^{s,p}(\Gs).
$$
Moreover there exists a constant $C>0$ such that for any
$\varphi\in L^p(B;\gd^{p(1-s)-1}dx)$
\begin{equation}\label{dual5}
C^{-1}{\norm {\varphi}}_{L^p(B;\gd^{p(1-s)-1}dx)}
\leq {\norm {\myfrac {\partial}{\partial\gn}\mathbb G(\varphi)}}_{B^{s,p}(\Gs)}
\leq C{\norm {\varphi}}_{L^p(B;\gd^{p(1-s)-1}dx)}.
\end{equation}
\end{sub}
\section{Introduction}
The first formalism for quantum mechanics in phase space was proposed by
Wigner in 1932 \cite{wig1}. He was motivated by the problem of finding a way
to improve quantum statistical mechanics, based on the density matrix, to
treat the transport equations for superfluids \cite{wig2, wig3, wig4}. Since
then, the formalism proposed by Wigner has been applied in different
contexts, such as quantum optics \cite{zak1, ViannaCezar}, condensed matter \cite{seb4, seb41, seb42}, quantum computing \cite{seb43, seb44, seb445},
quantum tomography \cite{dav02}, and plasma physics \cite{seb5, seb6, seb7, seb9, seb10, seb12}.
Wigner introduced his formalism by using a kind of Fourier transform of the
density matrix, $\rho (q,q^{\prime })$, giving rise to what is nowadays
called the Wigner function, $f_{W}(q,p)$, where $(q,p)$ are coordinates of a
phase space manifold $(\Gamma )$. The Wigner function is identified as a
quasi-probability density in the sense that $f_{W}(q,p)$ is real but not
positive definite, and as such cannot be interpreted as a probability.
However, the integrals $\sigma (q)=\int f_{W}(q,p)dp$ and $\sigma (p)=\int
f_{W}(q,p)dq$ are distribution functions \cite{wig1, wig2}.
In the Wigner formalism each quantum operator $A$ in the Hilbert space is
associated with a function $a_{W}(q,p)$ defined in $\Gamma $. The
application $\Omega _{W}:A\rightarrow a_{W}(q,p)$ is such that the associative
algebra of operators in $\mathcal{H}$ defines an associative but
noncommutative algebra in $\Gamma $. The noncommutativity stems from the nature
of the product between two operators in $\mathcal{H}$. Given two operators
$A$ and $B$, we have the mapping $\Omega :AB\rightarrow a_{W}(q,p)\star
b_{W}(q,p)$, where the star (or Moyal) product $\star $ is defined by \cite{moy2}
\begin{equation}
a_{w}(q,p)\star b_{w}(q,p)=a_{w}(q,p)\exp [\frac{i}{2}(\frac{\overleftarrow{\partial }}{\partial q}\frac{\overrightarrow{\partial }}{\partial p}-\frac{\overleftarrow{\partial }}{\partial p}\frac{\overrightarrow{\partial }}{\partial q})]b_{w}(q,p). \label{1}
\end{equation}
(Throughout this Letter we use natural units: $\hbar =c=1$.) Note that Eq.~(\ref{1}) can be seen as an operator $\hat{A}=a_{W}(q,p)\star $ acting on
functions $b_{W}(q,p)$, such that $\hat{A}(b_{W})=a_{W}\star b_{W}$. In this
sense, we can study unitary representations of Lie groups in phase space
using the Moyal product as defined by the operators $\hat{A}$. This gives
rise, for instance, to the Klein-Gordon and Dirac equations written in phase
space \cite{seb1, seb2, seb22, sig1}. The connection with the Wigner function is
derived, providing a physical interpretation for the representation. As a
consequence, these symplectic representations are a way to consider the
Wigner methods on the basis of symmetry groups. In the present work, we
apply this symplectic formalism to solve Dirac equation with electromagnetic
interaction in phase space. These results provide a starting point for our
analysis of nonclassical electromagnetic radiation states in phase space.
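As a simple illustration of Eq.~(\ref{1}) (this worked example is ours), take $a_{w}=q$ and $b_{w}=p$: the exponential series truncates after the first-derivative term, giving
$$q\star p=qp+\frac{i}{2},\qquad p\star q=qp-\frac{i}{2},$$
so that $q\star p-p\star q=i$, which reproduces the canonical commutation relation at the level of the Moyal product.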
The presentation of this Letter is organized in the following way. In
section 2, we define a Hilbert space $\mathcal{H}(\Gamma)$ over a phase
space $\Gamma$ with its natural relativistic symplectic structure. In section
3, we study the Poincar\'{e} algebra in $\mathcal{H}(\Gamma)$ and the
representation for spin 1/2. In section 4, the Dirac equation in phase space
with electromagnetic radiation is considered. Quasi-amplitudes of
probability are derived. In section 5, final concluding remarks are
presented.
\section{Hilbert Space and Symplectic Structure}
Consider $M$ an $n$-dimensional analytical manifold where each point is
specified by Minkowski coordinates $q^{\mu}$, with $\mu=0,1,2,3$ and
metric specified by $diag(g)=(-+++)$. The coordinates of each point in
$T^{*}M$ will be denoted by $(q^{\mu},p^{\mu})$. The space $T^{*}M$ is
equipped with a symplectic structure by introducing a 2-form
\begin{equation}
\omega =dq^{\mu}\wedge dp_{\mu},
\end{equation}
called the symplectic form (sum over repeated indices is assumed). Consider
the following bidifferential operator on $C^{\infty}(T^{\ast}M)$:
\begin{equation}
\Lambda=\frac{\overleftarrow{\partial}}{\partial q^{\mu}}\frac{\overrightarrow{\partial}}{\partial p_{\mu}}-\frac{\overleftarrow{\partial}}{\partial p^{\mu}}\frac{\overrightarrow{\partial}}{\partial q_{\mu}},
\end{equation}
such that for $C^{\infty}$ functions, $f=f(q^{\mu},p^{\mu})$ and
$g=g(q^{\mu},p^{\mu})$, we have
\begin{equation}
\{f,g\}=\omega(f\Lambda,g\Lambda)=f\Lambda g,
\end{equation}
where
\begin{equation}
\{f,g\}=\frac{\partial f}{\partial q^{\mu}}\frac{\partial g}{\partial p_{\mu}}-\frac{\partial f}{\partial p^{\mu}}\frac{\partial g}{\partial q_{\mu}}
\end{equation}
is the Poisson bracket and $f\Lambda$ and $g\Lambda$ are two vector fields
given by $h\Lambda=X_{h}=-\{h,\cdot\}$. The space $T^{\ast}M$
endowed with this symplectic structure is called the phase space, and will
be denoted by $\Gamma$.
The notion of Hilbert space associated with the phase space $\Gamma $ is
introduced by considering the set of square integrable functions, $\phi
(q^{\mu },p^{\mu })$ in $\Gamma $, such that
\begin{equation}
\int dp^{\mu }dq^{\mu }\phi ^{\ast }(q^{\mu },p^{\mu })\phi (q^{\mu },p^{\mu
})<\infty .
\end{equation}
Then, we can write $\phi (q^{\mu },p^{\mu })=\langle q^{\mu },p^{\mu }|\phi
\rangle $, with
\begin{equation}
\int dp^{\mu }dq^{\mu }|q^{\mu },p^{\mu }\rangle \langle q^{\mu },p^{\mu
}|=1,
\end{equation}
where $\langle \phi |$ is the dual vector of $|\phi \rangle $. We call this the
Hilbert space $\mathcal{H}(\Gamma )$.
\section{Poincar\'{e} Algebra and Dirac Equation in Phase Space}
Using the $\star$-operators, $\hat{A}=a_{W}(q,p)\star $, we define 4-momentum
and 4-position operators, respectively, by
\begin{equation}
\hat{P}^{\mu }=p^{\mu }\star =p^{\mu }\exp [\frac{i}{2}(\frac{\overleftarrow{\partial }}{\partial q^{\mu }}\frac{\overrightarrow{\partial }}{\partial p_{\mu }}-\frac{\overleftarrow{\partial }}{\partial p^{\mu }}\frac{\overrightarrow{\partial }}{\partial q_{\mu }})]=p^{\mu }-\frac{i}{2}\frac{\partial }{\partial q_{\mu }}, \label{mom}
\end{equation}
\begin{equation}
\hat{Q}^{\mu }=q^{\mu }\star =q^{\mu }\exp [\frac{i}{2}(\frac{\overleftarrow{\partial }}{\partial q^{\mu }}\frac{\overrightarrow{\partial }}{\partial p_{\mu }}-\frac{\overleftarrow{\partial }}{\partial p^{\mu }}\frac{\overrightarrow{\partial }}{\partial q_{\mu }})]=q^{\mu }+\frac{i}{2}\frac{\partial }{\partial p_{\mu }}. \label{pos}
\end{equation}
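A direct computation (ours) confirms that these operators realize the Heisenberg commutation relation on $\mathcal{H}(\Gamma )$:
$$[\hat{Q}^{\mu },\hat{P}^{\nu }]=\Big[q^{\mu },-\frac{i}{2}\frac{\partial }{\partial q_{\nu }}\Big]+\Big[\frac{i}{2}\frac{\partial }{\partial p_{\mu }},p^{\nu }\Big]=\frac{i}{2}g^{\mu \nu }+\frac{i}{2}g^{\mu \nu }=ig^{\mu \nu }.$$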
From Eqs. (\ref{mom}) and (\ref{pos}), we can introduce the quantity $\hat{M}_{\mu \sigma }=\hat{Q}_{\mu }\hat{P}_{\sigma }-\hat{Q}_{\sigma }\hat{P}_{\mu }$. The operators $\hat{P}^{\mu }$ and $\hat{M}_{\mu \sigma }$ are defined
in the Hilbert space, $\mathcal{H}(\Gamma )$, constructed with complex
functions in the phase space $\Gamma $, and satisfy the set of commutation
relations
\begin{equation}
\lbrack \hat{M}_{\mu \nu },\hat{P}_{\sigma }]=i(g_{\nu \sigma }\hat{P}_{\mu }-g_{\sigma \mu }\hat{P}_{\nu }),
\end{equation}
\begin{equation}
\lbrack \hat{P}_{\mu },\hat{P}_{\nu }]=0,
\end{equation}
\begin{equation}
\lbrack \hat{M}_{\mu \nu },\hat{M}_{\sigma \rho }]=-i(g_{\mu \rho }\hat{M}_{\nu \sigma }-g_{\nu \rho }\hat{M}_{\mu \sigma }+g_{\mu \sigma }\hat{M}_{\rho \nu }-g_{\nu \sigma }\hat{M}_{\rho \mu }).
\end{equation}
This is the Poincar\'{e} algebra, where $\hat{M}_{\mu \nu }$ stands for
rotations and $\hat{P}_{\mu }$ for translations (but notice, in phase
space). The Casimir invariants are calculated by using the Pauli-Lubanski
matrices, $\hat{W}_{\mu }=\frac{1}{2}\epsilon _{\mu \nu \rho \sigma }\hat{M}^{\nu \sigma }\hat{P}^{\rho }$, where $\epsilon _{\mu \nu \rho \sigma }$ is
the Levi-Civita symbol. The invariants are
\begin{equation}
\hat{P}^{2}=\hat{P}^{\mu }\hat{P}_{\mu },
\end{equation}
and
\begin{equation}
\hat{W}^{2}=\hat{W}^{\mu }\hat{W}_{\mu },
\end{equation}
where $\hat{P}^{2}$ stands for the mass shell condition and $\hat{W}^{2}$
for the spin.
To determine the Klein-Gordon field equation, we consider a scalar
representation in $\mathcal{H}(\Gamma )$. In this case, we can use the
invariant $\hat{P}^{2}$ to write
\begin{eqnarray}
\hat{P}^{2}\phi (q^{\mu },p^{\mu }) &=&(p^{2})\star \phi (q^{\mu },p^{\mu }),
\notag \\
&=&(p^{\mu }\star p_{\mu }\star )\phi (q^{\mu },p^{\mu }), \notag \\
&=&m^{2}\phi (q^{\mu },p^{\mu }), \label{kg1}
\end{eqnarray}
where $m$ is a constant fixing the representation and interpreted as mass,
such that the mass shell condition is satisfied. Using Eq. (\ref{mom}), we
obtain
\begin{equation}
\left( p^{\mu }p_{\mu }-ip^{\mu }\frac{\partial }{\partial q^{\mu }}-\frac{1}{4}\frac{\partial }{\partial q^{\mu }}\frac{\partial }{\partial q_{\mu }}\right) \phi (q^{\mu },p^{\mu })=m^{2}\phi (q^{\mu },p^{\mu }), \label{kg2}
\end{equation}
which is the Klein-Gordon equation in phase space.
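(Indeed, by Eq. (\ref{mom}), $p^{\mu }\star p_{\mu }\star \phi =\left(p^{\mu }-\frac{i}{2}\frac{\partial }{\partial q_{\mu }}\right)\left(p_{\mu }-\frac{i}{2}\frac{\partial }{\partial q^{\mu }}\right)\phi$; since $p^{\mu }$ and $\partial /\partial q^{\mu }$ commute, the two cross terms combine into $-ip^{\mu }\partial \phi /\partial q^{\mu }$, which gives Eq. (\ref{kg2}). This intermediate step is ours.)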
The association of this representation with the Wigner formalism is given by
\cite{seb2}
\begin{equation} \label{w1}
f_W(q^{\mu},p^{\mu})=\phi(q^{\mu},p^{\mu})\star\phi^{\ast}(q^{\mu},p^{\mu}),
\end{equation}
where $f_W(q^{\mu},p^{\mu})$ is the relativistic Wigner function.
The representation for spin-$1/2$ leads to
\begin{equation} \label{d1}
\gamma^{\mu}\left(p_{\mu}-\frac{i}{2}\frac{\partial}{\partial q^{\mu}}\right)\psi(q^{\mu},p^{\mu})=m\psi(q^{\mu},p^{\mu}),
\end{equation}
which is the Dirac equation in phase space, where the $\gamma^{\mu}$-matrices fulfill the usual Clifford algebra, $(\gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu})=2g^{\mu\nu}$. The Wigner function, in this case,
is given by \cite{seb2}
\begin{equation} \label{w2}
f_W(q^{\mu},p^{\mu})=\psi(q^{\mu},p^{\mu})\star\overline{\psi}(q^{\mu},p^{\mu}),
\end{equation}
where $\overline{\psi}(q^{\mu},p^{\mu})=\psi^{\dagger}(q^{\mu},p^{\mu})\gamma^0$, with $\psi^{\dagger}(q^{\mu},p^{\mu})$ being the Hermitian conjugate of
$\psi(q^{\mu},p^{\mu})$. We point out that the CPT theorem holds for non-commutative theories as shown in \cite{ch}. Therefore, such a theorem is also valid in phase space since the group structure remains the same.
One central point to be emphasized is that the approach developed here permits the calculation of Wigner functions for relativistic systems with symmetry-based
methods similar to those used in quantum field theory.
\section{Solution of Dirac Equation with Electromagnetic Interaction in
Phase Space}
In this section, we study interactions of a spin-1/2 charged particle with
an external electromagnetic field in phase space. The relevant equation is
the Dirac equation with minimal coupling
\begin{equation}
\left( \gamma ^{\mu }\hat{P}_{\mu }+m\right) \Psi =0,
\end{equation}
where
\begin{equation}
\hat{P}_{\mu }\rightarrow \hat{P}_{\mu }-e\hat{A}_{\mu }
\end{equation}
is the minimal coupling prescription, with $\hat{A}^{i}=\frac{1}{2}\epsilon
^{ijk}B_{j}\hat{X}_{k}$ and $\hat{A}^{0}=0$, which represents the chosen
gauge. We also choose the magnetic field as $\mathbf{B}=(0,0,B)$. Thus, we
have
\begin{equation}
\left[ \gamma ^{\mu }\left( \hat{P}_{\mu }-e\hat{A}_{\mu }\right) +m\right]
\Psi =0. \label{edirac}
\end{equation}
Now, we make the definition
\begin{equation}
\Psi =\left[ \gamma ^{\mu }\left( \hat{P}_{\mu }-e\hat{A}_{\mu }\right) -m\right] \psi . \label{defsol}
\end{equation}
In order to obtain the energy levels, we substitute Eq. (\ref{defsol}) into
Eq. (\ref{edirac}) to give
\begin{equation}
\left[ \gamma ^{\mu }\gamma ^{\nu }\left( \hat{P}_{\mu }-e\hat{A}_{\mu
}\right) \left( \hat{P}_{\nu }-e\hat{A}_{\nu }\right) -m^{2}\right] \psi =0,
\label{ediracc}
\end{equation}
with
\begin{equation}
\gamma ^{\mu }\gamma ^{\nu }=g^{\mu \nu }+\sigma ^{\mu \nu }\,,
\end{equation}
where
\begin{equation}
\sigma ^{\mu \nu }=\frac{i}{2}(\gamma ^{\mu }\gamma ^{\nu }-\gamma ^{\nu
}\gamma ^{\mu })=\frac{i}{2}[\gamma ^{\mu },\gamma ^{\nu }]. \label{sigma}
\end{equation}
The components $\sigma ^{0i}$, $\sigma ^{ij}$ of the operator (\ref{sigma})
are
\begin{equation}
\sigma ^{0i}=i\left(
\begin{array}{cc}
0 & \sigma ^{i} \\
\sigma ^{i} & 0
\end{array}
\right) ,\text{ \ \ }\sigma ^{ij}=-\left(
\begin{array}{cc}
\epsilon _{ijk}\sigma ^{k} & 0 \\
0 & \epsilon _{ijk}\sigma ^{k}
\end{array}
\right) .
\end{equation}
Note that these components are also expressed as $\sigma ^{0j}=i\alpha ^{j},$
$\sigma ^{ij}=-\epsilon _{ijk}\Sigma ^{k}.$ These results are explicitly
evaluated in the following representation of the $\gamma $-matrices:
\begin{align}
\gamma ^{0}& =\left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right) ,\;\;\;\gamma ^{i}=\left(
\begin{array}{cc}
0 & \sigma ^{i} \\
-\sigma ^{i} & 0
\end{array}
\right) ,\;\;\;\gamma _{5}=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}
\right) , \notag \\
\alpha ^{i}& =\left(
\begin{array}{cc}
0 & \sigma ^{i} \\
\sigma ^{i} & 0
\end{array}
\right) ,\;\;\;\Sigma ^{k}=\left(
\begin{array}{cc}
\sigma ^{k} & 0 \\
0 & \sigma ^{k}
\end{array}
\right) ,
\end{align}
with $\mathbf{\sigma }=(\sigma _{x},\sigma _{y},\sigma _{z})$ being the
Pauli matrices. Equation (\ref{ediracc}) can be written as
\begin{equation}
\left\{ \hat{P}^{\mu }\hat{P}_{\mu }-e\left( \hat{P}^{\mu }\hat{A}_{\mu }+\hat{A}^{\mu }\hat{P}_{\mu }\right) +e^{2}\hat{A}^{\mu }\hat{A}_{\mu }+e\sigma ^{\mu \nu }\left[ \hat{P}^{\nu },\hat{A}_{\mu }\right] -m^{2}\right\} \psi =0.
\end{equation}
Confining the motion to the plane $\hat{X}\hat{Y}$ by the choice $\hat{P}_{3}=0$ and using the operators $\hat{P}_{\mu }=p_{\mu }-\frac{i}{2}\frac{\partial }{\partial x^{\mu }}$ and $\hat{X}_{\mu }=x_{\mu }+\frac{i}{2}\frac{\partial }{\partial p^{\mu }}$, we get the following equation
\begin{align}
& \left( -E^{2}-iE\frac{\partial }{\partial t}+\frac{1}{4}\frac{\partial ^{2}}{\partial t^{2}}-m^{2}\right) \psi +\Biggl\{p_{x}^{2}+p_{y}^{2}-\frac{1}{4}\left( \frac{\partial ^{2}}{\partial x^{2}}+\frac{\partial ^{2}}{\partial y^{2}}\right)  \notag \\
& -eB\left[ \frac{i}{2}\left( p_{y}\frac{\partial }{\partial p_{x}}-p_{x}\frac{\partial }{\partial p_{y}}\right) +\frac{1}{4}\left( \frac{\partial ^{2}}{\partial y\partial p_{x}}-\frac{\partial ^{2}}{\partial x\partial p_{y}}\right) \right]  \notag \\
& -i\left( p_{y}\frac{\partial }{\partial y}-p_{x}\frac{\partial }{\partial x}\right) -eB\left[ \left( xp_{y}-yp_{x}\right) -\frac{i}{2}\left( x\frac{\partial }{\partial y}-y\frac{\partial }{\partial x}\right) \right]  \notag \\
& +\frac{e^{2}B^{2}}{4}\left[ \left( x+\frac{i}{2}\frac{\partial }{\partial p_{x}}\right) ^{2}+\left( y+\frac{i}{2}\frac{\partial }{\partial p_{y}}\right) ^{2}\right] +ieB\sigma ^{12}\Biggr\}\psi =0. \label{dracph}
\end{align}
Since $\gamma _{5}$ commutes with all terms of Eq. (\ref{dracph}), if
we have found a solution $\psi $, then $\gamma
_{5}\psi $ is also a solution. In this case, the wave function can
take on one of the forms
\begin{equation}
\psi =\left(
\begin{array}{c}
\psi _{1} \\
\psi _{2} \\
\psi _{3} \\
\psi _{4}
\end{array}
\right) ,~~~\psi =\left(
\begin{array}{c}
\psi _{1} \\
\psi _{2} \\
-\psi _{1} \\
-\psi _{2}
\end{array}
\right) . \label{newsol}
\end{equation}
Thus, we can select only one of these solutions, as that will make the other
redundant. We specialize to the solution
\begin{equation}
\gamma _{5}\psi =\psi . \label{newsolb}
\end{equation}
We can write $\psi $ in terms of $\Psi $ in Eq. (\ref{defsol}) as
\begin{equation}
\psi =\frac{1}{2}\left( I+\gamma _{5}\right) \Psi , \label{solp}
\end{equation}
where $I$ is the unit matrix. From Eq. (\ref{newsol}), we can show that Eq.
(\ref{solp}) can be put in the form
\begin{equation}
\psi =\left(
\begin{array}{c}
\chi (E,t,p_{x},p_{y},x,y) \\
-\chi (E,t,p_{x},p_{y},x,y)
\end{array}
\right) =\left(
\begin{array}{c}
\varphi (E,t)\phi \left( p_{x},p_{y},x,y\right) \\
-\varphi (E,t)\phi \left( p_{x},p_{y},x,y\right)
\end{array}
\right) , \label{soltrue}
\end{equation}
where $\chi (E,t,p_{x},p_{y},x,y)$ is a two-component wavefunction. Note
that, in the representation (\ref{soltrue}), the upper two components of Eq.
(\ref{dracph}) are now completely decoupled from the lower two. So, we have,
in two-component form, the following equations:
\begin{equation}
\left( -E^{2}-iE\frac{\partial }{\partial t}+\frac{1}{4}\frac{\partial ^{2}}{\partial t^{2}}-m^{2}\right) \varphi =\lambda ^{2}\varphi , \label{solE}
\end{equation}
\begin{align}
& \Biggl\{p_{x}^{2}+p_{y}^{2}-\frac{1}{4}\left( \frac{\partial ^{2}}{\partial x^{2}}+\frac{\partial ^{2}}{\partial y^{2}}\right) -i\left( p_{y}\frac{\partial }{\partial y}-p_{x}\frac{\partial }{\partial x}\right)  \notag \\
& -eB\Big[\left( xp_{y}-yp_{x}\right) +\frac{i}{2}\left( p_{y}\frac{\partial }{\partial p_{x}}-p_{x}\frac{\partial }{\partial p_{y}}\right)  \notag \\
& -\frac{i}{2}\left( x\frac{\partial }{\partial y}-y\frac{\partial }{\partial x}\right) +\frac{1}{4}\left( \frac{\partial ^{2}}{\partial y\partial p_{x}}-\frac{\partial ^{2}}{\partial x\partial p_{y}}\right) \Big]  \notag \\
& +\frac{e^{2}B^{2}}{4}\left[ \left( x+\frac{i}{2}\frac{\partial }{\partial p_{x}}\right) ^{2}+\left( y+\frac{i}{2}\frac{\partial }{\partial p_{y}}\right) ^{2}\right] +ieB\sigma ^{12}\Biggr\}\phi =\lambda ^{2}\phi ,
\label{solxp}
\end{align}
where $\lambda$ is a constant. We point out that $E$ is not associated to $i\frac{\partial }{\partial t}$ a priori in Eq. (\ref{solE}); thus the energy comes from $\lambda$. Note that Eqs. (\ref{solE}) and (\ref{solxp}) determine $\chi $ and, hence, from Eq. (\ref{solp}), they determine $\psi$. Performing a change of variables in Eq. (\ref{solxp}) of the form
\begin{equation*}
z=p_{x}^{2}+p_{y}^{2}+eB\left( yp_{x}-xp_{y}\right) +\frac{e^{2}B^{2}}{4}\left( x^{2}+y^{2}\right) ,
\end{equation*}
we note that the imaginary part of the equation vanishes, which yields
\begin{equation}
z\phi -e^{2}B^{2}\dot{\phi}-e^{2}B^{2}z\ddot{\phi}=\left( \lambda
^{2}+seB\right) \phi ,
\end{equation}
where we have used $i\sigma ^{12}\phi =-s\phi $, with $s=\pm 1$. If we use
$\omega =z/eB$ and $\phi =\exp {(-\omega )}F(\omega )$, the equation for
$F(\omega )$ is found to be
\begin{equation}
\omega F^{^{\prime \prime }}+(1-2\omega )F^{^{\prime }}-(1-k)F=0,
\label{hyper}
\end{equation}
where $F^{\prime }\equiv \frac{dF}{d\omega }$ and $k=(\lambda ^{2}+seB)/eB$.
Equation (\ref{hyper}) is of the confluent hypergeometric equation type
\begin{equation}
zF^{\prime \prime }(z)+(b-z)F^{\prime }(z)-aF(z)=0.
\end{equation}
In this manner, the general solution for Eq. (\ref{hyper}) is given by
\begin{equation}
F\left( \omega \right) =\mathit{A}_{\mathit{m}}\,{\mathrm{M}\left( \frac{1}{2}-\frac{\,k}{2},\,1,\,2\,\omega \right) }+\mathit{B}_{\mathit{m}}\,{\mathrm{U}\left( \frac{1}{2}-\frac{\,k}{2},\,1,\,2\,\omega \right) ,} \label{kummer}
\end{equation}
where $\mathrm{M}(a,b,z)$, $\mathrm{U}(a,b,z)$ are the Kummer functions, and
$\mathit{A}_{\mathit{m}}$, $\mathit{B}_{\mathit{m}}$ are constants. Since
only $\mathrm{U}(a,b,z)$ is square integrable, we consider it as a physical
solution. Thus, we can impose that $\mathit{A}_{\mathit{m}}=0$. Furthermore,
if $a=-n$, with $n=0,1,2,\ldots ,$ the series $\mathrm{U}(a,b,z)$ becomes a
polynomial in $z$ of degree not exceeding $n$. From this condition, we can
write
\begin{equation}
1-k=-2n, \label{cpd}
\end{equation}
from which we can extract the relation
\begin{equation}
\lambda ^{2}=eB\left( 2n+1+s\right) . \label{engy}
\end{equation}
The wave function is given by
\begin{equation}
f_{m}(z)=\mathit{B}_{\mathit{m}}\,\,\mathrm{M}\,{\left( -n,\,1,\,\frac{2\,z}{eB}\right) }, \label{wf1}
\end{equation}
where $\mathit{B}_{\mathit{m}}$ is a normalization constant.
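For instance (this worked case is our illustration), for the lowest level $n=0$ one has $k=1$, so that $\frac{1}{2}-\frac{k}{2}=0$ and $\mathrm{M}(0,1,2z/eB)=1$; hence $f_{0}(z)=\mathit{B}_{0}$, $\phi =\mathit{B}_{0}e^{-z/eB}$, and Eq. (\ref{engy}) gives $\lambda ^{2}=eB(1+s)$.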
The Wigner function related to the Dirac equation with an electromagnetic interaction is formally given by
$$
f_W(x,y,p_x,p_y)=\Psi_n(x,y,p_x,p_y)\star\overline{\Psi}_n(x,y,p_x,p_y).
$$
The Wigner function can then be used to determine mean values; this result can be useful in theoretical and applied areas such as quantum optics, quantum tomography and quantum computing. We point out that the Landau levels which appear in expression (\ref{engy}) represent, as a matter of fact, a planar oscillator, and that the variable $z$ in Eq. (\ref{wf1}) gives us information about the symplectic structure.
\section{Conclusion}
We have set forth a symplectic representation of the Poincar\'{e} group,
which yields quantum theories in phase space. We have derived the
Klein-Gordon and Dirac equations in phase space and, as illustrations,
studied the Dirac equation with electromagnetic interaction. The symplectic
representation is constructed on the basis of the Moyal or star product, an
ingredient of noncommutative geometry. A Hilbert space is then defined from
a manifold with the features of phase space. The states are represented by a
quasi-amplitude of probability, a wave function in phase space, the
definition of which makes connection with the Wigner function, i.e., the
quasi-probability density. Nontrivial, yet consistent, the association with
the Wigner function provides a physical interpretation of the theory.
Analogous interpretations are not found in other studies of representations
in phase space [25, 26]. One aspect of the procedure deserves emphasis. Our
formalism explores unitary representations to calculate Wigner functions.
This constitutes an important advantage over the more traditional
constructions of the Wigner method, which entail several intricacies
associated with the Liouville-von Neumann equation. Furthermore, the
formalism we have described opens new perspectives for applications of the
Wigner function method in quantum field theory. This aspect of the formalism
will be discussed in a forthcoming paper.
\section*{Acknowledgments}
This work was supported by the CNPq, Brazil, Grants No. 482015/2013-6 (Universal) and No. 306068/2013-3 (PQ); FAPEMA, Brazil, Grants No. 00845/13 (Universal) and No. 01852/14 (PRONEM).
\section{Introduction}
Research in Psychology has shown that humans handle the complexity of the real world by biasing or constraining their action choice at a given moment based on known object-related actions. In particular, recent fMRI studies show that the human brain uses \textit{action codes} -- automatically evoked memories of prototypical actions that are related to a given object -- to bias or constrain expectation on upcoming manipulations \cite{schubotz2014objects}. In effect, given an object, our brain simplifies the action selection process by constraining the decision to a predefined set of known actions. For instance, a knife and an apple seen together evoke the action codes of ``cutting apple with knife'' and ``peeling apple with knife''.
Our work presents an analogous mechanism for computational agents, showing that automatically generated action groupings can be used to improve the computational efficiency of both task planning and learning by constraining the action space. We present the \textit{Action Category Representation (ACR)}: an algorithm-agnostic, abstract data representation that encodes a mapping from objects to action categories (groups of actions) for a task. Specifically, we incorporate the idea of \textit{action codes} as the action categorization mechanism. We formally define an action code as the tuple:
\centerline{$((o_1, o_2 ... o_j),(a_1, a_2 ... a_k))$}
\noindent Where $(o_1, ... o_j)$ represents a set of objects and $(a_1, ... a_k)$ represents the set of actions associated with them for the task. For instance, the action code corresponding to the knife and apple example above is $((apple,knife), (peel,cut))$. In our work, we use action codes to build the Action Category Representation that can be used to improve computational performance in both task planning and reinforcement learning.
Action codes are closely related to the concept of object affordances \cite{gibson1977perceiving,mcgrenere2000affordances}, which are defined as action possibilities available to the agent for a given object. Affordances function by priming specific actions for the user by virtue of the object's physical properties (shape, size etc.). In contrast, action codes do not derive from the physical properties of objects, rather from the associative memories of what we use the objects for during everyday tasks. Thus, the notion of affordances is often independent of the task \cite{ellis2000micro,tucker1998relations} while action codes take the task into account. ACR builds on the notion of action codes, enabling an agent to learn object-action mappings based on prior experience.
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{imgs/ACR_flow}
\captionsetup{width=\linewidth}
\caption{Objects in the low-level environment state are mapped via ACR to action categories to restrict the action set used in the planning or RL techniques}
\label{fig:ACR_flow}
\end{figure}
Within a computational framework, the primary benefit of ACR is to reduce the choice of actions the agent must consider. Thus, ACR serves as a layer of abstraction between low-level state information and the learning or planning technique used to control the agent (Figure \ref{fig:ACR_flow}). In this paper we:
\begin{enumerate}
\item describe the process of constructing ACR from the agent's experience or human demonstration of a task;
\item show that ACR has formal computational bounds that guarantee its use leads to, in the worst case, the same, and in the common case, much improved computational performance over traditional techniques that consider objects and actions without categorization;
\item present the computational benefits of using ACR in conjunction with PDDL planning to reduce planning time; and
\item present the computational benefits of using ACR with Q-learning to achieve improved learning performance.
\end{enumerate}
We validate ACR performance in two virtual domains: StarCraft and Lightworld. We conclude the paper with a discussion of our work and potential future uses of ACR.
\section{Related Work}
In this section, we position our paper in relation to existing work.
\subsection{Affordance Learning}
As discussed above, the concept of action codes is closely related to action affordances and affordance learning. Affordances model relationships between individual object \textit{properties} (shape, size, color etc.) to actions and observed effects and are formally defined as
\noindent \textit{(effect,(object,behavior))} \cite{csahin2007afford}. In contrast, action codes and by extension ACR, relate only semantic labeling and a holistic perception of objects such as ``cup'' or ``box'' to appropriate actions for a task.
Traditional approaches to affordance learning often involve ``behavioral babbling'' \cite{stoytchev2005behavior,montesano2008learning,lopes2007affordance} wherein the agent physically interacts with objects in a goal-free manner to discover their affordances. Hence, the resulting affordance representation is dissociated from a task, focusing instead on object properties. Such approaches involve several agent-object interactions, affecting the scalability of the learning process and making it unfeasible in situations where there is an implicit cost or time constraint on the robot. ACR helps mitigate this cost by the grouping of actions into categories.
Two works closest to our approach are \cite{kjellstrom2011visual} and \cite{sun2010learning}. In \cite{kjellstrom2011visual}, Kjellstrom et al. describe an approach to visual object-action recognition that use demonstrations to categorize semantically labeled objects based on their functionality. This approach bridges the gap between affordance learning and task context since the learning is coupled with a task demonstration. However, it is unclear how the system would incorporate previously unseen objects unless they are observed from additional demonstrations. For instance, given a demonstration of pouring water into a ``cup'', the agent would require additional demonstrations to identify the similar functionality of a ``bowl''.
Sun et al. in \cite{sun2010learning} learn visual object categories for affordance prediction (Category-Affordance model), reducing the physical interactions with the objects. They use visual features of objects to categorize them on the basis of their functionality. However, it is unclear how the agent would deal with changing features and categories \cite{min2016affordance}, since the model is learned offline as compared to ACR which allows online learning of new objects and categories (Details in Sec 3). Regardless, their approach highlights some of the benefits of categorization on the scalability of learning, which motivates our work.
\subsection{Precondition Learning}
Preconditions can be expressed using predicates which may or may not relate to object affordances. For instance, ``At'' or ``isEmpty'' are object states whereas ``graspable'' is an affordance predicate \cite{lorken2008grounding}. Object-Action Complexes or OACs \cite{geib2006object} include instances of affordances as preconditions in the OAC instantiation. Their approach learns an ``object'' after physical interaction with it, i.e, there is no notion of an object prior to the interaction. For instance, the representation of a cube is learned after the agent grasps a planar surface. Other approaches such as \cite{ekvall2008robot} learn high-level task constraints and preconditions from demonstrations. In contrast to these approaches, ACR categorizes objects on the basis of action codes to improve planning performance as well as the learning performance of RL algorithms.
\subsection{Learning from Demonstration}
Human demonstrations have been used for both high-level task learning and low-level skill learning \cite{chernova2014robot}; a traditional assumption of LfD is that the human demonstrator is an expert, and the demonstrations are examples of desirable behavior that the agent should emulate. Our work focuses on high level task learning, but considers demonstrations more broadly as examples of what the agent \textit{can} do, rather than what it \textit{should}. This interpretation of the data enables our technique to benefit even from non-expert human users. Demonstration errors can be classified to one of 3 categories \cite{chernova2014robot}: Correct but suboptimal (contains extra steps), conflicting or inconsistent (user demonstrates 2 different actions from the same state) and entirely wrong (user took a wrong action) and we demonstrate the robustness of ACR to suboptimal demonstrations in Sec 6.
\subsubsection{LfD in planning}
Abdo et al. in \cite{abdo2012low} discuss the learning of predicates by analyzing variations in demonstrations. The learned predicates are then applied to plan for tasks and accommodate environmental changes. Kadir et al. in \cite{uyanik2013learning} demonstrate execution of a task by leveraging human interactions. The agent interacts with all of the objects using \textit{all} of the precoded behaviors in its repertoire and uses forward chaining planning to accomplish the task goal. However, with an increasing number of behaviors and objects, the search space for the planner can become quite large. Our approach using ACR can help reduce the action space, making planning easier.
\subsubsection{LfD in RL}
Thomaz and Breazeal in \cite{thomaz2006reinforcement} discuss the effect of human guidance on an RL agent. Similar to our approach with ACR, the teacher guides the action selection process to reduce the action space for the RL agent. While both expert and non-expert guidance improved performance when compared to unguided learning, the final performance was sensitive to the expertise of the teacher.
Another well-known approach to integrating LfD and RL is Human-Agent Transfer or HAT \cite{taylor2011using}. Their approach uses a decision list to summarize the demonstration policy with a set of rules. However, it is sensitive to the number and optimality of the demonstrations \cite{suay2016learning,brys2015reinforcement}. We compare ACR to HAT in Sec 6 to demonstrate the benefits of ACR in terms of quantity and quality of the demonstrations.
\subsection{Object Focused Approaches}
In general, recent work in AI and Robotics has increasingly focused on modeling state not simply as a vector of features, but as a set of objects, such as Object-Oriented MDPs (OO-MDP), leading to improved computational performance due to data abstraction and generalization. In the context of reinforcement learning (RL), \textit{Object-focused Q learning (Of-Q)} represents the state space as a collection of objects organized into object classes, leading to exponential speed-ups in learning over traditional RL techniques \cite{cobo2013object}. More closer to our work, human input containing object-action associations has been used to effectively guide policy learning in Mario \cite{krening2016object}. The input advice is analogous to action codes, eg. ``Jump over an enemy''. Approaches such as \cite{barth2014affordances,cruz2014improving,wang2013robot} have used affordances in RL to prune the action space and improve learning. However, the formalisms in all these approaches differ from ACR and were not extended beyond Reinforcement Learning to planning of tasks.
\section{Action-Category Representation (ACR)}
The objective of ACR is to categorize objects based on the action codes of a task. In this section, we first present an example that illustrates the functionality of ACR and contrasts it with object affordance models. We then present the ACR formalism.
As an example, consider the task of packing a cardboard box during clean-up. The action codes for the cardboard box in this case are \textit{((box),(close))} and \textit{((box),(move))}. Based on these action codes, ACR groups the actions \textit{close} and \textit{move} into a single action category associated with the item \textit{box}. One of the benefits of ACR is that other objects sharing the same action codes, such as \textit{cooking pot} in a dish-clearing task and \textit{suitcase} in a travel-packing task, also become associated with the same action category, enabling the agent to reason about groups of similar objects across tasks that share action codes, despite the physical dissimilarities of the objects.
In this paper, we show how ACR can be utilized in two ways. First, during task planning or learning, ACR improves computational efficiency by pruning the action space. Second, given an object not previously seen by the agent, ACR reduces the number of agent-object interactions required to learn its action associations for the task. Note that, in humans, action codes act as a bias and not a strict restriction on actions. In other words, a person seeing the knife and apple next to each other is primed to perform the actions \textit{cut} and \textit{peel}, but may override this bias too and \textit{put away} the apple instead. In the agent's case, we have the option to treat ACR as a hard constraint on the actions available to the agent or as a flexible bias that also allows re-expanding the action set. In the sections below, we show how planning is well suited to use ACR as a hard constraint, and how ACR can be naturally combined with RL as a bias. We discuss possible extensions of this view in the conclusion of the paper.
To construct ACR, the agent requires observations of objects in its environment and what actions are related to each object. These observations can be gained either through the agent's own exploration of the environment, or, more effectively, from a human teacher performing demonstrations of the task. During the observation phase, the agent maintains a log of action codes based on the actions performed and the objects that the actions were executed upon. We define $O$ as the set of all objects in the task environment, and $A$ as the set of all actions pertaining to the task. An observation log consists of a set of action codes and is represented by $L = \{\hat{c}_1, \hat{c}_2, ... \hat{c}_n\}$, where each timestep in the log is represented by an action code $\hat{c}_i = ((o_1, o_2 ... o_j),(a_1, a_2 ... a_k))$ with $o_j \in O$ and $a_k \in A$.
The act of building object-action relations can be formulated as a bipartite graph partitioning problem involving the action set $A$ and the objects set $O$. Given a graph $G(V,E)$, with vertices $V$ and edges $E$, the graph is \textit{bipartite} when the vertices can be separated into two sets, such that $V = A \cup O$, $A \cap O = \emptyset$, and each edge in $E$ has one endpoint in $A$ and one endpoint in $O$. In the context of ACR, $A$ represents actions, $O$ represents objects and an edge $\{a_i, o_i\}$ exists if action $a_i \in A$ and object $o_i \in O$ co-occur within any action code $\hat{c}_k \in L$. For instance, the action code \textit{((box), (push))} is represented by an edge from the action \textit{push} to the object \textit{box}. The resulting bipartite graph has a many-to-many association between objects and actions (Figure \ref{fig:OAR_graph} left). In ACR, the bipartite graph is generated incrementally from the action codes in the observation log.
The main computational units of ACR are \textit{action categories}, defined by a group or set of actions $A^c \subseteq A$. Given the bipartite graph above, for a given action $a_j \in A$, let $\hat{O}_{a_j}$ represent the set of objects for which that action co-occurs in some action code (i.e. the edge $\{a_j, o_k \in \hat{O}_{a_j}\}$ exists). Then we define an action category $A^c$ as:
\centerline{$A^c = \{a_j \ : \bigcup \hat{O}_{a_j} = \bigcap \hat{O}_{a_j}\}$}
This is interpreted as, ``The set of all actions $a_j$ such that union over all $\hat{O}_{a_j}$ is equal to intersection over all $\hat{O}_{a_j}$''. In other words, the action category $A^c$ contains a set of actions that are associated to the same set of objects, allowing us to group all those actions as one set. If we consider action categories as vertices themselves, then what results is a reduced one-to-many bipartite graph between action categories $A^c$ and the set of objects $O$ (Figure \ref{fig:OAR_graph} right), which is the representation we refer to as \textit{ACR}. Note that the entire set of actions can be grouped into categories such that $A = A^{c_1} \cup A^{c_2} \cup A^{c_3} \cup ... A^{c_n}$. We define $C = \{A^{c_1}, A^{c_2} ... A^{c_n}\}$ as the set of all action categories learned from observations. Note that, as in most prior work, we assume single-parametric actions \footnote{While it is possible to decompose multi-parametric actions to single-parametric actions as described in \cite{bach2014affordance}, we currently do not model them explicitly within ACR.} \cite{montesano2008learning,ugur2011goal,csahin2007afford}, and that preconditions and effects of those actions are known and can be perceived when planning \cite{agostini2015using,ekvall2008robot}.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{imgs/modified_OAR_graph}
\captionsetup{width=\linewidth}
\caption{Bipartite graphs representing the relationship from objects to actions (left), and objects to action categories (right)}
\label{fig:OAR_graph}
\end{figure}
The construction of ACR is an online process, allowing learning of new objects and action categories over time with changes in the environment or task. As new action codes are learned, objects or actions can be incorporated by adding them to the graph, along with corresponding edges. A new action category $A^{c_i}$ may be added to $C$ when a new combination of associated actions is discovered, such that $A^{c_i} \neq A^{c_k}$ $\forall A^{c_k} \in C$. The resulting representation provides an automatically-generated online grouping of objects into categories based on action codes.
In this paper, we discuss characteristics of ACR that contribute to its novelty and significance:
\begin{enumerate}
\item groups actions based on action codes in order to reduce the action space for the agent,
\item contains and appropriately represents algorithm-agnostic information for planning, as well as RL, to improve their computational performance,
\item minimizes agent-object interactions for learning the action associations of a new object; and
\item requires one or few human demonstrations and is robust to the optimality of these demonstrations.
\end{enumerate}
\section{Computational Performance Analysis}
In this section, we present performance guarantees of ACR; in the following section we then validate our findings with case studies in StarCraft and Lightworld \cite{konidaris2007building} domains.
\subsection{Mathematical Analysis}
We define the total number of actions in a domain to be $|A| = n$, allowing us to bound the total possible action categories to be $|C| \leq 2^n - 1$, representing all possible action combinations from $1$ to $n$ actions. Then a given task involves a subset of these action categories $S \subseteq C$ and a set of objects $O$. The agent is assigned the task of learning $S$ and categorizing the objects in $O$ from observations of action codes.
One of the benefits of ACR is seen when the agent encounters a new set of objects $O'$ (not previously seen) and must discover which actions in $A$ are related to each object in $O'$ for the task execution. Below we present performance analysis of ACR and the baseline that uses no action categorization, with respect to the number of agent-object interactions prior to learning all the actions related to an object for the task ($A_{obj}$). Fewer $A_{obj}$ is computationally preferred since this reduces the number of agent-object interactions, making the learning or planning faster.
\subsubsection{$A_{obj}$ Without Categorization (Baseline)}
Without categorization, each action is considered independently, in which case to determine the set of actions applicable to a new object the agent must test out all $|A| = n$ actions on that object. That is, $A_{obj} = n$.
\subsubsection{$A_{obj}$ With Action Categories (ACR)}
The use of ACR can improve computational performance through action selection, enabling the agent to more effectively identify (or rule out) object interactions. In the presence of action categories our goal is to, with as few actions as possible, identify the category of a newly discovered object. To do so, we select actions from $A$, and for every attempted action that is unassociated (or associated) with the object we eliminate any action category in $S$ with (or without) that action from further testing. We use entropy as a measure of the most informative action to test so as to eliminate as many action categories as possible with each action tested. The entropy of an action $a$ is given by:
\smallskip \centerline{$H(a) = -p \log(p) - (1-p) \log(1-p)$}
\smallskip \centerline{Where, $p = \frac{\sum_{i=1}^{n} |\{a\} \cap A^{c_i}|}{|S|}$}
\smallskip \noindent The term $p$ denotes the probability that a candidate action category contains the action $a$, and it is used to compute the entropy. The action that minimizes the entropy is then the most informative action. Therefore, the action $\hat{a}$ chosen for testing is given by:
\smallskip \centerline{$\hat{a} = \argminA_{a \in A} H(a)$}
In the best case, the category may be learned with a single action and hence the lower bound on $A_{obj}$ is 1. The worst case upper bound on $A_{obj}$ remains $n$: $1 \leq A_{obj} \leq n$.
That is, with categorization the performance is \textit{never} worse but typically better than without categorization. In practice, $S$ is usually a small subset of $C$, and therefore $A_{obj} \ll n$. In the case that a new action category must be learned, which occurs rarely in closed world domains such as StarCraft and Lightworld, $A_{obj} = n$. In fact, in the experiments described below the agent obtains all possible action codes even from a single demonstration, allowing all the relevant action categories to be known prior to planning and learning.
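For illustration, a minimal Python sketch of this probing loop is given below (ours; \texttt{object\_accepts\_action} is an assumed environment callback standing in for one agent-object interaction, and uninformative actions are skipped by assigning them infinite cost under the paper's $\argminA H(a)$ criterion):
\begin{verbatim}
import math

def categorize(obj, S, actions, object_accepts_action):
    # S: list of candidate action categories (sets of actions)
    untried = set(actions)
    def H(a):  # entropy of testing action a against the surviving S
        p = sum(a in c for c in S) / len(S)
        if p in (0.0, 1.0):
            return float('inf')  # cannot split S; never select it
        return -p * math.log(p) - (1 - p) * math.log(1 - p)
    while len(S) > 1 and untried:
        a = min(untried, key=H)          # argmin H(a), as in the text
        untried.discard(a)
        if object_accepts_action(obj, a):   # one interaction
            S = [c for c in S if a in c]
        else:
            S = [c for c in S if a not in c]
    return S[0] if S else None
\end{verbatim}
Each loop iteration costs one agent-object interaction and removes one action from consideration, so the procedure terminates after at most $n$ probes, matching the bound $1 \leq A_{obj} \leq n$.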
\section{Case Study in StarCraft}
In this section, we first briefly describe our domain and highlight the complexity of the problem before discussing the computational benefits of using ACR with planning.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{imgs/extractors}
\captionsetup{width=\linewidth}
\caption{Refineries in StarCraft: Terrans, Zergs and Protoss (left to right), showing their distinct physical appearances}
\label{fig:extractors}
\end{figure}
StarCraft is a real-time strategy game in which the player manages one of 3 diverse civilizations (Terrans, Protoss and Zergs), producing buildings and units while destroying all opponents. Across the 3 civilizations, there are over 100 diverse units/buildings, and reducing the number of agent-object interactions in this case makes the problem of task planning in StarCraft much more tractable. Figure \ref{fig:extractors} shows a ``refinery'', one of the buildings in StarCraft that is used to extract a gaseous mineral. Given the distinct appearances of the buildings across the 3 civilizations, it may be challenging to identify their related actions by using only physical features without any actual interaction.
\subsection{ACR Extracted from StarCraft}
We extracted ACR from one human demonstration in the Terrans civilization where the teacher successfully completed an in-game mission of creating a defense. Replay logs that summarize the action codes within the demonstration are readily available for StarCraft and are used to construct the ACR. In the case of physical systems, it is possible to extract action codes from human demonstrations using verbal communication during the demonstration or approaches such as \cite{gupta2007objects}.
\begin{table}[ht]
\centering
\includegraphics[width=0.35\textwidth]{imgs/OAR_integrated}
\captionsetup{width=\linewidth}
\caption{ACR built from human demo}
\label{fig:OAR_int}
\end{table}
Table \ref{fig:OAR_int} shows the complete ACR built from the human demonstration, highlighting the different action categories and object mappings to the action categories. We use this learned representation in the following section to describe its computational benefits.
\subsection{Computational Benefits of ACR with Planning}
We demonstrate two computational benefits of ACR with planning in terms of:
\begin{enumerate}
\item reduced number of object interactions or $A_{obj}$ required to learn all object-action associations prior to planning in StarCraft (Exploration phase)
\item improved planning performance due to reduced action space, demonstrated with combat formations in StarCraft
\end{enumerate}
\subsubsection{Benefits of ACR During Exploration Phase}
We demonstrate the benefits of ACR on $A_{obj}$ using build order planning. Build orders dictate the sequence in which units and structures are produced. Prior to planning for a task, there is usually an exploration phase during which the actions associated with the objects (in this case, for the units/structures specified in the build order) are first identified \cite{ugur2011goal}. This exploration phase adds to the overall planning time. Thus, a reduced $A_{obj}$ in the exploration phase would reduce overall planning time. We compare ACR and baseline (without categorization) on $A_{obj}$, during exploration phase of build order planning.
We explore build orders from all 3 civilizations. Table \ref{fig:build_order} shows the total number of agent-object interactions during the exploration phase, along with the number of previously unseen objects for which the object-action relations had to be learned.
\begin{table}[t]
\centering
\includegraphics[width=0.45\textwidth]{imgs/BO_exe2}
\captionsetup{justification=centering}
\caption{Total $A_{obj}$ for the different build order exploration phases}
\label{fig:build_order}
\end{table}
As shown in Table \ref{fig:build_order}, the number of object interactions with ACR is significantly reduced compared to the baseline approach that does not use categorization. In the baseline case, every action (of the 9 actions shown in Table \ref{fig:OAR_int}) has to be attempted on each new object in the build order to discover all of its associations, an overhead that is mitigated by the use of action categories. The results obtained here highlight the benefits previously discussed in the mathematical analysis of Section 4.1. While the feedback for an invalid action in StarCraft is instantaneous and incurs no significant time cost, in other domains such as task execution with robots, there may be implicit time and cost constraints associated with each interaction. Hence, with ACR it is possible to minimize interactions with the environment by grouping actions.
\subsubsection{Improved Planning Performance with ACR}
In this section, we combine ACR with an existing off-the-shelf PDDL planner (Fast Downward) \cite{helmert2006fast} to demonstrate how ACR reduces the action space that the planner has to contend with, thus reducing the planning time.
For this evaluation we use a combat formation problem where combat units (Dragoons) have to form a particular arrangement on a section of the battlefield. We compare the classical planning approach without action categories (baseline) to planning with ACR, increasing the number of Dragoons to demonstrate its effect on the two planning approaches. Figure \ref{fig:planning_problem} shows sample initial and goal states for the Dragoon formation.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\linewidth]{imgs/planning_problem}
\captionsetup{width=\linewidth}
\caption{Combat formation problem showing initial (left) and goal (right) states of formation using 3 Dragoons}
\label{fig:planning_problem}
\end{figure}
The overall pipeline for combining ACR with the planner is shown in Figure \ref{fig:PDDL_pipeline}. In contrast to the classical approach, ACR introduces the learned action categories into the domain and problem definitions for planning, leading to computational improvements. The classical planning approach instantiates each Dragoon as a separate entity with distinct variables, while the ACR based approach instantiates all the Dragoons in terms of the action category that they are mapped to, which is $A^{c_2}$ in this case (Table \ref{fig:OAR_int}). The domain and problem definitions are thus automatically generated from ACR. The plan is then generated using the domain and problem definitions.
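The following sketch illustrates this instantiation step (the object and category names are hypothetical, and the snippet is only a simplified stand-in for the generator): all objects mapped to the same ACR category are declared under a single PDDL type.
\begin{verbatim}
def problem_objects(acr_map):
    # acr_map: object name -> action-category label, e.g. "c2"
    by_cat = {}
    for obj, cat in sorted(acr_map.items()):
        by_cat.setdefault(cat, []).append(obj)
    lines = ["(:objects"]
    for cat, objs in sorted(by_cat.items()):
        lines.append("  " + " ".join(objs) + " - " + cat)
    lines.append(")")
    return "\n".join(lines)

print(problem_objects({"dragoon1": "c2", "dragoon2": "c2",
                       "dragoon3": "c2"}))
# (:objects
#   dragoon1 dragoon2 dragoon3 - c2
# )
\end{verbatim}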
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{imgs/PDDL_pipeline}
\captionsetup{width=\linewidth}
\caption{Pipeline for integrating ACR with PDDL planner for planning in StarCraft}
\label{fig:PDDL_pipeline}
\end{figure}
\begin{figure}[tbh]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\linewidth]{imgs/time_vs_units}
\caption{Planning time vs. number of Dragoon units}
\end{subfigure}
\hspace{0.25em}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\linewidth]{imgs/states_vs_units}
\caption{Number of search states vs. number of Dragoon units}
\end{subfigure}
\caption{Graphs showing effect of number of Dragoons on planning time (Fig \ref{fig:planning_result}a) and number of search states (Fig \ref{fig:planning_result}b) for the baseline planning and ACR-based planning approaches}
\label{fig:planning_result}
\end{figure}
As shown in Figure \ref{fig:planning_result}, increasing the number of Dragoons from 1 to 7 exponentially increases the search time for the classical planning approach compared to the ACR-based approach (Figure \ref{fig:planning_result}a). This is because the number of states increases exponentially with the number of Dragoons. For instance, on a 5x5 grid, the number of states for 3 Dragoons is ${25 \choose 3}*3 = 6900$ for the classical planning approach and ${25 \choose 3} = 2300$ for the ACR-based planner, since ACR instantiates all Dragoons in terms of their action category. Similarly, in the case of forward chaining planners such as \cite{uyanik2013learning}, ACR can help reduce the branching factor from $n_o*|A|$ (where $n_o$ is the total number of objects, or Dragoons in this case, and $|A|$ is the total number of actions) to $\sum_{i} |o^{c_i}|*|A^{c_i}|$ (where the sum runs over the action categories and $|o^{c_i}|$ indicates the number of objects mapped to action category $A^{c_i}$). Thus, ACR leads to computational benefits by pruning the action space the planner has to contend with.
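These counts are easy to reproduce; the short sketch below evaluates the state counts quoted above and the two branching factors (the group sizes passed to \texttt{branching\_factor} are illustrative, not taken from the experiments).
\begin{verbatim}
from math import comb

# State counts for 3 Dragoons on a 5x5 grid, as in the text.
baseline_states = comb(25, 3) * 3   # 6900 (Dragoons as distinct entities)
acr_states = comb(25, 3)            # 2300 (one shared action category)

def branching_factor(n_objects, n_actions, groups):
    # groups: list of (|o^{c_i}|, |A^{c_i}|) pairs, one per category
    baseline = n_objects * n_actions
    acr = sum(n_o * n_a for n_o, n_a in groups)
    return baseline, acr

# Illustrative: 3 objects, 9 actions, one category of 2 actions.
print(branching_factor(3, 9, [(3, 2)]))  # (27, 6)
\end{verbatim}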
\section{Case Study in Lightworld}
In this section, we discuss the computational benefits of applying ACR with RL. We first discuss our domain design, inspired by the Lightworld domain used in the Options RL framework \cite{konidaris2007building}. We then discuss the benefits of applying ACR with RL.
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{imgs/Lightworld_domain}
\captionsetup{justification=centering}
\caption{Example domain}
\label{fig:lightworld}
\end{figure}
Figure \ref{fig:lightworld} shows a sample domain. The game consists of a 7x8 grid of locked rooms, with some doors operated by a switch and some doors operated by a key. The goal of the agent is to unlock the doors and move to the final reward. The agent has to move over the button or the key to press or pick up the respective object. There are also spike pits that the agent needs to avoid while navigating the room. The agent receives a reward of +100 for reaching the goal state and a negative reward of -10 for falling into spike pits; both are terminal states. Additionally, the agent receives a negative step reward of -0.04. There are a total of 6 actions: one for each of the four grid directions, a pickup action and a press action. The environment is deterministic and unsuccessful actions do not change the state.
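A minimal encoding of this reward and action specification is given below (the constant names are hypothetical; this is only a sketch of the domain just described, not the actual environment code).
\begin{verbatim}
# Reward/action specification for the Lightworld-style domain.
GOAL_REWARD = 100.0    # reaching the goal state (terminal)
PIT_REWARD  = -10.0    # falling into a spike pit (terminal)
STEP_REWARD = -0.04    # cost incurred on every step

ACTIONS = ["up", "down", "left", "right",  # four grid directions
           "pickup",                       # pick up a key
           "press"]                        # press a switch

def reward(reached_goal, in_pit):
    # Deterministic dynamics: an unsuccessful action leaves the
    # state unchanged but still incurs the step cost.
    if reached_goal:
        return GOAL_REWARD
    if in_pit:
        return PIT_REWARD
    return STEP_REWARD
\end{verbatim}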
\subsection{Integrating ACR with RL}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{imgs/RL_ACR}
\captionsetup{justification=centering}
\caption{Pipeline for integrating ACR with RL}
\label{fig:ACR_RL}
\end{figure}
Figure \ref{fig:ACR_RL} shows the integration of ACR with general RL algorithms. ACR influences the action selection step, given an observed state. With ACR, the agent chooses an action from within an action category $A^{c_i} \subset A$ of the known objects it can interact with, given the state. For previously unseen objects whose action categories are unknown, the agent chooses the entropy-wise selected action as described in Sec 4.1 to simultaneously infer the action category of the objects during learning. If the agent cannot interact with any objects, it chooses from the non object-related set of actions (analogous to an ``agent'' action category) given by, $A-\bigcup_{i=1}^{m} A^{c_i}$ where $m$ denotes the number of learned action categories. This reduces the action space for the agent.
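As a minimal sketch of this selection step (the \texttt{acr} interface and $Q$-table layout are hypothetical, not our exact implementation):
\begin{verbatim}
import random

def select_action(Q, state, epsilon, acr, episode, n_acr):
    # acr.allowed(state): category actions of known objects in reach,
    # the entropy-chosen action for unseen objects, or the non-object
    # ("agent") actions A minus the learned A^{c_i} otherwise.
    if episode < n_acr:
        actions = acr.allowed(state)   # reduced action space
    else:
        actions = acr.all_actions      # unconstrained after N_ACR
    if random.random() < epsilon:
        return random.choice(sorted(actions))
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
\end{verbatim}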
As noted in Sec 3, we use ACR with RL to bias the initial learning rather than applying it as a hard constraint over the entire learning phase. We achieve this by allowing ACR to influence the action choice for a fixed number of episodes, denoted by $N_{ACR}$. This allows the agent to leverage the reduced action space while also learning the optimal policies in states where the ACR-guided policy may be suboptimal.
\subsection{Computational Benefits of ACR with RL}
We compare three RL agents: Q-learning, Q-learning with ACR and Q-learning with Human-Agent Transfer (HAT \cite{taylor2011using}). HAT learns a strategy (a decision list) from summaries of human demonstrations. We used Q-learning with $\epsilon$-greedy exploration with $N_{ACR} = 50$, $\alpha = 0.25$, $\gamma = 0.99$ and $\epsilon = 0.1$.
We used 5 expert and 5 suboptimal demonstrations separately, to compare the effect of demonstration quality on ACR and HAT. Suboptimal demonstrations are those in which the demonstrator either failed to complete the goal or took a suboptimal path to reach the goal state. In all experiments below, the ACR was built from a single demonstration that exposed the agent to all object-related actions necessary to complete the game. That is, the ACR encodes what the agent \textit{can} do, rather than what the agent \textit{should} do. This imposes a more relaxed constraint on the teacher, since it is easier to show the ``rules'' rather than the ``strategy'', which requires an expert. Importantly, unlike most existing approaches, the ACRs built from expert and suboptimal demonstrations \textbf{do not differ}, provided the agent learns the same rules from either demonstration.
Additionally, to evaluate the benefit of entropy-based action selection in RL, the ACR-based approach treats keys and switches as previously unseen objects whose action-categories are unknown and must be simultaneously inferred during the course of the learning process.
We demonstrate three benefits of using ACR with RL:
\begin{enumerate}
\item robustness to demonstration quality: we show that ACR has a higher learning rate than HAT and Q-learning when trained on suboptimal demonstrations
\item learning from few demonstrations: we show that ACR learns more efficiently than both HAT and Q-learning when only a single demonstration is available
\item improved performance when combining ACR and HAT: we show that the best overall performance is achieved when ACR is used to improve the performance of other LfD methods, in this case Human-Agent Transfer.
\end{enumerate}
\subsubsection{Effect of Demonstration Quality on ACR:}
\begin{table}[t]
\centering
\includegraphics[width=0.47\textwidth]{imgs/RL_table}
\captionsetup{justification=centering}
\caption{Comparison of the different approaches based on convergence episode and average number of actions taken by the agent (bold values correspond to ACR)}
\label{fig:RL_table}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{imgs/avg_R}
\captionsetup{justification=centering}
\caption{Comparison of Q-learning, HAT + Q-learning with 5 expert, 5 suboptimal demonstrations and ACR + Q-learning}
\label{fig:avg_R}
\end{figure}
As shown in Figure \ref{fig:avg_R}, ACR performs much better than the Q-learning approach and HAT trained on suboptimal demonstrations. ACR also minimizes the number of attempted actions, as shown in Table \ref{fig:RL_table}. However, it does not perform better than HAT with expert demonstrations. This is because, with enough expert demonstrations, the information contained within ACR can be implicitly learned in the form of rules. Since ACR does not fully leverage the capabilities of a good teacher, it does not outperform HAT trained on multiple expert demonstrations.
However, HAT is quite sensitive to the optimality of the demonstration. The starting reward for HAT is dependent on the teacher performance. As shown in Figure \ref{fig:avg_R}, the starting reward for HAT trained on expert demonstrations is much higher when compared to HAT trained on suboptimal demonstrations.
To summarize, in cases involving non-expert users, ACR can leverage the rules of the task in order to improve learning performance over the baseline approaches.
\subsubsection{Effect of Number of Demonstrations on ACR:}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{imgs/avg_R_1_demo.png}
\captionsetup{justification=centering}
\caption{Comparison of Q-learning, HAT + Q-learning with single expert demonstration and ACR + Q-learning}
\label{fig:avg_R_1_demo}
\end{figure}
Given only a single expert demonstration, HAT fails to accurately summarize the source policy (Figure \ref{fig:avg_R_1_demo}). Building the decision list in HAT requires more data, depending on the complexity of the domain. However, ACR was able to perform better than the baseline approaches with one expert demonstration. This makes ACR a feasible approach when there are not enough demonstrations available to learn a good policy.
Figure \ref{fig:RL_perf} summarizes the effects of number and quality of demonstrations on learning performance of the different approaches. Q-learning is also shown for comparison.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{imgs/RL_perf_chart}
\captionsetup{justification=centering}
\caption{Summary of the average learning performances based on number and quality of demonstrations for HAT and ACR. Q-learning (RL) also shown for comparison.}
\label{fig:RL_perf}
\end{figure}
\subsubsection{Combining ACR with HAT}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{imgs/avg_R_subopti.png}
\captionsetup{justification=centering}
\caption{Performances of HAT + ACR for single expert demonstration and 5 suboptimal demonstrations compared to the baselines that consider the two separately}
\label{fig:avg_R_subopti}
\end{figure}
ACR is an algorithm- and domain-independent representation; as a result, one of its strengths is that it can easily be combined with complex learning methods, including ones that themselves influence action selection, such as Human-Agent Transfer.
HAT consists of 3 steps: demonstration, policy summarization and independent learning.
Given one or more demonstrations, the teacher's behavior is summarized in the form of a decision list and then used to bootstrap the learning. We utilize the \textit{Extra Action} method from \cite{taylor2011using}, in which the agent executes the action suggested by the decision list for a fixed number of initial episodes before running regular RL.
HAT can bootstrap the learning in states where a good policy is obtained from the demonstrations, while for the ``bad'' states in which the demonstrator's performance was suboptimal, ACR helps accelerate the learning of the optimal policy by reducing the action space. We combine ACR and HAT by verifying that the action suggested by the decision list conforms with the retrieved action category. It does so by making two checks:
\begin{enumerate}
\item In states with objects: action selection is restricted to the union of all non-object actions (e.g. movement) and object-related actions within ACR, ensuring that the agent does not try an incorrect action on the object (e.g. ``pick'' button).
\item In states without objects: action selection is restricted to non-object actions.
\end{enumerate}
As in the \textit{Extra Action} method, the above action selection method is used to bias exploration early in the learning process before continuing to classical RL using $\epsilon$-greedy Q-learning.
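The conformance check itself amounts to a small filter around the decision list (again a sketch with the same hypothetical \texttt{acr} interface as above):
\begin{verbatim}
def hat_acr_action(state, decision_list, acr):
    # allowed: non-object actions plus the ACR category actions of
    # any objects present in the state (checks 1 and 2 above).
    allowed = acr.allowed(state)
    suggested = decision_list(state)
    if suggested in allowed:
        return suggested
    # The suggestion is inconsistent with ACR (e.g. "pick" on a
    # button): fall back to an ACR-consistent action.
    return next(iter(allowed))
\end{verbatim}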
The results of this method are presented in Figure \ref{fig:avg_R_subopti}, showing improved performance when ACR is combined with HAT. Combining ACR with HAT trained on a single expert demonstration improves the learning performance beyond the case where either of the two approaches is considered separately. Hence, by combining ACR with HAT, it is possible to reduce the effect of the number of demonstrations and their optimality on HAT, while also allowing ACR to maximally utilize the teacher demonstrations.
\section{Conclusion and Future Work}
To conclude, we presented the Action-Category Representation that allows online categorization of objects to action categories based on action codes. Our results demonstrate some of the key benefits of ACR in terms of reduced action space resulting from the action groupings, computational improvements when used with planning and RL, and reduced demonstration requirements with robustness to demonstration errors.
While the domains described here are discrete in nature, ACR is also applicable to continuous domains by discretizing the state space into states where interaction with an object is possible/not possible. For instance an object may be interacted with, if the agent is within a certain distance of it. Approaches such as \cite{mugan2008continuous} have discussed discretization of continuous state spaces for RL and in this manner, ACR can also be extended to continuous domains.
In our future work, we aim to address some of the limitations of our work in its current form. Since ACR currently models single-parameter actions, its applicability to real-world tasks is limited. We plan to address this by incorporating multi-parameter actions, decomposing them into single-parameter actions \cite{bach2014affordance}. Additionally, future work will explore the possibility of using ACR as a bias with planning (as opposed to a hard constraint on action selection), by utilizing ``Ontology-Repair'' \cite{mcneill2007dynamic} to update ACR and improve its flexibility. Finally, we wish to extend the application of ACR to Deep Learning for computational improvements in learning performance.
\section{Acknowledgement}
This work is supported by NSF IIS 1564080.
\bibliographystyle{named}
|
1,477,468,750,979 | arxiv | \section{Stationary discs and holomorphic extension }
The problem of testing analyticity on a domain $D\subset {\Bbb C}^n$ by a family of discs has attracted a great deal of work. The first significant result goes back to Stout \cite{S77}, who uses as testing family all the straight lines. Reducing the testing family, Agranovsky and Semenov \cite{AS71} use the lines which meet an open subset $D'\subset\subset D$. It is classical that the lines which meet a single point $z_o\in D$ do not suffice, not even in the case of the sphere $\Bbb B^n$.
Other testing families are considered, among others, by \cite{GS},\cite{R},\cite{G88}. In the present paper,
for a strictly convex $C^\omega $ domain $D$, we prove that the stationary discs passing through a point of $\bar D$ form a testing family if the point belongs to the boundary $ \partial D$ and, otherwise, if the family is supplemented by another $(2n-2)$-parameter, generic family. In particular, this second family can be chosen as the family of stationary discs through another point of $D$. This result is also present in a recent paper by Agranovsky \cite{A09}.
We deal with stationary/extremal discs in the sense of Lempert \cite{L81}. We first introduce some terminology. A disc $A$ is the holomorphic image of the standard disc $\Delta$; $\Bbb PT^*{\Bbb C}^n$ is the cotangent bundle with projectivized fibers, and $\pi$ the projection on the base point; $\Bbb PT^*_{\partial D}{\Bbb C}^n$ is the projectivized conormal bundle to $\partial D$ in ${\Bbb C}^n$.
\begin{Def}
\Label{d1.1}
A disc $A$ of $D$ is said to be stationary when it is endowed with a meromorphic lift $A^*\subset \Bbb PT^*{\Bbb C}^n$ with a simple pole, attached to the conormal bundle of $\partial D$, that is, satisfying $\partial A^*\subset \Bbb PT^*_{\partial D}{\Bbb C}^n$.
\end{Def}
We fix a stationary disc $A_o$ of $D$ and, in the $\epsilon$-neighborhood of $A_o$, consider a certain number of $(2n-2)$-parameter families of stationary discs $\{\mathcal V_j\}_{j=1,...k}$ smoothly depending on the parameters. We denote by $\mathcal V_j^*$ the family of lifts of the discs in $\mathcal V_j$ and define
\begin{equation}
\Label{1.1}
M_j:=\underset {A^*\in\mathcal V_j^*}\cup A^*.
\end{equation}
The set $M_j$ is generically a CR manifold with CR dimension 1 except at the points of a closed set; we denote by $M_j^{\text{reg}}$ the complement of this set. We assume
\begin{equation}
\Label{1.2}
A^*_o\subset\underset j\cup M_j^{\text{reg}}.
\end{equation}
Here is our main result.
\begin{Thm}
\Label{t1.1}
Let $D\subset\subset{\Bbb C}^n$ be a strictly convex domain with $C^\omega$ boundary and $f$ a $C^\omega$ function on $\partial D$. Suppose that $f$ extends holomorphically along each disc $A\in\underset{j=1,...,k}\cup \mathcal V_j$ and that the sets $M_j$ which collect the discs of $\mathcal V_j$ satisfy \eqref{1.2}. Then $f$ extends holomorphically to $D$.
\end{Thm}
The proof is given in the next section.
Theorem~\ref{t1.1} states a general principle: $(2n-2)$-parameter families of stationary discs generically suffice. To exhibit explicit families, the following criterion is very effective.
\begin{Thm}
\Label{t1.2} Let $\mathcal V$ be the family of stationary discs through a point $z_o\in \bar D$ and $M$ the union of their lifts.
\begin{itemize}
\item[(i)] If $z_o\in D$, then
$$
A_o^*\setminus \pi^{-1}(z_o)\subset M^{\text{reg}}.
$$
\\
\item[(ii)] If $z_o\in \partial D$, then
$$
A_o^*\subset M^{\text{reg}}.
$$
\end{itemize}
\end{Thm}
\begin{proof}
(i): We first assume that $D$ coincides with the unit ball $\Bbb B^n$. It is classical that the stationary discs are the straight lines. By a biholomorphic transformation of $\Bbb B^n$ we can displace $z_o$ to $0$. It is helpful to use the parametrization
\begin{equation*}
\begin{matrix}
\partial\Bbb B^n\times(0,1)&\to&M
\\
(z,r)&\mapsto &(zr,[\bar z]),
\end{matrix}
\end{equation*}
where brackets denote projectivized coordinates. For fixed $r>0$, this describes a maximal totally real manifold of $\Bbb P T^*{\Bbb C}^n$; thus $\dim_{CR}M\le 1$. On the other hand, $M$ is foliated by discs and therefore $\dim_{CR}M=1$.
For $r=0$, instead, we have $TM|_0=\{0\}\times \Bbb P^{n-1}_{\Bbb C}$; thus any point of $M|_0$ is CR singular, since there the CR dimension jumps from $1$ to $n-1$.
We pass now to a general strictly convex domain $D$. We know from \cite{L81} that there is a mapping $\Psi:\Bbb B^n\to D$ which interchanges $0$ with $z_o$, is $C^\omega$ outside $0$, transforms the lines of $\Bbb B^n$ (denoted $A_{\Bbb B^n}$) holomorphically into the stationary discs of $D$ through $z_o$ (denoted $A_D$), and which fixes the tangent directions at the ``centers''. Therefore, $\Psi$ lifts in a natural way to a mapping from the manifold $M_{\Bbb B^n}$ (the union of the $A^*_{\Bbb B^n}$'s) to the corresponding manifold $M_D$ (the union of the $A^*_D$'s).
Denote by $\Bbb B^n_r$ the ball of radius $r$ and put $D_r:=\Psi(\Bbb B^n_r)$; we know from the theory of Lempert that
$$
A^*_{D_r}=(A^*_D)|_{D_r}.
$$
Since $(A^*_{D_r})|_{\partial D_r}\subset \Bbb PT^*_{\partial D_r}{\Bbb C}^n$, it follows that $M_D\setminus\pi^{-1}(z_o)\subset \underset r\cup \Bbb PT^*_{\partial D_r}{\Bbb C}^n$. Thus, $\Bbb PT^*_{\partial D_r}{\Bbb C}^n$ being maximal totally real for any $r$, we conclude that $M_D$ is a CR manifold except at points of $\pi^{-1}(z_o)$ and that it is CR-diffeomorphic, via $\Psi^*$, to $M_{\Bbb B^n}\setminus\pi^{-1}(0)$.
\noindent
(ii): The proof is the same as in (i) but uses the boundary version of the Riemann-Lempert mapping Theorem as in Chang-Hu-Lee \cite{CL88}.
\end{proof}
We are ready for the following, explicit, result.
\begin{Thm}
\Label{t1.3}
Let $D$ be strictly convex with $C^\omega$ boundary and let $f\in C^\omega(\partial D)$. Either of the following hypotheses is sufficient for holomorphic extension of $f$ to $D$.
\begin{itemize}
\item[(i)] $f$ extends holomorphically along the stationary discs passing through two points of $D$.
\\
\item[(ii)] $f$ extends along the discs through a boundary point of $\partial D$.
\end{itemize}
\end{Thm}
\begin{proof}
(i): Let $z_o$ and $w_o$ be the ``centers'' of the two systems of discs, let $A_o$ be the disc which connects $z_o$ to $w_o$, and denote by $M^{z_o}$ and $M^{w_o}$ the union of the discs through $z_o$ and $w_o$ respectively.
The lift $A^*_o$ is contained in $(M^{z_o})^{\text{reg}}$ apart from a single point over $z_o$; but this point is contained in $M^{w_o}$. Thus we can apply Theorem~\ref{t1.1}.
\noindent
(ii): If $z_o\in\partial D$, we have directly $A^*_o\subset (M^{z_o})^{\text{reg}}$.
\end{proof}
\begin{Rem}
Note that in (i) the family of discs $\mathcal V^{w_o}$ is only used to cover the singular point of $M^{z_o}$ over $z_o$; for this purpose, a much more general family than the discs through another point $w_o$ would be suitable.
\end{Rem}
\begin{Rem}
Discs by two points of the ball are also present, as a
testing family, in the recent preprint \cite{A09} by Agranovsky.
\end{Rem}
\section{Proof of Theorem~\ref{t1.1}}
Before starting the proof, we have to recall the main results from \cite{L81} which will be of use.
Stationary discs are stable under reparametrization. In particular, the pole can be displaced at any of their interior points. It is convenient to identify the lift $A^*$ with its image in the projectivized bundle $\mathbb{P} T^*{\Bbb C}^n$ with coordinates $(z,[\zeta])$. We assume that $D$ is strictly convex and that $\partial D\in C^\omega$. In this situation, a stationary disc and its lift $A^*$ are $C^\omega$ up to $\partial \Delta$. Moreover, one has the following basic result, for whose proof we refer to \cite{L81}.
\begin{Pro}
\Label{p1.1}
For any point $(z,[\zeta])\in \mathbb{P} T^*{\Bbb C}^n|_D$ there is a unique stationary disc, up to reparametrization, whose lift $A^*_{(z,[\zeta])}$ contains $(z,[\zeta])$. Moreover, the correspondence
\begin{equation}
\Label{1.5}
(z,[\zeta])\mapsto A^*_{(z,[\zeta])},\qquad \mathbb{P} T^*{\Bbb C}^n|_D\to C^\omega(\bar\Delta),
\end{equation}
is a $C^\omega$ diffeomorphism.
\end{Pro}
We begin now the proof of Theorem~\ref{t1.1} and first remark that at any point of $M_j^{\text{reg}}$, the CR structure is fully provided by the discs $A^*\in \mathcal V_j^*$ by which $M_j$ is foliated. Notice that $M_j$ has a natural ``edge'' $E_j :=\underset{A\in\mathcal V_j}\cup\partial A^*$. The function $f$ can be naturally lifted to a function $F$ on $M_j$ by gluing the bunch of separate holomorphic extensions $\{f_{A}\}_{A\in \mathcal V_j}$. This is defined by
\begin{equation*}
F(z,[\zeta])=f_{A_{(z,[\zeta])}}(z),
\end{equation*}
where $A_{(z,[\zeta])}$ is the unique stationary disc of $\mathcal V_j$ whose lift $A^*_{(z,[\zeta])}$ passes through $(z,[\zeta])$. The crucial point here is that the $A$'s may overlap on ${\Bbb C}^n$ but the $A^*$'s do not in $\mathbb{P} T^*{\Bbb C}^n$. The function $F$ is CR on $M_j^{\text{reg}}$. Moreover, since $f\in C^\omega(\partial D)$ and $\mathbb{P} T^*_{\partial D}{\Bbb C}^n$ is maximal totally real with complexification $\mathbb{P} T^*{\Bbb C}^n$, $F$ extends holomorphically to a full neighborhood of $\mathbb{P} T^*_{\partial D}{\Bbb C}^n$ in $\mathbb{P} T^*{\Bbb C}^n$. By propagation of holomorphic extendibility on $M_j^{\text{reg}}$ along the discs $A^*_{(z,[\zeta])}$, $F$ extends holomorphically to a neighborhood of $M_j^{\text{reg}}$. Since $A^*_o\subset \underset j\cup M_j^{\text{reg}}$, we get the conclusion
\begin{equation}
\Label{2.1}
\text{$F$ is holomorphic in a neighborhood of $A^*_o$.}
\end{equation}
We prove now that \eqref{2.1} implies
\begin{equation}
\Label{2.2}
\text{$F$ is holomorphic in a neighborhood of any other stationary disc $A^*_1$ of $D$.}
\end{equation}
To see this, we suppose $A^*_{(z,[\zeta])}(0)=(z,[\zeta])$ and define a function $G$ by means of the Cauchy integral
$$
G(z,[\zeta]):=(2\pi i)^{-1}\int_{\partial\Delta}\frac{f\circ A_{(z,[\zeta])}(\tau)}\tau\, d\tau.
$$
This is defined for any $(z,[\zeta])\in \mathbb{P} T^*{\Bbb C}^n|_D$, is real analytic, and satisfies
$$
G=F\qquad\text{in a neighborhood of $A^*_o$.}
$$
Hence $F$, identified with $G$, extends holomorphically to the full $\mathbb{P} T^*{\Bbb C}^n|_D$. Since $\mathbb{P} T^*{\Bbb C}^n|_D$ is covered by the discs $A^*_{(z,[\zeta])}$ for $(z,[\zeta])\in \mathbb{P} T^*{\Bbb C}^n|_D$ (by Proposition~\ref{p1.1}), since $\partial A^*_{(z,[\zeta])}\subset \mathbb{P} T^*_{\partial D}{\Bbb C}^n$ and since $F$ is bounded over these boundaries, $F$ is in fact bounded on the whole $\mathbb{P} T^*{\Bbb C}^n|_D$. In particular, $F$ is bounded and holomorphic on each compact fiber $\pi^{-1}(z)\simeq\Bbb P^{n-1}_{\Bbb C}$ and hence constant with respect to $[\zeta]$. Thus it is a function of $z$ only, the holomorphic extension of $f$ to $D$.
\hskip14cm $\Box$
|
1,477,468,750,980 | arxiv | \section{Theoretical basis}
Within the Minimal Supersymmetric Standard Model (MSSM) the masses of
the ${\cal CP}$-even neutral Higgs bosons are
calculable in terms of the other MSSM parameters. The mass of the
lightest Higgs boson, $\mathswitch {m_\Ph}$, has been of particular interest as it is
bounded from above at the tree level to be smaller than the Z-boson
mass. This bound, however, receives large radiative corrections. The one-loop\
results~\cite{mhiggs1l,mhiggsf1l,mhiggsf1lb} have been
supplemented in recent years with the leading two-loop\ corrections,
performed in the renormalization group (RG)
approach~\cite{mhiggsRG}, in the effective
potential approach~\cite{mhiggsEP} and most recently in
the Feynman-diagrammatic (FD) approach~\cite{mhiggsFD}.
These calculations predict an
upper bound on $\mathswitch {m_\Ph}$ of about $\mathswitch {m_\Ph} \lsim 135 \,\, \mathrm{GeV}$.
The dominant radiative corrections to $\mathswitch {m_\Ph}$ arise from the top and
scalar top sector of the MSSM, with the input parameters $\Mt$, $M_{\mathrm{SUSY}}$ and
$X_{\Pt}$. Here we assume the soft SUSY breaking parameters in the diagonal
entries of the scalar top mixing matrix to be equal for simplicity,
$M_{\mathrm{SUSY}} = M_{\tilde{t}_L} = M_{\tilde{t}_R}$. The off-diagonal entry of the mixing
matrix in our conventions (see \citere{mhiggsFD}) reads
$\Mt X_{\Pt} = \Mt (A_t - \mu \cot\beta)$. We furthermore use the short-hand
notation $M_{\mathrm{S}}^2 := M_{\mathrm{SUSY}}^2 + \Mt^2$.
Within the RG approach, $\mathswitch {m_\Ph}$ is calculated from the effective
renormalized Higgs quartic coupling at the scale $Q = \Mt$. The RG
improved leading logarithmic approximation is obtained by applying the
one-loop RG running of this coupling from the high scale $Q = M_{\mathrm{S}}$
to the scale $Q = \Mt$ and including the one-loop threshold effects from
the decoupling of the supersymmetric particles at $M_{\mathrm{S}}$~\cite{mhiggsRG}.
This approach
relies on using the $\overline{\rm{MS}}$\ renormalization scheme. The parameters in
terms of which the RG result for $\mathswitch {m_\Ph}$ is expressed are thus
$\overline{\rm{MS}}$\ parameters.
In the FD approach, the masses of the ${\cal CP}$-even Higgs bosons are determined
by the poles of the corresponding propagators. The corrections to the
masses $\mathswitch {m_\Ph}$ and $\mathswitch {m_\PH}$ are thus obtained by evaluating loop corrections
to the $h$, $H$ and $hH$-mixing propagators. The poles of the
corresponding propagator matrix are given by the solutions of
\begin{equation}
\label{eq:mhpole}
\left[q^2 - m_{\mathswitchr h, {\rm tree}}^2 + \hat{\Sigma}_{hh}(q^2) \right]
\left[q^2 - m_{\mathswitchr H, {\rm tree}}^2 + \hat{\Sigma}_{HH}(q^2) \right] -
\left[\hat{\Sigma}_{hH}(q^2)\right]^2 = 0 ,
\end{equation}
where $\hat{\Sigma}_{hh}(q^2)$, $\hat{\Sigma}_{HH}(q^2)$,
$\hat{\Sigma}_{hH}(q^2)$ denote the renormalized Higgs-boson
self-energies. In \citere{mhiggsFD} the dominant two-loop
contributions to the masses of the ${\cal CP}$-even Higgs bosons of ${\cal O}(\alpha\alpha_s)$
have been evaluated. These corrections, obtained in the on-shell
scheme, have been combined with the complete one-loop
on-shell result~\cite{mhiggsf1lb} and the sub-dominant two-loop
corrections of ${\cal O}(\GF^2 \Mt^6)$~\cite{mhiggsRG}. The
corresponding results have been implemented into the Fortran code
{\em FeynHiggs}~\cite{feynhiggs}.
\section{Leading two-loop contributions in the FD approach}
In \citere{mhiggslle} the leading contributions have been extracted via
a Taylor expansion from the rather complicated diagrammatic two-loop
result obtained in \citere{mhiggsFD} and a compact expression for
the dominant contributions has been derived. Restricting to the
leading terms in $\Mt/M_{\mathrm{S}}$, $\mathswitch {M_\PZ}^2/\Mt^2$ and $\mathswitch {M_\PZ}^2/\mathswitch {M_\PA}^2$, the
expression up to ${\cal O}(\alpha\alpha_s)$ reduces to the simple form
\begin{equation}
m_{\mathswitchr h, \mathrm{FD}}^2 = \mathswitch {m_\Ph}^{2,{\rm tree}} +
\Delta m_{\mathswitchr h, \mathrm{FD}}^{2,\alpha} +
\Delta m_{\mathswitchr h, \mathrm{FD}}^{2,\alpha\alpha_s},
\label{eq:resFD}
\end{equation}
where the one-loop correction is given by
\begin{equation}
\Delta m_{\mathswitchr h, \mathrm{FD}}^{2,\alpha} = \frac{3}{2} \frac{\GF
\sqrt{2}}{\pi^2} \mtms^4 \left\{
- \ln\left(\frac{\mtms^2}{M_{\mathrm{S}}^2} \right)
+ \frac{X_{\Pt}^2}{M_{\mathrm{S}}^2}
\left(1 - \frac{1}{12} \frac{X_{\Pt}^2}{M_{\mathrm{S}}^2} \right)
\right\} .
\label{eq:oneloop}
\end{equation}
The two-loop contribution reads
\begin{eqnarray}
\label{mh2twolooptop}
\Delta m_{\mathswitchr h, \mathrm{FD}}^{2,\alpha\alpha_s} &=& \Delta m_{\mathswitchr h,\rm log}^{2,\alpha\alpha_s}
+ \Delta m_{\mathswitchr h,\rm non-log}^{2,\alpha\alpha_s} , \\
\Delta m_{\mathswitchr h,\rm log}^{2,\alpha\alpha_s} &=&
- \frac{\GF\sqrt{2}}{\pi^2} \frac{\alpha_s}{\pi}\; \overline{m}_{\Pt}^4
\left[ 3 \ln^2\left(\frac{\overline{m}_{\Pt}^2}{M_{\mathrm{S}}^2}\right) + \ln\left(\frac{\overline{m}_{\Pt}^2}{M_{\mathrm{S}}^2}\right)
\left( 2 - 3 \frac{X_{\Pt}^2}{M_{\mathrm{S}}^2} \right) \right] ,
\label{eq:mhlog} \\
\Delta m_{\mathswitchr h,\rm non-log}^{2,\alpha\alpha_s} &=&
- \frac{\GF\sqrt{2}}{\pi^2} \frac{\alpha_s}{\pi}\; \overline{m}_{\Pt}^4
\left[ 4 -6 \frac{X_{\Pt}}{M_{\mathrm{S}}}
- 8 \frac{X_{\Pt}^2}{M_{\mathrm{S}}^2}
+\frac{17}{12} \frac{X_{\Pt}^4}{M_{\mathrm{S}}^4} \right] ,
\label{eq:mhnolog}
\end{eqnarray}
in which the leading logarithmic and the
non-logarithmic terms have been given separately. The parameter
$\overline{m}_{\Pt}$ in \refeqs{eq:oneloop}--(\ref{eq:mhnolog}) denotes the running
top-quark mass at the scale $\Mt$, which is related to the pole mass
$\Mt$ in ${\cal O}(\alpha_s)$ via
\begin{equation}
\overline{m}_{\Pt} \equiv \overline{m}_{\Pt}(\Mt) =
\frac{\Mt}{1 + \frac{4}{3\,\pi} \alpha_s(\Mt)} ,
\label{mtrun}
\end{equation}
while $M_{\mathrm{S}}$ and $X_{\Pt}$ are on-shell parameters.
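For illustration, the dominant contributions of \refeqs{eq:oneloop}--(\ref{eq:mhnolog}) can be evaluated numerically as in the following minimal sketch (the input values are illustrative and not taken from the text; $\GF$ is in GeV$^{-2}$ and all masses are in GeV):
\begin{verbatim}
import math

GF, ALPHA_S, MT, MSUSY = 1.16637e-5, 0.108, 175.0, 1000.0
MS2 = MSUSY**2 + MT**2                       # M_S^2 = M_SUSY^2 + m_t^2
MTBAR = MT / (1 + 4*ALPHA_S/(3*math.pi))     # running top mass at scale m_t

def dmh2_1l(Xt):
    # Dominant one-loop correction to m_h^2, in GeV^2.
    L, x = math.log(MTBAR**2/MS2), Xt**2/MS2
    return 1.5*GF*math.sqrt(2)/math.pi**2 * MTBAR**4 \
        * (-L + x*(1 - x/12))

def dmh2_2l(Xt):
    # Two-loop O(alpha alpha_s) correction: log plus non-log parts.
    L, x, r = math.log(MTBAR**2/MS2), Xt**2/MS2, Xt/math.sqrt(MS2)
    log_part = 3*L**2 + L*(2 - 3*x)
    nonlog_part = 4 - 6*r - 8*x + (17.0/12)*x**2
    return -GF*math.sqrt(2)/math.pi**2 * (ALPHA_S/math.pi) \
        * MTBAR**4 * (log_part + nonlog_part)
\end{verbatim}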
\begin{figure}[ht!]
\vspace{1em}
\begin{center}
\mbox{
\psfig{figure=MhsqOSTLLogtermPap.eps,width=11cm,height=8.0cm}}
\end{center}
\caption[]{The dominant one-loop and two-loop contributions to
$\mathswitch {m_\Ph}$ evaluated in the FD approach are shown as a function of
(the on-shell parameter) $X_{\Pt}$ for $\tan\beta = 1.6$.
The full curve shows the result including the new genuine two-loop\
contributions, \refeq{eq:mhnolog},
while the dashed curve shows the result where these
non-logarithmic two-loop corrections have been neglected.
}
\label{fig:mhnolog}
\end{figure}
The one-loop correction, \refeq{eq:oneloop}, as well as the
dominant logarithmic two-loop contributions, \refeq{eq:mhlog}, are
seen to be symmetric with respect to the sign of $X_{\Pt}$.
The non-logarithmic two-loop
contributions, on the other hand, give rise to an asymmetry in the $X_{\Pt}$
dependence through the term in \refeq{eq:mhnolog} which is linear in $X_{\Pt}/M_{\mathrm{S}}$.
The numerical effect of the non-logarithmic two-loop terms is
investigated in \reffi{fig:mhnolog}. The result for the dominant
contributions to $\mathswitch {m_\Ph}$ of \refeqs{eq:resFD}--(\ref{eq:mhnolog})
is compared to the result where the
non-logarithmic contributions of \refeq{eq:mhnolog} are omitted. The
numerical effect of the non-logarithmic genuine two-loop contributions
is seen to be sizable. Besides a considerable asymmetry in $X_{\Pt}$ the
non-logarithmic two-loop terms in particular lead to an increase in the
maximal value of $\mathswitch {m_\Ph}$ of about 5~GeV.
\section{Comparison between the FD and the RG approach}
The results for the dominant contributions derived by FD methods can be
compared with the explicit expressions obtained within the RG approach
which have been given in \citeres{mhiggsRG}. At the two-loop level,
the RG methods applied in \citeres{mhiggsRG} lead to the following result
in terms of the $\overline{\rm{MS}}$\ parameters $\mtms$, $\overline{M}_{\mathrm{S}}$, $\overline{X}_{\Pt}$
\begin{equation}
\label{eq:mh2twoloopRG}
\Delta m_{\mathswitchr h, \mathrm{RG}}^{2,\alpha\alpha_s} =
- \frac{\GF\sqrt{2}}{\pi^2} \frac{\alpha_s}{\pi}\; \overline{m}_{\Pt}^4
\left \{3 \ln^2 \left(\frac{\mtms^2}{\overline{M}_{\mathrm{S}}^2} \right) +
\left[2 - 6 \frac{\overline{X}_{\Pt}^2}{\overline{M}_{\mathrm{S}}^2}
\left(1 - \frac{1}{12} \frac{\overline{X}_{\Pt}^2}{\overline{M}_{\mathrm{S}}^2} \right) \right]
\ln\left(\frac{\mtms^2}{\overline{M}_{\mathrm{S}}^2} \right) \right \},
\end{equation}
which solely consists of leading logarithmic contributions. The one-loop result
for the dominant contributions in the RG approach has the same form as
\refeq{eq:oneloop}, where the parameters $M_{\mathrm{S}}$ and $X_{\Pt}$ have to be
replaced by $\overline{M}_{\mathrm{S}}$ and $\overline{X}_{\Pt}$, respectively.
The one-loop RG-improved effective potential expression
\refeq{eq:mh2twoloopRG} does not contain non-logarithmic contributions.
In the viewpoint of the RG approach these genuine two-loop
contributions are interpreted as two-loop finite threshold corrections
to the quartic Higgs couplings.
For a direct comparison of the FD result given in the last section with
the RG result of \refeq{eq:mh2twoloopRG},
one has to take into account that $M_{\mathrm{S}}$ and
$X_{\Pt}$ in the FD result are on-shell parameters,
while the corresponding parameters in the RG result, $\overline{M}_{\mathrm{S}}$ and $\overline{X}_{\Pt}$,
are $\overline{\rm{MS}}$\ quantities. The relations between these parameters are given in
leading order by~\cite{bse}
\begin{equation}
\overline{M}_{\mathrm{S}}^2 = M_{\mathrm{S}}^2
- \frac{8}{3} \frac{\alpha_s}{\pi} M_{\mathrm{S}}^2 , \qquad
\overline{X}_{\Pt} = X_{\Pt} + \frac{\alpha_s}{3 \pi} M_{\mathrm{S}}
\left(8 + 4 \frac{X_{\Pt}}{M_{\mathrm{S}}} -
3 \frac{X_{\Pt}}{M_{\mathrm{S}}} \ln\left(\frac{\Mt^2}{M_{\mathrm{S}}^2}\right) \right) .
\label{eq:xtmsms}
\end{equation}
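These leading-order conversions are straightforward to apply numerically; a minimal sketch (all masses in GeV) reads:
\begin{verbatim}
import math

def msbar_from_onshell(MS, Xt, alpha_s, Mt):
    # Leading-order scheme conversion of the relations above.
    MS_bar = MS * math.sqrt(1 - 8*alpha_s/(3*math.pi))
    Xt_bar = Xt + alpha_s/(3*math.pi) * MS * (
        8 + 4*Xt/MS - 3*(Xt/MS)*math.log(Mt**2/MS**2))
    return MS_bar, Xt_bar
\end{verbatim}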
Applying these relations for rewriting the FD result given in
\refeqs{eq:resFD}--(\ref{eq:mhnolog}) in terms of the $\overline{\rm{MS}}$\ parameters
$\mtms$, $\overline{M}_{\mathrm{S}}$, $\overline{X}_{\Pt}$ one finds that the leading logarithmic
contributions in the two approaches in fact coincide~\cite{bse}, as it
should be as a matter of consistency. The FD result, however, contains
further non-logarithmic genuine two-loop contributions which are not
present in the RG result. The effect of these extra terms within the
$\overline{\rm{MS}}$\ parameterization considered here is qualitatively the same as
discussed in the preceding section. They lead to an asymmetry in
the dependence of $\mathswitch {m_\Ph}$ on $X_{\Pt}$ and to an increase in the maximal value
of $\mathswitch {m_\Ph}$ compared to the RG result.
The analysis above has been performed for the dominant contributions
only. Further deviations between the RG and the FD result arise from
non-leading one-loop and two-loop contributions, in which the results
differ, and from varying the gluino mass, $m_{\tilde{\mathrm{g}}}$, in the FD result, which
does not appear as a parameter in the RG result. Changing the value of
$m_{\tilde{\mathrm{g}}}$ in the interval $0 \leq m_{\tilde{\mathrm{g}}} \leq 1$~TeV shifts the FD result
relative to the RG result within $\pm 2$~GeV~\cite{mhiggsFD}.
|